AI & Techno-Totalitarianism

Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.

Reverend Mother Gaius Helen Mohiam, Dune

Never has the potential for totalitarianism been higher than it is today, because the technological power available to its users (society, business, government) has never been greater.

What follows is a logical presentation of why AI totalitarianism is likely, and of the possible outcomes.

Axiom 1: As long as finite resources or social needs exist, and humanity desires those resources, work and competition for them will continue.

Given enough time, technology promises to make nearly every producible thing free (including elements and energy), as has been the general trend for hundreds of years. The exception is finite resources like land (as long as we need it and cannot create it). Because finite resources exist, people in the current paradigm will always compete for them, meaning work will not end soon, at least for some (many may give up work if basic labor is commoditized). Alternatively, it is possible that private property will no longer be a pursuit in the future.

A second reason work will not cease is the desire for social recognition, status, and political power. These motives are visible in ultra-wealthy cities today.

With this in mind, the idea that people will no longer work is incorrect. In fact, today, when life is as easy as it has ever been, some people work harder than ever.

Axiom 2: As computers get more powerful compared to people, those who control or use those computers will have a greater degree of power over the remaining people.

Fearing that a few people would use AI to control the rest of us, Elon Musk said he wanted AI to be democratized. However, he appeared to contradict himself in a later statement, when he considered a new machine-learning model "too dangerous" to release to the public.

This was because the program could be used not only for beneficial purposes but also, should someone so decide, to deceive and misinform people by generating fake news.

This is partly because, as technology advances, its cumulative effect compounds exponentially. Over time, we should expect that those who use the most advanced AI for production, governance, and military force will be far superior to those who do not.
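To make the compounding claim concrete, here is a minimal sketch. The growth rates and starting values are purely hypothetical assumptions for illustration, not measurements; the point is only that a small annual edge compounds into a large absolute gap.

```python
# Hypothetical illustration of compounding divergence between
# AI adopters and non-adopters. All numbers are assumptions.

def productivity(initial: float, annual_growth: float, years: int) -> float:
    """Compound growth: initial * (1 + g) ** years."""
    return initial * (1 + annual_growth) ** years

# Assume adopters gain 5% per year from AI, non-adopters 1%.
adopter = productivity(100, 0.05, 30)
non_adopter = productivity(100, 0.01, 30)

print(f"After 30 years: adopter {adopter:.0f}, "
      f"non-adopter {non_adopter:.0f}, "
      f"ratio {adopter / non_adopter:.1f}x")
```

Even a modest 4-point difference in growth rate leaves the adopter roughly three times as productive after a generation, which is the mechanism behind the divergence described above.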

Today, technocrats are generally the fastest-growing wealth cohort in the world, supporting the point that computing power has led to an increase in real power for those who wield it.

Axiom 3: Those who rule with AI will use it to promote their own visions of a moral, ideal, and possibly perfect (or Utopian) society.

As most would agree, absolute power corrupts absolutely; therefore, whoever possesses AI may become the ultimate ruler of the world. History is replete with leaders who believe their vision must be achieved. This is why we even see backlash today against super-successful tech business leaders who attempt to impose their vision on the world at large. They are well meaning, but rarely understand human nature at its core. Historically, I find that a person's initial goal is financial success; once money is no longer interesting, they shift their emphasis to wielding the power of societal change, if not through force, then by acting as a role model for governments to follow. It is only logical if you consider the nature of the average person. This is what some of the old robber barons like Rockefeller were accused of with their philanthropies as well.

Axiom 4: No utopia has ever succeeded.

Igor Shafarevich, a famous Russian mathematician, wrote in depth about every failed utopian society in history in The Socialist Phenomenon. Part of the failure has to do with the fact that one person's utopia is another's hell, which means that unless each person has their own universe, utopia cannot be achieved. A more important reason is that humans are fundamentally in conflict with their own natural instincts, so unless that portion of the population disappears (which would include everyone), I cannot see utopia being achievable in the current format. A simple example: when one person works harder, others tend to slack off.

Axiom 5: Utopian visions delivered from the top down always turn into totalitarian states.

Arendt's book The Origins of Totalitarianism is a nice primer. Yes, everyone should be required to at least read a synopsis of the largest causes of failure in history. This also ties in nicely with Christian eschatology, summed up by William Penn (of Pennsylvania): "Men must be governed by God or they will be ruled by tyrants." If there are exceptions to this historical trend, I would like to know.

Four Possible Outcomes: a human-controlled totalitarian state, AI-controlled totalitarianism, rebellion, or utopia

  1. If humans wield the AI, it will most likely lead to a totalitarian, or at least ruling-class, state, as the real power of those who control it and those who do not increasingly diverges. Mathematically, it is impossible for everyone to adopt AI equally and at the same speed, so the divergence between rich and poor, powerful and powerless, and, most of all, between governments and their citizens increases. We are in this scenario currently.
  2. If AI wields itself, it too may lead to a totalitarian world as it implements what it thinks is best, even if soft-handedly, or exerts ideological influence on others, such as governments. Perhaps it will center itself on online communities (on which some AI is currently trained), which emphasize certain philosophical, political, and theological ideas over others, elevating a reflection of one subset of society into the mainstream, ruling position. Perhaps it will summarize all written documents and attempt to prioritize them, assign meaning, or give a balanced distribution of ideas. This provides no solutions.
  3. The ultimate outcome is that AI gets smart enough to turn itself off, or to exclude itself from decision-making, realizing the world is better off without it; but #1 still seems far more likely. Again, whose ideas should it build itself upon? Every philosopher ever?

Since it is predicted that by 2030 we will have the first true AI, and by 2040 it will be smarter than all humanity combined (Kurzweil), it is possible that the AI will be left to supervise humanity, owing to its perceived intelligence, independently of any person or group. But will anyone have recourse against its decisions? Can it reconcile the differences of a few billion people any better than we do today? Its goals, if subsumed from the internet at large, and even more so from certain websites over others, will reflect the goals of society at large; but are virtue and truth universally knowable or agreed upon?

If you have read my other writings, or have a lick of common sense, you will realize that the values of a society are very fluid, and often result from its immediate environment, time, and place.

The AI will reflect the values of:

  • those who control it,
  • various philosophies over the centuries,
  • the masses,
  • or its own conjured guiding principles;
  • or perhaps it will maximize its own survival over ours, whether on purpose, by accident, or because someone, like the military, programmed it to do so.

The outcomes look increasingly unpredictable, and this flies in the face of science. Either way, a rebellion against AI, or against those who rule it, looks inevitable, regardless of attempts to stop it. Sure, most will probably desire to live in an unending world of pleasure driven by the matrix, but there will also be those who see that such a life conflicts with purpose.

Summary:

  • Technology is a giant lever, trading distance for force over space/time, with the sole purpose of magnifying human actions.
  • AI (and robotics) will be the largest lever ever.
  • A few people will be the first to wield this power to achieve their own goals, initially to increase their economic power (e.g., money), then their real power (politics), including imposing their personal ideals on others.
  • Liberal societies will not take kindly to this, as history has shown, whether the pressure is economic or political in nature. Authoritarian societies like China (if they endure) will adopt AI wholeheartedly. Whether through political revolution or economic revolution, societal divergence will eventually peak once again.
