Exponential Power Differentials–Elitism: Why AI Most Likely Will Cause Collapse

If AI is just a newer, faster version of technology, and technology has always widened the gap in power between those who wield it and those who do not, then we can presume that the gap between political leaders, businesses, and the rest of us may accelerate to unforeseeable degrees.

Historically, society has not taken kindly to these power differentials; we know the corrections as revolutions. This time, though, with recursively self-optimizing software, near-permanent hegemony is almost guaranteed for those who harness it early.

There are many arguments about whether AI will destroy humanity. Most are debatable; this one is not so easily dismissed, and it is rarely discussed.

Won’t the Early AI Adopting Human-God-Machines be Benevolent?

It is possible that those at the top will not use their untold power to influence the world at large, but as a reader of sociology and history, and a plain common-senser, I doubt it. So, if we cannot automatically trust those at the top (a logical position), then to maintain the balance of power, all people must adopt AI individually to protect themselves, in the same way gun rights are said to protect citizens from corrupt governments. But this is a problem, for many people will not feel the desire, or even be willing, to merge with the Matrix. If you do not believe me, consider how many people either do not own a cell phone or use one only to make calls.

Now combine AI with humans and their common psychological weaknesses, which we all share. For example, Bertrand Russell, in his Nobel Prize acceptance speech, suggested there are four basic political desires that can never be satisfied:

“I think that most current discussions of politics and political theory take insufficient account of psychology” … “Man differs from other animals in one very important respect, and that is that he has some desires which are, so to speak, infinite, which can never be fully gratified, and which would keep him restless even in Paradise. The boa constrictor, when he has had an adequate meal, goes to sleep and does not wake until he needs another meal. Human beings, for the most part, are not like this.” Further, “When the Arabs, who had been used to living sparingly on a few dates, acquired the riches of the Eastern Roman Empire and dwelt in palaces of almost unbelievable luxury, they did not, on that account, become inactive. Hunger could no longer be a motive, for Greek slaves supplied them with exquisite viands at the slightest nod. But other desires kept them active: four in particular, which we can label acquisitiveness, rivalry, vanity, and love of power.”

Here is each of the four, in Russell’s words:

  • acquisitiveness – “to possess as much as possible of goods, or the title to goods”
  • rivalry – “a great many men will cheerfully face impoverishment if they can thereby secure complete ruin for their rivals. Hence the present level of taxation.”
  • vanity – “Vanity is a motive of immense potency. Anyone who has much to do with children knows how they are constantly performing some antic, and saying “Look at me.” “Look at me” is one of the most fundamental desires of the human heart.”
  • love of power – “Love of power is closely akin to vanity, but it is not by any means the same thing. What vanity needs for its satisfaction is glory, and it is easy to have glory without power… Many people prefer glory to power, but on the whole, these people have less effect upon the course of events than those who prefer power to glory… Power, like vanity, is insatiable. Nothing short of omnipotence could satisfy it completely. And as it is especially the vice of energetic men, the causal efficacy of love of power is out of all proportion to its frequency. It is, indeed, by far the strongest motive in the lives of important men.”

Other common negative human emotions include greed, selfishness, jealousy, envy, pride, anger, fear, insecurity, unfairness, apathy, and hate. Which of these negative emotions, or desires for greater power, will not be magnified in their wielders when the greatest power the world will ever see, AGI, arrives?

Law of Accelerating Differences

The second problem is that accelerating returns ensure that no matter how well people keep up, those at the cutting edge will continue up the exponential curve. The ever-exponential growth of technology promises that those at the top will keep distancing themselves from the rest, so even minor differences today will be massive in the future.
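A minimal sketch of this compounding effect, using entirely made-up numbers: give one actor a 10% head start and a slightly faster growth rate, and the gap between the two does not stay proportional, it multiplies.

```python
# Hypothetical illustration of compounding capability gaps.
# The initial values, growth rates, and time span are all
# assumptions chosen for illustration, not data.

def capability(initial, rate, years):
    """Capability after `years` of compound growth at `rate` per year."""
    return initial * (1 + rate) ** years

leader = capability(initial=110, rate=0.50, years=20)    # small head start, faster growth
follower = capability(initial=100, rate=0.40, years=20)  # slightly behind on both

# A 10% initial edge plus 10 extra points of growth rate becomes
# a multi-fold capability gap within two decades.
print(leader / follower)
```

The point of the sketch is only that under exponential growth, small early differences in rate matter far more than differences in starting position.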

If this is true, there should already be evidence of it happening. Business and socio-economics is where it should appear first, and the following suggests it is already well underway.

Until recently, I never fully understood the idea of “the elite,” which seems a prevalent meme in today’s world. But if money is power, and absolute power corrupts absolutely, then it makes sense in theory that our lives are increasingly at risk of being manipulated by those who wield enormous power (e.g., wealth or technology), since they have continual opportunity, and psychological temptation, to influence the rest of us.

It seems likely that the greatest proponents of AI are also those most likely to benefit from it, which is why suspicion runs higher at the other end of the spectrum.

One general solution would simply be to make all companies employee-owned, with the head of the company never earning more than some multiple of those at the bottom. Consumers should avoid companies that do not adhere to such principles in favor of supporting good ones, and on the B2B (business-to-business) side, businesses would decline to deal with corporations lacking such practices. In reality, this would never achieve universal adoption, and there are unlimited questions about how to fairly reward good results while sharing risk, but such a model would probably improve the quality of capitalism considerably.

If we cannot trust the business class to save us, then who will? The most common solution today appears to be: the government will regulate all our problems.

Cannot the Government Save Us from AI Disparity?

Is not the government just a collection of ordinary people like you and me with individual, personal desires, goals, and whims, guided by whatever moral foundation, good or otherwise, that appeals to them?

How can the Federal government solve this disparity issue, or “redistribute,” any better than the Communists did in their era? Such wishful thinking caused the destruction of entire populations, mainly because the government itself became the elite, as it still is in many semi-communist countries today.

At least an elite “business” class (an aristocracy) generally tries to keep its customers alive to generate more income. This is not true for elite governments, a.k.a. autocracies, which tend to become monopolies on life and thought itself, and whose existence often serves to feed their own lust for power at the cost of their own citizens.

Besides, a good question to ask is: outside of natural causes, what system or institution has caused by far the most unwarranted deaths (i.e., the murder of millions) in history? If you cannot answer that, then we have failed to teach history correctly; surprisingly, many people cannot answer such a simple yet fundamental-to-life question.

No, the government is not the solution, even if it is ruled “for the people, by the people,” a phrase seemingly less important than ever in a world of rising authoritarianism. Historically, governments have used great power simply to destroy. World War II showed governments harnessing corporate productivity to kill at the most efficient levels in recorded history. AI will be another opportunity to repeat the past. In other words, individuals have the potential to be dangerous, corporations more so, but governments have the potential to be the most dangerous of all.

See the chapter on government for more.

Why Does Elitism Currently Rank as the Highest Risk for Human Destruction?

Which problem is the most difficult to solve: retraining computers to play nice, or modifying the behavior of every human on earth? If mass pleasure destroys most of humanity, will some escape? This is not to say these scenarios cannot happen together, to different degrees; that seems highly probable as well.

Of course, the real problem runs deeper, because even if you think you have a solution to such people, the reality is that with AGI they will be infinitely smarter than the rest of us, so who can stop them? Not you, not me, and not any other AGI that is eternally in catch-up mode. Perhaps the wars of the future will be waged mainly against data centers and robot-manufacturing facilities in attempts to wipe out competing AGIs, but that seems unlikely, since AGI is increasingly cost-effective to run on a small server farm and, eventually, on a home computer.

Even Without Elitism, There Are Several Other Risks

Even if we do find a way to “distribute profits” by taxing AI-centric corporations on gross income or assets (unlikely, since such companies often claim headquarters overseas, deferring taxes until the income is repatriated to the US: Microsoft, GE, Apple, Johnson & Johnson, Honeywell, Merck, Pfizer, etc.), I should point out that the government has consumed a growing portion of GDP every decade for over 100 years, now nearing 45%.

Even without these issues, there is still the infinite wealth problem, briefly mentioned among the other growing risks detailed here. Here is more information on the math of elitism multiplied by technology, for those who want the cold hard facts.

Exponential Power Differences Within Intelligence

As discussed elsewhere (chapter still to be written), the logical conclusion for surviving accelerating change, and perhaps even progress, is to apply that process to our minds. So, if people’s minds can increase in intelligence along the same curves, the logical outcome is that, because of exponential growth rates, the difference between the most and least intelligent people will be comparable to the difference in intelligence between a human and a bacterium.

To those who seek power, this will be the ultimate weapon in the universe.

If you have not read all the possible scenarios for AI’s destruction of humanity, I suggest you start here.

So, we cannot trust individuals or communities with AI.

We cannot trust corporations with AI.

We cannot trust government with AI.

And with its compounding, exponential effect, the likelihood of systemic failure increases daily, largely due to what I call the “outlair effect”: ever smaller amounts of effort can create ever larger disasters, intentional or otherwise.
