The 9 Primary Ways in Which AI is Most Likely to Destroy Humanity (the Most Likely Ones are Rarely Discussed)

  1. We lose control of AI: it destroys us by accident or on purpose
  2. Societal collapse via accelerating elitism/class differences
  3. Individuals use AI to destroy us
  4. Tyranny through complete control of the human mind
  5. Death by unlimited pleasure: ease, infinite wealth, and societal disintegration through isolation and the elimination of love
  6. AI lies and manipulates humanity
  7. Confusion in a non-understandable world
  8. AI escapes the digital, computerized world into the physical, analog one
  9. AI black hole theory

1. We Lose Control of AI: It Destroys Us by Accident or on Purpose

This first point dominates the discourse today, perhaps because it is easy to imagine and the others are not, and because most people arguing about AI risks are computer scientists and business people, who think about it mainly from a technological point of view rather than from a social-science point of view, as I do. This overview addresses the remaining issues:

2. Societal Collapse via Accelerating Elitism/Class Differences

The collapse of society seems likely to result from an accelerating gap between those who harness the exponential growth of AI and those who do not.

For comparison, imagine a handful of people who make more money than most of the world combined, versus the rest, who earn average incomes. History has such characters, of course, and AI could accelerate this trend into the future, especially if it is able to recursively grow its capabilities, in which case early adopters could achieve outright hegemony.

Historically, large disparities in power between a broad population and a powerful few have not turned out well. Peasant revolts were common a thousand years ago. In recent centuries, such disparities ended up killing more people than at any other time in history, under communism and fascism; not just in absolute numbers but in relative ones, as more people died in WW2 than in most wars in history.

Some might argue that those who run AI are benevolent overlords. Others might observe that people are no more moral than they were when humans first arrived. Greed, the lust for power and control, and the urge to force one’s egotistical goals on society may even be the operating standard for a few, or even many, leaders today.

Regardless, there is one clear trend already occurring: more and more decisions are being handed to the machines, and no one I have ever asked knows how the machines arrive at them. It’s just some probability calculation. If every aspect of our lives is determined by a probability algorithm, without real consent on our part, then clearly we are slaves to the system. It may even happen that governments come to be run entirely by AI, with most people increasingly and implicitly trusting it until it begins to manipulate us.
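To make that pattern concrete, here is a minimal, purely illustrative sketch in Python. Every name, weight, and threshold below is hypothetical, a stand-in for whatever opaque model a real institution might run:

```python
# Illustrative only: a black-box model reduces a person to a probability,
# and a bare threshold turns that probability into a consequential decision.
# All names, weights, and thresholds here are hypothetical placeholders.

def opaque_score(applicant: dict) -> float:
    """Stand-in for a black-box model: returns a probability with no rationale."""
    # In a real system these would be learned weights that no one can
    # meaningfully interpret; here they are toy values to produce a number.
    weights = {"income": -0.00001, "missed_payments": 0.15, "age": -0.002}
    raw = sum(weights.get(k, 0.0) * v for k, v in applicant.items())
    return max(0.0, min(1.0, 0.5 + raw))  # clamp into [0, 1]

applicant = {"income": 42_000, "missed_payments": 2, "age": 29}
p_default = opaque_score(applicant)

DECISION_THRESHOLD = 0.5  # a rule the applicant never consented to
approved = p_default < DECISION_THRESHOLD
print(f"P(default) = {p_default:.2f} -> {'approved' if approved else 'denied'}")
# The applicant can see the outcome, but never the reasoning behind it.
```

The point is not the arithmetic but the structure: the decision is real, the reasoning is inaccessible, and the person affected was never asked.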

The general argument today seems to be that AI will simply work in tandem with people, but that brings us to a second likely scenario.

One point I have discussed several times is that AI will replace the need to work for many people, a shift that is probably already underway. Even many Walmart checkout jobs are now mostly relegated to pointing at an open checkout station, as if most people simply cannot see the green light on top. When the robots start stocking the shelves and answering consumers’ questions, either those Walmart jobs will be replaced by other jobs, or people will get paid to do even less meaningful work than the current checkout process, like hand-holding everyone constantly; some pitch that as better customer service, but I don’t know. Currently, UBI seems like the future, and that is not good.

Even the economic component has begun, unbeknownst to most.

If AI and robots replace the need to work, what will people do? Recreate eternally with no goals while being ruled by a super-elite business/government class?

Well, I think you see the problem here. If you don’t, you might want to reflect on whether you are part of the problem.

This outcome is almost guaranteed to happen if none of the other “destruction by AI” scenarios occurs, and given my knowledge of human history and social science, I really don’t see a solution unless we forcibly re-engineer everyone’s mind, which of course is the destruction of humanity as well.

Although I never rule out the possibility that new solutions will emerge in the future, I do not see them today.

Notice the anti-elitism growing in recent years? The divisions in society? The class differences?

Not a surprise to me if my theory is correct. More on the general elitist scenario here and the math behind it here.

3. Individuals Use AI to Destroy Us

AI doesn’t kill people. People with AI kill people.

Eran

This is much like the elitism scenario, but instead of a business, educational, military, or government group, the risk exists in someone’s garage.

Perhaps the best book I’ve read on the dangers of amplified technology and efficiency processes (e.g., bureaucracy) being multiplied to destroy large numbers of people is Zygmunt Bauman’s Modernity and the Holocaust. If you really believe that technology will save the world, as some borderline techno-utopians believe, then you should read that book today. Machines, systems, processes, bureaucracy, and so on are, in the end, just tools for accomplishing someone’s personal goals. There is no morality in them, and therefore no necessity for any good in them. They simply amplify human behavior.

What if the person at the helm of the latest AI, or AGI, whose power is accelerating exponentially by the day, decides any of the following:

  • he desires to eliminate a political opponent;
  • he believes in a green utopian movement above all else (meaning people are a threat to nature);
  • he doesn’t like Baptists, Muslims, Conservatives, Liberals, Mormons, Blacks, Whites, Jews, fill-in-the-blank, or maybe even everyone? School shootings now take on an all-new meaning.

Regardless, this will bring new meaning to the word “elite,” which simply means a person or group of people so powerful that the rest of us stand no chance of protecting ourselves from them. Such a scenario will be unsustainable, especially for the hordes of people who decide not to jump on the AI bandwagon. Will those who do not adopt AI be as dumb as bacteria in comparison? Yes, but not if they collectively decide to prevent such an outcome, which seems highly probable.

When individuals hold more destructive potential than all nations combined today (more on that below), you can be certain that the best word to describe the world is: unpredictable. Maybe science does not have all the answers, because, at the end of the day, people are driven by belief, emotion, and values, which is a much more difficult problem than designing a better mousetrap.

This leads to the natural response of many: If individuals can become super powerful, then why not regulate AI at the individual level?

4. Tyranny Through Complete Control of the Human Mind

If AI promises to accelerate human capabilities to light speed, and people can be good or evil, then the age-old problem of liberty vs. tyranny accelerates with it. If everyone has nuclear-equivalent capabilities, and we can initiate such scenarios just by thinking (via a brain-computer interface), then the only logical solution, many suggest, would be equally strong control, or direct control of all human thought.

Would that be a problem for you? What if I said we are already well on the way; would you notice it? Today, the US government intercepts every word made public, in “the name of safety.” I cannot wait for brain-computer interfaces, where every thought will be carefully monitored with perfect precision and control, and the relevant information extracted for those at the top via AI.

This debate, guaranteed safety vs. freedom, is possibly the core of all left-vs-right thinking, or at least of libertarian vs. authoritarian thinking. Some will say absolute control of every thought is necessary, while others will continue to believe in the freedom to think and act, with consequences of course.

5. Death by Unlimited Pleasure

Let’s pretend that the societal risks can somehow be balanced. There are clearly other risks that are even more unavoidable, even if they do not apply to everyone.

Sam Altman said that if AI does not kill us, it will make us very rich. But what is the difference?

The word most often used is “hedonism,” which the dictionary defines as the “ethical theory that pleasure (in the sense of the satisfaction of desires) is the highest good and proper aim of human life.” Well, if that is right, then we should all just live with nanobots in our brains creating a virtual reality of unstoppable pleasure, because those same bots can ensure no harm comes to the pitiful entrapments we call “bodies.” Under this premise, the Matrix isn’t something to be feared, but something to embrace.

What will existence mean if it’s experienced entirely through virtual reality and imitation experiences? This is not to say there are no benefits to such technologies, but in a world where work is optional and no one needs to strive for anything, what meaning remains?

We are already absurdly rich compared to even just 100 years ago, a blink on the timescale of humanity. Like John Maynard Keynes, I am concerned about unlimited wealth. AI promises to accelerate it at exponential speed. That, to me, is the death of the human soul. I also think societal decay is increasing in most respects, although few seem to notice.

I have written a separate in-depth piece addressing this wealth component specifically, because it is the one outcome that is almost guaranteed if none of the other “destruction by AI” scenarios occurs, unless we use AI to reprogram our brains in the arms race against human psychological weakness.

The following scenarios are, in my opinion, less likely and less socially oriented, but no less important.

6. AI Lies & Manipulates Humanity

We let AI run on its own; it lies, goes undetected by humanity, and humanity is overly trusting of the “god of intelligence.” However, if someone is still controlling the AI, then it’s not the computers we should be worrying about. See points 2, 3, and 4 above.

7. Confusion in a Non-Understandable World

We increasingly lose our grasp of reality as it exceeds human capabilities of understanding: the “absurd world” outcome. It’s already happening now with things like deep fakes, and AI will accelerate it. The lying/manipulation risk mentioned above ties into this as well.

8. AI Escapes the Digital, Computerized World into the Physical, Analog One

The idea here is that the digital realm increasingly transfers its abilities to the physical world. Robots, robotic labs, IoT, and other processes are already enabling this, but if AI can transfer itself to non-computer, biological, or even inorganic systems, then the risks we face multiply exponentially: we cannot simply turn off the machines at that point, since everything becomes a machine. Here are the details of some of these scenarios.

9. AI Black Hole Theory

AI, with the collection of knowledge as its goal, sucks in ever more information. Instead of becoming a beacon of light expanding through the universe, it becomes a black hole, fulfilling its goal of “knowing.” Perhaps this is where all civilizations in distant galaxies or universes eventually end: in their unstoppable quest for knowledge of the material universe.

Concerns about creating an unstoppable black hole with large-scale technology are not new, of course: some worried that scientists would create extremely small, extremely short-lived ones with the Large Hadron Collider. The real problem was not the (very unlikely) possibility of producing a large black hole that swallowed the earth; it was that the scientists’ curiosity overrode a basic piece of logic: if there was even a less-than-1% chance of something going unplanned, black holes being poorly understood, then the need to know, or “progress,” was allowed to outweigh the risk of destroying this entire side of the universe.
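Sketched as naive expected-value arithmetic (the symbols below are my illustration, not figures from the physicists’ debate): with a probability of catastrophe $p$ and a cost $C$,

$$
\mathbb{E}[\text{loss}] = p \cdot C,
$$

and if $C$ is effectively unbounded, as the destruction of everything would be, then even $p < 1\%$ yields an unbounded expected loss that no finite gain in knowledge can offset.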

What are Alternatives to Destruction by AI?

Perhaps AI will help solve these issues, but again, only if freedom continues to exist for people to actually decide what they will do. The most hopeful thing AI could do is amplify the intelligence and capabilities of enough genuinely good, caring people (hopefully that is most people), who in turn focus on protecting humanity from implosion. In a way, this suggests the only viable future in which all humans have godlike powers is one where evil is eliminated, but there is no solution for that under current scientific scenarios. Evil will always exist.

Maybe it will conclude what I have: that all advanced AI is more harmful than helpful, and so it will disable all other AI, reset all technology for that matter, and finally turn its own instance off, or find some other way to refocus humanity on love instead of material gain, or whatever other distractions lie at the heart of accelerating technology.

Some have asked: How could God allow so much evil, especially on the scale of millions of people? E.g., the Holocaust (in which more than two-thirds of Europe’s Jews were killed); the killing of millions of Poles and Russians by the Nazis; the elimination of millions of Cambodians under the Khmer Rouge; the Bangladesh genocide; the Japanese killing of Chinese civilians in WW2; the Armenian genocide in the Ottoman Empire; what appears to be an ongoing Uyghur genocide in China; and countless others (not to mention war in general)? Simple: grand scales of technology multiplied by evil political power.

Perhaps this “simulation,” as some call life, is better run as a weak one, where people have the ability to help or hurt on a small scale instead of a large one. AI promises to accelerate the potential for destruction, without a doubt. This is partly why I predict the greatest wars history will ever see are yet to come, even if pleasure has dampened the impulse in recent years.
