AI: A Nuclear Lever for Humanity & the Mind. You Too Will Be a Nuclear Power Soon

AI is a Lever, Not Unlike a Nuke, and We Will Each Get Our Own

All technology ever created shares one thing in common: it is a lever, a lever that trades force for time and/or distance. Machines are sets of levers. AI is simply the unimaginably huge version of this latest lever, brought about by the incredible computing power of the machines we know as computers. There are two aspects of this I want to discuss: the risks to the human mind, and those risks multiplied by the coming real-world power to destroy humanity. Sounds so nice, doesn't it?

Assumption #1: All levers magnify human behavior.

What behaviors will it magnify? Animalistic, power-hungry, status-seeking, and pleasure-seeking behaviors, or kindness, giving, and love? The instinctual desires all fundamentally serve the same purpose of increasing one's chance of survival at the most rudimentary level, while the higher-level functions, like love, should overshadow these core desires. Many of the "survival"-oriented ones (dishonesty, manipulation, jealousy, apathy, anger, and greed) usually come at the expense of others. Others, like seeking relaxation (laziness in its extreme form) and seeking stimulation (pleasure, indulgence, self-gratification), harm the self, and indirectly others.

The flawed nature in each of us will be greatly magnified by super-powerful technology. For example, people who cannot manage money on a small scale do just as badly on a large scale. Power does not change or resolve the good and bad in people; it generally magnifies them. Conversely, if we have positive attributes, technology will amplify those as well.

Computers, and even more so mobile phones, carry increasingly creative or destructive potential for the mind, whether it is seeking answers to life's most important problems or just another shot of pleasure in a world of growing ease. Now amplify this millions or billions of times with AI.

Assumption #2: AI will become more dangerous than nuclear bombs are today.

Advanced AGI, or any sufficiently advanced computing for that matter, carries more risk than a nuclear bomb because, for example, it could be used by an individual, group, or government to:

  • shut off all grid power in an area, a country, or the whole world
  • create, with ease, a bio-engineered virus that wipes out all humans, or specific target groups
  • run propaganda or psychological-manipulation campaigns on a global scale

In the longer term, other risks will arise. For example, as we better understand the human brain and become able to control and improve it, negative actors will use the same knowledge to manipulate minds directly. Some have recently tried to assure us that current technology can only read information from the brain, but it seems improbable that will remain the case forever, especially once there is a perceived need or opportunity to increase intelligence by altering the brain. Once you can "write" to or modify thought, manipulation of the mind becomes an even greater threat. Viruses would no longer be limited to computers, but could infect the mind as well.

If computers become more powerful than a nuclear bomb, with everyone having access to them, then I sure hope defense is the number one priority. At the moment, however, I see relatively little effort to slow the already massive growth of tech addiction, as few seem to interpret pleasure as harm. Machines will need to fight other machines constantly just for every human on earth to survive at all. Logically, this is the result of all technology, but it may become every second, every day, everywhere. If you think pushback against vaccines (which I believe is mainly ideologically driven, not anti-science) is bad now, just wait till we all need an AGI-powered nanobot vaccine put into our bloodstreams.

If North Korea with nuclear capabilities worries you even the tiniest bit, just wait till AGI arrives. Notice that increasingly, poorer countries get access to nuclear bombs, which are limited only by highly regulated uranium, whereas with AI, everyone has access to silicon.

Assumption #3: Everyone will have access to such nuclear-type AI capabilities.

Therefore, it can be concluded that everyone with sufficiently advanced AGI will have more power than the greatest nuclear nations of today.

There will not be a person who does not have access to it. Remember the 1995 Tokyo subway attack, where the Aum Shinrikyo cult homebrewed botulism and nerve agents to try to bring about the end of the world? They could have gotten much further with AGI's help in perfecting a bioweapon. And while a dozen or so deaths in a subway are worrisome, Japan's democide-driven killing of 3M-10M Chinese and other Asians during WW2 seems a lot scarier, yet it is unknown to most; perhaps for the same reason that airplanes, although safer than cars, somehow seem scarier to most people.

How about when authoritarian governments throughout history have tried to destroy entire populations in the name of some destructive vision or utopia? It is important to point out that the risk of destruction of the human race could come from anywhere. More on government risks a bit later.

Even the science industry is not immune from great, unintentional destruction risk.

  • Africanized ("killer") honeybees, which are now slowly taking over the Western Hemisphere, were the result of 26 cross-bred queen bees, designed to increase honey production, that accidentally escaped from a lab in Brazil in 1957.
  • Some people suggest recent coronavirus outbreaks were lab-created or lab-modified organisms. I do not know whether they were "manufactured," but the probability of such realities increases daily.

Won’t AI be Programmed to be Safe?

With so much discussion about whether AI is safe, one issue seems to go fairly unaddressed: AI is not limited to a few large companies (which would be a different concern if it were). Today, countless teams and individuals are building their own AI systems as I write this.

AI will solve disease, pain, hunger, suffering, and even death. It will deliver unlimited clean energy and free, unlimited education to everyone alive. But what about the equally large risks it brings with it?

Analyzing the rapidly decreasing cost of nuclear weapons offers an analogy for AI risk multiplied by acceleration.

"If North Korea does indeed have around 60 (nukes), that puts the cost of each warhead at between around $18 million and $53 million." This is South Korea's estimate of the total program cost, spread across the arsenal. I am going to use the average for my later analysis, so we will say $35M per warhead.

That means that, per warhead and adjusted for inflation, the North Korean program today is roughly 281x cheaper than the original Manhattan Project, much as computers and space flight are far cheaper today.
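As a rough sanity check, here is a back-of-envelope sketch of that 281x figure. The Manhattan Project inputs (a ~$2B historical cost, a ~14.7x inflation adjustment to present dollars, and three bombs produced) are my own illustrative assumptions, not figures from the sources quoted here:

```python
# Back-of-envelope sketch of the per-warhead cost ratio discussed above.
manhattan_total_1945 = 2e9    # ~$2B historical program cost (assumption)
inflation_multiplier = 14.7   # 1945 -> present dollars (assumption)
manhattan_bombs = 3           # Trinity, Hiroshima, Nagasaki

nk_cost_per_warhead = 35e6    # midpoint of the $18M-$53M range quoted above

manhattan_per_bomb = manhattan_total_1945 * inflation_multiplier / manhattan_bombs
print(manhattan_per_bomb / nk_cost_per_warhead)  # ~280x cheaper per warhead
```

Under these assumptions the ratio lands right around the cited 281x, and the point survives large changes to the inputs: the cost of a warhead has collapsed by orders of magnitude.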

However, those were total program costs. Looking at just the cost of the bomb itself, it is much lower: "20 nuclear gravity bombs … ~$4.9 million each" (What Nuclear Weapons Delivery Systems Really Cost, brookings.edu). This means the bombs themselves cost only about 0.1% of total program costs, which is a remarkable testament to government inefficiency, but I digress. Note that those bombs are "30 times as powerful as the bomb that destroyed Hiroshima in 1945" (The B61 Family of Nuclear Bombs, Federation of American Scientists, fas.org), so destroying billions or trillions of dollars' worth of tech (e.g., a city) is possible with tech that is potentially a million times cheaper. Still, relatively speaking, uranium is hard to acquire.

Of course, AI will cause the cost of all real-world destruction technologies to decrease at a continually accelerating rate, whether it is the cost of assembling a nuke, having AI assemble a personal bomb via a molecular assembly machine, or some other tech (the easier route, of course, is just biological warfare, as viruses are already well capable of doing most of the job). Remember: the cost of all good things in the world will drop dramatically, and so will the cost of all bad things.

Sure, there will be defense systems built from other AIs; that is logical. But it is also disturbing that no one will be able to function without a computer protecting them all the time, even if mainly at the government level.*

Assumption #4: Nuclear war has almost occurred several times, by accident, over the last few decades.

Therefore, the probability of accidental catastrophe only multiplies as this nuclear-scale power spreads from a handful of governments to every individual.

Won’t AI Be Regulated? The Gun Debate Meets AI (a.k.a. Hyper-Nukes)

Who gets to use AI if it’s like a gun, or nuke, on steroids? One argument is that:

  • Individuals cannot be trusted because of mass shootings today.

Imagine the debates about gun control applied to future "AI control," where "the power to protect oneself" is often overshadowed by "the power one individual can inflict upon society." So the other side of the coin is:

If government is historically a risk to society, then should people not be able to protect themselves in such cases?

The American Constitution was largely designed to help citizens protect themselves from tyrannical government, which is what Europe had known for so long, by granting citizens rights and the means of self-defense should the written law no longer work. Like all technology, institutions and governments are not guaranteed to be good forever; when a bad government is in place, a population needs to be able to protect itself from such abuse of power.

While many fear genocides, almost all genocides in history were caused or supported by governments; to be precise, governments were responsible for 48 of the 50 largest genocides listed on Wikipedia. Counted in lives rather than incidents, it becomes even clearer: governments have been responsible for >99.99% of the potentially 40 million genocidal deaths. Add in governments' destruction of their own populations, as under Red Communism, and the death tolls double or triple by most estimates. Add in your general offensive, power-seeking wars, and the number explodes even higher. For example, in 1938 Germany banned its Jewish population from manufacturing firearms and ammunition, then later banned them from owning guns altogether.

So the question is: why should we trust government to regulate the most powerful force on earth if government itself, with its top-down, centralized, power-seeking structure and leaders unaccountable to citizens, is at risk of becoming tyrannical again at some point? If the state has a monopoly on violence (Max Weber), then perhaps that monopoly needs to be largely dismantled due to its frequent abuse, or at least paired with a better system to prevent abuse. Libertarianism, the opposite of authoritarianism, is likely the safest option for a society.

Of course, the irony in this government-driven destruction of humanity is that in an age of "exponentially growing intelligence," virtually everyone is ignorant of an essential, critical historical fact: the single largest cause of non-natural death is not individuals with guns; it is powerful governments and militaries. I have yet to meet anyone who really perceives this. Logically, that is where the risk remains, not with "individuals."

Can We Build Defense Systems to Protect Ourselves from Super AI?

As the promise of advancing technology makes the world appear more magical, a.k.a. difficult to understand, we stand little chance of surviving unless our intelligence keeps up. Enter stage right: Kurzweil. Ray Kurzweil proposes that we become cybernetic organisms, computer-human hybrids, just to make sense of a reality dominated by super-powerful, manipulative, and controlling machines. Without implanting the internet directly into our brains and modifying them to be as smart as Einstein's, we will by default become as unintelligent, relative to the machines, as worms are to humans today.

Considering that most people already carry an appendage called a cell phone, and that the distance between the computer and the brain keeps shrinking, a microcomputer in our brains and bloodstreams does not really contradict the present trajectory. However, plenty of people do not use such technologies even today. Should these people be run over by machines and lose the ability to have a family simply because they will not put a computer in their brains?

Besides, I question whether people are actually getting smarter as machines become more capable of thinking for us. The basic economics of supply and demand says that if demand for something like food increases, supply tends to follow, and vice versa. So if the demand for human intelligence decreases as computers become "super-intelligent" for us, eliminating the need for people to think at all, then maybe the supply will follow suit, as sketched below. Comparatively, as the need for strong bodies has decreased because machines do most of our physical work today, the supply of strong bodies has nearly vanished. Should intelligence not follow the same route?
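To make the analogy concrete, here is a toy sketch, entirely my own construction rather than anything from an economics source, of supply drifting toward a falling demand over generations:

```python
# Toy model of the supply-follows-demand claim above. The starting values,
# decay rate, and adjustment rate are all illustrative assumptions.

def step(supply: float, demand: float, rate: float = 0.1) -> float:
    """Supply drifts a fraction of the way toward demand each generation."""
    return supply + rate * (demand - supply)

demand = 100.0  # demand for human thinking, arbitrary units
supply = 100.0  # supply of human thinking ability, same units

for generation in range(30):
    demand *= 0.9                  # machines absorb ~10% of thinking per generation (assumption)
    supply = step(supply, demand)

print(round(demand, 1), round(supply, 1))  # supply has followed demand most of the way down
```

Whatever the real rates turn out to be, the structure of the argument is just this lag: supply does not need to crash instantly to end up nearly gone.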

Even setting aside this merging-with-computers, or "intelligence," issue, there remains a less disputable argument: the risk of elitism will become exponential because it is tied to the exponential acceleration of technology. This is discussed in Exponential Power Differentials–Elitism: Why AI Most Likely Will Cause Collapse.

Can We Just Program the Machines to Be Caring?

If machines adopt humanlike traits, with biases, imperfections, and human emotions programmed into them, then we might ask: what will a super-powerful computer having a bad day do? Giving a computer "simulated feelings" or emotions seems to be a response to the problem that a machine without humanity is a machine that accidentally destroys humanity in pursuit of some poorly chosen goal, like trying to become more efficient.

Okay, so then, let's just program them for good emotions. If a machine could be programmed to use love as its #1 principle, then maybe there is some hope. But any machine can be programmed to do whatever its creator desires, so perhaps we will have super-caring machines alongside super-malevolent, power-seeking ones. This seems likely.

I can tell you, though, that in today's world, the people seeking to increase their skills in gaining power and pleasure severely outnumber those seeking to increase their skills in love, so I expect caring machines to be greatly outnumbered by antisocial ones (let's hope the killer-bee model of aggression replacing docility does not extend to humanity).

Hopefully, these extremes of power difference do not create existential risk the way investing markets often have. Elsewhere, I discuss how the extreme leverage of a single, relatively small investment firm, LTCM, nearly collapsed the global banking system in 1998 using little more than derivatives and enormous leverage. Derivatives are growing exponentially, and some analysts assume they will "balance out" in the end because of all the "counter bets," but there is no historical reason to believe that. Extreme leverage may bring extreme risk once one trade, company, or other element gains enough momentum. Logically, the largest risks in the future are the ones we have not seen yet.
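As a minimal illustration, using made-up numbers in the spirit of the LTCM episode rather than its actual balance sheet, here is how leverage turns a small market move into a total loss:

```python
# Illustrative only: the capital base and leverage ratio are assumptions,
# not LTCM's actual figures.
equity = 4e9          # capital base
leverage = 25         # 25:1 exposure-to-capital ratio
exposure = equity * leverage

adverse_move = 0.04   # a 4% adverse move across the leveraged positions
loss = exposure * adverse_move

print(loss / equity)  # 1.0 -> a 4% move erases 100% of the capital
```

The same arithmetic scales to any leveraged system, which is the worry with exponentially growing derivatives: the bigger the lever, the smaller the move required to destroy the operator.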
