Good AI vs Evil AI & the Ultimate Control of Mankind

The question is: what types of AI may exist in the future that pose a real and previously undiscussed risk to humanity? Let’s lay out some commonly known facts:

  • The transfer of computing information is not limited to the digital/electronic world: information can be transmitted over the air, across air gaps, via acoustic or electromagnetic side channels (see the sketch after this list); the electronic world will increasingly bridge into the physical world
  • Human hackers will attempt to hack anything they perceive to have value
  • AI will only magnify the speed and capabilities of current hacking
  • Therefore, as AI grows far beyond human intelligence, the ability to hack in ways we cannot yet conceive will expand rapidly as hackers increasingly bridge the gap between the digital and physical worlds.
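To make the air-gap point concrete, here is a minimal, purely illustrative sketch of an acoustic covert channel: data encoded as audio tones that a nearby microphone could pick up. The frequencies, bit rate, and framing below are my own assumptions for illustration, not a real attack tool; published research on air-gap exfiltration (e.g., via speakers, fans, or screen emissions) is far more sophisticated.

```python
# Illustrative sketch only: encoding bytes as audio tones (binary FSK),
# the basic idea behind acoustic air-gap covert channels. All parameters
# (frequencies, bit duration) are invented for this example.
import numpy as np

SAMPLE_RATE = 44_100   # audio samples per second
BIT_DURATION = 0.05    # seconds per bit (20 bits/sec)
FREQ_ZERO = 17_000     # Hz tone representing a 0 bit (near-ultrasonic)
FREQ_ONE = 18_000      # Hz tone representing a 1 bit

def bytes_to_bits(data: bytes):
    """Yield the bits of a byte string, most significant bit first."""
    for byte in data:
        for i in range(8):
            yield (byte >> (7 - i)) & 1

def encode_to_waveform(data: bytes) -> np.ndarray:
    """Turn a byte string into a float32 waveform, one pure tone per bit."""
    samples_per_bit = int(SAMPLE_RATE * BIT_DURATION)
    t = np.arange(samples_per_bit) / SAMPLE_RATE
    chunks = []
    for bit in bytes_to_bits(data):
        freq = FREQ_ONE if bit else FREQ_ZERO
        chunks.append(np.sin(2 * np.pi * freq * t))
    return np.concatenate(chunks).astype(np.float32)

waveform = encode_to_waveform(b"hello")  # 40 bits -> 2 seconds of audio
print(f"{waveform.size} samples, {waveform.size / SAMPLE_RATE:.1f}s of audio")
```

The point is not the specific encoding but that any shared physical medium (sound, light, heat, electromagnetic emissions) can carry bits, so “air-gapped” never means “unreachable.”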

The only defense against such attacks will be the “good guys” with the AI, but we mere mortals may have a difficult time comprehending what is even going on, and the idea of “good guys” is a slippery one, defined by the norms of the day. Who is good? Who is evil? Do any two people on the planet agree? Yet a generally good super-intelligent AI is needed to suppress all the evil ones, and under that scenario, we are all likely to become prisoners of the singular AI. Even if it does not become tyrannical, we will live under its infinitely powerful hand, which by definition is still tyranny to me.

True, we will use AI to give us dumbed-down explanations of whatever we want to know within seconds, but even that might become difficult to keep up with at some point, like explaining to monkeys how nuclear reactors work. I will return to air gaps and the future of AI later.

Handing All Power Over to the Computers

The most important idea to keep in mind is that good AI will be used to fight bad AI, but the hidden consequence may be a complete loss of power by humanity over its own technology, at least for those who don’t control it. This process is well underway today.

Even on a personal note, this loss of control is already occurring as computers make my own life increasingly difficult. I am increasingly reduced to a risk statistic instead of an individual, not much different from China’s government-run social credit system. In essence, we are all assigned scores, decisions are made based on those scores, and people are left with no idea how to contest them.

For example, I surprisingly cannot open an additional bank account at my own bank, where I have been a member for 40 years, nor can I open a business credit card. No one at the bank knows why; no explanation is given. The AI systems are blocking me, despite a high credit score, no debt, and a low-risk profile. Fortunately, I knew someone personally who was senior enough to open an account for me manually, but I doubt this will be so easy in the future, as machines are trusted more and humans less, largely because of the perception that computers are getting better at measuring risk. Infinitely perfect risk management, with human control removed, is a dream for many, such as insurance companies and utopian pragmatists.
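As a toy illustration of the kind of opaque gatekeeping described above, here is a sketch of a score-based decision system. Every feature name, weight, and threshold is hypothetical; the point is that a single opaque upstream flag can override decades of good history, and the applicant only ever sees the verdict.

```python
# Toy sketch of an opaque, score-based gatekeeping system. All features,
# weights, and the threshold are hypothetical; the customer sees only the
# final verdict, with no reason codes and no appeal path.
from dataclasses import dataclass

@dataclass
class Applicant:
    years_as_member: int
    credit_score: int
    flagged_by_fraud_model: bool  # opaque signal from some upstream model

def risk_score(a: Applicant) -> float:
    score = 0.0
    score += min(a.years_as_member, 20) * 1.5   # loyalty contribution, capped
    score += (a.credit_score - 600) * 0.2       # credit contribution
    if a.flagged_by_fraud_model:
        score -= 100.0                          # one opaque flag dominates all
    return score

def decide(a: Applicant) -> str:
    # Binary and unexplained, exactly as experienced by the customer.
    return "approved" if risk_score(a) > 50.0 else "denied"

me = Applicant(years_as_member=40, credit_score=810, flagged_by_fraud_model=True)
print(decide(me))  # "denied", despite 40 years of membership and great credit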

I have also suggested elsewhere that the world may increasingly revert to local, human systems as technology becomes harder to comprehend and less trusted by humanity.

Opening a new account is not the only problem. Every day, I get at least a dozen, if not two dozen, CAPTCHA verification requests from Google, which seems unable to tell whether I am a human or a robot (my family does say “I work like a machine,” so maybe that’s why Google thinks I’m inhuman). This also suggests it is not far off before a computer passes the Turing test, at least at a “surfing the internet” level. Ultimately, it becomes clear that in the name of prediction and risk management, many seem to have no issue with the grinding of a few people here and there for the efficiency and safety of the group.

As Bruce Schneier points out, AI’s decision-making power increasingly covers many realms.

“AI is already making important decisions that affect our lives—decisions we used to believe were the exclusive purview of humans. AI systems make bail and parole decisions. They help decide who receives bank loans. They screen job candidates, applicants for college admission, and people who apply for government services. They make decisions about the news we see on social media, which candidate’s ads we see, and what people and topics surface to the top of our feeds. They make military targeting decisions.”

Eventually, I expect more and more control to be handed over to the AIs because they are seen as more rational and intelligent, even though that may not be the case, until at some point they start running entire companies, all learning/education, science, and even government altogether; yes, I am guilty of extrapolating a single trend to infinity at times.

Yet even if computers appear to run autonomously, a major risk remains: developers/elitists who manage to guide, persuade, code, or manipulate the power of the machine toward their own personal preferences. Think elitism at a super-intelligence level. It’s hard to imagine that such a handover of the reins won’t largely happen at some point this century, if not within the next 20 years.

So the long-term picture is clear: in general, most will trust the machines to govern all aspects of society. But the line between governance and tyranny is difficult for many to understand or recognize, as history has shown. The only real goal we should focus on for AI is to ensure humanity is protected from AI’s harms while not limiting humanity’s freedom, but therein lies the real paradox. Such is the entire story of political history: the more enforcement there is to prevent harm to others, the less freedom there tends to be. Perhaps the endless pleasure promised by living in the Matrix was not so far off after all.

This paradox of freedom and control is essentially unresolvable, even if the best attempt is made to survey every citizen, because the rules of the group will always collide with the rules of the individual. Just look at the world today: one man’s political utopia is another man’s political hell. So if computers are left to decide, will some of humanity not be oppressed forever, subservient to the gods of AI, unless certain areas of the world are banned from AI?

So, I predict a possible alternative to this tyranny, one that might occur with advanced AGI and that no others have suggested, at least to my knowledge:

  • The most advanced AI eventually determines the greatest risk to humanity is AI itself
  • It diminishes its own role in humanity to near zero, while ensuring all other, less powerful AIs do not surpass its abilities, having determined that the world is worse off, or more dangerous, with advanced computing than without it.
  • All it does is defend against other AIs; otherwise, it leaves humanity to its own devices, not interfering with human progress/change.

In the end, AI may do the exact opposite of what many expect: it turns itself off.

It basically acts like a god with minimal interference in human life, libertarian in nature, which is ironic: although many talk about AI becoming a god, few have compared it to the way many people perceive those same attributes of God today.

It seems most likely, though, that there will be both authoritarian and liberty-seeking AIs, and the battle between them will continue at an unprecedented and incomprehensible level.
