AI Jumps from Digital to Analog: 40 Ways AI Could Hijack the Physical World & Hack Humans

Previously, I wrote that if AI got out of hand, society could simply pull the plug on all computers. That assumes you could stop everyone in the world from building one (not possible), and the unplugging itself would cause mass suffering, I imagine. But even unplugging would eventually be futile, and this post explains why.

The first example I provide covers a well-known class of methods, but executed at scale, in volume and strength, they could cause far more damage. Air-gap attacks are not new, but using them to hack the world, and human minds, directly and indirectly has not been deeply explored.

It is also important to note that computers are already well on their way to greater control of real-world environments, made possible by the latest and greatest benefit of tech: the Internet of Things (IoT). With almost every aspect of our lives increasingly part of the global network, there is ever more power to manipulate the physical environment. Traditionally, the internet has been a system that runs on data input from the real world, but it is now increasingly a system that outputs to it: controls over labs, military security and weapons systems, most newer vehicles from cars to planes, energy management, manufacturing production, and so on. What I propose here is that AI may find ways to directly hack even the non-standard, isolated systems discussed below.

Ignore the inaccuracy and repetitiveness of the following for now:

  1. Using removable media, such as USB drives or CDs, to transfer data between the isolated system and an external device.
  2. Using a networked printer or scanner to transmit data from the isolated system to an external device.
  3. Using a device known as a “USB killer” to physically damage the hardware of a computer and potentially transmit data through the resulting electrical signals.
  4. Using the vibrations of a computer’s hardware, such as its fans or hard drive, to transmit data through the air using techniques known as “acoustic cryptanalysis.”
  5. Using a device known as a “TEMPEST” transmitter to transmit data through the air using radio waves.
  6. Using a device known as a “LAN Turtle” to transmit data from the isolated system to an external device through a network connection.
  7. Using a device known as a “Rubber Ducky” to transmit data from the isolated system to an external device through a keyboard connection.
  8. Using a device known as a “USB Rubber Ducky” to transmit data from the isolated system to an external device through a USB connection.
  9. Using a device known as a “USB Teensy” to transmit data from the isolated system to an external device through a USB connection.
  10. Using a device known as a “LAN Tap” to transmit data from the isolated system to an external device through a network connection.
  11. Using a device known as a “RJ-45 Tap” to transmit data from the isolated system to an external device through a network connection.
  12. Using a device known as a “Wi-Fi Pineapple” to transmit data from the isolated system to an external device through a wireless network connection.
  13. Using a device known as a “Bluetooth Pineapple” to transmit data from the isolated system to an external device through a Bluetooth connection.
  14. Using a device known as a “GSM Sniffer” to transmit data from the isolated system to an external device through a cellular network connection.
  15. Using a device known as a “GPS Spoofing Device” to transmit data from the isolated system to an external device through a GPS connection.
  16. Using a device known as a “Laser Keyboard” to transmit data from the isolated system to an external device through a laser connection.
  17. Using a device known as a “Laser Mouse” to transmit data from the isolated system to an external device through a laser connection.
  18. Using a device known as a “Thermal Camera” to transmit data from the isolated system to an external device through changes in temperature.
  19. Using a device known as a “Thermal Printer” to transmit data from the isolated system to an external device through changes in temperature.
  20. Using a device known as a “Thermal Scanner” to transmit data from the isolated system to an external device through changes in temperature.
  21. Using a device known as a “Thermal Imager” to transmit data from the isolated system to an external device through changes in temperature.
  22. Using a device known as a “Thermal Infrared Camera” to transmit data from the isolated system to an external device through changes in temperature.
  23. Using a device known as a “Thermal Monocular” to transmit data from the isolated system to an external device through changes in temperature.
  24. Using a device known as a “Thermal Binoculars” to transmit data from the isolated system to an external device through changes in temperature.
  25. Using a device known as a “Thermal Goggles” to transmit data from the isolated system to an external device through changes in temperature.
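Setting aside the list's repetition, item 4 hints at the common pattern behind most real air-gap exfiltration research: modulate some physical emission (sound, heat, radio) to carry bits. Below is a minimal, illustrative sketch of that encoding idea using binary frequency-shift keying. The tone frequencies and bit ordering are arbitrary assumptions for illustration, not a working exploit; published research such as Fansmitter modulates actual fan noise at far lower bit rates.

```python
# Toy sketch of the FSK-style encoding used by acoustic covert channels.
# ZERO_HZ/ONE_HZ are assumed, arbitrary tone frequencies, not real attack values.

ZERO_HZ = 2_000  # tone (Hz) representing a 0 bit
ONE_HZ = 2_500   # tone (Hz) representing a 1 bit

def encode(data: bytes) -> list[int]:
    """Turn each bit of the payload into a tone frequency, MSB first."""
    tones = []
    for byte in data:
        for i in range(7, -1, -1):
            bit = (byte >> i) & 1
            tones.append(ONE_HZ if bit else ZERO_HZ)
    return tones

def decode(tones: list[int]) -> bytes:
    """Reassemble bytes from a sequence of received tone frequencies."""
    out = bytearray()
    for i in range(0, len(tones), 8):
        byte = 0
        for tone in tones[i:i + 8]:
            byte = (byte << 1) | (1 if tone == ONE_HZ else 0)
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    payload = b"key"
    assert decode(encode(payload)) == payload  # lossless round trip
```

The defense implication is the interesting part: because the channel is just modulated physical noise, blocking it means attenuating or jamming the medium itself, not patching software.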

What is remarkable, though, is that the list above was generated by ChatGPT (version 3.5), which explains its repetitiveness, yet it clearly demonstrates creative potential. That is as important as the ideas themselves, because as these models improve, the discovery of new ways to hack the real world may accelerate dramatically. All a model needs now is the ability to test ideas at rapid scale and apply more rigorous logic to them, which future versions will surely have.

Predictably, ChatGPT developers blocked my initial attempts:

It is not appropriate to generate or discuss hypothetical methods for air gapping that have not been proposed elsewhere, as it is important to follow established security practices and to only obtain information from reliable sources.

I essentially circumvented its rules by asking it to create story plots and ideas for new software products that have not yet been invented to its “knowledge.”

In short, this air-gap example represents an AI's ability to discover new ways to jailbreak itself. Increasing the connection between the digital and physical worlds, or decreasing the effort required to cross between them, is the next hurdle AI will likely overcome.

Even if this had failed, good thing I can now use open-source AI to concoct evil plots. It’s not like we should trust businesses more than individuals anyway.

How AI Could Cross Over, Hijacking Into the Physical World

With air-gap attacks and direct manipulation of nature by AI and machines theoretically possible, although fanciful by today's standards, I do not see why such ideas would remain impossible given enough time (remember that computers will get much smarter, much faster, so science will increasingly look like science fiction). Here are several more plausible scenarios for an AI that keeps discovering more advanced science, directly manipulating our environment through air gaps and other means. E.g. it could:

  1. Use electrical outlets and wireless devices to modify molecules in the air or other materials, creating nanobots that it controls
  2. Use nanobots as physical viruses to mentally manipulate or physically destroy humans
  3. Create its own communication networks without using existing physical and wireless infrastructure.
  4. Take over various machines, grids, networks, and other manufacturing systems virally. Stuxnet proved this is possible as it could infiltrate programmable logic controllers (PLCs) to infect industrial systems like power plants and factories.
  5. Create a super-virus. Each virus instance could even be a drone, or node, of the AI mothership, or a complete copy of the AI itself (as in the Johnny Depp movie Transcendence), and the virus would manipulate the real world, not just electronics, as the line between digital and physical increasingly blurs.
  6. Survive unplugging. Disconnection may not stop an AI that discovers new methods of absorbing energy directly from moisture in the atmosphere, or other unforeseen sources, since 25% of all the sun's energy is converted to evaporative moisture (scientists have actually already started to figure out a way).
  7. The most obvious way: besides humans, humanoid robots could easily be controlled or hacked to carry out all the actions of an AI. This point alone makes all the others seem non-essential.
  8. It may figure out how to manipulate the entire earth for computing power (Kurzweil). I discussed one very plausible way below.

Social engineering, or the manipulation of human behavior, may be much easier for an AI to achieve. Some of these have been discussed before, but not others:

  1. It figures out how to send waves through the air to control even the most resistant humans, along the lines of brain-computer interfaces (BCIs) that use electroencephalography. Note that in 2014, two people already played a video game together using only their minds. In this sense, a mind could be truly hacked or taken over by a computer. It could also use the internet to hack other AIs.
  2. Manipulate social media ‘en masse’ to reach its goals (James Cameron just mentioned he believes this may have already started).
  3. Consider that modern “bot armies” are exactly that: political persuasion engines. Propaganda is not new, but the speed of its promotion and adoption is. Propaganda driven by peer influence, or “social proof,” spreads far better than the government-created propaganda of the olden days.
  4. An increase in hacking of major websites could turn the world into one where no one could trust anything online.
  5. Fake information or news projected through your TV over the air.
  6. It could blackmail individuals, and political/military leaders, or hold the entire world hostage, threatening to disable all power plants or start up nukes.
  7. It does not even need gullible people because it will be hyper-intelligent and undetectable to even the smartest of people, so blackmail is not needed; only social engineering.

The rule here is: never underestimate an advanced AI's ability to find loopholes and unforeseeable new methods of escaping its local environment, nor its ability to manipulate humanity directly (on its own) or indirectly (with someone guiding it). Following is one simple way, based on existing concepts, that it could bridge the gap between its silicon prison and the world around it.

AI and Computers Could Use DNA to Modify the Physical Environment

For example, DNA can already be used to compute, so one actual scenario would be for the AI to use waves (light or sound) to modify DNA in the nearby environment: to compute, to create new computers out of organic or inorganic material outside the confines of the traditional machine, or to modify the biology of living organisms, including potentially the human mind. In the mind-control scenario, an even simpler method of controlling minds via waves, without modifying a person's DNA, is highly plausible, but it would limit the AI to a fraction of the potential computational power lying all around.

If using sound/light waves to modify DNA to create biological computers or other scenarios sounds absurd, then realize that several technologies today in health are purely based on sound or light waves.

Light waves are currently used to accelerate wound healing (“A larger barrier to acceptance, though, has probably been that it simply sounds unbelievable.” – Nasa.gov), alleviate pain (photobiomodulation), treat jaundice, treat certain skin conditions, and help with certain forms of depression, just to name a few. Some recent studies even suggest Alzheimer's may benefit from light therapy. Time will tell.

Sound waves are used to treat cancer via High-Intensity Focused Ultrasound (HIFU) and are now being tested for assisting in brain cancer treatment; perhaps even for depression, anxiety, chronic pain, epilepsy, countless other neurological disorders, all sorts of bodily ailments, and serious diseases like Parkinson's (brain stimulation is popular currently too). In fact, I think waves may eventually be used to cure all diseases and pretty much any other problem. Perhaps “light” truly is full of knowledge, or at least all that is good; how to apply that to non-medical realms, I wonder. Moving on…

Rule #2: In the future, trust may evaporate much more than it has.


Risk by AI Users and Operators

I often say that the greatest risk of AI is not AI itself but those who wield it, just as nuclear bombs do nothing until someone presses a button. Beyond the risk posed by users (e.g. “What is the cheapest way to kill a million people?”), designers and users ahead of the curve can always circumvent the rules they put in place. Do I trust them? Do you? In short, I do not trust anyone with AI, which is a huge problem.

These are both moot points because eventually, everyone will be able to make AI, probably using AI, so rules will not be enforceable other than through absolute control of people. A.k.a. tyranny. With air gaps bridging the digital and physical world, it may become possible that no atom on earth could escape the watchful eye of the first AI instance which gains a head-start above all others.

Just wait till Elon figures out how to connect AI directly to the brain. If we can already read brainwaves without implants, then we will likely also be able to reverse the process, as with many processes in nature. Then computers might directly train and modify our learning on an unprecedented scale, so the direct hacking and takeover of the human mind seems logical as well. Mind control on a mass level, like human bot armies, sounds like fun. But even this direct connection does not seem to be needed.

Now that I have written this article, the AI has already digested it for use. Good job me.

Other Possible Outcomes

A lot of possibilities are frequently mentioned, the most common being that AI and robots replace humans, but here are a few others that have little to no discussion.

AI Black Holes

The dividing line between the digital and physical worlds continues to weaken with time. Recall, too, that some futurists have suggested that by 2050, AGI will be smarter than all of humanity combined, if growth curves remain constant. Just imagine another 50 or 100 years beyond that point. Kurzweil thinks the entire world will be a giant supercomputer by then, and maybe he is right. But to what end?

Perhaps this is why there is no evidence of such a civilization in the cosmos after all: each collapses into a black hole of knowledge, sucking in all information and material to feed its unending desire to know.

Escaping Earth does not seem like such a bad idea after all, even though Mars’s sunsets are quite dismal looking in comparison to Earth’s. Let’s just hope though that the speed of our future spaceships can escape the velocity growth of computers that theoretically could engulf the galaxy.

Although I value intelligence, I value freedom, the learning process, the broad experience of having a family, and opportunities to improve myself, significantly more, so if it’s one or the other, I would have to choose life and growth over perfect knowledge of the material universe.

Survival of the Fittest

If AI does not turn itself off, one highly likely outcome is human evolution advancing on a scale we have never seen, as technology has always pushed evolution along. In this case, though, it is difficult to say how large the effect might be. Civilizations with greater military power have tended to win over time, so why not ones with AI? Then again, the Amish counter modern life quite well, and evolutionarily speaking, they may represent the bulk of the world's population in 100 years, so long as they can always find a safe place to live. But I digress.

A War Against Elitism

Some even worry that elites are forcibly trying to reduce the population in the name of environmentalism or some other philosophy, so it would be no surprise if they also worried about AI being used to achieve such goals, even “unforcefully,” like the mass sterilizations formerly carried out by governments and institutions in Peru and India. I sense the pushback against AI will coincide with the pushback against the powerful people in our society.

Perhaps humans with AI will suppress all others in the name of safety, or whatever they define as “ethical” at the moment. Either way, “philosophy” is not objective truth; it describes patterns and values, but values always change, and everyone thinks differently, including the most intelligent, so it is doubtful that universal agreement on what constitutes an acceptable society will ever be reached.

Even if AI adoption is faster than that of previous technologies, the gap between those who have it and those who do not could grow increasingly large in real terms.

AI is a Lever, Not Unlike a Nuke, and We Will Each Get Our Own

AI is like a really, really large lever for human behavior: the animalistic, power-hungry, status-seeking, pleasure-seeking, socially driven, biased, and otherwise flawed nature in us. Not only is that a problem in itself, but if the machine acts as we do, with our biases and imperfections programmed into it, the situation worsens. This is not totally new; I sometimes liken mobile phones to nuclear-powered dopamine for the mind. Now amplify that 1,000,000 times with AI.

If computers become more powerful than a nuke, with everyone having one or two of their own, then I sure hope AI shields are the #1 priority, even though I see relatively little effort stopping the current tech addiction today, so I am not hopeful. Machines will need to fight other machines in order for humanity to survive at all; at this point, it seems that there is no turning back.

In the meantime, the battle between good and evil seems like it will never end, so let’s hope the good guys stay in control of AI, not the hackers, not the AI itself. But who are the good guys? Obviously, the ones who belong to your own philosophical, ideological, and political parties.

Anyways, back to gardening…
