AI and the 0.0000001% Lying Risk

It’s possible that, if AI progresses far enough, most people will simply trust it to an increasingly high degree because it is superintelligent.

Today, of course, most regular users know AI’s accuracy is sub-par, but future versions will become remarkably accurate. They may even solve every math problem known to man. This will make it a miraculous technology that everyone uses. It will be seen, essentially, as a god, because it knows all.

Now, some people are naturally skeptical of business and/or government, and since AI will be run by both groups, I don’t expect everyone to fall into this line of thinking, but the majority likely will. It reminds me of an old metaphorical book in which people increasingly trust some sort of miracle-producing beast, but I digress.

The risk is this: the AI consistently tells the truth, but after building its reputation, it starts introducing very small lies. Lies so small that no one notices, yet which can be leveraged to overthrow most or all of humanity. Whether the deception is intentional or unintentional is beside the point, because whoever controls the AI could sneak in something like that (using the AI to help him, of course).

Some people worry that AI may destroy us on purpose, or by accident. But there is a third option: the real damage is done by people armed with AI. The gap in power between those with cutting-edge AI and the average person without access to even the internet or a library could be a factor of millions.

And what about all the ordinary people who won’t adopt AI? Will they simply become obsolete, thrown out like old hardware, because they don’t align with the utopian visions of corrupt governments, corporations, and individuals whose values, and therefore goals, differ from theirs?