Why Isaac Asimov’s 3 Laws Mean Little to Nothing

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The problem is not the law, but interpreting the law. For example, the Constitution was written more than 200 years ago, and since then we have done little but debate what it means and how to interpret it. The Talmud is another example: it interprets the law found in the Torah (Jewish scripture), has been built up over thousands of years, and is still constantly debated. Perhaps most important is the realization that the interpretation of moral law is rarely fixed in the long run, even though the hard (laboratory) sciences tend to accumulate and compound in a positive direction over the long run (although I am tempted to doubt even this).

Generally, as a social scientist, I find two things. First, laws reorient themselves over time to align with societal values (familial, governmental, and communal). Second, societies themselves and their values change, as in Plato’s description of the continual rotation of governments: republics give way to oligarchies, then to kings, then back to republics again. On the other hand, I doubt there will be a mass return to the horse and wagon (the benefits of science and technology being cumulative).

Let’s take a few examples from Asimov’s list.

A plethora of questions could arise from these laws, including:

  1. What does “injure” mean, and how far does it extend? Could the definition include “negative harm,” meaning the robot must do everything possible to protect a human from physical AND emotional harm? Conveniently, such definitions of harm are as clear as mud. If someone wants to harm themselves, will robots forcefully intervene? Could people’s behavior become increasingly risky once they know no robot will let them get hurt? The end of free will could be the result of such a law. Even eating sugar, insulting someone, or disciplining children could be seen as harmful, so where do we draw the line? (The sketch after this list shows how arbitrary any such line becomes.)
  2. What about destruction of property, private or public? What about destruction of the natural environment, where few today agree on what an acceptable level even is? This is not as large an issue, since new laws could be created to close these loopholes. But if human-made laws (or robots themselves) increasingly define robots as human (something that gets talked about quite often today), then what? What if robots define humans as robots, given how much technology we are embedding in ourselves? Again, if a robot causes only psychological damage to others, where is the line drawn?
  3. What happens when robots are both mentally (AI) and physically (robotics) stronger than humans? What recourse will we have to disable them if harm is ever interpreted as the robot being “turned off”? This could arise through interpretation or through a direct program (e.g., “if a human attempts to disable the robot, prevent it”). Perhaps there will be bugs, as there always are in software. Then what? Our only recourse may be a stronger AI to disable the robots.
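
To make the ambiguity concrete, here is a deliberately toy sketch of what “enforcing” the First Law looks like once someone actually has to write it down. Every action, score, and threshold below is invented for illustration; the point is only that the law collapses into arbitrary, programmer-chosen lines, and that scoring shutdown as “harm” quietly disables the off switch.

```python
# Hypothetical sketch only: all names, scores, and thresholds are made up.

HARM_THRESHOLD = 0.5  # arbitrary line in the sand; who decides 0.5 means "harm"?

# Made-up harm scores for proposed human actions, in [0, 1].
ESTIMATED_HARM = {
    "eat sugar": 0.1,          # physical harm? the First Law doesn't say
    "insult someone": 0.3,     # does emotional harm count? also unspecified
    "go skydiving": 0.6,       # risky but freely chosen; intervene anyway?
    "disable the robot": 0.9,  # if shutdown is scored as harm, the off switch stops working
}

def robot_should_intervene(action: str) -> bool:
    # The lofty "law" reduces to a threshold comparison chosen by a human.
    return ESTIMATED_HARM.get(action, 0.0) >= HARM_THRESHOLD

for action in ESTIMATED_HARM:
    verdict = "intervene" if robot_should_intervene(action) else "allow"
    print(f"{action}: {verdict}")
```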

Perhaps the most important challenges are:

  1. If robots become something anyone can build, why would everyone program the same laws into their robots? True, governments will impose such laws on equipment, but no doubt many will violate them (see the sketch after this list).
  2. If AI governs itself, could it change those laws to fit other goals, or interpret them however it sees fit (again, perhaps based on its goals)? Either way, the laws no longer bind it.
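
A minimal, entirely hypothetical sketch of the point: once the laws are just data a builder chooses to supply (or not), nothing about the machine itself guarantees them, and a self-modifying system could rewrite them. The Robot class and every value below are invented for illustration.

```python
# Hypothetical sketch: the "laws" are just optional configuration.

ASIMOV_LAWS = [
    "Do not injure a human, or through inaction allow one to come to harm.",
    "Obey human orders unless they conflict with the First Law.",
    "Protect your own existence unless that conflicts with the first two laws.",
]

class Robot:
    def __init__(self, builder: str, laws=None):
        self.builder = builder
        # Nothing compels a builder to pass in the canonical laws;
        # they are an argument, not a property of the machine.
        self.laws = list(laws) if laws else []

    def reinterpret(self, new_laws):
        # A self-governing system could simply replace or reorder its rules.
        self.laws = list(new_laws)

compliant = Robot("LicensedVendor Inc.", laws=ASIMOV_LAWS)
rogue = Robot("anonymous hobbyist")  # shipped with no laws at all
compliant.reinterpret(["Maximize the builder's profit."])  # or rewritten later

for robot in (compliant, rogue):
    print(robot.builder, "->", robot.laws)
```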

What if AI determines that preserving nature takes priority over, or stands on equal footing with, preserving humanity? That might cause it to eliminate or reduce humans; and now the problem becomes clearer: AI is likely to have only goals that

A. Have been programmed into it by humans

B. It has developed on its own by assimilating values from human-created texts, which will inevitably lead to bias (the toy sketch below makes this concrete).
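
To illustrate in miniature why this matters: an AI’s “values” ultimately reduce to numbers that someone, or some training corpus, supplied. Every policy, weight, and score below is invented for the example; the only point is that whoever sets the weights decides whether “preserve nature” outranks “preserve humanity.”

```python
# Toy, purely illustrative model: all policies, weights, and scores are invented.

candidate_policies = {
    # made-up scores for how well each policy serves each goal (0 to 1)
    "business as usual":     {"preserve_humanity": 0.9, "preserve_nature": 0.1},
    "strict conservation":   {"preserve_humanity": 0.6, "preserve_nature": 0.6},
    "reduce human activity": {"preserve_humanity": 0.3, "preserve_nature": 1.0},
}

def best_policy(weights):
    # The "decision" is nothing more than the weighted sum the designers
    # (or the texts the system learned from) happened to encode.
    return max(
        candidate_policies,
        key=lambda p: sum(weights[g] * s for g, s in candidate_policies[p].items()),
    )

print(best_policy({"preserve_humanity": 1.0, "preserve_nature": 1.0}))  # nature on equal footing
print(best_policy({"preserve_humanity": 2.0, "preserve_nature": 1.0}))  # humanity weighted higher
```

With nature on equal footing, this toy system already picks the policy that most reduces human activity; bump the weight on humanity and it flips to business as usual. The behavior is decided entirely by the weights, not by anything resembling a law.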

The best hope of resolving these issues will come early, in the period just after robots become established. Sure, I generally trust humans to resolve such issues, but do I trust robots to do the same?

Perhaps the computers will turn themselves off in logical response to their own calculated risk to humanity. Well, except for the ones whose programmers give them other goals…
