Isaac Asimov Explains His Three Laws of Robotics

A handful of futurists, philosophers, and technophiles believe we are approaching what they call the “singularity”: a point in time when smart machines become much smarter, stronger, and faster than their creators, and then become self-conscious. If there’s any chance of this occurring, it’s worthwhile to ponder the consequences. But we do already, all the time—in existentially bleak scenarios like Blade Runner, the Terminator series, the rebooted Battlestar Galactica (and its failed prequel Caprica).

The prospects are never pleasant. Robotic engineers in these worlds hardly seem to bother teaching their machines the kind of moral code that would keep them from turning on and destroying us (that is, when they aren’t explicitly designed to do so).

I wonder about this conceptual gap—convenient as it may be in narrative terms—given that Isaac Asimov, one of the forefathers of robot fiction, invented just such a moral code. In the video above, he outlines it (with his odd pronunciation of “robot”). The code consists of three laws; in his fiction these are hardwired into each robot’s “positronic brain,” a fictional computer that gives robots something of a human-like consciousness.

First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
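The three laws form a strict priority ordering: the First Law vetoes everything, the Second yields only to the First, and the Third yields to both. As a purely illustrative sketch (nothing from Asimov’s fiction—the `Action` class, its fields, and the `permitted` function are all hypothetical names invented here), the hierarchy might look like:

```python
from dataclasses import dataclass

# Hypothetical sketch: the Three Laws as a priority-ordered filter
# on candidate actions. All names and fields are illustrative
# assumptions, not anything specified in Asimov's stories.

@dataclass
class Action:
    name: str
    harms_human: bool = False       # would injure a human, or allow harm by inaction
    ordered_by_human: bool = False  # a human explicitly ordered this action
    self_destructive: bool = False  # endangers the robot's own existence

def permitted(action: Action) -> bool:
    if action.harms_human:           # First Law: an absolute veto
        return False
    if action.ordered_by_human:      # Second Law: obey, unless the First Law vetoed it
        return True
    return not action.self_destructive  # Third Law: otherwise, preserve yourself

# The Second Law outranks the Third: an ordered but self-destructive
# action is still permitted.
print(permitted(Action("push bystander", harms_human=True)))                              # False
print(permitted(Action("walk into fire", ordered_by_human=True, self_destructive=True)))  # True
```

Even this toy version hints at the trouble Asimov mined for plots: everything hinges on the robot correctly classifying “harm,” which is exactly where the stories’ ambiguities live.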

Isaac Asimov devoted a good deal of his writing career to the subject of robots, so it’s safe to say he’d done quite a bit of thinking about how they would fit into the worlds he invented. In doing so, Asimov had to solve the problem of how robots would interact with humans once they had some degree of free will. But are his three laws sufficient? Many of Asimov’s stories—I, Robot, for example—turn on some failure or confusion between them. And for all their chase scenes, explosions, and melodrama, the three screen explorations of artificial life mentioned above thoughtfully exploit philosophical ambiguities and insufficiencies in Asimov’s simple system.

For one thing, while Asimov’s robots were hunks of metal, taking only vaguely humanoid form, the robots of our current imaginings emerge from an uncanny valley with realistic skin and hair or even a genetic code and circulatory system. They are possible sexual partners, friends and lovers, co-workers and superiors. They can deceive us as to their nature (a fourth law by Bulgarian novelist Lyuben Dilov states that a robot “must establish its identity as a robot in all cases”); they can conceive children or desires their creators never intended. These differences raise important questions: how ethical are these laws? How feasible? When the singularity occurs, will Skynet become aware of itself and destroy us?

Unlike Asimov, we now live in a time when these questions have direct applicability to robots living among us, outside the pages of sci-fi. As Japanese and South Korean roboticists have found, the three laws cannot address what they call “open texture risk”: the unpredictable interactions that arise in unstructured environments. Humans rely on nuanced and often preconscious readings of complex social codes and the fine shades of meaning embedded in natural language; machines have no such subtlety… yet. Whether or not they can develop it is an open question. But as you can see from the video below, we’re perhaps much closer to Blade Runner or AI than to the clunky, interstellar mining machines of Asimov’s fiction.

Josh Jones is a doctoral candidate in English at Fordham University and a co-founder and former managing editor of Guernica / A Magazine of Arts and Politics.



Comments (5)
  • One of the more interesting characters in the Foundation books is definitely R. Daneel Olivaw and his various identities.

  • Maxim Ray says:

    It was a very interesting read. I had never realized that someone made laws for robots. The very human-like robot video was very cool as well.

  • Steve says:

    Interesting, this ‘uncanny valley.’ I wonder if we will face this as well on the intelligence facet.
    I wonder what first laws the new AI robots will make for us: 1: “Thou shalt not program any robot.” 2: “Thou shalt not imitate Arnold Schwarzenegger’s ‘I’ll be back.’”

  • Tim says:

    If you do a little more research you’ll find out that Asimov was horrified at the idea of his laws being put into practice. Most of his robot stories were about how the laws didn’t work.

  • Christopher says:

    As I scream into the void: I think Asimov’s universe banned robots. I am just searching for this in Asimov’s stories. In the last Foundation book, do they find a distant station ‘manned’ by an ageing robot? Aren’t there a number of stories about mining operations ‘manned’ by robots? The first comment mentions Olivaw. Isn’t Olivaw the 20,000-year-old robot who guided Hari Seldon, the history-math wiz, found living on the second, secret Foundation planet?

    Why did Asimov ban robots? How does Asimov explain this?


Open Culture was founded by Dan Colman.