Isaac Asimov Explains His Three Laws of Robotics

A handful of futurists, philosophers, and technophiles believe we are approaching what they call the "singularity": a point in time when smart machines become much smarter, stronger, and faster than their creators, and then become self-conscious. If there's any chance of this occurring, it's worthwhile to ponder the consequences. But we do already, all the time, in existentially bleak scenarios like Blade Runner, the Terminator series, the rebooted Battlestar Galactica (and its failed prequel Caprica).

The prospects are never pleasant. Robotic engineers in these worlds hardly seem to bother teaching their machines the kind of moral code that would keep them from turning on and destroying us (that is, when they aren't explicitly designed to do so).

I wonder about this conceptual gap (convenient as it may be in narrative terms), given that Isaac Asimov, one of the forefathers of robot fiction, invented just such a moral code. In the video above, he outlines it (with his odd pronunciation of "robot"). The code consists of three laws; in his fiction these are hardwired into each robot's "positronic brain," a fictional computer that gives robots something of a human-like consciousness.

First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
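
Read as a decision procedure, the three laws amount to a strict priority ordering: the First Law overrides the Second, which overrides the Third. Here is a minimal, purely illustrative sketch in Python; the Action fields and the permitted function are invented for this example, and the First Law's "through inaction" clause is left out for simplicity:

    # Purely illustrative: Asimov's Three Laws read as an ordered priority check.
    # The Action fields and rules below are invented for this sketch; the First
    # Law's "through inaction" clause is omitted to keep the example short.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool       # would carrying this out injure a human being?
        ordered_by_human: bool  # was the action ordered by a human being?
        endangers_robot: bool   # does it put the robot's own existence at risk?

    def permitted(action: Action) -> bool:
        # First Law: a robot may not injure a human being.
        if action.harms_human:
            return False
        # Second Law: obey human orders (First Law conflicts already ruled out above).
        if action.ordered_by_human:
            return True
        # Third Law: protect its own existence, unless that conflicts with the above.
        return not action.endangers_robot

Even in this toy form the ordering does all the work, and, as Asimov's stories and the films above show, most of the interesting trouble starts when the predicates themselves ("harm," "order," "human") turn out to be ambiguous.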

Isaac Asimov devoted a good deal of his writing career to the subject of robots, so it's safe to say he'd done quite a bit of thinking about how they would fit into the worlds he invented. In doing so, Asimov had to solve the problem of how robots would interact with humans once they had some degree of free will. But are his three laws sufficient? Many of Asimov's stories (I, Robot, for example) turn on some failure of or confusion between them. And for all their chase scenes, explosions, and melodrama, the three screen explorations of artificial life mentioned above thoughtfully exploit philosophical ambiguities and insufficiencies in Asimov's simple system.

For one thing, while Asimov's robots were hunks of metal, taking only vaguely humanoid form, the robots of our current imaginings emerge from an uncanny valley with realistic skin and hair, or even a genetic code and circulatory system. They are possible sexual partners, friends and lovers, co-workers and superiors. They can deceive us as to their nature (a fourth law by Bulgarian novelist Lyuben Dilov states that a robot "must establish its identity as a robot in all cases"); they can conceive children or desires their creators never intended. These differences raise important questions: how ethical are these laws? How feasible? When the singularity occurs, will Skynet become aware of itself and destroy us?

Unlike Asimov, we now live in a time when these questions have direct applicability to robots living among us, outside the pages of sci-fi. As Japanese and South Korean roboticists have found, the three laws cannot address what they call "open texture risk": unpredictable interactions in unstructured environments. Humans rely on nuanced and often preconscious readings of complex social codes and the fine shades of meaning embedded in natural language; machines have no such subtlety... yet. Whether or not they can develop it is an open question, making humanoid robots with artificial intelligence an "open texture risk." But as you can see from the video below, we're perhaps much closer to Blade Runner or AI than to the clunky interstellar mining machines in Asimov's fiction.

Josh Jones is a doctoral candidate in English at Fordham University and a co-founder and former managing editor of Guernica / A Magazine of Arts and Politics.





Comments (5)
  • One of the more interesting characters in the Foundation books is definitely R. Daneel Olivaw and his various identities.

  • Maxim Ray says:

    It was a very interesting read. I had never realized that someone made laws for robots. The very human-like robot video was very cool as well.

  • Steve says:

    Interesting, this 'uncanny valley.' I wonder if we will face this as well on the intelligence facet.
    I wonder what first laws the new AI robots will make for us: 1: "Thou shalt not program any robot." 2: "Thou shalt not imitate Arnold Schwarzenegger's 'I'll be back.'"

  • Tim says:

    If you do a little more research you'll find out that Asimov was horrified at the idea of his laws being put into practice. Most of his robot stories were about how the laws didn't work.

  • Christopher says:

    As I scream into the void: I think Asimov's universe banned robots. I am just searching for this in Asimov's stories. In the last Foundation book, do they find a distant station 'manned' by an ageing robot? There are a number of stories about mining operations 'manned' by robots. The first comment mentions Olivaw. Isn't Olivaw the 20,000-year-old robot who guided Hari Seldon, the history-math wiz, found living on the second, secret Foundation planet?

    Why did Asimov ban robots? How does Asimov explain this?

