A handful of futurists, philosophers, and technophiles believe we are approaching what they call the “singularity”: a point in time when smart machines become much smarter, stronger, and faster than their creators, and then become self-conscious. If there’s any chance of this occurring, it’s worthwhile to ponder the consequences. But we do already, all the time—in existentially bleak scenarios like Blade Runner, the Terminator series, the rebooted Battlestar Galactica (and its failed prequel Caprica).
The prospects are never pleasant. Robotic engineers in these worlds hardly seem to bother teaching their machines the kind of moral code that would keep them from turning on and destroying us (that is, when they aren’t explicitly designed to do so).
I wonder about this conceptual gap—convenient as it may be in narrative terms—given that Isaac Asimov, one of the forefathers of robot fiction, invented just such a moral code. In the video above, he outlines it (with his odd pronunciation of “robot”). The code consists of three laws; in his fiction these are hardwired into each robot’s “positronic brain,” a fictional computer that gives robots something of a human-like consciousness.
First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
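Read as an engineering spec, the laws describe a strict priority ordering: each law yields to the ones above it. Here is a minimal sketch of that hierarchy, assuming (unrealistically, as the stories themselves show) that every candidate action can be reduced to a few boolean judgments; the names Action and choose_action are hypothetical illustrations, not anything from Asimov’s fiction:

```python
from dataclasses import dataclass

# A toy model of a candidate action. The boolean flags stand in for
# judgments a real robot would somehow have to make about the world.
@dataclass
class Action:
    name: str
    harms_human: bool      # First Law: would this injure a human?
    protects_human: bool   # First Law: does it avert harm to a human?
    obeys_order: bool      # Second Law: does it carry out a human order?
    preserves_self: bool   # Third Law: does it keep the robot intact?

def choose_action(candidates: list[Action]) -> Action:
    """Pick an action by the strict priority of the Three Laws.

    'Doing nothing' should be passed in as a candidate too, so the
    'through inaction' clause of the First Law is covered: an idle
    action that lets a human come to harm loses to any candidate
    that averts the harm.
    """
    # First Law, hard veto: actions that injure a human are never eligible.
    legal = [a for a in candidates if not a.harms_human]
    if not legal:
        raise RuntimeError("no lawful action available")
    # Rank the survivors: averting human harm beats obeying orders,
    # which beats self-preservation (tuples compare left to right).
    return max(legal, key=lambda a: (a.protects_human, a.obeys_order, a.preserves_self))

options = [
    Action("do nothing", False, False, False, True),
    Action("shield the human", False, True, False, False),
    Action("obey 'stand aside'", False, False, True, True),
]
print(choose_action(options).name)  # -> "shield the human"
```

Even this crude ranking reproduces the classic dilemmas: an order to stand aside loses to pulling a human out of danger, and self-preservation loses to both. What it cannot do is tell the robot how to fill in those boolean flags in the first place, which is where Asimov’s plots, and the ambiguities below, begin.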
Isaac Asimov devoted a good deal of his writing career to the subject of robots, so it’s safe to say he’d done quite a bit of thinking about how they would fit into the worlds he invented. In doing so, Asimov had to solve the problem of how robots would interact with humans once they had some degree of free will. But are his three laws sufficient? Many of Asimov’s stories–I, Robot, for example–turn on some failure or confusion between them. And even amid their chase scenes, explosions, and melodrama, the three screen explorations of artificial life mentioned above thoughtfully exploit philosophical ambiguities and insufficiencies in Asimov’s simple system.
For one thing, while Asimov’s robots were hunks of metal, taking only vaguely humanoid form, the robots of our current imaginings emerge from an uncanny valley with realistic skin and hair or even a genetic code and circulatory system. They are possible sexual partners, friends and lovers, co-workers and superiors. They can deceive us as to their nature (a fourth law by Bulgarian novelist Lyuben Dilov states that a robot “must establish its identity as a robot in all cases”); they can conceive children or desires their creators never intended. These differences raise important questions: how ethical are these laws? How feasible? When the singularity occurs, will Skynet become aware of itself and destroy us?
Unlike Asimov, we now live in a time when these questions have direct applicability to robots living among us, outside the pages of sci-fi. As Japanese and South Korean roboticists have found, the three laws cannot address what they call “open texture risk”—unpredictable interactions in unstructured environments. Humans rely on nuanced and often preconscious readings of complex social codes and the fine shades of meaning embedded in natural language; machines have no such subtlety… yet. Whether or not they can develop it is an open question, making humanoid robots with artificial intelligence an “open texture risk.” But as you can see from the video below, we’re perhaps much closer to Blade Runner or AI than to the clunky, interstellar mining machines in Asimov’s fiction.
Josh Jones is a doctoral candidate in English at Fordham University and a co-founder and former managing editor of Guernica / A Magazine of Arts and Politics.