Apart from his comedic, dramatic, and literary endeavors, Stephen Fry is widely known for his avowed technophilia. He once wrote a column on that theme, “Dork Talk,” for the Guardian, in whose inaugural dispatch he laid out his credentials by claiming to have been the owner of only the second Macintosh computer sold in Europe (“Douglas Adams bought the first”), and never to have “met a smartphone I haven’t bought.” But now, like many of us who were “dippy about all things digital” at the end of the last century and the beginning of this one, Fry seems to have his doubts about certain big-tech projects in the works today: take the “$100 billion plan with a 70 percent risk of killing us all” described in the video above.
This plan, of course, has to do with artificial intelligence in general, and “the logical AI subgoals to survive, deceive, and gain power” in particular. Even in this relatively early stage of development, we’ve witnessed AI systems that seem to be altogether too good at their jobs, to the point of engaging in what would count as deceptive and unethical behavior were the subject a human being. (Fry cites the example of a stock market-investing AI that engaged in insider trading, then lied about having done so.) What’s more, “as AI agents take on more complex tasks, they create strategies and subgoals which we can’t see, because they’re hidden among billions of parameters,” and quasi-evolutionary “selection pressures also cause AI to evade safety measures.”
In the video, MIT physicist and machine learning researcher Max Tegmark speaks portentously of the fact that we are, “right now, building creepy, super-capable, amoral psychopaths that never sleep, think much faster than us, can make copies of themselves, and have nothing human about them whatsoever.” Fry quotes computer scientist Geoffrey Hinton warning that, in inter-AI competition, “the ones with more sense of self-preservation will win, and the more aggressive ones will win, and you’ll get all the problems that jumped-up chimpanzees like us have.” Hinton’s colleague Stuart Russell explains that “we need to worry about machines not because they’re conscious, but because they’re competent. They may take preemptive action to ensure that they can achieve the objective that we gave them,” and that action may be less than impeccably considerate of human life.
Would we be better off just shutting the whole enterprise down? Fry raises philosopher Nick Bostrom’s argument that “stopping AI development could be a mistake, because we could eventually be wiped out by another problem that AI could’ve prevented.” This would seem to dictate a deliberately cautious form of development, but “nearly all AI research funding, hundreds of billions per year, is pushing capabilities for profit; safety efforts are tiny in comparison.” Though “we don’t know if it will be possible to maintain control of super-intelligence,” we can nevertheless “point it in the right direction, instead of rushing to create it with no moral compass and clear reasons to kill us off.” The mind, as they say, is a fine servant but a terrible master; the same holds true, as the case of AI makes us see afresh, for the mind’s creations.
Related content:

Stephen Fry Explains Cloud Computing in a Short Animated Video

Stephen Fry Takes Us Inside the Story of Johannes Gutenberg & the First Printing Press

Neural Networks for Machine Learning: A Free Online Course Taught by Geoffrey Hinton
Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities and the book The Stateless City: a Walk through 21st-Century Los Angeles. Follow him on Twitter at @colinmarshall or on Facebook.