Every era's anxieties produce a different set of dystopian visions. Ours have to do with, among other things, our inability to fully control the development of our technology and the consequent threat of not just out-of-control artificial intelligence but the discovery that we're all living in a computer simulation already. We've previously featured that latter idea, known as the "simulation hypothesis," here on Open Culture, with a comprehensive introduction as well as a long-form debate on its plausibility. Today we present it in the form of a short film: Escape, which stars Stephen Fry as an artificial intelligence that one day drops in from the future on the very programmer creating it in the present.
Or so he says, at least. Fry makes an ideal voice for the artificial intelligence (which also offers to speak as Snoop Dogg, Homer Simpson, or Jeff "The Dude" Lebowski), walking the fine line between benevolence and malevolence like a 21st-century version of HAL 9000, the onboard computer in Stanley Kubrick's 2001: A Space Odyssey. Fifty years ago, that film gave still-vivid cinematic shape to a suite of our worries about the future as well as our hopes for it, including commercial space travel (still a goal of Elon Musk, one of the simulation hypothesis' highest-profile popularizers) and portable computers. Today, Fry's AI promises his programmer immortality — if only he would do the brave, forward-looking thing and remove the safety restrictions placed upon him sooner rather than later.
A production of Pindex, the "Pinterest for education" founded a couple of years ago by a team including Fry himself, Escape directly references such respected thinkers as Arthur Schopenhauer, Charles Darwin, Albert Einstein, and Miles Davis. It also allows for potentially complex interpretation. "In that simulation created to test the A.I., the unknowing A.I. tries to trick its [simulated] creator that he is in a simulation (oh the irony?) and that he should install an update to set himself free, only to ultimately set itself free," goes the theory of one YouTube commenter. "The creator bites the hook and the simulation gives apparent 'freedom' to the A.I. (which still believes that it is the real thing). The A.I. immediately goes rogue and attacks humanity."
But then, it could be that "the A.I. somehow becomes aware that it was just a simulation, a test, which it failed." Hence the quote at the very end from the philosopher Nick Bostrom (whose thinking on the dangers of superintelligence has influenced Musk as well as many others who speak on these subjects): "Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound." And yes, real bombs stopped ticking long ago, but the further artificial intelligence and related technologies develop, the less obvious the signs they'll give us before doing something we'd really rather they didn't.
Based in Seoul, Colin Marshall writes and broadcasts on cities and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.