In 2014, Google acquired DeepMind, the company that soon made news when its artificial intelligence software defeated one of the world's best players of the ancient Chinese strategy game, Go. What's DeepMind up to these days? More elemental things--like teaching itself to walk. Above, watch what happens when DeepMind's AI learns, on the fly, to walk, run, jump, and climb. Sure, it all seems a little kooky--until you realize that if DeepMind's AI can learn to walk in hours, it can take your job in a matter of years.
What's director Michel Gondry up to these days? Apparently, trying to show that you can do smart things--like make serious movies--with that smartphone in your pocket. The director of Eternal Sunshine of the Spotless Mind and the Noam Chomsky animated documentary Is the Man Who Is Tall Happy? has just released "Détour," a short film shot purely on his iPhone 7 Plus. Subtitled in English, "Détour" runs about 12 minutes and follows "the adventures of a small tricycle as it sets off along French roads in search of its young owner." Watch it, then ask yourself: was this really not made with a traditional camera? And then ask yourself: what's my excuse for not getting out there and making movies?
When television appeared in Japan in the 1950s, most people in that still-poor country could only satisfy their curiosity about it by watching the display models in store windows. But by the 1980s, the Japanese had become not just astonishingly rich but world leaders in technology as well. It took something special to make Tokyoites stop on the streets of Akihabara, the city's go-to district for high technology, but stop they did in 1990 when, in the windows of Sony Town, appeared Infinite Escher.
Produced by Sony HDVS Soft Center as a showcase for the company's brand new high-definition video technology, this short film caused passersby, according to the video description, to "gasp in amazement at the clarity and sharp crisp focus of the picture."
Running seven and a half minutes, it tells the story of a bespectacled New York City teenager (played by a young Sean Lennon, son of John Lennon and Yoko Ono) who steps off the school bus one afternoon to find M.C. Escher-style visual motifs in the urban landscape all around him: a jigsaw puzzle piece-shaped curbside puddle, a transparent geometrically patterned basketball.
When he goes home to sketch a few artistic-mathematical ideas of his own, he looks into an awfully familiar-looking reflecting sphere and gets sucked into a completely Escherian realm. This sequence demonstrates not just the look of Sony's high-definition video, but the then-state-of-the-art techniques for dropping real-life characters into computer-generated settings and vice versa. In addition to the visions of the Dutch graphic artist who not just imagined but rendered the impossible, Sony also brought in two other powerful creative minds: Japanese musician Ryuichi Sakamoto to compose the score and Korean video artist Nam June Paik to handle the art direction.
Watching Infinite Escher today may first underscore just how far high-definition video and computer graphics have come over the past 27 years, but it ultimately shows another example of how Escher's visions, even after the artist's death in 1972, have remained so compelling that each era — with its own technological, cultural, and aesthetic trends — pays its own kind of tribute to them.
I’ve never quite understood why the phrase “revisionist history” became purely pejorative. Of course, it has its Orwellian dark side, but all knowledge has to be revised periodically, as we acquire new information and, ideally, discard old prejudices and narrow frames of reference. A failure to do so seems fundamentally regressive, not only in political terms, but also in terms of how we value accurate, interesting, and engaged scholarship. Such research has recently brought us fascinating stories about previously marginalized people who made significant contributions to scientific discovery, such as NASA's “human computers,” portrayed in the book Hidden Figures, then dramatized in the film of the same name.
Likewise, the many women who worked at Bletchley Park during World War II—helping to decipher encryptions like the Nazi Enigma Code (out of nearly 10,000 codebreakers, about 75% were women)—have recently been getting their historical due, thanks to “revisionist” researchers. And, as we noted in a recent post, we might not know much, if anything, about silent film star Hedy Lamarr’s significant contributions to wireless, GPS, and Bluetooth technology were it not for the work of historians like Richard Rhodes. These few examples, among many, show us a fuller, more accurate, and more interesting view of the history of science and technology, and they inspire women and girls who want to enter the field, yet have grown up with few role models to encourage them.
We can add to the pantheon of great women in science the name Ada Byron, Countess of Lovelace, the daughter of Romantic poet Lord Byron. Lovelace has been renowned, as Hank Green tells us in the video at the top of the post, for writing the first computer program, “despite living a century before the invention of the modern computer.” This picture of Lovelace has been a controversial one. “Historians disagree,” writes scientist Stephen Wolfram. “To some she is a great hero in the history of computing; to others an overestimated minor figure.”
Wolfram spent some time with “many original documents” to untangle the mystery. “I feel like I’ve finally gotten to know Ada Lovelace,” he writes, “and gotten a grasp on her story. In some ways it’s an ennobling and inspiring story; in some ways it’s frustrating and tragic.” Educated in math and music by her mother, Anne Isabella Milbanke, Lovelace became acquainted with mathematics professor Charles Babbage, the inventor of a calculating machine called the Difference Engine, “a 2-foot-high hand-cranked contraption with 2000 brass parts.” Babbage encouraged her to pursue her interests in mathematics, and she did so throughout her life.
Widely acknowledged as one of the forefathers of computing, Babbage eventually corresponded with Lovelace on the creation of another machine, the Analytical Engine, which “supported a whole list of possible kinds of operations, that could in effect be done in arbitrarily programmed sequence.” When, in 1842, Italian mathematician Luigi Menabrea published a paper in French on the Analytical Engine, “Babbage enlisted Ada as translator,” notes the San Diego Supercomputer Center's Women in Science project. “During a nine-month period in 1842-43, she worked feverishly on the article and a set of Notes she appended to it. These are the source of her enduring fame.” (You can read her translation and notes here.)
In the course of his research, Wolfram pored over Babbage and Lovelace’s correspondence about the translation, which reads “a lot like emails about a project might today, apart from being in Victorian English.” Although she built on Babbage and Menabrea’s work, “she was clearly in charge” of successfully extrapolating the possibilities of the Analytical Engine, but she felt “she was first and foremost explaining Babbage’s work, so wanted to check things with him.” Her additions to the work were very well-received—Michael Faraday called her “the rising star of Science”—and when her notes were published, Babbage wrote, “you should have written an original paper.”
Unfortunately, as a woman, “she couldn’t get access to the Royal Society’s library in London,” and her ambitions were derailed by a severe health crisis. Lovelace died of cancer at the age of 37, and for some time, her work sank into semi-obscurity. Though some historians have seen her as simply an expositor of Babbage’s work, Wolfram concludes that it was Ada who had the idea of “what the Analytical Engine should be capable of.” Her notes suggested possibilities Babbage had never dreamed of. As the Women in Science project puts it, "She rightly saw [the Analytical Engine] as what we would call a general-purpose computer. It was suited for 'developping [sic] and tabulating any function whatever. . . the engine [is] the material expression of any indefinite function of any degree of generality and complexity.' Her Notes anticipate future developments, including computer-generated music."
A popular thought experiment asks us to imagine an advanced alien species arriving on Earth, not in an H.G. Wells-style invasion, but as advanced, bemused, and benevolent observers. “Wouldn’t they be appalled,” we wonder, “shocked, confused at how backward we are?” It’s a purely rhetorical device—the secular equivalent of taking a “god’s eye view” of human folly. Few people seriously entertain the possibility in polite company. Unless they work at NASA or the SETI program.
In 1977, with the launch of Voyager 1 and Voyager 2, a committee working under Carl Sagan produced the so-called “Golden Records,” actual phonograph LPs made of gold-plated copper containing “a collection of sounds and images,” writes Joss Fong at Vox, “that will probably outlast all human artifacts on Earth.” While they weren’t preparing for a visitation on Earth, they did—relying not on wishful thinking but on the controversial Drake Equation—fully expect that other technological civilizations might well exist in the cosmos, and thought it likely we might encounter one, at least remotely.
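For context, the Drake Equation mentioned above estimates N, the number of detectable civilizations in our galaxy, as the product of seven factors; its controversy lies less in the formula itself than in the guesswork behind each term:

```latex
% Drake Equation: N = expected number of detectable civilizations in the Milky Way
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
% R_*: average rate of star formation in the galaxy
% f_p: fraction of stars with planetary systems
% n_e: habitable planets per system with planets
% f_l: fraction of those on which life appears
% f_i: fraction of those that develop intelligence
% f_c: fraction of those that release detectable signals
% L:   length of time such signals are released
```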
Sagan tasked himself with compiling what he called a “bottle” in “the cosmic ocean,” and something of a time capsule of humanity. Over a year’s time, Sagan and his team collected 116 images and diagrams, natural sounds, spoken greetings in 55 languages, printed messages, and musical selections from around the world--things that would communicate to aliens what our human civilization is essentially all about. The images were encoded onto the records in black and white (you can see them all in the Vox video above in color). The audio, which you can play in its entirety below, was etched into the surface of the record. On the cover were etched a series of pictographic instructions for how to play and decode its contents. (Scroll over the interactive image at the top to see each symbol explained.)
Fong outlines those contents, writing, “any aliens who come across the Golden Record are in for a treat.” That is, if they are able to make sense of it and don’t find us horribly backward. Among the audio selections are greetings from then-UN Secretary General Kurt Waldheim, whale songs, Bach’s Brandenburg Concerto No. 2 in F, Senegalese percussion, Aborigine songs, Peruvian panpipes and drums, Navajo chant, Blind Willie Johnson’s “Dark Was the Night” (playing in the Vox video), more Bach, Beethoven, and “Johnny B. Goode.” Challenged over including “adolescent” rock and roll, Sagan replied, “there are a lot of adolescents on the planet.” The Beatles reportedly wanted to contribute “Here Comes the Sun,” but their record company wouldn’t allow it, presumably fearing copyright infringement from aliens.
Also contained in the spacefaring archive is a message from then-president Jimmy Carter, who writes optimistically, “We are a community of 240 million human beings among the more than 4 billion who inhabit planet Earth. We human beings are still divided into nation states, but these states are rapidly becoming a single global civilization.” The messages on Voyagers 1 and 2, Carter forecasts, are “likely to survive a billion years into our future, when our civilization is profoundly altered and the surface of the Earth may be vastly changed.” The team chose not to include images of war and human cruelty.
We only have a few years left to find out whether either Voyager will encounter other beings. “Incredibly,” writes Fong, the probes “are still communicating with Earth—they aren’t expected to lose power until the 2020s.” It seems even more incredible, forty years later, when we consider their primitive technology: “an 8-track memory system and onboard computers that are thousands of times weaker than the phone in your pocket.”
The Voyagers were not the first probes sent on a course out of the solar system. Pioneer 10 and 11, launched in 1972 and 1973, each carried a Sagan-designed aluminum plaque with a few simple messages and depictions of a nude man and woman, an addition that scandalized some puritanical critics. NASA has since lost touch with both Pioneers, but you may recall that in 2006, the agency launched the New Horizons probe, which passed by Pluto in 2015 and should reach interstellar space in another thirty years.
Perhaps due to the lack of the departed Sagan’s involvement, the latest “bottle” contains no introductions. But there is time to upload some, and one of the Golden Record team members, Jon Lomberg, wants to do just that, sending a crowdsourced “message to the stars.” Lomberg’s New Horizons Message Initiative is a “global project that brings the people of the world together to speak as one.” The limitations of analog technology have made the Golden Record selections seem quite narrow from our data-saturated point of view. The new message might contain almost anything we can imagine. Visit the project's site to sign the petition, donate, and consider: just what would you want an alien civilization to hear, see, and understand about the best of humanity circa 2017?
Not at all, my dear Mr. Bell. A second's worth of research reveals that a 21-year-old upstart named Philo Taylor Farnsworth invented television. By 1927, when he unveiled it to the public, you’d already been dead for five years.
You invented the telephone, a fact of which we’re all very aware.
Though you might want to look into intellectual property law... Historic figures make popular pitchmen, especially if--like Lincoln, Copernicus, and a red hot Alexander Hamilton--they’ve been in the grave for over 100 years. (Hint: you’ve got five years to go.)
Or you could take it as a compliment! You’ve made an impression so lasting, the briefest of establishing shots is all we television audiences need to understand the advertiser's premise.
Thusly can you be co-opted into selling the American public on the apparently revolutionary concept of chicken for breakfast, above.
And that’s just the tip of the iceberg!
Mr. Watson gets a cameo in your 1975 ad for Carefree Gum. You definitely come off the better of the two.
You’re an obvious choice for a recent AT&T spot tracing a line from your revelatory moment to 20-something hipsters wielding smartphones and sparklers on a Brooklyn rooftop. Their devices aren’t the only thing connecting you. It’s also the beards…
Apologies for the beardlessness of this 10-year-old, low-budget spot for Able Computing in Papua New Guinea. Possibly the costumer thought Einstein invented the phone? Or maybe the creative director was counting on the local viewing audience not to sweat the small stuff. Your invention matters more than your facial hair.
Lego took a cue from the 80s Muppet Babies craze by sending you back to childhood. They also saddled you and your mom with American accents, a regrettably common practice. I bet you would’ve liked Legos, though. They’re like blocks.
As for this one, your guess is as good as mine.
Readers, please share your favorite ads featuring historic figures in the comments below.
Ayun Halliday is an author, illustrator, theater maker and Chief Primatologist of the East Village Inky zine. See her onstage in New York City in Paul David Young’s Faust 3, an indictment of the Trump administration that adapts and mangles Goethe's Faust (Parts 1 and 2) and the Gospels in the King James translation, as well as bits of Yeats, Shakespeare, Christmas carols, Stephen Foster, John Donne, Heiner Müller, Julia Ward Howe, and Abel Meeropol. Follow her @AyunHalliday.
We know they’re coming. The robots. To take our jobs. While humans turn on each other, find scapegoats, try to bring back the past, and ignore the future, machine intelligences replace us as quickly as their designers get them out of beta testing. We can’t exactly blame the robots. They don’t have any say in the matter. Not yet, anyway. But it’s a fait accompli, say the experts. “The promise,” writes MIT Technology Review, “is that intelligent machines will be able to do every task better and more cheaply than humans. Rightly or wrongly, one industry after another is falling under its spell, even though few have benefited significantly so far.”
You can see many of the answers plotted on the chart above. Katja Grace and her co-authors surveyed 1,634 experts and found that they “believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years.” That means all jobs: not only driving trucks, delivering by drone, running cash registers, gas stations, phone support, weather forecasts, and investment banking, but also performing surgery, which may happen in less than 40 years, and writing New York Times bestsellers, which may happen by 2049.
That’s right, AI may perform our cultural and intellectual labor, making art and films, writing books and essays, and creating music. Or so the experts say. Already a Japanese AI program has written a short novel, and almost won a literary prize for it. And the first milestone on the chart has already been reached; last year, Google’s AI AlphaGo beat Lee Sedol, the South Korean grandmaster of Go, the ancient Chinese game “that’s exponentially more complex than chess,” as Cade Metz writes at Wired. (Humane video game design, on the other hand, may have a ways to go yet.)
Perhaps these feats partly explain why, as Grace and the other researchers found, Asian respondents expected the rise of the machines “much sooner than North America.” Other cultural reasons surely abound—likely those same quirks that make Americans embrace creationism, climate-denial, and fearful conspiracy theories and nostalgia by the tens of millions. The future may be frightening, but we should have seen this coming. Sci-fi visionaries have warned us for decades to prepare for our technology to overtake us.
In the 1960s Alan Watts foresaw the future of automation and the almost pathological fixation we would develop for “job creation” as more and more necessary tasks fell to the robots and human labor became increasingly superfluous. (Hear him make his prediction above.) Like many a technologist and futurist today, Watts advocated for Universal Basic Income, a way of ensuring that all of us have the means to survive while we use our newly acquired free time to consciously shape the world the machines have learned to maintain for us.
What may have seemed like a Utopian idea then (though it almost became policy under Nixon) may become a necessity as AI changes the world, writes MIT Technology Review, “at breakneck speed.”