“We never dreamt that it would be this clear, this beautiful.” That’s how NASA scientist J.T. Heineck reacted to his first glimpse of what the agency calls “the first-ever images of the interaction of shockwaves from two supersonic aircraft in flight.”
The images feature a pair of T‑38s from the U.S. Air Force Test Pilot School at Edwards Air Force Base, flying in formation at supersonic speeds. The T‑38s are flying approximately 30 feet apart, with the trailing aircraft about 10 feet below the leading T‑38. The shockwaves flowing off both aircraft appear with exceptional clarity, and for the first time, their interaction can be seen in flight.
“We’re looking at a supersonic flow, which is why we’re getting these shockwaves,” said Neal Smith, a research engineer with AerospaceComputing Inc. at NASA Ames’ fluid mechanics laboratory.
“What’s interesting is, if you look at the rear T‑38, you see these shocks kind of interact in a curve,” he said. “This is because the trailing T‑38 is flying in the wake of the leading aircraft, so the shocks are going to be shaped differently. This data is really going to help us advance our understanding of how these shocks interact…”
While NASA has previously used the schlieren photography technique to study shockwaves, the AirBOS 4 flights featured an upgraded version of the previous airborne schlieren systems, allowing researchers to capture three times the amount of data in the same amount of time.
“We’re seeing a level of physical detail here that I don’t think anybody has ever seen before,” said Dan Banks, senior research engineer at NASA Armstrong. “Just looking at the data for the first time, I think things worked out better than we’d imagined. This is a very big step.”
Along with hundreds of other seaside cities, island towns, and entire islands, historic Venice, the floating city, may soon sink beneath the waves if sea levels continue their rapid rise. The city is slowly tilting to the East and has seen historic floods inundate over 70 percent of its palazzo- and basilica-lined streets. But should such tragic losses come to pass, we’ll still have Venice, or a digital version of it, at least—one that aggregates 1,000 years of art, architecture, and “mundane paperwork about shops and businesses” to create a virtual time machine. An “ambitious project to digitize 10 centuries of the Venetian state’s archives,” the Venice Time Machine uses the latest in “deep learning” technology for historical reconstructions that won’t get washed away.
The Venice Time Machine doesn’t only guard against future calamity. It also sets machines to a task no living human has undertaken: most of the huge collection at the State Archives “has never been read by modern historians,” points out the narrator of the Nature video at the top.
This endeavor stands apart from other digital humanities projects, Alison Abbott writes at Nature, “because of its ambitious scale and the new technologies it hopes to use: from state-of-the-art scanners that could even read unopened books, to adaptable algorithms that will turn handwritten documents into digital, searchable text.”
In addition to posterity, the beneficiaries of this effort include historians, economists, and epidemiologists, “eager to access the written records left by tens of thousands of ordinary citizens.” Lorraine Daston, director of the Max Planck Institute for the History of Science in Berlin, describes the anticipation scholars feel in particularly vivid terms: “We are in a state of electrified excitement about the possibilities,” she says. “I am practically salivating.” Project head Frédéric Kaplan, a Professor of Digital Humanities at the École polytechnique fédérale de Lausanne (EPFL), compares the archival collection to “‘dark matter’—documents that hardly anyone has studied before.”
Using big data and AI to reconstruct the history of Venice in virtual form will not only make the study of that history a far less hermetic affair; it might also “reshape scholars’ understanding of the past,” Abbott points out, by democratizing narratives and enabling “historians to reconstruct the lives of hundreds of thousands of ordinary people—artisans and shopkeepers, envoys and traders.” The Time Machine’s site touts this development as a “social network of the middle ages,” able to “bring back the past as a common resource for the future.” The comparison might be unfortunate in some respects. Social networks, like cable networks, and like most historical narratives, have become dominated by famous names.
By contrast, the Time Machine model—which could soon lead to AI-created virtual Amsterdam and Paris time machines—promises a more street-level view, and one, moreover, that can engage the public in ways sealed and cloistered artifacts cannot. “We historians were baptized with the dust of archives,” says Daston. “The future may be different.” The future of Venice, in real life, might be uncertain. But thanks to the Venice Time Machine, its past is poised to take on thriving new life. See previews of the Time Machine in the videos further up, learn more about the project here, and see Kaplan explain the “information time machine” in his TED talk above.
If you follow edtech, you know the name Andrew Ng. He’s the Stanford computer science professor who co-founded MOOC provider Coursera and later became chief scientist at Baidu. Since leaving Baidu, he’s been working on several artificial intelligence projects, including a series of Deep Learning courses that he unveiled in 2017. And now comes AI for Everyone, an online course that makes artificial intelligence intelligible to a broad audience. Students who take it will learn:
The meaning behind common AI terminology, including neural networks, machine learning, deep learning, and data science.
What AI realistically can–and cannot–do.
How to spot opportunities to apply AI to problems in your own organization.
What it feels like to build machine learning and data science projects.
How to work with an AI team and build an AI strategy in an organization.
How to navigate ethical and societal discussions surrounding AI.
The four-week course takes about eight hours to complete. You can audit it for free. However, if you want to earn a certificate–which you can then share on your LinkedIn profile, printed resumes, and CVs–the course will run you $49.
Vincent van Gogh died in 1890, long before the emergence of any of the visual technologies that impress us here in the 21st century. But the distinctive vision of reality expressed through his paintings still captivates us, and perhaps captivates us more than ever: the latest of the many tributes we continue to pay to van Gogh’s art takes the form of Van Gogh, Starry Night, a “digital exhibition” at the Atelier des Lumières, a disused foundry turned projector- and sound-system-laden multimedia space in Paris. “Projected on all the surfaces of the Atelier,” its site says of the exhibition, “this new visual and musical production retraces the intense life of the artist.”
Van Gogh’s intensity manifested in various ways, including more than 2,000 paintings painted in the last decade of his life alone. Van Gogh, Starry Night surrounds its visitors with the painter’s work, “which radically evolved over the years, from The Potato Eaters (1885), Sunflowers (1888) and Starry Night (1889) to Bedroom at Arles (1889), from his sunny landscapes and nightscapes to his portraits and still lifes.”
Both Van Gogh, Starry Night and Dreamed Japan, the Atelier’s companion exhibition, run until the end of this year. If you happen to make it out to the Atelier des Lumières, first consider downloading the exhibition’s smartphone and tablet application, which provides recorded commentary on van Gogh’s masterpieces. That counts as one more layer of this elaborate audiovisual experience that, despite employing the height of modern museum technology, nevertheless draws all its aesthetic inspiration from 19th-century paintings — and will send those who experience it back to those 19th-century paintings with a heightened appreciation. Nearly 130 years after van Gogh’s death, we’re still using all the ingenuity we can muster to see the world as he did.
Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.
Though global espionage remains a going concern in the 21st century, somehow the popular stories we tell about it return again and again to the Cold War. Maybe it has to do with the demand those mostly pre-digital decades made upon the physical ingenuity of spies as well as the tools of spycraft. Take, for instance, one particularly ingenious CIA-issued tool kit on display at the International Spy Museum in Washington, D.C. “Filled with escape tools,” says the Spy Museum’s web site, “this kit could be stashed inside the body where it would not be found during a search.” Take one guess as to where inside the body, exactly, it could be stashed.
You can get a closer look at the rectal tool kit in the Atlas Obscura video above. This “tightly sealed, pill-shaped container full of tools that could aid an escape from various sticky situations,” as that site’s Lizzie Philip describes it, “was issued to CIA operatives during the height of the Cold War.”
Built to contain a variety of escape tools like “drill bits, saws and knives,” it presented quite an engineering challenge: its materials, one needs hardly add, “could not splinter or create sharp edges that could injure users,” and “it had to seal tightly to not let anything seep in or poke out.” Upon seeing an item like this, which commands so much attention at the Spy Museum, one wonders whether all the spying that went on during Cold War was really so glamorous after all.
Has it crossed the mind of, say, John le Carré, his writing career a nearly sixty-year-long deflation of the pretensions of spycraft, to write about the ins and outs of rectal tool kits? But then, personal experience has granted him much more knowledge about the tactics of British espionage than those of the American variety. As surely as he knows MI5’s official motto, “Regnum Defende,” he must also know the unofficial motto that pokes fun at the organization’s aggressive culture of blame avoidance, “Rectum Defende” — words that, in light of the knowledge about just where the agents of Britain’s main ally were storing their tools, take on a whole new meaning.
We are a haunted species: haunted by the specter of climate change, of economic collapse, and of automation making our lives redundant. When Marx used the specter metaphor in his manifesto, he was ironically invoking Gothic tropes. But Communism was not a boogeyman. It was a coming reality, for a time at least. Likewise, we face very real and substantial coming realities. But in far too many instances, they are also manufactured, under ideologies that insist there is no alternative.
But let’s assume there are other ways to order our priorities, such as valuing human life as an end in itself. Perhaps then we could treat the threat of automation as a ghost: insubstantial, immaterial, maybe scary but harmless. Or treat it as an opportunity to order our lives the way we want. We could stop inventing bullshit, low-paying, wasteful jobs that contribute to cycles of poverty and environmental degradation. We could slash the number of hours we work and spend time with people and pursuits we love.
We have been taught to think of this scenario as a fantasy. Or, as Buckminster Fuller declared in 1970—on the threshold of the “Malthusian-Darwinian” wave of neoliberal thought to come—“We keep inventing jobs because of this false idea that everybody has to be employed at some kind of drudgery…. He must justify his right to exist.” In current parlance, every person must somehow “add value” to shareholders’ portfolios. The shareholders themselves are under no obligation to return the favor.
What about adding value to our own lives? “The true business of people,” says Fuller, “should be to go back to school and think about whatever it was they were thinking about before somebody came along and told them they had to earn a living.” Against the “specious notion” that everyone should have to make a wage to live–this “nonsense of earning a living”–he takes a more magnanimous view: “It is a fact today that one in ten thousand of us can make a technological breakthrough capable of supporting all the rest,” who then may go on to make millions of small breakthroughs of their own.
He may have sounded overconfident at the time. But fifty years later, we see engineers, developers, and analysts of all kinds proclaiming the coming age of automation in our lifetimes, with a majority of jobs to be fully or partially automated in 10–15 years. It is a technological breakthrough capable of dispensing with huge numbers of people, unless its benefits are widely shared. The corporate world sticks its head in the sand and issues guidelines for retraining, a solution that will still leave masses unemployed. No matter the state of the most recent jobs report, serious losses in nearly every sector, especially manufacturing and service work, are unavoidable.
The jobs we invent have changed since Fuller’s time, become more contingent and less secure. But the obsession with creating them, no matter their impact or intent, has only grown, a runaway delusion no one can seem to stop. Should we fear automation? Only if we collectively decide the current course of action is all there is, that “everybody has to earn a living”—meaning turn a profit—or drop dead. As Congresswoman Alexandria Ocasio-Cortez—echoing Fuller—put it recently at SXSW, “we live in a society where if you don’t have a job, you are left to die. And that is, at its core, our problem…. We should not be haunted by the specter of being automated out of work.”
“We should be excited about automation,” she went on, “because what it could potentially mean is more time to educate ourselves, more time creating art, more time investing in and investigating the sciences.” However that might be achieved, through subsidized health, education, and basic services, new New Deal and Civil Rights policies, a Universal Basic Income, or some creative synthesis of all of the above, it will not produce a utopia—no political solution is up to that task. But considering the benefits of subsidizing our humanity, and the alternative of letting its value decline, it seems worth a shot to try what economist Bill Black calls the “progressive policy core,” which, coincidentally, happens to be “centrist in terms of the electorate’s preferences.”
Is the singularity upon us? AI seems poised to replace everyone, even artists, whose work can seem like an inviolably human domain. Or maybe not. Nick Cave’s poignant answer to a fan question might persuade you a machine will never write a great song, though it might master all the moves to write a good one. An AI-written novel did almost win a Japanese literary award. A suitably impressive feat, even if much of the authorship should be attributed to the program’s human designers.
But what about literary criticism? Is this an art that a machine can do convincingly? The answer may depend on whether you consider it an art at all. For those who do, no artificial intelligence will ever properly develop the theory of mind needed for subtle, even moving, interpretations. On the other hand, one group of researchers has succeeded in using “sophisticated computing power, natural language processing, and reams of digitized text,” writes Atlantic editor Adrienne LaFrance, “to map the narrative patterns in a huge corpus of literature.” The name of their literary criticism machine? The Hedonometer.
We can treat this as an exercise in compiling data, but it’s arguable that the results are on par with work from the comparative mythology school of James Frazer and Joseph Campbell. A more immediate comparison might be to the very deft, if not particularly subtle, Kurt Vonnegut, who—before he wrote novels like Slaughterhouse-Five and Cat’s Cradle—submitted a master’s thesis in anthropology to the University of Chicago. His project did the same thing as the machine, 35 years earlier, though he may not have had the wherewithal to read “1,737 English-language works of fiction between 10,000 and 200,000 words long” while struggling to finish his graduate program. (His thesis, by the way, was rejected.)
Those numbers describe the dataset from Project Gutenberg fed into The Hedonometer by computer scientists at the University of Vermont and the University of Adelaide. After the computer finished “reading,” it plotted “the emotional trajectory” of each story, using sentiment analysis “to generate an emotional arc for each work.” What it found were six broad categories of story, listed below:
Rags to Riches (rise)
Riches to Rags (fall)
Man in a Hole (fall then rise)
Icarus (rise then fall)
Cinderella (rise then fall then rise)
Oedipus (fall then rise then fall)
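The researchers’ actual pipeline relies on a large crowd-rated happiness lexicon and matrix decomposition over thousands of books; purely as an illustration of the idea, here is a toy Python sketch (the tiny lexicon, the window count, and the shape-labeling rule are all invented for this example) that scores a text’s emotional arc over sliding windows and labels it with one of the six shapes above:

```python
# Toy lexicon; the real Hedonometer uses roughly 10,000 crowd-rated words.
LEXICON = {"joy": 1.0, "love": 1.0, "happy": 1.0, "win": 0.5,
           "grief": -1.0, "death": -1.0, "sad": -1.0, "lose": -0.5}

def emotional_arc(words, n_windows=10):
    """Mean sentiment in n_windows equal slices of the text."""
    size = max(1, len(words) // n_windows)
    arc = []
    for i in range(0, size * n_windows, size):
        window = words[i:i + size]
        arc.append(sum(LEXICON.get(w, 0.0) for w in window) / len(window))
    return arc

def shape(arc):
    """Label an arc by the signs of its segment-to-segment trends --
    a crude stand-in for the paper's clustering step."""
    pts = [arc[0], arc[len(arc) // 3], arc[2 * len(arc) // 3], arc[-1]]
    signs = ['+' if b >= a else '-' for a, b in zip(pts, pts[1:])]
    collapsed = signs[0]
    for s in signs[1:]:          # merge runs of the same direction
        if s != collapsed[-1]:
            collapsed += s
    return {'+':   "Rags to Riches (rise)",
            '-':   "Riches to Rags (fall)",
            '-+':  "Man in a Hole (fall then rise)",
            '+-':  "Icarus (rise then fall)",
            '+-+': "Cinderella (rise then fall then rise)",
            '-+-': "Oedipus (fall then rise then fall)"}[collapsed]
```

A happy-sad-happy text, for instance, comes out as “Man in a Hole,” while a steadily climbing arc comes out as “Rags to Riches.”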
How does this endeavor compare with Vonnegut’s project? (See him present the theory below.) The novelist used more or less the same methodology, in human form, to come up with eight universal story arcs or “shapes of stories.” Vonnegut himself left out the Rags to Riches category; he called it an anomaly, though he did have a heading for the same rising-only story arc—the Creation Story—which he deemed an uncommon shape for Western fiction. He did include the Cinderella arc, and was pleased by his discovery that its shape mirrored the New Testament arc, which he also included in his schema, an act the AI surely would have judged redundant.
Contra Vonnegut, the AI found that one-fifth of all the works it analyzed were Rags-to-Riches stories. It determined that this arc was far less popular with readers than “Oedipus,” “Man in a Hole,” and “Cinderella.” Its analysis does get much more granular, and to allay our suspicions, the researchers promise they did not control the outcome of the experiment. “We’re not imposing a set of shapes,” says lead author Andy Reagan, Ph.D. candidate in mathematics at the University of Vermont. “Rather: the math and machine learning have identified them.”
But the authors do provide a lot of their own interpretation of the data, from choosing representative texts—like Harry Potter and the Deathly Hallows—to illustrate “nested and complicated” plot arcs, to providing the guiding assumptions of the exercise. One of those assumptions, unsurprisingly given the authors’ fields of interest, is that math and language are interchangeable. “Stories are encoded in art, language, and even in the mathematics of physics,” they write in the introduction to their paper, published on arXiv.org.
“We use equations,” they go on, “to represent both simple and complicated functions that describe our observations of the real world.” If we accept the premise that sentences and integers and lines of code are telling the same stories, then maybe there isn’t as much difference between humans and machines as we would like to think.