From a distance of half a century, we look back on the moon landing as a thoroughly analog affair, an old-school engineering project of the kind seldom even proposed anymore in this digital age. But the Apollo 11 mission could never have happened without computers and the people who program them, a fact that has become better known in recent years thanks to public interest in the work of Margaret Hamilton, director of the Software Engineering Division of MIT’s Instrumentation Laboratory when it developed on-board flight software for NASA’s Apollo space program. You can learn more about Hamilton, whom we’ve previously featured here on Open Culture, from the short MAKERS profile video above.
Today we consider software engineering a perfectly viable field, but back in the mid-1960s, when Hamilton first joined the Apollo project, it didn’t even have a name. “I came up with the term ‘software engineering,’ and it was considered a joke,” says Hamilton, who remembers her colleagues making remarks like, “What, software is engineering?”
But her own experience went some way toward proving that working in code had become as important as working in steel. Only by watching her young daughter play at the same controls the astronauts would later use did she realize that just one human error could potentially bring the mission to ruin — and that she could minimize the possibility by accounting for such errors when designing the mission’s software. Hamilton’s proposal met with resistance, NASA’s official line at the time being that “astronauts are trained never to make a mistake.”
But Hamilton persisted, prevailed, and was vindicated during the moon landing itself, when an astronaut did make a mistake, one that caused an overloading of the flight computer. The whole landing might have been aborted if not for Hamilton’s foresight in implementing an “asynchronous executive” function capable, in the event of an overload, of setting less important tasks aside and prioritizing more important ones. “The software worked just the way it should have,” Hamilton says in the Christie’s video on the incident above, describing what she felt afterward as “a combination of excitement and relief.” Engineers of software, hardware, and everything else know that feeling when they see a complicated project work — but surely few know it as well as Hamilton and her Apollo collaborators do.
Human imagination seems seriously limited when faced with the cosmic scope of time and space. We can imagine, through stop-motion animation and CGI, what it might be like to walk the earth with creatures the size of office buildings. But how to wrap our heads around the fact that they lived hundreds of millions of years ago, on a planet some four and a half billion years old? We trust the science, but can’t rely on intuition alone to guide us to such mind-boggling knowledge.
At the other end of the scale, events measured in nanoseconds, or billionths of a second, seem inconceivable, even to someone as smart as Grace Hopper, the Navy mathematician who helped develop COBOL and program the first computer. Or so she says in the 1983 clip above, taken from one of the many lectures she gave as a guest at universities, museums, military bodies, and corporations.
When she first heard of “circuits that acted in nanoseconds,” she says, “billionths of a second… Well, I didn’t know what a billion was…. And if you don’t know what a billion is, how on earth do you know what a billionth is? Finally, one morning in total desperation, I called over to the engineering building, and I said, ‘Please cut off a nanosecond and send it to me.’” What she asked for, she explains, and shows the class, was a piece of wire representing the distance a signal could travel in a nanosecond.
Now of course it wouldn’t really be through wire — it’d be out in space, the velocity of light. So if we start with a velocity of light and use your friendly computer, you’ll discover that a nanosecond is 11.8 inches long, the maximum limiting distance that electricity can travel in a billionth of a second.
Follow the rest of her explanation, with wire props, and see if you can better understand a measure of time beyond the reaches of conscious experience. The explanation was immediately successful when she began using it in the late 1960s “to demonstrate how designing smaller components would produce faster computers,” writes the National Museum of American History. The bundle of wires below, each about 30 cm (11.8 inches) long, comes from a lecture Hopper gave museum docents in March 1985.
Like the age of the dinosaurs, the nanosecond may only represent a small fraction of the incomprehensibly small units of time scientists are eventually able to measure—and computer scientists able to access. “Later,” notes the NMAH, “as components shrank and computer speeds increased, Hopper used grains of pepper to represent the distance electricity traveled in a picosecond, one trillionth of a second.”
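Hopper’s props are easy to check for yourself: both the wire and the pepper grain fall out of one multiplication by the speed of light. A quick sketch:

```python
# Checking Hopper's props: how far light travels in a nanosecond
# (her ~11.8-inch wire) and in a picosecond (a grain of pepper).
C = 299_792_458  # speed of light, meters per second

for name, seconds in [("nanosecond", 1e-9), ("picosecond", 1e-12)]:
    meters = C * seconds
    inches = meters / 0.0254
    print(f"1 {name}: {meters * 1000:.3f} mm ({inches:.2f} in)")

# 1 nanosecond: 299.792 mm (11.80 in)   <- Hopper's wire
# 1 picosecond: 0.300 mm (0.01 in)      <- roughly a grain of pepper
```

As she notes in the clip, this is the vacuum figure; a signal in real wire travels somewhat slower, so 11.8 inches is a maximum.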
At this point, the map becomes no more revealing than the unknown territory, invisible to the naked eye, inconceivable but through wild leaps of imagination. But if anyone could explain the increasingly inexplicable in terms most anyone could understand, it was the brilliant but down-to-earth Hopper.
What triggered the worst impulses of the Internet last week?
The world’s first photo of a black hole, which proved the presence of troll life here on earth, and confirms that female scientists, through no fault of their own, have a much longer way to go, baby.
If you want a taste, sort the comments on the two-year-old TED Talk, above, so they’re ordered “newest first.”
Katie Bouman, soon-to-be assistant professor of computing and mathematical sciences at the California Institute of Technology, was a PhD candidate at MIT two years ago, when she taped the talk, but she could’ve passed for a nervous high schooler competing in the National Science Bowl finals, in clothes borrowed from Aunt Judy, who works at the bank.
The focus of her studies was the ways in which emerging computational methods could help expand the boundaries of interdisciplinary imaging.
Prior to last week, I’m not sure how well I could have parsed the focus of her work had she not taken the time to help less STEM-inclined viewers such as myself wrap our heads around her highly technical, then-wholly-theoretical subject.
What I know about black holes could still fit in a thimble, and in truth, my excitement about one being photographed for the first time pales in comparison to my excitement about Game of Thrones returning to the airwaves.
I’ve always been very one-sided about science and when I was younger I concentrated almost all my effort on it. I didn’t have time to learn and I didn’t have much patience with what’s called the humanities, even though in the university there were humanities that you had to take. I tried my best to avoid somehow learning anything and working at it. It was only afterwards, when I got older, that I got more relaxed, that I’ve spread out a little bit. I’ve learned to draw and I read a little bit, but I’m really still a very one-sided person and I don’t know a great deal. I have a limited intelligence and I use it in a particular direction.
I’m pretty sure my lack of passion for science is not tied to my gender. Some of my best friends are guys who feel the same. (Some of them don’t like team sports either.)
But I couldn’t help but experience a wee thrill that this young woman, a science nerd who admittedly could’ve used a few theater nerd tips regarding relaxation and public speaking, realized her dream—an honest to goodness photo of a black hole just like the one she talked about in her TED Talk, “How to take a picture of a black hole.”
Bouman and the 200+ colleagues she acknowledges and thanks at every opportunity achieved their goal not with an Earth-sized camera but with a network of linked telescopes, much as she had described two years earlier, when she invoked disco balls, Mick Jagger, oranges, selfies, and a jigsaw puzzle in an effort to help people like me understand.
Look at that sucker (or, more accurately, its shadow)! That thing’s 500 million trillion kilometers from Earth!
I’ll bet a lot of elementary science teachers, be they male, female, or non-binary, are going to make science fun by having their students draw pictures of the picture of the black hole.
If we could go back (or forward) in time, I can almost guarantee that mine would be among the best because while I didn’t “get” science (or gym), I was a total art star with the crayons.
Then, crafty as Lord Petyr Baelish when presentation time rolled around, I would partner with a girl like Katie Bouman, who could explain the science with winning vigor. She genuinely seems to embrace the idea that it “takes a village,” and that one’s fellow villagers should be credited whenever possible.
(How did I draw the black hole, you ask? Honestly, it’s not that much harder than drawing a doughnut. Now back to Katie!)
Alas, her professional warmth failed to register with legions of Internet trolls who began sliming her shortly after a colleague at MIT shared a beaming snapshot of her, taken, presumably, with a regular old phone as the black hole made its debut. That pic cemented her accidental status as the face of this project.
Note to the trolls—it wasn’t a dang selfie.
“I’m so glad that everyone is as excited as we are and people are finding our story inspirational,” Bouman told The New York Times. “However, the spotlight should be on the team and no individual person. Focusing on one person like this helps no one, including me.”
Although Bouman was a junior team member, she and other grad students made major contributions. She directed the verification of images and the selection of imaging parameters, and she authored an imaging algorithm that researchers used in creating the three scripted code pipelines from which the instantly famous picture was assembled.
One of the insights Katie brought to our imaging group is that there are natural images. Just think about the photos you take with your camera phone—they have certain properties…. If you know what one pixel is, you have a good guess as to what the pixel is next to it.
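The point about “natural images,” that knowing one pixel lets you guess its neighbor, is easy to see in miniature. The sketch below illustrates the general statistical idea only, not the team’s actual algorithm: a smooth, photo-like image shows strong adjacent-pixel correlation, while pure noise shows almost none.

```python
import numpy as np

def adjacent_pixel_correlation(img):
    """Correlation between each pixel and its right-hand neighbor."""
    left = img[:, :-1].ravel()
    right = img[:, 1:].ravel()
    return np.corrcoef(left, right)[0, 1]

rng = np.random.default_rng(0)

# A "natural"-looking image: smooth gradients plus mild noise.
x = np.linspace(0, 3 * np.pi, 128)
natural = (np.sin(x)[None, :] + np.cos(x)[:, None]
           + 0.05 * rng.standard_normal((128, 128)))

# Pure noise has no such neighbor structure.
noise = rng.standard_normal((128, 128))

print(adjacent_pixel_correlation(natural))  # close to 1
print(adjacent_pixel_correlation(noise))    # close to 0
```

A prior like this is what lets an algorithm prefer plausible, photo-like reconstructions when the telescope data alone underdetermines the image.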
Part of the reason that some posters found Bouman immediately suspicious had to do with her gender. Famously, a number of prominent men like disgraced former CERN physicist Alessandro Strumia have argued that women aren’t being discriminated against in science — they simply don’t like it, or don’t have the aptitude for it. That argument fortifies a notion that women don’t belong in science, or can’t really be doing the work. So women like Bouman must be fakes, this warped line of thinking goes…
Even I, whose 7th grade science teacher tempered a bad grade on my report card by saying my interest in theater would likely serve me much better than anything I might eke from her class, know that just as many girls and women excel at science, technology, engineering, and math as excel in the arts. (Sometimes they excel at both!)
(And power to every little boy with his sights set on nursing, teaching, or ballet!)
(How many black holes have the haters photographed recently?)
Saying that she was part of a larger team doesn’t diminish her work, or minimize her involvement in what is already a history-making project. Highlighting the achievements of a brilliant, enthusiastic scientist does not diminish the contributions of the other 214 people who worked on the project, either. But it does show a different model for a scientist than the one most of us grew up with. That might mean a lot to some kids — maybe kids who look like her — making them excited about studying the wonders of the Universe.
Is the singularity upon us? AI seems poised to replace everyone, even artists, whose work can seem an inviolably human domain. Or maybe not. Nick Cave’s poignant answer to a fan question might persuade you a machine will never write a great song, though it might master all the moves to write a good one. An AI-written novel did almost win a Japanese literary award, a suitably impressive feat, even if much of the authorship should be attributed to the program’s human designers.
But what about literary criticism? Is this an art that a machine can do convincingly? The answer may depend on whether you consider it an art at all. For those who do, no artificial intelligence will ever properly develop the theory of mind needed for subtle, even moving, interpretations. On the other hand, one group of researchers has succeeded in using “sophisticated computing power, natural language processing, and reams of digitized text,” writes Atlantic editor Adrienne LaFrance, “to map the narrative patterns in a huge corpus of literature.” The name of their literary criticism machine? The Hedonometer.
We can treat this as an exercise in compiling data, but it’s arguable that the results are on par with work from the comparative mythology school of James Frazer and Joseph Campbell. A more immediate comparison might be to the very deft, if not particularly subtle, Kurt Vonnegut, who—before he wrote novels like Slaughterhouse-Five and Cat’s Cradle—submitted a master’s thesis in anthropology to the University of Chicago. His project did the same thing as the machine, 35 years earlier, though he may not have had the wherewithal to read “1,737 English-language works of fiction between 10,000 and 200,000 words long” while struggling to finish his graduate program. (His thesis, by the way, was rejected.)
Those numbers describe the dataset from Project Gutenberg fed into The Hedonometer by computer scientists at the University of Vermont and the University of Adelaide. After the computer finished “reading,” it plotted “the emotional trajectory” of all of the stories, using sentiment analysis “to generate an emotional arc for each work.” What it found were six broad categories of story, listed below:
Rags to Riches (rise)
Riches to Rags (fall)
Man in a Hole (fall then rise)
Icarus (rise then fall)
Cinderella (rise then fall then rise)
Oedipus (fall then rise then fall)
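The arc-finding step itself is conceptually simple: slide a window over the text and average per-word “happiness” scores. The toy below illustrates the mechanism only; its six-word lexicon and six-word “story” are invented for this sketch, whereas the real Hedonometer uses crowd-sourced happiness ratings for roughly 10,000 common words.

```python
# Toy sliding-window sentiment-arc extraction, in the spirit of
# the Hedonometer. The lexicon is invented for illustration.
LEXICON = {"joy": 2.0, "love": 2.0, "win": 1.0,
           "loss": -1.0, "grief": -2.0, "death": -2.0}

def emotional_arc(words, window=3):
    """Mean sentiment of each sliding window of `window` words."""
    scores = [LEXICON.get(w, 0.0) for w in words]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

story = "grief loss death win love joy".split()
arc = emotional_arc(story)
print(arc)  # rises steadily from negative to positive: a "Rags to Riches" shape
```

Classifying a whole novel then becomes a matter of matching its smoothed arc against the six shapes above.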
How does this endeavor compare with Vonnegut’s project? (See him present the theory below.) The novelist used more or less the same methodology, in human form, to come up with eight universal story arcs or “shapes of stories.” Vonnegut himself left out the Rags to Riches category; he called it an anomaly, though he did have a heading for the same rising-only story arc—the Creation Story—which he deemed an uncommon shape for Western fiction. He did include the Cinderella arc, and was pleased by his discovery that its shape mirrored the New Testament arc, which he also included in his schema, an act the AI surely would have judged redundant.
Contra Vonnegut, the AI found that one-fifth of all the works it analyzed were Rags-to-Riches stories. It determined that this arc was far less popular with readers than “Oedipus,” “Man in a Hole,” and “Cinderella.” Its analysis does get much more granular, and to allay our suspicions, the researchers promise they did not control the outcome of the experiment. “We’re not imposing a set of shapes,” says lead author Andy Reagan, Ph.D. candidate in mathematics at the University of Vermont. “Rather: the math and machine learning have identified them.”
But the authors do provide a lot of their own interpretation of the data, from choosing representative texts—like Harry Potter and the Deathly Hallows—to illustrate “nested and complicated” plot arcs, to providing the guiding assumptions of the exercise. One of those assumptions, unsurprisingly given the authors’ fields of interest, is that math and language are interchangeable. “Stories are encoded in art, language, and even in the mathematics of physics,” they write in the introduction to their paper, published on arXiv.org.
“We use equations,” they go on, “to represent both simple and complicated functions that describe our observations of the real world.” If we accept the premise that sentences and integers and lines of code are telling the same stories, then maybe there isn’t as much difference between humans and machines as we would like to think.
Many see the realms of literature and computers as not just completely separate, but growing more distant from one another all the time. Donald Knuth, one of the most respected figures among the deeply computer-savvy of Silicon Valley, sees it differently. His claims to fame include The Art of Computer Programming, an ongoing multi-volume series of books whose publication began more than fifty years ago, and the digital typesetting system TeX, which, in a recent profile of Knuth, the New York Times’ Siobhan Roberts describes as “the gold standard for all forms of scientific communication and publication.”
Some, Roberts writes, consider TeX “Dr. Knuth’s greatest contribution to the world, and the greatest contribution to typography since Gutenberg.” At the core of his lifelong work is an idea called “literate programming,” which emphasizes “the importance of writing code that is readable by humans as well as computers — a notion that nowadays seems almost twee. Dr. Knuth has gone so far as to argue that some computer programs are, like Elizabeth Bishop’s poems and Philip Roth’s American Pastoral, works of literature worthy of a Pulitzer.” Knuth’s mind, technical achievements, and style of communication have earned him the informal title of “the Yoda of Silicon Valley.”
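To get a feel for the literate ideal, here is a loose, Python-flavored gesture at it. (Knuth’s actual WEB system interleaves TeX prose with Pascal source, then extracts the compilable program; this sketch only borrows the spirit of explanation carrying the code.)

```python
# A loose gesture at literate programming: the explanation
# carries the program, and the code merely fills in details.

def gcd(a, b):
    """Euclid's algorithm.

    The insight: any common divisor of a and b also divides
    a mod b, so we may shrink the problem until the remainder
    vanishes, at which point b *is* the greatest common divisor.
    """
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # -> 12
```

Whether a docstring makes code “literature” is debatable, but the priority, a human reader first, is the one Knuth has argued for.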
That appellation also reflects a depth of technical wisdom only attainable by getting to the very bottom of things, which in Knuth’s case means fully understanding how computer programming works all the way down to the most basic level. (This in contrast to the average programmer, writes Roberts, who “no longer has time to manipulate the binary muck, and works instead with hierarchies of abstraction, layers upon layers of code — and often with chains of code borrowed from code libraries.”) Now everyone can get more than a taste of Knuth’s perspective and thoughts on computers, programming, and a host of related subjects on the YouTube channel of Stanford University, where Knuth is now professor emeritus (and where he still gives informal lectures under the banner “Computer Musings”).
Stanford’s online archive of Donald Knuth Lectures now numbers 110, ranging across the decades and covering such subjects as the usage and mechanics of TeX, the analysis of algorithms, and the nature of mathematical writing. “I am worried that algorithms are getting too prominent in the world,” he tells Roberts in the New York Times profile. “It started out that computer scientists were worried nobody was listening to us. Now I’m worried that too many people are listening.” But having become a computer scientist before the field of computer science even had a name, the now-octogenarian Knuth possesses a rare perspective from which anyone in 21st-century technology could benefit.
Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.
When it first hit the market in 1982, the compact disc famously promised “perfect sound that lasts forever.” But innovation has a way of marching continually on, and naturally the innovators soon started wondering: what if perfect sound isn’t enough? What if consumers want something to go with it, something to look at? And so, when compact disc co-developers Sony and Philips updated their standards, they included documentation on the use of the format’s channels not occupied by audio data. So was born the CD+G, which boasted “not only the CD’s full, digital sound, but also video information — graphics — viewable on any television set or video monitor.”
That text comes from a package scan posted by the online CD+G Museum, whose Youtube channel features rips of nearly every record released on the format, beginning with the first, the Firesign Theatre’s Eat or Be Eaten.
When it came out, listeners who happened to own a CD+G-compatible player (or a CD+G-compatible video game console, my own choice at the time having been the TurboGrafx-16) could see that beloved “head comedy” troupe’s densely layered studio production and even more densely layered humor accompanied by images rendered in psychedelic color — or as psychedelic as images can get with only sixteen colors available on the palette, not to mention a resolution of 288 by 192 pixels, not much larger than an icon on the home screen of a modern smartphone. Those limitations may make CD+G graphics look unimpressive today, but just imagine what a cutting-edge novelty they must have seemed in the late 1980s when they first appeared.
Displaying lyrics for karaoke singers was the most obvious use of CD+G technology, but its short lifespan also saw a fair few experiments on other major-label releases, all viewable at the CD+G Museum, such as Lou Reed’s New York, which combines lyrics with digitized photography of the eponymous city; Talking Heads’ Naked, which provides musical information such as the chord changes and instruments playing on each phrase; Johann Sebastian Bach’s St. Matthew Passion, which translates the libretto alongside works of art; and Devo’s single “Disco Dancer,” which tells the origin story of those “five Spudboys from Ohio.” With these and almost every other CD+G release available at the CD+G Museum, you’ll have no shortage of not just background music but background visuals for your next late-’80s-early-’90s-themed party.
In 1704, Isaac Newton predicted the end of the world sometime around (or after, “but not before”) the year 2060, using a strange series of mathematical calculations. Rather than study what he called the “book of nature,” he took as his source the supposed prophecies of the book of Revelation. While such predictions have always been central to Christianity, it is startling for modern people to look back and see the famed astronomer and physicist indulging them. For Newton, however, as Matthew Stanley writes at Science, “laying the foundation of modern physics and astronomy was a bit of a sideshow. He believed that his truly important work was deciphering ancient scriptures and uncovering the nature of the Christian religion.”
Over three hundred years later, we still have plenty of religious doomsayers predicting the end of the world with Bible codes. But in recent times, their ranks have seemingly been joined by scientists whose only professed aim is interpreting data from climate research and sustainability estimates given population growth and dwindling resources. The scientific predictions do not draw on ancient texts or theology, nor involve final battles between good and evil. Though there may be plagues and other horrible reckonings, these are predictably causal outcomes of over-production and consumption rather than divine wrath. Yet by some strange fluke, the science has arrived at the same apocalyptic date as Newton, plus or minus a decade or two.
The “end of the world” in these scenarios means the end of modern life as we know it: the collapse of industrialized societies, large-scale agricultural production, supply chains, stable climates, nation states…. Since the late sixties, an elite society of wealthy industrialists and scientists known as the Club of Rome (a frequent player in many conspiracy theories) has foreseen these disasters in the early 21st century. One of the sources of their vision is a computer program developed at MIT by computing pioneer and systems theorist Jay Forrester, whose model of global sustainability, one of the first of its kind, predicted civilizational collapse in 2040. “What the computer envisioned in the 1970s has by and large been coming true,” claims Paul Ratner at Big Think.
Those predictions include population growth and pollution levels, “worsening quality of life,” and “dwindling natural resources.” In the video at the top, see Australia’s ABC explain the computer’s calculations, “an electronic guided tour of our global behavior since 1900, and where that behavior will lead us,” says the presenter. The graph spans the years 1900 to 2060. “Quality of life” begins to sharply decline after 1940, and by 2020, the model predicts, the metric contracts to turn-of-the-century levels, meeting the sharp increase of the “Zed Curve” that charts pollution levels. (ABC revisited this reporting in 1999 with Club of Rome member Keith Suter.)
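For a sense of how such a model works, here is a toy feedback loop, invented for illustration and far simpler than Forrester’s actual World model. Even this crude sketch produces the broad shape the 1970s runs made famous: quality of life rises, peaks, and then declines as population-driven pollution accumulates.

```python
# A toy feedback model in the *spirit* of Forrester's system
# dynamics -- not his equations. Population drives pollution;
# pollution erodes quality of life; low quality of life in
# turn throttles population growth.

def run(years=160, pop=1.0, pollution=0.1):
    quality = []
    for _ in range(years):
        q = max(0.0, 1.0 - pollution)                 # quality of life
        pop += 0.02 * pop * q                         # growth, throttled by q
        pollution += 0.004 * pop - 0.05 * pollution   # emission vs. absorption
        quality.append(q)
    return quality

q = run()
peak = q.index(max(q))
print(f"quality of life peaks in year {peak}, then declines")
```

The coefficients here are arbitrary; the point is that coupled feedback loops, not any single input, generate the overshoot-and-decline curves.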
You can probably guess the rest—or you can read all about it in the 1972 Club of Rome-published report Limits to Growth, which drew wide popular attention to Jay Forrester’s books Urban Dynamics (1969) and World Dynamics (1971). Forrester, a figure of Newtonian stature in the worlds of computer science and management and systems theory—though not, like Newton, a Biblical prophecy enthusiast—more or less endorsed its conclusions to the end of his life in 2016. In one of his last interviews, at the age of 98, he told the MIT Technology Review, “I think the books stand all right.” But he also cautioned against acting without systematic thinking in the face of the globally interrelated issues the Club of Rome ominously calls “the problematic”:
Time after time … you’ll find people are reacting to a problem, they think they know what to do, and they don’t realize that what they’re doing is making a problem. This is a vicious [cycle], because as things get worse, there is more incentive to do things, and it gets worse and worse.
Where this vague warning is supposed to leave us is uncertain. If the current course is dire, are “unsystematic” solutions worse than doing nothing? This theory also seems to leave powerfully vested human agents (like Exxon’s executives) wholly unaccountable for the coming collapse. Limits to Growth—scoffed at and disparagingly called “neo-Malthusian” by a host of libertarian critics—stands on far surer evidentiary footing than Newton’s weird predictions, and its climate forecasts, notes Christian Parenti, “were alarmingly prescient.” But for all this doom and gloom, it’s worth bearing in mind that models of the future are not, in fact, the future. There are hard times ahead, but no theory, no matter how sophisticated, can account for every variable.
On a page for its School of Technology, Rasmussen College lists six “Assumptions to Avoid” for women who want to enter the field of computer science. I couldn’t comment on whether these “assumptions” (alleged misconceptions like “the work environment is hostile to women”) are actually disproved by the commentary. But I might suggest a seventh “assumption to avoid”—that women haven’t always been computer scientists, integral to the development of the computer, programming languages, and every other aspect of computing, even 100 years before computers existed.
In fact, one of the most notable women in computer science, Grace Hopper, served as a member of the Harvard team that built the first computer, the room-sized Mark I designed in 1944 by physics professor Howard Aiken. Hopper also helped develop COBOL, the first universal programming language for business, still widely in use today, a system based on written English rather than on symbols or numbers. And she is credited with coining the term “computer bug” (and by extension “debug”), when she and her associates found a moth stuck inside the Mark II in 1947. (“From then on,” she told Time magazine in 1984, “when anything went wrong with a computer, we said it had bugs in it.”)
These are but a few of her achievements in a computer science career that spanned more than 42 years, during which time she rose through the ranks of the Naval Reserves, then later active naval duty, retiring as the oldest commissioned officer, a rear admiral, at age 79.
In addition to winning distinguished awards and commendations over the course of her career—including the first-ever computer science “Man of the Year” award—Hopper also acquired a few affectionate nicknames, including “Amazing Grace” and “Grandma COBOL.” She may become known to a new generation by the nickname “Queen of Code,” the title of a recent documentary from FiveThirtyEight’s “Signals” series. Directed by Community star Gillian Jacobs, the short film, which you can watch in full here, tells the story of her “inimitable legacy as a brilliant programmer and pioneering woman in a male-dominated field,” writes Allison McCann at FiveThirtyEight.
Hopper’s name may be “mysteriously absent from many history books,” as Amy Poehler’s Smart Girls notes, but before her death in 1992, she was introduced to millions through TV appearances on shows like Late Night with David Letterman (top) and 60 Minutes, just above. As you’ll see in these clips, Hopper wasn’t just a crack mathematician and programmer but also an ace public speaker whose deadpan humor cracked up Letterman and the groups of students and fellow scientists she frequently addressed.
The 60 Minutes segment notes that Hopper became “one of that small band of brothers and sisters who ushered in the computer revolution” when she left her professor’s job at Vassar at the start of WWII to serve in the Naval Reserve, where she was assigned to the Bureau of Ships Computation Project at Harvard. But she never stopped being an educator and considered “training young people” her second-most important accomplishment. In this, her legacy lives on as well.