How Margaret Hamilton Wrote the Computer Code That Helped Save the Apollo Moon Landing Mission

From a distance of half a century, we look back on the moon landing as a thoroughly analog affair, an old-school engineering project of the kind seldom even proposed anymore in this digital age. But the Apollo 11 mission could never have happened without computers and the people who program them, a fact that has become better-known in recent years thanks to public interest in the work of Margaret Hamilton, director of the Software Engineering Division of MIT's Instrumentation Laboratory when it developed on-board flight software for NASA's Apollo space program. You can learn more about Hamilton, whom we've previously featured here on Open Culture, from the short MAKERS profile video above.

Today we consider software engineering a perfectly viable field, but back in the mid-1960s, when Hamilton first joined the Apollo project, it didn't even have a name. "I came up with the term 'software engineering,' and it was considered a joke," says Hamilton, who remembers her colleagues making remarks like, "What, software is engineering?"




But her own experience went some way toward proving that working in code had become as important as working in steel. Only by watching her young daughter play at the same controls the astronauts would later use did she realize that just one human error could potentially bring the mission to ruin — and that she could minimize the possibility by taking it into account when designing the mission's software. Hamilton's proposal met with resistance, NASA's official line at the time being that "astronauts are trained never to make a mistake."

But Hamilton persisted, prevailed, and was vindicated during the moon landing itself, when an astronaut did make a mistake, one that caused an overloading of the flight computer. The whole landing might have been aborted if not for Hamilton's foresight in implementing an "asynchronous executive" function capable, in the event of an overload, of setting less important tasks aside and prioritizing more important ones. "The software worked just the way it should have," Hamilton says in the Christie's video on the incident above, describing what she felt afterward as "a combination of excitement and relief." Engineers of software, hardware, and everything else know that feeling when they see a complicated project work — but surely few know it as well as Hamilton and her Apollo collaborators do.
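
The principle behind that executive is simple to state, if hard to get right under 1960s constraints: every job carries a priority, and when demand exceeds capacity the low-priority jobs are set aside so the critical ones keep running. The toy sketch below illustrates that idea only; the task names and numbers are invented, and this is not the actual Apollo Guidance Computer code.

```python
import heapq

# A toy sketch of priority-based load shedding in the spirit of the executive
# Hamilton describes: when there is not enough capacity for every scheduled
# job, lower-priority work is set aside so critical tasks still run.
# Task names and numbers are invented for illustration.

def run_cycle(tasks, capacity):
    """tasks: list of (priority, name, cost); a lower priority number is more important."""
    queue = list(tasks)
    heapq.heapify(queue)                  # most important task comes off the heap first
    executed, deferred = [], []
    while queue:
        priority, name, cost = heapq.heappop(queue)
        if cost <= capacity:
            capacity -= cost
            executed.append(name)
        else:
            deferred.append(name)         # shed the job instead of overloading
    return executed, deferred

tasks = [(1, "landing guidance", 40), (2, "display update", 30), (3, "radar sampling", 50)]
print(run_cycle(tasks, capacity=80))      # (['landing guidance', 'display update'], ['radar sampling'])
```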

Related Content:

Margaret Hamilton, Lead Software Engineer of the Apollo Project, Stands Next to Her Code That Took Us to the Moon (1969)

How 1940s Film Star Hedy Lamarr Helped Invent the Technology Behind Wi-Fi & Bluetooth During WWII

Meet Grace Hopper, the Pioneering Computer Scientist Who Helped Invent COBOL and Build the Historic Mark I Computer (1906-1992)

How Ada Lovelace, Daughter of Lord Byron, Wrote the First Computer Program in 1842–a Century Before the First Computer

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall, on Facebook, or on Instagram.

Pioneering Computer Scientist Grace Hopper Shows Us How to Visualize a Nanosecond (1983)

Human imagination seems seriously limited when faced with the cosmic scope of time and space. We can imagine, through stop-motion animation and CGI, what it might be like to walk the earth with creatures the size of office buildings. But how to wrap our heads around the fact that they lived hundreds of millions of years ago, on a planet some four and a half billion years old? We trust the science, but can’t rely on intuition alone to guide us to such mind-boggling knowledge.

At the other end of the scale, events measured in nanoseconds, or billionths of a second, seem inconceivable, even to someone as smart as Grace Hopper, the Navy mathematician who helped invent COBOL and build the historic Mark I computer. Or so she says in the 1983 video clip above, taken from one of the many guest lectures she gave at universities, museums, military bodies, and corporations.




When she first heard of “circuits that acted in nanoseconds,” she says, “billionths of a second… Well, I didn’t know what a billion was…. And if you don’t know what a billion is, how on earth do you know what a billionth is? Finally, one morning in total desperation, I called over to the engineering building, and I said, ‘Please cut off a nanosecond and send it to me.’” What she asked for, she explains as she shows the class, was a piece of wire representing the distance a signal could travel in a nanosecond.

Now of course it wouldn’t really be through wire — it’d be out in space, the velocity of light. So if we start with a velocity of light and use your friendly computer, you’ll discover that a nanosecond is 11.8 inches long, the maximum limiting distance that electricity can travel in a billionth of a second.

Follow the rest of her explanation, with wire props, and see if you can better understand a measure of time beyond the reaches of conscious experience. The explanation was immediately successful when she began using it in the late 1960s “to demonstrate how designing smaller components would produce faster computers,” writes the National Museum of American History. The bundle of wires below, each about 30cm (11.8 inches) long, comes from a lecture Hopper gave museum docents in March 1985.

Photo via the National Museum of American History

Like the age of the dinosaurs, the nanosecond may turn out to be just the first of the incomprehensibly small units of time scientists become able to measure—and computer scientists able to access. “Later,” notes the NMAH, “as components shrank and computer speeds increased, Hopper used grains of pepper to represent the distance electricity traveled in a picosecond, one trillionth of a second.”
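
Her figures hold up to a quick back-of-the-envelope check. A few lines of Python, using the speed of light in a vacuum, recover both the 11.8-inch nanosecond wire and the pepper-grain picosecond:

```python
# A quick check of Hopper's props: how far light travels in a nanosecond
# and in a picosecond.
C = 299_792_458                  # speed of light in meters per second

nanosecond_m = C * 1e-9          # about 0.2998 meters
picosecond_m = C * 1e-12         # about 0.0003 meters

print(f"1 light-nanosecond is about {nanosecond_m * 100:.1f} cm "
      f"({nanosecond_m / 0.0254:.1f} inches)")        # 30.0 cm, 11.8 inches
print(f"1 light-picosecond is about {picosecond_m * 1000:.2f} mm")  # 0.30 mm, pepper-grain scale
```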

At this point, the map becomes no more revealing than the unknown territory, invisible to the naked eye, inconceivable but through wild leaps of imagination. But if anyone could explain the increasingly inexplicable in terms most anyone could understand, it was the brilliant but down-to-earth Hopper.

via Kottke

Related Content:

Meet Grace Hopper, the Pioneering Computer Scientist Who Helped Invent COBOL and Build the Historic Mark I Computer (1906-1992)

The Map of Computer Science: New Animation Presents a Survey of Computer Science, from Alan Turing to “Augmented Reality”

Free Online Computer Science Courses 

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

How to Take a Picture of a Black Hole: Watch the 2017 Ted Talk by Katie Bouman, the MIT Grad Student Who Helped Take the Groundbreaking Photo

What triggered the worst impulses of the Internet last week?

The world's first photo of a black hole, which proved the presence of troll life here on earth and confirmed that female scientists, through no fault of their own, still have a much longer way to go, baby.

If you want a taste, sort the comments on the two-year-old TED Talk, above, so they're ordered "newest first."

Katie Bouman, soon-to-be assistant professor of computing and mathematical sciences at the California Institute of Technology, was a PhD candidate at MIT two years ago, when she taped the talk, but she could've passed for a nervous high schooler competing in the National Science Bowl finals, in clothes borrowed from Aunt Judy, who works at the bank.




The focus of her studies was the ways in which emerging computational methods could help expand the boundaries of interdisciplinary imaging.

Prior to last week, I’m not sure how well I could have parsed the focus of her work had she not taken the time to help less STEM-inclined viewers such as myself wrap our heads around her highly technical, then-wholly-theoretical subject.

What I know about black holes could still fit in a thimble, and in truth, my excitement about one being photographed for the first time pales in comparison to my excitement about Game of Thrones returning to the airwaves.

Fortunately, we’re not obligated to be equally turned on by the same interests, an idea theoretical physicist Richard Feynman promoted:

I've always been very one-sided about science and when I was younger I concentrated almost all my effort on it. I didn't have time to learn and I didn't have much patience with what's called the humanities, even though in the university there were humanities that you had to take. I tried my best to avoid somehow learning anything and working at it. It was only afterwards, when I got older, that I got more relaxed, that I've spread out a little bit. I've learned to draw and I read a little bit, but I'm really still a very one-sided person and I don't know a great deal. I have a limited intelligence and I use it in a particular direction.

I'm pretty sure my lack of passion for science is not tied to my gender. Some of my best friends are guys who feel the same. (Some of them don't like team sports either.)

But I couldn't help but experience a wee thrill that this young woman, a science nerd who admittedly could’ve used a few theater nerd tips regarding relaxation and public speaking, realized her dream—an honest-to-goodness photo of a black hole just like the one she talked about in her TED Talk, "How to take a picture of a black hole."

Bouman and the 200+ colleagues she acknowledges and thanks at every opportunity achieved their goal not with an earth-sized camera but with a network of linked telescopes, much as she had described two years earlier, when she invoked disco balls, Mick Jagger, oranges, selfies, and a jigsaw puzzle in an effort to help people like me understand.

Look at that sucker (or, more accurately, its shadow)! That thing’s 500 million trillion kilometers from Earth!

(That's much farther than King's Landing is from Winterfell.)

I’ll bet a lot of elementary science teachers, be they male, female, or non-binary, are going to make science fun by having their students draw pictures of the picture of the black hole.

If we could go back (or forward) in time, I can almost guarantee that mine would be among the best because while I didn’t “get” science (or gym), I was a total art star with the crayons.

Then, crafty as Lord Petyr Baelish when presentation time rolled around, I would partner with a girl like Katie Bouman, who could explain the science with winning vigor. She genuinely seems to embrace the idea that it “takes a village,” and that one’s fellow villagers should be credited whenever possible.

(How did I draw the black hole, you ask? Honestly, it's not that much harder than drawing a doughnut. Now back to Katie!)

Alas, her professional warmth failed to register with legions of Internet trolls who began sliming her shortly after a colleague at MIT shared a beaming snapshot of her, taken, presumably, with a regular old phone as the black hole made its debut. That pic cemented her accidental status as the face of this project.

Note to the trolls—it wasn't a dang selfie.

“I’m so glad that everyone is as excited as we are and people are finding our story inspirational,’’ Bouman told The New York Times. “However, the spotlight should be on the team and no individual person. Focusing on one person like this helps no one, including me.”

Although Bouman was a junior team member, she and other grad students made major contributions. She directed the verification of images and the selection of imaging parameters, and she authored an imaging algorithm that researchers used in the creation of three scripted code pipelines from which the instantly famous picture was cobbled together.

As Vincent Fish, a research scientist at MIT's Haystack Observatory told CNN:

One of the insights Katie brought to our imaging group is that there are natural images. Just think about the photos you take with your camera phone—they have certain properties.... If you know what one pixel is, you have a good guess as to what the pixel is next to it.

Hey, that makes sense.
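
For the curious, the intuition Fish describes can be put very roughly in code: in a natural image, neighboring pixels tend to agree, so a reconstruction algorithm can prefer candidate images whose neighbors agree over ones that look like static. The toy score below only illustrates that idea; it is not the Event Horizon Telescope pipeline or Bouman's actual algorithm.

```python
import numpy as np

# Toy illustration of the "natural images" prior: in a real photo, a pixel
# is usually close in value to the pixels next to it. A reconstruction
# algorithm can use a score like this to prefer plausible images over noise.

def smoothness_penalty(image):
    """Sum of squared differences between each pixel and its right and lower neighbors."""
    dx = np.diff(image, axis=1)   # horizontal neighbor differences
    dy = np.diff(image, axis=0)   # vertical neighbor differences
    return float((dx ** 2).sum() + (dy ** 2).sum())

smooth_gradient = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8))  # "natural"-looking ramp
random_static = np.random.default_rng(0).random((8, 8))                 # pure noise

print(smoothness_penalty(smooth_gradient))  # small: neighbors agree
print(smoothness_penalty(random_static))    # much larger: neighbors don't
```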

As The Verge’s science editor, Mary Beth Griggs, points out, the rush to defame Bouman is of a piece with some of the non-virtual realities women in science face:

Part of the reason that some posters found Bouman immediately suspicious had to do with her gender. Famously, a number of prominent men like disgraced former CERN physicist Alessandro Strumia have argued that women aren’t being discriminated against in science — they simply don’t like it, or don’t have the aptitude for it. That argument fortifies a notion that women don’t belong in science, or can’t really be doing the work. So women like Bouman must be fakes, this warped line of thinking goes…

Even I, whose 7th grade science teacher tempered a bad grade on my report card by saying my interest in theater would likely serve me much better than anything I might eke from her class, know that just as many girls and women excel at science, technology, engineering, and math as excel in the arts. (Sometimes they excel at both!)

(And power to every little boy with his sights set on nursing, teaching, or ballet!)

(How many black holes have the haters photographed recently?)

Griggs continues:

Saying that she was part of a larger team doesn’t diminish her work, or minimize her involvement in what is already a history-making project. Highlighting the achievements of a brilliant, enthusiastic scientist does not diminish the contributions of the other 214 people who worked on the project, either. But what it is doing is showing a different model for a scientist than the one most of us grew up with. That might mean a lot to some kids — maybe kids who look like her — making them excited about studying the wonders of the Universe.

via BoingBoing

Related Content:

Women’s Hidden Contributions to Modern Genetics Get Revealed by New Study: No Longer Will They Be Buried in the Footnotes

New Augmented Reality App Celebrates Stories of Women Typically Omitted from U.S. History Textbooks

Stephen Hawking (RIP) Explains His Revolutionary Theory of Black Holes with the Help of Chalkboard Animations

Watch a Star Get Devoured by a Supermassive Black Hole

Ayun Halliday is an author, illustrator, theater maker and Chief Primatologist of the East Village Inky zine.  Join her in New York City tonight for the next installment of her book-based variety show, Necromancers of the Public Domain. Follow her @AyunHalliday.

Artificial Intelligence Identifies the Six Main Arcs in Storytelling: Welcome to the Brave New World of Literary Criticism

Is the singularity upon us? AI seems poised to replace everyone, even artists, whose work can seem like an inviolably human industry. Or maybe not. Nick Cave’s poignant answer to a fan question might persuade you a machine will never write a great song, though it might master all the moves to write a good one. An AI-written novel did almost win a Japanese literary award, a suitably impressive feat, even if much of the authorship should be attributed to the program’s human designers.

But what about literary criticism? Is this an art that a machine can do convincingly? The answer may depend on whether you consider it an art at all. For those who do, no artificial intelligence will ever properly develop the theory of mind needed for subtle, even moving, interpretations. On the other hand, one group of researchers has succeeded in using “sophisticated computing power, natural language processing, and reams of digitized text,” writes Atlantic editor Adrienne LaFrance, “to map the narrative patterns in a huge corpus of literature.” The name of their literary criticism machine? The Hedonometer.




We can treat this as an exercise in compiling data, but it's arguable that the results are on par with work from the comparative mythology school of James Frazer and Joseph Campbell. A more immediate comparison might be to the very deft, if not particularly subtle, Kurt Vonnegut, who—before he wrote novels like Slaughterhouse-Five and Cat’s Cradle—submitted a master’s thesis in anthropology to the University of Chicago. His project did the same thing as the machine, 35 years earlier, though he may not have had the wherewithal to read "1,737 English-language works of fiction between 10,000 and 200,000 words long" while struggling to finish his graduate program. (His thesis, by the way, was rejected.)

Those numbers describe the dataset from Project Gutenberg fed into The Hedonometer by the computer scientists at the University of Vermont and the University of Adelaide. After the computer finished "reading," it plotted “the emotional trajectory” of all of the stories, using "sentiment analysis to generate an emotional arc for each work." What it found were six broad categories of story, listed below:

  1. Rags to Riches (rise)
  2. Riches to Rags (fall)
  3. Man in a Hole (fall then rise)
  4. Icarus (rise then fall)
  5. Cinderella (rise then fall then rise)
  6. Oedipus (fall then rise then fall)
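
In spirit, the method is straightforward: slide a window across the text, average a per-word "happiness" score inside each window, and the resulting curve is the emotional arc that gets sorted into one of the shapes above. Here is a minimal sketch of that windowed approach, with a tiny invented valence lexicon standing in for the Hedonometer's real crowd-sourced word list:

```python
# A minimal sketch of windowed sentiment scoring. The lexicon below is
# invented for illustration; scores run from 1 (saddest) to 9 (happiest).

VALENCE = {"love": 8.4, "laughter": 8.5, "win": 7.6, "home": 7.0,
           "rain": 5.1, "lost": 2.8, "cry": 2.3, "war": 1.8, "death": 1.5}

def emotional_arc(text, window=2000, step=1000):
    """Average the valence of known words inside each sliding window of the text."""
    words = text.lower().split()
    arc = []
    for start in range(0, max(len(words) - window, 1), step):
        chunk = words[start:start + window]
        scores = [VALENCE[w] for w in chunk if w in VALENCE]
        arc.append(sum(scores) / len(scores) if scores else 5.0)  # 5.0 = neutral
    return arc

# A curve that falls and then rises would be sorted with "Man in a Hole,"
# one that rises, falls, and rises again with "Cinderella," and so on.
```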

How does this endeavor compare with Vonnegut’s project? (See him present the theory below.) The novelist used more or less the same methodology, in human form, to come up with eight universal story arcs or “shapes of stories.” Vonnegut himself left out the Rags to Riches category; he called it an anomaly, though he did have a heading for the same rising-only story arc—the Creation Story—which he deemed an uncommon shape for Western fiction. He did include the Cinderella arc, and was pleased by his discovery that its shape mirrored the New Testament arc, which he also included in his schema, an act the AI surely would have judged redundant.

Contra Vonnegut, the AI found that one-fifth of all the works it analyzed were Rags-to-Riches stories. It determined that this arc was far less popular with readers than “Oedipus,” “Man in a Hole,” and “Cinderella.” Its analysis does get much more granular, and to allay our suspicions, the researchers promise they did not control the outcome of the experiment. “We’re not imposing a set of shapes,” says lead author Andy Reagan, Ph.D. candidate in mathematics at the University of Vermont. “Rather: the math and machine learning have identified them.”

But the authors do provide a lot of their own interpretation of the data, from choosing representative texts—like Harry Potter and the Deathly Hallows—to illustrate “nested and complicated” plot arcs, to providing the guiding assumptions of the exercise. One of those assumptions, unsurprisingly given the authors’ fields of interest, is that math and language are interchangeable. “Stories are encoded in art, language, and even in the mathematics of physics,” they write in the introduction to their paper, published on Arxiv.org.

“We use equations," they go on, "to represent both simple and complicated functions that describe our observations of the real world.” If we accept the premise that sentences and integers and lines of code are telling the same stories, then maybe there isn’t as much difference between humans and machines as we would like to think.

via The Atlantic

Related Content:

Nick Cave Answers the Hotly Debated Question: Will Artificial Intelligence Ever Be Able to Write a Great Song?

Kurt Vonnegut Diagrams the Shape of All Stories in a Master’s Thesis Rejected by U. Chicago

Kurt Vonnegut Maps Out the Universal Shapes of Our Favorite Stories

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

Watch 110 Lectures by Donald Knuth, “the Yoda of Silicon Valley,” on Programming, Mathematical Writing, and More

Many see the realms of literature and computers as not just completely separate, but growing more distant from one another all the time. Donald Knuth, one of the most respected figures among Silicon Valley's most deeply computer-savvy, sees it differently. His claims to fame include The Art of Computer Programming, an ongoing multi-volume series of books whose publication began more than fifty years ago, and the digital typesetting system TeX, which, in a recent profile of Knuth, the New York Times' Siobhan Roberts describes as "the gold standard for all forms of scientific communication and publication."

Some, Roberts writes, consider TeX "Dr. Knuth’s greatest contribution to the world, and the greatest contribution to typography since Gutenberg." At the core of his lifelong work is an idea called "literate programming," which emphasizes "the importance of writing code that is readable by humans as well as computers — a notion that nowadays seems almost twee.




"Dr. Knuth has gone so far as to argue that some computer programs are, like Elizabeth Bishop’s poems and Philip Roth’s American Pastoral, works of literature worthy of a Pulitzer." Knuth's mind, technical achievements, and style of communication have earned him the informal title of "the Yoda of Silicon Valley."

That appellation also reflects a depth of technical wisdom only attainable by getting to the very bottom of things, which in Knuth's case means fully understanding how computer programming works all the way down to the most basic level. (This is in contrast to the average programmer, writes Roberts, who "no longer has time to manipulate the binary muck, and works instead with hierarchies of abstraction, layers upon layers of code — and often with chains of code borrowed from code libraries.") Now everyone can get more than a taste of Knuth's perspective and thoughts on computers, programming, and a host of related subjects on the YouTube channel of Stanford University, where Knuth is now professor emeritus (and where he still gives informal lectures under the banner "Computer Musings").

Stanford's online archive of Donald Knuth Lectures now numbers 110, ranging across the decades and covering such subjects as the usage and mechanics of TeX, the analysis of algorithms, and the nature of mathematical writing. "I am worried that algorithms are getting too prominent in the world,” he tells Roberts in the New York Times profile. “It started out that computer scientists were worried nobody was listening to us. Now I’m worried that too many people are listening." But having become a computer scientist before the field of computer science even had a name, the now-octogenarian Knuth possesses a rare perspective from which anyone working in 21st-century technology could benefit.
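
For readers wondering what "literate programming" looks like in practice: Knuth's WEB and CWEB systems interleave TeX prose with Pascal or C source, then "tangle" out compilable code and "weave" out a typeset document. The fragment below is only a loose, hypothetical gesture at that spirit in plain Python, letting the explanation lead and the code follow; it is not Knuth's tooling.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm, written explanation-first in a literate spirit.

    The key observation: any number that divides both a and b also divides
    a % b, so the pair (a, b) may be replaced by (b, a % b) without changing
    the greatest common divisor. Repeating this shrinks b until it reaches
    zero, at which point a holds the answer.
    """
    while b:
        a, b = b, a % b
    return a

assert gcd(252, 105) == 21  # the test values here are arbitrary
```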

Related Content:

Free Online Computer Science Courses

50 Famous Academics & Scientists Talk About God

The Secret History of Silicon Valley

When J.M. Coetzee Secretly Programmed Computers to Write Poetry in the 1960s

Introduction to Computer Science and Programming: A Free Course from MIT

Peter Thiel’s Stanford Course on Startups: Read the Lecture Notes Free Online

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.

Discover Rare 1980s CDs by Lou Reed, Devo & Talking Heads That Combined Music with Computer Graphics

When it first hit the market in 1982, the compact disc famously promised "perfect sound that lasts forever." But innovation has a way of marching continually on, and naturally the innovators soon started wondering: what if perfect sound isn't enough? What if consumers want something to go with it, something to look at? And so, when compact disc co-developers Sony and Philips updated their standards, they included documentation on the use of the format's channels not occupied by audio data. So was born the CD+G, which boasted "not only the CD's full, digital sound, but also video information — graphics — viewable on any television set or video monitor."

That text comes from a package scan posted by the online CD+G Museum, whose Youtube channel features rips of nearly every record released on the format, beginning with the first, the Firesign Theatre's Eat or Be Eaten.




When it came out, listeners who happened to own a CD+G-compatible player (or a CD+G-compatible video game console, my own choice at the time having been the TurboGrafx-16) could see that beloved "head comedy" troupe's densely layered studio production and even more densely layered humor accompanied by images rendered in psychedelic color — or as psychedelic as images can get with only sixteen colors available on the palette, not to mention a resolution of 288 by 192 pixels, not much larger than an icon on the home screen of a modern smartphone. Those limitations may make CD+G graphics look unimpressive today, but just imagine what a cutting-edge novelty they must have seemed in the late 1980s when they first appeared.

Displaying lyrics for karaoke singers was the most obvious use of CD+G technology, but its short lifespan also saw a fair few experiments on other major-label releases, all viewable at the CD+G Museum: Lou Reed's New York, which combines lyrics with digitized photography of the eponymous city; Talking Heads' Naked, which provides musical information such as the chord changes and instruments playing on each phrase; Johann Sebastian Bach's St. Matthew Passion, which translates the libretto alongside works of art; and Devo's single "Disco Dancer," which tells the origin story of those "five Spudboys from Ohio." With these and almost every other CD+G release available at the CD+G Museum, you'll have no shortage of not just background music but background visuals for your next late-80s-early-90s-themed party.

Related Content:

Watch 1970s Animations of Songs by Joni Mitchell, Jim Croce & The Kinks, Aired on The Sonny & Cher Show

The Story of How Beethoven Helped Make It So That CDs Could Play 74 Minutes of Music

Discover the Lost Early Computer Art of Telidon, Canada’s TV Proto-Internet from the 1970s

Based in Seoul, Colin Marshall writes and broadcasts on cities and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.

M.I.T. Computer Program Alarmingly Predicts in 1973 That Civilization Will End by 2040

In 1704, Isaac Newton predicted the end of the world sometime around (or after, "but not before") the year 2060, using a strange series of mathematical calculations. Rather than study what he called the “book of nature,” he took as his source the supposed prophecies of the book of Revelation. While such predictions have always been central to Christianity, it is startling for modern people to look back and see the famed astronomer and physicist indulging them. For Newton, however, as Matthew Stanley writes at Science, “laying the foundation of modern physics and astronomy was a bit of a sideshow. He believed that his truly important work was deciphering ancient scriptures and uncovering the nature of the Christian religion.”

Over three hundred years later, we still have plenty of religious doomsayers predicting the end of the world with Bible codes. But in recent times, their ranks have seemingly been joined by scientists whose only professed aim is interpreting data from climate research and sustainability estimates given population growth and dwindling resources. The scientific predictions do not draw on ancient texts or theology, nor involve final battles between good and evil. Though there may be plagues and other horrible reckonings, these are predictably causal outcomes of over-production and consumption rather than divine wrath. Yet by some strange fluke, the science has arrived at the same apocalyptic date as Newton, plus or minus a decade or two.




The “end of the world” in these scenarios means the end of modern life as we know it: the collapse of industrialized societies, large-scale agricultural production, supply chains, stable climates, nation states…. Since the late sixties, an elite society of wealthy industrialists and scientists known as the Club of Rome (a frequent player in many conspiracy theories) has foreseen these disasters in the early 21st century. One of the sources of their vision is a computer program developed at MIT by computing pioneer and systems theorist Jay Forrester, whose model of global sustainability, one of the first of its kind, predicted civilizational collapse in 2040. “What the computer envisioned in the 1970s has by and large been coming true,” claims Paul Ratner at Big Think.

Those predictions include population growth and pollution levels, “worsening quality of life,” and “dwindling natural resources.” In the video at the top, see Australia's ABC explain the computer’s calculations, “an electronic guided tour of our global behavior since 1900, and where that behavior will lead us,” says the presenter. The graph spans the years 1900 to 2060. "Quality of life" begins to sharply decline after 1940, and by 2020, the model predicts, the metric contracts to turn-of-the-century levels, meeting the sharp increase of the “Zed Curve" that charts pollution levels. (ABC revisited this reporting in 1999 with Club of Rome member Keith Suter.)
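
The machinery behind that graph is system dynamics: a handful of interacting stocks (population, resources, pollution) whose flows feed back on one another, integrated forward year by year. The toy loop below is a sketch in that spirit only; every coefficient and starting value is invented for illustration, and the real World models use many more stocks and empirically fitted tables.

```python
# A toy system-dynamics loop in the spirit of Forrester's World models.
# All coefficients and starting values are invented for illustration.

def simulate(years=160, dt=1.0):
    population, resources, pollution = 1.6, 1000.0, 0.2  # arbitrary 1900 levels
    history = []
    for step in range(int(years / dt) + 1):
        year = 1900 + step * dt
        scarcity = resources / 1000.0                     # falls as the stock depletes
        quality_of_life = scarcity / (1.0 + pollution)
        history.append((year, population, resources, pollution, quality_of_life))
        births = 0.03 * scarcity * population             # growth slows with scarcity
        deaths = (0.01 + 0.005 * pollution) * population  # mortality rises with pollution
        population = max(population + (births - deaths) * dt, 0.0)
        resources = max(resources - 0.4 * population * dt, 0.0)
        pollution = max(pollution + (0.02 * population - 0.1 * pollution) * dt, 0.0)
    return history

for year, pop, res, pol, qol in simulate()[::20]:
    print(f"{year:4.0f}  population={pop:5.1f}  resources={res:6.1f}  "
          f"pollution={pol:5.2f}  quality_of_life={qol:4.2f}")
```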

You can probably guess the rest—or you can read all about it in the 1972 Club of Rome-published report Limits to Growth, which drew wide popular attention to Jay Forrester’s books Urban Dynamics (1969) and World Dynamics (1971). Forrester, a figure of Newtonian stature in the worlds of computer science and management and systems theory—though not, like Newton, a Biblical prophecy enthusiast—more or less endorsed his conclusions to the end of his life in 2016. In one of his last interviews, at the age of 98, he told the MIT Technology Review, “I think the books stand all right.” But he also cautioned against acting without systematic thinking in the face of the globally interrelated issues the Club of Rome ominously calls “the problematic”:

Time after time … you’ll find people are reacting to a problem, they think they know what to do, and they don’t realize that what they’re doing is making a problem. This is a vicious [cycle], because as things get worse, there is more incentive to do things, and it gets worse and worse.

Where this vague warning is supposed to leave us is uncertain. If the current course is dire, “unsystematic” solutions may be worse? This theory also seems to leave powerfully vested human agents (like Exxon's executives) wholly unaccountable for the coming collapse. Limits to Growth—scoffed at and disparagingly called “neo-Malthusian” by a host of libertarian critics—stands on far surer evidentiary footing than Newton’s weird predictions, and its climate forecasts, notes Christian Parenti, “were alarmingly prescient.” But for all this doom and gloom it’s worth bearing in mind that models of the future are not, in fact, the future. There are hard times ahead, but no theory, no matter how sophisticated, can account for every variable.

via Big Think

Related Content:

In 1704, Isaac Newton Predicts the World Will End in 2060

A Century of Global Warming Visualized in a 35 Second Video

A Map Shows What Happens When Our World Gets Four Degrees Warmer: The Colorado River Dries Up, Antarctica Urbanizes, Polynesia Vanishes

It’s the End of the World as We Know It: The Apocalypse Gets Visualized in an Inventive Map from 1486

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness
