Meet the Physicist Who Has Created 1600+ Wikipedia Entries for Important Female & Minority Scientists

I find nothing more rewarding, honestly, than seeing people get recognized and championed for what they’ve done. – Dr. Jess Wade

As far as centuries go, the 21st one is a relatively good time to be a girl with an interest in STEM.

Modern science-loving girls find themselves born into a world where books and TV shows celebrating their interest proliferate. Their classrooms are festooned with posters of trailblazing female scientists. Even Barbie has ditched her bathing suit for a lab coat and a microscope.




You’d think Wikipedia would have kept pace in this climate.

And it has…thanks almost entirely to the efforts of Dr. Jess Wade, a 33-year-old Imperial College Research Fellow who spends her days investigating spin selective charge transport through chiral systems in the Department of Materials.

Her evenings, however, belong to Wikipedia.

That’s when she drafts entries for under-recognized female scientists and scientists of color.

“I had a target for doing one a day, but sometimes I get too excited and do three,” she told The Guardian in 2018.

To date she’s added more than 1,600 names, striving to make their biographies as fully fleshed out as any of the write-ups for the white male scientists who flourish on the site.

This requires some forensic digging. Discovering a subject’s maiden name is often the critical step to finding her PhD thesis and early influences.

A handful of Wade’s entries have been stricken for the truly maddening reason that their subjects are too obscure to warrant inclusion.

Wade’s own Wikipedia entry notes the hypocrisy of this logic, referring readers to a 2019 Chemistry World article in which she’s quoted:

When you make a page and it is disputed for deletion, it is not only annoying because your work is being deleted. It’s also incredibly intrusive and degrading to have someone discuss whether someone’s notable enough to be on Wikipedia – a website that has pages about almost every pop song, people who are extras in films no one has ever heard of and people who were in sports teams that never scored.

Below are just a few of the 1600+ female scientists she’s introduced to a wider audience. While history abounds with nearly invisible names whose discoveries and contributions have been inadequately recognized, or all too frequently attributed to male colleagues, these women are all contemporary.

Nuclear chemist Clarice Phelps was part of the team that helped discover tennessine, the second heaviest known element.

Mathematician Gladys Mae West was one of the developers of GPS.

Physical chemist June Lindsey played a key role in the discovery of the DNA double helix.

Oceanographer and climate scientist Kim Cobb uses corals and cave stalagmites to inform projections of future climate change.

Vaccinologist Sarah Gilbert led the team that developed the Oxford/AstraZeneca vaccine (and inspired a Barbie created in her image, though you can be assured that the Wikipedia entry Wade researched and wrote for her came first).

Wade’s hope is that a higher representation of female scientists and scientists of color on a crowdsourced, easily-accessed platform like Wikipedia will deal a blow to ingrained gender bias, expanding public perception of who can participate in these sorts of careers and encouraging young girls to pursue these courses of study. As she told the New York Times:

I’ve always done a lot of work to try to get young people — particularly girls and children from lower socioeconomic backgrounds and people of color — to think about studying physics at high school, because physics is still very much that kind of elitist, white boy subject.

Our science can only benefit the whole of society if it’s done by the whole of society. And that’s not currently the case.

Unsurprisingly, Wade is often asked how to foster and support girls with an interest in science, beyond upping the number of role models available to them on Wikipedia.

The way forward, she told NBC, is not attention-getting “whiz bang” one-off events and assemblies, but rather paying skilled teachers as well as we pay bankers to mentor students on their course of study and to help them apply for grants, fellowships and other opportunities. As students prepare to enter the workforce, clearly communicated sexual harassment policies and assistance with childcare and eldercare become crucial:

Ultimately, we don’t only need to increase the number of girls choosing science, we need to increase the proportion of women who stay in science.

Listen to Jess Wade talk about her Wikipedia project on NPR’s science program Short Wave here.

Related Content:

Women Scientists Launch a Database Featuring the Work of 9,000 Women Working in the Sciences

“The Matilda Effect”: How Pioneering Women Scientists Have Been Denied Recognition and Written Out of Science History

The Little-Known Female Scientists Who Mapped 400,000 Stars Over a Century Ago: An Introduction to the “Harvard Computers”

Ayun Halliday is the Chief Primatologist of the East Village Inky zine and author, most recently, of Creative, Not Famous: The Small Potato Manifesto.  Follow her @AyunHalliday.

Christopher Hitchens’ Final Interview: Hear the Newly-Released Uncut Conversation with Richard Dawkins

Never was there a more exhilarating time and place to be interested in atheism than the internet of ten or fifteen years ago. “People compiled endless lists of arguments and counterarguments for or against atheism,” remembers blogger Scott Alexander. One atheist newsgroup “created a Dewey-Decimal-system-esque index of almost a thousand creationist arguments” and “painstakingly debunked all of them.” In turn, their creationist arch-enemies “went through and debunked all of their debunkings.” Readers could enjoy a host of atheism-themed web comics and “the now-infamous r/atheism subreddit, which at the time was one of Reddit’s highest-ranked, beating topics like ‘news,’ ‘humor,’ and — somehow — ‘sex.’ At the time, this seemed perfectly normal.”

This was the culture in which Richard Dawkins published The God Delusion, in 2006, and Christopher Hitchens published his God Is Not Great: How Religion Poisons Everything in 2007. “I’m not just doing what publishers like and coming up with a provocative subtitle,” Alexander quotes Hitchens as saying.  “I mean to say it infects us in our most basic integrity. It says we can’t be moral without ‘Big Brother,’ without a totalitarian permission, means we can’t be good to one another without this, we must be afraid, we must also be forced to love someone whom we fear — the essence of sadomasochism, the essence of abjection, the essence of the master-slave relationship and that knows that death is coming and can’t wait to bring it on.”




Dawkins and Hitchens became known as two of the “Four Horsemen of the Non-Apocalypse,” a group of public intellectuals that also included Sam Harris and Daniel Dennett. The label stuck after all of them sat down for a two-hour conversation on video in the fall of 2007, during which each man laid out his critique of the religious worldview. Four years later, Dawkins and Hitchens sat down for another recorded conversation, this time one-on-one and with a much different tone. Having suffered from cancer for more than a year, Hitchens seemed not to be long for this world, and indeed, he would be dead in just two months. But his condition hardly stopped him from speaking with his usual incisiveness on topics of great interest, and especially his and Dawkins’ shared bête noire of fundamentalist religion.

Dawkins, a biologist, sees in the power granted to religion a threat to hard-won scientific knowledge about the nature of reality; Hitchens, a writer and thinker in the tradition of George Orwell, saw it as one of the many forms of totalitarianism that have ever threatened the intellectual and bodily freedom of humankind. In this, Hitchens’ final interview (which was printed in Hitchens’ Last Interview book and whose uncut audio recording became available only this year), Dawkins expresses some concern that he’s become a “bore” with his usual anti-religious defense of science. Nonsense, Hitchens says: an honest scientist risks being called a bore just as an honest journalist risks being called strident, but nevertheless, “you’ve got to bang on.”

Related content:

Does God Exist? Christopher Hitchens Debates Christian Philosopher William Lane Craig (2009)

Is There an Afterlife? Christopher Hitchens Speculates in an Animated Video

Christopher Hitchens: No Deathbed Conversion for Me, Thanks, But it was Good of You to Ask

Master Curator Paul Holdengräber Interviews Hitchens, Herzog, Gourevitch & Other Leading Thinkers

The Last Interview Book Series Features the Final Words of Cultural Icons: Borges to Bowie, Philip K. Dick to Frida Kahlo

Richard Dawkins on Why We Should Believe in Science: “It Works … Bitches”

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities, the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall, on Facebook, or on Instagram.

Behold a Book of Color Shades Depicted with Feathers (Circa 1915)

Perhaps the 143 colors showcased in The Bayer Company’s early 20th-century sample book, Shades on Feathers, could be collected in the field, but it would involve a lot of travel and patience, and the stalking of several endangered if not downright extinct avian species.

Far easier, and much less expensive, for milliners, designers and decorators to dye plain white feathers exotic shades, following the instructions in the sample book.




Such artificially obtained rainbows owe a lot to William Henry Perkin, a teenage student of German chemist August Wilhelm von Hofmann, who spent Easter vacation of 1856 experimenting with aniline, an organic base his teacher had earlier discovered in coal tar.  Hoping to hit on a synthetic form of quinine, he accidentally hit on a solution that colored silk a lovely purple shade – an inadvertent eureka moment that ranks right up there with penicillin and the pretzel.

A Science Museum Group profile details what happened next:

Perkin named the colour mauve and the dye mauveine. He decided to try to market his discovery instead of returning to college.

On 26 August 1856, the Patent Office granted Perkin a patent for ‘a new colouring matter for dyeing with a lilac or purple colour stuffs of silk, cotton, wool, or other materials’.

Perkin’s next step was to interest cloth dyers and printers in his discovery. He had no experience of the textile trade and little knowledge of large-scale chemical manufacture. He corresponded with Robert and John Pullar in Glasgow, who offered him support. Perkin’s luck changed towards the end of 1857 when the Empress Eugénie, wife of Napoleon III, decided that mauve was the colour to wear. In January 1858, Queen Victoria followed suit, wearing mauve to her daughter’s wedding.

Cue an explosion of dye manufacturers across Great Britain and Europe, including Bayer, producer of the feather sample book. The survival of this artifact is somewhat miraculous given how vulnerable antique feathers are to environmental factors, pests, and improper storage.

(The sample book recommends cleaning the feathers, prior to dyeing, in a lukewarm solution of small amounts of olive oil soap and ammonia.)

The Science History Institute, owner of this unusual object, estimates that the undated book was produced between 1913 and 1918, the year the Migratory Bird Treaty Act outlawed the hunting of birds whose feathers humans deemed particularly fashionable.

Peruse the Science History Institute of Philadelphia’s digitized copy of the Shades on Feathers sample book here.

via Messy Nessy

Related Content 

Download 435 High Resolution Images from John J. Audubon’s The Birds of America

The Birdsong Project Features 220 Musicians, Actors, Artists & Writers Paying Tribute to Birds: Watch Performances by Yo-Yo Ma, Elvis Costello and Beck

The Bird Library: A Library Built Especially for Our Fine Feathered Friends

Ayun Halliday is the Chief Primatologist of the East Village Inky zine and author, most recently, of Creative, Not Famous: The Small Potato Manifesto.  Follow her @AyunHalliday.

Is There Life After Death?: Michio Kaku, Bill Nye, Sam Harris & More Explore One of Life’s Biggest Questions

We should probably not look to science to have cherished beliefs confirmed. As scientific understanding of the world has progressed over the centuries, it has brought on a loss of humans’ status as privileged beings at the center of the universe whose task is to subdue and conquer nature. (The stubborn persistence of those attitudes among the powerful has not served the species well.) We are not special, but we are still responsible, we have learned — maybe totally responsible for our lives on this planet. The methods of science do not lend themselves to soothing existential anxiety.

But what about the most cherished, and likely ancient, of human beliefs: faith in an afterlife?  Ideas of an underworld, or heaven, or hell have animated human culture since its earliest origins. There is no society in the world where we will not find some belief in an afterlife existing comfortably alongside life’s most mundane events. Is it a harmful idea? Is there any real evidence to support it? And which version of an afterlife — if such a thing existed — should we believe?




Such questions stack up. Answers in forms science can reconcile seem diminishingly few. Nonetheless, as we see in the Big Think video above, scientists, science communicators, and science enthusiasts are willing to discuss the possibility, or impossibility, of continuing after death. We begin with NASA astronomer Michelle Thaller, who references Einstein’s theory of the universe as fully complete, “so every point in the past and every point in the future are just as real as the point of time you feel yourself in right now.” Time spreads out in a landscape, each moment already mapped and surveyed.

When a close friend died, Einstein wrote a letter to his friend’s wife explaining, “Your husband, my friend, is just over the next hill. He’s still there” — in a theoretical sense. It may not have been the comfort she was looking for. The hope of an afterlife is that we’ll see our loved ones again, something Einstein’s solution does not allow. Sam Harris — who has leaned into the mystical practice of meditation while pulling it from its religious context — admits that death is a “dark mystery.” When people die, “there’s just the sheer not knowing what happened to them. And into this void, religion comes rushing with a very consoling story, saying nothing happened to them; they’re in a better place and you’re going to meet up with them after.”

The story isn’t always so consoling, depending on how punitive the religion, but it does offer an explanation and sense of certainty in the face of “sheer not knowing.” The human mind does not tolerate uncertainty particularly well. Death feels like the greatest unknown of all. (Harris’ argument parallels that of anthropologist Pascal Boyer on the origin of all religions.) But the phenomenon of death is not unknown to us. We are surrounded by it daily, from the plants and animals we consume to the pets we sadly let go when their lifespans end. Do we keep ourselves up wondering what happened to these beings? Maybe our spiritual or religious beliefs aren’t always about death….

“In the Old Testament there isn’t really any sort of view of the afterlife,” says Rob Bell, a spiritual teacher (and the only talking head here not aligned with a scientific institution or rationalist movement). “This idea that the whole thing is about when you die is not really the way that lots of people have thought about it.” For many religious practitioners, the idea of eternal life means “living in harmony with the divine right now.” For many, this “right now” — this very moment and each one we experience after it — is eternal. See more views of the afterlife above from science educators like Bill Nye and scientists like Michio Kaku, who says the kind of afterlives we’ve only seen in science fiction — “digital and genetic immortality” — “are within reach.”

Related Content:

Benedict Cumberbatch Reads Nick Cave’s Beautiful Letter About Grief

Richard Feynman on Religion, Science, the Search for Truth & Our Willingness to Live with Doubt

Michio Kaku & Brian Greene Explain String Theory in a Nutshell: Elegant Explanations of an Elegant Theory

Philosopher Sam Harris Leads You Through a 26-Minute Guided Meditation

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness.

Sci-Fi Writer Arthur C. Clarke Predicts the Future in 1964: Artificial Intelligence, Instantaneous Global Communication, Remote Work, Singularity & More

Are you feeling confident about the future? No? We understand. Would you like to know what it was like to feel a deep certainty that the decades to come were going to be filled with wonder and the fantastic? Well then, gaze upon this clip from the BBC Archive YouTube channel of sci-fi author Arthur C. Clarke predicting the future in 1964.

Although we know him best for writing 2001: A Space Odyssey, the 1964 television viewing public would have known him for his futurism and his talent for calmly explaining all the great things to come. In the late 1940s, he had already predicted telecommunication satellites. In 1962 he published his collected essays, Profiles of the Future, which contains many of the ideas in this clip.




Here he correctly predicts the ease with which we can be contacted wherever in the world we choose to be, and can contact our friends “anywhere on earth even if we don’t know their location.” What Clarke doesn’t predict here is how “location” isn’t a thing when we’re on the internet. He imagines people working just as well from Tahiti or Bali as they do from London. Clarke sees this advancement as the downfall of the modern city, since we would no longer need to commute into the city to work. Now, as so many of us are doing our jobs from home post-COVID, we’ve also discovered the dystopia in that fantasy. (It certainly hasn’t dropped the cost of rent.)

Next, he predicts advances in biotechnology that would allow us to, say, train monkeys to work as servants and workers. (Until, he jokes, they form a union and “we’d be back right where we started.”) Perhaps, he says, humans have stopped evolving—what comes next is artificial intelligence (although that phrase had yet to be used) and machine evolution, where we’d be honored to be the “stepping stone” towards that destiny. Make of that what you will. I know you might think it would be cool to have a monkey butler, but c’mon, think of the ethics, not to mention the cost of bananas.

Pointing out where Clarke gets it wrong is too easy—nobody gets it right all of the time. However, it is fascinating that some things that have never come to pass—being able to learn a language overnight, or erasing your memories—have managed to resurface over the years in fiction films like Eternal Sunshine of the Spotless Mind. His ideas of cryogenic suspension are staples of numerous hard sci-fi films.

And we are still waiting for the “Replicator” machine, which would make exact duplicates of objects (and by so doing cause a collapse into “gluttonous barbarism” because we’d want unlimited amounts of everything). Some commenters call this a precursor to 3-D printing. I’d say otherwise, but something very close to it might be around the corner. Who knows? Clarke himself agrees about all this conjecture: it’s doomed to fail.

“That is why the future is so endlessly fascinating. Try as we can, we’ll never outguess it.”

Related Content:

Hear Arthur C. Clarke Read 2001: A Space Odyssey: A Vintage 1976 Vinyl Recording

Isaac Asimov Predicts the Future on The David Letterman Show (1980)

How Previous Decades Predicted the Future: The 21st Century as Imagined in the 1900s, 1950s, 1980s, and Other Eras

Octavia Butler’s Four Rules for Predicting the Future

Ted Mills is a freelance writer on the arts who currently hosts the Notes from the Shed podcast and is the producer of KCRW’s Curious Coast. You can also follow him on Twitter at @tedmills, and/or watch his films here.

Marie Curie’s Ph.D. Thesis on Radioactivity–Which Made Her the First Woman in France to Receive a Doctoral Degree in Physics


For her groundbreaking research on radioactivity, Marie Curie won the Nobel Prize. Or rather, she won two, one for physics and another for chemistry, making her the only Nobel Laureate in more than one science. What’s more, her first Nobel came in 1903, the very same year she completed her PhD thesis at the Sorbonne. In Recherches sur les substances radioactives (or Research on Radioactive Substances), Curie “talks about the discovery of the new elements radium and polonium, and also describes how she gained one of the first understandings of the new physical phenomenon of radioactivity.”

So says science YouTuber Toby Hendy in the introduction below to Curie’s thesis–a thesis that made her the first woman in France to receive a doctoral degree in physics. “Following on from the discovery of X-rays by Wilhelm Roentgen in 1895 and Henri Becquerel’s discovery that uranium salts emitted similar penetration properties,” says The Document Centre, Curie “investigated uranium rays as a starting point, but in the process discovered that the air around uranium rays is made to conduct electricity.”




Her deduction that “the process was caused by properties of the atoms themselves” — a revolutionary finding that overturned previously held notions in physics — led her eventually to discover radium and polonium, which would get her that second Nobel in 1911.

Unlike her Nobel Prize in physics, which she shared with her husband Pierre and the physicist Henri Becquerel, Marie Curie won her Nobel Prize in chemistry alone. By 1911 Pierre had been dead for half a decade, but nothing could stop Marie from continuing their pioneering research as far as she could take it in her own lifetime. She clearly knew how vast a field her work, with and without her husband, had opened up: “Our researches upon the new radio-active bodies have given rise to a scientific movement,” she writes at the end of Recherches sur les substances radioactives. That movement continues to make discoveries more than a century later — and her original thesis itself remains radioactive.

Related content:

An Animated Introduction to the Life & Work of Marie Curie, the First Female Nobel Laureate

Marie Curie Became the First Woman to Win a Nobel Prize, the First Person to Win Twice, and the Only Person in History to Win in Two Different Sciences

Marie Curie Invented Mobile X-Ray Units to Help Save Wounded Soldiers in World War I

How American Women “Kickstarted” a Campaign to Give Marie Curie a Gram of Radium, Raising $120,000 in 1921

Marie Curie Attended a Secret, Underground “Flying University” When Women Were Banned from Polish Universities

Marie Curie’s Research Papers Are Still Radioactive 100+ Years Later

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities, the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall, on Facebook, or on Instagram.

Why Do Oreos Never Come Apart Evenly?: MIT Researchers Build an “Oreometer” to Find the Answer

Despite having been around for well over a century, the Oreo cookie has managed to retain certain mysteries. Why, for example, does it never come apart evenly? Though different Oreo-eaters prefer different methods of Oreo-eating, an especially popular approach to the world’s most popular cookie involves twisting it open before consumption. That action produces two separate chocolate wafers, but as even kindergarteners know from long and frustrating experience, the crème filling sticks only to one side. It seems that no manual technique, no matter how advanced, can split the contents of an Oreo close to evenly, and only recently has a team of researchers at the Massachusetts Institute of Technology sought an explanation.

This endeavor necessitated an investigation of the Oreo’s rheology — the study of the flow of matter, especially liquids but also “soft solids” like crème filling. Like all scientific research, it involved intensive experimentation, and even the invention of a new measurement device: in this case, a simple 3D-printable “Oreometer” (seen in animated action above) that uses pennies and rubber bands.




With it the researchers applied “varying degrees of torque and angular rotation, noting the values that successfully twisted each cookie apart,” writes MIT News‘ Jennifer Chu. “In all, the team went through about 20 boxes of Oreos, including regular, Double Stuf, and Mega Stuf levels of filling, and regular, dark chocolate, and ‘golden’ wafer flavors. Surprisingly, they found that no matter the amount of cream filling or flavor, the cream almost always separated onto one wafer.”

Crystal Owens, a mechanical engineering PhD candidate working on this project, puts this down in large part to how Oreos are made. “Videos of the manufacturing process show that they put the first wafer down, then dispense a ball of cream onto that wafer before putting the second wafer on top. Apparently that little time delay may make the cream stick better to the first wafer.” But other physical factors bear on the phenomenon as well, as documented in the paper Owens and her collaborators published earlier this year in the journal Physics of Fluids. “We introduce Oreology (/ɔriːˈɒlədʒi/), from the Nabisco Oreo for ‘cookie’ and the Greek rheo logia for ‘flow study,’ as the study of the flow and fracture of sandwich cookies,” they write in its abstract. For a scientifically inclined youngster, one could hardly imagine a more compelling field.

Related content:

Science & Cooking: Harvard’s Free Course on Making Cakes, Paella & Other Delicious Food

Norman Rockwell’s Typewritten Recipe for His Favorite Oatmeal Cookies

Dessert Recipes of Iconic Thinkers: Emily Dickinson’s Coconut Cake, George Orwell’s Christmas Pudding, Alice B. Toklas’ Hashish Fudge & More

Making Chocolate the Traditional Way, From Bean to Bar: A Short French Film

MIT Researchers 3D Print a Bridge Imagined by Leonardo da Vinci in 1502— and Prove That It Actually Works

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities, the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.

In 1704, Isaac Newton Predicted That the World Will End in 2060


We have become quite used to pronouncements of doom, from scientists predicting the sixth mass extinction due to the measurable effects of climate change, and from religionists declaring the apocalypse due to a surfeit of sin. It’s almost impossible to imagine these two groups of people agreeing on anything other than the ominous portent of their respective messages. But in the early days of the scientific revolution—the days of Shakespeare’s contemporary Francis Bacon and, later in the 17th century, Descartes—it was not at all unusual to find both kinds of reasoning, or unreasoning, in the same person, along with beliefs in magic, divination, astrology, etc.

Yet even in this maelstrom of heterodox thought and practices, Sir Isaac Newton stood out as a particularly odd co-existence of esoteric biblical prophecy, occult beliefs, and a rigid, formal mathematics that not only adhered to the inductive scientific method, but also expanded its potential by applying general axioms to specific cases.




Many of Newton’s general principles, however, would seem totally inimical to the naturalism of most physicists today. As he was formulating the principles of gravity and three laws of motion, for example, Newton also sought the legendary Philosopher’s Stone and attempted to turn base metal into gold. Moreover, the devoutly religious Newton wrote theological treatises interpreting Biblical prophecies and predicting the end of the world. The date he arrived at? 2060.


Newton seems, writes science blog Another Pale Blue Dot, “as confident of his predictions in this realm as he was in the rational world of science.” In a 1704 letter exhibited at Jerusalem’s Hebrew University, above, Newton describes his “recconing”:

So then the time times & half a time are 42 months or 1260 days or three years & an half, recconing twelve months to a yeare & 30 days to a month as was done in the Calendar of the primitive year. And the days of short lived Beasts being put for the years of lived [sic] kingdoms, the period of 1260 days, if dated from the complete conquest of the three kings A.C. 800, will end A.C. 2060. It may end later, but I see no reason for its ending sooner.

Newton further demonstrates his confidence in the next sentence, writing that his intent, “though not to assert” an answer, should in any event “put a stop the rash conjectures of fancifull men who are frequently predicting the time of the end.” Indeed. So how did he arrive at this number? Newton applied a rigorous method, to be sure.

If you have the patience for an exhaustive description of how he worked out his prediction using the Book of Daniel, you may read one here by historian of science Stephen Snobelen, who also points out how widespread the interest in Newton’s odd beliefs has become, reaching across every continent, though scholars have known about this side of the Enlightenment giant for a long time.

For a sense of the exacting, yet completely bizarre flavor of Newton’s prophetic calculations, see another Newton letter at the end of the post, transcribed below.

Prop. 1. The 2300 prophetick days did not commence before the rise of the little horn of the He Goat.

2 Those day [sic] did not commence a[f]ter the destruction of Jerusalem & ye Temple by the Romans A.[D.] 70.

3 The time times & half a time did not commence before the year 800 in wch the Popes supremacy commenced

4 They did not commence after the re[ig]ne of Gregory the 7th. 1084

5 The 1290 days did not commence b[e]fore the year 842.

6 They did not commence after the reigne of Pope Greg. 7th. 1084

7 The diffence [sic] between the 1290 & 1335 days are a parts of the seven weeks.

Therefore the 2300 years do not end before ye year 2132 nor after 2370.

The time times & half time do n[o]t end before 2060 nor after [2344]

The 1290 days do not begin [this should read: end] before 2090 [Newton might mean: 2132] nor after 1374 [sic; Newton probably means 2374]

The editorial insertions are Professor Snobelen’s, who thinks the letter dates “from after 1705,” and that “the shaky handwriting suggests a date of composition late in Newton’s life.” Whatever the exact date, we see him much less certain here; Newton pushes around some other dates—2344, 2090 (or 2132), 2374. All of them seem arbitrary, but “given the nice roundness of the number,” writes Motherboard, “and the fact that it appears in more than one letter,” 2060 has become his most memorable dating for the apocalypse.
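Underneath the propositions, the arithmetic is plain day-for-a-year addition: each prophetic “day” counts as a calendar year, so an end date is simply an assumed start year plus the length of the period. A minimal sketch of the sums behind the letter’s bounds (the helper function is ours, not Newton’s):

```python
# Newton's day-for-a-year reckoning: end year = start year + prophetic period.
# Start-year bounds come from his propositions: the papal supremacy in 800,
# the year 842, and the end of Gregory VII's reign in 1084.

def end_range(period_years, earliest_start, latest_start):
    """Earliest and latest possible end years for a prophetic period."""
    return earliest_start + period_years, latest_start + period_years

# The "time, times & half a time" = 1260 prophetic days/years:
print(end_range(1260, 800, 1084))   # (2060, 2344)

# The 1290 prophetic days/years:
print(end_range(1290, 842, 1084))   # (2132, 2374)
```

These simple sums recover exactly the figures in the letter: 2060 and 2344 for the 1260-day period, and 2132 and 2374 (the numbers Snobelen supplies editorially) for the 1290-day period.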

It’s important to note that Newton didn’t believe the world would “end” in the sense of cease to exist or burn up in holy flames. His end times philosophy resembles that of a surprising number of current day evangelicals: Christ would return and reign for a millennium, the Jewish diaspora would return to Israel and would, he wrote, set up “a flourishing and everlasting Kingdom.” We hear such statements often from televangelists, school boards, governors, and presidential candidates.

As many people have argued, despite Newton’s conception of his scientific work as a bulwark against other theologies, it ultimately became a foundation for Deism and Naturalism, and has allowed scientists to make accurate predictions for hundreds of years. 20th century physics may have shown us a much more radically unstable universe than Newton ever imagined, but his theories are, as Isaac Asimov would put it, “not so much wrong as incomplete,” and still essential to our understanding of certain fundamental phenomena. But as fascinating and curious as Newton’s other interests may be, there’s no more reason to credit his prophetic calculations than those of the Millerites, Harold Camping, or any other apocalyptic doomsday sect.

Note: An earlier version of this post appeared on our site in 2015.

Related Content:

M.I.T. Computer Program Predicts in 1973 That Civilization Will End by 2040

Isaac Newton Creates a List of His 57 Sins (Circa 1662)

Isaac Newton Conceived of His Most Groundbreaking Ideas During the Great Plague of 1665

Videos Recreate Isaac Newton’s Neat Alchemy Experiments: Watch Silver Get Turned Into Gold

The Iconic Design of the Doomsday Clock Was Created 75 Years Ago: It Now Says We’re 100 Seconds to Midnight

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness.

Open Culture was founded by Dan Colman.