How Can We Know What Is True? And What Is BS? Tips from Carl Sagan, Richard Feynman & Michael Shermer

Science denialism may be a deeply entrenched and enormously damaging political phenomenon. But it is not a wholly practical one, or we would see many more people abandon medical science, air travel, computer technology, etc. Most of us tacitly agree that we know certain truths about the world—gravitational force, navigational technology, the germ theory of disease, for example. How do we acquire such knowledge, and how do we use the same method to test and evaluate the many new claims we're bombarded with daily?

The problem, many professional skeptics would say, is that we’re largely unaware of the epistemic criteria for our thinking. We believe some ideas and doubt others for a host of reasons, many of them having nothing to do with standards of reason and evidence scientists strive towards. Many professional skeptics even have the humility to admit that skeptics can be as prone to irrationality and cognitive biases as anyone else.




Carl Sagan had a good deal of patience with unreason, at least in his writing and television work, which exhibits so much rhetorical brilliance and depth of feeling that he might have been a poet in another life. His style and personality made him a very effective science communicator. But what he called his “Baloney Detection Kit,” a set of “tools for skeptical thinking,” is not at all unique to him. Sagan’s principles agree with those of all proponents of logic and the scientific method. You can read just a few of his prescriptions below, and the full, unabridged list here.

Wherever possible there must be independent confirmation of the “facts.”

Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.

Arguments from authority carry little weight — “authorities” have made mistakes in the past. They will do so again in the future. Perhaps a better way to say it is that in science there are no authorities; at most, there are experts.

Spin more than one hypothesis. If there’s something to be explained, think of all the different ways in which it could be explained. Then think of tests by which you might systematically disprove each of the alternatives.

Try not to get overly attached to a hypothesis just because it’s yours. It’s only a way station in the pursuit of knowledge. Ask yourself why you like the idea. Compare it fairly with the alternatives. See if you can find reasons for rejecting it. If you don’t, others will.

Another skeptic, founder and editor of Skeptic magazine Michael Shermer, grounds his epistemology in a sympathetic neuroscience frame. We’re all prone to “believing weird things,” as he puts it in his book Why People Believe Weird Things and his short video above, where he introduces, following Sagan, his own “Baloney Detection Kit.” The human brain, he explains, evolved to see patterns everywhere as a matter of survival. All of our brains do it, and we all get a lot of false positives.

Many of those false positives become widespread cultural beliefs. Shermer himself has been accused of insensitive cultural bias (evident in the beginning of his video), intellectual arrogance, and worse. But he admits up front that scientific thinking should transcend individual personalities, including his own. “You shouldn’t believe anybody based on authority or whatever position they might have,” he says. “You should check it out yourself.”

Some of the ways to do so when we encounter new ideas involve asking “How reliable is the source of the claim?” and “Have the claims been verified by somebody else?” Returning to Sagan’s work, Shermer offers an example of contrasting scientific and pseudoscientific approaches—the SETI (Search for Extraterrestrial Intelligence) Institute and UFO believers. The latter, he says, uncritically seek out confirmation for their beliefs, whereas the scientists at SETI rigorously try to disprove hypotheses in order to rule out false claims.

Yet it remains the case that many people—and not all of them in good faith—think they’re using science when they aren’t. Another popular science communicator, physicist Richard Feynman, recommended one method for testing whether we really understand a concept or whether we’re just repeating something that sounds smart but makes no logical sense, what Feynman calls “a mystic formula for answering questions.” Can a concept be explained in plain English, without any technical jargon? Can we ask questions about it and make direct observations that confirm or disconfirm its claims?

Feynman was especially sensitive to what he called “intellectual tyranny in the name of science.” And he recognized that turning forms of knowing into empty rituals resulted in pseudoscientific thinking. In a wonderfully rambling, informal, and autobiographical speech he gave in 1966 to a meeting of the National Science Teachers Association, Feynman concluded that thinking scientifically as a practice requires skepticism of science as an institution.

“Science is the belief in the ignorance of experts,” says Feynman. “If they say to you, ‘Science has shown such and such,’ you might ask, ‘How does science show it? How did the scientists find out? How? What? Where?’” Asking such questions does not mean we should reject scientific conclusions because they conflict with cherished beliefs, but rather that we shouldn't take even scientific claims on faith.

For elaboration on Shermer’s, Sagan’s, and Feynman’s approaches to telling good scientific thinking from bad, read these articles in our archive:

Carl Sagan Presents His “Baloney Detection Kit”: 8 Tools for Skeptical Thinking

Richard Feynman Creates a Simple Method for Telling Science From Pseudoscience (1966)

Richard Feynman’s “Notebook Technique” Will Help You Learn Any Subject–at School, at Work, or in Life

Michael Shermer’s Baloney Detection Kit: What to Ask Before Believing

 

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

Marie Curie Invented Mobile X-Ray Units to Help Save Wounded Soldiers in World War I

These days the phrase “mobile x-ray unit” is likely to spark heated debate about privacy, public health, and freedom of information, especially in New York City, where the police force has been less than forthcoming about its use of military-grade Z Backscatter surveillance vans.

A hundred years ago, mobile X-ray units were a brand-new innovation, and a godsend for soldiers wounded on the front in World War I. Prior to the advent of this technology, field surgeons racing to save lives operated blindly, often causing even more injury as they groped for bullets and shrapnel whose precise locations remained a mystery.




Marie Curie was just setting up shop at Paris’ Radium Institute, a world center for the study of radioactivity, when war broke out. Many of her researchers left to fight, while Curie personally delivered France’s sole sample of radium by train to the temporarily relocated seat of government in Bordeaux.

“I am resolved to put all my strength at the service of my adopted country, since I cannot do anything for my unfortunate native country just now…,” Curie, a Pole by birth, wrote to her lover, physicist Paul Langevin on New Year’s Day, 1915.

To that end, she envisioned a fleet of vehicles that could bring X-ray equipment much closer to the battlefield, shifting their coordinates as necessary.

Rather than leaving the execution of this brilliant plan to others, Curie sprang into action.

She studied anatomy and learned how to operate the equipment so she would be able to read X-ray films like a medical professional.

She learned how to drive and fix cars.

She used her connections to solicit donations of vehicles, portable electric generators, and the necessary equipment, kicking in generously herself. (When the French National Bank refused her offer to donate her gold Nobel Prize medals to the war effort, she bought war bonds with the bulk of her prize money instead.)

She was hampered only by backwards-thinking bureaucrats whose feathers were ruffled by the prospect of female technicians and drivers, no doubt forgetting that most of France’s able-bodied men were otherwise engaged.

Curie, no stranger to sexism, refused to bend to their will, delivering equipment to the front line and X-raying wounded soldiers, assisted by her 17-year-old daughter, Irène, who like her mother, took care to keep her emotions in check while working with maimed and distressed patients.

"In less than two years," writes Amanda Davis at The Institute, "the number of units had grown substantially, and the Curies had set up a training program at the Radium Institute to teach other women to operate the equipment." Eventually, they recruited about 150 women, training them to man the Little Curies, as the mobile radiography units came to be known.

via Brain Pickings

Related Content:

Marie Curie Attended a Secret, Underground “Flying University” When Women Were Banned from Polish Universities

An Animated Introduction to the Life & Work of Marie Curie, the First Female Nobel Laureate

Marie Curie’s Research Papers Are Still Radioactive 100+ Years Later

Ayun Halliday is an author, illustrator, theater maker and Chief Primatologist of the East Village Inky zine. Her interest in women's wartime contributions has manifested itself in comics on "Crazy Bet" Van Lew and the Maidenform factory's manufacture of WWII carrier pigeon vests. Follow her @AyunHalliday.

The Map of Computer Science: New Animation Presents a Survey of Computer Science, from Alan Turing to “Augmented Reality”

I’ve never wanted to start a sentence with “I’m old enough to remember…” because, well, who does? But here we are. I remember the enormously successful Apple IIe and Commodore 64, and a world before Microsoft. Smart phones were science fiction. To do much more than word process or play games one had to learn a programming language. These ancient days seemed at the time—and in hindsight as well—to be the very dawn of computing. Before the personal computer, such devices were the size of kitchen appliances and were hidden away in military installations, universities, and NASA labs.

But of course we all know that the history of computing goes far beyond the early 80s: at least back to World War II, and perhaps even much farther. Do we begin with the abacus, the 2,200-year-old Antikythera Mechanism, the astrolabe, Ada Lovelace and Charles Babbage? The question is perhaps one of definition. In the short, animated video above, physicist, science writer, and YouTube educator Dominic Walliman defines the computer according to its basic binary function of “just flipping zeros and ones,” and he begins his condensed history of computer science with tragic genius Alan Turing of Turing Test and Bletchley Park codebreaking fame.




Turing’s most significant contribution to computing came from his 1936 concept of the “Turing Machine,” a theoretical mechanism that could, writes the Cambridge Computer Laboratory, “simulate ANY computer algorithm, no matter how complicated it is!” All other designs, says Walliman—apart from a quantum computer—are equivalent to the Turing Machine, “which makes it the foundation of computer science.” But since Turing’s time, the simple design has come to seem endlessly capable of adaptation and innovation.
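The idea is easier to grasp in code. Below is a minimal sketch of a Turing machine simulator in Python; the rule format, the tape alphabet, and the "increment a binary number" program are illustrative assumptions of ours, not anything taken from Walliman's video or Turing's paper.

```python
# A minimal Turing machine: a finite rule table reading and writing symbols
# on an unbounded tape. The "binary increment" program below is a made-up
# example used only to show the shape of the mechanism.

def run_turing_machine(rules, tape, state="start", head=0, blank="_"):
    tape = dict(enumerate(tape))              # sparse tape: position -> symbol
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# Rules: (state, symbol read) -> (symbol to write, head move, next state)
increment = {
    ("start", "0"): ("0", "R", "start"),      # scan right past the digits
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),      # step back onto the last digit
    ("carry", "1"): ("0", "L", "carry"),      # 1 plus carry -> 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),       # 0 plus carry -> 1, done
    ("carry", "_"): ("1", "L", "halt"),       # ran off the left edge: prepend 1
}

print(run_turing_machine(increment, "1011"))  # prints 1100 (11 + 1 = 12)
```

The point of the sketch is its shape: a finite rule table plus an unbounded tape is, in principle, enough to run any algorithm a modern computer can, which is what makes the design "the foundation of computer science."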

Walliman illustrates the computer's exponential growth by pointing out that a smart phone has more computing power than the entire world possessed in 1963, and that the computing capability that first landed astronauts on the moon is equal to “a couple of Nintendos” (first-generation classic consoles, judging by the image). But despite the hubris of the computer age, Walliman notes that “there are some problems which, due to their very nature, can never be solved by a computer” either because of the degree of uncertainty involved or the degree of inherent complexity. This fascinating yet abstract discussion is where Walliman’s “Map of Computer Science” begins, and for most of us this will probably be unfamiliar territory.

We’ll feel more at home once the map moves from the region of Computer Theory to that of Computer Engineering, but while Walliman covers familiar ground here, he does not dumb it down. Once we get to applications, we’re in the realm of big data, natural language processing, the internet of things, and “augmented reality.” From here on out, computer technology will only get faster, and weirder, despite the fact that the “underlying hardware is hitting some hard limits.” Certainly this very quick course in Computer Science only makes for an introductory survey of the discipline, but like Walliman’s other maps—of mathematics, physics, and chemistry—this one provides us with an impressive visual overview of the field that is both broad and specific, and that we likely wouldn’t encounter anywhere else.

As with his other maps, Walliman has made the Map of Computer Science available as a poster, perfect for dorm rooms, living rooms, or wherever else you might need a reminder.

Related Content:

Free Online Computer Science Courses

How Ada Lovelace, Daughter of Lord Byron, Wrote the First Computer Program in 1842–a Century Before the First Computer

Watch Breaking the Code, About the Life & Times of Alan Turing (1996)

The Map of Mathematics: Animation Shows How All the Different Fields in Math Fit Together

The Map of Physics: Animation Shows How All the Different Fields in Physics Fit Together

The Map of Chemistry: New Animation Summarizes the Entire Field of Chemistry in 12 Minutes

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

Trigonometry Discovered on a 3700-Year-Old Ancient Babylonian Tablet

One presumption of television shows like Ancient Aliens and books like Chariots of the Gods is that ancient people—particularly non-western people—couldn’t possibly have constructed the elaborate infrastructure and monumental architecture and statuary they did without the help of extra-terrestrials. The idea is intriguing, giving us the hugely ambitious sci-fi fantasies woven into Ridley Scott’s revived Alien franchise. It is also insulting in its level of disbelief about the capabilities of ancient Egyptians, Mesopotamians, South Americans, South Sea Islanders, etc.

We assume the Greeks perfected geometry, for example, and refer to the Pythagorean theorem, although this principle was probably well-known to ancient Indians. Since at least the 1940s, mathematicians have also known that the “Pythagorean triples”—integer solutions to the theorem—appeared 1000 years before Pythagoras on a Babylonian tablet called Plimpton 322. Dating back to sometime between 1822 and 1762 B.C. and discovered in southern Iraq in the early 1900s, the tablet has recently been re-examined by mathematicians Daniel Mansfield and Norman Wildberger of Australia’s University of New South Wales and found to contain even more ancient mathematical wisdom, “a trigonometric table, which is 3,000 years ahead of its time.”
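For readers who want to see what "integer solutions to the theorem" means in practice, here is a small Python sketch using Euclid's parameterization, which of course postdates the tablet by well over a millennium and is offered purely as illustration, not as a reconstruction of the Babylonian method.

```python
# Euclid's formula: for integers m > n > 0, the numbers
# (m^2 - n^2, 2mn, m^2 + n^2) always satisfy a^2 + b^2 = c^2.

def euclid_triple(m, n):
    a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
    assert a * a + b * b == c * c   # the Pythagorean relation itself
    return a, b, c

# Prints (3, 4, 5), (8, 6, 10), (5, 12, 13), ... -- small cousins of the
# much larger triples recorded on Plimpton 322.
for m in range(2, 5):
    for n in range(1, m):
        print(euclid_triple(m, n))
```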




In a paper published in Historia Mathematica the two conclude that Plimpton 322’s Babylonian creators detailed a “novel kind of trigonometry,” 1000 years before Pythagoras and Greek astronomer Hipparchus, who has typically received credit for trigonometry’s discovery. In the video above, Mansfield introduces the unique properties of this “scientific marvel of the ancient world," an enigma that has “puzzled mathematicians,” he writes in his article, “for more than 70 years.” Mansfield is confident that his research will fundamentally change the way we understand scientific history. He may be overly optimistic about the cultural forces that shape historical narratives, and he is not without his scholarly critics either.

Eleanor Robson, an expert on Mesopotamia at University College London, has not published a formal critique, but she did take to Twitter to register her dissent, writing, “for any historical document, you need to be able to read the language & know the historical context to make sense of it. Maths is no exception.” The trigonometry hypothesis, she writes in a follow-up tweet, is “tediously wrong.” Mansfield and Wildberger may not be experts in ancient Mesopotamian language and culture, it's true, but Robson is also not a mathematician. “The strongest argument” in the Australian researchers’ favor, writes Kenneth Chang at The New York Times, is that “the table works for trigonometric calculations.” As Mansfield says, “you don’t make a trigonometric table by accident.”

Plimpton 322 uses ratios rather than angles and circles. “But when you arrange it in such a way that you can use any known ratio of a triangle to find the other side of a triangle,” says Mansfield, “then it becomes trigonometry. That’s what we can use this fragment for.” As for what the ancient Babylonians used it for, we can only speculate. Robson and others have proposed that the tablet was a teaching guide. Mansfield believes “Plimpton 322 was a powerful tool that could have been used for surveying fields or making architectural calculations to build palaces, temples or step pyramids.”
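As a toy illustration of the ratio-based approach Mansfield describes (not a reconstruction of the tablet's actual sexagesimal entries, and with made-up numbers), here is how a tabulated side ratio, rather than an angle, can recover an unknown side:

```python
import math

# Store the ratio between two sides of a right triangle instead of an angle,
# then use that ratio to infer an unknown side of a similar triangle.
short, long = 3.0, 4.0            # a reference right triangle (3-4-5)
ratio = short / long              # the quantity a ratio table would record

measured_long = 20.0              # a side measured in the field
inferred_short = ratio * measured_long                 # 15.0
diagonal = math.hypot(inferred_short, measured_long)   # 25.0, by the theorem

print(inferred_short, diagonal)
```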

Whatever its ancient use, Mansfield thinks the tablet “has great relevance for our modern world… practical applications in surveying, computer graphics and education.” Given the possibilities, Plimpton 322 might serve as “a rare example of the ancient world teaching us something new,” should we choose to learn it. That knowledge probably did not originate in outer space.

Related Content:

How the Ancient Greeks Shaped Modern Mathematics: A Short, Animated Introduction

Ancient Maps that Changed the World: See World Maps from Ancient Greece, Babylon, Rome, and the Islamic World

Hear The Epic of Gilgamesh Read in the Original Akkadian and Enjoy the Sounds of Mesopotamia

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

A New Animation Explains How Caffeine Keeps Us Awake

Let’s preface this by recalling that Honoré de Balzac drank up to 50 cups of coffee a day and lived to the ripe old age of … 51.

Of course, he produced dozens of novels, plays, and short stories before taking his leave. Perhaps his caffeine habit had a little something to do with that?

Pharmacist Hanan Qasim’s TED-Ed primer on how caffeine keeps us awake top-loads the positive effects of the world’s most commonly used psychoactive substance. Global consumption is equivalent to the weight of 14 Eiffel Towers, measured in drops of coffee, soda, chocolate, energy drinks, decaf…and that’s just humans. Insects get theirs from nectar, though with them, a little goes a very long, potentially deadly way.




Caffeine’s structural resemblance to the neurotransmitter adenosine is what gives it that special oomph. Adenosine causes sleepiness by plugging into neural receptors in the brain, causing them to fire more sluggishly. Caffeine takes advantage of their similar molecular structures to slip into these receptors, effectively stealing adenosine’s parking space.

With a bioavailability of 99%, this interloper arrives ready to party.

On the plus side, caffeine is both a mental and physical pick-me-up.

In appropriate doses, it can keep your mind from wandering during a late-night study session.

It lifts the body’s metabolic rate and boosts performance during exercise—an effect that’s easily counteracted by getting the bulk of your caffeine from chocolate or sweetened soda, or by dumping another Eiffel Tower’s worth of sugar into your coffee.

There’s even some evidence that moderate consumption may reduce the likelihood of such diseases as Parkinson’s, Alzheimer’s, and cancer.

What to do when that caffeine effect starts wearing off?

Gulp down more!

As with many drugs, prolonged usage diminishes the sought-after effects, causing its devotees (or addicts, if you like) to seek out higher doses, negative side effects be damned. Nervous jitters, incontinence, birth defects, raised heart rate and blood pressure… it’s a compelling case for sticking with water.

Animator Draško Ivezić (a 3-latte-a-day man, according to his studio’s website) does a hilarious job of personifying both caffeine and the humans in its thrall, particularly an egg-shaped new father.

Go to TED-Ed to learn more, or test your grasp of caffeine with a quiz.

Related Content:

Wake Up & Smell the Coffee: The New All-in-One Coffee-Maker/Alarm Clock is Finally Here!

Physics & Caffeine: Stop Motion Film Uses a Cup of Coffee to Explain Key Concepts in Physics

This is Coffee!: A 1961 Tribute to Our Favorite Stimulant

Ayun Halliday is an author, illustrator, theater maker and Chief Primatologist of the East Village Inky zine.  Follow her @AyunHalliday.

Margaret Hamilton, Lead Software Engineer of the Apollo Project, Stands Next to Her Code That Took Us to the Moon (1969)

Photo courtesy of MIT Museum

When I first read news of the now-infamous Google memo writer who claimed with a straight face that women are biologically unsuited to work in science and tech, I nearly choked on my cereal. A dozen examples instantly crowded to mind of women who have pioneered the very basis of our current technology while operating at an extreme disadvantage in a culture that explicitly believed they shouldn’t be there, this shouldn’t be happening, women shouldn’t be able to do a “man’s job!”

The memo, as Megan Molteni and Adam Rogers write at Wired, “is a species of discourse peculiar to politically polarized times: cherry-picking scientific evidence to support a pre-existing point of view.” Its specious evolutionary psychology pretends to objectivity even as it ignores reality. As Mulder would say, the truth is out there, if you care to look, and you don’t need to dig through classified FBI files. Just, well, Google it. No, not the pseudoscience, but the careers of women in STEM without whom we might not have such a thing as Google.




Women like Margaret Hamilton, who, beginning in 1961, helped NASA “develop the Apollo program’s guidance system” that took U.S. astronauts to the moon, as Maia Weinstock reports at MIT News. “For her work during this period, Hamilton has been credited with popularizing the concept of software engineering." Robert McMillan put it best in a 2015 profile of Hamilton:

It might surprise today’s software makers that one of the founding fathers of their boys’ club was, in fact, a mother—and that should give them pause as they consider why the gender inequality of the Mad Men era persists to this day.

Hamilton was indeed a mother in her twenties with a degree in mathematics, working as a programmer at MIT and supporting her husband through Harvard Law, after which she planned to go to graduate school. But "the Apollo space program came along," and MIT's Instrumentation Laboratory contracted with NASA to fulfill John F. Kennedy's famous promise, made that same year, to land on the moon before the decade's end—and before the Soviets did. NASA accomplished that goal thanks to Hamilton and her team.

Photo courtesy of MIT Museum

Like many women crucial to the U.S. space program (many doubly marginalized by race and gender), Hamilton might have been lost to public consciousness were it not for a popular rediscovery. “In recent years,” notes Weinstock, "a striking photo of Hamilton and her team’s Apollo code has made the rounds on social media.” You can see that photo at the top of the post, taken in 1969 by a photographer for the MIT Instrumentation Laboratory. Used to promote the lab’s work on Apollo, the original caption read, in part, “Here, Margaret is shown standing beside listings of the software developed by her and the team she was in charge of, the LM [lunar module] and CM [command module] on-board flight software team.”

As Hank Green tells it in his condensed history above, Hamilton “rose through the ranks to become head of the Apollo Software development team.” Her focus on errors—how to prevent them and course correct when they arise—“saved Apollo 11 from having to abort the mission” of landing Neil Armstrong and Buzz Aldrin on the moon’s surface. McMillan explains that “as Hamilton and her colleagues were programming the Apollo spacecraft, they were also hatching what would become a $400 billion industry.” At Futurism, you can read a fascinating interview with Hamilton, in which she describes how she first learned to code, what her work for NASA was like, and what exactly was in those books stacked as high as she was tall. As a woman, she may have been an outlier in her field, but that fact is much better explained by the Occam’s razor of prejudice than by anything having to do with evolutionary determinism.

Note: You can now find Hamilton's code on Github.

Related Content:

How 1940s Film Star Hedy Lamarr Helped Invent the Technology Behind Wi-Fi & Bluetooth During WWII

How Ada Lovelace, Daughter of Lord Byron, Wrote the First Computer Program in 1842–a Century Before the First Computer

NASA Puts Its Software Online & Makes It Free to Download

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

Neil deGrasse Tyson is Creating a New Space Exploration Video Game with the Help of George R.R. Martin & Neil Gaiman

Although Neil deGrasse Tyson is somewhat hesitant to endorse plans to terraform and colonize Mars, that doesn't mean he doesn't like a good ol' (yet science-based) video game. Several outlets announced recently that the video game Space Odyssey, spearheaded by deGrasse Tyson, one of America's main defenders of logic and Enlightenment, has surpassed its Kickstarter funding goal. The game promises to send players on "real science-based missions to explore space, colonize planets, create and mod in real time."

In the game, according to deGrasse Tyson, “you control the formation of planets, of comets, of life, civilization. You could maybe tweak the force of gravity and see what effect that might have.” It will be, he says, “an exploration into the laws of physics and how they shape the world in which we live.”

The game has been forming for several years now, and most importantly to our readers, has called in several sci-fi and fantasy writers to help create the various worlds in the game, as they have aptly demonstrated their skills in doing so on the printed page. That includes George R.R. Martin, currently ignoring whatever HBO is doing to his creation Game of Thrones; Neil Gaiman, who creates a new universe every time he drops a new novel; and Len Wein, who has had a hand in creating both DC’s Swamp Thing and Marvel’s Wolverine. Also on board: deGrasse Tyson’s buddy Bill Nye, former NASA astronaut Mike Massimino, and astrophysicist Charles Liu.

The idea of world/galaxy-building is not new in video games, especially recently. No Man's Sky (2016) features "eighteen quintillion full-featured planets" and Minecraft seems limitless. But Space Odyssey (still a temporary title!) is the first to have deGrasse Tyson and friends working the controls in the background. And a game is only as good as the visionaries behind it.

 

According to the Kickstarter page, the raised funds will go into “the ability to have this community play the game and engage with it while the final build is underway. As the Kickstarter gaming community begins to beta test game-play and provide feedback, we can begin to use the funds raised via Kickstarter to incorporate your modding, mapping and building suggestions, together building the awesome gaming experience you helped to create.”

DeGrasse Tyson will be in the game himself, urging players onward. There’s no indication whether Mr. Martin will be popping up, though.

Related Content:

Neil deGrasse Tyson: “Because of Pink Floyd, I’ve Spent Decades Undoing the Idea That There’s a Dark Side of the Moon”

David Byrne & Neil deGrasse Tyson Explain the Importance of an Arts Education (and How It Strengthens Science & Civilization)

Are We Living in a Computer Simulation?: A 2-Hour Debate with Neil Degrasse Tyson, David Chalmers, Lisa Randall, Max Tegmark & More

Ted Mills is a freelance writer on the arts who currently hosts the artist interview-based FunkZone Podcast and is the producer of KCRW's Curious Coast. You can also follow him on Twitter at @tedmills, read his other arts writing at tedmills.com and/or watch his films here.
