As the co-founders of Impactstory describe it, Unpaywall is “an extension for Chrome and Firefox that links you to free full-text as you browse research articles. Hit a paywall? No problem: click the green tab and read it free!”
Their FAQ gets into the mechanics a little more, but here’s the gist of how it works: “When you view a paywalled research article, Unpaywall automatically looks for a copy in our index of over 10 million free, legal fulltext PDFs. If we find one, click the green tab to read the article.”
While many science publishers put a paywall in front of scientific articles, it’s often the case that these articles have been published elsewhere in an open format. “More and more funders and universities are requiring authors to upload copies of their papers to [open] repositories. This has created a deep resource of legal open access papers…” And that’s what Unpaywall draws on.
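The core mechanism described above is simple enough to sketch in a few lines: map an article identifier to a free, legal copy in an index of open-access locations, and return nothing when no copy exists. The DOIs and URLs below are invented placeholders for illustration, not Unpaywall's real data or code.

```python
# Toy sketch of the lookup at the heart of the extension: a paywalled
# article's DOI is checked against an index of free full-text copies.
OA_INDEX = {
    "10.1234/example.doi": "https://repository.example.edu/paper.pdf",
}

def find_free_copy(doi):
    """Return a free full-text URL if the index lists one, else None."""
    return OA_INDEX.get(doi)

print(find_free_copy("10.1234/example.doi"))   # a free copy: green tab
print(find_free_copy("10.9999/paywalled.only"))  # None: no free copy found
```

The real service works at far greater scale (over 10 million PDFs), but the hit-or-miss dictionary lookup captures the user-facing behavior: either a green tab appears, or it doesn't.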
This seems like quite a boon for researchers, journalists, students and policymakers. You can download the Unpaywall extension for Chrome and Firefox, or learn more about the new service at the Unpaywall website.
If you would like to sign up for Open Culture’s free email newsletter, please find it here. It’s a great way to see our new posts, all bundled in one email, each day.
If you would like to support the mission of Open Culture, consider making a donation to our site. It’s hard to rely 100% on ads, and your contributions will help us continue providing the best free cultural and educational materials to learners everywhere. You can contribute through PayPal, Patreon, and Venmo (@openculture). Thanks!
What do we live in: the only universe that exists, or an elaborate computer simulation of a universe? The question would have fascinated Isaac Asimov, and that presumably counts as one of the reasons the Isaac Asimov Memorial Debate took it as its subject last year. Though the so-called “simulation hypothesis” has, in various forms, crossed the minds of thinkers for millennia, it’s enjoyed a particular moment in the zeitgeist in recent years, not least because Elon Musk has publicly stated his view that, in all probability, we do indeed live in a simulation. And, if you can’t trust the guy who hit it big with Tesla and PayPal on the nature of reality, who can you?
Well, you might also consider listening to the perspectives of New York University philosopher David Chalmers, MIT cosmologist Max Tegmark, and three theoretical physicists, James Gates of the University of Maryland, Lisa Randall of Harvard, and Zohreh Davoudi of MIT.
They, with Neil deGrasse Tyson moderating, dig into the simulation hypothesis for two hours, approaching its origin, its plausibility, and its implications from every angle. Davoudi, who has done serious research on the question, brings her work to bear; Randall, who finds little reason to credit the notion that we live in a simulation in the first place, is more interested in why others suddenly find it so compelling.
Whether you believe it, reject it, or simply enjoy entertaining the idea, you can’t help but feel a strong reaction of one kind or another to the simulation hypothesis, and Tyson contributes his usual humor to knock the discussion back down to Earth whenever it threatens to become too abstract. But how should we respond, in the here and now (or “here” and now, if you prefer), to the possibility of living in a computed reality? The Matrix proposed a kind of simulation-hypothesis world whose heroes break out, but we may ultimately have no more ability to see the hardware running our world than Mario can see the hardware running his. “If you’re not sure whether you’re actually simulated or not,” says Tegmark, “my advice to you is to go out there and live really interesting lives and do unexpected things so the simulators don’t get bored and shut you down.” In these unreal times, you could certainly do worse.
Appearing at Oxford’s Sheldonian Theatre in 2013, evolutionary biologist Richard Dawkins fielded a question that’s now being asked unnervingly often in our anti-Enlightenment age.
Audience member: “The question is about the nature of scientific evidence. You both said, and I think most people here would agree with you, that we’re justified in holding a belief if there is evidence for it, or there are logical arguments we can find that support it. But it seems like this in itself is a belief, which would require some form of evidence. If so, I’m wondering what you think would count as evidence in favour of that and, if not, how do we justify choosing that heuristic without appealing to the same standard that we are trying to justify?”
Dawkins: “How do we justify, as it were, that science would give us the truth? It works. Planes fly, cars drive, computers compute. If you base medicine on science, you cure people; if you base the design of planes on science, they fly; if you base the design of rockets on science, they reach the moon. It works … bitches.”
A fascinating 20th century literary strain, “documentary poetics,” melds journalistic accounts, photography, official texts and memos, politics, and scientific and technical writing with lyrical and literary language. Perhaps best exemplified by Muriel Rukeyser, the category also includes, at certain times, James Agee, Langston Hughes, Richard Wright, Zora Neale Hurston, and—currently—Claudia Rankine and “powerhouse” new poet Solmaz Sharif. It does not include Edgar Allan Poe, famously alcoholic 19th century master of the macabre and “father of the detective story.”
But you’ll forgive me for thinking, excitedly, that it just might, when I learned Poe had published a text called The Conchologist’s First Book (1839), a condensation, rearrangement, and “remixing,” as Rebecca Onion writes at Slate, of “an existing… beautiful and expensive” science textbook, Thomas Wyatt’s Manual of Conchology, including the original plates and a “new preface and introduction.”
My mind reeled: what wondrous horrors might the morose, romantic Poe have contributed to such an enterprise, which turned out to be his best-selling work in his lifetime? (For it, Poe was paid $50 and, typically, received no royalties.) What kind of experimental madness might these covers contain?
As I might have assumed from the book’s total obscurity, Poe’s writerly contributions to the project were meager. For all his genius as a storyteller, he could be a long-winded bore as an essayist, and it seems this aspect of his voice dominated the original writing he did for the Conchologist’s First Book. His biographers, notes University of Houston professor emeritus John H. Lienhard, all “mutter an embarrassed apology for Poe’s shady side-track—then hurry back to talk about The Raven.” Onion quotes biographer Jeffrey Meyers, who writes, “Poe’s boring pedantic and hair-splitting Preface was absolutely guaranteed to torment and discourage even the most passionately interested schoolboy.”
As for its “shadiness,” the book also elicits embarrassment from Poe devotees because, as esteemed biologist and historian of science Stephen Jay Gould wrote in his exculpatory essay “Poe’s Greatest Hit,” it was “basically a scam,” though “not so badly done” as most allege. The naturalist Wyatt, a friend of Poe’s, had begged his publisher to release an abridged student edition of his original lavish and pricey $8 textbook, which had not sold well. When the publisher balked, Wyatt contracted Poe to lend his name and considerable editorial skill to a more-or-less bootleg “CliffsNotes” version to be sold for $1.50. To make matters worse, Poe and Wyatt were both accused of plagiarism, having “lifted chunks of their book from an English naturalist, Thomas Brown,” Lienhard points out.
Gould defended Poe as a rewriter of others’ work. “Yes, Poe plagiarized,” as Lienhard summarizes the argument. He presented Brown’s, and Wyatt’s, work as his own, but, “fluent in French, [he] went back to read Georges Cuvier, the great French naturalist” and made his own translations. He wrote his own introductory material, and he reorganized Wyatt’s book in such a way as to provide “genuinely useful insight into biological taxonomy.” Poe’s edition—with its “formidable subtitle,” A System of Testaceous Malacology, arranged Expressly for the Use of Schools—actually proved a hit with students, and likely not only because it sold cheap. It was the only publication in Poe’s lifetime to make it to a second edition.
Maybe humanist readers approach the work with biases firmly in place, expecting a genre that’s dry by its very nature to contain all the literary brilliance and entertaining intrigue of “The Tell-Tale Heart.” Lienhard suggests as much, describing irritation at how his “literary friends” ignore the scientific work of writers like Thoreau, Thomas Paine, Goethe, and poet Oliver Goldsmith. “Poe’s excursion into natural philosophy,” he writes, “was an embarrassment to people who are embarrassed by science in the first place.” Maybe.
Both Gould and Lienhard shrug off the less-than-scrupulous circumstances of the book’s creation, the latter citing a “cynical remark” by playwright Wilson Mizner: “If you steal from one author, it’s plagiarism. If you steal from many, it’s research.” At least he doesn’t go as far as Mark Twain, who once wrote in defense of Helen Keller, after she was charged with literary borrowing, “the kernel, the soul—let us go further and say the substance, the bulk, the actual and valuable material of all human utterance—is plagiarism.”
Richard Feynman knew his stuff. Had he not, he probably wouldn’t have won the Nobel Prize in Physics, let alone his various other prestigious scientific awards. But his reputation for learning all his life long with a special depth and rigor survives him, and in a sense accounts for his fame — of a degree that ensures his stern yet playful face will gaze out from dorm-room posters for generations to come — even more than does his “real” work. Many students of physics still, understandably, want to be like Feynman, but everyone else, even those of us with no interest in physics whatsoever, could also do well to learn from him: not from what he thought about, but from how he thought about it.
On his Study Hacks Blog, computer science professor Cal Newport explains what he calls “the Feynman notebook technique,” whereby “dedicating a notebook to a new learning task” can provide “concrete cues” to help mitigate the difficulty of starting out toward the mastery of a subject.
Feynman did it himself at least since his graduate-school days at Princeton when, according to biographer James Gleick, he once prepared for his oral examinations by opening a fresh notebook titled “NOTEBOOK OF THINGS I DON’T KNOW ABOUT.” In it “he reorganized his knowledge. He worked for weeks at disassembling each branch of physics, oiling the parts, and putting them back together, looking all the while for the raw edges and inconsistencies. He tried to find the essential kernels of each subject.”
“At first, the notebook pages are empty,” writes Newport, “but as they fill with careful notes, your knowledge also grows. The drive to fill more pages keeps your motivation stoked.” In other, more general terms: “Translate your growing knowledge of something hard into a concrete form and you’re more likely to keep investing the mental energy needed to keep learning.” But how sure can you feel of your newly acquired knowledge if you don’t regularly test it? Feynman had to go face-to-face with the elders of the Princeton physics department, but if you don’t benefit from that kind of institutional threat, you might consider putting into practice another Feynman technique: “teaching” what you’ve learned to someone else.
In addition to being a great scientist, explains study-skills vlogger Thomas Frank, Feynman “was also a great teacher and a great explainer,” owing to his ability to “boil down incredibly complex concepts and put them in simple language that other people could understand.” Only when Feynman could do that did he know he truly understood a concept himself — be it a concept in physics, safecracking, or bongo-playing. As Frank explains, “if you’re shaky on a concept and you want to quickly improve your understanding,” try your hand at producing a Feynmanesque simple explanation, which will “test your understanding and challenge your assumptions.” Just make sure to bear in mind one of Feynman’s most quotable quotes: “The first principle is that you must not fool yourself — and you are the easiest person to fool.” And if you find that you have indeed fooled yourself, head right back to the drawing board — or rather, to the notebook.
Since our website took flight a decade ago, we’ve kept you apprised of the free offerings made available by NASA–everything from collections of photography and space sounds, to software, ebooks, and posters. But there’s one item we missed last summer (blame it on the heat!). And that’s NASA PubSpace, an online archive that gives you free access to science journal articles funded by the space agency. Previously, these articles were hidden behind paywalls. Now, “all NASA-funded authors and co-authors … will be required to deposit copies of their peer-reviewed scientific publications and associated data into” NASA PubSpace.
One potential drawback of genius, it seems, is restlessness, a mind perpetually on the move. Of course, this is what makes many celebrated thinkers and artists so productive. That and the extra hours some gain by sacrificing sleep. Voltaire reportedly drank up to 50 cups of coffee a day, and seems to have suffered no particular ill effects. Balzac did the same, and died at 51. The caffeine may have had something to do with it. Both Socrates and Samuel Johnson believed that sleep is wasted time, and “so for years has thought grey-haired Richard Buckminster Fuller,” wrote Time magazine in 1943, “futurific inventor of the Dymaxion house, the Dymaxion car and the Dymaxion globe.”
Engineer and visionary Fuller intended his “Dymaxion” brand to revolutionize every aspect of human life, or—in the now-slightly-dated parlance of our obsession with all things hacking—he engineered a series of radical “lifehacks.” Given his views on sleep, that seemingly essential activity also received a Dymaxion upgrade, the trademarked name combining “dynamic,” “maximum,” and “tension.” “Two hours of sleep a day,” Fuller announced, “is plenty.” Did he consult with specialists? Medical doctors? Biologists? Nothing as dull as that. He did what many a mad scientist does in the movies. (In the search, as Vincent Price says at the end of The Fly, “for the truth.”) He cooked up a theory, and tested it on himself.
“Fuller,” Time reported, “reasoned that man has a primary store of energy, quickly replenished, and a secondary reserve (second wind) that takes longer to restore.” He hypothesized that we would need less sleep if we stopped to take a nap at “the first sign of fatigue.” Fuller trained himself to do just that, forgoing the typical eight hours, more or less, most of us get per night. He found—as have many artists and researchers over the years—that “after a half-hour nap he was completely refreshed.” Naps every six hours allowed him to shrink his total sleep per 24-hour period to two hours. Did he, like the 50s mad scientist, become a tragic victim of his own experiment?
No danger of merging him with a fly or turning him invisible. The experiment’s failure would have meant, at worst, a day in bed catching up on lost sleep. Instead, Fuller kept it up for two full years, 1932 and 1933, and reported feeling in “the most vigorous and alert condition that I have ever enjoyed.” He might have slept two hours a day in 30-minute increments indefinitely, Time suggests, but found that his “business associates… insisted on sleeping like other men,” and wouldn’t adapt to his eccentric schedule, though some not for lack of trying. In his book BuckyWorks, J. Baldwin claims, “I can personally attest that many of his younger colleagues and students could not keep up with him. He never seemed to tire.”
A research organization looked into the sleep system and “noted that not everyone was able to train themselves to sleep on command.” The point may seem obvious to the significant number of people who suffer from insomnia. “Bucky disconcerted observers,” Baldwin writes, “by going to sleep in thirty seconds, as if he had thrown an Off switch in his head. It happened so quickly that it looked like he had had a seizure.” Buckminster Fuller was undoubtedly an unusual human, but human all the same. Time reported that “most sleep investigators agree that the first hours of sleep are the soundest.” A Colgate University researcher at the time discovered that “people awakened after four hours’ sleep were just as alert, well-coordinated physically and resistant to fatigue” as those who slept the full eight.
Sleep research since the forties has made a number of other findings about variable sleep schedules among humans, studying shift workers’ sleep and the so-called “biphasic” pattern common in cultures with very late bedtimes and siestas in the middle of the day. The success of this sleep rhythm “contradicts the normal idea of a monophasic sleeping schedule,” writes Evan Murray at MIT’s Culture Shock, “in which all our time asleep is lumped into one block.” Biphasic sleep results in six or seven hours of sleep rather than the seven to nine of monophasic sleepers. Polyphasic sleeping, however, the kind pioneered by Fuller, seems to genuinely result in even less needed sleep for many. It’s an idea that’s only become widespread “within roughly the last decade,” Murray noted in 2009. He points to the rediscovery, without any clear indebtedness, of Fuller’s Dymaxion system by college student Maria Staver, who named her method “Uberman,” in honor of Nietzsche, and spread its popularity through a blog and a book.
Murray also reports on another blogger, Steve Pavlina, who conducted the experiment on himself and found that “over a period of 5 1/2 months, he was successful in adapting completely,” reaping the benefits of increased productivity. But like Fuller, Pavlina gave it up, not for “health reasons,” but because, he wrote, “the rest of the world is monophasic” or close to it. Our long block of sleep apparently contains a good deal of “wasted transition time” before we arrive at the necessary REM state. Polyphasic sleep trains our brains to get to REM more quickly and efficiently. For this reason, writes Murray, “I believe it can work for everyone.” Perhaps it can, provided they are willing to bear the social cost of being out of sync with the rest of the world. But people likely to practice Dymaxion Sleep for several months or years probably already are.
You don’t need to know anything at all about classical music, or even have any liking for it, to be deeply moved by that most famous of symphonies, Ludwig van Beethoven’s 9th—“perhaps the most iconic work of the Western musical tradition,” writes The Juilliard Journal in an article about its handwritten score. Commissioned in 1817, the sublime work was only completed in 1824. By that time, its composer was totally deaf. At the first performance, Beethoven did not notice that the massive final choral movement had ended, and one of the musicians had to turn him around to acknowledge the audience.
This may seem, says researcher Natalya St. Clair in the TED-Ed video above, like some “cruel joke,” but it’s the truth. Beethoven was so deaf that some of the most interesting artifacts he left behind are the so-called “conversation books,” kept from 1818 onward to communicate with visitors who had to write down their questions and replies. How then might it have been possible for the composer to create such enduringly thrilling, rapturous works of aural art?
Using the delicate, melancholy “Moonlight Sonata” (which the composer wrote in 1801, when he could still hear), St. Clair attempts to show us how Beethoven used mathematical “patterns hidden beneath the beautiful sounds.” (In the short video below, from the documentary The Genius of Beethoven, see the onset of Beethoven’s hearing loss in a dramatic reading of his letters.) According to St. Clair’s theory, Beethoven composed by observing “the mathematical relationship between the pitch frequency of different notes,” though he did not write his symphonies in calculus. It’s left rather unclear how the composer’s supposed intuition of mathematics and pitch corresponds with his ability to express such a range of emotions through music.
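To make the “mathematical relationship between the pitch frequency of different notes” concrete, here is the standard equal-temperament formula, in which each semitone step multiplies frequency by the twelfth root of two, so an octave exactly doubles it. This sketch (keyed to MIDI note numbers, with A4 = 440 Hz at note 69) is our modern illustration of the kind of pattern St. Clair means, not anything Beethoven himself wrote down.

```python
# Equal-temperament pitch: each of the 12 semitones in an octave
# multiplies frequency by 2**(1/12), so 12 steps doubles it.
def pitch_frequency(midi_note, a4_hz=440.0):
    """Frequency in Hz of a MIDI note number (A4 = note 69)."""
    return a4_hz * 2 ** ((midi_note - 69) / 12)

# A4 and the A one octave up: the simple 2:1 ratio behind "hidden patterns"
print(round(pitch_frequency(69), 1))  # 440.0
print(round(pitch_frequency(81), 1))  # 880.0
# A perfect fifth (7 semitones) lands very near the consonant 3:2 ratio
print(round(pitch_frequency(76) / pitch_frequency(69), 3))  # 1.498
```

Small whole-number frequency ratios like 2:1 and (approximately) 3:2 are what make intervals sound consonant, which is the sense in which pitch relationships are mathematical whether or not a composer thinks of them that way.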
We can learn more about Beethoven’s deafness and its biological relationship to his compositional style in the short video below with research fellow Edoardo Saccenti and his colleague Age Smilde from the Biosystems Data Analysis Group at Amsterdam’s Swammerdam Institute for Life Sciences. By counting the high and low frequencies in Beethoven’s complete string quartets, a task that took Saccenti many weeks, he and his team were able to show how three distinct compositional styles “correspond to stages in the progression of his deafness,” as they write in their paper (which you can download in PDF here).
The progression is unusual. As his condition worsened, Beethoven included fewer and fewer high frequency sounds in his compositions (giving cellists much more to do). By the time we get to 1824–26, “the years of the late string quartets and of complete deafness”—and of the completion of the 9th—the high notes have returned, due in part, Smilde says, to “the balance between an auditory feedback and the inner ear.” Beethoven’s reliance on his “inner ear” made his music “much and much richer.” How? As one violinist in the clip puts it, he was “given more freedom because he was not attached anymore to the physical sound, [he could] just use his imagination.”
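Saccenti’s actual analysis is statistical and took weeks, but the basic bookkeeping behind findings like these (tallying how much of a piece sits above a chosen “high” pitch threshold) can be sketched in miniature. The note lists, the MIDI encoding, and the cutoff below are all illustrative assumptions, not the paper’s data.

```python
# Toy sketch: what fraction of a piece's notes are "high-frequency"?
# Notes are MIDI numbers; 84 (C6) is an arbitrary cutoff for illustration.
def high_note_fraction(notes, threshold=84):
    """Fraction of notes at or above the threshold; 0.0 for an empty piece."""
    if not notes:
        return 0.0
    return sum(n >= threshold for n in notes) / len(notes)

early_piece = [60, 72, 86, 88, 91, 74, 85]    # invented data, many high notes
middle_piece = [55, 60, 62, 71, 64, 58, 66]   # invented data, none high
print(round(high_note_fraction(early_piece), 2))   # 0.57
print(round(high_note_fraction(middle_piece), 2))  # 0.0
```

Comparing such fractions across works from different years is, in spirit, how a corpus of scores can be made to trace the arc of a composer’s hearing.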
For all of the compelling evidence presented here, whether Beethoven’s genius in his painful later years is attributable to his intuition of complex mathematical patterns or to the total free rein of his imaginative inner ear may in fact be undiscoverable. In any case, no amount of rational explanation can explain away our astonishment that the man who wrote the unfailingly powerful, awesomely dynamic “Ode to Joy” finale (conducted above by Leonard Bernstein), couldn’t actually hear any of the music.