Psilocybin Could Soon Be a Legal Treatment for Depression: Johns Hopkins Professor Roland Griffiths Explains How Psilocybin Can Relieve Suffering

Much of the recent scientific research into psychedelics has picked up where researchers left off in the mid-20th century, before LSD, psilocybin, and other psychoactive drugs became countercultural means of consciousness expansion, and then banned substances the government sought to control. Scientists from several fields studied psychedelics as treatments for addiction, depression, and anxiety, and as aids in end-of-life care—applications conceived and tested several decades ago.

Now, thanks to some serious investment from high-profile institutions like Johns Hopkins University, and thanks to changing government attitudes toward psychoactive drugs, it may be possible for psilocybin, the active ingredient in “magic mushrooms,” to get legal approval for therapy in a clinical setting by 2021. “For the first time in U.S. history,” Shelby Hartman reports at Rolling Stone, “a psychedelic drug is on the fast track to getting approved for treating depression by the federal government.”

As Michael Pollan has detailed in his latest book, How to Change Your Mind, the possibilities for psilocybin and other such drugs are vast. “But before the Food and Drug Administration can be petitioned to reclassify it,” Brittany Shoot notes at Fortune, the drug “first has to clear phase III clinical trials. The entire process is expected to take about five years.” In the TEDMED video above, you can see Roland R. Griffiths, Professor of Psychiatry and Behavioral Sciences at Johns Hopkins, discuss the ways in which psilocybin, “under supported conditions, can occasion mystical-type experiences associated with enduring positive changes in attitudes and behavior.”

The implications of this research span the fields of ethics and medicine, psychology and religion, and it’s fitting that Dr. Griffiths leads off with a statement about the compatibility of spirituality and science, supported by a quote from Einstein, who said “the most beautiful and profound emotion we can experience is the sensation of the mystical. It’s the source of all true science.” But the work Griffiths and others have been engaged in is primarily practical in nature—though it does not at all exclude the mystical—such as finding effective means to treat depression in cancer patients.

“Sixteen million Americans suffer from depression and approximately one-third of them are treatment resistant,” Hartman writes. “Depression is also an epidemic worldwide, affecting 300 million people around the world.” Psychotropic drugs like psilocybin, LSD, and MDMA (which is not classified as a psychedelic) have long been shown to help many people suffering from severe mental illness and addiction.

Although such drugs present some potential for abuse, they are not highly addictive, especially relative to the flood of opioids on the legal market that are currently devastating whole communities as people use them to self-medicate. What has most prevented psychedelics from being researched and prescribed seems to have as much or more to do with long-standing prejudice and fear as with genuine concern for public health. (And that’s not even to mention the financial interests that exert tremendous pressure on drug policy.)

But now, Hartman writes, “it appears [researchers] have come too far to go back—and the federal government is finally recognizing it, too.” Find out why this research matters in Dr. Griffiths' talk, Pollan’s book, the Multidisciplinary Association for Psychedelic Studies, and some of the posts we’ve linked to below.

Related Content:

How to Use Psychedelic Drugs to Improve Mental Health: Michael Pollan’s New Book, How to Change Your Mind, Makes the Case

New LSD Research Provides the First Images of the Brain on Acid, and Hints at Its Potential to Promote Creativity

Artist Draws 9 Portraits While on LSD: Inside the 1950s Experiments to Turn LSD into a “Creativity Pill”

When Aldous Huxley, Dying of Cancer, Left This World Tripping on LSD, Experiencing “the Most Serene, the Most Beautiful Death” (1963)

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

Why Read Waiting For Godot?: An Animated Case for Samuel Beckett’s Classic Absurdist Play

Iseult Gillespie’s latest literature-themed TED-Ed lesson—Why should you read Waiting For Godot?—poses a question that’s not too difficult to answer these days.

The meaning of this surprisingly sturdy Absurdist play is famously open for debate.

Author Samuel Beckett told Roger Blin, who directed and acted in its first production at the Théâtre de Babylone in 1953, that all he knew for certain was that the two main characters, Vladimir and Estragon, wore bowler hats.




(Another thing he felt sure of was that they were male, and should only be brought to life by those in possession of a prostate gland, a specification that rankles female theater artists eager to take a crack at characters who now seem as universal as any in Shakespeare. The Beckett estate’s vigorous enforcement of the late playwright’s wishes is itself the subject of a play, The Underpants Godot by Duncan Pflaster.)

A “tragicomedy in two acts,” according to Beckett, Waiting for Godot emerged during a vibrant moment for experimental theater, as playwrights turned their backs on convention to address the devastation of WWII.

Comedy got darker. Boredom, religious dread, and existential despair were major themes.

Perhaps we are on the brink of such a period ourselves?

Critics, scholars, and directors have found Godot a meaningful lens through which to consider the Cold War, the French resistance, England’s colonization of Ireland, and various visions of an apocalyptic near-future.

Perhaps THAT is why we should read (and/or watch) Waiting for Godot.

Vladimir:

Was I sleeping, while the others suffered? Am I sleeping now? Tomorrow, when I wake, or think I do, what shall I say of today? That with Estragon my friend, at this place, until the fall of night, I waited for Godot? That Pozzo passed, with his carrier, and that he spoke to us? Probably. But in all that what truth will there be? (Estragon, having struggled with his boots in vain, is dozing off again. Vladimir looks at him.) He'll know nothing. He'll tell me about the blows he received and I'll give him a carrot. (Pause.) Astride of a grave and a difficult birth. Down in the hole, lingeringly, the grave digger puts on the forceps. We have time to grow old. The air is full of our cries. (He listens.) But habit is a great deadener. (He looks again at Estragon.) At me too someone is looking, of me too someone is saying, He is sleeping, he knows nothing, let him sleep on. (Pause.) I can't go on! (Pause.) What have I said?

Gillespie’s lesson, animated by Tomás Pichardo-Espaillat, above, includes a supplemental trove of resources and a quiz that educators can customize online.

Related Content:

Samuel Beckett Directs His Absurdist Play Waiting for Godot (1985)

Hear Waiting for Godot, the Acclaimed 1956 Production Starring The Wizard of Oz’s Bert Lahr

An Animated Introduction to Samuel Beckett, Absurdist Playwright, Novelist & Poet

“Try Again. Fail Again. Fail Better”: How Samuel Beckett Created the Unlikely Mantra That Inspires Entrepreneurs Today

The Books Samuel Beckett Read and Really Liked (1941-1956)

Watch the Opening Credits of an Imaginary 70s Cop Show Starring Samuel Beckett

Ayun Halliday is an author, illustrator, theater maker and Chief Primatologist of the East Village Inky zine.  Her play Zamboni Godot premiered in New York City in 2017. Join her in NYC on Monday, October 15 for another monthly installment of her book-based variety show, Necromancers of the Public Domain. Follow her @AyunHalliday.

This Is Your Kids’ Brains on Internet Algorithms: A Chilling Case Study Shows What’s Wrong with the Internet Today

Multimedia artist and writer James Bridle has a new book out, and it’s terrifying—appropriately so, I would say—in its analysis of “the dangers of trusting computers to explain (and, increasingly, run) the world,” as Adi Robertson writes at The Verge. Summing up one of his arguments in his New Dark Age: Technology and the End of the Future, Bridle writes, “We know more and more about the world, while being less and less able to do anything about it.” As Bridle tells Robertson in a short interview, he doesn’t see the problems as irremediable, provided we gain “some kind of agency within these systems.” But he insists that we must face head-on certain facts about our dystopian, sci-fi-like reality.

In the brief TED talk above, you can see Bridle do just that, beginning with an analysis of the millions of proliferating videos for children, with billions of views, on YouTube, a case study that quickly goes to some disturbing places. Videos showing a pair of hands unwrapping chocolate eggs to reveal a toy within “are like crack for little kids,” says Bridle, and children watch them over and over. Autoplay ferries them on to weirder and weirder iterations, which eventually end up with dancing Hitlers and their favorite cartoon characters performing lewd and violent acts. Some of the videos seem to be made by professional animators and “wholesome kids’ entertainers,” some seem assembled by software, some by “people who clearly shouldn’t be around children at all.”




The algorithms that drive the bizarre universe of these videos are used to “hack the brains of very small children in return for advertising revenue,” says Bridle. “At least that’s what I hope they’re doing it for.” Bridle soon bridges the machinery of kids’ YouTube with the adult version. “It’s impossible to know,” he says, who’s posting these millions of videos, “or what their motives might be…. Really it’s exactly the same mechanism that’s happening across most of our digital services, where it’s impossible to know where this information is coming from.” The children’s videos are “basically fake news for kids. We’re training them from birth to click on the very first link that comes along, regardless of what the source is.”

High school and college teachers already deal with the problem of students who cannot judge good information from bad—and who cannot really be blamed for it, since millions of adults seem equally unable to do so. In surveying YouTube children’s videos, Bridle finds himself asking the same questions that arise in response to so much online content: “Is this a bot? Is this a person? Is this a troll? What does it mean that we can’t tell the difference between these things anymore?” The language of online content is a hash of popular tags meant to be read by machine algorithms, not humans. But real people performing in an “algorithmically optimized system” seem forced to “act out these increasingly bizarre combinations of words.”

Within this culture, he says, “even if you’re human, you have to end up behaving like a machine just to survive.” What makes the scenario even darker is that machines replicate the worst aspects of human behavior, not because they’re evil but because that’s what they’re taught to do. To think that technology is neutral is a dangerously naïve view, Bridle argues. Humans encode their historical biases into the data, then entrust to A.I. such critical functions as not only children’s entertainment, but also predictive policing and recommending criminal sentences. As Bridle notes in the short video above, A.I. inherits the racism of its creators, rather than acting as a “leveling force."

As we’ve seen the CEOs of tech companies taken to task for the use of their platforms for propaganda, disinformation, hate speech, and wild conspiracy theories, we’ve also seen them respond to the problem by promising to solve it with more automated machine learning algorithms. In other words, to address the issues with the same technology that created them—technology that no one really seems to understand. Letting “unaccountable systems” driven almost solely by ads control global networks with ever-increasing influence over world affairs seems wildly irresponsible, and has already created a situation, Bridle argues in his book, in which imperialism has “moved up to infrastructure level” and conspiracy theories are the most “powerful narratives of our time,” as he says below.

Bridle’s claims might themselves sound like alarmist conspiracies if they weren’t so alarmingly obvious to most anyone paying attention. In an essay on Medium he offers a much more in-depth analysis of YouTube kids’ content, developing one of the arguments in his book. Bridle is one of many writers and researchers covering this terrain. Some other good popular books on the subject come from scholars and technologists like Tim Wu and Jaron Lanier. They are well worth reading and paying attention to, even if we might disagree with some of their arguments and prescriptions.

As Bridle himself argues in his interview at The Verge, the best approach to dealing with what seems like a nightmarish situation is to develop a “systemic literacy,” learning “to think clearly about subjects that seem difficult and complex,” but which nonetheless, as we can clearly see, have tremendous impact on our everyday lives and the society our kids will inherit.

Related Content:

How Information Overload Robs Us of Our Creativity: What the Scientific Research Shows

The Case for Deleting Your Social Media Accounts & Doing Valuable “Deep Work” Instead, According to Prof. Cal Newport

The Diderot Effect: Enlightenment Philosopher Denis Diderot Explains the Psychology of Consumerism & Our Wasteful Spending

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

How Does Language Shape the Way We Think? Cognitive Scientist Lera Boroditsky Explains

Imagine a jellyfish waltzing in a library while thinking about quantum mechanics. "If everything has gone relatively well in your life so far," cognitive scientist Lera Boroditsky says in the TED Talk above, "you probably haven't had that thought before." But now you have, all thanks to language, the remarkable ability by which "we humans are able to transmit our ideas across vast reaches of space and time" and "knowledge across minds."

Though we occasionally hear about startling rates of language extinction — Boroditsky quotes some estimates as predicting half the world's languages gone in the next century — a great variety still thrive. Does that mean we have an equal variety of essentially different ways of thinking? In both this talk and an essay for Edge.org, Boroditsky presents intriguing pieces of evidence that what language we speak does affect the way we conceive of the world and our ideas about it. These include an Aboriginal tribe in Australia who always and everywhere use cardinal directions to describe space ("Oh, there's an ant on your southwest leg") and the differences in how languages label the color spectrum.




"Russian speakers have to differentiate between light blue, goluboy, and dark blue, siniy," says the Belarus-born, American-raised Boroditsky. "When we test people's ability to perceptually discriminate these colors, what we find is that Russian speakers are faster across this linguistic boundary. They're faster to be able to tell the difference between a light and dark blue." Hardly a yawning cognitive gap, you might think, but just imagine how many such differences exist between languages, and how the habits of mind they shape potentially add up.

"You don't even need to go into the lab to see these effects of language; you can see them with your own eyes in an art gallery," writes Boroditsky in her Edge essay. "How does an artist decide whether death, say, or time should be painted as a man or a woman? It turns out that in 85 percent of such personifications, whether a male or female figure is chosen is predicted by the grammatical gender of the word in the artist's native language." More Germans paint death as a man, and more Russians paint it as a woman. Personally, I'd like to see all the various ways artists speaking all the world's languages paint that waltzing jellyfish thinking about quantum mechanics in the library. We'd better hurry commissioning them, though, before too many more of those languages vanish.

Related Content:

Learn 40+ Languages for Free: Spanish, English, Chinese & More

A Colorful Map Visualizes the Lexical Distances Between Europe’s Languages: 54 Languages Spoken by 670 Million People

How Languages Evolve: Explained in a Winning TED-Ed Animation

Speaking in Whistles: The Whistled Language of Oaxaca, Mexico

Steven Pinker Explains the Neuroscience of Swearing (NSFW)

Based in Seoul, Colin Marshall writes and broadcasts on cities and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.

What Ancient Chinese Philosophy Can Teach Us About Living the Good Life Today: Lessons from Harvard’s Popular Professor, Michael Puett

Some Buddhist scholars and teachers have watched with concern as mindfulness has become an integral part of self-help programs. A casual attitude toward mindfulness meditation can make the practice seem relaxing and effortless, and therefore accessible—an approach that often misses the point entirely. Whatever the school, lineage, or particular tradition from which they come, the source texts and sages tend to agree: the purpose of meditation is not self-improvement but to realize that there may, indeed, be no such thing as a self.

Instead, we are all epiphenomena arising from combinations of ever-shifting elements (the aggregates, or skandhas). The self is a conventionally useful illusion. This notion in the ancient Indian texts has its echo in Scottish Enlightenment philosopher David Hume’s so-called “bundle theory,” but Hume’s thoughts about the self have mostly remained obscure footnotes in Western thought, rather than central premises in its philosophies and religions. But as thinkers in India took the self apart, so too did philosophers in ancient China, before Buddhism reached the country during the Han Dynasty.




Harvard Professor Michael Puett has been lecturing on Chinese philosophy to audiences of hundreds of students—and at 21st-century temples of self-actualization like TED and the School of Life. He has co-authored a book on the subject, The Path: What Chinese Philosophers Can Teach Us About the Good Life, drawn from his enormously popular university courses, in which he expounds the philosophies of Confucius, Mencius, Zhuangzi, and Xunzi. The book has found a ready audience, and Puett’s “Classical Chinese Ethical and Political Theory” is the third most popular class among Harvard undergraduates, behind introductory economics and computer science. What Professor Puett offers, in his distillation of ancient Chinese wisdom, is not at all to be construed as self-help.

Rather, he says, “I think of it as sort of anti-self-help. Self-help tends to be about learning to love yourself and embrace yourself for who you are. A lot of these ideas are saying precisely the opposite—no, you overcome the self, you break the self. You should not be happy with who you are.” Lest this sound like some form of violence, we must understand, Puett tells Tim Dowling at The Guardian, that in “breaking” the self, we are only doing harm to an illusion. As in the Buddhist thought that took root in China, so too in the earlier Confucianism: there is no self, just “a messy and potentially ugly bunch of stuff.”

While our current circumstances may seem unique in world history, Puett shows his students how Chinese philosophers 2,500 years ago also experienced rapid societal change and upheaval, as his co-author Christine Gross-Loh writes at The Atlantic; they navigated and understood "a world where human relationships are challenging, narcissism and self-centeredness are on the rise, and there is disagreement on the best way for people to live harmoniously together." A majority of students at Harvard are driven to pursue "practical, predetermined" careers. By teaching them Confucian and Daoist philosophy, Puett tries to help them become more spontaneous and open to change.

Whatever we call it, the interacting phenomena that give rise to the self cannot, we know, be observed in anything resembling an unchanging steady state. Yet Western culture (for several motivated reasons) has lagged far behind both intuitive and scientific observations of this fact. Puett’s students have been told, “Find your true self, especially during these four years of college,” and to “try and be sincere and authentic to who you really are” in making choices about careers, partners, passions, and consumer products. They take to his class because “they’ve spent 20 years looking for this true self and not finding it.”

In the two lectures above—a shorter one at the top from TEDx Nashville and a longer talk for Ivy, “The Social University”—you can get a taste of Puett’s enthusiastic style. Chinese philosophy, “in its strong form,” he says, “can truly change one’s life.” Not by making us more empowered, personally fulfilled agents who re-create reality to better meet our narrow specs. But rather, as he tells Dowling, by training us “to become incredibly good at dealing with this capricious world.”

Related Content:

An Introduction to Confucius’ Life & Thought Through Two Animated Videos

The Philosophical Appreciation of Rocks in China & Japan: A Short Introduction to an Ancient Tradition

Learn Islamic & Indian Philosophy with 107 Episodes of the History of Philosophy Without Any Gaps Podcast

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

Depression & Melancholy: Animated Videos Explain the Crucial Difference Between Everyday Sadness and Clinical Depression

“Depression,” the TED-Ed video above informs us, “is the leading cause of disability in the world.” This may be a hard fact to swallow, the product, we might think, of pharmaceutical advertising. We all feel down from time to time, we think. “Then circumstances change, and those sad feelings disappear.” Isn’t it like this for everyone? It is not. “Clinical depression is different. It’s a medical disorder, and it won’t go away just because you want it to.”

Clinical depression lasts at least two consecutive weeks and can become so debilitating that sufferers cannot work or play. It interferes with important relationships and “can have a lot of different symptoms: a low mood, loss of interest in things you’d normally enjoy, changes in appetite, feeling worthless or excessively guilty,” restlessness and insomnia, or extreme lethargy, poor concentration, and possible thoughts of suicide. We might again suspect a paid promotional voice when the narrator states, “If you have at least 5 of those symptoms, according to psychiatric guidelines, you qualify for a diagnosis of depression.”




What we don’t typically hear about in pharmaceutical ads are the measurable physiological changes depression writes in the brain, including decreased brain matter in the frontal lobe and atrophy of the hippocampus. These effects are measurable in humans and rats, in study after study after study. But while most of us know the names of a neurotransmitter or two these days, not even neuroscientists fully understand the biology of depression. They do know that some combination of medication, therapy, and, in extreme cases, electroconvulsive treatment can allow people to more fully experience life.

People in treatment will still feel “down” on occasion, just like everyone does. But depression, the explainer wants us to understand, should never be compared to ordinary sadness. Its effects on behavior and brain health are too wide-ranging, pervasive, persistent, and detrimental. These effects can be invisible, which adds to an unfortunate social stigma that dissuades people from seeking treatment. The more we talk about depression openly, rather than treating it as a shameful secret, the more likely people at risk will be to seek help.

Just as depression cannot be alleviated by trivializing or ignoring it, the condition does not respond to being romanticized. While, indeed, many a famous painter, poet, actor, etc. has suffered from clinical depression—and made it a part of their art—their examples should not suggest to us that artists shouldn’t get treatment. Sadness is never trivial.

Unlike physical pain, sadness is difficult to trace to direct causes. As the short video above demonstrates, the assumption that sadness is caused by external events arose relatively recently. The humoral system of the ancient Greeks treated all sadness as a biological phenomenon. Greek physicians believed it was an expression of black bile, or “melaina chole,” from which we derive the word “melancholy.” It seems we’ve come full circle, in a way. Ancient humoral theorists recommended nutrition, medical treatment, and physical exercise as treatments for melancholia, just as doctors do today for depression.

But melancholy is a much broader term, not a scientific designation; it is a collection of ideas about sadness that span thousands of years. Nearly all of those ideas include some sense that sadness is an essential experience. “If you’ve never felt melancholy,” the narrator says, “you’ve missed out on part of what it means to be human.” Thinkers have described melancholia as a precursor to, or inevitable result of, acquiring wisdom. One key example, Robert Burton’s 1621 text The Anatomy of Melancholy, "the apogee of Renaissance scholarship," set the tone for discussions of melancholy for the next few centuries.

The scientific/philosophical/literary text argues, “he that increaseth wisdom, increaseth sorrow,” a sentiment the Romantic poets turned on its head. Before them came John Milton, whose 1645 poem Il Penseroso addresses melancholy as “thou Goddes, sage and holy… Sober, stedfast, and demure.” The deity Melancholy oversees the contemplative life and reveals essential truths through “Gorgeous Tragedy.”

One of the poem’s loftiest themes showed the way forward for the Romantics: “The poet who seeks to attain the highest level of creative expression must embrace the divine,” write Milton scholars Katherine Lynch and Thomas H. Luxon, “which can only be accomplished by following the path set out in Il Penseroso.” The divine, in this case, takes the form of sadness personified. Yet this poem cannot be read in isolation: its companion, L’Allegro, praises Mirth, and of sadness says, “Hence loathed Melancholy / Of Cerberus, and blackest midnight born, / In Stygian Cave forlorn / ‘Mongst horrid shapes, and shrieks, and sights unholy.”

Rather than contradict each other, these two characterizations speak to the ambivalent attitudes, and vastly different experiences, humans have about sadness. Fleeting bouts of melancholy can be sweet, touching, and beautiful, inspiring art, music, and poetry. Sadness can force us to reckon with life’s unpleasantness rather than deny or avoid it. On the other hand, in its most extreme, chronically intractable forms—what we now call clinical depression—sadness can destroy our capacity to act, to appreciate beauty, and to learn important lessons, marking the critical difference between a universal existential condition and a thankfully treatable physical disease.

Related Content:

Stanford’s Robert Sapolsky Demystifies Depression, Which, Like Diabetes, Is Rooted in Biology

How Baking, Cooking & Other Daily Activities Help Promote Happiness and Alleviate Depression and Anxiety

A Unified Theory of Mental Illness: How Everything from Addiction to Depression Can Be Explained by the Concept of “Capture”

Stephen Fry on Coping with Depression: It’s Raining, But the Sun Will Come Out Again

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

Why Incompetent People Think They’re Amazing: An Animated Lesson from David Dunning (of the Famous “Dunning-Kruger Effect”)

The business world has long had special jargon for the Kafkaesque incompetence bedeviling the ranks of upper management. There is “the Peter principle,” first described in a satirical book of the same name in 1968. More recently, we have the positive notion of “failing upward.” The concept has inspired a mantra, “fail harder, fail faster,” as well as popular books like The Gift of Failure. Famed research professor, author, and TED talker Brené Brown has called TED “the failure conference," and indeed, a “FailCon” does exist, “in over a dozen cities on 6 continents around the globe.”

The candor about this most unavoidable of human phenomena may prove a boon to public health, perhaps even lowering levels of hypertension. But is there a danger in praising failure too fervently? (Samuel Beckett’s quote on the matter, beloved by many a 21st-century thought leader, proves decidedly more ambiguous in context.) Might it present an even greater opportunity for people to “rise to their level of incompetence”? Given the prevalence of the “Dunning-Kruger Effect,” a cognitive bias explained by John Cleese in a previous post, we may not be well-placed to know whether our efforts constitute success or failure, or whether we actually have the skills we think we do.




First described in 1999 by social psychologists David Dunning (now at the University of Michigan) and Justin Kruger (now at N.Y.U.), the effect “suggests that we’re not very good at evaluating ourselves accurately.” So says the narrator of the TED-Ed lesson above, scripted by Dunning and offering a sober reminder of the human propensity for self-delusion. “We frequently overestimate our own abilities,” resulting in widespread “illusory superiority” that makes “incompetent people think they’re amazing.” The effect greatly intensifies at the lower end of the scale; it is often “those with the least ability who are most likely to overrate their skills to the greatest extent.” Or as Cleese plainly puts it, some people “are so stupid, they have no idea how stupid they are.”

Combine this with the converse effect—the tendency of skilled individuals to underrate themselves—and we have the preconditions for an epidemic of mismatched skill sets and positions. But while imposter syndrome can produce tragic personal results and deprive the world of talent, the worst consequences of the Dunning-Kruger effect harm us all. People “measurably poor at logical reasoning, grammar, financial knowledge, math, emotional intelligence, running medical lab tests, and chess all tend to rate their expertise almost as favorably as actual experts do.” When such people get promoted up the chain, they can unwittingly do a great deal of harm.

While arrogant self-importance plays its role in fostering delusions of expertise, Dunning and Kruger found that most of us are subject to the effect in some area of our lives simply because we lack the skills to understand how bad we are at certain things. We don't know the rules well enough to successfully, creatively break them. Until we have some basic understanding of what constitutes competence in a particular endeavor, we cannot even understand that we’ve failed.

Real experts, on the other hand, tend to assume their skills are ordinary and unremarkable. “The result is that people, whether they’re inept or highly skilled, are often caught in a bubble of inaccurate self-perception." How can we get out? The answers won’t surprise you. Listen to constructive feedback and never stop learning, behavior that can require a good deal of vulnerability and humility.

Related Content:

John Cleese on How “Stupid People Have No Idea How Stupid They Are” (a.k.a. the Dunning-Kruger Effect)

Research Finds That Intellectual Humility Can Make Us Better Thinkers & People; Good Thing There’s a Free Course on Intellectual Humility

The Power of Empathy: A Quick Animated Lesson That Can Make You a Better Person

Free Online Psychology & Neuroscience Courses

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness
