Yuval Noah Harari Explains How to Protect Your Mind in the Age of AI

You could say that we live in the age of artificial intelligence, though nowhere does that feel truer than in advertising. “If you want to sell something to people today, you call it AI,” says Yuval Noah Harari in the new Big Think video above, even if the product has only the vaguest technological association with that label. To determine whether something should actually be called artificially intelligent, ask whether it can “learn and change by itself and come up with decisions and ideas that we don’t anticipate,” indeed can’t anticipate. That AI-enabled waffle iron being pitched to you probably doesn’t make the cut, but you may already be interacting with numerous systems that do.

As the author of the global bestseller Sapiens and other books concerned with the long arc of human civilization, Harari has given a good deal of thought to how technology and society interact. “In the twentieth century, the rise of mass media and mass information technology, like the telegraph and radio and television” formed “the basis for large-scale democratic systems,” but also for “large-scale totalitarian systems.”

Unlike in the ancient world, governments could at least begin to “micromanage the social and economic and cultural lives of every individual in the country.” Even the vast surveillance apparatus and bureaucracy of the Soviet Union “could not surveil everybody all the time.” Alas, Harari anticipates, things will be different in the AI age.

Human-operated organic networks are being displaced by AI-operated inorganic ones, which “are always on, and therefore they might force us to be always on, always being watched, always being monitored.” As they gain dominance, “the whole of life is becoming like one long job interview.” At the same time, even if you were already feeling inundated by information before, you’ve more than likely felt the waters rise around you due to the infinite production capacities of AI. One individual-level strategy Harari recommends to counteract the flood is going on an “information diet,” restricting the flow of that “food of the mind,” which only sometimes has anything to do with the truth. If we binge on “all this junk information, full of greed and hate and fear, we will have sick minds”; perhaps a period of abstinence can restore a certain degree of mental health. You might consider spending the rest of the day taking in as little new information as possible — just as soon as you finish catching up on Open Culture, of course.

Related content:

Sci-Fi Writer Arthur C. Clarke Predicted the Rise of Artificial Intelligence & the Existential Questions We Would Need to Answer (1978)

Will Machines Ever Truly Think? Richard Feynman Contemplates the Future of Artificial Intelligence (1985)

Isaac Asimov Describes How Artificial Intelligence Will Liberate Humans & Their Creativity: Watch His Last Major Interview (1992)

How Will AI Change the World?: A Captivating Animation Explores the Promise & Perils of Artificial Intelligence

Stephen Fry Explains Why Artificial Intelligence Has a “70% Risk of Killing Us All”

Yuval Noah Harari and Fareed Zakaria Break Down What’s Happening in the Middle East

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities and the book The Stateless City: a Walk through 21st-Century Los Angeles. Follow him on the social network formerly known as Twitter at @colinmarshall.





Comments (3)
  • Rod Stasick says:

    I’m weary of the constant “sky is falling” takes on what’s mistakenly labeled “A.I.” These are synergetic extrapolative systems, not autonomous overlords. The real issue isn’t that they’ll suddenly outsmart us — it’s how corporations wield them. We’ve already seen the blueprint in social media: tools designed to make our attention more predictable, profitable, and pliant. Fear-driven narratives cast humans as helpless, but what’s overlooked is the obvious — these systems are shaped by corporate incentives: surveillance, manipulation, profit extraction. The danger isn’t that “AI” thinks; it’s that it extends social media’s logic into nearly every corner of our lives.

  • Steven Graziano says:

    I’m more interested in protecting myself from Yuval Noah Harari.

  • Anil and The NICircle ⭕️ says:

    Beyond the Label: AI vs. True Learning
    NI Circle Brief — Opinion/Commentary — August 23, 2025
    Summary
    Yuval Noah Harari observes that today many products get labeled ‘AI’ to boost appeal—even when they do not learn by themselves. He suggests a practical test: does the system learn and change on its own, generating ideas or decisions we did not and could not anticipate? Natural Intelligence (NI) agrees: the word ‘AI’ should be reserved for systems exhibiting genuine adaptive learning. The deeper question is how society preserves trust and balance as real learning systems increasingly shape decisions in finance, health, education, and governance.
    NI Compass Reading
    Reading & Guidance
    We are in a hype-saturated moment. Act now to restore clarity before trust decays.
    Imbalance: illusion vs. reality. Marketing inflates claims; users cannot see the limits.
    Māyā (appearance) vs. Satya (truth). See through the label to the function: does it learn and change by itself?
    Adopt honest labeling; add a ‘learning disclosure’ line; publish model limits; provide a clear path to a human helper.
    Direction / NI Lens:
    • North (When): Astrological / Timing
    • West (What): Natural / Constitution
    • East (Story): Mythological / Meaning
    • South (How): Practical / Tiny Actions
    Grace-Seed Actions (Do This Now)
    • Learning Disclosure: Add a plain-language line to product pages: ‘This tool does/does not learn by itself.’
    • Limits & Escalation: Publish known limitations and provide a clear path to a human helper for edge cases.
    • Evidence Tagging: When claiming ‘AI-powered’, link to a short note showing how the system learns (data, update cadence, evaluation).
    • Energy Honesty: Include a simple energy footprint note to align incentives toward efficient, meaningful use.
    Notes & Sources
    • Big Think video (Aug 2025): Yuval Noah Harari on AI hype vs. real learning systems.
    • General industry definitions: adaptive learning as a criterion distinguishing automation from AI.
    • Stanford HAI (background): Human-Centered AI and transparency practices.
    Disclaimer: Circle ⭕️ NI Reflection Series — Opinion/Commentary. The NI Compass and Grace-Seed Actions are original proposals, not industry standards. Quotes and references are based on public commentary; interpretations are ours.
    Principle: Nature doesn’t race; it balances. • Contact: Anil K. Agarwal • Circle ⭕️
