Before ChatGPT, There Was ELIZA: Watch the 1960s Chatbot in Action

In 1966, the sociologist and critic Philip Rieff published The Triumph of the Therapeutic, which diagnosed how thoroughly the culture of psychotherapy had come to influence ways of life and thought in the modern West. That same year, in the journal Communications of the Association for Computing Machinery, the computer scientist Joseph Weizenbaum published "ELIZA — A Computer Program For the Study of Natural Language Communication Between Man and Machine." Could it be a coincidence that the program Weizenbaum explained in that paper — the earliest "chatbot," as we would now call it — is best known for responding to its user's input in the nonjudgmental manner of a therapist?

ELIZA was still drawing interest in the nineteen-eighties, as evidenced by the television clip above. "The computer's replies seem very understanding," says its narrator, "but this program is merely triggered by certain phrases to come out with stock responses." Yet even though its users knew full well that "ELIZA didn't understand a single word that was being typed into it," that didn't stop some of their interactions with it from becoming emotionally charged. Weizenbaum's program thus passes a kind of "Turing test," which was first proposed by pioneering computer scientist Alan Turing to determine whether a computer can generate output indistinguishable from communication with a human being.
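
The narrator's description captures essentially the whole mechanism: ELIZA scans the input for a keyword pattern, swaps first- and second-person pronouns in the captured fragment, and slots the result into a canned template. Here is a minimal Python sketch of that technique; it is not Weizenbaum's original program (which was written in MAD-SLIP, with the therapist rules held in a separate "DOCTOR" script), and the handful of patterns and responses below are invented for illustration.

```python
import random
import re

# Pronoun swaps used to "reflect" the user's words back at them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine", "are": "am",
}

# Keyword patterns paired with stock response templates. {0} is filled
# with the reflected text captured by the pattern's first group.
PATTERNS = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r".*\bmother\b.*", re.I),
     ["Tell me more about your mother."]),
]
DEFAULT_RESPONSES = ["Please go on.", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the input can be echoed back."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input: str) -> str:
    """Return a stock response for the first matching keyword pattern."""
    for pattern, templates in PATTERNS:
        match = pattern.match(user_input.rstrip(".!?"))
        if match:
            groups = match.groups()
            fragment = reflect(groups[0]) if groups else ""
            return random.choice(templates).format(fragment)
    return random.choice(DEFAULT_RESPONSES)

if __name__ == "__main__":
    print(respond("I need a vacation"))   # e.g. "Why do you need a vacation?"
    print(respond("Everyone talks about my mother"))  # "Tell me more about your mother."
```

Even this toy version produces the characteristic effect: type "I need a vacation" and it answers "Why do you need a vacation?", with no understanding anywhere in the loop.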

In fact, 60 years after Weizenbaum first began developing it, ELIZA — which you can try online here — seems to be holding its own in that arena. "In a preprint research paper titled 'Does GPT-4 Pass the Turing Test?,' two researchers from UC San Diego pitted OpenAI's GPT-4 AI language model against human participants, GPT-3.5, and ELIZA to see which could trick participants into thinking it was human with the greatest success," reports Ars Technica's Benj Edwards. This study found that "human participants correctly identified other humans in only 63 percent of the interactions," and that ELIZA, with its tricks of reflecting users' input back at them, "surpassed the AI model that powers the free version of ChatGPT."

This isn't to imply that ChatGPT's users might as well go back to Weizenbaum's simple novelty program. Still, we'd surely do well to revisit his subsequent thinking on the subject of artificial intelligence. Later in his career, writes Ben Tarnoff in the Guardian, Weizenbaum published "articles and books that condemned the worldview of his colleagues and warned of the dangers posed by their work. Artificial intelligence, he came to believe, was an 'index of the insanity of our world.' " Even in 1967, he was arguing that "no computer could ever fully understand a human being. Then he went one step further: no human being could ever fully understand another human being" — a proposition arguably supported by nearly a century and a half of psychotherapy.

Related content:

A New Course Teaches You How to Tap the Powers of ChatGPT and Put It to Work for You

Thanks to Artificial Intelligence, You Can Now Chat with Historical Figures: Shakespeare, Einstein, Austen, Socrates & More

Noam Chomsky on ChatGPT: It's "Basically High-Tech Plagiarism" and "a Way of Avoiding Learning"

What Happens When Someone Crochets Stuffed Animals Using Instructions from ChatGPT

Noam Chomsky Explains Where Artificial Intelligence Went Wrong

Based in Seoul, Col­in Marshall writes and broad­casts on cities, lan­guage, and cul­ture. His projects include the Sub­stack newslet­ter Books on Cities, the book The State­less City: a Walk through 21st-Cen­tu­ry Los Ange­les and the video series The City in Cin­e­ma. Fol­low him on Twit­ter at @colinmarshall or on Face­book.




