What Happens When Artificial Intelligence Creates Images to Match the Lyrics of Iconic Songs: David Bowie’s “Starman,” Led Zeppelin’s “Stairway to Heaven,” ELO’s “Mr. Blue Sky” & More

Lyricists must write concretely enough to be evocative, yet vaguely enough to allow each listener a personal interpretation. The nineteen-sixties and seventies saw an especially rich balance struck between resonant ambiguity and massive popularity — aided, as many involved parties have admitted, by the use of certain psychoactive substances. Half a century later, the visions induced by those same substances offer the closest comparison to the striking fruits of visual artificial-intelligence projects like Google’s Deep Dream a few years ago or DALL‑E today. It was only natural, perhaps, that these advanced applications would sooner or later be fed psychedelic song lyrics.

The video at the top of the post presents the Electric Light Orchestra’s 1977 hit “Mr. Blue Sky” illustrated by images generated by artificial intelligence straight from its words. This was a much-anticipated endeavor for the YouTube channel SolarProphet, which has also put up similarly AI-accompanied presentations of such already goofy-image-filled comedy songs as Lemon Demon’s “The Ultimate Showdown” and Neil Cicierega’s “It’s Gonna Get Weird.”

YouTuber Daara has also created ten entries in this new genre, including Queen’s “Don’t Stop Me Now,” The Eagles’ “Hotel California,” and Kate Bush’s recently-featured-on-Open-Culture “Running Up That Hill.”

Just above appears a video for David Bowie’s “Starman” with AI-visualized lyrics, created by YouTuber Aidontknow. Created isn’t too strong a word, since DALL‑E and other applications currently available to the public provide a selection of images for each prompt, leaving it to human users to provide specifics about the aesthetic — and, in the case of these videos, to select the result that best suits each line. One delight of this particular production, apart from the boogieing children, is seeing how the AI imagines various starmen waiting in the sky, all of whom look suspiciously like early-seventies Bowie. Of all his songs of that period, surely “Life on Mars?” would be choice number one for an AI music video — but then, its imagery may well be too bizarre for current technology to handle.

Related content:

Discover DALL‑E, the Artificial Intelligence Artist That Lets You Create Surreal Artwork

Artificial Intelligence Program Tries to Write a Beatles Song: Listen to “Daddy’s Car”

What Happens When Artificial Intelligence Listens to John Coltrane’s Interstellar Space & Starts to Create Its Own Free Jazz

Artificial Intelligence Writes a Piece in the Style of Bach: Can You Tell the Difference Between JS Bach and AI Bach?

Artificial Intelligence Creates Realistic Photos of People, None of Whom Actually Exist

Nick Cave Answers the Hotly Debated Question: Will Artificial Intelligence Ever Be Able to Write a Great Song?

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities, the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall, on Facebook, or on Instagram.

“When We All Have Pocket Telephones”: A 1920s Comic Accurately Predicts Our Cellphone-Dominated Lives

Much has been said lately about jokes that “haven’t aged well.” Sometimes it has to do with shifting public sensibilities, and sometimes with a gag’s exaggeration having been surpassed by the facts of life. As a Twitter user named Max Saltman posted not long ago, “I love finding New Yorker cartoons so dated that the joke is lost entirely and the cartoons become just descriptions of people doing normal things.” The examples included a partygoer admitting that “I haven’t read it yet, but I’ve downloaded it from the internet,” and a teacher admonishing her students to “keep your eyes on your own screen.”

All of those New Yorker cartoons appear to date from the nineteen-nineties. Even more prescient, and much older, is the Daily Mirror cartoon at the top of the post, drawn by artist W. K. Haselden at some point between 1919 and 1923. It envisions a time “when we all have pocket telephones,” liable to ring at the most inconvenient times: “when running for a train,” “when your hands are full,” “at a concert,” even “when you are being married.” Such a comic strip could never, as they say, be published today — not because of its potential to offend modern sensitivities, but because of its sheer mundanity.

For here in the twenty-twenties, we all, indeed, have pocket telephones. Not only that, we’ve grown so accustomed to them that Haselden’s cartoon feels reminiscent of the turn of the millennium, when the novelty and prestige of cellphones (to say nothing of their gratingly simple ringtones) made them feel more intrusive in day-to-day life. Now, increasingly, cellphones are day-to-day life. Far from the literal “pocket telephones” envisioned a century ago, they’ve worked their way into nearly every aspect of human existence, including those Haselden could never have considered.

Yet this wasn’t the first time anyone had imagined such a thing. “Rumors of a ‘pocket phone’ had been ringing around the world since 1906,” writes Laughing Squid’s Lori Dorn. “A man named Charles E. Alden claimed to have created a device that could easily fit inside a vest pocket and used a ‘wireless battery.’ ” In the event, it would take nearly eight decades for the first cellphone to arrive on the market, and three more on top of that for cellphones to become indispensable in the West. Now the “pocket telephone” has become the defining device of our era all over the world, though the social norms around its use remain a work in progress.

via Laughing Squid

Related content:

The First Cellphone: Discover Motorola’s DynaTAC 8000X, a 2-Pound Brick Priced at $3,995 (1984)

Lynda Barry on How the Smartphone Is Endangering Three Ingredients of Creativity: Loneliness, Uncertainty & Boredom

Filmmaker Wim Wenders Explains How Mobile Phones Have Killed Photography

A 1947 French Film Accurately Predicted Our 21st-Century Addiction to Smartphones

The World’s First Mobile Phone Shown in 1922 Vintage Film


Damien Hirst’s NFT Experiment Comes to an End: How Many Buyers Chose Digital Tokens Over Physical Artworks?

Damien Hirst is into NFTs. Some will regard this as a reflection on the artist, and others as a reflection on the technology. Whether you take those reflections to be positive or negative reveals something about your own concept of how the art world, the business world, and the digital world intersect. So will your reaction to The Currency, Hirst’s just-completed art project and technological experiment. Launched in July of last year, it produced 10,000 unique non-fungible tokens “that were each associated with corresponding artworks the British artist made in 2016,” as Artnet’s Caroline Goldstein writes. “The digital tokens were sold via a lottery system for $2,000.”

Hirst also laid down an unprecedented condition: he announced “that his collectors would have to make a choice between the physical artwork and its digital version, and set a one-year deadline — asking them, in effect, to vote for which had more lasting value.” For each buyer who chose the original work, Hirst would assign its NFT to an inaccessible address, the closest thing to destroying it. And for each buyer who chose the NFT, Hirst would throw the paper version onto a bonfire. The final numbers, as Hirst tweeted at the end of last month, came to “5,149 physicals and 4,851 NFTs (meaning I will have to burn 4,851 corresponding physical Tenders).” Hirst also retained 1,000 copies for himself.

“In the beginning I had thought I would definitely choose all physical,” Hirst explains. “Then I thought half-half and then I felt I had to keep all my 1,000 as NFTs and then all paper again and round and round I’ve gone, head in a spin.” In the end he went wholly digital, having decided that “I need to show my 100 percent support and confidence in the NFT world (even though it means I will have to destroy the corresponding 1000 physical artworks).” Perhaps this was a victory for Hirst’s neophilia, but then, those instincts have served him well before: few living artists have managed to draw such public fascination, enamored or hostile, for so many years straight — let alone such formidable sale prices, and not just for his stuffed shark.

“I’ve never really understood money,” Hirst says to Stephen Fry in the video above. (You can watch an extended version of their conversation here.) “All these things — art, money, commerce — they’re all ethereal,” ultimately based on nothing more than “belief and trust.” Returning to the techniques of his early “spot paintings” — those he made himself before farming the task out to steadier-handed assistants — and minting the results into unique digital objects for sale was perhaps an attempt to get his head around the even less intuitive concept of the NFT. All told, The Currency brought in about $89 million in revenue. More telling will be the price of its tokens on the secondary market, where they’re changing hands at the moment for around $7,000: a price impossible to evaluate properly for now, and thus not without the thrilling ambiguity of certain modern artworks.

via Artnet

Related content:

What Are Non-Fungible Tokens (NFTs)? And How Can a Work of Digital Art Sell for $69 Million?

Brian Eno Shares His Critical Take on Art & NFTs: “I Mainly See Hustlers Looking for Suckers”

The Art Market Demystified in Four Short Documentaries

Mark Rothko Is Toast… and More Edible Art from SFMOMA

Damien Hirst Takes Us Through His New Exhibition at Tate Modern


The First Photographs Taken by the Webb Telescope: See Faraway Galaxies & Nebulae in Unprecedented Detail


Late last year we featured the amazing engineering of the James Webb Space Telescope, now the largest optical telescope in space. Capable of registering phenomena older, more distant, and further off the visible spectrum than any previous device, it will no doubt show us a great many things we’ve never seen before. In fact, it has already begun: earlier this week, NASA’s Goddard Space Flight Center released the first photographs taken through the Webb telescope, which “represent the first wave of full-color scientific images and spectra the observatory has gathered, and the official beginning of Webb’s general science operations.”

The areas of outer space depicted in unprecedented detail by these photos include the Carina Nebula (top), the Southern Ring Nebula (second image on this page), the galaxy group known as Stephan’s Quintet (the home of the angels in It’s a Wonderful Life), and the galaxy cluster SMACS 0723 (bottom).

That last, notes Petapixel’s Jaron Schneider, “is the highest resolution photo of deep space that has ever been taken,” and the light it captures “has traveled for more than 13 billion years.” What this composite image shows us, as NASA explains, is SMACS 0723 “as it appeared 4.6 billion years ago” — and its “slice of the vast universe covers a patch of sky approximately the size of a grain of sand held at arm’s length by someone on the ground.”
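NASA’s grain-of-sand comparison can be checked with a bit of small-angle arithmetic. The grain size and arm’s length below are rough assumed figures for illustration, not NASA’s own numbers:

```python
import math

# Small-angle approximation: angular size (radians) ≈ physical size / distance.
grain_mm = 1.0    # assumed: a grain of sand about a millimeter across
arm_mm = 700.0    # assumed: arm's length of about 70 cm

theta_rad = grain_mm / arm_mm
theta_arcmin = math.degrees(theta_rad) * 60

print(f"{theta_arcmin:.1f} arcminutes")  # → 4.9 arcminutes
```

A few arcminutes is a tiny patch of sky (the full Moon spans about thirty), which is part of what makes the density of galaxies in the image so striking.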

All this can be a bit difficult to get one’s head around, at least if one is professionally involved with neither astronomy nor cosmology. But few imaginations could fail to be captured by the richness of the images themselves. Sharp, rich in color, varied in texture — and in the case of the Carina Nebula or “Cosmic Cliffs,” NASA adds, “seemingly three-dimensional” — they could have come straight from a state-of-the-art science-fiction movie. In fact they outdo even the most advanced sci-fi visions, much as NASA’s Earthrise outdid the uncannily realistic-in-retrospect views of the Earth from space imagined by Stanley Kubrick and his collaborators in 2001: A Space Odyssey.

But these photos are the fruits of a real-life journey toward the final frontier, one you can follow in real time on NASA’s “Where Is Webb?” tracker. “Webb was designed to spend the next decade in space,” writes Colossal’s Grace Ebert. “However, a successful launch preserved substantial fuel, and NASA now anticipates a trove of insights about the universe for the next twenty years.” That’s quite a long run by the current standards of space exploration — but then, by the scale of space and time the Webb telescope has newly opened up, even a hundred millennia is the blink of an eye.

Related content:

The Amazing Engineering of the James Webb Telescope

How to Take a Picture of a Black Hole: Watch the 2017 TED Talk by Katie Bouman, the MIT Grad Student Who Helped Take the Groundbreaking Photo

How Scientists Colorize Those Beautiful Space Photos Taken by the Hubble Space Telescope

The Very First Picture of the Far Side of the Moon, Taken 60 Years Ago

The First Images and Video Footage from Outer Space, 1946–1959

The Beauty of Space Photography


Computer Scientist Andrew Ng Presents a New Series of Machine Learning Courses–an Updated Version of the Popular Course Taken by 5 Million Students

Back in 2017, Coursera co-founder and former Stanford computer science professor Andrew Ng launched a five-part series of courses on “Deep Learning” on the edtech platform, a series meant to “help you master Deep Learning, apply it effectively, and build a career in AI.” These courses extended his initial Machine Learning course, which has attracted almost 5 million students since 2012, in an effort, he said, to build “a new AI-powered society.”

Ng’s goals are ambitious: to “teach millions of people to use these AI tools so they can go and invent the things that no large company, or company I could build, could do.” His new Machine Learning Specialization at Coursera takes him several steps further in that direction with an “updated version of [his] pioneering Machine Learning course,” notes Coursera’s description, providing “a broad introduction to modern machine learning.” The specialization’s three courses are 1) Supervised Machine Learning: Regression and Classification, 2) Advanced Learning Algorithms, and 3) Unsupervised Learning, Recommenders, Reinforcement Learning. Collectively, the courses in the specialization will teach you to:

  • Build machine learning models in Python using the popular machine learning libraries NumPy and scikit-learn.
  • Build and train supervised machine learning models for prediction and binary classification tasks, including linear regression and logistic regression.
  • Build and train a neural network with TensorFlow to perform multi-class classification.
  • Apply best practices for machine learning development so that your models generalize to data and tasks in the real world.
  • Build and use decision trees and tree ensemble methods, including random forests and boosted trees.
  • Use unsupervised learning techniques, including clustering and anomaly detection.
  • Build recommender systems with a collaborative filtering approach and a content-based deep learning method.
  • Build a deep reinforcement learning model.
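To give a flavor of the first two of those skills, here is a minimal sketch (our own, not course material) of the kind of exercise the supervised-learning course assigns: fitting a linear regression with plain NumPy and batch gradient descent.

```python
import numpy as np

# Toy supervised-learning data: y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=50)
y = 2.0 * X + 1.0 + rng.normal(0, 0.1, size=50)

# Batch gradient descent on the mean-squared-error cost.
w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    err = (w * X + b) - y          # prediction error for every sample
    w -= lr * (err * X).mean()     # gradient step for the slope
    b -= lr * err.mean()           # gradient step for the intercept

print(f"w={w:.2f}, b={b:.2f}")  # recovers values close to the true w=2, b=1
```

Scikit-learn’s `LinearRegression` would fit the same line in one call; the point of the exercise is seeing the gradient-descent loop that the library hides.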

The skills students learn in Ng’s specialization will bring them closer to careers in big data, machine learning, and AI engineering. Enroll in Ng’s specialization here free for 7 days and explore the materials in all three courses. If you’re convinced the specialization is for you, you’ll pay $49 per month until you complete the three-course specialization, and you’ll earn a certificate upon completion of a hands-on project using all of your new machine learning skills. You can sign up for the Machine Learning Specialization here.

Note: Open Culture has a partnership with Coursera. If readers enroll in certain Coursera courses and programs, it helps support Open Culture.

Related Content:

New Deep Learning Courses Released on Coursera, with Hope of Teaching Millions the Basics of Artificial Intelligence

Coursera Makes Courses & Certificates Free During Coronavirus Quarantine: Take Courses in Psychology, Music, Wellness, Professional Development & More Online

Google & Coursera Launch Career Certificates That Prepare Students for Jobs in 6 Months: Data Analytics, Project Management and UX Design

Google Unveils a Digital Marketing & E-Commerce Certificate: 7 Courses Will Help Prepare Students for an Entry-Level Job in 6 Months

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

Discover DALL‑E, the Artificial Intelligence Artist That Lets You Create Surreal Artwork

DALL‑E, an artificial intelligence system that generates viable-looking art in a variety of styles in response to user-supplied text prompts, has been garnering a lot of interest since it debuted this spring.

It has yet to be released to the general public, but while we’re waiting, you can have a go at DALL‑E Mini, an open source AI model that generates a grid of images inspired by any phrase you care to type into its search box.

Co-creator Boris Dayma explains how DALL‑E Mini learns by viewing millions of captioned online images:

Some of the concepts are learnt (sic) from memory as it may have seen similar images. However, it can also learn how to create unique images that don’t exist such as “the Eiffel tower is landing on the moon” by combining multiple concepts together.

Several models are combined together to achieve these results:

• an image encoder that turns raw images into a sequence of numbers, along with its associated decoder

• a model that turns a text prompt into an encoded image

• a model that judges the quality of the generated images, for better filtering
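That three-part division of labor can be sketched in schematic code. Everything below is a hypothetical stand-in for illustration, not DALL‑E Mini’s actual API (the real system pairs a VQGAN-style image codec with a BART-style sequence-to-sequence model and a CLIP-style ranker):

```python
import random

# Stub stand-ins for the three components described above.
class StubTextToImageModel:
    def sample(self, prompt):
        # real model: autoregressively sample discrete image-codebook tokens
        return [random.randrange(1024) for _ in range(16)]

class StubImageDecoder:
    def decode(self, tokens):
        # real model: map codebook tokens back to pixels
        return {"tokens": tokens}

class StubScorer:
    def score(self, prompt, image):
        # real model: how well does the image match the prompt?
        return random.random()

def generate(prompt, model, decoder, scorer, n=9):
    """Generate n candidate images and return them ranked best-first."""
    candidates = [decoder.decode(model.sample(prompt)) for _ in range(n)]
    return sorted(candidates,
                  key=lambda im: scorer.score(prompt, im), reverse=True)

images = generate("the Eiffel tower landing on the moon",
                  StubTextToImageModel(), StubImageDecoder(), StubScorer())
print(len(images))  # → 9
```

Sampling the text-to-image model several times is what produces DALL‑E Mini’s familiar grid of varied candidates for a single prompt.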

My first attempt to generate some art using DALL‑E Mini failed to yield the hoped-for weirdness. I blame the blandness of my search term — “tomato soup.”

Perhaps I’d have better luck with “Andy Warhol eating a bowl of tomato soup as a child in Pittsburgh.”

Ah, there we go!

I was curious to know how DALL‑E Mini would riff on its namesake artist’s handle (an honor Dali shares with the titular AI hero of Pixar’s 2008 animated feature, WALL‑E).

Hmm… seems like we’re backsliding a bit.

Let me try “Andy Warhol eating a bowl of tomato soup as a child in Pittsburgh with Salvador Dali.”

Ye gods! That’s the stuff of nightmares, but it also strikes me as pretty legit modern art. Love the sparing use of red. Well done, DALL‑E Mini.

At this point, vanity got the better of me and I did the AI art-generating equivalent of googling my own name, adding “in a tutu” because who among us hasn’t dreamed of being a ballerina at some point?

Let that be a lesson to you, Pandora…

Hopefully we’re all planning to use this playful open AI tool for good, not evil.

Hyperallergic’s Sarah Rose Sharp raised some valid concerns in relation to the original, more sophisticated DALL‑E:

It’s all fun and games when you’re generating “robot playing chess” in the style of Matisse, but dropping machine-generated imagery on a public that seems less capable than ever of distinguishing fact from fiction feels like a dangerous trend.

Additionally, DALL‑E’s neural network can yield sexist and racist images, a recurring issue with AI technology. For instance, a reporter at Vice found that prompts including search terms like “CEO” exclusively generated images of White men in business attire. The company acknowledges that DALL‑E “inherits various biases from its training data, and its outputs sometimes reinforce societal stereotypes.”

Co-creator Dayma does not duck the troubling implications and biases his baby could unleash:

While the capabilities of image generation models are impressive, they may also reinforce or exacerbate societal biases. While the extent and nature of the biases of the DALL·E mini model have yet to be fully documented, given the fact that the model was trained on unfiltered data from the Internet, it may generate images that contain stereotypes against minority groups. Work to analyze the nature and extent of these limitations is ongoing, and will be documented in more detail in the DALL·E mini model card.

The New Yorker cartoonists Ellis Rosen and Jason Adam Katzenstein conjure another way in which DALL‑E Mini could break with the social contract:

And a Twitter user who goes by St. Rev. Dr. Rev blows minds and opens multiple cans of worms, using panels from cartoonist Joshua Barkman’s beloved webcomic, False Knees:

Proceed with caution, and play around with DALL‑E Mini here.

Get on the waitlist for original-flavor DALL‑E access here.

Related Content

Artificial Intelligence Brings to Life Figures from 7 Famous Paintings: The Mona Lisa, Birth of Venus & More

Google App Uses Machine Learning to Discover Your Pet’s Look-Alike in 10,000 Classic Works of Art

Artificial Intelligence for Everyone: An Introductory Course from Andrew Ng, the Co-Founder of Coursera

Ayun Halliday is the Chief Primatologist of the East Village Inky zine and author, most recently, of Creative, Not Famous: The Small Potato Manifesto. Follow her @AyunHalliday.

Watch the First Movie Ever Streamed on the Net: Wax or the Discovery of Television Among the Bees (1991)

When the World Wide Web made its public debut in the early nineteen-nineties, it fascinated many and struck some as revolutionary, but the idea of watching a film online would still have sounded like sheer fantasy. Yet on May 23rd, 1993, reported the New York Times’ John Markoff, “a small audience scattered among a few dozen computer laboratories gathered” to “watch the first movie to be transmitted on the Internet — the global computer network that connects millions of scientists and academic researchers and hitherto has been a medium for swapping research notes and an occasional still image.”

That explanation speaks volumes about how life online was perceived by the average New York Times reader three decades ago. But it was hardly the average New York Times reader who tuned into the internet’s very first film screening, whose feature presentation was Wax or the Discovery of Television Among the Bees. Completed in 1991 by artist David Blair, this hybrid of fiction and essay-film offered its viewers what Times critic Stephen Holden called “a multi-generational family saga as it might be imagined by a cyberpunk novelist. It flashes all the way back to the story of Cain and Abel and the Tower of Babel and forward to the narrator’s own death, birth and rebirth in an act of violence.”

Jacob Maker, the narrator, was once a humble missile-guidance system engineer. But increasing disenchantment with his line of work pushed him into the apiarian arts, in homage to his famous beekeeper grandfather Jacob Hive Maker. That the latter is played by William S. Burroughs suggests that Wax has the makings of a “cult classic,” as does the film’s construction, in large part out of found footage, juxtaposed and manipulated into a digital psychedelia. Its narrative — amusing, reference-rich, and bewilderingly complex for an 85-minute runtime — has Jacob mentally overtaken by his own bees, who implant a television into his brain and reprogram him as an assassin.

With Wax, writes Screen Slate’s Sean Benjamin, “Blair laid an extrapolation of La Jetée atop a bedrock of Thomas Pynchon and came out with something closest to early Peter Greenaway — yet ultimately singular.” And on an internet that could only broadcast it “at the dream-like rate of two frames a second” in black-and-white, it must have made for a singular viewing experience indeed. Back then, as Markoff wrote, “digital broadcasting was not yet ready for prime time.”

Today, in our age of streaming, digital broadcasting has displaced prime time, and it feels only proper that we can watch Wax on YouTube, where Blair has uploaded it as part of a larger, ongoing, and not-easily-grasped digital film project. “There is a sense in which we have all had televisions implanted in our heads,” Holden reflected in 1992. “Who really knows what those endless reruns are doing to us?” Even now, the internet has only just begun to transform not just how we watch movies, but how we communicate, conduct our daily lives, and even think. We can all see something of ourselves in Jacob Maker — and on today’s internet, we can see it much more clearly.

Related content:

The Very First Webcam Was Invented to Keep an Eye on a Coffee Pot at Cambridge University

Cyberpunk: 1990 Documentary Featuring William Gibson & Timothy Leary Introduces the Cyberpunk Culture

Darwin: A 1993 Film by Peter Greenaway

Mesmerizing Timelapse Film Captures the Wonder of Bees Being Born

The First Music Streaming Service Was Invented in 1881: Discover the Théâtrophone

4,000+ Free Movies Online: Great Classics, Indies, Noir, Westerns, Documentaries & More


Remembering Dave Smith (RIP), the Father of MIDI & the Creator of the 80s’ Most Beloved Synthesizer, the Prophet‑5

Some founders rest on their laurels, build industries around themselves like a cocoon, and never escape or outgrow the big achievement that made their name. Some, like Dave Smith — the so-called “father of MIDI,” and one of the most innovative synthesizer pioneers of the last several decades — didn’t stop creating long enough to collect dust. You may never have heard of Smith, but you’ve heard his technology. Before pioneering MIDI (Musical Instrument Digital Interface), the digital standard that allows hundreds of electronic instruments to play nicely with each other across computer and software makers, Smith founded Sequential Circuits and built one of the most revered synthesizers ever made: the Prophet‑5, invented in 1977 and essential to the sound of the 1980s and beyond.
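Part of MIDI’s staying power is its simplicity: the most common message, note-on, is just three bytes, per the published MIDI 1.0 byte layout. A minimal sketch:

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a MIDI note-on message: a status byte (0x90 OR'd with a
    4-bit channel), a note number (0-127, middle C = 60), and a
    velocity (0-127, how hard the key was struck)."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])

msg = note_on(0, 60, 100)  # middle C on channel 1, played fairly hard
print(msg.hex())  # → 903c64
```

That any manufacturer’s gear could parse these few bytes is what let instruments from rival makers finally “play nicely with each other.”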

Smith’s keyboards made appearances on stage, video, and albums throughout the decade. Duran Duran’s Nick Rhodes used the Prophet‑5 on the band’s first album and “virtually every record I have made since then,” he said in a statement. “Without Dave’s vision and ingenuity,” Rhodes went on, “the sound of the 1980s would have been very different; he truly changed the sonic soundscape of a generation.”

Sequential synths appeared on albums by bands as disparate as The Cure and Daryl Hall & John Oates, who demonstrate the dream-like, ethereal capabilities of the Prophet‑5 — the first fully programmable polyphonic analog synth — in “I Can’t Go for That (No Can Do).” The Prophet‑5 also drove the sound of Radiohead’s Kid A, and indie dance darlings Hot Chip wrote that they would be “nothing without what [Smith] created.” Few vintage synths are as desirable as the Prophet‑5.

The original Prophet is “not immune to the dark side of vintage synths,” writes Vintage Synth Explorer, citing problems such as unstable tuning and a lack of MIDI. Smith fixed the latter issue himself with new iterations of the Prophet and other synths featuring his most famous post-Prophet‑5 technology. “Like so many brilliant and creative people,” the MIDI Association writes, Smith “always focused on the future.” He was “not actually a big fan of being called the ‘Father of MIDI.’ ” Many people contributed to the development of the technology, especially Roland founder Ikutaro Kakehashi, who won a technical Grammy with Smith in 2013 for the protocol, which made its debut as a new standard in 1983.

Smith preferred making hardware instruments and “almost begrudgingly accepted interviews about his contributions to MIDI…. He was also not a big fan of organizations, committees and meetings.” He was a synth lover’s synth maker, a designer and engineer with a “deep understanding of what musicians wanted,” says Rhodes. Collaborations with Yamaha and Korg produced more software innovations in the 90s, but in the 2000s, Smith returned to Sequential Circuits and debuted the Prophet X, the Prophet‑6, and, with Tom Oberheim, the OB‑6. The two designers collaborated in 2021 on the Oberheim OB-X8, and Smith introduced it just weeks before his death.

He had traveled a long way from inventing the Prophet‑5 in 1977 and presenting a paper to the Audio Engineering Society in 1981 on what he then called a Universal Synthesizer Interface. Smith himself never seemed to stop and look back, but lovers of his famous instruments are happy we still can, and that electronic instruments and computers can talk to each other easily thanks to MIDI. Few of those instruments sound as good as the original, however. See a demonstration of the Prophet‑5’s range of sounds in the video just above, and hear more tracks that show off the synth in the list here.

Related Content:

The Story of the SynthAxe, the Astonishing 1980s Guitar Synthesizer: Only 100 Were Ever Made

Wendy Carlos Demonstrates the Moog Synthesizer on the BBC (1970)

Thomas Dolby Explains How a Synthesizer Works on a Jim Henson Kids Show (1989)

