Amazon Offers Free AI Courses, Aiming to Help 2 Million People Build AI Skills by 2025

Late last year, Amazon announced AI Ready, a new initiative “designed to provide free AI skills training to 2 million people globally by 2025.” This includes eight free AI and generative AI courses, some designed for beginners, and others designed for more advanced students.

As the Wall Street Journal podcast notes above, Amazon created the AI Ready initiative with three goals in mind: 1) to increase the overall number of people in the workforce who have a basic understanding of AI, 2) to compete with Microsoft and other big companies for AI talent, and 3) to expose a large number of people to Amazon’s AI systems.

If you’re new to AI, you may want to explore these AI Ready courses:

You can find more information (including more free courses) on this AI Ready page. We have other free AI courses listed in the Related Content section below.

Note: Until February 1, 2024, Coursera is running a special deal where you can get $200 off Coursera Plus and gain unlimited access to courses & certificates, including a lot of courses on AI. Get details here.

Related Content

Artificial Intelligence for Everyone: An Introductory Course from Andrew Ng, the Co-Founder of Coursera

A New Course Teaches You How to Tap the Powers of ChatGPT and Put It to Work for You

Generative AI for Everyone: A Free Course from AI Pioneer Andrew Ng

Google Launches a Free Course on Artificial Intelligence: Sign Up for Its New “Machine Learning Crash Course”

How to Learn Data Analytics in 2024: Earn a Professional Certificate That Will Help Prepare You for a Job in 6 Months

Generative AI for Everyone: A Free Course from AI Pioneer Andrew Ng

Andrew Ng, an AI pioneer and Stanford computer science professor, has released a new course called Generative AI for Everyone. Designed for a non-technical audience, the course will “guide you through how generative AI works and what it can (and can’t) do. It includes hands-on exercises where you’ll learn to use generative AI to help in day-to-day work.” The course also explains “how to think through the lifecycle of a generative AI project, from conception to launch, including how to build effective prompts,” and it discusses “the potential opportunities and risks that generative AI technologies present to individuals, businesses, and society.” Given the coming prevalence of AI, it’s worth spending six hours with this course (the estimated time needed to complete it). You can audit Generative AI for Everyone and watch all of the lectures at no cost. If you would like to take the course and earn a certificate, it will cost $49.

Generative AI for Everyone will be added to our collection, 1,700 Free Online Courses from Top Universities.

Related Content

Google Launches a Free Course on Artificial Intelligence: Sign Up for Its New “Machine Learning Crash Course”

Computer Scientist Andrew Ng Presents a New Series of Machine Learning Courses–an Updated Version of the Popular Course Taken by 5 Million Students

Stephen Fry Reads Nick Cave’s Stirring Letter About ChatGPT and Human Creativity: “We Are Fighting for the Very Soul of the World”

How Will AI Change the World?: A Captivating Animation Explores the Promise & Perils of Artificial Intelligence

Many of us can remember a time when artificial intelligence was widely dismissed as a science-fictional pipe dream unworthy of serious research and investment. That time, safe to say, has gone. “Within a decade,” writes blogger Samuel Hammond, the development of artificial intelligence could bring about a world in which “ordinary people will have more capabilities than a CIA agent does today. You’ll be able to listen in on a conversation in an apartment across the street using the sound vibrations off a chip bag” (as previously featured here on Open Culture). “You’ll be able to replace your face and voice with those of someone else in real time, allowing anyone to socially engineer their way into anything.”

And that’s the benign part. “Death-by-kamikaze drone will surpass mass shootings as the best way to enact a lurid revenge. The courts, meanwhile, will be flooded with lawsuits because who needs to pay attorney fees when your phone can file an airtight motion for you?” All this “will be enough to make the stablest genius feel schizophrenic.” But “it doesn’t have to be this way. We can fight AI fire with AI fire and adapt our practices along the way.” You can hear a considered take on how we might manage that in the animated TED-Ed video above, adapted from an interview with computer scientist Stuart Russell, author of the popular textbook Artificial Intelligence: A Modern Approach as well as Human Compatible: Artificial Intelligence and the Problem of Control.

“The problem with the way we build AI systems now is we give them a fixed objective,” Russell says. “The algorithms require us to specify everything in the objective.” Thus an AI charged with de-acidifying the oceans could quite plausibly come to the solution of setting off “a catalytic reaction that does that extremely efficiently, but consumes a quarter of the oxygen in the atmosphere, which would apparently cause us to die fairly slowly and unpleasantly over the course of several hours.” The key to this problem, Russell argues, is to program in a certain lack of confidence: “It’s when you build machines that believe with certainty that they have the objective, that’s when you get sort of psychopathic behavior, and I think we see the same thing in humans.”

A less existential but more common worry has to do with unemployment. Full AI automation of the warehouse tasks still performed by humans, for example, “would, at a stroke, eliminate three or four million jobs.” Russell here turns to E. M. Forster, who in the 1909 story “The Machine Stops” envisions a future in which “everyone is entirely machine-dependent,” with lives not unlike the e-mail- and Zoom-meeting-filled ones we lead today. The narrative plays out as a warning that “if you hand over the management of your civilization to machines, you then lose the incentive to understand it yourself or to teach the next generation how to understand it.” The mind, as the saying goes, is a wonderful servant but a terrible master. The same is true of machines — and even truer, we may well find, of mechanical minds.

Related content:

Discover DALL‑E, the Artificial Intelligence Artist That Lets You Create Surreal Artwork

Experts Predict When Artificial Intelligence Will Take Our Jobs: From Writing Essays, Books & Songs, to Performing Surgery and Driving Trucks

Sci-Fi Writer Arthur C. Clarke Predicts the Future in 1964: Artificial Intelligence, Instantaneous Global Communication, Remote Work, Singularity & More

Stephen Fry Voices a New Dystopian Short Film About Artificial Intelligence & Simulation Theory: Watch Escape

Stephen Hawking Wonders Whether Capitalism or Artificial Intelligence Will Doom the Human Race

Hunter S. Thompson Chillingly Predicts the Future, Telling Studs Terkel About the Coming Revenge of the Economically & Technologically “Obsolete” (1967)

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities, the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.

Behold Illustrations of Every Shakespeare Play Created by Artificial Intelligence

William Shakespeare’s plays have endured not just because of their inherent dramatic and linguistic qualities, but also because each era has found its own way of envisioning and re-envisioning them. The technology involved in stage productions has changed over the past four centuries, of course, but so has the technology involved in art itself. A few years ago, we featured here on Open Culture an archive of 3,000 illustrations of Shakespeare’s complete works going back to the mid-nineteenth century. That site was the PhD project of Cardiff University’s Michael Goodman, who has recently completed another digital Shakespeare project, this time using artificial intelligence: Paint the Picture to the Word.

“Every image collected here has been generated by Stable Diffusion, a powerful text-to-image AI,” writes Goodman on this new project’s About page. “To create an image using this technology a user simply types a description of what they want to see into a text box and the AI will then produce several images corresponding to that initial textual prompt,” much as with the also-new AI-based art generator DALL‑E.

Each of the many images Goodman created is inspired by a Shakespeare play. “Some of the illustrations are expressionistic (King John, Julius Caesar), while some are more literal (Merry Wives of Windsor).” All “offer a visual idea or a gloss on the plays: Henry VIII, with the central characters represented in fuzzy felt, is grimly ironic, while in Pericles both Marina and her father are seen through a watery prism, echoing that play’s concern with sea imagery.”

Selecting one of his many generated images per play, Goodman has created an entire digital exhibition whose works never repeat a style or a sensibility, whether with a dog-centric nineteen-eighties collage representing Two Gentlemen of Verona, a starkly near-abstract vision of Macbeth’s Weird Sisters or Much Ado About Nothing rendered as a modern-day rom-com. Theater companies could hardly fail to take notice of these images’ potential as promotional posters, but Paint the Picture to the Word also demonstrates something larger: Shakespeare’s plays have long stimulated human intelligence, but they turn out to work on artificial intelligence as well. Visit Paint the Picture to the Word here.

Related content:

3,000 Illustrations of Shakespeare’s Complete Works from Victorian England, Neatly Presented in a New Digital Archive

John Austen’s Haunting Illustrations of Shakespeare’s Hamlet: A Masterpiece of the Aesthetic Movement (1922)

Folger Shakespeare Library Puts 80,000 Images of Literary Art Online, and They’re All Free to Use

Artificial Intelligence Brings to Life Figures from 7 Famous Paintings: The Mona Lisa, Birth of Venus & More

DALL‑E, the New AI Art Generator, Is Now Open for Everyone to Use

An AI-Generated Painting Won First Prize at a State Fair & Sparked a Debate About the Essence of Art


What Happens When Artificial Intelligence Creates Images to Match the Lyrics of Iconic Songs: David Bowie’s “Starman,” Led Zeppelin’s “Stairway to Heaven”, ELO’s “Mr. Blue Sky” & More

Lyricists must write concretely enough to be evocative, yet vaguely enough to allow each listener his personal interpretation. The nineteen-sixties and seventies saw an especially rich balance struck between resonant ambiguity and massive popularity — aided, as many involved parties have admitted, by the use of certain psychoactive substances. Half a century later, the visions induced by those same substances offer the closest comparison to the striking fruits of visual artificial-intelligence projects like Google’s Deep Dream a few years ago or DALL‑E today. Only natural, perhaps, that these advanced applications would sooner or later be fed psychedelic song lyrics.

The video at the top of the post presents the Electric Light Orchestra’s 1977 hit “Mr. Blue Sky” illustrated by images generated by artificial intelligence straight from its words. This came as a much-anticipated endeavor for Youtube channel SolarProphet, which has also put up similarly AI-accompanied presentations of such already goofy-image-filled comedy songs as Lemon Demon’s “The Ultimate Showdown” and Neil Cicierega’s “It’s Gonna Get Weird.”

Youtuber Daara has also created ten entries in this new genre, including Queen’s “Don’t Stop Me Now,” The Eagles’ “Hotel California,” and (the recently-featured-on-Open-Culture) Kate Bush’s “Running Up That Hill.”

Just above appears a video for David Bowie’s “Starman” with AI-visualized lyrics, created by Youtuber Aidontknow. Created isn’t too strong a word, since DALL‑E and other applications currently available to the public provide a selection of images for each prompt, leaving it to human users to provide specifics about the aesthetic — and, in the case of these videos, to select the result that best suits each line. One delight of this particular production, apart from the boogieing children, is seeing how the AI imagines various starmen waiting in the sky, all of whom look suspiciously like early-seventies Bowie. Of all his songs of that period, surely “Life on Mars?” would be choice number one for an AI music video — but then, its imagery may well be too bizarre for current technology to handle.

Related content:

Discover DALL‑E, the Artificial Intelligence Artist That Lets You Create Surreal Artwork

Artificial Intelligence Program Tries to Write a Beatles Song: Listen to “Daddy’s Car”

What Happens When Artificial Intelligence Listens to John Coltrane’s Interstellar Space & Starts to Create Its Own Free Jazz

Artificial Intelligence Writes a Piece in the Style of Bach: Can You Tell the Difference Between JS Bach and AI Bach?

Artificial Intelligence Creates Realistic Photos of People, None of Whom Actually Exist

Nick Cave Answers the Hotly Debated Question: Will Artificial Intelligence Ever Be Able to Write a Great Song?

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities, the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall, on Facebook, or on Instagram.

Computer Scientist Andrew Ng Presents a New Series of Machine Learning Courses–an Updated Version of the Popular Course Taken by 5 Million Students

Back in 2017, Coursera co-founder and former Stanford computer science professor Andrew Ng launched a five-part series of courses on “Deep Learning” on the edtech platform, a series meant to “help you master Deep Learning, apply it effectively, and build a career in AI.” These courses extended his initial Machine Learning course, which has attracted almost 5 million students since 2012, in an effort, he said, to build “a new AI-powered society.”

Ng’s goals are ambitious, to “teach millions of people to use these AI tools so they can go and invent the things that no large company, or company I could build, could do.” His new Machine Learning Specialization at Coursera takes him several steps further in that direction with an “updated version of [his] pioneering Machine Learning course,” notes Coursera’s description, providing “a broad introduction to modern machine learning.” The specialization’s three courses include 1) Supervised Machine Learning: Regression and Classification, 2) Advanced Learning Algorithms, and 3) Unsupervised Learning, Recommenders, Reinforcement Learning. Collectively, the courses in the specialization will teach you to:

  • Build machine learning models in Python using popular machine learning libraries NumPy and scikit-learn.
  • Build and train supervised machine learning models for prediction and binary classification tasks, including linear regression and logistic regression.
  • Build and train a neural network with TensorFlow to perform multi-class classification.
  • Apply best practices for machine learning development so that your models generalize to data and tasks in the real world.
  • Build and use decision trees and tree ensemble methods, including random forests and boosted trees.
  • Use unsupervised learning techniques, including clustering and anomaly detection.
  • Build recommender systems with a collaborative filtering approach and a content-based deep learning method.
  • Build a deep reinforcement learning model.
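To give a flavor of the first supervised-learning skill on that list, here is a toy linear regression fit by gradient descent in plain Python. It is an illustrative sketch, not material from the course (which uses NumPy and scikit-learn); the data and hyperparameters below are invented for the example.

```python
# Fit a linear model y = w*x + b by batch gradient descent on
# mean squared error. All numbers here are illustrative.

def fit_linear(xs, ys, lr=0.01, epochs=5000):
    """Return (w, b) minimizing mean((w*x + b - y)^2)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of the mean-squared-error loss w.r.t. w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Noise-free data generated from y = 2x + 1, so the fit should recover it.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]
w, b = fit_linear(xs, ys)
print(round(w, 2), round(b, 2))  # approximately 2.0 and 1.0
```

The same loop generalizes to logistic regression by swapping the loss for cross-entropy and passing the prediction through a sigmoid, which is roughly the progression the first course follows.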

The skills students learn in Ng’s specialization will bring them closer to careers in big data, machine learning, and AI engineering. Enroll in Ng’s Specialization here free for 7 days and explore the materials in all three courses. If you’re convinced the specialization is for you, you’ll pay $49 per month until you complete the three-course specialization, and you’ll earn a certificate upon completion of a hands-on project using all of your new machine learning skills. You can sign up for the Machine Learning Specialization here.

Note: Open Culture has a partnership with Coursera. If readers enroll in certain Coursera courses and programs, it helps support Open Culture.

Related Content:

New Deep Learning Courses Released on Coursera, with Hope of Teaching Millions the Basics of Artificial Intelligence

Coursera Makes Courses & Certificates Free During Coronavirus Quarantine: Take Courses in Psychology, Music, Wellness, Professional Development & More Online

Google & Coursera Launch Career Certificates That Prepare Students for Jobs in 6 Months: Data Analytics, Project Management and UX Design

Google Unveils a Digital Marketing & E-Commerce Certificate: 7 Courses Will Help Prepare Students for an Entry-Level Job in 6 Months

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

Discover DALL‑E, the Artificial Intelligence Artist That Lets You Create Surreal Artwork

DALL‑E, an artificial intelligence system that generates viable-looking art in a variety of styles in response to user-supplied text prompts, has been garnering a lot of interest since it debuted this spring.

It has yet to be released to the general public, but while we’re waiting, you could have a go at DALL‑E Mini, an open source AI model that generates a grid of images inspired by any phrase you care to type into its search box.

Co-creator Boris Dayma explains how DALL‑E Mini learns by viewing millions of captioned online images:

Some of the concepts are learnt (sic) from memory as it may have seen similar images. However, it can also learn how to create unique images that don’t exist such as “the Eiffel tower is landing on the moon” by combining multiple concepts together.

Several models are combined together to achieve these results:

• an image encoder that turns raw images into a sequence of numbers with its associated decoder

• a model that turns a text prompt into an encoded image

• a model that judges the quality of the images generated for better filtering
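The first component, an encoder that turns an image into a sequence of discrete codes plus a decoder that inverts it, can be illustrated with a toy quantizer in Python. The codebook and pixel values below are invented for the example; real systems learn a much larger codebook from data.

```python
# A toy stand-in for an image encoder/decoder pair: each pixel is
# replaced by the index of its nearest codebook entry, and decoding
# maps indices back to prototype values. Purely illustrative.
codebook = [0.0, 0.25, 0.5, 0.75, 1.0]  # hand-picked "learned" prototypes

def encode(pixels):
    """Map each pixel to the index of its nearest codebook entry."""
    return [min(range(len(codebook)), key=lambda i: abs(codebook[i] - p))
            for p in pixels]

def decode(codes):
    """Reconstruct an (approximate) image from the code sequence."""
    return [codebook[i] for i in codes]

image = [0.1, 0.9, 0.52, 0.0]   # a tiny fake "image"
codes = encode(image)           # a short sequence of integers
restored = decode(codes)        # close to, but not exactly, the input
print(codes, restored)
```

The text-to-image model then only has to predict such integer sequences from a prompt, which is far easier than predicting raw pixels; the decoder turns the predicted codes back into a picture.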

My first attempt to generate some art using DALL‑E mini failed to yield the hoped-for weirdness. I blame the blandness of my search term — “tomato soup.”

Perhaps I’d have better luck with “Andy Warhol eating a bowl of tomato soup as a child in Pittsburgh.”

Ah, there we go!

I was curious to know how DALL‑E Mini would riff on its namesake artist’s handle (an honor Dali shares with the titular AI hero of Pixar’s 2008 animated feature, WALL‑E).

Hmm… seems like we’re backsliding a bit.

Let me try “Andy Warhol eating a bowl of tomato soup as a child in Pittsburgh with Salvador Dali.”

Ye gods! That’s the stuff of nightmares, but it also strikes me as pretty legit modern art. Love the sparing use of red. Well done, DALL‑E mini.

At this point, vanity got the better of me and I did the AI art-generating equivalent of googling my own name, adding “in a tutu” because who among us hasn’t dreamed of being a ballerina at some point?

Let that be a lesson to you, Pandora…

Hopefully we’re all planning to use this playful open AI tool for good, not evil.

Hyperallergic’s Sarah Rose Sharp raised some valid concerns in relation to the original, more sophisticated DALL‑E:

It’s all fun and games when you’re generating “robot playing chess” in the style of Matisse, but dropping machine-generated imagery on a public that seems less capable than ever of distinguishing fact from fiction feels like a dangerous trend.

Additionally, DALL‑E’s neural network can yield sexist and racist images, a recurring issue with AI technology. For instance, a reporter at Vice found that prompts including search terms like “CEO” exclusively generated images of White men in business attire. The company acknowledges that DALL‑E “inherits various biases from its training data, and its outputs sometimes reinforce societal stereotypes.”

Co-creator Dayma does not duck the troubling implications and biases his baby could unleash:

While the capabilities of image generation models are impressive, they may also reinforce or exacerbate societal biases. While the extent and nature of the biases of the DALL·E mini model have yet to be fully documented, given the fact that the model was trained on unfiltered data from the Internet, it may generate images that contain stereotypes against minority groups. Work to analyze the nature and extent of these limitations is ongoing, and will be documented in more detail in the DALL·E mini model card.

The New Yorker cartoonists Ellis Rosen and Jason Adam Katzenstein conjure another way in which DALL‑E mini could break with the social contract:

And a Twitter user who goes by St. Rev. Dr. Rev blows minds and opens multiple cans of worms, using panels from cartoonist Joshua Barkman’s beloved webcomic, False Knees:

Proceed with caution, and play around with DALL‑E mini here.

Get on the waitlist for original flavor DALL‑E access here.


Related Content

Artificial Intelligence Brings to Life Figures from 7 Famous Paintings: The Mona Lisa, Birth of Venus & More

Google App Uses Machine Learning to Discover Your Pet’s Look Alike in 10,000 Classic Works of Art

Artificial Intelligence for Everyone: An Introductory Course from Andrew Ng, the Co-Founder of Coursera

Ayun Halliday is the Chief Primatologist of the East Village Inky zine and author, most recently, of Creative, Not Famous: The Small Potato Manifesto. Follow her @AyunHalliday.

How Peter Jackson Used Artificial Intelligence to Restore the Video & Audio Featured in The Beatles: Get Back

Much has been made in recent years of the “de-aging” processes that allow actors to credibly play characters far younger than themselves. But it has also become possible to de-age film itself, as demonstrated by Peter Jackson’s celebrated new docu-series The Beatles: Get Back. The vast majority of the material that comprises its nearly eight-hour runtime was originally shot in 1969, under the direction of Michael Lindsay-Hogg for the documentary that became Let It Be.

Those who have seen both Lindsay-Hogg’s and Jackson’s documentaries will notice how much sharper, smoother, and more vivid the very same footage looks in the latter, despite the sixteen-millimeter film having languished for half a century. The kind of visual restoration and enhancement seen in Get Back was made possible by technologies that have only emerged in the past few decades — and previously seen in Jackson’s They Shall Not Grow Old, a documentary acclaimed for its restoration of century-old World War I footage to a time-travel-like degree of verisimilitude.

“You can’t actually just do it with off-the-shelf software,” Jackson explained in an interview about the restoration processes involved in They Shall Not Grow Old. This necessitated marshaling, at his New Zealand company Park Road Post Production, “a department of code writers who write computer code in software.” In other words, a sufficiently ambitious project of visual revitalization — making media from bygone times even more lifelike than it was to begin with — becomes as much a job of computer programming as of traditional film restoration or visual effects.

This also goes for the less obvious but no-less-impressive treatment given by Jackson and his team to the audio that came with the Let It Be footage. Recorded in large part monaurally, these tapes presented a formidable production challenge. John, Paul, George, and Ringo’s instruments share a single track with their voices — and not just their singing voices, but their speaking ones as well. On first listen, this renders many of their conversations inaudible, and probably by design: “If they were in a conversation,” said Jackson, “they would turn their amps up loud and they’d strum the guitar.”

This means of keeping their words from Lindsay-Hogg and his crew worked well enough in the wholly analog late 1960s, but it has proven no match for the artificial intelligence/machine learning of the 2020s. “We devised a technology that is called demixing,” said Jackson. “You teach the computer what a guitar sounds like, you teach them what a human voice sounds like, you teach it what a drum sounds like, you teach it what a bass sounds like.” Supplied with enough sonic data, the system eventually learned to distinguish from one another not just the sounds of the Beatles’ instruments but of their voices as well.
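Jackson’s actual demixing system is a trained machine-learning model, but the underlying idea of pulling a known source out of a mono mixture can be sketched with a toy example in plain Python, where each “source” is just a pure tone at a known frequency and the separator keeps only the frequency bins belonging to the “voice.” Everything here is simplified for illustration; real demixers learn what each source sounds like rather than using a hand-built filter.

```python
# Toy source separation: isolate one tone from a mono mixture of two
# tones by masking its frequency bins in the discrete Fourier transform.
import cmath
import math

N = 256                       # number of audio samples
guitar_hz, voice_hz = 8, 40   # stand-in "guitar" and "voice" frequencies (DFT bins)

# Mono mixture of two pure tones, standing in for guitar + voice on one track.
mix = [math.sin(2 * math.pi * guitar_hz * n / N) +
       math.sin(2 * math.pi * voice_hz * n / N) for n in range(N)]

def dft(x):
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / len(x))
                for n in range(len(x))) for k in range(len(x))]

def idft(X):
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / len(X))
                for k in range(len(X))).real / len(X) for n in range(len(X))]

# Keep only the bins that belong to the "voice" (a real tone occupies
# bin f and its mirror bin N - f); zero out everything else.
spectrum = dft(mix)
voice_bins = [c if k in (voice_hz, N - voice_hz) else 0
              for k, c in enumerate(spectrum)]
voice = idft(voice_bins)

# The recovered signal should closely match the pure "voice" tone.
target = [math.sin(2 * math.pi * voice_hz * n / N) for n in range(N)]
err = max(abs(a - b) for a, b in zip(voice, target))
print(err < 1e-6)
```

A learned demixer replaces the hand-picked bin mask with a model that has seen enough examples of guitars, drums, and voices to estimate the mask itself, which is what lets it separate overlapping speech from strummed chords.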

Hence, in addition to Get Back’s revelatory musical moments, its many once-private but now crisply audible exchanges between the Fab Four. “Oh, you’re recording our conversation?” George Harrison at one point asks Lindsay-Hogg in a characteristic tone of faux surprise. But if he could hear the recordings today, his surprise would surely be real.

Related Content:

Watch Paul McCartney Compose The Beatles Classic “Get Back” Out of Thin Air (1969)

Peter Jackson Gives Us an Enticing Glimpse of His Upcoming Beatles Documentary The Beatles: Get Back

Lennon or McCartney? Scientists Use Artificial Intelligence to Figure Out Who Wrote Iconic Beatles Songs

Artificial Intelligence Program Tries to Write a Beatles Song: Listen to “Daddy’s Car”

Watch The Beatles Perform Their Famous Rooftop Concert: It Happened 50 Years Ago Today (January 30, 1969)

How Peter Jackson Made His State-of-the-Art World War I Documentary They Shall Not Grow Old: An Inside Look

Based in Seoul, Colin Marshall writes and broadcasts on cities and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.

Open Culture was founded by Dan Colman.