What Happens When Artificial Intelligence Creates Images to Match the Lyrics of Iconic Songs: David Bowie’s “Starman,” Led Zeppelin’s “Stairway to Heaven,” ELO’s “Mr. Blue Sky” & More

Lyricists must write concretely enough to be evocative, yet vaguely enough to allow each listener his personal interpretation. The nineteen-sixties and seventies saw an especially rich balance struck between resonant ambiguity and massive popularity — aided, as many involved parties have admitted, by the use of certain psychoactive substances. Half a century later, the visions induced by those same substances offer the closest comparison to the striking fruits of visual artificial-intelligence projects like Google’s Deep Dream a few years ago or DALL-E today. Only natural, perhaps, that these advanced applications would sooner or later be fed psychedelic song lyrics.

The video at the top of the post presents the Electric Light Orchestra’s 1977 hit “Mr. Blue Sky” illustrated by images generated by artificial intelligence straight from its words. This came as a much-anticipated endeavor for YouTube channel SolarProphet, which has also put up similarly AI-accompanied presentations of such already goofy-image-filled comedy songs as Lemon Demon’s “The Ultimate Showdown” and Neil Cicierega’s “It’s Gonna Get Weird.”

YouTuber Daara has also created ten entries in this new genre, including Queen’s “Don’t Stop Me Now,” The Eagles’ “Hotel California,” and (the recently-featured-on-Open-Culture) Kate Bush’s “Running Up That Hill.”

Just above appears a video for David Bowie’s “Starman” with AI-visualized lyrics, created by YouTuber Aidontknow. Created isn’t too strong a word, since DALL-E and other applications currently available to the public provide a selection of images for each prompt, leaving it to human users to provide specifics about the aesthetic — and, in the case of these videos, to select the result that best suits each line. One delight of this particular production, apart from the boogieing children, is seeing how the AI imagines various starmen waiting in the sky, all of whom look suspiciously like early-seventies Bowie. Of all his songs of that period, surely “Life on Mars?” would be choice number one for an AI music video — but then, its imagery may well be too bizarre for current technology to handle.

Related content:

Discover DALL-E, the Artificial Intelligence Artist That Lets You Create Surreal Artwork

Artificial Intelligence Program Tries to Write a Beatles Song: Listen to “Daddy’s Car”

What Happens When Artificial Intelligence Listens to John Coltrane’s Interstellar Space & Starts to Create Its Own Free Jazz

Artificial Intelligence Writes a Piece in the Style of Bach: Can You Tell the Difference Between JS Bach and AI Bach?

Artificial Intelligence Creates Realistic Photos of People, None of Whom Actually Exist

Nick Cave Answers the Hotly Debated Question: Will Artificial Intelligence Ever Be Able to Write a Great Song?

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities, the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall, on Facebook, or on Instagram.

Computer Scientist Andrew Ng Presents a New Series of Machine Learning Courses–an Updated Version of the Popular Course Taken by 5 Million Students

Back in 2017, Coursera co-founder and former Stanford computer science professor Andrew Ng launched a five-part series of courses on “Deep Learning” on the edtech platform, a series meant to “help you master Deep Learning, apply it effectively, and build a career in AI.” These courses extended his initial Machine Learning course, which has attracted almost 5 million students since 2012, in an effort, he said, to build “a new AI-powered society.”

Ng’s goals are ambitious: to “teach millions of people to use these AI tools so they can go and invent the things that no large company, or company I could build, could do.” His new Machine Learning Specialization at Coursera takes him several steps further in that direction with an “updated version of [his] pioneering Machine Learning course,” notes Coursera’s description, providing “a broad introduction to modern machine learning.” The specialization’s three courses include 1) Supervised Machine Learning: Regression and Classification, 2) Advanced Learning Algorithms, and 3) Unsupervised Learning, Recommenders, Reinforcement Learning. Collectively, the courses in the specialization will teach you to:

  • Build machine learning models in Python using popular machine learning libraries NumPy and scikit-learn (see the short example after this list).
  • Build and train supervised machine learning models for prediction and binary classification tasks, including linear regression and logistic regression.
  • Build and train a neural network with TensorFlow to perform multi-class classification.
  • Apply best practices for machine learning development so that your models generalize to data and tasks in the real world.
  • Build and use decision trees and tree ensemble methods, including random forests and boosted trees.
  • Use unsupervised learning techniques, including clustering and anomaly detection.
  • Build recommender systems with a collaborative filtering approach and a content-based deep learning method.
  • Build a deep reinforcement learning model.
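To make the first two items concrete, here is a minimal sketch of the kind of exercise the specialization walks you through: fitting a linear regression with NumPy and scikit-learn. The toy housing data and variable names are our own illustration, not material from the course itself.

    # A minimal sketch: fitting a linear regression with NumPy and scikit-learn.
    # The toy data below is invented for illustration and is not from the course.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # House sizes (in thousands of square feet) and prices (in thousands of dollars).
    X = np.array([[1.0], [1.5], [2.0], [2.5], [3.0]])
    y = np.array([150.0, 200.0, 250.0, 300.0, 350.0])

    model = LinearRegression()
    model.fit(X, y)

    print(model.coef_, model.intercept_)        # learned slope and intercept
    print(model.predict(np.array([[1.8]])))     # predicted price for a 1,800-square-foot home

The later courses build the same workflow up through neural networks in TensorFlow, tree ensembles, clustering, recommenders, and reinforcement learning.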

The skills students learn in Ng’s specialization will bring them closer to careers in big data, machine learning, and AI engineering. Enroll in Ng’s Specialization here free for 7 days and explore the materials in all three courses. If you’re convinced it’s for you, you’ll pay $49 per month until you complete all three courses, and you’ll earn a certificate upon completion of a hands-on project using all of your new machine learning skills. You can sign up for the Machine Learning Specialization here.

Note: Open Culture has a partnership with Coursera. If readers enroll in certain Coursera courses and programs, it helps support Open Culture.

Related Content:

New Deep Learning Courses Released on Coursera, with Hope of Teaching Millions the Basics of Artificial Intelligence

Coursera Makes Courses & Certificates Free During Coronavirus Quarantine: Take Courses in Psychology, Music, Wellness, Professional Development & More Online

Google & Coursera Launch Career Certificates That Prepare Students for Jobs in 6 Months: Data Analytics, Project Management and UX Design

Google Unveils a Digital Marketing & E-Commerce Certificate: 7 Courses Will Help Prepare Students for an Entry-Level Job in 6 Months

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

Discover DALL‑E, the Artificial Intelligence Artist That Lets You Create Surreal Artwork

DALL-E, an artificial intelligence system that generates viable-looking art in a variety of styles in response to user-supplied text prompts, has been garnering a lot of interest since it debuted this spring.

It has yet to be released to the general public, but while we’re waiting, you could have a go at DALL-E Mini, an open-source AI model that generates a grid of images inspired by any phrase you care to type into its search box.

Co-creator Boris Dayma explains how DALL-E Mini learns by viewing millions of captioned online images:

Some of the concepts are learnt (sic) from memory as it may have seen similar images. However, it can also learn how to create unique images that don’t exist such as “the Eiffel tower is landing on the moon” by combining multiple concepts together.

Several models are combined together to achieve these results:

• an image encoder that turns raw images into a sequence of numbers with its associated decoder

• a model that turns a text prompt into an encoded image

• a model that judges the quality of the images generated for better filtering
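For readers curious how those pieces fit together, the toy Python sketch below mimics the overall flow: turn a prompt into several candidate encoded images, decode them, score each against the prompt, and return the best-ranked candidates. Every function here is an invented stand-in for illustration only; it is not DALL-E Mini’s actual code, which uses real trained models for each stage.

    # A toy, self-contained illustration of the multi-model pipeline described above.
    # All functions are invented stand-ins; this is not DALL-E Mini's actual code.
    import random

    def text_to_image_tokens(prompt, seed):
        # Stand-in for the model that turns a text prompt into an encoded image
        # (here, a short sequence of discrete codes).
        rng = random.Random(f"{prompt}:{seed}")
        return [rng.randint(0, 255) for _ in range(16)]

    def decode_tokens(tokens):
        # Stand-in for the image decoder that turns encoded images back into pixels.
        return [[code, 255 - code] for code in tokens]

    def score(image, prompt):
        # Stand-in for the model that judges how well a generated image fits the prompt.
        return sum(pixel for row in image for pixel in row) / len(prompt)

    def generate_grid(prompt, n_candidates=9):
        # Sample several candidates, then rank them with the scoring model,
        # mirroring the grid of candidate images DALL-E Mini shows the user.
        candidates = [decode_tokens(text_to_image_tokens(prompt, s)) for s in range(n_candidates)]
        return sorted(candidates, key=lambda img: score(img, prompt), reverse=True)

    grid = generate_grid("the Eiffel tower is landing on the moon")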

My first attempt to generate some art using DALL-E mini failed to yield the hoped-for weirdness. I blame the blandness of my search term — “tomato soup.”

Perhaps I’d have better luck with “Andy Warhol eating a bowl of tomato soup as a child in Pittsburgh.”

Ah, there we go!

I was curious to know how DALL-E Mini would riff on its namesake artist’s handle (an honor Dalí shares with the titular AI hero of Pixar’s 2008 animated feature, WALL-E).

Hmm… seems like we’re backsliding a bit.

Let me try “Andy Warhol eating a bowl of tomato soup as a child in Pittsburgh with Salvador Dali.”

Ye gods! That’s the stuff of nightmares, but it also strikes me as pretty legit modern art. Love the sparing use of red. Well done, DALL-E mini.

At this point, vanity got the better of me and I did the AI art-generating equivalent of googling my own name, adding “in a tutu” because who among us hasn’t dreamed of being a ballerina at some point?

Let that be a lesson to you, Pandora…

Hopefully we’re all planning to use this playful open-source AI tool for good, not evil.

Hyperallergic’s Sarah Rose Sharp raised some valid concerns in relation to the original, more sophisticated DALL-E:

It’s all fun and games when you’re generating “robot playing chess” in the style of Matisse, but dropping machine-generated imagery on a public that seems less capable than ever of distinguishing fact from fiction feels like a dangerous trend.

Additionally, DALL-E’s neural network can yield sexist and racist images, a recurring issue with AI technology. For instance, a reporter at Vice found that prompts including search terms like “CEO” exclusively generated images of White men in business attire. OpenAI, the company behind DALL-E, acknowledges that the system “inherits various biases from its training data, and its outputs sometimes reinforce societal stereotypes.”

Co-creator Dayma does not duck the troubling implications and biases his baby could unleash:

While the capabilities of image generation models are impressive, they may also reinforce or exacerbate societal biases. While the extent and nature of the biases of the DALL·E mini model have yet to be fully documented, given the fact that the model was trained on unfiltered data from the Internet, it may generate images that contain stereotypes against minority groups. Work to analyze the nature and extent of these limitations is ongoing, and will be documented in more detail in the DALL·E mini model card.

The New Yorker cartoonists Ellis Rosen and Jason Adam Katzenstein conjure another way in which DALL-E mini could break with the social contract:

And a Twitter user who goes by St. Rev. Dr. Rev blows minds and opens multiple cans of worms, using panels from cartoonist Joshua Barkman’s beloved webcomic, False Knees:

Proceed with caution, and play around with DALL-E mini here.

Get on the waitlist for original-flavor DALL-E access here.

 

Related Content

Artificial Intelligence Brings to Life Figures from 7 Famous Paintings: The Mona Lisa, Birth of Venus & More

Google App Uses Machine Learning to Discover Your Pet’s Look Alike in 10,000 Classic Works of Art

Artificial Intelligence for Everyone: An Introductory Course from Andrew Ng, the Co-Founder of Coursera

Ayun Halliday is the Chief Primatologist of the East Village Inky zine and author, most recently, of Creative, Not Famous: The Small Potato Manifesto. Follow her @AyunHalliday.

How Peter Jackson Used Artificial Intelligence to Restore the Video & Audio Featured in The Beatles: Get Back

Much has been made in recent years of the “de-aging” processes that allow actors to credibly play characters far younger than themselves. But it has also become possible to de-age film itself, as demonstrated by Peter Jackson’s celebrated new docu-series The Beatles: Get Back. The vast majority of the material that comprises its nearly eight-hour runtime was originally shot in 1969, under the direction of Michael Lindsay-Hogg for the documentary that became Let It Be.

Those who have seen both Lindsay-Hogg’s and Jackson’s documentaries will notice how much sharper, smoother, and more vivid the very same footage looks in the latter, despite the sixteen-millimeter film having languished for half a century. The kind of visual restoration and enhancement seen in Get Back was made possible by technologies that have emerged only in the past few decades — technologies previously seen in Jackson’s They Shall Not Grow Old, a documentary acclaimed for its restoration of century-old World War I footage to a time-travel-like degree of verisimilitude.

“You can’t actually just do it with off-the-shelf software,” Jackson explained in an interview about the restoration processes involved in They Shall Not Grow Old. This necessitated marshaling, at his New Zealand company Park Road Post Production, “a department of code writers who write computer code in software.” In other words, a sufficiently ambitious project of visual revitalization — making media from bygone times even more lifelike than it was to begin with — becomes as much a job of computer programming as of traditional film restoration or visual effects.

This also goes for the less obvious but no-less-impressive treatment given by Jackson and his team to the audio that came with the Let It Be footage. Recorded in large part monaurally, these tapes presented a formidable production challenge. John, Paul, George, and Ringo’s instruments share a single track with their voices — and not just their singing voices, but their speaking ones as well. On first listen, this renders many of their conversations inaudible, and probably by design: “If they were in a conversation,” said Jackson, “they would turn their amps up loud and they’d strum the guitar.”

This means of keeping their words from Lindsay-Hogg and his crew worked well enough in the wholly analog late 1960s, but it has proven no match for the artificial intelligence and machine learning of the 2020s. “We devised a technology that is called demixing,” said Jackson. “You teach the computer what a guitar sounds like, you teach them what a human voice sounds like, you teach it what a drum sounds like, you teach it what a bass sounds like.” Supplied with enough sonic data, the system eventually learned to distinguish from one another not just the sounds of the Beatles’ instruments but of their voices as well.
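Jackson’s team built its own demixing tools at Park Road Post Production, but the underlying idea of teaching a model to pull individual stems out of a single mixed track can be tried with open-source source-separation libraries. Below is a minimal sketch using Deezer’s Spleeter, offered purely as an illustration of the general technique; the file names are placeholders, and this is of course not the system used on Get Back.

    # A minimal sketch of audio "demixing" (source separation) with the open-source
    # Spleeter library: an illustration of the general idea, not Jackson's system.
    from spleeter.separator import Separator

    # Load a pretrained model that splits a mix into four stems:
    # vocals, drums, bass, and "other" (guitars, keyboards, and so on).
    separator = Separator("spleeter:4stems")

    # "rooftop_mix.wav" is a placeholder for any mono or stereo recording.
    # One audio file per separated stem is written to the output directory.
    separator.separate_to_file("rooftop_mix.wav", "separated/")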

Hence, in addition to Get Back’s revelatory musical moments, its many once-private but now crisply audible exchanges between the Fab Four. “Oh, you’re recording our conversation?” George Harrison at one point asks Lindsay-Hogg in a characteristic tone of faux surprise. But if he could hear the recordings today, his surprise would surely be real.

Related Content:

Watch Paul McCartney Compose The Beatles Classic “Get Back” Out of Thin Air (1969)

Peter Jackson Gives Us an Enticing Glimpse of His Upcoming Beatles Documentary The Beatles: Get Back

Lennon or McCartney? Scientists Use Artificial Intelligence to Figure Out Who Wrote Iconic Beatles Songs

Artificial Intelligence Program Tries to Write a Beatles Song: Listen to “Daddy’s Car”

Watch The Beatles Perform Their Famous Rooftop Concert: It Happened 50 Years Ago Today (January 30, 1969)

How Peter Jackson Made His State-of-the-Art World War I Documentary They Shall Not Grow Old: An Inside Look

Based in Seoul, Colin Marshall writes and broadcasts on cities and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.

Artificial Intelligence for Everyone: An Introductory Course from Andrew Ng, the Co-Founder of Coursera

If you follow edtech, you know the name Andrew Ng. He’s the Stanford computer science professor who co-founded MOOC provider Coursera and later became chief scientist at Baidu. Since leaving Baidu, he’s been working on several artificial intelligence projects, including a series of Deep Learning courses that he unveiled in 2017. And now comes AI for Everyone, an online course that makes artificial intelligence intelligible to a broad audience.

In this largely non-technical course, students will learn:

  • The meaning behind common AI terminology, including neural networks, machine learning, deep learning, and data science.
  • What AI realistically can and cannot do.
  • How to spot opportunities to apply AI to problems in your own organization.
  • What it feels like to build machine learning and data science projects.
  • How to work with an AI team and build an AI strategy in an organization.
  • How to navigate ethical and societal discussions surrounding AI.

The four-week course takes about eight hours to complete. You can audit it for free. However, if you want to earn a certificate (which you can then share on your LinkedIn profile, printed resumes, and CVs), the course will run $49.

AI for Everyone will be added to our list of Free Computer Science courses, a subset of our larger collection, 1,700 Free Online Courses from Top Universities.

Related Content:

Nick Cave Answers the Hotly Debated Question: Will Artificial Intelligence Ever Be Able to Write a Great Song?

Artificial Intelligence Brings Salvador Dalí Back to Life: “Greetings, I Am Back”

Artificial Intelligence Identifies the Six Main Arcs in Storytelling: Welcome to the Brave New World of Literary Criticism

New Deep Learning Courses Released on Coursera, with Hope of Teaching Millions the Basics of Artificial Intelligence


A Free Oxford Course on Deep Learning: Cutting Edge Lessons in Artificial Intelligence

Nando de Freitas is a “machine learning professor at Oxford University, a lead research scientist at Google DeepMind, and a Fellow of the Canadian Institute For Advanced Research (CIFAR) in the Neural Computation and Adaptive Perception program.”

Above, you can watch him teach an Oxford course on Deep Learning, a hot subfield of machine learning and artificial intelligence which creates neural networks — essentially complex algorithms modeled loosely after the human brain — that can recognize patterns and learn to perform tasks.

To complement the 16 lectures, you can also find lecture slides, practicals, and problem sets on this Oxford website. If you’d like to learn about Deep Learning in a MOOC format, be sure to check out the new series of courses created by Andrew Ng on Coursera.

Oxford’s Deep Learning course will be added to our list of Free Online Computer Science Courses, part of our meta collection, 1,700 Free Online Courses from Top Universities.

Related Content:

Google Launches Free Course on Deep Learning: The Science of Teaching Computers How to Teach Themselves

New Deep Learning Courses Released on Coursera, with Hope of Teaching Millions the Basics of Artificial Intelligence

Neural Networks for Machine Learning: A Free Online Course


Google Launches a Free Course on Artificial Intelligence: Sign Up for Its New “Machine Learning Crash Course”

As part of an effort to make Artificial Intelligence more comprehensible to the broader public, Google has created an educational website, Learn with Google AI, which includes, among other things, a new online course called Machine Learning Crash Course. The course provides “exercises, interactive visualizations, and instructional videos that anyone can use to learn and practice [Machine Learning] concepts.” To date, more than 18,000 Googlers have enrolled in the course. And now it’s available for everyone, everywhere. You can supplement it with other AI courses found in the Relateds below.

Machine Learning Crash Course will be added to our list of Free Online Computer Science Courses, a subset of our collection, 1,700 Free Online Courses from Top Universities.


via Google Blog

Related Content:

Artificial Intelligence: A Free Online Course from MIT

Google Launches Free Course on Deep Learning: The Science of Teaching Computers How to Teach Themselves

New Deep Learning Courses Released on Coursera, with Hope of Teaching Millions the Basics of Artificial Intelligence

Neural Networks for Machine Learning: A Free Online Course

 


New Deep Learning Courses Released on Coursera, with Hope of Teaching Millions the Basics of Artificial Intelligence

FYI: If you follow edtech, you know the name Andrew Ng. He’s the Stanford computer science professor who co-founded MOOC provider Coursera and later became chief scientist at Baidu. Since leaving Baidu, he’s been working on three artificial intelligence projects, the first of which he unveiled yesterday. On Medium, he wrote:

I have been working on three new AI projects, and am thrilled to announce the first one: deeplearning.ai, a project dedicated to disseminating AI knowledge, is launching a new sequence of Deep Learning courses on Coursera. These courses will help you master Deep Learning, apply it effectively, and build a career in AI.

Speaking to the MIT Technology Review, Ng elaborated: “The thing that really excites me today is building a new AI-powered society… I don’t think any one company could do all the work that needs to be done, so I think the only way to get there is if we teach millions of people to use these AI tools so they can go and invent the things that no large company, or company I could build, could do.”

Andrew’s new 5-part series of courses on Deep Learning can be accessed here. Courses include: Neural Networks and Deep Learning, Improving Deep Neural Networks, Structuring Machine Learning Projects, Convolutional Neural Networks, and Sequence Models.

You can find these courses on our list of Free Computer Science Courses, a subset of our collection, 1,700 Free Online Courses from Top Universities.


Related Content:

Google Launches Free Course on Deep Learning: The Science of Teaching Computers How to Teach Themselves

Google’s DeepMind AI Teaches Itself to Walk, and the Results Are Kooky, No Wait, Chilling

Artificial Intelligence: A Free Online Course from MIT
