The Seven Road-Tested Habits of Effective Artists

Fifteen years ago, a young construction worker named Andrew Price went in search of free 3D software to help him achieve his goal of rendering a 3D car.

He stumbled onto Blender, a just-the-ticket open source application that helps users with every aspect of 3D creation: modeling, rigging, animation, simulation, rendering, compositing, and motion tracking.

Price describes his early learning style as “playing it by ear,” sampling tutorials, some of which he couldn’t be bothered to complete.

A desire for freelance gigs led him to forge a new identity, that of a Blender Guru, whose tutorials, podcasts, and articles would help other new users get the hang of the software.

But it wasn’t declaring himself an expert that ultimately improved his artistic skills. It was holding his own feet to the fire by placing a bet with his younger cousin, who stood to gain $1,000 if Price failed to rack up 1,000 “likes” by posting 2D drawings to ArtStation within a six-month period.

(If he succeeded, which he did, three days before his self-imposed deadline, his cousin owed him nothing. Loss aversion proved to be a more powerful motivator than any carrot on a stick.)

To snag the requisite likes, Price found that he needed to revise some habits and commit to a more robust daily practice, a journey he detailed in a presentation at the 2016 Blender Conference.

Price confesses that the challenge taught him much about drawing and painting, but even more about having an effective artistic practice. His seven rules apply to any number of creative forms:

Andrew Price’s Rules for an Effective Artistic Practice:

  1. Practice Daily

A number of prolific artists have subscribed to this belief over the years, including novelist (and mother!) J.K. Rowling, comedian Jerry Seinfeld, autobiographical performer Mike Birbiglia, and memoirist David Sedaris.

If you feel too fried to uphold your end of the bargain, pretend to go easy on yourself with a little trick Price picked up from music producer Rick Rubin: do the absolute minimum. You’ll likely find that performing the minimum positions you to do much more than that. Your resistance is not so much to the doing as it is to the embarking.

  2. Quantity over Perfectionism Masquerading as Quality

This harkens back to Rule Number One. Who are we to say which of our works will be judged worthy? Just keep putting it out there. Remember, it’s all practice, and the law of averages favors those whose output is, like Picasso’s, prodigious. Don’t stand in the way of progress by endlessly splitting hairs over a single work.

  3. Steal Without Ripping Off

Immerse yourself in the creative brilliance of those you admire. Then profit off your own improved efforts, a practice advocated by the likes of musician David Bowie, computer visionary Steve Jobs, and artist/social commentator Banksy.

  4. Educate Yourself

On its own, that old chestnut about practice making perfect is not sufficient to the task. Whether you seek out online tutorials, as Price did, enroll in a class, or designate a mentor, a conscientious commitment to studying your craft will help you better master it.

  5. Give Yourself a Break

Banging your head against the wall is not good for your brain. Price celebrates author Stephen King’s practice of giving the first draft of a new novel six weeks to marinate. Your break may be shorter; three days may be ample to juice you up creatively. Just make sure it’s in your calendar to get back to it.

  6. Seek Feedback

Filmmaker Taika Waititi, rapper Kanye West, and the big gorillas at Pixar are not threatened by others’ opinions. Seek them out. You may learn something.

  7. Create What You Want To

Passion projects are the key to creative longevity and a pleasurable process. Don’t cater to a fickle public or the shifting sands of fashion. Pursue the sorts of things that interest you.

Implicit in Price’s seven commandments is the notion that something may have to budge: your nightly cocktails, the number of hours spent on social media, that extra half hour in bed after the alarm goes off. Don’t neglect your familial or civic obligations, but neither should you shortchange your art. Life’s too short.

Read the transcript of Andrew Price’s Blender Conference presentation here.

Related Content:

The Daily Habits of Famous Writers: Franz Kafka, Haruki Murakami, Stephen King & More

The Daily Habits of Highly Productive Philosophers: Nietzsche, Marx & Immanuel Kant

How to Read Many More Books in a Year: Watch a Short Documentary Featuring Some of the World’s Most Beautiful Bookstores

Ayun Halliday is an author, illustrator, theater maker and Chief Primatologist of the East Village Inky zine. Join her in NYC on Monday, December 9 when her monthly book-based variety show, Necromancers of the Public Domain, celebrates Dennison’s Christmas Book (1921). Follow her @AyunHalliday.

How Margaret Hamilton Wrote the Computer Code That Helped Save the Apollo Moon Landing Mission

From a distance of half a century, we look back on the moon landing as a thoroughly analog affair, an old-school engineering project of the kind seldom even proposed anymore in this digital age. But the Apollo 11 mission could never have happened without computers and the people who program them, a fact that has become better known in recent years thanks to public interest in the work of Margaret Hamilton, director of the Software Engineering Division of MIT’s Instrumentation Laboratory when it developed on-board flight software for NASA’s Apollo space program. You can learn more about Hamilton, whom we’ve previously featured here on Open Culture, from the short MAKERS profile video above.

Today we consider software engineering a perfectly viable field, but back in the mid-1960s, when Hamilton first joined the Apollo project, it didn’t even have a name. “I came up with the term ‘software engineering,’ and it was considered a joke,” says Hamilton, who remembers her colleagues making remarks like, “What, software is engineering?”

But her own experience went some way toward proving that working in code had become as important as working in steel. Only by watching her young daughter play at the same controls the astronauts would later use did she realize that just one human error could potentially bring the mission to ruin, and that she could minimize the possibility by taking it into account when designing the mission’s software. Hamilton’s proposal met with resistance, NASA’s official line at the time being that “astronauts are trained never to make a mistake.”

But Hamilton persisted, prevailed, and was vindicated during the moon landing itself, when an astronaut did make a mistake, one that caused an overloading of the flight computer. The whole landing might have been aborted if not for Hamilton’s foresight in implementing an “asynchronous executive” function capable, in the event of an overload, of setting less important tasks aside and prioritizing more important ones. “The software worked just the way it should have,” Hamilton says in the Christie’s video on the incident above, describing what she felt afterward as “a combination of excitement and relief.” Engineers of software, hardware, and everything else know that feeling when they see a complicated project work, but surely few know it as well as Hamilton and her Apollo collaborators do.
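Hamilton’s overload fix is, at heart, priority scheduling: when requested work exceeds capacity, shed the least important jobs. The Python sketch below is purely illustrative; the task names, priorities, and costs are invented, and the real Apollo Guidance Computer worked nothing like this in detail.

```python
# Toy illustration of the idea behind an "asynchronous executive":
# when requested work exceeds capacity, keep the highest-priority
# tasks and shed the rest. (Hypothetical tasks and numbers.)

def schedule(tasks, capacity):
    """tasks: list of (name, priority, cost); higher priority = more important.
    Returns the names of the tasks kept, in priority order, within capacity."""
    kept, used = [], 0
    for name, priority, cost in sorted(tasks, key=lambda t: -t[1]):
        if used + cost <= capacity:
            kept.append(name)
            used += cost
    return kept

tasks = [
    ("landing-guidance", 10, 40),  # must run
    ("display-update",    3, 30),  # can be postponed if needed
    ("rendezvous-radar",  1, 50),  # the overload source, lowest priority
]
print(schedule(tasks, capacity=80))  # the radar job is shed, the landing survives
```

Under overload, the lowest-priority task is simply never admitted, which is the behavior that saved the landing.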

Related Content:

Margaret Hamilton, Lead Software Engineer of the Apollo Project, Stands Next to Her Code That Took Us to the Moon (1969)

How 1940s Film Star Hedy Lamarr Helped Invent the Technology Behind Wi-Fi & Bluetooth During WWII

Meet Grace Hopper, the Pioneering Computer Scientist Who Helped Invent COBOL and Build the Historic Mark I Computer (1906–1992)

How Ada Lovelace, Daughter of Lord Byron, Wrote the First Computer Program in 1842, a Century Before the First Computer

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall, on Facebook, or on Instagram.

Pioneering Computer Scientist Grace Hopper Shows Us How to Visualize a Nanosecond (1983)

Human imagination seems seriously limited when faced with the cosmic scope of time and space. We can imagine, through stop-motion animation and CGI, what it might be like to walk the earth with creatures the size of office buildings. But how to wrap our heads around the fact that they lived hundreds of millions of years ago, on a planet some four and a half billion years old? We trust the science, but can’t rely on intuition alone to guide us to such mind-boggling knowledge.

At the other end of the scale, events measured in nanoseconds, or billionths of a second, seem inconceivable, even to someone as smart as Grace Hopper, the Navy mathematician who helped invent COBOL and build one of the first computers. Or so she says in the 1983 video clip above, from one of her many appearances as a guest lecturer at universities, museums, military bodies, and corporations.

When she first heard of “circuits that acted in nanoseconds,” she says, “billionths of a second… Well, I didn’t know what a billion was…. And if you don’t know what a billion is, how on earth do you know what a billionth is? Finally, one morning in total desperation, I called over to the engineering building, and I said, ‘Please cut off a nanosecond and send it to me.’” What she asked for, she explains, and shows the class, was a piece of wire representing the distance a signal could travel in a nanosecond.

Now of course it wouldn’t really be through wire — it’d be out in space, the velocity of light. So if we start with the velocity of light and use your friendly computer, you’ll discover that a nanosecond is 11.8 inches long, the maximum limiting distance that electricity can travel in a billionth of a second.

Follow the rest of her explanation, with wire props, and see if you can better understand a measure of time beyond the reaches of conscious experience. The explanation was immediately successful when she began using it in the late 1960s “to demonstrate how designing smaller components would produce faster computers,” writes the National Museum of American History. The bundle of wires below, each about 30 cm (11.8 inches) long, comes from a lecture Hopper gave to museum docents in March 1985.
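Hopper’s wire length is easy to verify from the speed of light alone; a quick back-of-the-envelope check in Python:

```python
# Verify Hopper's "nanosecond wire": the distance light travels in 1 ns.
c = 299_792_458           # speed of light in vacuum, m/s (defined SI value)
ns = 1e-9                 # one nanosecond, in seconds
meters = c * ns           # ≈ 0.2998 m
inches = meters / 0.0254  # one inch is exactly 2.54 cm
print(f"{meters * 100:.1f} cm = {inches:.1f} inches")  # 30.0 cm = 11.8 inches
```

A signal in actual wire propagates a bit slower than light in vacuum, so Hopper’s foot of wire is the theoretical maximum, just as she says.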

Photo via the National Museum of American History

Like the age of the dinosaurs, the nanosecond may only represent a small fraction of the incomprehensibly small units of time scientists are eventually able to measure, and computer scientists able to access. “Later,” notes the NMAH, “as components shrank and computer speeds increased, Hopper used grains of pepper to represent the distance electricity traveled in a picosecond, one trillionth of a second.”

At this point, the map becomes no more revealing than the unknown territory, invisible to the naked eye, inconceivable except through wild leaps of imagination. But if anyone could explain the increasingly inexplicable in terms most anyone could understand, it was the brilliant but down-to-earth Hopper.

via Kottke

Related Content:

Meet Grace Hopper, the Pioneering Computer Scientist Who Helped Invent COBOL and Build the Historic Mark I Computer (1906–1992)

The Map of Computer Science: New Animation Presents a Survey of Computer Science, from Alan Turing to “Augmented Reality”

Free Online Computer Science Courses

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

How to Take a Picture of a Black Hole: Watch the 2017 Ted Talk by Katie Bouman, the MIT Grad Student Who Helped Take the Groundbreaking Photo

What triggered the worst impulses of the Internet last week?

The world’s first photo of a black hole, which proved the presence of troll life here on earth, and confirmed that female scientists, through no fault of their own, have a much longer way to go, baby.

If you want a taste, sort the comments on the two-year-old TED Talk, above, so they’re ordered “newest first.”

Katie Bouman, soon-to-be assistant professor of computing and mathematical sciences at the California Institute of Technology, was a PhD candidate at MIT two years ago, when she taped the talk, but she could’ve passed for a nervous high schooler competing in the National Science Bowl finals, in clothes borrowed from Aunt Judy, who works at the bank.

The focus of her studies was the ways in which emerging computational methods could help expand the boundaries of interdisciplinary imaging.

Prior to last week, I’m not sure how well I could have parsed the focus of her work had she not taken the time to help less STEM-inclined viewers such as myself wrap our heads around her highly technical, then-wholly-theoretical subject.

What I know about black holes could still fit in a thimble, and in truth, my excitement about one being photographed for the first time pales in comparison to my excitement about Game of Thrones returning to the airwaves.

Fortunately, we’re not obligated to be equally turned on by the same interests, an idea theoretical physicist Richard Feynman promoted:

I’ve always been very one-sided about science and when I was younger I concentrated almost all my effort on it. I didn’t have time to learn and I didn’t have much patience with what’s called the humanities, even though in the university there were humanities that you had to take. I tried my best to avoid somehow learning anything and working at it. It was only afterwards, when I got older, that I got more relaxed, that I’ve spread out a little bit. I’ve learned to draw and I read a little bit, but I’m really still a very one-sided person and I don’t know a great deal. I have a limited intelligence and I use it in a particular direction.

I’m pretty sure my lack of passion for science is not tied to my gender. Some of my best friends are guys who feel the same. (Some of them don’t like team sports either.)

But I couldn’t help but experience a wee thrill that this young woman, a science nerd who admittedly could’ve used a few theater-nerd tips regarding relaxation and public speaking, realized her dream: an honest-to-goodness photo of a black hole just like the one she talked about in her TED Talk, “How to take a picture of a black hole.”

Bouman and the 200+ colleagues she acknowledges and thanks at every opportunity achieved their goal not with an earth-sized camera but rather a network of linked telescopes, much as she had described two years earlier, when she invoked disco balls, Mick Jagger, oranges, selfies, and a jigsaw puzzle in an effort to help people like me understand.

Look at that sucker (or, more accurately, its shadow)! That thing’s 500 million trillion kilometers from Earth!

(That’s much farther than King’s Landing is from Winterfell.)

I’ll bet a lot of elementary science teachers, be they male, female, or non-binary, are going to make science fun by having their students draw pictures of the picture of the black hole.

If we could go back (or forward) in time, I can almost guarantee that mine would be among the best, because while I didn’t “get” science (or gym), I was a total art star with the crayons.

Then, crafty as Lord Petyr Baelish, when presentation time rolled around I would partner with a girl like Katie Bouman, who could explain the science with winning vigor. She genuinely seems to embrace the idea that it “takes a village,” and that one’s fellow villagers should be credited whenever possible.

(How did I draw the black hole, you ask? Honestly, it’s not that much harder than drawing a doughnut. Now back to Katie!)

Alas, her professional warmth failed to register with legions of Internet trolls, who began sliming her shortly after a colleague at MIT shared a beaming snapshot of her, taken, presumably, with a regular old phone as the black hole made its debut. That pic cemented her accidental status as the face of this project.

Note to the trolls: it wasn’t a dang selfie.

“I’m so glad that everyone is as excited as we are and people are finding our story inspirational,” Bouman told The New York Times. “However, the spotlight should be on the team and no individual person. Focusing on one person like this helps no one, including me.”

Although Bouman was a junior team member, she and other grad students made major contributions. She directed the verification of images and the selection of imaging parameters, and authored an imaging algorithm that researchers used in the creation of three scripted code pipelines from which the instantly famous picture was cobbled together.

As Vincent Fish, a research scientist at MIT’s Haystack Observatory, told CNN:

One of the insights Katie brought to our imaging group is that there are natural images. Just think about the photos you take with your camera phone — they have certain properties… If you know what one pixel is, you have a good guess as to what the pixel is next to it.
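That insight, that neighboring pixels tend to be similar, is what a smoothness prior encodes. Below is a deliberately tiny 1-D sketch of the idea, an invented illustration rather than the EHT team’s actual algorithm: reconstruct a noisy signal by balancing fidelity to the measurements against agreement between neighbors.

```python
# Toy illustration of a "natural image" prior (NOT the EHT pipeline):
# denoise a 1-D signal by trading off fidelity to the data against
# smoothness, i.e. the belief that neighboring pixels are similar.
def denoise(y, lam=2.0, steps=500, lr=0.02):
    x = list(y)
    for _ in range(steps):
        for i in range(len(x)):
            grad = 2 * (x[i] - y[i])                 # stay close to the data
            if i > 0:
                grad += 2 * lam * (x[i] - x[i - 1])  # agree with left neighbor
            if i < len(x) - 1:
                grad += 2 * lam * (x[i] - x[i + 1])  # agree with right neighbor
            x[i] -= lr * grad
    return x

noisy = [0, 1, 0, 1, 0, 1]  # alternating "static"
smooth = denoise(noisy)
roughness = lambda s: sum(abs(a - b) for a, b in zip(s, s[1:]))
print(roughness(noisy), ">", round(roughness(smooth), 2))
```

The weight `lam` is the dial between trusting the telescope data and trusting the prior; the real reconstruction problem plays the same trade-off in two dimensions with far more sophisticated machinery.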

Hey, that makes sense.

As The Verge’s science editor, Mary Beth Griggs, points out, the rush to defame Bouman is of a piece with some of the non-virtual realities women in science face:

Part of the reason that some posters found Bouman immediately suspicious had to do with her gender. Famously, a number of prominent men like disgraced former CERN physicist Alessandro Strumia have argued that women aren’t being discriminated against in science — they simply don’t like it, or don’t have the aptitude for it. That argument fortifies a notion that women don’t belong in science, or can’t really be doing the work. So women like Bouman must be fakes, this warped line of thinking goes…

Even I, whose 7th grade science teacher tempered a bad grade on my report card by saying my interest in theater would likely serve me much better than anything I might eke from her class, know that just as many girls and women excel at science, technology, engineering, and math as excel in the arts. (Sometimes they excel at both!)

(And power to every little boy with his sights set on nursing, teaching, or ballet!)

(How many black holes have the haters photographed recently?)

Griggs continues:

Saying that she was part of a larger team doesn’t diminish her work, or minimize her involvement in what is already a history-making project. Highlighting the achievements of a brilliant, enthusiastic scientist does not diminish the contributions of the other 214 people who worked on the project, either. But what it is doing is showing a different model for a scientist than the one most of us grew up with. That might mean a lot to some kids — maybe kids who look like her — making them excited about studying the wonders of the Universe.

via BoingBoing

Related Content:

Women’s Hidden Contributions to Modern Genetics Get Revealed by New Study: No Longer Will They Be Buried in the Footnotes

New Augmented Reality App Celebrates Stories of Women Typically Omitted from U.S. History Textbooks

Stephen Hawking (RIP) Explains His Revolutionary Theory of Black Holes with the Help of Chalkboard Animations

Watch a Star Get Devoured by a Supermassive Black Hole

Ayun Halliday is an author, illustrator, theater maker and Chief Primatologist of the East Village Inky zine. Join her in New York City tonight for the next installment of her book-based variety show, Necromancers of the Public Domain. Follow her @AyunHalliday.

Artificial Intelligence Identifies the Six Main Arcs in Storytelling: Welcome to the Brave New World of Literary Criticism

Is the singularity upon us? AI seems poised to replace everyone, even artists, whose work can seem like an inviolably human industry. Or maybe not. Nick Cave’s poignant answer to a fan question might persuade you that a machine will never write a great song, though it might master all the moves to write a good one. An AI-written novel did almost win a Japanese literary award, a suitably impressive feat, even if much of the authorship should be attributed to the program’s human designers.

But what about literary criticism? Is this an art that a machine can do convincingly? The answer may depend on whether you consider it an art at all. For those who do, no artificial intelligence will ever properly develop the theory of mind needed for subtle, even moving, interpretations. On the other hand, one group of researchers has succeeded in using “sophisticated computing power, natural language processing, and reams of digitized text,” writes Atlantic editor Adrienne LaFrance, “to map the narrative patterns in a huge corpus of literature.” The name of their literary criticism machine? The Hedonometer.

We can treat this as an exercise in compiling data, but it’s arguable that the results are on par with work from the comparative mythology school of James Frazer and Joseph Campbell. A more immediate comparison might be to the very deft, if not particularly subtle, Kurt Vonnegut, who, before he wrote novels like Slaughterhouse-Five and Cat’s Cradle, submitted a master’s thesis in anthropology to the University of Chicago. His project did the same thing as the machine, 35 years earlier, though he may not have had the wherewithal to read “1,737 English-language works of fiction between 10,000 and 200,000 words long” while struggling to finish his graduate program. (His thesis, by the way, was rejected.)

Those numbers describe the dataset from Project Gutenberg fed into the Hedonometer by computer scientists at the University of Vermont and the University of Adelaide. After the computer finished “reading,” it plotted “the emotional trajectory” of all of the stories, using sentiment analysis “to generate an emotional arc for each work.” What it found were six broad categories of story, listed below:

  1. Rags to Riches (rise)
  2. Riches to Rags (fall)
  3. Man in a Hole (fall then rise)
  4. Icarus (rise then fall)
  5. Cinderella (rise then fall then rise)
  6. Oedipus (fall then rise then fall)
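A rough sense of how such arcs can be extracted: score each word with a valence lexicon, then smooth the scores with a sliding window to trace the story’s emotional trajectory. The sketch below is a deliberately tiny stand-in, with an invented lexicon and an invented six-word “story,” not the Hedonometer itself:

```python
# Toy sketch of a sentiment-arc analysis in the spirit of the study
# (not the actual Hedonometer). Hypothetical word-valence lexicon:
VALENCE = {"joy": 2, "love": 2, "win": 1, "loss": -1, "grief": -2, "death": -2}

def emotional_arc(words, window=3):
    """Average word valence over a sliding window of the text."""
    scores = [VALENCE.get(w, 0) for w in words]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

story = "grief loss loss win love joy".split()  # a steady rise: rags to riches
print(emotional_arc(story))
```

A real analysis uses a lexicon of thousands of rated words and windows of thousands of words, but the shape-finding idea is the same: the smoothed curve either rises, falls, or changes direction, and those turning points define the six arcs above.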

How does this endeavor compare with Vonnegut’s project? (See him present the theory below.) The novelist used more or less the same methodology, in human form, to come up with eight universal story arcs, or “shapes of stories.” Vonnegut himself left out the Rags to Riches category; he called it an anomaly, though he did have a heading for the same rising-only story arc, the Creation Story, which he deemed an uncommon shape for Western fiction. He did include the Cinderella arc, and was pleased by his discovery that its shape mirrored the New Testament arc, which he also included in his schema, an act the AI surely would have judged redundant.

Contra Vonnegut, the AI found that one-fifth of all the works it analyzed were Rags-to-Riches stories. It determined that this arc was far less popular with readers than “Oedipus,” “Man in a Hole,” and “Cinderella.” Its analysis does get much more granular, and to allay our suspicions, the researchers promise they did not control the outcome of the experiment. “We’re not imposing a set of shapes,” says lead author Andy Reagan, a Ph.D. candidate in mathematics at the University of Vermont. “Rather: the math and machine learning have identified them.”

But the authors do provide a lot of their own interpretation of the data, from choosing representative texts, like Harry Potter and the Deathly Hallows, to illustrate “nested and complicated” plot arcs, to providing the guiding assumptions of the exercise. One of those assumptions, unsurprisingly given the authors’ fields of interest, is that math and language are interchangeable. “Stories are encoded in art, language, and even in the mathematics of physics,” they write in the introduction to their paper, published on Arxiv.org.

“We use equations,” they go on, “to represent both simple and complicated functions that describe our observations of the real world.” If we accept the premise that sentences and integers and lines of code are telling the same stories, then maybe there isn’t as much difference between humans and machines as we would like to think.

via The Atlantic

Related Content:

Nick Cave Answers the Hotly Debated Question: Will Artificial Intelligence Ever Be Able to Write a Great Song?

Kurt Vonnegut Diagrams the Shape of All Stories in a Master’s Thesis Rejected by U. Chicago

Kurt Vonnegut Maps Out the Universal Shapes of Our Favorite Stories

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

Watch 110 Lectures by Donald Knuth, “the Yoda of Silicon Valley,” on Programming, Mathematical Writing, and More

Many see the realms of literature and computers as not just completely separate, but growing more distant from one another all the time. Donald Knuth, one of the most respected figures among the most deeply computer-savvy in Silicon Valley, sees it differently. His claims to fame include The Art of Computer Programming, an ongoing multi-volume series of books whose publication began more than fifty years ago, and the digital typesetting system TeX, which, in a recent profile of Knuth, the New York Times’ Siobhan Roberts describes as “the gold standard for all forms of scientific communication and publication.”

Some, Roberts writes, consider TeX “Dr. Knuth’s greatest contribution to the world, and the greatest contribution to typography since Gutenberg.” At the core of his lifelong work is an idea called “literate programming,” which emphasizes “the importance of writing code that is readable by humans as well as computers — a notion that nowadays seems almost twee. Dr. Knuth has gone so far as to argue that some computer programs are, like Elizabeth Bishop’s poems and Philip Roth’s American Pastoral, works of literature worthy of a Pulitzer.” Knuth’s mind, technical achievements, and style of communication have earned him the informal title of “the Yoda of Silicon Valley.”
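The flavor of literate programming can be suggested with ordinary comments, prose first, code second. The toy rendition below captures the spirit only; Knuth’s actual WEB system interleaves typeset essay-like documentation with named code fragments:

```python
# Literate programming puts the explanation first and lets the code read
# like an essay. A toy rendition of the spirit (not Knuth's WEB system):
# each passage of prose introduces the fragment it motivates.

# We want the greatest common divisor. Euclid observed that gcd(a, b)
# equals gcd(b, a mod b), and that gcd(a, 0) is a itself, so we can
# shrink the problem until the second argument vanishes.
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

# The essay's claim, checked on an example:
print(gcd(119, 544))  # 119 = 7 * 17 and 544 = 2**5 * 17, so the gcd is 17
```

The point is less the algorithm than the ordering: the reasoning is the primary text, and the code falls out of it.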

That appellation also reflects a depth of technical wisdom only attainable by getting to the very bottom of things, which in Knuth’s case means fully understanding how computer programming works all the way down to the most basic level. (This is in contrast to the average programmer, writes Roberts, who “no longer has time to manipulate the binary muck, and works instead with hierarchies of abstraction, layers upon layers of code — and often with chains of code borrowed from code libraries.”) Now everyone can get more than a taste of Knuth’s perspective and thoughts on computers, programming, and a host of related subjects on the YouTube channel of Stanford University, where Knuth is now professor emeritus (and where he still gives informal lectures under the banner “Computer Musings”).

Stanford’s online archive of Donald Knuth lectures now numbers 110, ranging across the decades and covering such subjects as the usage and mechanics of TeX, the analysis of algorithms, and the nature of mathematical writing. “I am worried that algorithms are getting too prominent in the world,” he tells Roberts in the New York Times profile. “It started out that computer scientists were worried nobody was listening to us. Now I’m worried that too many people are listening.” But having become a computer scientist before the field of computer science even had a name, the now-octogenarian Knuth possesses a rare perspective, exposure to which would certainly benefit anyone in 21st-century technology.

Related Content:

Free Online Computer Science Courses

50 Famous Academics & Scientists Talk About God

The Secret History of Silicon Valley

When J.M. Coetzee Secretly Programmed Computers to Write Poetry in the 1960s

Introduction to Computer Science and Programming: A Free Course from MIT

Peter Thiel’s Stanford Course on Startups: Read the Lecture Notes Free Online

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.

Discover Rare 1980s CDs by Lou Reed, Devo & Talking Heads That Combined Music with Computer Graphics

When it first hit the market in 1982, the compact disc famously promised “perfect sound that lasts forever.” But innovation has a way of marching continually on, and naturally the innovators soon started wondering: what if perfect sound isn’t enough? What if consumers want something to go with it, something to look at? And so, when compact disc co-developers Sony and Philips updated their standards, they included documentation on the use of the format’s channels not occupied by audio data. So was born the CD+G, which boasted “not only the CD’s full, digital sound, but also video information — graphics — viewable on any television set or video monitor.”

That text comes from a package scan posted by the online CD+G Museum, whose YouTube channel features rips of nearly every record released on the format, beginning with the first, the Firesign Theatre’s Eat or Be Eaten.

When it came out, listeners who happened to own a CD+G‑compatible player (or a CD+G‑compatible video game console, my own choice at the time having been the TurboGrafx-16) could see that beloved “head comedy” troupe’s densely layered studio production and even more densely layered humor accompanied by images rendered in psychedelic color, or as psychedelic as images can get with only sixteen colors available on the palette, not to mention a resolution of 288 by 192 pixels, not much larger than an icon on the home screen of a modern smartphone. Those limitations may make CD+G graphics look unimpressive today, but just imagine what a cutting-edge novelty they must have seemed in the late 1980s, when they first appeared.

Displaying lyrics for karaoke singers was the most obvious use of CD+G technology, but its short lifespan also saw a fair few experiments on other major-label releases, all viewable at the CD+G Museum, such as Lou Reed’s New York, which combines lyrics with digitized photography of the eponymous city; Talking Heads’ Naked, which provides musical information such as the chord changes and instruments playing on each phrase; Johann Sebastian Bach’s St. Matthew Passion, which translates the libretto alongside works of art; and Devo’s single “Disco Dancer,” which tells the origin story of those “five Spudboys from Ohio.” With these and almost every other CD+G release available at the CD+G Museum, you’ll have no shortage of not just background music but background visuals for your next late-’80s-early-’90s-themed party.

Related Content:

Watch 1970s Animations of Songs by Joni Mitchell, Jim Croce & The Kinks, Aired on The Sonny & Cher Show

The Story of How Beethoven Helped Make It So That CDs Could Play 74 Minutes of Music

Discover the Lost Early Computer Art of Telidon, Canada’s TV Proto-Internet from the 1970s

Based in Seoul, Colin Marshall writes and broadcasts on cities and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.

M.I.T. Computer Program Alarmingly Predicts in 1973 That Civilization Will End by 2040

In 1704, Isaac Newton predicted the end of the world sometime around (or after, “but not before”) the year 2060, using a strange series of mathematical calculations. Rather than study what he called the “book of nature,” he took as his source the supposed prophecies of the book of Revelation. While such predictions have always been central to Christianity, it is startling for modern people to look back and see the famed astronomer and physicist indulging them. For Newton, however, as Matthew Stanley writes at Science, “laying the foundation of modern physics and astronomy was a bit of a sideshow. He believed that his truly important work was deciphering ancient scriptures and uncovering the nature of the Christian religion.”

Over three hundred years later, we still have plenty of religious doomsayers predicting the end of the world with Bible codes. But in recent times, their ranks have seemingly been joined by scientists whose only professed aim is interpreting data from climate research and sustainability estimates given population growth and dwindling resources. The scientific predictions do not draw on ancient texts or theology, nor involve final battles between good and evil. Though there may be plagues and other horrible reckonings, these are predictably causal outcomes of over-production and consumption rather than divine wrath. Yet by some strange fluke, the science has arrived at the same apocalyptic date as Newton, plus or minus a decade or two.

The “end of the world” in these scenarios means the end of modern life as we know it: the collapse of industrialized societies, large-scale agricultural production, supply chains, stable climates, nation states…. Since the late sixties, an elite society of wealthy industrialists and scientists known as the Club of Rome (a frequent player in many conspiracy theories) has foreseen these disasters in the early 21st century. One of the sources of their vision is a computer program developed at MIT by computing pioneer and systems theorist Jay Forrester, whose model of global sustainability, one of the first of its kind, predicted civilizational collapse in 2040. “What the computer envisioned in the 1970s has by and large been coming true,” claims Paul Ratner at Big Think.

Those predictions include population growth and pollution levels, “worsening quality of life,” and “dwindling natural resources.” In the video at the top, see Australia’s ABC explain the computer’s calculations, “an electronic guided tour of our global behavior since 1900, and where that behavior will lead us,” says the presenter. The graph spans the years 1900 to 2060. “Quality of life” begins to sharply decline after 1940, and by 2020, the model predicts, the metric contracts to turn-of-the-century levels, meeting the sharp increase of the “Zed Curve” that charts pollution levels. (ABC revisited this reporting in 1999 with Club of Rome member Keith Suter.)

You can probably guess the rest—or you can read all about it in the 1972 Club of Rome-published report Limits to Growth, which drew wide popular attention to Jay Forrester’s books Urban Dynamics (1969) and World Dynamics (1971). Forrester, a figure of Newtonian stature in the worlds of computer science and management and systems theory—though not, like Newton, a Biblical prophecy enthusiast—more or less endorsed his conclusions to the end of his life in 2016. In one of his last interviews, at the age of 98, he told the MIT Technology Review, “I think the books stand all right.” But he also cautioned against acting without systematic thinking in the face of the globally interrelated issues the Club of Rome ominously calls “the problematic”:

Time after time … you’ll find people are reacting to a problem, they think they know what to do, and they don’t realize that what they’re doing is making a problem. This is a vicious [cycle], because as things get worse, there is more incentive to do things, and it gets worse and worse.

Where this vague warning is supposed to leave us is uncertain: if the current course is dire, are “unsystematic” solutions somehow worse than none at all? This theory also seems to leave powerfully vested human agents (like Exxon’s executives) wholly unaccountable for the coming collapse. Limits to Growth—scoffed at and disparagingly called “neo-Malthusian” by a host of libertarian critics—stands on far surer evidentiary footing than Newton’s weird predictions, and its climate forecasts, notes Christian Parenti, “were alarmingly prescient.” But for all this doom and gloom, it’s worth bearing in mind that models of the future are not, in fact, the future. There are hard times ahead, but no theory, no matter how sophisticated, can account for every variable.

via Big Think

Related Content:

In 1704, Isaac Newton Predicts the World Will End in 2060

A Century of Global Warming Visualized in a 35 Second Video

A Map Shows What Happens When Our World Gets Four Degrees Warmer: The Colorado River Dries Up, Antarctica Urbanizes, Polynesia Vanishes

It’s the End of the World as We Know It: The Apocalypse Gets Visualized in an Inventive Map from 1486

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

Open Culture was founded by Dan Colman.