Lynda Barry on How the Smartphone Is Endangering Three Ingredients of Creativity: Loneliness, Uncertainty & Boredom

The phone gives us a lot but it takes away three key elements of discovery: loneliness, uncertainty and boredom. Those have always been where creative ideas come from. - Lynda Barry

In the spring of 2016, the great cartoonist and educator Lynda Barry did the unthinkable prior to giving a lecture and writing class at NASA’s Goddard Space Flight Center.

She demanded that all participating staff members surrender their phones and other such personal devices.

Her victims were as jangled by this prospect as your average iPhone-addicted teen, but they complied, agreeing to write by hand, another antiquated practice Barry subscribes to:

The delete button makes it so that anything you’re unsure of you can get rid of, so nothing new has a chance. Writing by hand is a revelation for people. Maybe that’s why they asked me to NASA – I still know how to use my hands… there is a different way of thinking that goes along with them.

Barry—who told the Onion’s AV Club that she crafted her book What It Is with an eye toward bored readers stuck in a Jiffy Lube oil-change waiting room—is also a big proponent of doodling, which she views as a creative neurological response to boredom:

Boring meeting, you have a pen, the usual clowns are yakking. Most people will draw something, even people who can’t draw. I say “If you’re bored, what do you draw?” And everybody has something they draw. Like “Oh yeah, my little guy, I draw him.” Or “I draw eyeballs, or palm trees.” … So I asked them “Why do you think you do that? Why do you think you doodle during those meetings?” I believe that it’s because it makes having to endure that particular situation more bearable, by changing our experience of time. It’s so slight. I always say it’s the difference between, if you’re not doodling, the minutes feel like a cheese grater on your face. But if you are doodling, it’s more like Brillo.  It’s not much better, but there is a difference. You could handle Brillo a little longer than the cheese grater.

Meetings and classrooms are among the few remaining venues in which screen-addicted moths are expected to force themselves away from the phone’s inviting flame. Other settings—like the Jiffy Lube waiting room—require more initiative on the user's part.




Once, we were keener students of minor changes to familiar environments, the books strangers were reading in the subway, and those strangers themselves. Our subsequent observations were known to spark conversation and sometimes ideas that led to creative projects.

Now, many of us let those opportunities slide by, as we fill up on such fleeting confections as Candy Crush, funny videos, and all-you-can-eat servings of social media.

It’s also tempting to use our phones as de facto shields any time social anxiety looms. This dodge may provide short-term comfort, especially to younger people, but remember, Barry and many of her cartoonist peers, including Daniel Clowes, Simon Hanselmann, and Ariel Schrag, toughed it out by making art. That's what got them through the loneliness, uncertainty, and boredom of their middle and high school years.

The book you hold in your hands would not exist had high school been a pleasant experience for me… It was on those quiet weekend nights when even my parents were out having fun that I began making serious attempts to make stories in comics form.

Adrian Tomine, introduction to 32 Stories

Barry is far from alone in encouraging adults to peel themselves away from their phone dependency for their creative good.

Photographer Eric Pickersgill’s Removed imagines a series of everyday situations in which phones and other personal devices have been rendered invisible. (It’s worth noting that he removed the offending articles from the models’ hands, rather than Photoshopping them out later.)

Computer Science Professor Calvin Newport’s recent book, Deep Work, posits that all that shallow phone time is creating stress, anxiety, and lost creative opportunities, while also doing a number on our personal and professional lives.

Author Manoush Zomorodi’s recent TED Talk on how boredom can lead to brilliant ideas, below, details a weeklong experiment in battling smartphone habits, with lots of scientific evidence to back up her findings.

But what if you wipe the slate of digital distractions only to find that your brain’s just… empty? A once occupied room, now devoid of anything but dimly recalled memes, and generalized dread over the state of the world?

The aforementioned 2010 AV Club interview with Barry offers both encouragement and some useful suggestions that will get the temporarily paralyzed moving again:

I don’t know what the strip’s going to be about when I start. I never know. I oftentimes have—I call it the word-bag. Just a bag of words. I’ll just reach in there, and I’ll pull out a word, and it’ll say “ping-pong.” I’ll just have that in my head, and I’ll start drawing the pictures as if I can… I hear a sentence, I just hear it. As soon as I hear even the beginning of the first sentence, then I just… I write really slow. So I’ll be writing that, and I’ll know what’s going to go at the top of the panel. Then, when it gets to the end, usually I’ll know what the next one is. By three sentences or four in that first panel, I stop, and then I say “Now it’s time for the drawing.” Then I’ll draw. But then I’ll hear the next one over on another page! Or when I’m drawing Marlys and Arna, I might hear her say something, but then I’ll hear Marlys say something back. So once that first sentence is there, I have all kinds of choices as to where I put my brush. But if nothing is happening, then I just go over to what I call my decoy page. It’s like decoy ducks. I go over there and just start messing around.

Related Content:

How Information Overload Robs Us of Our Creativity: What the Scientific Research Shows

The Case for Deleting Your Social Media Accounts & Doing Valuable “Deep Work” Instead, According to Prof. Cal Newport

Lynda Barry’s Illustrated Syllabus & Homework Assignments from Her New UW-Madison Course, “Making Comics”

Lynda Barry, Cartoonist Turned Professor, Gives Her Old Fashioned Take on the Future of Education

Ayun Halliday is an author, illustrator, theater maker and Chief Primatologist of the East Village Inky zine.  Follow her @AyunHalliday.

The Map of Computer Science: New Animation Presents a Survey of Computer Science, from Alan Turing to “Augmented Reality”

I’ve never wanted to start a sentence with “I’m old enough to remember…” because, well, who does? But here we are. I remember the enormously successful Apple IIe and Commodore 64, and a world before Microsoft. Smartphones were science fiction. To do much more than word process or play games, one had to learn a programming language. These ancient days seemed at the time—and in hindsight as well—to be the very dawn of computing. Before the personal computer, such devices were the size of kitchen appliances and were hidden away in military installations, universities, and NASA labs.

But of course we all know that the history of computing goes far beyond the early 80s: at least back to World War II, and perhaps even much farther. Do we begin with the abacus, the 2,200-year-old Antikythera Mechanism, the astrolabe, Ada Lovelace and Charles Babbage? The question is perhaps one of definitions. In the short, animated video above, physicist, science writer, and YouTube educator Dominic Walliman defines the computer according to its basic binary function of “just flipping zeros and ones,” and he begins his condensed history of computer science with tragic genius Alan Turing, of Turing Test and Bletchley Park codebreaking fame.




Turing’s most significant contribution to computing came from his 1936 concept of the “Turing Machine,” a theoretical mechanism that could, writes the Cambridge Computer Laboratory, “simulate ANY computer algorithm, no matter how complicated it is!” All other designs, says Walliman—apart from a quantum computer—are equivalent to the Turing Machine, “which makes it the foundation of computer science.” But since Turing’s time, the simple design has come to seem endlessly capable of adaptation and innovation.
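For readers who want the idea made concrete, here is a minimal sketch in Python of what a Turing machine boils down to: a tape of symbols, a read/write head, and a table of transition rules. (The bit-flipping machine below is our own toy example, not one taken from Walliman's video.)

    def run_turing_machine(tape, rules, state="start", head=0):
        cells = dict(enumerate(tape))              # sparse tape: position -> symbol
        while state != "halt":
            symbol = cells.get(head, "_")          # "_" stands for a blank cell
            write, move, state = rules[(state, symbol)]
            cells[head] = write                    # write the new symbol
            head += 1 if move == "R" else -1       # move the head one cell
        return "".join(cells[i] for i in sorted(cells))

    # A toy machine that flips every bit it reads, then halts at the first blank.
    rules = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run_turing_machine("0110", rules))       # prints 1001_

Simple as it is, a table of rules like this is, in principle, all the machinery the rest of computer science builds on.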

Walliman illustrates the computer's exponential growth by pointing out that a smartphone has more computing power than the entire world possessed in 1963, and that the computing capability that first landed astronauts on the moon is equal to “a couple of Nintendos” (first generation classic consoles, judging by the image). But despite the hubris of the computer age, Walliman points out that “there are some problems which, due to their very nature, can never be solved by a computer” either because of the degree of uncertainty involved or the degree of inherent complexity. This fascinating yet abstract discussion is where Walliman’s “Map of Computer Science” begins, and for most of us this will probably be unfamiliar territory.

We’ll feel more at home once the map moves from the region of Computer Theory to that of Computer Engineering, but while Walliman covers familiar ground here, he does not dumb it down. Once we get to applications, we’re in the realm of big data, natural language processing, the internet of things, and “augmented reality.” From here on out, computer technology will only get faster, and weirder, despite the fact that the “underlying hardware is hitting some hard limits.” Certainly this very quick course in Computer Science only makes for an introductory survey of the discipline, but like Walliman’s other maps—of mathematics, physics, and chemistry—this one provides us with an impressive visual overview of the field that is both broad and specific, and that we likely wouldn’t encounter anywhere else.

As with his other maps, Walliman has made the Map of Computer Science available as a poster, perfect for dorm rooms, living rooms, or wherever else you might need a reminder.

Related Content:

Free Online Computer Science Courses

How Ada Lovelace, Daughter of Lord Byron, Wrote the First Computer Program in 1842–a Century Before the First Computer

Watch Breaking the Code, About the Life & Times of Alan Turing (1996)

The Map of Mathematics: Animation Shows How All the Different Fields in Math Fit Together

The Map of Physics: Animation Shows How All the Different Fields in Physics Fit Together

The Map of Chemistry: New Animation Summarizes the Entire Field of Chemistry in 12 Minutes

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

The Planetarium Table Clock: Magnificent 1775 Timepiece Tracks the Passing of Time & the Travel of the Planets

If you're in Zurich, head over to the Beyer Clock and Watch Museum, which presents the history of timekeeping and timekeeping instruments, from 1400 BC to modern times. On display, you'll find sundials, water and tower clocks, Renaissance automata, and pendulum clocks. And the Planetarium Table Clock featured above.

Made circa 1775, the planetarium clock keeps time ... and so much more. According to the Museum of Artifacts website, the earth (look in the glass orb) "rotates around the sun in perfect real time." The other five planets rotate as well--they "go up, down, around, in relation to the etched constellations of precisely positioned stars on the crystal globe, which if you are smart enough will reveal what season it is." This fine timekeeping piece was the joint creation of Nicole-Reine Lepaute, a French astronomer who helped predict the return of Halley's Comet, and her husband, Jean-André Lepaute, who presided over a clockmaking dynasty and became horloger du Roi (clockmaker to the king).

It's hard to imagine that the Planetarium clock didn't somehow inspire a more modern creation--the Midnight Planétarium, an astronomical watch that shows the rotation of six planets: Mercury, Venus, Earth, Mars, Jupiter, and Saturn. It has a price tag of $220,000 (excluding sales tax). See it on display below.


Related Content:

How Clocks Changed Humanity Forever, Making Us Masters and Slaves of Time

An Animated Alan Watts Waxes Philosophical About Time in The Fine Art of Goofing Off, the 1970s “Sesame Street for Grown-Ups”

Carl Sagan Presents Six Lectures on Earth, Mars & Our Solar System … For Kids (1977)

Margaret Hamilton, Lead Software Engineer of the Apollo Project, Stands Next to Her Code That Took Us to the Moon (1969)

Photo courtesy of MIT Museum

When I first read news of the now-infamous Google memo writer who claimed with a straight face that women are biologically unsuited to work in science and tech, I nearly choked on my cereal. A dozen examples instantly crowded to mind of women who have pioneered the very basis of our current technology while operating at an extreme disadvantage in a culture that explicitly believed they shouldn’t be there, this shouldn’t be happening, women shouldn’t be able to do a “man’s job!”

The memo, as Megan Molteni and Adam Rogers write at Wired, “is a species of discourse peculiar to politically polarized times: cherry-picking scientific evidence to support a pre-existing point of view.” Its specious evolutionary psychology pretends to objectivity even as it ignores reality. As Mulder would say, the truth is out there, if you care to look, and you don’t need to dig through classified FBI files. Just, well, Google it. No, not the pseudoscience, but the careers of women in STEM without whom we might not have such a thing as Google.




Women like Margaret Hamilton, who, beginning in 1961, helped NASA “develop the Apollo program’s guidance system” that took U.S. astronauts to the moon, as Maia Weinstock reports at MIT News. “For her work during this period, Hamilton has been credited with popularizing the concept of software engineering." Robert McMillan put it best in a 2015 profile of Hamilton:

It might surprise today’s software makers that one of the founding fathers of their boys’ club was, in fact, a mother—and that should give them pause as they consider why the gender inequality of the Mad Men era persists to this day.

Hamilton was indeed a mother in her twenties with a degree in mathematics, working as a programmer at MIT and supporting her husband through Harvard Law, after which she planned to go to graduate school. “But the Apollo space program came along,” and MIT contracted with NASA to fulfill John F. Kennedy’s famous promise, made that same year, to land on the moon before the decade’s end—and before the Soviets did. NASA accomplished that goal thanks to Hamilton and her team.

Photo courtesy of MIT Museum

Like many women crucial to the U.S. space program (many doubly marginalized by race and gender), Hamilton might have been lost to public consciousness were it not for a popular rediscovery. “In recent years,” notes Weinstock, "a striking photo of Hamilton and her team’s Apollo code has made the rounds on social media.” You can see that photo at the top of the post, taken in 1969 by a photographer for the MIT Instrumentation Laboratory. Used to promote the lab’s work on Apollo, the original caption read, in part, “Here, Margaret is shown standing beside listings of the software developed by her and the team she was in charge of, the LM [lunar module] and CM [command module] on-board flight software team.”

As Hank Green tells it in his condensed history above, Hamilton “rose through the ranks to become head of the Apollo Software development team.” Her focus on errors—how to prevent them and course correct when they arise—“saved Apollo 11 from having to abort the mission” of landing Neil Armstrong and Buzz Aldrin on the moon’s surface. McMillan explains that “as Hamilton and her colleagues were programming the Apollo spacecraft, they were also hatching what would become a $400 billion industry.” At Futurism, you can read a fascinating interview with Hamilton, in which she describes how she first learned to code, what her work for NASA was like, and what exactly was in those books stacked as high as she was tall. As a woman, she may have been an outlier in her field, but that fact is much better explained by the Occam’s razor of prejudice than by anything having to do with evolutionary determinism.

Note: You can now find Hamilton's code on Github.

Related Content:

How 1940s Film Star Hedy Lamarr Helped Invent the Technology Behind Wi-Fi & Bluetooth During WWII

How Ada Lovelace, Daughter of Lord Byron, Wrote the First Computer Program in 1842–a Century Before the First Computer

NASA Puts Its Software Online & Makes It Free to Download

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

Alice in Wonderland Gets Re-Envisioned by a Neural Network in the Style of Paintings By Picasso, van Gogh, Kahlo, O’Keeffe & More

An artist just starting out might first imitate the styles of others, and if all goes well, the process of learning those styles will lead them to a style of their own. But how does one learn something like an artistic style in a way that isn't simply imitative? Artificial intelligence, and especially the current developments in making computers not just think but learn, will certainly shed some light on the process — and produce, along the way, such fascinating projects as the video above, a re-envisioning of Disney's Alice in Wonderland in the styles of famous artists: Pablo Picasso, Georgia O'Keeffe, Katsushika Hokusai, Frida Kahlo, Vincent van Gogh and others.

The idea behind this technological process, known as "style transfer," is "to take two images, say, a photo of a person and a painting, and use these to create a third image that combines the content of the former with the style of the latter," says an explanatory post at the Paperspace Blog.




"The central problem of style transfer revolves around our ability to come up with a clear way of computing the 'content' of an image as distinct from computing the 'style' of an image. Before deep learning arrived at the scene, researchers had been handcrafting methods to extract the content and texture of images, merge them and see if the results were interesting or garbage."

Deep learning, the family of methods that enable computers to teach themselves, involves providing an artificial intelligence system called a "neural network" with huge amounts of data and letting it draw inferences. In experiments like these, the systems take in visual data and make inferences about how one set of data, like the content of frames of Alice in Wonderland, might look when rendered in the colors and contours of another, such as some of the most famous paintings in all of art history. (Others have tried it, as we've previously featured, with 2001: A Space Odyssey and Blade Runner.) If the technology at work here piques your curiosity, have a look at Google's free online course on deep learning or this new set of courses from Coursera — it probably won't improve your art skills, but it will certainly increase your understanding of a development that will play an ever larger role in the culture and economy ahead.
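For the technically curious, the "content versus style" split described above usually comes down to comparing the feature maps of a convolutional network in two different ways. The short Python sketch below, written with PyTorch, is only an illustration of that idea, not the code behind the Alice video: "style" is commonly summarized by Gram matrices of the feature maps, and the style loss measures how far the generated image's Gram matrices sit from the painting's.

    import torch
    import torch.nn.functional as F

    def gram_matrix(features):
        # features: a (channels, height, width) activation map from one conv layer
        c, h, w = features.shape
        flat = features.view(c, h * w)              # flatten the spatial dimensions
        return (flat @ flat.t()) / (c * h * w)      # channel-to-channel correlations = "style"

    def style_loss(generated_feats, style_feats):
        # Compare Gram matrices layer by layer; a content loss (not shown) would
        # compare the raw feature maps of the generated and content images instead.
        return sum(F.mse_loss(gram_matrix(g), gram_matrix(s))
                   for g, s in zip(generated_feats, style_feats))

Minimizing a weighted sum of that style loss and a content loss, pixel by pixel, is what slowly turns a film frame into something resembling a Picasso or a van Gogh.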

Here's a full list of painters used in the neural networked version of Alice:

Pablo Picasso
Georgia O'Keeffe
S.H. Raza
Hokusai
Frida Kahlo
Vincent van Gogh
Tarsila
Saloua Raouda Choucair
Lee Krasner
Sol LeWitt
Wu Guanzhong
Elaine de Kooning
Ibrahim el-Salahi
Minnie Pwerle
Jean-Michel Basquiat
Edvard Munch
Natalia Goncharova

via Kottke

Related Content:

Kubrick’s 2001: A Space Odyssey Rendered in the Style of Picasso; Blade Runner in the Style of Van Gogh

What Happens When Blade Runner & A Scanner Darkly Get Remade with an Artificial Neural Network

Google Launches Free Course on Deep Learning: The Science of Teaching Computers How to Teach Themselves

New Deep Learning Courses Released on Coursera, with Hope of Teaching Millions the Basics of Artificial Intelligence

The First Film Adaptation of Alice in Wonderland (1903)

Based in Seoul, Colin Marshall writes and broadcasts on cities and culture. He’s at work on the book The Stateless City: a Walk through 21st-Century Los Angeles, the video series The City in Cinema, the crowdfunded journalism project Where Is the City of the Future?, and the Los Angeles Review of Books’ Korea Blog. Follow him on Twitter at @colinmarshall or on Facebook.

Hear What Music Sounds Like When It’s Created by Synthesizers Made with Artificial Intelligence

When synthesizers like the Yamaha DX7 became consumer products, the possibilities of music changed forever, making available a wealth of new, often totally unfamiliar sounds even to musicians who'd never before had a reason to think past the electric guitar. But if the people at Project Magenta keep doing what they're doing, they could soon bring about a wave of even more revolutionary music-making devices. That "team of Google researchers who are teaching machines to create not only their own music but also to make so many other forms of art," writes the New York Times' Cade Metz, work toward not just the day "when a machine can instantly build a new Beatles song," but the development of tools that allow artists "to create in entirely new ways."

Using neural networks, "complex mathematical systems [that] allow machines to learn specific behavior by analyzing vast amounts of data" (the kind that generated all those disturbing "DeepDream" images a while back), Magenta's researchers "are crossbreeding sounds from very different instruments — say, a bassoon and a clavichord — creating instruments capable of producing sounds no one has ever heard."




You can give one of the results of these experiments a test drive yourself with NSynth, described by its creators as "a research project that trained a neural network on over 300,000 instrument sounds." Think of NSynth as a synthesizer powered by AI.




Fire it up, and you can mash up and play your own sonic hybrids of guitar and sitar, piccolo and pan flute, hammer dulcimer and dog. In the video at the top of the post you can hear "the first tangible product of Google's Magenta program," a short melody created by an artificial intelligence system designed to create music based on inferences drawn from all the music it has "heard." Below that, we have another piece of artificial intelligence-generated music, this one a polyphonic piece trained on Bach chorales and performed with the sounds of NSynth.
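That "crossbreeding" is worth a moment's pause: rather than layering two recordings on top of each other, NSynth encodes each sound into a compact embedding and then decodes a blend of the two embeddings into a genuinely new sound. The Python sketch below shows the idea in miniature; the encode and decode functions here are hypothetical stand-ins for the trained network, not Magenta's actual API.

    def crossbreed(sound_a, sound_b, encode, decode, mix=0.5):
        # encode() maps raw audio to an embedding vector; decode() does the reverse.
        # Both are placeholders for the trained model; embeddings are assumed to be
        # array-like (e.g. NumPy arrays or tensors) so they can be blended directly.
        emb_a = encode(sound_a)                      # e.g. the bassoon's embedding
        emb_b = encode(sound_b)                      # e.g. the clavichord's embedding
        blended = (1 - mix) * emb_a + mix * emb_b    # interpolate in the latent space
        return decode(blended)                       # synthesize the hybrid instrument

Sliding the mix value from 0 to 1 is, roughly, what the NSynth interface does when you drag between guitar and sitar.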

If you'd like to see how the creation of never-before-heard instruments works in a bit more depth, have a look at the demonstration just above of the NSynth interface for Ableton Live, one of the most DJ-beloved pieces of audio performance software around, just above. Hearing all this in action brings to mind the moral of a story Brian Eno has often told about the DX7, from which only he and a few other producers got innovative results by actually learning how to program: as much as the prospect of AI-powered music technology may astound, the music created with it will only sound as good as the skills and adventurousness of the musicians at the controls — for now.

Related Content:

Artificial Intelligence Program Tries to Write a Beatles Song: Listen to “Daddy’s Car”

Artificial Intelligence Creativity Machine Learns to Play Beethoven in the Style of The Beatles’ “Penny Lane”

Watch Sunspring, the Sci-Fi Film Written with Artificial Intelligence, Starring Thomas Middleditch (Silicon Valley)

Two Artificial Intelligence Chatbots Talk to Each Other & Get Into a Deep Philosophical Conversation

Based in Seoul, Colin Marshall writes and broadcasts on cities and culture. He’s at work on the book The Stateless City: a Walk through 21st-Century Los Angeles, the video series The City in Cinema, the crowdfunded journalism project Where Is the City of the Future?, and the Los Angeles Review of Books’ Korea Blog. Follow him on Twitter at @colinmarshall or on Facebook.

The Nano Guitar: Discover the World’s Smallest, Playable Microscopic Guitar

In 1997, the Cornell Chronicle announced: "The world's smallest guitar -- carved out of crystalline silicon and no larger than a single cell -- has been made at Cornell University to demonstrate a new technology that could have a variety of uses in fiber optics, displays, sensors and electronics."

Invented by Dustin W. Carr, the so-called "nanoguitar" measured 10 micrometers long--roughly the size of your average red blood cell. And it had six strings, each "about 50 nanometers wide, the width of about 100 atoms."

According to The Guardian, the vintage 1997 nanoguitar was actually never played. That honor went to a 2003 edition of the nanoguitar, whose strings were plucked by miniature lasers operated with an atomic force microscope, creating "a 40 megahertz signal that is 130,000 times higher than the sound of a full-scale guitar." The human ear couldn't hear something at that frequency, and that's a problem not even a good amp--a Vox AC30, Fender Deluxe Reverb, etc.--could fix.
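A quick back-of-the-envelope check on that figure (our arithmetic, not The Guardian's): 40 megahertz divided by 130,000 comes out to roughly 308 Hz, which sits comfortably within the range of a full-scale guitar's open strings--about 82 Hz for the low E, 330 Hz for the high E--so the comparison holds up.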

Thus concludes today's adventure in nanotechnology.


Related Content:

Richard Feynman Introduces the World to Nanotechnology with Two Seminal Lectures (1959 & 1984)

Stephen Fry Introduces the Strange New World of Nanoscience

A Boy And His Atom: IBM Creates the World’s Smallest Stop-Motion Film With Atoms
