Laurie Anderson Introduces Her Virtual Reality Installation That Lets You Fly Magically Through Stories

While the sci-fi dreams of virtual and “augmented” reality are now within the grasp of artists and game designers, the technology of the adult human brain remains rooted in the stone age—we still need a good story to accompany the flickering shadows on the cave wall. An artist as wise as Laurie Anderson understands this, but—given that it’s Laurie Anderson—she isn’t going to retread familiar narrative paths, especially when working in the vehicle of VR, as she has in her new piece Chalkroom, created in collaboration with Taiwanese artist Hsin-Chien Huang.

The piece allows viewers the opportunity to travel not only into the space of imagination a story creates, but into the very architecture of story itself—to walk, or rather float, through its passageways as words and letters drift by like tufts of dandelion, stars, or, as Anderson puts it, snow. “They’re there to define the space and to show you a little bit about what it is,” says the artist in the interview above, “but they’re actually fractured languages, so it’s kind of exploded things.” She explains the “chalkroom” concept as resisting the “perfect, slick and shiny” aesthetic that characterizes most computer-generated images. “It has a certain tactility and made-by-hand kind of thing… this is gritty and drippy and filled with dust and dirt.”




Chalkroom, she says, “is a library of stories, and no one will ever find them all.” It sounds to me, at least, more intriguing than the premise of most video games, but the audience for this piece will be limited, not only to those willing to give it a chance, but to those who can experience the piece firsthand, as it were, by visiting the physical space of one of Anderson’s exhibitions and strapping on the VR goggles. Once they do, she says, they will be able to fly, a disorienting experience that sends some people falling out of their chairs. Last spring, Chalkroom became part of an ongoing exhibit at the Massachusetts Museum of Contemporary Art, a “Laurie Anderson pilgrimage,” as Mass MoCA director Joseph C. Thompson describes it, that also features a VR experience called Aloft.

In August, Chalkroom appeared at the Louisiana Museum of Modern Art in Denmark, where the interview above took place. Watching it, you’ll see why the piece has generated so much buzz, winning “Best VR Experience” at the Venice Film Festival and visiting major museums around Europe and the U.S. “Mostly VR is kind of task-oriented,” she says, “you get that, you do that, you shoot that.” Chalkroom feels more like navigating catacombs, traversing dark labyrinths punctuated by brilliant constellations of light made out of words, as Anderson’s voice provides enigmatic narration against a backdrop of three-dimensional sound design. It’s an immersive journey that seems, as promised, like the one we take as readers, pursuing elusive meanings that can seem tantalizingly just out of reach.

via @WFMU

Related Content:

Laurie Anderson’s Top 10 Books to Take to a Desert Island

21 Artists Give “Advice to the Young:” Vital Lessons from Laurie Anderson, David Byrne, Umberto Eco, Patti Smith & More

Go Inside the First 30 Minutes of Kubrick’s The Shining with This 360º Virtual Reality Video

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

New Study Reveals How the Neanderthals Made Super Glue 200,000 Years Ago: The World’s Oldest Synthetic Material

It's become increasingly clear how much we've underestimated the Neanderthals, the archaic humans who evolved in Europe and went extinct about 40,000 years ago. Though we've long used them as a byword for a lumbering, beast-like lack of development and intelligence — compared, of course, to us glorious examples of Homo sapiens — evidence has come to reveal a greater similarity between us and Homo neanderthalensis than we'd imagined. Not only did they develop stone tools, they even invented a kind of "super glue," one that, as you can see in the NOVA segment above, we have difficulty replicating even today.

"Archaeologists first found tar-covered stones and black lumps at Neanderthal sites across Europe about two decades ago," writes the New York Times' Nicholas St. Fleur. "The tar was distilled from the bark of birch trees some 200,000 years ago, and seemed to have been used for hafting, or attaching handles to stone tools and weapons. But scientists did not know how Neanderthals produced the dark, sticky substance, more than 100,000 years before Homo sapiens in Africa used tree resin and ocher adhesives." But in a new study in Scientific Reports, "a team of archaeologists has used materials available during prehistoric times to demonstrate three possible ways Neanderthals could have deliberately made tar."




The process might have looked something like the one in the video above, an attempt by archaeologists Wil Roebroeks and Friedrich Palmer to make this oldest known synthetic material just as the Neanderthals might have done. Their only materials: "an upturned animal skull to catch the pitch; a small stone on which the pitch would condense; some rolls of birch bark, the source of the pitch; and a layer of ash, to exclude oxygen and prevent the bark from burning."

Image by Paul Kozowyk

They technically get it to work, managing to heat the bark to just the right temperature, but the experiment doesn't produce very much of this ancient super glue — certainly not as much as Neanderthals would have used to make spears. Tar-making might turn out to have been the very first industrial process in history. Innovation, in the 21st century as well as 200,000 years ago, does tend to come from unexpected places.

You can read more about archaeologists' latest theories on the making of Neanderthal super glue over at Scientific Reports.

via Gizmodo

Related Content:

What Did the Voice of Neanderthals, Our Distant Cousins, Sound Like?: Scientists Demonstrate Their “High Pitch” Theory

Hear the World’s Oldest Instrument, the “Neanderthal Flute,” Dating Back Over 43,000 Years

Richard Dawkins Explains Why There Was Never a First Human Being

Based in Seoul, Colin Marshall writes and broadcasts on cities and culture. He’s at work on the book The Stateless City: a Walk through 21st-Century Los Angeles, the video series The City in Cinema, the crowdfunded journalism project Where Is the City of the Future?, and the Los Angeles Review of Books’ Korea Blog. Follow him on Twitter at @colinmarshall or on Facebook.

Lynda Barry on How the Smartphone Is Endangering Three Ingredients of Creativity: Loneliness, Uncertainty & Boredom

The phone gives us a lot but it takes away three key elements of discovery: loneliness, uncertainty and boredom. Those have always been where creative ideas come from. - Lynda Barry

In the spring of 2016, the great cartoonist and educator Lynda Barry did the unthinkable prior to giving a lecture and writing class at NASA’s Goddard Space Flight Center.

She demanded that all participating staff members surrender their phones and other such personal devices.

Her victims were as jangled by this prospect as your average iPhone-addicted teen, but they complied, agreeing to write by hand, another antiquated notion Barry subscribes to:

The delete button makes it so that anything you’re unsure of you can get rid of, so nothing new has a chance. Writing by hand is a revelation for people. Maybe that’s why they asked me to NASA – I still know how to use my hands… there is a different way of thinking that goes along with them.

Barry—who told the Onion’s AV Club that she crafted her book What It Is with an eye toward bored readers stuck in a Jiffy Lube oil-change waiting room—is also a big proponent of doodling, which she views as a creative neurological response to boredom:

Boring meeting, you have a pen, the usual clowns are yakking. Most people will draw something, even people who can’t draw. I say “If you’re bored, what do you draw?” And everybody has something they draw. Like “Oh yeah, my little guy, I draw him.” Or “I draw eyeballs, or palm trees.” … So I asked them “Why do you think you do that? Why do you think you doodle during those meetings?” I believe that it’s because it makes having to endure that particular situation more bearable, by changing our experience of time. It’s so slight. I always say it’s the difference between, if you’re not doodling, the minutes feel like a cheese grater on your face. But if you are doodling, it’s more like Brillo.  It’s not much better, but there is a difference. You could handle Brillo a little longer than the cheese grater.

Meetings and classrooms are among the few remaining venues in which screen-addicted moths are expected to force themselves away from the phone’s inviting flame. Other settings—like the Jiffy Lube waiting room—require more initiative on the user's part.




Once, we were keener students of minor changes to familiar environments, of the books strangers were reading on the subway, and of those strangers themselves. Such observations were known to spark conversations, and sometimes ideas that led to creative projects.

Now, many of us let those opportunities slide by, as we fill up on such fleeting confections as Candy Crush, funny videos, and all-you-can-eat servings of social media.

It’s also tempting to use our phones as de facto shields any time social anxiety looms. This dodge may provide short-term comfort, especially to younger people, but remember, Barry and many of her cartoonist peers, including Daniel Clowes, Simon Hanselmann, and Ariel Schrag, toughed it out by making art. That's what got them through the loneliness, uncertainty, and boredom of their middle and high school years.

The book you hold in your hands would not exist had high school been a pleasant experience for me… It was on those quiet weekend nights when even my parents were out having fun that I began making serious attempts to make stories in comics form.

Adrian Tomine, introduction to 32 Stories

Barry is far from alone in encouraging adults to peel themselves away from their phone dependency for their creative good.

Photographer Eric Pickersgill’s Removed imagines a series of everyday situations in which phones and other personal devices have been rendered invisible. (It’s worth noting that he removed the offending articles from the models’ hands, rather than Photoshopping them out later.)

Computer Science Professor Calvin Newport’s recent book, Deep Work, posits that all that shallow phone time is creating stress, anxiety, and lost creative opportunities, while also doing a number on our personal and professional lives.

Author Manoush Zomorodi’s recent TED Talk on how boredom can lead to brilliant ideas, below, details a weeklong experiment in battling smartphone habits, with lots of scientific evidence to back up her findings.

But what if you wipe the slate of digital distractions only to find that your brain’s just… empty? A once-occupied room, now devoid of anything but dimly recalled memes and generalized dread over the state of the world?

The aforementioned 2010 AV Club interview with Barry offers both encouragement and some useful suggestions that will get the temporarily paralyzed moving again:

I don’t know what the strip’s going to be about when I start. I never know. I oftentimes have—I call it the word-bag. Just a bag of words. I’ll just reach in there, and I’ll pull out a word, and it’ll say “ping-pong.” I’ll just have that in my head, and I’ll start drawing the pictures as if I can… I hear a sentence, I just hear it. As soon as I hear even the beginning of the first sentence, then I just… I write really slow. So I’ll be writing that, and I’ll know what’s going to go at the top of the panel. Then, when it gets to the end, usually I’ll know what the next one is. By three sentences or four in that first panel, I stop, and then I say “Now it’s time for the drawing.” Then I’ll draw. But then I’ll hear the next one over on another page! Or when I’m drawing Marlys and Arna, I might hear her say something, but then I’ll hear Marlys say something back. So once that first sentence is there, I have all kinds of choices as to where I put my brush. But if nothing is happening, then I just go over to what I call my decoy page. It’s like decoy ducks. I go over there and just start messing around.

Related Content:

How Information Overload Robs Us of Our Creativity: What the Scientific Research Shows

The Case for Deleting Your Social Media Accounts & Doing Valuable “Deep Work” Instead, According to Prof. Cal Newport

Lynda Barry’s Illustrated Syllabus & Homework Assignments from Her New UW-Madison Course, “Making Comics”

Lynda Barry, Cartoonist Turned Professor, Gives Her Old Fashioned Take on the Future of Education

Ayun Halliday is an author, illustrator, theater maker and Chief Primatologist of the East Village Inky zine.  Follow her @AyunHalliday.

The Map of Computer Science: New Animation Presents a Survey of Computer Science, from Alan Turing to “Augmented Reality”

I’ve never wanted to start a sentence with “I’m old enough to remember…” because, well, who does? But here we are. I remember the enormously successful Apple IIe and Commodore 64, and a world before Microsoft. Smartphones were science fiction. To do much more than word process or play games, one had to learn a programming language. These ancient days seemed at the time—and in hindsight as well—to be the very dawn of computing. Before the personal computer, such devices were the size of kitchen appliances and were hidden away in military installations, universities, and NASA labs.

But of course we all know that the history of computing goes far beyond the early ’80s: at least back to World War II, and perhaps even much farther. Do we begin with the abacus, the 2,200-year-old Antikythera Mechanism, the astrolabe, Ada Lovelace and Charles Babbage? The question is perhaps one of definitions. In the short, animated video above, physicist, science writer, and YouTube educator Dominic Walliman defines the computer according to its basic binary function of “just flipping zeros and ones,” and he begins his condensed history of computer science with tragic genius Alan Turing of Turing Test and Bletchley Park codebreaking fame.




Turing’s most significant contribution to computing came from his 1936 concept of the “Turing Machine,” a theoretical mechanism that could, writes the Cambridge Computer Laboratory, “simulate ANY computer algorithm, no matter how complicated it is!” All other designs, says Walliman—apart from a quantum computer—are equivalent to the Turing Machine, “which makes it the foundation of computer science.” But since Turing’s time, the simple design has come to seem endlessly capable of adaptation and innovation.
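
To make the abstraction concrete, here is a toy Turing machine simulator, a sketch of our own in Python (not anything drawn from Walliman's video), whose small rule table increments a binary number:

```python
# A minimal one-tape Turing machine simulator. The rule table is the whole
# "program": each entry maps (state, symbol) -> (next_state, write, move).
def run(tape, transitions, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move  # -1 = left, +1 = right, 0 = stay
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Increment a binary number: scan right past the digits, then carry
# 1s into 0s leftward until a 0 (or the left edge) absorbs the carry.
increment = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt", "1", 0),
    ("carry", "_"): ("halt", "1", 0),
}

print(run("1011", increment))  # prints "1100" (11 + 1 = 12)
```

A rule table this small already shows the essential point: everything a modern computer does reduces to reading a symbol, writing a symbol, and moving the head.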

Walliman illustrates the computer's exponential growth by pointing out that a smartphone has more computing power than the entire world possessed in 1963, and that the computing capability that first landed astronauts on the moon is equal to “a couple of Nintendos” (first-generation classic consoles, judging by the image). But despite the hubris of the computer age, Walliman points out that “there are some problems which, due to their very nature, can never be solved by a computer,” either because of the degree of uncertainty involved or the degree of inherent complexity. This fascinating but abstract discussion is where Walliman’s “Map of Computer Science” begins, and for most of us this will probably be unfamiliar territory.

We’ll feel more at home once the map moves from the region of Computer Theory to that of Computer Engineering, but while Walliman covers familiar ground here, he does not dumb it down. Once we get to applications, we’re in the realm of big data, natural language processing, the internet of things, and “augmented reality.” From here on out, computer technology will only get faster, and weirder, despite the fact that the “underlying hardware is hitting some hard limits.” Certainly this very quick course in Computer Science only makes for an introductory survey of the discipline, but like Walliman’s other maps—of mathematics, physics, and chemistry—this one provides us with an impressive visual overview of the field that is both broad and specific, and that we likely wouldn’t encounter anywhere else.

As with his other maps, Walliman has made the Map of Computer Science available as a poster, perfect for dorm rooms, living rooms, or wherever else you might need a reminder.

Related Content:

Free Online Computer Science Courses

How Ada Lovelace, Daughter of Lord Byron, Wrote the First Computer Program in 1842–a Century Before the First Computer

Watch Breaking the Code, About the Life & Times of Alan Turing (1996)

The Map of Mathematics: Animation Shows How All the Different Fields in Math Fit Together

The Map of Physics: Animation Shows How All the Different Fields in Physics Fit Together

The Map of Chemistry: New Animation Summarizes the Entire Field of Chemistry in 12 Minutes

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

The Planetarium Table Clock: Magnificent 1775 Timepiece Tracks the Passing of Time & the Travel of the Planets

If you're in Zurich, head over to the Beyer Clock and Watch Museum, which presents the history of timekeeping and timekeeping instruments, from 1400 BC to modern times. On display, you'll find sundials, water and tower clocks, Renaissance automata, and pendulum clocks. And the Planetarium Table Clock featured above.

Made circa 1775, the planetarium clock keeps time ... and so much more. According to the Museum of Artifacts website, the earth (look in the glass orb) "rotates around the sun in perfect real time." The other five planets rotate as well; they "go up, down, around, in relation to the etched constellations of precisely positioned stars on the crystal globe, which if you are smart enough will reveal what season it is." This fine timekeeping piece was the joint creation of Nicole-Reine Lepaute, a French astronomer who predicted the return of Halley's Comet, and her husband, Jean-André Lepaute, who presided over a clockmaking dynasty and became horloger du Roi (clockmaker to the king).

It's hard to imagine that the Planetarium clock didn't somehow inspire a more modern creation--the Midnight Planétarium, an astronomical watch that shows the rotation of six planets — Mercury, Venus, Earth, Mars, Jupiter, and Saturn. It has a price tag of $220,000 (excluding sales tax). See it on display below.


Related Content:

How Clocks Changed Humanity Forever, Making Us Masters and Slaves of Time

An Animated Alan Watts Waxes Philosophical About Time in The Fine Art of Goofing Off, the 1970s “Sesame Street for Grown-Ups”

Carl Sagan Presents Six Lectures on Earth, Mars & Our Solar System … For Kids (1977)

Margaret Hamilton, Lead Software Engineer of the Apollo Project, Stands Next to Her Code That Took Us to the Moon (1969)

Photo courtesy of MIT Museum

When I first read news of the now-infamous Google memo writer who claimed with a straight face that women are biologically unsuited to work in science and tech, I nearly choked on my cereal. A dozen examples instantly crowded to mind of women who have pioneered the very basis of our current technology while operating at an extreme disadvantage in a culture that explicitly believed they shouldn’t be there, this shouldn’t be happening, women shouldn’t be able to do a “man’s job!”

The memo, as Megan Molteni and Adam Rogers write at Wired, “is a species of discourse peculiar to politically polarized times: cherry-picking scientific evidence to support a pre-existing point of view.” Its specious evolutionary psychology pretends to objectivity even as it ignores reality. As Mulder would say, the truth is out there, if you care to look, and you don’t need to dig through classified FBI files. Just, well, Google it. No, not the pseudoscience, but the careers of women in STEM without whom we might not have such a thing as Google.




Women like Margaret Hamilton, who, beginning in 1961, helped NASA “develop the Apollo program’s guidance system” that took U.S. astronauts to the moon, as Maia Weinstock reports at MIT News. “For her work during this period, Hamilton has been credited with popularizing the concept of software engineering.” Robert McMillan put it best in a 2015 profile of Hamilton:

It might surprise today’s software makers that one of the founding fathers of their boys’ club was, in fact, a mother—and that should give them pause as they consider why the gender inequality of the Mad Men era persists to this day.

Hamilton was indeed a mother in her twenties with a degree in mathematics, working as a programmer at MIT and supporting her husband through Harvard Law, after which she planned to go to graduate school. “But the Apollo space program came along,” and MIT contracted with NASA to fulfill John F. Kennedy’s famous promise, made that same year, to land on the moon before the decade’s end—and before the Soviets did. NASA accomplished that goal thanks to Hamilton and her team.

Photo courtesy of MIT Museum

Like many women crucial to the U.S. space program (many doubly marginalized by race and gender), Hamilton might have been lost to public consciousness were it not for a popular rediscovery. “In recent years,” notes Weinstock, “a striking photo of Hamilton and her team’s Apollo code has made the rounds on social media.” You can see that photo at the top of the post, taken in 1969 by a photographer for the MIT Instrumentation Laboratory. Used to promote the lab’s work on Apollo, the original caption read, in part, “Here, Margaret is shown standing beside listings of the software developed by her and the team she was in charge of, the LM [lunar module] and CM [command module] on-board flight software team.”

As Hank Green tells it in his condensed history above, Hamilton “rose through the ranks to become head of the Apollo Software development team.” Her focus on errors—how to prevent them and course-correct when they arise—“saved Apollo 11 from having to abort the mission” of landing Neil Armstrong and Buzz Aldrin on the moon’s surface. McMillan explains that “as Hamilton and her colleagues were programming the Apollo spacecraft, they were also hatching what would become a $400 billion industry.” At Futurism, you can read a fascinating interview with Hamilton, in which she describes how she first learned to code, what her work for NASA was like, and what exactly was in those books stacked as high as she was tall. As a woman, she may have been an outlier in her field, but that fact is much better explained by the Occam’s razor of prejudice than by anything having to do with evolutionary determinism.

Note: You can now find Hamilton's code on Github.

Related Content:

How 1940s Film Star Hedy Lamarr Helped Invent the Technology Behind Wi-Fi & Bluetooth During WWII

How Ada Lovelace, Daughter of Lord Byron, Wrote the First Computer Program in 1842–a Century Before the First Computer

NASA Puts Its Software Online & Makes It Free to Download

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

Alice in Wonderland Gets Re-Envisioned by a Neural Network in the Style of Paintings By Picasso, van Gogh, Kahlo, O’Keeffe & More

An artist just starting out might first imitate the styles of others, and if all goes well, the process of learning those styles will lead them to a style of their own. But how does one learn something like an artistic style in a way that isn't simply imitative? Artificial intelligence, and especially the current developments in making computers not just think but learn, will certainly shed some light on the process — and produce, along the way, such fascinating projects as the video above, a re-envisioning of Disney's Alice in Wonderland in the styles of famous artists: Pablo Picasso, Georgia O'Keeffe, Katsushika Hokusai, Frida Kahlo, Vincent van Gogh and others.

The idea behind this technological process, known as "style transfer," is "to take two images, say, a photo of a person and a painting, and use these to create a third image that combines the content of the former with the style of the latter," says an explanatory post at the Paperspace Blog.




"The central problem of style transfer revolves around our ability to come up with a clear way of computing the 'content' of an image as distinct from computing the 'style' of an image. Before deep learning arrived at the scene, researchers had been handcrafting methods to extract the content and texture of images, merge them and see if the results were interesting or garbage."

Deep learning, the family of methods that enable computers to teach themselves, involves providing an artificial intelligence system called a "neural network" with huge amounts of data and letting it draw inferences. In experiments like these, the systems take in visual data and make inferences about how one set of data, like the content of frames of Alice in Wonderland, might look when rendered in the colors and contours of another, such as some of the most famous paintings in all of art history. (Others have tried it, as we've previously featured, with 2001: A Space Odyssey and Blade Runner.) If the technology at work here piques your curiosity, have a look at Google's free online course on deep learning or this new set of courses from Coursera—it probably won't improve your art skills, but it will certainly increase your understanding of a development that will play an ever larger role in the culture and economy ahead.
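
For the technically curious, the core of style transfer can be sketched briefly. The code below is a minimal PyTorch illustration in the spirit of the Gatys et al. approach the Paperspace post describes; the layer choices, style weight, and step count are our own illustrative assumptions, not the settings behind the Alice video.

```python
# A minimal neural style-transfer sketch. VGG input normalization and real
# image loading are omitted for brevity; assumes a recent torchvision.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

def features(x, net, layers):
    """Run x through the VGG feature stack, keeping the chosen layers."""
    kept = {}
    for i, layer in enumerate(net):
        x = layer(x)
        if i in layers:
            kept[i] = x
    return kept

def gram(x):
    """Gram matrix of channel activations: correlations that encode 'style'."""
    _, c, h, w = x.shape
    f = x.reshape(c, h * w)
    return f @ f.t() / (c * h * w)

net = vgg19(weights="IMAGENET1K_V1").features.eval()
for p in net.parameters():
    p.requires_grad_(False)

content_layers = {21}              # conv4_2: deep layer, captures arrangement
style_layers = {0, 5, 10, 19, 28}  # conv1_1..conv5_1: capture texture

content_img = torch.rand(1, 3, 256, 256)  # stand-ins; load real images here
style_img = torch.rand(1, 3, 256, 256)

target_content = features(content_img, net, content_layers)
target_style = {i: gram(f) for i, f in features(style_img, net, style_layers).items()}

# Optimize the pixels of a third image so its deep features match the content
# image while its Gram matrices match the style image.
image = content_img.clone().requires_grad_(True)
opt = torch.optim.Adam([image], lr=0.02)

for step in range(300):
    opt.zero_grad()
    f = features(image, net, content_layers | style_layers)
    loss = sum(F.mse_loss(f[i], target_content[i]) for i in content_layers)
    loss = loss + 1e4 * sum(F.mse_loss(gram(f[i]), target_style[i]) for i in style_layers)
    loss.backward()
    opt.step()
```

The design choice worth noticing is the split: "content" is read off the raw activations of one deep layer, while "style" is read off Gram-matrix correlations across several layers, which is what lets a single optimized image satisfy both at once.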

Here's a full list of painters used in the neural networked version of Alice:

Pablo Picasso
Georgia O'Keeffe
S.H. Raza
Hokusai
Frida Kahlo
Vincent van Gogh
Tarsila
Saloua Raouda Choucair
Lee Krasner
Sol LeWitt
Wu Guanzhong
Elaine de Kooning
Ibrahim el-Salahi
Minnie Pwerle
Jean-Michel Basquiat
Edvard Munch
Natalia Goncharova

via Kottke

Related Content:

Kubrick’s 2001: A Space Odyssey Rendered in the Style of Picasso; Blade Runner in the Style of Van Gogh

What Happens When Blade Runner & A Scanner Darkly Get Remade with an Artificial Neural Network

Google Launches Free Course on Deep Learning: The Science of Teaching Computers How to Teach Themselves

New Deep Learning Courses Released on Coursera, with Hope of Teaching Millions the Basics of Artificial Intelligence

The First Film Adaptation of Alice in Wonderland (1903)

Based in Seoul, Colin Marshall writes and broadcasts on cities and culture. He’s at work on the book The Stateless City: a Walk through 21st-Century Los Angeles, the video series The City in Cinema, the crowdfunded journalism project Where Is the City of the Future?, and the Los Angeles Review of Books’ Korea Blog. Follow him on Twitter at @colinmarshall or on Facebook.
