
Photo courtesy of the Laboratory of Neuro Imaging at UCLA.
Sometimes—as in the case of neuroscience—scientists and researchers seem to be saying several contradictory things at once. Yes, opposing claims can both be true, given different contexts and levels of description. But which is it, neuroscientists? Do we have “neuroplasticity”—the ability to change our brains, and therefore our behavior? Or are we “hard-wired” to be a certain way by innate structures?
The debate long predates the field of neuroscience. It figured prominently in the work, for example, of John Locke and other early modern theorists of cognition—which is why Locke is best known as the theorist of tabula rasa. In “Some Thoughts Concerning Education,” Locke mostly denies that we are able to change much at all in adulthood.
Personality, he reasoned, is determined not by biology, but in the “cradle” by “little, almost insensible impressions on our tender infancies.” Such imprints “have very important and lasting consequences.” Sorry, parents. Not only did your kid get wait-listed for that elite preschool, but their future will also be determined by millions of sights and sounds that happened around them before they could walk.
It’s an extreme, and unscientific, contention, fascinating as it may be from a cultural standpoint. Now we have psychedelic-looking brain scans popping up in our news feeds all the time, promising to reveal the true origins of consciousness and personality. But the conclusions drawn from such research are tentative and often highly contested.
So what does science say about the eternally mysterious act of artistic creation? The abilities of artists have long seemed to us godlike, drawn from supernatural sources, or channeled from other dimensions. Many neuroscientists, you may not be surprised to hear, believe that such abilities reside in the brain. Moreover, some think that artists’ brains are superior to those of mediocre ability.
Or at least that artists’ brains have more gray and white matter than those of “right-brained” thinkers in the areas of “visual perception, spatial navigation and fine motor skills.” So writes Katherine Brooks in a Huffington Post summary of “Drawing on the right side of the brain: A voxel-based morphometry analysis of observational drawing.” The 2014 study, published in NeuroImage, involved a very small sample of graduate students, 21 of whom were artists and 23 of whom were not. All 44 students were asked to complete drawing tasks, which were then scored and compared to images of their brains taken by a method called “voxel-based morphometry.”
“The people who are better at drawing really seem to have more developed structures in regions of the brain that control for fine motor performance and what we call procedural memory,” the study’s lead author, Rebecca Chamberlain of Belgium’s KU Leuven University, told the BBC. (Hear her segment on BBC Radio 4’s Inside Science here.) Does this mean, as Artnet News claims in their quick take, that “artists’ brains are more fully developed?”
It’s a juicy headline, but the findings of this limited study, while “intriguing,” are “far from conclusive.” Nonetheless, it marks an important first step. “No studies” thus far, Chamberlain says, “have assessed the structural differences associated with representational skills in visual arts.” Would a dozen such studies resolve questions about causality: nature or nurture? As usual, the truth probably lies somewhere in between.
At Smithsonian, Randy Rieland quotes several critics of the neuroscience of art, which has previously focused on what happens in the brain when we look at a Van Gogh or read Jane Austen. The problem with such studies, writes Philip Ball at Nature, is that they can lead to “creating criteria of right or wrong, either in the art itself or in individual reactions to it.” But such criteria may already be predetermined by culturally conditioned responses to art.
The science is fascinating and may lead to numerous discoveries. It does not, as the Creators Project writes hyperbolically, suggest that “artists actually are different creatures from everyone else on the planet.” As University of California philosophy professor Alva Noë states succinctly, one problem with making sweeping generalizations about brains that view or create art is that “there can be nothing like a settled, once-and-for-all account of what art is.”
Emerging fields of “neuroaesthetics” and “neurohumanities” may muddy the waters between quantitative and qualitative distinctions, and may not really answer questions about where art comes from and what it does to us. But then again, given enough time, they just might.
Related Content:
This Is Your Brain on Jane Austen: The Neuroscience of Reading Great Literature
The Neuroscience of Drumming: Researchers Discover the Secrets of Drumming & The Human Brain
The Neuroscience & Psychology of Procrastination, and How to Overcome It
Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness
Just a quick fyi: The Manchester Benefit concert is happening now, and streaming live on YouTube. Coldplay, Pharrell Williams, Justin Bieber, Katy Perry, Miley Cyrus, Niall Horan, Usher, and Ariana Grande will all perform. Click play above to stream the live video feed.
Every conversation about education in the U.S. takes place in a minefield. Unless you’re a billionaire who bought the job of Secretary of Education, you’d better be prepared to answer questions about racial and economic equity, disability issues, protections for LGBTQ students, teacher pay and unions, religious charter schools, and many other pressing concerns. These issues are not mutually exclusive, nor are they distinct from questions of curriculum, testing, or achievement. The terrain is littered with possible explosive conflicts between educators, parents, administrators, legislators, activists, and profiteers.
The needs of the most deeply invested stakeholders, the students themselves, seem to get far too little consideration. What if we in the U.S., all of us, actually wanted to improve the educational experiences and academic outcomes for our children—all of them? Where might we look for a model? Many people have looked to Finland, at least since 2010, when the documentary Waiting for Superman contrasted struggling U.S. public schools with highly successful Finnish equivalents.
The film, a positive spin on the charter school movement, received significant backlash for its cherry-picked examples and blaming of teachers’ unions for America’s failing schools. By contrast, Finland’s schools have been described by William Doyle, an American Fulbright Scholar who studies them, as “the ‘ultimate charter school network’ ” (a phrase, we’ll see, that means little in the Finnish context). There, Doyle writes at The Hechinger Report, “teachers are not strait-jacketed by bureaucrats, scripts or excessive regulations, but have the freedom to innovate and experiment as teams of trusted professionals.”
Last year, Michael Moore featured many of Finland’s innovative educational experiments in his humorous, hopeful travelogue Where to Invade Next. In the clip above, you can hear from the country’s Minister of Education, Krista Kiuru, who explains to him why Finnish children do not have homework; hear also from a group of high school students, high school principal Pasi Majassari, first grade teacher Anna Hart and many others. Shorter school hours—the “shortest school days and shortest school years in the entire Western world”—leave plenty of time for leisure and recreation. Kids bake, hike, build things, make art, conduct experiments, sing, and generally enjoy themselves.
“There are no mandated standardized tests,” writes LynNell Hancock at Smithsonian, “apart from one exam at the end of students’ senior year in high school… there are no rankings, no comparisons or competition between students, schools or regions.” Yet Finnish students have, in the past several years, consistently ranked in the top ten among millions of students worldwide in science, reading, and math. “If there was one thing I kept hearing over and over again from the Finns,” says Moore above, “it’s that America should get rid of standardized tests,” should stop teaching to those tests, stop designing entire curricula around multiple-choice tests. Hancock describes the results of the Finnish system, and its costs:
Ninety-three percent of Finns graduate from academic or vocational high schools, 17.5 percentage points higher than the United States, and 66 percent go on to higher education, the highest rate in the European Union. Yet Finland spends about 30 percent less per student than the United States.
Moore’s camera registers the shock on Finnish educators’ faces when they hear that many U.S. schools eliminated music, art, poetry and other pursuits in order to focus almost exclusively on testing. Though lighthearted in tone, the segment really drives home the depressing degree to which so many U.S. students receive an impoverished education—one barely worthy of the name—unless they luck into a voucher for a high-end charter school or have the independent means for an expensive private one. In Finland, says the Minister of Education, “all the schools are equal. You never ask where the best school is.”
It’s also illegal in Finland to profit from schooling. Wealthy parents have to ensure that neighborhood schools can give their kids the best education possible, because they are the only option. Many people in the U.S. object to comparisons like Moore’s by noting that societies like Finland are “homogenous” next to what may seem to them like maddening cultural diversity in the U.S. However, Finland has incorporated (not without difficulty) large immigrant and refugee populations—even as its schools continue to improve. The government has responded in part to rising immigration with educational solutions such as this one, a “national initiative to reinforce Finnish higher education institutions (HEIs) as significant stakeholders in migrants’ integration.”
The substantive differences between the two countries’ educational systems may have less to do with demography and more to do with economics and the training and social status of teachers.
In Finland, writes Doyle, no teacher “is allowed to lead a primary school class without a master’s degree in education, with specialization in research and classroom practice.” Teaching “is the most admired job in Finland next to medical doctors.” And as Dana Goldstein points out at The Nation—a fact Waiting for Superman failed to mention—Finnish teachers are “gasp!—unionized and granted tenure.” Perhaps an even more significant difference the documentary glossed over: in Finland, “families benefit from a cradle-to-grave social welfare system that includes universal daycare, preschool and healthcare, all of which are proven to help children achieve better results at school.”
Hundreds of studies in recent years substantiate this claim. It would seem intuitive that the stresses associated with hunger and poverty would have a pernicious effect on learning, especially when poorer schools are so egregiously under-resourced, and the data says as much, to varying degrees. And yet in the U.S. we are now slashing breakfast and lunch programs that feed hungry children and deciding whether to uninsure millions of families while millions more still lack basic health coverage. Almost every American parent knows that quality daycares and preschools can cost as much per year as a decent university education in this country.
It seems to many of us that the atrocious state of the U.S. educational system can only be attributed to an act of will on the part of our political elite, who see schools as competition for fundamentalist belief systems, as opportunities to punish their opponents out of spite, or as rich fields for private profit. But it needn’t be so. It took 40 years for the Finns to create their current system. In the 1960s, their schools ranked on the very low end—along with those in the U.S. By most accounts, they’ve since shown there can be systems that, while surely imperfect in their own way, work for all kids, embedded within larger systems that prize their teachers and families.
Related Content:
In Japanese Schools, Lunch Is As Much About Learning As It’s About Eating
Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness
Long before World of Warcraft, before Everquest and Second Life, and even before Ultima Online, computer gamers of the 1980s looking for an online world to explore with others of their kind could fire up their Commodore 64s, switch on their dial-up modems, and log into Habitat. Brought out for the Commodore online service Quantum Link by Lucasfilm Games (later known as the developer of such classic point-and-click adventure games as Maniac Mansion and The Secret of Monkey Island, now known as LucasArts), Habitat debuted as the very first large-scale graphical virtual community, blazing a trail for all the massively multiplayer online role-playing games (or MMORPGs) so many of us spend so much of our time playing today.
Designed, in the words of creators Chip Morningstar and F. Randall Farmer, to “support a population of thousands of users in a single shared cyberspace,” Habitat presented “a real-time animated view into an online simulated world in which users can communicate, play games, go on adventures, fall in love, get married, get divorced, start businesses, found religions, wage wars, protest against them, and experiment with self-government.” All that happened and more within the service’s virtual reality during its pilot run from 1986 to 1988. The features both cautiously and recklessly implemented by Habitat’s developers, and the feedback they received from its users, laid down the template for all the more advanced graphical online worlds to come.
At the top of the post, you can watch Lucasfilm’s original Habitat promotional video promise a “strange new world where names can change as quickly as events, surprises lurk at every turn, and the keynotes of existence are fantasy and fun,” one where “thousands of avatars, each controlled by a different human, can converge to shape an imaginary society.” (All performed, the narrator notes, “with the cooperation of a huge mainframe computer in Virginia.”) The form this society eventually took impressed Habitat’s creators as much as anyone, as Farmer writes in his “Habitat Anecdotes” from 1988, an examination of the most memorable happenings and phenomena among its users.

Farmer found he could group those users into five now-familiar categories: the Passives (who “want to ‘be entertained’ with no effort, like watching TV”), the Actives (whose “biggest problem is overspending”), the Motivators (the most valuable users, for they “understand that Habitat is what they make of it”), the Caretakers (employees who “help the new users, control personal conflicts, record bugs” and so on), and the Geek Gods (the virtual world’s all-powerful administrators). Sometimes everyone got along smoothly, and sometimes — inevitably, given that everyone had to define the properties of this brand new medium even as they experienced it — they didn’t.
“At first, during early testing, we found out that people were taking stuff out of others’ hands and shooting people in their own homes,” Farmer writes. Later, a Greek Orthodox Minister opened Habitat’s first church, but “I had to eventually put a lock on the Church’s front door because every time he decorated (with flowers), someone would steal and pawn them while he was not logged in!” This citizen-governed virtual society eventually elected a sheriff from among its users, though the designers could never quite decide what powers to grant him. Other surprisingly “real world” institutions developed, including a newspaper whose user-publisher “tirelessly spent 20–40 hours a week composing a 20, 30, 40 or even 50 page tabloid containing the latest news, events, rumors, and even fictional articles.”
Though developing this then-advanced software for “the ludicrous Commodore 64” posed a serious technical challenge, write Farmer and Morningstar in their 1990 paper “The Lessons of Lucasfilm’s Habitat,” the real work began when the users logged on. All the avatars needed houses, “organized into towns and cities with associated traffic arteries and shopping and recreational areas” with “wilderness areas between the towns so that everyone would not be jammed together into the same place.” Most of all, they needed interesting places to visit, “and since they can’t all be in the same place at the same time, they needed a lot of interesting places to visit. [ … ] Each of those houses, towns, roads, shops, forests, theaters, arenas, and other places is a distinct entity that someone needs to design and create. Attempting to play the role of omniscient central planners, we were swamped.”
All this, the creators discovered, required them to stop thinking like the engineers and game designers they were, giving up all hope of rigorous central planning and world-building in favor of figuring out the trickier problem of how, “like the cruise director on an ocean voyage,” to make Habitat fun for everyone. Farmer faces that question again today, having launched the open-source NeoHabitat project earlier this year with the aim of reviving the Habitat world for the 21st century. As much progress as graphical multiplayer online games have made in the past thirty years, the conclusion Farmer and Morningstar reached after their experience creating the first one holds as true as ever: “Cyberspace may indeed change humanity, but only if it begins with humanity as it really is.”
Related Content:
Free: Play 2,400 Vintage Computer Games in Your Web Browser
Long Live Glitch! The Art & Code from the Game Now Released into the Public Domain
Based in Seoul, Colin Marshall writes and broadcasts on cities and culture. He’s at work on a book about Los Angeles, A Los Angeles Primer, the video series The City in Cinema, the crowdfunded journalism project Where Is the City of the Future?, and the Los Angeles Review of Books’ Korea Blog. Follow him on Twitter at @colinmarshall or on Facebook.
Upon arriving in Venice in the late 1930s, columnist and Algonquin Round Table regular Robert Benchley immediately sent a telegram back home to America: “Streets full of water. Please advise.” The line has taken its place in the canon of American humor, but in more recent times the image of water-filled streets — unintentionally water-filled streets, that is — has arisen most often in the conversation about climate change. Some of the potential disaster scenarios envision every major coastal city on Earth eventually turning into a kind of Venice, albeit a much less pleasant version thereof.

And so what better place than the one that hosts perhaps the world’s best known art exhibition, the Venice Biennale, to express climate-change anxiety in the form of public sculpture? “Venice is known for its gondolas, canals, and historic bridges,” writes Condé Nast Traveler’s Sebastian Modak, “but visitors will now also be greeted by another, albeit temporary, reminder of the city’s intimate relationship with water: a giant pair of hands reaching out of the Grand Canal and appearing to support the walls of the historic Ca’ Sagredo Hotel.” The piece, called Support, was created by Barcelona-based Italian sculptor Lorenzo Quinn.
“I have three children, and I’m thinking about their generation and what world we’re going to pass on to them,” Quinn told Mashable’s Maria Gallucci. “I’m worried, I’m very worried.” The hands of his 11-year-old son actually provided the model for the polyurethane-and-resin hands of Support, weighing 5,000 pounds each, that stand on 30-foot pillars at the bottom of the Grand Canal. Modak quotes one of Quinn’s Instagram posts which describes the work as speaking to the people “in a clear, simple and direct way through the innocent hands of a child and it evokes a powerful message, which is that united we can make a stand to curb the climate change that affects us all.”
Those arguing in favor of more aggressive political measures to counteract the effects of climate change have gone to great lengths to point out what forms those effects have so far taken. But the fact that, apart from a stretch of hot summers, few of those effects have yet manifested undeniably in most people’s lives has certainly made their job harder. Yet nobody who visits Venice during the Biennale could fail to pause before Support, a work whose visual drama demands a reaction that temperature charts or data-filled studies can’t hope to provoke by themselves. And even apart from the issue at hand, as it were, Quinn’s sculpture reminds us that art, even in as deeply historical a setting as Venice, can also keep us thinking about the future.
Related Content:
Global Warming: A Free Course from UChicago Explains Climate Change
132 Years of Global Warming Visualized in 26 Dramatically Animated Seconds
A Song of Our Warming Planet: Cellist Turns 130 Years of Climate Change Data into Music
How Climate Change Is Threatening Your Daily Cup of Coffee
Frank Capra’s Science Film The Unchained Goddess Warns of Climate Change in 1958
Watch Episode 1 of Years of Living Dangerously, The New Showtime Series on Climate Change
Based in Seoul, Colin Marshall writes and broadcasts on cities and culture. He’s at work on a book about Los Angeles, A Los Angeles Primer, the video series The City in Cinema, the crowdfunded journalism project Where Is the City of the Future?, and the Los Angeles Review of Books’ Korea Blog. Follow him on Twitter at @colinmarshall or on Facebook.
In the 21st century, most of us have tried our hand at making some kind of digital art or other — even if only touching up cellphone photos of ourselves — but imagine the task of producing it 50 years ago. Making digital art when the world had barely heard the term “digital” required access to a mainframe computer, one of those hugely expensive hulks that filled rooms and printed out reams and reams of paper data, and the considerable technical know-how to operate it.
But the achievement also, to go by the very early example of Hiroshi Kawano, required a background in philosophy. Kawano studied aesthetics and the philosophy of science at the University of Tokyo, became a research assistant there and then a lecturer at the Tokyo Metropolitan College of Air-Technology, and marshaled his knowledge and experience to create these “digital Mondrians,” so described because of their computer-generated resemblance to that Dutch painter’s most rigorously angular, solidly colored work.
Kawano had drawn inspiration, according to a Deutsche Welle article on his donation of his archives to Germany’s Center for Media Art, from “the writings of the German philosopher Max Bense, who proposed (among other things) the idea of measuring beauty using scientific rules. At the same time, Kawano heard that scientists were using computers to create music. Putting the two together, he decided to explore the possibility of using a computer to program beauty.”

Doing so required “writing programs in complex computer languages, then laboriously punching these programs into hundreds of cards before feeding them into the machine.” And “while the design of his works produced during the 1960s might look simple — they’re not. They are the result of complex mathematical algorithms programmed so that, although Kawano sets the rules for how the picture could look, he can’t determine exactly what will appear on the printer.”
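Kawano’s process, as described, fixes compositional rules in code while leaving the exact outcome to the machine. A minimal modern sketch of that idea might look like the following Python program; it is purely illustrative, not Kawano’s actual 1960s algorithm, and all of its particulars (the recursive splitting, the palette, the minimum panel size) are assumptions chosen to demonstrate the rules-plus-chance principle:

```python
import random

# Mondrian-like primary colors plus white (an illustrative palette, not
# drawn from Kawano's work)
PALETTE = ["red", "yellow", "blue", "white"]

def mondrian(width, height, min_size, rng):
    """Recursively split a rectangle into panels; each leaf panel gets a
    random palette color. The rules constrain the composition, but the
    random source decides what actually appears."""
    def split(x, y, w, h):
        can_v = w >= 2 * min_size   # room for a vertical cut?
        can_h = h >= 2 * min_size   # room for a horizontal cut?
        if not (can_v or can_h):
            return [(x, y, w, h, rng.choice(PALETTE))]
        if can_v and (not can_h or rng.random() < 0.5):
            cut = rng.randint(min_size, w - min_size)
            return split(x, y, cut, h) + split(x + cut, y, w - cut, h)
        cut = rng.randint(min_size, h - min_size)
        return split(x, y, w, cut) + split(x, y + cut, w, h - cut)
    return split(0, 0, width, height)

# A fixed seed: the artist fixes the rules and the chance source,
# but does not hand-pick the result
rng = random.Random(1964)
for rect in mondrian(16, 16, 4, rng):
    print(rect)  # (x, y, width, height, color)
```

Changing the seed changes the composition while the rules stay the same, which is roughly the relationship the article describes: Kawano “sets the rules for how the picture could look” without determining “exactly what will appear on the printer.”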

Just before Kawano passed away in 2012, the ZKM (or Center for Art and Media Karlsruhe) celebrated his pioneering digital art with the exhibition “The Philosopher at the Computer,” some of which you can see in this German-language video clip. “The retrospective emphasizes Kawano’s special role in the circle of pioneers in ‘computer art,’ ” says its introduction. “He was neither artist, who discovered the computer as a new production medium and theme, nor engineer who came to art via the new machine, but a philosopher, who left his desk for the computer center to experiment with theoretical models.”
Can computers create art? Can they even be used to create art? These questions now have practically obvious answers in the affirmative, but back in 1964 when Kawano produced the first of these pieces, working through trial and error with the advice of the curious staff of his university’s computer center, the questions must have sounded impossibly philosophical. Today, writes Overhead Compartment’s Claudio Rivera, Kawano’s digital Mondrians “suggest themselves as an oddly ephemeral transition in the nexus of technology and art. The familiar colors and forms are flash-frozen in crystalline pixelation, almost as if seized up in the final, overheated throes of a suddenly-too-old computer.”
Related Content:
Andy Warhol’s Lost Computer Art Found on 30-Year-Old Floppy Disks
Watch the Dutch Paint “the Largest Mondrian Painting in the World”
Based in Seoul, Colin Marshall writes and broadcasts on cities and culture. He’s at work on a book about Los Angeles, A Los Angeles Primer, the video series The City in Cinema, the crowdfunded journalism project Where Is the City of the Future?, and the Los Angeles Review of Books’ Korea Blog. Follow him on Twitter at @colinmarshall or on Facebook.
In one school of popular reasoning, people judge historical outcomes that they think are favorable as worthy tradeoffs for historical atrocities. The argument appears in some of the most inappropriate contexts, such as discussions of slavery or the Holocaust. Or in individual thought experiments, such as that of a famous inventor whose birth was the result of a brutal assault. There are a great many people who consider this thinking repulsive, morally corrosive, and astoundingly presumptuous. Not only does it assume that every terrible thing that happens is part of a benevolent design, but it pretends to know which circumstances count as unqualified goods, and which can be blithely ignored. It determines future actions from a tidy and convenient story of the past.
We might contrast this attitude with a more Zen stance, for example, a radically agnostic “wait and see” approach to everything that happens. Not-knowing seems to give meditating monks a great deal of serenity in practice. But the theory terrifies most of us. Effects must have causes, we think, causes must have effects, and in order to predict what’s going to happen next (and thereby save our skins), we must know why we’re doing what we’re doing. The deep impulse is what psychologist and psychotherapist Viktor Frankl identifies, in his pre-gender-neutrally titled book, as Man’s Search for Meaning. Despite the misuse of this faculty to create neurotic or dehumanizing myths, “man’s search for meaning,” writes Frankl, “is the primary motivation in his life and not a ‘secondary rationalization’ of instinctual drives.”
Frankl understood perfectly well how the construction of meaning—through narrative, art, relationships, social fictions, etc.—might be perverted for murderous ends. He was a survivor of four concentration camps, which took the lives of his parents, brother, and wife. The first part of his book, “Experiences in a Concentration Camp,” recounts the horror in detail, sparing no one accountability for their actions. From these experiences, Frankl draws a conclusion, one he explains in the two-part 1977 interview above. “The lesson one could learn from Auschwitz,” he says, “and in other concentration camps, in the final analysis was, those who were oriented toward a meaning—toward a meaning to be fulfilled by them in the future—were most likely to survive” beyond the experience. “The question,” Frankl says, “was survival for what?” (See a short animated summary of Frankl’s book below.)
Frankl does not excuse the deaths of his family, friends, and millions of others in his psychological theory, which he calls logotherapy. He certainly does not trivialize the most unimaginable of inhuman experiences. “We all said to each other in camp,” he writes, “that there could be no earthly happiness which could compensate for all we had suffered.” But it was not the hope of happiness that “gave us courage,” he writes. It was the “will to meaning” that looked to the future, not to the past. In Frankl’s existentialist view, we ourselves create that meaning, for ourselves, and not for others. Logotherapy, Frankl writes, “defocuses all the vicious-circle formations and feedback mechanisms which play such a great role in the development of neuroses.” We must acknowledge the need to make sense of our lives and fill what Frankl called the “existential vacuum.” And we alone are responsible for writing better stories for ourselves.
To dig deeper into Frankl’s philosophy, you can read not only Man’s Search for Meaning but also The Will to Meaning: Foundations and Applications of Logotherapy.
Related Content:
Albert Camus’ Historic Lecture, “The Human Crisis,” Performed by Actor Viggo Mortensen
Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness
In times of national anxiety, many of us take comfort in the fact that the U.S. has endured political crises even more severe than those at hand. History can be a teacher and a guide, and so too can poetry, as Walt Whitman reminds us again and again. Whitman witnessed some of the greatest upheavals and revolutionary changes the country has ever experienced: the Civil War and its aftermath, the assassination of Abraham Lincoln, the failure of Reconstruction, the massive industrialization of the country at the end of the 19th century.…
Perhaps this is why we return to Whitman when we make what critics call a “poetic turn.” His expansive, multivalent verse speaks for us when beauty, shock, or sadness exceed the limits of everyday language. Whitman contained the nation’s warring voices, and somehow reconciled them without diluting their uniqueness. This was, indeed, his literary mission, to “create a unified whole out of disparate parts,” argues Karen Swallow Prior at The Atlantic. “For Whitman, poetry wasn’t just a vehicle for expressing political lament; it was also a political force in itself.” Poetry’s importance as a binding agent in the fractious, fragile coalition of states meant that for Whitman, the country’s “Presidents shall not be their common referee so much as their poets shall.”
Whitman wrote as a gay man who, by the time he published the first edition of Leaves of Grass in 1855, had gone from being an “ardent Free-Soiler” to fully supporting abolition. His poetry proclaimed a “radically egalitarian vision,” writes Martin Klammer, “of an ideal, multiracial republic.” A country that was, itself, a poem. “The United States themselves are essentially the greatest poem,” wrote Whitman in his preface. The nation’s contradictions inhabit us just as we inhabit them. The only way to resolve our differences, he insisted, is to embody them fully, with openness toward other people and the natural world. Understanding Whitman’s mission makes filmmaker Jennifer Crandall’s project Whitman, Alabama all the more poignant.
For two years, Crandall “crisscrossed this deep Southern state, inviting people to look into a camera and share part of themselves through the words of Walt Whitman.” To the question “Who is American?,” Crandall—just as Whitman before her—answers with a multitude of voices, weaving in and out of a collaborative reading of the epic “Song of Myself,” beginning with 97-year-old Virginia Mae Schmitt of Birmingham, at the top, who reads Whitman’s lines, “I, now thirty-seven years old in perfect health begin / Hoping to cease not till death.” No one watching the video, Crandall remarks, should ask, “Why isn’t a thirty-seven-year-old man reading this?” To do so is to ignore Whitman’s design for the universal in the particular.
When Whitman penned the first lines of “Song of Myself,” the country had not yet “Unlimber’d” the cannons “to begin the red business,” as he would later write, but the 1850 Fugitive Slave Act had clearly laid the foundation for civil war. The poet’s many revisions, additions, and subsequent editions of Leaves of Grass after his first small run in 1855 continued until his death in 1892. He was obsessed with the hugeness and dynamism of the country and its people, in their darkest, bloodiest moments and at their most flourishing. His vision lets everyone in, without qualification, constantly rewriting itself to meet new faces in the ever-changing nation.
As Mariam Jalloh, a 14-year-old Muslim girl from Guinea, recites in her short portion of the reading further up, “every atom belonging to me as good belongs to you.” Jalloh quite literally makes Whitman’s language her own, translating into her native Fulani the line, “If they are not just as close as they are distant, they are nothing.” Jalloh “may seem like a surprising conduit for the writing of Whitman, a long-dead queer socialist poet from Brooklyn,” writes Christian Kerr at Hyperallergic, “but such incongruity is the active agent in Whitman, Alabama’s therapeutic salve.” It is also, Whitman suggested, the matrix of American democracy.
See more readings from the project above from Laura and Brandon Reeder of Cullman, the Sullivan family of Mobile, and by Demetrius Leslie and Frederick George, and Patricia Marshall and Tammy Cooper, inmates at men’s and women’s prisons in Montgomery. Whitman’s voice winds through these bodies and voices, settling in, finding a home, then, restless, moving on, inviting us all to join in the chorus, yet also—in its contrarian way—telling us to find our own paths. “You shall no longer take things at second or third hand.…,” wrote Whitman, “nor look through the eyes of the dead, nor feed on the spectres in books, / You shall not look through my eyes either, nor take things from me, / You shall listen to all sides and filter them from yourself.”
Find many more readings at the Whitman, Alabama website. And stay tuned for new readings as they come online.
Also find works by Walt Whitman on our lists of Free Audio Books and Free eBooks.
Related Content:
The Civil War & Reconstruction: A Free Course from Yale University
Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness
With the coming savage cuts in arts funding, perhaps we’ll return to a system of noblesse oblige familiar to students of the Gilded Age, when artists needed independent wealth or patronage, and wealthy industrialists often decided what was art, and what wasn’t. Unlike fine art, however, haute cuisine has always relied on the patronage of wealthy donors—or diners. It can be marketed in premade pieces, sold in cookbooks, and made to look easy on TV, but for reasons both cultural and practical, given the nature of food, an exquisitely prepared dish can only be made accessible to a select few.
Still, we would be mistaken, suggested Futurist poet and theorist F.T. Marinetti (1876–1944), should we neglect to see cooking as an art form akin to all the others in its moral and intellectual influence on us. Though Marinetti was hardly the first or the last artist to publish a cookbook, his Futurist Cookbook seems at first glance deadly, even aggressively, serious, lacking the whimsy, impractical weirdness, and surrealist art of Salvador Dalí’s Les Diners de Gala, for example, or the eclectic wistfulness of the MoMA’s Artist’s Cookbook.
Just as he had sought with his earlier Futurist Manifesto to revolutionize art, Marinetti intended his cookbook to foment a “revolution of cuisine,” as Alex Revelli Sorini and Susanna Cutini point out. You might even call it an act of war when it came to certain staples of Italian eating, like pasta, which he thought responsible for “sluggishness, pessimism, nostalgic inactivity, and neutralism” (anticipating scads of low- and no-carb diets to come).
Believing that people “think, dream and act according to what they eat and drink,” Marinetti formulated strict rules not only for the preparation of food, but also the serving and eating of it, going so far as to call for abolishing the knife and fork. A short excerpt from his introduction shows him applying to food the techno-romanticism of his Futurist theory—an ethos taken up by Benito Mussolini, whom Marinetti supported:
The Futurist culinary revolution … has the lofty, noble and universally expedient aim of changing radically the eating habits of our race, strengthening it, dynamizing it and spiritualizing it with brand-new food combinations in which experiment, intelligence and imagination will economically take the place of quantity, banality, repetition and expense.
In hindsight, the fascist overtones in Marinetti’s language seem glaring. In 1932, when the Futurist Cookbook was published, his Futurism seemed like a much-needed “jolt to all the practical and intellectual activities,” note Sorini and Cutini. “The subject [of cooking] needed a good shake to reawaken its spirit.” And that’s just what it got. The Futurist Cookbook acted as “a preview of Italian-style Nouvelle Cuisine,” with such innovations as “additives and preservatives added to food, or using technological tools in the kitchen to mince, pulverize, and emulsify.”
Yet, for all the high seriousness with which Marinetti seems to treat his subject, “what the media missed” at the time, writes Maria Popova, “was that the cookbook was arguably the greatest artistic prank of the twentieth century.” In an introduction to the 1989 edition, British journalist and historian Lesley Chamberlain called the Futurist Cookbook “a serious joke, revolutionary in the first instance because it overturned with ribald laughter everything ‘food’ and ‘cookbooks’ held sacred.” Marinetti first swept away tradition in favor of creative dining events the Futurists called “aerobanquets,” such as one in Bologna in 1931 with a table shaped like an airplane and dishes called “spicy airport” (Olivier salad) and “rising thunder” (orange risotto). Lambrusco wine was served in gas cans.
It’s performance art worthy of Dalí’s bizarre costumed dinner parties, but fueled by a genuine desire to revolutionize food, if not the actual eating of it, by “bringing together elements separated by biases that have no true foundation.” So remarked French chef Jules Maincave, a 1914 convert to Futurism and inspiration for what Marinetti calls “flexible flavorful combinations.” See several such recipes excerpted from the Futurist Cookbook at Brain Pickings, read the full book in Italian here, and, just below, see Marinetti’s rules for the perfect meal, first published in 1930 as the “Manifesto of Futurist Cuisine.”
Futurist cuisine and rules for the perfect lunch
1. An original harmony of the table (crystal ware, crockery and glassware, decoration) with the flavors and colors of the dishes.
2. Utter originality in the dishes.
3. The invention of flexible flavorful combinations (edible plastic complex), whose original harmony of form and color feeds the eyes and awakens the imagination before tempting the lips.
4. The abolition of knife and fork in favor of flexible combinations that can deliver prelabial tactile enjoyment.
5. The use of the art of perfumery to enhance taste. Each dish must be preceded by a perfume that will be removed from the table using fans.
6. A limited use of music in the intervals between one dish and the next, so as not to distract the sensitivity of the tongue and the palate, and to help eliminate the flavor just enjoyed, restoring a clean slate for tasting.
7. Abolition of oratory and politics at the table.
8. Measured use of poetry and music as unexpected ingredients to awaken the flavors of a given dish with their sensual intensity.
9. Rapid presentation between one dish and the next, before the nostrils and the eyes of the dinner guests, of the few dishes that they will eat, and others that they will not, to facilitate curiosity, surprise, and imagination.
10. The creation of simultaneous and changing morsels that contain ten, twenty flavors to be tasted in a few moments. These morsels will also serve the analogous function […] of summarizing an entire area of life, the course of a love affair, or an entire voyage to the Far East.
11. A supply of scientific tools in the kitchen: ozone machines that will impart the scent of ozone to liquids and dishes; lamps to emit ultraviolet rays; electrolyzers to decompose extracted juices etc. in order to use a known product to achieve a new product with new properties; colloidal mills that can be used to pulverize flours, dried fruit and nuts, spices, etc.; distilling devices using ordinary pressure or a vacuum, centrifuge autoclaves, dialysis machines.
The use of this equipment must be scientific, avoiding the error of allowing dishes to cook in steam pressure cookers, which leads to the destruction of active substances (vitamins, etc.) due to the high temperatures. Chemical indicators will check if the sauce is acidic or basic and will serve to correct any errors that may occur: lack of salt, too much vinegar, too much pepper, too sweet.
via FineDiningLovers and BrainPickings
Related Content
Salvador Dalí’s 1973 Cookbook Gets Reissued: Surrealist Art Meets Haute Cuisine
It somehow escaped me. Alec Baldwin has a podcast. With 133 episodes in its archive, Here’s The Thing with Alec Baldwin (Web — iTunes — Feeds) features “intimate and honest conversations” with “artists, policy makers and performers – to hear their stories, what inspires their creations, what decisions changed their careers, and what relationships influenced their work.” Below, we’ve embedded his recent conversation with Patti Smith. It’s quite good. But there are so many others worth a mention. Let me rattle off a quick list: R.E.M.’s Michael Stipe, Viggo Mortensen, Michael Pollan, Amy Schumer and Judd Apatow, William Friedkin, Paul Simon, Ira Glass, Jerry Seinfeld, David Simon, Radiohead’s Thom Yorke, Lena Dunham, Peter Frampton, David Letterman, Carol Burnett, Kristen Wiig, SNL’s Lorne Michaels, and Chris Rock.
Click the links to stream each interview, and don’t miss Baldwin’s new memoir, Nevertheless. He happens to narrate the audiobook version, which you can download for free if you sign up for Audible.com’s 30-day free trial. We have info on that here.
If you would like to sign up for Open Culture’s free email newsletter, please find it here. It’s a great way to see our new posts, all bundled in one email, each day.
If you would like to support the mission of Open Culture, consider making a donation to our site. It’s hard to rely 100% on ads, and your contributions will help us continue providing the best free cultural and educational materials to learners everywhere. You can contribute through PayPal, Patreon, and Venmo (@openculture). Thanks!