Classical music enthusiasts seem to agree that the renewal of interest in period instruments made for a noticeable change in the sound of most, if not all, orchestral performances. But doesn’t the replication and use of viols, ophicleides, and fortepianos from the times of Bach, Beethoven, and Mozart raise a curiosity about what people used to make music generations before them, and generations before that? How early can we get into early music and still find tools to use in the 21st century? Since the end of the 20th century, we’ve had the same answer: about nine millennia.
“Chinese archeologists have unearthed what is believed to be the oldest known playable musical instrument,” wrote Henry Fountain in a 1999 New York Times article on the discovery of “a seven-holed flute fashioned 9,000 years ago from the hollow wing bone of a large bird.”
Those holes “produced a rough scale covering a modern octave, beginning close to the second A above middle C,” and the fact of this “carefully selected tone scale indicates that the Neolithic musicians may have been able to play more than single notes, but actual music.”
You can hear the haunting sounds of this oldest playable musical instrument known to man in the clip above. When would those prehistoric humans have heard it themselves? Fountain quotes ethnomusicologist Frederick Lau as saying that these flutes “almost certainly were used in rituals,” perhaps “at temple fairs, burials and other ritualistic events,” and possibly even “for personal entertainment.” Nine thousand years ago, one surely took one’s entertainment where one could find it.
If this listening experience has given you a taste for the real oldies — not just in the AM-radio but in the history-of-mankind sense — you can also hear in our archive the 43,000-year-old “Neanderthal flute” (found only in fragments, but reconstructed) as well as such ancient songs as 100 BC’s “Seikilos Epitaph,” a composition by Euripides from centuries before that, and a 3,400-year-old Hurrian hymn known as the oldest song in the world, all of which raises an important question: what will the people of the year 11000 think when they unearth our DJ rigs, those artifacts of so many of our own ritualistic events, and give them a spin?
In 1968, both Robert F. Kennedy and Martin Luther King, Jr. were assassinated, and U.S. cities erupted in riots; anti-war demonstrators chanted “the whole world is watching” as police beat and tear-gassed them in Chicago outside the Democratic convention. George Wallace led a popular political movement of Klan sympathizers and White Citizens Councils in a vicious backlash against the gains of the Civil Rights movement; and the vengeful, paranoid Richard Nixon was elected president and began to intensify the war in Vietnam and pursue his program of harassment and imprisonment of black Americans and anti-war activists through Hoover’s FBI (and later the bogus “war on drugs”).
Good times, and given several pertinent similarities to our current moment, it seems like a year to revisit if we want to see recent examples of organized, determined resistance by a very beleaguered Left. We might look to the Black Panthers, the Yippies, or Students for a Democratic Society, to name a few prominent and occasionally affiliated groups. But we can also revisit a near-revolution across the ocean, when French students and workers took to the Paris streets and almost provoked a civil war against the government of authoritarian president Charles de Gaulle. The events often referred to simply as Mai 68 have haunted French conservatives ever since, such that president Nicolas Sarkozy forty years later claimed their memory “must be liquidated.”
May 1968, wrote Steven Erlanger on the 40th anniversary, was “a holy moment of liberation for many, when youth coalesced, the workers listened and the semi-royal French government of de Gaulle took fright.” As loose coalitions in the U.S. pushed back against their government on multiple fronts, the Paris uprising (“revolution” or “riot,” depending on who writes the history) brought together in common purpose several groups who would otherwise never have broken bread: “a crazy array of leftist groups,” students, and ordinary working people, writes Peter Steinfels, including “revisionist socialists, Trotskyists, Maoists, anarchists, surrealists and Marxists. They were anticommunist as much as anticapitalist. Some appeared anti-industrial, anti-institutional, even anti-rational.”
“Be realistic: Demand the impossible!” was one of the May movement’s slogans. A great many more slogans and icons appeared on “extremely fine examples of polemical poster art” like those you see here. These come to us via Dangerous Minds, who explain:
The Atelier Populaire, run by Marxist artists and art students, occupied the École des Beaux-Arts and dedicated its efforts to producing thousands of silk-screened posters using bold, iconic imagery and slogans as well as explicitly collective/anonymous authorship. Most of the posters were printed on newssheet using a single color with basic icons such as the factory to represent labor and a fist to stand for resistance.
The Paris uprisings began with university students, protesting sex-segregated dorms and demanding educational reform, “the release of arrested students and the reopening of the Nanterre campus of the University of Paris,” notes the Global Nonviolent Action Database. But in the following weeks the “protests escalated and gained more popular support, because of continuing police brutality.” Among the accumulating democratic demands and labor protests, writes Steinfels, was “one great fear… that contemporary capitalism was capable of absorbing any and all critical ideas or movements and bending them to its own advantage. Hence, the need for provocative shock tactics.”
This fear was dramatized by Situationists, who—like Yippies in the States—generally preferred absurdist street theater to earnest political action. And it provided the thesis of one of the most radical texts to come out of the tumultuous times, Guy Debord’s The Society of the Spectacle. In a historical irony that would have Debord “spinning in his grave,” the Situationist theorist has himself been co-opted, recognized as a “national treasure” by the French government, writes Andrew Gallix, and yet, “no one—not even his sworn ideological enemies—can deny Debord’s importance.”
The same could be said for Michel Foucault, who found the events of May ’68 transformational. Foucault pronounced himself “tremendously impressed” with students willing to be beaten and jailed, and his “turn to political militancy within a post-1968 horizon was the chief catalyst for halting and then redirecting his theoretical work,” argues professor of philosophy Bernard Gendron, eventually “leading to the publication of Discipline and Punish,” his groundbreaking “genealogy” of imprisonment and surveillance.
Many more prominent theorists and intellectuals took part and found inspiration in the movement, including André Glucksmann, who recalled May 1968 as “a moment, either sublime or detested, that we want to commemorate or bury… a ‘cadaver,’ from which everyone wants to rob a piece.” His comments sum up the general cynicism and ambivalence of many on the French left when it comes to May ’68: “The hope was to change the world,” he says, “but it was inevitably incomplete, and the institutions of the state are untouched.” Both student and labor groups still managed to push through several significant reforms and win many government concessions before police and de Gaulle supporters rose up in the thousands and quelled the uprising (further evidence, Anne-Elisabeth Moutet argued this month, that “authoritarianism is the norm in France”).
The iconic posters here represent what Steinfels calls the movement’s “utopian impulse,” one, however, that “did not aim at human perfectibility but only at imagining that life could really be different and a whole lot better.” These images were collected in 2008 for a London exhibition titled “May 68: Street Posters from the Paris Rebellion,” and they’ve been published in book form in Beauty is in the Street: A Visual Record of the May ’68 Paris Uprising. (You can also find and download many posters in the digital collection hosted by the Bibliothèque nationale de France.)
Perhaps the co-option Debord predicted was as inevitable as he feared. But like many radical U.S. movements in the sixties, the coordinated mobilization of huge numbers of people from every stratum of French society during those exhilarating and dangerous few weeks opened a window on the possible. Despite its short-lived nature, May 1968 irrevocably altered French civil society and intellectual culture. As Jean-Paul Sartre said of the movement, “What’s important is that the action took place, when everybody believed it to be unthinkable. If it took place this time, it can happen again.”
Imagine your favorite works for the piano—the delicate and haunting, the thundering and powerful. The minimalism of Erik Satie, the Romanticism of Claude Debussy or Modest Mussorgsky, the rapturous swooning of Beethoven’s concertos. Maybe it’s Jerry Lee Lewis or Little Richard; Thelonious Monk or Duke Ellington. Tom Waits, Tori Amos, Rufus Wainwright, Prince… you get the idea.
Now imagine all of it never existing. A giant hole opens up in world culture. Catastrophic! Or maybe, I suppose, we’d never know the difference. But I’m certain we’d be worse off for it, somehow. The piano seems inevitable when we look back into music history. Its immediate predecessors, the clavichord and harpsichord, so resemble the modern piano that they must have evolved in just such a way, we think. But it needn’t have been so.
The harpsichord, writes Georgia State University’s HyperPhysics, “has a shape similar to a grand piano,” but its mechanism rules out one critical musical property: dynamics—“the player has no control over the loudness and quality of the tone.” Nearly every innovation in the harpsichord’s design over the instrument’s 400-year history aimed to solve this problem, but none did so as elegantly as the piano, invented around 1700 by Bartolomeo Cristofori. In the video above, you can hear a slightly later version of his instrument from 1720 played by pianist Dongsok Shin—an excerpt from one of the first pieces of music ever written for the instrument.
Cristofori called his design the gravicembalo col piano e forte, “keyboard instrument with soft and loud” sounds. This soon shortened to simply pianoforte. It’s interesting that the word for “soft” eventually became its sole name. For all its grandeur and thunderous capability, it’s the piano’s softness that so often captures our attention—the ability of this lumbering beast of an instrument to pull its punches and move with quiet grace. As you’ll probably note in Shin’s demonstration, the earliest pianos still retained a bit of the harpsichord’s twang, but we can also clearly discern the woody thumps, rumbles, and tinkling highs of modern pianos. (Compare it to this, for example.)
True to its name, the “quiet nature of the piano’s birth around 1700,” writes the Metropolitan Museum of Art, “comes as something of a surprise.” It was invented “almost entirely by one man,” Cristofori, whose expertise had made him steward of Florentine Prince Ferdinando de’ Medici’s entire collection of harpsichords and other musical instruments. The first mention comes from a 1700 Medici inventory describing a harpsichord-like instrument “newly invented by Bartolomeo Cristofori with hammers and dampers, two keyboards, and a range of four octaves, C–c‴.” The first pianos had 54 keys rather than 88, and used “small wooden hammers covered with deerskin.”
Other makers tried different mechanisms, but “Cristofori was an artful inventor,” the Met remarks, “creating such a sophisticated action for his pianos that, at the instrument’s inception, he solved many of the technical problems that continued to puzzle other piano designers for the next seventy-five years of its evolution.” These designers took shortcuts, since Cristofori’s “action was highly complex and thus expensive.” But nothing matched his design, and those features were “gradually reinvented and reincorporated in later decades.”
Cristofori’s ingenious innovations included an “escapement” mechanism that let the hammer fall away from the string instantly after striking it, so as not to dampen the string, while allowing the string to be struck harder than on a clavichord; a “check” that kept the fast-moving hammer from bouncing back to re-hit the string; a dampening mechanism on a jack to silence the string when not in use; the isolation of the soundboard from the tension-bearing parts of the case, so that it could vibrate more freely; and thicker strings at higher tensions than on a harpsichord.
The piano Shin plays above is the oldest surviving instrument of Cristofori’s design, and it resides at the Metropolitan Museum of Art. Only “two other Cristofori pianos survive today,” notes CMuse, one in Rome and another at Leipzig University. This instrument might have represented an elegant dead end in musical evolution. Though Baroque composers at the time, including Johann Sebastian Bach, “were aware of it,” most, like Bach, harbored doubts. “It was only with the compositions of Haydn and Mozart” decades later “that the piano found a firm place in music.” A place so firm, it’s nearly impossible to imagine the last 250 years of music without it.
Above you can watch what was arguably the first surf movie ever made — the very beginning of a long cinematic tradition that gave us Gidget in 1959 and The Endless Summer in 1966. And lest you think the surf movie reached its zenith during those halcyon days, some would argue that the best surf films were produced later, during the aughts — Thicker Than Water (2000), Blue Crush (2002), Step Into Liquid (2003), Riding Giants (2004), etc. And don’t forget this great little short, “Dark Side of the Lens.”
In 1906, smack in the middle of the aughts of last century, Thomas Edison sent the pioneering cinematographer Robert K. Bonine to shoot an ‘Actuality’ documentary about life in the Polynesian islands. The blurb accompanying this video describes the scene above: “The first moving pictures of surfers riding waves — Surf Riders, Waikiki Beach, Honolulu — shows a minute of about a dozen surfers on alaia boards in head-high, offshore surf at what is probably Canoes. These surfers are shot too far away to detail what they were wearing, but they all appear to be in tanksuits.”
If any one painting stands for mid-twentieth-century America, Nighthawks does. In fact, Edward Hopper’s 1942 canvas of four figures in a late-night New York City diner may qualify as the most vivid evocation of that country and time in any form. For Evan Puschak, better known as the video essayist Nerdwriter, the experience of Nighthawks goes well beyond the visual realm. “I’ve always thought of him in a sort of aromatic way,” says Puschak of the artist, “because his paintings evoke the same kinds of feelings and memories that I get from the sense of smell, as if he was channeling directly into my limbic system, excavating moments that were stored deeply away.”
But Puschak wouldn’t have experienced the early 1940s first-hand, much less the turn-of-the-century period in which Hopper grew up. Nor have most of the people captivated by Nighthawks today, much less those countless appreciators as yet unborn. How does Hopper, in his most famous painting and many others, capture a time and a place while also resonating on a deeper, more universally human level?
Puschak takes up that question in “Look through the Window,” a video essay that examines the power of Hopper’s art, “clean, smooth, and almost too real,” through a breakdown of Nighthawks, an expression of all of the artist’s themes: “loneliness, alienation, voyeurism, quiet contemplation, and more.”
The effectiveness of the painting’s composition, in Puschak’s analysis, comes from such elements as the ambiguity of the relationships between its characters, the strong diagonal lines of the diner’s architecture, the use of light in the darkness, and the windows so clear as to look “as if they’re not even there,” all so memorably realized by Hopper’s painstaking dedication to his work. (His long and involved process, which we’ve previously featured here, even included a kind of storyboarding.) “As slowly and deliberately as he painted,” Puschak says, “he wanted us to look — really look, and to be made vulnerable, as a viewer always is.”
Many Americans must have felt such vulnerability with a special acuteness at the time Hopper finished painting Nighthawks, “the weeks and days following the bombing of Pearl Harbor, when everyone in New York City was paranoid about another attack.” Everyone, that is, except Edward Hopper, who kept his studio light on and kept on painting beneath it. “The future was very uncertain at this moment in time, as uncertain as the darkness that frames the patrons of this diner, a darkness they’re launched into by Hopper’s composition and our gaze.” Some might say that times, in America and elsewhere, haven’t become much more certain since. We, like Hopper, could do much worse than continuing to create ever more deliberately, and to see ever more clearly.
Glass specialist Bill Gudenrath of the Corning Museum of Glass is a historian of glassworking techniques from ancient Egypt through the Renaissance and clearly an expert at his craft, but he doesn’t appear to be too keen on supplying explanatory blow-by-blows. Nor would I be, bustling around a red-hot glass oven, without so much as a Johnny Tremain-style leather apron to protect me. I’m not even sure I’d want the distraction of a video camera in my face.
But if, as the title implies, the goal is to produce a duplicate of this whimsical 1900-year-old guppy, the process must be broken down.
From what this casual viewer was able to piece together, the steps go something like this:
1. Twirl a red-hot metal pipe in the forge until you have a healthy glob of molten glass. Apparently it’s not so different from making cotton candy.
2. Roll the glass blob back and forth on a metal tray.
3. Blow into the pipe’s non-glowing end to form a bubble.
4. Repeat steps 1–3.
5. Roll the pipe back and forth on a metal sawhorse while seated, applying pinchers to taper the blob into a recognizably fishy shape.
(Don’t worry about its proximity to your bare forearms and khaki-covered thighs! What could possibly go wrong?)
6. Twirl it like a baton.
(Depending on the length of your arms, your nascent glass fish may come dangerously close to the cement floor. Try not to sweat it.)
7. Use scissors and pinchers to tease out a nipple-shaped appendage that will become the fish’s lips.
8. Use another poker to apply various bloops of molten glass. (Novices may want to practice with a hot glue gun to get the hang of this — it’s trickier than it looks!) Pinch, prod and drape these bloops into eye and fin shapes. A non-electric crimping iron will prove handy here.
9. Use blue glass, tweezers and crimping iron to personalize your fish-shaped vessel’s distinctive dorsal and anal fins.
10. Tap on the pipe to crack the fish loose. (Careful!)
11. Score the distal end with a glass cutting tool.
(This step should prove a cinch for anyone who ever used a craft kit to turn empty beer and soda bottles into drinking glasses!)
12. Smooth rough edges with another loop of molten glass and some sort of electric underwater grinding wheel.
Optional 13th step: Read this description of a furnace session to better acquaint yourself with both best glassblowing practices and the proper names for the equipment. Or get the jump on Christmas 2017 with this true how-to guide to producing hand-blown glass ornaments.
Not planning on blowing any glass, fish-shaped or otherwise, any time soon?
Explore the somewhat mysterious history of the 1900-year-old fish-shaped original here, compliments of the British Museum’s St John Simpson, senior curator for its pre-Islamic collections from Iran and Arabia.
Ayun Halliday is an author, illustrator, theater maker and Chief Primatologist of the East Village Inky zine. Her play Zamboni Godot is opening in New York City in March 2017. Follow her @AyunHalliday.
We’re well into the backlash cycle of the post-election outrage over “fake news,” as commentator after commentator calls this phrase into question and celebrates the fall of the gatekeeper media. Taking a phrase from Tom Wolfe, Matthew Continetti at the conservative Commentary argues that “the press… is a Victorian Gentleman, the arbiter of manners and fashion, the judge of right conduct and good breeding.” We should not lament this gentleman’s loss of a “liberal, affluent, entitled cocoon.” He had long ago “changed his job description and went from telling his readers what had happened to telling them what to think.”
Likewise, The Intercept has shown how fake news panic produced a “McCarthyite Blacklist” of independent organizations lumped together by “shoddy, slothful journalistic tactics” of the kind used by “smear artists” and peddlers of disinformation. Politics aside, what we should at least gather from this firestorm is that the story of “fake news”—or of deliberate hoaxes, lies, and propaganda—is much older than the Internet, though the speed at which it spreads has increased exponentially with the dominance of social media. We’re left wondering how we might reclaim some orientation toward the truth in any media. If everything is potentially fake news, what can we trust?
With the professional vetting of information in crisis, we are thrown back on the popularization of Darwinism advanced by “British defender of capitalism” Herbert Spencer, who—writes Timothy Snyder in his New York Times bestseller Black Earth: The Holocaust as History and Warning—described the market as “an ecosphere where the strongest and best survived.” In our information ecosystem, “strongest and best” is often determined not by natural forces, nor by expert adjudication of merit, but by algorithms… and cash. And as journalists at The Independent and elsewhere discovered last week, Google’s algorithms have decided that the best, most helpful answer to the question “did the holocaust happen?” comes from neo-Nazi hate site Stormfront, in a piece glibly titled “Top 10 reasons why the Holocaust didn’t happen.”
It should go without saying—and yet it must be said—that no serious historian of the period considers the systematic mass murder of millions of Jews and other “undesirables” to be an open historical question. The horror of the ’30s and ’40s, writes the U.S. Holocaust Memorial Museum, is “one of the best documented events in history,” and denials and distortions of these events “are generally motivated by hatred of Jews.” (See their video explaining denialism at the top.) There’s no question that’s the motive behind Google’s top search result for Holocaust denialism. Google admits as much, writing this past Monday: “We are saddened to see that hate organizations still exist. The fact that hate sites appear in search results does not mean that Google endorses these views.”
And yet, writes Carole Cadwalladr at The Guardian, the search engine giant also “confirmed it would not remove the result.” Cadwalladr details how she displaced the top result herself “with the only language that Google understands: money.” Lilian Black, the daughter of a Holocaust survivor, compared the tech giant’s response to “saying we know that the trains are running into Birkenau, but we’re not responsible for what’s happening at the end of it.” But they should bear some responsibility. Google, she says, shapes “people’s thinking… Can’t they see where this leads? And to have a huge worldwide organization refusing to acknowledge this. That’s what they think their role is? To be a bystander?”
The question forces us to confront not only the role of the press but also the role of the new gatekeepers, Google, Facebook, Twitter, etc., who have displaced Victorian systems of managing information and knowledge. The loss of status among academics and professional journalists and editors may have salutary effects, such as a democratization of media and the emergence of credible voices previously confined to the margins. But what can be done about the corresponding rise in deliberate misinformation published by hate groups and propaganda organizations? Moral considerations carry no weight when the figurative “marketplace of ideas” is reduced to the literal market.
Danny Sullivan, a search engine expert Cadwalladr cites, suggests that the reason the Stormfront result rose to the top of Google’s search may be nothing more than populism for profit: “Google has changed its algorithm to reward popular results over authoritative ones. For the reason that it makes Google more money.” The rising popularity of hate sites presents a growth opportunity for Google and its competitors. Meanwhile, racist hate groups spread their messages unimpeded, ordinary citizens are badly misinformed, and so-called “self-radicalized” individuals like mass killer Dylann Roof and Tommy Mair—who murdered British MP Jo Cox this past summer—continue to find the “strongest and best” cases for their homicidal designs, no matter that so much of the information they consume is not only fake, but designedly, malevolently false.
In January 1970—with a line that might have come right out of any number of current opinion pieces taking the media to task—Rolling Stone ripped into Time, Life, Newsweek, and the New York Times for their coverage of the 1969 Altamont Free Concert: “When the news media know what the public wants to hear and what they want to believe, they give it to them.”
What did the public want to hear? Apparently that Altamont was “Woodstock West,” full of “peace and love” and “good vibes.” Since, however, it was “undeniable that one man was actually murdered at the concert, a certain minimal adjustment was made, as if that event had been the result of some sort of unpredictable act of God, like a stray bolt of lightning.” The murdered fan, 18-year-old Meredith Hunter, was not, of course, killed by lightning, but stabbed to death by one of the Hell’s Angels who were hired as informal security guards.
Hunter was killed “just 20 feet in front of the stage where Mick Jagger was performing ‘Under My Thumb,’” writes the History Channel: “Unaware of what had just occurred, the Rolling Stones completed their set without further incident, bringing an end to a tumultuous day that also saw three accidental deaths and four live births.”
We know the moment best from the Maysles brothers’ concert film Gimme Shelter, which opens with a scene of Jagger viewing footage of the violence. See the unrest during “Sympathy for the Devil,” above, and the confused scene of the killing during “Under My Thumb,” further down.
It’s one of the few dark days in the history of rock. This was the anti-Woodstock. It also took place in December of 1969, bookending the ’60s in a chronological way. That loss of innocence is why the day endures as a cultural touchstone.
The loss of American innocence is an old trope that assumes the country, at some mythical time in the past, was a blameless paradise. But who was to blame for Altamont? The Stones were not held legally accountable, nor was the biker who stabbed Hunter. In another echo from the past into the present, he was acquitted on self-defense grounds. “What happened at Altamont,” was also “not the music’s fault,” writes The New Yorker’s Richard Brody, who blames “Celebrity” and a loss of “benevolent spirits… the idea of the unproduced.”
To ascribe such incredible weight to this incident—to mark it as the end of peace and love and the birth of “infrastructure” and “authority,” as Brody does—seems historically tone deaf. Strictly from the point of view of the Stones’ musical development, we might say that the close of the sixties and the year of Altamont marked a transition to a darker, grittier period, the end of the band’s forays into psychedelia and folk music. That summer, Brian Jones drowned in his swimming pool. And the band followed the sneering “Under My Thumb” at Altamont with a brand new tune, “Brown Sugar,” a song about slavery and rape.
You can hear the first live performance of the song at the top of the post, captured in an audience recording, two years before its official recording and release on 1971’s Sticky Fingers. “It was a song of sadism,” writes Stanley Booth, “savagery, race hate/love, a song of redemption, a song that accepted the fear of night, blackness, chaos, the unknown.” It’s a song that would face instant backlash were it released today. “Twitter would lampoon [the band] with carefully thought out hashtags,” writes Lauretta Charlton, “Multiple Change.org petitions would be signed. The band would be forced to issue an apology.”
Jagger himself said in 1995, “I would never write that song now. I would probably censor myself.” And he has, in many subsequent performances, changed some of the most outrageous lyrics. Charlton confesses to loving and hating the song, calling it “gross, sexist, and stunningly offensive toward black women.” And yet, she says, “When I hear ‘Brown Sugar,’ the outrage hits me like a postscript, and by that point I’m too busy clapping and singing along to be indignant.” Surrounded by the violence at Altamont, Jagger channeled the violence of history in a raunchy blues that—like “Under My Thumb” and “Sympathy for the Devil”—captures the seductive nature of power and sexualized aggression, and gives the lie to facile ideas of innocence, whether of the past or of the contemporary late-sixties social and political scene.