In Bradford, near Leeds in the UK, more than 30 tons of sand have been imported to build nine sand sculptures across the city, as part of what’s called the Discovering Bradford project. Above, you can see one that caught our eye, thanks to the Vintage Anchor Twitter stream. It’s a life-size sand sculpture of Emily Brontë, created by Jamie Wardley, an artist who belongs to the collective Sand in Your Eye. Brontë was born in Thornton, a short hop, skip, and a jump away from Bradford. For more culturally inspired sand creations, see the Relateds below.
In the late 50s, a fearful, racist backlash against rock and roll, coupled with money-grubbing corporate payola, pushed out the blues and R&B that drove rock’s sound. In its place came easy listening orchestration more palatable to conservative white audiences. As sexy electric guitars gave way to string and horn sections, the comparatively aggressive sound of rock and roll seemed so much a passing fad that Decca’s senior A&R man rejected the Beatles’ demo in 1962, telling Brian Epstein, “guitar groups are on their way out.”
But it wasn’t only the blues, R&B, and doo wop revivalism of British Invasion bands that saved the American art form. It was also the often unintentional influence of audio engineers who—with their incessant tinkering and a number of happy accidents—created new sounds that defined the countercultural rock and roll of the 60s and 70s. Ironically, the two technical developments that most characterized those decades’ rock guitar sounds—the wah-wah and fuzz pedals—were originally marketed as ways to imitate strings, horns, and other non-rock and roll instruments.
As you’ll learn in the documentary above, Cry Baby: The Pedal that Rocks the World, the wah-wah pedal, with its “waka-waka” sound so familiar from “Shaft” and 70s porn soundtracks, officially came into being in 1967, when the Thomas Organ company released the first incarnation of the effect. But before it acquired the brand name “Cry Baby” (still the name of the wah-wah pedal manufactured by Jim Dunlop), it went by the name “Clyde McCoy,” a backward-looking bit of branding that attempted to market the effect through nostalgia for pre-rock and roll music. Clyde McCoy was a jazz trumpet player known for his “wah-wah” muting technique on songs like “Sugar Blues” in the 20s, and the pedal was thought to mimic McCoy’s jazz-age effects. (McCoy himself had nothing to do with the marketing.)
Nonetheless, the development of the wah-wah pedal came straight out of the most current sixties technology, built for the most current of acts: the Beatles. Increasingly drowned out by screaming crowds in larger and larger venues, the band required louder and louder amplifiers, and British amp company Vox obliged, creating the 100-watt “Super Beatle” amp in 1964 for their first U.S. tour. As Priceonomics details, when Thomas Organ scored a contract to manufacture the amps stateside, a young engineer named Brad Plunkett was given the task of learning how to make them for less. While experimenting with the smooth dial of a rotary potentiometer in place of an expensive switch, he discovered the wah-wah effect, then had the bright idea to combine the dial—which swept a resonant peak across the upper midrange frequencies—with the foot pedal of an organ.
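In modern signal-processing terms, what Plunkett had stumbled onto is a resonant band-pass filter whose center frequency the player sweeps by rocking the pedal. Here is a minimal Python sketch of that idea (assuming NumPy and SciPy are available; the 400 to 2,200 Hz sweep range, the filter order, and the function names are illustrative assumptions, not measurements of the actual Cry Baby circuit):

```python
import numpy as np
from scipy.signal import butter, lfilter, lfilter_zi

def wah(signal, sample_rate=44100, low_hz=400.0, high_hz=2200.0,
        sweep_hz=1.0, bandwidth=0.3, block=256):
    """Sweep a resonant band-pass filter across the signal, block by block."""
    out = np.zeros_like(signal, dtype=float)
    zi = None
    for start in range(0, len(signal), block):
        t = start / sample_rate
        # An LFO stands in for the player's foot rocking the treadle:
        # pedal position oscillates smoothly between 0 (heel) and 1 (toe).
        pedal = 0.5 * (1 + np.sin(2 * np.pi * sweep_hz * t))
        center = low_hz + pedal * (high_hz - low_hz)
        lo, hi = center * (1 - bandwidth / 2), center * (1 + bandwidth / 2)
        # Narrow band-pass centered on the current "pedal" frequency.
        b, a = butter(2, [lo, hi], btype="band", fs=sample_rate)
        if zi is None:
            zi = lfilter_zi(b, a) * signal[0]
        chunk = signal[start:start + block]
        out[start:start + block], zi = lfilter(b, a, chunk, zi=zi)
    return out
```

A real wah does the same job with an inductor-and-capacitor resonant circuit and a potentiometer wired to the treadle, but the audible result is the same vowel-like sweep through the midrange.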
The rest, as the cliché goes, is history—a fascinating history at that, one that leads from Elvis Presley studio guitarist Del Casher, to Frank Zappa, Clapton and Hendrix, and to dozens of 70s funk guitarists and beyond.
Art Thompson, editor of Guitar Player Magazine, notes in the star-studded Cry Baby documentary that prior to the invention of the wah-wah pedal, guitarists had a limited range of effects—tape delay, tremolo, spring reverb, and fuzz. Only one of these effects, however, was then available in pedal form, and that pedal, Gibson’s Maestro Fuzz-Tone, would also revolutionize the sound of sixties rock. But as you can hear in the short 1962 demonstration record above for the Maestro Fuzz-Tone, the fuzz effect was also marketed as a way of simulating other instruments: “Organ-like tones, mellow woodwinds, and whispering reeds,” says the announcer, “booming brass, and bell-clear horns.”
In fact, Keith Richards, in the Stones’ “(I Can’t Get No) Satisfaction”—the song credited with introducing the Maestro’s sound to rock and roll in 1965—originally recorded his fuzzed-out guitar part as a placeholder for a horn section. “But we didn’t have any horns,” he wrote in his autobiography, Life; “the fuzz tone had never been heard before anywhere, and that’s the sound that caught everybody’s attention.”
The assertion isn’t strictly true. While “Satisfaction” brought fuzz to the forefront, the effect first appeared, by accident, in 1961, with “a faulty connection in a mixing board,” writes William Weir in a history of fuzz for The Atlantic. Fuzz, “a term of art… came to define the sound of rock guitar,” but it first appeared in “the bass solo of country singer Marty Robbins on ‘Don’t Worry,’” an “otherwise sweet and mostly acoustic tune.” At the time, engineers argued over whether to leave the mistaken distortion in the mix. Luckily, they opted to keep it, and listeners loved it. When Nancy Sinatra asked engineer Glen Snoddy to replicate the sound, he recreated it in the form of the Maestro.
Guitarists had experimented deliberately with similar distortion effects since the very beginnings of rock and roll, cutting or poking holes in their amps’ speaker cones—like Link Wray in his menacing classic instrumental “Rumble”—or pushing small, tube-powered amplifiers past their limits. But none of these experiments, nor the pedals that later emulated them, sounds like the fuzz pedal, which achieves its buzzing effect by severely clipping the guitar’s signal (sketched in code below). Later iterations from other manufacturers—the Tone Bender, Big Muff, and Fuzz Face—have acquired their own cachet, in large part because of Jimi Hendrix’s heavy use of various fuzz pedals throughout his career. “Like the shop talk of wine enthusiasts,” writes Weir, “discussions among distortion cognoscenti on nuances of tone can baffle outsiders.”
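Since the mechanism really is just severe clipping, it is easy to sketch. Here is a minimal Python/NumPy illustration (the gain and threshold values are arbitrary assumptions, not the Maestro’s actual circuit behavior) of how hard clipping squares off a clean sine wave into the buzzy, harmonically rich timbre we call fuzz:

```python
import numpy as np

def fuzz(signal, gain=20.0, threshold=0.3):
    """Boost the signal hard, then flatten everything beyond the threshold."""
    driven = signal * gain                            # overdrive the input
    clipped = np.clip(driven, -threshold, threshold)  # hard clipping
    return clipped / threshold                        # rescale to roughly unit level

# A clean 440 Hz sine comes out the other side as a near-square wave,
# full of the odd harmonics that give fuzz its characteristic buzz.
sample_rate = 44100
t = np.arange(sample_rate) / sample_rate
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
buzzy = fuzz(clean)
```

Actual fuzz circuits shape that clipping with transistor quirks, tone controls, and the interaction with a guitar’s own pickups, which is where the connoisseurship over “nuances of tone” comes in.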
Indeed. Those early experiments with effects pedals now fetch upwards of several thousand dollars on the vintage market. And a recent boom in boutique pedals has sent prices for handcrafted replicas of those original models—along with several innovative new designs—into the hundreds of dollars for a single pedal. (One handmade overdrive, the Klon Centaur, has become the most imitated of modern pedals; originals can go for up to two thousand dollars.) The specialization of effects pedal technology, and the hefty pricing for vintage and contemporary effects alike, can be daunting for beginning guitarists who want to sound like their favorite players. But what early players and engineers figured out still holds true—musical innovation is all about creating original sounds by experimenting with whatever you have at hand.
Here you see the versatile Peters’ visual interpretation of W.B. Yeats’ “When You Are Old,” a natural choice given his evident poetic interests, but one drawn in the style of Japanese manga. In adapting Yeats’ words, addressed to a lady in the twilight of life, Peters has paid specific tribute to the work of Clamp, Japan’s famous all-female comic-artist collective known for series like RG Veda, Tokyo Babylon, and X/1999.
Clamp fans will find that, in three brief pages, Peters touches on quite a few of the aesthetic tropes that have long characterized the collective’s work. (You’ll want to click through to Peters’ own “When You Are Old” page to see an extra illustration that also fits well into the Clamp sensibility.) Yeats fans will no doubt appreciate the chance to see the poet’s work in an entirely new way. I, for one, had never before pictured a cat on the lap of the woman “old and grey and full of sleep” reflecting on the “moments of glad grace” of her youth and the one man who loved her “pilgrim soul,” but now I always will — and I imagine both Yeats and Clamp would approve of that. You can read and hear Yeats’ 1892 poem here. If you click on the images on this page, you can view them in a larger format.
Physicist Stephen Hawking may trump them all, though his famously recognizable voice is not organic. The one we all associate with him has been computer-generated since worsening amyotrophic lateral sclerosis, aka Lou Gehrig’s disease, led to a tracheotomy in 1985.
Without the use of his hands, Hawking controls the Assistive Context-Aware Toolkit software with a sensor attached to one of his cheek muscles.
Recently, Intel has made the software and its user guide available for free download on the code-sharing site GitHub. It requires a computer running Windows XP or above, as well as a webcam to track the visual cues of the user’s facial expressions.
The multi-user program allows users to type in MS Word and browse the Internet, in addition to helping them “speak” aloud in English.
The software release is intended to help researchers aiding sufferers of motor neuron diseases, not pranksters seeking to borrow the famed physicist’s voice for their doorbells and cookie jar lids. To that end, the free version comes with a default voice, not Professor Hawking’s.
Ayun Halliday is an author, illustrator, and Chief Primatologist of the East Village Inky zine. Her play, Fawnbook, is currently playing in New York City. Follow her @AyunHalliday
Just a few miles down the highway from Open Culture’s gleaming headquarters you will find Los Gatos High School, where Dan Burns, an AP physics teacher, has figured out a simple but clever way to visualize gravity as it was explained by Einstein’s 1915 General Theory of Relativity. Get $20 worth of spandex, some marbles, a couple of weights, and you’re good to go. Using these readily available objects, you can demonstrate how matter warps space-time, how objects gravitate toward one another, and why objects orbit the way they do. My favorite part comes at the 2:15 mark, where Burns demonstrates the answer to a question you’ve maybe pondered before: why do all the planets happen to orbit the sun in the same direction? Now you can find out why.
If you would like to sign up for Open Culture’s free email newsletter, please find it here. It’s a great way to see our new posts, all bundled in one email, each day.
If you would like to support the mission of Open Culture, consider making a donation to our site. It’s hard to rely 100% on ads, and your contributions will help us continue providing the best free cultural and educational materials to learners everywhere. You can contribute through PayPal, Patreon, and Venmo (@openculture). Thanks!
1949’s Death of a Salesman is one of the most enduring plays in the American canon, a staple of both community and professional theater.
Playwright Arthur Miller recalled that when the curtain fell on the first performance, there were “men in the audience sitting there with handkerchiefs over their faces. It was like a funeral.”
Robert Falls, Artistic Director of Chicago’s Goodman Theater, brings the experience of dozens of productions to bear when he describes it as the only play that “sends men weeping into the Men’s room.”
Small wonder that the titular part has become a grail of sorts for aging leading men eager to be taken seriously. Dustin Hoffman, George C. Scott, and Philip Seymour Hoffman have all had a go at Willy Loman, a role still associated with the towering Lee J. Cobb, who originated it.
(Willy’s wife, Linda, with her famous graveside admonition that “attention must be paid,” is considered no less of a plum part.)
On February 2, 1955, Arthur Miller joined Salesman’s first Mrs. Loman, Mildred Dunnock, to read selections from the script before a live audience at Manhattan’s 92nd Street Y. In addition to reading the role of Willy Loman, Miller supplied stage directions and explained his rationale for picking the featured scenes. The Pulitzer Prize winner’s New York accent and brusque manner make him a natural, and of course, who better to understand the nuances, motivations, and historical context of this tragically flawed character?
Miller told The New Yorker that he based Loman on his family friend, Manny Newman:
Manny lived in his own mind all the time. He never got out of it. Everything he said was totally unexpected. People regarded him as a kind of strange, completely untruthful personality. Very charming. I thought of him as a kind of wonderful inventor. For example, at will, he would suddenly say, “That’s a lovely suit you have on.” And for no reason at all, he’d say, “Three hundred dollars.” Now, everybody knew he never paid three hundred dollars for a suit in those days. At a party, he would lie down on his wife’s lap and pretend to be sucking her breast. He’d curl up on her lap—she was an immense woman. It was crazy. At the same time, there was something in him which was terribly moving. It was very moving, because his suffering was right on his skin, you see.
Ayun Halliday is an author, illustrator, and Chief Primatologist of the East Village Inky zine. Her play, Fawnbook, is now playing in New York City. Follow her @AyunHalliday
In a perfect world, I could write this post for free. Alas, the rigors of the modern economy demand that I pay regular and sometimes high prices for food, shelter, books, and the other necessities of life. And so if I spend time working on something — and in my case, that usually means writing something — I’d better ask for money in exchange, or I’ll find myself out on the street before long. Nobody understands this better than Harlan Ellison, the hugely prolific author of novels, stories, essays, screenplays, and comic books, usually in, or dealing with, the genre of science fiction.
Ellison also starred in Dreams with Sharp Teeth, a documentary about his colorful life and all the work he’s written during it, a clip of which you can see at the top of the post. In it, he describes receiving a call just the day before from “a little film company” seeking permission to include an interview clip with him previously shot about the making of Babylon 5, a series on which he worked as creative consultant. “Absolutely,” Ellison said to the company’s representative. “All you’ve got to do is pay me.”
This simple request seemed to take the representative—who went on to insist that “everyone else is just doing it for nothing” and that “it would be good publicity”—quite by surprise. “Do you get a paycheck?” Ellison then asked. “Does your boss get a paycheck? Do you pay the telecine guy? Do you pay the cameraman? Do you pay the cutters? Do you pay the Teamsters when they schlep your stuff on the trucks? Would you go to the gas station and ask them to give you free gas? Would you go to the doctor and have them take out your spleen for nothing?”
This line of questioning has come up again and again since Ellison told this story, as when the journalist Nate Thayer, or more recently Wil Wheaton, spoke out against the expectation that writers would hand out the rights to their work “for exposure.” The pragmatic Ellison frames the matter as follows: “Cross my palm with silver, and you can use my interview.” But do financially-oriented attitudes such as his (“I don’t take a piss without getting paid for it”) taint the art and craft of writing? He doesn’t think so: “I sell my soul,” he admits, “but at the highest rates.”
Did Bram Stoker’s world-famous Dracula character—perhaps the most culturally unkillable of all horror monsters—derive from Irish folklore? Search the Gaelic “Droch-Fhoula” (pronounced droc’ola) and, in addition to the requisite metal bands, you’ll find references to the “Castle of the Blood Visage,” to a blood-drinking chieftain named Abhartach, and to other possible native sources of Irish writer Bram Stoker’s 1897 novel. These Celtic legends, the BBC writes, “may have shaped the story as much as European myths and Gothic literature.”
Despite all this intriguing speculation about Dracula’s Irish origins, the actors playing him have come from a variety of places. One recent incarnation, TV series Dracula, did cast an Irish actor, Jonathan Rhys Meyers, in the role.
Hungarian Bela Lugosi comes closest to the fictional character’s nationality, as well as that of another, perhaps dubious source, Romanian warlord Vlad the Impaler. Protean Brit Gary Oldman played up the character as a Slavic aristocrat in Francis Ford Coppola’s somewhat more faithful take. But one too-oft-overlooked portrayal by another English actor, Christopher Lee, deserves much more attention than it receives.
The audio here was also recorded in 1966 by the book’s editor Russ Jones. Comics blogger Steven Thompson remarks that “since Dracula is made up of a series of letters, journal and diary entries, the writers here logically take a more straightforward route of telling the tale while maintaining the episodic feel quite well.” Rather than the voice of Count Dracula, Lee reads as the novel’s epistolary narrator Jonathan Harker, and the Dracula in the artwork, drawn by artist Al McWilliams, “bears more than a passing resemblance here to actor John Carradine,” a notable American actor who played the character in Universal’s House of Frankenstein and House of Dracula. Nonetheless, Lee’s voice is enough to conjure his many exceptional performances as the prototypical vampire, a character and concept that will likely never die.
Scholar and writer Bob Curran, a proponent of the Irish origins of Dracula, argues in his book Vampires that legends of undead, blood-drinking ghouls are found all over the world, which goes a long way toward explaining the enduring popularity of Dracula in particular and vampires in general. We’ll probably see another actor inherit the role of Stoker’s seductively creepy count in the near future. Whoever it is will have to measure himself against not only the performances of Lugosi, Carradine, Oldman, and Meyers, but also against the debonair Christopher Lee. He would do well, wherever he comes from, to study Lee’s Dracula films closely, and listen to him read the story in the adaptation above.
It shouldn’t be especially controversial to point out that we live in a pivotal time in human history—that the actions we collectively take (or that plutocrats and technocrats take) will determine the future of the human species—or whether we even have a future in the coming centuries. The threats posed by climate change and war are exacerbated and accelerated by rapidly worsening economic inequality. Exponential advances in technology threaten to eclipse our ability to control machines rather than be controlled, or stamped out, by them.
Where Kurzweil has seen this event through an optimistic, New Age lens, Hawking’s view seems more in line with dystopian sci-fi visions of robot apocalypse. “Success in AI would be the biggest event in human history,” he wrote in The Independent last year, “Unfortunately it might also be the last.” Given the design of autonomous weapons systems and, as he told the BBC, the fact that “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded,” the prospect looks chilling, but it isn’t inevitable.
Our tech isn’t actively out to get us. “The real risk with AI isn’t malice but competence,” Hawking clarified in a fascinating Reddit “Ask Me Anything” session last month. Because of the physicist’s physical limitations, readers posted questions and voted on their favorites; from these, Hawking selected the “ones he feels he can give answers to.” In response to a top-rated question about the so-called “Terminator Conversation,” he wrote, “A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”
This problem of misaligned goals is not, of course, limited to our relationship with machines. Our precarious economic relationships with each other pose a separate threat, especially in the face of massive job loss due to future automation. We’d like to imagine a future where technology frees us of toil and want, the kind of society Buckminster Fuller sought to create. But the truth is that wealth and income inequality, at their highest levels in the U.S. since at least the Gilded Age, may determine a very different path—one we might think of in terms of “The Elysium Conversation.” Asked in the same Reddit AMA, “Do you foresee a world where people work less because so much work is automated? Do you think people will always either find work or manufacture more work to be done?,” Hawking elaborated,
If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.
For decades after the Cold War, capitalism had the status of an unquestionably sacred doctrine—the end of history and the best of all possible worlds. Now, not only has Hawking identified its excesses as drivers of human decline, but so have other decidedly non-Marxist figures like Bill Gates, who in a recent Atlantic interview described the private sector as “in general inept” and unable to address the climate crisis because of its focus on short-term gains and maximal profits. “There’s no fortune to be made,” he said, from dealing with some of the biggest threats to our survival. But if we don’t deal with them, the losses are incalculable.
Many people still have a major fear of mathematics, having suffered through school without ever being in the right frame of mind to grasp concepts they were told would come in handy in their future working lives. Britons can choose to leave school at 16, and many do, escaping the terror of math (or, as they say, maths).
But we shouldn’t live in fear, so along comes Citizen Maths, a UK-based free online course that purports to help adults catch up with Level 2 math (aka what a 16-year-old should know) without getting hit with a ruler or a spit wad. The course is funded by the UFI Charitable Trust, which focuses on providing free education for adults.
The Citizen Maths course currently consists of three units—Proportion, Uncertainty, and Representation. Additional sections on Pattern and Measurements will soon follow. All units come with videos and tests that take about an hour of the viewer’s time. As the narrator says, you can “learn in safety, without fear of being told off or exposed.” The full course takes, on average, about 20 hours.
And the tutorials bring in the real world, not just the abstract. Ratios and odds are experienced through roulette, horse racing, and dice games. Insurance figures into the tutorial on making decisions. Modeling is explained by trying to understand weather patterns. And proportion is explained through baking recipes and making cocktails.
As of this post, three of the five sections are available, with the complete course due up by next year. You can find more advanced Math courses in our collection of Free Online Math Courses.
Ted Mills is a freelance writer on the arts who currently hosts the FunkZone Podcast. You can also follow him on Twitter at @tedmills, read his other arts writing at tedmills.com and/or watch his films here.
The 19th century witnessed the birth of photography. And, before too long, Victorian society found important applications for the new medium — like memorializing the dead. A recent post on a Dutch version of National Geographic notes that “Photographing deceased family members just before their burial was enormously popular in certain Victorian circles in Europe and the United States. Although adults were also photographed, it was mainly children who were commemorated in this way. In a period plagued by unprecedented levels of infant mortality, post-mortem pictures often provided the only tangible memory of the deceased child.”
Though unusual by modern standards, the pictures played an important role in a family’s grieving process and often became one of its cherished possessions — cherished because it was likely the only photo the family had of the deceased child. During the early days of photography, portraits were expensive, which meant that most families didn’t take pictures in the course of everyday life. It was only death that gave them a prompt.
The practice of taking post-mortem pictures peaked in the 19th century, right around the time when “snapshot” photography became more prevalent, allowing families to take portraits at lower cost while everyone was still in the full swing of life and obviating the need for post-mortem photos. You can learn more about this bygone practice by visiting the Burns Archive or getting the book, Sleeping Beauty: Memorial Photography in America.