They only have flexor muscles, which allow their legs to curl in, and they extend them outward by hydraulic pressure. When they die, they lose the ability to actively pressurize their bodies. That’s why they curl up.
When a scientifically inclined human inserts a needle into a deceased spider’s hydraulic prosoma chamber, seals it with superglue, and delivers a tiny puff of air from a handheld syringe, all eight legs will straighten like fingers on jazz hands.
These necrobotic spider gripper tools can lift around 130% of their body weight — smaller spiders are capable of handling more — and each one is good for approximately 1,000 grips before degrading.
Preston and Yap envision putting the spiders to work sorting or moving small-scale objects, assembling microelectronics, or capturing insects in the wild for further study.
Eventually, they hope to be able to isolate the movements of individual legs, as living spiders can.
Environmentally, these necrobotic parts have a major advantage in that they’re fully biodegradable. When they’re no longer technologically viable, they can be composted. (Humans can be too, for that matter…)
The idea is as innovative as it is offbeat. As a soft robotics specialist, Preston is always seeking alternatives to hard plastics, metals and electronics:
“We use all kinds of interesting new materials like hydrogels and elastomers that can be actuated by things like chemical reactions, pneumatics and light. We even have some recent work on textiles and wearables… The spider falls into this line of inquiry. It’s something that hasn’t been used before but has a lot of potential.”
Conquer any lingering arachnophobia by reading Yap and Preston’s research article, Necrobotics: Biotic Materials as Ready-to-Use Actuators, here.
More recently, the poet and international educator has combined her interest in amigurumi crocheted animals and ChatGPT, OpenAI’s AI chatbot.
Having crocheted an amigurumi narwhal for a nephew earlier this year, she hopped on ChatGPT and asked it to create “a crochet pattern for a narwhal stuffed animal using worsted weight yarn.”
The result might have discouraged another querent, but Woolner got out her crochet hook and sallied forth, following ChatGPT’s instructions to the letter, despite a number of red flags indicating that the chatbot’s grasp of narwhal anatomy was highly unreliable.
Its ignorance is part of its DNA. As a large language model, ChatGPT produces text by predicting likely word sequences based on patterns in the vast amounts of data it was trained on. But it can’t see images.
It has no idea what a cat looks like or even what crochet is. It simply connects words that frequently appear together in its training data. The result is superficially plausible passages of text that often fall apart when exposed to the scrutiny of an expert—what’s been called “fluent bullshit.”
It’s also not too hot at math, a skill set knitters and crocheters bring to bear when reading patterns, which traffic in numbers of rows and stitches, indicated by abbreviations that really flummox a chatbot.
Rnd 7: sc even (12); F/O and leave a long strand of yarn to sew the dorsal fin between rnds # 18–23. Do not stuff the fin.
Pity poor ChatGPT, though: like Woolner, it tried.
Their collaboration became a cause célèbre when Woolner debuted the “AI generated narwhal crochet monstrosity” on TikTok, aptly comparing the large tusk ChatGPT had her position atop its head to a chef’s toque.
Is that the best AI can do?
A recent This American Life episode details how Sebastien Bubeck, a machine learning researcher at Microsoft, asked another large language model, GPT‑4, to write code in TikZ, a LaTeX-based vector graphics language, to “draw” a unicorn.
This collaborative experiment was perhaps more empirically successful than the ChatGPT amigurumi patterns Woolner dutifully rendered in yarn and fiberfill. This American Life’s David Kestenbaum was sufficiently awed by the resulting image to hazard a guess that “when people eventually write the history of this crazy moment we are in, they may include this unicorn.”
It’s not good, but it’s a fucking unicorn. The body is just an oval. It’s got four stupid rectangles for legs. But there are little squares for hooves. There’s a mane, an oval for the head. And on top of the head, a tiny yellow triangle, the horn. This is insane to say, but I felt like I was seeing inside its head. Like it had pieced together some idea of what a unicorn looked like and this was it.
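For the curious, the shapes Kestenbaum describes can be approximated in a few lines of TikZ. This is a hand-written sketch based on his description, not the code GPT‑4 actually produced:

```latex
% Approximation of the unicorn as described: an oval body, four
% rectangular legs with square hooves, an oval head, a mane, and a
% tiny yellow triangle for the horn. Not GPT-4's actual output.
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
  % body: just an oval
  \fill[white, draw=black] (0,0) ellipse (1.5 and 0.8);
  % four rectangles for legs, with little squares for hooves
  \foreach \x in {-1.0, -0.5, 0.5, 1.0} {
    \fill[white, draw=black] (\x,-0.8) rectangle (\x+0.2,-1.6);
    \fill[black] (\x,-1.6) rectangle (\x+0.2,-1.8);
  }
  % an oval for the head
  \fill[white, draw=black] (1.6,0.9) ellipse (0.5 and 0.35);
  % a mane along the neck
  \draw[thick] (1.2,1.1) .. controls (0.6,1.0) .. (0.2,0.7);
  % on top of the head, a tiny yellow triangle: the horn
  \fill[yellow, draw=black] (1.7,1.25) -- (1.9,1.25) -- (1.8,1.7) -- cycle;
\end{tikzpicture}
\end{document}
```

Even this crude assemblage of primitives takes deliberate spatial reasoning to lay out, which is what made GPT‑4’s attempt striking to its observers.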
Let’s not pooh-pooh the merits of Woolner’s ongoing explorations, though. As one commenter observed, it seems she’s “found a way to instantiate the weird messed up artifacts of AI generated images in the physical universe.”
To which Woolner responded that she “will either be spared or be one of the first to perish when AI takes over governance of us meat sacks.”
In the meantime, she’s continuing to harness ChatGPT to birth more monstrous amigurumi. Gerald the Narwhal has been joined by a cat, an otter, Norma the Normal Fish, XL the Newt, and Skein Green, a pelican bearing get well wishes for author and science vlogger Hank Green.
Two weeks later, the Daily Beast pronounced this attempt, nicknamed Gerard, “even less narwhal-looking than the first. Its body was a massive stuffed triangle, and its tusk looked like a gumdrop at one end.”
Woolner dubbed Gerard possibly the most frustrating AI-generated amigurumi of her acquaintance, owing to an onslaught of specificity on ChatGPT’s part. It overloaded her with instructions for every individual stitch, sometimes calling for more stitches in a row than existed in the entire pattern, then dipped out without telling her how to complete the body and tail.
As silly as it all may seem, Woolner believes her ChatGPT amigurumi collabs are a healthy model for artists using AI technology:
I think if there are ways for people in the arts to continue to create, but also approach AI as a tool and as a potential collaborator, that is really interesting. Because then we can start to branch out into completely different, new art forms and creative expressions—things that we couldn’t necessarily do before or didn’t have the spark or the idea to do can be explored.
If you, like Hank Green, have fallen for one of Woolner’s unholy creations, downloadable patterns are available here for $2 a pop.
Those seeking alternatives to fiberfill are advised to stuff their amigurumi with “abandoned hopes and dreams” or “all those free tee shirts you get from giving blood and running road races or whatever you do for fun”.
The YouTube channel There I Ruined It creates new versions of songs using AI-generated voices. For Dustin Ballard, the channel’s creator, the point is to “lovingly destroy your favorite songs.” Take the example above. Here, an AI version of Johnny Cash’s voice sings the lyrics of Aqua’s “Barbie Girl,” set to the music of Cash’s “Folsom Prison Blues.” Recently, Ballard explained his approach to Business Insider:
My process for these is a little different than most people. I first record the vocals myself so that I can do my best imitation of the cadence of the original singer. Then I use one of their own songs (like ‘Folsom Prison Blues’ rather than the original ‘Barbie Girl’ music) to add to the illusion that this is a ‘real’ song in the artist’s catalog, though clearly all done in jest. Finally, I use an AI voice model trained on snippets of the original artist’s singing to transform my voice into theirs. I have a guy in Argentina I often call upon for this training (although the Johnny Cash one already existed).
If you would like to support the mission of Open Culture, consider making a donation to our site. It’s hard to rely 100% on ads, and your contributions will help us continue providing the best free cultural and educational materials to learners everywhere. You can contribute through PayPal, Patreon, and Venmo (@openculture). Thanks!
Released in November 2022, ChatGPT gave us all a glimpse into the future world of AI: a sense of what the world will look like when chatbots can think and execute tasks on our behalf. There’s a good chance that you’ve already experimented loosely with ChatGPT, trying to test its strengths and weaknesses. But have you considered using ChatGPT to unlock your creativity and productivity in more substantive ways? If so, Vanderbilt University has a new course for you: Prompt Engineering for ChatGPT.
Created by Dr. Jules White, Prompt Engineering for ChatGPT will teach students how to write effective “prompts” (or well-crafted questions) so that they can leverage ChatGPT and other large language models. Large language models (LLMs) respond to “prompts” posed by users in natural language statements. If users can write good prompts, they can get effective answers from large language models and discover creative uses for these tools. Divided into six modules, the Vanderbilt course covers the art of writing effective prompts, starting with basic prompts and building toward more sophisticated ones. By course’s end, students should feel comfortable using ChatGPT to complete meaningful tasks in their personal and professional lives. For example, one student left this testimonial after completing the course:
As a medical researcher and medical writer with >30 years of experience, I was really stunned to see what the capabilities of LLMs are. Dr. White made a great work of explaining and giving examples. About halfway through the course I was able to put ChatGPT to work on a real work-related issue. With its help, I was able in fact to complete in 7 hours a job that would have required at least 20. Now, after completing the course, I believe that — by applying some more complex formatting — I could have shaved another couple of hours…”
Offered on the Coursera platform, Prompt Engineering for ChatGPT is designed for beginners. You only need a browser and a ChatGPT account. Designed to be completed in 18 hours, the course can be taken for a fee ($49), with a credential earned at the end. Or students can audit the course–and forgo the credential–for no fee. Enroll here.
Nota Bene: Open Culture has a partnership with Coursera. We often feature their courses because the courses offer value to our readers. We typically receive fees when users sign up for a paid course, and sometimes we receive a fee for featuring an educational program itself. Those fees help support our operation.
At the center of Indiana Jones and the Dial of Destiny is a device quite like the real ancient Greek artifact known as the Antikythera mechanism, which has been called the world’s oldest computer. “Every Indiana Jones adventure needs an exotic MacGuffin,” writes Smithsonian.com’s Meilan Solly, and in this latest and presumably last installment in its series, “the hero chases after the Archimedes Dial, a fictionalized version of the Antikythera mechanism that predicts the location of naturally occurring fissures in time.” After undergoing Indiana Jonesification, in other words, the Antikythera mechanism becomes a time machine, a function presumably not included in even the least responsible archaeological speculations about its still-unclear set of functions.
But according to Jo Marchant, author of Decoding the Heavens: Solving the Mystery of the World’s First Computer, the Antikythera mechanism really is “a time machine in a sense. When you turn the handle on the side, you are moving backward in time, you’re controlling time. You’re seeing the universe either being fast-forwarded or reversed, and you’re choosing the speed and can set it to any moment in history that you want.”
She refers to the fact that a handle on the side of the mechanism controls gears within it, which engage to compute and display “the positions of celestial bodies, the date, the timing of athletic games. There’s a calendar, there’s an eclipse prediction dial, and there are inscriptions giving you information about what the stars are doing.”
It seems that the Antikythera mechanism could tell you “everything you need to know about the state and workings of the cosmos,” at least if you’re an ancient Greek. But it also tells us something important about the ancient Greeks themselves: specifically, that they’d developed much more sophisticated mechanical engineering than we’d known before the early twentieth century, when the device was discovered in a shipwreck. According to the BBC video above on the details of the Antikythera mechanism’s known capabilities, Arthur C. Clarke thought that “if the ancient Greeks had understood the capabilities of the technology, then they would have reached the moon within 300 years.” A grand old civilization that turns out to have been on a course for outer space: now there’s a viable premise for the next big archaeological adventure film franchise.
Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities, the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.
The first time I saw the infamous skull-cassette-and-bones logo was on holiday in the UK, when I purchased the very un-punky Chariots of Fire soundtrack. It was on the inner sleeve. “Home Taping Is Killing Music” it proclaimed. It was? I asked myself. “And it’s illegal” a subhead added. It is? I also asked myself. (Ironically, this was a few months before I came into possession of my first combination turntable-cassette deck.)
Ten years and racks and racks of homemade cassette dubs on my shelves later, music seemed to be doing very well. (Later, by going digital, the music industry killed itself, and I had absolutely nothing to do with it.)
Instead, the British Phonographic Industry (BPI) was taking aim at people who were recording songs off the radio instead of purchasing records. With the rise in popularity of the cassette tape, the BPI saw pounds and pence leaving its pockets.
Now, figuring out lost profits from home taping could be a fool’s errand, but let’s focus on the “illegal” part. Technically, this is true. Radio stations pay licensing fees to play music, so a consumer taping that song off the radio is infringing on the song’s copyright. Britain has very different “fair use” laws than America. In addition, digital radio and clearer signals have complicated matters over the years.
In practice, however, the whole thing was bunkum. Radio recordings are historic. Mixtapes are culture. I have my tapes of John Peel’s BBC shows, which I recorded for the music. Now, I listen to them for Peel’s intros and outros.
Seriously, the Napalm Death Peel Sessions *only* make sense with his commentary. Whoever taped this is an unknown legend:
The post-punk crowd knew the campaign was bunkum too. Malcolm McLaren, always the provocateur, released Bow Wow Wow’s cassette-only single C‑30 C‑60 C‑90 Go with a blank B‑side that urged consumers to record their own music. EMI quickly dropped the band.
The Dead Kennedys also repeated the blank B‑side gimmick with In God We Trust, Inc. (I would be interested to hear from anybody who picks up a used copy of either to see what *is* on the B‑side.)
And then there were the parodies. The metal group Venom used “Home Taping Is Killing Music; So Are Venom” on an album; Peter Principle offered “Home Taping Is Making Music”; Billy Bragg kept it Marxist: “Capitalism is killing music — pay no more than £4.99 for this record”. For the industry, music was the product; for the regular folks, music was communication, it was art, it was a language.
The campaign never did much damage. Attempts to levy a tax on blank cassettes didn’t get traction in the UK. And BPI’s director general John Deacon was frustrated that record companies didn’t want to splash the Jolly Roger on inner sleeves. The logo lives on, however, as part of torrent site Pirate Bay’s sails:
Just after the hysteria died down, compact discs began their rise, planting the seeds for the digital revolution, the mp3, file sharing, and now streaming.
(Wait, is it possible to record internet streams? Why, yes.)
If you have any stories about how you helped “kill music” by recording your favorite DJs, confess your crimes in the comments.
Note: An earlier version of this post appeared on our site in 2019.
Ted Mills is a freelance writer on the arts who currently hosts the artist interview-based FunkZone Podcast and is the producer of KCRW’s Curious Coast. You can also follow him on Twitter at @tedmills, read his other arts writing at tedmills.com and/or watch his films here.
Google Arts & Culture’s new initiative Inside a Genius Mind offers an interactive experience of the codices in which Da Vinci made his sketches, diagrams, and notes.
It’s also a curatorial collaboration between a human — Oxford art history professor Martin Kemp — and artificial intelligence.
His non-human counterpart used machine learning to delve into the notebooks’ contents, investigating some 1,040 pages from six volumes and “drawing thematic connections across time and subject matter to reflect Leonardo’s spirit of interdisciplinary imagination, innovation and the profound unity at the heart of his apparently diverse pursuits.”
Upon launching the experiment, you bushwhack your way through the individual codices by clicking on the sketches floating toward you like elements in a classic space-themed video game, or choose to enjoy one of five curated stories.
Using a discreet and somewhat fiddly navigation bar on the left side of the screen, we toured Leonardo’s renderings of the flayed muscles of the upper spine, the vessels and nerves of the neck and liver, the Arno valley with the route of a proposed canal that would run from Florence to Pisa, a view of the Alps from Milan, the fall of light on a face, studies of optics and men in action, and observations of the moon and earthshine.
How are these things related?
“Leonardo believed that the human body represented the whole natural world in miniature,” and the selections do offer food for thought: Leonardo’s passion for the underlying laws of nature is the common thread running through his research and art.
Each image is accompanied by a button inviting you to “explore” the work further. Click it for information about dimensions, provenance, and media, as well as some tantalizing biographical tidbits, such as this, adapted from the catalogue for the 2019 exhibit Leonardo da Vinci: A Life in Drawing:
Leonardo had first studied anatomy in the late 1480s. By the end of his life he claimed to have performed 30 human dissections, intending to publish an illustrated treatise on the subject, but this was never completed, and Leonardo’s work thus had no discernible impact on the discipline. His only documented dissection was carried out in the winter of 1507–8, when he performed an autopsy on an old man whose death he had witnessed in a hospital in Florence. The studies on this page from Leonardo’s notebook are based on that dissection: on the verso Leonardo depicts the vessels of the liver; and in notes elsewhere in the notebook he gives the first known clinical description of cirrhosis of the liver.
Perhaps you’d like to circumvent the machine learning and use your own genius mind to make connections à la Da Vinci?
Try messing around with the AI tags. See what you can cobble together to forge a cohesive alliance between such elements as wing, horse, map, musical instruments, and spiral.
Or cleanse your palate by putting a mash-up of two codex sketches on a digital sticky with the help of Google AI, mindful that the master, who lived to the ripe old age of 67, was probably a bit more intentional with his time…
“In the criminal justice system,” the evergreen Law & Order’s opening credits remind us, “the people are represented by two separate, yet equally important, groups: the police, who investigate crime; and the district attorneys, who prosecute the offenders.”
They fail to mention the life-sized skeleton with ghastly glowing eyes and a camera tucked away inside its skull.
Ms. Shelby’s vision sought to transform the police interrogation room into a haunted house where the sudden appearance of the aforementioned skeleton would shock a guilty suspect into confession.
(Presumably an innocent person would have nothing to fear, other than sitting in a pitch black chamber where a truth-seeking skeleton was soon to manifest before their very eyes.)
The idea may have seemed slightly less far-fetched immediately following a decade when belief in Spiritualism flourished.
False mediums used sophisticated stagecraft to convince members of a gullible public that they were in the presence of the supernatural.
Ms. Shelby’s proposed apparatus consisted of a “structure divided into two chambers”:
…one chamber of which is darkened to provide quarters in which the suspect is confined while being subjected to examination, the other chamber being provided for the examiner, the two chambers being separated from each other by a partition which is provided with a panel upon one side of which is mounted a figure in the form of a skeleton, the said skeleton having the rear portion of the skull removed and the recording apparatus inserted therein.
The examiner was also tasked with voicing the skeleton, using appropriately spooky tones and a well-positioned megaphone.
As silly as Ms. Shelby’s invention seems nearly a hundred years after the patent was filed, it’s impressive for its robust embrace of technology, particularly as it pertains to capturing the presumably spooked suspect’s reaction:
The rear portion of the skull of the skeleton is removed and a camera casing is mounted in the panel extending into the skull, said camera being preferably of the continuously-moving film type and having provisions for simultaneously recording pictures and sound waves, or reproducing these, as may be desired or required, the said camera having an objective adapted to register with the nose, or other opening, in the skull. The eye-sockets are provided with bulbs adapted to impress different light intensities on the margins of the film, the central section of the film being arranged to receive the pictures, the variations in the light intensities of the bulbs being governed by means of the microphones, and selenium cells (not shown), which are included in the light circuit and tend to cause the fluctuations of the current to vary the intensity of the light for sound recording purposes, the density of the light film varying with the intensity of the light thus transmitted.
Ms. Shelby believed that a suspect whose confession had been recorded by the skeleton would have difficulty making a retraction stick, especially if photographs taken during the big reveal caught them with a guilty-looking countenance.
Writing on officer.com, Jonathan Kozlowski applauds Ms. Shelby’s impulse to innovate, even as he questions if “scaring a confession out of a guy by being really really creepy (should) be considered coercion”:
Shelby doesn’t seem to have gotten any credit for it and nor am I sure that Shelby was even the first to think of the idea, BUT if you remove the skeleton figure and the red lightbulbs staring into the criminal’s soul was this the inspiration of a mounted surveillance camera?
Allow me to push it even further … imagine your department’s interview room. If you’ve got the camera in the corner (or multiple) let that be. Instead of the skeleton figure just put an officer standing in the corner with a recording body camera. The officer is just standing there. Staring. Sure that’s a MASSIVE waste of time and money — of course. I may be wrong, but if I’m being honest this seems like intimidation.
It also strikes us that the element of surprise would be a challenge to keep under wraps. All it would take is one freaked-out crook (innocent or guilty) blabbing to an underworld connection, “You wouldn’t believe the crazy thing that happened when they hauled me down to the station the other night…”
What sort of horrific special effect could force a guilty party to confess in the 21st century? Something way more dreadful than a skeleton with glowing red eyes, comedian Tom Scott’s experiment below suggests.
Having enlisted creative technologist Charles Yarnold to build Ms. Shelby’s apparatus, he invited fellow YouTubers Chloe Dungate, Tom Ridgewell, and Daniel J Layton to step inside one at a time, hoping to identify which of them had nicked the cookie with which he had baited his crime-catching hook.
The participants’ reactions at the critical moment ranged from delighted giggles to a satisfying yelp, but the results were utterly inconclusive. Nobody ‘fessed up to stealing the cookies.