If you’re in Zurich, head over to the Beyer Clock and Watch Museum, which presents the history of timekeeping and timekeeping instruments, from 1400 BC to modern times. On display, you’ll find sundials, water and tower clocks, Renaissance automata, and pendulum clocks. And the Planetarium Table Clock featured above.
Made circa 1775, the planetarium clock keeps time … and so much more. According to the Museum of Artifacts website, the earth (look in the glass orb) “rotates around the sun in perfect real time,” and the other five planets rotate as well: they “go up, down, around, in relation to the etched constellations of precisely positioned stars on the crystal globe, which if you are smart enough will reveal what season it is.” This fine timekeeping piece was the joint creation of Nicole-Reine Lepaute, a French astronomer who predicted the return of Halley’s Comet, and her husband, Jean-André Lepaute, who presided over a clockmaking dynasty and became horloger du Roi (clockmaker to the king).
It’s hard to imagine that the Planetarium clock didn’t somehow inspire a more modern creation: the Midnight Planétarium, an astronomical watch that shows the rotation of six planets — Mercury, Venus, Earth, Mars, Jupiter, and Saturn. It has a price tag of $220,000 (excluding sales tax). See it on display below.
If you would like to sign up for Open Culture’s free email newsletter, please find it here. It’s a great way to see our new posts, all bundled in one email, each day.
If you would like to support the mission of Open Culture, consider making a donation to our site. It’s hard to rely 100% on ads, and your contributions will help us continue providing the best free cultural and educational materials to learners everywhere. You can contribute through PayPal, Patreon, and Venmo (@openculture). Thanks!
When I first read news of the now-infamous Google memo writer who claimed with a straight face that women are biologically unsuited to work in science and tech, I nearly choked on my cereal. A dozen examples instantly crowded to mind of women who have pioneered the very basis of our current technology while operating at an extreme disadvantage in a culture that explicitly believed they shouldn’t be there, this shouldn’t be happening, women shouldn’t be able to do a “man’s job!”
The memo, as Megan Molteni and Adam Rogers write at Wired, “is a species of discourse peculiar to politically polarized times: cherry-picking scientific evidence to support a pre-existing point of view.” Its specious evolutionary psychology pretends to objectivity even as it ignores reality. As Mulder would say, the truth is out there, if you care to look, and you don’t need to dig through classified FBI files. Just, well, Google it. No, not the pseudoscience, but the careers of women in STEM without whom we might not have such a thing as Google.
Women like Margaret Hamilton, who, beginning in 1961, helped NASA “develop the Apollo program’s guidance system” that took U.S. astronauts to the moon, as Maia Weinstock reports at MIT News. “For her work during this period, Hamilton has been credited with popularizing the concept of software engineering.” Robert McMillan put it best in a 2015 profile of Hamilton:
It might surprise today’s software makers that one of the founding fathers of their boys’ club was, in fact, a mother—and that should give them pause as they consider why the gender inequality of the Mad Men era persists to this day.
Hamilton was indeed a mother in her twenties with a degree in mathematics, working as a programmer at MIT and supporting her husband through Harvard Law, after which she planned to go to graduate school. “But the Apollo space program came along,” and MIT’s Instrumentation Laboratory contracted with NASA to fulfill John F. Kennedy’s famous promise, made that same year, to land on the moon before the decade’s end—and before the Soviets did. NASA accomplished that goal thanks to Hamilton and her team.
Photo courtesy of MIT Museum
Like many women crucial to the U.S. space program (many doubly marginalized by race and gender), Hamilton might have been lost to public consciousness were it not for a popular rediscovery. “In recent years,” notes Weinstock, “a striking photo of Hamilton and her team’s Apollo code has made the rounds on social media.” You can see that photo at the top of the post, taken in 1969 by a photographer for the MIT Instrumentation Laboratory. Used to promote the lab’s work on Apollo, the original caption read, in part, “Here, Margaret is shown standing beside listings of the software developed by her and the team she was in charge of, the LM [lunar module] and CM [command module] on-board flight software team.”
As Hank Green tells it in his condensed history above, Hamilton “rose through the ranks to become head of the Apollo Software development team.” Her focus on errors—how to prevent them and course correct when they arise—“saved Apollo 11 from having to abort the mission” of landing Neil Armstrong and Buzz Aldrin on the moon’s surface. McMillan explains that “as Hamilton and her colleagues were programming the Apollo spacecraft, they were also hatching what would become a $400 billion industry.” At Futurism, you can read a fascinating interview with Hamilton, in which she describes how she first learned to code, what her work for NASA was like, and what exactly was in those books stacked as high as she was tall. As a woman, she may have been an outlier in her field, but that fact is much better explained by the Occam’s razor of prejudice than by anything having to do with evolutionary determinism.
An artist just starting out might first imitate the styles of others, and if all goes well, the process of learning those styles will lead them to a style of their own. But how does one learn something like an artistic style in a way that isn’t simply imitative? Artificial intelligence, and especially the current developments in making computers not just think but learn, will certainly shed some light on the process — and produce, along the way, such fascinating projects as the video above, a re-envisioning of Disney’s Alice in Wonderland in the styles of famous artists: Pablo Picasso, Georgia O’Keeffe, Katsushika Hokusai, Frida Kahlo, Vincent van Gogh, and others.
The idea behind this technological process, known as “style transfer,” is “to take two images, say, a photo of a person and a painting, and use these to create a third image that combines the content of the former with the style of the latter,” says an explanatory post at the Paperspace Blog.
“The central problem of style transfer revolves around our ability to come up with a clear way of computing the ‘content’ of an image as distinct from computing the ‘style’ of an image. Before deep learning arrived at the scene, researchers had been handcrafting methods to extract the content and texture of images, merge them and see if the results were interesting or garbage.”
Deep learning, the family of methods that enable computers to teach themselves, involves providing an artificial intelligence system called a “neural network” with huge amounts of data and letting it draw inferences. In experiments like these, the systems take in visual data and make inferences about how one set of data, like the content of frames of Alice in Wonderland, might look when rendered in the colors and contours of another, such as some of the most famous paintings in all of art history. (Others have tried it, as we’ve previously featured, with 2001: A Space Odyssey and Blade Runner.) If the technology at work here piques your curiosity, have a look at Google’s free online course on deep learning or this new set of courses from Coursera — it probably won’t improve your art skills, but it will certainly increase your understanding of a development that will play an ever larger role in the culture and economy ahead.
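The content/style split the Paperspace post describes is typically implemented with CNN feature maps and Gram matrices. Here is a minimal Python sketch of the two losses — the toy arrays below are made-up stand-ins for the features a pretrained network (such as VGG) would actually produce, not anyone’s real pipeline:

```python
import numpy as np

def gram_matrix(features):
    """Correlations between feature channels; this captures 'style'
    independently of spatial layout. features: (channels, height, width)."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_transfer_losses(generated, content, style):
    """Content loss compares raw feature maps; style loss compares
    Gram matrices. Optimizing the generated image to shrink both
    losses at once is the essence of style transfer."""
    content_loss = np.mean((generated - content) ** 2)
    style_loss = np.mean((gram_matrix(generated) - gram_matrix(style)) ** 2)
    return content_loss, style_loss

# Toy feature maps standing in for CNN activations of two images
rng = np.random.default_rng(0)
content = rng.standard_normal((8, 16, 16))
style = rng.standard_normal((8, 16, 16))

# A generated image initialized from the content image starts with
# zero content loss and (almost surely) nonzero style loss.
c_loss, s_loss = style_transfer_losses(content.copy(), content, style)
print(c_loss, s_loss)
```

In a full system, an optimizer would then adjust the generated image’s pixels to pull the style loss down while keeping the content loss small.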
Here’s a full list of painters used in the neural networked version of Alice:
Pablo Picasso
Georgia O’Keeffe
S.H. Raza
Hokusai
Frida Kahlo
Vincent van Gogh
Tarsila
Saloua Raouda Choucair
Lee Krasner
Sol LeWitt
Wu Guanzhong
Elaine de Kooning
Ibrahim el-Salahi
Minnie Pwerle
Jean-Michel Basquiat
Edvard Munch
Natalia Goncharova
When synthesizers like the Yamaha DX7 became consumer products, the possibilities of music changed forever, making available a wealth of new, often totally unfamiliar sounds even to musicians who’d never before had a reason to think past the electric guitar. But if the people at Project Magenta keep doing what they’re doing, they could soon bring about a wave of even more revolutionary music-making devices. That “team of Google researchers who are teaching machines to create not only their own music but also to make so many other forms of art,” writes the New York Times’ Cade Metz, work toward not just the day “when a machine can instantly build a new Beatles song,” but the development of tools that allow artists “to create in entirely new ways.”
Using neural networks, “complex mathematical systems [that] allow machines to learn specific behavior by analyzing vast amounts of data” (the kind that generated all those disturbing “DeepDream” images a while back), Magenta’s researchers “are crossbreeding sounds from very different instruments — say, a bassoon and a clavichord — creating instruments capable of producing sounds no one has ever heard.”
You can give one of the results of these experiments a test drive yourself with NSynth, described by its creators as “a research project that trained a neural network on over 300,000 instrument sounds.” Think of NSynth as a synthesizer powered by AI.
Fire it up, and you can mash up and play your own sonic hybrids of guitar and sitar, piccolo and pan flute, hammer dulcimer and dog. In the video at the top of the post you can hear “the first tangible product of Google’s Magenta program,” a short melody created by an artificial intelligence system designed to create music based on inferences drawn from all the music it has “heard.” Below that, we have another piece of artificial intelligence-generated music, this one a polyphonic piece trained on Bach chorales and performed with the sounds of NSynth.
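NSynth’s hybrids come from blending learned embedding vectors and decoding the result, rather than mixing the raw audio of two instruments. A toy sketch of that interpolation step — the 16-dimensional vectors here are made-up stand-ins for the model’s real embeddings, and the real system decodes the blend back into sound with a neural decoder:

```python
import numpy as np

def interpolate_embeddings(z_a, z_b, t):
    """Linear interpolation in latent space. Decoding the blended
    vector yields a sound 'between' the two instruments, which is
    not the same as simply playing both recordings at once."""
    return (1.0 - t) * z_a + t * z_b

rng = np.random.default_rng(42)
z_guitar = rng.standard_normal(16)  # stand-in embedding for a guitar note
z_sitar = rng.standard_normal(16)   # stand-in embedding for a sitar note

# Halfway between the two instruments in embedding space
halfway = interpolate_embeddings(z_guitar, z_sitar, 0.5)
print(halfway[:4])
```

Sweeping `t` from 0 to 1 is what the NSynth interface’s morphing pad does conceptually: each position maps to a different blend of the source sounds’ embeddings.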
If you’d like to see how the creation of never-before-heard instruments works in a bit more depth, have a look at the demonstration just above of the NSynth interface for Ableton Live, one of the most DJ-beloved pieces of audio performance software around. Hearing all this in action brings to mind the moral of a story Brian Eno has often told about the DX7, from which only he and a few other producers got innovative results by actually learning how to program: as much as the prospect of AI-powered music technology may astound, the music created with it will only sound as good as the skills and adventurousness of the musicians at the controls — for now.
In 1997, the Cornell Chronicle announced: “The world’s smallest guitar — carved out of crystalline silicon and no larger than a single cell — has been made at Cornell University to demonstrate a new technology that could have a variety of uses in fiber optics, displays, sensors and electronics.”
Invented by Dustin W. Carr, the so-called “nanoguitar” measured 10 micrometers long, roughly the size of your average red blood cell. And it had six strings, each “about 50 nanometers wide, the width of about 100 atoms.”
According to The Guardian, the vintage 1997 nanoguitar was actually never played. That honor went to a 2003 edition of the nanoguitar, whose strings were plucked by miniature lasers operated with an atomic force microscope, creating “a 40 megahertz signal that is 130,000 times higher than the sound of a full-scale guitar.” The human ear couldn’t hear something at that frequency, and that’s a problem not even a good amp (a Vox AC30, a Fender Deluxe Reverb, etc.) could fix.
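The Guardian’s numbers roughly check out: 40 MHz divided by the claimed factor of 130,000 lands near 308 Hz, squarely in the range of an ordinary guitar string, while 40 MHz sits some three orders of magnitude above the ~20 kHz ceiling of human hearing. A quick back-of-the-envelope check:

```python
# Sanity check on the reported figures.
nano_hz = 40e6             # 40 MHz: the plucked nanoguitar string
ratio = 130_000            # "130,000 times higher than ... a full-scale guitar"
hearing_limit_hz = 20_000  # approximate upper bound of human hearing

# Implied pitch of the corresponding full-scale guitar string
full_scale_hz = nano_hz / ratio
print(f"implied full-scale string: {full_scale_hz:.1f} Hz")  # ~307.7 Hz
print(f"nanoguitar audible? {nano_hz <= hearing_limit_hz}")  # False
```

About 308 Hz is close to D#4 (~311 Hz), a note a guitar produces easily, so the comparison is a reasonable one.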
Thus concludes today’s adventure in nanotechnology.
“Human beings are born with a need for food and shelter,” writes Lanchester. “Once these fundamental necessities of life have been acquired, we look around us at what other people are doing, and wanting, and we copy them.” Or as Thiel explained it, “Imitation is at the root of all behavior.” Lanchester reports that “the reason Thiel latched onto Facebook with such alacrity was that he saw in it for the first time a business that was Girardian to its core: built on people’s deep need to copy,” yet few of us, its users, have clearly perceived that essential aspect of Facebook and other social media platforms.
Marshall McLuhan, despite having died decades before their development, would have caught on right away — and he understood why even we savvy denizens of the 21st century haven’t. “For the past 3500 years of the Western world, the effects of media — whether it’s speech, writing, printing, photography, radio or television — have been systematically overlooked by social observers,” said the author of Understanding Media and The Medium is the Massage. “Even in today’s revolutionary electronic age, scholars evidence few signs of modifying this traditional stance of ostrichlike disregard.”
Those words come from an in-depth 1969 interview with Playboy magazine that broke the celebrity literature professor McLuhan’s ideas to an even wider audience than they’d had before. In it he diagnosed a “peculiar form of self-hypnosis” he called “Narcissus narcosis, a syndrome whereby man remains as unaware of the psychic and social effects of his new technology as a fish of the water it swims in. As a result, precisely at the point where a new media-induced environment becomes all pervasive and transmogrifies our sensory balance, it also becomes invisible.”
As McLuhan saw it, “most people, from truck drivers to the literary Brahmins, are still blissfully ignorant of what the media do to them; unaware that because of their pervasive effects on man, it is the medium itself that is the message, not the content, and unaware that the medium is also the massage — that, all puns aside, it literally works over and saturates and molds and transforms every sense ratio. The content or message of any particular medium has about as much importance as the stenciling on the casing of an atomic bomb.”
Just last month, no less omnipresent an internet titan than Google celebrated McLuhan’s 106th birthday, and a social observer called PR Professor saw in it a certain irony: though “it seems like technology that extends man’s ability to experience and interpret the world is positive and desirable,” McLuhan pointed out “that the inherent tendency to focus on the messages within the media make[s] us blind to the limits and structures imposed by the mediums themselves.” This blindness has consequences indeed, since, according to McLuhan, each time a society develops a new media technology, “all other functions of that society tend to be transmuted to accommodate that new form” as that technology “saturates every institution of that society.”
This went for speech, writing, print, and the telegraph as well as it goes for “social media platforms like Twitter, which reduce expressive possibilities to 140 characters of text or expressing one’s self through the ‘re-tweeting’ of posts by others.” McLuhan believed that at one time only the interpretive work of the artist, “who has had the power — and courage — of the seer to read the language of the outer world and relate it to the inner world,” could allow the rest of us to recognize the thoroughgoing effects of technology on society, but that “the new environment of electric information” had made possible “a new degree of perception and critical awareness by nonartists.” At least more of us, if we step back, can now understand our affliction by mimetic desire, Narcissus narcosis, or any number of other troubling conditions. What to do about them remains an open question.
There the organization, “comprised of a vast community of 3D scanning and 3D printing enthusiasts,” has amassed a collection of 7,834 3D models and counting, all toward their mission “to archive the world’s sculptures, statues, artworks and any other objects of cultural significance using 3D scanning technologies to produce content suitable for 3D printing.”
Scan the World hasn’t limited its mandate to just artifacts and artworks kept in museums: among its models you’ll also find large-scale pieces of public sculpture like the Statue of Liberty and even beloved buildings like Big Ben. This conjures up the tantalizing vision of each of us one day becoming empowered to 3D-print our very own London, complete with not just a British Museum but all the objects, each of which tells part of humanity’s story, inside it.
As much of a technological marvel as it may represent, printing out a Venus de Milo or a David or a Leaning Tower of Pisa or a Moai head at home can’t, of course, compare to making the trip to see the genuine article, especially with the kind of 3D printers now available to consumers. But as recent technological history has shown us, the most amazing developments tend to come out of the decentralized efforts of countless enthusiasts — just the kind of community powering Scan the World. The great achievements of the future have to start somewhere, and they might as well start by paying tribute to the greatest achievements of the past.
Before J.M. Coetzee became perhaps the most acclaimed novelist alive, he worked as a programmer. That may not sound particularly notable these days, but bear in mind that the Nobel laureate and two-time Booker-winning author of Waiting for the Barbarians, Disgrace, and Elizabeth Costello held that day job first at IBM in the early 1960s — back, in other words, when nobody had a computer on their desk. And back when IBM was IBM: that mighty American corporation had brought the kind of computing power it alone could command to branch offices in cities around the world, including London, where Coetzee landed after graduating from the University of Cape Town and leaving his native South Africa.
The years Coetzee spent “writing machine code for computers,” he once wrote in a letter to Paul Auster, saw him “getting so deeply sucked into the process that I sometimes felt I was descending into a madness in which the brain is taken over by mechanical logic.” This must have caused some distress to a literarily minded young man who heard his true calling only from poetry.
“I was very heavily under the influence, in my teens and early twenties, of, first, T.S. Eliot, but then, more substantially, Ezra Pound, and later of German poetry, of Rilke in particular,” he says to Peter Sacks in the interview above, remembering the years before he put poetry aside as a craft in favor of the novel.
“Under the shadowless glare of the neon lighting, he feels his very soul to be under attack,” Coetzee writes, in the autobiographical novel Youth, of the protagonist’s time as a programmer. “The building, a featureless block of concrete and glass, seems to give off a gas, odourless, colourless, that finds its way into his blood and numbs him. IBM, he can swear, is killing him, turning him into a zombie.” Only in the evening can he “leave his desk, wander around, relax. The machine room downstairs, dominated by the huge memory cabinets of the 7090, is more often than not empty; he can run programs on the little 1401 computer, even, surreptitiously, play games on it.”
He could also use these clunky, punchcard-operated computers to write poetry. “In the mid 1960s Coetzee was working on one of the most advanced programming projects in Britain,” writes King’s College London researcher Rebecca Roach. “During the day he helped to design the Atlas 2 supercomputer destined for the United Kingdom’s Atomic Energy Research Establishment at Aldermaston. At night he used this hugely powerful machine of the Cold War to write simple ‘computer poetry,’ that is, he wrote programs for a computer that used an algorithm to select words from a set vocabulary and create repetitive lines.”
These lines, as seen here in one page of the print-outs held at the Coetzee archive at the University of Texas at Austin’s Harry Ransom Center, include “INCHOATE SHARD IMAGINE THE OUBLIETTE,” “FRENETIC AMBIENCE DISHEARTEN THE ROSE,” “PASSIONATE PABULUM CARPET THE MIRROR,” and “FRENETIC TETANUS DEADEN THE DOCUMENT.” Though he never published these results, writes Roach, he “edited and included phrases from them in poetry that he did publish.” Is this a curious chapter in the early life of a prominent man of letters, or was this realm of “flat metallic surfaces” an ideal forge for the sensibilities of a writer now known, as John Lanchester so aptly put it, for his “unusual quality of passionate coldness” — a kind of brilliant austerity that hardly deadens any of his documents?
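Roach’s description — a program “that used an algorithm to select words from a set vocabulary and create repetitive lines” — suggests something like the following Python sketch. The ADJECTIVE NOUN VERB THE NOUN template and the word lists are guessed from the surviving print-outs, not recovered from Coetzee’s actual Atlas 2 code:

```python
import random

# Speculative reconstruction: fill a fixed template by randomly
# selecting words from a small set vocabulary, in the spirit of the
# print-outs ("INCHOATE SHARD IMAGINE THE OUBLIETTE", etc.).
ADJECTIVES = ["INCHOATE", "FRENETIC", "PASSIONATE"]
NOUNS = ["SHARD", "AMBIENCE", "PABULUM", "TETANUS",
         "OUBLIETTE", "ROSE", "MIRROR", "DOCUMENT"]
VERBS = ["IMAGINE", "DISHEARTEN", "CARPET", "DEADEN"]

def poem_line(rng):
    """One line of 'computer poetry': ADJECTIVE NOUN VERB THE NOUN."""
    return " ".join([rng.choice(ADJECTIVES), rng.choice(NOUNS),
                     rng.choice(VERBS), "THE", rng.choice(NOUNS)])

rng = random.Random(1961)  # seed only for reproducibility
lines = [poem_line(rng) for _ in range(4)]
for line in lines:
    print(line)
```

The repetitiveness Roach notes falls straight out of the fixed template; only the slots vary, which is why the surviving lines all share the same cadence.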