Granted access to a time machine, few of us would presumably opt first for the experience of skull surgery by the Incas. Yet our chances of survival would be better than if we underwent the same procedure 400 years later, at least if it took place on a Civil War battlefield. In both fifteenth-century Peru and the nineteenth-century United States, surgeons were performing a lot of trepanation, or removal of a portion of the skull. Since the Neolithic period, individuals had been trepanned for a variety of reasons, some of which now sound more medically compelling than others, but the Incan civilization took it to another level of frequency, and indeed sophistication.
Anyone with an interest in the history of technology would do well to study the Incas, who were remarkable in both what they developed and what they didn’t. Though there was no Incan alphabet, there was khipu (or quipu), previously featured here on Open Culture, a system of record-keeping that used nothing but knotted cords.
The Incas may not have had wheeled vehicles or mechanical devices as we know them today, but they did have precision masonry, an extensive road system, advanced water management for agricultural and other uses, high-quality textiles, and plant-derived antiseptic — something more than a little useful if you also happen to be cutting a lot of holes in people’s skulls.
Studying the history of trepanation, neurologist David Kushner, along with bioarchaeologists John Verano and Anne Titelbaum, examined more than 600 Peruvian skulls dating from between 400 BC and the mid-sixteenth century, which marked the end of the Incas’ 133-year run. As Science’s Lizzie Wade reports, the oldest evidence shows an unenviable 40% survival rate, but the surgical technique evolved over time: by the Inca era, the figure rises to between 75% and 83%, as against 46% to 56% in Civil War military hospitals. Some Incan skulls even show signs of having undergone up to seven successful trepanations — or non-fatal ones, at any rate. Though that venerable form of surgery may no longer be practiced, modern neurosurgeons use techniques based on the same principles. Should we find ourselves in need of their services, we’ll no doubt prefer to keep our distance from the time machine.
Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. He’s the author of the newsletter Books on Cities as well as the books 한국 요약 금지 (No Summarizing Korea) and Korean Newtro. Follow him on the social network formerly known as Twitter at @colinmarshall.
Gladys Mae West was born in rural Virginia in 1930, grew up working on a tobacco farm, and died earlier this month a celebrated mathematician whose work made possible the GPS technology most of us use each and every day. Hers was a distinctively American life, in more ways than one. Seeking an escape from the agricultural labor she’d already gotten to know all too well, she won a scholarship to Virginia State College by becoming her high school class valedictorian; after earning her bachelor’s and master’s degrees in mathematics, she taught for a time and then applied for a job at the naval base up in Dahlgren. She first distinguished herself there by verifying the accuracy of bombing tables with a hand calculator, and from there moved on up to the computer programming team.
This was the early nineteen-sixties, when programming a computer meant not coding, but laboriously feeding punch cards into an enormous mainframe. West and her colleagues used IBM’s first transistorized machine, the 7030 (or “Stretch”), which was for a few years the fastest computer in the world.
It cost the equivalent of $81,860,000 in today’s dollars, but no other computer had the power to handle the project of calculating the precise shape of Earth as affected by gravity and the nature of the oceans. About a decade later, another team of government scientists made use of those very same calculations when putting together the model employed by the World Geodetic System, which GPS satellites still use today. Hence the tendency of celebratory obituaries to underscore the point that without West’s work, GPS wouldn’t be possible.
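West’s actual geodetic model is far beyond the scope of a blog post, but its central idea — that Earth is not a sphere but an oblate ellipsoid, bulging at the equator — can be sketched in a few lines using the published WGS84 constants, the same reference ellipsoid GPS relies on today. The function below is purely illustrative, not drawn from her work:

```python
import math

# Published WGS84 reference ellipsoid parameters (the modern constants
# that West's geodetic work helped make possible, not her original model):
A = 6378137.0        # semi-major axis (equatorial radius), meters
B = 6356752.314245   # semi-minor axis (polar radius), meters

def geocentric_radius(lat_deg: float) -> float:
    """Distance from Earth's center to the ellipsoid surface at a given latitude."""
    phi = math.radians(lat_deg)
    cos_p, sin_p = math.cos(phi), math.sin(phi)
    num = (A**2 * cos_p)**2 + (B**2 * sin_p)**2
    den = (A * cos_p)**2 + (B * sin_p)**2
    return math.sqrt(num / den)

# Earth is roughly 21 km "fatter" at the equator than through the poles —
# an asymmetry a naive spherical model would get wrong, and GPS cannot ignore.
print(geocentric_radius(0.0))   # equatorial radius: 6378137.0 m
print(geocentric_radius(90.0))  # polar radius: ~6356752.3 m
```

Satellite positioning errors of even a few meters trace back to getting this shape right, which is why the ellipsoid — and the gravitational model layered on top of it — mattered so much.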
Nor do any of them neglect to point out that West was black, one of just four such mathematicians working for the Navy at Dahlgren. Stories like hers have drawn much greater public interest since the success of Hidden Figures, the Hollywood adaptation of Margot Lee Shetterly’s book about the black female mathematicians at NASA during the Space Race. When that movie came out, in 2016, even West’s own children didn’t know the importance of the once-classified work she’d done. Only in 2018, when she provided that information on a biographical form she filled out for an event hosted by her college sorority, did it become public. She thus spent the last years of her long life as a celebrity, sought out by academics and journalists eager to understand the contributions of another no-longer-hidden figure. But to their questions about her own GPS use, she reportedly answered that she preferred a good old-fashioned paper map.
In his 1935 essay, “The Work of Art in the Age of Mechanical Reproducibility,” influential German-Jewish critic Walter Benjamin introduced the term “aura” to describe an authentic experience of art. Aura relates to the physical proximity between objects and their viewers. Its loss, Benjamin argued, was a distinctly 20th-century phenomenon caused by mass media’s imposition of distance between object and viewer, though it appears to bring art closer through a simulation of intimacy.
The essay makes for potent reading today. Mass media — which for Benjamin meant radio, photography, and film — turns us all into potential actors, critics, experts, he wrote, and takes art out of the realm of the sacred and into the realm of the spectacle. Yet it retains the pretense of ritual. We make offerings to cults of personality, expanded in our time to include influencers and revered and reviled billionaires and political figures who joust in the headlines like professional wrestlers, led around by the chief of all heels. As Benjamin writes:
The film responds to the shriveling of the aura with an artificial build-up of the “personality” outside the studio. The cult of the movie star, fostered by the money of the film industry, preserves not the unique aura of the person but the “spell of the personality,” the phony spell of a commodity.
Benjamin’s focus on the medium as not only expressive but constitutive of meaning has made his essay a staple on communications and media theory course syllabi, next to the work of Marshall McLuhan. Many readings tend to leave aside the politics of its epilogue, likely since “his remedy,” writes Martin Jay — “the politicization of art by Communism — was forgotten by all but his most militant Marxist interpreters,” and hardly seemed like much of a remedy during the Cold War, when Benjamin became more widely available in translation.
Benjamin’s own idiosyncratic politics aside, his essay anticipates a crisis of authorship and authority currently surfacing in the use of social media as a dominant form of political spectacle.
With the increasing extension of the press, which kept placing new political, religious, scientific, professional, and local organs before the readers, an increasing number of readers became writers—at first, occasional ones. It began with the daily press opening to its readers space for “letters to the editor.” And today there is hardly a gainfully employed European who could not, in principle, find an opportunity to publish somewhere or other comments on his work, grievances, documentary reports, or that sort of thing. Thus, the distinction between author and public is about to lose its basic character.
Benjamin’s analysis of conventional film, especially, leads him to conclude that its reception required so little of viewers that they easily became distracted. Everyone’s a critic, but “at the movies this position requires no attention. The public is an examiner, but an absent-minded one.” Passive consumption and habitual distraction do not make for considered, informed opinion or a healthy sense of proportion.
What Benjamin referred to (in translation) as mechanical reproducibility we might now just call The Internet (and the coteries of “things” it haunts poltergeist-like). Later theorists influenced by Benjamin foresaw our age of digital reproducibility doing away with the need for authentic objects, and real people, altogether. Benjamin himself might characterize a medium that can fully detach from the physical world and the material conditions of its users — a medium in which everyone gets a column, public photo gallery, and video production studio — as ideally suited to the aims of fascism.
Fascism attempts to organize the newly created proletarian masses without affecting the property structure which the masses strive to eliminate. Fascism sees its salvation in giving these masses not their right, but instead a chance to express themselves. The masses have a right to change property relations; Fascism seeks to give them an expression while preserving property. The logical result of Fascism is the introduction of aesthetics into political life.
The logical result of turning politics into spectacle for the sake of preserving inequality, writes Benjamin, is the romanticization of war and slaughter, glorified plainly in the Italian Futurist manifesto of Filippo Marinetti and the literary work of Nazi intellectuals like Ernst Jünger. Benjamin ends the essay with a discussion of how fascism aestheticizes politics to one end: the annihilation of aura by more permanent means.
Under the rise of fascism in Europe, Benjamin saw that human “self-alienation has reached such a degree that it can experience its own destruction as an aesthetic pleasure of the first order. This is the situation of politics which Fascism is rendering aesthetic.” Those who participate in this spectacle seek mass violence “to supply the artistic gratification of a sense perception that has been changed by technology.” Distracted and desensitized, they seek, that is, to compensate for profound disembodiment and the loss of meaningful, authentic experience.
Over the centuries, a variety of places have laid credible claim to being the world’s art center: Constantinople, Florence, Paris, New York. But on the scale of, say, ten millennia, the hot spots become rather less recognizable. Up until about 20,000 years ago, it seems that creators and viewers of art alike spent a good deal of time in one particular cave: Liang Metanduno, located on Muna Island in Indonesia’s Southeast Sulawesi province. The many paintings on its walls of recognizable humans, animals, and boats have brought it fame in our times as a kind of ancient art gallery. But in recent years, a much older piece of work has been discovered there, one whose creation occurred at least 67,800 years ago.
The creation in question is a handprint, faint but detectable, probably made by blowing a mixture of ochre and water over an actual human hand. To determine its age, researchers performed what’s called uranium-series analysis on the deposits of calcium carbonate that had built up on and around it.
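The principle behind uranium-series dating can be sketched in a few lines. Calcite traps uranium but almost no thorium when it forms, and the uranium then decays toward ²³⁰Th at a known rate, so the measured ²³⁰Th/²³⁸U activity ratio functions as a clock. The sketch below is a toy version under strong assumptions — zero initial thorium and ²³⁴U/²³⁸U already in equilibrium — and the ratio value is invented for illustration; real analyses make detrital-thorium and ²³⁴U-excess corrections that this omits:

```python
import math

# Highly simplified uranium-series age model. Assumes the calcium
# carbonate formed with zero initial 230Th, and that 234U/238U was in
# secular equilibrium; real U-series dating corrects for both.
TH230_HALF_LIFE_YR = 75_584                    # 230Th half-life, years
LAMBDA_230 = math.log(2) / TH230_HALF_LIFE_YR  # decay constant, 1/years

def u_series_age(activity_ratio: float) -> float:
    """Age in years implied by a measured (230Th/238U) activity ratio,
    which grows from 0 toward 1 as thorium accumulates in the deposit."""
    if not 0 <= activity_ratio < 1:
        raise ValueError("ratio must be in [0, 1) under these assumptions")
    return -math.log(1 - activity_ratio) / LAMBDA_230

# An illustrative (invented) ratio on the order implied by the Muna
# Island handprint's minimum age of ~68,000 years:
print(round(u_series_age(0.463)))
```

Because the crust accumulates *on top of* the pigment, any age obtained this way is a minimum for the art beneath it — which is why the paper frames 67,800 years as a floor rather than a measurement of the handprint itself.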
The number of 67,800 years is, of course, not exact, but it’s also just a minimum: in fact, the handprint could well be much older. In a paper published last week in Nature, the researchers point out that its age exceeds both that of the oldest similar rock art found elsewhere in Indonesia and that of a hand stencil in Spain attributed to Neanderthals, “which until now represented the oldest demonstrated minimum-age constraint for cave art worldwide.”
It isn’t impossible that this handprint, at least 67,800 years old, was also made by Neanderthals. The obvious modification of the hand’s shape, however, an extension and tapering of the fingers that brings to mind animal claws (or the clutches of Nosferatu), suggests to certain scientific eyes the kind of cognition attributable specifically to Homo sapiens. This discovery has great potential relevance not just to art history, but even more so to other fields concerned with the development of our species. While it had previously been thought, for instance, that the first human settlers of Australia made their way there through Indonesia (in a time of much lower sea levels) between 50,000 and 65,000 years ago, the handprint’s existence in Liang Metanduno suggests that the migration took place even earlier. All these millennia later, Australia remains a favored destination for a variety of immigrants — some of whom do their part to keep Sydney’s art scene interesting.
At least when I was in grade school, we learned the very basics of how the Third Reich came to power in the early 1930s. Paramilitary gangs terrorizing the opposition, the incompetence and opportunism of German conservatives, the Reichstag Fire. And we learned about the critical importance of propaganda, the deliberate misinforming of the public in order to sway opinions en masse and achieve popular support (or at least the appearance of it). While Minister of Propaganda Joseph Goebbels purged Jewish and leftist artists and writers, he built a massive media infrastructure that played, writes PBS, “probably the most important role in creating an atmosphere in Germany that made it possible for the Nazis to commit terrible atrocities against Jews, homosexuals, and other minorities.”
How did the minority party of Hitler and Goebbels take over and break the will of the German people so thoroughly that they would allow and participate in mass murder? Post-war scholars of totalitarianism like Theodor Adorno and Hannah Arendt asked that question over and over, for several decades afterward. Their earliest studies on the subject looked at two sides of the equation. Adorno contributed to a massive volume of social psychology called The Authoritarian Personality, which studied individuals predisposed to the appeals of totalitarianism. He invented what he called the F‑Scale (“F” for “fascism”), one of several measures he used to theorize the Authoritarian Personality Type.
Arendt, on the other hand, looked closely at the regimes of Hitler and Stalin and their functionaries, at the ideology of scientific racism, and at the mechanism of propaganda in fostering “a curiously varying mixture of gullibility and cynicism with which each member… is expected to react to the changing lying statements of the leaders.” So she wrote in her 1951 Origins of Totalitarianism, going on to elaborate that this “mixture of gullibility and cynicism… is prevalent in all ranks of totalitarian movements”:
In an ever-changing, incomprehensible world the masses had reached the point where they would, at the same time, believe everything and nothing, think that everything was possible and nothing was true… The totalitarian mass leaders based their propaganda on the correct psychological assumption that, under such conditions, one could make people believe the most fantastic statements one day, and trust that if the next day they were given irrefutable proof of their falsehood, they would take refuge in cynicism; instead of deserting the leaders who had lied to them, they would protest that they had known all along that the statement was a lie and would admire the leaders for their superior tactical cleverness.
Why the constant, often blatant lying? For one thing, it functioned as a means of fully dominating subordinates, who would have to cast aside all their integrity to repeat outrageous falsehoods and would then be bound to the leader by shame and complicity. “The great analysts of truth and language in politics”—writes McGill University political philosophy professor Jacob T. Levy—including “George Orwell, Hannah Arendt, Vaclav Havel—can help us recognize this kind of lie for what it is.… Saying something obviously untrue, and making your subordinates repeat it with a straight face in their own voice, is a particularly startling display of power over them. It’s something that was endemic to totalitarianism.”
Arendt and others recognized, writes Levy, that “being made to repeat an obvious lie makes it clear that you’re powerless.” She also recognized the function of an avalanche of lies to render a populace powerless to resist, the phenomenon we now refer to as “gaslighting”:
The result of a consistent and total substitution of lies for factual truth is not that the lie will now be accepted as truth and truth be defamed as a lie, but that the sense by which we take our bearings in the real world—and the category of truth versus falsehood is among the mental means to this end—is being destroyed.
The epistemological ground thus pulled out from under them, most would depend on whatever the leader said, no matter its relation to truth. “The essential conviction shared by all ranks,” Arendt concluded, “from fellow traveler to leader, is that politics is a game of cheating and that the ‘first commandment’ of the movement: ‘The Fuehrer is always right,’ is as necessary for the purposes of world politics, i.e., world-wide cheating, as the rules of military discipline are for the purposes of war.”
Arendt wrote Origins of Totalitarianism from research and observations gathered during the 1940s, a very specific historical period. Nonetheless the book, Jeffrey Isaac remarks at The Washington Post, “raises a set of fundamental questions about how tyranny can arise and the dangerous forms of inhumanity to which it can lead.” Arendt’s analysis of propaganda and the function of lies seems particularly relevant at this moment. The kinds of blatant lies she wrote of might become so commonplace as to seem banal. We might begin to think they are an irrelevant sideshow. This, she suggests, would be a mistake.
Note: An earlier version of this post appeared on our site in 2017.
The Renaissance did not, strictly speaking, occur in China. Yet it seems that the Middle Kingdom did have its Renaissance men, so to speak, and in much earlier times at that. We find one such illustrious figure in the Han dynasty of the first and second centuries: a statesman named Zhang Heng (78–139 AD), who managed to distinguish himself across a range of fields from mathematics to astronomy to philosophy to poetry. His accomplishments in science and technology include inventing the first hydraulic armillary sphere for observing the heavens, improving water clocks with a secondary tank, calculating pi more precisely than had been done in China to date, and making discoveries about the nature of the moon. He also, so records show, put together the first-ever seismoscope, a device for detecting earthquakes.
A visual explanation of Zhang’s design appears in the ScienceWorld video above. His seismoscope, its narrator says, “was called hòufēng dìdòngyí, which means ‘instrument for measuring seasonal winds and movements of the earth,’ ” and it could “determine roughly the direction in which an earthquake occurred.”
Each of its eight dragon heads (a combination of number and creature that, in China, could hardly be more auspicious) holds a ball; when the ground shook, the dragon pointing toward the epicenter of the quake dropped its ball into the mouth of one of the decorative toads waiting below. At one time, as history has recorded, it “detected an earthquake 650 kilometers, or 400 miles away, that wasn’t felt at the location of the seismoscope.”
Not bad, considering that neither Zhang nor anyone else had yet heard of tectonic plates. But as all engineers know, practical devices often work just fine even in the absence of completely sound theory. Though no contemporary examples of hòufēng dìdòngyí survive from Zhang’s time, “researchers believe that inside the seismoscope were a pendulum, a bronze ball under the pendulum, eight channels, and eight levers that activated the dragons’ mouths.” Moving in response to a shock wave, the pendulum would release the ball in the opposite direction; the ball would then roll down a channel and trip the lever that opened the dragon’s mouth at its end. However innovative it was for its time, this scheme could, of course, provide no information about exactly how far away the earthquake happened, to say nothing of prediction. Fortunately, centuries of Renaissance men still lay ahead to figure all that out.
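Whatever the internal mechanism actually looked like, the instrument’s output is easy to describe in modern terms: it quantizes a quake’s direction of arrival into one of eight fixed compass sectors, one per dragon. A toy sketch of that quantization — the names and function are illustrative, not a reconstruction of Zhang’s device:

```python
# Illustrative sketch of the seismoscope's core idea: reducing a quake's
# continuous direction of arrival to one of eight discrete dragon heads.
DRAGONS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

def dropped_ball(azimuth_deg: float) -> str:
    """Which dragon's ball drops for a quake arriving from the given
    azimuth (degrees clockwise from north)."""
    sector = round(azimuth_deg / 45) % 8  # snap to the nearest of 8 sectors
    return DRAGONS[sector]

print(dropped_ball(20))   # → "N"  (within 22.5 degrees of due north)
print(dropped_ball(200))  # → "S"
```

Eight sectors means a worst-case directional error of 22.5 degrees — coarse by modern standards, but enough to tell a Han court which province to send aid toward before any messenger arrived.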
Buckminster Fuller was, in many ways, a twenty-first century man: an achievement in itself, considering he was born in the nineteenth century and died in the twentieth. In fact, it may actually count as his defining achievement. For all the inventions presented as revolutionary that never really caught on — the Dymaxion house and car, the geodesic dome — as well as the countless pages of eccentrically theoretical writing and even more countless hours of talk, it can be difficult for us now, here in the actual twenty-first century, to pin down the civilizational impact he so earnestly longed to make. But to the extent that he embodied the faith, born of the combination of industrial might and existential dread that colored the postwar American zeitgeist, that technology can rationally re-shape the world, we’re all his intellectual children.
In the video above, Joe Scott provides an introduction to Fuller and his world in about ten minutes. After a much-referenced Damascene conversion, the once-dissolute Fuller spent most of his life “trying to solve the world’s problems,” Scott says, “specifically in finding ways to save resources and provide for everybody on the planet: to do more with less, as we would say.”
The title he gave himself of “comprehensive anticipatory design scientist” neatly represents both his globally, even universally scaled ambitions and his compulsive knack for self-promotion. If the designs he came up with to achieve his utopian ends never took root in society (even geodesic domes ended up as something like “the hula hoop of twentieth-century architecture,” James Gleick writes, in that they were “everywhere, and then they were a bit silly”), the problem had in part to do with the tendency of his grand visions to outpace the functional technology of his day.
In his sensibility, too, “Bucky” Fuller can come off as a familiar type in our own time, even to those who’ve never heard of him. “There is no doubt whatever in Fuller’s mind that the whole development of modern science and technology has resulted from a willingness on the part of a very few men to sail into the wind of tradition, to trust in their own intellect, and to take advantage of their natural mobility,” wrote the New Yorker’s Calvin Tomkins in a 1966 profile. No wonder he appealed to the Whole Earth Catalog counterculture of that decade, which eventually evolved into the culture of what we now call Silicon Valley, where no declared intention to reinvent the way humans live and work is too ridiculously ambitious. Though few figures could have seemed more likely to turn permanently passé, Buckminster Fuller continues to inspire fascination — and in a way, as a patron saint of techno-optimism, he lives on today.
Though it’s easily forgotten in our age of air travel and instantaneous global communication, many a great city is located where it is because of a river. That holds true everywhere from London to Buenos Aires to Tokyo to New York — and even to Los Angeles, despite its own once-uncontrollable river having long since been turned into a much-ridiculed concrete drainage channel. But no urban waterway has been quite so romanticized for quite so long as the Seine, which runs through the middle of Paris. And it was in the middle of the Seine, on the now-aptly named Île de la Cité, that Paris began. In the 3D time-lapse video above, you can witness the nearly two-and-a-half-millennium evolution of that tiny settlement into the capital we know today in just three minutes.
Paris didn’t take its shape in a simple process of outward growth. As is visible from high above through the video’s animation, the city has grown differently in each era of its existence, whether that of the Parisii, the tribe from whom it takes its name; of the Roman Empire, which constructed the standard Cardo Maximus (now known as the Rue Saint-Jacques) and Decumanus Maximus, among other infrastructure; of the Middle Ages, amid whose great (and haphazard) densification rose Notre-Dame de Paris; or of the time of Baron Haussmann, whose radical urban renovations laid waste to great swathes of medieval Paris and replaced them with the broad avenues, stately residential buildings, and grand monuments recognized around the world today.
At first glance, the built environment of modern Paris can seem to have been frozen in Haussmann’s mid-nineteenth century — and no doubt, that’s just the way its countless tourists might want it. But as shown in the video, the Ville Lumière has kept changing throughout the industrial era, and hasn’t stopped in the succeeding “globalization era.” More growth and transformation have lately taken place outside central Paris, beyond the encircling Boulevard Périphérique, but it would hardly do justice to history to ignore such relatively recent, more divisive additions as the Tour Montparnasse, the Centre Pompidou, or the Louvre Pyramid. (When it was built in the eighteen-eighties, even the beloved Eiffel Tower drew a great deal of ire and disdain.) And though the venerable Notre-Dame may have stood on Île de la Cité since the fourteenth century, the thoroughgoing reconstruction that followed its 2019 fire has made it belong just as much to the twenty-first.