Discover DALL-E, the Artificial Intelligence Artist That Lets You Create Surreal Artwork

DALL-E, an artificial intelligence system that generates plausible-looking art in a variety of styles in response to user-supplied text prompts, has been garnering a lot of interest since it debuted this spring.

It has yet to be released to the general public, but while we’re waiting, you can have a go at DALL-E Mini, an open-source AI model that generates a grid of images inspired by any phrase you care to type into its search box.

Co-creator Boris Dayma explains how DALL-E Mini learns by viewing millions of captioned online images:

Some of the concepts are learnt (sic) from memory as it may have seen similar images. However, it can also learn how to create unique images that don’t exist such as “the Eiffel tower is landing on the moon” by combining multiple concepts together.

Several models are combined together to achieve these results:

• an image encoder that turns raw images into a sequence of numbers with its associated decoder

• a model that turns a text prompt into an encoded image

• a model that judges the quality of the images generated for better filtering 
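For the technically curious, here is a rough sketch of how those three pieces fit together. Every name and output below is an invented placeholder (the real dalle-mini code relies on a trained neural network for each stage), but the flow of data is the one Dayma describes:

```python
import random

# Schematic sketch of the three-model pipeline described above. All names and
# outputs are illustrative stand-ins, not the real dalle-mini API; each class
# represents a trained neural network in the actual system.

class ImageCodec:
    """Model 1: turns raw images into token sequences, with an associated decoder."""
    def decode(self, tokens):
        return f"<image decoded from {len(tokens)} tokens>"  # a real decoder emits pixels

class TextToImage:
    """Model 2: turns a text prompt into an encoded image (a token sequence)."""
    def generate_tokens(self, prompt, length=256):
        return [random.randrange(16384) for _ in range(length)]  # dummy codebook indices

class QualityScorer:
    """Model 3: judges how well a generated image matches the prompt."""
    def score(self, prompt, image):
        return random.random()  # a real scorer returns text-image similarity

def generate_grid(prompt, n_candidates=9, n_keep=4):
    codec, text_to_image, scorer = ImageCodec(), TextToImage(), QualityScorer()
    scored = []
    for _ in range(n_candidates):
        tokens = text_to_image.generate_tokens(prompt)        # prompt -> image tokens
        image = codec.decode(tokens)                          # image tokens -> pixels
        scored.append((scorer.score(prompt, image), image))   # score for filtering
    # Keep only the best-scoring candidates, as in DALL-E Mini's filtering step.
    return [image for _, image in sorted(scored, reverse=True)[:n_keep]]

print(generate_grid("the Eiffel tower is landing on the moon"))
```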

My first attempt to generate some art using DALL-E Mini failed to yield the hoped-for weirdness. I blame the blandness of my search term – “tomato soup.”

Perhaps I’d have better luck with “Andy Warhol eating a bowl of tomato soup as a child in Pittsburgh.”

Ah, there we go!

I was curious to know how DALL-E Mini would riff on its namesake artist’s handle (an honor Dali shares with the titular AI hero of Pixar’s 2008 animated feature, WALL-E).

Hmm… seems like we’re backsliding a bit.

Let me try “Andy Warhol eating a bowl of tomato soup as a child in Pittsburgh with Salvador Dali.”

Ye gods! That’s the stuff of nightmares, but it also strikes me as pretty legit modern art. Love the sparing use of red. Well done, DALL-E Mini.

At this point, vanity got the better of me and I did the AI art-generating equivalent of googling my own name, adding “in a tutu” because who among us hasn’t dreamed of being a ballerina at some point?

Let that be a lesson to you, Pandora…

Hopefully we’re all planning to use this playful open AI tool for good, not evil.

Hyperallergic’s Sarah Rose Sharp raised some valid concerns in relation to the original, more sophisticated DALL-E:

It’s all fun and games when you’re generating “robot playing chess” in the style of Matisse, but dropping machine-generated imagery on a public that seems less capable than ever of distinguishing fact from fiction feels like a dangerous trend.

Additionally, DALL-E’s neural network can yield sexist and racist images, a recurring issue with AI technology. For instance, a reporter at Vice found that prompts including search terms like “CEO” exclusively generated images of White men in business attire. The company acknowledges that DALL-E “inherits various biases from its training data, and its outputs sometimes reinforce societal stereotypes.”

Co-creator Dayma does not duck the troubling implications and biases his baby could unleash:

While the capabilities of image generation models are impressive, they may also reinforce or exacerbate societal biases. While the extent and nature of the biases of the DALL·E mini model have yet to be fully documented, given the fact that the model was trained on unfiltered data from the Internet, it may generate images that contain stereotypes against minority groups. Work to analyze the nature and extent of these limitations is ongoing, and will be documented in more detail in the DALL·E mini model card.

The New Yorker cartoonists Ellis Rosen and Jason Adam Katzenstein conjure another way in which DALL-E Mini could break with the social contract:

And a Twitter user who goes by St. Rev. Dr. Rev blows minds and opens multiple cans of worms, using panels from cartoonist Joshua Barkman’s beloved webcomic, False Knees:

Proceed with caution, and play around with DALL-E Mini here.

Get on the waitlist for original flavor DALL-E access here.

 

Related Content

Artificial Intelligence Brings to Life Figures from 7 Famous Paintings: The Mona Lisa, Birth of Venus & More

Google App Uses Machine Learning to Discover Your Pet’s Look Alike in 10,000 Classic Works of Art

Artificial Intelligence for Everyone: An Introductory Course from Andrew Ng, the Co-Founder of Coursera

Ayun Halliday is the Chief Primatologist of the East Village Inky zine and author, most recently, of Creative, Not Famous: The Small Potato Manifesto.  Follow her @AyunHalliday.

Japanese Researcher Sleeps in the Same Location as Her Cat for 24 Consecutive Nights!


Cross cat napping with bed hopping and you might end up having an “adventure in comfort” similar to the one that informs student Yuri Nakahashi‘s thesis for Tokyo’s Hosei University.

For 24 consecutive nights, Nakahashi forwent the comforts of her own bed in favor of a green sleeping bag, unfurled in whatever random location one of her five pet cats had chosen as its sleeping spot that evening.

(The choice of which cat would get the pleasure of dictating each night’s sleeping bag coordinates was also randomized.)

As the owner of five cats, Nakahashi presumably knew what she was signing up for…

 

Cats rack out atop sofa backs, on stairs, and under beds…and so did Nakahashi.

Her photos suggest she logged a lot of time on a bare wooden floor.

A Fitbit monitored the duration and quality of time spent asleep, as well as the frequency with which she woke during the night.

She documented the physical and psychological effects of this experiment in an interactive published by the Information Processing Society of Japan.


She reports that she eagerly awaited the revelation of each night’s coordinates, and that even when her sleep was disrupted by her pets’ middle of the night grooming routines, bunking next to them had a “relaxing effect.”

Meanwhile, our research suggests that the same experiment would awaken a vastly different response in a different human subject, one suffering from ailurophobia, say, or severe allergies to the proteins in feline saliva, urine, and dander.

What’s really surprising about Nakahashi’s itinerant and apparently pleasure-filled undertaking is how little difference there is between her average sleep score during the experiment and her average sleep score from the 20 days preceding it.

At left, an average sleep score of 84.2 for the 20 days leading up to the experiment. At right, an average sleep score of 83.7 during the experiment.

Nakahashi’s entry for the YouFab Global Creative Awards, a prize for “work that attempts a dialogue that transcends the boundaries of species, space, and time” reflects the playful spirit she brought to her slightly off-kilter experiment:

 Is it possible to add diversity to the way we enjoy sleep? Let’s think about food. In addition to the taste and nutrition of the food, each meal is a special experience with diversity depending on the people you are eating with, the atmosphere of the restaurant, the weather, and many other factors. In order to bring this kind of enjoyment to sleep, we propose an “adventure in comfort” in which the cat decides where to sleep each night, away from the fixed bedroom and bed. This project is similar to going out to eat with a good friend at a restaurant, where the cat guides you to sleep.

She notes that traditional beds have an immobility owing to “their physical weight and cultural concepts such as direction.”

This suggests that her work could be of some benefit to people in decidedly less fanciful, involuntary situations: those whose lack of housing leads them to sleep in unpredictable and inhospitable locations.

Nakahashi’s time in the green sleeping bag inspired her to create the below model of a more flexible bed, using a polypropylene bag, rice and nylon film.

We have created a prototype of a double-layered inflatable bed that has a pouch structure that inflates with air and a jamming structure that becomes hard when air is compressed. The pouch side softly receives the body when inflated. The jamming side becomes hard when the air is removed, and can be firmly fixed in an even space. The air is designed to move back and forth between the two layers, so that when not in use, the whole thing can be rolled up softly for storage. 

It’s hard to imagine the presence of a pussycat doing much to ameliorate the anxiety of those forced to flee their familiar beds with little warning, but we can see how Nakahashi’s design might bring a degree of physical relief when sleeping in subway stations, basement corners, and other harrowing locations.

Via Spoon & Tamago

Ayun Halliday is the Chief Primatologist of the East Village Inky zine and author, most recently, of Creative, Not Famous: The Small Potato Manifesto.  Follow her @AyunHalliday.

Related Content 

A 110-Year-Old Book Illustrated with Photos of Kittens & Cats Taught Kids How to Read

An Animated History of Cats: How Over 10,000 Years the Cat Went from Wild Predator to Sofa Sidekick

GPS Tracking Reveals the Secret Lives of Outdoor Cats

M.I.T. Computer Program Predicts in 1973 That Civilization Will End by 2040

In 1704, Isaac Newton predicted the end of the world sometime around (or after, “but not before”) the year 2060, using a strange series of mathematical calculations. Rather than study what he called the “book of nature,” he took as his source the supposed prophecies of the book of Revelation. While such predictions have always been central to Christianity, it is startling for modern people to look back and see the famed astronomer and physicist indulging them. For Newton, however, as Matthew Stanley writes at Science, “laying the foundation of modern physics and astronomy was a bit of a sideshow. He believed that his truly important work was deciphering ancient scriptures and uncovering the nature of the Christian religion.”

Over three hundred years later, we still have plenty of religious doomsayers predicting the end of the world with Bible codes. But in recent times, their ranks have seemingly been joined by scientists whose only professed aim is interpreting data from climate research and sustainability estimates given population growth and dwindling resources. The scientific predictions do not draw on ancient texts or theology, nor involve final battles between good and evil. Though there may be plagues and other horrible reckonings, these are predictably causal outcomes of over-production and consumption rather than divine wrath. Yet by some strange fluke, the science has arrived at the same apocalyptic date as Newton, plus or minus a decade or two.


The “end of the world” in these scenarios means the end of modern life as we know it: the collapse of industrialized societies, large-scale agricultural production, supply chains, stable climates, nation states…. Since the late sixties, an elite society of wealthy industrialists and scientists known as the Club of Rome (a frequent player in many conspiracy theories) has foreseen these disasters in the early 21st century. One of the sources of their vision is a computer program developed at MIT by computing pioneer and systems theorist Jay Forrester, whose model of global sustainability, one of the first of its kind, predicted civilizational collapse in 2040. “What the computer envisioned in the 1970s has by and large been coming true,” claims Paul Ratner at Big Think.

Those predictions include population growth and pollution levels, “worsening quality of life,” and “dwindling natural resources.” In the video at the top, see Australia’s ABC explain the computer’s calculations, “an electronic guided tour of our global behavior since 1900, and where that behavior will lead us,” says the presenter. The graph spans the years 1900 to 2060. “Quality of life” begins to sharply decline after 1940, and by 2020, the model predicts, the metric contracts to turn-of-the-century levels, meeting the sharp increase of the “Zed Curve” that charts pollution levels. (ABC revisited this reporting in 1999 with Club of Rome member Keith Suter.)
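If “system dynamics” sounds abstract, the mechanism behind curves like these is easy to see in miniature. The toy loop below is our own invention with made-up constants, nothing like the real World Dynamics model (which coupled population, capital, pollution, food, and natural resources), but it shows how a simple stock-and-flow feedback produces overshoot and decline:

```python
# A toy stock-and-flow loop in the spirit of Forrester-style system dynamics.
# The constants and equations here are invented purely for illustration.
resources, population = 1000.0, 10.0

for year in range(1900, 2061, 10):  # one simulated step per decade
    print(f"{year}: population={population:5.1f}  resources={resources:6.1f}")
    resources = max(resources - 1.5 * population, 0.0)  # consumption drains the stock
    # Abundant resources fuel growth; scarcity turns the growth rate negative.
    growth_rate = 0.30 * (resources / 1000.0) - 0.12
    population = max(population * (1 + growth_rate), 0.0)
```

Run it and population crests around the 2030s before declining: the overshoot-and-collapse shape, if hardly the substance, of the MIT model’s forecast.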

You can probably guess the rest—or you can read all about it in the 1972 Club of Rome-published report Limits to Growth, which drew wide popular attention to Jay Forrester’s books Urban Dynamics (1969) and World Dynamics (1971). Forrester, a figure of Newtonian stature in the worlds of computer science and management and systems theory—though not, like Newton, a Biblical prophecy enthusiast—more or less endorsed his conclusions to the end of his life in 2016. In one of his last interviews, at the age of 98, he told the MIT Technology Review, “I think the books stand all right.” But he also cautioned against acting without systematic thinking in the face of the globally interrelated issues the Club of Rome ominously calls “the problematic”:

Time after time … you’ll find people are reacting to a problem, they think they know what to do, and they don’t realize that what they’re doing is making a problem. This is a vicious [cycle], because as things get worse, there is more incentive to do things, and it gets worse and worse.

Where this vague warning is supposed to leave us is uncertain. If the current course is dire, are “unsystematic” solutions worse still? This theory also seems to leave powerfully vested human agents (like Exxon’s executives) wholly unaccountable for the coming collapse. Limits to Growth—scoffed at and disparagingly called “neo-Malthusian” by a host of libertarian critics—stands on far surer evidentiary footing than Newton’s weird predictions, and its climate forecasts, notes Christian Parenti, “were alarmingly prescient.” But for all this doom and gloom, it’s worth bearing in mind that models of the future are not, in fact, the future. There are hard times ahead, but no theory, no matter how sophisticated, can account for every variable.

Note: An earlier version of this post appeared on our site in 2018.

Related Content:

In 1953, a Telephone-Company Executive Predicts the Rise of Modern Smartphones and Video Calls

In 1922, a Novelist Predicts What the World Will Look Like in 2022: Wireless Telephones, 8-Hour Flights to Europe & More

In 1704, Isaac Newton Predicts the World Will End in 2060

It’s the End of the World as We Know It: The Apocalypse Gets Visualized in an Inventive Map from 1486

Watch the Destruction of Pompeii by Mount Vesuvius, Re-Created with Computer Animation (79 AD)

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

Google App Uses Machine Learning to Discover Your Pet’s Look Alike in 10,000 Classic Works of Art


Does your cat fancy herself a 21st-century incarnation of Bastet, the Egyptian Goddess of the Rising Sun, protector of the household, aka The Lady of Slaughter?

If so, you should definitely permit her to download the Google Arts & Culture app on your phone to take a selfie using the Pet Portraits feature.

Remember all the fun you had back in 2018 when the Art Selfie feature mistook you for William II, Prince of Orange or the woman in “Jacob Cornelisz. van Oostsanen Painting a Portrait of His Wife”?


Surely your pet will be just as excited to let a machine-learning algorithm trawl tens of thousands of artworks from Google Arts & Culture’s partnering museums’ collections, looking for doppelgängers.

Or maybe it’ll just view it as one more example of human folly, if a far lesser evil than our predilection for pet costumes.

Should your pet wish to know more about the artworks it resembles, you can tap the results to explore them in depth.

Dogs, fish, birds, reptiles, horses, and rabbits can play along too, though anyone hailing from the rodent family will find themselves shut out.

Mashable reports that “uploading a stock image of a mouse returned drawings of wolves.”

We can’t blame your pet snake for fuming.

Ditto your Vietnamese Pot-bellied pig.

Though your pet ferret probably doesn’t need an app (or a crystal ball) to know what its result would be. Better than an ermine collar, anyway…


If your pet is game and falls within Pet Portraits’ approved species parameters, here are the steps to follow:

  1. Launch the Google Arts & Culture app and select the Camera button. Scroll to the Pet Portraits option.
  2. Have your pet take a selfie. (Or alternatively, upload a saved image.)
  3. Give the app a few seconds (or minutes) to return multiple results with similarity percentages.
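Google has not published how Pet Portraits computes those similarity percentages, but matching of this kind is typically a nearest-neighbor search over image embeddings. Here is a toy sketch, with invented three-number “embeddings” standing in for what a real vision model would produce:

```python
import math

# Toy nearest-neighbor lookalike matching. The artwork titles and vectors are
# invented stand-ins; a real system would embed images with a trained model.
ARTWORK_EMBEDDINGS = {
    "Bastet statue": [0.9, 0.1, 0.3],
    "Wolf drawing":  [0.2, 0.8, 0.5],
    "Horse study":   [0.4, 0.4, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_matches(pet_embedding, k=2):
    """Rank artworks by similarity to the pet photo, reported as a percentage."""
    ranked = sorted(
        ((cosine(pet_embedding, emb), title) for title, emb in ARTWORK_EMBEDDINGS.items()),
        reverse=True,
    )
    return [(title, f"{sim:.0%}") for sim, title in ranked[:k]]

print(best_matches([0.85, 0.15, 0.25]))  # a hypothetical cat selfie's embedding
```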

Download the Google Arts & Culture app here.

Ayun Halliday is the Chief Primatologist of the East Village Inky zine and author, most recently, of Creative, Not Famous: The Small Potato Manifesto.  Follow her @AyunHalliday.

Related Content:

Google’s Free App Analyzes Your Selfie and Then Finds Your Doppelganger in Museum Portraits

Construct Your Own Bayeux Tapestry with This Free Online App

A Gallery of 1,800 Gigapixel Images of Classic Paintings: See Vermeer’s Girl with the Pearl Earring, Van Gogh’s Starry Night & Other Masterpieces in Close Detail

A 10-Course Introduction to Data Science from Johns Hopkins

Data is now everywhere. And those who can harness data effectively stand poised to innovate and make impactful decisions. This holds true in business, medicine, healthcare, education and other spheres of life.

Enter the 10-course Introduction to Data Science from Johns Hopkins. Offered on the Coursera platform, this course sequence covers “the concepts and tools you’ll need throughout the entire data science pipeline, from asking the right kinds of questions to making inferences and publishing results.” The program includes courses covering The Data Scientist’s Toolbox, R Programming, Getting and Cleaning Data, Developing Data Products and more. There’s also a Capstone Project where students can build a data product using real-world data.


Students can formally enroll in the Introduction to Data Science specialization and receive a certificate for each course they complete–a certificate they can share with prospective employers and their professional networks. They’ll also leave with a portfolio demonstrating mastery of the material covered in the sequence. Hopkins estimates that most learners can complete the sequence in 3-7 months, during which time students will be charged $49 per month.

Alternatively, students can audit individual courses for free. When you enroll in a course, look carefully for the Audit option. Note: Auditors do not receive certificates for the courses they complete.

If you would like to formally enroll in the Introduction to Data Science sequence, you can start a 7-Day Free Trial and size things up here.

Open Culture has a partnership with Coursera. If readers enroll in certain Coursera courses and programs, it helps support Open Culture.

Related Content:

Google Data Analytics Certificate: 8 Courses Will Help Prepare Students for an Entry-Level Job in 6 Months

200 Online Certificate & Microcredential Programs from Leading Universities & Companies

Become a Project Manager Without a College Degree with Google’s Project Management Certificate

Google Data Analytics Certificate: 8 Courses Will Help Prepare Students for an Entry-Level Job in 6 Months

During the pandemic, Google launched a series of Career Certificates that will “prepare learners for an entry-level role in under six months.” The new career initiative includes certificates concentrating on Project Management and UX Design. And now also Data Analytics, a burgeoning field that focuses on “the collection, transformation, and organization of data in order to draw conclusions, make predictions, and drive informed decision making.”

Offered on the Coursera platform, the Data Analytics Professional Certificate consists of eight courses, including “Foundations: Data, Data, Everywhere,” “Prepare Data for Exploration,” “Data Analysis with R Programming,” and “Share Data Through the Art of Visualization.” Overall this program “includes over 180 hours of instruction and hundreds of practice-based assessments, which will help you simulate real-world data analytics scenarios that are critical for success in the workplace. The content is highly interactive and exclusively developed by Google employees with decades of experience in data analytics.”


Upon completion, students–even those who haven’t pursued a college degree–can directly apply for jobs (e.g., junior or associate data analyst, database administrator, etc.) with Google and over 130 U.S. employers, including Walmart, Best Buy, and Astreya. You can start a 7-day free trial and explore the courses here. If you continue beyond the free trial, Google/Coursera will charge $39 USD per month. That translates to about $235 after 6 months, the time estimated to complete the certificate.

Explore the Data Analytics Certificate by watching the video above. Learn more about the overall Google career certificate initiative here. And find other Google professional certificates here.

Note: Open Culture has a partnership with Coursera. If readers enroll in certain Coursera courses and programs, it helps support Open Culture.

Related Content:

200 Online Certificate & Microcredential Programs from Leading Universities & Companies

Online Degrees & Mini Degrees: Explore Masters, Mini Masters, Bachelors & Mini Bachelors from Top Universities.

Google Introduces 6-Month Career Certificates, Threatening to Disrupt Higher Education with “the Equivalent of a Four-Year Degree”

Coursera and Google Launch an Online Certificate Program to Help Students Become IT Professionals & Get Attractive Jobs

Google’s UX Design Professional Certificate: 7 Courses Will Help Prepare Students for an Entry-Level Job in 6 Months

Become a Project Manager Without a College Degree with Google’s Project Management Certificate

Are We All Getting More Depressed?: A New Study Analyzing 14 Million Books, Written Over 160 Years, Finds the Language of Depression Steadily Rising


The relations between thought, language, and mood have become subjects of study for several scientific fields of late. Some of the conclusions seem to echo religious notions from millennia ago. “As a man thinketh, so he is,” for example, proclaims a famous verse in Proverbs (one that helped spawn a self-help movement in 1903). Positive psychology might agree. “All that we are is the result of what we have thought,” says one translation of the Buddhist Dhammapada, a sentiment that cognitive behavioral therapy might endorse.

But the insights of these traditions — and of social psychology — also show that we’re embedded in webs of connection: we don’t only think alone; we think — and talk and write and read — with others. External circumstances influence mood as well as internal states of mind. Approaching these questions differently, researchers at the Luddy School of Informatics, Computing, and Engineering at Indiana University asked, “Can entire societies become more or less depressed over time?,” and is it possible to read collective changes in mood in the written languages of the past century or so?


The team of scientists, led by Johan Bollen, Indiana University professor of informatics and computing, took a novel approach that brings together tools from at least two fields: large-scale data analysis and cognitive-behavioral therapy (CBT). Since diagnostic criteria for measuring depression have only been around for the past 40 years, the question seemed to resist longitudinal study. But CBT provided a means of analyzing language for markers of “cognitive distortions” — thinking that skews in overly negative ways. “Language is closely intertwined with this dynamic” of thought and mood, the researchers write in their study, “Historical language records reveal a surge of cognitive distortions in recent decades,” published just last month in PNAS.

Choosing three languages, English (US), German, and Spanish, the team looked for “short sequences of one to five words (n-grams), labeled cognitive distortion schemata (CDS).” These words and phrases express negative thought processes like “catastrophizing,” “dichotomous reasoning,” “disqualifying the positive,” etc. Then, the researchers identified the prevalence of such language in a collection of over 14 million books published between 1855 and 2019 and uploaded to Google Books. The study controlled for language and syntax changes during that time and accounted for the increase in technical and non-fiction books published (though it did not distinguish between literary genres).
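To make that counting step concrete, here is a toy version of it. The marker list below is invented for illustration; the actual study used a curated CDS lexicon and ran it over the Google Books corpus at vastly larger scale:

```python
import re

# Toy measurement of "cognitive distortion schemata" (CDS) prevalence.
# These markers are invented examples, not the study's curated lexicon.
CDS_MARKERS = ["will never", "always fails", "everyone knows",
               "completely ruined", "no one"]

def cds_prevalence(text):
    """Fraction of word positions that begin a CDS marker match."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = 0
    for i in range(len(words)):
        for marker in CDS_MARKERS:
            m = marker.split()
            if words[i:i + len(m)] == m:
                hits += 1
                break
    return hits / len(words) if words else 0.0

sample_old = "the harvest was good and everyone felt hopeful about the future"
sample_new = "this will never work everything is completely ruined and no one cares"
print(f"{cds_prevalence(sample_old):.3f} vs {cds_prevalence(sample_new):.3f}")  # 0.000 vs 0.250
```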

What the scientists found in all three languages was a distinctive “‘hockey stick’ pattern” — a sharp uptick in the language of depression after 1980 and into the present time. The only spikes that come close on the timeline occur in English language books during the Gilded Age and books published in German during and immediately after World War II. (Highly interesting, if unsurprising, findings.) Why the sudden, steep climb in language signifying depressive thinking? Does it actually mark a collective shift in mood, or show how historically oppressed groups have had more access to publishing in the past forty years, and have expressed less satisfaction with the status quo?

While they are careful to emphasize that they “make no causal claims” in the study, the researchers have some ideas about what’s happened, observing for example:

The US surge in CDS prevalence coincides with the late 1970s when wages stopped tracking increasing work productivity. This trend was associated with rises in income inequality to recent levels not seen since the 1930s. This phenomenon has been observed for most developed economies, including Germany, Spain and Latin America.

Other factors cited include the development of the World Wide Web and its facilitation of political polarization, “in particular us-vs.-them thinking… dichotomous reasoning,” and other maladaptive thought patterns that accompany depression. The scale of these developments might be enough to explain a major collective rise in depression, but one commenter offers an additional gloss:

The globe is *Literally* on fire, or historically flooding – Multiple economic crashes barely decades apart – a ghost town of a housing market – a multi-year global pandemic – wealth concentration at the .01% level – terrible pay/COL equations – blocking unionization/workers rights – abusive militarized police, without the restraint or training of actual military –  You can’t afford X for a monthly mortgage payment!  Pay 1.5x for rent instead! – endless wars for the last… 30…years? 50 if we include stuff like Korea, Cold War, Vietnam… How far has the IMC been milking the gov for funds to make the rich richer? Oh, and a billionaire 3-way space race to determine who’s got the biggest “rocket”

These sound like reasons for global depression indeed, but the arrow could also go the other way: maybe catastrophic reasoning produced actual catastrophes; black-and-white thinking led to endless wars, etc…. More study is needed, say Bollen and his colleagues, yet it seems probable, given the data, that “large populations are increasingly stressed by pervasive cultural, economic, and social changes” — changes occurring more rapidly, frequently, and with greater impact on our daily lives than ever before. Read the full study at PNAS.

Related Content: 

Stanford’s Robert Sapolsky Demystifies Depression, Which, Like Diabetes, Is Rooted in Biology

A Unified Theory of Mental Illness: How Everything from Addiction to Depression Can Be Explained by the Concept of “Capture”

Charles Bukowski Explains How to Beat Depression: Spend 3-4 Days in Bed and You’ll Get the Juices Flowing Again (NSFW)

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

A Data Visualization of Every Italian City & Town Founded in the BC Era


Ancient people did not think about history the way most of us do. It made no difference to contemporary readers of the popular Roman historian, Livy (the “JK Rowling of his day”), that “most of the flesh and blood of [his] narrative is fictitious,” and “many of the stories are not really Roman but Greek stories reclothed in Roman dress,” historian Robert Ogilvie writes in an introduction to Livy’s Early History of Rome. Ancient historians did not write to document facts, but to illustrate moral, philosophical, and political truths about what they saw as immutable human nature.

Much of what we know about Roman antiquity comes not from ancient Roman history but from modern archeology (which is still making “amazing” new discoveries about Roman cities). The remains of Rome at its apogee date from the time of Livy, who was likely born in 59 BC and died circa 12 AD. A contemporary, and possibly a friend, of Augustus, the historian lived through a period of immense growth in which the new empire spread across the continent, founding, building, and conquering towns and cities as it went — a time, he wrote, when “the might of an imperial people is beginning to work its own ruin.”


Livy preferred to look back — “turn my eyes from the troubles,” he said — “more than seven hundred years,” to the date long given for the founding of Rome, 753 BC, which seemed ancient enough to him. Modern archeologists have found, however, that the city probably arose hundreds of years earlier, having been continuously inhabited since around 1000 BC. Livy’s own prosperous but provincial city of Padua only became incorporated into the Roman empire a few decades before his birth. According to Livy himself, Padua was first founded in 1183 BC by the Trojan prince Antenor…  if you believe the stories….

The point is that ancient Roman dates are suspect when they come from literary sources (or “histories”) rather than artifacts and archaeological dating methods. What is the distribution of such dates across articles about ancient Rome on Wikipedia? Who could say. But the sheer number of documents and artifacts left behind by the Romans and the people they conquered and subdued make it easy to reconstruct the historical strata of European cities — though we should allow for more than a little exaggeration, distortion, and even fiction in the data.

The maps you see here use Wikipedia data to visualize towns and cities in modern-day Italy founded before the first century — that is, every Italian settlement of any kind with a “BC” cited in its associated article. Many of these were founded by the Romans in the 2nd or 3rd century BC. Many cities, like Pompeii, Milan, and Livy’s own Padua, were conquered or slowly taken over from earlier peoples. Another version of the visualization, above, shows a distribution by color of the dates from 10,000 BC to 10 BC. It makes for an equally striking way to illustrate the history, and prehistory, of Italy up to Livy’s time — that is, according to Wikipedia.

The creator of the visualizations obtained the data by scraping 8000 Italian Wikipedia articles for mentions of “BC” (or “AC” in Italian). Even if we all agreed the open online encyclopedia is an authoritative source (and we certainly do not), we’d still be left with the problem of ancient dating in creating an accurate map of ancient Roman and Italian history. Unreliable data does not improve in picture form. But data visualizations can, when combined with careful scholarship and good research, make dry lists of numbers come alive, as Livy’s stories made Roman history, as he knew it, live for his readers.
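For a sense of how such a scrape works, here is a minimal sketch against the Italian Wikipedia API (which typically abbreviates BC as “a.C.”). The article title and regular expression are illustrative; the creator’s actual pipeline covered roughly 8,000 comune articles and presumably parsed them with more care:

```python
import re
import requests

# Pull an Italian Wikipedia article's plain-text extract and collect the
# years that appear in "<number> a.C." (BC) mentions. Illustrative only.
API = "https://it.wikipedia.org/w/api.php"

def bc_years(title):
    params = {
        "action": "query", "prop": "extracts", "explaintext": 1,
        "redirects": 1, "titles": title, "format": "json",
    }
    pages = requests.get(API, params=params, timeout=30).json()["query"]["pages"]
    text = next(iter(pages.values())).get("extract", "")
    # Match patterns like "753 a.C." and return the years as integers.
    return [int(year) for year in re.findall(r"\b(\d{1,5})\s*a\.C\.", text)]

print(bc_years("Padova")[:10])  # BC dates mentioned in Padua's article
```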

See the creator’s dataset below and learn more here.

Summary statistics of the scraped dates (apparently years BC):

count: 1152

mean: 929.47

std: 1221.89

min: 2

25th percentile: 196

median: 342.5

75th percentile: 1529.5

max: 10000

Related Content: 

The Roads of Ancient Rome Visualized in the Style of Modern Subway Maps

Rome’s Colosseum Will Get a New Retractable Floor by 2023 — Just as It Had in Ancient Times

A Virtual Tour of Ancient Rome, Circa 320 CE: Explore Stunning Recreations of The Forum, Colosseum and Other Monuments

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness
