Data is now everywhere. And those who can harness data effectively stand poised to innovate and make impactful decisions. This holds true in business, medicine, healthcare, education and other spheres of life.
Enter the 10-course Introduction to Data Science from Johns Hopkins. Offered on the Coursera platform, this course sequence covers “the concepts and tools you’ll need throughout the entire data science pipeline, from asking the right kinds of questions to making inferences and publishing results.” The program includes courses covering The Data Scientist’s Toolbox, R Programming, Getting and Cleaning Data, Developing Data Products and more. There’s also a Capstone Project where students can build a data product using real-world data.
Students can formally enroll in the Introduction to Data Science specialization and receive a certificate for each course they complete, a credential they can share with prospective employers and their professional networks. They'll also leave with a portfolio demonstrating mastery of the material covered in the sequence. Hopkins estimates that most learners can complete the sequence in 3-7 months, during which time students will be charged $49 per month.
Alternatively, students can audit individual courses for free. When you enroll in a course, look carefully for the Audit option. Note: auditors do not receive certificates for the courses they complete.
During the pandemic, Google launched a series of Career Certificates that will “prepare learners for an entry-level role in under six months.” The new career initiative includes certificates concentrating on Project Management and UX Design. And now also Data Analytics, a burgeoning field that focuses on “the collection, transformation, and organization of data in order to draw conclusions, make predictions, and drive informed decision making.”
Offered on the Coursera platform, the Data Analytics Professional Certificate consists of eight courses, including “Foundations: Data, Data, Everywhere,” “Prepare Data for Exploration,” “Data Analysis with R Programming,” and “Share Data Through the Art of Visualization.” Overall this program “includes over 180 hours of instruction and hundreds of practice-based assessments, which will help you simulate real-world data analytics scenarios that are critical for success in the workplace. The content is highly interactive and exclusively developed by Google employees with decades of experience in data analytics.”
Upon completion, students–even those who haven’t pursued a college degree–can directly apply for jobs (e.g., junior or associate data analyst, database administrator, etc.) with Google and over 130 U.S. employers, including Walmart, Best Buy, and Astreya. You can start a 7-day free trial and explore the courses here. If you continue beyond the free trial, Google/Coursera will charge $39 USD per month. That translates to about $235 after 6 months, the time estimated to complete the certificate.
The relations between thought, language, and mood have become subjects of study for several scientific fields of late. Some of the conclusions seem to echo religious notions from millennia ago. “As a man thinketh, so he is,” for example, proclaims a famous verse in Proverbs (one that helped spawn a self-help movement in 1903). Positive psychology might agree. “All that we are is the result of what we have thought,” says one translation of the Buddhist Dhammapada, a sentiment that cognitive behavioral therapy might endorse.
But the insights of these traditions — and of social psychology — also show that we're embedded in webs of connection: we don't only think alone; we think — and talk and write and read — with others. External circumstances influence mood as well as internal states of mind. Approaching these questions differently, researchers at the Luddy School of Informatics, Computing, and Engineering at Indiana University asked: Can entire societies become more or less depressed over time? And is it possible to read collective changes in mood in the written languages of the past century or so?
The team of scientists, led by Johan Bollen, Indiana University professor of informatics and computing, took a novel approach that brings together tools from at least two fields: large-scale data analysis and cognitive-behavioral therapy (CBT). Since diagnostic criteria for measuring depression have only been around for the past 40 years, the question seemed to resist longitudinal study. But CBT provided a means of analyzing language for markers of “cognitive distortions” — thinking that skews in overly negative ways. “Language is closely intertwined with this dynamic” of thought and mood, the researchers write in their study, “Historical language records reveal a surge of cognitive distortions in recent decades,” published just last month in PNAS.
Choosing three languages, English (US), German, and Spanish, the team looked for “short sequences of one to five words (n-grams), labeled cognitive distortion schemata (CDS).” These words and phrases express negative thought processes like “catastrophizing,” “dichotomous reasoning,” “disqualifying the positive,” etc. Then, the researchers identified the prevalence of such language in a collection of over 14 million books published between 1855 and 2019 and uploaded to Google Books. The study controlled for language and syntax changes during that time and accounted for the increase in technical and non-fiction books published (though it did not distinguish between literary genres).
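The core measurement can be sketched in miniature: given a lexicon of CDS n-grams, count how often they occur per unit of text. The five markers and the per-10,000-words normalization below are illustrative assumptions of mine, not the study's actual expert-curated lexicon or metric:

```python
import re

# Hypothetical mini-lexicon of cognitive distortion schemata (CDS).
# The real study used much larger expert-curated n-gram lists per
# language; these five markers are illustrative stand-ins.
CDS_MARKERS = ["will never", "completely", "always", "everyone knows", "worthless"]

def cds_prevalence(text: str, markers=CDS_MARKERS) -> float:
    """Count CDS-marker occurrences per 10,000 words of text."""
    lowered = text.lower()
    n_words = len(re.findall(r"[a-z']+", lowered))
    if n_words == 0:
        return 0.0
    hits = sum(lowered.count(m) for m in markers)
    return 10_000 * hits / n_words
```

Run over texts bucketed by publication year, a measure like this yields a prevalence time series in which a post-1980 "hockey stick" could show up as a sustained rise.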
What the scientists found in all three languages was a distinctive “‘hockey stick’ pattern” — a sharp uptick in the language of depression after 1980 and into the present time. The only spikes that come close on the timeline occur in English language books during the Gilded Age and books published in German during and immediately after World War II. (Highly interesting, if unsurprising, findings.) Why the sudden, steep climb in language signifying depressive thinking? Does it actually mark a collective shift in mood, or show how historically oppressed groups have had more access to publishing in the past forty years, and have expressed less satisfaction with the status quo?
While they are careful to emphasize that they “make no causal claims” in the study, the researchers have some ideas about what’s happened, observing for example:
The US surge in CDS prevalence coincides with the late 1970s when wages stopped tracking increasing work productivity. This trend was associated with rises in income inequality to recent levels not seen since the 1930s. This phenomenon has been observed for most developed economies, including Germany, Spain and Latin America.
Other factors cited include the development of the World Wide Web and its facilitation of political polarization, “in particular us-vs.-them thinking… dichotomous reasoning,” and other maladaptive thought patterns that accompany depression. The scale of these developments might be enough to explain a major collective rise in depression, but one commenter offers an additional gloss:
The globe is *Literally* on fire, or historically flooding – Multiple economic crashes barely decades apart – a ghost town of a housing market – a multi-year global pandemic – wealth concentration at the .01% level – terrible pay/COL equations – blocking unionization/workers rights – abusive militarized police, without the restraint or training of actual military – You can’t afford X for a monthly mortgage payment! Pay 1.5x for rent instead! – endless wars for the last… 30…years? 50 if we include stuff like Korea, Cold War, Vietnam… How far has the IMC been milking the gov for funds to make the rich richer? Oh, and a billionaire 3-way space race to determine who’s got the biggest “rocket”
These sound like reasons for global depression indeed, but the arrow could also go the other way: maybe catastrophic reasoning produced actual catastrophes; black and white thinking led to endless wars, etc…. More study is needed, say Bollen and his colleagues, yet it seems probable, given the data, that “large populations are increasingly stressed by pervasive cultural, economic, and social changes” — changes occurring more rapidly, frequently, and with greater impact on our daily lives than ever before. Read the full study at PNAS.
Ancient people did not think about history the way most of us do. It made no difference to contemporary readers of the popular Roman historian, Livy (the “JK Rowling of his day”), that “most of the flesh and blood of [his] narrative is fictitious,” and “many of the stories are not really Roman but Greek stories reclothed in Roman dress,” historian Robert Ogilvie writes in an introduction to Livy’s Early History of Rome. Ancient historians did not write to document facts, but to illustrate moral, philosophical, and political truths about what they saw as immutable human nature.
Much of what we know about Roman antiquity comes not from ancient Roman history but from modern archeology (which is still making “amazing” new discoveries about Roman cities). The remains of Rome at its apogee date from the time of Livy, who was likely born in 59 BC and died circa 12 AD. A contemporary, and possibly a friend, of Augustus, the historian lived through a period of immense growth in which the new empire spread across the continent, founding, building, and conquering towns and cities as it went — a time, he wrote, when “the might of an imperial people is beginning to work its own ruin.”
Livy preferred to look back — “turn my eyes from the troubles,” he said — “more than seven hundred years,” to the date long given for the founding of Rome, 753 BC, which seemed ancient enough to him. Modern archeologists have found, however, that the city probably arose hundreds of years earlier, having been continuously inhabited since around 1000 BC. Livy’s own prosperous but provincial city of Padua only became incorporated into the Roman empire a few decades before his birth. According to Livy himself, Padua was first founded in 1183 BC by the Trojan prince Antenor… if you believe the stories….
The point is that ancient Roman dates are suspect when they come from literary sources (or “histories”) rather than artifacts and archaeological dating methods. What is the distribution of such dates across articles about ancient Rome on Wikipedia? Who could say. But the sheer number of documents and artifacts left behind by the Romans and the people they conquered and subdued make it easy to reconstruct the historical strata of European cities — though we should allow for more than a little exaggeration, distortion, and even fiction in the data.
The maps you see here use Wikipedia data to visualize towns and cities in modern-day Italy founded before the first century — that is, every Italian settlement of any kind with a “BC” cited in its associated article. Many of these were founded by the Romans in the 2nd or 3rd century BC. Many cities, like Pompeii, Milan, and Livy’s own Padua, were conquered or slowly taken over from earlier peoples. Another version of the visualization, above, shows a distribution by color of the dates from 10,000 BC to 10 BC. It makes for an equally striking way to illustrate the history, and prehistory, of Italy up to Livy’s time — that is, according to Wikipedia.
The creator of the visualizations obtained the data by scraping 8,000 Italian Wikipedia articles for mentions of “BC” (“a.C.” in Italian). Even if we all agreed the open online encyclopedia is an authoritative source (and we certainly do not), we’d still be left with the problem of ancient dating in creating an accurate map of ancient Roman and Italian history. Unreliable data does not improve in picture form. But data visualizations can, when combined with careful scholarship and good research, make dry lists of numbers come alive, as Livy’s stories made Roman history, as he knew it, live for his readers.
See the creator’s dataset below and learn more here.
Having collected data on Ross’ evergreen series, The Joy of Painting, they analyzed it for frequency of color use over the show’s 403 episodes, as well as the number of colors applied to each canvas.
For those keeping score, after black and white, alizarin crimson was the color Ross favored most, and 1/4 of the paintings made on air boast 12 colors.
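A tally like that is straightforward with Python's `Counter`; the per-episode palettes below are made-up stand-ins for the real 403-episode dataset:

```python
from collections import Counter

# Made-up per-episode palettes standing in for the real 403-episode data.
episodes = [
    ["titanium white", "alizarin crimson", "phthalo blue"],
    ["titanium white", "alizarin crimson", "van dyke brown"],
    ["titanium white", "midnight black"],
]

# Frequency of each color across all episodes.
color_counts = Counter(color for palette in episodes for color in palette)

# Distribution of palette sizes (how many colors went into each canvas).
colors_per_canvas = Counter(len(palette) for palette in episodes)

print(color_counts.most_common(2))
print(dict(colors_per_canvas))
```

The same two counters, run over the full series, would reproduce both findings: the overall color ranking and the share of canvases using any given number of colors.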
The data could be slightly skewed by the contributions of occasional guest artists such as Ross’ former instructor, John Thamm, who once counseled Ross to “paint bushes and trees and leave portrait painting to someone else.” Thamm availed himself of a single color — Van Dyke Brown — to demonstrate the wipe out technique. His contribution is one of the few human likenesses that got painted over the show’s 11-year public television run.
Indian Red was accorded but a single use, in season 22’s first episode, “Autumn Images.” (“Let’s sparkle this up. We’re gonna have fall colors. Let’s get crazy.”)
For art lovers craving a more traditional gallery experience, site creator Connor Rothschild has installed a virtual bench facing a frame capable of displaying all the paintings in random or chronological order, with digital swatches representing the paints that went into them and YouTube links to the episodes that produced them.
And for those who’d rather gaze at data science, the code is available on GitHub.
2020 was “a year for the (record) books in publishing,” wrote Jim Milliot in Publishers Weekly this past January, a surge continuing into 2021. Yet some kinds of print books have so declined in sales there may be no reason to keep publishing them, or buying them, since their equivalents online are superior in almost every respect to any version on paper. As I finally conceded during a recent, aggressive spring cleaning, I personally have no reason to store heavy, bulky, dusty reference books, except in cases of extreme sentiment.
Started in 1995 by Stanford philosopher Edward Zalta with only two entries, the SEP is “positively ancient in internet years,” but it is hardly “ossified,” remaining an online source “‘comparable in scope, depth and authority,’” the American Library Association’s Booklist review wrote, “to the biggest philosophy encyclopedias in print.”
I personally think the SEP is just as interesting for its content as its achievement, if not more so — and now, thanks to engineer and developer Joseph DiCastro, that content is more accessible than ever, through an interactive visualization project and search engine called Visualizing SEP.
Visualizing SEP “provides clear visualizations based on a philosophical taxonomy that DiCastro adapted from the one developed by the Indiana University Philosophy Ontology Project (InPhO),” Justin Weinberg writes at Daily Nous. “Type a term into the search box and suggested SEP entries will be listed. Click on one of the entry titles, and a simple visualization will appear with your selected entry at the center and related entries surrounding it.” At the top of the page, you can select from a series of “domains.” Each selection produces a similar visualization of various-sized dots.
I found enough entries to keep me busy for hours in the very first domain graph, “Aesthetics and Philosophy of Art.” The last of these, simply titled “Thinker,” links together all of the philosophers mentioned in the Stanford Encyclopedia of Philosophy, from the most famous household names to the most obscure and scholastic. Just skimming through these names and reading the brief biographies at the left will leave readers with a broader contextual understanding than they could gain from a print encyclopedia. (Click on the “Article Details” button to expand the full article).
The visualizer project carries forth into the data-obsessed 21st century one of the best things about the Internet in its earliest years: access to free, high quality (and highly portable) information with few barriers for entry. Learn more about how to best navigate Visualizing SEP at Daily Nous.
We should just trust the experts. But wait: identifying true expertise requires its own, even more specialized expertise. Besides, experts disagree with each other and, over time, with themselves. This makes it challenging indeed for all of us non-experts — and we’re all non-experts in the fields to which we have not dedicated our lives — to understand phenomena of any complexity. As for grasping climate change, with its enormous historical scale and countless variables, might we as well just throw up our hands? Many have done so: Neil Halloran, creator of the short documentary Degrees of Uncertainty above, labels them “climate denialists” and “climate defeatists.”
Climate denialists choose to believe that manmade climate change isn’t happening, climate defeatists choose to believe that it’s inevitable, and both thereby let themselves off the hook. Not only do they not have to address the issue, they don’t even have to understand it — which itself can seem a fairly daunting task, given that scientists themselves express no small degree of uncertainty about climate change’s degree and trajectory. “The only way to learn how sure scientists are is to dig in a little and view their work with some healthy skepticism,” says Halloran. This entails developing an instinct not for refutation, exactly, but for examining just how the experts arrive at their conclusions and what pitfalls they encounter along the way.
Often, scientists “don’t know how close they are to the truth, and they’re prone to confirmation bias,” and as anyone professionally involved in the sciences knows full well, they work “under pressure to publish noteworthy findings.” Their publications then find their way to a media culture in which, increasingly, “trusting or distrusting scientists is becoming a matter of political identity.” As he did in his previous documentary The Fallen of World War II, Halloran uses animation and data visualization to illuminate his own path to understanding a global occurrence whose sheer proportions make it difficult to perceive.
This journey takes Halloran not just around the globe but back in time, starting in the year 19,000 B.C. and ending in projections of a future in which rising seas swallow much of Amsterdam, Miami, and New Orleans. The most important stop in the middle is the Age of Enlightenment and the Industrial Revolution of the 17th through the 19th century, when science and technology rose to prominence and brought about an unprecedented human flourishing — with climatic consequences that have begun to make themselves known, albeit not with absolute certainty. But as Halloran sees it, “uncertainty, the very thing that clouds our view, also frees us to construct possible answers.”
Surely you’ve learned, as I have, to filter out the constant threats of doom. It’s impossible to function on high alert all of the time. But one must stay at least minimally informed. To check the news even once a day is to encounter headline after headline announcing DOOM IS COMING! Say that we’re all desensitized, and rather than react, we evaluate: In what way will doom arrive? How bad will the doom be? There are many competing theories of doom. Which one is most likely, and how can we understand them in relation to each other?
When the pandemic hit last winter, “we as a society were completely unprepared for it,” despite the fact that experts had been warning us for decades that exactly such a threat was high on the scale of likelihood. Are we focusing on the wrong kinds of doom, to the exclusion of more pressing threats? Instead of panicking when the coronavirus hit, Walliman coolly wondered what else might be lurking around the corner. “Crikey,” says the New Zealander upon the first reveal of his Map of Doom, “there’s quite a lot aren’t there?”
Not content to just collect disasters (and draw them as if they were all happening at the same time), Walliman also wanted to find out which ones pose the biggest threat, “using some real data.” After the Map of Doom comes the Chart of Doom, an XY grid plotting the likelihood and severity of various crises. These include ancient stalwarts like super volcanoes; far more recent threats like nuclear war and catastrophic climate change; cosmic threats like asteroids and collapsing stars; terrestrial threats like widespread societal collapse and extra-terrestrial threats like hostile aliens….
At the top of the graph, at the limit of “high likelihood,” there lies the “already happening zone,” including, of course, COVID-19, climate change, and volatile extreme weather events like hurricanes and tsunamis. At the bottom, in the “impossible to calculate” zone, we find sci-fi events like rogue AI, rogue black holes, rogue nano-bots, hostile aliens, and the collapse of the vacuum of space. All theoretically possible, but in Walliman’s analysis mostly unlikely to occur. As in all of his maps, he cites his sources on the video’s YouTube page.
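Structurally, the chart amounts to a mapping from threats to (likelihood, severity) pairs, with a separate bucket for the incalculable. A minimal sketch, with invented scores (none of these numbers are Walliman's):

```python
# Invented likelihood/severity scores on a 0-1 scale; None marks the
# 'impossible to calculate' zone. These figures are illustrative only.
threats = {
    "pandemic":       (0.95, 0.6),
    "climate change": (1.0, 0.8),
    "supervolcano":   (0.1, 0.9),
    "hostile aliens": (None, 1.0),
}

def zone(likelihood, severity):
    """Bucket a threat the way the chart's zones do."""
    if likelihood is None:
        return "impossible to calculate"
    if likelihood >= 0.9:
        return "already happening"
    return "watch list"

chart = {name: zone(*scores) for name, scores in threats.items()}
```

Plotting the scored pairs on an XY grid, with the two special zones drawn as bands at the top and bottom, recovers the Chart of Doom's layout.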
If you’re not feeling quite up to a data presentation on mass casualty events just now, you can download the Map and Chart of Doom here and peruse them at your leisure. Pick up a Map of Doom for the wall at Walliman’s site, and while you’re there, why not buy an “I survived 2020” sticker. Maybe it’s premature, and maybe in poor taste. And maybe in times of doom we need someone to face the facts of doom squarely, turn them into cartoon infographics of doom, and claim victories like living through another calendar year.