Human History Gets a Rewrite

A brilliant new account upends bedrock assumptions about 30,000 years of change. By William Deresiewicz, October 18, 2021, The Atlantic

Illustration by Rodrigo Corral: a pyramid balanced on its point, upside down, in the desert, with small figures and a caravan of camels. Sources: Hugh Sitton / Getty; Been There YB / Shutterstock

Many years ago, when I was a junior professor at Yale, I cold-called a colleague in the anthropology department for assistance with a project I was working on. I didn’t know anything about the guy; I just selected him because he was young, and therefore, I figured, more likely to agree to talk.

Five minutes into our lunch, I realized that I was in the presence of a genius. Not an extremely intelligent person—a genius. There’s a qualitative difference. The individual across the table seemed to belong to a different order of being from me, like a visitor from a higher dimension. I had never experienced anything like it before. I quickly went from trying to keep up with him, to hanging on for dear life, to simply sitting there in wonder.

That person was David Graeber. In the 20 years after our lunch, he published two books; was let go by Yale despite a stellar record (a move universally attributed to his radical politics); published two more books; got a job at Goldsmiths, University of London; published four more books, including Debt: The First 5,000 Years, a magisterial revisionary history of human society from Sumer to the present; got a job at the London School of Economics; published two more books and co-wrote a third; and established himself not only as among the foremost social thinkers of our time—blazingly original, stunningly wide-ranging, impossibly well read—but also as an organizer and intellectual leader of the activist left on both sides of the Atlantic, credited, among other things, with helping launch the Occupy movement and coin its slogan, “We are the 99 percent.”

On September 2, 2020, at the age of 59, David Graeber died of necrotizing pancreatitis while on vacation in Venice. The news hit me like a blow. How many books have we lost, I thought, that will never get written now? How many insights, how much wisdom, will remain forever unexpressed? The appearance of The Dawn of Everything: A New History of Humanity is thus bittersweet, at once a final, unexpected gift and a reminder of what might have been. In his foreword, Graeber’s co-author, David Wengrow, an archaeologist at University College London, mentions that the two had planned no fewer than three sequels.

And what a gift it is, no less ambitious a project than its subtitle claims. The Dawn of Everything is written against the conventional account of human social history as first developed by Hobbes and Rousseau; elaborated by subsequent thinkers; popularized today by the likes of Jared Diamond, Yuval Noah Harari, and Steven Pinker; and accepted more or less universally. The story goes like this. Once upon a time, human beings lived in small, egalitarian bands of hunter-gatherers (the so-called state of nature). Then came the invention of agriculture, which led to surplus production and thus to population growth as well as private property. Bands swelled to tribes, and increasing scale required increasing organization: stratification, specialization; chiefs, warriors, holy men.

Eventually, cities emerged, and with them, civilization—literacy, philosophy, astronomy; hierarchies of wealth, status, and power; the first kingdoms and empires. Flash forward a few thousand years, and with science, capitalism, and the Industrial Revolution, we witness the creation of the modern bureaucratic state. The story is linear (the stages are followed in order, with no going back), uniform (they are followed the same way everywhere), progressive (the stages are “stages” in the first place, leading from lower to higher, more primitive to more sophisticated), deterministic (development is driven by technology, not human choice), and teleological (the process culminates in us).

It is also, according to Graeber and Wengrow, completely wrong. Drawing on a wealth of recent archaeological discoveries that span the globe, as well as deep reading in often neglected historical sources (their bibliography runs to 63 pages), the two dismantle not only every element of the received account but also the assumptions that it rests on.

Yes, we’ve had bands, tribes, cities, and states; agriculture, inequality, and bureaucracy. But what each of these was, how it developed, and how we got from one to the next—all this and more, the authors comprehensively rewrite. More important, they demolish the idea that human beings are passive objects of material forces, moving helplessly along a technological conveyor belt that takes us from the Serengeti to the DMV. We’ve had choices, they show, and we’ve made them. Graeber and Wengrow offer a history of the past 30,000 years that is not only wildly different from anything we’re used to, but also far more interesting: textured, surprising, paradoxical, inspiring.

The bulk of the book (which weighs in at more than 500 pages) takes us from the Ice Age to the early states (Egypt, China, Mexico, Peru). In fact, it starts by glancing back before the Ice Age to the dawn of the species. Homo sapiens developed in Africa, but it did so across the continent, from Morocco to the Cape, not just in the eastern savannas, and in a great variety of regional forms that only later coalesced into modern humans. There was no anthropological Garden of Eden, in other words—no Tanzanian plain inhabited by “mitochondrial Eve” and her offspring. As for the apparent delay between our biological emergence (and with it our cognitive capacity for culture) and the actual development of culture—a gap of many tens of thousands of years—that, the authors tell us, is an illusion. The more we look, especially in Africa (rather than mainly in Europe, where humans showed up relatively late), the older the evidence we find of complex symbolic behavior.

That evidence and more—from the Ice Age, from later Eurasian and Native North American groups—demonstrate, according to Graeber and Wengrow, that hunter-gatherer societies were far more complex, and more varied, than we have imagined. The authors introduce us to sumptuous Ice Age burials (the beadwork at one site alone is thought to have required 10,000 hours of work), as well as to monumental architectural sites like Göbekli Tepe, in modern Turkey, which dates from about 9000 B.C. (at least 6,000 years before Stonehenge) and features intricate carvings of wild beasts. They tell us of Poverty Point, a set of massive, symmetrical earthworks erected in Louisiana around 1600 B.C., a “hunter-gatherer metropolis the size of a Mesopotamian city-state.” They describe an indigenous Amazonian society that shifted seasonally between two entirely different forms of social organization (small, authoritarian nomadic bands during the dry months; large, consensual horticultural settlements during the rainy season). They speak of the kingdom of Calusa, a monarchy of hunter-gatherers the Spanish found when they arrived in Florida. All of these scenarios are unthinkable within the conventional narrative.

The overriding point is that hunter-gatherers made choices—conscious, deliberate, collective—about the ways that they wanted to organize their societies: to apportion work, dispose of wealth, distribute power. In other words, they practiced politics. Some of them experimented with agriculture and decided that it wasn’t worth the cost. Others looked at their neighbors and determined to live as differently as possible—a process that Graeber and Wengrow describe in detail with respect to the Indigenous peoples of Northern California, “puritans” who idealized thrift, simplicity, money, and work, in contrast to the ostentatious slaveholding chieftains of the Pacific Northwest. None of these groups, as far as we have reason to believe, resembled the simple savages of popular imagination, unselfconscious innocents who dwelt within a kind of eternal present or cyclical dreamtime, waiting for the Western hand to wake them up and fling them into history.

The authors carry this perspective forward to the ages that saw the emergence of farming, of cities, and of kings. In the locations where it first developed, about 10,000 years ago, agriculture did not take over all at once, uniformly and inexorably. (It also didn’t start in only a handful of centers—Mesopotamia, Egypt, China, Mesoamerica, Peru, the same places where empires would first appear—but more like 15 or 20.) Early farming was typically flood-retreat farming, conducted seasonally in river valleys and wetlands, a process that is much less labor-intensive than the more familiar kind and does not conduce to the development of private property. It was also what the authors call “play farming”: farming as merely one element within a mix of food-producing activities that might include hunting, herding, foraging, and horticulture.

Settlements, in other words, preceded agriculture—not, as we’ve thought, the reverse. What’s more, it took some 3,000 years for the Fertile Crescent to go from the first cultivation of wild grains to the completion of the domestication process—about 10 times as long as necessary, recent analyses have shown, had biological considerations been the only ones. Early farming embodied what Graeber and Wengrow call “the ecology of freedom”: the freedom to move in and out of farming, to avoid getting trapped by its demands or endangered by the ecological fragility that it entails.

The authors write their chapters on cities against the idea that large populations need layers of bureaucracy to govern them—that scale leads inevitably to political inequality. Many early cities, places with thousands of people, show no sign of centralized administration: no palaces, no communal storage facilities, no evident distinctions of rank or wealth. This is the case with what may be the earliest cities of all, Ukrainian sites like Taljanky, which were discovered only in the 1970s and which date from as early as roughly 4100 B.C., hundreds of years before Uruk, the oldest known city in Mesopotamia. Even in that “land of kings,” urbanism antedated monarchy by centuries. And even after kings arose, “popular councils and citizen assemblies,” Graeber and Wengrow write, “were stable features of government,” with real power and autonomy. Despite what we like to believe, democratic institutions did not begin just once, millennia later, in Athens.

If anything, aristocracy emerged in smaller settlements, the warrior societies that flourished in the highlands of the Levant and elsewhere, and that are known to us from epic poetry—a form of existence that remained in tension with agricultural states throughout the history of Eurasia, from Homer to the Mongols and beyond. But the authors’ most compelling instance of urban egalitarianism is undoubtedly Teotihuacan, a Mesoamerican city that rivaled imperial Rome, its contemporary, for size and magnificence. After sliding toward authoritarianism, its people abruptly changed course, abandoning monument-building and human sacrifice for the construction of high-quality public housing. “Many citizens,” the authors write, “enjoyed a standard of living that is rarely achieved across such a wide sector of urban society in any period of urban history, including our own.”

And so we arrive at the state, with its structures of central authority, exemplified variously by large-scale kingdoms, by empires, by modern republics—supposedly the climax form, to borrow a term from ecology, of human social organization. What is the state? the authors ask. Not a single stable package that’s persisted all the way from pharaonic Egypt to today, but a shifting combination of, as they enumerate them, the three elementary forms of domination: control of violence (sovereignty), control of information (bureaucracy), and personal charisma (manifested, for example, in electoral politics). Some states have displayed just two, some only one—which means the union of all three, as in the modern state, is not inevitable (and may indeed, with the rise of planetary bureaucracies like the World Trade Organization, be already decomposing). More to the point, the state itself may not be inevitable. For most of the past 5,000 years, the authors write, kingdoms and empires were “exceptional islands of political hierarchy, surrounded by much larger territories whose inhabitants … systematically avoided fixed, overarching systems of authority.”

Is “civilization” worth it, the authors want to know, if civilization—ancient Egypt, the Aztecs, imperial Rome, the modern regime of bureaucratic capitalism enforced by state violence—means the loss of what they see as our three basic freedoms: the freedom to disobey, the freedom to go somewhere else, and the freedom to create new social arrangements? Or does civilization rather mean “mutual aid, social co-operation, civic activism, hospitality [and] simply caring for others”?

These are questions that Graeber, a committed anarchist—an exponent not of anarchy but of anarchism, the idea that people can get along perfectly well without governments—asked throughout his career. The Dawn of Everything is framed by an account of what the authors call the “indigenous critique.” In a remarkable chapter, they describe the encounter between early French arrivals in North America, primarily Jesuit missionaries, and a series of Native intellectuals—individuals who had inherited a long tradition of political conflict and debate and who had thought deeply and spoke incisively on such matters as “generosity, sociability, material wealth, crime, punishment and liberty.”

The Indigenous critique, as articulated by these figures in conversation with their French interlocutors, amounted to a wholesale condemnation of French—and, by extension, European—society: its incessant competition, its paucity of kindness and mutual care, its religious dogmatism and irrationalism, and most of all, its horrific inequality and lack of freedom. The authors persuasively argue that Indigenous ideas, carried back and publicized in Europe, went on to inspire the Enlightenment (the ideals of freedom, equality, and democracy, they note, had theretofore been all but absent from the Western philosophical tradition). They go further, making the case that the conventional account of human history as a saga of material progress was developed in reaction to the Indigenous critique in order to salvage the honor of the West. We’re richer, went the logic, so we’re better. The authors ask us to rethink what better might actually mean.

The Dawn of Everything is not a brief for anarchism, though anarchist values—antiauthoritarianism, participatory democracy, small-c communism—are everywhere implicit in it. Above all, it is a brief for possibility, which was, for Graeber, perhaps the highest value of all. The book is something of a glorious mess, full of fascinating digressions, open questions, and missing pieces. It aims to replace the dominant grand narrative of history not with another of its own devising, but with the outline of a picture, only just becoming visible, of a human past replete with political experiment and creativity.

“How did we get stuck?” the authors ask—stuck, that is, in a world of “war, greed, exploitation [and] systematic indifference to others’ suffering”? It’s a pretty good question. “If something did go terribly wrong in human history,” they write, “then perhaps it began to go wrong precisely when people started losing that freedom to imagine and enact other forms of social existence.” It isn’t clear to me how many possibilities are left us now, in a world of polities whose populations number in the tens or hundreds of millions. But stuck we certainly are.


This article appears in the November 2021 print edition with the headline “It Didn’t Have to Be This Way.”

William Deresiewicz is the author of The Death of the Artist: How Creators Are Struggling to Survive in the Age of Billionaires and Big Tech.

A deeply researched warning about how the digital economy threatens artists’ lives and work–the music, writing, and visual art that sustain our souls and societies–from an award-winning essayist and critic.

There are two stories you hear about earning a living as an artist in the digital age. One comes from Silicon Valley. There’s never been a better time to be an artist, it goes. If you’ve got a laptop, you’ve got a recording studio. If you’ve got an iPhone, you’ve got a movie camera. And if production is cheap, distribution is free: it’s called the Internet. Everyone’s an artist; just tap your creativity and put your stuff out there.

The other comes from artists themselves. Sure, it goes, you can put your stuff out there, but who’s going to pay you for it? Everyone is not an artist. Making art takes years of dedication, and that requires a means of support. If things don’t change, a lot of art will cease to be sustainable.

So which account is true? Since people are still making a living as artists today, how are they managing to do it? William Deresiewicz, a leading critic of the arts and of contemporary culture, set out to answer those questions. Based on interviews with artists of all kinds, The Death of the Artist argues that we are in the midst of an epochal transformation. If artists were artisans in the Renaissance, bohemians in the nineteenth century, and professionals in the twentieth, a new paradigm is emerging in the digital age, one that is changing our fundamental ideas about the nature of art and the role of the artist in society.

Early in his new book, The Death of the Artist: How Creators Are Struggling to Survive in the Age of Billionaires and Big Tech, William Deresiewicz relates two stories often told about the arts today. From Silicon Valley and its boosters, we hear: “There’s never been a better time to be an artist.” Anyone can easily market their own music, books, or films online, drum up a thousand true fans, and enjoy a decent living. We see proof of this, time and again, in profiles of bold creators who got tired of waiting to be chosen, took to the web, and saw their work go viral.

The artists tell another tale. Yes, you can produce and post your work more easily, but so can everyone else. Every year, every major venue — SoundCloud, Kindle Store, Sundance — is inundated with thousands if not millions of songs, books, and films, but most sink like a stone. Of the 6,000,000 books in the US Kindle Store, the “overwhelming majority” of which are self-published, “68 percent sell fewer than two copies a month.” Only about 2,000 US Kindle Store authors earn more than $25,000 per year. Spotify features roughly 2,000,000 artists worldwide, but less than four percent of them garner 95 percent of the streams. The pie has been “pulverized into a million tiny crumbs.” We may now have “universal access” to the audience, but “at the price of universal impoverishment.”

Deresiewicz is a literary critic and author of a provocative earlier book on higher education in the United States, Excellent Sheep: The Miseducation of the American Elite and the Way to a Meaningful Life. He made his first foray into the debate about the plight of artists in The Atlantic in 2015, but declined at the time to endorse either of the two narratives set out above. In that essay, he framed the debate itself as symptomatic of a deeper shift on the artistic horizon. Creators are becoming unmoored from the institutions that have long made their careers possible, he argued, as publishers, labels, studios, and colleges are now “contracting or disintegrating.” Left to fend for themselves in the marketplace, artists have been forced to practice “creative entrepreneurship,” with less time to spend building an oeuvre or perfecting their technique, and more time spent on networking and self-promotion. User reviews and recommendation engines matter more to them than critical opinion. Their work tends to be tamer, safer, more “formulaic” — “more like entertainment, less like art.” More broadly, this new breed of artist is compelled to feel good about all the internet makes possible, and to ignore the fact that few have managed to capitalize on it. In 2015, the future under the new paradigm was not encouraging. But it seemed too soon to pass judgment.

Having gathered copious evidence for the book, Deresiewicz now stands firmly against the model of the creative entrepreneur. Based on some 140 phone interviews with creators across a number of fields, and ample studies and reports, the book urges us to dismiss the Silicon Valley narrative as pure “propaganda.” It is a persuasive and thoroughly engaging read. Deresiewicz is not a pioneer in this terrain — Scott Timberg’s Culture Crash: The Killing of the Creative Class and Jonathan Taplin’s Move Fast and Break Things: How Facebook, Google, and Amazon Cornered Culture and Undermined Democracy cover much of the same ground. But Deresiewicz takes a closer look at artists’ lives and careers, presenting a bleak composite picture that anyone with creative aspirations must confront. All but the most popular creators, he makes clear, face new and daunting obstacles, pointing to a future in which more artists will do more of their work as part-time amateurs. Final chapters try to brighten the picture somewhat with encouraging words about organizing and advocating for IP reform. But the book leaves unclear the answer to a larger question: is the aspiration to become a full-time writer, filmmaker, or musician — no matter how earnestly held — now essentially obsolete?

¤

It has always been hard to make a living in the arts; what is new, Deresiewicz contends, is that even moderately successful artists — who publish, show, or perform frequently — often struggle to lead a middle-class life. Revenue for most creators is falling: the Authors Guild, for example, reported that the writing income of American authors fell by an average of 30 percent between 2009 and 2015. When distribution moved online, the middle of the artistic earning spectrum collapsed. This runs contrary to the early optimism of figures like former Wired editor Chris Anderson, who saw a bright future for less popular artists. Freed from the spatial constraints of brick-and-mortar retail, selling books and music online was supposed to produce a flatter distribution curve: rather than a sharp spike with most sales going to the top 100 or so artists, sales would be dispersed more gradually over millions of artists — a long tail. But as Deresiewicz makes clear, this hasn’t happened. The net didn’t feed a long tail of content consumption; it just made the head of the curve a lot taller. In the 1980s, 80 percent of music album revenue went to the top 20 percent of content. Now it goes to the top one percent. Deresiewicz reveals a similar pattern across the arts: many of the people he interviewed earned from $20,000 to $30,000 a year, “if not less.” The more successful earned from $40,000 to $70,000, “but not more.”
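
To make the long-tail-versus-fat-head contrast concrete, here is a minimal, purely illustrative sketch in Python. The two-million-artist count echoes the Spotify figure cited above, but the ranking exponents are assumptions chosen only for illustration; none of this is data from Deresiewicz or his sources.

```python
# Illustrative sketch only: how revenue concentration changes with the shape
# of a ranked sales distribution. The artist count and exponents below are
# assumptions for the sake of the example, not figures from the book.

def top_share(num_artists: int, exponent: float, top_fraction: float) -> float:
    """Share of total revenue earned by the top `top_fraction` of artists,
    assuming the artist ranked k earns in proportion to k ** -exponent."""
    revenues = [rank ** -exponent for rank in range(1, num_artists + 1)]
    top_n = max(1, int(num_artists * top_fraction))
    return sum(revenues[:top_n]) / sum(revenues)

ARTISTS = 2_000_000  # roughly the number of artists on Spotify cited above

# A steep power law piles revenue into the head of the curve (the "fat head")...
print(f"steep curve: top 1% take {top_share(ARTISTS, 1.1, 0.01):.0%} of revenue")
# ...while a flatter curve would have spread it down a long tail.
print(f"flat curve:  top 1% take {top_share(ARTISTS, 0.5, 0.01):.0%} of revenue")
```

Under the steep curve, the top one percent of ranked artists capture the great bulk of total revenue; only under the much flatter, counterfactual curve does anything like Anderson's long tail appear.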

Artists have lost income because content has been “demonetized.” Putting so much music, text, and video online has rendered much of it worthless, due to piracy or sheer, superfluous abundance. Publishers, labels, and studios all face falling revenues, resulting in ever smaller advances and marketing budgets. Television, having resisted demonetization, is the one bright exception to the trend. Netflix, HBO, and other platforms support a thriving culture of middle-class creators — standing out, for Deresiewicz, as the exception that proves the rule.

Chapters on rent, space, and time show how much harder it is to sustain a full-time living as an artist, alone or in close proximity to others. Median rent in the United States is up about 42 percent, adjusted for inflation, since 2000. Not a single person Deresiewicz spoke with was “living decently in a market-rate apartment in a major city on their earnings as an artist.” Artists can no longer afford “to live where artists live.” Nor can many get by without support from parents or partners. “The only way the current model works,” the author was often told, “is if you are young, healthy and childless.” Many of the profiles in the book portray artists living in extreme frugality, often in cramped quarters, in smaller cities or towns, compelled to spend much of their time on menial day jobs and side hustles. Not surprisingly, as one observed, “most people burn out after ten years.”

Deresiewicz devotes chapters to the situation in each of the arts, with the common theme being the takeover of winner-take-all economics. Musicians, for their part, never recovered from digitization. With file sharing having taught a generation to expect music for free, first musicians and then labels surrendered to streaming services — fearing they would otherwise see no revenue at all. Yet streaming fees, now the main source of income in music, are tiny — on Spotify, fractions of a penny per stream; on YouTube, between $700 and $6,000 per million views, a number that few artists reach. “Nowhere is the long tail thinner or the fat head fatter than in music.” Ninety percent of subscription fees go to the “megastars in the head.” The top 0.1 percent of artists take 50 percent of album sales, with “similar numbers for downloads and streaming.” Musicians are left “scrambling” to find other means to make a living. Live performances support some, but in 2017, 60 percent of that income went to the top one percent.

The writing scene is equally grim. With 39 percent fewer books sold in US stores between 2007 and 2017, and fewer books reviewed in prominent venues, publishers have lost control over marketing. Mid-list and early career authors receive far less support. Now that 67 percent of books in the United States are sold online — with Amazon alone scooping up 40 percent of print books and 80 percent of ebooks — authors are at the mercy of mysterious algorithms for discovery and promotion. Flying solo, once the great authorial hope, has turned out to be a dead end. Since 2008, Deresiewicz notes, 7,000,000 books have been self-published in the United States. “All but a tiny fraction reach essentially no readers and earn essentially no money.” Economic inequality in the visual arts is even more extreme. Only 10 percent of BFA, MFA, or PhD arts grads in the United States earn a “primary living” in the field. In 2018, “just twenty individuals accounted for 64 percent of total sales by living artists.”

¤

Where, then, are we headed? In one of the book’s most illuminating chapters, Deresiewicz draws on the work of cultural historians Larry Shiner (The Invention of Art: A Cultural History) and Howard Singerman (Art Subjects: Making Artists in the American University), among others, to place the current upheavals in artistic creation within a longer history — one showing how models of production tend to have short shelf lives, and why creators of the near future may have more in common with their more commercially driven, artisanal ancestors of the past.

In the early modern age, da Vinci, Shakespeare, and Bach were just that: artisans or craftspeople who apprenticed to learn traditional methods and strove, with the support of a patron, to become masters. They worked primarily for a commercial purpose and didn’t quibble over the distinction between art and craft.

Citing Raymond Williams, Deresiewicz pinpoints the birth of our present conception of art in the second half of the 18th century, the age of Romanticism and Revolution, when the phrase “fine arts” emerged. Rather than imitating tradition, artists now sought to express an inner truth, reflecting a wider embrace of individuality, rebellion, and youth — a trend that Deresiewicz connects to the rise of democracy and self-government. In the 19th century, the cultured bourgeois would come to revere the artist as a solitary, expressive genius, a bohemian prophet and visionary, culminating in the esoteric modernism of Picasso, Joyce, and Stravinsky. By this point, works of art gained in monetary value, but artists often sought to cultivate an air of independence from the market.

The artist as genius was displaced not by the emerging entrepreneurial model, but by the artist as professional, a model born in the culture boom that followed World War II. A host of new institutions — museums, theaters, orchestras, and universities — gave the creator a safe and steady perch. No longer a wandering bohemian seeking inspiration, the artist was now a credentialed professional striving, over many years, books, films, and albums, to perfect their technique. Typically a tenured professor, a staff writer, or a musician attached to a record label, the creator enjoyed commercial success but strove for critical acclaim.

Deresiewicz paints each of these paradigms with a broad brush — conceding that there are exceptional or overlapping figures and works in each period. But the sketch supports his larger point that in all three paradigms, artists were sheltered from the market by an external source. Now, he argues, we’re moving “unmistakably” into a new dispensation “marked by the final triumph of the market” and “the removal of the last vestiges of protection and mediation.” As the institutions supporting the professional model “disintegrate” — as professors become adjuncts, and publishers, galleries, and studios downsize or die off — a further aspect of all three models is also being left behind: the ability to devote the bulk of one’s time to art. For Deresiewicz, “[g]reat art, even good art, relies on the existence of individuals who are able to devote the lion’s share of their energy to producing it — in other words professionals.”

But conditions today favor the amateur. They favor “speed, brevity, and repetition; novelty but also recognizability.” Artists no longer have the time or the space to “cultivate an inner stillness or focus”; no time for the “slow build.” Creators need to cater to the market’s demand for constant and immediate engagement, for “flexibility, versatility, and extroversion.” As a result, “irony, complexity, and subtlety are out; the game is won by the brief, the bright, the loud, and the easily grasped.”

The change underway is clearly part of a larger cultural transition, involving more than just the arts. But Deresiewicz singles out Silicon Valley as a main culprit: he cites Taplin’s estimate that between 2004 and 2015, creators lost roughly $50 billion in annual revenue to the major tech platforms. They did so largely, Deresiewicz contends, by abetting piracy. Lawmakers could curtail this to some degree, by forcing Google, YouTube, and Facebook to allow creators to remove infringing content. But the big players continually resist because “[p]iracy is just too lucrative for them.” Ultimately, Deresiewicz argues that government should break up these monopolies; it should hinder their tendency to “flout the law, to dictate terms, to smother competition, to control debate, to shape legislation, to determine price.”

As with so many works of nonfiction that deliver bad news, the obligatory “what is to be done” segment at the end of the book fails to stir much enthusiasm. Deresiewicz asserts, correctly, “we’re not ‘going back’ to anything” — then urges creators to join advocacy groups like CreativeFuture or the Authors Guild, and to lobby for copyright and IP reform. Yet it’s hard to tell how the goal here is different from trying to turn back the clock. As Deresiewicz concedes, his proposals are “plainly incommensurate with the scale of the overall problem.”

They are indeed. The digital genie won’t be put back in the bottle. Big tech might be reined in on certain fronts, but it won’t be abolished or broken up. Nor can we expect labels, studios, publishers, or colleges to play the same supportive role they once did. Some see signs of hope, for some forms of creative endeavor, in the rise of paid subscriptions. But the evidence in The Death of the Artist is copious and inescapable on the most crucial fact about art in the present age that won’t change: when creative work is sold online, sales are radically unequal, following drastic power-law distributions. No matter how many people subscribe, no matter how aggressively Big Tech comes to be governed, selling art, music, or books in a digital world will always entail a lion’s share of the proceeds flowing into the pockets of a small few.

Deresiewicz shies away from putting it starkly, but the lesson is clear: a career on the older professional model — a gradual build to a moderate critical success — is only viable at this point for those who can support themselves for the long haul. A dwindling few will manage to do this by landing a perch at a magazine, a studio, a university. And so the model may not be entirely obsolete at present. But, aside from television, the book points to a future of creative production involving more work being done by amateurs, more done as a hobby, a passion project, a side gig — whatever that might mean for “great or even just good art.” Beyond that, who can say?

Robert Diab teaches law at Thompson Rivers University in British Columbia and writes about human rights and civil liberties.

The Next Decade Could Be Even Worse

A historian believes he has discovered iron laws that predict the rise and fall of societies. He has bad news. By Graeme Wood, December 2020 Issue, The Atlantic

Peter Turchin, one of the world’s experts on pine beetles and possibly also on human beings, met me reluctantly this summer on the campus of the University of Connecticut at Storrs, where he teaches. Like many people during the pandemic, he preferred to limit his human contact. He also doubted whether human contact would have much value anyway, when his mathematical models could already tell me everything I needed to know.

But he had to leave his office sometime. (“One way you know I am Russian is that I cannot think sitting down,” he told me. “I have to go for a walk.”) Neither of us had seen much of anyone since the pandemic had closed the country several months before. The campus was quiet. “A week ago, it was even more like a neutron bomb hit,” Turchin said. Animals were timidly reclaiming the campus, he said: squirrels, woodchucks, deer, even an occasional red-tailed hawk. During our walk, groundskeepers and a few kids on skateboards were the only other representatives of the human population in sight.

The year 2020 has been kind to Turchin, for many of the same reasons it has been hell for the rest of us. Cities on fire, elected leaders endorsing violence, homicides surging—to a normal American, these are apocalyptic signs. To Turchin, they indicate that his models, which incorporate thousands of years of data about human history, are working. (“Not all of human history,” he corrected me once. “Just the last 10,000 years.”) He has been warning for a decade that a few key social and political trends portend an “age of discord,” civil unrest and carnage worse than most Americans have experienced. In 2010, he predicted that the unrest would get serious around 2020, and that it wouldn’t let up until those social and political trends reversed. Havoc at the level of the late 1960s and early ’70s is the best-case scenario; all-out civil war is the worst.

The fundamental problems, he says, are a dark triad of social maladies: a bloated elite class, with too few elite jobs to go around; declining living standards among the general population; and a government that can’t cover its financial positions. His models, which track these factors in other societies across history, are too complicated to explain in a nontechnical publication. But they’ve succeeded in impressing writers for nontechnical publications, and have won him comparisons to other authors of “megahistories,” such as Jared Diamond and Yuval Noah Harari. The New York Times columnist Ross Douthat had once found Turchin’s historical modeling unpersuasive, but 2020 made him a believer: “At this point,” Douthat recently admitted on a podcast, “I feel like you have to pay a little more attention to him.”

Diamond and Harari aimed to describe the history of humanity. Turchin looks into a distant, science-fiction future for peers. In War and Peace and War (2006), his most accessible book, he likens himself to Hari Seldon, the “maverick mathematician” of Isaac Asimov’s Foundation series, who can foretell the rise and fall of empires. In those 10,000 years’ worth of data, Turchin believes he has found iron laws that dictate the fates of human societies.

The fate of our own society, he says, is not going to be pretty, at least in the near term. “It’s too late,” he told me as we passed Mirror Lake, which UConn’s website describes as a favorite place for students to “read, relax, or ride on the wooden swing.” The problems are deep and structural—not the type that the tedious process of democratic change can fix in time to forestall mayhem. Turchin likens America to a huge ship headed directly for an iceberg: “If you have a discussion among the crew about which way to turn, you will not turn in time, and you hit the iceberg directly.” The past 10 years or so have been discussion. That sickening crunch you now hear—steel twisting, rivets popping—is the sound of the ship hitting the iceberg.

“We are almost guaranteed” five hellish years, Turchin predicts, and likely a decade or more. The problem, he says, is that there are too many people like me. “You are ruling class,” he said, with no more rancor than if he had informed me that I had brown hair, or a slightly newer iPhone than his. Of the three factors driving social violence, Turchin stresses most heavily “elite overproduction”—the tendency of a society’s ruling classes to grow faster than the number of positions for their members to fill. One way for a ruling class to grow is biologically—think of Saudi Arabia, where princes and princesses are born faster than royal roles can be created for them. In the United States, elites overproduce themselves through economic and educational upward mobility: More and more people get rich, and more and more get educated. Neither of these sounds bad on its own. Don’t we want everyone to be rich and educated? The problems begin when money and Harvard degrees become like royal titles in Saudi Arabia. If lots of people have them, but only some have real power, the ones who don’t have power eventually turn on the ones who do.

In the United States, Turchin told me, you can see more and more aspirants fighting for a single job at, say, a prestigious law firm, or in an influential government sinecure, or (here it got personal) at a national magazine. Perhaps seeing the holes in my T-shirt, Turchin noted that a person can be part of an ideological elite rather than an economic one. (He doesn’t view himself as a member of either. A professor reaches at most a few hundred students, he told me. “You reach hundreds of thousands.”) Elite jobs do not multiply as fast as elites do. There are still only 100 Senate seats, but more people than ever have enough money or degrees to think they should be running the country. “You have a situation now where there are many more elites fighting for the same position, and some portion of them will convert to counter-elites,” Turchin said.

Donald Trump, for example, may appear elite (rich father, Wharton degree, gilded commodes), but Trumpism is a counter-elite movement. His government is packed with credentialed nobodies who were shut out of previous administrations, sometimes for good reasons and sometimes because the Groton-Yale establishment simply didn’t have any vacancies. Trump’s former adviser and chief strategist Steve Bannon, Turchin said, is a “paradigmatic example” of a counter-elite. He grew up working-class, went to Harvard Business School, and got rich as an investment banker and by owning a small stake in the syndication rights to Seinfeld. None of that translated to political power until he allied himself with the common people. “He was a counter-elite who used Trump to break through, to put the white working males back in charge,” Turchin said.

Elite overproduction creates counter-elites, and counter-elites look for allies among the commoners. If commoners’ living standards slip—not relative to the elites, but relative to what they had before—they accept the overtures of the counter-elites and start oiling the axles of their tumbrels. Commoners’ lives grow worse, and the few who try to pull themselves onto the elite lifeboat are pushed back into the water by those already aboard. The final trigger of impending collapse, Turchin says, tends to be state insolvency. At some point rising insecurity becomes expensive. The elites have to pacify unhappy citizens with handouts and freebies—and when these run out, they have to police dissent and oppress people. Eventually the state exhausts all short-term solutions, and what was heretofore a coherent civilization disintegrates.

Turchin’s prognostications would be easier to dismiss as barstool theorizing if the disintegration were not happening now, roughly as the Seer of Storrs foretold 10 years ago. If the next 10 years are as seismic as he says they will be, his insights will have to be accounted for by historians and social scientists—assuming, of course, that there are still universities left to employ such people.

Peter Turchin, photographed in Connecticut’s Natchaug State Forest in October 2020. The former ecologist seeks to apply mathematical rigor to the study of human history. (Malike Sidibe)

Turchin was born in 1957 in Obninsk, Russia, a city built by the Soviet state as a kind of nerd heaven, where scientists could collaborate and live together. His father, Valentin, was a physicist and political dissident, and his mother, Tatiana, had trained as a geologist. They moved to Moscow when he was 7 and in 1978 fled to New York as political refugees. There they quickly found a community that spoke the household language, which was science. Valentin taught at the City University of New York, and Peter studied biology at NYU and earned a zoology doctorate from Duke.

Turchin wrote a dissertation on the Mexican bean beetle, a cute, ladybuglike pest that feasts on legumes in areas between the United States and Guatemala. When Turchin began his research, in the early 1980s, ecology was evolving in a way that some fields already had. The old way to study bugs was to collect them and describe them: count their legs, measure their bellies, and pin them to pieces of particleboard for future reference. (Go to the Natural History Museum in London, and in the old storerooms you can still see the shelves of bell jars and cases of specimens.) In the ’70s, the Australian physicist Robert May had turned his attention to ecology and helped transform it into a mathematical science whose tools included supercomputers along with butterfly nets and bottle traps. Yet in the early days of his career, Turchin told me, “the majority of ecologists were still quite math-phobic.”

Turchin did, in fact, do fieldwork, but he contributed to ecology primarily by collecting and using data to model the dynamics of populations—for example, determining why a pine-beetle population might take over a forest, or why that same population might decline. (He also worked on moths, voles, and lemmings.)

In the late ’90s, disaster struck: Turchin realized that he knew everything he ever wanted to know about beetles. He compares himself to Thomasina Coverly, the girl genius in the Tom Stoppard play Arcadia, who obsessed about the life cycles of grouse and other creatures around her Derbyshire country house. Stoppard’s character had the disadvantage of living a century and a half before the development of chaos theory. “She gave up because it was just too complicated,” Turchin said. “I gave up because I solved the problem.”

Turchin published one final monograph, Complex Population Dynamics: A Theoretical/Empirical Synthesis (2003), then broke the news to his UConn colleagues that he would be saying a permanent sayonara to the field, although he would continue to draw a salary as a tenured professor in their department. (He no longer gets raises, but he told me he was already “at a comfortable level, and, you know, you don’t need so much money.”) “Usually a midlife crisis means you divorce your old wife and marry a graduate student,” Turchin said. “I divorced an old science and married a new one.”

One of his last papers appeared in the journal Oikos. “Does population ecology have general laws?” Turchin asked. Most ecologists said no: Populations have their own dynamics, and each situation is different. Pine beetles reproduce, run amok, and ravage a forest for pine-beetle reasons, but that does not mean mosquito or tick populations will rise and fall according to the same rhythms. Turchin suggested that “there are several very general law-like propositions” that could be applied to ecology. After its long adolescence of collecting and cataloging, ecology had enough data to describe these universal laws—and to stop pretending that every species had its own idiosyncrasies. “Ecologists know these laws and should call them laws,” he said. Turchin proposed, for example, that populations of organisms grow or decline exponentially, not linearly. This is why if you buy two guinea pigs, you will soon have not just a few more guinea pigs but a home—and then a neighborhood—full of the damn things (as long as you keep feeding them). This law is simple enough to be understood by a high-school math student, and it describes the fortunes of everything from ticks to starlings to camels. The laws Turchin applied to ecology—and his insistence on calling them laws—generated respectful controversy at the time. Now they are cited in textbooks.
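
For readers who want that law in symbols, here is a minimal, textbook statement of exponential growth; the guinea-pig growth rate and the doubling-time arithmetic are my own illustrative assumptions, not Turchin's notation or data.

```latex
% Illustrative only: the standard form of the exponential-growth law described
% above. The growth rate r below is an assumed number, not Turchin's.
\frac{dN}{dt} = rN
\quad\Longrightarrow\quad
N(t) = N_0 e^{rt},
\qquad t_{\text{double}} = \frac{\ln 2}{r}.
% With N_0 = 2 guinea pigs and, say, r = 0.05 per week, the herd doubles
% roughly every 14 weeks (ln 2 / 0.05 is about 13.9): 2, 4, 8, 16, ...
% rather than 2, 4, 6, 8, ...
```

Linear growth, by contrast, would add a fixed number of animals per week no matter how large the herd already is, which is exactly the pattern the law rules out.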

Having left ecology, Turchin began similar research that attempted to formulate general laws for a different animal species: human beings. He’d long had a hobbyist’s interest in history. But he also had a predator’s instinct to survey the savanna of human knowledge and pounce on the weakest prey. “All sciences go through this transition to mathematization,” Turchin told me. “When I had my midlife crisis, I was looking for a subject where I could help with this transition to a mathematized science. There was only one left, and that was history.”

Historians read books, letters, and other texts. Occasionally, if they are archaeologically inclined, they dig up potsherds and coins. But to Turchin, relying solely on these methods was the equivalent of studying bugs by pinning them to particleboard and counting their antennae. If the historians weren’t going to usher in a mathematical revolution themselves, he would storm their departments and do it for them.

“There is a longstanding debate among scientists and philosophers as to whether history has general laws,” he and a co-author wrote in Secular Cycles (2009). “A basic premise of our study is that historical societies can be studied with the same methods physicists and biologists used to study natural systems.” Turchin founded a journal, Cliodynamics, dedicated to “the search for general principles explaining the functioning and dynamics of historical societies.” (The term is his coinage; Clio is the muse of history.) He had already announced the discipline’s arrival in an article in Nature, where he likened historians reluctant to build general principles to his colleagues in biology “who care most for the private life of warblers.” “Let history continue to focus on the particular,” he wrote. Cliodynamics would be a new science. While historians dusted bell jars in the basement of the university, Turchin and his followers would be upstairs, answering the big questions.

To seed the journal’s research, Turchin masterminded a digital archive of historical and archaeological data. The coding of its records requires finesse, he told me, because (for example) the method of determining the size of the elite-aspirant class of medieval France might differ from the measure of the same class in the present-day United States. (For medieval France, a proxy is the membership in its noble class, which became glutted with second and third sons who had no castles or manors to rule over. One American proxy, Turchin says, is the number of lawyers.) But once the data are entered, after vetting by Turchin and specialists in the historical period under review, they offer quick and powerful suggestions about historical phenomena.

Historians of religion have long pondered the relationship between the rise of complex civilization and the belief in gods—especially “moralizing gods,” the kind who scold you for sinning. Last year, Turchin and a dozen co-authors mined the database (“records from 414 societies that span the past 10,000 years from 30 regions around the world, using 51 measures of social complexity and 4 measures of supernatural enforcement of morality”) to answer the question conclusively. They found that complex societies are more likely to have moralizing gods, but the gods tend to start their scolding after the societies get complex, not before. As the database expands, it will attempt to remove more questions from the realm of humanistic speculation and sock them away in a drawer marked answered.

One of Turchin’s most unwelcome conclusions is that complex societies arise through war. The effect of war is to reward communities that organize themselves to fight and survive, and it tends to wipe out ones that are simple and small-scale. “No one wants to accept that we live in the societies we do”—rich, complex ones with universities and museums and philosophy and art—“because of an ugly thing like war,” he said. But the data are clear: Darwinian processes select for complex societies because they kill off simpler ones. The notion that democracy finds its strength in its essential goodness and moral improvement over its rival systems is likewise fanciful. Instead, democratic societies flourish because they have a memory of being nearly obliterated by an external enemy. They avoided extinction only through collective action, and the memory of that collective action makes democratic politics easier to conduct in the present, Turchin said. “There is a very close correlation between adopting democratic institutions and having to fight a war for survival.”

Also unwelcome: the conclusion that civil unrest might soon be upon us, and might reach the point of shattering the country. In 2012, Turchin published an analysis of political violence in the United States, again starting with a database. He classified 1,590 incidents—riots, lynchings, any political event that killed at least one person—from 1780 to 2010. Some periods were placid and others bloody, with peaks of brutality in 1870, 1920, and 1970, a 50-year cycle. Turchin excludes the ultimate violent incident, the Civil War, as a “sui generis event.” The exclusion may seem suspicious, but to a statistician, “trimming outliers” is standard practice. Historians and journalists, by contrast, tend to focus on outliers—because they are interesting—and sometimes miss grander trends.

Certain aspects of this cyclical view require relearning portions of American history, with special attention paid to the numbers of elites. The industrialization of the North, starting in the mid-19th century, Turchin says, made huge numbers of people rich. The elite herd was culled during the Civil War, which killed off or impoverished the southern slaveholding class, and during Reconstruction, when America experienced a wave of assassinations of Republican politicians. (The most famous of these was the assassination of James A. Garfield, the 20th president of the United States, by a lawyer who had demanded but not received a political appointment.) It wasn’t until the Progressive reforms of the 1920s, and later the New Deal, that elite overproduction actually slowed, at least for a time.

This oscillation between violence and peace, with elite overproduction as the first horseman of the recurring American apocalypse, inspired Turchin’s 2020 prediction. In 2010, when Nature surveyed scientists about their predictions for the coming decade, most took the survey as an invitation to self-promote and rhapsodize, dreamily, about coming advances in their fields. Turchin retorted with his prophecy of doom and said that nothing short of fundamental change would stop another violent turn.

Turchin’s prescriptions are, as a whole, vague and unclassifiable. Some sound like ideas that might have come from Senator Elizabeth Warren—tax the elites until there are fewer of them—while others, such as a call to reduce immigration to keep wages high for American workers, resemble Trumpian protectionism. Other policies are simply heretical. He opposes credential-oriented higher education, for example, which he says is a way of mass-producing elites without also mass-producing elite jobs for them to occupy. Architects of such policies, he told me, are “creating surplus elites, and some become counter-elites.” A smarter approach would be to keep the elite numbers small, and the real wages of the general population on a constant rise.

How to do that? Turchin says he doesn’t really know, and it isn’t his job to know. “I don’t really think in terms of specific policy,” he told me. “We need to stop the runaway process of elite overproduction, but I don’t know what will work to do that, and nobody else does. Do you increase taxation? Raise the minimum wage? Universal basic income?” He conceded that each of these possibilities would have unpredictable effects. He recalled a story he’d heard back when he was still an ecologist: The Forest Service had once implemented a plan to reduce the population of bark beetles with pesticide—only to find that the pesticide killed off the beetles’ predators even more effectively than it killed the beetles. The intervention resulted in more beetles than before. The lesson, he said, was to practice “adaptive management,” changing and modulating your approach as you go.

Eventually, Turchin hopes, our understanding of historical dynamics will mature to the point that no government will make policy without reflecting on whether it is hurtling toward a mathematically preordained disaster. He says he could imagine an Asimovian agency that keeps tabs on leading indicators and advises accordingly. It would be like the Federal Reserve, but instead of monitoring inflation and controlling monetary supply, it would be tasked with averting total civilizational collapse.

Historians have not, as a whole, accepted Turchin’s terms of surrender graciously. Since at least the 19th century, the discipline has embraced the idea that history is irreducibly complex, and by now most historians believe that the diversity of human activity will foil any attempt to come up with general laws, especially predictive ones. (As Jo Guldi, a historian at Southern Methodist University, put it to me, “Some historians regard Turchin the way astronomers regard Nostradamus.”) Instead, each historical event must be lovingly described, and its idiosyncrasies understood to be limited in relevance to other events. The idea that one thing causes another, and that the causal pattern can tell you about sequences of events in another place or century, is foreign territory.

One might even say that what defines history as a humanistic enterprise is the belief that it is not governed by scientific laws—that the working parts of human societies are not like billiard balls, which, if arranged at certain angles and struck with a certain amount of force, will invariably crack just so and roll toward a corner pocket of war, or a side pocket of peace. Turchin counters that he has heard claims of irreducible complexity before, and that steady application of the scientific method has succeeded in managing that complexity. Consider, he says, the concept of temperature—something so obviously quantifiable now that we laugh at the idea that it’s too vague to measure. “Back before people knew what temperature was, the best thing you could do is to say you’re hot or cold,” Turchin told me. The concept depended on many factors: wind, humidity, ordinary human differences in perception. Now we have thermometers. Turchin wants to invent a thermometer for human societies that will measure when they are likely to boil over into war.

One social scientist who can speak to Turchin in his own mathematical argot is Dingxin Zhao, a sociology professor at the University of Chicago who is—incredibly—also a former mathematical ecologist. (He earned a doctorate modeling carrot-weevil population dynamics before earning a second doctorate in Chinese political sociology.) “I came from a natural-science background,” Zhao told me, “and in a way I am sympathetic to Turchin. If you come to social science from natural sciences, you have a powerful way of looking at the world. But you may also make big mistakes.”

Zhao said that human beings are just much more complicated than bugs. “Biological species don’t strategize in a very flexible way,” he told me. After millennia of evolutionary R&D, a woodpecker will come up with ingenious ways to stick its beak into a tree in search of food. It might even have social characteristics—an alpha woodpecker might strong-wing beta woodpeckers into giving it first dibs on the tastiest termites. But humans are much wilier social creatures, Zhao said. A woodpecker will eat a termite, but it “will not explain that he is doing so because it is his divine right.” Humans pull ideological power moves like this all the time, Zhao said, and to understand “the decisions of a Donald Trump, or a Xi Jinping,” a natural scientist has to incorporate the myriad complexities of human strategy, emotion, and belief. “I made that change,” Zhao told me, “and Peter Turchin has not.”

Turchin is nonetheless filling a historiographical niche left empty by academic historians with allergies not just to science but to a wide-angle view of the past. He places himself in a Russian tradition prone to thinking sweeping, Tolstoyan thoughts about the path of history. By comparison, American historians mostly look like micro-historians. Few would dare to write a history of the United States, let alone one of human civilization. Turchin’s approach is also Russian, or post-Soviet, in its rejection of the Marxist theory of historical progress that had been the official ideology of the Soviet state. When the U.S.S.R. collapsed, so too did the requirement that historical writing acknowledge international communism as the condition toward which the arc of history was bending. Turchin dropped ideology altogether, he says: Rather than bending toward progress, the arc in his view bends all the way back on itself, in a never-ending loop of boom and bust. This puts him at odds with American historians, many of whom harbor an unspoken faith that liberal democracy is the end state of all history.

Writing history in this sweeping, cyclical way is easier if you are trained outside the field. “If you look at who is doing these megahistories, more often than not, it’s not actual historians,” Walter Scheidel, an actual historian at Stanford, told me. (Scheidel, whose books span millennia, takes Turchin’s work seriously and has even co-written a paper with him.) Instead they come from scientific fields where these taboos do not dominate. The genre’s most famous book, Guns, Germs, and Steel (1997), beheld 13,000 years of human history in a single volume. Its author, Jared Diamond, spent the first half of his career as one of the world’s foremost experts on the physiology of the gallbladder. Steven Pinker, a cognitive psychologist who studies how children acquire parts of speech, has written a megahistory about the decline of violence across thousands of years, and about human flourishing since the Enlightenment. Most historians I asked about these men—and for some reason megahistory is nearly always a male pursuit—used terms like laughingstock and patently tendentious to describe them.

Pinker retorts that historians are resentful of the attention “disciplinary carpetbaggers” like himself have received for applying scientific methods to the humanities and coming up with conclusions that had eluded the old methods. He is skeptical of Turchin’s claims about historical cycles, but he believes in data-driven historical inquiry. “Given the noisiness of human behavior and the prevalence of cognitive biases, it’s easy to delude oneself about a historical period or trend by picking whichever event suits one’s narrative,” he says. The only answer is to use large data sets. Pinker thanks traditional historians for their work collating these data sets; he told me in an email that they “deserve extraordinary admiration for their original research (‘brushing the mouse shit off moldy court records in the basement of town halls,’ as one historian put it to me).” He calls not for surrender but for a truce. “There’s no reason that traditional history and data science can’t merge into a cooperative enterprise,” Pinker wrote. “Knowing stuff is hard; we need to use every available tool.”

Guldi, the Southern Methodist University professor, is one scholar who has embraced tools previously scorned by historians. She is a pioneer of data-driven history that considers timescales beyond a human lifetime. Her primary technique is the mining of texts—for example, sifting through the millions and millions of words captured in parliamentary debate in order to understand the history of land use in the final century of the British empire. Guldi may seem a potential recruit to cliodynamics, but her approach to data sets is grounded in the traditional methods of the humanities. She counts the frequency of words, rather than trying to find ways to compare big, fuzzy categories among civilizations. Turchin’s conclusions are only as good as his databases, she told me, and any database that tries to code something as complex as who constitutes a society’s elites—then tries to make like-to-like comparisons across millennia and oceans—will meet with skepticism from traditional historians, who deny that the subject to which they have devoted their lives can be expressed in Excel format. Turchin’s data are also limited to big-picture characteristics observed over 10,000 years, or about 200 lifetimes. By scientific standards, a sample size of 200 is small, even if it is all humanity has.
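For a sense of how modest the mechanics of this kind of counting are, a few lines of Python suffice to tally how often chosen words appear in a folder of debate transcripts. This is a generic illustration, not Guldi’s actual pipeline; the directory layout and search terms are invented:

```python
# Illustrative sketch of word-frequency counting over a text corpus.
# The file layout and the search terms are placeholders, not real data.
import re
from collections import Counter
from pathlib import Path

def term_counts(corpus_dir, terms):
    """Count occurrences of each term in every .txt file under corpus_dir."""
    counts = {term: Counter() for term in terms}
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        tally = Counter(re.findall(r"[a-z']+", text))
        for term in terms:
            counts[term][path.stem] = tally[term]
    return counts

if __name__ == "__main__":
    # e.g. one transcript per year: debates/1890.txt, debates/1891.txt, ...
    results = term_counts("debates", ["enclosure", "allotment", "eviction"])
    for term, by_file in results.items():
        print(term, dict(sorted(by_file.items())))
```

The modesty of the method is the point: the counts stay tied to the words actually spoken, rather than to an analyst’s judgment about who counts as an elite.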

Yet 200 lifetimes is at least more ambitious than the average historical purview of only one. And the reward for that ambition—in addition to the bragging rights for having potentially explained everything that has ever happened to human beings—includes something every writer wants: an audience. Thinking small rarely gets you quoted in The New York Times. Turchin has not yet attracted the mass audiences of a Diamond, Pinker, or Harari. But he has lured connoisseurs of political catastrophe, journalists and pundits looking for big answers to pressing questions, and true believers in the power of science to conquer uncertainty and improve the world. He has certainly outsold most beetle experts.

If he is right, it is hard to see how history will avoid assimilating his insights—if it can avoid being abolished by them. Privately, some historians have told me they consider the tools he uses powerful, if a little crude. Cliodynamics is now on a long list of methods that arrived on the scene promising to revolutionize history. Many were fads, but some survived that stage to take their rightful place in an expanding historiographical tool kit. Turchin’s methods have already shown their power. Cliodynamics offers scientific hypotheses, and human history will give us more and more opportunities to check its predictions—revealing whether Peter Turchin is a Hari Seldon or a mere Nostradamus. For my own sake, there are few thinkers whom I am more eager to see proved wrong.


This article appears in the December 2020 print edition with the headline “The Historian Who Sees the Future.” It was first published online on November 12, 2020.

Graeme Wood is a staff writer at The Atlantic and the author of The Way of the Strangers: Encounters With the Islamic State.

People Aren’t Meant to Talk This Much

Breaking up social-media companies is one way to fix them. Shutting their users up is a better one. By Ian Bogost

A photo collage of one man talking in another's ear
GraphicaArtis / Getty

8:00 AM ET

Your social life has a biological limit: 150. That’s the number—Dunbar’s number, proposed by the British psychologist Robin Dunbar three decades ago—of people with whom you can have meaningful relationships.

What makes a relationship meaningful? Dunbar gave The New York Times a shorthand answer: “those people you know well enough to greet without feeling awkward if you ran into them in an airport lounge”—a take that may accidentally reveal the substantial spoils of having produced a predominant psychological theory. The construct encompasses multiple “layers” of intimacy in relationships. We can reasonably expect to develop up to 150 productive bonds, but we have our most intimate, and therefore most connected, relationships with only about five to 15 closest friends. We can maintain much larger networks, but only by compromising the quality or sincerity of those connections; most people operate in much smaller social circles.

Some critics have questioned Dunbar’s conclusion, calling it deterministic and even magical. Still, the general idea is intuitive, and it has stuck. And yet, the dominant container for modern social life—the social network—does anything but respect Dunbar’s premise. Online life is all about maximizing the quantity of connections without much concern for their quality. On the internet, a meaningful relationship is one that might offer diversion or utility, not one in which you divulge secrets and offer support.

A lot is wrong with the internet, but much of it boils down to this one problem: We are all constantly talking to one another. Take that in every sense. Before online tools, we talked less frequently, and with fewer people. The average person had a handful of conversations a day, and the biggest group she spoke in front of was maybe a wedding reception or a company meeting, a few hundred people at most. Maybe her statement would be recorded, but there were few mechanisms for it to be amplified and spread around the world, far beyond its original context.

Online media gives the everyperson access to channels of communication previously reserved for Big Business. Starting with the world wide web in the 1990s and continuing into user-generated content of the aughts and social media of the 2010s, control over public discourse has moved from media organizations, governments, and corporations to average citizens. Finally, people could publish writing, images, videos, and other material without first getting the endorsement of publishers or broadcasters. Ideas spread freely beyond borders.

And we also received a toxic dump of garbage. The ease with which connections can be made—along with the way that, on social media, close friends look the same as acquaintances or even strangers—means any post can successfully appeal to people’s worst fears, transforming ordinary folks into radicals. That’s what YouTube did to the Christchurch shooter, what conspiracy theorists preceding QAnon did to the Pizzagaters, what Trumpists did to the Capitol rioters. And, closer to the ground, it’s how random Facebook messages scam your mother, how ill-thought tweets ruin lives, how social media has made life in general brittle and unforgiving.

It’s long past time to question a fundamental premise of online life: What if people shouldn’t be able to say so much, and to so many, so often?


The process of giving someone a direct relationship with anyone else is sometimes called disintermediation, because it supposedly removes the intermediaries sitting between two parties. But the disintermediation of social media didn’t really put the power in the hands of individuals. Instead, it replaced the old intermediaries with new ones: Google, Facebook, Twitter, many others. These are less technology companies than data companies: They suck up information when people search, post, click, and reply, and use that information to sell advertising that targets users by ever-narrower demographic, behavioral, or commercial categories. For that reason, encouraging people to “speak” online as much as possible is in the tech giants’ best interest. Internet companies call this “engagement.”

The gospel of engagement duped people into mistaking use of the software for carrying out meaningful or even successful conversations. A bitter tweet that produces chaotic acrimony somehow came to be construed as successful online speech rather than a sign of its obvious failure. All those people posting so often seemed to prove that the plan was working. Just look at all the speech!

Thus, the quantity of material being produced, and the size of the audiences subjected to it, became unalloyed goods. The past several years of debate over online speech affirm this state of affairs. First, the platforms invented metrics to encourage engagement, such as like and share counts. Popularity and reach, of obvious value to the platforms, became social values too. Even on the level of the influencer, the media personality, or the online mob, scale produced power and influence and wealth, or the fantasy thereof.

The capacity to reach an audience some of the time became contorted into the right to reach every audience all of the time. The rhetoric about social media started to assume an absolute liberty always to be heard; any effort to constrain or limit users’ ability to spread ideas was cast as nothing less than censorship. But there is no reason to believe that everyone should have immediate and constant access to everyone else in the world at all times.

My colleague Adrienne LaFrance has named the fundamental assumption, and danger, of social media megascale: “not just a very large user base, but a tremendous one, unprecedented in size.” Technology platforms such as Facebook assume that they deserve a user base measured in the billions of people—and then excuse their misdeeds by noting that effectively controlling such an unthinkably large population is impossible. But technology users, including Donald Trump and your neighbors, also assume that they can and should taste the spoils of megascale. The more posts, the more followers, the more likes, the more reach, the better. This is how bad information spreads, degrading engagement into calamity the more attention it accrues. This isn’t a side effect of social media’s misuse, but the expected outcome of its use. As the media scholar Siva Vaidhyanathan puts it, the problem with Facebook is Facebook.

So far, controlling that tidal wave of content has been seen as a task to be carried out after the fact. Companies such as Facebook employ (or outsource) an army of content moderators, whose job involves flagging objectionable material for suppression. That job is so terrible that it amounts to mental and emotional trauma. And even then, the whole affair is just whack-a-mole, stamping out one offending instance only for it to reappear elsewhere, perhaps moments later. Determined to solve computing’s problems with more computing, social-media companies are also trying to use automated methods to squelch or limit posts, but too many people post too many variations, and AI isn’t sufficiently discerning for the techniques to work effectively.

Regulatory intervention, if it ever comes, also won’t solve the problem. No proposal for breaking up Facebook would address the scale issue; the most likely scenario would just split Instagram and WhatsApp off from their parent. These entities are already global, managing billions of users via a single service. You wouldn’t get WhatsApp Pakistan, baby-Bell style. And even if you did, the scale of access people have to one another’s attention within those larger communities would still remain massive. Infinite free posts, messages, and calls have made communication easier but also changed its nature—connecting people to larger and larger audiences more and more often.

Wouldn’t it just be better if fewer people posted less stuff, less frequently, and if smaller audiences saw it?


Limiting social media may seem impossible, or tautological. But, in fact, these companies have long embraced constraints. Tweets can be 280 characters and no more. YouTube videos for most users cannot exceed 15 minutes—before 2010, the limit was 10, helping establish the short-form nature of online video. Later, Vine pushed brevity to its logical extreme, limiting videos to six seconds in length. Snapchat bet its success on ephemerality, with posts that vanish after a brief period rather than persist forever.

Even the capacity to respond to a Facebook post, Twitter DM, Slack message, or other online matter with likes, emotes, or emoji constrains what people can do when they use those services. Those constraints often feel curious or even disturbing, but winnowing the infinity of possible responses down to a few shorthands creates boundaries.

Yet despite the many material limitations that make popular online tools what they are, few platforms ever limit the volume of posts or the reach of users in a clear and legible way.

What if access and reach were limited too: mechanically rather than juridically, by default? What if, for example, you could post to Facebook only once a day, or week, or month? Or only to a certain number of people? Or what if, after an hour or a day, the post expired, Snapchat style? Or, after a certain number of views, or when it reached a certain geographic distance from its origins, it self-destructed? That wouldn’t stop bad actors from being bad, but it would reduce their ability to exude that badness into the public sphere.

Such a constraint would be technically trivial to implement. And it’s not entirely without precedent. On LinkedIn, you can amass as large a professional network as you’d like, but your profile stops counting after 500 contacts, which purportedly nudges users to focus on the quality and use of their contacts rather than their number. Nextdoor requires members to prove that they live in a particular neighborhood to see and post to that community (admittedly, this particular constraint doesn’t seem to have stopped bad behavior on its own). And I can configure a post to be shown only to a specific friend group on Facebook, or prevent strangers from responding to tweets or Instagram posts. But these boundaries are porous, and opt-in.
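How trivial? A sketch of the kind of mechanical limit described above, in Python, with every threshold (one post a day, an hour’s lifetime, a 280-view cap) chosen arbitrarily for illustration rather than drawn from any real platform:

```python
# A minimal sketch of "mechanical" posting limits: one post per user per day,
# an expiry window, and a view cap. The numbers are arbitrary placeholders,
# not any platform's actual rules.
import time
from dataclasses import dataclass, field

POST_INTERVAL = 24 * 3600   # one post per user per day
POST_LIFETIME = 3600        # each post disappears after an hour
MAX_VIEWS = 280             # ...or after 280 views, whichever comes first

@dataclass
class Post:
    author: str
    body: str
    created: float = field(default_factory=time.time)
    views: int = 0

class Feed:
    def __init__(self):
        self.posts = []
        self.last_posted = {}   # author -> timestamp of last post

    def publish(self, author, body, now=None):
        now = time.time() if now is None else now
        if now - self.last_posted.get(author, float("-inf")) < POST_INTERVAL:
            raise PermissionError("rate limit: one post per day")
        self.last_posted[author] = now
        self.posts.append(Post(author, body, created=now))

    def read(self, now=None):
        now = time.time() if now is None else now
        # Drop posts that are too old or have been seen too many times.
        self.posts = [p for p in self.posts
                      if now - p.created < POST_LIFETIME and p.views < MAX_VIEWS]
        for p in self.posts:
            p.views += 1
        return list(self.posts)
```

The particular numbers matter less than where the check lives: in the posting path itself, applied to everyone by default rather than offered as an opt-in setting.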

A better example of a limited network exists, one that managed to solve many of the problems of the social web through design but that didn’t survive long enough to see its logic pay off. It was called Google+.


In 2010, Paul Adams led a social-research team at Google, where he hoped to create something that would help people maintain and build relationships online. He and his team tried to translate what sociologists already knew about human relationships into technology. Among the most important of those ideas: People have relatively narrow social relationships. “We talk to the same, small group of people again and again,” Adams wrote in his 2012 book, Grouped. More specifically, people tend to have the most conversations with just their five closest ties. Unsurprisingly, these strong ties, as sociologists call them, are also the people who hold the most influence over us.

This understanding of strong ties was central to Google+. It allowed users to organize people into groups, called circles, around which interactions were oriented. That forced people to consider the similarities and differences among the people in their networks, rather than treating them all as undifferentiated contacts or followers. It makes sense: One’s family is different from one’s work colleagues, who are different from one’s poker partners or church members.
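A toy version of the idea, for concreteness: contacts live in named circles, and a post’s audience is whatever union of circles its author picks. The class and method names below are illustrative, not a reconstruction of Google+’s actual code:

```python
# A toy model of "circles": contacts grouped by relationship, with posts
# addressed to chosen circles rather than to an undifferentiated follower list.
from collections import defaultdict

class Circles:
    def __init__(self):
        self.circles = defaultdict(set)   # circle name -> set of contacts

    def add(self, circle, contact):
        self.circles[circle].add(contact)

    def audience(self, *circle_names):
        """Union of the named circles: the only people who see a post."""
        people = set()
        for name in circle_names:
            people |= self.circles[name]
        return people

me = Circles()
me.add("family", "mom")
me.add("work", "alice")
me.add("poker", "bob")

# A vacation photo goes to family and poker friends, not to colleagues.
print(me.audience("family", "poker"))   # {'mom', 'bob'}
```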

Adams also wanted to heed a lesson from the sociologist Mark Granovetter: As people shift their attention from strong to weak ties, the resulting connections become more dangerous. Strong ties are strong because their reliability has been affirmed over time. The input or information one might receive from a family member or co-worker is both more trusted and more contextualized. By contrast, the things you hear a random person say at the store (or on the internet) are—or should be—less intrinsically trustworthy. But weak ties also produce more novelty, precisely because they carry messages people might not have seen before. The evolution of a weak tie to a strong one is supposed to take place over an extended time, as an individual tests and considers the relationship and decides how to incorporate it into their life. As Granovetter put it in his 1973 paper on the subject, strong ties don’t bridge between two different social groups. New connections require weak ties.

Weak ties can lead to new opportunities, ideas, and perspectives—this feature characterizes their power. People tend to find new job opportunities and mates via weak ties, for example. But online, we encounter a lot more weak ties than ever before, and those untrusted individuals tend to seem similar to reliable ones—every post on Facebook or Twitter looks the same, more or less. Trusting weak ties becomes easier, which allows influences that were previously fringe to become central, or influences that are central to reinforce themselves. Granovetter anticipated this problem back in the early ’70s: “Treating only the strength of ties,” he wrote, “ignores … all the important issues regarding their content.”

Adams’s book feels like a prediction of everything that would go wrong with the internet. Ideas spread easily, Adams writes, when they get put in front of lots of people who are easy to influence. And in turn, those people become vectors for spreading them to other adopters, which is much quicker when masses of easily influenced people are so well connected—as they are on social media. When people who take longer to adopt ideas eventually do so, Adams concludes, it’s “because they were continuously exposed to so many of their connections adopting.” The lower the threshold for trust and spread, the more the ideas produced by any random person circulate unfettered. Worse, people share the most emotionally arousing ideas, stories, images, and other materials.

You know how this story ends. Facebook built its services to maximize the benefit of weak-tie spread to achieve megascale. Adams left Google for Facebook in early 2011, before Google+ had even launched. Eight years later, Google unceremoniously shut down the service.


Up until now, social reform online has been seen either as a problem for even more technology to solve, or as one that demands regulatory intervention. Either option moves at a glacial pace. Facebook, Google, and others attempt to counter misinformation and acrimony with the same machine learning that causes them. Critics call on the Department of Justice to break up these companies’ power, or on Congress to issue regulations to limit it. Facebook set up an oversight board, fusing its own brand of technological solutionism with its own flavor of juridical oversight. Meanwhile, the misinformation continues to flow, and the social environment continues to rot.

Imposing more, and more meaningful, constraints on internet services, by contrast, is both aesthetically and legally compatible with the form and business of the technology industry. To constrain the frequency of speech, the size or composition of an audience, the spread of any single speech act, or the life span of such posts is entirely accordant with the creative and technical underpinning of computational media. It should be shocking that you think nothing of recomposing an idea so it fits in 280 characters but would never accept that the resulting message might be limited to 280 readers or 280 minutes. And yet, nothing about the latter is fundamentally different from the former.

Regulatory interventions have gotten nowhere because they fail to engage with the material conditions of megascale, which makes policing all those people and all that content simply too hard. Megascale has also divided the public over who ought to have influence. Any differential in perceived audience or reach can be cast as bias or censorship. The tech companies can’t really explain why such differences arise, because they are hidden inside layers of apparatus, nicknamed The Algorithm. In turn, the algorithm becomes an easy target for blame, censure, or reprisal. And in the interim, the machinery of megascale churns on, further eroding any trust or reliability in information of any kind—including understandings of how social software currently operates or what it might do differently.

Conversely, design constraints on audience and reach that apply equally to everyone offer a means to enforce a suppression of contact, communication, and spread. To be effective, those constraints must be clear and transparent—that’s what makes Twitter’s 280-character format legible and comprehensible. They could also be regulated, implemented, and verified—at least more easily than pressuring companies to better moderate content or to make their algorithms more transparent. Finally, imposing hard limits on online social behavior would embrace the skills and strengths of computational design, rather than attempting to dismantle them.

This would be a painful step to take, because everyone has become accustomed to megascale. Technology companies would surely fight any effort to reduce growth or engagement. Private citizens would bristle at new and unfamiliar limitations. But the alternative, living amid the ever-rising waste spewed by megascale, is unsustainable. If megascale is the problem, downscale has to be the solution, somehow. That goal is hardly easy, but it is feasible, which is more than some competing answers have going for them. Just imagine how much quieter it would be online if it weren’t so loud.

Ian Bogost is a contributing writer at The Atlantic and the Director of the Program in Film & Media Studies at Washington University in St. Louis. His latest book is Play Anything.


The Metaverse Is Bad

It is not a world in a headset but a fantasy of power. By Ian Bogost

Looking out through eye holes at a synthetic landscape
Ali Kahfi / Getty; Corbis / Getty; The Atlantic

OCTOBER 21, 2021

In science fiction, the end of the world is a tidy affair. Climate collapse or an alien invasion drives humanity to flee on cosmic arks, or live inside a simulation. Real-life apocalypse is more ambiguous. It happens slowly, and there’s no way of knowing when the Earth is really doomed. To depart our world, under these conditions, is the same as giving up on it.

And yet, some of your wealthiest fellow earthlings would like to do exactly that. Elon Musk, Jeff Bezos, and other purveyors of private space travel imagine a celestial paradise where we can thrive as a “multiplanet species.” That’s the dream of films such as Interstellar and Wall-E. Now comes news that Mark Zuckerberg has embraced the premise of The Matrix, that we can plug ourselves into a big computer and persist as flesh husks while reality decays around us. According to a report this week from The Verge, the Facebook chief may soon rebrand his company to mark its change in focus from social media to “the metaverse.”

In a narrow sense, this phrase refers to internet-connected glasses. More broadly, though, it’s a fantasy of power and control.

Beyond science fiction, metaverse means almost nothing. Even within sci-fi, it doesn’t mean much. No article on this topic would be complete without a mention of the 1992 novel Snow Crash, in which Neal Stephenson coined the term. But that book offers scarce detail about the actual operation of the alternate-reality dreamworld it posits. A facility of computers in the desert runs the metaverse, and the novel’s characters hang out inside the simulation because their real lives are boring or difficult. No such entity exists today, of course, just as no real product even approximates the rough idea—drawn from Stephenson or William Gibson or Philip K. Dick—of having people jack into a virtual, parallel reality with goggles or brain implants. Ironically, these writers clearly meant to warn us off those dreams, rather than inspire them.

In the simplest explanation, the metaverse is just a sexy, aspirational name for some kind of virtual or augmented-reality play. Facebook owns a company called Oculus, which manufactures and sells VR computers and headsets. Oculus is also making a 3-D, virtual platform called Horizon—think Minecraft with avatars, but without the blocks. Facebook, Apple, and others have also invested heavily in augmented reality, a kind of computer graphics that uses goggles to overlay interactive elements onto a live view of the world. So far, the most viable applications of VR and AR can be found in medicine, architecture, and manufacturing, but dreams of their widespread consumer appeal persist. If those dreams are realized, you’ll probably end up buying crap and yelling at people through a head-mounted display, instead of through your smartphone. Sure, calling that a metaverse probably sounds better. Just like “the cloud” sounds better than, you know, a server farm where people and companies rent disk space.

It’s absurd but telling that the inspiration for the metaverse was meant as satire. Just as OZY Media misinterprets Shelley, so Zuck and crew misconstrue metaverse fiction. In Snow Crash, as in other cyberpunk stories (including the 1995 Kathryn Bigelow film Strange Days), the metaverse comes across as intrinsically dangerous. The book’s title refers to a digital drug for denizens of the metaverse, with harmful neurological effects that extend outside it.

That danger hasn’t survived the metaverse’s translation into contemporary technological fantasy. Instead, the concept appeals to tech magnates because it connects the rather prosaic reality of technologized consumer attention to a science-fictional dream of escape. You can see why Zuckerberg, plagued by months and years of criticism of his decidedly low-fidelity social networks and apps, might find an escape hatch appealing. The metaverse offers a way to leave behind worldly irritants and relocate to greener pastures. This is the rationale of a strip miner or a private-equity partner: Take what you can, move on, and don’t look back. No wonder fictional worlds with metaverses are always trashed.

The fantasy is bigger, though. CEOs in tech know that billions of people still live much of their life beyond computer screens. Those people buy automobiles and grow herb gardens. They copulate and blow autumn leaves. Real life still seeps through the seams of computers. The executives know that no company, however big, can capture all the world. But there is an alternative: If only the public could be persuaded to abandon atoms for bits, the material for the symbolic, then people would have to lease virtualized renditions of all the things that haven’t yet been pulled online. Slowly, eventually, the uncontrollable material world falls away, leaving in its stead only the pristine—but monetizable—virtual one.

The technical feasibility of such an outcome is slight, but don’t let that bother you. More important is the ambition it represents for tycoons who have already captured so much of the global population’s attention: Even as a hypothetical, a metaverse solves all the problems of physics, business, politics, and everything else. In the metaverse, every home can have a dishwasher. Soft goods such as clothing and art (and receipts for JPEGs) can be manufactured at no cost and exchanged for nothing, save the transaction fees charged by your metaverse provider. A metaverse also assumes complete interoperability. It offers a path toward total consolidation, where one entity sells you entertainment, social connection, trousers, antifreeze, and everything in between. If realized, the metaverse would become the ultimate company town, a megascale Amazon that rolls up raw materials, supply chains, manufacturing, distribution, and use and all its related discourse into one single service. It is the black hole of consumption.

Postmodern critics celebrated and lamented metadiscursivity—the tendency to talk about talking about things as a substitute for talking about them. Then “going meta” became a power move online, a way of getting atop and over a person, product, or idea in a futile attempt to tame it. In an era of infinite, free connectivity, meaning became so plentiful that it began to seem suspect. Going meta short-circuited the need to contend with meaning in the first place, replacing it with a tower of deferred meanings, each one-upping the last’s claim to prominence. Memes meme memes, then appear on T-shirts, then recur as Instagrammed latte art.

As I write this, a rumor about the rumor about Facebook’s metaversal rebranding is circulating: Bloomberg reported yesterday that the company already owns meta.com, meta.org, and perhaps dozens of other meta-names, domains, handles, and properties. What better way to go meta on going meta than to rename the company Meta? (Later in the day, the technology writer Casey Newton reported that Zuckerberg is “now leaning away from Meta as the name.”)

Despite its slipperiness, going meta has another, firmer meaning. In Greek, the prefix meta (μετα) refers to transcendence. About-itselfness, the way ironists and epistemologists use the term today, offers one interpretation. But meta- also has a more prosaic meaning, referring to something above or beyond something else. Superiority, power, and conquest come along for the ride: A 1928 book on eugenics is titled Metanthropos, or the Body of the Future. A metaverse is a universe, but better. More superior. An überversum for an übermensch. The metaverse, the superman, the private vessel of trillionaire intergalactic escape, the ark on the dark sea of ice melt: To abandon a real and present life for a hypothetical new one means giving up on everything else in the hopes of saving oneself. That’s hubris, probably. But also, to dream of immortality is to admit weakness—a fear that, like all things, you too might end.

Ian Bogost is a contributing writer at The Atlantic and the Director of the Program in Film & Media Studies at Washington University in St. Louis. His latest book is Play Anything.