The story we have been telling ourselves about our origins is wrong, and perpetuates the idea of inevitable social inequality. David Graeber and David Wengrow ask why the myth of ‘agricultural revolution’ remains so persistent, and argue that there is a whole lot more we can learn from our ancestors.
Quite independently, archaeological evidence suggests that in the highly seasonal environments of the last Ice Age, our remote ancestors were behaving in broadly similar ways: shifting back and forth between alternative social arrangements, permitting the rise of authoritarian structures during certain times of year, on the proviso that they could not last; on the understanding that no particular social order was ever fixed or immutable. Within the same population, one could live sometimes in what looks, from a distance, like a band, sometimes a tribe, and sometimes a society with many of the features we now identify with states. With such institutional flexibility comes the capacity to step outside the boundaries of any given social structure and reflect; to both make and unmake the political worlds we live in. Having lived so much of our history moving back and forth between different political systems, ‘how did we get so stuck?’
1. In the beginning was the word
For centuries, we have been telling ourselves a simple story about the origins of social inequality. For most of their history, humans lived in tiny egalitarian bands of hunter-gatherers. Then came farming, which brought with it private property, and then the rise of cities which meant the emergence of civilization properly speaking. Civilization meant many bad things (wars, taxes, bureaucracy, patriarchy, slavery…) but also made possible written literature, science, philosophy, and most other great human achievements.
Almost everyone knows this story in its broadest outlines. Since at least the days of Jean-Jacques Rousseau, it has framed what we think the overall shape and direction of human history to be. This is important because the narrative also defines our sense of political possibility. Most see civilization, hence inequality, as a tragic necessity. Some dream of returning to a past utopia, of finding an industrial equivalent to ‘primitive communism’, or even, in extreme cases, of destroying everything, and going back to being foragers again. But no one challenges the basic structure of the story.
There is a fundamental problem with this narrative. It isn’t true.
Overwhelming evidence from archaeology, anthropology, and kindred disciplines is beginning to give us a fairly clear idea of what the last 40,000 years of human history really looked like, and in almost no way does it resemble the conventional narrative.
- Our species did not, in fact, spend most of its history in tiny bands and there is no reason to believe that small-scale groups are especially likely to be egalitarian, or that large ones must necessarily have kings, presidents, or bureaucracies. These are just prejudices stated as facts.
- Agriculture did not mark an irreversible threshold in social evolution.
- The first cities were often robustly egalitarian.
Still, even as researchers have gradually come to a consensus on such questions, they remain strangely reluctant to announce their findings to the public – or even scholars in other disciplines – let alone reflect on the larger political implications. As a result, those writers who are reflecting on the ‘big questions’ of human history – Jared Diamond, Francis Fukuyama, Ian Morris, and others – still take Rousseau’s question (‘what is the origin of social inequality?’) as their starting point, and assume the larger story will begin with some kind of fall from primordial innocence.
Simply framing the question this way means making a series of assumptions: 1. that there is a thing called ‘inequality’; 2. that it is a problem; and 3. that there was a time when it did not exist. Since the financial crash of 2008, of course, and the upheavals that followed, the ‘problem of social inequality’ has been at the centre of political debate. There seems to be a consensus, among the intellectual and political classes, that levels of social inequality have spiralled out of control, and that most of the world’s problems result from this, in one way or another. Pointing this out is seen as a challenge to global power structures, but compare this to the way similar issues might have been discussed a generation earlier.
Unlike terms such as ‘capital’ or ‘class power’, the word ‘equality’ is practically designed to lead to half-measures and compromise. One can imagine overthrowing capitalism or breaking the power of the state, but it’s very difficult to imagine eliminating ‘inequality’. In fact, it’s not obvious what doing so would even mean, since people are not all the same and nobody would particularly want them to be.
‘Inequality’ is a way of framing social problems appropriate to technocratic reformers, the kind of people who assume from the outset that any real vision of social transformation has long since been taken off the political table. It allows one to tinker with the numbers, argue about Gini coefficients and thresholds of dysfunction, readjust tax regimes or social welfare mechanisms, even shock the public with figures showing just how bad things have become (‘can you imagine? 0.1% of the world’s population controls over 50% of the wealth!’), all without addressing any of the factors that people actually object to about such ‘unequal’ social arrangements: for instance, that some manage to turn their wealth into power over others; or that other people end up being told their needs are not important, and their lives have no intrinsic worth. The latter, we are supposed to believe, is just the inevitable effect of inequality, and inequality, the inevitable result of living in any large, complex, urban, technologically sophisticated society. That is the real political message conveyed by endless invocations of an imaginary age of innocence, before the invention of inequality: that if we want to get rid of such problems entirely, we’d have to somehow get rid of 99.9% of the Earth’s population and go back to being tiny bands of foragers again. Otherwise, the best we can hope for is to adjust the size of the boot that will be stomping on our faces, forever, or perhaps to wrangle a bit more wiggle room in which some of us can at least temporarily duck out of its way.
Mainstream social science now seems mobilized to reinforce this sense of hopelessness. Almost on a monthly basis we are confronted with publications trying to project the current obsession with property distribution back into the Stone Age, setting us on a false quest for ‘egalitarian societies’ defined in such a way that they could not possibly exist outside some tiny band of foragers (and possibly, not even then). What we’re going to do in this essay, then, is two things. First, we will spend a bit of time picking through what passes for informed opinion on such matters, to reveal how the game is played, how even the most apparently sophisticated contemporary scholars end up reproducing conventional wisdom as it stood in France or Scotland in, say, 1760. Then we will attempt to lay down the initial foundations of an entirely different narrative. This is mostly ground-clearing work. The questions we are dealing with are so enormous, and the issues so important, that it will take years of research and debate to even begin understanding the full implications. But on one thing we insist.
Abandoning the story of a fall from primordial innocence does not mean abandoning dreams of human emancipation – that is, of a society where no one can turn their rights in property into a means of enslaving others, and where no one can be told their lives and needs don’t matter. To the contrary. Human history becomes a far more interesting place, containing many more hopeful moments than we’ve been led to imagine, once we learn to throw off our conceptual shackles and perceive what’s really there.
2. Contemporary authors on the origins of social inequality; or, the eternal return of Jean-Jacques Rousseau
Let us begin by outlining received wisdom on the overall course of human history. It goes something like this:
As the curtain goes up on human history – say, roughly two hundred thousand years ago, with the appearance of anatomically modern Homo sapiens – we find our species living in small and mobile bands ranging from twenty to forty individuals. They seek out optimal hunting and foraging territories, following herds, gathering nuts and berries. If resources become scarce, or social tensions arise, they respond by moving on, and going someplace else. Life for these early humans – we can think of it as humanity’s childhood – is full of dangers, but also possibilities. Material possessions are few, but the world is an unspoiled and inviting place. Most work only a few hours a day, and the small size of social groups allows them to maintain a kind of easy-going camaraderie, without formal structures of domination. Rousseau, writing in the 18th century, referred to this as ‘the State of Nature,’ but nowadays it is presumed to have encompassed most of our species’ actual history. It is also assumed to be the only era in which humans managed to live in genuine societies of equals, without classes, castes, hereditary leaders, or centralised government.
Alas, this happy state of affairs eventually had to end. Our conventional version of world history places this moment around 10,000 years ago, at the close of the last Ice Age.
At this point, we find our imaginary human actors scattered across the world’s continents, beginning to farm their own crops and raise their own herds. Whatever the local reasons (they are debated), the effects are momentous, and basically the same everywhere. Territorial attachments and private ownership of property become important in ways previously unknown, and with them, sporadic feuds and war. Farming grants a surplus of food, which allows some to accumulate wealth and influence beyond their immediate kin-group. Others use their freedom from the food-quest to develop new skills, like the invention of more sophisticated weapons, tools, vehicles, and fortifications, or the pursuit of politics and organised religion. In consequence, these ‘Neolithic farmers’ quickly get the measure of their hunter-gatherer neighbours, and set about eliminating or absorbing them into a new and superior – albeit less equal – way of life.
To make matters more difficult still, or so the story goes, farming ensures a global rise in population levels. As people move into ever-larger concentrations, our unwitting ancestors take another irreversible step to inequality, and around 6,000 years ago, cities appear – and our fate is sealed. With cities comes the need for centralised government. New classes of bureaucrats, priests, and warrior-politicians install themselves in permanent office to keep order and ensure the smooth flow of supplies and public services. Women, having once enjoyed prominent roles in human affairs, are sequestered, or imprisoned in harems. War captives are reduced to slaves. Full-blown inequality has arrived, and there is no getting rid of it. Still, the story-tellers always assure us, not everything about the rise of urban civilization is bad. Writing is invented, at first to keep state accounts, but this allows terrific advances to take place in science, technology, and the arts. At the price of innocence, we became our modern selves, and can now merely gaze with pity and jealousy at those few ‘traditional’ or ‘primitive’ societies that somehow missed the boat.
This is the story that, as we say, forms the foundation of all contemporary debate on inequality. If, say, an expert in international relations, or a clinical psychologist, wishes to reflect on such matters, they are likely to simply take it for granted that, for most of human history, we lived in tiny egalitarian bands, or that the rise of cities also meant the rise of the state. The same is true of most recent books that try to look at the broad sweep of prehistory, in order to draw political conclusions relevant to contemporary life. Consider Francis Fukuyama’s The Origins of Political Order: From Prehuman Times to the French Revolution:
In its early stages, human political organization is similar to the band-level society observed in higher primates like chimpanzees. This may be regarded as a default form of social organization. … Rousseau pointed out that the origin of political inequality lay in the development of agriculture, and in this he was largely correct. Since band-level societies are preagricultural, there is no private property in any modern sense. Like chimp bands, hunter-gatherers inhabit a territorial range that they guard and occasionally fight over. But they have a lesser incentive than agriculturalists to mark out a piece of land and say ‘this is mine’. If their territory is invaded by another group, or if it is infiltrated by dangerous predators, band-level societies may have the option of simply moving somewhere else due to low population densities. Band-level societies are highly egalitarian … Leadership is vested in individuals based on qualities like strength, intelligence, and trustworthiness, but it tends to migrate from one individual to another.
Jared Diamond, in The World Until Yesterday: What Can We Learn from Traditional Societies?, suggests such bands (in which he believes humans still lived ‘as recently as 11,000 years ago’) comprised ‘just a few dozen individuals’, most biologically related. They led a fairly meagre existence, ‘hunting and gathering whatever wild animal and plant species happen to live in an acre of forest’. (Why just an acre, he never explains.) And their social lives, according to Diamond, were enviably simple. Decisions were reached through ‘face-to-face discussion’; there were ‘few personal possessions’, and ‘no formal political leadership or strong economic specialization’. Diamond concludes that, sadly, it is only within such primordial groupings that humans have ever achieved a significant degree of social equality.
For Diamond and Fukuyama, as for Rousseau some centuries earlier, what put an end to that equality – everywhere and forever – was the invention of agriculture and the higher population levels it sustained. Agriculture brought about a transition from ‘bands’ to ‘tribes’. Accumulation of food surplus fed population growth, leading some ‘tribes’ to develop into ranked societies known as ‘chiefdoms’. Fukuyama paints an almost biblical picture, a departure from Eden: ‘As little bands of human beings migrated and adapted to different environments, they began their exit out of the state of nature by developing new social institutions’. They fought wars over resources. Gangly and pubescent, these societies were headed for trouble.
It was time to grow up, time to appoint some proper leadership. Before long, chiefs had declared themselves kings, even emperors. There was no point in resisting. All this was inevitable once humans adopted large, complex forms of organization. Even when the leaders began acting badly – creaming off agricultural surplus to promote their flunkies and relatives, making status permanent and hereditary, collecting trophy skulls and harems of slave-girls, or tearing out rivals’ hearts with obsidian knives – there could be no going back. ‘Large populations’, Diamond opines, ‘can’t function without leaders who make the decisions, executives who carry out the decisions, and bureaucrats who administer the decisions and laws. Alas for all of you readers who are anarchists and dream of living without any state government, those are the reasons why your dream is unrealistic: you’ll have to find some tiny band or tribe willing to accept you, where no one is a stranger, and where kings, presidents, and bureaucrats are unnecessary’.
A dismal conclusion, not just for anarchists, but for anybody who ever wondered if there might be some viable alternative to the status quo. But the remarkable thing is that, despite the smug tone, such pronouncements are not actually based on any kind of scientific evidence. There is no reason to believe that small-scale groups are especially likely to be egalitarian, or that large ones must necessarily have kings, presidents, or bureaucracies. These are just prejudices stated as facts.
In the case of Fukuyama and Diamond one can, at least, note they were never trained in the relevant disciplines (the first is a political scientist, the other has a PhD on the physiology of the gall bladder). Still, even when anthropologists and archaeologists try their hand at ‘big picture’ narratives, they have an odd tendency to end up with some similarly minor variation on Rousseau. In The Creation of Inequality: How our Prehistoric Ancestors Set the Stage for Monarchy, Slavery, and Empire, Kent Flannery and Joyce Marcus, two eminently qualified scholars, lay out some five hundred pages of ethnographic and archaeological case studies to try and solve the puzzle. They admit our Ice Age forebears were not entirely unfamiliar with institutions of hierarchy and servitude, but insist they experienced these mainly in their dealings with the supernatural (ancestral spirits, and the like). The invention of farming, they propose, led to the emergence of demographically extended ‘clans’ or ‘descent groups’, and as it did so, access to spirits and the dead became a route to earthly power (how, exactly, is not made clear). According to Flannery and Marcus, the next major step on the road to inequality came when certain clansmen of unusual talent or renown – expert healers, warriors, and other over-achievers – were granted the right to transmit status to their descendants, regardless of the latter’s talents or abilities. That pretty much sowed the seeds, and meant from then on, it was just a matter of time before the arrival of cities, monarchy, slavery and empire.
The curious thing about Flannery and Marcus’ book is that only with the birth of states and empires do they really bring in any archaeological evidence. All the really key moments in their account of the ‘creation of inequality’ rely instead on relatively recent descriptions of small-scale foragers, herders, and cultivators like the Hadza of the East African Rift, or Nambikwara of the Amazonian rainforest. Accounts of such ‘traditional societies’ are treated as if they were windows onto the Palaeolithic or Neolithic past. The problem is that they are nothing of the kind. The Hadza or Nambikwara are not living fossils. They have been in contact with agrarian states and empires, raiders and traders, for millennia, and their social institutions were decisively shaped through attempts to engage with, or avoid them. Only archaeology can tell us what, if anything, they have in common with prehistoric societies. So, while Flannery and Marcus provide all sorts of interesting insights into how inequalities might emerge in human societies, they give us little reason to believe that this was how they actually did.
Finally, let us consider Ian Morris’s Foragers, Farmers, and Fossil Fuels: How Human Values Evolve. Morris is pursuing a slightly different intellectual project: to bring the findings of archaeology, ancient history, and anthropology into dialogue with the work of economists, such as Thomas Piketty on the causes of inequality in the modern world, or Sir Tony Atkinson’s more policy-oriented Inequality: What can be Done? The ‘deep time’ of human history, Morris informs us, has something important to tell us about such questions – but only if we first establish a uniform measure of inequality applicable across its entire span. This he achieves by translating the ‘values’ of Ice Age hunter-gatherers and Neolithic farmers into terms familiar to modern-day economists, and then using those to establish Gini coefficients, or formal inequality rates. Instead of the spiritual inequities that Flannery and Marcus highlight, Morris gives us an unapologetically materialist view, dividing human history into the three big ‘Fs’ of his title, depending on how each extracts energy. All societies, he suggests, have an ‘optimal’ level of social inequality – a built-in ‘spirit level’, to use Wilkinson and Pickett’s term – that is appropriate to their prevailing mode of energy extraction.
In a 2015 piece for the New York Times Morris actually gives us numbers, quantified primordial incomes in USD and fixed to 1990 currency values.1 He too assumes that hunter-gatherers of the last Ice Age lived mostly in tiny mobile bands. As a result, they consumed very little, the equivalent, he suggests, of about $1.10/day. Consequently, they also enjoyed a Gini coefficient of around 0.25 – that is, about as low as such rates can go – since there was little surplus or capital for any would-be elite to grab. Agrarian societies – and for Morris this includes everything from the 9,000-year-old Neolithic village of Çatalhöyük to Kublai Khan’s China or the France of Louis XIV – were more populous and better off, with an average consumption of $1.50-$2.20/day per person, and a propensity to accumulate surpluses of wealth. But most people also worked harder, and under markedly inferior conditions, so farming societies tended towards much higher levels of inequality.
Fossil-fuelled societies should really have changed all that by liberating us from the drudgery of manual work, and bringing us back towards more reasonable Gini coefficients, closer to those of our hunter-forager ancestors – and for a while it seemed like this was beginning to happen, but for some odd reason, which Morris doesn’t completely understand, matters have gone into reverse again and wealth is once again sucked up into the hands of a tiny global elite:
If the twists and turns of economic history over the last 15,000 years and popular will are any guide, the ‘right’ level of post-tax income inequality seems to lie between about 0.25 and 0.35, and that of wealth inequality between about 0.70 and 0.80. Many countries are now at or above the upper bounds of these ranges, which suggests that Mr. Piketty is indeed right to foresee trouble.
Some major technocratic tinkering is clearly in order!
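For readers unfamiliar with the measure all this debate turns on: a Gini coefficient summarises how concentrated a distribution is, on a scale from 0 (everyone holds the same amount) to 1 (one person holds everything). A minimal sketch of the standard calculation, with sample values invented purely for illustration:

```python
def gini(values):
    """Gini coefficient of a list of non-negative incomes or holdings.

    0.0 means perfect equality; values approaching 1.0 mean nearly
    everything is concentrated in a single pair of hands.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard rank-based formula: with xs sorted ascending and
    # 1-indexed ranks i, G = sum((2i - n - 1) * x_i) / (n * sum(x)).
    # This equals the mean absolute difference between all pairs,
    # normalised by twice the mean.
    weighted = sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs))
    return weighted / (n * total)

# Hypothetical four-person 'societies':
print(round(gini([1, 1, 1, 1]), 2))    # 0.0  – perfectly equal
print(round(gini([1, 2, 3, 4]), 2))    # 0.25 – roughly Morris's forager figure
print(round(gini([0, 0, 0, 100]), 2))  # 0.75 – one person holds everything
```

The point of the sketch is simply that the coefficient is a single summary statistic of a distribution: it says nothing about how wealth translates into power, which is precisely the objection raised above.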
Let us leave Morris’s prescriptions aside and focus on just one figure: the Palaeolithic income of $1.10 a day. Where exactly does it come from? Presumably the calculations have something to do with the calorific value of daily food intake. But if we’re comparing this to daily incomes today, wouldn’t we also have to factor in all the other things Palaeolithic foragers got for free, but which we ourselves would expect to pay for: free security, free dispute resolution, free primary education, free care of the elderly, free medicine, not to mention entertainment costs, music, storytelling, and religious services? Even when it comes to food, we must consider quality: after all, we’re talking about 100% organic free-range produce here, washed down with purest natural spring water. Much contemporary income goes to mortgages and rents. But consider the camping fees for prime Palaeolithic locations along the Dordogne or the Vézère, not to mention the high-end evening classes in naturalistic rock painting and ivory carving – and all those fur coats. Surely all this must cost wildly in excess of $1.10/day, even in 1990 dollars. It’s not for nothing that Marshall Sahlins referred to foragers as ‘the original affluent society.’ Such a life today would not come cheap.
This is all admittedly a bit silly, but that’s kind of our point: if one reduces world history to Gini coefficients, silly things will, necessarily, follow. Also depressing ones. Morris at least feels something is askew with the recent galloping increases of global inequality. By contrast, historian Walter Scheidel has taken Piketty-style readings of human history to their ultimate miserable conclusion in his 2017 book The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century, concluding there’s really nothing we can do about inequality. Civilization invariably puts in charge a small elite who grab more and more of the pie. The only thing that has ever been successful in dislodging them is catastrophe: war, plague, mass conscription, wholesale suffering and death. Half measures never work. So, if you don’t want to go back to living in a cave, or die in a nuclear holocaust (which presumably also ends up with the survivors living in caves), you’re going to just have to accept the existence of Warren Buffett and Bill Gates.
The liberal alternative? Flannery and Marcus, who openly identify with the tradition of Jean-Jacques Rousseau, end their survey with the following helpful suggestion:
We once broached this subject with Scotty MacNeish, an archaeologist who had spent 40 years studying social evolution. How, we wondered, could society be made more egalitarian? After briefly consulting his old friend Jack Daniels, MacNeish replied, ‘Put hunters and gatherers in charge.’
3. But did we really run headlong for our chains?
The really odd thing about these endless evocations of Rousseau’s innocent State of Nature, and the fall from grace, is that Rousseau himself never claimed the State of Nature really happened. It was all a thought-experiment. In his Discourse on the Origin and the Foundation of Inequality Among Mankind (1754), where most of the story we’ve been telling (and retelling) originates, he wrote:
… the researches, in which we may engage on this occasion, are not to be taken for historical truths, but merely as hypothetical and conditional reasonings, fitter to illustrate the nature of things, than to show their true origin.
Rousseau’s ‘State of Nature’ was never intended as a stage of development. It was not supposed to be an equivalent to the phase of ‘Savagery’, which opens the evolutionary schemes of Scottish philosophers such as Adam Smith, Ferguson, Millar, or later, Lewis Henry Morgan. These others were interested in defining levels of social and moral development, corresponding to historical changes in modes of production: foraging, pastoralism, farming, industry. What Rousseau presented is, by contrast, more of a parable. As emphasised by Judith Shklar, the renowned Harvard political theorist, Rousseau was really trying to explore what he considered the fundamental paradox of human politics: that our innate drive for freedom somehow leads us, time and again, on a ‘spontaneous march to inequality’. In Rousseau’s own words: ‘All ran headlong for their chains in the belief that they were securing their liberty; for although they had enough reason to see the advantages of political institutions, they did not have enough experience to foresee the dangers’. The imaginary State of Nature is just a way of illustrating the point.
Rousseau wasn’t a fatalist. What humans make, he believed, they could unmake. We could free ourselves from the chains; it just wasn’t going to be easy. Shklar suggests that the tension between ‘possibility and probability’ (the possibility of human emancipation, the likelihood we’ll all just place ourselves in some form of voluntary servitude again) was the central animating force of Rousseau’s writings on inequality. All this might seem a bit ironic since, after the French Revolution, many conservative critics held Rousseau personally responsible for the guillotine. What brought the Terror, they insisted, was precisely his naive faith in the innate goodness of humanity, and his belief that a more equal social order could simply be imagined by intellectuals and then imposed by the ‘general will’. But very few of those past figures now pilloried as romantics and utopians were really so naive.
Karl Marx, for instance, held that what makes us human is our power of imaginative reflection – unlike bees, we imagine the houses we’d like to live in, and only then set about constructing them – but he also believed that one couldn’t just proceed in the same way with society, and try to impose an architect’s model. To do so would be to commit the sin of ‘utopian socialism’, for which he had nothing but contempt. Instead, revolutionaries had to get a sense of the larger structural forces that shaped the course of world history, and take advantage of underlying contradictions: for instance, the fact that individual factory-owners need to stiff their workers to compete, but if all are too successful in doing so, no one will be able to afford what their factories produce. Yet such is the power of two thousand years of scripture, that even when hard-headed realists start talking about the vast sweep of human history, they fall back on some variation of the Garden of Eden – the Fall from Grace (usually, as in Genesis, owing to an unwise pursuit of Knowledge); the possibility of future Redemption. Marxist political parties quickly developed their own version of the story, fusing together Rousseau’s State of Nature and the Scottish Enlightenment idea of developmental stages. The result was a formula for world history that began with original ‘primitive communism’, overcome by the dawn of private property, but someday destined to return.
We must conclude that revolutionaries, for all their visionary ideals, have not tended to be particularly imaginative, especially when it comes to linking past, present, and future. Everyone keeps telling the same story. It’s probably no coincidence that today, the most vital and creative revolutionary movements at the dawn of this new millennium – the Zapatistas of Chiapas, and Kurds of Rojava being only the most obvious examples – are those that simultaneously root themselves in a deep traditional past. Instead of imagining some primordial utopia, they can draw on a more mixed and complicated narrative. Indeed, there seems to be a growing recognition, in revolutionary circles, that freedom, tradition, and the imagination have always, and will always be entangled, in ways we do not completely understand. It’s about time the rest of us caught up, and started to consider what a non-Biblical version of human history might look like.
4. How the course of (past) history can now change
So, what has archaeological and anthropological research really taught us, since the time of Rousseau?
Well, the first thing is that asking about the ‘origins of social inequality’ is probably the wrong place to start. True, before the beginning of what’s called the Upper Palaeolithic we really have no idea what most human social life was like. Much of our evidence comprises scattered fragments of worked stone, bone, and a few other durable materials. Different hominin species coexisted; it’s not clear if any ethnographic analogy might apply. Things only begin to come into any kind of focus in the Upper Palaeolithic itself, which begins around 45,000 years ago, and encompasses the peak of glaciation and global cooling (c. 20,000 years ago) known as the Last Glacial Maximum. This last great Ice Age was then followed by the onset of warmer conditions and gradual retreat of the ice sheets, leading to our current geological epoch, the Holocene. More clement conditions followed, creating the stage on which Homo sapiens – having already colonized much of the Old World – completed its march into the New, reaching the southern shores of the Americas by around 15,000 years ago.
So, what do we actually know about this period of human history? Much of the earliest substantial evidence for human social organization in the Palaeolithic derives from Europe, where our species became established alongside Homo neanderthalensis, prior to the latter’s extinction around 40,000 BC. (The concentration of data in this part of the world most likely reflects a historical bias of archaeological investigation, rather than anything unusual about Europe itself). At that time, and through the Last Glacial Maximum, the habitable parts of Ice Age Europe looked more like Serengeti Park in Tanzania than any present-day European habitat. South of the ice sheets, between the tundra and the forested shorelines of the Mediterranean, the continent was divided into game-rich valleys and steppe, seasonally traversed by migrating herds of deer, bison, and woolly mammoth. Prehistorians have pointed out for some decades – to little apparent effect – that the human groups inhabiting these environments had nothing in common with those blissfully simple, egalitarian bands of hunter-gatherers, still routinely imagined to be our remote ancestors.
To begin with, there is the undisputed existence of rich burials, extending back in time to the depths of the Ice Age. Some of these, such as the 25,000-year-old graves from Sungir, east of Moscow, have been known for many decades and are justly famous. Felipe Fernández-Armesto, who reviewed Creation of Inequality for The Wall Street Journal,[2] expresses his reasonable amazement at their omission: ‘Though they know that the hereditary principle predated agriculture, Mr. Flannery and Ms. Marcus cannot quite shed the Rousseauian illusion that it started with sedentary life. Therefore they depict a world without inherited power until about 15,000 B.C. while ignoring one of the most important archaeological sites for their purpose’. For dug into the permafrost beneath the Palaeolithic settlement at Sungir was the grave of a middle-aged man buried, as Fernández-Armesto observes, with ‘stunning signs of honor: bracelets of polished mammoth-ivory, a diadem or cap of fox’s teeth, and nearly 3,000 laboriously carved and polished ivory beads’. And a few feet away, in an identical grave, ‘lay two children, of about 10 and 13 years respectively, adorned with comparable grave-gifts – including, in the case of the elder, some 5,000 beads as fine as the adult’s (although slightly smaller) and a massive lance carved from ivory’.
Such findings appear to have no significant place in any of the books so far considered. Downplaying them, or reducing them to footnotes, might be easier to forgive were Sungir an isolated find. It is not. Comparably rich burials are by now attested from Upper Palaeolithic rock shelters and open-air settlements across much of western Eurasia, from the Don to the Dordogne. Among them we find, for example, the 16,000-year-old ‘Lady of Saint-Germain-la-Rivière’, bedecked with ornaments made on the teeth of young stags hunted 300 km away, in the Spanish Basque country; and the burials of the Ligurian coast – as ancient as Sungir – including ‘Il Principe’, a young man whose regalia included a sceptre of exotic flint, elk antler batons, and an ornate headdress of perforated shells and deer teeth. Such findings pose stimulating challenges of interpretation. Is Fernández-Armesto right to say these are proofs of ‘inherited power’? What was the status of such individuals in life?
No less intriguing is the sporadic but compelling evidence for monumental architecture, stretching back to the Last Glacial Maximum. The idea that one could measure ‘monumentality’ in absolute terms is of course as silly as the idea of quantifying Ice Age expenditure in dollars and cents. It is a relative concept, which makes sense only within a particular scale of values and prior experiences. The Pleistocene has no direct equivalents in scale to the Pyramids of Giza or the Roman Colosseum. But it does have buildings that, by the standards of the time, could only have been considered public works, implying sophisticated design and the coordination of labour on an impressive scale. Among them are the startling ‘mammoth houses’, built of hides stretched over a frame of tusks, examples of which – dating to around 15,000 years ago – can be found along a transect of the glacial fringe reaching from modern-day Kraków all the way to Kiev.
Still more astonishing are the stone temples of Göbekli Tepe, excavated over twenty years ago on the Turkish-Syrian border, and still the subject of vociferous scientific debate. Dating to around 11,000 years ago, the very end of the last Ice Age, they comprise at least twenty megalithic enclosures raised high above the now-barren flanks of the Harran Plain. Each was made up of limestone pillars over 5m in height and weighing up to a ton (respectable by Stonehenge standards, and some 6,000 years before it). Almost every pillar at Göbekli Tepe is a remarkable work of art, with relief carvings of menacing animals projecting from the surface, their male genitalia fiercely displayed. Sculpted raptors appear in combination with images of severed human heads. The carvings attest to sculptural skills, no doubt honed in the more pliable medium of wood (once widely available on the foothills of the Taurus Mountains), before being applied to the bedrock of the Harran. Intriguingly, and despite their size, each of these massive structures had a relatively short lifespan, ending with a great feast and the rapid infilling of its walls: hierarchies raised to the sky, only to be swiftly torn down again. And the protagonists in this prehistoric pageant-play of feasting, building, and destruction were, to the best of our knowledge, hunter-foragers, living by wild resources alone.
What, then, are we to make of all of this? One scholarly response has been to abandon the idea of an egalitarian Golden Age entirely, and conclude that rational self-interest and accumulation of power are the enduring forces behind human social development. But this doesn’t really work either.
Evidence for institutional inequality in Ice Age societies, whether in the form of grand burials or monumental buildings, is nothing if not sporadic. Burials appear literally centuries, and often hundreds of kilometres, apart. Even if we put this down to the patchiness of the evidence, we still have to ask why the evidence is so patchy: after all, if any of these Ice Age ‘princes’ had behaved anything like, say, Bronze Age princes, we’d also be finding fortifications, storehouses, palaces – all the usual trappings of emergent states. Instead, over tens of thousands of years, we see monuments and magnificent burials, but little else to indicate the growth of ranked societies. Then there are other, even stranger factors, such as the fact that most of the ‘princely’ burials consist of individuals with striking physical anomalies, who today would be considered giants, hunchbacks, or dwarfs.
A wider look at the archaeological evidence suggests a key to resolving the dilemma. It lies in the seasonal rhythms of prehistoric social life. Most of the Palaeolithic sites discussed so far are associated with evidence for annual or biennial periods of aggregation, linked to the migrations of game herds – whether woolly mammoth, steppe bison, reindeer or (in the case of Göbekli Tepe) gazelle – as well as cyclical fish-runs and nut harvests. At less favourable times of year, at least some of our Ice Age ancestors no doubt really did live and forage in tiny bands. But there is overwhelming evidence to show that at others they congregated en masse within the kind of ‘micro-cities’ found at Dolní Věstonice, in the Moravian basin south of Brno, feasting on a super-abundance of wild resources, engaging in complex rituals, ambitious artistic enterprises, and trading minerals, marine shells, and animal pelts over striking distances. Western European equivalents of these seasonal aggregation sites would be the great rock shelters of the French Périgord and the Cantabrian coast, with their famous paintings and carvings, which similarly formed part of an annual round of congregation and dispersal.
Such seasonal patterns of social life endured, long after the ‘invention of agriculture’ is supposed to have changed everything. New evidence shows that alternations of this kind may be key to understanding the famous Neolithic monuments of Salisbury Plain, and not just in terms of calendric symbolism. Stonehenge, it turns out, was only the latest in a very long sequence of ritual structures, erected in timber as well as stone, as people converged on the plain from remote corners of the British Isles, at significant times of year. Careful excavation has shown that many of these structures – now plausibly interpreted as monuments to the progenitors of powerful Neolithic dynasties – were dismantled just a few generations after their construction. Still more strikingly, this practice of erecting and dismantling grand monuments coincides with a period when the peoples of Britain, having adopted the Neolithic farming economy from continental Europe, appear to have turned their backs on at least one crucial aspect of it, abandoning cereal farming and reverting – around 3300 BC – to the collection of hazelnuts as a staple food source. Keeping their herds of cattle, on which they feasted seasonally at nearby Durrington Walls, the builders of Stonehenge seem likely to have been neither foragers nor farmers, but something in between. And if anything like a royal court did hold sway in the festive season, when they gathered in great numbers, then it could only have dissolved away for most of the year, when the same people scattered back out across the island.
Why are these seasonal variations important? Because they reveal that from the very beginning, human beings were self-consciously experimenting with different social possibilities. Anthropologists describe societies of this sort as possessing a ‘double morphology’. Marcel Mauss, writing in the early twentieth century, observed that the circumpolar Inuit, ‘and likewise many other societies . . . have two social structures, one in summer and one in winter, and that in parallel they have two systems of law and religion’. In the summer months, Inuit dispersed into small patriarchal bands in pursuit of freshwater fish, caribou, and reindeer, each under the authority of a single male elder. Property was possessively marked and patriarchs exercised coercive, sometimes even tyrannical power over their kin. But in the long winter months, when seals and walrus flocked to the Arctic shore, another social structure entirely took over as Inuit gathered together to build great meeting houses of wood, whale-rib, and stone. Within them, the virtues of equality, altruism, and collective life prevailed; wealth was shared; husbands and wives exchanged partners under the aegis of Sedna, the Goddess of the Seals.
Another example comes from the indigenous hunter-gatherers of Canada’s Northwest Coast, for whom winter – not summer – was the time when society crystallised into its most unequal form, and spectacularly so. Plank-built palaces sprang to life along the coastlines of British Columbia, with hereditary nobles holding court over commoners and slaves, and hosting the great banquets known as potlatch. Yet these aristocratic courts broke apart for the summer work of the fishing season, reverting to smaller clan formations, still ranked, but with an entirely different and less formal structure. In this case, people actually adopted different names in summer and winter, literally becoming someone else, depending on the time of year.
Perhaps most striking, in terms of political reversals, were the seasonal practices of 19th-century tribal confederacies on the American Great Plains – sometime, or one-time farmers who had adopted a nomadic hunting life. In the late summer, small and highly mobile bands of Cheyenne and Lakota would congregate in large settlements to make logistical preparations for the buffalo hunt. At this most sensitive time of year they appointed a police force that exercised full coercive powers, including the right to imprison, whip, or fine any offender who endangered the proceedings. Yet as the anthropologist Robert Lowie observed, this ‘unequivocal authoritarianism’ operated on a strictly seasonal and temporary basis, giving way to more ‘anarchic’ forms of organization once the hunting season – and the collective rituals that followed – were complete.
Scholarship does not always advance. Sometimes it slides backwards. A hundred years ago, most anthropologists understood that those who live mainly from wild resources were not, normally, restricted to tiny ‘bands.’ That idea is really a product of the 1960s, when Kalahari Bushmen and Mbuti Pygmies became the preferred image of primordial humanity for TV audiences and researchers alike. As a result we’ve seen a return of evolutionary stages, really not all that different from the tradition of the Scottish Enlightenment: this is what Fukuyama, for instance, is drawing on, when he writes of society evolving steadily from ‘bands’ to ‘tribes’ to ‘chiefdoms,’ then finally, the kind of complex and stratified ‘states’ we live in today – usually defined by their monopoly of ‘the legitimate use of coercive force.’ By this logic, however, the Cheyenne or Lakota would have had to be ‘evolving’ from bands directly to states roughly every November, and then ‘devolving’ back again come spring. Most anthropologists now recognize that these categories are hopelessly inadequate, yet nobody has proposed an alternative way of thinking about world history in the broadest terms.
Quite independently, archaeological evidence suggests that in the highly seasonal environments of the last Ice Age, our remote ancestors were behaving in broadly similar ways: shifting back and forth between alternative social arrangements, permitting the rise of authoritarian structures during certain times of year, on the proviso that they could not last; on the understanding that no particular social order was ever fixed or immutable. Within the same population, one could live sometimes in what looks, from a distance, like a band, sometimes a tribe, and sometimes a society with many of the features we now identify with states. With such institutional flexibility comes the capacity to step outside the boundaries of any given social structure and reflect; to both make and unmake the political worlds we live in. If nothing else, this explains the ‘princes’ and ‘princesses’ of the last Ice Age, who appear to show up, in such magnificent isolation, like characters in some kind of fairy-tale or costume drama. Maybe they were almost literally so. If they reigned at all, then perhaps it was, like the kings and queens of Stonehenge, just for a season.
5. Time for a re-think
Modern authors have a tendency to use prehistory as a canvas for working out philosophical problems: are humans fundamentally good or evil, cooperative or competitive, egalitarian or hierarchical? As a result, they also tend to write as if for 95% of our species’ history, human societies were all much the same. But even 40,000 years is a very, very long period of time. It seems inherently likely, and the evidence confirms, that those same pioneering humans who colonized much of the planet also experimented with an enormous variety of social arrangements. As Claude Lévi-Strauss often pointed out, early Homo sapiens were not just physically the same as modern humans, they were our intellectual peers as well. In fact, most were probably more conscious of society’s potential than people generally are today, switching back and forth between different forms of organization every year. Rather than idling in some primordial innocence, until the genie of inequality was somehow uncorked, our prehistoric ancestors seem to have successfully opened and shut the bottle on a regular basis, confining inequality to ritual costume dramas, constructing gods and kingdoms as they did their monuments, then cheerfully disassembling them once again.
If so, then the real question is not ‘what are the origins of social inequality?’, but, having lived so much of our history moving back and forth between different political systems, ‘how did we get so stuck?’ All this is very far from the notion of prehistoric societies drifting blindly towards the institutional chains that bind them. It is also far from the dismal prophecies of Fukuyama, Diamond, Morris, and Scheidel, where any ‘complex’ form of social organization necessarily means that tiny elites take charge of key resources, and begin to trample everyone else underfoot. Most social science treats these grim prognostications as self-evident truths. But clearly, they are baseless. So, we might reasonably ask, what other cherished truths must now be cast on the dust-heap of history?
Quite a number, actually. Back in the ‘70s, the brilliant Cambridge archaeologist David Clarke predicted that, with modern research, almost every aspect of the old edifice of human evolution, ‘the explanations of the development of modern man, domestication, metallurgy, urbanization and civilisation – may in perspective emerge as semantic snares and metaphysical mirages.’ It appears he was right. Information is now pouring in from every quarter of the globe, based on careful empirical fieldwork, advanced techniques of climatic reconstruction, chronometric dating, and scientific analyses of organic remains. Researchers are examining ethnographic and historical material in a new light. And almost all of this new research goes against the familiar narrative of world history. Still, the most remarkable discoveries remain confined to the work of specialists, or have to be teased out by reading between the lines of scientific publications. Let us conclude, then, with a few headlines of our own: just a handful, to give a sense of what the new, emerging world history is starting to look like.
The first bombshell on our list concerns the origins and spread of agriculture. There is no longer any support for the view that it marked a major transition in human societies. In those parts of the world where animals and plants were first domesticated, there actually was no discernible ‘switch’ from Palaeolithic Forager to Neolithic Farmer. The ‘transition’ from living mainly on wild resources to a life based on food production typically took something in the order of three thousand years. While agriculture allowed for the possibility of more unequal concentrations of wealth, in most cases this only began to happen millennia after its inception. In the time between, people in areas as far removed as Amazonia and the Fertile Crescent of the Middle East were trying farming on for size, ‘play farming’ if you like, switching annually between modes of production, much as they switched their social structures back and forth. Moreover, the ‘spread of farming’ to secondary areas, such as Europe – so often described in triumphalist terms, as the start of an inevitable decline in hunting and gathering – turns out to have been a highly tenuous process, which sometimes failed, leading to demographic collapse for the farmers, not the foragers.
Clearly, it no longer makes any sense to use phrases like ‘the agricultural revolution’ when dealing with processes of such inordinate length and complexity. Since there was no Eden-like state, from which the first farmers could take their first steps on the road to inequality, it makes even less sense to talk about agriculture as marking the origins of rank or private property. If anything, it is among those populations – the ‘Mesolithic’ peoples – who refused farming through the warming centuries of the early Holocene, that we find stratification becoming more entrenched; at least, if opulent burial, predatory warfare, and monumental buildings are anything to go by. In at least some cases, like the Middle East, the first farmers seem to have consciously developed alternative forms of community, to go along with their more labour-intensive way of life. These Neolithic societies look strikingly egalitarian when compared to their hunter-gatherer neighbours, with a dramatic increase in the economic and social importance of women, clearly reflected in their art and ritual life (contrast here the female figurines of Jericho or Çatalhöyük with the hyper-masculine sculpture of Göbekli Tepe).
Another bombshell: ‘civilization’ does not come as a package. The world’s first cities did not just emerge in a handful of locations, together with systems of centralised government and bureaucratic control. In China, for instance, we are now aware that by 2500 BC, settlements of 300 hectares or more existed on the lower reaches of the Yellow River, over a thousand years before the foundation of the earliest (Shang) royal dynasty. On the other side of the Pacific, and at around the same time, ceremonial centres of striking magnitude have been discovered in the valley of Peru’s Río Supe, notably at the site of Caral: enigmatic remains of sunken plazas and monumental platforms, four millennia older than the Inca Empire. Such recent discoveries indicate how little is yet truly known about the distribution and origin of the first cities, and just how much older these cities may be than the systems of authoritarian government and literate administration that were once assumed necessary for their foundation. And in the more established heartlands of urbanisation – Mesopotamia, the Indus Valley, the Basin of Mexico – there is mounting evidence that the first cities were organised on self-consciously egalitarian lines, municipal councils retaining significant autonomy from central government. In the first two cases, cities with sophisticated civic infrastructures flourished for over half a millennium with no trace of royal burials or monuments, no standing armies or other means of large-scale coercion, nor any hint of direct bureaucratic control over most citizens’ lives.
Jared Diamond notwithstanding, there is absolutely no evidence that top-down structures of rule are the necessary consequence of large-scale organization.
Walter Scheidel notwithstanding, it is simply not true that ruling classes, once established, cannot be gotten rid of except by general catastrophe. To take just one well-documented example: around 200 AD, the city of Teotihuacan in the Valley of Mexico, with a population of 120,000 (one of the largest in the world at the time), appears to have undergone a profound transformation, turning its back on pyramid-temples and human sacrifice, and reconstructing itself as a vast collection of comfortable villas, all almost exactly the same size. It remained so for perhaps 400 years. Even in Cortés’ day, Central Mexico was still home to cities like Tlaxcala, run by an elected council whose members were periodically whipped by their constituents to remind them who was ultimately in charge.
The pieces are all there to create an entirely different world history. For the most part, we’re just too blinded by our prejudices to see the implications. For instance, almost everyone nowadays insists that participatory democracy, or social equality, can work in a small community or activist group, but cannot possibly ‘scale up’ to anything like a city, a region, or a nation-state. But the evidence before our eyes, if we choose to look at it, suggests the opposite. Egalitarian cities, even regional confederacies, are historically quite commonplace. Egalitarian families and households are not.
Once the historical verdict is in, we will see that the most painful loss of human freedoms began at the small scale – the level of gender relations, age groups, and domestic servitude – the kind of relationships that contain at once the greatest intimacy and the deepest forms of structural violence. If we really want to understand how it first became acceptable for some to turn wealth into power, and for others to end up being told their needs and lives don’t count, it is here that we should look. Here too, we predict, is where the most difficult work of creating a free society will have to take place.
Watch the authors discuss some of the issues raised in this essay in the following videos:
1. David Graeber and David Wengrow: Palaeolithic Politics and Why It Still Matters (13 October 2015) (Vimeo)
2. David Graeber and David Wengrow: Teach-Out (7 March 2018) (Facebook)
3. David Graeber and David Wengrow: Slavery and Its Rejection Among Foragers on the Pacific Coast of North America: A Case of Schismogenesis? (22 March 2018) (Collège de France)
1. ‘To Each Age Its Inequality’ by Ian Morris. New York Times, 9 July 2015. See: https://www.nytimes.com/2015/07/10/opinion/to-each-age-its-inequality.html
2. ‘It's Good To Have a King’ by Felipe Fernández-Armesto. Wall Street Journal, 10 May 2012. See: https://www.wsj.com/articles/SB10001424052702304363104577389944241796150
Published 2 March 2018, original in English, first published by Eurozine. © David Graeber, David Wengrow / Eurozine
Human History Gets a Rewrite
A brilliant new account upends bedrock assumptions about 30,000 years of change. By William Deresiewicz, October 18, 2021
Many years ago, when I was a junior professor at Yale, I cold-called a colleague in the anthropology department for assistance with a project I was working on. I didn’t know anything about the guy; I just selected him because he was young, and therefore, I figured, more likely to agree to talk.
Five minutes into our lunch, I realized that I was in the presence of a genius. Not an extremely intelligent person—a genius. There’s a qualitative difference. The individual across the table seemed to belong to a different order of being from me, like a visitor from a higher dimension. I had never experienced anything like it before. I quickly went from trying to keep up with him, to hanging on for dear life, to simply sitting there in wonder.
That person was David Graeber. In the 20 years after our lunch, he published two books; was let go by Yale despite a stellar record (a move universally attributed to his radical politics); published two more books; got a job at Goldsmiths, University of London; published four more books, including Debt: The First 5,000 Years, a magisterial revisionary history of human society from Sumer to the present; got a job at the London School of Economics; published two more books and co-wrote a third; and established himself not only as among the foremost social thinkers of our time—blazingly original, stunningly wide-ranging, impossibly well read—but also as an organizer and intellectual leader of the activist left on both sides of the Atlantic, credited, among other things, with helping launch the Occupy movement and coin its slogan, “We are the 99 percent.”
The Dawn of Everything: A New History of Humanity, by David Graeber and David Wengrow. Farrar, Straus and Giroux.
On September 2, 2020, at the age of 59, David Graeber died of necrotizing pancreatitis while on vacation in Venice. The news hit me like a blow. How many books have we lost, I thought, that will never get written now? How many insights, how much wisdom, will remain forever unexpressed? The appearance of The Dawn of Everything: A New History of Humanity is thus bittersweet, at once a final, unexpected gift and a reminder of what might have been. In his foreword, Graeber’s co-author, David Wengrow, an archaeologist at University College London, mentions that the two had planned no fewer than three sequels.
And what a gift it is, no less ambitious a project than its subtitle claims. The Dawn of Everything is written against the conventional account of human social history as first developed by Hobbes and Rousseau; elaborated by subsequent thinkers; popularized today by the likes of Jared Diamond, Yuval Noah Harari, and Steven Pinker; and accepted more or less universally. The story goes like this. Once upon a time, human beings lived in small, egalitarian bands of hunter-gatherers (the so-called state of nature). Then came the invention of agriculture, which led to surplus production and thus to population growth as well as private property. Bands swelled to tribes, and increasing scale required increasing organization: stratification, specialization; chiefs, warriors, holy men.
Eventually, cities emerged, and with them, civilization—literacy, philosophy, astronomy; hierarchies of wealth, status, and power; the first kingdoms and empires. Flash forward a few thousand years, and with science, capitalism, and the Industrial Revolution, we witness the creation of the modern bureaucratic state. The story is linear (the stages are followed in order, with no going back), uniform (they are followed the same way everywhere), progressive (the stages are “stages” in the first place, leading from lower to higher, more primitive to more sophisticated), deterministic (development is driven by technology, not human choice), and teleological (the process culminates in us).
It is also, according to Graeber and Wengrow, completely wrong. Drawing on a wealth of recent archaeological discoveries that span the globe, as well as deep reading in often neglected historical sources (their bibliography runs to 63 pages), the two dismantle not only every element of the received account but also the assumptions that it rests on. Yes, we’ve had bands, tribes, cities, and states; agriculture, inequality, and bureaucracy, but what each of these were, how they developed, and how we got from one to the next—all this and more, the authors comprehensively rewrite. More important, they demolish the idea that human beings are passive objects of material forces, moving helplessly along a technological conveyor belt that takes us from the Serengeti to the DMV. We’ve had choices, they show, and we’ve made them. Graeber and Wengrow offer a history of the past 30,000 years that is not only wildly different from anything we’re used to, but also far more interesting: textured, surprising, paradoxical, inspiring.
The bulk of the book (which weighs in at more than 500 pages) takes us from the Ice Age to the early states (Egypt, China, Mexico, Peru). In fact, it starts by glancing back before the Ice Age to the dawn of the species. Homo sapiens developed in Africa, but it did so across the continent, from Morocco to the Cape, not just in the eastern savannas, and in a great variety of regional forms that only later coalesced into modern humans. There was no anthropological Garden of Eden, in other words—no Tanzanian plain inhabited by “mitochondrial Eve” and her offspring. As for the apparent delay between our biological emergence, and therefore the emergence of our cognitive capacity for culture, and the actual development of culture—a gap of many tens of thousands of years—that, the authors tell us, is an illusion. The more we look, especially in Africa (rather than mainly in Europe, where humans showed up relatively late), the older the evidence we find of complex symbolic behavior.
That evidence and more—from the Ice Age, from later Eurasian and Native North American groups—demonstrate, according to Graeber and Wengrow, that hunter-gatherer societies were far more complex, and more varied, than we have imagined. The authors introduce us to sumptuous Ice Age burials (the beadwork at one site alone is thought to have required 10,000 hours of work), as well as to monumental architectural sites like Göbekli Tepe, in modern Turkey, which dates from about 9000 B.C. (at least 6,000 years before Stonehenge) and features intricate carvings of wild beasts. They tell us of Poverty Point, a set of massive, symmetrical earthworks erected in Louisiana around 1600 B.C., a “hunter-gatherer metropolis the size of a Mesopotamian city-state.” They describe an indigenous Amazonian society that shifted seasonally between two entirely different forms of social organization (small, authoritarian nomadic bands during the dry months; large, consensual horticultural settlements during the rainy season). They speak of the kingdom of Calusa, a monarchy of hunter-gatherers the Spanish found when they arrived in Florida. All of these scenarios are unthinkable within the conventional narrative.
The overriding point is that hunter-gatherers made choices—conscious, deliberate, collective—about the ways that they wanted to organize their societies: to apportion work, dispose of wealth, distribute power. In other words, they practiced politics. Some of them experimented with agriculture and decided that it wasn’t worth the cost. Others looked at their neighbors and determined to live as differently as possible—a process that Graeber and Wengrow describe in detail with respect to the Indigenous peoples of Northern California, “puritans” who idealized thrift, simplicity, money, and work, in contrast to the ostentatious slaveholding chieftains of the Pacific Northwest. None of these groups, as far as we have reason to believe, resembled the simple savages of popular imagination, unselfconscious innocents who dwelt within a kind of eternal present or cyclical dreamtime, waiting for the Western hand to wake them up and fling them into history.
The authors carry this perspective forward to the ages that saw the emergence of farming, of cities, and of kings. In the locations where it first developed, about 10,000 years ago, agriculture did not take over all at once, uniformly and inexorably. (It also didn’t start in only a handful of centers—Mesopotamia, Egypt, China, Mesoamerica, Peru, the same places where empires would first appear—but more like 15 or 20.) Early farming was typically flood-retreat farming, conducted seasonally in river valleys and wetlands, a process that is much less labor-intensive than the more familiar kind and does not conduce to the development of private property. It was also what the authors call “play farming”: farming as merely one element within a mix of food-producing activities that might include hunting, herding, foraging, and horticulture.
Settlements, in other words, preceded agriculture—not, as we’ve thought, the reverse. What’s more, it took some 3,000 years for the Fertile Crescent to go from the first cultivation of wild grains to the completion of the domestication process—about 10 times as long as necessary, recent analyses have shown, had biological considerations been the only ones. Early farming embodied what Graeber and Wengrow call “the ecology of freedom”: the freedom to move in and out of farming, to avoid getting trapped by its demands or endangered by the ecological fragility that it entails.
The authors write their chapters on cities against the idea that large populations need layers of bureaucracy to govern them—that scale leads inevitably to political inequality. Many early cities, places with thousands of people, show no sign of centralized administration: no palaces, no communal storage facilities, no evident distinctions of rank or wealth. This is the case with what may be the earliest cities of all, Ukrainian sites like Taljanky, which were discovered only in the 1970s and which date from as early as roughly 4100 B.C., hundreds of years before Uruk, the oldest known city in Mesopotamia. Even in that “land of kings,” urbanism antedated monarchy by centuries. And even after kings arose, “popular councils and citizen assemblies,” Graeber and Wengrow write, “were stable features of government,” with real power and autonomy. Despite what we like to believe, democratic institutions did not begin just once, millennia later, in Athens.
If anything, aristocracy emerged in smaller settlements, the warrior societies that flourished in the highlands of the Levant and elsewhere, and that are known to us from epic poetry—a form of existence that remained in tension with agricultural states throughout the history of Eurasia, from Homer to the Mongols and beyond. But the authors’ most compelling instance of urban egalitarianism is undoubtedly Teotihuacan, a Mesoamerican city that rivaled imperial Rome, its contemporary, for size and magnificence. After sliding toward authoritarianism, its people abruptly changed course, abandoning monument-building and human sacrifice for the construction of high-quality public housing. “Many citizens,” the authors write, “enjoyed a standard of living that is rarely achieved across such a wide sector of urban society in any period of urban history, including our own.”
And so we arrive at the state, with its structures of central authority, exemplified variously by large-scale kingdoms, by empires, by modern republics—supposedly the climax form, to borrow a term from ecology, of human social organization. What is the state? the authors ask. Not a single stable package that’s persisted all the way from pharaonic Egypt to today, but a shifting combination of, as they enumerate them, the three elementary forms of domination: control of violence (sovereignty), control of information (bureaucracy), and personal charisma (manifested, for example, in electoral politics). Some states have displayed just two, some only one—which means the union of all three, as in the modern state, is not inevitable (and may indeed, with the rise of planetary bureaucracies like the World Trade Organization, be already decomposing). More to the point, the state itself may not be inevitable. For most of the past 5,000 years, the authors write, kingdoms and empires were “exceptional islands of political hierarchy, surrounded by much larger territories whose inhabitants … systematically avoided fixed, overarching systems of authority.”
Is “civilization” worth it, the authors want to know, if civilization—ancient Egypt, the Aztecs, imperial Rome, the modern regime of bureaucratic capitalism enforced by state violence—means the loss of what they see as our three basic freedoms: the freedom to disobey, the freedom to go somewhere else, and the freedom to create new social arrangements? Or does civilization rather mean “mutual aid, social co-operation, civic activism, hospitality [and] simply caring for others”?
These are questions that Graeber, a committed anarchist—an exponent not of anarchy but of anarchism, the idea that people can get along perfectly well without governments—asked throughout his career. The Dawn of Everything is framed by an account of what the authors call the “indigenous critique.” In a remarkable chapter, they describe the encounter between early French arrivals in North America, primarily Jesuit missionaries, and a series of Native intellectuals—individuals who had inherited a long tradition of political conflict and debate, and who thought deeply and spoke incisively on such matters as “generosity, sociability, material wealth, crime, punishment and liberty.”
The Indigenous critique, as articulated by these figures in conversation with their French interlocutors, amounted to a wholesale condemnation of French—and, by extension, European—society: its incessant competition, its paucity of kindness and mutual care, its religious dogmatism and irrationalism, and most of all, its horrific inequality and lack of freedom. The authors persuasively argue that Indigenous ideas, carried back and publicized in Europe, went on to inspire the Enlightenment (the ideals of freedom, equality, and democracy, they note, had theretofore been all but absent from the Western philosophical tradition). They go further, making the case that the conventional account of human history as a saga of material progress was developed in reaction to the Indigenous critique in order to salvage the honor of the West. We’re richer, went the logic, so we’re better. The authors ask us to rethink what better might actually mean.
The Dawn of Everything is not a brief for anarchism, though anarchist values—antiauthoritarianism, participatory democracy, small-c communism—are everywhere implicit in it. Above all, it is a brief for possibility, which was, for Graeber, perhaps the highest value of all. The book is something of a glorious mess, full of fascinating digressions, open questions, and missing pieces. It aims to replace the dominant grand narrative of history not with another of its own devising, but with the outline of a picture, only just becoming visible, of a human past replete with political experiment and creativity.
“How did we get stuck?” the authors ask—stuck, that is, in a world of “war, greed, exploitation [and] systematic indifference to others’ suffering”? It’s a pretty good question. “If something did go terribly wrong in human history,” they write, “then perhaps it began to go wrong precisely when people started losing that freedom to imagine and enact other forms of social existence.” It isn’t clear to me how many possibilities are left us now, in a world of polities whose populations number in the tens or hundreds of millions. But stuck we certainly are.
This article appears in the November 2021 print edition with the headline “It Didn’t Have to Be This Way.” The Dawn of Everything: A New History of Humanity, by David Graeber and David Wengrow, is published by Farrar, Straus and Giroux. William Deresiewicz is the author of The Death of the Artist: How Creators Are Struggling to Survive in the Age of Billionaires and Big Tech.
All History is Revisionist History, NEH, July 2022
Ever since Thucydides dismissed Herodotus, historians have differed about the past. The conversion of Roman emperor Constantine to Christianity in 312 CE led to the most transformative historiographic shift in the West. —Wikimedia
The collective noun for a group of historians is an “argumentation,” and for good reason. At the very dawn of historical inquiry in the West, historians were already wrestling over the past, attacking each other, debating the purposes and uses of historical knowledge, choosing different subjects to pursue, and arguing about how to pursue them. That is, in the infancy of their intellectual pursuit, historians were engaged in what we know as “revisionist history”—writing coexisting, diverse, and sometimes sharply clashing accounts of various subjects, accounts that challenged and sought to alter what had been written about them before. Accordingly, historians take it as indisputable that interpretive contests are inherent in all of their efforts to advance historical understanding. What’s more, historians are of the abiding conviction that robust, free arguments about the realities, significance, and meaning of the past should be cherished as an integral element of an open society like the one ours strives to be.
Let me explain. A fundamental feature of historical thought is the distinction between “the past” and “history.” What we call “the past” is just that: It’s what happened at some point before now. Once it occurs, “the past” is gone forever—beyond repeating, beyond reliving, beyond replicating. It’s recoverable only by the evidence, almost never complete, that it leaves behind; and that evidence must be interpreted by individual humans—historians principally, but archaeologists, anthropologists, and others, all of whom differ in all sorts of ways.
Distinct from “the past” are the narratives and analyses that historians offer about earlier times. That’s what we call “history.” History is what people make of the forever-gone past out of surviving documents and artifacts, human recall, and such items as photographs, films, and sound recordings. Indeed, history is created by the application of human thought and imagination to what’s left behind. And because each historian is an individual human being—differing by sex and gender; origin, nationality, ethnicity, and community; nurture, education, and culture; wealth and occupation; politics and ideology; mind, disposition, sensibility, and interest, each living at a distinct time in a distinct place—as a community of professionals, they come to hold different views, have different purposes, create different interpretations, and put forth their own distinctive understandings of “the past.”
A second fundamental fact regarding historical knowledge is that those who commit themselves professionally to writing and teaching history are normal individuals who just happen to be historians. And as the world in which they live changes, historians change as well. Historical interpretations tend to grow and adjust in some synchrony with the times into which human existence has moved so that previous historians’ interpretations are likely to yield to ones more comprehensible, compelling, and relevant to those who are alive. As time passes, new evidence and new methods for examining old evidence emerge, and new subjects of historical inquiry make their appearance. Consequently, historians’ histories change. Works that don’t speak to the times in which they’re created are likely to have short shelf lives.
It’s therefore a mistake to think that historians can fully isolate themselves in majestic, objective, intellectual solitude from the world around them. As hard as they may try to keep their own hopes and views out of what they write, historians, like others, try to find meaning in the past. And when they find it for themselves, they wish to share it with others—their students, readers, and viewers. If they don’t, they fail in one of their principal aims: to make knowledge of the past illuminate, deepen, and enrich the present.
Debates about the causes and consequences of the Civil War are enduring and complex, involving figures such as Robert E. Lee, whose statue in Richmond, Virginia, was removed on September 8, 2021. —Alamy
All of these realities provide the general context for the existence of revisionist history. Yet to understand revisionist history fully, we need to take ourselves up to 35,000 feet—high above historians’ specific disputes about specific issues—and examine as a distinct phenomenon what historians are routinely up to down below. When, from that height, we look at what historians have long engaged themselves in doing, what do we find? How should we understand shifting interpretations of the past?
What we call revisionist history appeared at the very birth of written history. It wasn’t, as many allege, the product of the radicalism of the 1960s, nor did it spring up on the political left. Instead, starting with Herodotus and Thucydides, the celebrated Greek founders of extended historical writing in the West, it dates from 2,500 years ago. Since then, it has occupied no settled position on the ideological spectrum and in fact has gained as many interpretive wins for conservative arguments as for liberal ones.
When, in around 430 BCE, Herodotus of Halicarnassus, “the father of history,” wrote his great work, titled simply The Histories, about the wars between the Greeks and non-Greek “barbarians” from the east, he took the first steps toward distinguishing historical inquiry from myth. That shift in emphasis must be considered the first fundamental revision in ways of conceptualizing the past. Purposefully avoiding the fables of the bards, the greatest of which were Homer’s epic poems The Iliad and The Odyssey, Herodotus made history a subject of rational analysis and explanation. To do so, he had to rely on existing evidence and its analysis. As an omnivorously curious man, he drew on available written sources, what he observed on his travels, and his interviews with those who’d participated in or recalled the wars of which he wrote. He also began to banish gods as causal agents from human affairs. He subjected his sources, Homeric tales included, to critical evaluation and raised his eyebrows at some of what he learned during his research. In seeking to explain the causes and outcomes of the Persian Wars, he ventured into what today we know of as social and cultural history. The result was a vast, somewhat unruly, wonderfully engaging history—the West’s first.
Thucydides’s history—that of warfare, politics, and statecraft—originated as a fundamental break from Herodotean study, earning him the title “the first revisionist historian.” —Erich Lessing / Art Resource, NY
Yet it took no time for Herodotus to come under attack. His younger contemporary Thucydides curtly dismissed Herodotus’s pathbreaking work as “a prize essay to be heard for the moment, . . . attractive at truth’s expense.” What, to Thucydides, constituted the deficiency of his elder’s work? An Athenian general, Thucydides believed that, instead of being appealing in its art and capacious in its explanatory reach, history should maintain a tight focus on warfare, statecraft, leadership, and politics, its chief method being reliance on written texts and the direct observations of participants, its aims to instruct and, only secondarily, to please its readers. In addition, he thought, it should be entirely secular; gods were of no use for explanatory purposes. His differences with Herodotus were philosophical in the sense that they put into contention what historians should study, why they should study it, and the uses to which they should put what they learn. We venerate Thucydides’s great History of the Peloponnesian War because of its gravity and the brilliance of the speeches he had his historical figures, like Pericles, deliver. His work continues to instruct everyone who reads it.
What’s relevant here is that Thucydides’s subjects, methods, and aims held the field of historical study effectively unopposed for the next 2,300 years. Thucydidean history, in the works of Polybius, Xenophon, Sallust, Livy, Tacitus, Plutarch, and Josephus, was the kind of history that the founders of the United States absorbed in their youth. Thomas Jefferson and John Adams discussed Thucydides in their lifelong correspondence but never once mentioned Herodotus. Until recently, most Americans were the children of Thucydides in that they studied Thucydidean subjects in school and college. Only recently have those subjects been joined by others with which Herodotus was more comfortable—those of social and cultural history, through which the history of women, laboring people, African Americans, Latinos, gays and lesbians, and others have been given greatly enlarged attention. Needless to say, the emergence of all people as historical subjects has been a source of public and political friction. It also lies at the roots of the negative use of the otherwise neutral term “revisionist history.”
Two miles from where the Robert E. Lee statue once stood, the Emancipation and Freedom Monument in Richmond, Virginia, depicts a man and woman freed from slavery. —Ansel Olson
Yet there’s irony in the triumph of Thucydidean history. As Donald Kagan, the late, celebrated, traditionalist historian of the classical world, argued, Thucydides’s sharply drawn distinction between his own and Herodotean history made Thucydides “the first revisionist historian.” That is, the history of warfare, politics, and statecraft—the traditionalist, male-oriented history that long held the center of Western education—originated as a fundamental break with the kind of Herodotean history that preceded it. In addition to being revisionist, Thucydides’s History was “conservative” in that, in the context of today’s world, it turned historical thought away from broad coverage toward a set of subjects that until late in the twentieth century were considered the only ones legitimate and significant enough to examine and to teach. In doing so, Thucydides created an enduring tension within historical studies between the kind of subject-limited history he wrote and the more universalistic set of topics that Herodotus pursued. We are inescapably the inheritors of that tension, our arguments about the appropriate historical subjects to study encased in an intellectual mold two and a half millennia old. The fact that that tension has endured so long suggests that the kinds of interpretive differences that constitute historians’ arguments with each other are part of history’s ineradicable genetic makeup.
These arguments and that tension, however, are not the only constituent elements of history’s DNA. So is its variability—the changes, increasingly frequent and sometimes seismic, in the way history is conceived and expressed. In the West, the most transformative historiographic shift—conceptual, philosophic, religious, and cultural—was the one brought on by the conversion of the Roman emperor Constantine to Christianity in 312 CE. Not surprisingly, Constantine’s conversion engendered an immediate, revolutionary adjustment in historical thought to explain and justify the Christian faith’s emergence as the favored belief system of Constantine’s empire. It was an adjustment that was destined to put classical, pagan historiography on the defensive in the West ever after.
The principal author of Christianity’s historical claims was Eusebius, Bishop of Caesarea, whose world-historical achievement was, in his Ecclesiastical History, to create an account, written and published after 313 CE, of the history of Christianity and the Christian Church. No greater transformative revisionist conception of the Western past has ever appeared. Working from documents and fighting with polemical ardor on behalf of what soon became theological orthodoxy, Eusebius provided the West with the historical claims and the Western Church with the historical underpinnings from which subsequently grew most Westerners’ understanding of their world. Not even the Marxist worldview and the more recent introduction of women into the historical record have proved as powerful, permeating, deep, and enduring as the reign of Christian concepts, chronology, and subjects on the way we consider the past.
Here, again, it’s worth noting that, like Thucydides’s success in pulling historiography in a direction that would endure as historical orthodoxy for 2,300 years, so Eusebius’s Christian history has served as a traditionalist anchor of Western historiography ever since its emergence. Its incorporation of Jewish monotheism into Christian faith in place of pagan polytheism; its substitution, for older more fatalistic circular historical schemes, of the hope of future deliverance; and its claim that history had a datable origin, whether in Genesis or the Incarnation, were innovations of Eusebian thought. Since then, they’ve served as the secure mooring of the Western historical consciousness. For a second time, revisionist history became traditional; what once was a challenge to pagan orthodoxy turned into an orthodoxy of its own.
Of course, few changes in historical interpretations cause such deep and enduring revolutions in understanding of the past as the world-historical Christian historiographic transformation. Most revisionist history is normal in the sense that it’s embodied in the histories that all historians write; unlike Eusebius’s, it doesn’t create the intellectual foundations of a new religious faith and culture. Accordingly, all new historical arguments and perspectives must be assessed as to their impact on existing knowledge and convictions and in relation to whatever else they are about. Here, consideration of scale is essential. The reassessment of, say, a minor Civil War battle may seriously alter the understanding of that encounter without having a wide or deep impact on understanding of the larger Civil War. But other reinterpretations of other subjects can carry much greater consequences because they affect more significant issues. Take the change of view caused by the 1946 discovery of the Dead Sea Scrolls, which sharply altered knowledge about early Judaism; or take the detection in the 1960s of the site at L’Anse aux Meadows in Newfoundland, which offered strong support for the existence of a Norse settlement in the “new” world a millennium ago—that is, before Columbus stumbled upon and “discovered” the Western Hemisphere. All such shifts in interpretation, while of lesser impact than the emergence of Eusebian Christian history, force a reconsideration of the subject to which they relate. Photo caption
The 1960s excavation of the L’Anse aux Meadows site on a northern tip of Newfoundland offered strong support for the existence of a Norse settlement in the “new” world—before Columbus’s “discovery” of the Western Hemisphere.
It does not, however, require new evidence to shift historical understanding. Sometimes, a deepening of historical knowledge originates in the application of newly available methods to long-existing material. Such has been the recent case regarding the venerable, if long suppressed, charge, one originating in his lifetime, that Thomas Jefferson had fathered children with his slave Sally Hemings. While Hemings family lore about the relationship between the two had long existed (and been ignored) and while historians’ analyses of the timing of Hemings’s pregnancies had lent fresh circumstantial weight to some of the claims, it was only when genetic information could be supplied by Hemings’s living lineal descendants and examined by geneticists that Jefferson’s relationship with his enslaved concubine could be firmly established.
The most common source of revisionist thinking arises from shifts in perspective. The classic American example of repeated shifts of this sort concerns the enduring, complex, and deeply consequential debates over the causes and consequences of the Civil War. Since 1860, interpretations of that vast contest have mutated in concert with changes in American politics, law, attitudes, and society, changes especially relating to race. Similarly, over the past half century the emergence to political, economic, cultural, and social authority of women, African Americans, and other people previously omitted from historical consideration, plus the appointment of members of those groups to academic faculties and senior positions in other cultural institutions, have led scholars to learn more about those groups’ histories. The results have been profound. Historians now take it for granted that it’s impossible to understand any part of the past without taking into account the realities of all, and all kinds of, people.
Historians routinely put their own and others’ understanding of the past into question. They don’t do this for the fun of it. Rather, they believe it’s their responsibility to create an understanding of the past that speaks to the living. There’s nothing novel to them about arguing, then reaching at least a temporary consensus, about everything from why a Civil War battle turned out as it did to why the North won the entire conflict. There’s nothing unusual about debates among historians of women about how to frame and understand the historical suppression of women’s agency in human affairs. Arguments over the causes and consequences of such momentous events as the American, French, Russian, and Chinese revolutions never abate. Historians are currently in the midst of reevaluating the peopling of the Americas, the result of which is to leave in tatters the colonial history of the United States long taught to American school students.
It’s in the context of that reevaluation that we recently experienced a furor over which date to assign to the beginning of American history. There are many candidates to consider: the first settlements of people from Asia dating back at least 20,000 years in the territory now comprising the contiguous United States; the 1565 Spanish settlement at today’s St. Augustine, Florida, the oldest continuously occupied site of European habitation in the lower 48 states; Spain’s establishment of an outpost on Tewa people’s lands near today’s Santa Fe, New Mexico, in 1598; the 1607 English settlement at Jamestown, Virginia; the 1619 introduction of race slavery in that colony; and the Declaration of Independence. Such debates, arising from different perspectives on the same evidence and constituting classic instances of revisionist history, are unlikely ever to be fully stilled because they have to do with the origin story of the United States, with national identity, and with current circumstances. History can never be walled off from the present.
Regarded as the “father of history,” Herodotus of Halicarnassus took the first steps toward distinguishing historical inquiry from myth. —Alamy
It should thus occasion no surprise that many people find it difficult to accept such frequent challenges to what they were taught to think of as unalterably fixed and true—whether it concerns the rise of Christianity, the decline and fall of the Roman Empire, the causes of the American Civil War, the role of women in the historical past, or the birth date of American history. It’s not surprising that they ask in bewilderment: If the past can’t change, then how can the history about it do so? They’re offended to learn that at least some of what they were taught early in life as “history” is no longer fully accepted by historians and is instead taught in different ways. Like all humans, families, peoples, and nations—like many historians, too—they want to believe what they learned when young, especially since it long served as an adhesive of their identity. You mean to tell me that the Constitution was written in part to protect slavery and not only for its, and the Declaration of Independence’s, lofty stated ideals of independence, life, liberty, the pursuit of happiness, and the general welfare of all Americans? You mean to say that a historical case can be made against the use of atomic bombs on Japanese cities to end a ferocious war in which hundreds of thousands of American lives were lost and perhaps an equal number saved by the bombs? Many people are ready to dismiss all such interpretations as no more than “revisionist history”—the result of ideology, politics, and misbegotten negativism.
Many people also resist accepting the simple truth that historians, whom they consider experts, disagree about the facts and what to make of them. To be sure, everyone can agree that the Declaration of Independence was approved on July 2, 1776, and was adopted two days later. Those are the facts. But what do they mean? That independence was a summer event? That it came suddenly? That it capped at least 11 years of increasing turmoil? That it inaugurated five years of brutal warfare? And so on. Mere facts don’t have meaning. They must be given that meaning by human beings. And those human beings very well may disagree among themselves about their meaning. But, for many non-historians, disagreement among experts fits uneasily with their desire for certainty. Many condemn historians’ changing interpretations as evidence of political bias. Still others see challenges to historical orthodoxies as threats to the historical tales congruent with their political aims and thus to their power. They ask themselves, too, since historians themselves often don’t agree about the past, why anyone should have confidence in historians’ professional claims to be experts. Why should anyone cede to historians authority over what happened when those historians challenge what was long taught as gospel truth? Of course, nothing requires people to cede anything to historians. But just as it’s best to hire a licensed electrician to wire a new house rather than to do it on your own or to visit an experienced orthopedic surgeon rather than a carpenter to set your broken leg, so it’s probably preferable to turn to an experienced historian for authoritative current understanding about a particular subject as well as knowledge about currently existing disagreements over it.
Professional historians view their roles and contributions in a different light than non-historians. They consider their debates not simply as intellectual exercises but as a contribution to understanding and to the welfare of an open society. To them, revisions in knowledge about the past serve society much as a gyroscope serves to help maintain a ship’s even keel. It’s their conviction that adjustments to existing knowledge, adjustments grounded as much in known evidence as in new thought and new perspectives, allow for the potential increase and deepening of knowledge about human existence for everyone. Historians take in stride the differences among themselves, try to learn from their interprofessional disputes, and endeavor to incorporate into their own investigations what makes the most sense to them. Most importantly, they’re of the strong conviction that battles over the past are inescapable because they’re hard-wired into human nature and existence. All of this means that rarely, if ever, can “Case Closed” be stamped on a historical subject.
But if no subject is immune from reconsideration, what about the widespread conviction that history can and should be objective in the sense of being an accurate and full account of what actually occurred? It’s likely to surprise most people that today’s historians believe that it can’t be. On what grounds do they believe that?
The first reason, both existential and epistemological, grows from the impossibility of knowing all that happened in the past. In addition to being beyond re-experiencing, no past event, whether as small in scale as an auto accident or as vast as a revolution, can be recorded in its entirety while taking place or understood in its entirety afterward. Only some, never all, evidence of an event—say reports of witnesses, physical remains, and films and sound recordings—remains behind, doesn’t deteriorate, or isn’t purposefully destroyed; and what does remain is a result of such factors as its collectors’ partialities, their speed and intent in saving it, their point of view when reporting it, and sheer accident. What’s missing would tell us more, but it doesn’t exist to do so. We’re thus left to interpret what remains as best we can by using all the evidence available and subjecting it to examination for authenticity, accuracy, and meaning. But since there are likely to be different ways to interpret the surviving evidence, the results of even the most experienced historians’ interpretations will often differ. That’s because each historian, indeed all people, will bring distinct interests, sensibilities, and minds to bear when they examine the same evidence. Here is where differences over interpretation—the opportunities for revisionist history—enter the picture. Whether they arise from disputes over evidence and what it means or, as is sometimes the case, from different social or ideological views, all such differences must be, as they always are, subjected to hard-headed examination by any and all who enter such interpretive battles over the past.
Take, for example, the storming of the Bastille in Paris on July 14, 1789. Some people witnessed it from outside that fortress, others from inside. Some were guards, some prisoners, some liberators. Because none of them could observe and record everything that occurred, much evidence was simply lost. In addition, as neuroscience and memory studies demonstrate, much of what participants and onlookers perceived was selective; their eyes and ears were subject to “inattentional blindness”—the inclination of our senses to take in only what they’re focused on, not all that they might apprehend. Furthermore, the more distant in time witnesses are from an event, the less dependable is their memory of it. The result is that what historians know of anything in the past is only a part of what occurred, was witnessed, and was reported; and some of that may be inaccurate. But even were an event’s missing evidence available, our knowledge of it would remain disputable.
Because these are the realities that historians face in their day-to-day endeavors, they’re reconciled to the general proposition that it’s impossible to achieve a history of any subject that is complete, certain, accurate, unchanging, final, eternally valid, proven beyond dispute, and accepted as such by everyone. Since that’s so, it means that every history of every subject, no matter how insignificant, will differ in some respects from its predecessors—through addition, subtraction, the use of new evidence, differences in points of view, or distinctive arguments. Yet this should be no cause for despair. Each new contribution to deeper and fuller understanding of anything ought to be seen as closing in on, even if it can never reach, a meeting point at which historians will agree that all that can be said about the subject has been said so that an enduring consensus as to what probably occurred and why it did so is reached. With robust commitment to that goal, even without hope of ever fully reaching it, historians press on in confidence that, as existing evidence is reevaluated, new evidence discovered, and new minds put to work on a subject, the gap between what is known and what remains to be understood narrows bit by bit.
We should keep in mind, too, that there’s nothing novel in the interpretive variety of historical knowledge. When compared with, say, different interpretations of works of classical and jazz standards, distinct productions of operas and plays, and the multiplicity of styles in art and architecture, historical interpretations don’t seem much different. Nor do the sources of such diversity—conductors’ temperaments, instrumentalists’ styles, stage directors’ and opera producers’ visions, and artists’ take on the world around them—seem to differ much from those at work in the creation of works of history. Just as in other human activities, no two historians will, or will be able to, approach an identical subject in identical ways out of identical interests for identical purposes. For that reason, historians live easily and calmly with disagreement and argumentation.
In the end, a strong case can be made that the concept of “revisionist history” is so widely applied and the realities behind it so deeply infused into historical thought that it’s useless as a distinctive feature of any work of history. All written history is—in one respect or another, on one scale or another, and with one impact or another—revisionist in intent or consequence. Revisionist history is a universal phenomenon. Historians’ debates and shifting views of their subjects are the principal means by which they approach, while never reaching, their goal of understanding the extraordinary complexity of human life in times before their own. In fact, their arguments about the past and their varied ways of going about their work should be celebrated as signature characteristics of a democratic culture. Where enforced orthodoxy exists, there lies totalitarianism.
A democratic culture, in which different views and different “truths” are allowed to coexist and share billing in the public forum of thought, ought also to be seen as a glorious storehouse of ideas, many of them cast off in one era yet always available for reuse in another. In no case does a new way of viewing the past annihilate older ones. On the contrary: Discarded historical interpretations, like strata of ancient sedimentary rock, lie buried atop each other, out of sight until they’re made visible again for study and use. Renewed, reconsidered, and repurposed, they can then fuel fresh struggles to understand the past. Revisionist history ensures the unending renewal of knowledge of what came before our own days on earth. We should celebrate as well as accept that fact.
About the author
A cofounder of the National History Center of the American Historical Association, James M. Banner Jr. is the author most recently of The Ever-Changing Past: Why All History Is Revisionist History (Yale University Press, 2021).
Funding: The Perseus Digital Library, a widely used resource for history and literature of the Greco-Roman world, has received nine grants from NEH since 1999, the most recent in 2019 to support the development of reading tools that significantly enhance the amount of lexical and historical information available as users read text in translation. Thucydides appears in many NEH-supported projects over the years, especially educational programs for teachers and college faculty. Among the scholars whose research on Thucydides NEH has funded, Donald Kagan went on to write a well-known four-volume history of the Peloponnesian War, receive the National Humanities Medal from President George W. Bush in 2002, and deliver the Jefferson Lecture in the Humanities in 2005. In the late 1970s, Civil War historian Bell I. Wiley was awarded three separate grants for seminars on the American South from 1800 to 1865 that promised to emphasize “influences, national and regional, that have contributed to revisionism.” The terms revisionist and revisionism appear in close to 40 grant records over the years, reflecting a great range of subjects from the Cold War to the French Revolution to museology, the Holocaust, the philosophy of perception, Buddhist hermeneutics, and the poetry of Emily Dickinson. Historiography is mentioned in more than 200 NEH grant records, from a 1972 conference on medieval historiography directed by Morton Bloomfield to a 2019 research fellowship to Yiman Wang of the University of California, Santa Cruz, to support work toward a book on the Chinese-American actress Anna May Wong.
This article is available for unedited republication, free of charge, using the following credit: “Originally published as “All History Is Revisionist History” in the Summer 2022 issue of Humanities magazine, a publication of the National Endowment for the Humanities.” Please notify us at firstname.lastname@example.org if you are republishing it or have any questions.
Mary Jemison: A Documentary
Mary Jemison (Deh-he-wä-nis) (1743 – September 19, 1833) was an American frontierswoman who was captured in her teens from her home along Marsh Creek, in what is now Adams County, Pennsylvania, and adopted by the Seneca. She became fully assimilated into her captors’ culture and later chose to remain a Seneca rather than return to British colonial culture. Her statue stands today in Letchworth State Park.
Mary Jemison Monologue
Jemison was born to Thomas and Jane Jemison aboard the ship William and Mary in the fall of 1743, while en route from what is now Northern Ireland to America. They landed in Philadelphia, Pennsylvania, and joined other Protestant Scots-Irish immigrants in heading west to settle on cheaper available lands in what was then the western frontier (now central Pennsylvania). They “squatted” on territory that was under the authority of the Iroquois Confederacy, which was based in central and western New York.
The Jemisons had cleared land to make their farm, and the couple had several children. By 1755, conflicts had started in the French and Indian War, the North American front of the Seven Years’ War between France and Britain. Both sides made use of Native American allies, especially in the many frontier areas. One morning in 1755, a raiding party consisting of six Shawnee Indians and four Frenchmen captured Mary, her family (except two older brothers), and a young boy from another family. En route to Fort Duquesne (present-day Pittsburgh), then controlled by the French, the Shawnee killed Mary’s mother, father, and siblings and ritually scalped them. The 12-year-old Mary and the young boy were spared, likely because they were considered of suitable age for adoption. Once the party reached the fort, Mary was given to two Seneca, who took her downriver to their settlement. A Seneca family adopted Mary, renaming her Deh-he-wä-nis (other romanization variants include Dehgewanus, Degiwanus, and Dickewamis), which she learned meant “a pretty girl, a handsome girl, or a pleasant, good thing.”
When she came of age, she married a Delaware man named Sheninjee, who was living with the band. They had a son whom she named Thomas after her father. Sheninjee took her on a 700-mile (1,100 km) journey to the Sehgahunda Valley along the Genesee River in present-day western New York state. Although Jemison and their son reached this destination, her husband did not. Leaving his wife one day to hunt, he had taken ill and died.
Mary, now a widow, and her child were taken in by Sheninjee’s clan relatives; she made her home at Little Beard’s Town (present-day Cuylerville, New York). She later married a Seneca named Hiakatoo; they had six children together: Nancy, Polly, Betsey, Jane, John, and Jesse. In 1811 John murdered his half-brother Thomas. Some time later John murdered his brother Jesse, and in 1817 John was himself murdered by two men from the Squawky Hill Reservation.
During the American Revolutionary War, the Seneca were allies of the British, hoping that victory would enable them to expel the encroaching colonists. Jemison’s account of her life includes some observations during this time. She and others in the Seneca town helped supply Joseph Brant (Mohawk) and his force of Iroquois warriors from various nations, who fought against the rebel colonists. After the war, the Seneca were forced to give up their lands to the United States as allies of the defeated British. In 1797 the Seneca sold much of their land at Little Beard’s Town to European-American settlers. At that time, during negotiations with the Holland Land Company held at Geneseo, New York, Mary Jemison proved to be an able negotiator for the Seneca tribe. She helped win more favorable terms for giving up their rights to the land at the Treaty of Big Tree (1797).
Late in life, she told her story to the minister James E. Seaver, who published it as a classic “captivity narrative,” Narrative of the Life of Mrs. Mary Jemison (1824; latest ed. 1967). Although some early readers thought that Seaver must have imposed his own beliefs, today many history scholars think the memoir is a reasonably accurate account of Jemison’s life story and attitude.
In 1823, the Seneca sold most of the remainder of the land in that area, except for a 2-acre (8,100 m²) tract of land reserved for Jemison’s use. Known by local residents as the “White Woman of the Genesee”, Jemison lived on the tract until she sold it in 1831 and moved to the Buffalo Creek Reservation. Jemison lived the rest of her life with the Seneca Nation. She died on September 19, 1833, aged 90. She was initially buried on the Buffalo Creek Reservation.
Mary’s Account of Her Capture
“The party that took us consisted of six Indians and four Frenchmen, who immediately commenced plundering, as I just observed, and took what they considered most valuable; consisting principally of bread, meal, and meat. Having taken as much provision as they could carry, they set out with their prisoners in great haste, for fear of detection, and soon entered the woods.
On our march that day, an Indian went behind us with a whip, with which he frequently lashed the children, to make them keep up. In this manner we traveled till dark, without a mouthful of food or a drop of water, although we had not eaten since the night before. Whenever the little children cried for water, the Indians would make them drink urine, or go thirsty. At night they encamped in the woods, without fire and without shelter, where we were watched with the greatest vigilance. Extremely fatigued, and very hungry, we were compelled to lie upon the ground, without supper or a drop of water to satisfy the cravings of our appetites. As in the daytime, so the little ones were made to drink urine in the night, if they cried for water. Fatigue alone brought us a little sleep for the refreshment of our weary limbs; and at the dawn of day we were again started on our march, in the same order that we had proceeded the day before.
About sunrise we were halted, and the Indians gave us a full breakfast of provision that they had brought from my father’s house. Each of us, being very hungry, partook of this bounty of the Indians, except father, who was so much overcome with his situation, so much exhausted by anxiety and grief, that silent despair seemed fastened upon his countenance, and he could not be prevailed upon to refresh his sinking nature by the use of a morsel of food. Our repast being finished, we again resumed our march; and before noon passed a small fort, that I heard my father say was called Fort Canagojigge.
That was the only time that I heard him speak from the time we were taken till we were finally separated the following night.
Toward evening, we arrived at the border of a dark and dismal swamp, which was covered with small hemlocks or some other evergreen, and various kinds of bushes, into which we were conducted; and having gone a short distance, we stopped to encamp for the night.
Here we had some bread and meat for supper; but the dreariness of our situation, together with the uncertainty under which we all labored, as to our future destiny, almost deprived us of the sense of hunger, and destroyed our relish for food.
As soon as I had finished my supper, an Indian took off my shoes and stockings, and put a pair of moccasins on my feet, which my mother observed; and believing that they would spare my life, even if they should destroy the other captives, addressed me, as near as I can remember, in the following words:
‘My dear little Mary, I fear that the time has arrived when we must be parted for ever. Your life, my child, I think will be spared; but we shall probably be tomahawked here in this lonesome place by the Indians. Oh! how can I part with you, my darling? What will become of my sweet little Mary? Oh! how can I think of your being continued in captivity, without a hope of your being rescued? Oh! that death had snatched you from my embraces in your infancy: the pain of parting then would have been pleasing to what it now is; and I should have seen the end of your troubles! Alas, my dear! my heart bleeds at the thought of what awaits you; but, if you leave us, remember, my child, your own name, and the names of your father and mother. Be careful and not forget your English tongue. If you shall have an opportunity to get away from the Indians don’t try to escape; for if you do they will find and destroy you. Don’t forget, my little daughter, the prayers that I have learned you – say them often: be a good child, and God will bless you! May God bless you, my child, and make you comfortable and happy.’
During this time, the Indians stripped the shoes and stockings from the little boy that belonged to the woman who was taken with us, and put moccasins on his feet, as they had done before on mine. I was crying. An Indian took the little boy and myself by the hand, to lead us off from the company, when my mother exclaimed, ‘Don’t cry, Mary! – don’t cry, my child! God will bless you! Farewell – farewell!’
The Indian led us some distance into the bushes or woods, and there lay down with us to spend the night. The recollection of parting with my tender mother kept me awake, while the tears constantly flowed from my eyes. A number of times in the night, the little boy begged of me earnestly to run away with him, and get clear of the Indians; but remembering the advice I had so lately received, and knowing the dangers to which we should be exposed, in traveling without a path and without a guide, through a wilderness unknown to us, I told him that I would not go, and persuaded him to lie still till morning.
My suspicion as to the fate of my parents proved too true; for soon after I left them they were killed and scalped, together with Robert, Matthew, Betsey, and the woman and her two children, and mangled in the most shocking manner.
After a hard day’s march we encamped in a thicket, where the Indians made a shelter of boughs, and then built a good fire to warm and dry our benumbed limbs and clothing; for it had rained some through the day. Here we were again fed as before. When the Indians had finished their supper, they took from their baggage a number of scalps, and went about preparing them for the market, or to keep without spoiling, by straining them over small hoops which they prepared for that purpose, and then drying and scraping them by the fire.
Having put the scalps, yet wet and bloody, upon the hoops, and stretched them to their full extent, they held them to the fire till they were partly dried, and then, with their knives, commenced scraping off the flesh; and in that way they continued to work, alternately drying and scraping them, till they were dry and clean. That being done, they combed the hair in the neatest manner, and then painted it and the edges of the scalps, yet on the hoops, red. Those scalps I knew at the time must have been taken from our family, by the color of the hair. My mother’s hair was red; and I could easily distinguish my father’s and the children’s from each other. That sight was most appalling; yet I was obliged to endure it without complaining. In the course of the night, they made me to understand that they should not have killed the family, if the whites had not pursued them.”
Legacy and honors
In 1874, at the request of her descendants, Jemison’s remains were transferred and reinterred near the 1765 Seneca Council House, which had been relocated from the former Caneadea Reservation to the estate of William Pryor Letchworth. Letchworth had purchased the former council house and had it restored by John Shanks, a Seneca grandson of Jemison; the work was completed at his Glen Iris Estate in 1872. Letchworth invited Seneca and state officials for a rededication of the Council House that year. In 1881, Letchworth acquired a cabin formerly belonging to Mary’s daughter, Nancy Jemison, and had it moved from Gardeau Flats to near the Council House and the site of Mary’s grave. In 1906 he bequeathed his entire estate to New York; near present-day Castile, it is today surrounded by Letchworth State Park. A bronze statue of Mary Jemison, created in 1910 by Henry Kirke Bush-Brown, marks her grave. Following restoration of the grounds to Letchworth’s time, the memorial has stood between the two cabins since 2006. Dr. George Frederick Kunz helped pay for and commission the 1910 memorial to Jemison, who was known as “The White Indian of the Genesee.” Dr. Kunz was fascinated by Native Americans and contributed much to their memorials in New York.
In popular culture
Indian Captive: The Story of Mary Jemison (1941) is a fictionalized version of Jemison’s story for young readers, written and illustrated by Lois Lenski. At the end of this novel, she is renamed by the Seneca as “little woman of great courage.”

Rayna M. Gangi’s Mary Jemison: White Woman of the Seneca (1996) is a fictionalized version of Jemison’s story.

Deborah Larsen’s The White (2002) is a fictionalized version of Jemison’s story, imagining her process of assimilation to the Native American culture in which she lived.
Captivity Narrative: William Barnett
William Barnett was taken in August 1757 at age nine during a Delaware Indian raid in Lancaster County, Pa. His father traveled with George Croghan to Fort Pitt to retrieve him. Below is an excerpt of their first meeting after the son was returned from captivity.
The account of the Barnett family can be found in The Pennsylvania-German in the French and Indian War: A Historical Sketch by Henry M.M. Richards.
Faithful to his promise, Col. Croghan used every endeavor to obtain him. At length, through the instrumentality of traders, he was successful. He was brought to Fort Pitt, and, for want of an opportunity to send him to his father, was retained under strict guard, so great was his inclination to return to savage life. On one occasion he sprang down the bank of the Allegheny River, jumped into a canoe, and was midway in the stream before he was observed. He was quickly pursued, but reached the opposite shore, raised the Indian whoop, and hid himself among the bushes. After several hours’ pursuit he was retaken and brought back to the fort. Soon after, an opportunity offering, he was sent to Carlisle. His father, having business at that place, arrived after dark on the same day, and, without knowing, took lodging at the same public house where his son was, and who had been some time in bed…The sleeping boy was awakened and told his father stood by his bed. He replied in broken English, ‘No my father.’ At this moment his father spoke, saying, ‘William, my son, look at me; I am your father!’ On hearing his voice and seeing his face he sprang from the bed, clasped him in his arms, and shouted, ‘My father! My father is still alive!’ All the spectators shed tears, the father wept like a child, while from his lips flowed thankful expressions of gratitude, to the Almighty disposer of all events, that his long-lost child was again restored.
Captivity Narrative: Jonathan Alder
In 1782, when Jonathan Alder was nine years old, he was captured and adopted by a Shawnee and Mingo family. When he entered adulthood, his Indian parents offered him freedom, but he chose not to leave them. It wasn’t until 1795, after both parents died, that he began a slow return to white society.
Below is an excerpt from Alder’s captivity narrative.
We spent the balance of the fall in hunting. We passed through the west side of the Scioto River stopping awhile on the bottoms where Columbus now is. There was large fields of corn and an Indian town on the east side of the river where the new state prison is, which I will speak of hereafter. Then, we struck up betwixt the river and Big Darby. There was a great beech crop that fall and bear was very fat and fine. We killed bear and deer and by the time we got back onto the Mad River, our horses was well loaded with skins.
We had arrived back safely without any serious trouble and the one thing that gave me great consolation was that we had made the trip and they had neither killed nor hurt anyone. I was very careful never to tell them how I heard that door open and shut for fear they might think I was not a true partner.
I was now bordering close onto manhood. One morning my old Indian father called me and told me that I was now near the age that young men should be free and doing for themselves. I now had the right to come and go and stay where I pleased and was not under any restraint whatsoever, particularly from himself and my mother. “But,” said he, “if you choose, you can stay with us as long as we live and then we may eat of your venison and bear meat and oil, which will be a great consolation to us in our old days. But all the profits arising from the sale of your skins and furs shall be yours. I shall still draw my rations of clothing, and blankets, and powder, and lead from the British government and you shall always have what we don’t need.”
I thanked them both very kindly for the liberty they granted me, but told them I had no desire to leave them; that I preferred to stay with them as long as they lived if I should outlive them; that they had been very kind and good to me and that I would feel an obligation to them as long as I lived. “My white mother I have almost forgotten and, of course, I shall never see again,” I told them. “I accept you as my parents. I acknowledge myself to be your son by adoption and am under all obligations to you as such.” My mother came up to me and held out her hands. She was so overcome that she did not speak, but I saw that her eyes were full. My father came forward and shook hands with me without saying anything more. I must acknowledge that my feelings were greatly agitated, for everything had happened so unexpectedly. To avoid any further outbursts of feeling I picked up my gun and shot pouch and went off hunting. I must confess that the thought that I was now my own man, free to go and come, and act, and do for myself agitated my mind more or less all day. I had been free before, but this seemed to be a new era in my life. I killed a deer in the afternoon and carried it off. Everything seemed to be cheerful and pleasant, but there seemed to be a different feeling with us such as I cannot express. From this time on when I would sell my skins and furs, I had a purse of my own and kept the money and bought my own clothes.
Learn more about Jonathan Alder in his captivity narrative, “A History