Fluke
On October 30, 1926, Mr. and Mrs. H. L. Stimson stepped off a steam train in Kyoto, Japan, and checked into room number 56 at the nearby Miyako Hotel.
In a remote outpost of New Mexico, the scientists and soldiers saw a potential savior: a new weapon of unimaginable destruction that they called the Gadget.
Thirteen men were asked to join the Target Committee, an elite group that would decide how to introduce the Gadget to the world. Which city should be destroyed? The Target Committee agreed: Kyoto must be destroyed. The committee also agreed on three backup targets: Hiroshima, Yokohama, and Kokura. Why was Kyoto spared? And why was Nagasaki — a city that hadn’t even been considered a top-tier bombing target — destroyed?
By 1945, Mr. H. (Henry) L. Stimson had become America’s secretary of war, the top civilian overseeing wartime operations. The final targeting list contained four cities: Hiroshima, Kokura, Niigata, and a late addition, Nagasaki. Stimson had saved what the generals called his “pet city.”
The second bomb was to be dropped on the city of Kokura. Nagasaki’s civilians were doubly unlucky: the city was a last-minute addition to the backup targeting list, and it was leveled because of a fleeting window of poor weather over another city.
The story of Kyoto and Kokura poses an immediate challenge to our convenient, simplified assumption that cause and effect follow a rational, ordered progression.
We want a rational explanation to make sense of the chaos of life.
When we consider the what-if moments, it’s obvious that arbitrary, tiny changes and seemingly random, happenstance events can divert our career paths, rearrange our relationships, and transform how we see the world.
What we ignore are the invisible pivots, the moments that we will never realize were consequential, the near misses and near hits that are unknown to us because we have never seen, and will never see, our alternative possible lives.
When we try to explain the world — to explain who we are, how we got here, and why the world works the way it does — we ignore the flukes. We willfully ignore a bewildering truth: but for a few small changes, our lives and our societies could be profoundly different. When given the choice between complex uncertainty and comforting — but wrong — certainty, we too often choose comfort.
There is a concept in philosophy known as amor fati, or love of one’s fate. We must accept that our lives are the culmination of everything that came before us.
We are the surviving barbs of a chain-link past, and if that past had been even marginally different, we would not be here.
In his 1922 play, Back to Methuselah, George Bernard Shaw writes, “Some men see things as they are and ask, ‘Why?’ I dream things that never were and ask, ‘Why not?’”
I began to wonder whether the history of humanity is just an endless, but futile, struggle to impose order, certainty, and rationality onto a world defined by disorder, chance, and chaos.
As the late philosopher Hannah Arendt once put it, “The smallest act in the most limited circumstances bears the seed of boundlessness, because one deed, and sometimes one word, suffices to change every constellation.”
Convergence is the “everything happens for a reason” school of evolutionary biology. Contingency is the “stuff happens” theory.
Uncertainty has long been shunned, shoved aside by rational-choice theories and clockwork models. Small variations are dismissed as “noise” that should be ignored, so we can focus on the real “signal.”
Several decades ago, a heretic of evolutionary theory named Motoo Kimura challenged that conventional wisdom, insisting that small, arbitrary, and random fluctuations matter more than we think.
Changing Anything Changes Everything
When the world functions “normally,” life seems to have a predictable, well-ordered regularity, a regularity that we convince ourselves we can mostly direct, masters of our own destinies. There’s just one problem: it’s a lie. It’s the lie that defines our times. We might call it the delusion of individualism.
In 1814, a French polymath named Pierre-Simon Laplace was grappling with the enduring mysteries of such an intertwined existence.
Laplace surmised that every event, every gust of wind, every molecule, is governed by a rigid set of scientific rules: Newton’s unbending laws of nature.
Laplace came up with an intriguing thought experiment. Imagine you had a supernatural creature — now referred to as Laplace’s demon — with omniscient intelligence. It would have no power to change anything, but it could know, with absolute precision, every detail about every single atom in the universe.
In other words, with perfect information, the demon would see reality across time and space like a solved jigsaw puzzle, so it would understand why everything was happening and could therefore know what would happen next.
So, which is it? Do we live in a clockwork universe, or an uncertain one? Sixty years ago, a man named Edward Norton Lorenz brought us closer to the answer. Even in a clockwork universe with controlled conditions, minuscule changes can make an enormous difference.
Lorenz’s findings created the concept of the butterfly effect, the notion that a butterfly flapping its wings in Brazil could trigger a tornado in Texas. Lorenz had inadvertently given birth to chaos theory. The lesson was clear: if Laplace’s demon could exist, its measurements would need to be flawless.
The tiniest fluctuations matter. Confidence in a predictable future, therefore, is the province of charlatans and fools.
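To see how unforgiving that requirement is, here is a minimal sketch in Python (my own illustration, not the book's; the parameter r and the starting values are arbitrary choices): iterate the logistic map from two starting points that differ by one part in a billion and watch the trajectories part ways.

```python
# Iterate the logistic map x_{n+1} = r * x * (1 - x) in the chaotic regime
# from two nearly identical starting points and track how far apart they drift.

r = 4.0                              # parameter in the chaotic regime
x, y = 0.300000000, 0.300000001      # initial conditions differing by 1e-9

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.6f}")

# Within a few dozen steps the two trajectories bear no resemblance to each
# other -- the "flawless measurement" Laplace's demon would need.
```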
Whatever the reasons we tend to discount the total unity of our world and instead divide everything into neat boxes, interconnectedness is reality. It drives everything.
In recent centuries, the world has become more intertwined, not less.
When you step in a river, you change it. Nothing is static. Even microscopic changes add up over time.
We remain stuck with a limited field of view. Expand that view, as the astronauts did, gazing out of their spacecraft, and it immediately becomes clear that individualism is a mirage. Connection defines us. At first, an intertwined world seems terrifying.
Our chaotic, intertwined existence reveals a potent, astonishing fact: We control nothing, but influence everything.
The myth of a controllable world that each of us can tame is ubiquitous, particularly in modern Western society. The American dream is the delusion of individualism on steroids. Everything is up to us!
Humans like straightforward stories, in which X causes Y, not in which a thousand disparate factors combine to cause Y.
There is a fundamental division in philosophy, between the atomistic and the relational view of the world. Western philosophical traditions tend to emphasize atomism. Eastern philosophy tends to be dominated by relational thinking. The connections between components within the system, rather than just the components themselves, are most important.
The relational and atomistic divide is mirrored in religion. Hindus refer to the Brahman, the concept of total unity for all that exists in the universe, in contrast to the atman, or individual soul, which only has the illusion of independence from the whole.
Over time, individualism has been reinforced because in modernity we’ve also lost our sense of connection to the natural world.
Modern humans master a tiny slice of the world. But by coordinating our efforts and putting those slices together, we’ve unlocked potential that was previously unimaginable. That was the great triumph of reductionism, in which it’s assumed that complex phenomena can be best understood by breaking them down into their individual parts.
We will tackle six big questions:
- Does everything happen for a reason, or does stuff . . . just happen?
- Why do tiny changes sometimes produce huge impacts?
- Why do we cling to a storybook version of reality even if it’s not true?
- Can’t we just tame flukes with better data and more sophisticated probability models?
- Where do flukes come from — and why do they blindside us?
- Can we live better, happier lives if we embrace the chaos of our world?
Everything Doesn’t Happen for a Reason
Random fluctuations can spread out across time and space to cause unexpected opportunities or calamitous disaster — or both.
We tend to systematically downplay the role of luck — the word we use to describe the random and the accidental intersecting with our lives.
Most human traits, including intelligence, skills, and hard work, are normally distributed, following a Gaussian, or bell-shaped, curve, a bit like an inverted U. Wealth, by contrast, isn’t normally distributed.
Some billionaires may be talented. All have been lucky. And luck is, by definition, the product of chance.
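One hedged way to see why wealth comes out so much more skewed than talent is a toy simulation (mine, not the author's; the step counts and ranges are arbitrary): outcomes built by adding many small independent factors come out bell-shaped, while outcomes that compound multiplicatively through lucky and unlucky breaks produce a few enormous winners.

```python
# Compare an additive process (bell-shaped, like most human traits) with a
# multiplicative one (heavy-tailed, like wealth compounding through luck).
import random
import statistics

random.seed(1)

additive = [sum(random.uniform(-1, 1) for _ in range(50)) for _ in range(10_000)]

multiplicative = []
for _ in range(10_000):
    w = 1.0
    for _ in range(50):
        w *= 1 + random.uniform(-0.2, 0.25)   # 50 rounds of lucky/unlucky compounding
    multiplicative.append(w)

print("additive:       mean %.2f, max %.2f" % (statistics.mean(additive), max(additive)))
print("multiplicative: mean %.2f, max %.2f" % (statistics.mean(multiplicative), max(multiplicative)))
# The additive maximum sits a few standard deviations above its mean;
# the multiplicative maximum dwarfs its mean, like billionaires versus median wealth.
```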
We should all take a bit less credit for our triumphs and a bit less blame for our failures. Some things — even important and maddening and horrific things — just happen. That’s the inevitable result of an interconnected chaotic world.
“Turtles all the way down” has become a shorthand for an infinite regress, in which each explanation stands atop another, which stands atop another, on and on. That’s how contingency works.
Survivors determine the future. Ruthless, but effective.
The snooze-button effect. If the world is mostly convergent, then it won’t matter if you get out of bed five minutes later than you were originally planning. But if the world is sometimes diverted by small, contingent events, then each tap of the snooze button could change everything.
Sadly, there’s only one Earth, we can’t rewind time, and these contingency-versus-convergence experiments remain possible only with microbes in a science lab. For the moment, though, it seems that Richard Lenski and Zachary Blount — and a much larger team of researchers who have worked on the LTEE, the decades-long Long-Term Evolution Experiment with E. coli — have resolved the contingency-versus-convergence debate: to us, the world appears convergent, until we realize, with a jolt, that it isn’t.
We live, as do E. coli, in a world defined by what we might call contingent convergence, which is broadly how change happens. There’s order and structure, but the snooze-button effect is real. That leads to an unsettling, but also exhilarating, truth: every moment matters.
The world is full of “good enough” solutions, which others call a kludge approach. A kludge is defined as “an ill-assorted collection of parts assembled to fulfill a particular purpose.”
With tiny changes, so much could turn out differently. It’s not just true in evolution, but also in our lives and our societies.
Why Our Brains Distort Reality
Imagine two creatures: we might call them the Truth Creature and the Shortcut Creature. The Truth Creature sees everything exactly as it is. By contrast, the Shortcut Creature can’t see any of that detail, but instead only perceives and processes that which is most useful to it.
Which creature would you rather be? We are tempted to side with the truth. But that would be a fatal mistake. Shortcut Creatures always win.
Most of us assume that truth is, by definition, useful. But consider it a bit more carefully, and it becomes clear that’s not the case. We do not see reality, but rather a “manifest image” of it, a useful illusion that helps us navigate the world.
Our perceptions of reality are the contingent by-product of evolution by natural selection.
When truth and usefulness are in conflict, the Shortcut strategy always eventually beats out the Truth strategy.
Evidence is accumulating in neuroscience that one mechanism by which we get better at navigating the world is “synaptic pruning.” The brains of newborns are packed with 100 billion neurons.
As Alison Barth, a neuroscientist at Carnegie Mellon University, explains, “Networks that are constructed through overabundance and then pruning are much more robust and efficient.”
Our perception of reality is just one possible way of seeing the world. With three types of photoreceptors in our eyes (red, green, and blue), we’re known as trichromats.
Another trick of the Shortcut Creature is that human brains are pattern detection machines.
But our brains also evolved to be allergic to chance and chaos, wrongly detecting patterns and proposing false reasons for why things happen rather than accepting the accidental or the arbitrary as the correct explanation.
We’ve evolved to overdetect patterns.
Superstition is not, as many unfairly believe, the province of simpletons. Instead, it is an understandable and nearly universal way that humans assert control when they feel that ordinary, rational methods of manipulating the world have become fruitless.
Teleological bias is related to a phenomenon called apophenia, the inference of a relationship between two unrelated objects, or a mistaken inference of causality.
Hegel and Marx were wrong: nature and complex systems such as modern human society are not moving relentlessly toward some idealized end point.
Humans trying to navigate the unimaginable complexity of modern society are now facing an evolutionary trap of our own because our minds didn’t evolve to cope with a hyperconnected world that relentlessly converges toward a knife’s edge, in which one tiny fluke can change everything in an instant. The Shortcut Creature doesn’t do quite so well when navigating a new, more complex world.
The Human Swarm
In 1875, a plague of locusts the size of California swept across the United States, devouring everything in its path. An estimated 3.5 trillion insects formed a cloud eighteen hundred miles long. In total, the swarms devoured three-quarters of the value of all farm products in the United States, the modern equivalent of $120 billion in damage.
Scientists have long been perplexed by why swarms form. Recent research may have finally solved that puzzle — and it’s all about density. When there are fewer than seventeen locusts per square meter, each locust keeps to itself.
Locusts begin to march as a unified swarm at precisely 73.7 locusts per square meter.
Humans have, over thousands of years, transitioned from societies that mirror the medium-density locusts to a high-density swarm.
The past was largely defined by local instability. Day-to-day life was unpredictable. Modern society is fundamentally different. Like the locust army marching as one, there is now immense order and apparent regularity, even as the population soars and density hits unprecedented levels.
Modern human society has an unprecedented regularity. We live in a world that is more ordered, regimented, and structured than ever before.
That’s the paradox of the swarm. Human society has become simultaneously far more convergent toward ordered regularity (which makes it appear seductively predictable) and also far more contingent (which makes it fundamentally uncertain and chaotic).
Few complex social systems can be captured with a stripped-down version of reality.
The answer lies with a relatively new realm of knowledge called complexity science and complex-adaptive-systems research. It’s concerned with states of the world that are between the two extremes of order and disorder, between pure randomness and stability, between control and anarchy. It’s an entirely different lens with which to view the world, making everything come into sharper focus.
A Swiss watch is complicated, but not complex.
What makes something “complex”? Complex systems, such as locust swarms or modern human society, involve diverse, interacting, and interconnected parts (or individuals) that adapt to one another.
Complex adaptive systems are path dependent, a bit like the Garden of Forking Paths.
There’s neither predictable order, nor disordered chaos. Instead, the market lies somewhere between the two, with millions of interacting agents producing its behavior. It’s a decentralized system that, like the swarm, can’t be controlled.
The interactions of lots of diverse, interconnected agents or units that constantly adapt to one another can produce a phenomenon known as emergence. With decentralized, self-organized emergence, complex adaptive systems produce regularities and patterns.
Basins of attraction describe how, over time, a system will converge toward one, or many, particular outcomes. In complex systems, basins of attraction can change over time, creating instability. When the number of basins of attraction increases abruptly, a system can become more prone to shocks.
When complex systems are approaching the edge of chaos, primed to hit a tipping point, they can start to show warning signs. One red flag is a newly discovered phenomenon that scientists call critical slowing down. The “slowing down” refers to how long it takes a system to return to an equilibrium after a minor disturbance.
Then there’s a phenomenon known as self-organized criticality, a name coined in 1987 by Per Bak, a Danish physicist who showed how the concept applied to grains of sand in a sandpile. Everything seems perfectly ordered, stable, and predictable as the pile grows steadily. That is, until the sandpile hits a critical state and one additional grain of sand triggers an enormous avalanche.
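A compact sketch of the sandpile idea, assuming the standard Bak–Tang–Wiesenfeld toppling rule (the grid size, grain counts, and helper name are arbitrary choices of mine): most dropped grains do nothing, but occasionally a single grain sets off a huge cascade.

```python
# Drop grains one at a time onto a small grid; any cell holding 4 or more
# grains topples, giving one grain to each neighbor (grains at the edge fall
# off). Track how many topples each single dropped grain triggers.
import random

N = 20
grid = [[0] * N for _ in range(N)]
random.seed(0)

def drop_grain():
    """Add one grain at a random site and return the resulting avalanche size."""
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1
    topples = 0
    unstable = [(i, j)] if grid[i][j] >= 4 else []
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < 4:
            continue
        grid[x][y] -= 4
        topples += 1
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < N and 0 <= ny < N:
                grid[nx][ny] += 1
                if grid[nx][ny] >= 4:
                    unstable.append((nx, ny))
    return topples

sizes = [drop_grain() for _ in range(50_000)]
print("largest avalanche:", max(sizes), "topples from a single grain")
# Most drops cause zero topples; a rare few reshuffle much of the pile at once.
```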
As Scott Page rightly points out, each individual controls almost nothing, but influences almost everything.
Modern society is a complex system, seemingly stable, teetering on the edge of chaos — until everything falls apart due to a small change, from the accidental to the infinitesimal.
The outbreak of World War I nicely illustrates the relationship between criticality and contingency. The license plate on the archduke’s car read A-III-118, which could also be read as A-11-11-18. Armistice Day, when the guns fell silent on the Western Front, was November 11, 1918, or A-11-11-18.
We are now extremely prone to catastrophic cascades triggered by small fluctuations. Yet, we keep piling our sandpiles higher and higher, tempting fate. Modern society is now so intertwined that ordinary individuals, not just kings and popes and generals, can redirect the entire human swarm.
A fully optimized system pushed to the edge of chaos is more likely to drift toward tipping points and cascades.
The internet is fundamentally different. It’s a revolution that has, for the first time in history, created an explosion in who can create widely disseminated information. It’s a fundamental shift: from few-to-many communication to many-to-many communication.
As the historian Felipe Fernández-Armesto writes, “Ideas are the main motors of change in human cultures and . . . the pace of change is a function of the mutual accessibility of ideas.” That motor of change is now in overdrive.
Heraclitus Rules
Sentient beings, including humans, are prediction machines. Our survival depends on it. Decisions to forage, fight, or take flight are all based on attempts to calculate the unknown.
Humans have long accepted some uncertainty beyond our control. For millennia, surprisingly few systematic attempts were made to precisely measure or quantify uncertainty and risk.
The Arabic word for dice, al-zahr, is where we get the word hazard, a modern synonym for risk, and the Spanish word azar, which means “chance” or “randomness.”
The first usage of the Latin word resicum, which gave birth to our word risk, emerged from a notary contract in the Italian maritime republic of Genoa in 1156.
Breakthroughs in early probability theory were driven by games of chance. Most notably, in 1654, Blaise Pascal and Pierre de Fermat proposed a solution to what is known as an interrupted game.
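As a worked example of the interrupted game (the classic “problem of points”; the concrete scores and the helper function below are my own illustration, not a quote from the book): two players stake equal amounts and play fair rounds until one reaches a target score, but play stops early. Pascal and Fermat’s insight was to split the pot according to each player’s chance of having gone on to win.

```python
# Fair split of the pot when a first-to-N game is interrupted early,
# following the standard Pascal-Fermat reasoning: look at the handful of
# rounds that could still decide the game and count the ways each player wins.
from math import comb

def share_for_leader(needs_a, needs_b):
    """Chance the leading player (needing `needs_a` more wins) would have taken
    the pot against an opponent needing `needs_b` more wins, with fair rounds."""
    remaining = needs_a + needs_b - 1          # at most this many rounds settle it
    wins = sum(comb(remaining, k) for k in range(needs_a, remaining + 1))
    return wins / 2 ** remaining

# Example: first to 5 points, interrupted at 4-3. Leader needs 1 win, trailer needs 2.
print(f"leader's fair share of the pot: {share_for_leader(1, 2):.2%}")   # 75.00%
```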
As the mathematical tools grew, a greater proportion of the world could be understood and calculated. Soon, a craze swept the intellectuals of European high society: to count everything.
Today, probability theory has become a sophisticated and lucrative branch of mathematics.
We too often pretend that we can answer questions that we cannot. That overconfidence has meant that we write out chance, chaos, and contingent flukes because they don’t fit into the neater world we like to imagine exists.
At the smallest levels, matter behaves in ways that seem impossible. Conventional interpretations of quantum experiments imply that tiny particles can be in two places at once, a phenomenon called superposition. However, when we observe those particles, they collapse into a single position, suggesting that reality changes depending on whether someone’s looking.
In the early twentieth century, a renegade economist named Frank Knight challenged the conventional economic wisdom, which relied on a series of simplistic assumptions. Knight persuasively articulated the difference between, in his terminology, uncertainty versus risk.
For example, tossing a six-sided die is a matter of risk rather than uncertainty. We don’t know which exact number it will land on, but we do know that each number has a one-in-six chance of ending up on top. Risk can be tamed. Uncertainty, by contrast, refers to situations in which a future outcome is unknown and the underlying mechanism producing that outcome is also unknown — and may even be constantly changing.
Don’t mistake untamable chaos for tamable chance.
The world of questions can be split into two categories: those that must be answered and those that need not be. We might call these the “take your best shot” questions versus the “don’t bother trying” questions.
To confuse matters further, an endless supply of words describes probabilities: Bayesian, objective, subjective, epistemic, aleatory, frequentist, propensity, logical, inductive, or predictive inference.
There are two main camps for probability statements. As the eminent philosopher of science Ian Hacking explains, many probabilities are part of either frequency-type probability or belief-type probability. The frequency type is mostly based on how often an outcome will occur. Belief-type probabilities are completely different. They are expressions of a degree of confidence that you have in a specific claim or future outcome, based on the available evidence.
As John Kay and Mervyn King put it in their excellent book, Radical Uncertainty, probabilities can be best applied to situations in which “the possible outcomes are well defined, the underlying processes that give rise to them change little over time, and there is a wealth of [relevant] historic information.”
Probability-based estimates rely on accurate categories.
The Land of Heraclitean Uncertainty. Heraclitus is the pre-Socratic philosopher who spoke of the ever-changing river and the ever-changing man. Heraclitus was clearly right that change is constant. When uncertainty is produced because the world itself is changing, that’s Heraclitean uncertainty, and probabilities quickly become useless, as past patterns can become meaningless in an instant. We get lost when we use probabilities in the Land of Heraclitean Uncertainty.
Here’s the problem: weather patterns are contingent. The weather an hour from now is a manageable risk, but because the system is sensitive to tiny, unpredictable fluctuations, it quickly becomes more uncertain the longer you gaze into the future. Chaos theory takes over. We might call this chaotic uncertainty.
We imagine that we can calculate what we can’t anticipate. This is a serious problem for modern data analysis because most research efforts only collect data for variables that are already considered important. An unknown unknown is a piece of information you’d never consider tracking down until after its importance became clear. Unknown unknowns are therefore directly related to what Nassim Nicholas Taleb calls Black Swans, in which we are surprised by rare, unexpected, and consequential events that can be neither anticipated nor quantified by equations.
The world is now changing so quickly that past regularities are becoming less predictive of the future than ever before. The shelf life of probability is getting shorter. This has created a strange paradox. The future is becoming more uncertain and often impossible to predict. At the same time, we are making increasingly precise predictions that often turn out to be wildly wrong. We put blind faith in probability at our peril.
Rather than embrace a healthy dose of uncertainty, we cling to false certainty.
The problem, however, is that the models have become so influential that we can forget that they are models — deliberate simplifications that are, by design, inaccurate representations of the thing itself, just as a map is a useful simplification of the territory it helps us navigate. Economic models that purport to explain human behavior are just like Google Maps: sometimes useful, but vastly different from the economy itself.
Decision theory is used, often to great effect, to impose rigorous thinking on difficult problems. But there’s a hitch. The assumptions for decision theory apply best to a simple social world that doesn’t exist.
Decision theory is therefore a flawed, sometimes useful, way of navigating the garden of forking paths before us.
The Storytelling Animal
In the storybook version of life, humans are rational utility maximizers who make choices according to a structured internal flowchart of risks and rewards, penalties and payoffs. In truth, humans act according to our beliefs — the “why” that drives us. Those beliefs are constantly swayed by the arbitrary, the accidental, and the seemingly random. But when we study ourselves — when we try to understand what makes society tick — we systematically ignore this obvious fact.
Rational choice theory and its intellectual offshoots have dominated social scientific thinking about human decision-making ever since Adam Smith advanced its core assumptions in the eighteenth century.
Over time, a softer version of rational choice theory that doesn’t assume such perfect information has become more prominent, called bounded rational choice theory. The bounded part refers to humans not being perfect in our decision-making. We make cognitive mistakes and lack crucial information. Rather than optimizing, we often satisfice — a portmanteau of satisfy and suffice — choosing not what is optimal, but what is good enough.
The professional study of humanity is detached from how most people experience the world. Eighty-four percent of the world’s population identifies with a religious group.
In the real world, emotion, hunches, impulse, faith, and belief in the divine have profound effects on consequential decision-making, yet we pretend that the world is peopled with implicit probability calculators.
Theories of rational decision-making are perhaps the closest we’ve come to pretending natural laws apply to humans, too.
Our thoughts are also influenced by sensory perceptions, experiences, and the thoughts of other thinking, self-reflective beings, all moderated by culture, norms, institutions, and religions.
Our beliefs are most easily swayed when ideas are put into a story. Our brains are so attuned to narrative that we will connect the dots into a story even when the dots aren’t connected, which is called narrative bias. The economy runs on numbers, not stories, we’re told in school. But that just isn’t true. Humans make up the economy — and humans navigate the world through narratives. The story of a possible future event can cause that event to take place. There’s not a separate, objective, rational market economy that’s detached from the storytelling animal because the market is the aggregation of billions of storytelling animals.
Our subjective beliefs drive change, which makes the world even more contingent.
The Lottery of Earth
Now, we move from why to where. When we hear the phrase space-time, many of us have a vague association with Einstein and the impenetrable mysteries of physics.
Becoming an island was arguably the most consequential event in the history of Britain, but you won’t find it in most British history books. It made possible the development of an empire built on a fearsome navy. Navies require ships, and ships require timber. America, a vast continent of untouched forests, was to be the saving grace of the Royal Navy: across the Atlantic, the king claimed its tallest trees for his ships. In the winter of 1772, a royal surveyor discovered six sawmills near Weare, New Hampshire, that were processing wood bearing the telltale broad-arrow mark upon the bark. The owners were arrested, which the townspeople saw as an egregious injustice. In the early hours of April 14, 1772, a mob descended on the Pine Tree Tavern, where the king’s enforcer lay asleep. The Pine Tree Riot, as it came to be known, was an indirect trigger for revolution. Tall trees were a key, but often forgotten, factor in America’s founding. In the war that soon followed, the new American navy sailed under a flag of arboreal resistance: a single tall pine tree set against a white background.
Geography, it is sometimes said, is destiny.
From the beginning, our bodies were shaped, quite literally, by our physical environment.
Twenty million years ago, two enormous plates crashed together, creating the Tibetan Plateau. This siphoned moisture away from East Africa, drying the region out. Ape populations were separated by climate and divided into two branches: the African apes and the Asian ones. The African apes eventually became us.
Modern explanations of social change rarely include geographical or geological factors.
To more precisely understand the role of geology and geography, we need a few concepts. The first can be thought of as the lottery of earth.
When humans make decisions, however, one crucial choice can create a fixed trajectory for quite a long time. This is the concept of path dependency. Past decisions constrain future ones. Path dependency can make it harder to change course. A single human choice, or a small set of human choices, about how to interact with the physical environment in a specific historical moment can create a trajectory that is then followed by future generations.
Finally, there’s the most interesting type of geography and geology rerouting history. I call this human space-time contingency.
These days, saying that an argument relies on “geographic determinism” or “environmental determinism” is a grave insult in history and social science, a way to instantly dismiss a scholarly claim. Nonetheless, our environment is a key factor that partially determines human history, even though past thinkers have perverted geographical explanations as a stalking horse for racism.
In Guns, Germs, and Steel, Jared Diamond observes that human history was also diverted by the shape and orientation of the continents — an idea known as the continental axis theory. Climate, habitat, vegetation, soil, and wildlife are mostly dictated by latitude, not longitude.
Geography isn’t destiny, but it matters.
If we lived in a uniform world, where every location was identical to every other one, there would be little trade and little reason to migrate. Cultures would converge, killing off one of the richest gifts of the human experience.
Our lives are shaped by the decisions of humans, alive and long dead, but also by the lottery of earth.
Everyone’s a Butterfly
Let’s consider two opposite conceptions of how history works. In one vision of historical change, there’s the storybook reality: Change is ordered and structured. The convergent trajectory of events means that individuals come and go, but trends dominate. History is written by unseen social forces, and the main characters are powerless to alter the plot. On the opposite extreme, individuals reign supreme because the idiosyncratic behavior of a single person can reroute us all onto a different path. The logical extension of that viewpoint — rooted in chaos theory — is that every individual isn’t just capable of changing history; every one of us does, all the time.
These two conceptions of change are fundamentally different. So, are we just along for the ride, or does each of us determine the destination?
Chaos theory shows that small changes can produce enormous impacts, so any manipulation of the past would risk drastic change, making the thought experiment even more uncertain.
For centuries, it was broadly accepted that key individuals determine history. In the nineteenth century, the Scottish philosopher Thomas Carlyle turned this mindset into an explicit philosophy of history known as the Great Man Theory. Then, in the late nineteenth and early twentieth centuries, historians, philosophers, and economists pushed back hard against the Great Man view. History shaped the leader; the leader didn’t shape history. Similarly, Hegel, and later Marx, presented history as a predictable march toward an end goal.
In the 1920s and 1930s, the Annales school of history emerged in France, founded by a group of scholars who looked to understand social change by analyzing long-term, society-wide trends rather than specific individuals or key events. The Annales school changed what it means to “do history.”
Political scientists and economists tend to treat individuals as interchangeable, too, dismissing explanations that lie with specific people.
Western knowledge production systematically prioritizes general rules — even if they’re misleading or wrong — over specific and idiosyncratic understanding of individuals.
We cling to the idea that the what matters more than the who — and, by extension, that the message matters more than the messenger. But for most of history, it’s been clear that this often isn’t true.
In Greek mythology, Cassandra of Troy caught the eye of the god Apollo. Apollo gave her a divine gift: the ability to accurately see the future. But Cassandra later scorned Apollo. Unable to revoke the gift of foresight he had bestowed upon Cassandra, Apollo did the next best thing by cursing her with the punishment of disbelief. No matter how accurate her prophecies, nobody would believe her. The myth of Cassandra is one of the earliest indications that humans have long understood that if there is a fixed truth, our interpretation of it is often subjectively tied to who promotes that truth.
Signaling involves deliberate attempts to convey information using socially accepted clues.
Schemas are psychological tools we use to distill vast amounts of information into easily maintained categories. Our mental maps and schemas are not fixed, but constantly shifting.
Perhaps the genius matters less than the idea that forms a stroke of genius. But is that true? This is an important question because if even scientific ideas are at least partly contingent on which individual comes up with them, then it’s hard to dispute that just about everything is contingent and prone to flukes created by individuals.
In the twentieth century, two titans of the philosophy of science, Karl Popper and Thomas Kuhn, sparred over how modern science works.
Popper emphasized how disproving bad ideas drives change in a more objective process; Kuhn emphasized the subjective role of individuals. To Popper, scientists try to tear down bad ideas to expose the truth, jettisoning flawed theories through falsification. They continually try to disprove every proposed hypothesis, and when they do, that idea goes to the junk heap of scientific history.
By contrast, Thomas Kuhn, who wrote The Structure of Scientific Revolutions in 1962, argued that scientists, like all of us, have prejudices and biases. Individual scientists have an established set of beliefs, they believe in certain theories, and they devote their professional lives to proving those views right. When the cracks get big enough, the entire edifice of science can collapse, decades of accepted truth destroyed in a bewildering crash. Kuhn refers to these moments as revolutions in science, where previously dominant paradigms are replaced by fresh ones, and the process repeats. To Kuhn, scientists themselves matter — and they matter a lot. Individual researchers can sway which questions science asks, which hypotheses are taken seriously, and who gets funding.
Great discoveries are “in the air” when they’re made, part of a scientific trend. Evolutionary theory would’ve eventually carried the day because it’s correct — and correct ideas do tend to win out in scientific inquiry.
Of Clocks and Calendars
The Garden of Forking Paths is affected by everything, everywhere, constantly. Each path we take, moment by moment, makes some worlds possible, others impossible.
Time is life’s invisible variable. It’s impossible to imagine a world free of time because we can’t experience any moment but the present.
There is no such thing as objective time. Time exists relationally, yet another instance of reality being intertwined, not separable. Time, itself, remains a mystery.
We look at calendars to gaze into our future and see what comes next. But at a more fundamental level, our calendars are the result of a few key decisions made by small groups of people thousands of years ago, shaping the rhythms of our lives and the patterns of modern society.
In the earliest days of Rome, people followed a ten-month calendar, adding up to 304 days, with the remainder of the days of a year being lumped together into a winter period of varying length. Later reforms added two months, January and February, but the original numbering system remains. That’s why the names September, October, November, and December refer, linguistically, to the numbers seven, eight, nine, and ten.
Even our naming systems are the living ghosts of past decisions.
Why are there seven days in the week? Because five planets were visible to the naked eye (Saturn, Mars, Mercury, Jupiter, and Venus), and with the sun and the moon, that makes seven.
We synchronize our lives with rhythms produced by accidents of history.
In a constantly changing world, all else is never equal, and it’s rarely a safe assumption to make unless a given cause-and-effect relationship is stationary and stable, such as a coin flip. In messy reality, a pattern in one place won’t necessarily hold true in another.
Outcomes vary not just across space, but also across time.
“English spelling is ridiculous,” writes Arika Okrent. The Anglo-Saxons in England spoke Old English. Viking invasions injected Old Norse. In the eleventh century, the Normans effectively obliterated written English, replacing it with French. But when written English returned in the 1300s, the language was in flux. Then, the printing press was invented. Standardization became essential, and words had to be shortened for efficiency. Hadde became had, thankefull became thankful.
Some flukes have staying power. W. Brian Arthur, an economist who became one of the founding fathers of complex systems theory, demonstrated this effect with technology through the concept of increasing returns.
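A small sketch in the spirit of increasing returns (a Pólya-urn-style toy model of my own, not Arthur’s actual analysis; the function name and adopter counts are illustrative): each new adopter picks a technology with probability proportional to its current market share, so early random luck gets locked in.

```python
# A Polya-urn-style adoption process: success breeds success, and the final
# market split depends heavily on flukes among the earliest adopters.
import random

def run_market(adopters=10_000, seed=None):
    random.seed(seed)
    shares = {"A": 1, "B": 1}                 # both technologies start with one token adopter
    for _ in range(adopters):
        total = shares["A"] + shares["B"]
        pick = "A" if random.random() < shares["A"] / total else "B"
        shares[pick] += 1                     # each adoption makes the next one more likely
    return shares

for seed in range(5):
    print(seed, run_market(seed=seed))
# Re-running with different seeds yields wildly different final splits:
# which technology "wins" is contingent, then sticky.
```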
We too often imagine that we can simply ignore the “noise,” the flukes, the contingent uncertainty produced by our beliefs, where something happens, who’s involved, or when it takes place. But we can’t. Even our best experts routinely get it wrong. And that yields a disconcerting fact: we do not understand ourselves.
The Emperor’s New Equations
Much of our world is shaped by our flawed understanding of how humanity works. We allocate budgets and set tax rates based on economic forecasts that are rarely accurate beyond short periods.
“All models are wrong, but some are useful,” noted the statistician George Box.
We need to have a more accurate recognition of what we can — and can’t — understand about ourselves as we navigate a complex world swayed by the random, the arbitrary, and the accidental.
We can break down this problem into two parts, which I call the Easy Problem of Social Research and the Hard Problem of Social Research. The Easy Problem is derived from flawed methods. It can be — and should be — slain. The Hard Problem is probably unsolvable, as it stems not from human error or bad methodology, but from the fact that some forms of uncertainty tied to human behavior are absolute and unresolvable.
When researchers tried to repeat previous studies and experiments — including findings that had been widely accepted as conventional wisdom — they got different results.
Social researchers are, unfortunately, sometimes guilty of using bad research methods or even deliberately gaming the system.
Most studies conducted in political science, economics, sociology, psychology, and so on produce a quantitative metric known as a P value. When the P value is sufficiently low, researchers tend to interpret that as evidence that the finding is likely to be real, or, as it’s formally known, statistically significant. The research community has largely agreed that the threshold for publication is a P value below 0.05. When researchers tweak their data analysis to produce a P value that’s low enough for an article to be published, that’s called P-hacking.
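A hedged illustration of why P-hacking works, using a toy simulation (the subgroup counts, sample sizes, and helper function are arbitrary choices of mine, and the test is a rough normal approximation): test twenty subgroups of pure noise and the odds that at least one comparison dips below 0.05 by luck alone are already around 64 percent.

```python
# Simulate "studies" that run 20 noise-only subgroup comparisons each and
# count how often at least one comparison looks statistically significant.
import random
import statistics
from math import sqrt, erf

def two_sample_p(a, b):
    """Rough two-sided p-value via a normal approximation to the two-sample t-test."""
    se = sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

random.seed(42)
studies_with_a_hit = 0
for _ in range(1_000):                       # 1,000 simulated studies
    hit = False
    for _ in range(20):                      # 20 comparisons of pure noise each
        a = [random.gauss(0, 1) for _ in range(50)]
        b = [random.gauss(0, 1) for _ in range(50)]
        if two_sample_p(a, b) < 0.05:
            hit = True
    studies_with_a_hit += hit

print(f"studies with at least one 'significant' noise result: {studies_with_a_hit / 10:.0f}%")
```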
Unfortunately, bad research is just as influential as good research. A 2020 study found that research that failed to replicate (and is therefore likely to be bogus) is cited at the same rate as research that’s been independently verified through a repeat study.
Complexity science — and those who use the more sophisticated logic of complex adaptive systems to understand our world — sadly represents a tiny sliver of modern research production.
When a model predicts something with a low probability and it happens anyway, the modeler can claim it’s just the world being weird, not the model being incorrect. The model is unfalsifiable, impossible to disprove. And when we can’t disprove things, we get stuck in ruts — and our misconceptions about our world grow steadily worse.
If the old storybook worldview of ordered individualism, linear relationships, and big effects having big causes is so wrong, then why does it persist?
Chris Anderson and David Sally draw a distinction between weak-link and strong-link problems. In a strong-link problem, you can afford to have a weak link — so long as your strongest link is really strong. To fix a weak-link problem, you can’t focus on the best bits; you must eliminate the weakest links.
As the psychologist Adam Mastroianni points out, science is a strong-link problem. It’s the best discoveries that change society, and it doesn’t matter much if a bunch of bogus sludge clogs up low-level academic journals. In addition to being a strong-link problem, science is a realm of survival of the fittest. Science is therefore an engine of progress because it combines a strong-link problem with evolutionary pressures, which usually makes the strong links stronger over time.
There’s one more reason why our modern sages underestimate the importance of small, contingent tweaks as drivers of change. In the last several decades, social research has undergone a quantitative revolution. Our understanding of our world has become mathematized.
Sometimes, the equations that govern systems are so complex, so mind-bogglingly intricate, that trying to represent the underlying dynamics with mathematical precision is a fool’s errand. That hasn’t stopped us from trying — and failing — to represent complex systems with simple, short equations. That’s part of the reason why we’re so often wrong. We often use pared-down linear equations to describe maddeningly complex nonlinear systems that can radically pivot on the tiniest detail.
In 2005, the comedian Stephen Colbert coined a term that became widely used in American politics: truthiness. If a claim felt true, it was true, no matter the facts. Several years later, the economist Paul Romer riffed on Colbert’s phrase to describe what he saw as a major flaw in economics research: mathiness. Romer argued that modern economics was using math to obscure rather than illuminate. Modern attempts to understand ourselves too often end up producing equations that are nonsensical, the mathiness that Romer warned about.
We so badly crave clear evidence that one X causes one Y, and so long as we continue our quest for the Holy Grail of Causality within our storybook reality, we convince ourselves we might just find it.
If we want to get better at understanding ourselves, we need to make (bad) predictions, so that we can learn from those failures and develop new tools to iteratively make better predictions.
Our lives and the future of society are really hard to predict.
Could It Be Otherwise?
“If you could rewind your life to the very beginning and then press play, would everything turn out the same?”
There are, I suspect, six main ways that most people would answer the “Would everything turn out the same?” question:
- No, everything would be different because human choices are idiosyncratic. (Let’s call this the “I could have done otherwise” answer.)
- No, everything would be different because God (or gods) sometimes intervenes to change things. (The “divine intervention” answer.)
- No, the world would be at least a bit different because quantum mechanics proves that some things — at least at the smallest levels of atomic and subatomic particles — are truly random. (Let’s call this the “quantum flukes” answer.)
- Yes, everything would be the same because a supernatural being (God or gods) directs everything — and the universe unfolds according to that fixed divine script. (The “God decides everything” answer.)
- Yes, everything would be basically the same because even though there would be small changes in the replayed life, the small stuff gets washed out and doesn’t matter much. (The “everything happens for a reason” answer.)
- Yes, everything would be identical because the world follows the natural laws of physics, and everything that happens is caused by what happened previously, in an unbroken chain of causes and effects. (The “deterministic universe” answer.)
Those who say that replaying the tape of our life from the beginning would produce an identical result are determinists. Those who say a replay could turn out differently are indeterminists.
The state of the universe at every snapshot of time is determined by antecedent causes, or, in plain language, by what came before.
We understand our world better when we do something that no other species can do so effectively — explore that profound question “What if?”
Determinism doesn’t mean that we can predict the future. Chaos theory shows that seemingly insignificant tweaks to the initial conditions in a deterministic system can produce wildly different results over time. Determinism combined with chaos theory says that we can’t change the script, but if we could, then even one microscopic change to the plot or the characters — even a butterfly flapping its wings as it flits across the stage — could alter everything that follows in the rest of the play.
Indeterminism, by contrast, suggests that the script can change.
Newton’s laws don’t explain everything. In the last century, three major challenges to Newtonian physics have been discovered. His laws don’t apply well to the very small (which requires quantum physics), the very fast (which requires special relativity), or the very large (which requires general relativity).
The Copenhagen interpretation implies that at the tiniest levels of matter, some aspects of our world are completely random, governed not by determinism, but by probabilities. This interpretation gave rise to a scientific paradigm that concluded the world is indeterministic, not because we can change things, but because things change randomly by their very nature. We might call this camp the quantum indeterminists.
Nobody really knows what’s going on! However, what’s broadly agreed on within much of the scientific community is that one of these two propositions is correct:
- Determinism is true.
- The world is indeterministic, but only due to quantum weirdness.
Our feeling of possessing libertarian free will is central to the experience of being human. It leads to a common argument: we feel as if we have free will, therefore we must. This is terrible logic. Perceptions do not make reality.
If we are to rescue libertarian free will from the jaws of physics, then we must propose scientific heresy: that human brain matter has a unique magical property, replicated nowhere else in the known universe. If libertarian free will does exist, it would violate everything we know about the way the universe works, as it would require us to be, in the words of the philosopher Daniel C. Dennett, “spectral puppeteers” who are able to control our brains from the outside.
Smart people don’t choose to be smart and less intelligent people don’t choose to be less intelligent.
Calling a theory “deterministic” is one of the harshest social science attacks, a shorthand that aims to discredit ideas as both absurd and morally repugnant. The “ghost in the machine” continues to haunt how we understand ourselves and our world.
To me, determinism is awe-inspiring. Our present moment is woven together with infinite threads that stretch back billions of years.
Our best and worst moments are inextricably linked. The happiest experiences of your life are part of the same thread in which you suffered the most crushing despair. One couldn’t follow without the other.
If someone else existed in your place, the world would be different. Because you exist, you will have an impact on the world, some good, some bad.
We all are the living manifestation of 13.7 billion years of flukes. Perhaps we can finally accept that we will never be able to fully understand our own existence.
Why Everything We Do Matters
When we try to distill every waking effort into a struggle for ratcheting optimization, it’s the essence of being human that’s dissolved away, leaving only a residue of clockwork, atomized inner barrenness.
On every measurable metric, we’re better off than ever before, but many of us feel worse off for it. This is a despair of our own making, according to the German sociologist Hartmut Rosa, not because of technology, but because of a futile yearning to make the world controllable. The categorical imperative of late modernity, Rosa writes, is straightforward but bleak: “Always act in such a way that your share of the world is increased.” Relationships become a means to an end, reducing a magically networked existence into mere “networking.”
It can be comforting to accept what we truly are: a cosmic fluke, networked atoms infused with consciousness, drifting on a sea of uncertainty.
But it’s not just that worship within the Church of Control makes us miserable. Paradoxically, misguided attempts to assert control make the world less controllable — and in dangerous ways.
Complexity science, as we have seen, establishes the risks of living on the “edge of chaos,” in which a system teeters on the precipice of a tipping point, the moment when Black Swans become most likely to blindside us. Yet, what do we do? We race toward the edge, hoping to slay every last bit of slack within our social systems, prostrating ourselves before the God of Efficiency.
As a species, we delude ourselves when we imagine that we would prefer a certain world that we could fully control. In truth, we crave a healthy balance between order and disorder, fulfilled by our world of contingent convergence.
Life would be boring and monotonous if everything were structured and ordered, but pure disorder would destroy us.
Embracing the beauty of uncertainty means a bit less emphasis on how your individual action in the present can produce an optimized future, and a bit more emphasis on celebrating the present that has been created for you, the symphony of our lives that is being played by an orchestra of trillions of individual beings hitting their respective notes across billions of years, culminating in this utterly unique, contingent moment.
The good society is one in which we accept the uncertain and embrace the unknown.
We’ve engineered a society that is, in too many ways, the opposite of that good society, one in which day-to-day life is overoptimized, overscheduled, and overplanned, while society itself is more prone to unwanted surprises, to catastrophic upheaval and destructive disorder.
Humans, like all creatures, face a trade-off between two strategies for interacting with the world: explore versus exploit. To explore is, by definition, to wander, to not know where you’re going. To exploit is to race toward a known destination.
These ideas are related to what’s known as a local maximum versus a global maximum. The lesson is that exploiting too soon — before you’ve explored far enough — means you get stuck always climbing the local maximum, unaware of better possibilities.
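A minimal sketch of the local-versus-global-maximum trap (the landscape, step sizes, and function names are my own invention, not from the book): a greedy climber that exploits immediately gets stuck on the nearer, smaller peak, while a climber that explores several random starting points usually finds the taller one.

```python
# Hill climbing on a two-peaked landscape: exploit-only climbing stalls at the
# local maximum; exploring multiple starts reveals the global maximum.
import math
import random

def landscape(x):
    # Two peaks: a modest one near x = 2 and a taller one near x = 8.
    return math.exp(-(x - 2) ** 2) + 2 * math.exp(-(x - 8) ** 2 / 2)

def hill_climb(x, step=0.05, iters=2_000):
    """Greedily accept only uphill moves from the starting point x."""
    for _ in range(iters):
        candidate = x + random.choice([-step, step])
        if landscape(candidate) > landscape(x):
            x = candidate
    return x

random.seed(0)
greedy = hill_climb(0.0)                                   # exploit from the start
explored = max((hill_climb(random.uniform(0, 10)) for _ in range(20)),
               key=landscape)                              # explore 20 starts first

print(f"greedy peak height:   {landscape(greedy):.2f}")    # stuck near the local maximum (~1)
print(f"explored peak height: {landscape(explored):.2f}")  # finds the global maximum (~2)
```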
Through random tinkering, evolution has forged ingenious solutions to complex problems, the likes of which are far better than what we, as self-reflective, intentional, and intelligent beings, could ever come up with. In biology, this is known as Orgel’s second rule: evolution is cleverer than you are.
Two models of random motion are Lévy walks and Brownian motion. A Lévy walk is characterized by lots of little movements in various directions, followed, every so often, by a big movement in one direction. Brownian motion, by contrast, is just a series of small movements within the same area.
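To make the contrast concrete, here is a hedged sketch (the step-length distributions, parameters, and helper function are illustrative choices of mine): Brownian-style steps are uniformly small, while Lévy-style steps are mostly small but occasionally enormous, so the Lévy walker typically covers far more ground.

```python
# Compare how far a walker gets with small, Gaussian-sized steps (Brownian-like)
# versus heavy-tailed, Pareto-sized steps (Levy-like), each in a random direction.
import math
import random

random.seed(3)

def walk(step_lengths):
    """Sum 2D steps of the given lengths in random directions; return distance from origin."""
    x = y = 0.0
    for length in step_lengths:
        angle = random.uniform(0, 2 * math.pi)
        x += length * math.cos(angle)
        y += length * math.sin(angle)
    return math.hypot(x, y)

n = 1_000
brownian_steps = [abs(random.gauss(0, 1)) for _ in range(n)]   # all small steps
levy_steps = [random.paretovariate(1.5) for _ in range(n)]     # mostly small, rarely huge

print(f"Brownian-style walker ended {walk(brownian_steps):.1f} units from home")
print(f"Levy-style walker ended     {walk(levy_steps):.1f} units from home")
```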
Sometimes, life’s best flukes come not from ever-more-precise analytics of a seemingly stable past, but from exploring a fresh, uncertain future — sometimes even aimlessly.
In one recent study, when participants were left alone for between six and eleven minutes in a room that was empty except for a device that could give them a painful electric shock, many opted to shock themselves rather than to sit alone with their thoughts. One man shocked himself 190 times in less than ten minutes.
Leisure-time invention is the phenomenon in which intellectual lightning strikes only when our minds turn their gaze away from a problem.
By chasing control, we trap ourselves. By letting go just a little, we may liberate not only ourselves, but our best ideas.