
Mustafa Suleyman, Michael Bhaskar: The Coming Wave; AI, Power and Our Future

CONTAINMENT IS NOT POSSIBLE

ALMOST EVERY CULTURE HAS A FLOOD MYTH

The rise and spread of technologies has also taken the form of world-changing waves.

This proliferation of technology in waves is the story of Homo technologicus — of the technological animal.

Almost every object in your line of sight has, in all likelihood, been created or altered by human intelligence. Language — the foundation of our social interactions, of our cultures, of our political organizations, and perhaps of what it means to be human — is another product, and driver, of our intelligence.

Only one other force is so omnipresent in this picture: biological life itself.

It’s no exaggeration to say the entirety of the human world depends on either living systems or our intelligence.

This wave is unleashing the power to engineer these two universal foundations: a wave of nothing less than intelligence and life.

The coming wave is defined by two core technologies: artificial intelligence (AI) and synthetic biology.

I believe this coming wave of technology is bringing human history to a turning point. If containing it is impossible, the consequences for our species are dramatic, potentially dire. Equally, without its fruits we are exposed and precarious.

AI has been climbing the ladder of cognitive abilities for decades, and it now looks set to reach human-level performance across a very wide range of tasks within the next three years.

Both pursuing and not pursuing new technologies is, from here, fraught with risk. The chances of muddling through a “narrow path” and avoiding one or the other outcome — techno-authoritarian dystopia on the one hand, openness-induced catastrophe on the other — grow smaller over time as the technology becomes cheaper, more powerful, and more pervasive and the risks accumulate.

This is the core dilemma: that, sooner or later, a powerful generation of technology leads humanity toward either catastrophic or dystopian outcomes. I believe this is the great meta-problem of the twenty-first century.

The current discourse around technology ethics and safety is inadequate. Despite the many books, debates, blog posts, and tweetstorms about technology, you rarely hear anything about containing it.

Over the next few decades, I argued, AI systems would replace “intellectual manual labor” in much the same way, and certainly long before robots replace physical labor.

I attended a seminar on technology risks at a well-known university.

The presenter showed how the price of DNA synthesizers, which can print bespoke strands of DNA, was falling rapidly. Costing a few tens of thousands of dollars, they are small enough to sit on a bench in your garage and let people synthesize — that is, manufacture — DNA. And all this is now possible for anyone with graduate-level training in biology or an enthusiasm for self-directed learning online.

They finished with an alarming thought: a single person today likely “has the capacity to kill a billion people.” All it takes is motivation.

The collective response in the seminar was more than just dismissive. People simply refused to accept the presenter’s vision.

This widespread emotional reaction I was observing is something I have come to call the pessimism-aversion trap: the misguided analysis that arises when you are overwhelmed by a fear of confronting potentially dark realities, and the resulting tendency to look the other way.

Pessimism aversion is an emotional response, an ingrained gut refusal to accept the possibility of seriously destabilizing outcomes.

Many of those whom I accuse of being stuck in the pessimism-aversion trap fully embrace the growing critiques of technology. But they nod along without actually taking any action. We’ll manage, we always do, they say.

The various technologies I’m speaking of share four key features that explain why this isn’t business as usual: they are inherently general and therefore omni-use, they hyper-evolve, they have asymmetric impacts, and, in some respects, they are increasingly autonomous.

Their creation is driven by powerful incentives: geopolitical competition, massive financial rewards, and an open, distributed culture of research.

HOMO TECHNOLOGICUS

ENDLESS PROLIFERATION

In the early nineteenth century, the railway revolutionized transport, delivering its biggest innovation in thousands of years.

Innovators tried various approaches. As early as the eighteenth century, a French inventor called Nicolas-Joseph Cugnot built a kind of steam-powered car.

In 1863, the Belgian inventor Jean Joseph Étienne Lenoir powered the first vehicle with an internal combustion engine, driving it seven miles out of Paris.

A German engineer called Nicolaus August Otto spent years working on a gas engine, much smaller than a steam engine. By 1876, in a Deutz AG factory in Cologne, Otto produced the first functional internal combustion engine, the “four-stroke” model. Yet it was another German engineer, Carl Benz, who pipped them to the post. Using his version of a four-stroke internal combustion engine, in 1886 he patented the Motorwagen, now seen as the world’s first proper car.

By 1893, Benz had sold a measly 69 vehicles; by 1900, just 1,709. The turning point was Henry Ford’s 1908 Model T. His simple but effective vehicle was built using a revolutionary approach: the moving assembly line. Today some 2 billion combustion engines are in everything from lawnmowers to container ships.

The previously challenging notion of moving from place to place in search of prosperity or fun became a regular feature of human life.

Technology has a clear, inevitable trajectory: mass diffusion in great roiling waves.

The simple hand ax forms part of history’s first wave of technology.

Another wave was equally pivotal: fire. Wielded by our ancestor Homo erectus, it was a source of light, warmth, and safety from predators. Stonework and fire were proto-general-purpose technologies, meaning they in turn enabled new inventions, goods, and organizational behaviors.

Language, agriculture, writing — each was a general-purpose technology at the center of an early wave.

One major study pegged the number of general-purpose technologies that have emerged over the entire span of human history at just twenty-four.

The Agricultural Revolution (9000–7500 BCE), one of history’s most significant waves, marked the arrival of two massive general-purpose technologies that gradually replaced the nomadic, hunter-gatherer way of life: the domestication of plants and animals.

Beginning around the 1770s in Europe, the first wave of the Industrial Revolution combined steam power, mechanized looms, the factory system, and canals. In the 1840s came the age of railways, telegraphs, and steamships, and a bit later steel and machine tools; together they formed the First Industrial Revolution.

The ten thousand years up to 1000 BCE saw seven general-purpose technologies emerge. The two hundred years between 1700 and 1900 marked the arrival of six, from steam engines to electricity. And in the last hundred years alone there were seven.

General-purpose technologies become waves when they diffuse widely.

The Nobel Prize–winning economist William Nordhaus calculated that the same amount of labor that once produced fifty-four minutes of quality light in the eighteenth century now produces more than fifty years of light. As a result, the average person in the twenty-first century has access to approximately 438,000 times more “lumen-hours” per year than our eighteenth-century cousins.

Proliferation is catalyzed by two forces: demand and the resulting cost decreases, each of which drives technology to become even better and cheaper.

Of course, behind technological breakthroughs are people. They labor at improving technology in workshops, labs, and garages, motivated by money, fame, and often knowledge itself.

By 1945, an important precursor to computers called the ENIAC, an eight-foot-tall behemoth of eighteen thousand vacuum tubes capable of three hundred operations a second, was developed at the University of Pennsylvania. Bell Labs initiated another significant breakthrough in 1947: the transistor, a semiconductor creating “logic gates” to perform calculations. This crude device, comprising a paper clip, a scrap of gold foil, and a crystal of germanium that could switch electronic signals, laid the basis for the digital age.

The rise in computational power underpinned a flowering of devices, applications, and users.

It took smartphones a few years to go from niche product to utterly essential item for two-thirds of the planet.

With this wave came email, social media, online videos — each a fundamentally new experience enabled by the transistor and another general-purpose technology, the internet.

THE CONTAINMENT PROBLEM

Technology’s unavoidable challenge is that its makers quickly lose control over the path their inventions take once introduced to the world.

Understanding technology is, in part, about trying to understand its unintended consequences, to predict not just positive spillovers but revenge effects.

Technology’s problem here is a containment problem. If this loss of control cannot be eliminated, it might at least be curtailed. Containment is the overarching ability to control, limit, and, if need be, close down technologies at any stage of their development or deployment.

In most cases, containment is about meaningful control, the capability to stop a use case, change a research direction, or deny access to harmful actors. It means preserving the ability to steer waves to ensure their impact reflects our values, helps us flourish as a species, and does not introduce significant harms that outweigh their benefits.

Containment encompasses regulation, better technical safety, new governance and ownership models, and new modes of accountability and transparency, all as necessary (but not sufficient) precursors to safer technology.

Think of containment, then, as a set of interlinked and mutually reinforcing technical, cultural, legal, and political mechanisms for maintaining societal control of technology during a time of exponential change.

People throughout history have attempted to resist new technologies because they felt threatened and worried their livelihoods and way of life would be destroyed. Fighting, as they saw it, for the future of their families, they would, if necessary, physically destroy what was coming.

Where there is demand, technology always breaks out, finds traction, builds users.

Technology’s nature is to spread, no matter the barriers.

Inventions cannot be uninvented or blocked indefinitely, knowledge unlearned or stopped from spreading.

Technologies are ideas, and ideas cannot be eliminated.

The seeming inevitability of waves comes not from the absence of resistance but from demand overwhelming it.

That nuclear technology remained contained was no accident; it was a conscious nonproliferation policy of the nuclear powers, helped by the fact that nuclear weapons are incredibly complex and expensive to produce.

Glimmers of containment are rare and often flawed. They include moratoriums on biological and chemical weapons and the Montreal Protocol of 1987, which phased out substances damaging the atmosphere’s ozone layer.

Perhaps the most ambitious containment agenda is decarbonization, measures like the Paris Agreement.

In general these containment efforts are limited to highly specific technologies, some in narrow jurisdictions, all with only a shaky purchase.

This is not containment proper. None of these efforts represent the full-scale arresting of a wave of general-purpose technology, although, as we will see later, they do offer important pointers for the future.

For most of history, the challenge of technology lay in creating and unleashing its power. That has now flipped: the challenge of technology today is about containing its unleashed power, ensuring it continues to serve us and our planet.

THE NEXT WAVE

THE TECHNOLOGY OF INTELLIGENCE

It’s often said that there are more potential configurations of a Go board than there are atoms in the known universe; a million trillion trillion trillion trillion more configurations, in fact.

When IBM’s Deep Blue beat Garry Kasparov at chess in 1997, it used the so-called brute-force technique, where an algorithm aims to systematically crunch through as many possible moves as it can. That approach is hopeless in a game with as many branching outcomes as Go.

AlphaGo initially learned by watching 150,000 games played by human experts. Once we were satisfied with its initial performance, the key next step was creating lots of copies of AlphaGo and getting it to play against itself over and over. This meant the algorithm was able to simulate millions of new games, trying out combinations of moves that had never been played before, and therefore efficiently explore a huge range of possibilities, learning new strategies in the process.

Our AI had uncovered ideas that hadn’t occurred to the most brilliant players in thousands of years. In just a few months, we could train algorithms to discover new knowledge and find new, seemingly superhuman insights. How could we take that further? Would this method work for real-world problems?

At root, the primary driver of all of these new technologies is material — the ever-growing manipulation of their atomic elements.

Then, starting in the mid-twentieth century, technology began to operate at a higher level of abstraction. At the heart of this shift was the realization that information is a core property of the universe. It can be encoded in a binary format and is, in the form of DNA, at the core of how life operates. Strings of ones and zeros, or the base pairs of DNA — these are not just mathematical curiosities. They are foundational and powerful. Understand and control these streams of information and you might steadily open a new world of possibility.

The coming wave of technology is built primarily on two general-purpose technologies capable of operating at the grandest and most granular levels alike: artificial intelligence and synthetic biology.

AI is enabling us to replicate speech and language, vision and reasoning. Foundational breakthroughs in synthetic biology have enabled us to sequence, modify, and now print DNA.

In the words of the economist W. Brian Arthur, “the overall collection of technologies bootstraps itself upward from the few to the many and from the simple to the complex.”

The coming wave is a supercluster, an evolutionary burst like the Cambrian explosion, the most intense eruption of new species in the earth’s history, with many thousands of potential new applications.

Another trait of the new wave is speed.

The legendary computer science professor Marvin Minsky famously hired a summer student to work on an early vision system in 1966, thinking that significant milestones were just within reach. That was wildly optimistic. The breakthrough moment took nearly half a century, finally arriving in 2012 in the form of a system called AlexNet.

In the case of AlexNet, the training data consisted of images. Each red, green, or blue pixel is given a value, and the resulting array of numbers is fed into the network as an input. Within the network, “neurons” link to other neurons by a series of weighted connections, each of which roughly corresponds to the strength of the relationship between inputs. A technique called backpropagation then adjusts the weights to improve the neural network; when an error is spotted, adjustments propagate back through the network to help correct it in the future. AlexNet was built by the legendary researcher Geoffrey Hinton and two of his students, Alex Krizhevsky and Ilya Sutskever, at the University of Toronto. Following the AlexNet breakthrough, AI suddenly became a major priority in academia, government, and corporate life.
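The training mechanism described here can be made concrete with a toy sketch. The following is an illustration of the principle only, not AlexNet itself (which was a far larger convolutional network): a tiny two-layer network whose weights are adjusted by backpropagation.

```python
import numpy as np

# Inputs flow through weighted connections between "neurons"; when the
# output is wrong, the error signal propagates back through the network
# and nudges the weights to reduce future mistakes.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1 = rng.normal(size=(2, 8))  # input -> hidden weights
W2 = rng.normal(size=(8, 1))  # hidden -> output weights

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1)
    return h, sigmoid(h @ W2)

_, out = forward(X)
initial_loss = float(np.mean((out - y) ** 2))

for _ in range(5000):
    h, out = forward(X)
    err = out - y                              # how wrong is each output?
    grad_out = err * out * (1 - out)           # error signal at the output layer
    grad_h = (grad_out @ W2.T) * h * (1 - h)   # ...propagated back to the hidden layer
    W2 -= 0.5 * h.T @ grad_out                 # adjust weights downhill
    W1 -= 0.5 * X.T @ grad_h

_, out = forward(X)
final_loss = float(np.mean((out - y) ** 2))
```

After training, the loss has fallen far below its starting value: the network has learned the task purely by propagating its errors backward.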

Industry research output and patents soared. In 1987 just ninety academic papers were published at Neural Information Processing Systems, the gathering that became the field’s leading conference. By the 2020s there were almost two thousand.

AI is becoming much easier to access and use: tools and infrastructure like Meta’s PyTorch or OpenAI’s application programming interfaces (APIs) help put state-of-the-art machine learning capabilities in the hands of nonspecialists. 5G and ubiquitous connectivity create a massive, always-on user base.

AI is already here. But it’s far from done.

ChatGPT is, in simple terms, a chatbot. But it is so much more powerful and polymathic than anything that had previously been made public. Ask it a question and it replies instantaneously in fluent prose.

LLMs take advantage of the fact that language data comes in a sequential order. Each unit of information is in some way related to data earlier in a series. The model reads very large numbers of sentences, learns an abstract representation of the information contained within them, and then, based on this, generates a prediction about what should come next. The challenge lies in designing an algorithm that “knows where to look” for signals in a given sentence. What are the key words, the most salient elements of a sentence, and how do they relate to one another? In AI this notion is commonly referred to as “attention.”
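The “attention” idea can be sketched in a few lines. This is a toy illustration with made-up values, assuming the standard scaled dot-product formulation, not code from any production model:

```python
import numpy as np

# Each word's "query" is compared against every word's "key"; the
# resulting weights decide how much of each word's "value" flows into
# the output -- i.e., where the model "looks" in the sentence.
def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ V, weights

# Three "words", each represented by a 4-dimensional vector (toy values).
rng = np.random.default_rng(1)
x = rng.normal(size=(3, 4))

# Self-attention: the sequence attends to itself.
out, weights = attention(x, x, x)
```

Each row of `weights` is a probability distribution over the three words, showing how strongly each position attends to every other.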

These systems are called transformers. Since Google researchers published the first paper on them in 2017, the pace of progress has been staggering. The number of parameters is a core measure of an AI system’s scale and complexity. But it wasn’t until the summer of 2020, when OpenAI released GPT-3, that people started to truly grasp the magnitude of what was happening. With a whopping 175 billion parameters it was, at the time, the largest neural network ever constructed.

If DQN and AlphaGo were the early signs of something lapping at the shore, ChatGPT and LLMs are the first signs of the wave beginning to crash around us.

If we assume that the average person can read about two hundred words per minute, in an eighty-year lifetime that would be about eight billion words, assuming they did absolutely nothing else twenty-four hours per day. More realistically, the average American reads a book for about fifteen minutes per day, which over the year amounts to reading about a million words.
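A quick back-of-the-envelope check confirms both figures:

```python
# Reading estimates from the text, made explicit.
words_per_minute = 200

# Reading nonstop, 24 hours a day, for 80 years:
lifetime_words = words_per_minute * 60 * 24 * 365 * 80

# Fifteen minutes a day, every day, for a year:
yearly_words = words_per_minute * 15 * 365
```

The first works out to roughly 8.4 billion words, the second to just under 1.1 million, matching the “about eight billion” and “about a million” in the text.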

Not long after the arrival of LLMs, researchers were working at scales of data and computation that would have seemed astounding a few years earlier. First hundreds of millions, then billions of parameters became normal. Now the talk is of “brain-scale” models with many trillions of parameters.

In less than ten years the amount of compute used to train the best AI models has increased by nine orders of magnitude — going from two petaFLOPs to ten billion petaFLOPs. To get a sense of one petaFLOP, imagine a billion people each holding a million calculators, doing a complex multiplication, and hitting “equals” at the same time.
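The calculator image is exact arithmetic, easily verified:

```python
# One petaFLOP is 10^15 floating-point operations per second. A billion
# people each holding a million calculators and hitting "equals" at the
# same moment performs a billion times a million simultaneous operations.
people = 1_000_000_000
calculators_each = 1_000_000
simultaneous_ops = people * calculators_each

one_petaflop = 10 ** 15
```

A billion times a million is indeed 10^15: one petaFLOP’s worth of operations in a single instant.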

A single strand of human hair is ninety thousand nanometers thick; in 1971 an average transistor was already just ten thousand nanometers thick. Today the most advanced chips are manufactured at three nanometers. Transistors are getting so small they are hitting physical limits; at this size electrons start to interfere with one another, messing up the process of computation.

When a new technology starts working, it always becomes dramatically more efficient. AI is no different. Google’s Switch Transformer, for example, has 1.6 trillion parameters. But it uses an efficient technique in which only a fraction of the model is active at any one time, so it trains and runs like a much smaller model. At Inflection AI we can reach GPT-3-level language model performance with a system just one twenty-fifth the size.

Progress is accelerating so much that benchmarks get eclipsed before new ones are even made.

So, where does AI go next as the wave fully breaks? Today we have narrow or weak AI: limited and specific versions.

Existing AI systems still operate in relatively narrow lanes. What is yet to come is a truly general or strong AI capable of human-level performance across a wide range of complex tasks — able to seamlessly shift among them.

Over the last decade, intellectual and political elites in tech circles became absorbed by the idea that a recursively self-improving AI would lead to an “intelligence explosion” known as the Singularity.

For years people framed AGI as likely to arrive at the flick of a switch, as if it were binary: either you have it or you don’t, with a single, identifiable threshold crossed by a given system. I’ve always thought this characterization is wrong.

What we would really like to know is, can I give an AI an ambiguous, open-ended, complex goal that requires interpretation, judgment, creativity, decision-making, and acting across multiple domains, over an extended time period, and then see the AI accomplish that goal?

Rather than get too distracted by questions of consciousness, then, we should refocus the entire debate around near-term capabilities and how they will evolve in the coming years.

I think of this as “artificial capable intelligence” (ACI), the point at which AI can achieve complex goals and tasks with minimal oversight. AI and AGI are both parts of the everyday discussion, but we need a concept encapsulating a middle layer in which the Modern Turing Test is achieved but before systems display runaway “superintelligence.” ACI is shorthand for this point.

The first stage of AI was about classification and prediction — it was capable, but only within clearly defined limits and at preset tasks.

ACI represents the next stage of AI’s evolution. A system that not only could recognize and generate novel images, audio, and language appropriate to a given context, but also would be interactive — operating in real time, with real users.

It won’t be long before AI can transfer what it “knows” from one domain to another, seamlessly, as humans do.

THE TECHNOLOGY OF LIFE

Living systems self-assemble and self-heal; they’re energy-harnessing architectures that can replicate, survive, and flourish in a vast range of environments, all at a breathtaking level of sophistication, atomic precision, and information processing.

The coming decades will be defined by a convergence of biology and engineering. Like AI, synthetic biology is on a sharp trajectory of falling costs and rising capabilities.

Working on bacteria in 1973, Stanley N. Cohen and Herbert W. Boyer found ways of transplanting genetic material from one organism into another, showing how they could successfully introduce DNA from a frog into a bacterium. The age of genetic engineering had arrived.

Genetic engineering has gotten much cheaper and much easier.

One catalyst was the Human Genome Project. This was a thirteen-year, multibillion-dollar endeavor that gathered together thousands of scientists from across the world, in private and public institutions, with a single goal: unlocking the three billion letters of genetic information making up the human genome.

Less well known is what The Economist calls the Carlson curve: the epic collapse in costs for sequencing DNA. Thanks to ever-improving techniques, the cost of human genome sequencing fell from $1 billion in 2003 to well under $1,000 by 2022.

CRISPR edits DNA sequences with the help of Cas9, an enzyme acting as a pair of finely tuned DNA scissors, cutting parts of a DNA strand for precise genetic editing and modification.

Genetic engineering has embraced the do-it-yourself ethos that once defined digital start-ups and led to such an explosion of creativity and potential in the early days of the internet.

CRISPR is only the start. Gene synthesis is the manufacture of genetic sequences, printing strands of DNA. If sequencing is reading, synthesizing is writing. And writing doesn’t just involve reproducing known strands of DNA; it also enables scientists to write new strands, to engineer life itself.

Synthetic biology’s true promise, then, is that it will “enable people to more directly and freely make whatever they need wherever they are.”

In 2010 a team led by Craig Venter took a near copy of the genome of the bacterium Mycoplasma mycoides and transplanted it into a new cell that then replicated.

The field of systems biology aims to understand the “larger picture” of a cell, tissue, or organism by using bioinformatics and computational biology to see how the organism works holistically; such efforts could be the foundation for a new era of personalized medicine.

Altos Labs, which has raised $3 billion, more start-up funding than any previous biotech venture, is one company seeking to find effective anti-aging technologies.

Already the first children with edited genomes have been born: in China, a rogue professor embarked on a series of live experiments with young couples, leading in 2018 to the birth of twins known as Lulu and Nana. His work shocked the scientific community, breaching all ethical norms.

Scientists like the Nobel laureate Frances Arnold create enzymes that catalyze novel chemical reactions, including ways to bind silicon and carbon, usually a tricky, energy-intensive process, with wide-ranging uses in areas like electronics.

The vast petrochemical industry could see a challenge from young start-ups like Solugen, whose Bioforge is an attempt to build a carbon-negative factory.

Another company, LanzaTech, harnesses genetically modified bacteria to convert waste CO2 from steel mill production into widely used industrial chemicals.

Next-generation DNA printers will produce DNA with an increasing degree of precision. DNA is itself the most efficient data storage mechanism we know of — capable of storing data at millions of times the density of current computational techniques with near-perfect fidelity and stability. Theoretically, the entirety of the world’s data might be stored in just one kilogram of DNA.

Proteins are the building blocks of life. Your muscles and blood, hormones and hair, indeed, 75 percent of your dry body weight: all proteins.

Understand proteins, and you’ve taken a giant leap forward in understanding — and mastering — biology.

Simply knowing the DNA sequence isn’t enough to know how a protein works. Instead, you need to understand how it folds.

In 1993, researchers set up a biennial competition, the Critical Assessment of Structure Prediction (CASP), to see who could crack the protein folding problem.

Then, at CASP13 in 2018, held at a palm-fringed resort in Cancún, a rank outsider with zero track record arrived at the competition and beat ninety-eight established teams. The winning team was DeepMind’s. Called AlphaFold, the project started during a weeklong experimental hackathon in my group at the company back in 2016.

Our team used deep generative neural networks to predict how the proteins might fold based on their DNA, training on a set of known proteins and extrapolating from there.

AlphaFold was so good that CASP was, like ImageNet, retired.

Whereas once it might have taken researchers weeks or months to determine a protein’s shape and function, that process can now begin in a matter of seconds.

The bio-revolution is coevolving with advances in AI, and indeed many of the phenomena discussed in this chapter will rely on AI for their realization. Think, then, of two waves crashing together, not a wave but a superwave.

Welcome to the age of biomachines and biocomputers, where strands of DNA perform calculations and artificial cells are put to work. Where machines come alive. Welcome to the age of synthetic life.

THE WIDER WAVE

Technologies don’t develop or operate in air locks, removed from one another.

Where you find a general-purpose technology, you also find other technologies developing in constant dialogue, spurred on by it.

Bio and AI are at the center, but around them lies a penumbra of other transformative technologies.

In 1837, John Deere was a blacksmith working in Grand Detour, Illinois. Then one day Deere saw a broken steel saw at a mill. Steel being scarce, he took his find home and fashioned the blade into a plow. The Midwest duly became the breadbasket of the world; John Deere quickly became synonymous with agriculture; and a techno-geographic revolution was instigated. The John Deere company still makes agricultural technology today. Increasingly, though, it builds robots that can plant, tend, and harvest crops with levels of precision and granularity that would be impossible for humans.

Google’s research division is building robots that could, like the 1950s dream, do household chores and basic jobs from stacking dishes to tidying chairs in meeting rooms.

Another growing area is the ability of robots to swarm, greatly amplifying the potential capabilities of any individual robot into a hive mind. Robots can operate with precision in a far greater range of environments for far longer periods than humans. AIs are products of bits and code, existing within simulations and servers. Robots are their bridge, their interface with the real world.

In 2019, Google announced that it had reached “quantum supremacy.” Researchers had built a quantum computer, one using the peculiar properties of the subatomic world. Chilled to a temperature colder than the coldest parts of outer space, Google’s machine used an understanding of quantum mechanics to complete a calculation in seconds that would, it said, have taken a conventional computer ten thousand years.

Arguably, quantum computing’s most significant near-term promise is in modeling chemical reactions and the interaction of molecules in previously impossible detail. This could let us understand the human brain or materials science with extraordinary granularity.

Like AI and biotech, quantum computing helps speed up other elements of the wave.

Energy rivals intelligence and life in its fundamental importance.

If you wanted to write the crudest possible equation for our world it would be something like this: (Life + Intelligence) × Energy = Modern Civilization

In 2000, solar energy cost $4.88 per watt, but by 2019 it had fallen to just 38 cents.

Behind it all lies the dormant behemoth of clean energy, this time inspired if not directly powered by the sun: nuclear fusion.

As the elements of AI, advanced biotechnology, quantum computing, and robotics combine in new ways, prepare for breakthroughs like advanced nanotechnology, a concept that takes the ever-growing precision of technology to its logical conclusion.

The ultimate vision of nanotechnology is one where atoms become controllable building blocks, capable of automatically assembling almost anything.

Nanomachines would work at speeds far beyond anything at our scale.

At its core, the coming wave is a story of the proliferation of power. If the last wave reduced the costs of broadcasting information, this one reduces the costs of acting on it, giving rise to technologies that go from sequencing to synthesis, reading to writing, editing to creating, imitating conversations to leading them.

FOUR FEATURES OF THE COMING WAVE

Drones provide us with a glimpse of what’s in store for the future of warfare.

The coming wave is, however, characterized by a set of four intrinsic features compounding the problem of containment.

  • First among them is the primary lesson of this section: hugely asymmetric impact.
  • Second, they are developing fast, a kind of hyper-evolution, iterating, improving, and branching into new areas at incredible speed.
  • Third, they are often omni-use; that is, they can be used for many different purposes.
  • And fourth, they increasingly have a degree of autonomy beyond any previous technology.

We already live in an age of interlinked global systems. In the coming wave a single point — a given program, a genetic change — can alter everything.

Software’s hyper-evolution is spreading. The next forty years will see both the world of atoms rendered into bits at new levels of complexity and fidelity and, crucially, the world of bits rendered back into tangible atoms with a speed and ease unthinkable until recently.

This is what we mean by hyper-evolution — a fast, iterative platform for creation.

In 2020 an AI system sifted through 100 million molecules to identify the first machine-learning-derived antibiotic — called halicin.

Dual-use technologies are those with both civilian and military applications. Dual-use technologies are both helpful and potentially destructive, tools and weapons. A more appropriate term for the technologies of the coming wave is “omni-use”.

Containing something like this is always going to be much harder than containing a constrained, single-task technology, stuck in a tiny niche with few dependencies.

Over time, technology tends toward generality.

Omni-use features and asymmetric impacts are magnified in the coming wave, but to some extent they’re inherent properties of all technology. That isn’t the case for autonomy.

The new wave of autonomy heralds a world where constant intervention and oversight are increasingly unnecessary.

A paradox of the coming wave is that its technologies are largely beyond our ability to comprehend at a granular level yet still within our ability to create and use.

We won’t always be able to predict what these autonomous systems will do next; that’s the nature of autonomy.

Any discussion of containment has to acknowledge that if or when AGI-like technologies do emerge, they will present containment problems beyond anything else we’ve ever encountered.

Ultimately, in its most dramatic forms, the coming wave could mean humanity will no longer be at the top of the food chain. Homo technologicus may end up being threatened by its own creation.

UNSTOPPABLE INCENTIVES

In most discussions of technology people still get stuck on what it is, forgetting why it was created in the first place. This is not about some innate techno-determinism. This is about what it means to be human.

Thanks to a series of macro-drivers behind technologies' development and spread, the fruit will not be left on the tree. This is why the wave will break.

  • The first driver has to do with what I experienced with AlphaGo: great power competition. Technological rivalry is a geopolitical reality.
  • Second comes a global research ecosystem with its ingrained rituals.
  • Then come the immense financial gains from technology and the urgent need to tackle our global social challenges.
  • And the final driver is perhaps the most human of all: ego.

Postwar America took its technological supremacy for granted. Sputnik woke it up. AlphaGo was quickly labeled China’s Sputnik moment for AI.

Today, China has an explicit national strategy to be the world leader in AI by 2030. It's not just AI either. From cleantech to bioscience, China surges across the spectrum of fundamental technologies, investing at an epic scale. China's R&D spending was once just 12 percent of America's; by 2020, it was 90 percent. China installs as many robots as the rest of the world combined.

Quantum computing is an area of notable Chinese expertise. In 2016, China sent the world’s first “quantum satellite,” Micius, into space. Micius was only the start in China’s quest for an unhackable quantum internet.

China is already ahead of the United States in green energy, 5G, and AI and is on a trajectory to overtake it in quantum and biotech in the next few years.

Political will could disrupt or cancel the other incentives discussed in this chapter. A government could — in theory — rein in research incentives, clamp down on private business, curtail ego-driven initiatives. But it cannot wave away hard-edged competition from its geopolitical rivals.

Countries have different strengths, from bioscience and AI (like the U.K.) to robotics (Germany, Japan, and South Korea) to cybersecurity (Israel).

Through its Atmanirbhar Bharat (Self-Reliant India) program, India’s government is working to ensure the world’s most populous country achieves ownership of core technology systems competitive with the United States and China.

Raw curiosity, the quest for truth, the importance of openness, evidence-based peer review — these are core values for scientific and technological research.

Openness is science and technology’s cardinal ideology. What is known must be shared; what is discovered must be published.

We live in an age of what Audrey Kurth Cronin calls “open technological innovation.”

Some of the world’s biggest companies — Alphabet, Meta, Microsoft — regularly contribute huge amounts of IP for free.

At DeepMind we learned early that opportunities to publish were a key factor when leading researchers decided where to work.

Worldwide R&D spending is at well over $700 billion annually, hitting record highs. Amazon's R&D budget alone is $78 billion, which would be the ninth biggest in the world if it were a country's.

CRISPR gene editing technology, for example, has its roots in work done by the Spanish researcher Francisco Mojica.

Neural networks spent decades in the wilderness, trashed by luminaries like Marvin Minsky. Only a few isolated researchers like Geoffrey Hinton and Yann LeCun kept them going through a period when the word “neural” was so controversial that researchers would deliberately remove it from their papers.

In 1830, the first passenger railway opened between Liverpool and Manchester. It was a sensation, faster than anything then experienced. Growth was rapid: two hundred and fifty passengers a day had been forecast; twelve hundred a day were using it after only a month. In 1844, a young MP called William Gladstone put forward the Railway Regulation Act to supercharge investment.

THE RAILWAY BOOM OF the 1840s was “arguably the greatest bubble in history.” But in the annals of technology, it is more norm than exception.

Science has to be converted into useful and desirable products for it to truly spread far and wide. Put simply: most technology is made to earn money. And this, the potential for profit, is built on something even more long-lasting and robust: raw demand. People both want and need the fruits of technology.

Technology entered a virtuous circle of creating wealth that could be reinvested in further technological development, all of which drove up living standards.

Huge quantities of capital expenditure, R&D spending, venture capital, and private equity investment, unmatched by any other sector, or any government outside China and the United States, are the raw fuel powering the coming wave.

For most of history simply feeding yourself and your family was the dominant challenge of human life. Farming has always been a hard, uncertain business.

When Thomas Malthus argued in 1798 that a fast-growing population would quickly exhaust the carrying capacity of agriculture and lead to a collapse, he wasn't wrong; static yields would and often did follow this rule. But yields did not stay static. Corn yields per hectare in the United States have tripled in the last fifty years. In 1945, around 50 percent of the world's population was seriously undernourished. Today, despite a population well over three times bigger, that's down to 10 percent. Feeding the world is still an enormous challenge.
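Those percentages translate into a striking absolute-numbers comparison. A rough illustration — the population totals (roughly 2.3 billion in 1945, 8 billion today) are my own approximations, not figures from the text; the undernourishment shares are the ones quoted above:

```python
# Rough illustration of the undernourishment figures quoted above.
# Population totals are approximate assumptions, not from the text.
pop_1945 = 2.3e9    # world population, ~1945 (assumed)
pop_today = 8.0e9   # world population, today (assumed)

undernourished_1945 = 0.50 * pop_1945    # ~50% seriously undernourished
undernourished_today = 0.10 * pop_today  # ~10% today

print(f"1945:  ~{undernourished_1945 / 1e9:.2f} billion undernourished")
print(f"today: ~{undernourished_today / 1e9:.2f} billion undernourished")
# Even with a population more than three times larger, the absolute
# number of undernourished people has fallen.
```

Under these assumptions the absolute count drops from roughly 1.15 billion to roughly 0.8 billion, even as the population more than triples — the opposite of the Malthusian trajectory.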

IT’S LIKELY THAT THE world is heading for two degrees Celsius of climate warming or more.

Despite well-justified talk of a clean energy transition, the distance still to travel is vast.

The energy scholar Vaclav Smil calls ammonia, cement, plastics, and steel the four pillars of modern civilization.

Sustainable, scalable batteries need radical new technologies.

A school of naive techno-solutionism sees technology as the answer to all of the world’s problems. Alone, it’s not. How it is created, used, owned, and managed all make a difference.

The coming wave is coming partly because there is no way through without it.

Scientists and technologists are all too human. They crave status, success, and a legacy: making history, doing something that matters, helping others, beating others. Build something new. Change the game. Climb the mountain. Whether noble and high-minded or bitter and zero-sum, these impulses are often what drive the work. Find a successful scientist or technologist and somewhere in there you will see someone driven by raw ego, spurred on by emotive impulses that might sound base or even unethical but are nonetheless an under-recognized part of why we get the technologies we do.

NATIONALISM, CAPITALISM, AND SCIENCE — these are, by now, embedded features of the world.

Without tools to spread information at light speed, people in the past could happily sit with new technologies staring them in the face sometimes for decades before they realized their full implications.

Today the world is watching everyone else react in real time. Everything leaks. Everything is copied, iterated, improved. This is why the coming wave is coming, why containing it is such a challenge. Technology is now an indispensable mega-system infusing every aspect of daily life, society, and the economy. Slowing these technologies is antithetical to national, corporate, and research interests.

The idea that CRISPR or AI can be put back in the box is not credible.

STATES OF FAILURE

THE GRAND BARGAIN

AT ITS HEART, THE NATION-STATE, THE CENTRAL UNIT OF the world’s political order today, offers its citizens a simple and highly persuasive bargain.

History suggests that a monopoly over violence — that is, entrusting the state with wide latitude to enforce laws and develop its military powers — is the surest way to enable peace and prosperity.

Even as it grows more powerful and entangled with everyday life, the grand bargain of the nation-state, therefore, is that not only can centralized power enable peace and prosperity, but this power can be contained using a series of checks, balances, redistributions, and institutional forms.

One project I facilitated in 2009 at the Copenhagen climate negotiations involved convening hundreds of NGOs and scientific experts to align their negotiating positions. The idea was to present a coherent position to 192 squabbling countries at the main summit. Except we couldn’t get consensus on anything.

Our institutions for addressing massive global problems were not fit for purpose.

While technology is still the single most powerful avenue for addressing the challenges of the twenty-first century, we cannot ignore downsides.

An influential minority in the tech industry not only believes that new technologies pose a threat to our ordered world of nation-states but actively welcomes its demise.

Our system of nation-states isn’t perfect, far from it. Nonetheless, we must do everything to bolster and protect it.

Western societies in particular are mired in a deep-seated anxiety; they are “nervous states,” impulsive and fractious.

DEMOCRACIES ARE BUILT ON trust. People need to trust that government officials, militaries, and other elites will not abuse their dominant positions. Everyone relies on the trust that taxes will be paid, rules honored, the interests of the whole put ahead of individuals.

Behind the new authoritarian impulse and political instability lies a growing pool of social resentment. A key catalyst of instability and social resentment, inequality has surged across Western nations in recent decades.

Government policy, a shrinking working-age population, stalling educational levels, and decelerating long-term growth have all contributed to decisively more unequal societies.

Onshoring, national security, resilient supply chains, self-sufficiency — today’s language of trade is once again the language of borders, barriers, and tariffs.

Global challenges are reaching a critical threshold. Rampant inflation. Energy shortages. Stagnant incomes. A breakdown of trust. Waves of populism.

This makes containment far more complicated.

I’VE OFTEN HEARD IT said that technology is “value neutral” and that its politics arise from its use. This is so reductive and simplistic that it’s almost meaningless.

Technologies are ideas, manifested in products and services that have profound and lasting consequences for people, social structures, the environment, and everything in between.

Technology and political order are intimately connected. The introduction of new technologies has major political consequences.

What emerges will, I think, tend in two directions with a spectrum of outcomes in between. On one trajectory, some liberal democratic states will continue to be eroded from within, becoming zombie governments. On another, unthinking adoption of some aspects of the coming wave opens pathways to domineering state control, creating supercharged Leviathans whose power goes beyond even history's most extreme totalitarian governments.

Both failing states and authoritarian regimes are disastrous outcomes. Neither direction can or will contain the coming wave.

These fragility amplifiers — system shocks, emergencies 2.0 — will greatly exacerbate existing challenges, shaking the state's foundation, upsetting our already precarious social balance.

FRAGILITY AMPLIFIERS

Power is “the ability or capacity to do something or act in a particular way; … to direct or influence the behavior of others or the course of events.”

Technology is ultimately political because technology is a form of power. And perhaps the single overriding characteristic of the coming wave is that it will democratize access to power.

Wherever power is today, it will be amplified. Anyone with goals — that is, everyone — will have huge help in realizing them.

Now imagine robots equipped with facial recognition, DNA sequencing, and automatic weapons. Future robots may not take the form of scampering dogs. Miniaturized even further, they will be the size of a bird or a bee, armed with a small firearm or a vial of anthrax. They might soon be accessible to anyone who wants them. This is what bad actor empowerment looks like.

AI adept at exploiting not just financial, legal, or communications systems but also human psychology, our weaknesses and biases, is on the way.

This new dynamic — where bad actors are emboldened to go on the offensive — opens up new vectors of attack thanks to the interlinked, vulnerable nature of modern systems.

When non-state and bad actors are empowered in this way, one of the core propositions of the state is undermined: the semblance of a security umbrella for citizens is deeply damaged.

Unlike an arrow or even a hypersonic missile, AI and bioagents will evolve more cheaply, more rapidly, and more autonomously than any technology we’ve ever seen.

Maintaining a decisive, indefinite strategic advantage across such a broad spectrum of general-use technologies is simply not possible.

Information and communication together form their own escalating vector of risk, another emerging fragility amplifier requiring attention.

The rise of synthetic media at scale and minimal cost amplifies both disinformation (malicious and intentionally misleading information) and misinformation (a wider and more unintentional pollution of the information space) at once.

Not all stressors and harms come from bad actors, however. Some come from the best of intentions.

Biological labs are subject to global standards that should stop accidents. The most secure are known as biosafety level 4 (BSL-4) labs. They represent the highest standards of containment for working with the most dangerous pathogenic materials.

And yet accidents and leaks still happen. The 1977 Russian flu is just one example.

In 2007 a leaking pipe at the U.K.'s Pirbright Institute, which includes BSL-4 labs, caused an outbreak of foot-and-mouth disease costing £147 million.

SARS is supposed to be kept in BSL-3 conditions, but it has escaped from virology labs in Singapore, Taiwan, and China.

Nothing should get out. Yet pathogens do, time and again.

FEW AREAS OF BIOLOGY are as controversial as gain-of-function (GOF) research. Put simply, gain-of-function experiments deliberately engineer pathogens to be more lethal or infectious, or both.

GOF research is meant to keep people safe. Yet it inevitably occurs in a flawed world, where labs leak, where pandemics happen.

As the power and spread of any technology grows, so its failure modes escalate.

A lab leak is just one good example of unintended consequences. Accidents like this create another unpredictable stressor, another splintering crack in the system.

In the past, new technologies put people out of work, producing what the economist John Maynard Keynes called “technological unemployment.” In Keynes’s view, this was a good thing, with increasing productivity freeing up time for further innovation and leisure.

Broadly speaking, when technology damaged old jobs and industries, it also produced new ones.

What if new job-displacing systems scale the ladder of human cognitive ability itself, leaving nowhere new for labor to turn?

Many remain unconvinced. Economists like David Autor argue that new technology consistently raises incomes, creating demand for new labor.

I believe this rosy vision is implausible over the next couple of decades; automation is unequivocally another fragility amplifier. My best guess is that new jobs won’t come in the numbers or timescale to truly help.

Even those who don’t foresee the most severe outcomes of automation still accept that it is on course to cause significant medium-term disruptions.

LABOR MARKET DISRUPTIONS ARE, like social media, fragility amplifiers. They damage and undermine the nation-state.

The stressors outlined in this chapter (which are by no means exhaustive) — new forms of attack and vulnerability, the industrialization of misinformation, lethal autonomous weapons, accidents like lab leaks, and the consequences of automation — are all familiar to people in tech, policy, and security circles. Yet they are too often viewed in isolation.

In this great redistribution of power, the state, already fragile and growing more so, is shaken to its core, its grand bargain left tattered and precarious.

THE FUTURE OF NATIONS

Soon after the stirrup was introduced into Europe, Charles Martel, leader of the Franks, saw its potential. Using it to devastating effect, he defeated and expelled the Saracens from France. But the introduction of these heavy cavalry units required immense supporting changes in Frankish society. Horses were hungry and expensive. Heavy cavalry required long years of training. In response, Martel and his heirs expropriated church lands and used them to raise a warrior elite. In return for their new wealth and status that elite promised to keep arms and fight for the king. Over time this improvised pact grew into an elaborate system of feudalism. It became the dominant political form of the entire medieval period. The stirrup was an apparently simple innovation. But with it came a social revolution changing hundreds of millions of lives.

Exponential technologies amplify everyone and everything.

Technologies can reinforce social structures, hierarchies, and regimes of control as well as upend them.

This ungovernable “post-sovereign” world, in the words of the political scientist Wendy Brown, will go far beyond a sense of near-term fragility; it will be instead a long-term macro-trend toward deep instability grinding away over decades. The first result will be massive new concentrations of power and wealth that reorder society.

We are not quite heading for a neocolonial East India Company 2.0. But I do think we have to confront the sheer scale and influence that some boardrooms have not just over the subtle nudges and choice architectures that shape culture and politics today but, more importantly, over where this could lead in decades to come. They are empires of a sort, and with the coming wave their scale, influence, and capability are set to radically expand.

The most powerful forces in the world are actually groups of individuals coordinating to achieve shared goals. Organizations too are a kind of intelligence. Companies, militaries, bureaucracies, even markets — these are artificial intelligences, aggregating and processing huge amounts of data, organizing themselves around specific goals, building mechanisms to get better and better at achieving those goals. Indeed, machine intelligence resembles a massive bureaucracy far more than it does a human mind.

I think we’ll see a group of private corporations grow beyond the size and reach of many nation-states.

In the last wave, things dematerialized; goods became services. All the big tech platforms either are mainly service businesses or have very large service businesses. Returns on intelligence will compound exponentially.

Re-creating the essence of what’s made our species so successful into tools that can be reused and reapplied over and over, in myriad different settings, is a mighty prize, which corporations and bureaucracies of all kinds will pursue, and wield. How these entities are governed, how they will rub against, capture, and reengineer the state, is an open question. That they will challenge it seems certain.

Another inevitable reaction of nation-states will be to use the tools of the coming wave to tighten their grip on power, taking full advantage to entrench their dominance.

Compared with the West, Chinese research into AI concentrates on areas of surveillance like object tracking, scene understanding, and voice or action recognition. China is now the leader in facial recognition technologies, with giant companies like Megvii and CloudWalk vying with SenseTime for market share. Around half the world's billion CCTV cameras are in China. Many have built-in facial recognition.

It would be a mistake to write this off as just a Chinese or authoritarian problem. For a start, this tech is being exported wholesale to places like Venezuela and Zimbabwe, Ecuador and Ethiopia.

BEFORE THE COMING WAVE the notion of a global "high-tech panopticon" was the stuff of dystopian novels, Yevgeny Zamyatin's We or George Orwell's 1984. The panopticon is becoming possible.

In its Lebanese home territory, Hezbollah operates as a Shiite “state within a state.” Across the large swaths of Lebanese territory it controls, Hezbollah operates schools, hospitals, health-care centers, infrastructure, water projects, and microcredit-lending initiatives. So, what is Hezbollah? State or non-state? Extremist group or conventional territory-based power?

The coming wave, however, could make a range of small, state-like entities a lot more plausible. Far from centralizing power, it might actually spur a kind of "Hezbollahization," a splintered, tribalized world where everyone has access to the latest technologies, where everyone can support themselves on their own terms, where it is far more possible for anyone to maintain living standards without the great superstructures of nation-state organization.

Mass rebellion, secessionism, and state formation of any kind look very different in this world. Redistributing real power means communities of all kinds can live as they wish, whether they are ISIS, FARC, Anonymous, secessionists from Biafra to Catalonia, or a major corporation building luxury theme parks on a remote island in the Pacific.

AS PEOPLE INCREASINGLY TAKE power into their own hands, I expect inequality’s newest frontier to lie in biology. A country desperate for investment or advantage might see potential in becoming an anything-goes biohacker paradise.

Governance works by consent; it is a collective fiction resting on the belief of everyone concerned.

The old social contract gets ripped to pieces. Institutions are bypassed, undermined, superseded. Taxation, law enforcement, compliance with norms: all under threat.

Something more like the pre-nation-state world emerges in this scenario, neo-medieval, smaller, more local, and constitutionally diverse, a complex, unstable patchwork of polities.

This is a world where billionaires and latter-day prophets can build and run microstates.

Understanding the future means handling multiple conflicting trajectories at once. The coming wave launches immense centralizing and decentralizing riptides at the same time.

The coming wave will only deepen and recapitulate the exact same contradictory dynamics of the last wave. The internet does precisely this: centralizes in a few key hubs while also empowering billions of people.

Everyone can build a website, but there’s only one Google. Everyone can sell their own niche products, but there’s only one Amazon.

So, where does it leave technology and, much more important, where does it leave us? What happens if the state can no longer control, in a balanced fashion, the coming wave?

THE DILEMMA

THE HISTORY OF HUMANITY IS, IN PART, A HISTORY OF CATASTROPHE. Pandemics feature widely. Two killed up to 30 percent of the world population: the sixth-century Plague of Justinian and the fourteenth-century Black Death.

Catastrophes are also, of course, man-made. World War I killed around 1 percent of the global population; World War II, 3 percent.

The upshot of the coming wave’s four features is that, absent strong methods of containment operating at every level, catastrophic outcomes like an engineered pandemic are more possible than ever.

The only entity in principle capable of navigating this existential bind is the same system of nation-states currently falling apart, dragged down by the very forces it needs to contain.

Over time, then, the implications of these technologies will push humanity to navigate a path between the poles of catastrophe and dystopia. This is the essential dilemma of our age.

Over the next ten years, AI will be the greatest force amplifier in history. This is why it could enable a redistribution of power on a historic scale. The greatest accelerant of human progress imaginable, it will also enable harms — from wars and accidents to random terror groups, authoritarian governments, overreaching corporations, plain theft, and willful sabotage.

There is no instruction manual on how to build the technologies in the coming wave safely.

Nor is building safe and contained technology in itself sufficient. Solving the question of AI alignment doesn’t mean doing so once; it means doing it every time a sufficiently powerful AI is built, wherever and whenever that happens.

Containment is about the ability to control technology. Further back, that means the ability to control the people and societies behind it.

When the unitary power of the nation-state is threatened, when containment appears increasingly difficult, when lives are on the line, the inevitable reaction will be a tightening of the grip on power. The question is, at what cost?

The door to dystopia is cracked open. Indeed, in the face of catastrophe, for some dystopia may feel like a relief.

A cataclysm would galvanize calls for an extreme surveillance apparatus to stop future such events.

As smaller-scale technology failures mount, calls for control increase. As control increases, checks and balances get whittled down, the ground shifts and makes way for further interventions, and a steady downward spiral to techno-dystopia begins.

Catastrophe and dystopia. The philosopher of technology Lewis Mumford talked about the “megamachine,” where social systems combine with technologies to form “a uniform, all-enveloping structure” that is “controlled for the benefit of depersonalized collective organizations.”

The development of new technologies is, as we’ve seen, a critical part of meeting our planet’s grand challenges. Without new technologies, these challenges will simply not be met. Without new technologies, sooner or later everything stagnates, and possibly collapses altogether.

Over the next century, the global population will start falling. This is a global problem. This is not only about numbers but about expertise, tax base, and investment levels; retirees will be pulling money out of the system, not investing it for the long term.

Stress on our resources, too, is a certainty. Demand for lithium, cobalt, and graphite is set to rise 500 percent by 2030.

Given the population and resource constraints, just standing still would probably require a global two-to-threefold productivity improvement. Make no mistake: standstill in itself spells disaster. Standstill means a meager future of at best decline but probably an implosion that could spiral alarmingly. Some might argue this forms a third pole, a great trilemma.

I am, however, confident that the coming decades will see complex, painful trade-offs between prosperity, surveillance, and the threat of catastrophe growing ever more acute. Even a system of states in the best possible health would struggle. We are facing the ultimate challenge for Homo technologicus.

THROUGH THE WAVE

CONTAINMENT MUST BE POSSIBLE

Saying “Regulation!” in the face of awesome technological change is the easy part.

Regulation alone is not enough.

Even technologists and researchers in areas like AI struggle with the pace of change. What chance, then, do regulators have, with fewer resources?

Technology evolves week by week. Drafting and passing legislation takes years.

The central problem for humanity in the twenty-first century is how we can nurture sufficient legitimate political power and wisdom, adequate technical mastery, and robust norms to constrain technologies to ensure they continue to do far more good than harm. How, in other words, we can contain the seemingly uncontainable.

The most ambitious legislation is probably the EU’s AI Act, first proposed in 2021.

Technologies with “unacceptable risk” of causing direct harm will be prohibited. Where AI affects fundamental human rights or critical systems like basic infrastructure, public transport, health, or welfare, it will get classed as “high risk,” subjected to greater levels of oversight and accountability.

Most regulation walks a tightrope of competing interests. But in few areas other than frontier technology must it tackle something so widely diffused, so critical to the economy, and yet so fast evolving.

Regulating not just hyper-evolutionary but omni-use general-purpose technologies is incredibly challenging.

ABOVE THE CUT AND thrust of legislative debate, nations are also caught in a contradiction. Every nation wants to be, and be seen, at the technological frontier.

On the other hand, they're desperate to regulate and manage these technologies — to contain them, not least for fear they will threaten the nation-state as the ultimate seat of power.

There is an unbridgeable gulf between the desire to rein in the coming wave and the desire to shape and own it, between the need for protections against technologies and the need for protections against others.

Contained technology is technology whose modes of failure are known, managed, and mitigated, a situation where the means to shape and govern technology escalate in parallel with its capabilities.

Generally, though, consider containment more as a set of guardrails, a way to keep humanity in the driver’s seat when a technology risks causing more harm than good.

Recall the four features of the coming wave: asymmetry, hyper-evolution, omni-use, and autonomy. Each feature must be viewed through the lens of containability.

Containment of the coming wave is, I believe, not possible in our current world.

TEN STEPS TOWARD CONTAINMENT

THINK OF THE TEN IDEAS PRESENTED HERE AS CONCENTRIC CIRCLES. We start small and direct, close to the technology. From there each idea gets progressively broader. It’s the way all these layers of the onion build that makes them powerful; each alone is insufficient.

  • SAFETY: AN APOLLO PROGRAM FOR TECHNICAL SAFETY

Technical safety, up close, in the code, in the lab, is the first item on any containment agenda.

Physical segregation is just one aspect of transforming technical safety architecture to meet the challenge of the next wave.

There’s a clear must-do here: encourage, incentivize, and directly fund much more work in this area. It’s time for an Apollo program on AI safety and biosafety.

The highest-level challenge, whether in synthetic biology, robotics, or AI, is building a bulletproof off switch, a means of closing down any technology threatening to run out of control.

How to do this with technologies that are as distributed, protean, and far-reaching as in the coming wave — technologies whose precise form isn’t yet clear, technologies that in some cases might actively resist — is an open question.

  • AUDITS: KNOWLEDGE IS POWER; POWER IS CONTROL

Audits sound boring. Necessary, maybe — but deadly dull. But they are critical to containment.

Trust comes from transparency. We absolutely need to be able to verify, at every level, the safety, integrity, or uncompromised nature of a system.

Keeping close tabs on significant data sets that are used to train models, particularly open-source data sets, bibliometrics from research, and publicly available harmful incidents, would be a fruitful and noninvasive place to start. APIs that let others use foundational AI services should not be blindly open, but rather come with “know your customer” checks, as with, say, portions of the banking industry.
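As a purely illustrative sketch (the book proposes the policy, not an implementation), a “know your customer” gate in front of a foundational AI API could be as simple as refusing to serve requests from unvetted accounts. All names, fields, and criteria below are hypothetical:

```python
# Minimal sketch of a "know your customer" gate for an AI service API.
# All names and criteria are hypothetical illustrations, not a real API.

from dataclasses import dataclass


@dataclass
class Customer:
    name: str
    identity_verified: bool   # e.g., identity documents checked at sign-up
    use_case_reviewed: bool   # stated purpose screened against usage policy
    sanctioned: bool          # appears on a restricted-party list


def may_call_api(customer: Customer) -> bool:
    """Serve only verified, reviewed, non-sanctioned customers."""
    return (customer.identity_verified
            and customer.use_case_reviewed
            and not customer.sanctioned)


anonymous = Customer("unknown", False, False, False)
vetted = Customer("Acme Labs", True, True, False)
assert not may_call_api(anonymous)
assert may_call_api(vetted)
```

The point of the sketch is only that access becomes conditional and auditable, as in banking: every call is tied to a vetted identity rather than an anonymous key.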

Screening all DNA synthesis would be a major bio-risk reduction exercise and would not, in my view, unduly curb civil liberties.

Getting both technical safety features and audit measures in place is vital, but it takes something we don’t have. Time.

  • CHOKE POINTS: BUY TIME

Xi Jinping was worried. “We rely on imports for some critical devices, components, and raw materials,” the Chinese president told a group of the country’s scientists in September 2020.

Some years earlier a government-run newspaper had used a more graphic image to describe the same problem: Chinese technology was, it said, limited by a series of “choke points.”

Xi’s fears came to pass on October 7, 2022. America declared war on China, attacking one of those choke points.

The shots fired were export controls on advanced semiconductors, the chips that underwrite computing and so artificial intelligence.

The wave can be slowed, at least for some period of time and in some areas.

Buying time in an era of hyper-evolution is invaluable. Time to develop further containment strategies. Time to build in additional safety measures. Time to test that off switch. Time to build improved defensive technologies. Time to shore up the nation-state, regulate better, or even just get that bill passed. Time to knit together international alliances.

Chips aren’t the only choke point. Industrial-scale cloud computing, too, is dominated by six major companies.

  • MAKERS: CRITICS SHOULD BUILD IT

Credible critics must be practitioners. They need to be involved: building the right technology, having the practical means to change its course, not just observing and commenting but actively showing the way and effecting the necessary changes at the source.

  • BUSINESSES: PROFIT + PURPOSE

Profit drives the coming wave. There’s no pathway to safety that doesn’t recognize and grapple with this fact. When it comes to exponential technologies like AI and synthetic biology, we must find new accountable and inclusive commercial models that incentivize safety and profit alike.

I believe that figuring out ways to reconcile profit and social purpose in hybrid organizational structures is the best way to navigate the challenges that lie ahead, but making it work in practice is incredibly hard.

Shareholder capitalism works because it is simple and clear, and governance models too have a tendency to default to the simple and clear. In the shareholder model, lines of accountability and performance tracking are quantified and very transparent. It may be possible to design more modern structures in theory, but operating them in practice is another story.

Containment needs a new generation of corporations. It needs founders and those working in tech to contribute positively to society.

  • GOVERNMENTS: SURVIVE, REFORM, REGULATE

Nation-states still control many fundamental elements of civilization: law, the money supply, taxation, the military, and so on. That helps with the task ahead, where they will need to create and maintain resilient social systems, welfare nets, security architectures, and governance mechanisms capable of surviving severe stress.

Proactive governments will exert far greater control than if they just commission services and live off outsourced expertise and tech owned and operated elsewhere. Accountability is enabled by deep understanding. Ownership gives control. Both require governments to get their hands dirty.

Their first task should be to better monitor and understand developments in technology.

The most sophisticated AI systems or synthesizers or quantum computers should be produced only by responsible certified developers. As part of their license, they would need to subscribe to clear, binding security and safety standards, following rules, running risk assessments, keeping records, closely monitoring live deployments.

Taxation also needs to be completely overhauled to fund security and welfare as we undergo the largest transition of value creation — from labor to capital — in history.

Today U.S. labor is taxed at an average rate of 25 percent, equipment and software at just 5 percent.

A carefully calibrated shift in the tax burden away from labor would incentivize continued hiring and cushion disruptions in household life.

A universal basic income (UBI) — that is, an income paid by the state for every citizen irrespective of circumstances — has often been floated as the answer to the economic disruptions of the coming wave.

In an era of hyper-scaling corporate AIs, we should start to think of capital taxes.

Mechanisms must be found for cross-border taxation of those giant businesses.

A fixed portion of company value, for example, paid as a public dividend would keep value transferring back to the population in an age of extreme concentration.

  • ALLIANCES: TIME FOR TREATIES

There is no path to technological safety without working with your adversaries.

Beyond encouraging bilateral initiatives, the obvious thing at this stage is to propose creating some new kind of global institution devoted to technology.

Rather than having an organization that itself directly regulates, builds, or controls technology, I would start with something like an AI Audit Authority — the AAA.

  • CULTURE: RESPECTFULLY EMBRACING FAILURE

The common thread here is governance: of software systems, of microchips, of businesses and research institutes, of countries, and of the international community.

That’s what’s needed for the coming wave: real, gut-level buy-in from everyone involved in frontier technologies.

While the tech industry talks a big game when it comes to “embracing failure,” it rarely does so when it comes to privacy or safety or technical breaches. Launching a product that doesn’t catch on is one thing, but owning a language model that causes a misinformation apocalypse or a drug that causes adverse reactions is far more uncomfortable.

The first thing a technology company should do when encountering any kind of risk, downside, or failure mode is to communicate it safely to the wider world.

Knowledge is a public good, but open publication should no longer be the default.

For millennia, the Hippocratic oath has been a moral lodestar for the medical profession. In Latin, Primum non nocere. First, do no harm. Scientists need something similar.

Pause before building, pause before publishing, review everything, sit down and hammer out the second-, third-, nth-order impacts.

  • MOVEMENTS: PEOPLE POWER

Because we build technology, we can fix the problems it creates. This is true in the broadest sense. The problem, though, is that there is no functional “we” here.

Instead, countless distributed actors work sometimes together and sometimes at cross-purposes.

Citizen assemblies offer a mechanism for bringing a wider group into the conversation.

Change happens when people demand it.

  • THE NARROW PATH: THE ONLY WAY IS THROUGH

Ten steps summary:

  • Technical safety. Concrete technical measures to alleviate possible harms and maintain control.
  • Audits. A means of ensuring the transparency and accountability of technology.
  • Choke points. Levers to slow development and buy time for regulators and defensive technologies.
  • Makers. Ensuring responsible developers build appropriate controls into technology from the start.
  • Businesses. Aligning the incentives of the organizations behind technology with its containment.
  • Government. Supporting governments, allowing them to build technology, regulate technology, and implement mitigation measures.
  • Alliances. Creating a system of international cooperation to harmonize laws and programs.
  • Culture. A culture of sharing learning and failures to quickly disseminate means of addressing them.
  • Movements. All of this needs public input at every level, including to put pressure on each component and make it accountable.
  • Coherence. The tenth step: ensuring that each element works in harmony with all the others.

Safe, contained technology is, like liberal democracy, not a final end state; rather, it is an ongoing process, a delicate equilibrium that must be actively maintained, constantly fought for and protected.

IS CONTAINMENT OF THE coming wave possible? The narrow path must be walked forever from here on out, and all it takes is one misstep to tumble into the abyss. The blunt challenge of containment is not a reason to turn away; it is a call to action, a generational mission we all need to face.

We should all get comfortable with living with contradictions in this era of exponential change and unfurling powers.

LIFE AFTER THE ANTHROPOCENE

The coming wave is going to change the world. Ultimately, human beings may no longer be the primary planetary drivers, as we have become accustomed to being. We are going to live in an epoch when the majority of our daily interactions are not with other people but with AIs.

Technology should amplify the best of us, open new pathways for creativity and cooperation, work with the human grain of our lives and most precious relationships. It should make us happier and healthier, the ultimate complement to human endeavor and life well lived — but always on our terms, democratically decided, publicly debated, with benefits widely distributed.
