Vaclav Smil: How the World Really Works

Introduction: Why Do We Need This Book?

In 1872, a century after the appearance of the last volume of the Encyclopédie, any collection of knowledge already had to resort to superficial treatment of a rapidly expanding range of topics. A century and a half later, it is impossible to sum up our understanding even within narrowly circumscribed specialties: such terms as “physics” or “biology” are fairly meaningless labels, and experts in particle physics would find it very hard to understand even the first page of a new research paper in viral immunology. Obviously, this atomization of knowledge has not made public decision-making any easier.

Why then do most people in modern societies have such a superficial knowledge about how the world really works? The complexities of the modern world are an obvious explanation: people are constantly interacting with black boxes, whose relatively simple outputs require little or no comprehension of what is taking place inside the box.

Urbanization and mechanization have been two important reasons for this comprehension deficit. Most modern urbanites are thus disconnected not only from the ways we produce our food but also from the ways we build our machines and devices, and the growing mechanization of all productive activity means that only a very small share of the global population now engages in delivering civilization’s energy and the materials that comprise our modern world.

America now has only about 3 million men and women (farm owners and hired labor) directly engaged in producing food.

China is the world’s largest producer of steel — smelting, casting, and rolling nearly a billion tons of it every year — but all of that is done by less than 0.25 percent of China’s 1.4 billion people.

From lawyers and economists to code writers and money managers, the most highly rewarded occupations earn their disproportionately high incomes for work completely removed from the material realities of life on earth.

The real wrench in the works: we are a fossil-fueled civilization whose technical and scientific advances, quality of life, and prosperity rest on the combustion of huge quantities of fossil carbon, and we cannot simply walk away from this critical determinant of our fortunes in a few decades, never mind years.

Our high-energy societies have been steadily increasing their dependence on fossil fuels in general and on electricity, the most flexible form of energy, in particular.

Much of what we rely on to survive, from wheat to tomatoes to shrimp, has one thing in common: it requires substantial, direct and indirect, fossil fuel inputs.

Our societies are sustained by materials created by human ingenuity, focusing on what I call the four pillars of modern civilization: ammonia, steel, concrete, and plastics.

The world has become deeply interconnected by transportation and communication.

Modern societies have succeeded in eliminating or reducing many previously mortal or crippling risks — polio and giving birth, for example — but many perils will always be with us, and we repeatedly fail to make proper risk assessments, both underestimating and exaggerating the dangers we face.

How unfolding environmental changes might affect our three existential necessities: oxygen, water, and food.

Rather than resorting to an ancient comparison of foxes and hedgehogs (a fox knows many things, but a hedgehog knows one big thing), I tend to think about modern scientists as either the drillers of ever-deeper holes (now the dominant route to fame) or scanners of wide horizons (now a much-diminished group).

Understanding Energy: Fuels and Electricity

The first microorganisms emerge nearly 4 billion years ago.

Hundreds of millions of years then elapse with no signs of change before cyanobacteria begin to use the energy of the visible incoming solar radiation to convert CO2 and water into new organic compounds and release oxygen.

This is a radical shift that will create Earth’s oxygenated atmosphere, yet a long time elapses before new, more complex aquatic organisms are seen 1.2 billion years ago.

Many four-legged animals briefly stand or awkwardly walk on two legs, and more than 4 million years ago this form of locomotion becomes the norm for small ape-like creatures that begin spending more time on land than in trees.

Several hundred thousand years ago, the probes detect the first extrasomatic use of energy — external to one’s body; that is, any energy conversion besides digesting food — when some of these upright walkers master fire and begin to use it deliberately for cooking, comfort, and safety.

This trend intensifies with the next notable change, the adoption of crop cultivation. About 10 millennia ago, the probes record the first patches of deliberately cultivated plants as a small share of the Earth’s total photosynthesis becomes controlled and manipulated by humans who domesticate — select, plant, tend, and harvest — crops for their (delayed) benefit.

In 1600, rather than relying solely on wood, a society is increasingly burning coal, a fuel produced by photosynthesis tens or hundreds of millions of years ago and fossilized by heat and pressure during its long underground storage. Even by 1850, rising coal extraction in Europe and North America supplies no more than 7 percent of all fuel energy, nearly half of all useful kinetic energy comes from draft animals, about 40 percent from human muscles, and just 15 percent from the three inanimate prime movers: waterwheels, windmills, and the slowly spreading steam engines. But by 2020 more than half of the world’s electricity will still be generated by the combustion of fossil fuels, mainly coal and natural gas.

The global population rose from 1 billion in 1800 to 1.6 billion in 1900 and 6.1 billion in the year 2000, and hence the supply of useful energy rose (all values in gigajoules per capita) from 0.05 in 1800 to 2.7 in 1900 and to about 28 in the year 2000.

An average inhabitant of the Earth nowadays has at their disposal nearly 700 times more useful energy than their ancestors had at the beginning of the 19th century.
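
The per-capita figures quoted above can be checked directly. Note that they imply a roughly 560-fold gain by the year 2000; the “nearly 700 times” presumably reflects continued growth in the two decades since then (an inference, not stated in the excerpt). A minimal sketch:

```python
# Useful energy per capita (gigajoules), as given in the text.
useful_gj = {1800: 0.05, 1900: 2.7, 2000: 28}

growth_to_1900 = useful_gj[1900] / useful_gj[1800]   # 54-fold in one century
growth_to_2000 = useful_gj[2000] / useful_gj[1800]   # 560-fold in two centuries

print(f"1800-1900: {growth_to_1900:.0f}x; 1800-2000: {growth_to_2000:.0f}x")
```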

An abundance of useful energy underlies and explains all the gains — from better eating to mass-scale travel; from mechanization of production and transport to instant personal electronic communication — that have become norms rather than exceptions in all affluent countries.

Energy conversions are the very basis of life and evolution. Modern history can be seen as an unusually rapid sequence of transitions to new energy sources, and the modern world is the cumulative result of their conversions.

Energy is the only truly universal currency, and nothing (from galactic rotations to ephemeral insect lives) can take place without its transformations.

Understanding how the world really works cannot be done without at least a modicum of energy literacy.

What is energy? How do we define this fundamental quantity? The Greek etymology is clear. Aristotle, writing in his Metaphysics, combined ἐν (in) with ἔργον (work) and concluded that every object is maintained by ἐνέργεια.

Eventually, Isaac Newton (1643 – 1727) laid down fundamental physical laws involving mass, force, and momentum, and his second law of motion made it possible to derive the basic energy units. Using modern scientific units, 1 joule is the work done when a force of 1 newton (the force that gives a mass of 1 kilogram an acceleration of 1 m/s²) acts over a distance of 1 meter.
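
In SI base units, the short derivation runs:

```latex
F = m a \;\Rightarrow\; 1\,\mathrm{N} = 1\,\mathrm{kg\,m\,s^{-2}}, \qquad
W = F d \;\Rightarrow\; 1\,\mathrm{J} = 1\,\mathrm{N\,m} = 1\,\mathrm{kg\,m^{2}\,s^{-2}}.
```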

Our practical understanding of energy was greatly expanded during the 19th century thanks to the era’s proliferating experiments with combustion, heat, radiation, and motion. This led to what is still the most common definition of energy: “the capacity for doing work” — a definition valid only when the term “work” means not only some invested labor but, as one of the leading physicists of the era put it, a generalized physical “act of producing a change of configuration in a system in opposition to a force which resists that change.”

Richard Feynman (in his famous Lectures on Physics) tackled the challenge in his straightforward manner, stressing that “energy has a large number of different forms, and there is a formula for each one. These are: gravitational energy, kinetic energy, heat energy, elastic energy, electrical energy, chemical energy, radiant energy, nuclear energy, mass energy.”

It is important to realize that in physics today, we have no knowledge of what energy is. We do not have a picture that energy comes in little blobs of a definite amount.

The first law of thermodynamics states that no energy is ever lost during conversions: be that chemical to chemical when digesting food; chemical to mechanical when moving muscles; chemical to thermal when burning natural gas; thermal to mechanical when rotating a turbine; mechanical to electrical in a generator; or electrical to electromagnetic as light illuminates the page you are reading. However, all energy conversions eventually result in dissipated low-temperature heat: no energy has been lost, but its utility, its ability to perform useful work, is gone (the second law of thermodynamics).

There are many choices available when it comes to energy conversions, some far better than others.

Large nuclear reactors are the most reliable producers of electricity: some of them now generate it 90 – 95 percent of the time, compared to about 45 percent for the best offshore wind turbines and 25 percent for photovoltaic cells in even the sunniest of climates.
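
These reliability figures translate directly into annual output. A quick sketch, taking 92 percent as the midpoint of the nuclear range quoted above and assuming 1 gigawatt of installed capacity:

```python
# Annual electricity from 1 GW of capacity at typical capacity factors.
HOURS_PER_YEAR = 8760

for source, capacity_factor in [("nuclear", 0.92),
                                ("offshore wind", 0.45),
                                ("solar PV", 0.25)]:
    twh_per_year = 1 * HOURS_PER_YEAR * capacity_factor / 1000  # GW x h -> TWh
    print(f"{source}: {twh_per_year:.2f} TWh per year")
```

The same gigawatt of capacity delivers nearly four times as much electricity when attached to a reactor as when attached to photovoltaic panels.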

This is simple physics or electrical engineering, but it is remarkable how often these realities are ignored. An even more frequent mistake is to confuse energy with power.

Energy is a scalar, which in physics is a quantity described only by its magnitude; volume, mass, density and time are other ubiquitous scalars. Power measures energy per unit of time and hence it is a rate (in physics, a rate measures change, commonly per time).

Power equals energy divided by time: in scientific units, it is watts = joules/second. Energy equals power multiplied by time: joules = watts × seconds.

An adult man’s basal metabolic rate (the energy required at complete rest to run the body’s essential functions) is about 80 watts, or 80 joules per second; lying prone all day a 70-kilogram man would still need about 7 megajoules (80×24×3,600) of food energy, or about 1,650 kilocalories, to maintain his body temperature, energize his beating heart, and run myriad enzymatic reactions.
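
The basal-metabolism example is a compact illustration of the power-times-time rule, using only the figures given above (and taking 1 kilocalorie as 4,184 joules):

```python
# A day of basal metabolism: power (a rate) integrated over time gives energy.
power_w = 80                          # basal metabolic rate of an adult man
seconds_per_day = 24 * 3600

energy_j = power_w * seconds_per_day  # 6,912,000 J, i.e. "about 7 MJ"
energy_kcal = energy_j / 4184         # 1 kcal = 4,184 J -> ~1,650 kcal

print(f"{energy_j / 1e6:.1f} MJ = {energy_kcal:,.0f} kcal per day")
```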

Here is a density ladder (all rates in gigajoules per ton): air-dried wood, 16; bituminous coal (depending on quality), 24 – 30; kerosene and diesel fuels, about 46. In volume terms (all rates in gigajoules per cubic meter), energy densities are only about 10 for wood, 26 for good coal, 38 for kerosene. Natural gas (methane) contains only 35 MJ/m³ — or less than 1/1,000 of kerosene’s volumetric density.
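
The gravimetric (GJ/t) and volumetric (GJ/m³) figures above are linked by each fuel’s bulk density, so one can be cross-checked against the other. A sketch, taking 27 GJ/t as the midpoint of the quoted coal range:

```python
# The text's gravimetric and volumetric figures imply each fuel's bulk density.
fuels = {
    # name: (GJ per ton, GJ per cubic meter)
    "air-dried wood": (16, 10),
    "good coal": (27, 26),
    "kerosene": (46, 38),
}

for name, (gj_per_t, gj_per_m3) in fuels.items():
    bulk_density_t_per_m3 = gj_per_m3 / gj_per_t
    print(f"{name}: ~{bulk_density_t_per_m3:.2f} t/m3")

# Methane at atmospheric pressure: 35 MJ/m3 against kerosene's 38,000 MJ/m3.
volume_ratio = 38_000 / 35
print(f"kerosene stores ~{volume_ratio:.0f}x more energy per cubic meter")
```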

Crude oil needs refining to separate the complex mixture of hydrocarbons into specific fuels — gasoline being the lightest; residual fuel oil the heaviest — but this process yields more valuable fuels for specific uses, and it also produces indispensable non-fuel products such as lubricants. Lubricants are needed to minimize friction in everything from the massive turbofan engines in wide-body jetliners to miniature bearings.

Another product derived from crude oil is asphalt.

And hydrocarbons have yet another indispensable non-fuel use: as feedstocks for many different chemical syntheses.

The shift from coal to crude oil took generations to accomplish. Commercial crude oil extraction began during the 1850s in Russia, Canada, and the US.

Crude oil became a global fuel, and eventually the world’s most important source of primary energy, thanks to the discoveries of giant oil fields in the Middle East and in the USSR — and, of course, also thanks to the introduction of large tankers.

Even once demand for oil began to increase again, many oil-saving measures remained in place and some — notably the transitions to more efficient industrial uses — kept on intensifying.

In 1995, crude oil extraction finally surpassed the 1979 record and then continued to rise, meeting the demand of an economically reforming China as well as the rising demand elsewhere in Asia — but oil has not regained its pre-1975 relative dominance.

Its share of the global commercial primary energy supply fell from 45 percent in 1970 to 38 percent in the year 2000 and to 33 percent in 2019 — and it is now certain that its further relative decline will continue as natural gas consumption and wind and solar electricity generation keep increasing.

If energy, according to Feynman, is “that abstract thing,” then electricity is one of its most abstract forms.

Electricity is the best form of energy for lighting. The most telling comparison of light sources is in terms of their luminous efficacy.

When setting the luminous efficacy of candles as equal to 1, coal gas lights in the early industrial cities produced 5 – 10 times more; before the First World War electric light bulbs with tungsten filaments emitted up to 60 times more; today’s best fluorescent lights produce about 500 times as much; and sodium lamps (used for outdoor lighting) are up to 1,000 times more efficacious.

The conversion of electricity into kinetic energy by electric motors first revolutionized nearly every sector of industrial production and later penetrated every household niche.

The service sector now dominates all modern economies, and its operation is completely dependent on electricity. Electric motors power elevators and escalators, air-condition buildings, open doors, and compact garbage. The long-term trend toward the electrification of societies (rising share of fuels converted to electricity rather than consumed directly) has been unmistakable.

Electricity still supplies only a relatively small share of final global energy consumption, just 18 percent.

Commercial electricity generation began in 1882, with three firsts. Two of them were the pioneering coal-fired generating stations designed by Thomas Edison (Holborn Viaduct in London began operating in January 1882; Pearl Street station in New York in September 1882), and the third was the first hydroelectric station (on the Fox River in Appleton, Wisconsin, also generating since September 1882).

Nuclear fission began to generate commercial electricity in 1956 at Britain’s Calder Hall, saw its greatest expansion during the 1980s, peaked in 2006, and has since declined slightly to about 10 percent of global electricity generation.

Hydro generation accounted for nearly 16 percent in 2020; wind and solar added almost 7 percent; and the rest (about two-thirds) came from large central stations fueled mostly by coal and natural gas.
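
Combining these shares with nuclear’s roughly 10 percent (quoted just above) confirms the “about two-thirds” remainder. A minimal tally:

```python
# Approximate shares of global electricity generation (2020), per the text.
low_carbon = {"nuclear": 10, "hydro": 16, "wind + solar": 7}

fossil_share = 100 - sum(low_carbon.values())   # ~67 percent
print(f"fossil-fueled (and other thermal) share: ~{fossil_share}%")
```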

The target is not total decarbonization but “net zero” or carbon neutrality.

Germany is the most notable example: since the year 2000, it has boosted its wind and solar capacity 10-fold and raised the share of renewables (wind, solar, and hydro) from 11 percent to 40 percent of total generation.

In 2019, Germany generated 577 terawatt-hours of electricity, less than 5 percent more than in 2000 — but its installed generating capacity expanded by about 73 percent (from 121 to about 209 gigawatts).
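
The mismatch between flat generation and swelling capacity shows up as a falling fleet-average capacity factor. A sketch of the arithmetic; the 2000 generation (about 550 TWh) is backed out from the “less than 5 percent more” statement, an estimate rather than a figure given in the text:

```python
# Fleet-average capacity factor: actual generation over the maximum
# a fully utilized fleet could deliver in a year.
def capacity_factor(generation_twh: float, capacity_gw: float) -> float:
    return generation_twh / (capacity_gw * 8760 / 1000)

cf_2000 = capacity_factor(550, 121)   # ~0.52
cf_2019 = capacity_factor(577, 209)   # ~0.32

print(f"2000: {cf_2000:.0%}; 2019: {cf_2019:.0%}")
```

Roughly the same electricity now requires nearly three-quarters more installed capacity, because intermittent wind and solar plants sit idle much of the time.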

A nuclear renaissance would be particularly helpful if we cannot develop better ways of large-scale electricity storage soon. The future of nuclear generation remains uncertain. Only China, India, and South Korea are committed to further expansion of their capacities. Even the European Union now recognizes that it could not come close to its extraordinarily ambitious decarbonization target without nuclear reactors.

News headlines assure us that the future of flight is electric — touchingly ignoring the huge gap between the energy density of kerosene burned by turbofans and today’s best lithium-ion (Li-ion) batteries that would be on board these hypothetically electric planes. Turbofan engines powering jetliners burn fuel whose energy density is 46 megajoules per kilogram (that’s nearly 12,000 watt-hours per kilogram), converting chemical to thermal and kinetic energy — while today’s best Li-ion batteries supply less than 300 Wh/kg, more than a 40-fold difference.
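
The conversion behind that gap is simple (1 Wh = 3,600 J; strictly, 46 MJ/kg works out to about 12,800 Wh/kg):

```python
# Converting kerosene's gravimetric energy density into battery units.
kerosene_wh_per_kg = 46e6 / 3600      # 46 MJ/kg -> ~12,800 Wh/kg
battery_wh_per_kg = 300               # today's best Li-ion cells, per the text

gap = kerosene_wh_per_kg / battery_wh_per_kg   # ~43-fold
print(f"{kerosene_wh_per_kg:,.0f} Wh/kg vs {battery_wh_per_kg} Wh/kg: {gap:.0f}x")
```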

Germany will soon generate half of its electricity from renewables, but during the two decades of Energiewende the share of fossil fuels in the country’s primary energy supply has only declined from about 84 percent to 78 percent.

The economic rise of China was the main reason why the global consumption of fossil fuels rose by about 45 percent during the first two decades of the 21st century, and why, despite extensive and expensive expansion of renewable energies, the share of fossil fuels in the world’s primary energy supply fell only marginally, from 87 percent to about 84 percent.

What we need is to pursue a steady reduction of our dependence on the energies that made the modern world. We still do not know most of the particulars of this coming transition, but one thing remains certain: it will not be (it cannot be) a sudden abandonment of fossil carbon, nor even its rapid demise — but rather its gradual decline.

Understanding Food Production: Eating Fossil Fuels

Securing a sufficient quantity and nutritional variety of food is the existential imperative for every species.

Foraging in arid environments could require an area of more than 100 square kilometers to support a single family.

In more productive regions, population densities could rise to as many as 2 – 3 people per 100 hectares.

In ancient Egypt, the population density rose from about 1.3 people per hectare of cultivated land during the predynastic period (before 3150 BCE) to about 2.5 people per hectare 3,500 years later, when the country was a province of the Roman Empire.

Over time, and very slowly, preindustrial rates of food production rose even higher — but rates of 3 people per hectare were not achieved until the 16th century, and only then in intensively cultivated regions of Ming China; in Europe they remained below 2 people per hectare until the 18th century.

Rising food production reduced the malnutrition rate from 2 in 3 people in 1950 to 1 in 11 by 2019. In 1950 the world was able to supply adequate food to about 890 million people, but by 2019 that had risen to just over 7 billion: a nearly eight-fold increase in absolute terms!

The fundamental energy conversion producing our food has not changed: as always, we are eating, whether directly as plant foods or indirectly as animal foodstuffs, products of photosynthesis — the biosphere’s most important energy conversion, powered by solar radiation.

Two centuries ago it took about 10 minutes of human labor to produce a kilogram of wheat, which would, with wholegrain flour, yield 1.6 kilograms (two loaves) of bread. That was laborious, slow, and low-yielding farming, but it was completely solar: no energy inputs were required beyond the Sun’s radiation.

High-productivity harvests are possible thanks to increasing infusions of fossil energies.

Many people nowadays admiringly quote the performance gains of modern computing (“so much data”) or telecommunication (“so much cheaper”) — but what about harvests? In two centuries, the human labor to produce a kilogram of American wheat was reduced from 10 minutes to less than two seconds.
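
Stated as a single ratio, the gain is at least 300-fold:

```python
# Two centuries of productivity gain in American wheat, per the text.
labor_1800_s = 10 * 60   # about ten minutes per kilogram of wheat, circa 1800
labor_today_s = 2        # less than two seconds per kilogram today

fold_reduction = labor_1800_s / labor_today_s
print(f"labor per kilogram cut at least {fold_reduction:.0f}-fold")
```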

Nitrogen is needed in such great quantities because it is in every living cell.

The element is abundant — it makes up nearly 80 percent of the atmosphere, organisms live submerged in it — and yet it is a key limiting factor in crop productivity as well as in human growth. This is one of the great paradoxical realities of the biosphere and its explanation is simple: nitrogen exists in the atmosphere as a non-reactive molecule (N2), and only a few natural processes can split the bond between the two nitrogen atoms and make the element available to form reactive compounds.

For a standard baguette (250 grams), the embedded energy equivalent is about 2 tablespoons of diesel fuel; for a large German Bauernbrot (2 kilograms), it would be about 2 cups of diesel fuel (less for a wholewheat loaf). The real fossil energy cost is higher still, because only a small share of bread is now baked where it is bought.

Some staple harvests carry an equivalent energy consumption as high as 600 mL/kg! But if bread’s typical (roughly 5:1) ratio of edible mass to the mass of embedded energy (1 kilogram of bread compared to about 210 grams of diesel fuel) seems uncomfortably high, recall that grains — even after processing and conversion into our favorite foods — are at the bottom of our food energy subsidy ladder.
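
The tablespoon and cup figures for bread can be restated as diesel volumes per kilogram. A sketch, assuming a typical diesel density of about 0.84 kg/L and US measures (14.8 mL per tablespoon, 236.6 mL per cup); these two constants are assumptions, not figures from the text:

```python
# Bread's embedded fuel, restated as milliliters of diesel per kilogram.
DIESEL_KG_PER_L = 0.84    # assumed typical density of diesel fuel
TBSP_ML, CUP_ML = 14.8, 236.6

baguette_ml_per_kg = 2 * TBSP_ML / 0.25   # 250 g loaf, 2 tbsp -> ~118 mL/kg
bauernbrot_ml_per_kg = 2 * CUP_ML / 2.0   # 2 kg loaf, 2 cups -> ~237 mL/kg
text_rate_ml_per_kg = 210 / DIESEL_KG_PER_L  # 210 g per kg -> ~250 mL/kg

print(baguette_ml_per_kg, bauernbrot_ml_per_kg, text_rate_ml_per_kg)
```

The three routes agree to within a factor of about two, bracketing the 210 – 250 mL/kg range cited for bread below.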

Feed costs alone can be as low as 150 milliliters of diesel fuel per kilogram of edible meat, and as high as 750 mL/kg.

The most conservative combined rate for feeding and rearing the birds would be thus an equivalent of about 200 milliliters of diesel fuel per kilogram of meat, but the values can go as high as 1 liter.

The minimum of 300 – 350 mL/kg is a remarkably efficient performance compared to the rates of 210 – 250 mL/kg for bread, and this is reflected in the comparably affordable prices of chicken.

Perhaps the most meticulous study of tomato cultivation, in the heated and unheated multi-tunnel greenhouses of Almería in Spain, concluded that the cumulative energy demand of net production is more than 500 milliliters of diesel fuel (more than two cups) per kilogram for the former (heated) and only 150 mL/kg for the latter harvest.

When bought in a Scandinavian supermarket, tomatoes from Almería’s heated plastic greenhouses have a stunningly high embedded production and transportation energy cost. The total is equivalent to about 650 mL/kg, or more than five tablespoons (each containing 14.8 milliliters) of diesel fuel per medium-sized (125-gram) tomato!
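
The per-tomato arithmetic behind the tablespoon claim, using only the figures above:

```python
# Diesel-equivalent embedded energy of one long-distance greenhouse tomato.
embedded_ml_per_kg = 650
tomato_kg = 0.125          # a medium-sized, 125-gram tomato
TBSP_ML = 14.8             # one US tablespoon

diesel_ml = embedded_ml_per_kg * tomato_kg   # ~81 mL of diesel per tomato
tablespoons = diesel_ml / TBSP_ML            # ~5.5 tablespoons

print(f"{diesel_ml:.0f} mL = {tablespoons:.1f} tablespoons per tomato")
```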

As it turns out, capturing what the Italians so poetically call frutti di mare is the most energy-intensive process of food provision.

If you want to eat wild fish with the lowest-possible fossil carbon footprint, stick to sardines. The mean for all seafood is stunningly high — 700 mL/kg.

So, the evidence is inescapable: our food supply — be it staple grains, clucking birds, favorite vegetables, or seafood praised for its nutritious quality — has become increasingly dependent on fossil fuels.

Anthropogenic energy inputs into modern field farming (including all transportation), fisheries, and aquaculture add up to only about 4 percent of recent annual global energy use.

In the US, thanks to the prevalence of modern techniques and widespread economies of scale, direct energy use in food production is now on the order of 1 percent of the total national supply. But after adding the energy requirements of food processing and marketing, packaging, transportation, wholesale and retail services, household food storage and preparation, and away-from-home food and marketing services, the grand total reached nearly 16 percent of the nation’s energy supply in 2007 and is now approaching 20 percent.

There are many reasons why we should not continue many of today’s food-producing practices.

Between 1800 and 2020, we reduced the labor needed to produce a kilogram of grain by more than 98 percent — and we reduced the share of the country’s population engaged in agriculture by the same large margin.

A global inventory of reactive nitrogen shows that six major flows bring the element to the world’s croplands: atmospheric deposition, irrigation water, plowing-under of crop residues, spreading of animal manures, nitrogen left in soil by leguminous crops, and application of synthetic fertilizers.

Synthetic fertilizers supply 110 megatons of nitrogen per year, or slightly more than half of the 210 – 220 megatons used in total.

Global crop cultivation supported solely by the laborious recycling of organic wastes and by more common rotations is conceivable for a global population of 3 billion people consuming largely plant-based diets, but not for nearly 8 billion people on mixed diets.

The well-documented global food losses have been excessively high, mostly because of an indefensible difference between output and actual needs: daily average per capita requirements of adults in largely sedentary affluent populations are no more than 2,000 – 2,100 kilocalories, far below the actual supplies of 3,200 – 4,000 kilocalories.

According to the FAO, the world loses almost half of all root crops, fruits, and vegetables, about a third of all fish, 30 percent of cereals, and a fifth of all oilseeds, meat, and dairy products — or at least one-third of the overall food supply.

The quest for mass-scale veganism is doomed to fail. Eating meat has been as significant a component of our evolutionary heritage as our large brains (which evolved partly because of meat eating), bipedalism, and symbolic language.

Between 1961 and 1980 there was a substantial decline in the share of applied nitrogen actually incorporated by crops (from 68 percent to 45 percent), then came a levelling off at around 47 percent.

There are obvious opportunities for running field machinery without fossil fuels.

The readers of this book now understand that our food is partly made not just of oil, but also of coal that was used to produce the coke required for smelting the iron needed for field, transportation, and food processing machinery; of natural gas that serves as both feedstock and fuel for the synthesis of nitrogenous fertilizers; and of the electricity generated by the combustion of fossil fuels that is indispensable for crop processing, taking care of animals, and food and feed storage and preparation.

Understanding Our Material World: The Four Pillars of Modern Civilization

Four materials rank highest on this combined scale, and they form what I have called the four pillars of modern civilization: cement, steel, plastics, and ammonia.

They are needed in larger (and still increasing) quantities than are other essential inputs. In 2019, the world consumed about 4.5 billion tons of cement, 1.8 billion tons of steel, 370 million tons of plastics, and 150 million tons of ammonia, and they are not readily replaceable by other materials — certainly not in the near future or on a global scale.

Another key commonality between these four materials is particularly noteworthy as we contemplate the future without fossil carbon: the mass-scale production of all of them depends heavily on the combustion of fossil fuels.

As a result, global production of these four indispensable materials claims about 17 percent of the world’s primary energy supply, and 25 percent of all CO2 emissions originating in the combustion of fossil fuels — and currently there are no commercially available and readily deployable mass-scale alternatives to displace these established processes.

Of the four substances (and despite my dislike of rankings!), it is ammonia that deserves the top position as our most important material.

Maturing agronomic science made it clear that the only way to secure adequate food for the larger populations of the 20th century was to raise yields by increasing the supply of nitrogen and phosphorus, two key plant macronutrients.

The synthesis of ammonia from its elements, nitrogen and hydrogen, was pursued by a number of highly qualified chemists (including Wilhelm Ostwald, a Nobel Prize winner in chemistry in 1909), but in 1908 Fritz Haber — at that time professor of physical chemistry and electrochemistry at the Technische Hochschule in Karlsruhe — working with his English assistant Robert Le Rossignol and supported by BASF, Germany’s (and the world’s) leading chemical enterprise, was the first researcher to succeed.

There are now only two effective direct solutions to field losses of nitrogen: the spreading of expensive slow-release compounds; and, more practically, turning to precision farming and applying fertilizers only as needed based on analyses of the soil.

Plastics are a large group of synthetic (or semisynthetic) organic materials whose common quality is that they are fit for forming (molding). Synthesis of plastics begins with monomers, simple molecules that can be bonded in long chains or branches to make polymers. The two key monomers, ethylene and propylene, are produced by the steam cracking (heating to 750 – 950°C) of hydrocarbon feedstocks, and hydrocarbons also energize subsequent syntheses.

But plastics have found their most indispensable roles in health care in general and in the hospital treatment of infectious diseases in particular. Modern life now begins (in maternity wards) and ends (in intensive care units) surrounded by plastic items.

Steels (the plural is more accurate as there are more than 3,500 varieties) are alloys dominated by iron (Fe). Modern steels are made from cast iron by reducing its high carbon content to 0.08 – 2.1 percent by weight. Steel’s physical properties handily beat those of the hardest stones, as well as those of the other two most common metals.

Steels come in four major categories.

  • Carbon steels (90 percent of all steels on the market are 0.3 – 0.95 percent carbon) are everywhere, from bridges to fridges and from gears to shears.
  • Alloy steels include varying shares of one or more elements (most commonly manganese, nickel, silicon, and chromium, but also aluminum, molybdenum, titanium, and vanadium), added in order to improve their physical properties (hardness, strength, ductility).
  • Stainless steel (10 – 20 percent chromium) was made for the first time only in 1912 for kitchenware, and is now widely used for surgical instruments, engines, machine parts, and in construction.
  • Tool steels have a tensile strength 2 – 4 times higher than the best construction steels, and they are used for cutting steel and other metals for dies (for stamping or extrusion of other metals or plastics), as well as for manual cutting and hammering.

Steel determines the look of modern civilization and enables its most fundamental functions. Do we have adequate supplies of iron ore to keep making steel for many generations to come? Iron is the Earth’s dominant element by mass because it is heavy.

This is a resource/production (R/P) ratio of more than 300 years, far beyond any conceivable planning horizons (the R/P ratio for crude oil is just 50 years).

Moreover, steel is readily recycled by melting it in an electric arc furnace (EAF).

Steel scrap has become one of the world’s most valuable export commodities, as countries with a long history of steel production and with plenty of accumulated scrap sell the material to expanding producers. Primary steelmaking still dominates, producing more than twice as much hot metal every year as is recycled — almost 1.3 billion tons in 2019.

Ironmaking is highly energy-intensive, with about 75 percent of the total demand claimed by blast furnaces.

The World Steel Association puts the average global rate at 500 kilograms of carbon per ton, with recent primary steelmaking emitting about 900 megatons of carbon a year, or 7 – 9 percent of direct emissions from the global combustion of fossil fuels.
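
As a sketch of the arithmetic, assuming the association’s average rate applies to the roughly 1.8 billion tons of annual steel output cited earlier:

```python
# Cross-checking steelmaking's carbon burden against the text's figures.
steel_output_mt = 1800        # million tons of steel per year (2019)
carbon_kg_per_ton = 500       # World Steel Association average rate

carbon_mt_per_year = steel_output_mt * carbon_kg_per_ton / 1000
print(f"~{carbon_mt_per_year:.0f} Mt of carbon per year")   # ~900 Mt
```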

But steel is not the only major material responsible for a significant share of CO2 emissions: cement is much less energy-intensive, but because its global output is nearly three times that of steel, its production is responsible for a very similar share (about 8 percent) of emitted carbon.

Cement is the indispensable component of concrete, and it is produced by heating (to at least 1,450°C) ground limestone (a source of calcium) and clay, shale, or waste materials (sources of silicon, aluminum, and iron) in large kilns — long (100 – 220 meters) inclined metal cylinders.

Perhaps the most stunning outcome of this rise is that in just two years — 2018 and 2019 — China produced nearly as much cement (about 4.4 billion tons) as did the United States during the entire 20th century (4.56 billion tons).

Yet another astounding statistic is that the world now consumes in one year more cement than it did during the entire first half of the 20th century.

During the 21st century we will face unprecedented burdens of concrete deterioration, renewal, and removal (with, obviously, a particularly acute problem in China), as structures will have to be torn down in order to be replaced, or simply abandoned.

Two prominent examples illustrate this unfolding material dependence. No structures are more obvious symbols of “green” electricity generation than large wind turbines — but these enormous accumulations of steel, cement, and plastics are also embodiments of fossil fuels.

Uncertainties about the future rates of electric vehicle adoption are large, but a detailed assessment of material needs, based on two scenarios (assuming that 25 percent or 50 percent of the global fleet in 2050 would be electric vehicles), found the following: from 2020 to 2050, demand for lithium would grow by a factor of 18–20, for cobalt by 17–19, for nickel by 28–31, and for most other materials by a factor of 15–20.
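
Those multiples can be translated into the sustained annual growth they imply over the 30-year span; the conversion below is my own illustration, not part of the cited assessment:

```python
def implied_cagr(factor: float, years: int) -> float:
    """Compound annual growth rate implied by a total growth factor."""
    return factor ** (1 / years) - 1

# Even the lower ends of the cited ranges imply roughly 10-12 percent
# compound growth in demand, every year, for three decades.
for material, factor in [("lithium", 18), ("cobalt", 17), ("nickel", 28)]:
    print(f"{material}: {implied_cagr(factor, 30):.1%} per year")
```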

Modern economies will always be tied to massive material flows, whether those of ammonia-based fertilizers to feed the still-growing global population; plastics, steel, and cement needed for new tools, machines, structures, and infrastructures; or new inputs required to produce solar cells, wind turbines, electric cars, and storage batteries. And until all energies used to extract and process these materials come from renewable conversions, modern civilization will remain fundamentally dependent on the fossil fuels used in the production of these indispensable materials. No AI, no apps, and no electronic messages will change that.

Understanding Globalization: Engines, Microchips, and Beyond

Statistics concerning money movements greatly underestimate the real (including massive illegal) flows. The global merchandise trade is now close to $20 trillion a year, and the annual value of world trade in commercial services is close to $6 trillion.

Perhaps the greatest misconception about globalization is that it is a historical inevitability preordained by economic and social evolution.

My goal is to explain how technical factors — above all, new prime movers (engines, turbines, motors) and new means of communication and information (storage, transmission, and retrieval) — made successive waves of globalization possible, and then to point out how these technical advances have been contingent on the prevailing political and social conditions.

In its most fundamental physical way, globalization is, and will remain, simply the movement of mass — of raw materials, foodstuffs, finished products, and people — and the transmission of information (warnings, guidance, news, data, ideas) and investment within and among the continents, enabled by techniques that make such transfers possible on large scales and in affordable and reliable ways.

But the scattered linking of parts of Europe, Asia, and Africa is a far cry from a truly global reach. Only the inclusion of the New World (starting in 1492) and the first circumnavigation of the Earth (1519) began to satisfy this definition.

The East India Company, headquartered in London and operating between 1600 and 1874, traded a wide range of items — largely to and from the Indian subcontinent — from textiles and metals to spices and opium. The Vereenigde Oost-Indische Compagnie (Dutch East India Company) imported spices, textiles, gems, and coffee mostly from Southeast Asia; it kept its uninterrupted monopoly on trade with Japan for two centuries (between 1641 and 1858), and the Dutch domination of the East Indies ended only in 1945.

Incipient globalization eventually connected the world with far-flung but not very intensive exchanges enabled by sailing ships. Steam engines made these linkages more common, more intensive, and much more predictable, while the telegraph provided the first truly global means of (near-instant) communication. The combination of the first diesel engines, flight, and radio elevated and accelerated these enablers of globalization. And large diesels (in shipping), turbines (in flight), containers (enabling intermodal transport), and microchips (allowing unprecedented controls thanks to the volume and speed of information-handling) brought globalization to its highest stage.

Starting at the beginning, the limits of globalization dependent solely on animate power are easily stated.

Caravans on the Silk Road (from Tanais on the Black Sea via Sarai to Beijing) took a year, implying an average speed of about 25 kilometers per day.
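
The route length implied by those two figures (a derived number, not one stated in the text):

```python
# About a year of travel at roughly 25 km per day implies a route on the
# order of 9,000 km, consistent with the overland span of the Silk Road.
days, km_per_day = 365, 25
print(days * km_per_day)  # → 9125
```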

The average duration of a voyage to Batavia (present-day Jakarta) was 238 days (eight months) during the 17th century, and another month from Batavia to Dejima, the small Dutch outpost in the Nagasaki harbor.

During the second century of the early modern era (1500–1800) the societies at the forefront of this still-modest but rising wave of globalization were influenced by these long-range interchanges.

This was just an incipient, selective, and limited globalization without any substantial nationwide impacts, to say nothing about truly global consequences.

Economist Angus Maddison estimated that in 1698–1700 commodity exports from the East Indies accounted for just 1.8 percent of the Dutch net domestic product, and that the Indonesian export surplus was a mere 1.1 percent of the Dutch GDP — and nearly a century later (1778–1780) both of these shares were still only 1.7 percent.

The first quantitative leap in the process of globalization came only with the combination of more reliable navigation, steam power (resulting in larger ship capacities and faster speeds), and the telegraph — the first means of (nearly) instant long-distance communication.

The first steam-powered westward transatlantic crossings took place in 1838, but sailing ships remained competitive for another four decades.

A practical telegraph was developed during the late 1830s and the early 1840s; the first (short-lived) transatlantic telegraph cable was laid in 1858; and by the century’s end undersea cables had connected all continents.

The total volume of global trade quadrupled between 1870 and 1913; the share of trade (exports and imports) in the worldwide economic product rose from about 5 percent in 1850 to 9 percent by 1870, and to 14 percent in 1913.

The next fundamental advance in prime movers that raised the capability of long-distance shipping was the replacement of steam engines with diesel engines — machines of superior efficiency and reliable performance.

Two concurrent processes that promoted further globalization were the invention of airplanes powered by reciprocating gasoline engines, and radio communication.

This distinct and intensive, but still far from universal, spell of post-1950 globalization — which ended in 1973–1974 with OPEC’s two rounds of oil price increases and which was followed by 15 years of relative stagnation — was enabled by a combination of four fundamental technical advances. These were the rapid adoption of much more powerful and efficient designs of diesel engines; the introduction (and even faster diffusion) of a new prime mover, the gas turbine used for the propulsion of jetliners; superior designs for intercontinental shipping (massive bulk carriers for liquids and solids, and the containerization of other cargoes); and quantum leaps in computing and information processing.

After the Second World War, crude oil tankers were the first vessels to grow in capacity as the rapid economic growth of Western Europe and Japan coincided with the availability of newly discovered Middle Eastern giant oil fields (Saudi Arabia’s Ghawar, the world’s largest, was found in 1948 and began flowing in 1951), and exports of this inexpensive fuel (until 1971 it sold for less than $2 per barrel) required vessels of increasing capacities.

Intercontinental natural gas shipments became possible with the introduction of the first liquefied natural gas (LNG) tankers (carrying the fuel at −162°C in insulated containers), which brought exports from Algeria to the UK starting in 1964 and from Alaska to Japan in 1969.

In October 1957, Gateway City, a freighter whose hold was fitted with cellular compartments to accommodate 226 stacked containers, became the world’s first true container ship, and Malcom McLean’s Sea-Land company began a regular container service to Europe (Newark–Rotterdam) in April 1966 and to Japan in 1968.

The integration of the global economy has been closely tied to the introduction of wide-body jetliners — to the Boeing 747 and to its later Airbus (A340 and A380) emulators.

The years between 1950 and 1973 were marked by rapid economic growth in virtually every part of the world: its global annual mean rate and its average per capita gains were nearly 2.5 times greater than during the previous globalization wave of 1850–1913, and the value of exported goods in the world economic product rose from a low of just over 4 percent in 1945 to 9.6 percent in 1950 and about 14 percent in 1974, equaling the 1913 share but with trade volume nearly ten times higher.

OPEC-driven oil price increases caused globalization to falter, weaken, and recede, but this retreat did not affect all economic sectors — and in a matter of years a combination of effective adjustments laid foundations for a new round of globalization that, thanks to new political alignments, progressed further than any of the preceding waves.

By the late 1960s, technical capabilities were ready for unprecedented global integration: energy supply was plentiful, there was no shortage of money to invest, and all that was needed was to extend the globalization process to the nations that did not participate in the first postwar round.

China, Russia, and India became major participants in global trade, finances, travel, and talent flows.

In 1972, China had no trade with the US; 1984 was the last year the US ran a surplus in goods trade with China; in 2009, China became the world’s largest exporter of goods; and by 2018 its exports accounted for more than 12 percent of all global sales, and its trade surplus with the US reached nearly $420 billion before declining by about 18 percent in 2019 due to rising tensions between the two economic superpowers.

India, with its messy electoral and multiethnic politics, has not been able to replicate China’s post-1990 rise, but the record of its per capita GDP growth during the first two decades of the 21st century indicates a clear departure from the previous decades of poor performance. Since 2008 the country’s annual growth of merchandise exports has been, at 5.3 percent, only slightly behind China’s 5.7 percent, and the impact of India’s software engineers in Silicon Valley (where they have been the single most important contingent of skilled immigrants in the industry) has been far above Chinese contributions.

For generations the US led overall tourist expenditures, but it was surpassed by China in 2012 and five years later Chinese tourists were spending twice as much as Americans.

The history of globalization reveals an undeniable long-term trend toward greater international economic integration that is manifested by intensified flows of energies, materials, people, ideas, and information, and that is enabled by improving technical capabilities. The process is not new, but only thanks to many post-1850 innovations could it have reached its recent intensity and extent.

History reminds us that the recent state of things is unlikely to last for generations.

A mass-scale, rapid retreat from the current state is impossible, but the pro-globalization sentiment has been weakening for some time.

We now have solid quantitative confirmation that globalization did reach a turning point in the mid-2000s. This development was soon obscured by the Great Recession of 2008, but McKinsey’s analysis of 23 industry value chains shows that goods-producing value chains (still growing slowly in absolute terms) have become significantly less trade-intensive, with exports declining from 28.1 percent of gross output in 2007 to 22.5 percent in 2017.

Global value chains are becoming more knowledge-intensive and rely increasingly on highly skilled labor.

Not surprisingly, the reshoring of manufacturing could be the wave of the future, both in North America and in Europe: a 2020 survey showed that 64 percent of American manufacturers said that reshoring is likely following the pandemic.

Understanding Risks: From Viruses to Diets to Solar Flares

We can look at which populations live the longest and what their diets are.

Among the world’s more than 200 nations and territories, Japan has had the highest average longevity since the early 1980s, when its combined (male and female) life expectancy at birth surpassed 77 years. Further gains followed, and by 2020 Japan’s combined life expectancy at birth was about 84.6 years. The Japanese diet has undergone an enormous transformation during the past 150 years.

The latest published surveys show Japan and the US to be surprisingly close in total food energy consumed per day. In 2015–2016, US males consumed only 11 percent more, and US women less than 4 percent more, food energy per day than their Japanese counterparts did in 2017.

But there is a major gap in terms of average fat intake, with American males consuming about 45 percent more and women 30 percent more than the Japanese. And the greatest disparity is in sugar intake: among US adults it is about 70 percent higher. When recalculated in terms of average annual differences, Americans have recently consumed about 8 kilograms more fat and 16 kilograms more sugar every year than the average adult in Japan.

Spanish women rank second in the world in life expectancy, and the country traditionally followed the so-called Mediterranean diet, with high intakes of vegetables, fruits, and whole grains complemented by beans, nuts, seeds, and olive oil. But as average incomes in Spain rose, the Spanish rapidly changed those habits to a surprisingly high degree.

There is no “objective risk” waiting to be measured because our risk perceptions are inherently subjective, dependent on our understanding of specific dangers (familiar vs. new risks) and on cultural circumstances.

Nuclear electricity generation is widely perceived as unsafe, x-rays as tolerably risky.

Large differences in individual tolerance of risk are best illustrated by the fact that many individuals engage — voluntarily and repeatedly — in activities that others might consider not just too risky but belonging all too clearly to the category of death wish.

A uniform metric able to subsume fatalities and injuries or economic losses (whose totals could differ by orders of magnitude among different societies) and chronic pain (something that remains notoriously unquantifiable) is clearly an impossible objective. But the finality of dying provides a universal, ultimate, and incontestably quantifiable numerator that can be used for comparative risk assessment. The simplest and most obvious way to make some revealing comparisons is to use a standard denominator and to compare annual frequencies of causes of death per 100,000 people.

A more insightful metric, then, is to use the time during which people are affected by a given risk as the common denominator, and to make the comparisons in terms of fatalities per person per hour of exposure — that is, the time when an individual is subject, involuntarily or voluntarily, to a specific risk. This approach was introduced in 1969 by Chauncey Starr in his evaluation of social benefits and technological risks, and I still find it preferable to another general metric — that of micromorts.

These units define a microprobability, a one-in-a-million chance of death per specific exposure, and express it per year, per day, per surgery, per flight, or per distance traveled; such non-uniform denominators do not make for easy across-the-board comparisons.

Expressed this way, in affluent countries the overall risk of natural demise amounts to 1 person among 1 million dying every hour; every hour, 1 person among about 3 million dies of heart disease and 1 among roughly 70 million dies of an accidental fall.
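
That 1-in-a-million-per-hour baseline can be sanity-checked by compounding it over a year; the comparison with affluent-country crude death rates is my own check, not a figure from the text:

```python
# Compound a 1-in-a-million hourly risk of dying over a year (8,766 hours).
hourly_risk = 1e-6
annual_risk = 1 - (1 - hourly_risk) ** 8766
print(f"{annual_risk:.2%}")  # → 0.87%, close to a typical affluent-country crude death rate
```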

But many risky exposures cannot be so easily assigned, because there is no clear dichotomy between voluntary and involuntary risks.

For the US we have totals of distances traveled every year by all motor vehicles and by passenger cars (a recent grand total has been about 5.2 trillion kilometers annually) and, after declining for many years, traffic fatalities have gone up slightly to about 40,000 a year. Assuming an average combined speed of 65 km/hour (about 40 mph) gives us annually about 80 billion driving hours in the US, and with 40,000 fatalities this translates exactly to 5 × 10⁻⁷ (0.0000005) fatalities per hour of exposure. And for men of my age group the driving-risk bump is only 12 percent above the overall risk of dying.
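
The driving arithmetic above, reproduced step by step:

```python
km_per_year = 5.2e12        # all US motor-vehicle travel, km per year
avg_speed_kmh = 65          # assumed average combined speed (about 40 mph)
fatalities_per_year = 40_000

driving_hours = km_per_year / avg_speed_kmh       # 80 billion driving hours
risk_per_hour = fatalities_per_year / driving_hours
print(f"{risk_per_hour:.1e}")  # → 5.0e-07
```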

Perhaps the most revealing way to compare the airline industry’s fatalities is per 100 billion passenger-kilometers flown. The rate was 14.3 in 2010; it reached a record low of 0.65 in 2017 but increased to 2.75 in 2019. Flying in 2019 was thus more than five times safer than in 2010, and more than 200 times safer than at the beginning of the jetliner era in the late 1950s.

The total of about 10.5 billion passenger-hours spent aloft and 292 fatalities translates to 2.8 × 10⁻⁸ (0.000000028) fatalities per person per hour of flying. This is only about 3 percent of the general risk of mortality.
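
And the flying figures, checked the same way against the general-mortality baseline:

```python
passenger_hours = 10.5e9      # total hours spent aloft
fatalities = 292
baseline_per_hour = 1e-6      # general mortality risk used above

risk_per_hour = fatalities / passenger_hours
print(f"{risk_per_hour:.1e}, {risk_per_hour / baseline_per_hour:.0%} of baseline")
# → 2.8e-08, 3% of baseline
```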

At the opposite end of the voluntary risk spectrum are activities whose brief duration carries a high probability of death. None is riskier than BASE jumping from cliffs, towers, bridges, and buildings.

For comparison, in skydiving a fatal accident used to take place roughly once every 100,000 jumps but the latest US data show one fatality for every 250,000 jumps. With a typical descent lasting five minutes, the exposure risk is only about 5 × 10⁻⁵ per hour, still 50 times higher than just sitting in a chair for those five minutes — but it is only about 1/1,000 of the risk associated with BASE jumping.
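
The skydiving conversion to a per-hour exposure risk:

```python
risk_per_jump = 1 / 250_000   # latest US fatality rate per jump
descent_hours = 5 / 60        # a typical five-minute descent
risk_per_hour = risk_per_jump / descent_hours
print(f"{risk_per_hour:.1e}")  # → 4.8e-05, i.e. about 5 × 10⁻⁵ per hour
```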

Finally, a few key numbers concerning one of the most dreaded modern involuntary exposures: the risk of terrorism. Between 1995 and 2017, 3,516 people died in terrorist attacks on US soil, with 2,996 fatalities (or 85 percent of that total) on September 11, 2001. Countrywide individual exposure risk thus averaged 6 × 10⁻¹¹ during those 22 years, and for Manhattan it was two orders of magnitude higher, increasing the risk of just being alive by one-tenth of a percent, a quantity that is too small to be meaningfully internalized. In less fortunate countries, the recent toll of terrorist attacks has been much higher: in Iraq in 2017 (with more than 4,300 deaths) the risk rose to 1.3 × 10⁻⁸, and in Afghanistan in 2018 (7,379 deaths) to 2.3 × 10⁻⁸, but even that rate raises the basic risk of being alive by just a few percent and it remains lower than the risk people voluntarily assume by driving.
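
The countrywide exposure figure can be reproduced as follows; the ~300 million average US population over those years is my assumed round number, not a figure from the text:

```python
deaths = 3_516                 # US terrorism deaths, 1995-2017
years, hours_per_year = 22, 8766
population = 300e6             # assumed average US population

risk_per_person_hour = deaths / (years * hours_per_year * population)
print(f"{risk_per_person_hour:.0e}")  # → 6e-11
```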

And how do recurrent deadly natural hazards compare with just being alive, and with the risks of extreme sports? Such exposures translate to about 3 × 10⁻⁹ (0.000000003) fatalities per hour of exposure, a risk that is three orders of magnitude lower than just living.

Floods and earthquakes in most parts of the world carry exposure risks on the order of 1 × 10⁻¹⁰ to 5 × 10⁻¹⁰.

Risks with truly global impacts fall into two very different categories: relatively frequent viral pandemics that can exact a considerable toll in a matter of months or a few years; and exceedingly rare but uncommonly deadly natural catastrophes that could take place within spans as short as a few days, hours, or seconds but whose consequences might persist not only for centuries but for millions of years, far beyond any civilizational horizons.

Perhaps the best example of a natural risk that would not directly kill anybody, but that would cause enormous planet-wide disruptions resulting in a large number of indirect casualties, is the possibility of a catastrophic geomagnetic storm caused by a coronal mass ejection.

The corona is the outermost layer of the Sun’s atmosphere (it can be seen without special instruments only during a total solar eclipse) and is, paradoxically, hundreds of times hotter than the Sun’s surface. Coronal mass ejections are enormous (billions of tons) expulsions of explosively accelerated material that carry an embedded magnetic field whose strength greatly surpasses that of background solar wind and the interplanetary magnetic field.

The largest known coronal mass ejection began on the morning of September 1, 1859, while Richard Carrington, a British astronomer, was observing and drawing a large sunspot group that emitted a sizable, kidney-shaped white flare.

Even limited damage would mean hours or days of disrupted communications and grid operations, and a massive geomagnetic storm would sever all of these links on a global scale, leaving us without electricity, without information, without transportation, without the ability to make credit card payments or to withdraw money from banks.

While many experts are well aware of these odds and of the enormity of the potential consequences, this is clearly one of those risks (much like a pandemic) for which we cannot ever be adequately prepared: we just have to hope that the next massive coronal ejection event will not equal or surpass the Carrington Event.

While this may not be what the world wants to hear at this time, it is an unfortunate truth that viral pandemics are guaranteed to reappear with relatively high frequency and, although sharing inevitable commonalities, they are unpredictably specific in their impacts.

Moreover, most people and most governments find it difficult to deal properly with low-probability but high-impact (high-loss) events.

But it is difficult, if not impossible, to avoid many exposures, because (as already noted) in some cases there is no clear dichotomy between voluntary and involuntary risks. And most risks are beyond our control.

We habitually underestimate voluntary, familiar risks while we repeatedly exaggerate involuntary, unfamiliar exposures. We constantly overestimate the risks stemming from recent shocking experiences and underestimate the risk of events once they recede in our collective and institutional memory.

Public reaction to risks is guided more by a dread of what is unfamiliar, unknown, or poorly understood than by any comparative appraisal of actual consequences.

Understanding the Environment: The Only Biosphere We Have

If our species is to survive, never mind to flourish, for at least as long as high civilizations have been around (that is, for another 5,000 or so years), then we will have to make sure that our continuing interventions do not imperil the long-term habitability of the planet — or, as modern parlance has it, that we do not transgress safe planetary boundaries.

The list of these critical biospheric boundaries includes nine categories:

  • climate change (now interchangeably, albeit inaccurately, called simply global warming);
  • ocean acidification (endangering marine organisms that build structures of calcium carbonate);
  • depletion of stratospheric ozone (which shields the Earth from excessive ultraviolet radiation and is threatened by releases of chlorofluorocarbons);
  • atmospheric aerosols (pollutants reducing visibility and causing lung impairment);
  • interference in nitrogen and phosphorus cycles (above all, the release of these nutrients into fresh and coastal waters);
  • freshwater use (excessive withdrawals of underground, stream, and lake waters);
  • land use changes (due to deforestation, farming, and urban and industrial expansion);
  • biodiversity loss;
  • and various forms of chemical pollution.

Oxygen, after all, is the most acutely limiting resource for human survival. Our species, like all other chemoheterotrophs (organisms that cannot internally produce their own nutrition), requires its constant supply. The resting frequency of breathing is 12–20 inhalations a minute, and the daily adult per capita intake averages almost 1 kilogram of O2. For the global population, that translates to an intake of about 2.7 billion tons of oxygen a year, an utterly insignificant fraction (0.00023 percent) of the element’s atmospheric presence of about 1.2 quadrillion tons of O2.
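
Scaling the per capita intake to the global total; the 7.5 billion population is my assumed round figure for the period in question:

```python
kg_o2_per_person_day = 1.0     # adult daily oxygen intake
population = 7.5e9             # assumed global population
atmospheric_o2_tons = 1.2e15   # atmospheric stock of O2

annual_intake_tons = kg_o2_per_person_day * population * 365 / 1000
fraction = annual_intake_tons / atmospheric_o2_tons
print(f"{annual_intake_tons:.2e} t/yr, {fraction:.5%} of atmospheric O2")
# → 2.74e+09 t/yr, 0.00023% of atmospheric O2
```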

Massive forest fires are destructive and harmful in many ways, but they are not going to suffocate us because of a lack of oxygen.

In contrast, the provision of the second-most acutely required natural input should be very high on our list of environmental worries — and not because there is any absolute shortage of this critical resource but because it is unevenly distributed and because we have not managed it well. And that is an understatement — we waste water enormously and, so far, we have been slow to adopt many effective changes that would reverse undesirable habits and trends.

National per capita consumption is the best (most comprehensive) way to assess water footprints: it adds green, blue, and grey water components as well as all virtual water (water that was required for the growth or production of imported food and manufactured goods). Domestic blue water use (all values are in cubic meters per year per capita) ranges from just over 29 in Canada and 23 in the US to about 11 in France, 7 in Germany, and about 5 in China and India, and to less than 1 in many African countries.

Concerns about wasting a finite resource are always appropriate, but there is no imminent phosphorus crisis. According to the International Fertilizer Development Center, the world’s phosphate rock reserves and resources are adequate to meet fertilizer demand for the next 300–400 years.

The real concern about plant nutrients is the environmental (and hence economic) consequences of their unwanted presence in the environment, mostly in water. Phosphorus from fertilizers is lost through soil erosion and precipitation runoff and it is released in waste produced by domestic animals and people.

And any longer-term assessment of the three existential necessities — atmospheric oxygen, water availability, and food production — must consider how their provision could be affected by the unfolding process of climate change, a gradual transformation that will leave its mark on the biosphere in myriad ways: the impacts go far beyond higher temperatures and rising ocean levels, the two changes that are most often referenced by the media.

A few years before his death, Joseph Fourier (1768 – 1830), a French mathematician, was the first scientist to realize that the atmosphere absorbs some of the radiation emanating from the ground; and in 1856, Eunice Foote, an American scientist and inventor, was the first author to link (briefly but clearly) CO2 with global warming.

All that is to say, we should not worry about oxygen. However, we must be concerned about the future of the water supply. With a warming of up to 2°C, populations exposed to increased, climate change–induced water scarcity may be as low as 500 million and as high as 3.1 billion. Global warming will, inevitably, intensify the water cycle because higher temperatures will increase evaporation.

The combination of our inaction and of the extraordinarily difficult nature of the global warming challenge is best illustrated by the fact that three decades of large-scale international climate conferences have had no effect on the course of global CO2 emissions.

These meetings could never have stopped either the expansion of China’s coal extraction (it more than tripled between 1995 and 2019, to nearly as much as the rest of the world combined) or the just-noted worldwide preference for massive SUVs, and they could not have dissuaded millions of families from purchasing — as soon as their rising incomes allowed — new air conditioners that will work through the hot humid nights of monsoonal Asia and hence will not be energized by solar electricity anytime soon. The combined effect of these demands: between 1992 and 2019, the global emissions of CO2 rose by about 65 percent; those of CH4 by about 25 percent.

A close reading reveals that these magic prescriptions give no explanation for how the four material pillars of modern civilization (cement, steel, plastic, and ammonia) will be produced solely with renewable electricity, nor do they convincingly explain how flying, shipping, and trucking (to which we owe our modern economic globalization) could become 80 percent carbon-free by 2030; they merely assert that it could be so.

The evolution of societies is affected by the unpredictability of human behavior, by sudden shifts of long-lasting historical trajectories, by the rise and fall of nations, and by our ability to enact meaningful changes.

Affluent countries could reduce their average per capita energy use by large margins and still retain a comfortable quality of life. Widespread diffusion of simple technical fixes ranging from mandated triple windows to designs of more durable vehicles would have significant cumulative effects. The halving of food waste and changing the composition of global meat consumption would reduce carbon emissions without degrading the quality of food supply. Remarkably, these measures are absent, or rank low, in typical recitals of coming low-carbon “revolutions” that rely on as-yet-unavailable mass-scale electricity storage or on the promise of unrealistically massive carbon capture and its permanent storage underground.

Understanding the Future: Between Apocalypse and Singularity

“Apocalypse” comes (via Latin) from the ancient Greek ἀποκάλυψις. Literally, it means “uncovering.”

It is not uncommon to read how artificial intelligence and deep learning systems will carry us all the way to the “Singularity.”

Apocalypse and singularity offer two absolutes: our future will have to lie somewhere within that all-encompassing range.

For generations, businesses and governments were the most common practitioners and consumers of forecasting, then academics joined the game in large numbers from the 1950s, and now anybody can be a forecaster.

Quantitative forecasts fall into three broad categories.

  • The smallest includes forecasts that deal with processes whose workings are well known and whose dynamics are inherently restricted to a relatively confined set of outcomes.
  • The second, and a much larger category, includes forecasts pointing in the right direction but with substantial uncertainties regarding the specific outcome.
  • And the third category is that of quantitative fables: such forecasting exercises may teem with numbers, but the numbers are outcomes of layered (and often questionable) assumptions, and the processes traced by such computerized fairy tales will have very different real-world endings.

Only the forecasts (projections, computer models) in the first category provide solid insights and good guidance, especially when looking only a decade or so ahead. More complex models combining the interactions of economic, social, technical, and environmental factors require more assumptions and open the way for greater errors.

Catastrophists have always had a hard time imagining that human ingenuity can meet future food, energy, and material needs — but during the past three generations we have met those needs, despite the tripling of the global population since 1950.

And techno-optimists, who promise endless near-miraculous solutions, must reckon with a similarly poor record.

Little has changed half a century later: frightening prophecies and utterly unrealistic promises abound.

The inertia of large, complex systems is due to their basic energetic and material demands — as well as the scale of their operations. Demands for energy and materials are constantly affected by the quest for higher efficiencies and for optimized production processes, but efficiency improvements and relative dematerialization have their physical limits, and advantages brought by new alternatives will have offsetting costs.

And in a civilization where production of essential commodities now serves nearly 8 billion people, any departure from established practices also runs repeatedly into the constraints of scale.

Fundamental material requirements are now measured in billions and hundreds of millions of tons per year. This makes it impossible either to substitute such masses for entirely different commodities — what would take the place of more than 4 billion tons of cement or nearly 2 billion tons of steel? — or to make a rapid (years rather than decades) transition to entirely new ways of producing these essential inputs.

These realities help to explain why the fundamentals of our lives will not change drastically in the coming 20–30 years, despite the near-constant flood of claims about superior innovations ranging from solar cells to lithium-ion batteries, from the 3-D printing of everything (from microparts to entire houses) to bacteria able to synthesize gasoline. Steel, cement, ammonia, and plastics will endure as the four material pillars of civilization; a major share of the world’s transportation will still be energized by refined liquid fuels (automotive gasoline and diesel, aviation kerosene, and diesel and fuel oil for shipping); grain fields will be cultivated by tractors pulling plows, harrows, seeders, and fertilizer applicators, and harvested by combines spilling the grains into trucks.

In some critical instances, our successes and our abilities to avoid the worst outcomes have been due to being prescient, vigilant, and determined to find effective fixes. In other cases, we have been undoubtedly lucky. Again, there are no clear indications that our ability to prevent failures has been uniformly increasing.

No real progress can be achieved until at least the top five emitting countries, now responsible for 80 percent of all emissions, agree to clear and binding commitments.

Any effective commitments will be expensive; they will have to last for at least two generations to bring the desired outcome (much reduced, if not totally eliminated, greenhouse gas emissions); and even drastic reductions going well beyond anything that could realistically be envisaged will not show any convincing benefits for decades.

A commonly used climate-economy model indicates the break-even year (when the optimal policy would begin to produce net economic benefit) for mitigation efforts launched in the early 2020s would be only around 2080.

Being agnostic about the distant future means being honest: we have to admit the limits of our understanding, approach all planetary challenges with humility, and recognize that advances, setbacks, and failures will all continue to be a part of our evolution and that there can be no assurance of (however defined) ultimate success, no arrival at any singularity — but, as long as we use our accumulated understanding with determination and perseverance, there will also not be an early end of days.

I am not a pessimist or an optimist; I am a scientist.

A realistic grasp of our past, present, and uncertain future is the best foundation for approaching the unknowable expanse of time before us. The future, as ever, is not predetermined. Its outcome depends on our actions.
