
Chris Miller: Chip War; The Fight for the World’s Most Critical Technology


The United States still has a stranglehold on the silicon chips that gave Silicon Valley its name, though its position has weakened dangerously. China now spends more money each year importing chips than it spends on oil. China is devoting its best minds and billions of dollars to developing its own semiconductor technology in a bid to free itself from America’s chip choke.

We rarely think about chips, yet they’ve created the modern world. The fate of nations has turned on their ability to harness computing power.

Around a quarter of the chip industry’s revenue comes from phones.

Fabricating and miniaturizing semiconductors has been the greatest engineering challenge of our time. Today, no firm fabricates chips with more precision than the Taiwan Semiconductor Manufacturing Company, better known as TSMC.

Last year, the chip industry produced more transistors than the combined quantity of all goods produced by all other companies, in all other industries, in all human history. Nothing else comes close.

Today’s semiconductor supply chain requires components from many cities and countries, but almost every chip made still has a Silicon Valley connection or is produced with tools designed and built in California.

Other countries have found it impossible to keep up on their own but have succeeded when they’ve deeply integrated themselves into Silicon Valley’s supply chains. Europe has isolated islands of semiconductor expertise, notably in producing the machine tools needed to make chips and in designing chip architectures.

A typical chip might be designed with blueprints from the Japanese-owned, UK-based company called Arm, by a team of engineers in California and Israel, using design software from the United States. When a design is complete, it’s sent to a facility in Taiwan, which buys ultra-pure silicon wafers and specialized gases from Japan. The design is carved into silicon using some of the world’s most precise machinery, which can etch, deposit, and measure layers of materials a few atoms thick. These tools are produced primarily by five companies, one Dutch, one Japanese, and three Californian, without which advanced chips are basically impossible to make. Then the chip is packaged and tested, often in Southeast Asia, before being sent to China for assembly into a phone or computer.

Chips from Taiwan provide 37 percent of the world’s new computing power each year. Two Korean companies produce 44 percent of the world’s memory chips. The Dutch company ASML builds 100 percent of the world’s extreme ultraviolet lithography machines, without which cutting-edge chips are simply impossible to make. OPEC’s 40 percent share of world oil production looks unimpressive by comparison.

The interconnections between the chip industries in the U.S., China, and Taiwan are dizzyingly complex. There’s no better illustration of this than the individual who founded TSMC.

Morris Chang was born in mainland China, grew up in World War II–era Hong Kong, was educated at Harvard, MIT, and Stanford, and helped build America’s early chip industry while working for Texas Instruments in Dallas.

Cold War Chips

From Steel to Silicon

Japanese soldiers described World War II as a “typhoon of steel.” It certainly felt that way to Akio Morita. Morita only barely avoided the front lines by getting assigned to a Japanese navy engineering lab.

Across the East China Sea, Morris Chang’s childhood was punctuated by the sound of gunfire and air-raid sirens warning of imminent attack.

Budapest was on the opposite side of the world, but Andy Grove lived through the same typhoon of steel that swept across Asia.

World War II’s outcome was determined by industrial output, but it was clear already that new technologies were transforming military power. The two atomic bombs that destroyed Hiroshima and Nagasaki brought forth much speculation that a nascent Atomic Age might replace an era defined by coal and steel.

Morris Chang and Andy Grove were schoolboys in 1945, too young to have thought seriously about technology or politics. Akio Morita, however, was in his early twenties and had spent the final months of the war developing heat-seeking missiles.

The idea of using devices to compute wasn’t new. During the Great Depression, America’s Works Progress Administration, looking to employ jobless office workers, set up the Mathematical Tables Project. The demand for calculations kept growing.

More accuracy required more calculations. Engineers eventually began replacing mechanical gears in early computers with electrical charges. Early electric computers used the vacuum tube, a lightbulb-like device with a metal filament enclosed in glass. A tube switched on was coded as a 1; a tube switched off was a 0.
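
In modern terms, the encoding idea is simple: any number can be stored as a row of on/off switches, whether those switches are vacuum tubes or transistors. A minimal sketch (illustrative only):

```python
# A row of eight on/off switches (tubes or transistors) encodes a number in binary.
value = 42
bits = format(value, "08b")                          # '00101010'
tubes = ["on" if b == "1" else "off" for b in bits]  # one 'tube' per bit
print(value, "->", bits, tubes)
```

Each additional switch doubles the range of numbers a machine can represent, which is why engineers wanted ever more of them.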

The Switch

William Shockley had long assumed that if a better “switch” was to be found, it would be with the help of a class of materials called semiconductors.

Semiconductors, Shockley’s area of specialization, are a unique class of materials. Most materials either let electric current flow freely (like copper wires) or block current (like glass). Semiconductors are different. On their own, semiconductor materials like silicon and germanium are like glass, conducting hardly any electricity at all. But when certain materials are added and an electric field is applied, current can begin to flow.

“Doping” semiconductor materials with other elements presented an opportunity for new types of devices that could create and control electric currents.

In 1945, Shockley first theorized what he called a “solid state valve,” sketching in his notebook a piece of silicon attached to a ninety-volt battery.

Walter Brattain and John Bardeen, two of Shockley’s colleagues at Bell Labs, were inspired by his theorizing. They built a device that applied two gold filaments, each attached by wires to a power source and to a piece of metal, to a block of germanium, with the filaments touching the germanium less than a millimeter apart. On the afternoon of December 16, 1947, at Bell Labs’ headquarters, Bardeen and Brattain switched on the power and were able to control the current surging across the germanium. Shockley’s theories about semiconductor materials had been proven correct.

Noyce, Kilby, and the Integrated Circuit

The transistor could only replace vacuum tubes if it could be simplified and sold at scale.

In 1955, Shockley established Shockley Semiconductor, planning to build the world’s best transistors. Transistors soon began to be used in place of vacuum tubes in computers, but the wiring between thousands of transistors created a jungle of complexity.

Jack Kilby was an engineer at Texas Instruments, and one of the first people outside Bell Labs to use a transistor, after his first employer, Milwaukee-based Centralab, licensed the technology from AT&T. Kilby called his invention an “integrated circuit,” but it became known colloquially as a “chip,” because each integrated circuit was made from a piece of silicon “chipped” off a circular silicon wafer.

The eight defectors from Shockley’s lab are widely credited with founding Silicon Valley. One of the eight, Eugene Kleiner, would go on to found Kleiner Perkins, one of the world’s most powerful venture capital firms. Gordon Moore, who went on to run Fairchild’s R & D process, would later coin the concept of Moore’s Law to describe the exponential growth in computing power. Most important was Bob Noyce, the leader of the “traitorous eight,” who had a charismatic, visionary enthusiasm for microelectronics and an intuitive sense of which technical advances were needed to make transistors tiny, cheap, and reliable.

Where Kilby, unbeknownst to Noyce, had produced a mesa transistor on a germanium base and then connected it with wires, Noyce used his Fairchild colleague Jean Hoerni’s planar process to build multiple transistors on the same chip.

Like Kilby, Noyce had produced an integrated circuit: multiple electric components on a single piece of semiconductor material. However, Noyce’s version had no freestanding wires at all.

It seemed far easier to miniaturize Fairchild’s “planar” design than standard mesa transistors.

Liftoff

The first big order for Noyce’s chips came from NASA, which in the 1960s had a vast budget to send astronauts to the moon.

By November 1962, Charles Stark Draper, the famed engineer who ran the MIT lab, had decided to bet on Fairchild chips for the Apollo program, calculating that a computer using Noyce’s integrated circuits would be one-third smaller and lighter than a computer based on discrete transistors. It would use less electricity, too.

Chip sales to the Apollo program transformed Fairchild from a small startup into a firm with one thousand employees.

As Noyce ramped up production for NASA, he slashed prices for other customers. An integrated circuit that sold for $120 in December 1961 was discounted to $15 by the following October.

At TI headquarters in Dallas, Kilby and TI president Pat Haggerty were looking for a big customer for their own integrated circuits. In fall 1962, the Air Force began looking for a new computer to guide its Minuteman II missile. Winning the Minuteman II contract transformed TI’s chip business. By 1965, 20 percent of all integrated circuits sold that year went to the Minuteman program.

Mortars and Mass Production

Jay Lathrop pulled into Texas Instruments’ parking lot for his first day of work on September 1, 1958. Like engineers at Fairchild, he was struggling with mesa-shaped transistors, which were proving difficult to miniaturize.

While looking through a microscope at one of their transistors, Lathrop and his assistant, chemist James Nall, had an idea: a microscope lens could take something small and make it look bigger. Turned upside down, it could do the opposite, shrinking a large pattern onto light-sensitive material.

Lathrop called the process photolithography — printing with light. He produced transistors much smaller than had previously been possible, measuring only a tenth of an inch in diameter, with features as small as 0.0005 inches in height. Photolithography made it possible to imagine mass-producing tiny transistors.

Haggerty and Kilby realized that light rays and photoresists could solve the mass-production problem. Implementing Lathrop’s lithography process at Texas Instruments required new materials and new processes.

Morris Chang arrived at TI in 1958, the same year as Jay Lathrop, and was put in charge of a production line of transistors destined for IBM computers, a type of transistor so unreliable that TI’s yield was close to zero, he recalled.

Noyce acted swiftly to hire Lathrop’s lab partner, chemist James Nall, to develop photolithography at Fairchild. “Unless we could make it work,” Noyce reasoned, “we did not have a company.”

It was up to production engineers like Andy Grove to improve Fairchild’s manufacturing process.

The spread of semiconductors was enabled as much by clever manufacturing techniques as by academic physics. It was engineering and intuition, as much as scientific theorizing, that turned a Bell Labs patent into a world-changing industry.

It was the “traitorous eight” young engineers who abandoned Shockley’s company, as well as a similar group at Texas Instruments, who turned his transistors into a useful product — chips — and sold them to the U.S. military while learning how to mass-produce them.

“I… WANT… TO… GET… RICH”

“Selling R & D to the government was like taking your venture capital and putting it into a savings account,” Noyce declared. “Venturing is venturing; you want to take the risk.”

It was Fairchild’s R & D team that, under Gordon Moore’s direction, not only devised new technology but opened new civilian markets as well.

In 1965, Moore was asked by Electronics magazine to write a short article on the future of integrated circuits. He predicted that every year for at least the next decade, Fairchild would double the number of components that could fit on a silicon chip. This forecast of exponential growth in computing power soon came to be known as Moore’s Law. It was the greatest technological prediction of the century.
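
The arithmetic behind the forecast is worth spelling out: doubling every year for a decade is roughly a thousandfold increase. A minimal sketch of that math (the starting count is a round illustrative number, not Moore’s exact 1965 figure):

```python
# Moore's 1965 forecast: components per chip double every year.
def components(year, base_year=1965, base_count=64):
    """Components per chip if the count doubles annually from base_year."""
    return base_count * 2 ** (year - base_year)

for year in (1965, 1970, 1975):
    print(year, components(year))  # 64, 2048, 65536: ~1,000x in a decade
```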

At Fairchild, Noyce and Moore were already dreaming of personal computers and mobile phones.

In the mid-1960s, Fairchild chips that previously sold for $20 were cut to $2.

Thanks to falling prices, Fairchild began winning major contracts in the private sector. Annual U.S. computer sales grew from 1,000 in 1957 to 18,700 a decade later. By the mid-1960s, almost all these computers relied on integrated circuits.

The Circuitry of the American World

Soviet Silicon Valley

Just like the Pentagon, the Kremlin realized that transistors and integrated circuits would transform manufacturing, computing, and military power. Beginning in the late 1950s, the USSR established new semiconductor facilities across the country and assigned its smartest scientists to build this new industry. For an ambitious young engineer like Yuri Osokin it was hard to imagine a more exciting assignment.

“Catching up and overtaking” the United States seemed like a real possibility. As with another sphere where the Soviets had caught up to the United States — nuclear weapons — the USSR had a secret weapon: a spy ring.

Joel Barr and Alfred Sarant were American engineers who, in the 1930s, were integrated into an espionage ring led by Julius Rosenberg, the infamous Cold War spy. In the late 1940s, as the FBI began unraveling the KGB’s spy networks in the U.S., Rosenberg was tried and sentenced to death by electrocution alongside his wife, Ethel. Before the FBI could catch them, Sarant and Barr fled the country, eventually reaching the Soviet Union.

“Copy It”

Around the same time that Nikita Khrushchev declared his support for building Zelenograd, a Soviet student named Boris Malin returned from a year studying in Pennsylvania with a small device in his luggage — a Texas Instruments SN-51, one of the first integrated circuits sold in the United States.

Alexander Shokin, the bureaucrat in charge of Soviet microelectronics, believed the SN-51 was a device the Soviet Union must acquire by any means necessary.

The launch of Sputnik in 1957, the first space flight of Yuri Gagarin in 1961, and the fabrication of Osokin’s integrated circuit in 1962 provided incontrovertible evidence that the Soviet Union was becoming a scientific superpower.

Spying could only get Shokin and his engineers so far. Simply stealing a chip didn’t explain how it was made. Every step of the process of making chips involved specialized knowledge that was rarely shared outside of a specific company. Soviet leaders never comprehended how the “copy it” strategy condemned them to backwardness.

The Transistor Salesman

When Japanese prime minister Hayato Ikeda met French president Charles de Gaulle amid the splendor of the Elysée Palace in November 1962, he brought a small gift for his host: a Sony transistor radio.

Integrated circuits didn’t only connect electronic components in innovative ways, they also knit together nations in a network, with the United States at its center.

Making Japan a transistor salesman was core to America’s Cold War strategy.

Makoto Kikuchi was a young physicist in the Japanese government’s Electrotechnical Laboratory in Tokyo, which employed some of the country’s most advanced scientists.

In 1953, Kikuchi met John Bardeen, who was visiting Japan. The same year Bardeen landed in Tokyo, Akio Morita took off from Haneda Airport for New York.

Morita’s physics degree proved useful in postwar Japan, too. In April 1946, with the country still in ruins, Morita partnered with a former colleague named Masaru Ibuka to build an electronics business, which they eventually named Sony, from the Latin sonus (sound) and the American nickname “sonny.”

He and Ibuka decided to bet the future of their company on selling these devices not only to Japanese customers, but to the world’s richest consumer market, America.

Sony had the benefit of cheaper wages in Japan, but its business model was ultimately about innovation, product design, and marketing. Morita’s “license it” strategy couldn’t have been more different from the “copy it” tactics of Soviet Minister Shokin.

Sony’s first major success was transistor radios.

Sony’s expertise wasn’t in designing chips but devising consumer products and customizing the electronics they needed. Calculators were another consumer device transformed by Japanese firms.

The semiconductor symbiosis that emerged between America and Japan involved a complex balancing act. Each country relied on the other for supplies and for customers.

“Transistor Girls”

It was mostly men who designed the earliest semiconductors, and mostly women who assembled them.

Fairchild was the first semiconductor firm to offshore assembly in Asia, but Texas Instruments, Motorola, and others quickly followed. Within a decade, almost all U.S. chipmakers had foreign assembly facilities.

The next stop for Charlie Sporck, who ran manufacturing at Fairchild, was Singapore, a majority ethnic Chinese city-state whose leader, Lee Kuan Yew, had “pretty much outlawed” unions, as one Fairchild veteran remembered.

Precision Strike

TI’s first major contract for integrated circuits had been for massive nuclear missiles like the Minuteman II, but the war in Vietnam required different types of weapons.

The problem with many guided munitions, the military concluded, was the vacuum tubes. The Sparrow III anti-aircraft missiles that U.S. fighters used in the skies over Vietnam relied on vacuum tubes that were hand-soldered. The humid climate of Southeast Asia, the force of takeoff and landings, and the rough-and-tumble of fighter combat caused regular failures.

Weldon Word, a thirty-four-year-old project engineer at TI, wanted to change this. As early as the mid-1960s, Word was already envisioning using microelectronics to transform the military’s kill chain. Many defense contractors were trying to sell the Pentagon expensive missiles, but Word told his team to build weapons priced like an inexpensive family sedan. Word thought TI’s expertise in semiconductor electronics could make the Air Force’s bombs more accurate.

Outside a small number of military theorists and electrical engineers, hardly anyone realized that Vietnam had been a successful testing ground for weapons that married microelectronics and explosives in ways that would revolutionize warfare and transform American military power.

Supply Chain Statecraft

Texas Instruments executive Mark Shepherd was poised to lead TI’s strategy of offshoring some of its production to Asia. He met with Taiwan’s powerful and savvy economy minister, K. T. Li, who eventually realized that Taiwan would benefit from integrating itself more deeply with the United States.

The 1960s had been a good decade for Taiwan’s economy but disastrous for its foreign policy. The island’s dictator, Chiang Kai-shek, still dreamed of reconquering the mainland, but the military balance had shifted decisively against him.

In July 1968, having smoothed over relations with the Taiwanese government, TI’s board of directors approved construction of the new facility in Taiwan. By August 1969, this plant was assembling its first devices. By 1980, it had shipped its billionth unit.

With the Singapore government’s support, TI and National Semiconductor built assembly facilities in the city-state. Many other chipmakers followed. By the end of the 1970s, American semiconductor firms employed tens of thousands of workers internationally, mostly in Korea, Taiwan, and Southeast Asia.

By the early 1980s, the electronics industry accounted for 7 percent of Singapore’s GNP and a quarter of its manufacturing jobs.

From South Korea to Taiwan, Singapore to the Philippines, a map of semiconductor assembly facilities looked much like a map of American military bases across Asia.

Intel’s Revolutionaries

The year 1968 seemed like a revolutionary moment. Yet it was the Palo Alto Times that scooped the world’s biggest newspapers by reporting on page 6 what, in hindsight, was the most revolutionary event of the year: “Founders Leave Fairchild; Form Own Electronics Firm.”

Noyce and Moore abandoned Fairchild as quickly as they’d left Shockley’s startup a decade earlier, and founded Intel, which stood for Integrated Electronics.

Two years after its founding, Intel launched its first product, a chip called a dynamic random access memory, or DRAM. Before the 1970s, computers generally “remembered” data using not silicon chips but a device called a magnetic core, a matrix of tiny metal rings strung together by a grid of wires.

In the 1960s, engineers like IBM’s Robert Dennard began envisioning integrated circuits that could “remember” more efficiently than little metal rings.

Memory chips don’t need to be specialized, so chips with the same design can be used in many different types of devices. This makes it possible to produce them in large volumes. By contrast, the other main type of chips — those tasked with “computing” rather than “remembering” — were specially designed for each device, because every computing problem was different.

Intel decided to focus on memory chips, where mass production would produce economies of scale.

Intel wasn’t the first company to think about producing a generalized logic chip.

In 1971, however, Intel launched a chip called the 4004 and described it as the world’s first microprocessor — “a micro-programmable computer on a chip,” as the company’s advertising campaign put it.

The person who best understood how mass-produced computing power would revolutionize society was a Caltech professor named Carver Mead. Though Gordon Moore had first graphed the exponential increase in transistor density in his famous 1965 article, Mead coined the term “Moore’s Law” to describe it.

The Pentagon’s Offset Strategy

No one benefitted more from Noyce and Moore’s revolution than a cornerstone of the old order — the Pentagon.

For a Silicon Valley entrepreneur like William Perry, serving as undersecretary of defense for research and engineering was, he said, the “best job in the world.”

Pentagon analysts like Andrew Marshall envisioned “rapid information gathering,” “sophisticated command and control,” and “terminal guidance” for missiles, imagining munitions that could strike targets with almost perfect accuracy.

Perry realized that Marshall’s vision of the future of war would soon be possible due to the miniaturization of computing power.

Led by Perry, the Pentagon poured money into new weapons systems that capitalized on America’s advantage in microelectronics.

The U.S. military lost the war in Vietnam, but the chip industry won the peace that followed, binding the rest of Asia, from Singapore to Taiwan to Japan, more closely to the U.S. via rapidly expanding investment links and supply chains.

Leadership Lost?

“That Competition Is Tough”

The 1980s were a hellish decade for the entire U.S. semiconductor sector. Silicon Valley thought it sat atop the world’s tech industry, but after two decades of rapid growth it now faced an existential crisis: cutthroat competition from Japan.

By the 1980s, consumer electronics had become a Japanese specialty, with Sony leading the way in launching new consumer goods, grabbing market share from American rivals.

Sony’s research director, the famed physicist Makoto Kikuchi, told an American journalist that Japan had fewer geniuses than America, a country with “outstanding elites.” But America also had “a long tail” of people “with less than normal intelligence,” Kikuchi argued, explaining why Japan was better at mass manufacturing.

In 1979, just months before Hewlett-Packard executive Richard Anderson gave his presentation about quality problems in American chips, Sony introduced the Walkman.

Charlie Sporck, the executive who’d been burned in effigy while managing a GE production line, found Japan’s productivity fascinating and frightening.

“At War with Japan”

“I don’t want to pretend I’m in a fair fight,” complained Jerry Sanders, CEO of Advanced Micro Devices. “I’m not.”

Sanders eventually landed a job in sales and marketing at Fairchild Semiconductor, working alongside Noyce, Moore, and Andy Grove before they left Fairchild to found Intel.

“The chip industry was an incredibly competitive industry,” remembered Charlie Sporck, the executive who’d led the offshoring of chip assembly throughout Asia. Sporck saw Silicon Valley’s internal battles as fair fights, but thought Japan’s DRAM firms benefitted from intellectual property theft, protected markets, government subsidies, and cheap capital.

Sneaking into rivals’ facilities was illegal but keeping tabs on competitors was normal practice in Silicon Valley. So, too, was accusing rivals of pilfering employees, ideas, and intellectual property. America’s chipmakers were constantly suing each other, after all.

Noyce and Moore had left Shockley Semiconductor to found Fairchild, then left Fairchild to found Intel, where they hired dozens of Fairchild employees, including Andy Grove.

Sporck and Sanders pointed out that Japanese firms benefitted from a protected domestic market, too.

Japan’s government subsidized its chipmakers, too. Unlike in the U.S., where antitrust law discouraged chip firms from collaborating, the Japanese government pushed companies to work together, launching a research consortium called the VLSI Program in 1976 with the government funding around half the budget.

Moreover, the U.S. government was itself deeply involved in supporting semiconductors, though Washington’s funding took the form of grants from DARPA.

Jerry Sanders saw Silicon Valley’s biggest disadvantage as its high cost of capital. The Japanese “pay 6 percent, maybe 7 percent, for capital. I pay 18 percent on a good day,” he complained.

Japan’s firms doubled down on DRAM production as Silicon Valley was pushed out. In 1984, Hitachi spent 80 billion yen on capital expenditure for its semiconductor business, compared to 1.5 billion a decade earlier. At Toshiba, spending grew from 3 billion to 75 billion; at NEC, from 3.5 billion to 110 billion.

Japan’s semiconductor surge seemed unstoppable, just as American chipmakers’ apocalyptic predictions had warned. Soon, it seemed, all of Silicon Valley would be left for dead.

“Shipping Junk”

The world’s leading lens makers were Germany’s Carl Zeiss and Japan’s Nikon, though the U.S. had a few specialized lens makers, too. One was Perkin Elmer, a small manufacturer in Norwalk, Connecticut. The company realized its precision optics could be used in semiconductor lithography and developed a chip scanner that could align a silicon wafer and a lithographic light source with almost perfect precision, which was crucial if the light was to hit the silicon exactly as intended.

Perkin Elmer’s scanner could create chips with features approaching one micron — a millionth of a meter — in width. It dominated the lithography market in the late 1970s, but by the 1980s it had been displaced by the “stepper” built by GCA, a company led by an Air Force officer turned geophysicist named Milt Greenberg.

As Japan’s chip industry rose, however, GCA began to lose its edge.

The semiconductor industry had always been ferociously cyclical, with the industry skyrocketing upward when demand was strong, and slumping back when it was not.

Just as the market slumped, GCA lost its position as the only company building steppers. Japan’s Nikon had initially been a partner of GCA, providing the precision lenses for its stepper. But Greenberg had decided to cut Nikon out. After Greenberg stopped buying Nikon lenses, the Japanese company decided to make its own stepper. It acquired a machine from GCA and reverse engineered it. Soon Nikon had more market share than GCA.

The Crude Oil of the 1980s

On a chilly spring evening in Palo Alto, Bob Noyce, Jerry Sanders, and Charlie Sporck met under a sloping, pagoda-style roof.

Noyce, Sanders, and Sporck had all started their careers at Fairchild: Noyce the technological visionary; Sanders the marketing showman; Sporck the manufacturing boss barking at his employees to build faster, cheaper, better. A decade later they’d become competitors as CEOs of three of America’s biggest chipmakers. But as Japan’s market share grew, they decided it was time to band together again.

In 1986, Japan had overtaken America in the number of chips produced. By the end of the 1980s, Japan was supplying 70 percent of the world’s lithography equipment.

The Defense Department recruited Jack Kilby, Bob Noyce, and other industry luminaries to prepare a report on how to revitalize America’s semiconductor industry.

The Pentagon’s task force summarized the ramifications in four bullet points, underlining the key conclusions:

- U.S. military forces depend heavily on technological superiority to win.
- Electronics is the technology that can be leveraged most highly.
- Semiconductors are the key to leadership in electronics.
- U.S. defense will soon depend on foreign sources for state-of-the-art technology in semiconductors.

Death Spiral

The question of support for semiconductors was decided by lobbying in Washington. One issue on which Silicon Valley and free market economists agreed was taxes. Bob Noyce testified to Congress in favor of cutting the capital gains tax from 49 percent to 28 percent and advocated loosening financial regulation to let pension funds invest in venture capital firms.

Next, Congress tightened intellectual property protections via the Semiconductor Chip Protection Act.

In 1986, with the threat of tariffs looming, Washington and Tokyo cut a deal. Japan’s government agreed to put quotas on its exports of DRAM chips, limiting the number that were sold to the U.S.

Congress tried one final way to help. In 1987, a group of leading chipmakers and the Defense Department created a consortium called Sematech, funded half by the industry and half by the Pentagon.

Bob Noyce volunteered to lead Sematech. He was already de facto retired from Intel, having turned over the reins to Gordon Moore and Andy Grove a decade earlier.

Testifying to Congress in 1989, Noyce declared that “Sematech may likely be judged, in large part, as to how successful it is in saving America’s optical stepper makers.” This was exactly what employees at GCA, the ailing Massachusetts manufacturer of lithography tools, were hoping to hear.

Sematech bet hugely on GCA, giving the company contracts to produce deep-ultraviolet lithography equipment that was at the cutting edge of the industry’s capabilities. But GCA still didn’t have a viable business model.

Customers had already gotten comfortable with equipment from competitors like Nikon, Canon, and ASML, and didn’t want to take a risk on new and unfamiliar tools from a company whose future was uncertain.

In 1990, Noyce, GCA’s greatest supporter at Sematech, died of a heart attack after his morning swim. By 1993, GCA’s owner, a company called General Signal, announced it would sell GCA or close it. Sematech, which had already provided millions in funding for GCA, decided to pull the plug. The company shut its doors and sold off its equipment, joining a long list of firms vanquished by Japanese competition.

The Japan That Can Say No

By the 1980s, Morita perceived deep problems in America’s economy and society.

“The United States has been busy creating lawyers,” Morita lectured, while Japan has “been busier creating engineers.” Moreover, American executives were too focused on “this year’s profit,” in contrast to Japanese management, which was “long range.” American labor relations were hierarchical and “old style,” without enough training or motivation for shop-floor employees.

In 1989, Morita set out his views in a collection of essays titled The Japan That Can Say No: Why Japan Will Be First Among Equals. The book was coauthored with Shintaro Ishihara, a controversial far-right politician.

The same year that Ishihara and Morita published The Japan That Can Say No, former defense secretary Harold Brown published an article that drew much the same conclusions. “High Tech Is Foreign Policy,” Brown titled the article. If America’s high-tech position was deteriorating, its foreign policy position was at risk, too.

“Japan leads in memory chips, which are at the heart of consumer electronics,” Brown admitted. “The Japanese are rapidly catching up in logic chips and application-specific integrated circuits.”

Would Japan, a first-class technological power, be satisfied with second-class military status?

America Resurgent

The Potato Chip King

Micron made “the best damn widgets in the whole world,” Jack Simplot used to say. The Idaho billionaire didn’t know much about the physics of how his company’s main product, DRAM chips, actually worked.

Silicon Valley’s resurgence was driven by scrappy startups and by wrenching corporate transformations. The U.S. overtook Japan’s DRAM behemoths not by replicating them but by innovating around them.

America’s unrivaled power during the 1990s and 2000s stemmed from its resurgent dominance in computer chips, the core technology of the era.

Jack Simplot made his first fortune in potatoes, pioneering the use of machines to sort potatoes, dehydrate them, and freeze them for use in french fries. This wasn’t Silicon Valley–style innovation, but it earned him a massive contract to sell spuds to McDonald’s.

Micron, the DRAM firm that Simplot backed, at first seemed guaranteed to fail. Micron decided to challenge the Japanese DRAM makers at their own game, but to do so by aggressively cutting costs.

Micron had a knack for cost cuts that none of its Silicon Valley or Japanese competitors could match. Ward Parkinson — “the engineering brains behind the organization,” one early employee remembered — had a talent for designing DRAM chips as efficiently as possible.

“It was by far the worst product on the market,” Ward joked, “but by far the least expensive to produce.”

Micron learned to match Japanese rivals like Toshiba and Fujitsu on the storage capacity of each generation of DRAM chip and to outcompete them on cost.

Disrupting Intel

“Look, Clayton, I’m a busy man and I don’t have time to read drivel from academics,” Andy Grove told Harvard Business School’s most famous professor, Clayton Christensen.

Grove described his management philosophy in his bestselling book Only the Paranoid Survive: “Fear of competition, fear of bankruptcy, fear of being wrong and fear of losing can all be powerful motivators.”

Grove realized Intel’s business model of selling DRAM chips was finished.

In 1980, Intel had won a small contract with IBM, America’s computer giant, to build chips for a new product called a personal computer.

The microprocessor market seemed almost certain to grow. But the prospect that microprocessor sales could overtake DRAMs, which constituted the bulk of chip sales, seemed mind-boggling, one of Grove’s deputies recalled. Grove saw no other choice.

“Disruptive innovation” sounded attractive in Clayton Christensen’s theory, but it was gut-wrenching in practice, a time of “gnashing of teeth,” Grove remembered, and “bickering and arguments.”

In Grove’s restructuring plan, step one was to lay off over 25 percent of Intel’s workforce.

Intel’s new manufacturing method was called “copy exactly.” Once Intel determined that a specific set of production processes worked best, they were replicated in all other Intel facilities.

Intel’s yields rose substantially, while its manufacturing equipment was used more efficiently, driving down costs.

Grove and Intel got lucky, too. Some of the structural factors that had favored Japanese producers in the early 1980s began to shift.

“My Enemy’s Enemy”: The Rise of Korea

Lee Byung-Chul could make a profit selling almost anything. He would turn Samsung into a semiconductor superpower thanks to two influential allies: America’s chip industry and the South Korean state.

South Korea was used to navigating between bigger rivals.

Lee expanded his business empire despite the war, navigating South Korea’s complicated politics with finesse.

Lee had long wanted to break into the semiconductor industry, watching companies like Toshiba and Fujitsu take DRAM market share in the late 1970s and early 1980s.

In February 1983, after a nervous, sleepless night, Lee picked up the phone, called the head of Samsung’s electronics division, and proclaimed: “Samsung will make semiconductors.” He bet the company’s future on semiconductors, and was ready to spend at least $100 million, he declared.

The best way to deal with international competition in memory chips from Japan, Silicon Valley wagered, was to find an even cheaper source in Korea, while focusing America’s R & D efforts on higher-value products rather than commoditized DRAMs.

The U.S. didn’t simply provide a market for South Korean DRAM chips; it provided technology, too. With Silicon Valley’s DRAM producers mostly near collapse, there was little hesitation about transferring top-notch technology to Korea. Lee proposed to license a design for a 64K DRAM from Micron, the cash-strapped memory chip startup, befriending its founder Ward Parkinson in the process.

“This Is the Future”

The rebirth of America’s chip industry after Japan’s DRAM onslaught was only possible thanks to Andy Grove’s paranoia, Jerry Sanders’s bare-knuckle brawling, and Jack Simplot’s cowboy competitiveness.

Carver Mead, the goateed physicist who was a friend of Gordon Moore, was puzzling over how chip design could keep pace with ever-rising transistor counts when he was introduced to Lynn Conway, a computer architect at Xerox’s Palo Alto Research Center.

Conway realized that the digital revolution Mead prophesied needed algorithmic rigor. After she and Mead were introduced by a mutual colleague, they began discussing how to standardize chip design. Why couldn’t you program a machine to design circuits, they wondered. “Once you can write a program to do something,” Mead declared, “you don’t need anybody’s tool kit, you write your own.”

No one was more interested in what soon became known as the “Mead-Conway Revolution” than the Pentagon. DARPA financed a program to let university researchers send chip designs to be produced at cutting-edge fabs.

Today, every chip company uses tools from each of the three chip design software companies that were founded and built by alumni of these DARPA- and SRC-funded programs.

Irwin Jacobs, Andrew Viterbi, and several colleagues set up a wireless communications business called Qualcomm — quality communications — betting that ever-more-powerful microprocessors would let them stuff more signals into existing spectrum bandwidth.

Even by the early 1990s, using chips to send large quantities of data through the air seemed like a niche business.

The KGB’s Directorate T

Vladimir Vetrov was a KGB spy, but his life felt more like a Chekhov story than a James Bond film.

In 1963, the same year the USSR established Zelenograd, the city of scientists working on microelectronics, the KGB established a new division, Directorate T, which stood for teknologia. In the early 1980s, the KGB reportedly employed around one thousand people to steal foreign technology. The KGB began stealing semiconductor manufacturing equipment, too. It took time for the West to realize the scale of the theft.

The operations of Directorate T might have remained a state secret had Vetrov not decided to add intrigue to his otherwise dull existence upon moving back to Moscow. Soon Vetrov was passing dozens of documents about Directorate T to his French handler in Moscow.

The USSR’s “copy it” strategy had actually benefitted the United States, guaranteeing the Soviets faced a continued technological lag.

“Weapons of Mass Destruction”: The Impact of the Offset

Marshal Nikolai Ogarkov served as chief of the general staff of the Soviet military from 1977 to 1984.

By traditional metrics like numbers of tanks or troops, the Soviet Union had a clear advantage in the early 1980s. Ogarkov saw things differently: quality was overtaking quantity. He was fixated on the threat posed by America’s precision weapons.

The U.S. had put a guidance computer powered by Texas Instruments’ chips onboard the Minuteman II missile in the early 1960s, but the Soviets’ first missile guidance computer using integrated circuits wasn’t tested until 1971.

The Kremlin wanted to revitalize its microelectronics industry but didn’t know how to do so. In 1987, Soviet leader Mikhail Gorbachev visited Zelenograd and called for “more discipline” in the city’s work.

However, discipline alone couldn’t solve the Soviets’ basic problems. One issue was political meddling. A second issue was overreliance on military customers. The Soviet Union barely had a consumer market, so it produced only a fraction of the chips built in the West. A final challenge was that the Soviets lacked an international supply chain.

The Soviet Union’s effort to reinvigorate its chipmakers failed completely.

War Hero

Early in the morning on January 17, 1991, the first wave of American F-117 stealth bombers took off from their airbases in Saudi Arabia, their black airframes quickly disappearing in the dark desert sky. Their target: Baghdad.

U.S. airpower proved decisive in the Persian Gulf War, decimating Iraqi forces while minimizing U.S. casualties. Weldon Word received an award for inventing the Paveway, improving its electronics, and driving down its cost so that each one was never more expensive than a jalopy, just as he had originally promised.

The reverberations from the explosions of Paveway bombs and Tomahawk missiles were felt as powerfully in Moscow as in Baghdad.

“The Cold War Is Over and You Have Won”

Japan’s seeming dominance had been built on an unsustainable foundation of government-backed overinvestment.

Sony, which was unique among Japanese semiconductor firms in never betting heavily on DRAMs, succeeded in developing innovative new products, like specialized chips for image sensors.

Most of Japan’s big DRAM producers, however, failed to take advantage of their influence in the 1980s to drive innovation. None of the Japanese chip giants could replicate Intel’s pivot to microprocessors or its mastery of the PC ecosystem.

By the time Japan’s stock market crashed, Japan’s semiconductor dominance was already eroding. In 1993, the U.S. retook first place in semiconductor shipments.

Gorbachev promised to end the Cold War by withdrawing Soviet troops from Eastern Europe, and he wanted access to American technologies in exchange. Meeting with America’s tech executives, he encouraged them to invest in the USSR.

The Cold War was over; Silicon Valley had won.

Integrated Circuits, Integrated World?

“We Want a Semiconductor Industry in Taiwan”

In 1985, Taiwan’s powerful minister K. T. Li called Morris Chang into his office in Taipei.

Taiwan had deliberately inserted itself into semiconductor supply chains since the 1960s, as a strategy to provide jobs, acquire advanced technology, and to strengthen its security relationship with the United States. China’s entry into electronics assembly threatened to put Taiwan out of business. It was impossible to compete with China on price. Taiwan had to produce advanced technology itself.

When the government of Taiwan called, offering to put him in charge of the island’s chip industry and providing a blank check to fund his plans, Chang found the offer intriguing.

Chang convinced Philips, the Dutch semiconductor company, to put up $58 million, transfer its production technology, and license intellectual property in exchange for a 27.5 percent stake in TSMC. The rest of the capital was raised from wealthy Taiwanese who were “asked” by the government to invest.

The founding of TSMC gave all chip designers a reliable partner. Chang promised never to design chips, only to build them. TSMC didn’t compete with its customers; it succeeded if they did.

The economics of chip manufacturing required relentless consolidation. Whichever company produced the most chips had a built-in advantage, improving its yield and spreading capital investment costs over more customers. TSMC’s business boomed during the 1990s and its manufacturing processes improved relentlessly.
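
A toy model shows why volume mattered so much; all numbers here are hypothetical, chosen only to illustrate how fixed costs and yield interact:

```python
# Toy model of foundry economics (all figures hypothetical).
def cost_per_good_chip(capex, wafers, chips_per_wafer, yield_rate, wafer_cost):
    """Amortized cost of one sellable chip."""
    fixed_per_wafer = capex / wafers           # capital cost spread over output
    total_per_wafer = fixed_per_wafer + wafer_cost
    good_chips = chips_per_wafer * yield_rate  # only working chips earn revenue
    return total_per_wafer / good_chips

# High-volume fab with mature yield vs. low-volume fab with poor yield:
print(cost_per_good_chip(2e9, wafers=1_000_000, chips_per_wafer=500,
                         yield_rate=0.9, wafer_cost=3_000))  # ~$11 per chip
print(cost_per_good_chip(2e9, wafers=200_000, chips_per_wafer=500,
                         yield_rate=0.6, wafer_cost=3_000))  # ~$43 per chip
```

The leader’s cost advantage attracts more orders, which raises volume and yield further — the consolidation loop the paragraph above describes.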

“All People Must Make Semiconductors”

In 1987, the same year that Morris Chang founded TSMC, several hundred miles to the southwest, a then-unknown engineer named Ren Zhengfei established an electronics trading company called Huawei.

In Shenzhen, Ren Zhengfei bought cheap telecommunications equipment in Hong Kong and sold it for a higher price across China.

By 1960, China had established its first semiconductor research institute, in Beijing.

The year after China produced its first integrated circuit, Mao plunged the country into the Cultural Revolution, arguing that expertise was a source of privilege that undermined socialist equality.

The idea of building advanced industries with poorly educated employees was absurd. Even more so was Mao’s effort to keep out foreign technology and ideas.

One tiny speck of Chinese territory escaped the horrors of the Cultural Revolution. Thanks to a quirk of colonialism, Hong Kong was still governed temporarily by the British.

On September 2, 1975, John Bardeen landed in Beijing, two decades after he’d won his first Nobel Prize with Shockley and Brattain for inventing the transistor. In 1972, he had become the only person to win a second Nobel in physics, this time for work on superconductivity.

Bardeen and his colleagues left China impressed with the country’s scientists, but China’s semiconductor manufacturing ambitions seemed hopeless.

Mao Zedong died the year after Bardeen’s visit to China.

Soon China’s government declared that “science and technology” were “the crux of the Four Modernizations.”

“Sharing God’s Love with the Chinese”

Richard Chang just wanted to “share God’s love with the Chinese.” Chang had a missionary’s zeal to bring advanced chipmaking to China.

The geography of chip fabrication shifted drastically over the 1990s and 2000s. U.S. fabs made 37 percent of the world’s chips in 1990, but this number fell to 19 percent by 2000 and 13 percent by 2010. Japan’s market share in chip fabrication collapsed, too. South Korea, Singapore, and Taiwan each poured funds into their chip industries and rapidly increased output.

If anyone could build a chip industry in China, it was Richard Chang. He wouldn’t rely on nepotism or on foreign help. All the knowledge needed for a world-class fab was already in his head. While working at Texas Instruments, he’d opened new facilities for the company around the world. Why couldn’t he do the same in Shanghai? He founded the Semiconductor Manufacturing International Corporation (SMIC) in 2000, raising over $1.5 billion from international investors like Goldman Sachs, Motorola, and Toshiba.

Like China’s other chip startups, SMIC benefitted from vast government support, like a five-year corporate tax holiday and reduced sales tax on chips sold in China.

Now TSMC had competition from multiple foundries in different countries in East Asia. Singapore’s Chartered Semiconductor, Taiwan’s UMC and Vanguard Semiconductor, and South Korea’s Samsung — which entered the foundry business in 2005 — were also competing with TSMC to produce chips designed elsewhere.

Lithography Wars

John Carruthers led Intel’s R & D efforts and was used to making big bets. Along with everyone else in the industry, he knew existing lithography methods would soon be unable to produce the ever-smaller circuits that next-generation semiconductors required. He wanted to target “extreme ultraviolet” (EUV) light, with a wavelength of 13.5 nanometers.

Intel would eventually spend billions of dollars on R & D and billions more learning how to use EUV to carve chips.

Three existential contests hung over the lithography industry: one of engineering, one of business, and one of geopolitics.

Producing chips at this scale, most researchers believed, required more precise lithography tools to shoot light at photoresist chemicals and carve shapes on silicon.

The “war” to find the next, best type of beam to shoot at silicon wafers was only one of three contests underway over the future of lithography. The second battle was commercial, over which company would build the next generation of lithography tools.

The only real competitor to Canon and Nikon was ASML, the small but growing Dutch lithography company. In 1984, Philips, the Dutch electronics firm, had spun out its internal lithography division, creating ASML.

Whereas Japanese competitors tried to build everything in-house, ASML could buy the best components on the market.

Both ASML and TSMC started as small firms on the periphery of the chip industry, but they grew together, forming a partnership without which advances in computing today would have ground to a halt.

The partnership between ASML and TSMC pointed to the third “lithography war” of the 1990s. This was a political contest, though few people in industry or government preferred to think in those terms.

ASML was the only lithography firm left. The idea of giving a foreign company access to the most advanced research coming out of America’s national labs raised some questions in Washington.

Locked out of the research at the U.S. national labs, Nikon and Canon decided not to build their own EUV tools, leaving ASML as the world’s only producer. In 2001, meanwhile, ASML bought SVG, America’s last major lithography firm.

The manufacturing of EUV wasn’t globalized, it was monopolized. A single supply chain managed by a single company would control the future of lithography.

The Innovator’s Dilemma

Paul Otellini inherited a company that was enormously profitable. He saw his primary task as keeping profit margins as high as possible by milking Intel’s de facto monopoly on x86 chips, and he applied textbook management practices to defend it.

The x86 architecture dominated PCs not because it was the best, but because IBM’s first personal computer happened to use it. Like Microsoft, which provided the operating system for PCs, Intel controlled this crucial building block for the PC ecosystem.

By the mid-2000s, just as cloud computing was emerging, Intel had won a near monopoly over data center chips, competing only with AMD.

Some companies tried challenging x86’s position as the industry standard in PCs. In 1990, Apple and two partners established a joint venture called Arm, based in Cambridge, England.

Arm failed to win market share in PCs in the 1990s and 2000s, because Intel’s partnership with Microsoft’s Windows operating system was simply too strong to challenge. However, Arm’s simplified, energy-efficient architecture quickly became popular in small, portable devices that had to economize on battery use.

Intel didn’t realize until too late that it ought to compete in another seemingly niche market for a portable computing device: the mobile phone. Intel turned down the iPhone contract, so Steve Jobs turned to Arm’s architecture, which unlike x86 was optimized for mobile devices that had to economize on power consumption. The early iPhone processors were produced by Samsung, which had followed TSMC into the foundry business.

A fixation on hitting short-term margin targets began to replace long-term technology leadership. The shift in power from engineers to managers accelerated this process. Otellini, Intel’s CEO from 2005 to 2013, admitted he turned down the contract to build iPhone chips because he worried about the financial implications.

Running Faster?

Grove wasn’t convinced. “Abandoning today’s ‘commodity’ manufacturing can lock you out of tomorrow’s emerging industry,” he declared, pointing to the electric battery industry.

By the early 2010s, the most advanced microprocessors had a billion transistors on each chip. The software capable of laying out these transistors was provided by three American firms, Cadence, Synopsys, and Mentor, which controlled around three-quarters of the market.

A new consensus formed in Washington around the idea that the best policy was to “run faster” than America’s rivals.

The U.S. went so far as to give China’s SMIC special status as a “validated end-user,” certifying that the company didn’t sell to the Chinese military and was thus exempt from certain export controls.

“Run faster” was an elegant strategy with only a single problem: by some key metrics, the U.S. wasn’t running faster, it was losing ground.

Intel was running more slowly, though it still benefitted from its more advanced starting point.

Offshoring Innovation?

“Real Men Have Fabs”

Jerry Sanders, the Rolex-clad, Rolls-Royce-driving brawler who founded AMD, liked to compare owning a semiconductor fab with putting a pet shark in your swimming pool. Sharks cost a lot to feed, took time and energy to maintain, and could end up killing you.

Having brawled with the Japanese for DRAM market share in the 1980s and with Intel for the PC market in the 1990s, Sanders was committed to his fabs. He thought they were crucial to AMD’s success.

By the 2000s, it was common to split the semiconductor industry into three categories. “Logic” refers to the processors that run smartphones, computers, and servers. “Memory” refers to DRAM, which provides the short-term memory computers need to operate, and flash, also called NAND, which remembers data over time. The third category of chips is more diffuse, including analog chips like sensors that convert visual or audio signals into digital data, radio frequency chips that communicate with cell phone networks, and semiconductors that manage how devices use electricity.

This third category has not been primarily dependent on Moore’s Law to drive performance improvements.

The economics of this segment are different from those of logic and memory chips, which must relentlessly shrink transistors to remain on the cutting edge.

The memory market, by contrast, has been dominated by a relentless push toward offshoring production to a handful of facilities, mostly in East Asia.

In the late 1990s, several of Japan’s struggling DRAM producers were consolidated into a single company, called Elpida, which sought to compete with Idaho’s Micron and with Korea’s Samsung and SK Hynix.

Elpida struggled to survive and in 2013 was bought by Micron.

The Fabless Revolution

Since the late 1980s, there’s been explosive growth in the number of fabless chip firms, which design semiconductors in-house but outsource their manufacturing, commonly relying on TSMC for this service.

Computer graphics remained an appealing niche for semiconductor startups, because unlike PC microprocessors, in graphics Intel didn’t have a de facto monopoly.

The company that eventually came to dominate the market for graphics chips, Nvidia, had its humble beginnings not in a trendy Palo Alto coffeehouse but in a Denny’s in a rough part of San Jose.

Nvidia was founded in 1993 by Chris Malachowsky, Curtis Priem, and Jensen Huang, the last of whom remains CEO today. Today Nvidia’s chips, largely manufactured by TSMC, are found in most advanced data centers. It’s a good thing the company didn’t need to build its own fab.

Nvidia wasn’t the only fabless company pioneering new use cases for specialized logic chips.

Jacobs, whose faith in Moore’s Law was as strong as ever, thought a more complicated system of frequency-hopping would work better. Rather than keeping a given phone call on a certain frequency, he proposed moving call data between different frequencies, letting him cram more calls into available spectrum space.

He built a small network with a couple of cell towers to prove it would work. Soon the entire industry realized Qualcomm’s system would make it possible to fit far more cell phone calls into existing spectrum space by relying on Moore’s Law to run the algorithms that make sense of all the radio waves bouncing around.
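
A toy simulation conveys the hopping idea described above, though Qualcomm’s real system was far more sophisticated; channel counts, slot counts, and call names here are all hypothetical:

```python
# Toy sketch of frequency hopping (illustrative only; real CDMA is more complex).
import random

CHANNELS = 8   # available frequency channels (hypothetical)
SLOTS = 6      # time slots to simulate
calls = ["call-A", "call-B", "call-C"]

# Each call gets a seeded generator, standing in for an agreed hop sequence
# that both handset and tower can reproduce independently.
hoppers = {c: random.Random(i) for i, c in enumerate(calls)}

for slot in range(SLOTS):
    assignment = {c: hoppers[c].randrange(CHANNELS) for c in calls}
    print(f"slot {slot}: {assignment}")
# Instead of parking on one fixed frequency, each call moves between channels
# every slot; the receiver follows the same sequence to reassemble the data.
```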

Qualcomm has made hundreds of billions of dollars selling chips and licensing intellectual property. But it hasn’t fabricated any chips: they’re all designed in-house but fabricated by companies like Samsung or TSMC.

By making possible mobile phones, advanced graphics, and parallel processing, fabless firms enabled entirely new types of computing.

Morris Chang’s Grand Alliance

The changing of the guard atop the chip industry accelerated the splitting of chip design and manufacturing, with much of the latter offshored. Five years after Sanders retired from AMD, the company announced it was dividing its chip design and fabrication businesses.

AMD spun out these facilities into a new company that would operate as a foundry like TSMC, producing chips not only for AMD but other customers, too. The investment arm of the Abu Dhabi government, Mubadala, became the primary investor in the new foundry, an unexpected position for a country known more for hydrocarbons than for high-tech.

The fate of AMD’s production capabilities would end up shaping the chip industry — and guaranteeing that the most advanced chipmaking would take place offshore.

GlobalFoundries, as this new company that inherited AMD’s fabs was known, entered an industry that was as competitive and unforgiving as ever.

To better control the movement of electrons, new materials and transistor designs were needed. In place of the 2D transistor design used since the 1960s, the 22nm node introduced a new 3D transistor, called a FinFET (pronounced finfet).

When GlobalFoundries was established as an independent company in 2009, industry analysts thought it was well placed to win market share amid this race toward 3D transistors.

The company had a partnership with IBM and Samsung to jointly develop technology, making it straightforward for customers to contract with either GlobalFoundries or Samsung to produce their chips.

Many fabless firms worried that ideas shared with Samsung’s chip foundry might end up in other Samsung products. TSMC and GlobalFoundries had no such conflicts of interest.

Morris Chang wasn’t about to give up dominance of the foundry business, though.

Chang realized that TSMC could pull ahead of rivals technologically because it was a neutral player around which other companies would design their products. He called this TSMC’s “Grand Alliance,” a partnership of dozens of companies that design chips, sell intellectual property, produce materials, or manufacture machinery. “TSMC knows it is important to use everyone’s innovation,” Chang declared, “ours, that of the equipment makers, of our customers, and of the IP providers. That’s the power of the Grand Alliance.”

Apple Silicon

The greatest beneficiary of the rise of foundries like TSMC was a company that most people don’t even realize designs chips: Apple.

A year after launching the iPhone, Apple bought a small Silicon Valley chip design firm called PA Semi that had expertise in energy-efficient processing. Soon Apple began hiring some of the industry’s best chip designers. Two years later, the company announced it had designed its own application processor, the A4, which it used in the new iPad and the iPhone 4.

China’s ecosystem of assembly facilities is the world’s best place to build electronic devices. Taiwanese companies, like Foxconn and Wistron, that run these facilities for Apple in China are uniquely capable of churning out phones, PCs, and other electronics.

By 2010, when Apple launched its first chip, there were just a handful of cutting-edge foundries: Taiwan’s TSMC, South Korea’s Samsung, and — perhaps — GlobalFoundries, depending on whether it could succeed in winning market share.

The smartphone supply chain looks very different from the one associated with PCs.

Apple’s iPhone processors are fabricated exclusively in Taiwan. Today, no company besides TSMC has the skill or the production capacity to build the chips Apple needs.

EUV

By the late 2010s, ASML, the Dutch lithography company, had spent nearly two decades trying to make extreme ultraviolet (EUV) lithography work. Doing so required scouring the world for the most advanced components, the purest metals, the most powerful lasers, and the most precise sensors. EUV was one of the biggest technological gambles of our time.

Using EUV light introduced new difficulties that proved almost impossible to resolve. Where Lathrop used a microscope, visible light, and photoresists produced by Kodak, all the key EUV components had to be specially created.

Cymer, a company founded by two laser experts from the University of California, San Diego, had been a major player in lithographic light sources since the 1980s. Its EUV light source worked by blasting tiny droplets of tin with a laser, vaporizing them into a plasma that emits EUV light. This required a new carbon dioxide-based laser that could pulverize the tin droplets with sufficient power, more powerful than any that previously existed.

In summer 2005, two engineers at Cymer approached a German precision tooling company called Trumpf to see if it could build such a laser. It took Trumpf a decade to master these challenges and produce lasers with sufficient power and reliability. Each one required exactly 457,329 component parts.

After Cymer and Trumpf found a way to blast tin so it emits sufficient EUV light, the next step was to create mirrors that collected the light and directed it toward a silicon chip. The primary challenge for Zeiss, the German optics company that built them, was that EUV is difficult to reflect: many materials absorb EUV rather than reflect it. Ultimately, Zeiss created mirrors that were the smoothest objects ever made, with impurities that were almost imperceptibly small.

For Frits van Hout, who took over leadership of ASML’s EUV business in 2013, the most crucial input into an EUV lithography system wasn’t any individual component, but the company’s own skill in supply chain management. ASML ended up buying several suppliers, including Cymer, after concluding it could better manage them itself.

The miracle isn’t simply that EUV lithography works, but that it does so reliably enough to produce chips cost-effectively.

EUV tools work in part because their software works. ASML uses predictive maintenance algorithms to guess when components need to be replaced before they break, for example.
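ASML hasn't published these algorithms, so the following Python sketch is purely hypothetical: the linear-drift model, the failure threshold, and the function name are all invented, just to illustrate what "predict replacement before failure" can mean:

```python
# A hypothetical, minimal sketch of predictive maintenance: watch a
# component's sensor readings drift and schedule replacement before the
# reading crosses its failure limit. Nothing here reflects ASML's
# proprietary methods; all values are illustrative.
import numpy as np

FAILURE_THRESHOLD = 100.0   # invented sensor limit

def hours_until_failure(readings: np.ndarray) -> float:
    """Fit a linear trend to hourly readings and extrapolate forward."""
    hours = np.arange(len(readings))
    slope, _intercept = np.polyfit(hours, readings, 1)
    if slope <= 0:
        return float("inf")   # no upward drift detected
    return (FAILURE_THRESHOLD - readings[-1]) / slope

# Example: a reading drifting slowly upward with measurement noise.
rng = np.random.default_rng(0)
history = 60.0 + 0.05 * np.arange(500) + rng.normal(0, 0.5, 500)
print(f"schedule replacement in ~{hours_until_failure(history):.0f} hours")
```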

ASML’s EUV lithography tool is the most expensive mass-produced machine tool in history, so complex it’s impossible to use without extensive training from ASML personnel, who remain on-site for the tool’s entire life span.

ASML’s EUV tools weren’t really Dutch, though they were largely assembled in the Netherlands. Crucial components came from Cymer in California and Zeiss and Trumpf in Germany.

“There Is No Plan B”

Morris Chang bet more heavily on EUV than anyone else in the semiconductor industry.

Shang-yi Chiang, the soft-spoken engineer who headed TSMC’s R&D and was widely credited with the company’s top-notch manufacturing technology, was convinced EUV was the only path forward.

With Chiang back in charge of R&D, TSMC charged forward toward EUV.

Like TSMC, Samsung, and Intel, GlobalFoundries was considering adopting EUV as it prepared for its own 7nm node.

Several years after its founding, in 2014, GlobalFoundries bought IBM’s microelectronics business, promising to produce chips for Big Blue, which had decided to go fabless for the same reason as AMD.

GlobalFoundries competed with Taiwan’s UMC for status as the world’s second-largest foundry, with each company having about 10 percent of the foundry marketplace.

TSMC, Intel, and Samsung were certain to adopt EUV, though they had different strategies about when and how to embrace it. GlobalFoundries was less confident.

By 2018, GlobalFoundries had purchased several EUV lithography tools and was installing them in its most advanced facility, Fab 8, when the company’s executives ordered a halt. The EUV program was being canceled. GlobalFoundries was giving up on producing new, cutting-edge nodes.

Building cutting-edge processors was too expensive for everyone except the world’s biggest chipmakers.

The number of companies capable of fabricating leading-edge logic chips fell from four to three.

How Intel Forgot Innovation

At least the United States could count on Intel.

The company spent over $10 billion a year on R&D throughout the 2010s, four times as much as TSMC and three times the entire budget of DARPA.

As the chip industry entered the EUV era, Intel looked poised to dominate. Yet rather than capitalizing on this new era of shrinking transistors, Intel squandered its lead, missing major shifts in semiconductor architecture needed for artificial intelligence, then bungling its manufacturing processes and failing to keep up with Moore’s Law.

Without Intel, there won’t be a single U.S. company — or a single facility outside of Taiwan or South Korea — capable of manufacturing cutting-edge processors.

Intel’s leaders had to split their attention between chip design and chip manufacturing. They ended up bungling both.

In the early 2010s, just as Intel completed its conquest of the data center, processing demands began to shift. The new trend was artificial intelligence — a task that Intel’s main chips were poorly designed to address.

Since the 1980s, Intel has specialized in a type of chip called a CPU. CPUs can conduct many different types of calculations, which makes them versatile, but they do these calculations serially, one after another. Because AI workloads often require running the same calculation repeatedly, using different data each time, finding a way to customize chips for AI algorithms is crucial to making them economically viable.

In the early 2010s, Nvidia, the designer of graphics chips, began hearing rumors of PhD students at Stanford using Nvidia’s graphics processing units (GPUs) for something other than graphics. Unlike CPUs, GPUs are designed to run many iterations of the same calculation at once. This type of “parallel processing,” it soon became clear, had uses beyond controlling pixels of images in computer games. It could also train AI systems efficiently.
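To see why parallelism matters for AI, here is a toy Python sketch. NumPy's vectorized dot product stands in for a GPU's many parallel lanes; the workload and array sizes are arbitrary assumptions, not a model of Nvidia's hardware:

```python
# A toy contrast between the serial style of a CPU and the data-parallel
# style of a GPU, using NumPy's vectorized operations as a stand-in for
# parallel hardware. Sizes and the workload are illustrative only.
import time
import numpy as np

weights = np.random.rand(1_000_000)
inputs = np.random.rand(1_000_000)

# CPU-style: one multiply-accumulate at a time, serially.
start = time.perf_counter()
acc = 0.0
for w, x in zip(weights, inputs):
    acc += w * x
serial_s = time.perf_counter() - start

# GPU-style: the same calculation expressed as one operation over all
# the data at once, the pattern AI training repeats endlessly.
start = time.perf_counter()
acc_vec = np.dot(weights, inputs)
parallel_s = time.perf_counter() - start

print(f"serial: {serial_s:.3f}s  parallel: {parallel_s:.4f}s  "
      f"speedup: {serial_s / parallel_s:.0f}x")
```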

As investors bet that data centers will require ever more GPUs, Nvidia has become America’s most valuable semiconductor company.

Google has designed its own chips, called Tensor Processing Units (TPUs), which are optimized for use with Google’s TensorFlow software library.

Whether it will be Nvidia or the big cloud companies doing the vanquishing, Intel’s near-monopoly in sales of processors for data centers is ending.

Since 2015, Intel has repeatedly announced delays to its 10nm and 7nm manufacturing processes, even as TSMC and Samsung have charged ahead.

By 2020, half of all EUV lithography tools, a technology Intel had funded and nurtured, were installed at TSMC.

China’s Challenge

Made in China

“Without cybersecurity there is no national security,” declared Xi Jinping, general secretary of the Chinese Communist Party, in 2014, “and without informatization, there is no modernization.”

No country has been more successful than China at harnessing the digital world for authoritarian purposes.

China’s digital world runs on digits — 1s and 0s — that are processed and stored mostly by imported semiconductors. China’s tech giants depend on data centers full of foreign, largely U.S.-produced, chips.

Which core technologies most worry Xi? One is a software product, Microsoft Windows, which is used by most PCs in China, despite repeated efforts to develop competitive Chinese operating systems. Yet even more important in Xi’s thinking are the chips that power China’s computers, smartphones, and data centers.

“Call Forth the Assault”

Months before his Davos debut, Xi had struck a different tone in a speech to Chinese tech titans and Communist Party leaders in Beijing for a conference on “cyber security and informatization.” To an audience that included Huawei founder Ren Zhengfei, Alibaba CEO Jack Ma, high-profile People’s Liberation Army (PLA) researchers, and most of China’s political elite, Xi exhorted China to focus on “gaining breakthroughs in core technology as quickly as possible.” Above all, “core technology” meant semiconductors.

The chip industry faced an organized assault by the world’s second-largest economy and the one-party state that ruled it.

Demand for chips was “exploding,” China’s leaders realized, driven by “cloud computing, the Internet of Things and big data.”

Across the entire semiconductor supply chain, aggregating the impact of chip design, intellectual property, tools, fabrication, and other steps, Chinese firms have a 6 percent market share, compared to America’s 39 percent, South Korea’s 16 percent, and Taiwan’s 12 percent, according to researchers at Georgetown.

As early as 2014, Beijing had decided to double down on semiconductor subsidies, launching what became known as the “Big Fund” to back a new leap forward in chips.

China was disadvantaged, however, by the government’s desire not to build connections with Silicon Valley, but to break free of it.

China’s import of chips — $260 billion in 2017, the year of Xi’s Davos debut — was far larger than Saudi Arabia’s export of oil or Germany’s export of cars.

Integrated circuits made up 15 percent of South Korea’s exports in 2017; 17 percent of Singapore’s; 19 percent of Malaysia’s; 21 percent of the Philippines’; and 36 percent of Taiwan’s. Made in China 2025 called all this into question.

Technology Transfer

IBM’s decision to trade technology for market access made business sense. The firm’s technology was seen as second-rate, and without Beijing’s imprimatur it was unlikely to reverse its post-Snowden market shrinkage.

IBM wasn’t the only company willing to help Chinese firms develop data center chips. Around the same time, Qualcomm, the company specializing in chips for smartphones, was trying to break into the data center chip business using an Arm architecture.

The most controversial example of technology transfer, however, was by Intel’s archrival, AMD. AMD cut a deal with a consortium of Chinese firms and government bodies to license the production of modified x86 chips for the Chinese market.

The Chinese market was so enticing that companies found it nearly impossible to avoid transferring technology.

Viewed on their own terms, the deals that IBM, AMD, and Arm struck in China were driven by reasonable business logic. Collectively, they risk technology leakage.

“Mergers Are Bound to Happen”

For Zhao Weiguo, it was a long, winding road from a childhood raising pigs and sheep along China’s western frontier to being celebrated as a chip billionaire by Chinese media.

In 2013, four years after buying his stake in Tsinghua Unigroup, and just before China’s Communist Party announced new plans to provide vast subsidies to the country’s semiconductor firms, Zhao decided it was time to invest in the chip industry.

In 2013, Tsinghua Unigroup started its shopping spree at home, spending several billion dollars buying two of China’s most successful fabless chip design companies, Spreadtrum Communications and RDA Microelectronics.

A year later, in 2014, Zhao cut a deal with Intel to couple Intel’s wireless modem chips with Tsinghua Unigroup’s smartphone processors.

He floated the idea of buying a 25 percent stake in TSMC and advocated merging MediaTek with Tsinghua Unigroup’s chip design businesses.

Soon Zhao set his sights on America’s semiconductor industry. In July 2015, Tsinghua Unigroup floated the idea of buying Micron. Micron said it didn’t think the transaction was realistic given the U.S. government’s security concerns.

The Rise of Huawei

Huawei was founded by Ren Zhengfei. Its smartphone unit was until recently one of the world’s largest, rivaling Apple and Samsung in numbers of phones sold. The company provides other types of tech infrastructure, too, from undersea fiber-optic cables to cloud computing.

The ties between Huawei and the Chinese state are well documented but explain little about how the company built a globe-spanning business.

Huawei has embraced foreign competition from its earliest days. Ren Zhengfei’s business model has been fundamentally different from Alibaba’s or Tencent’s.

Ren had grown up in a family of high school teachers in rural Guizhou Province in southwestern China. He moved to Shenzhen, then a small town just across the border from Hong Kong. Ren saw an opportunity to import telecom switches, the equipment that connects one caller to another. When his partners across the border realized he was making good money by reselling their equipment, they cut him off, so Ren decided to build his own equipment.

Today Huawei is one of the world’s three biggest providers of cell tower equipment, alongside Finland’s Nokia and Sweden’s Ericsson.

Theft of intellectual property may well have benefitted the company, but it can’t explain its success.

Huawei’s spending on R&D, meanwhile, is world leading, with an annual budget of roughly $15 billion.

Starting in 1999, Huawei hired IBM’s consulting arm to teach it to operate like a world-class company. Thanks to IBM and other Western consultants, Huawei learned to manage its supply chain, anticipate customer demand, develop top-class marketing, and sell products worldwide.

In addition to Western consulting firms, Huawei had help from another powerful institution: China’s government.

The company identified the 250 most important semiconductors that its products required and began designing as many as possible in-house.

The 5G Future

Radio spectrum is scarce, so telecom firms have relied on semiconductors to pack ever more data into existing spectrum space. “Spectrum is far more expensive than silicon,” explains Dave Robertson, a chip expert at Analog Devices, which specializes in semiconductors that manage radio transmission.

Cars are only the most prominent example of how the ability to send and receive more data will create more demand for computing power — in devices on the “edge” of the network, in the cell network itself, and in vast data centers.

The Next Offset

The fate of China’s semiconductor industry isn’t simply a question of commerce. Whichever country can produce more 1s and 0s will have a serious military advantage, too.

The PLA has been talking about “AI weapons” for at least a decade, referring to systems that use “AI to pursue, distinguish, and destroy enemy targets automatically.” Xi Jinping himself has urged the PLA to “accelerate the development of military intelligentization” as a defense priority.

The warfare of the future will be more reliant than ever on chips — powerful processors to run AI algorithms, big memory chips to crunch data, perfectly tuned analog chips to sense and produce radio waves. In 2017, DARPA launched a new project called the Electronics Resurgence Initiative to help build the next wave of militarily relevant chip technology.

China’s leaders have identified their reliance on foreign chipmakers as a critical vulnerability. They’ve set out a plan to rework the world’s chip industry by buying foreign chipmakers, stealing their technology, and providing billions of dollars of subsidies to Chinese chip firms.

The Chip Choke

“Everything We’re Competing On”

Intel’s CEO Brian Krzanich couldn’t hide his anxiety about China’s push to seize a bigger share of the world’s chip industry. Krzanich hobnobbed with U.S. government officials, trying to convince Washington to do something about China’s massive semiconductor subsidies.

China had driven U.S. solar panel manufacturing out of business. Couldn’t it do the same in semiconductors?

Yet it wasn’t until the final days of the Obama administration that the government began to act. Around the same time, the White House commissioned a group of semiconductor executives and academics to study the future of the industry.

America’s technological lead in fabrication, lithography, and other fields had dissipated because Washington convinced itself that companies should compete but that governments should simply provide a level playing field.

U.S. intelligence had voiced concerns about Huawei’s alleged links to the Chinese government for many years, though it was only in the mid-2010s that the company and its smaller peer, ZTE, started attracting public attention.

Trump repeatedly attacked China for “ripping us off,” but he had little interest in policy details and none in technology. His focus was on trade and tariffs.

Publicly, semiconductor CEOs and their lobbyists urged the new administration to work with China and encourage it to comply with trade agreements. Privately, they admitted this strategy was hopeless and feared that state-supported Chinese competitors would grab market share at their expense. The entire chip industry depended on sales to China — be it chipmakers like Intel, fabless designers like Qualcomm, or equipment manufacturers like Applied Materials.

Fujian Jinhua

China’s Fujian Province is right across the strait from Taiwan. When the provincial government decided to open a DRAM chipmaker called Jinhua and provided it with over $5 billion in government funding, Jinhua wagered that a partnership with Taiwan was its best path to success.

To compete, Jinhua had to acquire DRAM manufacturing know-how by means fair or foul. It cut a deal with Taiwan’s UMC, which promised to provide DRAM technology even though it wasn’t in the DRAM business.

When Micron sued UMC and Jinhua for violating its patents, they countersued in China’s Fujian Province, where a court ruled that Micron was responsible for violating UMC and Jinhua’s patents. It was a perfect case study of the state-backed intellectual property theft that foreign companies operating in China had long complained of.

Jinhua was cut off from buying U.S. equipment for manufacturing chips. Within months, production at Jinhua ground to a halt. China’s most advanced DRAM firm was destroyed.

The Assault on Huawei

Concern about Huawei wasn’t confined to the Trump administration or the United States. Australia had banned Huawei from 5G networks.

Some close American allies in Eastern Europe, like Poland, openly banned the company.

Many Europeans also thought China’s technological advance was inevitable and therefore not worth trying to stop.

The point was less that Huawei was directly supporting China’s military than that the company was advancing China’s overall level of chip design and microelectronics know-how.

Nearly every chip in the world is designed with software from at least one of three U.S.-based companies: Cadence, Synopsys, and Mentor.

Excluding the chips Intel builds in-house, all the most advanced logic chips are fabricated by just two companies, Samsung and TSMC, both located in countries that rely on the U.S. military for their security. Moreover, making advanced processors requires EUV lithography machines produced by just one company, the Netherlands’ ASML, which in turn relies on its San Diego subsidiary, Cymer (which it purchased in 2013), to supply the irreplaceable light sources in its EUV lithography tools.

China’s Sputnik Moment?

Xi Jinping recently appointed his top economic aide, Liu He, to serve as a “chip czar,” managing the country’s semiconductor efforts. There’s no doubt that China is spending billions to subsidize chip firms. Whether this funding produces new technology remains to be seen.

For complete independence, China would need to acquire cutting-edge design software, design capabilities, advanced materials, and fabrication know-how, among other steps. China will no doubt make progress in some of these spheres, yet some are simply too expensive and too difficult for China to replicate at home.

Despite the rhetoric, China’s not actually pursuing an all-domestic supply chain. Beijing recognizes this is simply impossible. China would like a non-U.S. supply chain, but because of America’s heft in the chip industry and the extraterritorial power of its export regulations, a non-American supply chain is also unrealistic, except perhaps in the distant future.

The worry for other countries is that China’s slew of subsidies will let it win market share across multiple parts of the supply chain, especially those that don’t require the most advanced technologies.

Shortages and Supply Chains

In 2020, just as the United States began to impose a chip choke on China, cutting off some of the country’s leading tech companies from accessing U.S. chip technology, a second chip choke began asphyxiating parts of the world economy. Certain types of chips became difficult to acquire, especially the types of basic logic chips that are widely used in automobiles.

Carmakers spent much of 2021 struggling, and often failing, to acquire semiconductors. These firms are estimated to have produced 7.7 million fewer cars in 2021 than would have been possible without the chip shortage, implying a collective revenue loss of $210 billion, according to industry estimates.
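Spread across those 7.7 million vehicles, $210 billion works out to roughly $27,000 of forgone revenue per unbuilt car.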

“Rivalries among semiconductor businesses have now begun to draw in countries.”

South Korea isn’t the only country where chip companies and the government work as a “team,” to use President Moon’s phrase. Taiwan’s government remains fiercely protective of its chip industry, which it recognizes as its greatest source of leverage on the international stage.

When it comes to manufacturing the most advanced chips, however, the U.S. currently lags behind. The primary hope for advanced manufacturing in the United States is Intel. After years of drift, the company named Pat Gelsinger as CEO in 2021.

Gelsinger has cut a deal with ASML giving Intel the first next-generation EUV machine, which is expected to be ready in 2025. The second prong of his strategy is launching a foundry business that will compete directly with Samsung and TSMC, producing chips for fabless firms and helping Intel win more market share.

The Taiwan Dilemma

TSMC’s chairman is certainly right that no one wants to “disrupt” the semiconductor supply chains that crisscross the Taiwan Strait. But both Washington and Beijing would like more control over them.

Every company that’s invested on either side of the Taiwan Strait, from Apple to Huawei to TSMC, is implicitly betting on peace.

Conclusion

The staggering complexity of producing computing power shows that Silicon Valley isn’t simply a story of science or engineering. Technology only advances when it finds a market. The history of the semiconductor is also a story of sales, marketing, supply chain management, and cost reduction.

At some point, the laws of physics will make it impossible to shrink transistors further. Even before then, it could become too costly to manufacture them.

The end of Moore’s Law would be devastating for the semiconductor industry — and for the world. We produce more transistors each year only because it’s economically viable to do so.

Neil Thompson and Svenja Spanuth, two researchers, have gone so far as to argue that we’re seeing a “decline of computers as a general-purpose technology.” They think the future of computing will be divided between “‘fast lane’ applications that get powerful customized chips and ‘slow lane’ applications that get stuck using general-purpose chips whose progress fades.”
