Tae Kim: The Nvidia Way; Jensen Huang and the Making of a Tech Giant

The Nvidia Way

I FOUND THIS TO BE A pervasive attitude within Nvidia: that the culture of the place discourages looking back, whether at errors or successes, in favor of focusing on the future — the blank whiteboard of opportunity.

Nvidia was founded by Jensen, Curtis Priem, and Chris Malachowsky in a back booth at a Denny’s, all the way back in 1993.

Jensen’s business acumen and hard-driving management style were critical to Nvidia’s early success, but Priem’s chip architecture prowess and Malachowsky’s manufacturing expertise were essential, too.

This is the culture I have come to call “the Nvidia Way.” It combines unusual independence for each employee with the highest possible standards; it encourages maximum speed while demanding maximum quality; it allows Jensen to act as strategist and enforcer with a direct line of sight to everyone and everything at the company. Above all, it demands an almost superhuman level of effort and mental resilience from everyone. It’s not just that working at Nvidia is intense, though it certainly is; it’s that Jensen’s management style is unlike anything else in corporate America.

Can you really separate Nvidia from its CEO?

The Early Years (Pre-1993)

Pain and Suffering

Jensen was born in Taiwan on February 17, 1963, to Taiwanese parents.

After a wave of political unrest hit Thailand, where the family was living at the time, Jensen’s parents decided to send him and his brother to Tacoma, Washington, to live with their aunt and uncle.

Later in his life, Jensen’s executives would say he had developed his tough, street-fighter mentality during his days at boarding school in Kentucky. “Maybe this is a bit of my early schooling, I will never start a fight, but I will never walk away from one. So, if someone is going to pick on me, they’d better think twice,” Jensen himself said.

His first job, bussing tables and washing dishes at a Denny’s, also taught him to find satisfaction in the quality of his work, no matter how minor the task, and to perform every task to the highest possible standard.

Already, he was learning another fact of working life: the trade-off between having high standards and being efficient with one’s time.

Jensen was invited to interview with some of the largest semiconductor and chip makers in the country. He first had his eye on Texas Instruments, whose offices spanned multiple zip codes, but his interview went poorly, and he didn’t get an offer. He next interviewed with two companies based in California: the first was Advanced Micro Devices, or AMD; the second was LSI Logic. He received offers from both and initially chose AMD, because he was more familiar with its reputation. After a stint at AMD, he took the plunge and joined LSI Logic. There, he was given a technical role working with customers. He was assigned to a start-up called Sun Microsystems, where he met two engineers, Curtis Priem and Chris Malachowsky, who were working on a secret project that promised to revolutionize how people used workstation computers — high-performance computers built to perform specialized technical or scientific tasks, such as three-dimensional modeling or industrial design.

The Graphics Revolution

AS A TEENAGER, CURTIS PRIEM taught himself how to program by writing games in the computer lab of his high school in Fairview Park, Ohio, just outside of Cleveland.

WHEN CONSIDERING COLLEGES, PRIEM FOCUSED on three schools: the Massachusetts Institute of Technology, Case Western Reserve University, and Rensselaer Polytechnic Institute (RPI). Two factors led him to favor the last: at RPI, professors, not teaching assistants, taught freshman classes, and the school had recently announced that it would be acquiring an advanced IBM 3033 mainframe computer, which would be made accessible even to incoming freshmen.

Priem’s first job, at Vermont Microsystems, ended up being nothing like Apple. So, he started looking west, to Silicon Valley. He interviewed at GenRad and received an offer. He didn’t know it, but GenRad was a company in crisis by the time he joined.

A man named Wayne Rosing offered Priem an interview at Sun Microsystems. Rosing vowed to take advantage of hardware advances that made it possible to render fast, beautiful color graphics. To do that, he needed someone who could design powerful graphics chips. Hence his interest in Curtis Priem.

WHAT ROSING HAD IN MIND was exactly the opposite of what Sun’s executives wanted him to do. At the time, the company was focused on launching a new line of computers called the SPARCstation series. These were UNIX-based workstations designed for specific scientific and technical applications, in particular computer-aided design (CAD) and computer-aided manufacturing (CAM) programs that could be used to design complex physical objects, from bridges to airplanes to mechanical parts.

Priem could design and build whatever he could dream up, so long as it could work within the data-throughput constraints of the “frame buffer” — the memory that the SPARCstation dedicated to graphics processing. Priem realized he couldn’t tackle the project alone; he needed help. It would come soon from another engineer, Chris Malachowsky, whom Sun Microsystems had hired from Hewlett-Packard. The two men would share an office and become known as the “closet graphics” team.

UNLIKE HIS OFFICE MATE, CHRIS MALACHOWSKY had come late to the world of computers. Malachowsky went on to major in electrical engineering and parlayed his good grades into a job at Hewlett-Packard in California.

When Malachowsky first arrived at HP, he saw that hands-on experience in its manufacturing department could give him a practical perspective on the industry that few others seemed to possess. He worked on the HP 1000 minicomputer product line and learned how to write embedded control software for its communication peripherals.

Malachowsky decided to apply for jobs at other companies, solely for the purpose of getting some practice interviewing.

The second practice interview was at Sun Microsystems, where he had applied for an unspecified position making graphics chips. He agreed to interview with the lead engineer, Curtis Priem.

In order to produce the high-quality graphics that Rosing wanted (but that Rosing’s boss didn’t), Priem had designed a monstrosity of a graphics accelerator. Priem’s accelerator would handle up to 80 percent of the computational workload on its own.

It was a good design, in theory, but now it was up to Malachowsky to figure out how to make it a reality. Malachowsky would rely on LSI Logic. LSI had just introduced a new chip architecture called “sea-of-gates,” through which it could fit more than ten thousand logic gates onto a single chip, a feat that no other manufacturer had been able to accomplish.

To make sure that Priem and Malachowsky got the chip they had drawn up, LSI assigned one of its rising stars to manage the Sun account — a relatively new hire named Jensen Huang. Together, the three of them worked out the manufacturing process that would make Priem’s design ready for fabrication.

In 1989, the three men finalized the specifications for Sun’s new graphics accelerator, which was split across two chips. The FBC would require 43,000 gates and 170,000 transistors in order to do its job properly; the TEC, 25,000 gates and 212,000 transistors.

GX STARTED AS AN OPTIONAL add-on, for which Sun up-charged customers $2,000. GX made everything on the display work faster: two-dimensional geometry, three-dimensional wireframing, even the mundane task of scrolling through lines of text was quicker and better with GX accelerators than without them.

After a few years of torrid sales as an add-on option, the GX chips became standard on every Sun workstation. Their success boosted the careers of Priem and Malachowsky, who became graphics architects and were given their own team, called the Low End Graphics Option group.

“We realized our time was limited and neither of us wanted to work at Sun,” Priem said. They already had a new project in mind: resurrecting the next-generation accelerator chip that Sun leadership had passed on.

“Why don’t we just go build Samsung a demonstration chip?” Priem asked Malachowsky. “We’ll just be consultants and show them the value of this new memory device they are committing to build.”

“We knew a guy!” he recalled later. “We knew a guy who we were good friends with who had moved into technology licensing and building systems on a chip for other people. So, we reached out to Jensen.” Malachowsky and Priem asked Jensen Huang for help writing a contract to work with Samsung. The three started meeting to devise a business strategy to deal with the Korean company. Then one day Jensen said, “Why are we doing this for them?”

The Birth of Nvidia

CURTIS PRIEM AND CHRIS MALACHOWSKY’S idea for a graphics-chip venture was perfectly timed. In 1992, two major developments — one in hardware, one in software — accelerated the demand for better graphics cards. The first was the computer industry’s adoption of the Peripheral Component Interconnect (PCI) bus.

The second development was Microsoft’s release of Windows 3.1, which was intended to showcase the very latest in computer-graphics capabilities.

Priem and Malachowsky decided the PC market, rather than the workstation market, represented the best opportunity for their start-up.

In late 1992, Priem, Malachowsky, and Jensen met frequently at a Denny’s at the corner of Capitol and Berryessa in East San Jose to figure out how to turn their idea into a business plan.

Jensen still needed to be convinced to leave his job. Eventually, he decided that $50 million in revenue was possible. He was confident, as a gamer himself, that the gaming market was going to grow considerably.

In December of 1992, Priem forced their hand. He submitted his letter of resignation to Sun Microsystems, effective December 31. The following day, alone in his house, he founded the new venture, “just by declaring that this was started,” he later recalled.

Priem revealed that he already had basic specifications in mind for a new PC-based graphics accelerator. In many ways, it would be an evolution of the GX chip they had worked on for six years. He wanted to call the chip the “GX Next Version,” or GXNV. Jensen told Priem to “drop the GX.” Their new chip would be called the NV1.

Once news of the three cofounders’ new venture spread, several senior engineers at Sun Microsystems quit and joined the fledgling start-up. Two crucial early hires were Bruce McIntyre, a software programmer on the GX team, and David Rosenthal, who became the start-up’s chief scientist.

McIntyre and Priem took a Sun GX graphics chip and attached it to a board that could plug into their Gateway PC. The hardware interface was easy; the software integration was much harder.

It took a full month of work to remap the GX’s graphics registers to work with Windows 3.1.

Now, the start-up had a staff. It had a viable demonstration product. It only needed an official name, so that it could be legally incorporated. The last remaining option was “Invidia,” which Priem found by looking up the Latin word for envy. “We dropped the ‘I’ and went with NVidia to honor the NV1 chip we were developing,” said Priem.

On April 5, 1993, Nvidia was officially born.

THE FIRST TEST OF NVIDIA’S viability — the search for funding — loomed. The entire VC industry was still a niche corner of the economy, making just over a billion dollars in outlays per year (close to $2 billion in today’s dollars).

Jensen’s decision to ease his way out of LSI Logic turned out to pay immediate dividends during Nvidia’s fund-raising process. LSI’s CEO, Wilfred Corrigan, promised to introduce Jensen to Don Valentine at Sequoia Capital. Valentine had invested in LSI Logic back in 1982, which earned him a handsome payout when the company went public a year later.

For all its ambition to take over the PC graphics market, Nvidia had to focus its resources on the single best opportunity rather than spread itself thin chasing all possible ones. This was why it had declined Wayne Rosing’s offer to make chips that could run on both Sun workstations and IBM-compatible PCs.

The next meeting they took, with Sutter Hill Ventures, went more smoothly. The only partner excited about Nvidia was Tench Coxe, who had joined the firm a few years prior.

The positive meeting with Sutter Hill seemed to bode well for the big test two days later: their pitch to Don Valentine at Sequoia. Sequoia met with Nvidia’s cofounders two more times in mid-June. At the last meeting, they decided to invest.

Nvidia secured $2 million of Series A funding from Sequoia Capital and Sutter Hill Ventures — $1 million apiece — at the end of the month.

Near-Death Experiences (1993–2003)

All In

FINALLY, NVIDIA COULD STOP MERELY talking about its first chip and start building it. The first order of business was moving the company out of Priem’s townhouse and into a real office.

Priem would handle chip architecture and products as the company’s chief technical officer, and Malachowsky would run the engineering and implementation teams. They simply assumed that Jensen Huang would make the business decisions.

While Priem was working on the design, Jensen focused on convincing Intel to support his new card. His contact at Intel was a young executive named Pat Gelsinger. In the end, Jensen prevailed. Intel went with a more open standard.

As the NV1’s design came into focus, Jensen and Malachowsky finalized their partnership with the foundry that would be manufacturing all of their chips, SGS-Thomson in Europe. SGS-Thomson essentially agreed to fund Nvidia’s entire software division of around a dozen people in order to secure the privilege of manufacturing the NV1 chip.

In the fall of 1994, SGS and Nvidia presented the NV1 at COMDEX in Las Vegas, one of the largest computer trade shows in the world. Impressed with the NV1 demonstration, Sega agreed to begin working with Nvidia as it planned its next console.

In May 1995, Sega and Nvidia signed a five-year partnership under which Nvidia agreed to build its next-generation chip, the NV2, exclusively for Sega’s next gaming console.

BUT NVIDIA HAD GRAVELY MISJUDGED the market. For one, over the previous two years, memory prices had plummeted from $50 per megabyte to $5 per megabyte, which meant that the NV1’s stinginess with onboard memory was no longer much of a competitive advantage.

It was a single game, the first-person shooter DOOM, that sealed the NV1’s fate. At the time of the chip’s launch, DOOM was the most popular game in the world. The NV1 chip only partially supported VGA graphics and relied on a software emulator to supplement its VGA capabilities — which resulted in slow performance for gamers playing DOOM. Nvidia’s new card, which was supposed to push the boundaries of the graphics industry, could not keep up with the world’s most popular game.

Jensen realized Nvidia had made several critical mistakes with the NV1, from positioning to product strategy. They had overdesigned the card, stuffing it with features no one cared about.

Nvidia had spent nearly $15 million to develop the NV1. The chip’s commercial failure, however, meant that Nvidia was now facing a cash crisis.

DURING ONE OF NVIDIA’S VERY FIRST board meetings, director Harvey Jones, a former CEO of a leading chip-design-software company called Synopsys, asked Jensen about the NV1: “How would you position this?” In the aftermath of the NV1’s failure, Jensen regretted not taking Jones’s question a little more seriously. In his search for answers, he gravitated to the book Positioning: The Battle for Your Mind by Al Ries and Jack Trout. According to the two authors, potential buyers didn’t want to be persuaded. They wanted to be seduced.

In 1996, Sega informed Nvidia that the company would no longer be using the NV2 in its next console.

After about a year spent on the project, Kogachi was able to get an NV2 prototype working within Sega’s specifications. The milestone triggered the $1 million payout, money that was a key lifeline during a time of crisis. The majority of the $1 million was immediately put into research and development on the NV3.

While Nvidia reeled from its missteps with the NV1 and NV2 and was pivoting to focus on the NV3, a formidable new competitor had emerged in the PC graphics market. Three alumni of Silicon Graphics, Scott Sellers, Ross Smith, and Gary Tarolli, founded the company 3dfx in 1994, just one year after Nvidia’s incorporation.

3dfx pitched itself as the only start-up that could bring SGI-level performance to personal computers, at a fraction of the cost. Knowingly or not, 3dfx followed the exact principles laid down by Al Ries and Jack Trout in Positioning.

Jensen saw where the industry was headed and demanded that Nvidia’s engineers follow the market rather than fight it.

Jensen’s message inspired Priem to go big, literally, with the NV3.

As a nod to Nvidia’s ambitions — and, perhaps, as a way to signal a clean break with its past design philosophy — the company decided to give the NV3 an external brand that was different from its internal code name, dubbing it the RIVA 128. The name encapsulated the chip’s ultimate purpose: RIVA stood for Real-time Interactive Video and Animation Accelerator, and “128” was a nod to the chip’s 128-bit bus, which would be the largest ever included on a single chip — another first for the consumer PC industry.

The company also now knew that its chips needed to have 100 percent hardware support for the old VGA standard.

Jensen was able to source and license a VGA core design from one of Nvidia’s competitors, a company called Weitek. Not only did Jensen sign a licensing agreement with Weitek — he was also able to poach its VGA chip designer, Gopal Solanki, who became a project manager and one of the CEO’s top lieutenants.

Nvidia unveiled the RIVA 128 in April at the 1997 Computer Game Developers Conference.

The message was twofold: first, the RIVA 128 could outperform 3dfx’s best cards; and second, the company that 3dfx had left for dead was about to come roaring back into the 3-D graphics market.

In late summer, Jensen gathered the whole company in the office cafeteria. He pulled a piece of paper from his pocket and read off some dollar figures, down to cents. He folded the paper back into his pocket and said, “That’s how much money we have in the bank.”

Jensen then pulled out another piece of paper from his pocket. He opened it up and read, “One purchase order from STB Systems for 30,000 units of RIVA 128.” It was the chip’s first major order. The cafeteria erupted in cheers. Jensen had indulged in a bit of showmanship for dramatic effect.

The RIVA 128 was the company’s first big hit.

“The RIVA 128 was a miracle,” Jensen said. “When our backs were against the wall, Curtis, Chris, Gopal, and David Kirk built it. They made really good decisions.”

Ultra-Aggressive

THE RIVA 128 DID MORE THAN ensure Nvidia’s survival. It also served as a magnet for talent.

Caroline Landry was a chip designer for the Canadian company Matrox Graphics when she first heard about Nvidia’s new chip. When she started, she had trouble adjusting to Nvidia’s intense culture.

Landry mentioned to Jensen that some employees were griping about the long work hours. His response was typically direct. “People who train for the Olympics grumble about training early in the morning, too.” Jensen was sending a message: long hours were a necessary prerequisite for excellence. To this day, he has not deviated from that view or altered Nvidia’s expectation that employees adopt extreme work habits.

“At Nvidia, you embrace your smart colleagues and don’t feel threatened.”

“The theoretical limit of what you could do — that’s what Speed of Light is. That’s the only thing we were allowed to measure against,” remembered former executive Robert Csongor.

Much of what the company learned on the RIVA 128 became standard in its future chip development. From that point on, Nvidia had software drivers ready at the beginning of chip production: the drivers would already have been tested across all the important applications and games and checked for compatibility with prior Nvidia chips. This approach became a significant competitive advantage for Nvidia, whose rivals had to develop separate drivers for different chip-architecture generations.

FEAR AND ANXIETY BECAME JENSEN’S favorite motivational tools. At each monthly company meeting, he would say, “We’re thirty days from going out of business.”

That paranoia came to a head in late 1997. Intel had always been both an important partner for Nvidia and a potential competitive threat. Just months after the RIVA 128 launched to great fanfare, Intel announced its own chip, the i740. It was a direct challenge to Nvidia — its new chip and its very existence. After Intel’s i740 announcement, “our sales pipeline started to dry up,” one Nvidia executive said.

CHRIS MALACHOWSKY SPEARHEADED the response to the Intel threat.

IN SOME INSTANCES, NVIDIA’S FOCUS on speed could lead to lapses in quality — at least, relative to the high standards that Jensen set for the company.

JENSEN’S COMPETITIVENESS OFTEN MOTIVATED his employees to do extraordinary things. But it could also reveal a petty side of the CEO.

Since Nvidia’s founding, it had partnered with SGS-Thomson, the European chip conglomerate, to manufacture its chips. As Jensen and his cofounders discovered during their initial meeting with Sequoia, SGS-Thomson did not have the best reputation, and it had struggled to remain competitive in the face of less expensive labor in East Asia. Yet now that Nvidia was producing great chips and selling them in massive quantities, SGS-Thomson’s weaknesses became far more difficult to ignore.

When Nvidia was founded in 1993, Jensen struggled to find chip-manufacturing capacity. In 1996, he tried a more personal approach. He addressed a letter to Morris Chang, TSMC’s CEO, asking if the two men could discuss Nvidia’s chip needs. This time, Chang called him, and the two men arranged for a visit in Sunnyvale. He managed to secure some production capacity from TSMC to supplement SGS-Thomson’s capabilities, and the relationship seemed to be going well.

The two CEOs and their companies had become so close in such a short period of time — and the relationship between Nvidia and SGS-Thomson had soured almost as fast — that in February of 1998, Nvidia made TSMC its main supplier.

JENSEN’S RESPONSE TO NVIDIA’S near-death from a production backlog was, paradoxically, to restructure the entire company in order to ship new designs even faster.

He began to call Michael Hara, Nvidia’s head of marketing, into his office to brainstorm strategy. Hara, who had worked at several of Nvidia’s competitors, explained the market dynamics to Jensen. The whole industry moved according to the rhythms of computer manufacturers, who refreshed their product launches twice a year: in spring and fall. They were constantly shopping around for better chips to put in their PCs, readily replacing existing vendors with new ones as faster, higher-quality components became available.

After a few weeks, Jensen announced to his executive team that he had figured out how to keep Nvidia ahead of the competition — forever. “We’re going to fundamentally restructure the engineering department to line up with the refresh cycles,” he said.

Priem’s design had a software-based “resource manager,” essentially a miniature operating system that sat on top of the hardware itself. The resource manager allowed Nvidia’s engineers to emulate certain hardware features that normally needed to be physically printed onto chip circuits. This involved a performance cost but accelerated the pace of innovation, because Nvidia’s engineers could take more risks.
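
To make the resource-manager concept concrete, here is a minimal sketch in Python. It is only an illustration of the idea described above, not Nvidia’s driver code; the class and feature names are invented. It shows how a thin software layer can expose one stable interface and quietly emulate any feature the underlying chip does not yet implement in silicon.

```python
# A minimal sketch of the resource-manager concept (not Nvidia's actual
# driver code; the class and feature names are invented). One stable
# interface is exposed to callers; features the current chip lacks are
# transparently emulated in software.

class ResourceManager:
    def __init__(self, hardware_features):
        # Names of the features this particular chip implements in silicon.
        self.hardware_features = set(hardware_features)

    def run(self, feature, *args):
        if feature in self.hardware_features:
            return self._dispatch_to_hardware(feature, *args)
        # Missing from this chip generation: fall back to software emulation.
        # Slower, but the interface seen by the rest of the driver is identical.
        return self._emulate_in_software(feature, *args)

    def _dispatch_to_hardware(self, feature, *args):
        return f"hardware path: {feature}{args}"

    def _emulate_in_software(self, feature, *args):
        return f"software emulation: {feature}{args}"


rm = ResourceManager(hardware_features={"blit", "line_draw"})
print(rm.run("blit", (0, 0), (64, 64)))       # runs on the silicon
print(rm.run("texture_filter", "bilinear"))   # emulated until a later chip adds it
```

The emulated path costs performance, as the book notes, but it lets a new chip ship before every feature exists in hardware, which is what allowed Nvidia’s engineers to take more risks with each generation.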

Nvidia also began emphasizing backwards compatibility for its software drivers, which it had first done with the RIVA 128.

Jensen saw emulation and backwards-compatible drivers not just as good technical principles but also as competitive advantages.

Nvidia’s rapid iteration meant that “the competition will always be shooting behind the duck,” as Jensen described it.

By the end of 1999, Nvidia had reorganized its model for design and production around the “Three Teams, Two Seasons” strategy. It had a philosophy that demanded employees operate at the “Speed of Light.”

And it had a corporate mantra — “We’re thirty days from going out of business” — that served as a warning about complacency.

Just Go Win

In September of 1998, 3dfx sued Nvidia for patent infringement. Just a year earlier, 3dfx’s leaders had been so confident that Nvidia was about to go bankrupt that they didn’t even bother making a play for their struggling rival. Now, the situation had been almost turned on its head. 3dfx then decided to expand into an entirely new part of the graphics industry. In December 1998, it bought the graphics-board manufacturer STB Systems for $141 million. 3dfx soon faced a complete operational meltdown. It failed to manage STB’s inventories. Its mid-tier cards failed to sell. It simply ran out of cash. The company’s creditors initiated bankruptcy proceedings near the end of 2000. On December 15 of that year, Nvidia bought 3dfx’s patents and other assets and hired about one hundred of its employees. In October 2002, 3dfx formally filed for bankruptcy.

Rick Tsai was TSMC’s executive vice president of operations when Nvidia first started working with the chip manufacturer. When TSMC first began to work with Nvidia, the entire industry was working on a smaller scale. Within just a few years, Nvidia’s success in graphics made it one of TSMC’s top two or three customers.

ON FRIDAY, JANUARY 22, 1999, Nvidia finally went public. With the Asian financial crisis over and the company’s finances in solid shape, the stock proved irresistible to investors. The company raised $42 million from its stock sale, and its shares ended the first day of trading up 64 percent at $19.69 per share. At that price, Nvidia was valued at $626 million.

WITH THE MONEY FROM THE IPO, Nvidia pursued ever-larger strategic partnerships. The company had hired Oliver Baltuch, a tech-industry veteran, to manage significant relationships with big companies such as Microsoft, Intel, and AMD.

The company had stayed away from the gaming-console market since Sega canceled its contract for the NV2 chip. But a few years later, in 1999, Microsoft hinted that it was developing its first console and that it would be based on the DirectX API.

Microsoft soon changed direction, though. In January 2000, the company gave graphics start-up Gigapixel, led by founder and CEO George Haber, a development contract to supply the graphics technology for Microsoft’s Xbox console. Even after the Gigapixel announcement, Nvidia continued to make the case that it was the right partner for the Xbox. Jensen, Diskin, Thompson, and McBreen negotiated a deal under which Nvidia would replace Gigapixel as Microsoft’s graphics-chip partner. The new console would instead use a new, custom-designed chip from Nvidia, with Jensen and Diskin insisting that Microsoft pay $200 million up front to cover the new chip’s research and development, an amount that required personal approval from Bill Gates himself. In the week of Bill Gates’s GDC announcement that Nvidia, in fact, would be the graphics-chip supplier for the Xbox, Nvidia’s stock soared above $100 per share.

IT WOULD BE ONE OF THE last happy moments at Nvidia for Priem. In the late 1990s the Nvidia cofounder had started to clash more frequently with the company’s engineering staff. Priem was now fighting with Jensen so often, and with such intensity, that the company brought in a special workplace consultant to try to resolve their differences. In 2003, a few years after his reassignment, Priem took an extended leave of absence to deal with some issues he had with his then-wife. But after three months, Jensen could no longer dodge employee questions about the whereabouts of the company’s cofounder and chief technical officer. He gave Priem an ultimatum: return to work full-time, transition into a part-time consulting role for Nvidia, or resign.

GeForce and the Innovator’s Dilemma

The Innovator’s Dilemma, Clayton Christensen’s study of how market leaders get disrupted from below, is one of Jensen’s favorite books, and he was determined not to let that fate befall Nvidia.

He studied the business strategies of other leading companies for inspiration on how to fend off an attack from below. As he looked at Intel’s product lineup, he noticed that its Pentium series of CPUs had a range of clock speeds — a key measure of processor performance — but all of the Pentium cores themselves shared the same chip design and theoretically had identical features and capabilities.

Jensen saw that Nvidia could stop throwing away parts that failed quality tests as a matter of course. Rejected parts were generating no revenue for Nvidia; they were thrown out. But by spending a bit more to spruce up the rejected parts for use on less intensive chip lines, Nvidia could create a whole new derivative product line that could turn a profit, without the expensive and time-consuming process of research and development. The strategy was dubbed “ship the whole cow,” a reference to how butchers find ways to use almost every part of a steer carcass, from nose to tail — not just the prime cuts like the tenderloin and the ribs.
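
A short, hypothetical sketch makes the mechanics of “ship the whole cow” easier to see. The tier names and test thresholds below are invented for illustration; the point is simply that dies failing the flagship spec get sorted into cheaper derivative lines instead of the scrap bin.

```python
# A hypothetical illustration of "ship the whole cow": dies that fail the
# flagship spec are sorted into cheaper derivative tiers instead of being
# scrapped. The tier names and test thresholds here are invented.

def bin_die(working_cores, max_stable_clock_mhz):
    if working_cores >= 4 and max_stable_clock_mhz >= 500:
        return "flagship"      # full-spec part, highest price
    if working_cores >= 3 and max_stable_clock_mhz >= 400:
        return "mainstream"    # a core fused off, lower clock
    if working_cores >= 2:
        return "value"         # entry-level derivative line
    return "scrap"             # genuinely unusable silicon

dies = [(4, 520), (3, 430), (4, 380), (1, 510)]
print([bin_die(cores, clock) for cores, clock in dies])
# -> ['flagship', 'mainstream', 'value', 'scrap']
```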

Nvidia decided to bend the rules in order to stand out even more. In 1999, it launched the successor to the RIVA TNT2 series, which it called the GeForce 256.

The GeForce 256 thus took even more computational burden off of the CPU and made the entire computer run faster.

Nvidia decided to call the new chip the first entry in an entirely new product category: a graphics processing unit, or GPU, which would be to graphics rendering what the computer’s main central processing unit (CPU) was to all other computational tasks.

The world understood that CPUs were supposed to cost hundreds of dollars. Nvidia chips were sold at wholesale for less than $100 each, even though they were just as complex as, and had more transistors than, CPUs. Once the company started marketing all of its chips as GPUs, the pricing gap narrowed considerably.

MODERN GRAPHICS CHIPS ORGANIZE computation through what is called a graphics pipeline, turning geometry data with object coordinates into an image. The first stage of this process, called the geometry stage, involves transforming object vertices, or points, in a virtual 3-D space through scaling and rotation calculations. The second stage, rasterization, determines which pixels on the screen each object covers. The third stage, called the fragment stage, calculates the color and textures for those pixels. In the final stage, the image is assembled.
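
As a rough illustration of those four stages, here is a deliberately simplified software pipeline for a single 2-D triangle. Real GPUs do this for millions of triangles in parallel with far richer math; the function names and the flat white “shading” below are just placeholders.

```python
# A deliberately simplified software version of the four pipeline stages,
# for a single 2-D triangle. Real GPUs process millions of vertices in
# parallel; the flat white "shading" here is just a placeholder.
import math

def geometry_stage(vertices, angle, scale):
    # Transform the object's vertices (rotate and scale, in 2-D for brevity).
    c, s = math.cos(angle), math.sin(angle)
    return [((x * c - y * s) * scale, (x * s + y * c) * scale) for x, y in vertices]

def rasterize_stage(triangle, width, height):
    # Determine which pixels the triangle covers (same-sign edge test).
    def edge(a, b, p):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    a, b, c = triangle
    covered = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)
            w0, w1, w2 = edge(b, c, p), edge(c, a, p), edge(a, b, p)
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.append((x, y))
    return covered

def fragment_stage(pixels):
    # Compute a color per covered pixel (real GPUs sample textures and run lighting).
    return {p: (255, 255, 255) for p in pixels}

def assemble_stage(fragments, width, height):
    # Write the shaded fragments into the final image (the frame buffer).
    image = [[(0, 0, 0)] * width for _ in range(height)]
    for (x, y), color in fragments.items():
        image[y][x] = color
    return image

triangle = geometry_stage([(1, 1), (6, 2), (3, 7)], angle=0.0, scale=1.0)
frame = assemble_stage(fragment_stage(rasterize_stage(triangle, 8, 8)), 8, 8)
```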

David Kirk, Nvidia’s chief scientist, wanted to change all this by inventing a true GPU. In February 2001, Nvidia released the GeForce 3, whose programmable shader technology and support for third-party development of its core graphics functions made it the first true GPU.

THE DRIVE TO CONTINUALLY DIVERSIFY Nvidia’s business led the company straight to Apple. Historically, Nvidia had not sold much to Apple, partly because Nvidia optimized its products for Intel-based CPUs, which Apple did not use. But in the early 2000s, Nvidia won a small contract to supply graphics chips for the consumer-oriented iMac G4.

Chris Diskin, who had successfully won Microsoft’s Xbox business, was put in charge of the overall sales relationship with Apple. He worked with Dan Vivoli to figure out a strategy to get Nvidia’s GeForce chips into more Apple computer products. The key break came thanks to an old, iconic Pixar short film.

In a brainstorming meeting during the development of the GeForce 3, Daly believed he had come up with the perfect means of showing off Nvidia’s new chip. Pixar’s two-minute animated short Luxo Jr. had been a watershed moment for computer animation. But Luxo Jr. was Pixar’s property. “That’s fine. Don’t worry about it. I’ll go figure it out,” Vivoli said. Both he and David Kirk had contacts at Pixar. Their request eventually reached Pixar’s chief creative officer, John Lasseter, who had directed Toy Story and A Bug’s Life and would later direct Cars. Lasseter declined the request.

Vivoli thought, “What if we showed the demo to Steve Jobs?” Jobs decided that the Power Mac G4 computer would offer the GeForce 3 as a premium option. Jobs also asked whether Apple could use the demo at the 2001 Macworld in Tokyo. Vivoli told him about the copyright issue, to which Jobs replied that he would check with the people at Pixar.

It wasn’t that ATI was kicking Nvidia’s ass in laptops, as Jobs had thought. It was just that Nvidia had not needed to create a chip specifically for a lower-power laptop model, when a throttled-down version of its flagship line would do the job just fine. Thirty minutes later, Diskin got a call from Apple executive Phil Schiller. “I don’t know what you told Steve, but we need your entire laptop team in here tomorrow for a full day to review your silicon,” he said. Nvidia would go from no presence in Apple laptops to a nearly 85 percent share of Apple’s entire computer lineup in a matter of years.

NVIDIA WAS GOING FROM STRENGTH to strength. It had added one hundred employees from its vanquished rival 3dfx, won an Xbox gaming-console business deal that would go on to generate $1.8 billion in revenue over its lifetime, and secured contracts to make chips for Apple’s Mac computer lines.

In 2000, ATI Technologies acquired the small graphics firm ArtX for $400 million. ATI’s purchase of ArtX gave it instant credibility in the field of console games — and a group of engineers who immediately started working on a chip called the R300.

Nvidia, in the meantime, was caught up in a legal dispute with Microsoft. Nvidia was designing its next chip, which it called the NV30, without access to the upcoming version of Direct3D’s technical specifications.

“NV30 was an architectural disaster. It was an architectural tragedy,” Jensen later said. “The software team, the architecture team, and the chip-design team hardly communicated with each other.” For the first time since the NV1, Nvidia was about to release a card that was not at the very top of the market in terms of performance.

ATI, in contrast, had agreed to sign the contract with Microsoft so that it could optimize the R300 with Direct3D from the start. The chip and the new card that housed it, the Radeon 9700 PRO, worked perfectly and were fully compliant with Microsoft’s latest release of the API.

Compared with the R300, cards based on the NV30 were more expensive, ran hotter, ran games slower, and had an excessively loud fan.

The only thing that saved Nvidia was that its competition did not press its advantage very hard. ATI had decided to peg the price of its R300-based graphics cards at $399, the same price as NV30-based cards. If ATI had cut the price of the R300 aggressively enough, the company could have destroyed demand for the inferior NV30-based cards and likely bankrupted Nvidia.

As Nvidia established itself as the dominant player in the graphics industry, the company’s executives got distracted by its partners, its investors, and its finances. It failed to see the growing problem within its own walls: complacency. The company was almost destroyed because of it.

Jensen would have to evolve into a different kind of leader for Nvidia to succeed in the coming decade.

Nvidia Rising (2002–2013)

The Era of the GPU

ONE OF THE EARLIEST REFERENCES to the technology that would eventually turn Nvidia into a trillion-dollar company was in a PhD thesis about clouds, written by Mark Harris. In 2002, Harris observed that an increasing number of computer scientists were using GPUs, such as Nvidia’s GeForce 3, for nongraphics applications. To do so, they utilized the GeForce 3’s programmable shader technology, originally designed to paint colors for pixels, to perform matrix multiplication.

Using GPUs for nongraphics purposes, however, required a very specific skill set. Researchers had to rely on graphics APIs and shading languages designed for rendering, including OpenGL and Nvidia’s Cg (C for graphics), which was introduced in 2002 to run on the GeForce 3. Sufficiently dedicated programmers such as Harris learned how to “translate” their real-world problems into functions that these languages could execute, and they soon figured out how to use GPUs to make progress in understanding protein folding, determining stock-options pricing, and assembling diagnostic images from MRI scans.
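
The “translation” trick is easier to grasp with a toy example. The NumPy sketch below is not shader code, but it mimics the GPGPU-era mental model: treat the matrices as textures and compute each output element the way a pixel shader computes each pixel, independently and with the same small program.

```python
# A toy NumPy illustration (not real shader code) of the GPGPU-era mental
# model: treat matrices as "textures" and compute each output element the
# way a pixel shader computes each pixel, independently, with one small program.
import numpy as np

def matmul_as_pixel_shader(A, B):
    rows, inner = A.shape
    inner_b, cols = B.shape
    assert inner == inner_b, "inner dimensions must match"
    out = np.zeros((rows, cols))
    # Each (i, j) is one "pixel" of the output texture; on a GPU, every one
    # of these dot products would run in parallel on a shader unit.
    for i in range(rows):
        for j in range(cols):
            out[i, j] = np.dot(A[i, :], B[:, j])
    return out

A = np.arange(6, dtype=float).reshape(2, 3)
B = np.arange(12, dtype=float).reshape(3, 4)
assert np.allclose(matmul_as_pixel_shader(A, B), A @ B)
```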

Harris decided to coin a simpler name for the practice: “general-purpose computing on GPUs,” or GPGPU. His intense interest in GPUs earned him a job at Nvidia, which brought him on to help the company make GPGPU much easier.

In order to generate more demand, Nvidia would have to make its cards easier to program.

Harris learned there was a chip team within Nvidia working on a secret project code-named the NV50, a GPU designed from the outset to run general-purpose code as well as graphics. Nvidia called the programming model for these chips the Compute Unified Device Architecture, or CUDA. New software, rather than new hardware, would transform the company.
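
What CUDA changed is visible even in a tiny example. The sketch below uses Numba’s CUDA bindings (assuming a CUDA-capable Nvidia GPU and the numba package are available; this is an illustration of the programming model, not Nvidia’s own sample code). The programmer writes ordinary-looking code for one thread, and the runtime launches it across thousands of threads, with no shading language required.

```python
# A minimal CUDA-style kernel written with Numba's CUDA bindings (assumes a
# CUDA-capable Nvidia GPU plus the numba package; illustrative only).
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)                 # this thread's global index
    if i < out.shape[0]:             # guard the last, partially filled block
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)

d_x, d_y = cuda.to_device(x), cuda.to_device(y)
d_out = cuda.device_array_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](2.0, d_x, d_y, d_out)

assert np.allclose(d_out.copy_to_host(), 2.0 * x + y)
```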

TWO OF THE MOST IMPORTANT figures in the early development of CUDA were Ian Buck and John Nickolls. Nickolls was the hardware expert. Buck worked on software.

Nvidia would have to do two things: make CUDA available to everyone, and make it applicable to everything. Jensen insisted that they launch CUDA across Nvidia’s entire lineup, including its GeForce line of gaming GPUs, so that it would be widely available for a relatively affordable price.

The company invested so much in converting its GPUs for CUDA compatibility that its gross margin, a measure of its profitability, fell from 45.6 percent in the 2008 fiscal year (covering January 2007 to January 2008) to 35.4 percent in the 2010 fiscal year. As Nvidia increased spending on CUDA, the global financial crisis destroyed consumer demand for high-end electronics as well as corporate demand for GPU-powered workstations. The combined pressures caused Nvidia’s stock price to fall by more than 80 percent between October 2007 and November 2008.

The “Era of the GPU” would create so many opportunities that Jensen saw it as his mission to prepare Nvidia to take advantage of it — even if no one could know exactly what those opportunities would be.

PROFESSOR ROSS WALKER CREATED one of the new use cases for GPUs by bringing GPU acceleration to a biotechnology program called Assisted Model Building with Energy Refinement, or AMBER. The program simulates proteins in biological systems and has become one of the most popular applications used by academics and pharmaceutical companies to research new drugs.

Just as Jensen had predicted, GPUs were making advanced computing far more accessible and cheaper, which in turn made a program such as AMBER far more accessible. And the widespread adoption of AMBER transformed how the entire field of molecular dynamics conducted research.

JENSEN MADE NO APOLOGIES for Nvidia’s aggressive approach to chip sales. In fact, he insisted that salespeople take the same stance with all clients, regardless of size.

JENSEN DOESN’T LIKE DESCRIBING the strategy around CUDA as the building of a “moat.” He prefers to focus on Nvidia’s customers; he talks about how the company has worked to create a strong, self-reinforcing “network” that helps CUDA users.

Today, there are more than 5 million CUDA developers, 600 AI models, 300 software libraries, and more than 3,700 CUDA GPU-accelerated applications. There are about 500 million CUDA-capable Nvidia GPUs in the market.

Nvidia invested heavily in deep learning from the outset, dedicating substantial resources to creating CUDA-enabled frameworks and tools. This proactive approach paid off when artificial intelligence exploded in the early 2020s.

Tortured into Greatness

THE COMPANY THAT CREATED CUDA and opened the way to the era of general-purpose computing on GPUs had much in common with the company founded in a Denny’s booth in 1993. It still prized technical skill and maximum effort above all else.

Jensen had learned that corporate culture tended to atrophy as more people from more locations joined the company, and an atrophying culture could hurt product quality.

Jensen displayed his trademark directness and impatience in all settings.

Jensen’s at-times harsh approach was a deliberate choice. He knew that people would inevitably fail, especially in a high-pressure industry.

“I don’t like giving up on people,” he said. “I’d rather torture them into greatness.”

Do your job. Don’t be too proud of the past. Focus on the future.

JENSEN’S PREFERENCE FOR THE DIRECT approach also shaped Nvidia’s corporate structure as the company grew.

Jensen believed that the traditional corporate pyramid, with an executive suite at the top, multiple layers of middle management in the middle, and a foundation of line workers at the bottom, was antithetical to fostering excellence. “The first layer is the senior people. You would think that they need the least amount of management.” Instead, he focused on providing them collectively with information from across the organization, as well as with his own strategic guidance.

In the 2010s, Jensen had forty executives on his leadership team, or the “e-staff,” each reporting to him. Today the number is more than sixty. The large number of executives in e-staff meetings has fostered a culture of transparency and knowledge sharing. “It turns out that by having a lot of direct reports, not having one-on-ones, [we] made the company flat, information travels quickly, employees are empowered,” Jensen said.

“Strategy is not words. Strategy is action,” he said. “We don’t do a periodic planning system. The reason for that is because the world is a living, breathing thing. We just plan continuously. There’s no five-year plan.”

“It got people thinking about the work and not the organization. The work, not the hierarchy,” Jensen said. Under the “mission is the boss” philosophy, Jensen would start every new project by designating a leader, or a “Pilot in Command” (PIC), who would report directly to Jensen.

After Jensen organized Nvidia’s employees into groups centralized by function — sales, engineering, operations, and so on — they were treated as a general pool of talent and not divided by business units or divisions.

With such a large and distributed organization, Jensen needed to somehow keep tabs on what was going on inside Nvidia in order to make sure everyone had the right priorities. So Jensen asked employees at every level of the organization to send an e-mail to their immediate team and to executives that detailed the top five things they were working on and what they had recently observed in their markets, including customer pain points, competitor activities, technology developments, and the potential for project delays. The ideal top five e-mail is five bullet points where the first word is an action word. Every day, he would read about a hundred Top 5 e-mails to get a snapshot of what was happening within the company. Top 5 e-mails became a source of new market insights.

Jensen would often respond to e-mails within minutes of receiving them and wanted a response from an employee within twenty-four hours at most.

His whiteboarding creates a specific kind of meeting, one dedicated to solving problems, not reviewing things that have already been done. At the conclusion of a meeting, Jensen would summarize the new ideas the group had developed on the whiteboard.

Through mechanisms such as direct public feedback, the Top 5 e-mail, and the requirement to present ideas on a whiteboard rather than as a static PowerPoint, Nvidia equips its workforce with powerful weapons in the constant struggle for accuracy and rigor and against groupthink and inertia.

The Engineer’s Mind

Activist investor Carl Icahn has a theory that much of corporate America mismanages the succession process in choosing new CEOs. He calls it anti-Darwinian. Icahn observed that competent executives often get sidelined in favor of more likeable but less capable ones because of behavioral incentives inside companies. CEOs want to survive. Naturally, then, they prefer not to oversee a direct subordinate who is brighter and could potentially replace them.

Jensen has said many times that he could not do his job effectively without in-depth familiarity with the technology itself. “It’s essential we understand the underpinnings of the technology so you have an intuition for how the industry is going to change,” he once remarked.

If you want to be impactful in a large organization, don’t waste other people’s time.

Nvidia’s extreme work culture stems from the chief executive himself, who lives and breathes his job and looks down on anyone who isn’t as committed.

He lacks sympathy for anyone who works less than he does, and he does not believe that he has missed out on anything in life by giving himself so completely to Nvidia.

Into the Future (2013–Present)

The Road to AI

BY 2005, NVIDIA’S CHIEF SCIENTIST, David Kirk, was considering a change. Professor Bill Dally had nothing left to prove in the field of computer science. Kirk recruited Dally not only to succeed him as chief scientist, an important position with many duties across the company, but also because he knew that Dally could accelerate Nvidia’s development of GPU technology.

SOON AFTER HE STARTED AT NVIDIA, Dally began to redeploy the company’s research teams to work on parallel computing.

One of Dally’s former colleagues at Stanford, the computer science professor Andrew Ng, was collaborating with Google Brain. After leaving Stanford to join Nvidia, Dally kept in touch with Ng. He assigned his Nvidia colleague Bryan Catanzaro to help Ng’s team use GPUs for deep learning. The optimizations allowed Ng and Catanzaro to consolidate the work once performed by two thousand CPUs onto a mere twelve Nvidia GPUs.

Alex Krizhevsky and Ilya Sutskever’s work stoked Jensen’s interest in artificial intelligence. “Deep learning is going to be really big,” he said at an executive team meeting in 2013. “We should go all in on it.”

The first step was to assign significantly more personnel and funding to AI.

Catanzaro turned his GPU-optimization work into a software library Nvidia called CUDA Deep Neural Network, or cuDNN. This became the company’s first AI-optimized library.

But the real task was making bespoke hardware circuits that were optimized for AI. When Nvidia pivoted to AI, its architects were already working on the next generation of GPUs, called Volta. The result was an entirely new type of tiny processor, called the Tensor Core, which was integrated into Volta. In Dally’s words, Tensor Cores were “matrix multiply engines,” made for deep learning, and deep learning alone.

The center of gravity in AI would shift away from Stanford, Toronto, and Caltech and move to start-ups and well-established tech companies alike. Geoffrey Hinton and Fei-Fei Li would end up at Google. Andrew Ng worked as the chief scientist at Baidu. Ilya Sutskever would cofound a deep-learning start-up called OpenAI. The one thing all of them had in common was that in their academic lives, they had used Nvidia GPUs to do their groundbreaking research.

The “Most Feared” Hedge Fund

THOUGH FEW KNOW IT, THE HISTORIES of Nvidia and Starboard Value, perhaps the most famous activist hedge fund in the world, are intertwined.

Jeff Smith is the founder of Starboard. The hedge fund accumulated a stake of 4.4 million shares in Nvidia, worth about $62 million, during the quarter ending in June of 2013.

Before Jensen knew it, Starboard was no longer an investor. But that wasn’t the end of Starboard’s influence on the chip industry, and on Nvidia. In January 2017, Starboard bought an 11 percent stake in Mellanox. In September 2018, Mellanox received a nonbinding purchase offer from an outside company at $102 per share. A bidding war followed, with Intel and Xilinx topping out around $122.50 a share. Nvidia went just a little bit higher, at $125 per share, and won the bidding war on March 7, 2019, with an all-cash offer of $6.9 billion. In May 2024, Nvidia disclosed that the portion of the company that was formerly Mellanox had generated $3.2 billion in quarterly revenue.

Lighting the Future

In 2006, David Luebke became the first hire at a new division called Nvidia Research.

Nvidia had remained at the very forefront of innovation primarily through its operational excellence and strategic discipline.

RAY TRACING IS A TECHNIQUE THAT SIMULATES the behavior of light rays as they bounce off or pass through objects in a virtual scene.

With Jensen convinced that ray tracing was worth pursuing, Luebke next went to an Nvidia GPU engineering team design session. The first step was acquiring start-ups that had specific expertise in ray tracing. Nvidia pursued and bought two: Mental Images, based in Berlin, and RayScale, in Utah.

In just three years, Nvidia Research had transformed from a group that pursued speculative computing projects into a reliable source of new business opportunities for the company.

A key contribution came from an Nvidia team located in Helsinki. They took on the challenge of researching a new specialized ray-tracing processor core that would sit inside Nvidia’s GPUs. Nvidia prepared to release dedicated ray-tracing cores with the next architecture, which would be called Turing.

“Both ray tracing and AI were going to change gaming forever. We knew that this was inevitable.”

It had taken six years of development to build a sufficiently accurate AI model for the frame-generation feature, according to Catanzaro.

The development of DLSS (Deep Learning Super Sampling) and real-time ray tracing reveals how Nvidia came to approach innovation. Both ray tracing and DLSS have become must-have features that developers have incorporated into hundreds of games. And the features perform best on Nvidia graphics cards, making it difficult for AMD to compete effectively.

The Big Bang

Jensen himself has called AI a “universal function approximator” that can predict the future with reasonable accuracy.

The best way to access this universal function approximator, of course, was through Nvidia technology.

The core architecture used in modern large language models is the Transformer, introduced in the 2017 paper “Attention Is All You Need.” Jensen grasped the need to add support for Transformers in Nvidia’s AI offerings almost right away. He instructed his GPU software teams to write a special library for Nvidia Tensor Cores that optimized them for Transformer operations; the library was later called the Transformer Engine.
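
For readers who want to see what a Transformer actually computes, here is a compact NumPy sketch of scaled dot-product attention, the core operation from “Attention Is All You Need.” It is exactly this pattern of large matrix multiplications and softmaxes that Tensor Cores, and later the Transformer Engine library, were built to accelerate; the shapes and random inputs below are arbitrary.

```python
# A compact NumPy sketch of scaled dot-product attention, the core operation
# of the Transformer. Shapes and random inputs are arbitrary; real models add
# multiple heads, masking, and learned projection matrices.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (sequence_length, d_model) matrices.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # similarity of every token pair
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted mix of value vectors

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)    # (4, 8)
```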

When demand for generative AI exploded in 2023, Nvidia was the only hardware manufacturer ready to fully support it.

Its second, but lesser-known, advantage is pricing power.

Today, Nvidia graphics cards cost more than $2,000 apiece.

Though biology is one of the most complex systems, Jensen explained that for the first time in history, it could be digitally engineered.

The only thing that might hinder Nvidia is the set of so-called AI scaling laws, which have three components: model size, computing power, and data.

The Nvidia Way

Hiring raw talent is the first essential component of the Nvidia Way.
