Aug 15, 2024
Japanese soldiers used to refer to World War II as a 'typhoon of steel'. Akio Morita, a young engineer who barely avoided the front lines, felt the same way. Morris Chang, who lived across the East China Sea, spent his childhood fleeing the Japanese armies that swept across China and his teenage years moving from one city to another. On the opposite side of the world, Andy Grove survived multiple invasions of Budapest, suffering at the hands of both Hungary's far-right government, which treated Jews as second-class citizens, and the troops of the Red Army.
The American economists at the War Production Board measured success in terms of copper, iron, rubber, oil, aluminum, and tin as America converted its manufacturing prowess into military power. The United States built more tanks, ships, and planes than all of the Axis powers combined.
World War II drove a surge in the production of tanks and planes, and it also spawned research labs that developed new technologies like rockets and radar. Morris Chang and Andy Grove were schoolboys when the war ended, but Akio Morita was in his early twenties and had spent the final months of the war developing heat-seeking missiles.
William Shockley had long believed that a better "switch" could be built from a peculiar class of materials called semiconductors. Some materials, like the copper in wires, let electric current flow; others, like glass, block it.
Semiconductor materials such as silicon and germanium normally behave like glass. But when "doped" with traces of other elements and subjected to an electric field, current can begin to flow. Doping semiconductor material therefore opened the door to new types of devices that could create and control electric currents. Until the late 1940s, no one could fully explain why pieces of semiconductor material behaved in such puzzling ways.
In 1945, Shockley theorized what he called a "solid state valve" and sketched it in his notebook. Two years later, two of his colleagues at Bell Labs, Walter Brattain and John Bardeen, built a device that let them control the current surging across a piece of germanium, validating Shockley's ideas about semiconductor materials. It was soon christened the 'transistor'.
AT&T, the parent company of Bell Labs, was in the business of telephones, not computers. It saw the transistor as a way to amplify signals, replacing the less reliable vacuum tubes then used in radios and hearing aids.
Not wanting to be outdone by his colleagues, Shockley conceptualized a new type of transistor that could turn a current on and off: a switch. Soon billions of these transistors, working at microscopic scale, would replace the human brain in the task of computing.
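How on/off switches add up to "computing" is easier to see with a toy model. The sketch below is my own illustration, not anything from the text: it treats a transistor as a Boolean switch, builds a NAND gate from it, and composes NAND gates into a half adder, the smallest building block of binary arithmetic.

```python
# Illustrative sketch only: a transistor modeled as an on/off switch,
# and switches composed into logic. All names here are hypothetical.

def nand(a: bool, b: bool) -> bool:
    """Two switches in series pulling the output low behave like a NAND gate."""
    return not (a and b)

# NAND is universal: every other logic function can be built from it.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """Add two bits, returning (sum, carry): the seed of binary arithmetic."""
    return xor_(a, b), and_(a, b)

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            s, c = half_adder(a, b)
            print(f"{int(a)} + {int(b)} -> sum {int(s)}, carry {int(c)}")
```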
In 1955, William Shockley left Bell Labs and established Shockley Semiconductor in Mountain View, California, at the southern end of the San Francisco Bay Area.
The plan was for Shockley to build the world's best transistors. AT&T had offered to license the transistor to other companies for $25,000. Everyone agreed transistors were a clever piece of technology, but their widespread adoption hinged on their ability to outperform vacuum tubes or be manufactured at a lower cost.
In the summer of 1958, Jack Kilby, an engineer at Texas Instruments, set out to reduce the complexity caused by the tangle of wires that transistor-based systems required. Instead of constructing individual transistors from separate pieces of silicon or germanium, his idea was to build multiple components on a single piece of semiconductor material, allowing numerous transistors to be integrated into one silicon or germanium slab. Kilby called it an "integrated circuit", but it also became known as a "chip".
The principles behind transistors were well understood, but producing them consistently posed a significant challenge. Transistors at the time were built as tiny plateaus, or "mesas," rising above the semiconductor surface, and a drawback of the mesa structure was that impurities such as dust could settle on the transistor and interact with the surface materials. Swiss physicist Jean Hoerni recognized that by constructing the entire transistor within, rather than on top of, the silicon, the mesas could be eliminated altogether.
Several months later, Bob Noyce realized that Hoerni's "planar method" could be used to create multiple transistors on a single piece of silicon. Whereas Kilby, working independently, had built a mesa transistor on a germanium base and wired its components together by hand, Noyce used Hoerni's planar process to build multiple transistors on a single chip. Eventually, the "integrated circuits" that Kilby and Noyce had developed came to be known as "semiconductors" or, more simply, "chips."
Initially, Noyce's integrated circuit cost fifty times more than a simpler device built from separate components connected by wires. But everyone agreed that Noyce's invention was clever; all it needed was a market.
Fairchild's first order of Noyce's chips came from NASA, which was on a mission to send astronauts to the moon. The Apollo program's purchases transformed Fairchild from a small startup into a company with 1,000 employees; its sales skyrocketed from $500,000 in 1958 to $21 million just two years later.
Texas Instruments received its first order of chips from MIT's Instrumentation Lab, which planned to use them in the Navy's missile program. The MIT team ended up not using the chips in the missile but found the idea of integrated circuits interesting.
Texas Instruments briefed Defense Department staff on Kilby's invention, and the following year the Air Force Avionics Lab decided to sponsor TI's chip research. Securing the Minuteman II contract transformed Texas Instruments' chip business: the firm began selling thousands of chips amid fears of an American "missile gap" with the Soviet Union.
Jay Lathrop, an MIT graduate, and his assistant James Nall, a chemist, had an idea: a microscope lens makes something small look bigger. But what if they turned the microscope upside down? Could they use the lens to shrink a larger pattern and "print" it onto germanium?
Lathrop coated a block of germanium with photoresist, a light-sensitive chemical from Kodak. Wherever the photoresist was exposed to light, its chemical structure changed, allowing it to be washed away.
Lathrop found that by adding an extremely thin aluminum layer, he could also "print" wires to link the germanium to an external power source. He named the method photolithography, printing with light. The technique let him create transistors significantly smaller than anything achievable before.
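As a rough mental model of the process just described, here is a toy simulation of my own, not from the text, assuming a positive resist in which exposed areas wash away: a mask pattern is "printed" onto a wafer grid, and the surviving resist marks where material stays.

```python
# Toy photolithography model (illustration only). Assumes a positive resist:
# wherever light passes through the mask, the resist washes away.

MASK = [              # '#' blocks light, '.' lets light through
    "######....",
    "######....",
    "....######",
    "....######",
]

def expose_and_develop(mask: list[str]) -> list[str]:
    """Return the wafer after developing: 'R' = resist remains, '_' = washed off."""
    return ["".join("R" if cell == "#" else "_" for cell in row) for row in mask]

for row in expose_and_develop(MASK):
    print(row)
```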
Noyce recognized that the discovery of his MIT classmate Jay Lathrop could revolutionize transistor production, and he promptly hired James Nall, the chemist who had worked alongside Lathrop, to develop photolithography at Fairchild.
Shockley, Bardeen, and Brattain were awarded the Nobel Prize for their invention of the transistor. Jack Kilby later won a Nobel Prize for creating the first integrated circuit; had Bob Noyce still been alive, he would have shared the prize with Kilby.
With these capabilities in hand, both Fairchild and Texas Instruments (TI) entered the mid-1960s with the challenge of transforming chips into a mass market product.
Bob Noyce always envisioned a large civilian market for his chips even though in the early 1960s no such market existed.
In 1965, Gordon Moore predicted that every year for at least the next decade, the number of components that could be fit on a silicon chip would double. That meant that by 1975, integrated circuits would hold sixty-five thousand tiny transistors. This forecast of exponential growth came to be known as Moore's Law, and it proved to be the greatest technological prediction of the century.
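The arithmetic behind the forecast is simple compounding. The sketch below is my own back-of-the-envelope check, with a starting count of roughly 64 components assumed purely for illustration; ten annual doublings from 1965 land almost exactly on the sixty-five-thousand figure.

```python
# Back-of-the-envelope check of annual doubling. The 64-component starting
# point in 1965 is an assumption for illustration; the doubling rule and the
# 1975 endpoint come from the prediction described above.

components = 64
for year in range(1965, 1976):
    print(year, components)
    components *= 2   # Moore: the component count doubles every year

# 64 * 2**10 = 65,536 components by 1975, roughly "sixty-five thousand".
```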
Several top employees of Fairchild decided to join competing chip makers. The reason was that Fairchild, still owned by an East Coast multimillionaire, refused to give its employees stock options. Even though Fairchild paid them well, the employees were financially motivated to join the competition. One departing employee wrote on his exit questionnaire: "I... WANT... TO... GET... RICH."
In the fall of 1959, Anatoly Trutko, a semiconductor engineer from the Soviet Union, moved into a Stanford University dormitory. The same year, the CIA published a report stating that the Soviet Union was only two to four years behind America in the quality and quantity of transistors produced.
In the 1950s, the Soviet Union established new semiconductor facilities across the country and assigned its smartest scientists to build the new industry. Yuri Osokin was tasked with building a circuit with multiple components on the same piece of germanium.
Soviet leader Nikita Khrushchev was determined to outcompete the United States in every sphere, from corn production to satellite launches. In the late 1950s, Joel Barr and Alfred Sarant, two former Soviet spies, began building their first computer. Their work attracted the attention of Alexander Shokin, the official who managed the Soviet electronics industry.
The Soviet government started planning for a semiconductor city on the outskirts of Moscow.
A Soviet student named Boris Malin returned from a year studying in Pennsylvania with a Texas Instruments SN-51 integrated circuit in his luggage. After examining it under a microscope, Shokin ordered his engineers to "copy it".
In 2000, Jack Kilby received the Nobel Prize in Physics for his invention of the integrated circuit. He shared the award with Zhores Alferov, a Russian scientist who had conducted significant research in the 1960s on how semiconductor devices could generate light.
Shokin's "copy it" strategy was flawed. Copying worked in building nuclear weapons, because there were a very limited number of nuclear weapons that were built. When it came to building chips, that wasn't a sound strategy because in the U.S. TI and Fairchild were mass-producing it.
The USSR was good at quantity but not at quality or purity, both of which were crucial to high-volume chipmaking. The recipe for building a chip is extraordinarily complicated. Just as stealing a cake doesn't give you the recipe for baking it, stealing a chip didn't explain how it was made.
While Silicon Valley's startup founders gained hands-on experience by switching jobs and working on the factory floor, Shokin made decisions from his ministerial desk in Moscow. Because chip technology moved so quickly, stealing last year's design was a hopeless strategy. The "copy it" strategy of the Soviet leaders condemned them to backwardness.
Integrated circuits did not just connect electronic components in novel ways; they also tied nations into a network with the United States at its center. In 1946, amid his nation's ruins, Morita joined forces with former colleague Masaru Ibuka to establish an electronics company, which they eventually named Sony. Their first device, an electric rice cooker, turned out to be a dud, but the tape recorder they made next worked and sold better.
In 1953, John Bardeen travelled to Tokyo. That same year, Akio Morita departed Haneda Airport bound for New York, where he met AT&T executives who granted him a license to produce the transistor. They predicted that the technology's most useful application would be hearing aids.
Sony enjoyed the advantage of cheaper labor costs in Japan, but their success was primarily driven by their focus on innovation, product design, and marketing strategies. Morita's "license it" approach was in stark contrast to the "copy it" tactics employed by Soviet Minister Shokin, highlighting a significant difference in their strategies.
Sony excelled by identifying emerging markets and targeting them with remarkable products that incorporated Silicon Valley's cutting-edge circuit technology. Its first major success was the transistor radio, the product Japanese Prime Minister Ikeda would later give to de Gaulle.
Sony's expertise lay in devising consumer products, not in designing chips. Calculators were another example of a consumer device transformed by Japanese firms. Yet replicating Sony's product innovation and marketing prowess proved just as challenging as replicating America's semiconductor expertise. The semiconductor symbiosis between the United States and Japan was a delicate balance: each country depended on the other for supplies and customers, creating a complex interdependence.
TI (Texas Instruments) was the first foreign chip maker to open a plant in Japan, and this was made possible thanks to the efforts of Sony's Morita.
Designing semiconductors was mostly done by men, while the job of assembling them fell to women. Chip companies employed women because they could be paid lower wages and were less likely than men to demand improved working conditions. Production managers also believed that women's smaller hands made them more adept at assembling and testing finished semiconductors.
Fairchild eventually opened facilities in Maine and on a Navajo reservation in New Mexico that offered tax incentives. Bob Noyce had invested in a radio assembly factory in Hong Kong where wages were 25 cents an hour, a tenth of the average American wage. In the 1960s, Taiwanese workers made 19 cents an hour, Malaysians 15 cents, Singaporeans 11 cents, and South Koreans only a dime.
Fairchild rented space in a sandal factory in Hong Kong. It continued to make its silicon wafers in California, but final assembly was done in the Hong Kong factory.
In the early bombing campaigns in Vietnam, over eight hundred thousand tons of bombs were dropped. They had little impact on the North Vietnamese army because most of them missed their targets. The Sparrow III air-to-air missiles carried by U.S. fighter jets in Vietnam relied on hand-soldered vacuum tubes. The humid climate of Southeast Asia, the stress of takeoffs and landings, and the intensity of fighter combat frequently caused them to fail.
The military granted Texas Instruments a nine-month period and $99,000 to produce a laser-guided bomb. On May 13, 1972, U.S. aircraft deployed twenty-four of the bombs on the Thanh Hoa Bridge, which had remained intact until that day. A basic laser sensor and a few transistors transformed a weapon with a 0-for-638 hit ratio into a precision-destruction instrument.
Taiwan and the U.S. had been treaty allies, but America's impending defeat in Vietnam made its promises look shaky and was a great cause of concern for the Taiwanese government. As the war dragged on, the U.S. reduced economic aid to its Asian allies, including Taiwan, a foreboding sign for a nation so reliant on American backing.
Americans who were not concerned about protecting Taiwan might be more inclined to defend Texas Instruments. The more semiconductor plants on the island, and the stronger the economic ties with the United States, the more secure Taiwan would be.
In 1968, TI's board of directors approved construction of the new facility in Taiwan. And by 1980, it had shipped its billionth unit.
From South Korea to Taiwan, Singapore to the Philippines, a map of semiconductor assembly facilities closely resembled a map of American military bases across Asia.
Texas Instruments still has facilities in Taiwan, but since then Taiwan has made itself an irreplaceable partner to Silicon Valley.
Noyce and Moore left Fairchild just as swiftly as they had departed Shockley's startup a decade earlier, and went on to establish Intel, which stands for Integrated Electronics.
Two years after its founding, Intel introduced its first product, a chip known as a dynamic random access memory, or DRAM for short. Prior to the 1970s, computers typically relied on magnetic cores, rather than silicon chips, to store data. These cores were made up of a collection of small metal rings connected by a network of wires. When a ring was magnetized, it represented a 1 for the computer, while a non-magnetized ring symbolized a 0.
A DRAM chip functioned similarly to the older magnetic core memories, utilizing electric currents to store 1s and 0s. However, instead of employing wires and rings, DRAM circuits were etched into silicon. As they didn't need to be manually woven, they experienced fewer malfunctions and could be produced at a significantly smaller size.
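Whatever the physical mechanism, core or DRAM, a memory chip's job is the same: an addressable grid of cells, each holding a 1 or a 0. The toy class below is my own sketch of that idea; the periodic "refresh" nods to the general fact that real DRAM cells leak charge and must be rewritten, a detail not covered in the text above.

```python
# Minimal sketch (illustration only) of what any bit-storage device provides.

class ToyDRAM:
    def __init__(self, size: int):
        self.cells = [0] * size            # each cell stores a single bit

    def write(self, address: int, bit: int) -> None:
        self.cells[address] = bit & 1      # charge (1) or discharge (0) the cell

    def read(self, address: int) -> int:
        return self.cells[address]

    def refresh(self) -> None:
        # Real DRAM charge leaks away; periodically rewriting every cell
        # keeps the data intact. Here it is just a stand-in rewrite.
        for addr, bit in enumerate(self.cells):
            self.write(addr, bit)

mem = ToyDRAM(8)
mem.write(3, 1)
print(mem.read(3))   # -> 1
```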
Intel aimed to dominate the DRAM chip market. Memory chips don't require customization, which means chips with the same design can be used in many different devices and can therefore be mass-produced. By contrast, the other primary type of chip, the logic chips responsible for "computing" rather than "remembering", were custom-designed for each device, because every computing problem was different.
A calculator operated differently than a missile's guidance computer, for instance, so until the 1970s, they utilized different logic chips. This specialization increased costs, leading Intel to concentrate on memory chips, where mass production would lead to economies of scale.
Intel wasn't the first company to consider manufacturing a generalized logic chip. A defense contractor had previously developed a chip similar to Intel's for the computer on the F-14 fighter jet. However, the existence of that chip remained undisclosed until the 1990s.
Intel, on the other hand, introduced a chip called the 4004 and promoted it as the world's first microprocessor - a "micro-programmable computer on a chip," as the company's marketing campaign proclaimed. This chip could be employed in various devices and sparked a revolution in computing.
America had been stunned by Sputnik and the Cuban Missile Crisis, but it wasn't until the early 1970s that the Soviets amassed a stockpile of intercontinental ballistic missiles large enough to ensure that their nuclear weapons could survive a U.S. nuclear strike and retaliate with a devastating atomic attack of their own.
Even more concerning, the Soviet army boasted a far greater number of tanks and planes, which were already positioned in potential conflict zones in Europe. The U.S., under domestic pressure to reduce military spending, was unable to match this.
Strategists like Andrew Marshall recognized that the only answer to the Soviet quantitative advantage was to build superior-quality weapons. He called for "rapid information gathering," "sophisticated command and control," and "terminal guidance" for missiles: munitions that could hit targets with near-perfect precision.
Additionally, unlike when integrated circuits were initially invented, the chip industry had become less centered on military production. Only consumer markets possessed the volume necessary to finance the immense R&D programs that Moore's Law necessitated.
Silicon Valley believed it stood at the pinnacle of the global tech industry, but after two decades of rapid expansion, it now confronted a critical challenge: fierce competition from Japan.
Morita's transistor radios were the first notable threat to American economic dominance, and their success emboldened Morita and his Japanese peers to aim even higher. American industries, from automobiles to steel, were facing intense competition from Japan.
By the 1980s, consumer electronics had become a Japanese forte, with Sony leading the charge in introducing new consumer products and seizing market share from American competitors. Initially, Japanese companies succeeded by copying U.S. competitors' products, producing them at a higher quality and lower price. Some Japanese emphasized the notion that they excelled at implementation, while America was superior at innovation.
In 1979, Sony launched the Walkman, a portable music player that revolutionized the music industry, incorporating five of the company's cutting-edge integrated circuits in each device.
After securing a position in sales and marketing at Fairchild Semiconductor, Jerry Sanders worked with Noyce, Moore, and Andy Grove. Eventually, the trio left the company to establish Intel.
Following the establishment of his own chip company, AMD, in 1969, Sanders was embroiled in a prolonged legal battle with Intel for the next three decades, primarily over intellectual property disputes.
In a covert exchange, Hitachi employee Jun Naruse received a badge from a "consultant" at a company called Glenmar, in return for handing over an envelope of cash. This company, Glenmar, pledged to assist Hitachi in acquiring industrial secrets.
However, Glenmar was revealed to be a front company, as its employees were actually undercover FBI agents.
During the mid-1980s, when Toshiba, a Japanese industrial conglomerate, was a leading DRAM producer globally, the company faced numerous accusations, which were later confirmed to be true, that it had sold machinery to the Soviets, enabling them to construct submarines with reduced noise levels. Although there was no direct connection between Toshiba's deal with the Soviets involving submarine technology and its semiconductor business, many Americans perceived the submarine case as additional proof of unethical practices by Japanese companies.
The documented cases of unlawful Japanese industrial espionage were relatively few. This could be interpreted in two ways: either that stealing secrets was not a significant factor in Japan's success or that Japanese companies were adept at covert intelligence gathering. While infiltrating competitors' facilities was illegal, monitoring rivals was considered a standard practice in Silicon Valley. Additionally, it was common in Silicon Valley to accuse competitors of stealing employees, ideas, and intellectual property. Monitoring and imitating competitors were integral to the business model of Silicon Valley.
But was Japan's strategy any different from Silicon Valley's? Japanese firms had access to the U.S. market, yet Silicon Valley companies struggled to gain market share in Japan. Until 1974, Japan had imposed quotas limiting the number of semiconductor chips that American companies were allowed to sell in its market.
Even after the quotas were removed, Japanese companies continued to purchase a limited number of semiconductor chips from Silicon Valley. This was despite the fact that Japan accounted for a quarter of global semiconductor consumption, with companies such as Sony incorporating these chips into their televisions and VCRs, which were then sold worldwide.
Japan's government also provided financial support to its domestic semiconductor manufacturers through subsidies. In contrast to the U.S., where antitrust laws discouraged collaboration among chip manufacturers, Japan's government actively encouraged cooperation between companies. In 1976, the Japanese government established the VLSI Program, a research consortium, and provided approximately half of its budget. This program aimed to promote collaboration and innovation in the semiconductor industry.
Jerry Sanders, a prominent figure in the semiconductor industry, viewed the high cost of capital as the primary disadvantage for Silicon Valley in comparison to its competitors. He mentioned that Japanese companies could access capital at interest rates of around 6 to 7 percent, while he had to pay a staggering 18 percent interest on a good day. Japanese firms had higher levels of debt compared to their American counterparts, yet they were able to borrow at lower interest rates.
During the early 1980s, Japanese companies invested 60 percent more in production equipment than their U.S. competitors, despite facing intense competition and minimal profits in the industry. Japanese chip manufacturers continued to invest and produce, steadily increasing their market share.
As a result of this continuous investment and production, five years after the introduction of the 64K DRAM chip, Intel, the company that had pioneered DRAM chips, saw its global market share plummet to a mere 1.7 percent, while Japanese competitors' share surged. In 1985, Japanese companies accounted for 46 percent of global capital expenditure on semiconductors, while American firms accounted for 35 percent.
The Japanese chip manufacturers argued that their actions were not unfair, as American semiconductor companies also received significant support from the government, particularly through defense contracts.
Lithography had become a big business, and at the start of the 1980s, GCA was at the top. As transistors shrank, the lithography process grew more complex, with each component, from the chemicals and lenses to the lasers that aligned the silicon wafers with the light source, becoming more challenging to manage.
The world's top lens manufacturers were Germany's Carl Zeiss and Japan's Nikon, although the U.S. also had a number of specialized lens makers. Perkin Elmer, a small manufacturer located in Norwalk, Connecticut, had a history of producing bombsights for the U.S. military during World War II and lenses for Cold War satellites and spy planes.
In the late 1970s, Perkin Elmer's scanner dominated the lithography market; however, by the 1980s, it was overtaken by GCA. As the Japanese chip industry grew, GCA started to lose its competitive advantage.
Before IBM switched to Nikon steppers, the company expected each machine to run for seventy-five hours before requiring downtime for adjustments or repairs. Nikon's customers reported roughly ten times that much uninterrupted use.
When Jerry Sanders referred to chips as "crude oil," the Pentagon understood his analogy, as it emphasized the importance of semiconductors in the modern world. Chips were considered even more strategic than petroleum due to their critical role in various industries and technologies.
Pentagon officials were well aware of the critical importance of semiconductors in maintaining American military superiority. Utilizing semiconductor technology to "offset" the Soviet conventional advantage during the Cold War had been a key American strategy since the mid-1970s, when Bob Noyce's singing partner Bill Perry managed the Pentagon's research and engineering division.
In 1986, Japan surpassed the United States in the number of chips produced. By the end of the 1980s, Japan was supplying 70 percent of the world's lithography equipment, while the United States' share in an industry invented by Jay Lathrop in a U.S. military lab had declined to 21 percent.
Japan's military spending was capped at around 1 percent of its GDP by Tokyo, aiming to reassure its neighbors who vividly recalled the country's wartime expansionism. However, as Japan didn't allocate a significant portion of its budget to military spending, it had more funds to invest in other areas. The U.S. spent five to ten times more on defense relative to the size of its economy, while Japan concentrated on expanding its economy, leaving the responsibility of defending it to the United States.
Bob Noyce testified to Congress in support of reducing the capital gains tax from 49 percent to 28 percent and advocated for the relaxation of financial regulations to allow pension funds to invest in venture capital firms. Following these changes, a surge of funds poured into the venture capital firms located on Palo Alto's Sand Hill Road.
Congress strengthened intellectual property protections through the Semiconductor Chip Protection Act after Silicon Valley executives, such as Intel's Andy Grove, testified to Congress that legal copying by Japanese firms was undermining America's market position.
In 1986, to avoid potential tariffs, Washington and Tokyo reached an agreement. Japan's government consented to implement quotas on its exports of DRAM chips, thereby limiting the number of chips sold to the U.S. The agreement reduced the supply of DRAM chips, increasing prices globally, which negatively affected American computer producers, who were major customers of Japan's chips. Higher prices ultimately benefited Japan's producers, allowing them to maintain their dominance in the DRAM market. Despite the trade deal, most American producers were already in the process of exiting the memory chip market. As a result, only a few U.S. firms continued to produce DRAM chips.
One of Silicon Valley's complaints was that Japan's government assisted firms in coordinating their R&D efforts and provided funding for this purpose. As a result, many people in America's high-tech industry believed that Washington should adopt similar strategies.
In 1987, a consortium called Sematech was established by a group of leading chipmakers and the Defense Department. It was funded half by the industry and half by the Pentagon. Fifty-one percent of Sematech funding was allocated to American lithography firms. Bob Noyce explained the rationale, stating that lithography received half the money because it represented "half the problem" faced by the chip industry.
In 1990, Bob Noyce, the greatest advocate for GCA at Sematech, passed away from a heart attack after his morning swim. He had built Fairchild and Intel, invented the integrated circuit, and commercialized the DRAM chips and microprocessors that form the foundation of modern computing. However, lithography remained resistant to Noyce's transformative influence.
In 1993, GCA's owner, General Signal, announced that it would either sell or close the company. Eventually, GCA was shut down and its equipment was sold off, becoming another casualty of Japanese competition.
According to Morita, Japan had focused on producing engineers, while the United States was preoccupied with producing lawyers. American executives were overly concerned with short-term profits, while Japanese management was dedicated to long-term success. American labor relations were characterized by an outdated hierarchical structure, with inadequate training and motivation for workers on the shop floor.
Would a top-tier technological powerhouse like Japan be content with a second-rate military standing? If Japan's triumph in DRAM chips served as a predictor, it appeared poised to surpass the United States in practically every significant industry. Why wouldn't it pursue military supremacy as well?
If this were to happen, how would the U.S. respond? In 1987, the CIA assigned a group of analysts to predict the future of Asia. They viewed Japan's dominance in semiconductors as evidence of an emerging "Pax Nipponica," an East Asian economic and political bloc led by Japan.
American influence in Asia was founded on technological superiority, military strength, and trade and investment connections that united Japan, Hong Kong, South Korea, and Southeast Asian countries. Since the first Fairchild assembly plant on Kowloon Bay in Hong Kong, integrated circuits have been a vital part of America's position in Asia. U.S. chipmakers established facilities across Taiwan, South Korea, and Singapore.
These regions were safeguarded from Communist invasions not only by military force but also by economic integration, as the electronics industry drew the region's rural population away from farms, where poverty often fueled guerrilla opposition, into well-paying jobs assembling electronic devices for American consumers.
As the U.S. chip industry grappled with Japan's challenge, cowboy entrepreneurs like Jack Simplot played a crucial role in reversing what Bob Noyce had referred to as a "death spiral" and achieving a surprising turnaround. The U.S. surpassed Japan's DRAM giants not by imitating them, but by innovating around them.
Instead of isolating itself from trade, Silicon Valley outsourced even more production to Taiwan and South Korea in order to regain its competitive edge. Meanwhile, as the U.S. chip industry recovered, the Pentagon's investment in microelectronics began to yield results as it introduced new weapon systems that no other country could rival. America's unrivaled power during the 1990s and 2000s was a result of its renewed dominance in computer chips, the key technology of that era.
Facing intense competition from Japan, AMD, National Semiconductor, Intel, and other industry leaders gave up DRAM production, and most other U.S. DRAM producers were driven out of the market in the late 1980s. Micron, despite entering the DRAM market when Japanese competition was at its peak, managed to survive and eventually flourish.
TI continued producing DRAM chips but struggled to make a profit, eventually selling its operations to Micron. Micron learned to match Japanese rivals like Toshiba and Fujitsu on the storage capacity of each DRAM generation while beating them on cost. Like the rest of the DRAM industry, Micron's engineers pushed the boundaries of physics as they developed ever denser DRAM chips to meet the memory requirements of personal computers.
Professor Clayton Christensen was renowned for his theory of "disruptive innovation," in which a new technology displaces established firms. Intel, once synonymous with innovation, was now the one being disrupted: Grove realized that the DRAM business was in decline. Yet the idea of leaving the DRAM market seemed inconceivable. Intel had pioneered memory chips, and acknowledging defeat would be humiliating.
In 1980, Intel secured a small contract with IBM, the American computer giant, to supply chips for a new product called a personal computer. IBM enlisted a young programmer named Bill Gates to provide the computer's operating system.
On August 12, 1981, against the backdrop of the Waldorf Astoria's grand ballroom with its ornate wallpaper and heavy drapes, IBM unveiled its personal computer, priced at $1,565 for a bulky computer, a large monitor, a keyboard, a printer, and two diskette drives. It featured a small Intel chip within.
In the end, Intel decided to abandon memory chips, ceding the DRAM market to the Japanese and concentrating on microprocessors for personal computers. The first step in Grove's restructuring plan was to lay off over 25% of Intel's workforce and close facilities in Silicon Valley, Oregon, Puerto Rico, and Barbados.
Intel’s new manufacturing method was named “copy exactly.” Once Intel identified the most effective production processes, they were replicated in all their other facilities. Intel's yields increased significantly, while its manufacturing equipment was used more efficiently, resulting in reduced costs. Each of the company's plants began to operate less like a research lab and more like a finely tuned machine.
Grove and Intel also experienced some luck. Some of the structural factors that had favored Japanese producers in the early 1980s started to change. Between 1985 and 1988, the value of the Japanese yen doubled against the dollar, making American exports more affordable. Interest rates in the U.S. dropped significantly over the 1980s, lowering Intel's capital costs.
Meanwhile, Texas-based Compaq Computer made inroads into IBM's PC market, driven by the realization that while writing operating systems or building microprocessors was challenging, assembling PC components into a plastic box was relatively simple. Compaq introduced its own PCs using Intel chips and Microsoft software, priced significantly lower than IBM's PCs. Intel entered the personal computer era with a near-monopoly on chip sales for PCs.
Lee Byung-Chul's initial products were dried fish and vegetables, which he gathered in Korea and shipped to northern China to support Japan's war efforts. He would transform Samsung into a semiconductor superpower with the help of two influential allies: America's chip industry and the South Korean government.
A crucial element of Silicon Valley's strategy to outmaneuver the Japanese was to find more cost-effective supply sources in Asia, and Lee decided that Samsung could easily fill this role. Samsung, which Lee had founded seven years earlier, could have been destroyed in 1945 following Japan's defeat by the United States. However, Lee skillfully shifted focus, changing political patrons as easily as he had once sold dried fish.
In the early 1980s, Lee perceived a shift in the environment. Contemplating Samsung's future, he traveled to California in the spring of 1982, visited Hewlett-Packard's facilities, and came away impressed by the company's technology. South Korea's government, meanwhile, signaled that it was willing to offer financial assistance: it had pledged to invest $400 million to develop the country's semiconductor industry, and Korea's banks would follow the government's lead and lend millions more.
Similar to Japan, Korea's tech companies emerged not from garages, but from enormous conglomerates with access to low-cost bank loans and government backing. In February 1983, after a restless, sleepless night, Lee picked up the phone, called the head of Samsung's electronics division, and declared: "Samsung will make semiconductors." He bet the company's future on semiconductors and was prepared to spend at least $100 million, he declared.
However, Samsung's all-in bet on chips wouldn't have succeeded without support from Silicon Valley. The most effective strategy to handle international competition in memory chips from Japan, Silicon Valley speculated, was to find an even more cost-effective source in Korea, while focusing America's R&D efforts on higher-value products rather than commoditized DRAMs.
U.S. chipmakers, therefore, saw Korean upstarts as potential partners. “With the Koreans around,” Bob Noyce told Andy Grove, Japan’s strategy of “dump no matter what the costs” wouldn’t succeed in monopolizing the world’s DRAM production, as the Koreans would undercut Japanese producers. The outcome would be "deadly" to Japanese chipmakers, Noyce predicted.
Additionally, Korea's costs and wages were significantly lower than Japan's, so Korean companies like Samsung had a chance at gaining market share even if their manufacturing processes were not as perfectly optimized as the extremely efficient Japanese. Lee suggested licensing a design for a 64K DRAM from Micron, the financially struggling memory chip startup, and in the process, befriended its founder Ward Parkinson. The Idahoans, in search of any funds they could acquire, eagerly agreed, even if it meant Samsung would learn many of their processes.
Chip design itself was becoming a bottleneck: while pencils and tweezers were suitable tools for an integrated circuit with a thousand components, something more advanced was required for a chip with a million transistors. Caltech's Carver Mead and Xerox PARC's Lynn Conway eventually developed a set of mathematical "design rules," paving the way for computer programs to automate chip design.
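What a "design rule" looks like in practice can be sketched in a few lines of code. The example below is hypothetical (the 3-unit spacing value and the data layout are invented for illustration), but it shows the kind of purely geometric check that let programs, rather than people with pencils, verify a layout.

```python
# Hypothetical design-rule check: wire rectangles must sit at least
# MIN_SPACING units apart. Values and structure are invented for illustration.

from dataclasses import dataclass

MIN_SPACING = 3

@dataclass
class Rect:
    x1: int
    y1: int
    x2: int
    y2: int   # an axis-aligned wire segment

def spacing(a: Rect, b: Rect) -> int:
    """Separation between two rectangles (0 if they touch or overlap).
    Uses the larger of the horizontal/vertical gaps, a deliberate simplification."""
    dx = max(a.x1 - b.x2, b.x1 - a.x2, 0)
    dy = max(a.y1 - b.y2, b.y1 - a.y2, 0)
    return max(dx, dy)

def check_layout(wires: list[Rect]) -> list[tuple[int, int]]:
    """Return index pairs of wires that violate the spacing rule."""
    violations = []
    for i in range(len(wires)):
        for j in range(i + 1, len(wires)):
            if spacing(wires[i], wires[j]) < MIN_SPACING:
                violations.append((i, j))
    return violations

layout = [Rect(0, 0, 10, 2), Rect(0, 4, 10, 6), Rect(0, 20, 10, 22)]
print(check_layout(layout))   # -> [(0, 1)]: the first two wires are only 2 units apart
```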
Utilizing Conway and Mead's approach, designers no longer needed to map out the position of each transistor. Instead, they could utilize a library of "interchangeable parts" made possible by their innovative method. No one was more intrigued by what soon became known as the "Mead-Conway Revolution" than the Pentagon.
DARPA funded a program that allowed university researchers to send chip designs to be manufactured at state-of-the-art fabs. Despite its reputation for funding futuristic weapons systems, when it came to semiconductors, DARPA focused on building educational infrastructure to ensure an ample supply of chip designers in America. Additionally, DARPA assisted universities in acquiring advanced computers and organized workshops with industry officials and academics to discuss research problems over fine wine. DARPA reasoned that assisting companies and professors in maintaining Moore's Law was vital to America's military edge.
Members of Congress would likely have been furious had they discovered that DARPA, ostensibly a defense agency, was wining and dining computer science professors as they theorized about chip design. However, it was initiatives like these that shrank transistors, found new applications for semiconductors, attracted new customers, and funded the subsequent generation of smaller transistors.
By the end of the 1980s, a chip with a million transistors — unimaginable in the early 1970s, when Lynn Conway had arrived in Silicon Valley — became a reality, as Intel unveiled its 486 microprocessor, a small piece of silicon containing 1.2 million microscopic switches.
In 1963, the same year the USSR established Zelenograd, the KGB set up a new division, Directorate T, for teknologia. Its mission, as a CIA report cautioned, was to obtain advanced Western equipment and technology in order to improve the Soviet Union's capacity to produce integrated circuits and keep pace during the Cold War.
After Intel commercialized the microprocessor, Soviet Minister Shokin shut down a research unit that was working on a similar device. The focus shifted instead to replicating American microprocessors in an effort to keep up with advancing technology. The USSR's "copy it" strategy proved less successful than anticipated.
The USSR managed to acquire Intel's latest chips through theft and by diverting shipments to shell companies in neutral countries like Austria and Switzerland, obtaining advanced technology without raising suspicion. But stealing chip designs was the easy part; the real challenge was producing those chips at scale inside the Soviet Union.
In the early Cold War, the USSR had some success manufacturing copied designs in volume, but by the 1980s this had become nearly impossible. As Silicon Valley fit ever more transistors onto each sliver of silicon, the manufacturing processes grew so complex that the Soviets could no longer keep up.
The KGB believed its theft of chip designs gave the Soviet semiconductor industry a significant advantage. But possessing a copy of a new chip did not mean Soviet engineers could replicate it, let alone mass-produce it.
Soviet ballistic missiles were designed to follow a predetermined flight path to their targets, with the guidance computer making adjustments if the missile deviated from its intended trajectory. In the United States, missile technology advanced significantly during the 1980s: American missiles could calculate their own path to the target rather than follow a preprogrammed route.
By the mid-1980s, the United States' new MX missile boasted an impressive level of accuracy, with public estimates suggesting it could land within 364 feet of its target in 50% of its launches. According to a former Soviet defense official, the Soviet Union's SS-25 missile, which was a comparable counterpart to the American MX missile, had an average accuracy of landing within 1200 feet of its target.
Apart from land-based missiles, the Soviet military had two other ways to deliver a nuclear attack on the United States: long-range bombers and missile submarines. Bomber fleets were considered the least effective method, because radar could detect the planes soon after takeoff.
The United States' nuclear missile submarines were considered practically undetectable and, as a result, virtually invincible. The Soviet Union's nuclear submarines were less secure compared to their American counterparts, as the United States was making significant advancements in submarine detection technology.
By the early 1980s, the United States openly acknowledged the integration of its submarine sensors with the Illiac IV, one of the most powerful supercomputers at the time. This advanced computing system, which utilized semiconductor memory chips developed by Fairchild, greatly enhanced the accuracy of U.S. submarine detection capabilities. These systems were connected via satellite to an array of sensors on various platforms, including ships, planes, and helicopters, to track Soviet submarines. This sophisticated technology made the Soviet submarines highly vulnerable to American detection.
On January 17, 1991, in the early morning hours, the first wave of American F-117 stealth aircraft launched from airbases in Saudi Arabia, bound for Baghdad. The Paveway laser-guided bombs that struck Baghdad's telephone exchange during the Gulf War used essentially the same basic design as the first-generation Paveways that had destroyed the Thanh Hoa Bridge back in 1972.
During the Gulf War, aircraft equipped with laser guidance for their bomb strikes achieved a thirteenfold increase in target hits compared to similar planes not using guided munitions. U.S. airpower played a critical role in the outcome of the Persian Gulf War, as it inflicted massive damage on Iraqi forces while keeping American casualties to a minimum.
Weldon Word was honored for his significant contributions to the Paveway laser-guided bomb system, including his innovations in electronics and cost reduction. The impact of Paveway laser-guided bombs and Tomahawk missiles during the Gulf War was not only felt on the battlefield in Baghdad but also resonated strongly in Moscow.
In 1990, Japan's financial markets crashed, tipping the country into a severe recession. The Tokyo stock market fell to half its previous value, and Tokyo real estate prices dropped even more dramatically. Japan's once-heralded economic miracle appeared to grind to a halt, while the United States enjoyed a resurgence both in business and on the global stage.
Despite the growing competition from lower-cost producers like Micron and Samsung, Japan's major semiconductor companies continued to focus on DRAM production. Japanese DRAM manufacturers could have benefited from adopting the strategic mindset of individuals like Andy Grove, who was known for his paranoia-driven approach to business, or Jack Simplot, who had valuable insights into the volatility of commodity markets.
Japan's major semiconductor companies all focused heavily on the DRAM market. This collective decision led to intense competition and a lack of differentiation among the firms, ultimately resulting in a situation where few of them were able to achieve significant financial success.
In 1981, a mid-ranking factory manager at Toshiba named Fujio Masuoka invented a new type of memory chip that was different from the commonly used DRAM. This new chip, unlike DRAM, could retain data even after it was powered off. Toshiba ignored this discovery and it was Intel that ultimately brought this new memory technology, commonly referred to as "flash" or NAND, to market.
One of the most significant errors made by Japan's semiconductor companies was their failure to recognize the growing importance of personal computers in the global market. None of them managed to replicate Intel's strategic success; only NEC made a significant attempt to focus on microprocessors and the PC ecosystem.
In 1993, the United States reclaimed its position as the global leader in semiconductor shipments. By 1998, South Korean companies had surpassed Japan as the global leaders in the production of DRAM. During this period, Japan's market share in the DRAM industry experienced a significant decline, falling from 90 percent in the late 1980s to just 20 percent by 1998.
In 1985, Morris Chang was appointed by the Taiwanese government to head the country's leading electronics research institute. At the time, Taiwan was a prominent player in the Asian semiconductor industry, specializing in the assembly of semiconductor devices. This process involved importing chips from other countries, testing them, and then attaching them to plastic or ceramic packages for use in electronic devices.
Despite having a significant number of jobs in the semiconductor industry, Taiwan's share of the profits was relatively small during the mid-1980s. This was primarily due to the fact that the majority of the money in the chip industry was generated by companies involved in the design and production of cutting-edge, high-performance chips.
Morris Chang left Texas Instruments after more than two decades with the company, following his unsuccessful bid to become CEO. Had he won that position, he would have stood at the very top of the semiconductor industry, on par with legendary figures such as Bob Noyce and Gordon Moore. So when the government of Taiwan approached him with the opportunity to lead the country's semiconductor industry and offered a blank check to fund his plans, he found the proposition intriguing. At the age of fifty-four, he took up the challenge.
During his time at Texas Instruments in the mid-1970s, Morris Chang had already been considering the idea of establishing a semiconductor company that would focus on manufacturing chips designed by its customers. The concept of separating chip design and manufacturing had been under consideration in Taiwan for a number of years before Minister K. T. Li approached Morris Chang with the offer to lead the country's semiconductor industry.
Minister K. T. Li fulfilled his commitment to Morris Chang by securing the necessary funding for the business plan that Chang had developed. As part of its commitment to support the establishment of TSMC, the Taiwanese government contributed 48% of the startup capital for the company. In exchange for this investment, the government required that Morris Chang secure a partnership with a foreign semiconductor firm to provide advanced production technology.
Despite being initially rejected by his former colleagues at Texas Instruments and Intel, Morris Chang was able to convince Philips, a Dutch semiconductor company, to invest in TSMC. Philips contributed $58 million, transferred its production technology, and licensed intellectual property in exchange for a 27.5% stake in the new company.
In addition to its initial investment in TSMC, the Taiwanese government also provided the company with generous tax benefits. From its inception, TSMC was not entirely a private business but rather a project supported and facilitated by the Taiwanese state.
Morris Chang, as the head of TSMC, made a commitment to focus exclusively on the manufacturing of semiconductor chips, rather than designing them. TSMC's business model was built on the principle of not directly competing with its customers. The company's success was closely tied to the success of its customers.
In the same year of 1987 when Morris Chang established TSMC, a relatively unknown engineer named Ren Zhengfei founded an electronics trading company called Huawei, located several hundred miles to the southwest. In Shenzhen, Ren Zhengfei engaged in purchasing inexpensive telecommunications equipment from Hong Kong and subsequently resold it at a higher price throughout China.
In 1965, Chinese engineers successfully created their first integrated circuit, following in the footsteps of Bob Noyce and Jack Kilby by half a decade. Nevertheless, the radical nature of Mao's policies made it impossible to attract foreign investment or pursue serious scientific work.
In the year following China's creation of its first integrated circuit, Mao initiated the Cultural Revolution, asserting that expertise was a source of privilege that hindered socialist equality. Mao's supporters launched an assault on the nation's educational system during this tumultuous period. As a result of these actions, thousands of scientists and experts were forced to work as farmers in impoverished rural areas.
While most Chinese citizens were committing their Chairman's deranged quotations to memory, workers in Hong Kong were diligently assembling silicon components at Fairchild's plant overlooking Kowloon Bay. A few hundred miles away in Taiwan, numerous U.S. chip companies had established facilities employing thousands of workers in jobs that paid little by California standards but were far better than peasant farming.
As Mao was expelling China's limited pool of skilled workers to rural areas for socialist reeducation, the semiconductor industry in Taiwan, South Korea, and across Southeast Asia was actively recruiting peasants from the countryside and providing them with well-paying jobs in manufacturing plants. In the period when China was engulfed in revolutionary turmoil, Intel had invented microprocessors, and Japan had secured a substantial portion of the global DRAM market.
Following Mao's reign, Deng Xiaoping took over as the new leader, promising a policy of "Four Modernizations" aimed at transforming China. Shortly after, the Chinese government announced that "science and technology" were "the crux of the Four Modernizations." The government's assertion that semiconductors were strategically important led China's officials to attempt to regulate chipmaking, which entangled the sector in bureaucratic complications. During the late 1980s, as emerging entrepreneurs like Huawei's Ren Zhengfei began establishing electronics businesses, they were left with no choice but to depend on foreign semiconductors.
China's electronics assembly industry was established on a base of imported foreign silicon, sourced from the United States, Japan, and increasingly Taiwan, which the Communist Party still considered part of "China," but remained beyond its control.
The geographical landscape of chip fabrication experienced significant changes during the 1990s and 2000s. In 1990, U.S. fabrication facilities accounted for 37% of the global chip production, but this figure decreased to 19% by 2000 and further declined to 13% by 2010. Japan's share in chip fabrication also collapsed significantly.
South Korea, Singapore, and Taiwan each invested heavily in their respective semiconductor industries, resulting in a substantial increase in production output. After dethroning Japan's DRAM producers and claiming the title of the world's leading memory chipmaker in 1992, Samsung experienced rapid growth throughout the rest of that decade.
If anyone could establish a semiconductor industry in China, it was Richard Chang. During his tenure at Texas Instruments, he had successfully opened new facilities for the company on a global scale. Why couldn't he replicate the same success in Shanghai?
In 2000, Richard Chang founded the Semiconductor Manufacturing International Corporation (SMIC), raising over $1.5 billion from prominent international investors such as Goldman Sachs, Motorola, and Toshiba. According to an analyst, approximately half of SMIC's startup capital was provided by U.S. investors.
Utilizing these funds, Chang employed hundreds of foreign experts, including at least four hundred from Taiwan, to operate SMIC's fabrication facility. Similar to other Chinese semiconductor startups, SMIC enjoyed significant government support, such as a five-year corporate tax holiday and reduced sales tax on chips sold within China.
SMIC listed its shares on the New York Stock Exchange in 2004, further solidifying its global presence. During this period, fabless firms were in the early stages of introducing a groundbreaking new product, the smartphone, which featured an array of intricate semiconductors.
Offshoring had effectively reduced manufacturing costs and encouraged increased competition in the industry. Consumers reaped the benefits of these developments in the form of lower prices and the introduction of innovative, previously unimaginable devices.
Lithography companies were introducing new tools utilizing deep ultraviolet light, with wavelengths of 248 or 193 nanometers, which are imperceptible to the human eye. However, it was not long before chipmakers began demanding even greater lithographic precision.
By the 1990s, the most advanced transistors measured in the hundreds of nanometers (billionths of a meter), but it was already possible to envision transistors with features just a dozen nanometers long. John Carruthers wanted to focus on "extreme ultraviolet" (EUV) light, with a wavelength of 13.5 nanometers: the shorter the wavelength, the smaller the features that could be patterned onto a chip. The main problem was that most people believed EUV light could never be produced at the scale mass manufacturing requires.
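To make the wavelength point concrete, lithographers often describe resolution with a first-order rule of thumb, the Rayleigh criterion; the numerical values below are illustrative choices of mine, not figures from the text:

```latex
% Rayleigh criterion for the smallest printable feature (critical dimension).
% CD: critical dimension, k_1: process-dependent factor (roughly 0.25-0.8),
% lambda: wavelength of the light source, NA: numerical aperture of the optics.
CD = k_1 \cdot \frac{\lambda}{\mathrm{NA}}
% Illustrative comparison: 193 nm DUV light with NA = 0.93 and k_1 = 0.4
% gives CD ~ 83 nm, while 13.5 nm EUV light with NA = 0.33 and the same
% k_1 gives CD ~ 16 nm.
```

The same relation explains why chipmakers keep pushing on both the wavelength and the numerical aperture of their optics.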
The only notable competitor to Canon and Nikon was ASML, a small but fast-growing Dutch lithography company, created in 1984 when Philips, the Dutch electronics firm, spun out its internal lithography division. Excluded from the research at U.S. national labs, Nikon and Canon opted not to develop their own EUV tools, leaving ASML as the sole producer worldwide.
In 2001, ASML acquired SVG, the last major American lithography company. The scientific networks that facilitated the development of EUV spanned the globe, bringing together experts from diverse countries such as the United States, Japan, Slovenia, and Greece. However, the manufacturing of EUV was not globalized; rather, it was monopolized. A solitary supply chain controlled by a single company would determine the future of lithography.
In 2006, Intel supplied the processors for most personal computers, having spent the previous decade defending its position against AMD, the only other significant producer of chips based on the x86 instruction set architecture, the standard for PCs.
Apple was the sole prominent computer manufacturer that did not utilize x86-based chips. Jobs and Otellini declared that this would change. Mac computers would now be equipped with Intel chips. Intel's dominion would expand, and its grip on the PC industry would strengthen further.
In the years since Intel first adopted the x86 architecture, computer scientists at Berkeley had developed a newer, simpler chip architecture called RISC, which offered more efficient calculations and consequently lower power consumption. In the 1990s, Andy Grove had seriously contemplated transitioning Intel's primary chips to a RISC architecture but ultimately decided against it.
RISC was more efficient, but the cost of switching was significant, and the threat to Intel's de facto monopoly was too substantial. Currently, virtually all major data centers employ x86 chips from either Intel or AMD. Arm, a British firm whose architecture follows RISC principles, nevertheless failed to gain market share in PCs during the 1990s and 2000s, because Intel's partnership with Microsoft's Windows operating system was too strong to contest.
However, Arm's simplified, energy-efficient architecture rapidly gained popularity in small, portable devices that had to conserve battery life. Nintendo, for instance, opted for Arm-based chips in its handheld video game consoles, a small market that Intel never paid much attention to.
Intel's computer processor oligopoly was too lucrative to justify considering niche markets. Intel didn't recognize until it was too late that it should compete in another seemingly niche market for a portable computing device: the mobile phone.
Shortly after the agreement to include Intel's chips in Mac computers, Jobs returned to Otellini with a fresh proposition. Would Intel produce a chip for Apple's latest product, a smartphone? Intel declined the iPhone contract. Apple sought other sources for its phone chips. Jobs turned to Arm's architecture, which, unlike x86, was optimized for mobile devices that had to economize on power consumption. The initial iPhone processors were manufactured by Samsung, which had followed TSMC into the foundry business.
In the early 2010s, Intel maintained the world's most advanced semiconductor process technology, introducing smaller transistors before competitors, with the same regular cadence it had been known for since the days of Gordon Moore. However, the gap between Intel and competitors like TSMC and Samsung had started to diminish.
Apart from the loss of cutting-edge lithography, America's semiconductor manufacturing equipment firms generally flourished during the 2000s. Applied Materials remained the world's largest semiconductor toolmaking company, constructing equipment such as the machines that deposited thin films of chemicals on top of silicon wafers as they were processed.
Lam Research was unrivaled in etching circuits into silicon wafers. And KLA, also based in Silicon Valley, had the world's best tools for detecting nanometer-sized errors on wafers and lithography masks. These three toolmakers were introducing new generations of equipment that could deposit, etch, and measure features at the atomic scale, which would be critical for producing the next generation of chips.
The same was true for chip design. In the early 2010s, the most advanced microprocessors boasted a billion transistors on each chip. The software capable of arranging these transistors was provided by three American firms, Cadence, Synopsys, and Mentor, which controlled approximately three-quarters of the market.
Naturally, there were some risks associated with depending so heavily on a few facilities in Taiwan to produce a substantial portion of the world's chips. By the end of the 2000s, Intel maintained a lead over Samsung and TSMC in producing miniaturized transistors, but the gap had narrowed. Intel was operating at a slower pace, yet it still reaped the advantages of its more advanced starting point.
The U.S. was a leader in most types of chip design, although Taiwan's MediaTek demonstrated that other countries could design chips as well.
Jerry Sanders, AMD's founder, was a salesman by specialty, but he never considered relinquishing AMD's manufacturing facilities, even as the rise of foundries like TSMC made it feasible for large chip companies to contemplate divesting their fabs and outsourcing production to an Asian foundry.
By the 2000s, it was common to divide the semiconductor industry into three categories. "Logic" encompasses the processors that operate smartphones, computers, and servers. "Memory" pertains to DRAM, which supplies the short-term memory computers require to function, and flash, also known as NAND, which retains data over time. The third category of chips is more diverse, comprising analog chips like sensors that convert visual or audio signals into digital data, radio frequency chips that communicate with cell phone networks, and semiconductors that manage how devices use electricity. This third category has not been primarily reliant on Moore's Law to propel performance enhancements.
Currently, the leading analog chipmakers are American, European, or Japanese. The majority of their production takes place in these three regions, with only a small portion outsourced to Taiwan and South Korea. The leading analog chipmaker at present is Texas Instruments.
The memory market, in contrast, has been dominated by a relentless drive to offshore production to a handful of facilities, mostly in East Asia. Instead of a scattered group of suppliers concentrated in advanced economies, the two primary types of memory chip—DRAM and NAND—are produced by just a few companies.
With the notable exception of Intel, most key American logic chipmakers have relinquished their fabs and outsourced manufacturing. As long as Sanders was CEO, AMD remained in the business of manufacturing logic chips, such as processors for PCs.
Traditional Silicon Valley CEOs continued to assert that separating the fabrication of semiconductors from their design resulted in inefficiencies. But it was culture, not business logic, that kept chip design and chip fabrication integrated for such a long time. Sanders could still recall the days of Bob Noyce experimenting in Fairchild's lab.
The field of computer graphics continued to attract semiconductor startups because it offered an appealing niche. Unlike the market for PC microprocessors, graphics was not a domain where Intel held a dominant monopoly. Every PC manufacturer, including prominent companies like IBM and Compaq, was obligated to use an Intel or AMD chip as its primary processor, because the two companies held a de facto monopoly over the x86 instruction set on which PCs depended.
The company that ultimately led the graphics chip market, Nvidia, had its modest start not in a fashionable Palo Alto café but in a Denny's located in a rough area of San Jose. Nvidia was established in 1993 by Chris Malachowsky, Curtis Priem, and Jensen Huang, with Huang still serving as the CEO today. In addition to designing graphics processor units (GPUs) that could manage 3D graphics, Nvidia also developed a software ecosystem surrounding these chips.
Nvidia's GPUs can rapidly render images because, unlike Intel's microprocessors or other general-purpose CPUs, they are designed to perform numerous simple calculations—such as shading pixels—simultaneously. At the time, Huang could only vaguely envision the potential growth in what would eventually become the most significant application for parallel processing: artificial intelligence. Currently, Nvidia's chips, mainly produced by TSMC, are present in the majority of advanced data centers.
In each generation of mobile phone technology following 2G, Qualcomm played a crucial role by introducing key concepts on how to transmit more data through the radio spectrum and by offering specialized chips with the computing power necessary to decode the complex array of signals. The company's patents are so essential that it is impossible to manufacture a cell phone without them.
Qualcomm soon expanded into a new business line, designing not only the modem chips in a phone that communicate with a cell network, but also the application processors that manage a smartphone's core functions. However, it hasn't manufactured any chips: they are all designed in-house but produced by companies like Samsung or TSMC.
Five years after Sanders retired from AMD, the company announced it was splitting its chip design and fabrication businesses. Wall Street applauded, believing that the new AMD would be more profitable without the capital-intensive fabrication facilities. AMD spun off these facilities into a new company that would operate as a foundry like TSMC, manufacturing chips not only for AMD but other customers as well.
Around the early 2010s, it became unfeasible to keep increasing transistor density by shrinking transistors two-dimensionally. One challenge was that, as transistors shrank in accordance with Moore's Law, the channel connecting the two ends of the circuit became so short that current began to "leak" through it even when the switch was off.
By the mid-2000s, the layer of silicon dioxide on top of each transistor was only a few atoms thick, too small to effectively contain the electrons within the silicon. To better control the movement of electrons, new materials and transistor designs were required.
The 22nm node introduced a new 3D transistor, called a FinFET (pronounced "fin-fet"), that departed from the flat 2D design used since the 1960s: the two ends of the circuit and the semiconductor channel that connects them sit on top of a raised block of silicon, resembling a fin protruding from a whale's back. An electric field can be applied to the channel not only from the top but also from the sides of the fin, improving control over the electrons and addressing the leakage that was jeopardizing the performance of each new generation of small transistors.
These nanometer-scale 3D structures were essential for the continuation of Moore's Law, but they were incredibly challenging to manufacture, demanding even more precision in deposition, etching, and lithography. This introduced uncertainty about whether all major chipmakers would successfully implement the switch to FinFET architectures or if one might lag.
Furthermore, the 2008-2009 financial crisis was threatening to reorder the chip industry. During the financial crisis, Chang's hand-picked successor, Rick Tsai, did what most CEOs did—reduce the workforce and cut expenses.
Chang dismissed his successor and resumed direct control of TSMC. He was not going to let a financial crisis jeopardize TSMC's position in the race for industry leadership. At the depths of the crisis, he rehired the workers his former successor had laid off and increased investment in new capacity and R&D.
He announced several billion-dollar increases to capital spending in 2009 and 2010 despite the crisis. "We're just at the start," Chang declared in 2012, as he embarked on his sixth decade in the semiconductor industry.
Jobs didn't have the opportunity to incorporate all of his ideas into the hardware of the initial iPhone, which operated on Apple's iOS system but relied on Samsung for the design and manufacturing of its chips. The groundbreaking phone also featured a variety of additional chips: an Intel memory chip, a Wolfson audio processor, an Infineon modem for network connectivity, a CSR Bluetooth chip, and a Skyworks signal amplifier, to name a few. All of these components were designed by different companies.
A year following the iPhone's release, Apple acquired a small Silicon Valley chip design company called PA Semi, which specialized in energy-efficient processing. Shortly thereafter, Apple started recruiting top chip designers from the industry.
Two years later, Apple announced that it had developed its own application processor, the A4, which it used in the new iPad and iPhone 4. Since then, Apple has invested heavily in R&D and in chip design facilities in Bavaria, Israel, and Silicon Valley, where engineers design its latest chips.
Currently, Apple not only designs the primary processors for most of its devices but also auxiliary chips that manage accessories such as AirPods. This investment in specialized silicon accounts for the seamless operation of Apple's products.
In just four years following the iPhone's release, Apple was generating over 60 percent of the global profits from smartphone sales, effectively defeating competitors like Nokia and BlackBerry and leaving East Asian smartphone manufacturers to contend in the low-margin market for budget-friendly phones.
Like Qualcomm and the other chip companies that propelled the mobile revolution, Apple designs an ever-growing share of its chips but manufactures none of them itself. Apple is widely recognized for outsourcing the assembly of its phones, tablets, and other devices to hundreds of thousands of assembly line workers in China, who connect minute components together.
China's ecosystem of assembly facilities is the world's best location for the production of electronic devices. By 2010, when Apple unveiled its first chip, there were only a few leading-edge foundries: Taiwan's TSMC, South Korea's Samsung, and possibly GlobalFoundries, contingent upon its ability to secure market share.
Currently, no company apart from TSMC possesses the expertise or production capacity required to manufacture the chips that Apple requires. The text engraved on the back of each iPhone, "Designed by Apple in California. Assembled in China," is quite misleading. The iPhone's most irreplaceable components are indeed designed in California and assembled in China, but they can only be fabricated in Taiwan.
Employing EUV light brought a host of challenges that at times seemed insurmountable. In contrast to Lathrop's use of a microscope, visible light, and photoresists from Kodak, all of the essential EUV components had to be custom-designed and developed. EUV light sources cannot be bought off the shelf: generating sufficient EUV light requires using a laser to pulverize a tiny droplet of tin.
Cymer, established by two laser specialists from the University of California, San Diego, has been a prominent player in lithographic light sources since the 1980s. Cymer's engineers determined that the optimal method involved shooting a tiny thirty-millionths-of-a-meter-wide tin ball moving at approximately two hundred miles per hour through a vacuum.
The tin is initially warmed by a laser pulse, followed by a second pulse that vaporizes it into a plasma with a temperature of around half a million degrees, far surpassing the sun's surface temperature. To generate the required EUV light for chip fabrication, the process of blasting tin is repeated fifty thousand times per second.
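A rough back-of-the-envelope sketch, using only the figures quoted above (the unit conversions are mine), gives a feel for the timing involved:

```python
# Back-of-the-envelope numbers for the EUV tin-droplet light source,
# using the figures quoted in the text (illustrative only).

droplet_diameter_m = 30e-6          # ~thirty-millionths of a meter wide
droplet_speed_mps = 200 * 0.44704   # ~200 mph converted to meters per second (~89 m/s)
pulses_per_second = 50_000          # tin is blasted fifty thousand times per second

# Time between successive droplets and the distance separating them in flight.
interval_s = 1 / pulses_per_second              # 20 microseconds
spacing_m = droplet_speed_mps * interval_s      # roughly 1.8 millimeters apart

print(f"Droplet interval: {interval_s * 1e6:.0f} microseconds")
print(f"Droplet spacing:  {spacing_m * 1e3:.2f} mm "
      f"(each droplet only {droplet_diameter_m * 1e6:.0f} micrometers wide)")
```

In other words, the lasers must find a target thirty micrometers wide arriving every twenty microseconds, and hit it twice.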
Jay Lathrop's lithography process utilized a straightforward light bulb as the light source. The advancement in complexity from then to now is mind-boggling. In the end, Zeiss developed mirrors that were the smoothest objects ever produced, with remarkably minute impurities.
Frits van Houts, who assumed leadership of ASML's EUV business in 2013, believed that the most crucial element in an EUV lithography system was not any single component, but rather the company's expertise in managing its supply chain. ASML had no alternative but to depend on a single source for the essential components of an EUV system. To manage this, ASML delved deeply into its suppliers' suppliers to identify and evaluate potential risks.
The outcome was a machine with hundreds of thousands of components that required tens of billions of dollars and multiple decades to develop. The marvel lies not only in the fact that EUV lithography functions, but also in its ability to do so reliably and cost-effectively to manufacture chips. Exceptional reliability was essential for any component incorporated into the EUV system.
ASML aimed for each component to have an average lifespan of at least thirty thousand hours, roughly four years, before requiring maintenance. ASML's EUV lithography tool is the most expensive mass-produced machine tool in history, so intricate that it requires thorough training from ASML personnel, who remain on-site for the tool's entire life cycle.
By the mid-2010s, existing lithography tools could still deliver some incremental improvements, but Moore's Law demanded tools capable of patterning far smaller shapes. The only hope was that EUV lithography, in development since the early 1990s and repeatedly delayed, could finally be made to work at commercial scale. After three decades of investment, billions of dollars, a series of technological breakthroughs, and the establishment of one of the world's most intricate supply chains, ASML's EUV tools were at last ready for deployment in the world's most advanced chip fabs.
GlobalFoundries and Taiwan's UMC were competing to be the world's second-largest foundry, each holding approximately 10 percent of the foundry market. However, TSMC held more than 50 percent of the global foundry market.
In 2015, Samsung only held 5 percent of the foundry market, but when considering its massive production of in-house designed chips (such as memory chips and smartphone processor chips), it produced more wafers than any other company.
TSMC, Intel, and Samsung had strong financial positions, enabling them to take the risk and hope that they could make EUV technology work. GlobalFoundries concluded that, as a medium-sized foundry, developing a 7nm process would not be financially feasible. The company announced that it would stop pursuing ever-smaller transistors, cut its R&D spending by a third, and subsequently returned to profitability after several years of losses.
The development of cutting-edge processors was cost-prohibitive for all but the largest chipmakers in the world. Even the substantial financial resources of the Persian Gulf royals who owned GlobalFoundries were insufficient. The number of companies capable of producing leading-edge logic chips decreased from four to three.
The PC market was stagnant, as it appeared that most people already owned a PC, but it remained highly profitable for Intel, providing billions of dollars annually that could be reinvested into research and development. Throughout the 2010s, the company invested over $10 billion per year in research and development, which was four times as much as TSMC and three times more than the entire budget of DARPA. Only a small number of companies worldwide spent more on research and development.
As the chip industry transitioned into the EUV era, Intel appeared well-positioned to dominate the market. The corporation played a pivotal role in the development of EUV, largely due to the initial $200 million investment made by Andy Grove in the early 1990s. After billions of dollars in investments, a significant portion of which was contributed by Intel, ASML successfully transformed the technology into a reality.
However, instead of capitalizing on the new era of shrinking transistors, Intel lost its advantage by missing significant changes in semiconductor architecture necessary for AI, mismanaging its manufacturing processes, and falling behind in keeping up with Moore's Law. Despite these setbacks, Intel remains highly profitable at present. It still holds the position of being America's largest and most advanced chip manufacturer.
Nonetheless, its future is more uncertain now than at any point since Grove's decision in the 1980s to abandon memory and focus solely on microprocessors. It still has the potential to regain its leadership position within the next five years, but there is also the possibility that it could cease to exist. At stake is not just the fate of a single company, but the future of America's entire chip fabrication industry.
If Intel were to falter, there would be no U.S. company—or any facility outside of Taiwan or South Korea—capable of producing state-of-the-art processors. The first issue Intel faced was with artificial intelligence. By the early 2010s, the company's primary market, which involved supplying processors for personal computers, had stagnated.
Currently, apart from gamers, very few people eagerly upgrade their PCs when a new model is released, and most don't give much thought to the type of processor they have. Intel's other major market—selling processors for data center servers—flourished during the 2010s.
Amazon Web Services, Microsoft Azure, Google Cloud, and other companies constructed extensive networks of enormous data centers, which supplied the computing power that enabled "the cloud." It is possible to execute any AI algorithm on a general-purpose CPU, but the extensive amount of computation required for AI makes using CPUs excessively costly.
The expense of training a single AI model—encompassing the chips utilized and the electricity they consume—can reach millions of dollars. In the early 2010s, Nvidia—the designer of graphic chips—started to catch wind of PhD students at Stanford using Nvidia's graphics processing units (GPUs) for purposes other than graphics.
GPUs were designed to operate differently from standard Intel or AMD CPUs, which are infinitely adaptable but execute their calculations one after the other. In contrast, GPUs are designed to simultaneously execute multiple iterations of the same calculation. It soon became evident that this "parallel processing" had applications beyond controlling image pixels in computer games. It could also be used to train AI systems efficiently.
While a CPU would feed an algorithm numerous pieces of data, one after the other, a GPU could process multiple pieces of data concurrently. When learning to recognize images of cats, a CPU would process pixel by pixel, while a GPU could "look" at many pixels simultaneously. As a result, the time required to train a computer to recognize cats was significantly reduced.
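As a loose illustration of the difference, here is a sketch in NumPy (running on a CPU, so it only mimics the style of computation GPUs are built for; the brightening operation is an arbitrary stand-in for per-pixel work):

```python
# Illustrative sketch: "one pixel at a time" vs. "all pixels at once".
import numpy as np

image = np.random.rand(256, 256)  # stand-in for a grayscale image

# CPU-style: visit each pixel sequentially.
def brighten_sequential(img, factor=1.5):
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = min(img[i, j] * factor, 1.0)
    return out

# GPU-style: express the same operation over every pixel simultaneously.
def brighten_parallel(img, factor=1.5):
    return np.minimum(img * factor, 1.0)

# Both produce identical results; only the execution style differs.
assert np.allclose(brighten_sequential(image), brighten_parallel(image))
```

Training a neural network is, at heart, an enormous number of such identical per-element operations, which is why the GPU's execution style fits it so well.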
As investors bet that data centers will increasingly require more GPUs, Nvidia has become the most valuable semiconductor company in America. Its rise to prominence, however, is not guaranteed because, in addition to purchasing Nvidia chips, the major cloud companies—Google, Amazon, Microsoft, Facebook, Tencent, Alibaba, and others—have also started designing their own chips, specifically tailored to their processing requirements, with a focus on artificial intelligence and machine learning.
For example, Google has developed its own chips called Tensor processing units (TPUs), which are optimized for use with Google's TensorFlow software library. You can rent the use of Google's most basic TPU at its Iowa data center for $3,000 per month, but prices for more powerful TPUs can exceed $100,000 monthly. The cloud may sound ethereal, but the silicon that stores all our data is very real—and very costly.
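For a sense of what using a rented TPU looks like in code, the sketch below shows roughly how a TensorFlow program attaches to a Cloud TPU; it assumes a TPU-provisioned environment and is a minimal illustration rather than Google's recommended setup:

```python
# Minimal sketch of attaching TensorFlow to a Cloud TPU.
# Assumes the code runs in an environment with a TPU already provisioned
# (for example, a rented Cloud TPU); it will not run on an ordinary laptop.
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")  # locate the attached TPU
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Any model built inside this scope is replicated across the TPU's cores.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
```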
Whether it will be Nvidia or the major cloud companies doing the vanquishing, Intel's near-monopoly in sales of processors for data centers is coming to an end. Losing this dominant position would have been less problematic if Intel had discovered new markets. By 2020, half of the EUV lithography tools, which were financed and cultivated by Intel, were installed at TSMC.
In contrast, Intel had only just started to use EUV in its manufacturing process. As the decade came to a close, only two companies were capable of manufacturing the most cutting-edge processors, TSMC and Samsung.
When it comes to the fundamental technologies that support computing, China is significantly dependent on foreign products, many of which are designed in Silicon Valley and almost all of which are produced by companies based in the U.S. or one of its allies. During most years of the 2000s and 2010s, China spent more money importing semiconductors than it did on oil.
High-performance chips were as crucial as hydrocarbons in driving China's economic growth. Unlike oil, however, the supply of chips is monopolized by China's geopolitical rivals. According to AI Superpowers, a widely discussed book by Kai-Fu Lee, the former head of Google China, the country was one of the world's two artificial intelligence superpowers.
Beijing constructed a twenty-first-century fusion of AI and authoritarianism, maximizing the utilization of surveillance technology. However, even the surveillance systems that monitor China's dissidents and ethnic minorities rely on chips from American companies like Intel and Nvidia. All of China's most critical technology is built on a fragile foundation of imported silicon.
As Chinese technology companies expanded their reach into areas such as cloud computing, autonomous vehicles, and artificial intelligence, their need for semiconductors was undoubtedly set to increase. The x86 server chips, which continue to be the backbone of contemporary data centers, are primarily controlled by AMD and Intel.
There is no Chinese company that manufactures a commercially competitive GPU, which means that China is dependent on Nvidia and AMD for these chips as well. The Chinese government devised a plan called Made in China 2025, which aimed to cut the share of chips China imports from 85 percent in 2015 to 30 percent by 2025.
Naturally, every Chinese leader since the establishment of the People's Republic desired a semiconductor industry. Mao's Cultural Revolution dream, which envisioned that every worker could produce their own transistors, had been a complete failure.
Decades later, Chinese leaders enlisted Richard Chang to establish SMIC and "share God's love with the Chinese." He constructed a capable foundry, but it encountered difficulties in generating profits and faced a series of challenging intellectual property lawsuits with TSMC. Eventually, Chang was removed from his position and private-sector investors were replaced by the Chinese state.
China's subsidy strategy during the 2000s had not succeeded in establishing a cutting-edge domestic chip industry. However, taking no action and allowing continued reliance on foreign semiconductors was not politically acceptable.
As early as 2014, Beijing decided to intensify semiconductor subsidies, initiating what became known as the "Big Fund" to bankroll a new push in chips. Key "investors" in the fund included China's Ministry of Finance, the state-owned China Development Bank, and numerous other government-owned companies, such as China Tobacco and investment vehicles of the Beijing, Shanghai, and Wuhan municipal governments.
Some analysts celebrated this as a new "venture capital" model of state support, but the decision to compel China's state-owned cigarette company to fund integrated circuits was as far removed from the operating model of Silicon Valley venture capital as possible. If China's pursuit of self-sufficiency in semiconductors were to succeed, its neighbors, most of whom had export-dependent economies, would suffer even more.
Integrated circuits accounted for 15 percent of South Korea's exports in 2017; 17 percent of Singapore's; 19 percent of Malaysia's; 21 percent of the Philippines'; and 36 percent of Taiwan's.
IBM's CEO, Ginni Rometty, declared a shift in strategy designed to be attractive to Beijing. Instead of attempting to sell chips and servers to Chinese customers, she announced, IBM would make its chip technology available to Chinese partners, thereby, she explained, "creating a new and vibrant ecosystem of Chinese companies producing homegrown computer systems for the local and international markets."
IBM's choice to exchange technology for market access was a sound business decision. The most contentious instance of technology transfer, however, was by Intel's archrival, AMD. In the mid-2010s, the company was facing financial difficulties, having lost PC and data center market share to Intel.
AMD was never on the verge of bankruptcy, but it wasn't far from it either. The company was seeking funds to buy time as it introduced new products to the market. For example, in 2013, it sold its corporate headquarters in Austin, Texas, to generate cash. In 2016, it sold an 85 percent stake in its semiconductor assembly, testing, and packaging facilities in Penang, Malaysia, and Suzhou, China, to a Chinese firm for $371 million. In that same year, AMD made an agreement with a consortium of Chinese companies and government organizations to license the production of modified x86 chips for the Chinese market.
In 2018, Arm, the British company that designs the chip architecture, divested its China division, selling 51 percent of Arm China to a group of investors, while holding on to the remaining 49 percent itself. Two years prior, Arm had been acquired by Softbank, a Japanese company that has invested billions in Chinese tech startups.
Softbank was, therefore, reliant on favorable Chinese regulatory treatment for the success of its investments, and it was under scrutiny from U.S. regulators, who were concerned that its exposure to China made it susceptible to political pressure from Beijing. Softbank had acquired Arm in 2016 for $40 billion, yet it sold a 51 percent stake in the China division (which, according to Softbank, represented a fifth of Arm's global sales) for just $775 million.
Considered individually, the deals that IBM, AMD, and Arm made in China were driven by sound business reasoning. Taken together, however, they have left China significantly less reliant on foreigners to design and manufacture the chips its data centers require than it was a decade ago, even though its capabilities still trail the cutting edge.
For Zhao Weiguo, it was a long, winding journey from a childhood spent raising pigs and sheep in China's western frontier to being celebrated as a chip billionaire by Chinese media. In 2013, four years after acquiring his stake in Tsinghua Unigroup, and just before China’s Communist Party announced new plans to provide substantial subsidies to the country’s semiconductor firms, Zhao decided it was time to invest in the chip industry.
In 2014, Zhao made an agreement with Intel to combine Intel's wireless modem chips with Tsinghua Unigroup's smartphone processors. Intel hoped the collaboration would increase its sales in China's smartphone market, while Zhao wanted his companies to learn from Intel's chip design expertise. He acquired a 25 percent stake in Taiwan's Powertech Technology, which assembles and tests semiconductors, a transaction that was permitted under Taiwan's regulations.
However, Zhao's true interest was in acquiring the island's crown jewels—MediaTek, the leading chip designer outside the U.S., and TSMC, the foundry on which almost all the world's fabless chip firms depend. He proposed the idea of purchasing a 25 percent stake in TSMC and advocated merging MediaTek with Tsinghua Unigroup's chip design businesses. Soon, Zhao turned his attention to America's semiconductor industry.
In July 2015, Tsinghua Unigroup proposed the idea of purchasing Micron, the American memory chip producer, for $23 billion, which would have been the largest ever Chinese acquisition of a U.S. company in any industry. Amidst this frenzied dealmaking, Tsinghua Unigroup announced in 2017 that it had received new "investment": approximately $15 billion from the China Development Bank and $7 billion from the Integrated Circuit Industry Investment Fund—both owned and controlled by the Chinese state.
Despite the country's success in exporting goods, Chinese internet companies primarily generate revenue from the domestic market due to regulatory protection and censorship. Without their strong hold on the domestic market, Tencent, Alibaba, Pinduoduo, and Meituan would be relatively small players in the global market.
In contrast, Huawei has engaged in global competition from the very beginning of its operations. Ren Zhengfei's approach to business differs fundamentally from that of Alibaba and Tencent. He has adopted innovative ideas from abroad, produced high-quality products at lower costs, and sold them globally, thus gaining international market share at the expense of other global competitors.
Ren recognized a chance to import telecom switches, which are the devices that link one caller to another. With a starting capital of $5,000, he started importing this equipment from Hong Kong. When his Hong Kong partners discovered that he was profiting significantly from reselling their equipment, they stopped supplying him, which prompted Ren to start manufacturing his own equipment.
In the early 1990s, Huawei had a few hundred employees working in R&D, primarily focusing on the development of switching equipment. Currently, it is one of the world's top three providers of equipment for cell towers, along with Finland's Nokia and Sweden's Ericsson.
Critics of Huawei often claim that the company's success is built on stolen intellectual property, but this is at best partially accurate. Theft of intellectual property may have contributed to Huawei's rise, yet no amount of stolen IP or trade secrets alone can build a business as large as Huawei.
On the other hand, Huawei's investment in R&D is among the highest in the world. The company's annual R&D budget of around $15 billion is comparable to only a few other firms, including tech giants like Google and Amazon, pharmaceutical companies such as Merck, and car manufacturers like Daimler or Volkswagen. With the help of IBM and other Western consultants, Huawei acquired the knowledge to manage its supply chain effectively, predict customer demand, create superior marketing strategies, and sell its products globally.
Besides Western consulting firms, Huawei received support from another influential entity: the Chinese government. The extent of government support for a supposedly private company has raised concerns, particularly in the United States. China's leaders have indeed been supportive of the company's global expansion efforts.
As it expanded, existing Western firms in the telecom equipment market were compelled to merge or were driven out of the market. Canada's Nortel went bankrupt, and Alcatel-Lucent, the company that took over Bell Labs after AT&T's breakup, sold its operations to Finland's Nokia.
After establishing the infrastructure for phone calls, the company also began selling phones. As a result, its smartphones became some of the top-selling models globally. Furthermore, Huawei appeared uniquely positioned for a new era of ubiquitous computing that would accompany the implementation of the next generation of telecom infrastructure: 5G.
Huawei has excelled in 5G, the latest generation of equipment used to transmit calls and data through cell networks. 2G phones enabled picture texting; 3G phones allowed web browsing; and 4G made it possible to stream video from almost anywhere. 5G will offer a comparable leap forward: even though the available space in the relevant portion of the radio-wave spectrum is finite, the next-generation networks will transmit significantly more data wirelessly.
Partly, this will be achieved through increasingly intricate methods of sharing spectrum space, which require more complex algorithms and more computing power in phones and cell towers to squeeze 1s and 0s into even the smallest available gaps in the wireless spectrum. Partly, 5G networks will transmit more data by using new, previously unoccupied radio frequencies that were once deemed impractical to exploit. Advanced semiconductors make it possible not only to fit more 1s and 0s into a given frequency of radio waves, but also to send radio waves over greater distances and aim them with unprecedented precision.
Cell networks will determine a phone's location and send radio waves directly toward the phone using a technique called beamforming. A typical radio wave, such as one that transmits music to your car radio, sends signals in all directions because it is unaware of your car's location. This results in wasted power and an increased number of waves and interference.
With beamforming, a cell tower recognizes a device's location and sends the required signal only in that direction. Outcome: reduced interference and stronger signals for all users.
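A toy numerical sketch of the idea, using a uniform linear antenna array with classic delay-and-sum weights (the array size and angles are arbitrary choices of mine, not parameters of any real 5G deployment):

```python
# Toy delay-and-sum beamforming for a uniform linear antenna array.
# Shows how per-antenna phase shifts steer transmitted energy toward one direction.
import numpy as np

num_antennas = 16
spacing = 0.5           # element spacing in wavelengths (half-wavelength array)
steer_deg = 25.0        # direction we want to aim the beam toward

n = np.arange(num_antennas)
# Phase weights that align each antenna's signal toward steer_deg.
weights = np.exp(-1j * 2 * np.pi * spacing * n * np.sin(np.radians(steer_deg)))

# Array response ("how much energy goes where") across all directions.
angles = np.linspace(-90, 90, 361)
steering = np.exp(1j * 2 * np.pi * spacing * np.outer(np.sin(np.radians(angles)), n))
pattern = np.abs(steering @ weights) / num_antennas

print(f"Beam peaks at {angles[np.argmax(pattern)]:.1f} degrees (target {steer_deg} degrees)")
```

The same principle, implemented across many more antennas and at much higher frequencies, is what lets a 5G base station aim its signal at an individual handset.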
While Tesla's devoted fan base and rapidly increasing stock price have garnered significant attention, what is often overlooked is that Tesla is also a prominent chip designer. The company recruited renowned semiconductor designers like Jim Keller to develop a specialized chip for its autonomous driving requirements, which is produced using cutting-edge technology. As early as 2014, some analysts were pointing out that Tesla cars "resemble smartphones."
With Huawei's chip design arm demonstrating world-class capabilities, it was not difficult to envision a future in which Chinese chip design firms were as significant customers of TSMC as the Silicon Valley giants. If the trends of the late 2010s were extrapolated, by 2030 China's chip industry could rival Silicon Valley in influence. This would not only disrupt tech companies and trade flows; it would also redefine the balance of military power.
Acquiring U.S.-designed, Taiwan-fabricated chips for many Chinese military systems has not been a significant challenge. A recent examination of 343 publicly available AI-related People's Liberation Army procurement contracts, conducted by researchers at Georgetown University, revealed that less than 20 percent of the contracts involved companies subject to U.S. export controls.
In other words, the Chinese military has not faced much difficulty in purchasing cutting-edge U.S. chips off-the-shelf and integrating them into military systems. The Georgetown researchers discovered that Chinese military suppliers openly advertise on their websites their utilization of American chips. China's controversial "Civil Military Fusion" policy, which aims to apply advanced civilian technology to military systems, seems to be effective.
In the absence of significant changes in U.S. export restrictions, the People's Liberation Army will likely acquire a substantial portion of the computing power it requires by simply purchasing it from Silicon Valley. In the mid-2010s, officials like Secretary of Defense Chuck Hagel began discussing the need for a new "offset," reminiscent of the initiative led by Bill Perry, Harold Brown, and Andrew Marshall in the 1970s to address the Soviet Union's quantitative advantage.
The U.S. is currently confronted with the same fundamental issue: China can deploy more ships and planes than the U.S., particularly in critical areas such as the Taiwan Strait. "We will never attempt to match our opponents or competitors tank for tank, plane for plane, person for person," declared Bob Work, the former deputy defense secretary and intellectual architect of this new offset, clearly echoing the reasoning of the late 1970s. In other words, the U.S. military will only prevail if it possesses a decisive technological edge.
The 1970s offset was propelled by "digital microprocessors, information technologies, new sensors, stealth," according to Work's argument. This time, it is going to be "progress in Artificial Intelligence (AI) and autonomy." The U.S. military is currently deploying the first generation of new autonomous vehicles, such as Saildrone, an unmanned windsurfer that can spend months exploring the oceans while monitoring submarines or intercepting adversaries' communications.
Russia has employed a range of radar and signals jammers in its conflict with Ukraine. The Russian government allegedly interferes with GPS signals around President Vladimir Putin's official travels, possibly as a security precaution. Not coincidentally, DARPA is researching alternative navigation systems that do not rely on GPS signals or satellites, to ensure American missiles can hit their targets even if GPS systems are compromised.
In 2017, DARPA initiated a new project called the Electronics Resurgence Initiative to aid in the development of the next generation of military-relevant chip technology. The U.S. government purchased nearly all of the early integrated circuits that Fairchild and Texas Instruments manufactured in the early 1960s. By the 1970s, that share had dropped to 10 to 15 percent. Currently, it accounts for approximately 2 percent of the U.S. chip market. In terms of chip purchasing, Apple CEO Tim Cook has more sway over the industry than any Pentagon official.
In 2018, researchers uncovered two fundamental flaws, known as Spectre and Meltdown, in Intel's widely used microprocessor architecture; the flaws allowed attackers to copy data such as passwords, a significant security vulnerability. According to the Wall Street Journal, Intel initially shared the flaws with customers, including Chinese tech companies, before informing the U.S. government, which further heightened Pentagon officials' concern about their diminishing influence over the chip industry.
By around 2015, gears gradually started to shift within the U.S. government. The government's trade negotiators viewed China's chip subsidies as a blatant violation of international agreements. The Obama administration took a slow approach to semiconductors, as one person involved in the effort recalled, because many high-ranking officials simply did not consider chips to be a significant issue.
U.S. intelligence had expressed concerns about Huawei's alleged connections to the Chinese government for several years, but it was only in the mid-2010s that the company and its smaller counterpart, ZTE, began drawing public attention. Both companies offered competing telecom equipment; ZTE was state-owned, while Huawei was privately owned but was alleged by U.S. officials to have close ties with the government.
In 2016, during the final year of the Obama administration, both companies were accused of violating U.S. sanctions by supplying goods to Iran and North Korea. In April 2018, as Trump's trade dispute with China escalated, the U.S. government determined that ZTE had breached the terms of its plea agreement by supplying incorrect information to U.S. officials.
When the regulations were reinstated, ZTE was once again prohibited from purchasing U.S. semiconductors, along with other products. If the U.S. didn't alter its policy, the company would be hurtling towards failure. So when the Chinese leader proposed a deal, Trump eagerly accepted the offer, tweeting that he would find a way to keep ZTE in business due to his concern for the company "losing too many jobs in China." Soon, ZTE agreed to pay an additional fine in exchange for regaining access to U.S. suppliers.
Currently, three companies control the global market for DRAM chips: Micron and its two Korean competitors, Samsung and SK Hynix. Taiwanese companies invested billions attempting to enter the DRAM business in the 1990s and 2000s but never succeeded in establishing profitable businesses. The DRAM market necessitates economies of scale, making it challenging for small producers to be price competitive.
The relationship between Taiwan and Fujian Province is close but not always amicable. However, when the Fujian Province government decided to establish a DRAM chipmaker called Jinhua and provided it with more than $5 billion in government funding, Jinhua bet that a partnership with Taiwan would be its best route to success.
Taiwan did not have any leading memory chip companies, but it did have DRAM facilities, which Micron had acquired in 2013. Micron wasn't going to offer any assistance to Jinhua, which it perceived as a dangerous competitor. To compete, Jinhua had to acquire this manufacturing knowledge by fair means or foul. When Micron sued Jinhua and its Taiwanese partner UMC for infringing on its patents, the two companies filed a countersuit in China's Fujian Province.
The Obama administration's attempts to negotiate a deal with China's spy agencies, in which they agreed to stop providing stolen secrets to Chinese companies, only lasted until Americans had forgotten about the issue, at which point the hacking promptly resumed. With Micron's secrets at Jinhua's disposal, some analysts believed that it would only be a few years before Jinhua was producing DRAM chips at scale—at which point it wouldn't matter if Micron was allowed back into the Chinese market, as Jinhua would be manufacturing chips using Micron's technology and selling them at subsidized prices.
After careful consideration, the Trump administration decided to employ the same tool it had used against ZTE, reasoning that it made more sense to address a trade dispute with a trade regulation. Jinhua was cut off from purchasing U.S. equipment for chip manufacturing.
Huawei's annual R&D spending now rivaled American tech giants like Microsoft, Google, and Intel. It had become TSMC's second-largest customer, second only to Apple. When the Trump administration initially decided to increase its pressure on Huawei, it prohibited the sale of U.S.-made chips to the company.
This restriction alone was devastating, given that Intel chips are ubiquitous and many other U.S. companies manufacture virtually irreplaceable analog chips. However, after decades of offshoring, significantly less of the semiconductor production process took place in the United States than previously. Restricting the export of U.S.-made goods to Huawei would do nothing to prevent TSMC from producing advanced chips for Huawei.
In May 2020, the administration tightened restrictions on Huawei further. Now, the Commerce Department announced, it would protect U.S. national security by restricting Huawei’s ability to use U.S. technology and software to design and manufacture its semiconductors abroad. Since then, Huawei has been compelled to divest part of its smartphone business and its server business, as it cannot obtain the necessary chips.
China’s rollout of its own 5G telecoms network, which used to be a high-profile government priority, has been delayed due to chip shortages. Following the implementation of the U.S. restrictions, other countries, notably Britain, decided to ban Huawei, reasoning that in the absence of U.S. chips, the company would struggle to service its products.
The attack on Huawei was followed by blacklisting numerous other Chinese tech firms. After some discussions with the United States, the Netherlands decided not to approve the sale of ASML’s EUV machines to Chinese firms. Sugon, the supercomputer company that AMD described in 2017 as a “strategic partner,” got blacklisted by the U.S. in 2019. Similarly, Phytium, a company that U.S. officials claim has created chips for supercomputers used in hypersonic missile testing, was also blacklisted by the U.S., as reported by the Washington Post.
Nevertheless, the U.S. actions against Chinese tech companies have been a relatively restricted endeavor. Despite these actions, numerous major Chinese tech companies, such as Tencent and Alibaba, do not encounter specific restrictions on their acquisition of U.S. chips or their access to TSMC's semiconductor manufacturing services.
While SMIC, China's leading producer of logic chips, is now subject to new limitations on its acquisition of advanced chipmaking tools, it has not been forced to cease operations. In fact, Huawei is permitted to purchase older semiconductors, such as those utilized for connecting to 4G networks. Surprisingly, China has not taken any retaliatory measures against the limitations imposed on its most globally recognized tech company. It appears that Beijing has determined that it is more advantageous to accept Huawei's decline as a second-tier technology player than to retaliate against the United States.
As reported by Nikkei Asia, a Japanese newspaper renowned for its extensive coverage of China's chip industry, the Chinese government has backed YMTC, China's leading NAND memory chipmaker, so strongly that the company was permitted to continue operations even during the COVID lockdown. China's leaders were prepared to go to great lengths in their battle against the coronavirus, yet their drive to establish a semiconductor industry remained a higher priority.
Wuhan is not only the home of YMTC, China's most promising prospect for NAND chip parity, but also the site of the nation's most substantial recent semiconductor scam. After an international acquisition spree, Tsinghua Unigroup has recently run out of funds and defaulted on a number of its bonds. Despite CEO Zhao Weiguo's high-ranking political connections, Tsinghua Unigroup was not spared, although the chip companies it owns are expected to remain largely unaffected.
Perhaps within a decade, China will succeed in building its own EUV scanner. If so, the program will come with a price tag of tens of billions of dollars, and the result will likely be disheartening: once completed, it will no longer be at the technological frontier. By then, ASML will have introduced a new generation of tools called high-aperture (high-NA) EUV, expected to be ready in the mid-2020s and to cost $300 million per machine, double the price of a first-generation EUV machine.
A central difficulty faced by China at present is that numerous chips employ either the x86 architecture (for PCs and servers) or the Arm architecture (for mobile devices). x86 is primarily controlled by two U.S. companies, Intel and AMD, while Arm, which licenses its architecture to other companies, is headquartered in the UK. However, there is a new instruction set architecture called RISC-V, which is open-sourced and therefore accessible to anyone without a fee.
Unless there are significant new limitations imposed on access to foreign software and machinery, China appears poised to play a more substantial role in the production of non-cutting-edge logic chips. Furthermore, it is investing heavily in the materials required to develop power management chips for electric vehicles. In the meantime, China's YMTC has a genuine opportunity to secure a portion of the NAND memory market.
According to industry estimates, China's share of chip fabrication is projected to rise from 15 percent at the beginning of the decade to 24 percent of global capacity by 2030, surpassing Taiwan and South Korea in terms of volume.
While Chinese stockpiling accounts for a portion of the chip shortage during the COVID era, it is not the sole contributing factor. The main reason for the chip shortage during the pandemic is the significant shifts in chip orders as businesses and consumers altered their demand for various products.
In 2020, PC demand surged as many individuals upgraded their computers to work from home. The demand for servers in data centers also increased as more aspects of life transitioned to online platforms. Initially, car companies reduced their chip orders, anticipating a decline in vehicle sales. However, when demand rapidly rebounded, car companies discovered that chipmakers had already reassigned production capacity to other clients.
Samsung and its smaller South Korean competitor, SK Hynix, enjoy the backing of the South Korean government but find themselves caught between China and the U.S., as both countries attempt to persuade South Korea's chip giants to expand manufacturing in their respective countries. For instance, Samsung recently announced plans to enlarge and modernize its advanced logic chip production facility in Austin, Texas, a project estimated to cost $17 billion.
Taiwan's government maintains a strong stance in safeguarding its chip industry, which it acknowledges as its most significant source of influence on the global stage. Morris Chang, supposedly fully retired from TSMC, has taken on the role of a trade ambassador for Taiwan. From 2022 to 2024, the company intends to invest over $100 billion to enhance its technology and increase chipmaking capacity.
The majority of this investment will be allocated to Taiwan, although the company also plans to modernize its Nanjing, China facility and establish a new fab in Arizona. Despite these new facilities, neither of them will produce the most cutting-edge chips, thus ensuring that TSMC's most advanced technology remains in Taiwan.
In terms of manufacturing these chips, however, the U.S. currently falls behind. The main prospect for advanced manufacturing in the United States lies with Intel. Following a period of uncertainty, Pat Gelsinger was appointed CEO of the company in 2021. He has devised an ambitious and costly strategy with three main components.
The first component is to regain the top position in manufacturing, surpassing Samsung and TSMC. In order to achieve this, Gelsinger has made an agreement with ASML to enable Intel to acquire the first next-generation EUV machine, which is anticipated to be ready in 2025. If Intel can master the use of these new tools before its competitors, it could provide a technological advantage.
China's ruling party has no greater objective than asserting control over Taiwan. Its leaders consistently pledge to achieve this goal. The government has enacted an "Anti-Secession Law" which considers the potential use of what it refers to as "non-peaceful means" in the Taiwan Strait.
It has made substantial investments in the military systems, such as amphibious assault vehicles, necessary for a cross-strait invasion. It regularly exercises these capabilities. It is universally acknowledged by analysts that the military balance in the Strait has decisively moved in China's favor.
The days are long gone when, as during the 1996 Taiwan Strait crisis, the U.S. could simply navigate a full aircraft carrier battlegroup through the Strait to compel Beijing to back down. Currently, such an operation would be fraught with risk for the U.S. vessels. Today, Chinese missiles not only pose a threat to U.S. ships in the vicinity of Taiwan but also to bases as distant as Guam and Japan. As the PLA strengthens, the U.S. becomes less likely to engage in war to protect Taiwan.
In the event that China were to attempt a campaign of limited military pressure on Taiwan, it is more probable than ever that the U.S. might assess the balance of power and conclude that countering the pressure is not worth the risk. Taiwan's President, Tsai Ing-wen, recently posited in Foreign Affairs that the island's chip industry serves as a "silicon shield," enabling Taiwan to safeguard itself and others from aggressive attempts by authoritarian regimes to disrupt global supply chains. That is a very optimistic interpretation of the situation. The chip industry of the island undoubtedly compels the U.S. to take Taiwan's defense more seriously.
However, the concentration of semiconductor production in Taiwan also places the global economy at risk if the "silicon shield" fails to deter China.
If you liked reading this article, you can follow me on Twitter: 0xmaCyberSec.