It is not hard to figure out who is in the catbird seat in the semiconductor foundry business.
In 2024, according to CC Wei, chief executive officer of Taiwan Semiconductor Manufacturing Co, the overall foundry industry, including making chip masks, etching logic wafers, and packaging and testing finished chips, grew by 6 percent. TSMC, on the other hand, grew five times faster at 30 percent. And in 2025, TSMC expects its revenues from AI training and inference chip manufacturing to more than double and anticipates that it can grow at close to a "mid-twenties percent" rate while the overall foundry business (which includes TSMC pulling up the class average, mind you) rises by 10 percent.
We have confidence in Wei's prognostications for TSMC and its AI-related business. We are less certain about the rest of the foundry businesses in the world. For many good reasons.
Usually, when CEOs start swaggering, we take it in stride and know that it is hyperbole. But when Wei did it on the call with Wall Street to go over the financial results for the fourth quarter ended in December, everything he said before drilling down into the numbers rang true.
"As the world's most reliable and effective capacity provider, TSMC is playing a critical and integral role in the global semiconductor industry," Wei said. "With our technology leadership, manufacturing excellence, and customer trust, we are well positioned to address the growth from the industry megatrends of 5G, AI and HPC with our differentiated technologies. For the five-year period starting from 2024, we expect our long-term revenue growth to approach a 20 percent compound annual growth rate in US dollar terms, fueled by all four of our growth platforms, which are smartphone, HPC, IoT and automotive."
In the December quarter, TSMC's revenue rose by 37 percent to $26.88 billion, an increase of 14.4 percent sequentially from a record-breaking Q3 2024. Net income rose even faster, up 55.1 percent year on year to $11.59 billion, but up only 15.3 percent sequentially as the ramp costs of the N2 2 nanometer process, the costs of converting fabs from 5 nanometer to 3 nanometer processes, and the ramping of fabs in Japan and the United States drove down profits. The humming along and better yields of the N5, N4, and N3 processes in the fabs in Taiwan clearly offset some of these increasing costs, as did the mature N6 and N7 processes and even older ones used for all kinds of semiconductor devices all the way up to 250 nanometer chippery.
When TSMC says HPC, of course, it doesn't mean technical computing as in simulation and modeling, but rather any high-end compute engine that is used in a PC or a server. These days, when counting money, it mostly means high-end server CPUs and high-end server GPUs, plus GPU cards for gamers and other graphics-heavy use cases outside of the datacenter. Switch, router, and DPU ASICs are also in this HPC classification by TSMC.
For a long time, the smartphone drove more revenues - and certainly a lot more volumes - than this so-called HPC business at TSMC, but that is no longer true and it very likely will never be true again. That HPC platform as described above had revenues of $14.29 billion in Q4 2024, up 68.9 percent year on year, and there is every reason to believe that AI system sales will continue to drive it higher and higher into 2025 and beyond thanks to the demand for compute, acceleration, and networking. This marked the third quarter in a row when HPC chip manufacturing, packaging, and testing drove more than half of TSMC's revenues. Smartphones, by contrast, pushed a little more than a third of TSMC's sales in the December quarter, up a GDP-beating 11.5 percent to $9.41 billion.
The rest of TSMC's businesses - consumer devices, automotive, IoT, and other kinds of chips - are considerably smaller and, from our point of view, are not all that interesting. But TSMC should make money there, and does.
Based on statements made by TSMC in recent quarters, we have built a model of TSMC revenues for making AI inference and training devices, and as best we can figure the company made $5.28 billion on these devices in Q4, up by a factor of 4.1X compared to the year ago period and up around 40 percent sequentially. For the full 2024 year, we estimate that TSMC did $13.13 billion in AI chip making, up by a factor of 3.1X compared to the $4.24 billion in 2023, which itself was up by 2.8X compared to the $1.52 billion in 2022 as the GenAI boom was just building up steam.
Obviously, there were lots of AI machines being built prior to 2022, and in many cases these were hybrid machines that did traditional HPC simulation and modeling (and we mean "HPC" in our sense of the word here) as well as traditional machine learning work. It is not clear how TSMC is breaking these out.
What we can tell you is that pure AI inference and training devices drove around 6 percent of TSMC's overall revenues in 2023, busted through 9 percent in Q1 2024, and reached around 19.6 percent of total revenues in Q4 2024. For the full year, Wei said that AI inference and training chip making, packaging, and testing comprised just under "the middle teens" percent of the company's revenues. We have it pegged at 14.6 percent.
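If you want to check our math, here is a minimal sketch in Python that derives the growth multiples and revenue shares from the estimates above; the dollar figures are the ones we cited, and everything else is just arithmetic on them.

```python
# Back-of-the-envelope check on our AI chip making estimates (billions of US dollars);
# the inputs are our estimates and TSMC's reported totals cited above, nothing more.
ai_full_year = {2022: 1.52, 2023: 4.24, 2024: 13.13}   # our full-year AI estimates
ai_q4_2024 = 5.28                                       # our Q4 2024 AI estimate
total_2024 = 90.08                                      # TSMC reported 2024 revenue
total_q4_2024 = 26.88                                   # TSMC reported Q4 2024 revenue

print(f"2023 AI growth: {ai_full_year[2023] / ai_full_year[2022]:.1f}X")   # ~2.8X
print(f"2024 AI growth: {ai_full_year[2024] / ai_full_year[2023]:.1f}X")   # ~3.1X
print(f"AI share of 2024 revenue: {ai_full_year[2024] / total_2024:.1%}")  # ~14.6 percent
print(f"AI share of Q4 2024 revenue: {ai_q4_2024 / total_q4_2024:.1%}")    # ~19.6 percent
```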
If you want to look at AI revenues as a share of TSMC HPC revenues, it looks like the chart above - and again, real HPC systems and mixed AI/HPC systems are part of the broader black line labeled "HPC." AI was about 5 percent of overall HPC chip sales at TSMC in 2022, quickly rose into the teens in 2023, kissed 20 percent in Q1 2024, and rose to 37 percent by Q4 2024.
TSMC is very much tied to Nvidia, in other words. With some help from AMD, Google, and a handful of AI chip startups.
The question we have - and the one everyone wants answered - is how much of the revenue that TSMC brings in comes from AI and how that might trend out to, say, 2029. With a CAGR of 20 percent for overall revenue growth, it is reasonable to expect TSMC to have around $225 billion in sales in 2029, which is 2.5X the $90.08 billion the company posted in 2024. Even if you assume a reasonable deceleration in AI chip revenues - more than doubling in 2025 as Wei said and then cooling ever so slowly over the years down to a few points faster than the company's overall growth rate - then the scenario is that AI will drive more than half of TSMC's revenues by 2029. Call it somewhere around $130 billion if you put a gun to our heads.
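Here is a minimal sketch of that scenario in Python. The 20 percent CAGR comes from Wei's long-term guidance; the year-by-year AI growth multipliers are our own assumptions, chosen only to illustrate a 2025 that more than doubles and then cools toward a few points above the company's overall growth rate by 2029.

```python
# Illustrative projection only: the AI growth multipliers are assumptions made up
# for this scenario, not TSMC guidance. All figures in billions of US dollars.
total_2024 = 90.08
cagr = 0.20                                    # Wei's "approach 20 percent" long-term guidance
total_2029 = total_2024 * (1 + cagr) ** 5      # compounds to roughly $225 billion

ai_rev = 13.13                                 # our 2024 AI chip making estimate
assumed_growth = [2.10, 1.80, 1.55, 1.38, 1.25]    # 2025 through 2029, hypothetical cooling
for multiple in assumed_growth:
    ai_rev *= multiple

print(f"2029 total revenue: ${total_2029:.0f} billion")        # ~$224 billion
print(f"2029 AI revenue: ${ai_rev:.0f} billion")               # ~$133 billion
print(f"AI share of 2029 revenue: {ai_rev / total_2029:.0%}")  # well over half
```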
These are just cells in spreadsheets and lines on charts - a lot of work has to be done to make them real. A lot depends on how the global economy behaves as well as political and social pressures that are shaping how AI can - and cannot - be deployed.
Back in the present, the rising wafer starts that TSMC enjoyed during the PC boom of the coronavirus pandemic dropped sharply in late 2023 and early 2024, and have not returned. There is no reason to believe we are all going to go out and buy a new smartphone, tablet, and PC, which is why you see that red line above continue to lag its historical level. And AI chips cannot make up the volume gap.
But oh boy, can they ever make up the revenue gap, and they have, and TSMC has managed to stay profitable despite the higher costs of making more complex AI devices.
It is hard to imagine Intel's foundry playing at this level any time soon, which makes TSMC a sure bet provided China doesn't invade Taiwan.
TSMC's 3 nanometer processes are ramping nicely and its 5 nanometer processes are fully ramped, with chips using the N3 process driving $6.99 billion in sales, up 2.4X year on year, and chips using the N5 process and its related N4 enhancement driving $9.14 billion, up 33.1 percent. The N7 process and its N6 refinement drove $3.76 billion in sales, up 12.8 percent, and never hit the highs that N5 and N3 are seeing. All of those other older process nodes accounted for 26 percent of sales, or another $6.99 billion, and had a 7.9 percent increase year on year, which is still better than GDP growth. Old chip processes die hard because sometimes low cost and high yield matter more than peak performance and density.
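As a sanity check, those node buckets sum to the quarter's reported total, and the revenue mix by process family works out as below - just arithmetic on the figures above, expressed in Python.

```python
# Q4 2024 revenue mix by process family, derived from the figures cited above
# (billions of US dollars); shares are against the $26.88 billion quarterly total.
node_revenue = {
    "N3": 6.99,
    "N5/N4": 9.14,
    "N7/N6": 3.76,
    "Older nodes": 6.99,
}
total = sum(node_revenue.values())        # sums to $26.88 billion, the reported quarter
for node, revenue in node_revenue.items():
    print(f"{node:12s} ${revenue:5.2f} billion  {revenue / total:5.1%} of revenue")
```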
In 2024, TSMC spent $29.8 billion in capital expenses to build out its existing fabs as well as raise up new ones in Japan and in the United States. In 2025, just to make it clear how far Intel has to climb to be a rival foundry, TSMC expects to spend between $38 billion and $42 billion on capital equipment and facilities. This is a huge bump over 2024 spending levels.
About 70 percent of that dough allocated for 2025 will be spent on gear to drive advanced process nodes for future products, and somewhere between 10 percent and 20 percent will be spent on mask making, packaging, and testing gear. Another 10 percent to 20 percent will be spent on "specialty technologies," which are not outlined specifically but which include automotive and industrial chips. (TSMC has a fab in the works for Dresden, Germany for these devices.)
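Applied to the 2025 budget range, those percentages shake out roughly like this; the dollar ranges are our arithmetic, since TSMC did not break the allocation out in dollars.

```python
# Rough dollar ranges for the 2025 capital budget allocation, applying the stated
# percentages to the $38 billion to $42 billion range; our arithmetic, not TSMC's.
capex_low, capex_high = 38.0, 42.0   # billions of US dollars
buckets = {
    "Advanced process nodes": (0.70, 0.70),
    "Packaging, testing, mask making": (0.10, 0.20),
    "Specialty technologies": (0.10, 0.20),
}
for name, (low_share, high_share) in buckets.items():
    print(f"{name:32s} ${capex_low * low_share:4.1f} billion to ${capex_high * high_share:4.1f} billion")
```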
The way TSMC is profiting, it could spend a lot more. And for antitrust reasons, perhaps it might want to make some big loans out to Intel so it can afford to be a strong competitor. . . .
Or, it can just spend the money ramping up its own fabs in Arizona. Wei said on the call with Wall Street that the first fab in the United States has entered volume production on the N4 process and has yields comparable to those of its fabs running the N4 process in Taiwan. Wei added that the plans for the second and third fabs in Arizona were on track, and that they will be driving its N3, N2, and A16 processes, which are what anyone wants to use for an advanced AI compute engine or a high-end CPU or DPU, for that matter.
Wei said the N2 process was on track for volume production in the second half of 2025, and that the number of tapeouts for N2 would be higher over its first two years than for N3 and N5 over their first two years, driven by smartphone and HPC chip demand. N2 delivers 10 percent to 15 percent faster transistors at the same power or a 20 percent to 30 percent power improvement at the same performance, and also sports a 15 percent gain in transistor density compared to the enhanced N3E process. The performance-optimized N2P process comes in the second half of 2026.
Also coming in the second half of 2026 is the 1.6 nanometer A16 process, which delivers an 8 percent to 10 percent speed improvement over N2P at the same power or 15 percent to 20 percent lower power at the same performance, with a transistor density gain of 7 percent to 10 percent.