According to Deloitte analysis of press releases so far in 2024, more than 15 global telecoms in over a dozen countries, with a combined population of over two billion, have announced that they are building gen AI data centers (interchangeably called “AI factories”).1 Other telecoms around the world are actively considering the idea.2 These AI factories plan to sell gen AI training and inference computing capacity to others as a new telco business line. Although some gen AI computing can be done in the global cloud, some believe that other gen AI tasks should be performed locally, often via hybrid or private cloud models. This can fulfill national requirements for sovereign gen AI, with perceived benefits in areas like “security, performance, and fine-tuning with local language and cultural standards.”3 Likely end markets may include government, defense, health, and finance. Some governments are already demanding sovereign gen AI capacity for certain applications, while others are creating programs with financial incentives to build locally owned and operated gen AI processing capacity.4 Telcos are not the only possible providers of sovereign gen AI factories, but they are often among the leading candidates.
Telecoms getting into the data center business is nothing new. In the period from 2000 to 2015, multiple telecoms spent billions of dollars on building their own data centers or acquiring existing data center companies.5 It made sense in many ways: Data centers in that period needed access to high-speed connectivity, power, cooling, and many square feet in central locations, all of which telecoms had.
Over the past decade, however, some telcos have divested or spun out some, most, or all of their data center assets.6 According to a 2024 analysis, the number of telco mergers and acquisitions deals involving their data centers has increased recently: There were just over 20 deals per year in 2019 and 2020, compared with 63 deals in 2022 and 48 in 2023.7 That said, some telecoms still operate data centers, both for their own internal use and for selling computing capacity to others.8
The gen AI challenge is that “old” data centers often can’t be used to provide state-of-the-art gen AI training and inference. New rack-scale servers weigh over a ton, draw up to 120 kilowatts of power (roughly 10x to 20x as much as a conventional data center rack), and require higher voltages, different and faster communications gear, and a shift from air cooling to liquid cooling (which is especially challenging).9 At best, retrofitting could require replacing much of a data center; at worst, an entirely new one may need to be built.
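The scale of the power problem can be sketched with back-of-the-envelope arithmetic. This illustration takes the 120-kilowatt figure above and assumes, for comparison, that a conventional rack draws roughly 6 to 12 kilowatts (the range implied by the stated 10x-to-20x multiplier); the 100-rack deployment size is also an assumption for illustration only.

```python
# Rack power density comparison, using the 120 kW gen AI figure cited
# above and an assumed 6-12 kW range for a conventional rack.
GEN_AI_RACK_KW = 120.0               # state-of-the-art gen AI rack-scale server
CONVENTIONAL_RACK_KW = (6.0, 12.0)   # assumed typical conventional range

low, high = CONVENTIONAL_RACK_KW
print(f"Power multiplier vs. conventional racks: "
      f"{GEN_AI_RACK_KW / high:.0f}x to {GEN_AI_RACK_KW / low:.0f}x")

# Implied IT load for a hypothetical 100-rack gen AI deployment,
# ignoring cooling and power-distribution overhead (PUE).
racks = 100
print(f"IT load for {racks} gen AI racks: "
      f"{racks * GEN_AI_RACK_KW / 1000:.0f} MW")
```

Even before accounting for cooling overhead, a modest gen AI deployment implies a facility-level power draw that many older telco data centers were never provisioned to deliver, which is why retrofitting can be as costly as rebuilding.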
And that costs money. Based on our analysis, the average large sovereign telecom gen AI data center deployment (either standalone, or as part of a larger multiuse data center) costs on the order of US$100 million for a new build.10 For context, telecoms spent far more money during their first data center buildouts 15 to 20 years ago.11 Telecom gen AI data centers are also not a material factor in the almost US$200 billion projected AI server market for 2024.12 As an example, the leading gen AI chipmaker’s 10th largest customer was on track to spend almost US$2 billion on gen AI chips in late 2023.13 As an end market, telecom gen AI data centers may be a relative drop in the bucket.
There are different financial models that telcos can consider when building AI factories.14 Some are forming multi-telco consortiums, likely to spread the cost across multiple large players. Others are partnering with existing data center players, not only for financial reasons but also to help surmount the technical challenges of building advanced gen AI data center server racks and associated technologies. Some are simultaneously partnering with private equity firms for access to capital. Some are creating new, separate corporate structures to own and operate the gen AI data centers. Others plan to go it alone, using their own capital or borrowing the money. In some cases, these telcos may already have clients willing to commit to purchasing a stated amount of gen AI training and inference at a fixed price over a period of years.
The chosen business model can affect the available funding options. Some AI cloud services currently operate on a pay-as-you-go model, which can make revenue hard to predict. However, in addition to their own clouds, telcos may also have a wholesale option. Build-to-suit gen AI data centers with large amounts of capacity (tens of megawatts) usually have tenants committing to terms of at least 10 years, with 12- to 15-year terms and multiple renewal options being common.15 Such predictable cash flows can be advantageous for telcos wishing to raise external equity or debt funding rather than relying on their own capital.
It’s early days yet for gen AI. Software companies are predicted to see a relatively modest revenue lift in 2024.16 Energy infrastructure needs to be upgraded to meet potential demand from AI data centers.17 Consumer adoption is also still nascent: In the United Kingdom, only 36% of those surveyed had used gen AI even once, and of those who had, 64% used it less than once per week.18 Even as hyperscalers and other data center players race to build centralized gen AI computing capacity, some smartphone and PC makers are adding gen AI processing to their latest devices, which, along with smaller AI models, could reduce the need for AI factories. These gen AI phones and PCs are predicted to represent 22% of the market in 2024, at nearly 300 million devices.19 Equally, one rationale for multiple local gen AI data centers is to reduce latency by locating inference processing closer to users. But given that inferencing currently takes thousands of milliseconds, the latency benefits offered by local processing may be immaterial. Further, it’s unclear whether the best time to build gen AI factories is today, with today’s chips, or whether waiting a year or two might see substantially more power-efficient chips hit the market. Those chips could make cloud infrastructure built with today’s hardware relatively uncompetitive while reducing the overall data center capacity needed for any given amount of gen AI computing power.
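The latency argument above can be made concrete with rough numbers. This sketch takes the “thousands of milliseconds” inference figure from the text; the round-trip times for a local versus a distant cloud region are assumed, illustrative values, not measurements.

```python
# Why network latency may be immaterial for gen AI inference today.
INFERENCE_MS = 2000.0   # assumed full-response time ("thousands of ms")
RTT_LOCAL_MS = 10.0     # assumed round trip to an in-country data center
RTT_REMOTE_MS = 80.0    # assumed round trip to a distant cloud region

# Latency saved by processing locally, as a share of total response time.
saving_ms = RTT_REMOTE_MS - RTT_LOCAL_MS
share = saving_ms / (INFERENCE_MS + RTT_REMOTE_MS)
print(f"Local processing saves {saving_ms:.0f} ms, "
      f"about {100 * share:.1f}% of total response time")
```

Under these assumptions, local processing trims only a few percent off the user-perceived response time; the inference computation itself dominates. The latency case for local AI factories would strengthen only if inference times fall by an order of magnitude or more.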
Industry analysts suggest that telcos big enough to build a gen AI data center could generate revenues from doing so.20 But as noted above, history has already seen telcos get into, and then out of, the data center business at scale in the 2010 to 2020 period.21 Will history repeat itself?