Telecoms tackle the generative AI data center market

Telecoms will need chips, energy, water, and money to compete with the hyperscalers and others in gen AI. But what else should they consider?

Dan Littmann, Girija Krishnamurthy, Matti Littunen, Ariane Bucaille, and Kevin Westcott | United States

According to Deloitte analysis of press releases so far in 2024, more than 15 global telecoms in over a dozen countries with a combined population of over two billion have announced that they are building gen AI data centers (interchangeably called “AI factories”).1 Other telecoms around the world are actively considering the idea.2 Through these AI factories, the telecoms plan to sell gen AI training and inference computing capacity to others as a new business line. Although some gen AI computing can be done in the global cloud, some believe that other gen AI tasks should be performed locally, often via hybrid or private cloud models. Local processing can fulfill national requirements for sovereign gen AI, with perceived benefits in areas like “security, performance, and fine-tuning with local language and cultural standards.”3 Likely end markets include government, defense, health, and finance. Some governments are already demanding sovereign gen AI capacity for certain applications, while others are creating programs with financial incentives to build locally owned and operated gen AI processing capacity.4 Telcos are not the only possible providers of sovereign gen AI factories, but they are often among the leading candidates.

Telecoms getting into the data center business is nothing new. In the period from 2000 to 2015, multiple telecoms spent billions of dollars on building their own data centers or acquiring existing data center companies.5 It made sense in many ways: Data centers in that period needed access to high-speed connectivity, power, cooling, and many square feet in central locations, all of which telecoms had.

In the decade since, however, some telcos have divested or spun off some, most, or all of their data center assets.6 According to a 2024 analysis, telco merger-and-acquisition deals involving data centers have increased recently: There were just over 20 deals per year in 2019 and 2020, but 63 in 2022 and 48 in 2023.7 That said, some telecoms still operate data centers, both for their own internal use and for selling computing capacity to others.8

The gen AI challenge is that older data centers often can’t be used to provide state-of-the-art gen AI training and inference. New rack-scale gen AI servers weigh over a ton, draw up to 120 kilowatts of power (roughly 10 to 20 times as much as a conventional data center rack), and require higher voltages, different and faster communications gear, and a shift from air cooling to liquid cooling (which is especially challenging).9 At best, upgrading could mean replacing much of a data center; at worst, an entirely new one may need to be built.
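The scale of that retrofit can be sketched with rough arithmetic. The Python snippet below is purely illustrative: the 120 kW gen AI rack figure comes from the text above, while the 8 kW conventional-rack draw and the 200-rack hall are assumed values chosen for the example.

```python
# Back-of-envelope comparison of facility power needs.
# The 120 kW gen AI rack draw is from the text; the 8 kW
# conventional-rack figure and 200-rack hall are assumptions.
CONVENTIONAL_RACK_KW = 8      # assumed typical air-cooled rack
GEN_AI_RACK_KW = 120          # per the text, roughly 10x-20x higher

def facility_power_mw(racks: int, kw_per_rack: float) -> float:
    """Total IT load in megawatts for a given rack count."""
    return racks * kw_per_rack / 1000

# A hypothetical 200-rack hall, before and after a gen AI retrofit:
old_mw = facility_power_mw(200, CONVENTIONAL_RACK_KW)   # 1.6 MW
new_mw = facility_power_mw(200, GEN_AI_RACK_KW)         # 24.0 MW
print(f"Conventional: {old_mw:.1f} MW; gen AI: {new_mw:.1f} MW "
      f"({new_mw / old_mw:.0f}x the power and cooling load)")
```

A 15x jump in power and cooling load per hall is why the text frames this as a rebuild problem rather than an upgrade problem.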

And that costs money. Based on our analysis, the average large sovereign telecom gen AI data center deployment (either standalone or part of a larger multiuse data center) costs on the order of US$100 million for a new build.10 For context, telecoms spent far more money during their first data center buildouts 15 to 20 years ago.11 Telecom gen AI data centers are also not a material factor in the almost US$200 billion projected AI server market for 2024.12 As an example, the leading gen AI chipmaker’s 10th-largest customer was on track to spend almost US$2 billion on gen AI chips in late 2023.13 As an end market, telecom gen AI data centers may be a relative drop in the bucket.
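To put “drop in the bucket” in proportion, a quick calculation using the two figures cited above (the comparison is illustrative, not a market-share forecast):

```python
# Scale comparison using the figures from the text: a ~US$100 million
# telco build against the ~US$200 billion projected 2024 AI server market.
telco_build_usd = 100e6          # average sovereign gen AI build, per text
ai_server_market_usd = 200e9     # projected 2024 AI server market, per text

share_pct = 100 * telco_build_usd / ai_server_market_usd
print(f"One US$100M telco build is {share_pct:.2f}% of the projected market")
```

Even a few dozen such builds would amount to well under 5% of one year’s projected AI server spending.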

There are different financial models that telcos can consider when building AI factories.14 Some are forming multi-telco consortia, likely to spread the load across multiple large players. Others may partner with existing data center players, not only for financial reasons but also to help surmount the technical challenges of building advanced gen AI server racks and associated technologies. Some are simultaneously partnering with private equity firms for access to capital. Some are creating new, separate corporate structures to own and operate the gen AI data centers. Others plan to go it alone, using their own capital or borrowing the money. In some cases, these telcos may already have clients willing to commit to purchasing a stated amount of gen AI training and inference at a fixed price over a period of years.

The chosen business model can affect the available funding options. Some AI cloud services currently operate on a pay-as-you-go model, which can make revenue hard to predict. However, in addition to their own clouds, telcos may also have a wholesale option: Build-to-suit gen AI data centers with large amounts of capacity (tens of megawatts) usually have tenants committing to terms of at least 10 years, with 12- to 15-year terms and multiple renewal options being common.15 Such predictable cash flows can be advantageous for telcos wishing to raise external equity or debt funding rather than relying on their own capital.
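Why contracted wholesale terms help with external financing can be shown with a simple discounted-cash-flow sketch. All figures here (the US$20 million annual rent, the 10-year term, the 8% discount rate) are hypothetical illustrations, not values from the text:

```python
# Contrast the revenue predictability of a committed build-to-suit lease
# with pay-as-you-go usage. All dollar figures and rates are hypothetical.
def lease_revenue(annual_rent: float, years: int) -> list[float]:
    """Committed build-to-suit lease: the same rent every year."""
    return [annual_rent] * years

def npv(cash_flows, discount_rate):
    """Net present value of a stream of end-of-year cash flows."""
    return sum(cf / (1 + discount_rate) ** (t + 1)
               for t, cf in enumerate(cash_flows))

# Hypothetical 10-year tenant commitment at US$20 million per year,
# discounted at 8% -- a contracted stream lenders can underwrite,
# unlike variable pay-as-you-go revenue.
committed = lease_revenue(20.0, 10)
print(f"Contracted NPV: US${npv(committed, 0.08):.1f}M")  # ~US$134.2M
```

The point is not the specific numbers but that a fixed, contracted stream can be valued and borrowed against, whereas pay-as-you-go revenue cannot be underwritten as easily.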

It’s early days yet for gen AI. Software companies are predicted to see a relatively modest revenue lift in 2024.16 Energy infrastructure needs to be upgraded to meet potential demand from AI data centers.17 Consumer adoption is also still early: In the United Kingdom, only 36% of those surveyed had used gen AI even once, and of those who had, 64% used it less than once per week.18 Even as hyperscalers and other data center players race to build centralized gen AI computing capacity, some smartphone and PC makers are adding gen AI processing to their latest devices, which, along with smaller AI models, could reduce the need for AI factories. These gen AI phones and PCs are predicted to represent 22% of the market in 2024, at nearly 300 million devices.19 Equally, one rationale for multiple local gen AI data centers is to reduce latency by locating inference processing closer to users; but given that inferencing currently takes thousands of milliseconds, the latency benefit of local processing may be immaterial. Further, it’s unclear whether the best time to build gen AI factories is today, with today’s chips, or whether waiting a year or two might see far more power-efficient chips hit the market. Such chips could make cloud infrastructure built with today’s chips relatively uncompetitive while reducing the overall data center capacity needed for any given amount of gen AI computing power.
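A rough calculation illustrates the latency point. The “thousands of milliseconds” inference time comes from the text; the network round-trip figures are assumed for illustration, not measurements:

```python
# Rough check of the latency argument: if inference itself takes seconds,
# shaving network round-trip time by serving locally changes little.
INFERENCE_MS = 3000        # "thousands of milliseconds," per the text
REMOTE_RTT_MS = 80         # assumed round trip to a distant cloud region
LOCAL_RTT_MS = 10          # assumed round trip to an in-country facility

remote_total = INFERENCE_MS + REMOTE_RTT_MS
local_total = INFERENCE_MS + LOCAL_RTT_MS
saving_pct = 100 * (remote_total - local_total) / remote_total
print(f"Local serving saves {remote_total - local_total} ms, "
      f"about {saving_pct:.1f}% of total response time")
```

Under these assumptions, local serving trims a few percent of total response time, which is why the text suggests the latency rationale may be immaterial while inference remains slow.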

Industry analysts suggest that telcos big enough to build a gen AI data center could generate new revenues.20 But as noted above, history has already seen telcos get into, and then back out of, the data center business at scale.21 Will history repeat itself?


Endnotes

  1. Deloitte analysis of public announcements. Countries include Germany, South Korea, Singapore, Indonesia, Thailand, Japan, the United Arab Emirates, France, Qatar, Spain, Switzerland, Italy, and India.
  2. Lead author Duncan Stewart was consulted by employees from Deloitte member firms in Asia Pacific and North America between February 2024 and September 2024 as to whether specific telecom companies should build gen AI data centers for providing gen AI training and inference to other companies, in addition to internal training and inference needs.
  3. Nvidia, “Transforming telcos into sovereign AI factories,” accessed Aug. 11, 2024.
  4. Government of Canada, “Securing Canada’s AI advantage,” April 7, 2024.
  5. Peter Judge, “The telco data center sell-off,” Data Center Dynamics, Jan. 2, 2020.
  6. Ibid.
  7. CSI Magazine, “Telecom consolidation: Over 500 M&A deals in five years,” July 23, 2024.
  8. Judge, “The telco data center sell-off.”
  9. Lachlan Colquhoun, “AI is forcing a data center design rethink,” CDO Trends, Sept. 18, 2023.
  10. See endnote 2. From these conversations, the approximate size of projects being considered was about US$100 million.
  11. Judge, “The telco data center sell-off.”
  12. Manoj Sukumaran and Aaron Lewis, “Server market analysis – 1H24,” Omdia, April 11, 2024.
  13. The Verge, “The GPU haves and have nots,” Dec. 4, 2023.
  14. All of the various models cited in this paragraph are taken from the public announcements of the various telecoms included in the Deloitte analysis.
  15. Alicia Villegas, “Opportunity knocks for build-to-suit data centers,” PERE, Sept. 2, 2024.
  16. Duncan Stewart, Baris Sarer, Gillian Crossan, and Jeff Loucks, “Generative AI and enterprise software: What’s the revenue uplift potential?,” Deloitte Insights, Nov. 29, 2024.
  17. Madeline Ruid, “AI boom is creating opportunities for renewables and power infrastructure,” Global X, June 4, 2024.
  18. Paul Lee and Ben Stanton, “Generative AI: 7 million workers and counting,” Deloitte UK, June 25, 2024.
  19. Gartner, “Gartner predicts worldwide shipments of AI PCs and GenAI smartphones to total 295 million units in 2024,” Feb. 7, 2024.
  20. Michelle Donegan, “Can telcos capitalize on the cloud and AI infrastructure building boom?,” Inform TM Forum, May 31, 2024.
  21. Judge, “The telco data center sell-off.”

Acknowledgments

The authors would like to thank Hugo Santos Pinto, Craig Wigginton, Jim Aber, Cindy Varga, Peter Corbett, Michael Steinhart, Andy Bayiates, Prodyut Borah, Prashant Raman, and Shambhavi Shah.

Cover image by: Harry Wedel