New for 2025 is our series of shorter articles on emerging technologies. These trends tend to be earlier in adoption and smaller in revenues than our traditional Predictions topics, but we’re betting they will grow faster than average and make it to the big leagues in the next year or two:
Recognizing generative AI’s potential for enabling both threats and cyber solutions, cybersecurity professionals are exploring ways to harness its power to counter emerging risks and help fortify the technology environment
Arun Perinkolam, Sabthagiri Saravanan Chandramohan, Alison Hu, and Duncan Stewart
AI, including gen AI, is an increasingly important part of growing cyberthreats: Seventy-one percent of US state chief information security officers characterized AI threat levels as “very high” or “somewhat high” in a 2024 survey.1 AI is creating a mix of challenges, including regulatory change, the erosion of current solutions’ effectiveness, and AI-armed adversaries, all complicated by the pace of enterprise AI adoption.
Deloitte expects that gen AI-based cyberattacks, already occurring more frequently in 2024 (doubling or even tripling), will continue to grow in 2025.2 There are multiple ways in which gen AI can be used in a cyberattack, but one example would be in writing malicious phishing emails: These attacks were up more than 856% as of Q1 2024 compared with the same period in 2023.3 Threat actors are already using gen AI tools to write code for malware attacks.4
But gen AI tools can also be a force for good, defending against or ameliorating the new generation of AI-backed cyberthreats.
Some inside the cyber industry fear that gen AI, in addition to offering various benefits, can increase cyber risk, as it is a new attack vector that increases the attack surface.5 Gen AI can be used in cyberattacks in many ways: It can generate sophisticated, high-volume, text-based phishing attacks, as well as deepfake images and videos used to impersonate CEOs and other C-level executives. Sixty-one percent of organizations surveyed experienced a deepfake attack in the last year, with 75% of those being executive impersonations.6 In response, several gen AI solution providers are putting up guardrails to help prevent their tools from being used to generate these text and video attacks, and are embedding digital watermarks so that gen AI images or text can be detected, flagged, or blocked (see the 2025 TMT Prediction about watermarks and AI detection).
Next, many industries, led by the tech industry, have been increasingly using gen AI coding tools, which can help coders write more code, faster.7 However, the code a gen AI tool creates can have security issues, which more than half (56%) of developers surveyed in late 2023 said happens sometimes or even frequently.8 Further, the same survey showed that some developers often bypassed company policies about using coding tools, were routinely overconfident in the security of the generated code, and were not always scanning it for security issues.9 While there are security concerns, gen AI coding and security large language models (LLMs) are helping to accelerate the maturity, efficiency, and efficacy of security processes, such as the auto-generation of monitoring rules in security information and event management (SIEM) technologies, and identity and access management use cases around access workflows, provisioning, and third-party risk management.10
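To illustrate the kind of check the surveyed developers often skipped, here is a minimal, hypothetical Python sketch that flags a few common risk patterns in generated code before it is accepted. The pattern list, function name, and sample snippet are illustrative assumptions; real pipelines would rely on purpose-built static-analysis and security tools rather than a handful of regular expressions.

```python
import re

# Toy rules: a few patterns that often indicate insecure generated code.
# Illustrative only -- real pipelines would use dedicated security scanners.
RISK_PATTERNS = {
    "possible shell injection": re.compile(r"subprocess\.\w+\(.*shell\s*=\s*True"),
    "use of eval/exec": re.compile(r"\b(eval|exec)\s*\("),
    "hard-coded credential": re.compile(r"(password|api_key|secret)\s*=\s*['\"][^'\"]+['\"]", re.I),
    "disabled TLS verification": re.compile(r"verify\s*=\s*False"),
}

def scan_generated_code(code: str) -> list[str]:
    """Return a list of findings for a snippet of generated code."""
    findings = []
    for line_no, line in enumerate(code.splitlines(), start=1):
        for label, pattern in RISK_PATTERNS.items():
            if pattern.search(line):
                findings.append(f"line {line_no}: {label}: {line.strip()}")
    return findings

if __name__ == "__main__":
    generated = '''
import subprocess, requests
api_key = "sk-live-123456"              # hard-coded secret
requests.get(url, verify=False)         # TLS checks disabled
subprocess.run(user_input, shell=True)  # shell injection risk
'''
    for finding in scan_generated_code(generated):
        print(finding)
```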
Currently, there are regulatory and geopolitical developments relevant to the intersection of gen AI and cyber. As an example, Article 15 of the EU AI Act explicitly addresses cybersecurity issues for high-risk AI systems.11 Further, since 2022, export restrictions have been imposed on various technologies relevant to AI, especially gen AI, such as the advanced-node semiconductors needed for training and inference of gen AI models, the equipment used to make those advanced chips, and the design tools for those chips.12
As gen AI becomes more integrated with businesses in general, companies providing AI solutions should continue to focus on making secure products for end users. It isn’t just the products themselves that need to be secure: Companies should also be careful when sharing their own or others’ customer data with fourth parties, which are often the providers of LLM services.
There is increased regulatory complexity due, in part, to the EU’s Digital Markets Act,13 Digital Services Act,14 and the new AI Act.15 Tech companies are not only the lead developers but also among the biggest deployers of broad-use AI models. Therefore, expectations are likely higher that tech companies will play a larger, more meaningful role in working to ensure the trust and safety of the gen AI implemented in the products and solutions they develop and sell to enterprises. The use and abuse of gen AI technology by threat actors, particularly in times of heightened risk (divisive geopolitical disputes, elections, wars, and the like), is likely to become an increasingly critical defense and strategic consideration in 2025 and beyond (see the 2025 TMT Prediction about trust in AI and the tools that companies are often using).
Chiplets promise to deliver more flexible, scalable, and efficient systems for AI and high-performance computing environments, at higher yields
Karthik Ramachandran, Duncan Stewart, Christie Simons, and Dan Hamling
Deloitte predicts that worldwide advanced packaging revenue based on “chiplets,” the building blocks of today’s most advanced systems in a package (SiPs), will more than double from an estimated US$7 billion in 2021 to reach US$16 billion in 2025.16 Compared with more traditional architectures that rely on separate interconnected chips on a printed circuit board (PCB), chiplets offer higher-speed data transfer, lower latency, and better power, performance, and area (PPA), and can even help extend Moore’s Law.17 Chiplets are being used and explored in some of the fastest-growing markets, such as AI accelerators (especially generative AI), high-performance computing (HPC), and telecommunications applications.
How is a chiplet different from a chip? Typically, chipmakers start with a 300 mm silicon wafer (about 70,000 mm²) and make a number of monolithic dies, which we call chips once they are packaged. A high-end advanced chip generally measures 20 mm by 20 mm (or 400 mm²), resulting in approximately 175 dies per 300 mm wafer. But a chiplet-based design is not monolithic: It is a heterogeneous architecture in which smaller dies (the chiplets) are packaged in such a way that they work together akin to a monolithic die. Moreover, those dies and modules can come from various chip manufacturers.18
Why are chiplets important now? Chiplets have been around since the 1980s. But there has been renewed interest in, and a large-scale shift toward, chiplets in the past four to five years, largely because of the need for improved yield at leading-edge manufacturing nodes.19 Making advanced chips is becoming more challenging, as the industry is getting closer to the physical limits of Moore’s Law. But chiplets are making it possible to advance the miniaturization of semiconductors, and SiP-based chips are delivering performance comparable to that of traditional system-on-chips designed using monolithic dies.20
The smaller and more complex chip features get (e.g., at advanced processing nodes such as 5 nm and 3 nm), the more likely defects across the 300 mm wafer are to affect yield.21 An 800 mm² die, which is used to produce the most advanced AI chips at 3 nm or 5 nm, will likely have only 50% to 55% yield when assembled and packaged using a traditional monolithic approach, with a defect density of 0.1 defects per cm².22 For context, normal yields for a mature semiconductor process node (at 90 nm and 130 nm) would be on the order of 90% to 95%.23 To address this problem, chiplets combine several smaller, higher-yield dies into a larger system that functions as one: Smaller dies of 180 mm² each (with yields of about 95% each), packaged using chiplet-based architectures, can create more efficient and powerful AI processors at a lower cost and enhance product and functional flexibility and configurability to meet dynamic market needs.24
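As a rough illustration of the arithmetic behind these yield figures, the sketch below applies a common defect-density yield model (the negative binomial model, of which the Poisson model is a limiting case) to the die sizes cited above. The clustering parameter is an assumption for illustration, and published yield figures depend on the exact model and parameters used, so the outputs only approximate the numbers in the text.

```python
WAFER_AREA_MM2 = 70_000   # usable area of a 300 mm wafer, per the text
DEFECT_DENSITY = 0.1      # defects per cm^2, per the text
ALPHA = 2.0               # assumed defect-clustering parameter (illustrative)

def die_yield(die_area_mm2: float) -> float:
    """Negative binomial yield model: Y = (1 + D0 * A / alpha) ** -alpha."""
    area_cm2 = die_area_mm2 / 100.0
    return (1.0 + DEFECT_DENSITY * area_cm2 / ALPHA) ** (-ALPHA)

def good_dies_per_wafer(die_area_mm2: float) -> float:
    """Gross dies by simple area division (ignoring edge losses), scaled by modeled yield."""
    return (WAFER_AREA_MM2 / die_area_mm2) * die_yield(die_area_mm2)

for area in (800, 400, 180):
    print(f"{area} mm^2 die: modeled yield ~{die_yield(area):.0%}, "
          f"good dies per wafer ~{good_dies_per_wafer(area):.0f}")
# The 800 mm^2 die comes out near the ~50% yield cited above; smaller,
# chiplet-sized dies yield markedly better, which is the economic case
# for chiplet-based designs (exact figures depend on the assumed model).
```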
As chiplet adoption grows, industry players are finding creative ways to improve design processes, increase connection speeds and bandwidth, and improve energy efficiency. For example, the industry is looking at digital twins to emulate and visualize complex design processes step by step, including the ability to move around or swap chiplets to measure and assess the performance of a multi-chiplet system.25 Some companies have introduced a range of interconnection techniques to assemble and stack discrete components on a chip, improving efficiency over traditional, large-sized monolithic designs.26 There is also ongoing R&D exploring glass as a substrate for chiplet packaging, targeted at HPC and AI environments: It is proving to be more flexible and scalable than organic substrates, as well as superior in thermal conductivity and performance per watt.27 Even photonics, using light for data transfer, is being explored as an interconnect solution to provide optical input/output (I/O), especially for HPC and AI workloads. This technology could deliver energy-efficient and high-speed data transfer and processing (see the rising trend about silicon photonics, below).28
At the same time, chiplet architectures continue to deal with unique challenges. For example, stacking multiple dies that are connected by thin substrates can create thermal management problems, leading to potential circuit malfunctions and power loss.29 Additionally, as more intellectual property gets integrated into these complex packages, sourcing components from various vendors across different regions could increase the risk of cyberattacks and expose underlying systems to new security threats.30
To help unlock business value from chiplets, participants across the semiconductor value chain should consider working together to close the gaps and address challenges, while exploring avenues for growth.
Equipment makers, foundries, integrated device manufacturers, fabless companies, and outsourced semiconductor assembly and test providers could further bolster their fab-to-packaging partnerships and co-development efforts. Power and logic integrated circuit manufacturers and designers can consider nuances related to thermal and heat management.31
Companies should also consider pivoting toward establishing standards for chiplet interconnects and data interoperability—building on and advancing their early efforts such as the Universal Chiplet Interconnect Express standard, the High Bandwidth Memory Protocol, and the Bunch of Wires interconnect technology.32
Electronic design automation (EDA) companies, chip designers, and security experts can devise ways to develop built-in functionality that could sense potential IP theft and cyber infringement at the chiplet level and work with the rest of the supply chain to help address the broader threats and attack parameters that could affect chiplets. Additionally, designers should work with EDA and other computer-aided design and computer-aided engineering companies to strengthen the design, simulation, and verification and validation tools and capabilities for hybrid and complex heterogeneous systems—including applying AI techniques to chip design.33
Telcos’ back-end business and operations software market is growing slowly but modernizing it—by adopting SaaS and microservices architecture, moving to the cloud and more—is a hot spot of growth for software vendors and an opportunity for telcos to do more with 5G, fiber, and AI
Amit Kumar Singh, Duncan Stewart, Hugo Santos Pinto, and Dan Littman
Historically, telcos had two separate but important suites of telecom-specific IT systems. Business support software (BSS) mainly handled customer order capture, customer relationship management, and billing. Operational support software (OSS) handled service order management, network inventory management, and network operations.34 These were usually two distinct systems, often custom, mainly on-premises, and mainly hardware-defined, composed of a series of individual, specialized solutions targeting specific service lines (fixed and mobile) or technological areas such as access, core, and transmission, creating a fragmented and complex infrastructure.35 In 2025 and beyond, telcos are expected to modernize and infuse next-level automation and intelligence into these systems, which could lead to an acceleration of growth. Longer term, there may even be an integration of BSS and OSS into a single platform.
Why is this happening now? The advancement of on-demand access to services and products can require businesses to reimagine their customer experience, redefine offerings, reinvent business models, and reprogram sales channels. Within the BSS domain, specifically within billing, evolving customer expectations and new digital revenue streams may require new capabilities and support for offer-centric and customer-centric billing. This impact may be felt throughout the B/OSS life cycle.
In line with analyst estimates, Deloitte predicts that global revenues from the combined OSS and BSS market, or B/OSS, will be about US$70 billion in 2025, up from an estimated US$63 billion in 2023, or about a 5% annual growth rate.36 That said, telcos may want to take advantage of potentially new revenue-generating features delivered via 5G stand-alone, fiber, and more. Plus, the maintenance cost of legacy infrastructures (telcos can spend up to 80% of IT budgets on integrating and customizing legacy B/OSS systems)37 could drive telcos to modernize their B/OSS software at a more rapid pace. B/OSS software as a service is expected to grow at about 18% annually, and moving to the cloud (aka “cloudification”) is expected to grow at 21% annually, which suggests that these subtypes of B/OSS modernization are growing at triple to quadruple the roughly 5% growth rate of the overall B/OSS software market.38
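For readers who want to check the arithmetic, this short sketch reproduces the growth rates implied by the figures above; the values are those cited in the text, and the comparison against the roughly 5% overall rate is a simple ratio.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over a number of years."""
    return (end / start) ** (1 / years) - 1

# Overall B/OSS market: ~US$63B (2023) to ~US$70B (2025), per the text.
overall = cagr(63, 70, 2)
print(f"Overall B/OSS CAGR: {overall:.1%}")  # about 5%

# Modernization subtypes relative to the roughly 5% overall rate, per the text.
for name, rate in (("B/OSS SaaS", 0.18), ("cloudification", 0.21)):
    print(f"{name}: ~{rate:.0%} annual growth, ~{rate / 0.05:.1f}x the overall rate")
```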
As modernization accelerates, BSS and OSS can achieve more effective integration through tools such as APIs and microservices, leveraging cloud-based, software-defined solutions that are available in off-the-shelf standard configurations and offer modularity. Furthermore, these integrations may give telcos opportunities for greater efficiency (including lower costs), new revenue streams, a more resilient network, and greater cyber and operational security, as well as the ability to better leverage gen AI technologies as BSS and OSS merge over the coming years. Service-centricity could be key, as next-gen OSS systems work to orchestrate provisioning, fulfillment, and assurance on a per-service level, as opposed to a per-tech-domain level. This, along with the horizontal consolidation of technology and supporting software (integrating various technology platforms and supporting software into a unified system, mainly in the cloud), could help make OSS processes more service-centric.
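To make the architectural direction concrete, below is a minimal, hypothetical sketch of a service-centric microservice: a single, cloud-deployable API that acknowledges and tracks orders for one service. The endpoint names, payload fields, and use of Flask are illustrative assumptions (loosely inspired by TM Forum-style service ordering), not a conformant or production design.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
orders: dict[int, dict] = {}  # in-memory store; a real service would use a database

@app.post("/serviceOrder")
def create_service_order():
    """Accept a service order and acknowledge it (hypothetical, per-service endpoint)."""
    payload = request.get_json(force=True)
    order_id = len(orders) + 1
    orders[order_id] = {
        "id": order_id,
        "service": payload.get("service", "fiber-broadband"),  # illustrative default
        "state": "acknowledged",  # fulfillment and assurance steps would update this
    }
    return jsonify(orders[order_id]), 201

@app.get("/serviceOrder/<int:order_id>")
def get_service_order(order_id: int):
    """Return the current state of an order."""
    return jsonify(orders.get(order_id, {"error": "not found"}))

if __name__ == "__main__":
    app.run(port=8080)  # in practice this would sit behind an API gateway in the cloud
```

Composing many such small, independently deployable services behind standard APIs is what lets BSS and OSS functions be integrated, replaced, or moved to the cloud piece by piece, rather than in one monolithic migration.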
Many European telcos have already modernized their B/OSS software in the last few years, with some financial gains.39 However, a goal of new deployments will likely be service-centricity. Most of the growth in the next few years is expected to come from the Americas, the Middle East and North Africa, and emerging Asia Pacific.40 Also, BSS (specifically customer engagement systems) has been moving to the cloud more recently, while OSS has been slower to shift, due in part to telcos’ caution about moving important functions to these newer systems; however, that seems to be changing.41
One question might be “Who provides these new services?” Historically, there were various BSS or OSS solution providers and integrators, or companies built their own solutions in-house. As B/OSS modernizes, established enterprise software vendors and the hyperscalers are attempting to introduce their own offerings.42 Their success will likely require modern system integration with a cloud and AI-enabled mindset.
A billing transformation (a subset of B/OSS modernization) can have significant implications, as billions of dollars flow through legacy billing systems. Some telecom executives have had trepidation about putting this revenue at risk with a billing transformation.43 Protecting this financial cornerstone, aligning business objectives, and minimizing business disruption while evolving billing often requires a delicate balancing act. Considerations and options for navigating this gauntlet are further explored in Navigating the complexities of billing transformation.44
Telcos should spend money to both save money and make money during B/OSS modernization. Cost reduction can be an important part of the business case for modernization, but the opportunity to grow revenues through products such as network-as-a-service or converged offerings could also enhance telcos’ ability to monetize fixed wireless access services.
Further, modernizing B/OSS could require telcos to integrate their OSS and BSS, work with industry-standard APIs,45 focus on DevOps for cost control, and work with emerging AI and machine learning technologies.
Finally, as telcos look to move from a relatively fragmented B/OSS environment to a more monolithic model, the governance model should also evolve. Where B/OSS was once the exclusive purview of engineering, new stakeholders who should be involved at all points in the modernization process could include the HR, IT, and finance functions.
Propelled by the demanding requirements of gen AI, optical devices on silicon are stepping out of research labs and into the limelight of data centers
Duncan Stewart, Karthik Ramachandran, Jeroen Kusters, and Christie Simons
Deloitte predicts that sales of silicon photonics chips used as optical transceivers will grow from US$0.8 billion in 2023 to US$1.25 billion in 2025, a compound annual growth rate of 25%.46 Although that is only a fraction of estimated global 2026 chip sales of US$687 billion,47 these chips allow generative AI data centers—which need to move much larger amounts of data at higher speeds than other data centers—to communicate at light speed, using smaller, cheaper components and less energy, and producing less heat (better thermal management) than the traditional alternatives.48
Silicon chips work in the electrical domain: They either communicate with other chips using electrical signals over wires, or they must be attached to or combined with external lasers and modulators that send photons over fiber optic cables. Fiber usually has higher bandwidth than copper wire, and signals can travel longer distances using less energy. Moreover, fiber optic cables are immune to electromagnetic interference, which can be a problem for copper wires, and they are often more challenging to tap into or intercept, making them more secure. However, traditional photonics has limitations, mainly around cost and size, which silicon photonics hopes to overcome.49
In 2025, photonic devices are expected to increasingly be made of silicon, in standard semiconductor fabs, using many of the same processes and tools as conventional chips.
This can allow chip companies to integrate electronic and photonic components on a single chip. Over time, this could matter in many different use cases. However, in 2025, the main driver of silicon photonics adoption is expected to be in data center applications, specifically for those running gen AI training and inference. Where most data centers have chips, trays, and racks that communicate with each other at speeds of less than 100G (100 gigabits per second), gen AI equipment needs to move more data faster—speeds of 400G or even 800G are required—and photonics is the optimal solution.50
Some detailed data center context is needed here. Within gen AI data centers, there are many server racks. The standard size is 24 inches (600 mm) wide, 42 inches (1066.80 mm) deep, and 73.5 inches (1866.90 mm) tall: This is called a 42U rack (each U is a “rack unit” that is 1.75 inches [44.45 mm] in height).51 Many different types of chips and racks need to talk to each other over different distances and at different speeds, which are determined partly by rack dimensions.
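As a quick check on the rack arithmetic above, this snippet converts rack units to inches and millimeters:

```python
RACK_UNIT_IN = 1.75   # one "U" (rack unit) in inches
MM_PER_INCH = 25.4

units = 42
height_in = units * RACK_UNIT_IN
print(f"{units}U = {height_in:.1f} in = {height_in * MM_PER_INCH:.1f} mm")  # 42U = 73.5 in = 1866.9 mm
```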
As a result, there are opportunities for silicon photonics in 2025 and beyond. Different technologies have different sweet spots, primarily driven by the distance between components. Silicon photonics has an optimal zone that is neither too short nor too long, more than 10 cm but less than 10 meters, where it could have the greatest near-term advantages versus copper or traditional photonics, and therefore more opportunities for near-term revenue (see the sketch after the list below).
Chip to chip, on a tray: One configuration for gen AI rack-scale servers features trays of 2 GPUs and 1 CPU that are either 1U or 2U in height, depending on the cooling technology chosen. In 2025, tray-level communication between chips (a distance of less than 10 cm) is done electrically, but this could shift to optical over time. Given the limited space available (two to four inches in height) and an effort to keep costs low, optical links at this level may need to use integrated silicon photonics rather than discrete photonic devices. However, given the very short distances, electrical signals could be adequate in 2025.
Tray to tray, on a rack: A single server rack can hold 18 of the 1U trays described above; this is the densest possible configuration. Each tray needs to talk to all the others over a distance of no more than a meter or two vertically.52 In early 2025, this will be doable optically for about US$144,000 per rack, according to one estimate.53 By the back half of 2025 or early 2026, silicon photonic devices could start gaining traction in this application.
Rack to rack, but close: For various reasons (power, cooling, cost) there could be many paired server racks with half the density, sitting beside each other and needing to communicate over a meter or two. Communication between two server racks could, at some point, almost entirely be done optically, and this may be the largest near-term opportunity for silicon photonics in 2025.
Rack to rack, but less close: Each server rack (or pair of server racks) needs to talk to all the other server racks (and memories and processors of various kinds) across a full hyperscale data center: This can involve fiber optic cables that are tens or even hundreds of meters long. Silicon photonics can offer very high bandwidth and long reach, reducing costs and power consumption thanks to the high level of photonic device integration.54 Although cost is a consideration, silicon photonics is not expected to displace traditional photonics for this application in the near term.
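As a rough summary of the sweet spots described above, the sketch below maps a link distance to the interconnect the text suggests is most likely in the near term. The thresholds are the approximate figures from the text, and the function is illustrative, not engineering guidance.

```python
def likely_interconnect(distance_m: float) -> str:
    """Map a link distance to the near-term interconnect suggested above (approximate)."""
    if distance_m < 0.10:
        return "electrical (chip to chip on a tray; copper is likely adequate in 2025)"
    if distance_m <= 10:
        return "silicon photonics sweet spot (tray to tray, nearby rack to rack)"
    return "traditional photonics (long runs across the data center)"

for d in (0.05, 1.5, 80):
    print(f"{d:>5} m -> {likely_interconnect(d)}")
```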
One additional prediction around silicon photonics: M&A. If there is continued growth in gen AI data centers, and especially in the need for high speeds and reduced power consumption (both of which seem likely), then large companies may spend billions acquiring silicon photonics startups, companies, or divisions of other companies that are leaders in what could increasingly be seen as a critical emerging technology.55
Although this article is focused on the importance of gen AI data centers in accelerating demand for silicon photonics, it’s important to note that the technology is of potential interest in other use cases. Perhaps the most notable is making on-chip LIDAR units for advanced driver assistance systems (near term) and autonomous driving features (long term).56