While Robert Frost’s choice of the road “less traveled” led to global acclaim, taking such a path in the realm of tech adoption can often mean having to tackle never-before-seen technical and organizational challenges.1 As government organizations ponder their future with generative AI, they should work to understand not only how the technology can be used, but also its paths to scaling. When adopted at scale, generative AI can bring significant benefit for the public.
To gain deeper insight into this, Deloitte conducted its quarterly State of Generative AI in the Enterprise survey of 2,770 senior leaders across 14 countries. Respondents were asked about their level of expertise, use cases, barriers, and the impact and value created by gen AI (see methodology).
So, what is the current state of government thinking on gen AI?
Over the past nine months, government leaders have become more optimistic about gen AI driving transformative value. This optimism is being converted into tangible use cases as well. Seventy-eight percent of respondents report that their organizations are adopting gen AI somewhat fast or very fast—an 18 percentage-point increase since wave 2 of the survey (conducted in January 2024), reflecting increased adoption of generative AI in government (figure 1).
Deloitte surveyed 2,770 leaders across 14 countries between May and June 2024. Respondents were senior leaders in their organizations and included board and C-suite members and those at the president, vice president, and director levels. The survey sample was split equally between IT and line-of-business leaders. Government respondents were surveyed from the United States.
All participating organizations have one or more working implementations of AI in daily use and are either piloting generative AI or have one or more working implementations of generative AI in daily use. Respondents were required to meet one of the following criteria with respect to their organization’s AI and data science strategy, investments, implementation approach, and value measurement: influence decision-making, be part of a decision-making team, be the final decision-maker, or manage or oversee AI implementations.
With greater expectations for, and use of, gen AI, it’s natural to ask how government organizations are putting the technology to work.
Leaders in government organizations, like those in other industries, cite improving efficiency and productivity as their top use case for gen AI, followed by cost reduction. However, the benefits that government has realized from gen AI use cases tend to differ from those in other industries (figure 2).
The primary benefit realized by government agencies is the ease of developing systems and software using gen AI. This makes sense in the wider context of how government respondents say they are using gen AI. Eighty-six percent of respondents indicated using code generators to accelerate software development. Additionally, the adoption rate in IT/cybersecurity is more than 10 times higher than in other business functions (figure 3).
Given government’s IT workforce shortage, it seems understandable that organizations are using gen AI to develop software.2 This can be especially critical in cybersecurity: with 3.4 million unfilled jobs3 in the wider cybersecurity sector, most state-level IT security organizations are severely understaffed, typically employing only 6–15 cyber professionals compared to more than 100 in similar-size private financial organizations.4 Gen AI’s ability to accelerate software development can therefore be a significant labor saver, boosting the productivity of existing IT and cybersecurity staff.
Government organizations are finding success with technical applications of gen AI, and yet they face challenges in scaling its use. This issue is not unique to government, though. A significant number of both commercial and government respondents have transitioned fewer than one-third of their gen AI experiments into full production (figure 4).
However, the situation is even more dire for government respondents, because of an overall low rate of gen AI adoption. Outside of IT/cybersecurity, no business function reported more than 3% adoption of gen AI (figure 5). Consequently, even though both government and commercial organizations struggle to scale, the overall smaller number of gen AI proofs of concept means that even fewer solutions are adopted at scale.
Difficulty scaling gen AI isn’t new; our research has suggested that government has found it difficult to scale earlier eras of AI as well. But our data reveals that two factors may be contributing to government’s challenges in scaling gen AI this time around:
1. Lack of expertise. Among all industries, government respondents report the least expertise in generative AI (figure 6). They also report being the least prepared for gen AI from a talent perspective. This lack of expertise may help explain why choosing the right technologies is the biggest reported barrier to public-sector gen AI adoption.
However, government has recognized the issue and has started hiring more AI specialists. The US federal government has added more than 200 professionals to its AI workforce over the past year and plans to hire at least 500 AI experts by the end of fiscal 2025.5
2. Difficulty measuring mission value from gen AI. Even where talent and interest align, the unique nature of government work can pose a barrier. While commercial companies have clear, quantifiable metrics such as sales and revenue, government organizations can struggle to measure improvements to the mission. As a result, while they may agree that gen AI is important and plan to increase their investment in gen AI, those investments may not compete well in tight budget environments: 78% of respondents report struggling to measure impacts from gen AI (figure 7).
The difficulty in measuring mission value from gen AI highlights the distinct set of circumstances facing government leaders compared to their commercial counterparts. Those different circumstances also begin to carve out a different path to scaling.
To begin with, government leaders face different incentives than their commercial counterparts. With fixed pay and promotion scales, government leaders will likely see no financial benefit from adopting a cutting-edge technology, even if it succeeds. Conversely, if they adopt it and it fails, the consequences can be significantly negative. It is therefore not surprising that public-sector line-of-business leaders show less interest in generative AI than their commercial counterparts.
Public employees, on the other hand, tell the opposite story. Government employees often enjoy greater employment protection than their commercial counterparts, which could explain why surveyed government employees are more eager to use gen AI than commercial employees and government leaders (figure 8).
But far from being solely an impediment, this unique set of incentives may offer government organizations a novel pathway to scaling gen AI.
For commercial organizations, top-down interest is driving a strategy of embedding gen AI into as many business functions and processes as possible (figure 9). However, for public-sector organizations, the path to scaling gen AI may be more bottom-up. Given the high levels of interest among government employees, giving the workforce greater access to gen AI could enable them to experiment with the technology, quickly identifying use cases where it could bring value to the mission.
The real value of the bottom-up pathway is that it can offer concrete steps on how to scale gen AI in the unique circumstances faced by government organizations.
1. Provide wide access to gen AI within set guardrails
To experiment with gen AI, workers need access to gen AI. Seventy-five percent of government respondents in our survey reported that fewer than 40% of their workers have access to gen AI (figure 10)—demonstrating how far government organizations lag behind their commercial counterparts. Providing wide access to preapproved tools that have preset limitations or draw only from a curated knowledge base can be a good way to balance the risk of wide access with the reward of gen AI experimentation.
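As a purely illustrative sketch of what such guardrails could look like in practice, the snippet below screens prompts against a policy blocklist and answers only from a curated knowledge base. Every name, blocked term, and knowledge-base entry here is invented for the example; a real deployment would plug in an agency-approved model and policy engine.

```python
# Hypothetical guardrail wrapper: block risky prompts, answer only from a
# curated knowledge base. All content below is illustrative, not real policy.

CURATED_KB = {
    "leave policy": "Employees accrue 4 hours of leave per pay period.",
    "travel policy": "Travel must be pre-approved by a supervisor.",
}

# Terms that trigger an automatic refusal (illustrative examples only).
BLOCKED_TERMS = {"ssn", "classified", "password"}

def guarded_query(prompt: str) -> str:
    """Reject prompts containing blocked terms; otherwise answer only
    from the curated knowledge base, never from an open-ended model."""
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "Request blocked by policy guardrail."
    for topic, answer in CURATED_KB.items():
        if topic in lowered:
            return answer
    return "No curated answer available; please consult a human expert."

print(guarded_query("What is the leave policy?"))
# → Employees accrue 4 hours of leave per pay period.
```

The design point is that wide access and tight control are compatible: every employee can query the tool, but the blocklist and the curated knowledge base bound what it will ever say.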
2. Strengthen expertise and build talent
If choosing the right technology is often the prime barrier to exploring new technological solutions, then expert knowledge of the chosen technology can be crucial for getting started. Government organizations can build this knowledge in two ways: first, through internal upskilling programs designed around the learner and focused on specific outcomes; and second, by engaging a wide ecosystem of academic, industry, and even other government partners to tap their expertise. Our previous research has shown that this can quadruple the likelihood of AI experiments exceeding expectations.6
3. Lead with mission outcomes
Gen AI’s potential to deliver incredible benefits to the public is beyond doubt. However, if those benefits cannot be quantified, it could become difficult to justify further investments in the face of tight budget constraints. Therefore, scaling even the most successful gen AI pilots may depend on having a clear vision of their mission outcomes and how those outcomes are to be measured.
As with any journey, these considerations are only the beginning. But with a clear understanding of the destination and the path, government may just be able to arrive at real value from generative AI.