Generative AI is the flavor of the season: from boardroom agendas to everyday conversations, it's everywhere. Organizations across industries are exploring generative AI, but how exactly do they perceive the technology? How far along are they in adopting it? And what do they see as the biggest challenges in embracing it?
Perhaps surprisingly, the answers to these questions are often driven by emotions: how people feel about a given technology. So Deloitte conducted its State of Generative AI in the Enterprise survey to unearth answers to these questions and more. The survey revealed some interesting findings about how public sector leaders are thinking about generative AI. Check it out:
Leaders of public sector organizations, spanning federal, state, and local governments as well as higher education, have not failed to notice the meteoric rise of gen AI. Fifty-six percent of government respondents felt that generative AI would drive transformational change within the next year, compared with only 24% of respondents in commercial industry.
Unlike their commercial counterparts, who reported excitement as the chief emotion, government respondents said the emergence of generative AI mostly surprised them.
While a large majority of public sector respondents believed that generative AI would have a positive impact on society, they were 21 percentage points less optimistic than their commercial counterparts that it would produce major productivity gains. And those benefits are balanced against significant perceived risks: 63% also worried that gen AI would further erode trust in public institutions.
With perceived benefits and risks so evenly balanced, many public sector leaders are taking it slow with generative AI. Our data shows that many public institutions have approached the new technology cautiously, increasing their funding of gen AI, albeit at a lower rate than their commercial counterparts.
This caution does not mean that public sector leaders are not taking action. Respondents report feeling well prepared for workforce education, reskilling, and recruitment. Our survey showed that they are also implementing a variety of risk management controls, from governance frameworks to human supervision, at rates higher than their commercial counterparts.
These efforts are not shared evenly across the public sector, which could lead to overconfidence. Sectors with significant previous experience with AI, such as defense, report higher levels of readiness, while other sectors see themselves lagging in preparedness.
Public sector organizations seem to be attempting to walk a tightrope between caution and innovation. To make further strides in their gen AI journey, they should build the workforce they need and equip it with the required tools. A large pool of talent is at their disposal to test potential generative AI use cases at low risk before deciding to scale the technology.
Also, trusting the technology and expecting the public to trust it are two different things. Governments should be transparent with the public about what AI tools are being used, when, and how, which can help retain and build trust in the technology.
But creating AI solutions that are both successful and trustworthy requires an organization to have specific capabilities across technology, people, and even governance. For more on steps needed to develop those capabilities, see our work on the unique challenges to making trustworthy AI within government.