Imagine aliens arrive on Earth with incredible powers that captivate some and terrify others. Governments quickly convene to try to make rules for how these enigmatic beings will live and work among us.
Read in the voice of Rod Serling, this scenario would make an excellent episode of The Twilight Zone, but it is also the situation many governments are facing with AI today. While AI, including generative AI, has been around for years, the release of ChatGPT in November 2022 was almost like an alien landing. In mere weeks, it made the incredible capabilities of the most recent models suddenly available to more than 100 million people.1 The intense optimism and concerns inspired by the sudden appearance of this powerful generative AI jump-started conversations not only about the uses of AI but also about how to govern it (Figure 1).
However, the intense reaction may have focused the discussion too narrowly on regulating AI itself. Deloitte's analysis of more than 1,600 policy initiatives (including regulation but also other policies aimed at supporting or shaping AI) from 69 countries and the EU suggests that many countries are following a similar path in addressing AI.2 While many are beginning to grapple with how to shape the development of the technology, governments have tools beyond direct regulation of AI that can help ensure a future that both fosters innovation and protects the public.
Today’s headlines are seemingly filled with a wide variety of proposals for how policymakers should prepare for the future of AI, but these diverging proposals emerge from a backdrop of striking similarity. Against expectations, most countries have so far approached AI with a very similar set of policy responses.
Many areas of public policy feature distinct sets of policies adopted by different groups of countries. For example, researchers at Oxford University found distinct clusters of climate policies depending on whether a country had a “strong” or “more limited” institutional capacity.3 Given that countries are at different points in their development, with different economic and social circumstances, one might expect similar clusters of AI policies to emerge. Surprisingly, though, no such clusters have appeared so far.
Deloitte analyzed a database of 1,600+ AI policies ranging from regulations to research grants to national strategies (for more information see methodology section).4 We then mapped those policies so that the policies frequently adopted together in the same countries appear closer together in our visualization. However, rather than finding clear sets of related policies, we found that most policies were clustered together (Figure 2). This implies that most countries included in the study are using a common set of policies as a starting point.
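To make the co-adoption mapping concrete, the sketch below shows one way such an analysis could be run. It is a minimal illustration, assuming a hypothetical export of the policy database with columns named country and instrument_type; the file name, column names, and clustering method are assumptions for illustration, not the actual OECD.AI schema or the exact approach used in this study.

```python
# Hypothetical sketch: map which policy instruments tend to be adopted together.
# Column names ("country", "instrument_type") are illustrative, not the OECD schema.
from itertools import combinations

import pandas as pd
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

policies = pd.read_csv("oecd_ai_policies.csv")  # hypothetical export of the database

# Build a co-adoption graph: nodes are instrument types, and edge weights count
# how many countries have adopted both instrument types.
G = nx.Graph()
for _, group in policies.groupby("country"):
    for a, b in combinations(sorted(group["instrument_type"].unique()), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# A force-directed layout pulls frequently co-adopted instruments closer together;
# distinct clusters would appear as separated groups of nodes in the plot.
positions = nx.spring_layout(G, weight="weight", seed=42)

# Community detection gives a rough check for clusters of related policies.
communities = greedy_modularity_communities(G, weight="weight")
print(f"{G.number_of_nodes()} instrument types, {len(communities)} candidate clusters")
```

In a visualization like Figure 2, the computed positions would feed a scatter plot; finding most instrument types in a single large community would correspond to the common starting set of policies described above.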
The patterns in which policy instruments are used reveal that it isn’t just individual policies that are common across countries, but also the overall pathway to regulation. Nearly all countries seem to be following a similar process: understand, then grow, then shape (Figure 3).
1. Understand. When confronted with an unfamiliar and fast-moving technology, governments first try to understand it. Many use collaborative mechanisms such as coordinating or advisory bodies to gather a diverse set of expertise to help understand and predict the likely impacts of AI.
Real-world example: Establishing committees, coordinating bodies, or information-sharing hubs on AI
2. Grow. With a clearer picture of what AI is and its likely impacts, most countries in our sample next create national strategies that deploy funding, education programs, and other tools designed to help spur the growth of the AI industry.
Real-world example: Providing research grants, loans, equity financing, or other mechanisms for funding the growth of the AI industry
3. Shape. As the AI industry continues to grow and develop, governments then look to shape the development and use of AI through instruments like voluntary standards or regulations.
Real-world example: Creating oversight bodies, regulatory sandboxes, or voluntary standards to govern the use of AI
There is significant overlap between the stages; efforts to grow the AI industry, for example, may continue for decades. Yet only relatively recently have countries in our sample begun to adopt policies aimed at shaping AI. This suggests that most countries, having worked their way through the “understand” stage, are now transitioning from the “grow” to the “shape” phase.
This transition is reflected in the public debate around AI. For the past several years, headlines have largely focused on national strategies, research grants, and other efforts to grow AI. Only recently has a sudden and intense debate emerged over whether, when, and how to regulate AI.5
It is at the “shape” stage where the countries sampled begin to diverge. Different histories and different regulatory philosophies can push countries down different paths to regulating AI. Comparing the EU’s draft AI Act with the US AI Risk Management Framework is a good example. Both policies take a risk-weighted approach but differ in how they apply it. The US Risk Management Framework offers nonbinding recommended actions, while the EU’s draft AI Act is binding legislation that, if enacted, would directly regulate use cases or applications of AI algorithms.6 Each approach reflects its jurisdiction's history and regulatory philosophy.
These differences in regulatory philosophy mean that governmental action is at an inflection point. Governments have been traveling with many companions down a common path of dealing with AI, but as governments begin to grapple with regulating AI, the path appears to branch off with many different options: Who should have jurisdiction over governing AI? How should societies balance competing imperatives of innovation and safety? What is government’s role in responding to AI developments? And many more.
The answers to these questions could have significant impacts on the future of AI. And because the differences between these options are philosophical at some level, it can be challenging to make early predictions about which will yield better results. However, the policy debate is not the only path that leads to the destination. Our research on existing AI policies has found an underused set of tools that could not only help shape the positive development of AI, but also help moot some of today’s most polarizing policy debates.
Perhaps because of the conversations surrounding generative AI, or perhaps because of the pathway that led from “understand” to “grow”, discussion about shaping the future of AI has largely focused on AI directly. Draft policy proposals, for example, frequently focus on controlling the workings or applications of AI itself rather than the outcomes that it creates.
But our research shows that this focus may be overlooking some of the most important tools already on the books. Of the 1,600+ policies we analyzed, only 11% were focused on regulating AI-adjacent issues like data privacy, cybersecurity, intellectual property, and so on (Figure 5). Even when limiting the search to only regulations, 60% were focused directly on AI and only 40% on AI-adjacent issues (Figure 5). For example, several countries have data protection agencies with regulatory powers to help protect citizens’ data privacy. But while these agencies may not have AI or machine learning named specifically in their charters, the importance of data in training and using AI models makes them an important AI-adjacent tool.
This can be problematic because directly regulating a fast-moving technology like AI can be difficult. Take the hypothetical example of removing bias from home loan decisions. Regulators could mandate that AI models be trained on certain types of data to ensure the models are representative and do not produce biased results, but such an approach can become outdated as new methods of training AI models emerge. Given the diversity of AI model types already in use, from recurrent neural networks to generative pretrained transformers to generative adversarial networks and more, finding a single set of rules that can deliver what the public desires both now and in the future may be a challenge.
Rather than trying to find a set of rules that can make AI models deliver the right outcomes in all circumstances, our data suggests regulators should focus on incentivizing those desired outcomes. For example, if it is in the public interest to limit bias in AI-enabled decisions, then requiring that the outcomes of all such decisions, regardless of the technology used, meet certain standards, rather than regulating the workings of AI itself, can help protect public goals even as new generations of technology come online.
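As a concrete illustration of auditing outcomes rather than model internals, here is a minimal sketch of what such a check might look like for the home loan example above. The field names, the grouping, and the 0.8 threshold (loosely echoing the "four-fifths" rule used in US employment contexts) are illustrative assumptions, not a proposed standard.

```python
# Hypothetical outcome audit for loan decisions: it checks results, not model internals.
# Field names ("group", "approved") and the 0.8 threshold are illustrative only.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of dicts like {"group": "A", "approved": True}."""
    totals, approved = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approved[d["group"]] += int(d["approved"])
    return {g: approved[g] / totals[g] for g in totals}

def passes_outcome_standard(decisions, min_ratio=0.8):
    """True if every group's approval rate is at least min_ratio of the highest rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return all(rate >= min_ratio * best for rate in rates.values())

# The same audit applies whether the decisions came from a transformer, an RNN,
# a rules engine, or a human underwriter.
sample = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "B", "approved": True}, {"group": "B", "approved": False},
]
print(approval_rates(sample), passes_outcome_standard(sample))
```

Because the check is defined over decisions rather than model architecture, it would not need to be rewritten each time a new generation of models arrives.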
Not all outcomes are equally important to the public. Functions like health care, education, and finance can have more significant impacts on society than more mundane functions like street maintenance, garbage collection, or call center scheduling. As a result, outcomes in higher-risk functions may require more scrutiny than in others.
Outcome-based, risk-weighted regulations can be a powerful tool for regulators, but they are often overlooked. While there are proposed regulations that are outcome-based and those that are risk-weighted, very few are both. We have previously identified five principles for the Future of Regulation that governments have used to help regulate fast-moving technologies. When categorizing the 1,600+ AI policies according to those principles, we see that many are well-represented—especially collaborative and adaptive policies that played such an important part in the “understand” and “grow” phases. However, despite being powerful tools, we found that only about 1% of regulations were either outcome-based or risk-weighted, and none in the data set were both (Figure 6).
This is not to say that outcome-based and risk-weighted regulations do not exist. In fact, they likely constitute part of the regulatory structures of the 69 countries we studied. It is just that those regulations are not considered “AI” regulations. And that is exactly the mindset that we believe needs to shift.
In many countries, the use of AI is not exempt from existing consumer protection, employment, anti-discrimination, or competition laws. In the United States, the recently released Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence explicitly reiterates that AI and AI-enabled products must follow existing consumer protections for bias, discrimination, privacy, and more.7 In other cases, small modifications to these AI-adjacent pieces of legislation can go a long way in shaping the responsible development of AI without constraining innovation.8 For example, it may be necessary to give regulators access to different data sources or tools to ensure that new AI tools are, in fact, achieving the outcomes set forth by regulation.
Regulation can directly and immediately alter the course of a technology, but occasionally, relying on regulation alone can stall the development of fast-moving technologies.9 Regulators unfamiliar with the technology may be hesitant to issue rules, while innovators slow their work because of the risk of ending up on the wrong side of future regulations. This can create a vicious cycle that stalls progress (Figure 7).10
Generative AI may be at particular risk of this type of uncertainty trap. While the development of generative AI remains rapid, there are initial signs that innovators are slowing work in certain areas due to complex trustworthiness, ethics, and regulatory concerns.11 A stall in development is not necessarily good or bad, but a pause without any progress on resolving the underlying concerns merely kicks the ethical questions down the road. It remains to be seen whether development of large language models and other forms of generative AI will slow until these pressing concerns are resolved.
But regulators are not alone. Acting as a regulator is only one of three roles government can play. Beyond regulation, governments can also shape the development of a technology by providing needed infrastructure or by using their buying power as a user of the technology. These roles can help reduce uncertainty around public-interest concerns, breaking the vicious cycle and accelerating innovation. For example, governments have played a key role in providing the infrastructure needed for technological development, whether physical infrastructure like the interstate highway system, technical infrastructure like access to test equipment for the development of early MRI machines, or even human capital infrastructure in the form of education and workforce development programs for quantum computing skills.12 Similarly, governments' large buying power often means they can create significant market incentives purely as users of technology. For example, guaranteed purchases from the Department of Defense helped the early commercial satellite communications industry weather a lack of customers, develop its technology, and earn market share.13
These levers may seem too small to move a $400 billion industry growing at more than 20% per year, but government has previously shown it can do exactly that.14 Take encryption and cybersecurity standards in cloud computing as an example. Even though the US federal government's roughly $8 billion in cloud spending was only about 2% of the more than $400 billion cloud market, it still made the government one of the largest single buyers.15 As a result, even a relatively small share of sales could move the whole industry to adopt standards required for government sales, such as the Federal Information Processing Standards (FIPS) for encryption.16
Providing critically needed infrastructure has also helped shape the development of entire industries, such as precision agriculture and location-based services. For example, the decision to release GPS to the public in 1983 made precise positioning, navigation, and timing available anywhere on the globe for the first time. This had big impacts across industries: telecom providers could compress mobile phone calls with more accurate time stamps; farmers could precisely target water and fertilizer on their crops; and all of us could find the fastest route to our destination. While GPS remains government-provided infrastructure, the benefit to the wider economy has been immense, generating an estimated $1.4 trillion in economic value since 1983.17
But aren’t governments already providing critical infrastructure to the AI industry? Absolutely. In fact, our data shows that providing critical infrastructure is the most common category of AI policy as governments look to help grow the industry. However, our data also reveals a significant gap. Most of the infrastructure provided to date is social infrastructure (information-sharing bodies, educational programs, workforce development initiatives, and the like), with funding infrastructure, such as R&D grant programs, making up a much smaller fraction. Very few governments have invested in technical infrastructure such as compute-sharing platforms or representative training data sets. Such tools can both accelerate AI development by removing key obstacles for developers and shape that development toward outcomes desired by policymakers (Figure 8).
Some countries are taking early steps with these tools. For example, the EO on Safe, Secure, and Trustworthy Artificial Intelligence directs the Office of Management and Budget to develop a means to ensure that all Federal acquisitions of AI follow risk-management practices.18 However, there remain significant opportunities for leaders to use governments’ buying power and infrastructure to guide AI toward desired outcomes without holding back its development.
While this study is a purely descriptive look at the current landscape of AI regulation, it has uncovered new opportunities for both regulators and policymakers to shape AI's development. Regardless of the specific direction different countries take with that shaping, a few steps can help countries better prepare for an AI-enabled future.
For regulators, the power of often-overlooked outcome-based, risk-weighted regulations suggests a few steps to consider:
For policymakers, the buying power and infrastructure of government cannot be overlooked in shaping AI. Policymakers should consider:
The course of an AI-enabled future is far from set. With the right tools, government leaders can make policies that help accomplish their goals, minimize unintended consequences, and guide an AI-enabled future that works for everyone.
Our analysis began with the database of policy instruments maintained by the OECD.AI Policy Observatory. This database contains more than 1,600 policy instruments related to AI from 69 countries and the European Union.22 These instruments go beyond regulations, ranging from national strategies to grant programs to standards.
We then categorized those policy instruments based on a number of factors including:
This categorization allowed us to analyze the set of policy instruments in more detail using methods such as network analysis and time-series analysis. These tools helped us uncover the patterns in policy adoption reported in this paper.
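For example, the time-series view can be approximated by counting the instruments launched each year in each category, which is roughly how a transition from "grow"-type to "shape"-type instruments would show up. This is a hypothetical sketch; the file name and column names (start_date, category) are assumptions rather than the actual OECD.AI field names.

```python
# Hypothetical sketch: count policy instruments launched per year by category
# to see when "shape"-type instruments (e.g., oversight bodies, standards)
# begin to catch up with "grow"-type instruments (e.g., grants, financing).
import pandas as pd

policies = pd.read_csv("oecd_ai_policies.csv")  # hypothetical export; columns assumed below

counts = (
    policies
    .assign(year=pd.to_datetime(policies["start_date"]).dt.year)  # "start_date" is assumed
    .groupby(["year", "category"])  # "category" is an assumed label such as grow/shape
    .size()
    .unstack(fill_value=0)
)
print(counts.tail())  # instruments launched per year, one column per category
```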