The AI regulations that aren’t being talked about

Patterns in AI policies can expose new opportunities for governments to steer AI’s development

Joe Mariani

United States

William D. Eggers

United States

Imagine aliens arrive on Earth with incredible powers that captivate some and terrify others. Governments quickly convene to try and make rules for how these enigmatic beings will live and work among us.

Read in the voice of Rod Serling, this scenario would make an excellent episode of The Twilight Zone, but it is also the situation many governments are facing with AI today. While AI, including generative AI, has been around for years, the release of ChatGPT in November 2022 was almost like an alien landing. In mere weeks, it made the incredible capabilities of the most recent models suddenly available to more than 100 million people.1 The intense optimism and concerns inspired by the sudden appearance of this powerful generative AI jump-started conversations not only about the uses of AI but also about how to govern it (Figure 1).

However, the intense reaction may have overly focused the discussion on the regulation of AI itself. Deloitte's analysis of more than 1,600 policy initiatives—including regulation but also other policies aimed at supporting or shaping AI—from 69 countries and the EU suggests that many countries are following a similar path in addressing AI.2 While many are beginning to grapple with how to shape the development of the technology, governments have tools beyond direct regulation of AI to help ensure a future that both fosters innovation and protects the public.

Two roads diverged in a digital wood

Today’s headlines are seemingly filled with a wide variety of proposals for how policymakers should prepare for the future of AI, but these diverging proposals spring from a surprisingly similar starting point. Against expectations, most countries have so far approached AI with a very similar set of policy responses.

Many areas of public policy feature distinct sets of policies that are adopted by different sets of countries. For example, researchers at Oxford University found that there were distinct clusters of climate policies based on whether a country had a “strong” or “more limited” institutional capacity.3 Given that countries are at different points in their development, with different economic and social circumstances, one might expect similar clusters of AI policies to emerge. Surprisingly, though, no such clusters have appeared so far.

Deloitte analyzed a database of 1,600+ AI policies ranging from regulations to research grants to national strategies (for more information see methodology section).4 We then mapped those policies so that the policies frequently adopted together in the same countries appear closer together in our visualization. However, rather than finding clear sets of related policies, we found that most policies were clustered together (Figure 2). This implies that most countries included in the study are using a common set of policies as a starting point.
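
As a concrete illustration of the mapping idea (not our actual pipeline), the sketch below builds a simple co-adoption network from a hypothetical table of (country, policy type) records and uses a force-directed layout so that policy types frequently adopted by the same countries land near one another. The file name, column names, and layout choice are all assumptions for illustration, not the OECD.AI schema.

```python
# Minimal, illustrative sketch of a policy co-adoption map.
# Assumes a hypothetical CSV with one row per (country, policy_type) pair;
# the file name and column names are assumptions, not the actual OECD.AI schema.
from itertools import combinations

import networkx as nx
import pandas as pd

policies = pd.read_csv("ai_policies.csv")  # assumed columns: country, policy_type

# Build a graph in which an edge's weight counts how many countries
# adopted both policy types.
graph = nx.Graph()
for _, group in policies.groupby("country"):
    for a, b in combinations(sorted(set(group["policy_type"])), 2):
        if graph.has_edge(a, b):
            graph[a][b]["weight"] += 1
        else:
            graph.add_edge(a, b, weight=1)

# A force-directed (spring) layout pulls heavily co-adopted policy types
# closer together, producing the kind of layout used in figure 2.
positions = nx.spring_layout(graph, weight="weight", seed=42)
for policy_type, (x, y) in sorted(positions.items()):
    print(f"{policy_type}: ({x:.2f}, {y:.2f})")
```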

The patterns in which policy instruments were used reveal that it isn’t just the core policies that are common across countries, but also the overall pathway to regulation. Nearly all countries seem to be following a similar process of understand, then grow, then shape (Figure 3).

1. Understand. When confronted with an unfamiliar and fast-moving technology, governments first try to understand it. Many use collaborative mechanisms such as coordinating or advisory bodies to gather a diverse set of expertise to help understand and predict the likely impacts of AI.

Real-world example: Establishing committees, coordinating bodies, or information-sharing hubs on AI

2. Grow. With a clearer picture of what AI is and its likely impacts, most countries in our sample next create national strategies that deploy funding, education programs, and other tools designed to help spur the growth of the AI industry.

Real-world example: Providing research grants, loans, equity financing, or other mechanisms for funding the growth of the AI industry

3. Shape. As the AI industry continues to grow and develop, governments then look to shape the development and use of AI through instruments like voluntary standards or regulations.

Real-world example: Creating oversight bodies, regulatory sandboxes, or voluntary standards to govern the use of AI

There is significant overlap between the activities of each stage; efforts to grow the AI industry may continue for decades, for example. Yet only relatively recently have countries in our sample begun to adopt policies aimed at shaping AI. This suggests that most countries, having worked their way through the “understand” stage, are now transitioning from the “grow” to the “shape” phase.

This transition is reflected in the public debate around AI. For the past several years, headlines focused on national strategies, research grants, and other efforts to grow AI. Only recently has a sudden and intense debate emerged over whether, when, and how to regulate AI.5

It is at the “shape” stage that the countries sampled begin to diverge. Different histories and different regulatory philosophies can push countries down different paths to regulating AI. Comparing the EU’s draft AI Act with the US AI Risk Management Framework is a good example. Both policies take a risk-weighted approach but differ in how they apply it. The US Risk Management Framework includes nonbinding recommended actions, while the EU’s draft AI Act is binding legislation that, if enacted, would directly regulate use cases or applications of AI algorithms.6 Both approaches reflect the different histories and regulatory philosophies of the EU and the US.

These differences in regulatory philosophy mean that governmental action is at an inflection point. Governments have been traveling with many companions down a common path of dealing with AI, but as they begin to grapple with regulating AI, the path branches into many different options: Who should have jurisdiction over governing AI? How should societies balance the competing imperatives of innovation and safety? What is government’s role in responding to AI developments? And many more.

The answers to these questions could have significant impacts on the future of AI. And because the differences between these options are philosophical at some level, it can be challenging to make early predictions about which will yield better results. However, the policy debate is not the only path that leads to the destination. Our research on existing AI policies has found an underused set of tools that could not only help shape the positive development of AI but also help render moot some of today’s most polarizing policy debates.

Overlooked regulatory tools may be key

Perhaps because of the conversations surrounding generative AI, or perhaps because of the pathway that led from “understand” to “grow”, discussion about shaping the future of AI has largely focused on AI directly. Draft policy proposals, for example, frequently focus on controlling the workings or applications of AI itself rather than the outcomes that it creates.

But our research shows that this focus may be overlooking some of the most important tools already on the books. Of the 1,600+ policies we analyzed, only 11% were focused on regulating AI-adjacent issues like data privacy, cybersecurity, intellectual property, and so on (Figure 5). Even when limiting the search to regulations only, 60% were focused directly on AI and only 40% on AI-adjacent issues (Figure 5). For example, several countries have data protection agencies with regulatory powers to help protect citizens’ data privacy. These agencies may not have AI or machine learning named specifically in their charters, but the importance of data in training and using AI models makes them an important AI-adjacent tool.

This can be problematic because directly regulating a fast-moving technology like AI can be difficult. Take the hypothetical example of removing bias from home loan decisions. Regulators could accomplish this goal by mandating that AI models be trained on certain types of data to ensure that they are representative and will not produce biased results, but such an approach can become outdated when new methods of training AI models emerge. Given the diversity of AI models already in use, from recurrent neural networks to generative pretrained transformers to generative adversarial networks and more, finding a single set of rules that can deliver what the public desires both now and in the future may be a challenge.

Rather than trying to find a set of rules that can make AI models deliver the right outcomes in all circumstances, our data suggests regulators should focus on incentivizing those desired outcomes. For example, if it’s in the public interest to limit bias in AI-enabled decisions, then requiring that the outcomes of all such decisions, regardless of the technology used, meet certain standards—rather than regulating the workings of AI itself—can help protect public goals even as new generations of technology come online.

Not all outcomes are equally important to the public. Functions like health care, education, and finance can have more significant impacts on society than more mundane functions like street maintenance, garbage collection, or call center scheduling. As a result, outcomes in higher-risk functions may require more scrutiny than in others.

Outcome-based, risk-weighted regulations can be a powerful tool for regulators, but they are often overlooked. While there are proposed regulations that are outcome-based and those that are risk-weighted, very few are both. We have previously identified five principles for the Future of Regulation that governments have used to help regulate fast-moving technologies. When categorizing the 1,600+ AI policies according to those principles, we see that many are well-represented—especially collaborative and adaptive policies that played such an important part in the “understand” and “grow” phases. However, despite being powerful tools, we found that only about 1% of regulations were either outcome-based or risk-weighted, and none in the data set were both (Figure 6). 

This is not to say that outcome-based and risk-weighted regulations do not exist. In fact, they likely constitute part of the regulatory structures of the 69 countries we studied. It is just that those regulations are not considered “AI” regulations. And that is exactly the mindset that we believe needs to shift.

In many countries, the use of AI is not exempt from existing consumer protection, employment, anti-discrimination, or competition laws. In the United States, the recently released Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence explicitly reiterates that AI and AI-enabled products must follow existing consumer protections for bias, discrimination, privacy, and more.7 In other cases, small modifications to these AI-adjacent pieces of legislation can go a long way in shaping the responsible development of AI without constraining innovation.8 For example, it may be necessary to give regulators access to different data sources or tools to ensure that new AI tools are, in fact, achieving the outcomes set forth by regulation.

Regulators aren’t alone: other roles of government can help

Regulation can directly and immediately alter the course of a technology, but occasionally, relying on regulation alone can stall the development of fast-moving technologies.9 Regulators unfamiliar with the technology may be hesitant to issue rules, while innovators slow their work because of the risk of ending up on the wrong side of future regulations. This can create a vicious cycle that stalls progress (Figure 5).10

Generative AI may be at particular risk for this type of uncertainty trap. While the development of generative AI remains rapid, there are initial signs that innovators are slowing development in certain areas due to complex trustworthiness, ethics, and regulatory concerns.11 While a stall in development is not necessarily good or bad, a pause without any progress on resolving the underlying concerns merely kicks the ethical questions down the road. It remains to be seen whether the development of large language models and other forms of generative AI will slow until these concerns are resolved.

But regulators are not alone. Acting as a regulator is only one of three roles government can play. In addition to regulation, governments can also shape the development of a technology by providing needed infrastructure or by using their buying power as a user of technology. Those roles can help reduce uncertainty around public equities, breaking the cycle and accelerating innovation. For example, governments have played a key role in providing the infrastructure needed for technological development, whether physical infrastructure like the interstate highway system, technical infrastructure like access to test equipment for the development of early MRI machines, or even human capital infrastructure in the form of education and workforce development programs for quantum computing skills.12 Similarly, the large buying power of governments often means that they can create large market incentives purely as a user of technology. For example, guaranteed purchases from the Department of Defense helped the early commercial satellite communications industry weather a lack of customers, develop its technology, and earn market share.13

These levers may seem too small to move a $400 billion industry growing at more than 20% per year, but government has previously shown it can do exactly that.14 Take encryption and cybersecurity standards in cloud computing as an example. Even though the US federal government’s roughly $8 billion in cloud spending was a fraction of the more than $400 billion cloud market, it still represented one of the largest single buyers.15 As a result, even a relatively small percentage of sales could move the whole industry to adopt standards required for government sales, such as the Federal Information Processing Standards (FIPS) for encryption.16

Providing critically needed infrastructure has also helped shape the development of entire industries like precision agriculture and location-based services. For example, the decision to release GPS to the public in 1983 made precise positioning, navigation, and timing available anywhere on the globe for the first time. This had big impacts across industries: telecom providers could compress mobile phone calls with more accurate time stamps; farmers could precisely target water and fertilizer on their crops; and all of us could find the fastest route to our destination. While GPS remains government-provided infrastructure, the benefit to the wider economy has been immense, generating an estimated $1.4 trillion in economic value since 1983.17

But aren’t governments already providing critical infrastructure to the AI industry? Absolutely. In fact, our data shows that providing critical infrastructure is the most common category of AI policy as governments look to help grow the industry. However, our data also shows a significant gap. Most of the infrastructure provided to date is social infrastructure—information-sharing bodies, educational programs, workforce development initiatives, and the like—with funding infrastructure, such as R&D grant programs, making up a much smaller fraction. Very few governments have invested in providing technical infrastructure such as compute-sharing platforms or representative training data sets. Such tools can both accelerate AI development by removing key obstacles for developers and shape that development toward outcomes desired by policymakers (Figure 8).

Some countries are taking early steps with these tools. For example, the EO on Safe, Secure, and Trustworthy Artificial Intelligence directs the Office of Management and Budget to develop a means to ensure that all Federal acquisitions of AI follow risk-management practices.18 However, there remain significant opportunities for leaders to use governments’ buying power and infrastructure to guide AI toward desired outcomes without holding back its development.

Getting started

While this study has been a purely descriptive look at the current landscape of AI regulation, it has uncovered new opportunities for both regulators and policymakers to shape AI’s development. Regardless of the specific direction different countries take with that shaping, a few steps can help countries be better prepared for an AI-enabled future.

For regulators, the power of often-overlooked outcome-based, risk-weighted regulations means they should consider a few steps:

  • Conduct an inventory of all existing outcome-based regulations within their authority. This can give regulators a clear picture of what outcomes they are responsible for and what processes AI may be changing that require attention.
  • After reviewing existing regulations, regulators can assess whether the use of AI would require any changes.
  • Finally, while outcome-based, risk-weighted regulations may be well-suited to certain sectors, powerful systems such as large language models may require their own unique considerations.

For policymakers, the role of government’s buying power and infrastructure in shaping AI cannot be overlooked. Policymakers should consider:

  • Building on existing policies that call for inventories of government uses of AI to include inventories of how the government buys AI. This can help streamline the AI procurement process. Just as FedRAMP helped communicate and promote cloud security practices by defining performance standards for government purchases, creating a streamlined process with common performance standards for AI can quickly communicate important equities to the AI industry.19
  • Establishing governance procedures throughout the model lifecycle to help ensure that not only is government’s use of AI safe, but also that its buying power pushes the entire industry toward desirable practices and outcomes.20
  • Finally, policymakers should endeavor to understand the risks and incentives in developing AI models. A detailed study of the players and interactions within the AI industry can help uncover barriers to innovation.21 With this knowledge, government leaders can better deploy infrastructure and government funding in such a way that not only removes barriers to AI development but also promotes the public’s interest.

The course of an AI-enabled future is far from set. With the right tools, government leaders can make policies that help accomplish their goals, minimize unintended consequences, and guide an AI-enabled future that works for everyone.

Methodology

Our analysis began with the database of policy instruments maintained by the OECD.AI Policy Observatory. This database contains more than 1,600 policy instruments related to AI from 69 countries and the European Union.22 More than just regulations, these policy instruments span a range of tools meant to influence AI, from national strategies to grant programs to standards.

We then categorized those policy instruments based on a number of factors including:

  • Whether the policy primarily dealt with AI
  • The role of government (regulator, technology user, or infrastructure provider)
  • Government type (federal, unitary, or multinational)
  • Future of Regulation principle embodied in the policy

This categorization allowed us to analyze the set of policy instruments in more detail using methods such as network analysis and time-series analysis. These tools helped us uncover the patterns in policy adoption reported in this paper.
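
To make the approach concrete, below is a minimal, purely illustrative sketch of the kind of time-series view this categorization enables. It assumes a hypothetical file in which each policy record carries a start year and a hand-assigned stage (“understand,” “grow,” or “shape”); the file name and column names are assumptions for illustration, not the actual schema of our data set.

```python
# Illustrative sketch of a time-series view of policy adoption by stage.
# Assumes a hypothetical CSV with one row per policy and columns
# "year" (when the policy took effect) and "stage" (understand/grow/shape);
# these field names are assumptions, not the actual data set's schema.
import pandas as pd

policies = pd.read_csv("ai_policies_tagged.csv")

# Cumulative count of policies per stage over time; a pattern in which
# "understand" policies accumulate first, then "grow," then "shape"
# would reflect the pathway described in the article.
adoption = (
    policies.groupby(["year", "stage"])
    .size()
    .unstack(fill_value=0)
    .sort_index()
    .cumsum()
)
print(adoption)
```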



Endnotes

  1. Krystal Hu, “ChatGPT sets record for fastest-growing user base - analyst note,” Reuters, February 2, 2023.

  2. Data was drawn from the OECD.AI Policy Observatory database and then analyzed using a number of frameworks to find patterns in global AI regulation.

  3. Santa Fe Institute, "Action briefing: How can complexity economics give more insight into political economy,” September 22, 2022. 

  4. OECD.AI Policy Observatory, “Country dashboards and data,” accessed October 31, 2023.

  5. Examining web search trends or the number of Congressional hearings on AI per year shows a sudden increase after November 2022.

  6. Gina M. Raimondo, Artificial intelligence risk management framework (AI RMF 1.0), National Institute of Standards and Technology, January 2023; European Parliament, "EU AI Act: First regulation on artificial intelligence,” June 8, 2023. 

  7. The White House, “Executive order on the safe, secure, and trustworthy development and use of artificial intelligence,“ October 30, 2023.

  8. Nicol Turner Lee, Niam Yaraghi, Mark MacCarthy, and Tom Wheeler, “Around the halls: What should the regulation of generative AI look like?,” Brookings, June 2, 2023.

  9. Uncertainties around future regulation is just one category of uncertainty risk, but it can be a significant barrier to technological progress. To see how all forms of uncertainty risk are impacting the quantum information technology industry, read: Scott Buchholz, Kate Abrey, and Joe Mariani, Sensing the future of quantum, Deloitte Insights, February 22, 2023.

  10. We have previously examined this phenomenon in regulation of the Internet of Things: Joe Mariani, Guiding the IoT to safety, Deloitte University Press, 2017. 

  11. For example, there have been several instances of technology companies developing, but not releasing technologies such as web-scraping facial recognition or voice-mimicking generative AI. See: Cyberscoop, “Journalist Kashmir Hill on facial recognition and the underage hackers hitting Vegas,” September 28, 2023; Dan Bova, “Meta decides not to release AI that can mimic the voices of everyone you know,” Entrepreneur, June 20, 2023.

  12. For MRI test infrastructure, see: National Science Foundation, “MRI: Magnetic resonance imaging—Nifty 50,” accessed October 31, 2023. For workforce development efforts in quantum see: National Science and Technology Council and NSTC Subcommittee on Quantum Information Science, Quantum information science and technology workforce development national strategic plan, February 2022. 

  13. Ocean Navigator, “Iridium reborn. Globalstar expands,” January 1, 2003.

  14. Fortune Business Insights, Artificial Intelligence market size share & COVID-19 impact analysis, April 2023.

  15. For estimates on Federal cloud spending see our research in: Meghan Sullivan, Malcolm Jackson, Joe Mariani, and Pankaj Kamleshkumar Kishnani, Don’t just adopt cloud computing, adapt to it, Deloitte Insights, January 21, 2022; and for estimates of global public cloud market size, see: Statista, “Public cloud services end-user spending worldwide from 2017 to 2023,” October 2022.

  16. National Institute of Standards and Technology, “Security requirements for cryptographic modules,” NIST, May 25, 2001.

  17. Alan C. O’Connor, Michael P. Gallaher, Kyle Clark-Sutton, Daniel Lapidus, Zack T. Oliver, Troy J. Scott, Dallas W. Wood, Manuel A. Gonzalez, Elizabeth G. Brown, and Joshua Fletcher, Economic benefits of the global positioning system (GPS), National Institute of Standards and Technology, June 2019. 

  18. The White House, “Executive order on the safe, secure, and trustworthy development and use of artificial intelligence,“ October 30, 2023.

  19. According to the annual FedRAMP survey of government and industry professionals, 85% agreed that FedRAMP promotes the adoption of secure cloud services across the US Government. FedRAMP, “FedRAMP FY22 annual survey recap,” January 17, 2023.

  20. One example is how the DoD’s Chief Digital and AI Office is building down from its top-level responsible AI principles to detailed governance actions across model design, development, and deployment: US Department of Defense, U.S. Department of Defense responsible artificial intelligence strategy and implementation pathway, June 2022.

  21. CHIPS Research and Development Office, A vision and strategy for the national semiconductor technology center, National Institute of Standards and Technology, April 25, 2023. 

  22. Data was retrieved on June 20, 2023.


Acknowledgments

The authors would like to thank Bennett Stillerman, Thirumalai Kannan D, and Nicole Savia Luis for their contribution to the data analysis of the report. They also thank Kyra Kaczynski, Kelsey Lilley, Anita Soucy, Kristin Loughran, and Kimmerly Cordes for their thoughtful feedback on the draft, and Rupesh Bhat and Shambhavi Shah from Deloitte Insights for their editorial support.

Cover image by: Jaime Austin