Few AI regulations across the globe address the outcomes rather than the tech

Outcome-based and risk-weighted regulations are an underused tool that can both protect the public interest and encourage innovation, a Deloitte US analysis shows

When it comes to fast-moving technologies like artificial intelligence, how can governments strike a balance between enabling innovation and protecting the public interest? Innovation and regulation tend to operate on different time frames, a mismatch that complicates efforts to govern rapidly evolving technology. And consider AI’s complexity and diversity: From computer vision finding potholes in roads to generative pretrained transformers answering people’s tax questions, crafting a single set of rules that addresses all forms of AI and their uses, both now and in the future, could be a formidable challenge.

Rather than trying to write rules that govern the inner workings of AI itself, a more effective route could be to regulate AI’s outcomes. Yet few regulations that take this approach while explicitly addressing AI exist so far, according to a recent Deloitte US analysis.

The Deloitte Center for Government Insights has defined five regulatory principles to address rapidly evolving technologies.1 First, there are principles related to the fast-evolving nature and cross-border reach of modern technologies: adaptive regulation, which advocates for a responsive, iterative approach rather than a static one; regulatory sandboxes that allow for prototyping and testing new methods; and collaborative regulation, which seeks alignment and engagement across national and international players within the ecosystem. Second, the research center outlines principles related to the regulations’ focus: outcome-based regulation, which focuses on the results rather than the processes; and risk-weighted regulation, which proposes a shift from one-size-fits-all regulation to a data-driven, segmented approach.

Outcome-based and risk-weighted regulations can be powerful tools for regulating AI, according to the research center. For example, if it’s in the public interest to limit bias in AI-enabled decisions, then requiring that the outcomes of all of those decisions, regardless of the technology used, meet certain standards—rather than regulating the workings of AI itself—could help protect public goals even as new generations of technology come online.

However, the Deloitte researchers reviewed the OECD.AI Policy Observatory’s database, which contains over 1,600 AI policy initiatives from 69 countries and the European Union—including regulations and policies aimed at supporting or shaping AI technology—and found that only about 1% of regulations were either outcome-based or risk-weighted, and no regulations included in the data set were both.

This isn’t to say that outcome-based and risk-weighted regulations don’t exist. They likely form part of the regulatory structures of the 69 countries included in the analysis, according to the researchers; they simply aren’t labeled as “AI” regulations. That leaves an opportunity for many governments to make their AI-adjacent regulations more explicit. And such clarifications don’t just protect the public: They can also be critical for speeding up innovation. In some cases, innovators slow their work on sensitive use cases for fear of ending up on the wrong side of future regulations, while regulators, unfamiliar with the technology, hesitate to make rules.2

Outcome-based and risk-weighted regulations can break this cycle by clearly showing where public equities lie, giving regulators confidence in their approaches, and giving innovators the clarity they need to move forward.

Research and analysis by the Deloitte Center for Government Insights

Read the full report at www.deloitte.com/insights/ai-regulations.


Endnotes

  1. William D. Eggers, Mike Turley, and Pankaj Kishnani, “The future of regulation: Principles for regulating emerging technologies,” Deloitte Insights, June 19, 2018.

  2. We have previously examined this phenomenon in regulation of the Internet of Things: Joe Mariani, “Guiding the IoT to safety,” Deloitte University Press, 2017.


Acknowledgments

Cover image by: Molly Piersol