Article
8 minute read · 01 December 2021

AIs wide shut: AI regulation gets (even more) serious

AI in 2022 will face intensifying regulatory scrutiny, with implications that resonate across industries

Duncan Stewart

Canada

Paul Lee

United Kingdom

Ariane Bucaille

France

Gillian Crossan

United States

Though regulation typically lags behind technological innovation, it appears to finally be catching up with artificial intelligence (AI) applications, including machine learning, deep learning, and neural networks. Deloitte Global predicts that 2022 will see a great deal of discussion about regulating AI more systematically, with several proposals being made—although enacting them into actual enforced regulations will likely happen in 2023 or beyond. Some jurisdictions may even try to ban entire AI applications, such as facial recognition in public spaces, social scoring, and subliminal techniques.

We know why, but do we know how?

Normally, predictions are precise and quantified, but that’s generally not possible when talking about regulatory changes. Still, we have good reason to believe that AI regulation will become more prevalent and stricter over the next year. As of 2021, there are detailed proposals from the European Union1 and policy papers from the Federal Trade Commission (FTC) in the United States2 on regulating AI more heavily. And China is proposing multiple regulations for technology companies, some of which include AI regulation.3

Why now and not before? We see several reasons:

• AI in 2022 will be more powerful than it was only five years ago. Thanks to vastly faster specialized processors, better software, and larger data sets, AI can do more, and more affordably, than ever.4 As a result, AI is becoming ubiquitous, which in turn is attracting greater regulatory scrutiny.

• Some regulators have concerns about AI’s implications for fairness, bias, discrimination, diversity, and privacy. For example, the fundamental tool behind today’s AI is machine learning, which has received significant scrutiny from regulators and others for potential social bias.5

• AI regulations are a competitive tool at the geopolitical level. If one country or region can set the global standard for AI regulation, it may give a competitive advantage to companies operating in that country or region and disadvantage outsiders.

Some regulators have become quite vocal about AI’s perils. For example, in an August 2021 paper, US FTC Commissioner Rebecca Kelly Slaughter wrote: “Mounting evidence reveals that algorithmic decisions can produce biased, discriminatory, and unfair outcomes in a variety of high-stakes economic spheres including employment, credit, health care, and housing.”6 She went on to say that, although the FTC has some existing tools that can be used to better regulate AI, “new legislation could help more effectively address the harms generated by AI and algorithmic decision-making.”7

Figuring out how to effectively regulate AI will be challenging. One fundamental problem is that many AI computations are not “explainable”: The algorithm makes decisions, but we don’t know why it made a particular decision. This lack of transparency makes regulating AI far harder than regulating the more explainable and auditable technologies that informed decision-making in the last century. Regulations aim to prevent AI-powered decisions from having negative outcomes, such as bias and unfairness, but because the AI systems responsible for those decisions are hard to understand and audit, it can be difficult to predict when negative outcomes will occur until after people or institutions have already been affected.
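To make the explainability problem concrete, consider the minimal Python sketch below. It is our illustration, not any regulator’s method: the “loan application” feature names are hypothetical, and permutation importance is just one post-hoc technique. A black-box model returns a decision with no accompanying rationale; the most an auditor can easily recover is a rough, global picture of which inputs matter.

```python
# Minimal sketch of the explainability gap (illustrative only; the "loan"
# feature names are hypothetical). A black-box model emits decisions with no
# rationale attached; post-hoc tools recover only approximate, aggregate
# accounts of which inputs influence it.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["income", "tenure", "debt_ratio", "age"]  # hypothetical

model = RandomForestClassifier(random_state=0).fit(X, y)

# The model decides for one applicant -- but gives no reason for the decision.
print("decision:", model.predict(X[:1])[0])

# Permutation importance: a global, approximate view of feature influence.
# It still cannot say *why* this particular applicant was approved or denied.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```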

Another potential problem is the quality of the training data. The draft of the European Union’s AI regulation specifies that “training, validation, and testing datasets shall be relevant, representative, free of errors, and complete.” However, at the scale of the data required for machine learning, this standard, especially the stipulation that it be “free of errors and complete,” sets an extremely high bar that most companies and use cases may not be able to meet.8
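To see why the bar is so high, consider the kind of audit the clause implies. The Python sketch below is a minimal, hypothetical illustration (the column names and label rule are our assumptions, not the regulation’s): even three simple checks, for missing values, duplicate rows, and invalid labels, will flag something in most real-world training sets.

```python
# Minimal, hypothetical audit in the spirit of the draft EU standard's
# "free of errors and complete" language. Column names and the valid-label
# rule are illustrative assumptions, not drawn from the regulation.
import pandas as pd

df = pd.DataFrame({
    "income": [52000, None, 48000, 48000],  # one missing value
    "label":  [0, 2, 0, 0],                 # suppose only {0, 1} is valid
})

report = {
    "rows": len(df),
    "missing_values": int(df.isna().sum().sum()),
    "duplicate_rows": int(df.duplicated().sum()),
    "invalid_labels": int((~df["label"].isin([0, 1])).sum()),
}
print(report)  # any nonzero count means the dataset is not "error-free"
```

At the scale of millions or billions of training records, driving every such count to zero is rarely feasible.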

As AI comes into use everywhere, everyone has reason to care about how it is regulated, because those regulations can shape the extent of the good and harm that its use could bring about. The following stakeholders should be especially interested:

AI tool users. Regulators are likely to crack down on cases where algorithmic bias or other issues harm classes of people. Multiple studies show that AI-encoded bias can discriminate by gender, race, sexuality, wealth or income, and more, and that this bias usually works to further disadvantage the already disadvantaged. The reason is that artificial intelligence isn’t entirely artificial: It must be trained on datasets, and those datasets can reflect human biases. AI trained on biased data doesn’t eliminate human bias; it often amplifies it.

One famous example of dataset-driven bias involved a company that wanted to hire more women but found that its AI recruiting tool kept favoring men. No matter how hard the company tried to eliminate the bias, it persisted because of the training data, so the company stopped using the tool entirely.9
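How might such bias be detected in the first place? One widely used screening heuristic is the “four-fifths rule” from US employment guidelines: compare selection rates across groups and flag ratios below 0.8. The Python sketch below applies it to made-up hiring-model outputs; it is illustrative only, and real audits draw on many metrics and legal standards.

```python
# Minimal sketch of a disparate-impact screen (the "four-fifths rule").
# The predictions and group labels below are made up for illustration.
def disparate_impact(predictions, groups, favored="selected"):
    """Ratio of the lowest group selection rate to the highest."""
    rates = {}
    for g in sorted(set(groups)):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = outcomes.count(favored) / len(outcomes)
    return min(rates.values()) / max(rates.values()), rates

preds  = ["selected", "rejected", "selected",   # group A: 2 of 3 selected
          "selected", "rejected", "rejected"]   # group B: 1 of 3 selected
groups = ["A", "A", "A", "B", "B", "B"]

ratio, rates = disparate_impact(preds, groups)
print(rates)        # selection rates: A ≈ 0.67, B ≈ 0.33
print(ratio < 0.8)  # True: fails the four-fifths screening threshold
```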

AI regulations will affect different industries, and different functions within them, to different degrees. For instance, AI in human resources, specifically for hiring or performance management, is likely to be profoundly affected.10 There are already multiple cases where AI-powered decisions about recruitment, hiring, promotion, disciplining, termination, and compensation have been problematic.

Regulators may also be particularly concerned by internet platforms that moderate user-generated content, many of which rely heavily on AI to do so. Moderating millions of pieces of content daily in real time is essentially impossible, or at least unaffordable, without AI. However, a 2020 study claims that algorithmic moderation systems “remain opaque, unaccountable and poorly understood” and “could exacerbate, rather than relieve, many existing problems with content policy as enacted by platforms.”11

From an industry standpoint, the public sector—health, education, government benefits, zoning, public safety, the criminal justice system, and more—is likely to be deeply affected. For example, facial recognition in public spaces for law enforcement and criminal justice is already in wide use, but it is one of the technologies the proposed European Union regulations would ban, with certain exceptions.12 Regulation will also be a big issue for private-sector health care and education, affecting matters such as grades, scholarships, student loans, and disciplinary actions. The financial services industry will likely face substantial implications as well, as it uses AI to inform everything from credit scores, loans, and mortgages to insurance and wealth management.

Industries such as logistics, mining, manufacturing, and agriculture may feel less of an impact. Their AI algorithms can of course have problems, but those problems tend to involve accuracy and errors rather than bias. And although such issues are less apt to cause direct human harm, they may have an environmental impact.

AI tool vendors. Dozens of tech companies sell pure-play AI tools or solutions. Some of these include subsets of AI technology likely to be more heavily regulated or banned; some even consist of nothing but those subsets. Dozens more provide overall solutions that have AI components or features that could be affected by regulation. Hyperscalers, especially, have reason to watch regulators closely. All of them have AI-as-a-service offerings that could be affected to varying degrees; regulations could prevent them from selling some services in some geographies, or companies could even be made liable for customers’ use of their AI services. 

AI users that are also vendors. Many internet platforms and apps are heavy users of the same AI technologies that they sell outright, rely on to execute their business models, or both. Common among these technologies are facial recognition, sentiment detection, and behavior prediction, all of which are potentially contentious AI capabilities.

Regulators and society. Those making the rules face challenges of their own in balancing rapidly changing technological advances against a range of stakeholder concerns. They will need clearly articulated global and national policy objectives so that they can develop legislation, regulations, and codes of conduct that speak to those objectives. An agile, improvement-oriented approach to regulation will likely be more effective than inflexible rules-based legislation. Finally, although regulatory and societal goals are linked, they are distinct and sometimes not aligned.

The bottom line

The next two years could see a number of scenarios play out. 

First, stakeholders affected by regulations that are adopted and enforced may shut down AI-enabled features in certain jurisdictions or cease operating in some jurisdictions entirely—or they may continue to operate, get fined, and pay those fines as a cost of doing business. 

Second, it’s possible that large and important markets such as the European Union, the United States, and China will pass conflicting AI regulations, making it impossible for companies to comply with all of them. 

Third, it’s also possible that one set of AI regulations will emerge as a gold standard, as has happened with the European Union’s General Data Protection Regulation around privacy, which could simplify cross-border compliance. 

Fourth, it’s even possible that AI vendors and platforms could group together in a consortium and lead a conversation about how AI tools should be used and how they can become more transparent and auditable—adopting a degree of self-regulation that would lessen regulators’ perception that oversight needs to be imposed from above.

Even if that last scenario is what actually happens, regulators are unlikely to step completely aside. It’s a nearly foregone conclusion that more AI regulations will be enacted in the near term. Though it’s not clear exactly what those regulations will look like, it is likely that they will materially affect how AI is used.

  1. European Commission, Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, EUR-Lex, April 21, 2021.
  2. Elisa Jillson, “Aiming for truth, fairness, and equity in your company’s use of AI,” Federal Trade Commission, April 19, 2021.
  3. Arjun Kharpal and Evelyn Cheng, “The latest target of China’s tech regulation blitz: algorithms,” CNBC, September 9, 2021.
  4. As one example, moving to specialized AI chips can yield literally thousandfold improvements; Saif Khan and Alexander Mann, AI chips: What they are and why they matter, Center for Security and Emerging Technology (CSET), April 2020.
  5. James Manyika, Jake Silberg, and Brittany Presten, “What do we do about the biases in AI?,” Harvard Business Review, October 25, 2019.
  6. Rebecca Kelly Slaughter, “Algorithms and economic justice: A taxonomy of harms and a path forward for the Federal Trade Commission,” Yale Journal of Law & Technology, August 2021.
  7. Ibid.
  8. European Commission, Artificial Intelligence Act.
  9. Jeffrey Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, October 10, 2018.
  10. Tom Simonite, “New York City proposes regulating algorithms used in hiring,” Wired, January 8, 2021.
  11. Robert Gorwa, Reuben Binns, and Christian Katzenbach, “Algorithmic content moderation: Technical and political challenges in the automation of platform governance,” Big Data & Society 7, no. 1 (2020).
  12. European Commission, Artificial Intelligence Act.

The authors would like to thank the following individuals for their contributions to this chapter: Beena Ammanath, Ralf Esser, Lukas Kruger, Susie Samet, and Nick Seeber.

Cover image by: Jaime Austin

 
