We know why, but do we know how?
Normally, predictions are precise and quantified, but that’s generally not possible when talking about regulatory changes. Still, we have good reasons to believe that AI regulations will become more prevalent and stricter in the coming year. As of 2021, there are detailed proposals from the European Union1 and policy papers from the Federal Trade Commission (FTC) in the United States2 on regulating AI more heavily. China, too, is proposing multiple regulations targeting technology companies, some of which cover AI.3
Why now and not before? We see several reasons:
• AI in 2022 will be more powerful than it was only five years ago. Thanks to vastly faster specialized processors, better software, and larger data sets, AI can do more, and more affordably, than ever.4 As a result, AI is becoming ubiquitous, which in turn is attracting greater regulatory scrutiny.
• Some regulators have concerns about AI’s implications for fairness, bias, discrimination, diversity, and privacy. For example, the fundamental tool behind today’s AI is machine learning, which has received significant scrutiny from regulators and others for potential social bias.5
• AI regulations are a competitive tool at the geopolitical level. If one country or region can set the global standard for AI regulation, it may give a competitive advantage to companies operating in that country or region and disadvantage outsiders.
Some regulators have become quite vocal about AI’s perils. For example, in an August 2021 paper, US FTC Commissioner Rebecca Kelly Slaughter wrote: “Mounting evidence reveals that algorithmic decisions can produce biased, discriminatory, and unfair outcomes in a variety of high-stakes economic spheres including employment, credit, health care, and housing.”6 She went on to say that, although the FTC has some existing tools that can be used to better regulate AI, “new legislation could help more effectively address the harms generated by AI and algorithmic decision-making.”7
Figuring out how to effectively regulate AI will be challenging. One fundamental problem is that many AI computations are not “explainable”: The algorithm makes a decision, but we don’t know why it made that particular decision. This lack of transparency makes regulating AI far harder than regulating the more explainable and auditable technologies that often informed decision-making in the last century. Regulations aim to prevent AI-powered decisions from producing negative outcomes such as bias and unfairness, but because the AI systems responsible for those decisions are hard to understand and audit, it can be difficult to predict when negative outcomes will occur until after people or institutions have already been affected.
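To make the transparency gap concrete, the sketch below (in Python with scikit-learn, using synthetic data; everything in it is an illustrative assumption rather than a reference to any real system) contrasts a small model whose full decision logic can be printed and audited with a larger ensemble that produces the same kind of prediction without any directly readable rationale.

```python
# Illustrative only: synthetic data, arbitrary model choices.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import GradientBoostingClassifier

# A made-up "applicant" dataset with four anonymous features.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# Auditable: a shallow tree's entire decision logic fits in a few printable rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[f"f{i}" for i in range(4)]))

# Opaque in practice: hundreds of trees vote, and no single readable rule
# explains why a particular applicant was accepted or rejected.
ensemble = GradientBoostingClassifier(n_estimators=300, random_state=0).fit(X, y)
print("ensemble decision for first applicant:", ensemble.predict(X[:1])[0])
```

The point of the contrast is not that simple models are always acceptable; it is that the second kind of system is the one regulators increasingly have to reason about.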
Another potential problem is the quality of the training data. The draft of the European Union’s AI regulation specifies that “training, validation, and testing datasets shall be relevant, representative, free of errors, and complete.” However, at the scale of the data required for machine learning, this standard, especially the stipulation that datasets be “free of errors and complete,” sets an extremely high bar that most companies and use cases may not be able to meet.8
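To see why that bar is so high in practice, consider a rudimentary data-quality audit. The sketch below (in Python with pandas; the column names, reference shares, and toy dataset are assumptions made purely for illustration, not anything drawn from the EU text) checks only the cheapest signals of completeness, errors, and representativeness, and even these routinely fail on real-world data at machine-learning scale.

```python
# Illustrative only: column names, reference shares, and data are made up.
import pandas as pd

def audit_training_data(df, sensitive_col, reference_shares):
    """Report a few cheap completeness, error, and representativeness signals."""
    report = {}

    # "Complete": under the strictest reading, any missing cell is a failure.
    report["missing_cells"] = int(df.isna().sum().sum())

    # "Free of errors": exact duplicates are one easily detectable error class;
    # mislabeled or stale records are far harder to find automatically.
    report["duplicate_rows"] = int(df.duplicated().sum())

    # "Representative": compare group shares against a reference population.
    observed = df[sensitive_col].value_counts(normalize=True)
    report["representation_gap"] = {
        group: round(float(observed.get(group, 0.0)) - share, 3)
        for group, share in reference_shares.items()
    }
    return report

# Tiny made-up example: one missing value and a skewed gender split.
df = pd.DataFrame({
    "gender": ["F", "M", "M", "M", None],
    "label":  [1, 0, 1, 1, 0],
})
print(audit_training_data(df, "gender", {"F": 0.5, "M": 0.5}))
```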
As AI comes into use everywhere, everybody has reason to care about how it is regulated, because those regulations will shape the extent of the good and harm that its use can bring about. The following major stakeholders should be especially interested:
AI tool users. Regulators are likely to crack down on cases where algorithmic bias or other issues harm classes of people. Multiple studies show that AI-encoded bias can discriminate by gender, race, sexuality, wealth or income, and more. The bias usually works to further disadvantage the already disadvantaged. This is because artificial intelligence isn’t actually 100% artificial at all: It needs to be trained on datasets, which can reflect human biases. The result is that AI trained on those datasets doesn’t eliminate human bias, but often amplifies it.
One famous example of dataset-driven bias comes from a company that was trying to hire more women but found that its AI recruiting tool kept favoring men. No matter how hard the company tried to eliminate the bias, it persisted because of the training data, and the company eventually stopped using the AI tool entirely.9
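To make the amplification mechanism concrete, here is a minimal sketch (in Python with scikit-learn; the data, features, and the size of the historical bias are entirely made up, and this is not a reconstruction of the case above) showing how a model trained on historically skewed hiring labels reproduces that skew in its own selection rates.

```python
# Illustrative only: synthetic data and an arbitrary bias term.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)   # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)     # skill is distributed identically in both groups

# Historical hiring decisions favored group A even at equal skill (the "human bias").
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 1, n) > 0.5).astype(int)

# A model trained on these labels learns to reproduce the same pattern.
X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# Selection rate by group: the gap is the bias the model has inherited.
for g, name in [(0, "group A"), (1, "group B")]:
    print(name, "selection rate:", round(float(pred[group == g].mean()), 3))
```

In practice, simply dropping the group column often does not help, because other features can act as proxies for it; this is one reason bias like that in the example above is so hard to engineer away.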
AI regulations will affect different industries, and different functions within them, to different degrees. For instance, AI in human resources, specifically for hiring or performance management, is likely to be profoundly affected.10 There are already multiple cases in which AI-powered decisions about recruitment, hiring, promotion, discipline, termination, and compensation have proved problematic.
Regulators may also be particularly concerned by internet platforms that moderate user-generated content, many of which rely heavily on AI to do so. Moderating millions of pieces of content daily in real time is essentially impossible, or at least unaffordable, without AI. However, a 2020 study claims that algorithmic moderation systems “remain opaque, unaccountable and poorly understood” and “could exacerbate, rather than relieve, many existing problems with content policy as enacted by platforms.”11
From an industry standpoint, the public sector (health, education, government benefits, zoning, public safety, the criminal justice system, and more) can be deeply affected. For example, facial recognition in public spaces for law enforcement and criminal justice is already in wide use, but it is one of the technologies the proposed European Union regulations would ban, with certain exceptions.12 Regulation will also be a big issue for private-sector health care and education, affecting matters such as grades, scholarships, student loans, and disciplinary actions. The financial services industry will likely face substantial implications as well, since it uses AI to inform everything from credit scores, loans, and mortgages to insurance and wealth management.
Industries such as logistics, mining, manufacturing, and agriculture may feel less of an impact. These industries’ AI algorithms can of course have problems, but those problems tend to involve accuracy and errors rather than bias. Although such issues are less apt to lead to direct human harm, they may still have an environmental impact.
AI tool vendors. Dozens of tech companies sell pure-play AI tools or solutions. Some of these include subsets of AI technology likely to be more heavily regulated or banned; some even consist of nothing but those subsets. Dozens more provide broader solutions with AI components or features that could be affected by regulation. Hyperscalers, especially, have reason to watch regulators closely. All of them have AI-as-a-service offerings that could be affected to varying degrees: Regulations could prevent them from selling some services in some geographies, or could even make them liable for their customers’ use of those AI services.
AI users that are also vendors. Many internet platforms and apps are heavy users of AI technologies that they also sell outright, use to execute their business models, or both. Common among these technologies are facial recognition, sentiment detection, and behavior prediction, all of which are potentially contentious AI capabilities.
Regulators and society. Those making the rules face challenges of their own in balancing rapid technological change with a range of stakeholder concerns. They will need global and national policy objectives to be clearly articulated so that they can develop legislation, regulations, and codes of conduct that address those objectives. An agile, improvement-based approach to regulation will likely be more effective than inflexible, rules-based legislation. Finally, although regulators’ goals and society’s goals are linked, they are distinct and sometimes not aligned.