Legislation is an inherently human endeavor. But just as organizations across industries are unlocking new capabilities and efficiencies through artificial intelligence (AI), governments can also apply AI to strengthen their legislative processes.
For the past five years, we’ve studied the potential impact of AI on government. We’ve looked at everything from how much time AI could save workers in each US federal agency to the rate of AI adoption in US federal, state, and local governments.2 While AI can assist many parts of the legislative process—from AI assistants answering members’ questions about legislation to natural language processing analyzing the US Code for contradictions—two key applications stand out.
AI as a microscope: Assess the impact of existing legislation
The same broad scope and volume of data that make assessing legislation a difficult problem for humans make it an ideal challenge for AI.
Machine learning (ML) models can find patterns in inputs and outputs without analysts having to specify ahead of time how the two are linked. Just as a microscope can examine a leaf to find structures and patterns invisible to the human eye, ML models can find patterns in the outcomes of programs that may be invisible to humans.
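To make that idea concrete, here is a minimal sketch of what such pattern-finding might look like in practice. The file, column names, and outcome measure are all hypothetical placeholders, not data from any real program:

```python
# A minimal "microscope" sketch: learn input-outcome relationships from
# program records without specifying a functional form up front.
# The file and column names below are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("program_outcomes.csv")  # assumed program-level records
X = df[["funding_per_capita", "caseworker_visits", "enrollment_rate"]]
y = df["outcome_score"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Feature importances hint at which program inputs drive the outcome;
# humans still decide what the pattern means for policy.
for name, score in zip(X.columns, model.feature_importances_):
    print(f"{name}: {score:.3f}")
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```

A tree ensemble is used here simply because it imposes no particular shape on the relationships it learns; any flexible learner would serve the same illustrative purpose.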
There are already examples of ML models examining public policy in exactly this way. For several years, researchers have been using ML to understand the risk factors for infant mortality. With the data available in electronic health records, many of these models can predict the likelihood of complications in childbirth with 95% or greater accuracy.3 Researchers from RAND then took those models a step further. They used ML on data from Allegheny County, Pennsylvania, to evaluate which interventions had the biggest impact on reducing infant mortality.4
The strength of ML models is that they can say not only what outcomes each intervention is likely to produce, but also among which groups those outcomes are likely to occur. These findings can then guide policy recommendations. For example, researchers found that mothers who used social services were less likely to use prenatal medical services and vice versa, pointing to a need for policies that build awareness of the full range of services that support child health.
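A sketch of that kind of group-level question appears below. The data file and column names are illustrative stand-ins, not figures from the RAND study:

```python
# A sketch of asking "among which groups do outcomes occur?"
# The file and column names are illustrative, not from the RAND study.
import pandas as pd

df = pd.read_csv("intervention_results.csv")  # hypothetical evaluation data

# Average outcome by service-usage group, with group sizes for context.
by_group = (
    df.groupby(["used_social_services", "used_prenatal_care"])["outcome_score"]
      .agg(["mean", "count"])
)
print(by_group)
# A systematic gap between groups (e.g., users of one service rarely
# using the other) is the kind of pattern humans then weigh for policy.
```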
While anything that improves the lives of infants is clearly a good policy outcome, other issues could be less clear-cut. ML models can uncover hidden outcomes of policies or programs, but only humans can decide whether those outcomes qualify as successes or failures. In the spirit of human-machine teaming, once the ML model has uncovered the hidden outcomes of a program or piece of legislation, members or staff can then look at those outcomes and determine: 1) whether they are positive or negative, and 2) whether the overall benefits are worth the cost and effort.
AI as a simulator: Test the potential impacts of future legislation
The ability of ML models to predict outcomes of policies raises the questions: What if we did something differently? How would things change? In essence, can AI be used as a “simulator” for problems? Think of Apollo 13. After an explosion, the crew had to figure out new interventions, new ways of doing things. They used the ground-based simulator to try procedure after procedure until they found one that worked. Imagine having an AI simulation run through hundreds of thousands of possible interventions in minutes, instead of locking astronaut Ken Mattingly in a dark box for days.
Unlike ML models, which are trained on historical data and project past trends into the future, these simulations are designed to capture the dynamics of complex systems like the economy or the health care system. Simulations are based on models of how a portion of a complex system operates. For example, one form of simulation well-suited to legislative tasks is the agent-based model, which replicates how individual actors would respond to and interact in different situations. These models are good at capturing the “emergent properties” of complex systems, where individual decisions add up in unusual ways. Think about flocking birds: Each bird just tries to stay close to the bird next to it, but together, they make intricate patterns in the sky as they avoid obstacles and predators. The big human systems that legislatures are often interested in, such as health care, the economy, or national defense, exhibit similar traits.5
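The flocking example can itself be written as a toy agent-based model. The sketch below is a bare-bones version of the classic “boids” rules; the agent count, neighborhood radii, and weights are arbitrary choices for illustration, not parameters from any policy model:

```python
# A toy "boids" agent-based model: each agent follows only local rules
# (cohesion, alignment, separation), yet flock-like order emerges.
# All radii and weights are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
N, STEPS = 50, 200
pos = rng.uniform(0, 100, (N, 2))   # positions in a 2D world
vel = rng.uniform(-1, 1, (N, 2))    # initial headings

for _ in range(STEPS):
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d > 0) & (d < 15)                    # only nearby agents matter
        if not nbrs.any():
            continue
        cohesion = pos[nbrs].mean(axis=0) - pos[i]   # drift toward neighbors
        alignment = vel[nbrs].mean(axis=0) - vel[i]  # match their heading
        close = (d > 0) & (d < 4)
        separation = pos[i] - pos[close].mean(axis=0) if close.any() else 0.0
        vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.05 * separation
    speed = np.clip(np.linalg.norm(vel, axis=1, keepdims=True), 1e-9, None)
    vel = vel / speed * np.clip(speed, None, 2.0)    # cap speed
    pos += vel

# Heading coherence near 1.0 means the flock aligned: an emergent,
# group-level property that no individual rule mentions.
headings = vel / np.clip(np.linalg.norm(vel, axis=1, keepdims=True), 1e-9, None)
print("heading coherence:", round(float(np.linalg.norm(headings.mean(axis=0))), 3))
```

The point is the structure, not the numbers: simple local rules, simulated forward, produce a group-level pattern that could not be read off any single agent.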
Researchers in Europe have created an agent-based model designed to help policymakers understand the likely impacts of different interventions on the Irish economy. The Innovation Policy Simulation for the Smart Economy draws on patents, knowledge flows, and other economic data to model how individual companies and investors are likely to react to different policies.6 For example, researchers can examine whether different funding methods or tax incentives would help support the creation of new small businesses in a specific city or high-tech industry. Such models could be of great benefit as a government examines which policies could help spur domestic semiconductor manufacturing or other advanced technologies.
In 1964, economist George Stigler said, “We do not know the relationship between the public policies we adopt and the effects these policies were designed to achieve.”7 ML models can help uncover just this relationship. But it is only helpful in making future policies if we assume the future will be like the past. So when we find ourselves in an era we know is different from the past (during a global pandemic, for example) or when we see a model based on historical assumptions drifting away from current data (as during the widespread adoption of a new technology), simulations become critical tools for understanding the likely outcomes of new public policies.
As the Apollo 13 simulator helped astronauts, AI simulations can help policymakers to:
- Uncover the drivers of a particular problem, whether the amperage limits on the Apollo 13 lunar module or the causes of regulatory noncompliance.8
- Understand which interventions could be effective, whether sequencing the startup of Apollo 13’s systems or organizing national airspace to allow for more on-time flights.9
- Understand the trade space of a given issue. Of all the effective interventions, how much is required, at what cost, and to achieve what outcome?10 (A toy sweep of such a trade space is sketched below.)
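The sketch below illustrates that last point with a deliberately simple stand-in for a full simulation: a hypothetical intervention whose returns diminish as spending grows. The response curve and dollar figures are invented for illustration:

```python
# A minimal trade-space sweep over a hypothetical intervention.
# The diminishing-returns curve and dollar figures are invented
# stand-ins for a real simulation model.
import math

def simulate_outcome(spend_millions: float) -> float:
    """Toy stand-in for a full simulation: outcomes saturate as spend grows."""
    return 100 * (1 - math.exp(-spend_millions / 40))

for spend in range(0, 101, 20):
    print(f"spend ${spend:>3}M -> outcome index {simulate_outcome(spend):5.1f}")

# The sweep maps cost to outcome; deciding which point is "worth it"
# remains a human value judgement.
```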
Yet even these complex AI simulations can’t make value judgements. They can’t determine the best option. They can only assess the optimal choice given the values and assumptions that humans specified at the start. However, by forcing the human side of the human-machine team to be specific about those values and assumptions, AI simulations may hold the potential to transform legislative processes.
For example, modeling the impact of different policies on economic development can help validate the assumptions that undergird our positions. We may find that the model assumes government research and development (R&D) spending will crowd out private R&D in that industry. But this assumption can be tested, providing grounds for more constructive debate. Similarly, simulations can help uncover human values that may not be well-articulated in a policy debate. For example, running a simulation to optimize economic growth may yield undesirable consequences, leading members to realize that, while economic growth is a goal, it may only be desirable when it improves living conditions for the public in a particular area. Testing assumptions and uncovering hidden values can help provide a firmer foundation for data-driven policy debates.
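Testing such an assumption can be as simple as varying it and watching whether the conclusion flips. The sketch below does exactly that for a hypothetical crowd-out rate; all coefficients and figures are illustrative, not empirical estimates:

```python
# A sketch of testing the crowd-out assumption explicitly: vary the
# assumed rate at which public R&D displaces private R&D and check
# whether the policy conclusion changes. All numbers are illustrative.

PUBLIC_RD = 10.0  # hypothetical public R&D injection, in $B

for crowd_out in (0.0, 0.25, 0.5, 0.75, 1.0):
    net_effect = PUBLIC_RD * (1 - crowd_out)  # assumed displacement model
    verdict = "net gain" if net_effect > 0 else "fully crowded out"
    print(f"crowd-out {crowd_out:.2f}: net R&D change {net_effect:+.1f}B ({verdict})")
```

Disagreements then become arguments about a named parameter (the crowd-out rate) rather than about unstated premises.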
In fact, there is evidence that experimenting with models in itself may help drive consensus.11 As members examine the values, potential interventions, and trade space of a topic, they’re likely to see more of the factors that they agree on, rather than the few on which they don’t. This certainly won’t bridge all ideological divides, but it can offer fertile ground for productive debate on evidence-based policies.
Potential challenges
While human-machine teaming has the potential to bring transformational benefits to the legislative process, it’s not without risk—concerns over data quality, security, and the human side of the partnership stand out. However, by focusing on the tasks we want those teams to perform, we can control for these risks while still realizing the transformational benefits.
Data and model governance
AI’s outputs are only as good as the reliability of the model and the accuracy of the data. If the data isn’t accurate and fit for purpose, or the model isn’t robust and explainable, the result can be significant privacy, security, or fairness problems in an AI tool. There are already several frameworks for understanding and managing the risks of using AI in government. The National Institute of Standards and Technology and US Government Accountability Office have issued several important reports on the topic, and organizations like the Department of Defense are already operationalizing much of that guidance.12
These guidelines are by no means one-size-fits-all. The unique tasks that any given model performs can lead to different challenges that require different controls. For example, the central role played by historical data in the “microscope-like” use of ML to assess existing programs means that those ML models need clean, accurate data that’s matched to their task. Open public data can help ensure the availability of good data.13 Similarly, tagging data sources with the use cases for which they are suitable can help avoid instances where data gathered in one context is used in another. For example, data that is representative with respect to race and gender may not be representative across income levels, so it shouldn’t be used in models where income is an important parameter.14
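Such tagging can be enforced in code as well as in policy. The sketch below shows one minimal way to do it; the class, dataset name, and use-case labels are all hypothetical:

```python
# A sketch of tagging data sources with approved use cases so data
# gathered in one context isn't silently reused in another.
# The dataset name and use-case labels are hypothetical.
from dataclasses import dataclass, field

@dataclass
class DataSource:
    name: str
    approved_uses: set = field(default_factory=set)

    def check(self, use_case: str) -> None:
        """Raise before a model consumes data outside its validated scope."""
        if use_case not in self.approved_uses:
            raise ValueError(f"{self.name} is not validated for '{use_case}'")

survey = DataSource("community_health_survey",
                    approved_uses={"race_gender_analysis"})

survey.check("race_gender_analysis")       # within scope: passes silently
try:
    survey.check("income_level_analysis")  # out of scope: blocked
except ValueError as err:
    print("blocked:", err)
```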
While ML models are based on historical data, AI simulations are primarily based on assumptions about how factors relate to each other—whether that’s how individuals will react to a given choice or how smoking rates vary with rates of physical activity.15 These assumptions can and should be based on real data, but they’re still assumptions, and there’s never a guarantee that the future will look like the past. Therefore, when we see models based on historical assumptions drifting away from expectations, it can be a sign that those assumptions need to be adjusted to better match a changing world. For example, many mass transit models assumed only a few major modes of transit, such as car, bus, train, or bike. The sudden, massive growth of e-scooter ridership in 2018 and 2019 would have thrown these models off, forcing modelers to reevaluate assumptions about how people get around.16
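A basic drift check makes that warning sign mechanical: compare the shares the model assumes with fresh observations and flag any gap beyond a tolerance. The mode shares and threshold below are invented for illustration:

```python
# A sketch of a drift check: compare a model's assumed transit mode
# shares against fresh observations and flag gaps beyond a tolerance.
# All shares and the threshold are invented for illustration.

assumed  = {"car": 0.60, "bus": 0.20, "train": 0.15, "bike": 0.05, "scooter": 0.00}
observed = {"car": 0.55, "bus": 0.18, "train": 0.14, "bike": 0.05, "scooter": 0.08}

TOLERANCE = 0.05
for mode, share in assumed.items():
    gap = abs(observed[mode] - share)
    if gap > TOLERANCE:
        print(f"drift on '{mode}': assumed {share:.2f}, observed "
              f"{observed[mode]:.2f} -- revisit this assumption")
```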
As a result, in the case of AI simulations, special attention needs to be paid to the transparency of assumptions. Model governance procedures can help make those assumptions explicit so that human team members can understand how the simulation reached its conclusions.
Security
Whether used to assess or shape legislation, AI tools need protection beyond typical cybersecurity considerations. The potential for adversaries to manipulate the outcomes of these AI models to tip policymaking to their advantage calls for careful safeguards.17
The centrality of data in “microscope-like” ML models means that they can be particularly vulnerable to the poisoning of training data—that is, tampering with the data used to train ML models with the aim of influencing the results.18 Therefore, it’s critical to control both access to that data and its quality. AI “simulator-like” models, on the other hand, need safeguards placed on the variables, assumptions, and even outputs of the models to avoid manipulation.
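Two basic safeguards can be sketched in a few lines: verifying training files against hashes recorded when the data was first approved, and screening records for statistical outliers. The file name and threshold below are placeholders:

```python
# A sketch of two basic anti-poisoning safeguards: file integrity checks
# against hashes recorded at ingestion, and a crude outlier screen.
# The file name and z-score threshold are placeholders.
import hashlib
import numpy as np

def file_hash(path: str) -> str:
    """SHA-256 of a data file, recorded when the data is first approved."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verify(path: str, manifest: dict) -> bool:
    """Re-check a file against the ingestion-time manifest before training."""
    return file_hash(path) == manifest.get(path)

def screen_outliers(values: np.ndarray, z_max: float = 4.0) -> np.ndarray:
    """Drop records far outside the distribution -- a crude poisoning screen."""
    z = np.abs((values - values.mean()) / (values.std() + 1e-12))
    return values[z < z_max]

# Usage (file name is a placeholder):
# manifest = {"train_2023.csv": file_hash("train_2023.csv")}  # at ingestion
# assert verify("train_2023.csv", manifest)                   # before training
```

Neither step stops a determined adversary on its own, but together they raise the cost of tampering and leave an audit trail.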
New processes, new skills, new training
The AI portion of the human-machine team isn’t the only aspect of the partnership that needs attention. Introducing new tools to the legislative process will require human team members to learn new skills, adapt to new processes, and work together in new ways.
For example, as “microscope-like” ML models uncover new outcomes of public policies, policymakers will quickly find themselves consuming new types of data beyond bar charts and budget trends. New forms of information such as geospatial data, statistical relationships, and more are likely to become important for decision-making. To ensure that these new sources of information are easily consumable, legislators and their staff may need new data visualization tools. Similarly, staff members will likely need more data science skills to analyze, create, and present the visualizations.
The “simulator-like” AI models may bring even more radical changes. Instead of an “analyze, then present” mode of delivering information to decision-makers, these models allow for real-time decision support, where policymakers can sit with staff to adjust models and examine conclusions as new data comes in. This shift has already taken place in industries such as auto racing, where Formula One race teams adjust strategy models in real time based on thousands of data points collected as cars race around the track.19 The shift to this mode of decision support can bring significant changes to how staffers spend their time. When Deloitte applied this concept to prototypical analysts in the intelligence community, for example, our model suggested that analysts could spend up to 39% more time advising decision-makers with the adoption of AI at scale.20
The way forward
Implementing AI in the legislative process can seem like a seismic transformation, but the shift is possible with the right commitment and investment. The experiences of other industries and even other parts of government already using AI models highlight that while the change is eminently possible, it will take considerable leadership—not just to put the technologies in place, but also to incorporate the training, education, and business practices needed to make them work.
Lessons learned from other industries can help policymakers get started on their AI journey:
Don’t try to model everything
The scale of the issues that legislatures tackle is often tremendous and trying to model every aspect of each issue is practically impossible. The Formula One example shows that even relatively simple models can quickly get out of hand: For a single race, there are more race outcomes possible than there are electrons in the universe.21 This is where the human part of the human-machine team can help. Rather than trying to model everything, using human value judgements prior to modeling can help identify the core aspects of the problem that need to be modeled. In short, it all starts with deciding what the problem is and understanding what’s important. Then the technology can get to work.
Make a platform, not a solution
As the controller of the nation’s finances, the government also has a financial duty to the public. How can legislatures get the most out of AI without having to build a new tool from the ground up for every new policy debate? The answer is to build an AI-enabled platform rather than a single-point solution. This is the approach Singapore took with its Virtual Singapore 3D model of the city-state. Virtual Singapore not only models the 3D layout of the city but also hosts all manner of other data sources, such as census and geospatial data.22 That way, when a new problem emerges, developers can simply create a new app within Virtual Singapore to run simulations about the new issue. Such an approach would allow legislatures to tap into AI in a way that’s cost-effective, efficient, and able to evolve as technology changes.
Invest in the human dimension
Finally, the human element of the human-machine team is critical to the long-term success of digital transformations. AI is a powerful tool. It can run thousands of extremely precise calculations on mountains of data but, importantly for the purposes of legislating, AI cannot make value judgements.23 AI can calculate the fastest, cheapest, or largest solution to a problem. But it cannot tell you if that solution is good or bad, right or wrong, desirable or undesirable. So, the human element will always be critical to legislative decision-making. As a result, leaders should pay attention to the new tasks that may take up more time, new skills that may require difficult retraining, and even new career paths that may change employees’ life goals. Taking care of the people will help take care of the technology.
AI is a powerful tool for the assessments and simulations that legislatures around the world need in their legislative processes. Pairing AI with the right people and the right processes can help provide a common foundation for debate, encourage consensus, and deliver meaningful results for the public.