Impactful innovations generally require some oversight to be effective. Just as artificial intelligence is entering the public consciousness as something requiring regulation, car design and safety underwent their own regulatory transformation. In the years since the first Ford Model T, governments developed guidelines like traffic laws and licensing requirements, manufacturers shifted focus to safety and reliability, and designers developed new safety features such as rear-view mirrors and seat belts. None of these actions is sufficient on its own to make driving safe—they need to work together. Similarly, governing AI for government requires an adaptable approach that integrates common governance practices into one cohesive strategy.
In the past five years, the federal government has promoted trustworthy AI1—that is, AI systems that are safe, secure, reliable, and transparent—and set forth guiding principles, such as those outlined in the Office of Management and Budget’s (OMB) Memorandum M-21-06.2 Focus is now turning to implementation. Nearly half of all enterprise leaders surveyed say they are investing more in responsible AI in 2024 than ever before.3 These investments are coinciding with increased legislative and executive actions, such as the White House Executive Order 14110 on “safe, secure, and trustworthy development and use of AI,” which prioritizes the implementation of trustworthy AI practices across all American government and commercial enterprises.4
For government leaders tasked with implementing responsible and effective AI solutions, prioritizing a cohesive AI governance strategy that works with their people’s existing governance tendencies is likely critical to success. Currently, three personas represent the common approaches to effective AI governance: Guides, who focus on enterprise policymaking like Executive Order 14110; Guards, who focus on standardized quality assurance checkpoints; and Gadgeteers, who focus on tooling and feedback mechanisms.
While these approaches can provide a strong foundation, the exponential pace of AI innovation calls for dynamic AI governance that can integrate all three approaches and adapt as AI continues to evolve.
Governance involves the management of business processes to promote efficacy and efficiency while mitigating risks and liabilities. Cars, computers, mobile phones, and other transformative innovations of the 20th and 21st centuries spawned their own governance bodies, like the Federal Communications Commission and the National Highway Traffic Safety Administration. So too does AI call for its own governance approaches. But given the fast-moving nature of AI, these approaches should be even more dynamic to accommodate AI’s unique challenges, such as:
Government agencies typically follow one of three approaches to AI governance, each of which exemplifies a persona.8 Success often comes from fusing these three AI governance personas, or approaches.
Guides focus on establishing clear policies and guidelines to promote safe, secure, and transparent outcomes. Guards focus on establishing standardized checklists and stage gate reviews during the AI product life cycle to facilitate quality assurance. Gadgeteers focus on embedding tools in the AI product life cycle to help evaluate the trustworthiness of model outputs. Although agencies may consider all three approaches, they often frontload effort in one area. Consider taking a dynamic approach to AI governance that integrates all three approaches and adapts over time to help achieve trustworthy AI goals.
Guides focus on establishing enterprise policies to document point-in-time standards, with the expectation that others in the organization will follow them. This approach typically involves policy-setting at different levels of governance, such as the federal, state, or enterprise levels. Examples include:
Guides can lay an important foundation, but the policies they set may be resource-intensive and may lead to ineffective operational processes. On their own and without supplementary activities, Guide-driven approaches may fall short because:
Guards focus on managing AI risks through quality assurance checkpoints during the AI product life cycle. Similar to the Federal Information Security Modernization Act requirements for information systems, Guards typically establish an approach with two components spanning AI development, operations, and maintenance:
Standardized checkpoints and templatized procurement requirements can improve consistency in AI governance by holding both internal and external parties accountable to baseline expectations. Taken on its own, however, the Guards’ approach often faces three primary challenges:
Gadgeteers focus on platforms and tools that automate the governance and management of AI products. This approach typically involves the following components:
Tools can be an effective way to track and improve model performance, but the Gadgeteer-focused approach can face two challenges when implemented without complementary activities:
Guides, Guards, and Gadgeteers address important aspects of AI governance, but none on their own are sufficient to achieve trustworthy AI goals. Imagine you are embarking on a backpacking expedition. A Guide may help you set a path but may not help you re-route if weather affects your course. A Guard may provide checklists to help you pack, set up a campsite, and complete other milestones on your journey but may not help you face unforeseen obstacles along the way. A Gadgeteer may help you choose durable tents, hiking boots, and other gear, but may not help you evaluate the expedition itself.
To achieve trustworthy AI goals, agencies should integrate all three approaches and incorporate mechanisms for short-term and long-term adaptations in AI governance. Together, the five key features of dynamic AI governance can help form a well-rounded and adaptable governance strategy.
As AI adoption spreads and AI technology grows in sophistication, the need for effective governance is expected to increase, and, in turn, so will the need for people who can oversee the integration of dynamic AI governance. The most successful agencies will likely be those with individuals who combine the instincts of Guides, the sensibilities of Guards, and the savviness of Gadgeteers to drive dynamic AI governance. To do so, agencies should bring together individuals who understand the potential benefits of AI technology, can identify current and future challenges with AI governance, and are intimately familiar with the agency’s mission. These cross-functional teams can strategically link AI governance activities to the agency’s mission, vision, and desired outcomes.
In the coming years, the focus on building and maintaining responsible and effective AI solutions will likely grow even more systematic. Just as safety features in cars work together and evolve over time to effectively protect drivers and passengers, government agencies should integrate and continuously adapt AI governance approaches to proactively instill trustworthy AI principles across the enterprise. By leaving room for future changes, this dynamic approach to AI governance can set the foundation for truly trustworthy AI practices.