Looking ahead to a trustworthy future
There is no shortage of pontificating and handwringing over the ethics of AI, and views range from a future of abundance to one of dystopia. Often, the matter is reduced to concerns over bias. While bias is a valid issue, it is just one of several dimensions of trust that demand purposeful treatment and effective governance.
Trustworthy AI does not emerge by accident. It takes purposeful attention and effective governance. Indeed, the path from conceiving an AI use case to deploying the model at scale is paved with critical decisions based on careful assessment of impact, value, and risk. Creating and using Trustworthy AI takes more than a discrete tool or a periodic review. It requires a broader governance structure that permeates the entire organization. Taking an end-to-end view, what's needed is an alignment of people, processes, and technologies that together promote effective AI governance, and ultimately, AI solutions we can trust.
When an organization orients its AI initiatives toward an intentional focus on ethics and trust, the reward is often a greater capacity to promote equity, foster transparency, manage safety and security, and, in a structured way, address the ethical dimensions of AI that matter for each use case and deployment. Reaching this future state involves a number of considerations and priorities: mobilizing people for AI governance, enhancing processes and controls, and using technology to bolster trust.
Mobilizing people for AI governance
Across the AI life cycle, there are many critical stakeholders, each bringing a distinct perspective and set of priorities. Whether it is an executive, a plant floor operator, or an IT professional, each stakeholder has a role to play in promoting Trustworthy AI. Some important areas for attention include:
- Roles, responsibilities, and accountabilities – Define who is a stakeholder for AI outcomes and how they are meant to participate in the AI life cycle.
- Education and communication – Develop structured opportunities for the workforce to learn about and understand AI governance.
- Role-specific upskilling – Equip employees to provide insights and guidance on how AI solutions affect business processes and decision-making.
Enhancing processes and controls for trust
Operationalizing Trustworthy AI typically requires creative thinking among business leaders, critical analysis throughout every stage of the AI life cycle, and reliable assurance that the tools and the system around them are meeting the relevant dimensions of trust. Every business is different and faces challenges and priorities unique to its strategy and objectives, which results in different opportunities to successfully leverage AI capabilities. As such, there is no one-size-fits-all framework for effective AI processes. Instead, devising the right processes to govern the enterprise's AI programs involves several key activities:
Define the vision – A catalyzing tactic is bringing the organization's leaders together to develop a holistic, equitable approach to creating and using Trustworthy AI. This is not simply another "business as usual" C-suite meeting. Rather, in a conducive setting with focused goals, leadership can articulate the AI's purpose and assess whether and how it is delivering its intended outcomes.
Identify the risks – Risk analysis is familiar territory for business leaders, and the same principles apply to AI. Risk management strategies and ongoing model risk assessments can help the enterprise prepare for and guard against external factors that could negatively affect the model and the business strategy.
Identify the gaps – To know how to make process changes, the business should understand where gaps exist in AI risk controls. Building on the risk analysis, organizations can implement or adapt processes and controls to support AI governance. Importantly, AI governance encompasses more than the coding of the model. It also includes the broader infrastructure necessary for successful implementation and oversight of AI models.
Validate performance – Business leaders need confidence that AI models perform as expected and are in line with business strategy and regulatory compliance. This requires increased transparency, which can be achieved through a rigorous validation process. Validation includes model testing, assessing whether documentation adequately describes the theory and design of the AI algorithm, and ongoing monitoring.
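To make the monitoring component of validation concrete, here is a minimal sketch of what an ongoing performance check might look like in code. It assumes a scikit-learn-style classifier and an illustrative accuracy metric; the function name, report fields, and tolerance value are hypothetical, not part of any specific validation framework.

```python
# Illustrative sketch only: one way an ongoing-monitoring control might
# verify that a deployed model still performs as it did at validation.
# The scikit-learn-style model API, the accuracy metric, and the
# tolerance value are all assumptions for the example.
from sklearn.metrics import accuracy_score

def check_model_health(model, X_recent, y_recent,
                       baseline_accuracy, drift_tolerance=0.05):
    """Compare current accuracy on recent data against the validated baseline.

    Returns a small report that a governance process could log as
    evidence and escalate when the tolerance is breached.
    """
    current_accuracy = accuracy_score(y_recent, model.predict(X_recent))
    degraded = (baseline_accuracy - current_accuracy) > drift_tolerance
    return {
        "baseline_accuracy": baseline_accuracy,
        "current_accuracy": current_accuracy,
        "within_tolerance": not degraded,
        "action": "escalate for revalidation" if degraded else "none",
    }
```

In practice, a check like this would run on a schedule against recent production data, with its output logged as evidence for the validation and monitoring controls described above.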
While much of the work of transforming the workforce and processes to foster Trustworthy AI is rooted in human planning and decision-making, there is clearly a role for technology.
Bolstering trust with technology
Continuous, effective oversight should include monitoring AI tools with equally innovative technology solutions. This allows the organization to evaluate whether an AI tool is performing as intended and in line with the relevant dimensions of trust. Executives need this capability for real-world AI assessments so they can measure and evidence performance, value, and trustworthiness, which in turn yields outputs and metrics that can be governed.
There is a pronounced challenge when working with “black box” AI, whose inner workings may defy transparency and explainability. Building a model that is intrinsically transparent may be feasible for some use cases, but it may also lead to less accurate outcomes given a trade-off between accuracy and transparency. One approach to navigating this balance is selecting a technology solution that pairs with AI tools to interrogate the model’s performance and deliver an assessment and validation. In addition to model evaluation technologies, enterprises may also look to solutions in other technology areas, including AI data management (e.g., synthetic data generation), privacy, cybersecurity, regulatory compliance, risk, and post-deployment monitoring.
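As one concrete illustration of interrogating a black-box model, the sketch below applies permutation importance, a common model-agnostic explainability technique, to a trained classifier. The dataset, model choice, and scikit-learn tooling are assumptions for the example, not a depiction of any particular commercial solution.

```python
# Illustrative sketch: interrogating a black-box model post hoc with
# permutation importance, a model-agnostic explainability technique.
# The dataset and model here are assumptions chosen for the example.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test score:
# features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, mean_imp in sorted(zip(X.columns, result.importances_mean),
                             key=lambda t: -t[1])[:5]:
    print(f"{name}: {mean_imp:.3f}")
```

An assessment like this does not open the black box itself, but it gives reviewers evidence about which inputs drive the model's behavior, which can feed the validation and governance activities described earlier.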
For all enterprises, applying validation technology can improve AI assessments. Importantly, the product of technology-enabled assessments is not just improvements to the model itself. By deeply understanding how a model performs, an organization can be more effective with the other components of AI governance.
Board consideration
Like any risk topic, the ethics of AI and its impact on equity are areas of particular interest to those charged with governance. Some implications and outcomes, such as reputational risk, require board consideration and attention. When delving into the more granular details of governance, however, points of concern may be better weighed and addressed by a risk committee or an audit committee. Of particular interest to audit committees in the coming years will likely be the consideration of AI relative to internal controls over financial reporting. This is especially the case as the technology moves from automation of financial reporting tasks (e.g., data population into forms) to more advanced areas, such as decision-making.
Whether AI is built in-house or by a third party, it is important for the audit committee to understand whether it is being used in the Internal Control over Financial Reporting (ICFR) process and, if so, the degree of authority delegated to those tools within the overall ICFR process. For example, if AI is highlighting key terms in a contract but the accountant continues to review the entire contract, this may not require significant oversight. However, if the AI is making authorization decisions for transactions (e.g., investment, payment, etc.), this may require a more in-depth understanding of the decision rights and management's monitoring controls. The reality is that regardless of whether AI is used in ICFR specifically, the governance considerations are similar with respect to trust, brand, and equity, all of which are on the agenda for public company boards.
Increasingly, audit committees should be informed of the implications of new technologies through educational sessions and regular updates. From there, the audit committee is better equipped to ask the right questions.
Looking ahead to a trustworthy future
These approaches for orienting people, processes, and technologies are demonstrated tactics for operationalizing Trustworthy AI. Yet, knowing a solution is often simpler than implementing it, and when confronting an evolving technology with enormous potential for business benefit, enterprises can benefit from guidance and support. A collaborator in Trustworthy AI development and application can offer an independent perspective and critical knowledge.
We know business leaders are seeking 20/20 vision on potential high-risk exposure areas, advice and recommendations on AI controls, and guidance throughout the AI life cycle in the areas of feasibility, discovery, modeling, acceptance, and integration. Deloitte's Trustworthy AI™ framework provides a cross-functional approach to identifying the key decisions and methods needed to put appropriate procedures in place.
Deloitte uses a three-pronged approach to enabling AI governance:
- Evaluating roles and responsibilities, implementing change management, and conducting training.
- Offering services around AI strategy and governance, risk management, and AI assessments.
- Using tools to test and monitor your AI models.
Get in touch
Brian Cassidy
US Audit & Assurance Trustworthy AI leader
Partner | Deloitte & Touche LLP
Ryan Hittner
Risk & Financial Advisory
Managing Director | Deloitte & Touche LLP
Oz Karan
US Risk & Financial Advisory Trustworthy AI leader
Partner | Deloitte & Touche LLP