Understanding AI in financial services from a risk perspective | Innovation | Deloitte Netherlands

Article

Understanding AI in financial services from a risk perspective

Artificial intelligence (AI) solutions can give financial services firms a competitive advantage, but also create new and unforeseen risks. The Deloitte AI Risk Management Framework can help firms to identify and manage AI-related risks and innovate with confidence.

AI will increasingly become a core component of many financial services firms’ strategies. AI solutions can drive operational and cost efficiencies, deliver better customer service and help firms gain a competitive advantage. Overall, however, the adoption of AI in financial services is still in its early stages: firms are still learning about the technology and which use cases could deliver the most value for them.

An insufficient understanding of the risks inherent in AI, as well as a firm’s culture, can act as a barrier to widespread adoption of AI. EU and international regulators are increasingly mindful of the potential risks and unintended consequences of the use of AI by regulated firms. Moreover, in the past few years the financial sector has incurred a significant number of financial and other penalties in relation to the mistreatment of customers and market misconduct. As a result, financial services firms have been understandably cautious about adopting AI solutions.


Effective risk management

The adoption of AI requires firms to go through a learning journey. Such a journey, however, is not about avoiding all AI-related risks, but about developing processes and tools to identify and manage these risks in a timely and effective manner. Effective risk management, far from being an inhibitor of innovation, is in fact essential to a firm’s successful adoption of AI.

To start implementing AI solutions, firms do not need to develop an entirely new risk strategy. Instead, they can review and adapt their existing Risk Management Framework (RMF) to take into account AI-specific considerations. Existing risk appetite statements will also need to be reviewed, and a number of new components, such as a fairness policy, may need to be developed. This will ensure their RMFs remain fit for purpose and will leave businesses confident that AI-related risks can be effectively identified and managed.

The ability of AI to learn continuously from new data, and to make decisions that are driven by complex statistical methods rather than clear and predefined rules, can make it difficult for firms to understand the decision drivers that underpin the final output. AI solutions can make auditability and traceability challenging, and the speed at which they evolve can result in errors manifesting on a large scale, in a very short period. Firms will therefore need to review and update their risk practices to manage risks through the various stages in the RMF lifecycle (identify-assess-control-monitor). The continuously evolving nature of AI solutions will require some of these activities to happen at shorter and more frequent intervals.
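To make the “monitor” stage of that lifecycle concrete, the sketch below shows one hypothetical control: a scheduled drift check that compares a model’s recent output scores against a reference window and escalates when the shift exceeds a threshold. The metric used here (a population stability index), the sample data and the 0.25 cut-off are all illustrative assumptions, not part of the Deloitte framework.

```python
import math

def population_stability_index(expected, actual, bins=4):
    """PSI between a reference sample and a recent sample of model scores.

    Larger values mean a bigger distribution shift; an often-quoted
    rule of thumb treats a PSI above 0.25 as a material change.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def share(sample, i):
        left = lo + i * width
        right = lo + (i + 1) * width
        if i == bins - 1:  # last bin is closed on the right
            in_bin = sum(left <= x for x in sample)
        else:
            in_bin = sum(left <= x < right for x in sample)
        return max(in_bin / len(sample), 1e-6)  # avoid log(0)

    return sum(
        (share(actual, i) - share(expected, i))
        * math.log(share(actual, i) / share(expected, i))
        for i in range(bins)
    )

reference = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]   # e.g. last quarter's scores
recent = [0.55, 0.6, 0.65, 0.7, 0.7, 0.75, 0.75, 0.8]  # e.g. this week's scores
if population_stability_index(reference, recent) > 0.25:  # illustrative cut-off
    print("Drift alert: escalate the model for review")
```

Because AI models can degrade quickly, a check like this would run at the shorter, more frequent intervals the framework calls for, rather than as an annual model review.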


A comprehensive view

The Deloitte AI Risk Management Framework provides a mechanism for identifying and managing AI-related risks and controls. The framework covers a wide variety of risks related to AI solutions, together with the key considerations for controlling these risks. For instance, a familiar AI model risk is bias. Inherent bias in input data may result in inefficient or unfair outcomes, and dependence on a continuously evolving data set makes it harder to identify inherent bias in the model. Avoiding bias should therefore be a key consideration for data scientists from the start.
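As one illustration of the kind of control a data science team might put in place, the sketch below computes a simple demographic parity gap on model decisions and flags it against a tolerance. The group labels, sample data and 0.2 threshold are hypothetical; a real fairness policy would draw on richer metrics and legal input.

```python
# Illustrative fairness check: demographic parity gap on model outcomes.

def demographic_parity_gap(decisions, groups):
    """Return the largest difference in approval rate between any two groups.

    decisions: list of 0/1 model outcomes (1 = approved)
    groups:    list of group labels, same length as decisions
    """
    rates = {}
    for d, g in zip(decisions, groups):
        n, s = rates.get(g, (0, 0))
        rates[g] = (n + 1, s + d)
    approval = {g: s / n for g, (n, s) in rates.items()}
    return max(approval.values()) - min(approval.values())

decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)  # 0.75 - 0.25 = 0.5
TOLERANCE = 0.2  # hypothetical threshold from a firm's fairness policy
if gap > TOLERANCE:
    print(f"Fairness alert: approval-rate gap {gap:.2f} exceeds {TOLERANCE}")
```

Because the underlying data set keeps evolving, a check like this is only useful if it is re-run as the model retrains, not just at initial validation.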

The Deloitte AI Risk Model discusses all kinds of AI-related risks and how to deal with them across the organisation – not only at the level of Models, but also at the levels of Technology (Cyber Security, Change Management and IT Operations), Regulatory Affairs (Data Protection and Compliance), Conduct (Culture and Product Innovation), People (Roles and Responsibilities, Recruitment and Skills), Market and Suppliers. Taking a comprehensive view of AI risk management will help firms avoid unexpected problems.

Examples of key considerations for AI solutions relative to other technologies

Learning process

Developing a risk perspective on AI is a two-way learning process. The board, senior management teams and business and control functions will need to increase their understanding of AI, while AI specialists will benefit from an understanding of the risk and regulatory perspectives. Financial services firms that build such cross-functional teams and incentivise them to collaborate in this way will be better able to exploit the benefits of AI while managing its risks.

Moreover, adopting and advancing AI both require an organisation and the people who work in it to embrace a more scientific mind-set. This means being comfortable with a trial-and-error journey to the final product, accepting risks and tests that fail, and continuously testing the feasibility of the product. This mental shift is not solely for heads of business or functions, but is relevant to all areas of the organisation, including the board and other functions such as risk and compliance, HR and IT.

Lastly, firms should recognise that the challenges of regulating AI are not unique to financial services. It is important for both the industry and regulators to work together and contribute to the cross-border and cross-sectoral debate about the long-term societal and ethical implications of widespread adoption of AI, and what the appropriate policy response should be.

*) To learn more about AI and risk management in financial services, read the full report: “AI and risk management: Innovating with confidence”.

Authors


Tom Bigham
Partner, Risk Advisory Technology and Digital Risk 
Management Lead
tbigham@deloitte.co.uk

Suchitra Nair
Director, Risk Advisory
EMEA Centre for Regulatory Strategy
snair@deloitte.co.uk

Sulabh Soral
Director, Consulting
Artificial Intelligence
ssoral@deloitte.co.uk

Alexander Denev
Head of AI
Financial Services Advisory 
adenev@deloitte.co.uk 

Valeria Gallo
Manager, Risk Advisory
EMEA Centre for Regulatory Strategy
vgallo@deloitte.co.uk

Michelle Lee
Manager, Risk Advisory
Artificial Intelligence
michellealee@deloitte.co.uk

Tom Mews
Manager, Risk Advisory
Technology and Digital Risk Management
