Posted: 18 July 2022 · 5 min read

The increasing role of AI in financial services: Considering AI and ML in your Audit and Assurance Policy

Financial services firms are increasingly focusing on how they can use artificial intelligence (AI) to drive strategy and improve business models. As AI becomes more central to the business, links between AI and directors’ remuneration and key performance indicators are increasingly prevalent in disclosures to investors and in Annual Reports, but these may not be subject to assurance or considered as part of the statutory audit.

Ethical use of AI

Banks, insurers and payment providers are natural users of AI and machine learning (ML) as they are able to amass high volumes of data which give valuable insights into risk and customer behaviour. As AI and ML start to be used to reduce costs, improve pricing and accelerate growth, many firms are developing frameworks to make sure this data is used in ethical and appropriate ways. These might address issues around fairness, explainability and robustness, as well as how appropriate oversight of AI is ensured. Firms should consider whether they have defined a set of ethical AI principles and the extent to which they wish to state their commitment to these principles in their public disclosures, and may wish to consider how the OECD principles or the EU’s ethics guidelines on AI can help them shape these.

For firms subject to the Senior Managers and Certification Regime, the individuals it covers need to be as confident in the business’s use of AI and ML as they are in traditional models and human decisions. More broadly, Deloitte research has found that Boards are “over optimistic” about their oversight of technology, and emphasises that “Boards need to be vigilant and self-critical in fast-changing areas.”

This is particularly important where the use of AI and ML can affect customer outcomes: without the right controls, processes and oversight, AI and ML can exacerbate existing inappropriate biases in data and lead to unfair decision making or pricing, resulting in customer detriment. The UK Financial Conduct Authority remains focused on how the use of AI can benefit consumers, whilst remaining alert to the risks and the need for consumer confidence.

As well as needing to ensure that metrics presented around pricing, growth and costs are robust, Boards may also want to seek assurance that frameworks for ethical use of data developed by the business, and in some cases shared publicly, are being adhered to.

Regulatory interest

Whilst use of AI and ML is most extensively discussed in reporting by motor insurers, its use is increasing across the sector, including within banking. In its recent consultation paper, the Bank of England reiterated its view that the use of AI and ML introduces unique risks, as well as amplifying existing risks associated with the use of models. The paper also introduces an expanded definition of “model”, which could impact firms’ existing use of automated decision making.

In the face of new and increased risk, claims made about the use of AI and its centrality to strategy and the business model should be included within the Audit and Assurance Policy, and firms should consider how to obtain assurance over the operation of AI and ML where it is material to the business.

Looking forward

As financial services firms continue to face cost pressures and seek to innovate, the use of AI and ML will grow. Firms need to balance technological progress with the need to maintain the trust and confidence of consumers. Assurance can help firms report on their use of AI and ML in a responsible and robust way, giving Boards and consumers confidence that the benefits are accurately captured and that deployment is delivering equal or better outcomes for consumers.

For more information, or to discuss how Assurance can give confidence in use of AI and ML, contact our expert teams.

Key Contacts

Philippa Kelly

Director

Philippa is a Director in Regulatory Assurance where she is part of the Conduct & Prudential team. Prior to joining Deloitte she was Director of Financial Services at ICAEW (Institute of Chartered Accountants in England & Wales) where she was responsible for ICAEW’s technical, policy and thought leadership work related to accounting, audit, risk and regulation across banking, insurance, and investment management. She trained as a Chartered Accountant with PwC.

Mark Cankett

Partner

Mark is a Partner in our Regulatory Assurance team. He is our AI Assurance, Internet Regulation and Global Algorithm Assurance Leader, with 20 years of experience across financial services audit and assurance, regulatory compliance, regulatory investigations and disputes. He has led the development of our assurance practice as it relates to helping firms gain confidence over their algorithmic and AI systems and processes. He has a particular sub-sector specialism in algorithmic trading, with varied experience helping firms enhance their governance and control environments, as well as investigate and validate such systems. More recently he has supported and led our work across a number of emerging AI assurance engagements.

Barry Liddy

Director

Barry is a Director at Deloitte UK, where he leads our Algorithm, AI and Internet Regulation Assurance team. He is a recognised Subject Matter Expert (SME) in AI regulation, with a proven track record of guiding clients in strengthening their AI control frameworks to align with industry best practices and regulatory expectations. Barry’s expertise extends to Generative AI, where he supports firms in safely adopting this technology and navigating the risks associated with these complex foundation models. Barry also leads our Digital Services Act (DSA) and Digital Markets Act (DMA) audit team, providing independent assurance over designated online platforms’ compliance with these internet regulations. As part of this role, Barry oversees our firm’s assessment of controls in crucial areas such as consumer profiling techniques, recommender systems, and content moderation algorithms. Barry’s team also specialises in algorithmic trading risks and controls, and he has led several projects focused on ensuring compliance with relevant regulations in this space.