
Algorithm & AI Assurance

The growing use of algorithms and Artificial Intelligence

Algorithms have become fundamental to the operations of many organisations in the modern business environment. Combined with accelerating advances in Artificial Intelligence (“AI”), the broadening scope of algorithm use allows businesses, now more than ever, to unlock operational efficiencies, from enhanced customer experience to more targeted strategic planning. Against the backdrop of these technological developments, assurance is rising to the fore as a means of ensuring that the risks created by algorithms and AI are appropriately managed, and of responding effectively to heightened interest from regulators and the public alike.

Algorithm and Artificial Intelligence Assurance

 

Whilst many benefits can come from leveraging the power of algorithms within your business, there remains a real threat that inappropriate use, or ineffective management, of these systems can significantly increase an organisation’s exposure to legal, regulatory and operational risks. Recent years have shown first-hand the consequences of defective algorithm risk management and the reputational, regulatory and financial damage that can result.

The increasing gravitation towards Artificial Intelligence systems marks a departure from the traditional operation of algorithms and a shift in the risks these systems pose to the market. Customer interactions driven by chatbots, healthcare screenings powered by AI, or fraud detection in consumer spending habits monitored using Machine Learning all carry the potential for direct consumer harm, yet these risks are often unknown to the end user. For a technology whose playing field is constantly evolving, incoming regulation increasingly calls for validation of AI development that goes beyond simple algorithmic control assessments.

Market disorder and consumer harm remain at the core of public scrutiny over the use of algorithms and AI, forcing regulators and those charged with governance to consider how these systems are identified, used, controlled and managed. A robust algorithm control environment is fundamental to good algorithm risk management, ensuring regulatory compliance and supporting ongoing assessment of Artificial Intelligence system integrity. To examine whether algorithms are operating as expected, there is a growing need for assurance over organisations’ management of these risks; such assurance should ask whether the algorithm still addresses its initial objective post-deployment and provide confidence over the management of regulatory, operational and/or financial risk.

Leveraging extensive industry experience, garnered across a broad client base within the Financial Services sector and beyond, Deloitte’s Algorithm Assurance Practice has developed its own proprietary approaches and toolkits rooted in the latest technological and regulatory developments. Drawing on these materials and the team’s expansive skillset, we are well-placed to provide assurance over algorithm and AI technology, risk management and governance environments in organisations of all sizes.

The Team

 

Our specialist team has extensive experience assisting organisations in identifying and understanding how they use algorithms and other AI systems in a broader business context. We can challenge related governance and oversight practices, examine the adequacy of algorithm policies and procedures, and support efforts to identify, manage and mitigate associated risks. Bringing together audit, finance, regulatory and industry professionals alongside engineers and data scientists, the team is also well-placed to comprehensively support specific technical algorithm review activities as part of a wider algorithm and AI assessment against industry best practice and regulatory requirements.


A fundamental pillar of Deloitte’s approach to algorithm and AI assessment is the governance and control framework and the broader risk management environment: breaking down and assessing the oversight structures in place to identify, assess, monitor and manage risk.

A good way for organisations to begin understanding whether they are sufficiently prepared to manage the potential risks arising from algorithm and AI use is to reflect on the robustness of their control framework. The types of questions organisations need to ask include:

  • How has your organisation defined the term “algorithm” and/or “artificial intelligence”?
  • Does your organisation’s governance structure encompass all algorithms and AI systems that it deploys?
  • Do you have a cross-firm algorithm/AI risk control framework applicable to all business areas, and have you considered how your organisation could improve the robustness of this framework?
  • Has your organisation identified key algorithm/AI risks from the systems it deploys and necessary controls to mitigate them?
  • How does your organisation address the regulatory developments that may impact its use of algorithms/AI?
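One practical starting point for several of the questions above, in particular whether the governance structure covers every algorithm and AI system the organisation deploys, is a firm-wide algorithm inventory. The short Python sketch below is purely illustrative and is not part of Deloitte’s proprietary toolkits; the record structure, field names and risk ratings are assumptions chosen only to show the kind of information such a register might capture.

from dataclasses import dataclass, field
from enum import Enum


class RiskRating(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AlgorithmRecord:
    # One entry in a hypothetical firm-wide algorithm/AI inventory.
    name: str
    business_area: str            # which part of the firm owns and uses it
    objective: str                # the original design objective it should still meet
    uses_ai: bool                 # distinguishes rule-based algorithms from AI/ML systems
    risk_rating: RiskRating
    controls: list = field(default_factory=list)   # documented mitigating controls
    last_reviewed: str = ""       # date of the last governance review


def uncontrolled_high_risk(inventory):
    # Flag high-risk entries with no documented mitigating controls.
    return [r for r in inventory
            if r.risk_rating is RiskRating.HIGH and not r.controls]

A simple query such as uncontrolled_high_risk can then surface systems that are rated high risk but lack documented controls, providing one input into the framework questions above.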

Parallel to an organisation’s algorithm risk management environment is the extent to which an algorithm or AI system is fit for purpose in the first instance. Deloitte’s holistic assurance approach assesses elements of system architecture and algorithm lifecycle stability, providing validation over isolated algorithms as well as more complex AI ecosystems. Organisations may wish to reflect on the technical soundness of their algorithm design, and whether it stands up to regulatory demands and technical certification standards. The types of questions organisations should be asking include:

  • What is the objective output of each algorithmic system?
  • Does the current state of the AI system design match up with the original objective output for that system?
  • Are we confident in the quality and integrity of data used in the development or training phase?
  • Are we using performance metrics that are relevant to the original system design objective, and are we able to interpret these metrics meaningfully?
  • Do our algorithms or AI systems interact with the market as intended and consistently, post-deployment?
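As a purely illustrative sketch of the kind of checks these questions point towards, the Python snippet below compares a deployed model’s measured performance against a hypothetical threshold implied by its original design objective, and runs a basic data-integrity check on input records. The choice of metric (recall), the 0.80 threshold and the field-checking helper are assumptions made for illustration, not a prescribed validation approach.

# Hypothetical design objective: a fraud-detection model should keep recall >= 0.80.
TARGET_RECALL = 0.80


def recall(y_true, y_pred):
    # Recall = true positives / (true positives + false negatives).
    tp = fn = 0
    for truth, pred in zip(y_true, y_pred):
        if truth == 1 and pred == 1:
            tp += 1
        elif truth == 1 and pred == 0:
            fn += 1
    return tp / (tp + fn) if (tp + fn) else 0.0


def meets_design_objective(y_true, y_pred):
    # Does post-deployment performance still match the original design objective?
    return recall(y_true, y_pred) >= TARGET_RECALL


def missing_fields(records, required):
    # Basic data-integrity check: indices of records missing any required field.
    return [i for i, rec in enumerate(records)
            if any(key not in rec for key in required)]

Checks of this kind, run routinely after deployment, help answer whether a system still addresses its original objective and whether the data feeding it remains of sufficient quality.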