Algorithm Assurance

The growing use of algorithms

Algorithms have rapidly become a critical part of the processes and operations of organisations in every industry. However, algorithms also carry the risk of producing discriminatory, non-compliant or incorrect outputs, so it is essential that organisations implement effective controls and oversight.

Algorithms: the need for a robust risk and controls framework

Artificial Intelligence (AI) and the use of algorithms are at the heart of the Fourth Industrial Revolution and are expected to radically increase efficiency across every industry. Algorithms can analyse vast volumes of data to generate outcomes and decisions that businesses, governments and individuals rely upon. However, as algorithms become more pervasive, people are relying on algorithmic outputs more, whilst understanding them less.

Ideally, an algorithmic decision could be explained in terms of the data points that contributed to it. In reality, however, AI systems analyse millions of data points and derive probabilistic decisions from the patterns they detect, and these outcomes are often too abstract to explain. Decisions informed by AI are therefore often described as a ‘black box’, because the decision process is not transparent.

This poses many risks: if outputs cannot be explained, there is no guarantee they are ethical, non-discriminatory, free of anti-competitive effects, compliant, or even correct. Organisations should establish, and continuously monitor, a robust risk and controls framework to manage these risks and ensure their algorithms operate as effectively and ethically as possible.
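To make this concrete, below is a minimal sketch in Python of one control such a framework might include: a check of model decisions for disparate approval rates across demographic groups. The data, group labels and the 80% (‘four-fifths’) threshold are illustrative assumptions, not a prescribed standard.

  # Minimal sketch of a disparate-impact check over model decisions.
  # Groups, data and the 80% threshold are illustrative assumptions.
  from collections import defaultdict

  def disparate_impact_check(decisions, threshold=0.8):
      """decisions: iterable of (group, approved) pairs, where approved is True/False."""
      totals, approvals = defaultdict(int), defaultdict(int)
      for group, approved in decisions:
          totals[group] += 1
          if approved:
              approvals[group] += 1
      rates = {g: approvals[g] / totals[g] for g in totals}
      best = max(rates.values())
      flagged = {g: r for g, r in rates.items() if best > 0 and r / best < threshold}
      return rates, flagged

  # Synthetic example: group "B" is approved far less often than group "A".
  sample = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
  rates, flagged = disparate_impact_check(sample)
  print(rates)    # approval rate per group
  print(flagged)  # groups falling below 80% of the best rate, warranting review

In practice, a check like this would run continuously against production decisions and feed an escalation process, rather than being a one-off script.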

The regulatory landscape surrounding algorithms is also growing. The European Commission has proposed regulation of high-risk AI, and in Australia the Federal Government has released 8 AI Ethics Principles to help organisations ensure AI is safe, secure and reliable.

Several organisations have already felt the sometimes severe consequences of their algorithms performing defectively:

  • A class action lawsuit over a faulty Income Compliance algorithm resulted in a settlement of more than $1.5 billion.
  • A major bank was required to write off more than $2m in loan balances due to a programming error in its automated serviceability calculator.
  • An error in the trading software of a financial services company resulted in the accidental purchase of almost $7 billion worth of shares within an hour.
  • The SEC fined a multinational insurance company more than $200 million for investor losses caused by a known error in its investment algorithm.
  • A medical algorithm was found to exhibit racial bias, favouring the treatment of white patients over sicker black patients.
  • The SEC charged a stock exchange $10 million over "poor systems and decision making".
  • A multinational organisation's machine learning recruitment tool was found to be biased against women.
  • A stock exchange's computer malfunction caused almost 20,000 erroneous trades, which were later cancelled.

The risk of algorithmic failure amplifies as the application of these technologies evolves and accelerates.

Our toolkit

We at Deloitte view our algorithms as we view our workforce. Just as we do for our people, we owe our algorithms a duty of care: they deserve the same guidance, developmental support and performance reviews we give our employees. With our support, you can help your algorithms flourish and operate efficiently, legally and productively.

In line with these efforts, as a leading provider of AI-related assurance services, Deloitte has developed the AI Institute. The Deloitte AI Institute’s mission is to guide the development of trustworthy, powerful AI solutions by fostering a network of like-minded organisations and facilitating cutting-edge knowledge sharing to accelerate Australia’s AI agenda.

Ask yourself:

  • Do you have an inventory of the algorithms in use at your organisation? (A minimal sketch of an inventory record follows this list.) Have you identified the risks associated with AI?
  • Can you explain how the underlying algorithm driving the AI works? Is there unintended bias in the results from the AI?
  • Have you challenged or assessed the outputs from the AI? Has adequate testing been performed?
  • What lines of defence do you currently have over your AI? Are adequate controls in place?
  • Are you compliant with your regulatory requirements associated with the organisation’s use of AI?
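On the first question above, the following is a minimal, hypothetical sketch of what an algorithm inventory record might capture. The field names, risk scale and example values are assumptions for illustration only, not a prescribed template.

  # Minimal sketch of an algorithm inventory record.
  # Field names, risk scale and example values are illustrative assumptions.
  from dataclasses import dataclass, field

  @dataclass
  class AlgorithmRecord:
      name: str                  # business-facing name of the algorithm
      owner: str                 # accountable team or individual
      purpose: str               # the decision or output it produces
      data_sources: list         # inputs the algorithm relies on
      risk_rating: str           # e.g. "low", "medium", "high" (assumed scale)
      last_reviewed: str         # date of the most recent assurance review
      known_limitations: list = field(default_factory=list)

  inventory = [
      AlgorithmRecord(
          name="Loan serviceability calculator",
          owner="Retail Credit Risk",
          purpose="Assess a borrower's capacity to repay",
          data_sources=["income data", "expense benchmarks"],
          risk_rating="high",
          last_reviewed="2023-06-01",
          known_limitations=["sensitive to benchmark assumptions"],
      ),
  ]

  # A simple oversight query: surface high-risk algorithms for prioritised review.
  for record in inventory:
      if record.risk_rating == "high":
          print(record.name, "-", record.owner)

Even a simple register like this gives the lines of defence a starting point for risk-based prioritisation and review.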

If your answer to any of these questions was no, or you are unsure of the answer for your organisation, then we are here to help.

Deloitte can benefit your business and address these questions through our Algorithm Assurance service: a specialist assurance offering that assists your organisation in the definition, identification, classification, assessment, enhancement and monitoring of healthy and effective algorithms, ensuring they conform with regulatory requirements whilst optimising opportunities for efficiency gains.

To hear more about our service and how it will benefit you, please get in touch.