Managing Algorithmic Risks
Safeguarding the use of complex algorithms and machine learning
Complex algorithms and machine learning systems are used to accelerate performance and create differentiation. But they often operate as black boxes in decision making and may lack adequate controls. Learn how to harness the power of your algorithms and manage them with a robust risk management framework.
Algorithmic Risks
Increasingly, complex algorithms and machine learning-based systems are being used to achieve business goals, accelerate performance, and create differentiation.
Algorithmic risks arise from the use of data analytics and cognitive technology-based software algorithms in various automated and semi-automated decision-making environments.
Deloitte has developed a framework for understanding the different areas that are vulnerable to such risks and the underlying factors causing them.
Input Data: Is vulnerable to risks, such as biases in the data used for training; incomplete, outdated, or irrelevant data; an insufficiently large or diverse sample size; inappropriate data collection techniques; and a mismatch between the data used for training the algorithm and the actual input data during daily operations (a simple check for this last mismatch is sketched after this list).
Algorithm Design: Is vulnerable to risks, such as biased logic; flawed assumptions or judgements; inappropriate modelling techniques; coding errors; and spurious patterns in the training data.
Output Decisions: Are vulnerable to risks, such as incorrect interpretation of the output; inappropriate use of the output; and disregard of the underlying assumptions.
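To make the input data risk concrete, the sketch below checks whether the data an algorithm sees in daily operations still follows the distribution of the data it was trained on. It is a minimal Python sketch using a two-sample Kolmogorov-Smirnov test on a single numeric feature; the synthetic values, sample sizes, and p-value threshold are illustrative assumptions, not part of a prescribed methodology.

```python
# Minimal sketch (Python): detect a mismatch between training data and live
# input data for one numeric feature. The feature values, sample sizes, and
# p-value threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(train_values, live_values, p_threshold=0.01):
    """Two-sample Kolmogorov-Smirnov test: a small p-value indicates the live
    inputs no longer follow the distribution the algorithm was trained on."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return {"ks_statistic": statistic, "p_value": p_value, "drift": p_value < p_threshold}

# Synthetic example: the live inputs have shifted relative to the training data.
rng = np.random.default_rng(0)
training_data = rng.normal(loc=0.0, scale=1.0, size=5_000)  # data used to train the model
live_data = rng.normal(loc=0.6, scale=1.0, size=1_000)      # data seen in daily operations
print(drift_alert(training_data, live_data))                # expected: drift == True
```

In practice, a check like this would run per feature on a schedule, with alerts routed to the team that owns the model so retraining or data remediation can be triggered before decisions degrade.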
The risks around input data, algorithm design and output decisions can be caused by several underlying factors:
Human Biases: Cognitive biases of model developers or users can result in flawed output. In addition, lack of governance and misalignment between the organization's values and individual employees’ behavior can yield unintended outcomes.
Example: Developers provide biased historical data to train an image recognition algorithm, resulting in the algorithm being unable to correctly recognize minorities.
Technical Flaws: Lack of technical rigor or conceptual soundness in the development, training, testing, or validation of the algorithm can lead to an incorrect output.
Example: Bugs in trading algorithms drive erratic trading of shares and sudden fluctuations in prices, resulting in millions of dollars in losses in a matter of minutes.
Usage Flaws: Flaws in the implementation of an algorithm, its integration with operations, or its use by end users can lead to inappropriate decision making.
Example: Drivers over-rely on driver assistance features in modern cars, believing them to be capable of completely autonomous operation, which can result in traffic accidents.
Security Flaws: Internal or external threat actors can gain access to input data, algorithm design, or its output and manipulate them to produce deliberately flawed outcomes.
Example: By intentionally feeding incorrect data into a self-learning facial recognition algorithm, attackers are able to impersonate victims via biometric authentication systems.
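The data poisoning example above can be countered in part by screening candidate samples before a self-learning model ingests them. The following is a minimal, hypothetical Python sketch: it rejects face-embedding updates whose distance from the user's enrolled samples is anomalously large. The embedding representation, distance rule, and threshold multiplier are illustrative assumptions, not a prescribed control.

```python
# Minimal sketch (Python): screen candidate samples before a self-learning
# biometric model updates itself, limiting the impact of deliberately injected
# (poisoned) data. Embeddings, the distance rule, and k are illustrative assumptions.
import numpy as np

def screen_updates(enrolled, candidates, k=3.0):
    """Accept only candidates whose distance to the enrolled centroid lies within
    k standard deviations of the distances observed in the enrolled samples."""
    centroid = enrolled.mean(axis=0)
    enrolled_dist = np.linalg.norm(enrolled - centroid, axis=1)
    threshold = enrolled_dist.mean() + k * enrolled_dist.std()
    candidate_dist = np.linalg.norm(candidates - centroid, axis=1)
    keep = candidate_dist <= threshold
    return candidates[keep], int((~keep).sum())

# Synthetic example: five genuine updates and five attacker-crafted samples.
rng = np.random.default_rng(1)
enrolled = rng.normal(0.0, 0.3, size=(50, 128))   # the user's enrolled embeddings
genuine = rng.normal(0.0, 0.3, size=(5, 128))     # legitimate new samples
poisoned = rng.normal(2.0, 0.3, size=(5, 128))    # deliberately fed incorrect data
accepted, rejected = screen_updates(enrolled, np.vstack([genuine, poisoned]))
print(f"accepted {len(accepted)} samples, rejected {rejected}")  # rejects the poisoned ones
```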
With increasing volumes of data, more processes being automated, and more decisions being made by algorithms, you will need assurance that your algorithms are working as intended and achieving the desired business outcomes.
How we can help
Our offering focuses specifically on the areas where you may be most vulnerable: algorithms operating in environments outside of your ERP, which can pose real or dormant business and reputational risks.
Check out our Algorithm Assurance Services to find out how we can help you gain trust in your algorithmic operations.