Posted: 20 Apr. 2021 · 4 min. read

Making AI transparent, fair and trustworthy

There are many mitigation procedures that a company can take to remedy a situation when an employee does not behave within the organization’s ethical standards. But what happens when the fault lies with technology?

This is the dilemma that many organizations are currently facing with the emergence of artificial intelligence (AI). AI originates from the machine-to-machine world, using controllable algorithms in manageable scenarios. This technology has evolved into the machine-to-human world and taken on elevated intelligence. We are on the verge of delegating human decision-making to AI. Consequently, questions of ethics have surfaced because in some instances these decisions impact people’s lives.

At the same time, the way AI makes these decisions has become increasingly opaque. If decisions reflect implicit bias based on gender, race, geography or other factors, the problem can become widespread. After all, when an employee engages in this type of behavior, the damage may be confined to the people with whom that employee has interacted. When AI does it, it can scale exponentially in an instant and wind up in the headlines for the wrong reasons.

This raises the question: How can companies be sure that their AI is trustworthy?

How AI Becomes Biased

AI is an essential tool for every successful business seeking to continuously enhance client loyalty, product quality and operational efficiency, with the potential to expand revenue and minimize wasted cost, time and effort. This is clearly shown by AI-driven robo-advisors in financial services, which provide a “self-serve” technology experience at a lower cost-per-investor than would be possible with human financial advisors.

A robo-advisor offers products based on the customer’s profile and behavior. But what if the person on the receiving end is not empowered in the same way? How do they deal with a bot that is not working to their benefit, and how can they be certain they are seeing all that can or should be seen? One safeguard is to give customers full access to the bot’s algorithm, so they can exercise discretion and override the bot when it behaves this way.

Further, it becomes more complicated when AI is making assessments about creditworthiness, insurance risk, product selection or other areas where implicit bias can become a problem. For example, there have been high-profile media stories of AI credit card qualification algorithms giving higher credit limits to men than to women, or of AI-driven real estate engines showing houses in certain neighborhoods only to “preferred” races or genders. The root of this bias lies in the historical data fed into the machine learning (ML) component of AI. If the historical data does not reflect the evolution of the current social structure, the AI algorithm takes the human bias of the past and embeds it as the mathematical bias of today. In addition, more recent approaches such as deep learning have reduced the intervention of human data scientists: the data is no longer cleansed or checked for balance, which reduces the opportunities to ask “human questions.”
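To make this concrete, the kind of pre-training check a data scientist might run can be sketched in a few lines. This is a minimal, hypothetical example: the records, column names (`gender`, `approved`) and the data itself are illustrative assumptions, not drawn from any real lender.

```python
# Hypothetical sketch: inspecting historical credit decisions for
# group-level imbalance before they are used to train a model.

def approval_rates(records, group_key="gender", label_key="approved"):
    """Return the share of positive outcomes per group."""
    totals, positives = {}, {}
    for row in records:
        g = row[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + int(row[label_key])
    return {g: positives[g] / totals[g] for g in totals}

# Illustrative historical data in which men were approved more often.
historical = [
    {"gender": "M", "approved": 1},
    {"gender": "M", "approved": 1},
    {"gender": "M", "approved": 0},
    {"gender": "F", "approved": 1},
    {"gender": "F", "approved": 0},
    {"gender": "F", "approved": 0},
]

rates = approval_rates(historical)
# A large gap between groups flags past human bias that a model
# trained on this data would reproduce as mathematical bias.
gap = max(rates.values()) - min(rates.values())
```

A check like this does not prove bias on its own, but a large gap is exactly the kind of signal that should prompt the “human questions” the paragraph above describes.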

Protecting Against AI Bias

AI comes into business organizations in two ways: either developed in-house by data scientists or brought in from external sources via third-party products. Both present their own challenges.

Data scientists are technologists, not ethicists, so it is unrealistic for organizations to expect them to apply an ethical lens to their development activities. A good, disciplined data scientist checks the data for bias and runs tasks designed to bring balance between technology and ethics, but may not have the tools to judge ethical values. There are best practices data scientists should follow to minimize the risk of embedding implicit bias in AI systems, but even so, that is not enough to truly control the risk. An additional layer of audit, above the checks completed by the data scientists, is needed to validate that AI models are trustworthy.

The ethical lens can be quite complicated because many AI systems are discriminatory by design. For example, an AI-driven chatbot recommending a pair of men’s shoes to a male shopper, and women’s shoes to a female shopper, is discriminating based on gender, but in a useful way. Discrimination can’t be banned from algorithms, or AI simply would not work. The goal is to eliminate unethical discrimination in the development process and to be able to detect implicit bias during testing.
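One common way to detect implicit bias during testing is to compare a model’s positive-outcome rate across protected groups, sometimes called the demographic parity difference. The sketch below assumes simulated model outputs and an illustrative tolerance threshold; none of the names or values come from a real system.

```python
# Hypothetical testing-stage check: does the model favor one group?

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between groups."""
    totals, positives = {}, {}
    for pred, grp in zip(predictions, groups):
        totals[grp] = totals.get(grp, 0) + 1
        positives[grp] = positives.get(grp, 0) + int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Simulated model outputs for a balanced test cohort.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["M", "M", "M", "M", "F", "F", "F", "F"]

gap = demographic_parity_gap(preds, groups)
TOLERANCE = 0.2  # illustrative fairness threshold, set by policy
biased = gap > TOLERANCE
```

In practice the threshold, the protected attributes, and whether a disparity is “useful” discrimination (as with the shoe recommendation) or unethical discrimination are policy decisions, which is exactly why an audit layer beyond the data science team is needed.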

Addressing AI bias in third-party products can be more challenging, because the AI is part of the vendor’s intellectual property portfolio, which the vendor may not want to expose for audit. It is still possible, however, to audit a vendor’s AI: customers can request the ML training data and inspect it, along with the AI’s outputs, to detect any implicit bias. Not all vendors are accustomed to this level of transparency – but they should be, particularly as mitigating AI risk becomes a greater priority for businesses.

Building Trustworthy AI into the Corporate Culture

We should not lose discipline in our race to AI and the automated future. AI must reflect the values of the company, taking care to avoid brand damage and legal risk. However, there is still an underlying risk of inadvertently violating those values in the development process. This is what makes AI such a daunting venture.

Implementing effective AI audits and safeguards can be a challenge for some companies. Corporate risk management is often divided into technical risk and business risk, with the technical team responsible for mitigating technical risks and the business team mitigating business risks. AI spans both. For example, the credit card application mentioned earlier would be highly technical on the back end, but would also interact with, and impact, customers on the front end. A technologist may not identify any issues with how the application is developed, because it is technically correct. Meanwhile, the business auditor who could identify the problem is not part of the process, because the application is classified as a “technical risk.” Moving forward, it is important for both teams to work together to provide sound auditing and controls for trustworthy AI.

Like many important technologies, AI is moving faster than companies’ capacity to govern it, creating issues that most are not built to handle. Companies will need to advance quickly, as ethical AI regulations are currently in development in many countries. The brand and legal risks of “AI gone awry,” and the risk of being perceived to mishandle sensitive data, grow increasingly profound as technology continues to permeate the business landscape.

Companies can keep pace with the continued advancement of technology and enhance AI, ensuring it is applied in a transparent, fair and trustworthy way by:

  • Elevating the importance of AI governance;
  • Developing AI model validations to guide and support data scientists;
  • Working with vendors to audit and validate their AI technology; and
  • Evolving enterprise risk management to audit AI at both the technical and business levels.



Key Contact

Philip Chong

Global Digital Controls Leader

Philip Chong is the Global Leader for the Digital Controls offering for Risk Advisory in Deloitte. The practice helps clients across first-line businesses and second-line corporate control functions to design, implement and test controls that utilize analytics, automation, artificial intelligence and algorithms to manage risk, and to strengthen and continuously monitor control systems from both an effectiveness and an efficiency perspective. Philip concurrently leads the SAP alliance for Risk Advisory Asia Pacific and provides direction for Deloitte Risk Advisory teams serving large SAP implementations.

Sam Cammiss

Director | AI & Risk

A Director in the Risk Advisory business at Deloitte Southeast Asia, Sam specialises in the “risk of AI” and the use of “AI for risk”. His work includes the development of automated risk-sensing tools for the financial, government, and consumer sectors, together with the development of governance and control systems for “Trustworthy AI”. Sam’s perspective on the uses and abuses of data stems from his experience in advertising, policy economics, and neuroscience. A keen advocate of greater transparency and understanding of the data sciences, Sam is a lecturer at Singapore Management University.