Posted: 10 May 2022 · 5 min read

Prioritising AI & Ethics: A perspective on change

AI is everywhere, and its pervasiveness is now a given. Capturing AI’s unprecedented opportunities requires understanding its impact and deciding the role we allow it to have in society.

This is what we call AI Ethics – and the very core of this debate is the role of ethics in the digital world, the ways it interacts with existing regulations, and the way we strive to achieve a safer and fairer use of AI across organisations and society more broadly.

AI, a transformative technology

"Deploying AI is the difference between a company flourishing or floundering", as it enables automating tasks at a level never seen before.[i] As AI becomes more accessible and efficient, sectors and industries are increasingly embracing automation as part of their digital transformation journey, attempting to tap into its transformative potential.

From targeted advertising, to analysing large sets of patient data for vaccine efficacy, to determining a person's chances of obtaining a job or a loan, automation can serve very different purposes and have vastly different impacts.

The paradigm is shifting: where the physical world once entered the digital, the digital now permeates the physical (e.g. Web3). Along with leading private organisations, regulatory bodies, academia and civil society, we investigate what lies beyond the algorithm, and where it has broader implications for privacy, data protection, human rights, and consumer rights.

From expert conventions to dinner table conversations: the need for an all-encompassing approach

AI artefacts are based on a series of automated actions programmed by engineers to deliver on an objective. These actions are trained on large datasets, a set-up that inherently allows biases to influence the output. Data reflects what happens in society and, for that reason, is not neutral. Similarly, we as humans live in those very same socio-political contexts and are conditioned to see and understand the world through a particular value system. Applied to the context of AI, there is a risk of “sleepwalking into a future written by algorithms which encode racist, sexist and classist biases into our daily lives”.[ii] Questioning, evaluating, mitigating, and monitoring the intended and unintended consequences of AI is hence of paramount importance to avoid encoding biases into the technology.
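To make this concrete, consider a minimal, deliberately simplified sketch. The dataset, the groups and the "model" below are all invented for illustration: a system that merely learns the majority outcome in skewed historical hiring data will faithfully reproduce that skew in its predictions, without any malicious intent on the part of its programmers.

```python
from collections import Counter

# Hypothetical historical hiring records: (applicant group, hired?).
# The historical process favoured group_a, so the labels are skewed.
historical = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", False), ("group_b", True),
]

def majority_rule(records):
    """A naive 'model': predict the majority outcome seen per group."""
    outcomes = {}
    for group, hired in records:
        outcomes.setdefault(group, []).append(hired)
    # The most common historical label becomes the prediction,
    # so the historical skew is encoded directly into the model.
    return {g: Counter(v).most_common(1)[0][0] for g, v in outcomes.items()}

model = majority_rule(historical)
print(model)  # {'group_a': True, 'group_b': False}
```

Real AI systems are far more sophisticated, but the underlying mechanism is the same: the model optimises for fidelity to its training data, and if that data encodes a biased past, fidelity and fairness pull in opposite directions.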

The last decade has given us many examples of the impact AI can have when it is misused or abused. Appreciating the multidisciplinary nature of AI, and its far-reaching effects on our daily lives, enables us to conceptualise AI and form an opinion about it and its outputs, so that we can use it in a way that is aligned with our societal, organisational and personal values, and that remains effective and trustworthy.

Civil society and customers alike participate in the conversation and increasingly question the organisations and governments adopting AI. Not long ago, we saw for the first time students and their families demonstrating in the streets of the UK in protest against the algorithm used to grade A-level exams during the pandemic.[iii] It was a cross-generational encounter brought about by a biased algorithm, and arguably the first protest of its kind against a machine. While this shows increased awareness, it also shows the importance of understanding the process behind an AI artefact, keeping a “human in the loop” rather than blaming the technology.

Being part of the conversation and embedding ethics into AI means responding to the demand for corporate responsibility, reputation, and innovative business models, whilst maintaining the highest standards of compliance with all intersecting regulations, including privacy, human rights, anti-discrimination, liability, labour, and consumer law, to name a few.

The journey to the ethical use of AI requires mapping and understanding those regulations, navigating complex trade-offs to define the values that will drive innovation from the outset to post-deployment, and integrating a robust culture across the organisation.

We help our clients move beyond compliance by establishing an all-encompassing, ethics-driven approach that future-proofs and enables AI deployment and innovation. Regulators are moving in the same direction.

Regulators are stepping in, supporting an ethical approach to AI

As public concern spreads and digital literacy grows, we have seen regulators increasingly acting and engaging with industry and academia to provide frameworks around the development of AI artefacts.

Developing appropriate regulation for AI is complex: regulators must strike the right balance between safeguarding rights and values and allowing innovation, while avoiding overly complex regulatory requirements for organisations and providing a framework that can adapt to the fast evolution of the technology.

The current regulatory landscape comprises three main ways to encourage ethical AI: (1) principle-based codified law, (2) auditing frameworks and (3) guidelines.

The European Commission has launched the legislative process for an EU AI Act, which will prohibit certain AI systems that pose clear risks to the rights of individuals (for example, social scoring), and subject AI systems deemed “high-risk” to strict requirements in terms of risk management, data governance, transparency, and human oversight. While the draft will be iterated and shaped over the course of this year by the successive EU Council presidencies, the Commission, the European Parliament, and the ministers, it clearly signals the EU’s drive to regulate AI with binding, principle-based and risk-based legislation, similar to the GDPR.[iv],[v]

On this side of the Channel, the UK government commissioned the Information Commissioner’s Office (ICO) to develop an AI Ethics Auditing Framework. Starting in 2019, the ICO published initial guidelines,[vi] which were then put to public consultation in 2020.[vii] The objective is to build upon the GDPR and Data Protection Act principles to develop a methodology for the ICO to assess whether AI artefacts respect data protection rights, as well as to provide organisations with best practices when deploying AI.

Finally, many countries and international organisations have published guidelines for the ethical development and deployment of AI artefacts, creating a mosaic of recommendations: the OECD Principles on AI,[viii] the G20 AI Principles,[ix] UNESCO’s Recommendation on the Ethics of Artificial Intelligence,[x] as well as national guidelines in the US, China, Singapore, Hong Kong and France, amongst others.

Be a part of the AI Ethics cultural change

Citizens, customers, regulators: the AI Ethics debate is moving fast, and demands for an all-encompassing approach that considers AI’s multiple dimensions and affordances are on the rise.

As organisations look to simultaneously transform their operations to reap the benefits of technological advancement and to uphold the values of corporate sustainability and business responsibility, AI Ethics is the key to unlocking the full potential of digital transformation.

Defining the values we embed in the technologies we develop, and establishing the values we accept behind the technologies we use, is necessary both for organisations and for individuals. With increasing awareness come increasing expectations, and these will soon become the “make or break” of an emerging technology. Stephen Hawking wrote of AI: “welcome to the most important conversation of our time”.[xi] This conversation needs to happen, and it needs to happen now.

Our expert team of professionals helps organisations navigate these complex trade-offs, building an ethical culture around the diverse and impactful AI used in customer-facing and internal activities.

We at Deloitte are part of the change, upholding the values behind the technology we use and provide to our clients: fair, purpose-driven, human-centric technologies that make an impact that matters. What about you – what do you stand for?

Contact us to start this conversation!

[i] Dashevsky, “DWS 2018: The Acceleration of AI Is Transforming Business Practices”, Amelia.AI, 2018: https://amelia.ai/blog/dws-2018-max-tegmark-ai-systems/

[ii] Bartoletti, “An Artificial Revolution: On Power, Politics and AI”, 2020, Indigo Publisher.

[iii] BBC, “A-levels and GCSEs: How did the exam algorithm work?”, 20th August 2020: https://www.bbc.co.uk/news/explainers-53807730

[iv] Euractiv, “EU Council presidency pitches significant changes to AI Act proposal”, 2021:  https://www.euractiv.com/section/digital/news/eu-council-presidency-pitches-significant-changes-to-ai-act-proposal/

[v] Deloitte, “The new EU AI rules: a new regulatory paradigm for innovation”, Webinar, 2021: https://www2.deloitte.com/nl/nl/pages/risk/articles/the-new-eu-ai-rules-a-new-regulatory-paradigm-for-innovation.html

[vi] ICO, “An overview of the Auditing Framework for Artificial Intelligence and its core components”, 2019: https://boltstatistics.com/ai-blog-an-overview-of-the-auditing-framework-for-artificial-intelligence-and-its-core-components/

[vii] ICO, “ICO consultation on the role of data ethics in complying with the GDPR”, 2020: https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/ico-consultation-on-the-role-of-data-ethics-in-complying-with-the-gdpr/

[viii] OECD Principles on AI: https://www.oecd.org/digital/artificial-intelligence/

[ix] G20 (2019), G20 AI Principles: https://www.g20-insights.org/wp-content/uploads/2019/07/G20-Japan-AI-Principles.pdf

[x] UNESCO, “Recommendation on the Ethics of Artificial Intelligence”, 2020: https://en.unesco.org/artificial-intelligence/ethics#recommendation

[xi] Hawking (2017), quoted by Tegmark M., Life 3.0: Being Human in the Age of Artificial Intelligence, 2017: https://www.google.co.uk/books/edition/Life_3_0/3_otDwAAQBAJ?hl=en&gbpv=0

Key Contacts

Lucia Lucchini

Senior Manager

Lucia is a Senior Manager in our Cyber, Data and Digital practice within Deloitte Risk Advisory. Her experience ranges from privacy and data protection to the intersection between privacy and ethics in new technologies, specifically AI. Lucia focuses on the changing regulatory landscape surrounding new technologies, with particular attention to AI governance and policy. Lucia is also part of the Research & Innovation team, specialising in cyber-related research.

Margaux Girard

Consultant

Margaux is a Consultant in the Cyber Risk practice, with a background in French Politics. Her interests lie in the social and political aspects of technology, particularly in how technology changes the way we interact with one another, in both teams and wider society. She helps clients understand complex Cyber topics and their relation to internal or national policies, as well as advising on the implementation of technologies in a secure and ethical way.