Beyond the Crisis - Ethics of Data and AI

The EC Whitepaper on AI “unpacked” through the lens of ethics

On 11 May 2020, Deloitte held a webinar for 150 participants featuring keynote speaker Dr Aimee van Wynsberghe, Associate Professor of Ethics and Technology at TU Delft. She "unpacked" the European Commission's 27-page whitepaper on AI through the lens of ethics and sparred on the subject with Deloitte cyber risk partner Sir Rob Wainwright, former Executive Director of Europol.

Europe wants to lead in AI

Van Wynsberghe emphasised that the Commission is by no means shying away from AI. It wants Europe to be globally competitive in this discipline, but also to strike a balance between innovation and the protection of EU values like privacy, democracy and human dignity, resulting in a European-style "trustworthy AI" brand. The Commission sees potential for AI as an accelerator of progress on the Sustainable Development Goals and an enabler of the Green Deal. It wants member states to pool their efforts, thus avoiding fragmentation. It outlines what a much-needed EU-wide AI regulation scheme might look like. And it actively invites dialogue with the private and public sectors.

EC White Paper on Artificial Intelligence (February 2020)


The paper, Van Wynsberghe explained, proposes a policy framework aimed at building an "ecosystem of excellence". How do we achieve and sustain Europe's excellence in AI? The Commission describes measures to mobilise resources, accelerate adoption by the public sector and SMEs, build skills, and more. In parallel, the paper outlines a regulatory framework aimed at building an "ecosystem of trust". How do we give citizens confidence in AI-based products and systems? What rules are needed to protect fundamental rights and consumer rights?


In a risk-based approach, the Commission proposes to label AI applications as either high-risk or not high-risk. Compliance with the regulatory framework will be mandatory only for the high-risk category. High risk is identified on the basis of the sector involved (critical sectors like health, transport, police) or the way AI is used (legal implications or risk of death, injury or damage). Examples of high-risk AI applications include recruitment (in any sector) and biometric identification. Note that the criteria are still under development: an application outside the high-risk category one year could be labelled high-risk the next.

Record-keeping is key

The details of the regulatory framework are only starting to take shape, Van Wynsberghe explained, but keeping records and data will be key. This is true for high-risk applications, for which organisations will need to demonstrate compliance, but those outside the high-risk category would also do well to document their policies. These applications will fall under a voluntary labelling scheme, which is also an important building block for the ecosystem of trust. Participating in such a scheme is attractive for organisations, Van Wynsberghe believes. Not only does it boost consumer trust in their brand, it can prove to be useful groundwork should they ever come into scope of stricter AI regulation.

More than just extra red tape

Sir Rob Wainwright said he was worried that, to corporates, the whole issue of AI labelling would feel like just another administrative burden. They need convincing that it is worth the effort and presents opportunities. In his view, "trustworthy AI" fits in with the broader concept of stakeholder capitalism, which involves building relationships of trust with all your stakeholders.

Taking ownership

Van Wynsberghe signalled quite a few challenges in regulating AI, an important one being: who is responsible for what? If this is not clarified, nobody takes ownership. A poll among the audience revealed that 86% had delegated responsibility for AI ethics within their organisation: 31% to the risk & compliance team, 19% to board level and 9% to product owners. In Wainwright's view, since this touches on brand reputation, the ethics of AI should be owned at the top.

Golden thread

In conclusion Van Wynsberghe explained how she sees ethics as the golden thread through the AI conversation — as a continual process of deliberation helping us translate our EU values into regulations and policy, then helping us implement these within our organisation, and finally helping us identify the gaps between our day-to-day practice and the values we started out with. Through ethics, we understand what we mean by “trustworthy AI”.

Aimee van Wynsberghe

Aimee is an Associate Professor of Ethics and Technology at TU Delft. As a Deloitte Edge Fellow she aims to share her expertise on responsible innovation, ethics of technology, digital ethics, robot ethics and the ethics of AI with Deloitte and its partners.

Rob Wainwright

Sir Rob is a senior partner at Deloitte, co-leading the NWE region cyber practice. Since November 2018 he has been a member of the Board of Directors of the Global Cyber Alliance.