
The hidden power of AI 

Why ethics is an important consideration when building AI algorithms 

Ivana Bartoletti is Technical Director – Privacy & Ethics at Deloitte, Visiting Policy Fellow at the University of Oxford and author of An Artificial Revolution: on Power, Politics and AI.

Artificial Intelligence (AI) is progressing at speed, with innovations and new opportunities emerging every single day. This is especially encouraging in fields such as medicine, where we can now discover drugs and detect diseases much more easily, and earlier, than before.

But when we talk about AI, we are talking about much more than technology. It is often said that AI is about power: the power to transform work as we know it, and the power to reshape geopolitical relationships as countries engage in an AI race coupled with a digital sovereignty agenda.

But there is another, more hidden power of AI: the capacity to scale up existing inequalities by hardwiring them into the products themselves. This is a highly debated area, and it has led to a proliferation of tools and systems designed to mitigate the risks of algorithmic unfairness.

AI products are fed with data, and data represents society as it is. To an extent, data is a picture of the structure of the world we inhabit and have so far built, with its hierarchies, history and power structures. There is nothing neutral about data. Its collection and classification need to be handled with care to avoid automating current inequalities by coding them uncritically into the systems we produce.

Recent events have brought this to life for the general public. Consider the debate over the A level algorithm, which downgraded thousands of students on the basis of their school’s poor historical performance rather than their individual performance. Citizens and consumers are becoming more familiar with the immense power of technology, and with the fact that technology can no longer be viewed as separate from the social dynamics that underpin both its creation and its deployment. This is often referred to as ‘AI ethics’, and both the general public and policy-makers are becoming more aware of the importance of minimising the potential harm that AI could cause.

If we want to harness the value of data and technologies, we must not shy away from questions of ethics, freedom and human dignity, values that the unfettered use of AI can undermine. This is the reason why I wrote ‘An Artificial Revolution: on Power, Politics and AI’. I do not dislike tech, quite the opposite: I like it so much that I want it to benefit all. But to achieve this, we need to understand what is at stake, what we risk losing, and what governance and tools we need to establish to ensure we build technology that improves our lives and is driven by our human values.
