AI in Healthcare: opportunity or threat?

Reaping the benefits while embedding ethical values

Artificial Intelligence has the potential to revolutionise healthcare and wider dimensions of wellness, but the ethical implications are becoming increasingly complex. In a domain where decisions are sensitive and directly affect people's lives, what can and should be left to self-learning algorithms?

When it comes to ethics, there is no sector more mature than healthcare. In fact, its ethical history goes as far back as the ancient Greeks, when Hippocrates composed the oath still sworn by physicians around the world to this day. The main points — do no harm, respect the confidentiality of your patients — are as relevant as ever in the digital age, and not just for doctors, but for a far wider range of healthcare sector players, from producers of pharmaceuticals and medical devices to wellness app developers.

The promises and challenges of AI in healthcare

AI can significantly speed up administrative processes, and it’s a fantastic tool for researchers in making sense of vast amounts of data. The most spectacular impact is likely to be in sophisticated clinical applications such as diagnosis and care delivery. AI makes it possible to create and execute individualised treatment plans based on a complex mix of datasets (including a patient’s health history, lifestyle, genomic make-up, and personal preferences). However, there are several ethical challenges. Let’s look at a few:

Consent
To feed data-hungry AI algorithms, online health platforms use a variety of data-gathering practices. People leave data traces everywhere, which makes genuine consent and awareness of data collection difficult to achieve.

Representative data
Not all people are proportionally represented in the datasets used to train AI tools. These sets typically show a predominance of white males, with women and people from other backgrounds under-represented, so a model's performance may be worst for exactly those groups.
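As a minimal illustration of how a team might surface such an imbalance, the sketch below compares the demographic composition of a hypothetical training set against a reference population. The column names, the 50/50 reference shares, and the 10% tolerance are all assumptions made for the example, not a prescribed methodology.

```python
import pandas as pd

# Hypothetical training-set demographics; column names are illustrative.
records = pd.DataFrame({
    "patient_id": range(1, 9),
    "sex": ["M", "M", "M", "M", "M", "F", "F", "M"],
})

# Assumed reference shares in the target population (roughly 50/50 here).
reference = {"M": 0.50, "F": 0.50}

observed = records["sex"].value_counts(normalize=True)
for group, expected in reference.items():
    share = float(observed.get(group, 0.0))
    if abs(share - expected) > 0.10:  # tolerance chosen arbitrarily for the example
        print(f"{group}: {share:.0%} of training data vs {expected:.0%} "
              "in the population - investigate before training")
```

In practice the reference shares would come from the population the tool is meant to serve, and the checks would cover more dimensions than sex alone.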

Confidentiality and ownership
AI systems in healthcare process large amounts of sensitive data that can be of interest to other parties, and the confidentiality between doctor and patient may be breached. Patients are the owners of their data and must have a say in how it is shared with parties such as insurers and pharmaceutical companies. We must take a close look at the potential for misuse and at the incentives driving those other parties (e.g. selling patients' data for profit). Patients should also retain the right to withdraw their data once it has been used in aggregated form.

Judgement and decision-making
AI tools make decisions that seem to come out of a black box, but in fact they follow from the choice of data and the programming of the algorithm. This is where moral choices are potentially made: by developers. Consequently, there must always be a way to trace decisions back to their source. And there must be a real person who ultimately makes the calls on questions like who to treat first, or how long to continue treatment. A real person who takes responsibility if something goes wrong.
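One way to make that traceability concrete, sketched below under assumed requirements, is an audit record that ties each algorithmic recommendation to the exact model version, a fingerprint of the input data, and the clinician who signed off on the final decision. All names and fields here (DecisionRecord, log_decision, "triage-model-1.4", "dr_jansen") are hypothetical.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Audit entry linking an AI recommendation to accountable humans."""
    model_version: str      # exact model build that produced the output
    input_fingerprint: str  # hash of the input data, not the data itself
    recommendation: str     # what the tool suggested
    approved_by: str        # clinician who made the final call
    timestamp: str

def log_decision(model_version: str, patient_features: dict,
                 recommendation: str, clinician: str) -> DecisionRecord:
    # Hash the inputs so the decision can be traced later without the
    # audit log itself becoming another store of sensitive raw data.
    fingerprint = hashlib.sha256(
        json.dumps(patient_features, sort_keys=True).encode()
    ).hexdigest()
    return DecisionRecord(
        model_version=model_version,
        input_fingerprint=fingerprint,
        recommendation=recommendation,
        approved_by=clinician,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

record = log_decision("triage-model-1.4", {"age": 67, "spo2": 91},
                      "prioritise for review", clinician="dr_jansen")
print(asdict(record))
```

The key design choice is that no record exists without an approved_by field: the tool recommends, but a named person decides.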

Efficiency versus valuing professional judgement
AI is likely to change the work practices and education of health professionals. They no longer need to commit so much knowledge to memory, but they do need enough medical knowledge of their own to judge the AI tool's output, and the confidence to override it if necessary. Meanwhile, the efficiency gains need to be closely examined: unpacking and accounting for AI decisions costs time and money, too.

Monitoring
In addition to reducing the professional's workload, an AI system can also be used for surveillance, collecting data to evaluate the professional's performance. This could erode trust and job satisfaction.

Regulatory developments

What the above examples make clear is that all AI applications need to be used with care to avoid infringement of privacy and human rights, and this is especially important in healthcare settings. The technology should be resilient and secure, and it’s crucial to have a keen eye for digital ethics during the entire life cycle of development and deployment. So how do we reap the benefits while staying on the right side of the law and in line with ethical principles? For one thing, the relevant laws and regulations need to be updated and enforced.

AI applications in the medical domain are already coming under increasing regulatory pressure. In the EU, the draft AI regulation rightly classifies healthcare as a high-risk area, and regulatory frameworks are being updated to cover the various uses of AI in healthcare settings. Under the EU Medical Device Regulation, which entered into application in May 2021, software applications, including AI algorithms, qualify as medical devices if they are used for a specific medical purpose. In the Netherlands, the Electronic Exchange of Healthcare Data Act will soon come into force, with consequences for all healthcare professionals and organisations, as well as their IT providers.

Where to start?

The first step on your AI journey is a rather mundane one: fix the plumbing. In other words, standardise the data to be used and ensure it contains no mistakes, omissions or unwarranted biases. Data needs to be labelled according to agreed sector-wide definitions so that it can be aggregated and you can really start connecting the dots. This is quite a tall order, but there's no need to get all the data cleaned up at once. Start small, with a set of data you know is of good quality, to tackle a well-defined problem; a sketch of such a validation step follows below. In the Netherlands, medical professionals themselves have written a Guideline for AI in Healthcare, including an online course, to help healthcare players recognise trustworthy AI.
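By way of illustration, the sketch below checks a small, hypothetical lab dataset against a schema standing in for the "agreed sector-wide definitions" mentioned above. The column names, units, and plausible-value ranges are assumptions for the example.

```python
import pandas as pd

# Stand-in for agreed sector-wide definitions: required fields with
# units in the column name and plausible ranges (illustrative only).
SCHEMA = {
    "glucose_mmol_l": {"min": 2.0, "max": 30.0},
    "weight_kg": {"min": 1.0, "max": 350.0},
}

def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Flag omissions and out-of-range values before any model sees the data."""
    issues = []
    for column, limits in SCHEMA.items():
        if column not in df.columns:
            issues.append({"column": column, "problem": "missing entirely"})
            continue
        missing = df[column].isna().sum()
        if missing:
            issues.append({"column": column,
                           "problem": f"{missing} missing value(s)"})
        out_of_range = df[(df[column] < limits["min"]) |
                          (df[column] > limits["max"])]
        if len(out_of_range):
            issues.append({"column": column,
                           "problem": f"{len(out_of_range)} value(s) outside "
                                      f"[{limits['min']}, {limits['max']}]"})
    return pd.DataFrame(issues)

data = pd.DataFrame({"glucose_mmol_l": [5.4, None, 55.0],
                     "weight_kg": [70, 82, 64]})
print(validate(data))
```

A report like this makes omissions and suspect values visible early, so they can be fixed before any model is trained on the data.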

The start of your AI journey is also the right time to begin actively involving stakeholders. It's important to look first at the 'human flow' in the organisation and only then link it to the 'technical flow'. The focus must be not on efficiency but on the needs and values of healthcare professionals and patients: on what works for them. After all, the responsibility for healthcare remains theirs. While AI's strength lies in processing mountains of data in a short time, medical professionals can take a more holistic view, weighing less quantifiable factors. AI is a tool to augment human decision-making, not to replace it.

More information?

For more information, please contact Hilary Richters (Digital Ethics) or Paul van Geffen (Medical Devices) via the contact details below.
