

Are you ready for the age of Generative AI?

During a hearing at the United States Senate on 16 May 2023, Sam Altman, Chief Executive Officer of OpenAI, urged governments to regulate Artificial Intelligence (AI). The ‘father of ChatGPT’ acknowledged the risks posed by AI – and by Generative AI in particular. The world does not seem ready for such a powerful technology, but the potential of AI to drive economic growth is so high that pausing its development (as a number of experts have proposed), or declining to embrace the opportunities it creates out of fear of the risks, would carry a high cost: the cost of inaction. So is your organisation ready for the age of Generative AI?

The potential benefits of Generative AI are huge. However, they come with substantial risks and raise a number of legal, ethical and regulatory concerns. For instance, the capacity of Generative AI to produce highly compelling content creates a risk of misinformation, as well as a risk of fraud for organisations that rely on inputs from third parties to support their decision-making. Generative AI also raises concerns over compliance with data protection laws, given the data privacy and security risks involved: providers may use sensitive input data to improve their models or share those data with third parties, and individual rights such as deletion, information, consent, and withdrawal are more difficult to enforce. There is also the risk of misuse. Generative AI systems can be used for purposes other than those originally intended – such as the creation of sophisticated phishing campaigns and malware, which also poses cybersecurity risks – even when guardrails are in place.

Whether users can claim copyright in content generated by a Generative AI system trained on copyrighted data is the subject of ongoing lawsuits worldwide. And whether a Generative AI system should be recognised as the author of scientific publications is an ethical issue, as it cannot be held responsible for the quality and integrity of the content it generates. To top it off, organisations that place AI systems on the market or put them into service will soon have to comply with regulations that aim to control these risks, as Sam Altman himself has called for.

The European Union (EU) Artificial Intelligence Act is currently under review by the European Parliament and is expected to be approved in 2024. It takes a risk-based approach to regulating AI, setting minimum requirements to address the risks and concerns that AI raises.

Generative AI poses specific regulatory challenges. High-risk AI systems – defined as those that pose a risk of harm to the health and safety of individuals, or an adverse impact on their fundamental rights – will be subject to strict compliance requirements: continuous risk management; stringent data governance and management; comprehensive technical documentation and record keeping; assurance of human oversight; transparency; and high levels of accuracy, robustness, consistency, and cybersecurity. Many Generative AI systems could well be classified as high-risk, and bringing them into compliance with the requirements of the EU AI Act will therefore be a challenge.

As an example, consider the requirement for transparency, which demands that the operation of an AI system be sufficiently transparent to enable users to interpret and use its output appropriately. A recent study investigating the consistency of ChatGPT in giving moral advice concluded that it was inconsistent and lacked a moral standpoint – and, more importantly, that it influences users’ moral judgement more than they perceive, even though they know they are interacting with an AI system [1]. ChatGPT thus has the potential to corrupt users’ judgement, and transparency alone turns out to be insufficient to enable the responsible use of AI.

What is sufficient, then? We must wait to see how the EU regulates the specific problems posed by Generative AI. In the meantime, organisations using Generative AI must distinguish between hyped and realistic applications and ensure responsible deployment. They should also prepare for compliance with the EU AI Act, as non-compliance is likely to expose them to large regulatory fines (larger than GDPR fines in some cases) as well as reputational damage.

 

This article was written by Aczel Garcia Rios, PhD, and Rebeka Gadzo.

References

[1] S. Krügel, A. Ostermaier, and M. Uhl, “ChatGPT’s inconsistent moral advice influences users’ judgment”, Scientific Reports 13, 4569 (2023). DOI: 10.1038/s41598-023-31341-0
