Understanding the proposed EU AI Act
Artificial intelligence (AI) is becoming an instrumental part of human society, revolutionising many aspects of life. AI adoption is now widespread and no longer limited to high-tech or specialised industries such as banking. Alongside the many benefits AI creates, there remains a risk that it could be used in harmful ways, some of them manipulative and unlawful, aiming to deceive and defraud people.
Inevitably, questions around risks and ethics come to light. To fill this void, address the rising concerns and harness the power of AI, several regulatory initiatives have emerged, a prominent one being the proposed EU Artificial Intelligence Act1. It is the culmination of years of continuous discussions and consultations among regulators, industry participants, scientists and the wider public. Its scope is vast, and it will have a seismic impact on the many industries and entities it affects.
This blog introduces the key considerations within the EU AI Act and sets the scene for a series of subsequent blogs exploring the main principles of the proposal.
De-coding the act - creating a ‘trustworthy’ AI environment
Prior to the EU releasing the proposal for an AI Act, an independent high-level expert group on AI set up by the European Commission produced a study on the ethics guidelines for trustworthy AI2. The principles outlined in this report are based on three fundamental cornerstones – lawfulness, ethics, and robustness. By endorsing this framework, the group aimed to facilitate an environment of transparency, inclusion, and fairness where technology is not used to manipulate and harm people and societies.
Risk Tiers and Main Pillars
The proposed EU AI Act draws on the guidelines for trustworthy AI to frame the requirements for different types of AI systems based on their risk profiles – namely, prohibited, high-risk and low-risk AI systems. Some uses of AI, such as social scoring and dark-pattern AI, are prohibited unless they are used for a justifiable reason by public authorities, such as solving a criminal case3.
The definition of High-risk AI is outlined in Annex III of the Act and includes systems intended as a safety component of products, those operating in areas such as biometric identification, law enforcement, employment, migration and other critical areas - such as evaluating the creditworthiness of natural persons.
Providers of such high-risk AI systems must comply with the main pillars of the proposed act:
- data and data governance4
- technical documentation and record-keeping5
- transparency and provision of information to users6
- human oversight7
- accuracy, robustness and cybersecurity8
Conformity Assessments and Codes of Conduct
In simple terms, the proposal translates into firms being required to enhance their risk management systems and comply with each of the above-mentioned principles. Entities that are engaged in developing, deploying and using AI systems, especially high-risk ones, should have clearly defined methodologies and practices in place. Not only should firms produce detailed technical documentation, but they should also undergo conformity assessments of their internal controls to prove compliance. This will be voluntary for the providers of low-risk AI systems; however, the EU Commission will encourage and facilitate the drawing up of codes of conduct that are intended to cover the above-mentioned requirements.
Like other European Union legislation, the principle of extraterritoriality adds another dimension of complexity – the AI Act will cover not only EU companies but also entities from abroad that provide AI services to EU citizens. Undoubtedly, this increases the prominence of the Act and serves as a reminder for companies across the globe that they must enhance their AI practices and adhere to the EU rules, notably as the Act also brings the prospect of significant fines for non-compliance.
The scene is set, and this proposal is a clear indication of the future direction of regulatory scrutiny. Authorities are applying greater pressure on companies over their AI systems, and it is essential that AI providers and users have robust risk management frameworks, comprehensive controls and validation methodologies in place. Firms must re-examine and, where necessary, adapt and enhance their internal practices to meet the new requirements of the EU AI Act. This will be a step towards harnessing the power of AI in a positive way and, importantly, will help manage risk.
At Deloitte, we appreciate the difficulties of navigating through the proposed EU AI Act. Considering the complex global regulatory environment, we have developed a comprehensive approach to validate your algorithms and AI systems to safeguard compliance and improve internal controls. We invite all market participants who wish to discuss the AI Act, its implications, and our Algorithm and AI approach, to get in touch.
3 Article 5 of the proposal
4 Article 10 of the proposal
5 Articles 11 and 12 of the proposal
6 Article 13 of the proposal
7 Article 14 of the proposal
8 Article 15 of the proposal
1 Proposal and Annex to the Proposal for a Regulation of the European Parliament on Artificial Intelligence: EUR-Lex - 52021PC0206 - EN - EUR-Lex (europa.eu)
2 Independent High-Level Expert Group on Artificial Intelligence – Ethics Guidelines for Trustworthy AI: Ethics Guidelines for AI (europa.eu)
Mark is a Partner in our Regulatory Assurance team. He is our AI Assurance, Internet Regulation and Global Algorithm Assurance Leader, with 20 years of experience across financial services audit and assurance, regulatory compliance, regulatory investigations and disputes. He has led the development of our assurance practice as it relates to helping firms gain confidence over their algorithmic and AI systems and processes. He has a particular sub-sector specialism in algorithmic trading, with varied experience supporting firms in enhancing their governance and control environments, as well as investigating and validating such systems. More recently he has supported and led our work across a number of emerging AI assurance engagements.
Barry is a Director in our Banking & Capital Markets Audit and Assurance Group in London and has over 15 years' experience spread across industry and financial services. Barry has led several large projects specifically focused on enhancing our clients' AI and algorithm control frameworks and assessing their design and operating effectiveness to ensure full regulatory compliance.