Perspectives

Generative AI regulations in life sciences

How global clarity on the AI regulatory environment can help accelerate the AI journey

While generative artificial intelligence (gen AI) holds much promise, the global AI regulatory environment could pose challenges for the life sciences industry. Clarity in the regulatory environment can help accelerate the AI journey and adoption across regions. Explore the regulatory environment of AI in six geographical jurisdictions.

Gen AI in life sciences and the potential role of global guardrails

Used boldly, AI and gen AI can revolutionize work production in the life sciences industry and serve as “a catalyst to a radical business transformation.”1

As discussed in a recent Deloitte article,2 “Can life sciences companies unlock the full value of gen AI?,” the use of a string-of-pearls strategy—stringing multiple use cases and other digital tools together (rather than using individual gen AI use cases)—could transform entire processes.

We recognize that a global set of regulations is likely not feasible. However, we believe that establishing global guardrails based on countries’ regulatory approaches could be beneficial. A string-of-pearls approach can be used effectively only if the multiple technologies and geographies involved are aligned to a harmonized regulatory environment.

Regulation of AI by region

Deloitte’s Global Regulatory Intelligence Team explored the regulatory environment of AI in six geographical jurisdictions, as well as the work of international standards development organizations such as the International Organization for Standardization (ISO).

While a high number of cross-industry AI standards are available3 or under development at the global level, ISO Technical Committee (ISO/TC) 215 was tasked with setting up a road map to steer the creation of AI life sciences-specific standards at the ISO level. Resolution 2019-106 of ISO/TC 215 created a road map for future directions in developing standards for AI health applications and provided a set of 28 recommendations. More technical staff will utilize International Electrotechnical Commission (IEC) standards through the Software Network and Artificial Intelligence Advisory Group (SNAIG) of IEC TC 62,4 which covers AI and connected topics.

Across the six geographies observed—the European Union, the United Kingdom, the United States, China, Japan, and India—governments and health authorities have stated that they strive for AI uptake to be human-centric and trustworthy and to protect health, safety, fundamental rights, democracy and the rule of law, and the environment from harmful effects. Despite these common stated goals, however, a variety of approaches to AI regulation were found.


Global moves

At the 18th G20 Summit in New Delhi, which took place September 9–10, 2023, a G20 Leaders’ Declaration was adopted in which leaders committed to “work together to promote international cooperation and further discussions on international governance for AI.”5 On October 30, 2023, leaders of the G7 announced international guiding principles (known as the Hiroshima AI Process6) and a code of conduct for companies developing advanced AI systems, aimed at fostering international cooperation in the realm of AI.7 The ministers intend to collaborate with prominent international organizations, among them the Global Partnership on Artificial Intelligence (GPAI),8 which met in India in December 2023 to start the process. Once those guiding principles are established, jurisdictions may shift toward similar AI regulations and guiding principles.

On December 6, 2023, following their virtual meeting, G7 leaders issued a statement on many global challenges, AI regulation being one of them: “We renew our commitment to advancing international discussions on inclusive artificial intelligence (AI) governance and interoperability between AI governance frameworks, while we recognize that approaches and policy instruments to achieve the common vision and goal of trustworthy AI may vary across G7 members, to achieve our common vision and goal of safe, secure, and trustworthy AI, in line with our shared democratic values.”9

Clarity in the global AI regulatory environment could be beneficial

Because many life sciences companies operate globally and would need to adhere to a range of regulations across jurisdictions, moving forward with a string-of-pearls strategy may require clarity and standardization of those regulations. Clarity about the trajectory of the global AI regulatory environment and framework could help global life sciences companies plan for the future in terms of products, systems, and process enhancements using AI, and could ultimately enable them to maintain or enhance their competitive edge. It may also benefit consumers, as AI-enabled enhancements can bring innovative new life sciences products and services to market that can improve consumers’ lives and health.

Endnotes:

1. Vicky Levy and Pete Lyons, “Can life sciences companies unlock the full value of GenAI?,” Health Forward Blog, Deloitte, October 3, 2023.
2. Ibid.
3. World Standards Corporation, “Landscape of ISO/IEC/ITU-T existing fields,” September 24, 2022.
4. International Electrotechnical Commission (IEC), “TC 62: Medical equipment, software, and systems – Work program,” accessed February 1, 2024.
5. G20 Summit, G20 New Delhi Leaders’ Declaration, New Delhi, India, September 9–10, 2023.
6. Organisation for Economic Co-operation and Development (OECD), G7 Hiroshima Process on generative artificial intelligence (AI): Towards a G7 common understanding on generative AI (Paris: OECD Publishing, 2023).
7. The White House, “G7 Leaders’ Statement on the Hiroshima AI Process,” October 30, 2023.
8. Global Partnership on Artificial Intelligence (GPAI) homepage, accessed February 1, 2024.
9. UK Prime Minister Rishi Sunak, “G7 Leaders issued a statement following their virtual meeting on 6 December 2023,” press release, UK.gov, December 6, 2023; The White House, “G7 Leaders’ Summit,” December 6, 2023.

Contacts

Oliver Steck
Principal
Deloitte & Touche LLP
osteck@deloitte.com

Malka Fraiman
Specialist Master
Deloitte & Touche LLP
mfraiman@deloitte.com
