Using AI for good on Government’s future frontiers

The AI for Good summit tackled topics of regulation, inclusion, and bias, while laying out a strong case for why AI is integral to the UN’s sustainable development goals

Today’s guests:

  • Gustav Jeppesen, vice chair of Deloitte Denmark and global lead of the Central Government practice
  • Ines da Costa Ramos, director at Deloitte Belgium and leader in the AI practice

Audio from the UN’s AI for Good Summit features excerpts from

  • António Guterres, Secretary-General of the United Nations
  • Doreen Bogdan-Martin, Secretary-General of the International Telecommunication Union (ITU)
  • Costi Perricos, Global Generative AI Leader for Deloitte UK
  • Tomas Lamanauskas, Deputy Secretary-General of the ITU
  • Thomas Schneider, Director of International Affairs at the Swiss Federal Office of Communications

AI has been touted as a boon for business and a benefit for humanity. To help ensure that benefit is available worldwide, the International Telecommunication Union (ITU) of the United Nations created the AI for Good initiative, which brings together prominent thinkers and government officials to consider how AI can help achieve the United Nations’ sustainable development goals.

How governments could—or should—regulate AI was a key topic at the fifth AI for Good summit, held in Geneva in May. Delegates from around the globe came together to address what the ITU’s secretary-general, Doreen Bogdan-Martin, described as the conundrum at the heart of the issue: “How do we govern technologies if we don’t yet know their full potential?”

In this episode of Government’s Future Frontiers, we speak with Ines da Costa Ramos, director at Deloitte Belgium and leader of the AI practice, and Gustav Jeppesen, vice chair of Deloitte Denmark and global lead of the Central Government practice. They discuss some of the ramifications and real-world applications of the issues raised at the summit.

The issues around regulation are complex, and there is no single right answer. Similarly, the promise and potential complications with AI are evolving. As Bogdan-Martin reminded summit goers, we’ve been here before: “It was 20 years ago the internet was met with a sort of similar mix of shock, awe, and skepticism. It raised the same questions about how our economies, our societies, our environment would transform for better and for worse, and we’re still grappling with those questions two decades later.”

Tanya Ott: The possibilities inherent in AI have prompted some stirring calls to action. Like this:

Doreen Bogdan-Martin: We are the AI generation, so let’s meet the moment. Let’s write the next chapter of the great story of humanity and technology. Let’s remember that the future starts not with algorithms, but with us, with all of you right here, in our brain, in our brain that is the most complex, powerful, creative computer that the world has ever known.

Ott: That was Doreen Bogdan-Martin, secretary-general of the International Telecommunication Union, a United Nations agency that deals with information and communications technologies.

And then there’s this, from Costi Perricos, Deloitte UK’s Global Generative AI leader.

Costi Perricos: I don’t see ourselves sitting at an inflection point in just technology. I see ourselves sitting at an inflection point in humanity. And if we learn how to use AI responsibly and for good, we as individuals, as society and as nations can propel humanity forward.

Ott: And finally this, from UN Secretary-General António Guterres.

António Guterres: Artificial intelligence is changing our world and our lives, and it can turbocharge sustainable development, from bringing education and health care to remote areas to helping farmers boost their crops; from designing sustainable housing and transportation to providing early warnings for natural disasters. AI could be a game changer for the SDGs [sustainable development goals], but transforming its potential into reality requires AI that reduces bias, misinformation, and security threats instead of aggravating them …

Ott: Those words all come from the AI for Good Summit, held in Geneva, back in May. The global summit is aimed at promoting AI to advance health, climate, gender, inclusive prosperity, sustainable infrastructure, and beyond. The focus is on ways that artificial intelligence can be a tool when striving to meet the United Nations’ Sustainable Development Goals.

The initiative is in action year-round, with events and seminars worldwide. In fact, AI for Good Impact India is launching right about when we release this episode.

I’m your host, Tanya Ott. In this episode of Government’s Future Frontiers, we’re looking back at the AI for Good summit and teasing out some of the implications and real-world responses to the issues raised. I will be joined by two guests today who’ll guide us through the complex world of governance of artificial intelligence. It’s a world that involves both technological issues and ethical questions.

Gustav Jeppesen is the vice chair of Deloitte Denmark and global lead in [the] Central Government [practice]. Ines da Costa Ramos is a director at Deloitte Belgium and leader in the AI practice.

I started by asking Ines about the sort of good that AI can do.

Ines da Costa Ramos: AI brings really a lot of hope, and there are different sectors where AI can make a difference. The capability of collecting and analyzing enormous amounts of data, to provide guidance to act in real time—it’s something that really brings a lot of added value to society.

In the health domain, [AI is] able to predict and diagnose diseases in a more accurate manner. [Doctors] can provide specialized treatments to people with a specific medication that helps them … the added value that AI is bringing to health is enormous.

There are also other layers, for example, like climate change, with the possibility of predicting weather patterns, of having models to understand and even to anticipate catastrophes, that can help you make informed, conscious decisions. Even in disaster responses, not only on the prediction but when they are happening.

So, it’s really an exciting moment that we are living in, and all the benefits that it can bring.

Ott: These domains were on display at the summit. Medical breakthroughs included a robot exoskeleton that allowed a paralyzed woman to walk across the stage, and a device that used generative AI and a neural interface to allow a man with ALS [Amyotrophic lateral sclerosis] to communicate in real time.

Multiple speakers covered the risks and potential benefits of AI to help meet the United Nations’ climate goals. For example, here’s Tomas Lamanauskas, deputy secretary-general of the ITU.

Tomas Lamanauskas: Our AI use makes a large and growing carbon footprint. These technologies are hungry for electricity, more than they can get from renewable power supply. The International Energy Agency, or IEA, tells us that two years from now, data centers could consume twice as much energy as Japan does.

AI can also bolster eco-solutions that help protect biodiversity. For example, AI technologies can detect and analyze subtle ecosystem changes and help plan conservation efforts. AI solutions boost energy efficiency and cut waste and emissions across other sectors. Recent research actually suggests that AI can help mitigate between 5% and 10% of global greenhouse gas emissions by 2030, equivalent to the total annual emissions of the European Union. The potential for AI-based climate solutions is crystal clear, but it won’t be realized until we tackle the risks to our planet presented by rapid AI growth head-on.

Ott: Those risks—and how to address them—were another major component of the summit. And this year, for the first time, it dedicated an entire day to discussing the importance of, and the philosophies around, regulating AI. Here’s the International Telecommunication Union secretary-general, Doreen Bogdan-Martin, addressing the summit.

Bogdan-Martin: What’s new is this much sharper, stronger focus on governance because it's not the benefits, it’s the risks of artificial intelligence that keep us all awake at night.

Ott: Risks like threats to privacy, built-in bias, lack of transparency, and the encroaching fears of cyberattacks, misinformation, and deepfakes. It’s a sobering list, and it’s likely to get longer. So is the list of benefits. Secretary-General Bogdan-Martin summed it up this way:

Bogdan-Martin: At the heart of all of this is a conundrum: How do we govern technologies if we don’t yet know their full potential?

Ott: There’s no single answer to that question. There are multiple approaches to regulating new technology, and walking the tightrope between fostering innovation and minimizing threats can be tricky.

One model for walking that tightrope came from Ambassador Thomas Schneider, director of International Affairs at the Swiss Federal Office of Communications.

Thomas Schneider: There are many people that ask for one new or established institution to solve all the problems, one law to be created that will solve all the problems. But if you look, for instance, at how engines are regulated, we don’t have one UN convention on engines … we have hundreds and thousands of technical, legal, and sociocultural norms that regulate mostly not the engine itself but the vehicles and the machines that are using engines. We regulate them in different contexts: We regulate the people that are driving the engines, we regulate the infrastructure, we regulate or protect people affected—so it’s all context-based. It’s not the engine; it’s the function of the engine, the effect of the tool …

The same logic should be applied to AI. It should be context-based … It should be about risks and impacts, not the technology itself, and that is specific to every culture in many ways to economic incentives and so on. At the same time, we need a common understanding about what we try to achieve. We need to have a global discussion about how we deal with risks, what the risks are, what are we trying to protect, and so we need a coherent approach. Not necessarily one institution, but hundreds and thousands of pieces that work together.

Ott: Some regulation is already emerging. For example, earlier this year, the European Union passed the AI Act, Europe’s first legal framework for AI. It aims to manage risks and establish Europe as a global leader in AI tech.

Here’s Ines da Costa Ramos to tell us more.

Ramos: This is one of the most ambitious regulatory frameworks related to AI. It’s really a pioneering effort that brings together the legal part, together with transparency and safety and accountability, together with the technology itself.

One of the bases of the act is the classification of AI systems by levels of risk. And there are three main risk [categories] considered for the AI systems.

The first one, it’s … unacceptable risks. It means all the AI systems that can provoke enormous, dangerous damage to society can be classified as an unacceptable risk. And, of course, there are really strict measures that need to be put in place to oversee these AI systems.

Ott: These unacceptable risks include things like social scoring and real-time biometric identification, for example.1

Ramos: The second level of risk, what we call high risk—normally these are systems that operate in sensitive sectors of society, like health. So, where it’s necessary to have a human to oversee the behavior of the AI systems because there are really sensitive topics around it.

And the last classification of risk is what we call minimal risk. It’s like the day-to-day AI systems that can be used to facilitate and to bring more efficiency, for example, to services, such as the use of chatbots to support clients in customer services.

Ott: The purpose of these classifications: to give developers a baseline for how stringent any future regulation may be.

Ramos: It will be important to have clear and transparent guidelines because the industry will need to understand what to expect from the governments, from the regulation, from the act. Only if [the] industry knows what to expect can [it] take measures and understand what are the boundaries, where they can develop, and to increase their investments in what they are doing.

At this moment, of course, it’s very difficult to cope with the pace of the development of the technology itself. It’s very difficult to have a common understanding on how to treat everything that is happening at the moment. And of course, this is very difficult to regulate. There will be a lot of efforts, conversation, and understanding to arrive at a minimum base, to have a common ground.

Ott: There will also be room to experiment with regulation.

Ramos: Another strategic aspect that is in the package is related to the creation of regulatory sandboxes.

In a very simplistic way, regulatory sandboxes are safe environments that allow developers to test new technologies in a very strict environment and in a very safe way.

Ott: One aspect of regulation that was highlighted at the AI for Good summit was the importance of bringing industry to the table early in the regulatory process—and that perspective was also part of the European Union’s AI Act.

Ramos: If you want to clearly put regulations and standards in place, of course, the government bodies can create them, but they need to be shaped by the industry, because the industries [are] the ones that really will tell about the needs, about the challenges, and about what needs to be shaped. One without the other will never work.

And of course, at the same time, it’s very important—and actually it’s already happening—to continue to invest in research and development, both from industrial point of view, but also from the government.

From the government point of view, the more it invests in research and development, the more tools will be at the disposal of government and society. It’s very important to have partnerships, to have dialogs, and even to share common infrastructures for the good of all this development.

Ott: To support that, the European Union’s AI Act contains an innovation initiative. I asked Gustav to explain the role that the initiative will play as the sector progresses.

Jeppesen: This regulation here is also about securing the EU [to be] well supported in its development. So there’s really high politics and also high trade politics in regulation. It’s about protecting your populations and societies, but it’s also about securing financial success.

The devil lies in the detail and the devil lies in implementation and of course, the discussion will go on in the next years.

There’s concern that AI [is being] developed so fast that no regulator can really keep pace with the innovation. And so, it’s been a strong message from several of the digitization ministers [and] secretaries of digitization across Europe that we should not overregulate ourselves.

Ott: But passing legislation over the governance of AI isn’t just a onetime job, as both Ines and Gustav explain.

Ramos: One of the things that cannot happen is the legislation to be put in practice and then continue to be static. Especially with everything with technology, if there isn’t an evolution and adaptation to the technology, the regulation will be obsolete in a very short period of time. So this package has key strategic initiatives that need to be put together to continue this journey.

Jeppesen: If we look at regulation over the last 150 years, that’s really how it’s always been.

One of my favorite examples is, when cars were introduced in Pennsylvania in, I think it was in the 1890s, local state government decided that the max speed of cars should be two miles an hour. And, if you were driving a car, you had to engage a man to walk in front of the car, with a bell, warning the cattle on the road, that a car was coming.

Of course that regulation was abandoned after just a few years. But it just shows that sometimes—you don’t get it. We have to learn fast from the errors we made.

Ott: And just like that, we’re back to engine metaphors.

One red flag that has been raised has centered around bias in AI models.

Using flawed training data can result in algorithms that repeatedly produce errors or unfair outcomes, or even amplify the bias found in the flawed data. Algorithmic bias can also be caused by programming errors, such as developers building their own conscious or unconscious biases into a system.

And it has happened.

Algorithms based on large language models have been found to produce different results in reference letters for men and women.2 Race bias has been found in prediction of re-offending rates amongst prison inmates.3 Health care algorithms have underestimated needs of Black patients.4 Chatbots can exacerbate the effects of the echo chamber.5

So when bias goes unaddressed, it hinders people’s ability to participate in the economy and society. It also reduces AI’s potential. Distorted results hinder businesses from benefiting from data sets, and huge chunks of the population can be missed: racially and ethnically diverse people, people with disabilities, minorities, and marginalized groups.

So what can regulators do to help eradicate bias?

First, it comes down to data, as Deloitte UK’s Costi Perricos explains.

Perricos: What I tell my clients is that good data gives you artificial intelligence; bad data gives you artificial incompetence.

Ott: Ramos agrees.

Ramos: Bias is something that is in place, and it’s impossible to not talk about, because we need to understand that AI is based on data. And unfortunately, there are a lot of, let’s say, stereotypes integrated in the data itself that sometimes bring bias in different areas—race, gender, economic background—that can harm society.

Jeppesen: There is always the risk of the human error in those producing the system. So there can be biases that have not been thought of or are not being tested for because of inadequate work from the people developing it.

And then there could be bad actors, either working for themselves or within larger corporations, but without the necessary level of internal control. And both things can happen in a huge industry.

Ramos: In a very simplistic way, the more analysis you do with the data, the more fruitful outputs you are going to have. It’s really a matter of training. The more use cases you bring, the more data is analyzed […] starting to create models and patterns, the better we do.

The problem is when there are biases that you are really not expecting, and then this can bring a different output than you were expecting. And when you realize, it’s already too late because the damage was already done. So I cannot say that there is a unique formula to put in place. But again, it’s really a matter of bringing together all the information, all the expertise, all the capabilities. And the more these cases continue, the better it will get.

Ott: And this is where something called ‘trustworthy AI’ can play a part. Trustworthy AI initiatives recognize the real-world effects that AI can have on people and society and aim to channel that power responsibly for positive change. It’s something that the industries involved are striving toward.

Jeppesen: You know, trustworthy AI is AI that reflects good ethics, that has been tested. The pitfalls and the potential side effects have been tested for, and we are sure it is not building in any biases that are not documented. So that at the end of the day, this is about securing AI that works highly efficiently for the greater good of our societies.

Ott: That does not mean that everyone is quite ready to trust AI.

Perricos: If you look at 100 humans, 40 of them will be what I would call the fearers, people who, you know, don’t quite understand the technology, do not trust it, and basically think it’s there to replace their jobs and perhaps even destroy humanity.

Ott: And on the other end of the spectrum?

Perricos: [Another] 40% are the reverers [sic]: They think it's something magical, they think it’s going to solve all their problems, they think it’s going to transform humanity as a whole as it is today.

And of course neither of those two groups [are correct]. What you want is a workforce and I would argue a society that is the remaining 20% in the middle, understanding that these are great tools, that they can help advance humanity, but they have their limitations, they need to be controlled, and they need to be used responsibly.

Ott: He highlighted one of the unexpected ways that AI is benefiting us now.

Perricos: What these algorithms are very, very good at is understanding code and also translating it. Particularly in areas like climate change and climate modeling, where many of the models are actually still written in Fortran […] invented in the 1950s. The models themselves are decades old; the challenge is you don’t have many people who can still program in Fortran who are in the workforce. Well, these models can be used to look at the code and translate it into something more modern, like Java, saving huge amounts of time and also managing risk.

Ott: That’s one of Costi Perricos’ examples of how AI can be used for good. I asked Ines and Gustav for some of theirs, both from the summit and beyond.

Jeppesen: What impressed me the most was the commitment from some of the African countries, for example, Namibia, for securing the infrastructure. The commitment for being a strong actor in AI, in securing the education of AI and getting the whole population on board. It gave me hope that there is an AI-augmented or AI-based future for the entire global community.

Ramos: One of the benefits that [AI] can bring [is] the treatment of the enormous amount of data, to have the capacity to have information in a very short period of time that allows you to make conscious decisions. And this is really something that we are only learning now, that before was not possible. In the question of seconds, you are capable of having different options and different characterization or possible consequences of the options that you are going to take.

For example, when you have a natural catastrophe that will put civilians, in a town, in a city, in the country [in danger], if you are really capable of understanding what is surrounding you and what are the options, you can take measures to, for example, understand how many hospitals exist in the area and what is the [occupancy rate] of the hospitals. If there are roads that you cannot [drive on].

In terms of the technology, every day, there are new technologies that are being tested. There are new applications created that are not only applied for one sector but can be applied in multiple sectors. There’s the evolution that we see every week, every month, and every year. It’s enormous. But of course, aligned with this technical evolution, I believe that the standards, the regulation, will also evolve within the next 12 months.

Because at the end of the day, it will be very, very important to have ethical safety and transparency in everything that is going on. And I believe that in 12 months this will rapidly evolve.

Ott: A huge thanks to my guests this episode: Gustav Jeppesen, vice chair of Deloitte Denmark and global lead in central government, and Ines da Costa Ramos, the AI lead at Deloitte Belgium. And thank you to the United Nations and the International Telecommunication Union, which allowed us to use audio from the AI for Good Global Summit.

AI is a topic we’ll be revisiting frequently this season on Government’s Future Frontiers, as we examine issues like how governments are dealing with financial crime, why supply chains are essential to global diplomacy, and what the future of food looks like. Next episode, we’ll be sharing stories from the people who are on the ground and in the field as society grapples with how to respond to two major issues brought on by climate change: wildfires and heat.

If you’ve already subscribed, you’ll get new episodes delivered automatically. If you’re not subscribed, maybe hit that button right now so you don’t miss out.

This podcast is produced by Deloitte. The views and opinions expressed by podcast speakers and guests are solely their own and do not reflect the opinions of Deloitte. This podcast provides general information only and is not intended to constitute advice or services of any kind. For additional information about Deloitte, go to Deloitte.com/about.

By Tanya Ott, United States

Endnotes

  1. European Parliament, “EU AI Act: First regulation on artificial intelligence,” June 18, 2024.
  2. Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, and Nanyun Peng, “‘Kelly is a warm person, Joseph is a role model’: Gender biases in LLM-generated reference letters,” arXiv, October 13, 2023.
  3. Julia Dressel and Hany Farid, “The accuracy, fairness, and limits of predicting recidivism,” Science Advances 4, no. 1 (2018).
  4. Tom Simonite, “A health care algorithm offered less care to Black patients,” Wired, October 24, 2019.
  5. Jeremy White, “See how easily A.I. chatbots can be taught to spew disinformation,” The New York Times, May 19, 2024.

Acknowledgments

Cover image by: Sofia Sergi, Adobe Stock