The future of AI in government
Tanya: When we talk about artificial intelligence in government—especially in the field of national defense—we can start to sound a little tense. And with good reason.
SFX1: “The tempo of warfighting is in for a dramatic acceleration. Today a threat actor can have effects almost instantaneously. So to be able to respond in that age, you have to be thinking about how you can succeed in a digital environment.”1
Tanya: But it’s possible we’re getting so distracted by sci-fi stories and worst-case scenarios that we’re losing sight of our history. When we do that, we’re in danger of missing out on some big opportunities.
Ed Van Buren: When citizens think about AI, they tend to race to the dark corner of the room and start thinking about things like killer robots or ubiquitous surveillance or scary technologies. They often don’t think about things like their phone, which is helping to complete sentences for them and giving them word selections. There is a whole host of very practical, very mundane applications of artificial intelligence that government can take advantage of, that does not get into the extreme use cases that generate a lot of press.
Tanya: That’s Ed Van Buren. He leads artificial intelligence for Deloitte’s government and public services industry. He says that in reality, AI can allow government agencies to provide better services more efficiently.
Ed: We have lots of text in our government agencies … lots of paper documents that have been scanned, lots of regulations, lots of policies, and being able to go into that information and identify items of interest, identify patterns, identify things that you might be looking for, would be a huge benefit.
Tanya: Natural language processing—and more specifically, its subset, natural language understanding—can dramatically speed up that process, sifting through thousands of documents in a fraction of the time it would take a human.
Ed: If you are a government policy-maker and you are involved in issuing regulations, revising regulations, you need to be aware of a whole scope of regulations on the topics that you’re interested in. Those regulations historically started on paper. They’re now obviously online, and they require searching in order to find the different pieces of information you might be looking for. Using natural language processing, you can search across that whole corpus of regulation in one fell swoop and identify all of the different pieces that are connected to each other, so that you can see that you may have a number of regulations that are all achieving the same thing, which would give us the opportunity to remove some of those regulations. Or you might be able to find regulations that are in opposition to each other. In the past, someone could have done some of those things, but it would be painstaking. It would be exhaustive. Now it is something that can be done rapidly using NLP technologies.
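To make Ed’s example concrete, here is a minimal sketch of that kind of corpus comparison, scoring every pair of regulations by textual similarity so overlapping or conflicting rules can be flagged for human review. The regulation snippets are invented placeholders, and a real pipeline would use far richer language models than TF-IDF.

```python
# Score every pair of regulations by textual similarity; high-scoring pairs
# are candidates for consolidation or conflict review. Texts are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

regulations = {
    "Reg-101": "Facilities must submit an annual emissions report to the agency.",
    "Reg-202": "An annual emissions report shall be filed with the agency by each facility.",
    "Reg-303": "Operators must renew safety certifications every five years.",
}

ids = list(regulations)
vectors = TfidfVectorizer(stop_words="english").fit_transform(regulations.values())
scores = cosine_similarity(vectors)

# Print each pair with its score; a reviewer (or an assumed cutoff)
# decides which pairs duplicate or contradict each other.
for i in range(len(ids)):
    for j in range(i + 1, len(ids)):
        print(f"{ids[i]} vs {ids[j]}: similarity {scores[i, j]:.2f}")
```

Even this toy version shows the shape of the win: a comparison that once meant weeks of side-by-side reading becomes a loop over precomputed vectors.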
Tanya: NLP is just one of the many tools that falls under the broad category of AI. There’s also robotic process automation, computer vision, machine learning, deep learning. But government faces some challenges.
Ed: Government certainly has a lot of investment in legacy technology, and those systems have been developed over many years to the very unique and specific needs of government, which tends to make them hard to maintain, hard to upgrade, and hard to infuse with new capabilities like artificial intelligence. The ability to completely remove an entire system and replace it with something modern is pretty limited in government. So it’s important to develop technology solutions that can be fit around existing IT investments, finding ways to use the power of cloud while at the same time recognizing previous investments in on-premises solutions, being able to take artificial intelligence components and incorporate them into the larger legacy environments.
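Ed’s “fit around” approach is, in software terms, an adapter: the AI component sits beside the legacy logic and only takes over when it is confident. Here is a hypothetical sketch under that assumption; every name and the keyword-based stand-in model are invented for illustration.

```python
# Hypothetical sketch: add an AI classifier beside a legacy routing rule
# instead of replacing the legacy system outright.
from dataclasses import dataclass

@dataclass
class Claim:
    text: str

def legacy_rules(claim: Claim) -> str:
    # Stand-in for existing, hard-to-replace business logic.
    return "review" if "urgent" in claim.text.lower() else "routine"

def ai_classifier(claim: Claim) -> tuple[str, float]:
    # Stand-in for a trained model; a real one would return a label
    # with a calibrated confidence score.
    score = 0.9 if "fraud" in claim.text.lower() else 0.2
    return "review", score

def route(claim: Claim) -> str:
    label, confidence = ai_classifier(claim)
    # Trust the new component only when it is confident; otherwise the
    # untouched legacy path keeps behaving exactly as before.
    return label if confidence >= 0.8 else legacy_rules(claim)

print(route(Claim("possible fraud on invoice 42")))  # AI path: review
print(route(Claim("standard address change")))       # legacy path: routine
```

The design choice is the fallback: the legacy system remains the source of truth, so the AI component can be added, tuned, or removed without rewriting what already works.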
Tanya: Ed says in the last few years his team has seen a dramatic increase in the number of government agencies requesting help with AI. Not surprisingly, one agency that is very interested in the technology is the Department of Defense.
Bob Work: I focus primarily on the future of war.
Tanya: Bob Work was the US deputy secretary of defense from 2014 to 2017. He served in both the Obama and Trump administrations. Before that he was the US undersecretary of the Navy—the second highest ranking civilian in the Navy. These days he’s a consultant and serves as cochair of the National Security Commission on Artificial Intelligence.
Bob: About two years ago, Congress established the commission and asked us to take a look at how AI was going to affect US national security. The commission thought of this in the same way that Thomas Edison thought about electricity. He described electricity as “a field of fields” that “holds the secrets which will reorganize the life of the world.” We actually believe that artificial intelligence is similar to electricity. It’s going to be a new way of learning in every single scientific endeavor. It really is going to be something that has an enormous impact on our lives.
Tanya: The commission recently released a roughly 760-page report detailing how artificial intelligence could impact not only national security but also economic security.2
Bob: We didn’t want to be hysterical in this report, but we did want to sound the alarm. The United States has been the world’s leading technology generator since World War II. It is a major reason why we have the most robust economy in the world and why we are so militarily strong. For the first time, the United States is faced with a competitor, China, that is every bit as technologically capable as we are. They have a national goal to leapfrog the United States, both in terms of having the largest economy in the world and in terms of having the most capable military in the world. AI is central to that plan.
Tanya: China has made it a national goal to become the world’s leader in artificial intelligence by 2030.3 It’s hard to know exactly how much China is spending on AI, but Bob estimates it’s up to six times as much as the US.
Bob: We’re not even organized for the competition.
Tanya: The commission hopes its new report will be a wake-up call. It wants Congress to double the amount of money it allocates to AI each year until it reaches $32 billion annually in 2026.
Bob: Depending on who you talk to, the winner of the AI competition is going to have a $13 to $15 trillion economic advantage. So when you think of that, the $32 billion doesn’t sound like it’s totally out of the ordinary.
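A quick back-of-the-envelope check of Bob’s comparison, using the low end of the figures he cites:

```python
# Comparing the commission's annual funding ask to the low end of the
# cited $13-15 trillion economic advantage.
annual_ask = 32e9        # $32 billion per year
projected_prize = 13e12  # $13 trillion
print(f"{annual_ask / projected_prize:.2%}")  # prints 0.25%
```

The yearly ask is about a quarter of one percent of the projected payoff.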
Tanya: They also recommended the creation of a Technology Competitiveness Council, much like the National Security Council created at the start of the Cold War.
Bob: It would be up to the TCC to develop a technology competition strategy for the United States, to set priorities among quantum and biotechnology and AI and 5G and additive manufacturing, and to say: This is how we need to structure ourselves to win this competition.
Tanya: Bob admits that one big challenge is the talent pool. There are plenty of—as he calls them—“digital wizards” who understand how to build and deploy AI. But most of them don’t end up in government. The commission wants to change that.
Bob: The first idea was a Digital Service Academy. It’s modeled after the military service academies. Anybody could apply. You go to the Digital Service Academy, get a full ride, and get a degree in some STEM specialty—computer science, physics, software engineering, something like that—and when you graduate, you would go into the federal government for some period of time.
Tanya: They recommended five years.
Bob: [The Digital Service Academy provides] “active-duty” digital wizards. Then we recommended a National Digital Reserve Corps, which was modeled after the ROTC, the Reserve Officer Training Corps, and the National Guard. In this case, you would apply, you’d go to any college on a full ride paid for by the government. You would get [a STEM] degree, and could work anywhere you wanted—you could work at Google or Apple or a VC firm—but for two days out of every month, you would go to a [government agency or] military unit and you would say, “Hey, what problems do you have? How could advanced technology help solve that problem?” And then for two weeks out of the year, you go to maybe a military exercise or perhaps you go to a national laboratory [or to an internship at a government agency] and you would do the same thing.
Tanya: The commission hopes to have the Digital Service Academy up and running by 2025. Bob says they have bipartisan support. But their call for immigration reform might be a harder sell.
Bob: Immigration is a national security imperative in this competition, and we need new visa categories for people with these high-tech skills. We should provide green cards to every student who graduates with a doctoral degree in a STEM field. We want them to stay in the United States. We don’t want to train all of these people in the best universities on the planet and then have them go to China and compete against us.
Tanya: Bob cites another reason the US needs to work to maintain its lead in technology development.
Bob: Technology reflects the values of the governments that pursue it.
Tanya: My next guest has been thinking deeply about just how AI could reflect our values.
Molly Steenson: A few years ago, I noticed the word ethics showing up an awful lot in our conversations about artificial intelligence and technology. As I began to delve into it, I had a nagging feeling that when we say AI ethics, sometimes it’s neither AI nor ethics.
Tanya: Molly Steenson is professor of ethics and computational technologies at Carnegie Mellon University. She’s also a historian of architecture and a journalist, so she’s got a distinct perspective.
Molly: I am a designer. I’m a human-centered designer.
Tanya: It’s estimated that by 2025 there will be 25 billion Internet of Things smart devices in our homes, cars, and offices. Molly wants to make sure they’re doing right by the humans who use them.
Molly and her team of researchers examined nearly 14,000 news articles that mentioned terms related to AI and ethics, to get a sense of how the press frames the issue.
Molly: We see terms like AI, algorithm, bias, data, ethical and ethics, facial recognition, machine, moral, privacy, responsibility, risk, and transparency. When we see transparency, we see it in combination with “lack of,” which is to say that the press isn’t covering when a company is transparent or doing something right. They’re covering it when it goes wrong.
A friend who works at a Silicon Valley company points out that there tends to be a life cycle, that something will go very wrong with the company, and there will be some kind of major moment or PR crisis, and the company will put out a set of statements. So inside of the company, someone will say, “Something’s gone wrong. We need to care about this. This matters across the board.” Whoever does that needs to have enough seniority to be credible to raise that risk, that threat. Then the ethics statements are put out there and someone is given the job of AI ethics. Often it happens more symbolically. It’s in addition to their existing position and it’s in addition to their existing responsibilities. So, it doesn’t come with resources.
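As a rough illustration of the term analysis Molly describes, here is a toy sketch that counts which ethics-related terms co-occur in article text and how often transparency is framed as lacking. The sample articles and term list are invented; her team’s actual methodology was certainly more sophisticated.

```python
# Toy co-occurrence tally over ethics-related terms in news text.
import re
from collections import Counter
from itertools import combinations

TERMS = ["ai", "algorithm", "bias", "ethics", "privacy", "transparency"]
pattern = re.compile(r"\b(" + "|".join(TERMS) + r")\b")

articles = [  # made-up examples standing in for ~14,000 real articles
    "Critics cite a lack of transparency in the company's AI hiring algorithm.",
    "An ethics board will review privacy and bias in the agency's AI systems.",
]

pair_counts = Counter()
lacking = 0
for text in articles:
    lower = text.lower()
    present = sorted(set(pattern.findall(lower)))
    pair_counts.update(combinations(present, 2))  # terms appearing together
    lacking += "lack of transparency" in lower    # the framing Molly flags

print(pair_counts.most_common(3))
print("articles framing transparency as lacking:", lacking)
```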
Tanya: Molly says there are many things companies and governments can get wrong when deploying AI. But one of the biggest problems is not considering the consumer. She points to the case of the Boston Public Schools. A few years ago there was a lot of pressure to redesign the district’s school bus system. The buses showed up at many students’ homes at inconsistent times … and they were expensive: roughly $2,000 per student per year, or 10% of the district’s budget.4
Molly: They worked with some MIT folks who came up with some really great algorithms and did an amazing job of balancing out start times and [making the system] more equitable across certain neighborhoods, for people of color, [and across] different class boundaries. They rolled out the system and it was a total failure.
Tanya: It failed, she says, because they didn’t really think of the end user.
Molly: Families now had three start times over the course of the morning instead of one or two. It threw too many things into disarray. Despite everything [Boston Public Schools] put into this project, including a lot of money, they had to roll it back. Now, nothing’s perfect when you’re trying to roll out something that has high stakes, like how your kids get to school, how you run your morning, and really your life. But if they had included more stakeholders in better ways in that process, they might have been more successful in how they collected that data and then how they built that data into the algorithms, the systems, and the various AIs that created this new way of doing school busing in Boston.
Tanya: So, what are the lessons for other local, state, and national government agencies?
Molly: One is a question of understanding who the stakeholders are. In a lot of ways governments do understand this, especially when you’re working at a local or state level. There are close relationships with the constituents and the different stakeholders in a community. Then you try to design to meet them where they are and where they need to go, where they hope to go, what they hope to achieve. That’s in contrast to standing back in our office spaces and working the way a lot of policymakers and engineers have worked for a long time, which is to do our best guess and deploy it. When we engage these design processes, we have another opportunity to meet people’s expectations, better understand who they are, and create better government and better governance as a result.
Ed: Any technology can be used for an array of purposes. Some are good, some are not. It is going to be critical for government to keep thinking about the ethical use of AI and make sure, as they look at use cases for AI, that they’re also thinking about how that capability could be misused.
Tanya: Again, Deloitte’s Ed Van Buren.
Ed: They want to use AI in a way that helps to build that trust and not erode it. So being very sensitive to using the data they possess for appropriate purposes, not overreaching that use, and not jeopardizing trust in the privacy of the data they hold is something they need to keep at the forefront of their minds.
Molly: One thing it helps to remember is that AI is not new. “Artificial Intelligence” was coined in 1955. It’s a term that, as I like to say, is old enough to get Social Security at this point. The really big claims of what we thought AI was going to do, we’ve been making for decades. They’re also the big scary claims, even the claims of superintelligence that someone like Nick Bostrom makes. But we’ve been making those claims since the 1960s.
Bob: There are AI optimists and pessimists.
Tanya: Bob Work says he’s an optimist.
Bob: A lot of people argue over when AGI—artificial general intelligence—will arrive; these will be computers that can truly mimic human intelligence. I don’t worry about that. It’s the artificial narrow intelligence applications that will be in our lives. AI will be everywhere; it will be ubiquitous. We will have autonomous cars and autonomous trucks on the roads. You’re going to have drone delivery of packages. You’ll wake up in the morning, you’ll converse with your digital assistant. The digital assistant will already know where you like to have breakfast and will order the breakfast for you.
Tanya: That does it for this episode of Insights In Depth: The Future of Government. We heard from Bob Work—cochair of the National Security Commission on Artificial Intelligence; Molly Steenson—professor of ethics and computational technologies at Carnegie Mellon University; and Ed Van Buren—who leads artificial intelligence for Deloitte’s government and public services industry.
Next up in our series, leveraging AI for actual government service delivery.
That’s coming up on Insights In Depth: The Future of Government. Follow on your podcatcher of choice, and check us out at www.deloitte.com/insights.
You can also connect with us on Twitter at @DeloitteInsight. I’m on Twitter at @tanyaott1. I am Tanya Ott. Have a great day!
This podcast is produced by Deloitte. The views and opinions expressed by podcast speakers and guests are solely their own and do not reflect the opinions of Deloitte. This podcast provides general information only and is not intended to constitute advice or services of any kind. For additional information about Deloitte, go to Deloitte.com/about.
Cover image by: Traci Daberko