Who determines ethics in a machine-run world?
A case for "Society in the loop artificial intelligence"
As automation and robotics fueled by artificial intelligence (AI) become more mainstream, many areas of industry are set to undergo revolutionary changes. New sorts of jobs will likely emerge, some existing jobs will likely undergo a transformation, and others may go away.
May 19, 2017
A blog post by Jim Guszcza, chief data scientist, Deloitte Analytics.
There is a good reason for concern about societal disruption, and a pressing need for enlightened societal-level dialogue. But we should not lose sight of the bright side to the creation of machines capable of helping with laborious "spade work." AI has the potential to create significant value by making us more efficient, extending our intelligence and decision-making capabilities, saving organizations money, and generally helping societies run more smoothly.
To take an everyday example, consider how lost many of us would feel without the in-vehicle navigation technologies that help us reach our destinations with greater speed and less hassle. Humans can navigate themselves based on habit, rules of thumb, and limited information. But GPS-enabled navigation systems benefit from practically unbounded computing power and real-time information when suggesting the most efficient routes. At the individual level, complicated urban landscapes become more navigable (and I write from experience as a denizen of Los Angeles County); at the population level, drivers come closer to achieving a kind of technology-enabled collective intelligence, analogous to birds in a flock.
Yes, this can lead to the temptation to just switch off our minds when we really should remain engaged. I am thinking of a recent taxi ride in which the (local) driver got lost on a straight shot from my hotel to the restaurant because he was uncritically following garbled GPS signals rather than keeping his common sense in the loop. One might call this sort of thing "artificial stupidity," in contrast with artificial intelligence. But when we—and our GPS-based navigation systems—are at our best, humans and algorithms can work together to make smarter decisions.
This tiny example points to a larger, increasingly important issue: the need to reflect societal norms and values in autonomous AI technologies, and the need for individuals and societies to develop practices surrounding AI that make us collectively smarter rather than the opposite. To quote the great John Seely Brown (the former Xerox PARC director):
The technology is the easy part. The hard part is, what are the social practices around this?1
While AI technologies are developing rapidly, we are only beginning to discuss how to reflect societal values in the algorithms and autonomous systems that are increasingly used to make important—and value-laden—decisions. In a recent webinar entitled "Society in the loop artificial intelligence: Autonomous vehicles and beyond"—the first in a new series on data science, AI, and design thinking—we explored this issue with Iyad Rahwan, associate professor of media arts and sciences at MIT Media Lab.
The prospect of autonomous vehicles populating our roads offers an important case study. For better or worse, we have traditionally relied on human judgment to negotiate such ethically fraught questions as whether to swerve to avoid hitting (say) a runaway dog while risking harming yourself or another in the process. Soon, algorithms might be making such morally fraught decisions.
Let’s face it: humans are far from perfect, whether behind the wheel or elsewhere. As we know from Daniel Kahneman, Richard Thaler, and "Moneyball," bounded human minds are systematically prone to unconscious biases and cognitive traps that AI algorithms can help us avoid. Furthermore, algorithms do not become distracted, emotional, or intoxicated. So it should come as no surprise that AI-enabled autonomous vehicles could reduce traffic fatalities by 90 percent2.
But how should autonomous vehicles react in ethically fraught, life-or-death situations? For example, should a car swerve to avoid hitting several innocent pedestrians at the cost of killing the driver and passengers? Does it matter how many pedestrians and passengers are involved? Their ages? Whether the pedestrians are illegally jaywalking? Such ethical questions can’t be engineered away, and it’s not always obvious who is making the rules.
Keeping society in the loop—reflecting societal values in autonomous systems—requires understanding those values in some detail in order to operationalize them. To that end, Rahwan developed an ingenious method for crowdsourcing information about moral intuitions: the "Moral Machine." At moralmachine.mit.edu, anyone can offer his or her opinion on how machines should make decisions in a variety of life-or-death situations, similar to the ones mentioned above. This has yielded a granular view of societal values crowdsourced from people around the globe.
To date, Rahwan and his team have collected 27 million decisions from 3 million visitors worldwide. In general, people express the utilitarian view that autonomous vehicles should minimize overall potential harm. However, opinions tend to vary with the respondent’s imagined role in the scenario: pedestrians want automated cars to protect them, citizens want the cars to minimize total harm, and car owners tend to want the car to protect its occupants at all costs!3
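The role-dependence of these responses can be illustrated with a minimal sketch. Everything below, including the role labels, policy names, and sample records, is an invented toy, not the Moral Machine's actual data or methodology; it simply shows how crowdsourced dilemma responses might be grouped and summarized by respondent role.

```python
from collections import Counter

# Hypothetical survey records: (respondent_role, preferred_policy).
# All labels are invented for illustration only.
responses = [
    ("pedestrian", "protect_pedestrians"),
    ("pedestrian", "protect_pedestrians"),
    ("pedestrian", "minimize_total_harm"),
    ("citizen", "minimize_total_harm"),
    ("citizen", "minimize_total_harm"),
    ("car_owner", "protect_occupants"),
    ("car_owner", "protect_occupants"),
    ("car_owner", "minimize_total_harm"),
]

def majority_by_role(records):
    """Tally policy preferences within each respondent role and
    return the most popular choice per role."""
    tallies = {}
    for role, policy in records:
        tallies.setdefault(role, Counter())[policy] += 1
    return {role: counts.most_common(1)[0][0] for role, counts in tallies.items()}

print(majority_by_role(responses))
# → {'pedestrian': 'protect_pedestrians', 'citizen': 'minimize_total_harm',
#    'car_owner': 'protect_occupants'}
```

Even this toy aggregation makes the tension concrete: a simple majority within each role yields mutually incompatible policies, which is why reconciling such preferences is a societal question rather than a purely technical one.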
Such work gathering information and building consensus about the values to be reflected in AI systems is the first, highly important step. After that comes operationalizing these decisions. This will be neither straightforward nor easy: In contrast with big data or computing technology, there is no "Moore's Law" of exponential growth for societal deliberations. But that is all the more reason for the business, public policy, and data science communities to engage in Rahwan's dialogue about society-in-the-loop AI without delay. Technological development is not the same thing as societal progress, and building the kind of AI-infused world that we want to live in will require active societal participation.
1John Seely Brown, “Cultivating the Entrepreneurial Learner in the 21st Century,” 2012 Digital Media and Learning (DML) Conference, March 1, 2012.
2Adrienne Lafrance, “Self-Driving Cars Could Save 300,000 Lives Per Decade in America,” The Atlantic, September 29, 2015.
3Webinar: “Society in the loop artificial intelligence: Autonomous vehicles and beyond,” March 15, 2017.