Jim Guszcza, Deloitte’s chief data scientist, explains how behavioral nudges, aided by AI and data science, can help governments, universities, and businesses address some of their most pressing issues.
Not only would two different people make different judgments about the same risk, the same person might make a different judgment depending on whether it was before lunch or after lunch. So, your mood, your blood sugar level, the weather, all these extraneous factors can affect your decisions.
Jim Guszcza, chief data scientist, Deloitte Consulting LLP
Tanya Ott: I’m Tanya Ott. This is the Press Room. And today, we’re kicking off a series of podcasts recorded at Georgetown University— where Deloitte’s Centers for Government Insights and Integrated Research recently held a one-day conference to explore how behavioral science and data science can help governments, universities, and businesses address some of their most challenging problems.
What’s a Nudgapalooza, you say? Well, it’s basically one researcher after another on the topic of behavioral nudges, and there’s one guy who can really explain it.
Jim Guszcza: My background is somewhat idiosyncratic. I was not trained in behavioral science. I didn't get a PhD in this domain. My PhD is actually in philosophy.
After I got my PhD in philosophy, I became what we now call a data scientist, but that didn't exist when I started working in this area.
Tanya Ott: Jim trained to be an actuary. And he was one until he, as he says, backed into understanding the importance of what we now call behavioral economics—or nudge.
Today, Jim is really interested in the intersection between behavioral insight and technologies like artificial intelligence that increasingly surround us.
Jim Guszcza: There are two forces that we all know are reshaping our worlds. One is data and the other is digital. The big data revolution is giving rise to an artificial intelligence revolution, right? Andrew Ng says that AI is the new electricity. It's a bit of hype, but I actually believe it, in the sense that AI is the kind of general-purpose technology that can transform just about any area of industry in ways we can only dimly perceive right now. It's going to be a big game-changer in digital.
This is Douglas Engelbart sitting there with his new invention, the [computer] mouse, many, many years ago. Douglas Engelbart is the human–computer interaction pioneer who invented the mouse, the graphical user interface, and so on. Really a pioneering thinker and maybe one of the original choice architects. And he said that the digital revolution is far more significant than the invention of writing, or even printing.
Okay, so these are definitely two forces that are reshaping our world. But we're beginning to see in the headlines that the ways AI and digital are reshaping our world don't always have to be automatically positive. Right? We're seeing the increase in group polarization due to the collaborative filtering of news and opinions.
We see social technologies that are actually causing isolation, anxiety, even addiction, sometimes even by intentional design.
So [those] two forces are reshaping our world, but there's a third force that technologists don't pay attention to quite enough, which is the design revolution.
Tanya Ott: Jim says we need to take into account human psychology when we design our products and services.
Jim Guszcza: And I find it convenient to quote Richard Thaler, one of the pioneers of behavioral economics, who summarizes this very nicely when he talks of the difference between what economists have traditionally assumed about humans versus what we now know about humans.
Richard Thaler: Homo Economicus, or I call them “Econs” for short, they’re the people you study in any basic economics class. They’re really smart. They can make any kind of calculation. Never forget anything. Unemotional. No self-control problems. And they’re complete jerks. If you leave your wallet behind, they’ll take it if they think they can get away with it.
Jim Guszcza: And he says most of the people I meet don't have any of those qualities. They have trouble balancing the checkbook without a spreadsheet, they eat too much, and they save too little. They'll leave a tip at a restaurant even if they don't plan on going back, okay?
So, he's talking about actual humans versus econs. After Thaler won the Nobel Prize, he said that all these years, economists might as well have been sighting unicorns. That traditional econ view is so different from what we know about the way actual humans behave. So what Thaler is doing in that quick quote is summarizing what he and many other behavioral economists call the "Three Bounds."
Tanya Ott: The Three Bounds. Those are the three major ways in which actual humans diverge from what economists call “econs.”
Jim Guszcza: Management experts, government experts, people who have gone to business school, traditionally either implicitly or explicitly, assume [these three things] about human psychology. One is bounded rationality: our rationality is bounded in that we all have biases that creep into our decisions.
One way of summarizing this is that we're terrible natural statisticians. We need help from data science and algorithms, right? Most of our mental operations are actually automatic, thinking-fast operations. Very few are thinking-slow. And thinking fast is terrible at statistics.
When we're making really important choices, putting on a suit and going to a boardroom [for example], we actually really need help from things like collective intelligence and data science and algorithms.
Tanya Ott: Which, Jim says, are really good at thinking rationally. The second of the three bounds is bounded selfishness.
Jim Guszcza: There's a picture I snapped at LAX airport a couple of years ago, Marvin's Complimentary Shoe Shine. How many of you guys would sit down and let Marvin shine your shoes and then just walk away saying, “It's free, thanks, Marvin”? You wouldn't do that, right? It doesn't make sense. You're economically worse off if you give Marvin some money, but you would just never do that.
So, there's some kind of social force (Adam Smith calls this a moral incentive) which is a very, very strong motivating force that would prompt us to give Marvin some money. And it's a force that we don't necessarily tap into or acknowledge very much when we design our programs and policies.
Tanya Ott: And the third in the triumvirate? That’s bounded self-control. We tend to make short-term decisions that are at odds with our long-term goals.
Jim Guszcza: So that's a picture of me saying I will do my open enrollment at Deloitte after I watch the 49th episode of Game of Thrones, I promise. (Crowd laughs.) I missed it this year again, all right.
Tanya Ott: It's not always fun to make the short-term decisions that our long-term self would really appreciate.
So, three bounds: self-control, selfishness, and rationality. That last one is what Jim, the data scientist, spends the most time thinking about. And the book Moneyball—about how the Oakland Athletics GM built a powerhouse by using metrics to suss out undervalued talent—figures prominently in that thinking.
Jim Guszcza: When we make decisions, we're not necessarily doing an approximate job of building a regression model in our minds. What we're really doing is telling stories that have narrative coherence; those stories don't always make logical or statistical sense. The bottom line is that both bias and noise can affect your decisions. And one of the ironies is that Moneyball is now a 15-year-old example of the power of data science, the power of data and algorithms, to help us make better decisions and overcome our cognitive biases.
But if you think about it, it was a story about HR. It was a story about making hiring decisions. We've known for a long time that unstructured interviews, which are the most common way to make hiring decisions, are notoriously bad at telling you who is going to do well in the job.
They don't have a lot of predictive power. Both bias and noise can affect a decision. For bias, a classic experiment illustrates this: You can show people thin-sliced clips of the first 15 seconds of job interviews, and they're able to predict just from those first 15 seconds, to a surprising extent, how well those interviews went and who got the job.
Which means that first impressions matter way too much. There's not much information in those first 15 seconds, but you do form a first impression [through] things like halo effects and affinity bias. This person looks the type, right? Even if you don't realize it, that can have a strong effect on your decisions and this can, of course, result in gender bias and racial bias in workplaces.
Noise can affect your decisions, too. Even before I read about this, I used to say this to my underwriting clients: Not only would two different people make different judgments about the same risk, the same person might make a different judgment depending on whether it was before lunch or after lunch.
Your mood, your blood sugar level, the weather, all these extraneous factors can also affect your decisions. And noise might be almost as big a problem as bias when it comes to making these decisions.
And controlling noise is one thing algorithms are very good at, right? They just make the same decision over and over again. Algorithms are not a panacea, but noise is one thing they are not affected by.
Tanya Ott: So, because of all these problems, a lot of people are looking to algorithms to do things like improve hiring decisions. Companies are using AI, machine learning, text processing of resumes, psychological inputs—all kinds of metrics to filter candidates and, they hope, find the right person for a job. But Jim says there’s a problem …
Jim Guszcza: If you're training an algorithm on decisions that have been made by people who are affected by bias, those biases can be encoded in the data and picked up by the algorithms. You don't want to just do away with human judges and interviews altogether, right? That wouldn't make any sense, especially if you're hiring for rare, one-off roles. So, what should you do? Well, here's one case where behavioral insights and behavioral design, designing choice environments with a sophisticated understanding of human psychology, come in handy.
For bias, rather than just looking somebody in the eye and saying, "Is this person a good candidate? Do I like this person? Is my overall impression good?"—no, don't compare a candidate to some abstract ideal. Compare multiple candidates to each other, one dimension at a time, separately, to break the halo effect and affinity bias.
Try to operationalize the dimensions you're measuring candidates against, so you can actually score each one from one to five in a somewhat objective way. And then, once the whole thing is done, just add up those scores, compare people on them, and promise yourself that you're going to do the best you can to adhere to those scores.
So that's ameliorating bias; what about ameliorating noise? Well, have multiple people do this. Maybe one of those people is having a bad day or is in a cranky mood, but maybe the other four aren’t—those are independent. Just like in a regression model, the errors cancel out. The noise will cancel out. So, it's a brilliant use of both behavioral design and what you might call collective intelligence to ameliorate both bias and noise.
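To make that concrete, here is a minimal Python sketch of the scheme as described: each rater scores each candidate one dimension at a time, the dimension scores are averaged across independent raters so individual noise tends to cancel, and the averaged scores are summed into a single comparable total. Every rater, candidate, dimension, and score below is an invented illustration, not data from the talk.

```python
from statistics import mean

# Hypothetical 1-5 scores: rater -> candidate -> dimension -> score.
# Scoring dimensions separately blunts halo and affinity effects;
# pooling raters lets their idiosyncratic noise cancel out.
ratings = {
    "rater_1": {"candidate_a": {"coding": 4, "communication": 3, "domain": 5},
                "candidate_b": {"coding": 3, "communication": 5, "domain": 4}},
    "rater_2": {"candidate_a": {"coding": 5, "communication": 2, "domain": 4},
                "candidate_b": {"coding": 3, "communication": 4, "domain": 4}},
    "rater_3": {"candidate_a": {"coding": 4, "communication": 3, "domain": 4},
                "candidate_b": {"coding": 2, "communication": 5, "domain": 5}},
}

def total_score(candidate: str) -> float:
    """Average each dimension across raters, then sum the dimension averages."""
    dimensions = ratings["rater_1"][candidate].keys()
    return sum(
        mean(ratings[rater][candidate][dim] for rater in ratings)
        for dim in dimensions
    )

for candidate in ("candidate_a", "candidate_b"):
    print(candidate, round(total_score(candidate), 2))
```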
In many cases, what you want is for the algorithm to be another voice in the room. You want the algorithm to be something that an expert consults and can understand. You need to embed the algorithm within a sophisticated choice environment where the person is being informed, not just told something by a black box.
Tanya Ott: A sophisticated choice environment! It’s not just about the technology. It’s also the design. It’s using human psychology to design environments that will lead people to make smart choices.
Jim Guszcza: One example is smart defaults. It's a beautiful case of designing a choice environment to go with the grain of human psychology. If you simply default people into a retirement plan and give them automatic escalation, where they decide now that every time they get a raise in subsequent years their contribution rate will go up, then inertia will act as their friend rather than their foe and they'll save more money for retirement.
It's been estimated that this has resulted in an extra US$30 billion of savings being deferred into retirement accounts.1 It's a very small change that's had an outsize impact.
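A toy simulation of that escalation mechanic, purely as an illustration; the starting salary, raise size, escalation step, and cap below are invented assumptions, not figures from the talk:

```python
# Hypothetical numbers: a 3% default contribution rate that escalates by one
# percentage point at each annual raise, capped at 10%. Inertia does the rest.
salary = 60_000.0
rate = 0.03
saved = 0.0

for year in range(10):
    saved += salary * rate
    salary *= 1.03                 # assumed 3% annual raise...
    rate = min(rate + 0.01, 0.10)  # ...triggers the pre-committed escalation

print(f"Deferred after 10 years: ${saved:,.0f}")
```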
Tanya Ott: We are surrounded by technologies and design choices that lure our fast-thinking brains into spending, even when our slow-thinking brains know that we need to be saving money. I’ll cop to it. I love that one-click online shopping as much as the next person. But there’s a tech company that’s using human-centered design to encourage saving.
Jim Guszcza: What they want to do is introduce one-click savings. So, every time you buy a cup of coffee at the hipster coffee shop, and it's US$3.50, it rounds it up to the nearest dollar, and US$0.50 goes to your retirement fund. So, it's a very nice way of making it easy, right?
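A minimal sketch of that round-up mechanic, using the US$3.50 coffee from the example (the function name is hypothetical; Decimal avoids floating-point surprises with currency):

```python
from decimal import Decimal, ROUND_CEILING

def round_up_transfer(price: Decimal) -> Decimal:
    """Return the spare change between the price and the next whole dollar."""
    dollars = price.to_integral_value(rounding=ROUND_CEILING)
    return dollars - price

coffee = Decimal("3.50")
print(round_up_transfer(coffee))  # 0.50 swept into the retirement fund
```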
Tanya Ott: Who doesn’t want to reduce the friction and make it easy to make smart choices, in business and our personal lives? In today’s show Deloitte’s chief data scientist Jim Guszcza shared some ideas on how we can use technology to live and work smarter. We’ll have more conversations from the Nudgapalooza event at Georgetown University in upcoming episodes of the podcast. I’m Tanya Ott for the Press Room. Thanks for listening. We’ll be back in about two weeks.
This podcast is provided by Deloitte and is intended to provide general information only. This podcast is not intended to constitute advice or services of any kind. For additional information about Deloitte, go to Deloitte.com/about.