Nudging New Mexico
After the Great Recession, New Mexico’s Department of Workforce Solutions (DWS) tackled the difficult job of recovering money from people who’d been overpaid unemployment insurance. Joy Forehand of DWS and Deloitte’s Mike Greene spoke with Tanya Ott about how a combination of behavioral economics and analytics proved surprisingly effective.
Everybody embraces technology. You do want to move that next [thing] forward, but at the same time really be able to step out of the box and look at what are some of the consequences of bringing that technology on board. Or where are some of the complications that are a result of that?
Deloitte Review, issue 18
TANYA OTT: New Mexico allows itself to think very differently about how it uses technology—makes some minor tweaks, and saves millions.
TANYA OTT: This is The Press Room, Deloitte University Press’s podcast on the issues and ideas that matter to your business today. I’m Tanya Ott and today we’re going on a journey with two data geeks who dug into the numbers and discovered some real surprises.
TANYA OTT: Joy Forehand has always loved numbers and policy. But she also loves talking and teaching. She started working for the state of New Mexico as a labor economist, but then moved to communications director and public information officer. It’s not a typical career trajectory.
JOY FOREHAND: I don’t know a lot of people who are able to have those opportunities in your career where you can have something as far apart as economic and employment numbers matched with your passion in communications and marketing and have those meld into a perfect scenario. So I’m very, very lucky and very grateful.
TANYA OTT: These days Joy is deputy cabinet secretary of the New Mexico Department of Workforce Solutions.
JOY FOREHAND: We’re one of the smaller state agencies. We’re not one of the mega ones that you see that’s thousands and thousands of employees. But what I always tell everybody is that although we’re smaller in staffing size, the range of our services and our programs is incredible.
TANYA OTT: They compile and analyze New Mexico’s labor statistics. They handle labor relations and workforce training. But their biggest, most high-profile program is the unemployment insurance program. If you’re laid off in New Mexico, this is the place you turn to.
(phone messaging system for Department of Workforce Solutions)
TANYA OTT: When times got tough in 2008, that phone number got a lot of use.
JOY FOREHAND: The Great Recession hit every state so hard, with the huge layoffs and with so many people looking for work. Our little state went up to 60,000 individuals who were certifying for benefits every week, when in regular, normal economic times we’re around 12,000—so, a huge jump. Huge strain on the system, and huge strain on the staff to be able to start processing things faster and to still keep high levels of accuracy and collect the right information and be bombarded and try to serve all of the customers who had just lost their jobs.
TANYA OTT: They also had to be on the lookout for fraud and overpayments. The US Department of Labor estimates that in 2014, states made more than $5 billion in improper unemployment insurance payments. In previous years that number was even higher. Some of it is fraudulent claims, but most happens simply because unemployed people fill out paperwork incorrectly or don't understand eligibility requirements.
During the economic recovery, New Mexico’s Department of Workforce Solutions invested a lot of money into marketing campaigns and education efforts aimed at consumers. They also upgraded their computer system so it could better cross-match databases to catch overpayments. Then came the difficult job of tracking down people who’d been overpaid and getting that money back.
JOY FOREHAND: Most of the approaches and initiatives that we do are just “pay and chase.”
TANYA OTT: I would imagine the chase itself costs your department money because it takes resources to be able to track down that money.
JOY FOREHAND: It takes a lot of staff time and a lot of resources to be able to chase and to be able to get back improper payments. And so that was one of our priorities: How do you work smarter and not harder?
TANYA OTT: To do that, Joy called in that other data geek I mentioned at the start of the podcast.
MIKE GREENE: I’m Mike Greene. I’m a data scientist with Deloitte Consulting. In the last few years companies have started really wanting to analyze all of the data they’ve been collecting and it’s gotten cheaper to have big databases and big data revolutions. And somebody finally asked, well why are we storing all this stuff? Can we actually get some kind of benefit out of it?
TANYA OTT: In New Mexico’s case …
MIKE GREENE: Could we figure out in advance where they were paying people who were ineligible—the improper payments?
TANYA OTT: Mike and his team scoured state databases—the ones with unemployment claims, the ones with payroll taxes. They hunted for anything that might help them better understand why someone was overpaid and be able to better predict future overpayments.
JOY FOREHAND: How do we prevent it from even happening? So we don’t have to pay and chase? So we don’t have to find it because it never happened?
MIKE GREENE: So there’s no silver bullet here. It’s not like we’re able to say, okay, if we see this one thing then that’s it—we know they’re going to cheat a little bit or not enter accurate information. It just doesn’t work that way. Most people are fully honest, fully accurate with what they enter. We needed to find a combination of things. We found that we needed to use 40–50 different variables, which, when all combined, gave us a better idea that maybe this is a situation where we wanted to act.
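The kind of model Mike describes—dozens of claim attributes combined into a single risk score, with no one variable deciding the outcome—can be sketched with a simple logistic combination. Everything below is illustrative: the feature names, weights, and counts are invented for this sketch, not taken from the actual DWS model.

```python
import math

# Invented feature weights; a real model would learn these from
# historical claims and use 40-50 variables, not four.
WEIGHTS = {
    "weeks_claimed": 0.04,
    "prior_overpayment": 1.2,
    "earnings_reported_ever": -0.8,
    "employer_wage_mismatch": 1.5,
}
BIAS = -2.0

def overpayment_risk(claim: dict) -> float:
    """Combine claim features into a 0-1 overpayment-risk score."""
    z = BIAS + sum(WEIGHTS[k] * claim.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash to (0, 1)

# A hypothetical claim: ten weeks claimed, a prior overpayment,
# never reported earnings, and a mismatch against employer wage data.
claim = {"weeks_claimed": 10, "prior_overpayment": 1,
         "earnings_reported_ever": 0, "employer_wage_mismatch": 1}
score = overpayment_risk(claim)
print(round(score, 3))
```

High-scoring claim-weeks are the ones where an intervention—like the pop-up message described below—would be triggered; low-scoring weeks proceed as usual.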
TANYA OTT: Before we get to what the team actually did, it’s important to understand how people file for unemployment in New Mexico—and in most states around the country. Gone are the days of walking into an unemployment office, sitting down with a caseworker to explain your situation and filling out paperwork. These days, it’s mostly done online or by calling that toll-free phone number.
MIKE GREENE: You know back when everybody was filing on paper there was no opportunity to, you know, hide in that way. But now that things are going online, you’re interacting with a computer that’s a lot easier to see as an impersonal machine or a system, as opposed to a person in front of you.
TANYA OTT: That means it’s easier to cheat or, more likely, just get something wrong. You can’t ask a computer screen to explain a question in more detail. Mike’s and Joy’s teams drilled down into the data and discovered there were certain points in time—certain screens on the computer—where users entered the most inaccurate information.
MIKE GREENE: And then we went back to the screen in the system to figure out what it looked like. What was it people were seeing and how could you misinterpret or answer differently in that situation?
TANYA OTT: So you were actually looking at the way it was phrased on the screens—is it a matter of somebody misinterpreting and being confused by the question or, on the other extreme, did they just answer incorrectly in order to get a benefit?
MIKE GREENE: Right. And we don’t know what any individual person’s motivation was. We’re not inside anybody’s head. But we do sort of know that as we started making changes to the screen—
TANYA OTT: They ran a little science experiment and gave some new unemployment filers the old language and some new messages.
JOY FOREHAND: It became, let’s provide messaging where it has the biggest impact. Where it can really impact somebody’s decision and nudge them, sway them into providing really robust information and correct information.
MIKE GREENE: The behavior did shift pretty dramatically and pretty quickly.
TANYA OTT: Can you give me an example of a switch in wording? Is there something that stands out where it was worded this way and then you changed it and worded it that way and then got completely different results?
MIKE GREENE: The one that’s really my favorite—you file your initial claim and figure out, “Okay, I’m eligible to get paid.” Then what happens is you have to come back every week and there are requirements so that you get your benefit check every week. You have to be actively looking for a job. If one comes up you have to be available. There’s a series of things and there’s basically one page with five or six points on it and it’s sort of this key page. And you see this page every single week that you get paid for benefits—usually unemployment insurance can be up to six months, 26 weeks. So you see this page a lot. And the first few weeks you read it, it’s kind of novel, but after that you kind of know the answers to the questions. And one of those questions in particular reads, “Did you work last week?” And if you click “No, I didn’t work,” then you go forward and you get paid. If you click “Yes, I did work,” then it brings up another page and it asks you to report how much you got paid, who paid you, etc. What happens is the amount that you get paid, the amount you earned, gets deducted from your benefits. So your incentive is to click “No, I didn’t get paid yet.” But that’s also easily verifiable, right, and that’s one of those types of overpayments that I talked about we found earlier as we were going through their data.
TANYA OTT: So, essentially the question initially said “Did you work last week?” and somebody knew they’d have to click no in order to get their full benefits. How did the question then change?
MIKE GREENE: So this is the sort of interesting part. We didn’t actually change the wording of the question. What we wanted to do was have the experience on the page be a little bit different, but only in weeks where we thought people weren’t being, maybe, fully accurate. So let’s say you were eligible. You got your benefits. You come in for maybe a few weeks, two, three, four weeks, whatever. You’ve seen this page. You didn’t earn any money. You honestly and accurately click, “No, I didn’t earn anything.” But then you sort of show up the next week, and you did earn a bit of money. Well that’s the point where you really have a chance to affect someone’s behavior because in the back of their head they probably know that they earned some money and they may know that question is sort of getting at that. So if by using predictive analytics, we can figure out this is the likely time and situation where we should intervene, and you click, “No, I didn’t work,” that’s when a pop-up would show up on the screen that you’ve never seen before. And in there would be a message. And one of the most effective messages that goes up there says “Nine out of ten people in your county accurately report their earnings every week.” And then it gives you an option to continue or to go back and change your answer. And what we’ve seen is that people who see that message are about twice as likely to report earnings as people who are randomly selected and don’t see that message.
TANYA OTT: Mike says they shamelessly borrowed that idea from across the pond.
MIKE GREENE: The behavioral team from the United Kingdom tried this a few years ago where they had a bunch of people that owed back taxes. The kind of standard thing you do, I guess, to get people to pay you is send a letter and it has all of this legalese that says, “Pay up or else …” [or] whatever those letters say. They wanted to try one new thing and they added a single sentence to that letter. One had the usual version that goes out and then one had one more sentence at the top. And it said, “Nine out of ten people in your town pay their taxes on time,” and they randomly sent the two versions of the letter out to different people. They found that the people who received the letter with the additional sentence were more than 20 percent more likely to pay, resulting in hundreds of millions of pounds in savings.
TANYA OTT: So that’s like, social nudging, or what we’d call in middle school, peer pressure. You don’t want to be the outlier in your community.
MIKE GREENE: Exactly. Behavioral economists would call it social norm, but like you said, peer pressure is what the rest of us would call it.
TANYA OTT: What was the most surprising thing that you discovered along the way in this?
MIKE GREENE: I think the most surprising thing was how quickly we were able to get results. The system went live on, I think, Thursday or Friday and by Monday we had a statistically significant result where we could already tell the messages were having an effect and people were reporting more earnings. And I was really surprised by how quickly we could figure that out.
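The quick read Mike describes—checking within days whether claimants shown a nudge report earnings at a significantly higher rate than a randomly selected control group—comes down to a two-proportion z-test. The counts below are invented for illustration; they are not New Mexico's actual numbers, just plausible figures matching the "about twice as likely" result.

```python
import math

def two_proportion_z(success_a: int, n_a: int,
                     success_b: int, n_b: int) -> float:
    """z-statistic for whether rate A exceeds rate B."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented counts: of 500 claimants shown the nudge, 40 reported
# earnings; of 500 controls, 20 did -- roughly "twice as likely."
z = two_proportion_z(40, 500, 20, 500)
print(round(z, 2))  # z above 1.96 is significant at the 5% level
```

With even modest daily claim volumes, a doubled reporting rate pushes the z-statistic past the 1.96 threshold quickly, which is consistent with seeing a significant effect within a few days of going live.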
TANYA OTT: What didn’t work? The same old, same old.
MIKE GREENE: One of the wordings we tried out was to cite the New Mexico statute citing penalties for fraud and talking about what will happen to you if you get caught.
JOY FOREHAND: In the unemployment insurance program we’re notorious for always citing state statutes and state regulations. This program is very cumbersome. So some of our pop-ups were just stating things in regard to our fraud penalty, like “Per New Mexico statute … blah, blah, blah, blah, blah … you may be liable for 25 percent of your fraud penalty.” And it’s just this citation about state statutes, something that goes out in our correspondence.
MIKE GREENE: Well it turns out that was one of the worst-performing messages. In fact, putting that in a pop-up had no effect at all.
JOY FOREHAND: They scored the lowest. The lowest!
MIKE GREENE: You might as well not even show a message.
JOY FOREHAND: And so it really caused us to sit back. It slightly surprised us, but then we realized if it’s not effective in the system then it’s also not effective in correspondence.
MIKE GREENE: So the framing and the wording really matters in terms of how you communicate with people.
TANYA OTT: Had this kind of thing been done before?
MIKE GREENE: This is pretty new in the public sector. States are just starting to play around with the idea of behavioral analytics and nudges. I think we’re combining a few things here that are, honestly, even pretty rare in the private sector. Where you’ve got pretty good analytics, you’ve got behavioral economics and you’ve got A/B testing or randomized control trials. And combining those things together, I think maybe some technology companies are doing that but it’s pretty rare to bring all of those things together to start to change minds.
TANYA OTT: New Mexico tested a lot of messages and continues to do so even today. And it’s learning lessons along the way.
JOY FOREHAND: We can do little tiny changes to messaging and then let that sit and watch our customers interact with the system and then look at the analysis and say, “Oh, this message worked, but this didn’t. This screen change worked or this didn’t. Asking the customer to put their initials so that they certify that above is true on every screen—that worked okay. Maybe if we do this …” So it’s that continual change, too, that is exciting for the economists. It’s exciting for everybody.
TANYA OTT: Make that everybodies.
JOY FOREHAND: It became clear that there’s a diverse team that has to come together to evaluate this. You have to have your IT team in the room constantly for any kind of system updates or coding changes that this is going to require. You need to have your communications group to be able to approve messages and to be able to research potential changes to messages or to look at other studies where different communication interventions may be working better. You have to have your legal team in there. You have to have your policy team in there to be able to dictate what’s in line with the program and what’s not. You have to have your operations people. We often have a customer service person in the room because they’re the ones on the other side of the phone who will be reading those pop-ups to some of our claimants who aren’t using the online system but are calling in. So you have to have a really diverse group. Then of course you have to have the economists and the statisticians to be able to report on the results and to be able to suggest any changes or tweaks that need to be made to the model and do that correctly.
TANYA OTT: You can learn more about what worked for New Mexico—and just as importantly, what didn’t—in the article Nudging New Mexico: Kindling compliance among unemployment claimants at dupress.com. Also, check out our podcast archive. There’s a lot of good stuff there. Subscribe so you don’t miss anything, and give us a rating and leave a comment—it helps us show up better when someone else searches the podcast store. Follow us on Twitter @dupress and you can always email us at firstname.lastname@example.org. I’m Tanya Ott. Thanks for listening!
This podcast is provided by Deloitte LLP and its subsidiaries and is intended to provide general information only. This podcast is not intended to constitute advice or services of any kind. For additional information about Deloitte LLP and its subsidiaries, go to Deloitte.com/about.