Nudging New Mexico: Kindling compliance among unemployment claimants
Deterring small-scale fraud, such as by those who file exaggerated weekly unemployment reports, often requires more resources than the effort is worth. But by applying concepts from behavioral economics—from making a process less confusing to communicating that accurate filing is the social norm—administrators can induce more honest behavior.
It’s another brutal Monday morning. The New Mexico sun is all but burning a hole in the sidewalk in front of your home. Eight weeks ago, after nine years on the job, you abruptly turned from a seasoned employee into a corner the company could afford to cut. Which brings you here, to your home office where you log in and click your way through the same form you filled out last week, and the week before, and the week before that. . . . “Did you complete two employment-related activities this week?” Yes, you submitted your resume to one employer and sat through an informational interview with another. The tossing and turning don’t count. Click, click. “Did you work last week?” You’re on the website, for crying out loud, applying for “benefits,” which are inescapably a pay cut. It’s been six weeks of “no,” but this time, you hesitate. On Thursday, a sympathetic friend offered you a shift, unloading dry goods at his grocery store. But the check won’t show up for two more weeks. Once again, you click “no.” And who could blame you?
We all have such fleeting, mundane moments. Maybe the wording was confusing, or maybe you reinterpreted a question in a way that psychologists call “motivated cognition.” Few would admit to these temptations, but the willingness to cheat, even slightly, is a thorny issue for public benefits programs.
On the other end of the spectrum, there’s identity theft, which is often cerebral in its conception and devastating in its effect. And that’s why it makes for great headlines—someone mysteriously masquerading as your clone files for benefits or adds to your credit card balance from some town you’ve never visited. Like many great stories, it has crime, intrigue, and (maybe) a happy ending. But such elaborately orchestrated schemes account for only a small fraction of state agencies’ erroneous payments.
Nearly one dollar out of eight distributed under the unemployment insurance program by US states went to someone who was ineligible.
First, the good news: Most people who receive benefits are completely honest. In the past year, about 15 million people filed for unemployment across the country, and most of them truthfully answered the questions when they applied for benefits.1 Now for the bad news: Nearly one dollar out of eight distributed under the unemployment insurance program by US states went to someone who was ineligible.2 Of course, not all of these overpayments are due to dishonesty; some are the result of a misunderstanding or of an appeal from a claimant or employer. However, the total came to over $4 billion in 2014 alone. States distribute unemployment benefits from trust funds that are 100 percent financed by employer taxes. That means every incorrectly paid dollar comes out of a company’s bottom line. Fixing this problem can mean reducing taxes, lowering prices, or increasing wages.
In applications ranging from hiring employees to underwriting insurance contracts to triaging patients, “playing Moneyball”—using data analytics and predictive models to make decisions—reliably outperforms unaided judgment. But predictive models face what might be called a “last-mile problem”: They yield benefits only if appropriately acted upon. Often, this is (in principle) a straightforward economic decision: If a model indicates that Aparna will code better than Ben, you might hire her first; if Carly is predicted to be a riskier driver than Dev, you might set her premium accordingly; if Eddy’s and Frida’s symptoms slot them in the lowest- and highest-risk cells, respectively, of an emergency-room triage decision tree, you might send him home and admit her to intensive care.
But in many situations, the “augmented intelligence” offered by predictive models isn’t enough: Augmented behavior is needed as well. Suppose a fraud model suggests that Greta has higher-than-average odds of embellishing her insurance claim. Though valuable, such indications by themselves don’t necessarily warrant decisive action, and it is expensive to investigate everyone. It turns out that a certain behavioral economics application can prompt some cases to “self-cure”: Targeted behavioral nudges (such as pop-up screens on a website) can be designed and optimized to invoke people’s intrinsic desire to be good citizens. To take another example, suppose a predictive model indicates that Hal is likely to fall behind on his child support payments. Low-cost nudges such as invoking social proof and using commitment devices might lessen Hal’s risk of falling behind. Such ideas should not be taken on faith. Whenever practical, such nudge tactics should be field-tested using randomized controlled trials to estimate their economic benefits.
Data science and behavioral science can be viewed as two parts of a greater whole. Behavioral science gives us a powerful new set of tools for acting on data analytic indications when behavior change is the order of the day. And data science can help overcome “the flaw of averages” by moving from population-wide to personalized behavioral interventions. For a specific person, there might be a specific intervention with his or her name on it.
See “The last-mile problem” in Deloitte Review 16 for more information.3
At this point, you’re probably wondering what states are doing about this. Right now, they’re scrambling to combat identity theft and criminal activity. They hire investigations staff, install fancy software, and bring cases to trial when they uncover a new scheme. But it turns out that the mundane issues, such as understating earnings, are the largest piece of the pie (see figure 1). While identity theft and criminal activity are certainly expensive for states, they are relatively rare compared with the more frequent small lies people tell when applying for benefits.
In an effort to tackle these smaller but significantly more prevalent issues, the New Mexico Department of Workforce Solutions (DWS) combined insights from data science and behavioral science to identify and gently nudge individuals into more honest, consistent behavior. Through this case, we explore how these two sciences work in tandem to solve “last-mile problems” (see sidebar for more information) and bring about behavior change. These are the complex issues that benefit from both predicting behavior (such as identifying those likely to commit fraud) and, in turn, nudging those individuals with behavioral tactics to act a bit more honestly.
In this article, we discuss why fraud is both difficult to predict and, often, even harder to prevent. Next, we demonstrate how the careful application of behavioral science in public policy can powerfully influence individuals’ decisions. We conclude with real-world examples of how the New Mexico DWS successfully increased honest reporting while reducing improper payments in three key moments: during the vetting process for eligibility, when individuals report work and earnings, and while determining an action plan to seek new employment.
The fundamental principle of unemployment insurance (UI) is simple: to provide a safety net or replacement wages for people who have lost their job through no fault of their own, with the goal of helping them to rejoin the labor force. However, anyone who has experienced a UI program—as a claimant, employer, or state labor agency—quickly realizes the program is far more complex. UI is subject to federal and state regulations while also involving all three parties in most of the processes and determinations. With this level of complexity, reducing improper payments in an effective and substantial way is a significant challenge.
Fraud detection and prevention is an imprecise science for a few reasons. To start with, the very definition of “fraud” can be murky. Sure, there are clear-cut cases of criminal activity, such as identity theft, but smaller, less severe issues make up the vast majority of all inaccurate submissions. In some ways, labeling an application as fraudulent can be akin to stitching a scarlet F on the file. If a person is wrongly accused, the damage is even greater.
Mislabeling someone as fraudulent may have an even greater downside for state agencies: It withholds benefits from the neediest beneficiaries. Complicating matters is the low base rate of fraud: Fraudulent payments make up less than 1 percent of all payments. Unless a very good mechanism is in place to sort legitimate payments from their fraudulent counterparts, many deserving beneficiaries might be impacted. In addition, wrongly accused UI claimants can—and sometimes do—file lawsuits against state agencies that accuse first and investigate later. When false fraud accusations, lawsuits, and negative headlines are involved, few parties walk away happy.
Unless a very good mechanism is in place to sort legitimate payments from their fraudulent counterparts, many deserving beneficiaries might be impacted.
Acting on a suspicion but not a certainty is particularly fraught because of the high cost of being wrong and the fact that fraud is actually very rare. Let’s say we find a pattern in the data that occurs only 5 percent of the time but leads us to every case of fraud. If we stumble on a case that exhibits this pattern, what are the chances we are onto some fraud? The obvious answer is 95 percent, right? It turns out that your brain is taking a shortcut and neglecting a few things. Daniel Kahneman, a founding father of behavioral economics, calls this “base rate neglect”: The human brain can’t properly grasp the low base rate of rare phenomena such as crime.4 First, our intuition hits a snag: We typically fail to account for the fact that the overall fraud rate might be really small. Second, you might have forgotten about those picked up by the pattern who are, in fact, innocent. If fraud happens 1 percent of the time, but the pattern flags 5 percent of all cases, then the chance we’ve found a fraud is really only 1 in 5, or 20 percent. In other words, there’s a 4 in 5 chance we’ve snagged an innocent. So be careful.
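The arithmetic behind this example can be checked directly with Bayes’ rule. The sketch below uses the illustrative rates from the paragraph above: a 1 percent fraud rate and a pattern that flags 5 percent of all cases while catching every fraudulent one.

```python
# Illustrative rates from the example above (not real program data).
fraud_rate = 0.01   # P(fraud): fraud happens 1 percent of the time
flag_rate = 0.05    # P(pattern): the pattern flags 5 percent of all cases
catch_rate = 1.0    # P(pattern | fraud): the pattern catches every fraud

# Bayes' rule: P(fraud | pattern) = P(pattern | fraud) * P(fraud) / P(pattern)
p_fraud_given_flag = catch_rate * fraud_rate / flag_rate

print(f"Chance a flagged case is fraud:    {p_fraud_given_flag:.0%}")      # 20%
print(f"Chance a flagged case is innocent: {1 - p_fraud_given_flag:.0%}")  # 80%
```

The intuitive answer ignores the denominator: because innocent cases vastly outnumber fraudulent ones, even a pattern that catches every fraud still flags four innocents for every fraudster.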
But fraud detection still requires flexibility, which may clash with overly bureaucratic and rigid programs. Most state agencies looking for improper payments employ a “pay and chase” method. Agencies across the United States pay benefits and try to recover improper payments after the fact. In New Mexico, only about half of the improper payments are ever identified, and, of those, less than half are recovered; New Mexico currently recovers about 15 percent of its overpayments.5
Suppose two people with the same job are laid off the same day from the same company. Both file for unemployment insurance and are deemed eligible. After certifying for three weeks, both find part-time work. Scott sits in front of the computer and answers all of the questions each week, including reporting his wages from his new part-time job. Mike sits in front of the computer and answers the exact same questions, but he decides not to report his wages, just this week. Two months later, Mike faces an improper payment and fraud investigation. Why did Mike have such a different reaction to the standardized program and exhibit different behavior? What interventions can be applied to reduce this negative behavior within the confines of a standardized online system?
State labor agencies are required to administer their programs in a standard and uniform manner, both to meet legal requirements and to ease the administrative burden of determining initial as well as ongoing eligibility.6 At the same time, agencies must serve diverse populations that come from different industries, levels of work experience, income levels, and educational attainment. In addition, all customers have different behaviors and experiences with the program. It is therefore no surprise that administering a program in a very rigid and scripted manner to react to an ever-changing customer base would result in a high percentage of improper payments. States must therefore develop a new approach, one that better targets and changes customers' behavior and improves the self-reporting of wages earned, returning to work, or any of the myriad situations that directly impact an individual’s eligibility to receive benefits.
A final point is that most state agencies aren’t sitting on a pool of idle resources. Identifying which cases have been overpaid requires investigation. Today, most agencies devote resources to investigating cases that appear fraudulent and pursue potential overpayments through technological means (such as database systems) or traditional means (such as tips and leads or fraud hotlines). The New Mexico DWS investigators get more than 45,000 potential items to work on each year, and the investigations unit is able to work on approximately 25 percent of them.7
Most states actually have systems in place today that determine whether money gets to the right people. In 2013, New Mexico updated its software and processes to manage UI claims. The changes reduced fraud by 60 percent.8 Today, the DWS sends letters electronically to employers to confirm that the person claiming benefits is truly eligible. Adjudicators compare databases of wages from employers with answers to questions such as “Did you work this week?” States even go back and audit a sample of past cases. These auditors dig even deeper, interviewing the claimants and employers, and poring over records to make sure everything is in order. The trouble is that most of these tools can help find issues that happened in the past rather than avoid issues before they happen. When a state UI agency finds a case where money was paid to someone who is then deemed ineligible, it has to figure out how to recover some of that money.
Despite the major technological gains and reductions in fraud, overpayments continued to be made. The agency wanted to continue to innovate to reduce improper payments but was worried about the downsides. Agency leadership laid out three goals: Reduce improper payments; don’t take benefits from people who are eligible; and don’t increase the workload on an already-overburdened staff.
The tried-and-true methods probably wouldn’t work. The New Mexico DWS, like many other state unemployment insurance agencies, has always combated improper payments and fraud with policy changes, major training initiatives for frontline staff, and the latest tools and modules.9
In the United Kingdom, a new approach to collecting payments, inducing honesty, and communicating public policies was developed by the Behavioural Insights Team (BIT). This group looked to employ concepts from the field of behavioral economics to inform government communications. Behavioral economics combines elements from economics and psychology to understand human behavior—even when it’s irrational.10 BIT put itself on the map with a simple tax collection experiment that resulted in the “190 million pound sentence.”11 Each year, people behind in paying taxes were sent a letter filled with legalese, detailing the penalties for an individual who doesn’t pay taxes. In an attempt to encourage more people to pay their taxes, the team leaned on a behavioral concept, social norms. This concept explains how the accepted behaviors of others implicitly encourage individuals to act in a similar manner.12 Putting this information to use, the team created two nearly identical letters, except that one letter contained a single additional sentence: “Nine out of 10 people in your town pay their taxes on time,” with “your town” replaced by the name of the town in which the recipient lives. People owing back taxes were randomly sent these different versions of the letter, and the results were compelling: Those who received the letter with the additional sentence paid their taxes at a rate 23 percent higher than the group that did not see the socially driven statement. This is one of the first recorded cases showing that the careful deployment of simple behavioral techniques such as social norms can lead to remarkable outcomes in government policy.
Elsewhere in the field of behavioral economics, priming has proven to be an effective tool to subtly encourage honest behavior. Priming occurs when an individual is exposed to a specific stimulus that influences his or her ensuing actions.13 In one experiment designed to influence honest behavior, researchers “primed” people with a stimulus that involved morality and then observed how often cheating occurred when solving small math problems. When the participants were asked to recall the Ten Commandments, cheating significantly decreased compared with those who were instead asked to recall the names of Shakespeare’s sonnets.14 Interestingly, this finding holds even among self-proclaimed atheists. Researchers observed a similar result when reminding Massachusetts Institute of Technology students that the school’s “honor code” applied to a similar math-problem test—despite the fact that MIT does not have an honor code. Armed with the behavioral insights toolbox, New Mexico was ready to embark on an experiment of its own: to determine if these same principles could help solve the thorny problem of improper payments in the UI system.
To our knowledge, no UI agency had yet used behavioral insights to reduce improper payments. The key would be to improve the design of communications and notifications for claimants. Deloitte Consulting LLP and the New Mexico DWS collaborated on an initiative to reduce fraud and overpayments in the UI program. We revisited how claimants interact with the system for filing and reporting earnings and work search activity. That meant going back to each screen an applicant sees and redesigning key experiences. With communication, design, and behavioral insights in mind, a red pen was taken to the screens and forms, standardizing and simplifying the language and the customer experience.
When filing for unemployment insurance, you generally file an online application to determine if you are eligible and, if so, how much you could potentially receive. Each week you are eligible, you return to the website to certify that you are eligible by answering a few questions. Before the Web, customers filed paperwork, visited offices, and met one on one with case workers to provide the same information. Ironically, while the online systems made it more convenient for customers to file these applications and certifications, technology also anonymized the system to the customers. No longer do you discuss job prospects with a live person. You just file online and hit “submit.”15 Behavioral researchers have found that when you feel you are interacting with a computer, you are more likely to be dishonest. Mentally, you identify the computer or website as a personless system.16 Inadvertently, it may encourage dishonesty. In person, the departmental representatives may be more likely to pick up on subtle hints from body language and eye contact to detect dishonesty. The goal of the collaboration was to design a system that feels more human.
Behavioral researchers have found that when you feel you are interacting with a computer, you are more likely to be dishonest. Mentally, you identify the computer or website as a personless system. Inadvertently, it may encourage dishonesty.
New Mexico identified one key moment during the initial application and two more in the weekly certification process where a customer providing inaccurate data leads directly to overpayments. These key moments account for a majority of overpayments in New Mexico each year.
Imagine for a moment that you have lost your job and are filing for unemployment benefits with the state. One of the first steps is to fill out a series of online forms. Not everyone is eligible for unemployment benefits. In most states, you are eligible if you’ve been laid off—seasonal jobs, closed factories, and so on. If you were fired because of absenteeism or other reasons, usually you are not eligible for benefits. As you fill out your application online, one of the key questions is the reason you were separated from your former employer: layoff or other reasons. Nudging claimants to enter more accurate information on separation reasons is the first key moment New Mexico tackled.
Of course, there was already a process in place to verify separation reasons. Employers are notified and asked to verify that the person claiming is, indeed, eligible. The trouble is that not all employers respond right away, or there might be a misunderstanding or disagreement between a claimant and an employer. When this happens, claimants are typically given the benefit of the doubt. Unfortunately, each time a claimant is paid in error, this type of separation overpayment can be very costly—eight weeks of payments are made on average.17
The goal, then, is to help claimants fill out the right answer up front. Although claimants are notified that employers will be contacted, this information might be buried among pages of requirements and notices. Instead, a procedural change was implemented to show claimants a copy of the verification letter that goes to their employers. By showing a letter to the applicant with his or her name and showing that this will go to the employer, New Mexico is trying to make the process more transparent and real in the mind of its applicants.
In addition, it turns out that a quirk of the paperwork may also discourage accurate information. In the online system, New Mexico asks follow-up questions or “fact finding” of applicants when certain answers are provided. For example, if an applicant indicates he or she has been terminated for cause, the applicant is prompted with follow-up questions seeking more information. However, there were no follow-up questions if the applicant selected “layoff” as the reason. The layoff reason was, in fact, the path of least resistance. People don’t like answering additional questions. This fact has troubled survey designers for years. The state was inadvertently pushing people to an answer that leads to overpayments. This is an example of choice architecture—the notion that how options are organized influences how people make choices.18 Or more specifically, this was an example of a less-than-ideal choice architecture since it influenced unintended behavior.
To improve the choice-making process, new screens were prepared with letters to employers more prominently displayed and a few additional fact-finding questions. One group of claimants was randomly shown these new screens, while a control group saw the traditional screens.19 After a few months, applicants who were shown the new screens behaved differently: They were 10 percent less likely to have been paid benefits and 15 percent more likely to have self-reported something that delays payment until additional investigations are complete. Each person responding more accurately helps to avoid an overpayment and typically generates no additional staff work. Utilizing these behavioral insights for this single moment put New Mexico on track to save millions each year.20
After separation reasons are properly vetted, the second key moment occurs during the continued interaction with unemployment beneficiaries. As part of the process, the claimants log in to the system every week and certify that they are, indeed, still eligible by reporting personal availability, earnings, and work search behavior.21 Usually, money a person earns while receiving benefits is deducted from the weekly benefit amount. The logic is that working can earn a person more each week than benefits can provide alone.
During this weekly certification process, there is a key question: “Did you work this week?” If you have worked, you are prompted to enter the details. If you actually earned wages but did not report them, you might be committing fraud. New Mexico already had a process for identifying these issues using wage records from employers and databases of new hires. Unfortunately, it can take up to six months for these issues to be detected, investigated, and adjudicated. In the meantime, claimants may be earning wages and wrongly paid benefits.
Imagine, once again, that you are sitting in front of a computer answering questions online to get your benefits. You were eligible when you applied and for the first few weeks. But then you get a job and start earning. If you report your earnings, you know your benefit check will be reduced. So you might think it’s not a big deal if you click no, right?
Behavioral insights can help in a few ways here. First, it is easier to be just a little dishonest. Work by the behavioral economist Dan Ariely on the “personal fudge factor” shows that we are more likely to cheat over a small amount of money than a large amount.22 But for a claimant, each subsequent lie compounds and grows. Second, a claimant is interacting with a seemingly inanimate system. People tend to find it harder to be dishonest when interacting with another person than with a system. The good news is that we can employ what we know about social norms and priming to try to prevent these small, often compounding lies. For instance, many people incorrectly believe that others regularly cheat the system, so they can too.23 We hypothesized that when individuals are primed by the (truthful) information that most people are honest when reporting earnings, the claimants would be encouraged to behave more honestly. To test this assumption, some claimants were required to certify to the accuracy of the provided information and sign their initials online. Research suggests that personal attestation can improve honesty. In fact, some researchers have shown that requesting a personal attestation up front can reduce cheating, compared with an attestation at the end.24 Lastly, clearer wording that spells out the requirement in simple language has been shown to improve the accuracy of reported data.25
In total, the New Mexico DWS tested three behavioral nudges: a pop-up message on the screen; a personal attestation or certification box; and clearer, simpler wording. For the pop-up messages, 12 different wording options were tested in two formats: either showing the same message or an escalating series of messages. Two versions of the certification box were tested: one placed before the key questions and one placed after. In all, 84 combinations were tested.
Claimants were randomly assigned to groups for these different messaging options to ensure a consistent user experience as well as to accurately measure results. Each group of claimants was tracked to understand if different messages promote different behavior and associated overpayments.
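Assignment along these lines can be sketched in a few lines of code. The version below is a hypothetical illustration, not the DWS system: the arm labels are placeholders, and hashing the claimant ID (rather than drawing a fresh random number each week) is one way to keep the same claimant in the same group across weekly certifications, which supports the consistent user experience described above.

```python
import hashlib

# Hypothetical experiment arms standing in for the 84 tested combinations
# of pop-up wording, message format, and certification-box placement.
ARMS = [f"arm_{i:02d}" for i in range(84)]

def assign_arm(claimant_id: str, experiment: str = "ui-nudges-v1") -> str:
    """Deterministically assign a claimant to one experiment arm.

    A hash of the claimant ID gives an assignment that is effectively
    random across claimants but stable for any one claimant, so the
    same person sees the same treatment every week.
    """
    digest = hashlib.sha256(f"{experiment}:{claimant_id}".encode()).hexdigest()
    return ARMS[int(digest, 16) % len(ARMS)]

# The same claimant always lands in the same arm:
print(assign_arm("C-1001") == assign_arm("C-1001"))  # True
```

Logging each claimant’s arm alongside their weekly certification outcomes is what allows the groups to be compared later.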
To avoid pop-up fatigue, the claimant is prompted only when he or she is at risk of understating earnings. Here, the system shows a message only when it matters, thanks to predictive algorithms tuned on historical cases of overpayment to isolate situations at the highest risk of overpayment. These algorithms identify each certification as high risk or low risk based on the claimant and situation. By analyzing past claims, we have seen that high-risk certifications typically account for 70 percent of all benefit year earnings fraud.26
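Risk-gated messaging of this kind can be sketched as follows. The scoring function, threshold, and input fields here are all hypothetical stand-ins for New Mexico’s tuned predictive model; only the gating structure (score first, message only above a cutoff) follows the description above.

```python
from typing import Optional

RISK_THRESHOLD = 0.7  # illustrative cutoff, not the state's actual value

def risk_score(certification: dict) -> float:
    """Hypothetical stand-in for a model scoring the odds that a
    certification understates earnings."""
    score = 0.0
    if certification.get("new_hire_record_found"):
        score += 0.5  # a new-hire database match is a strong signal
    if certification.get("reported_earnings", 0) == 0:
        score += 0.3  # zero reported earnings despite other signals
    return min(score, 1.0)

def popup_message(certification: dict, county: str) -> Optional[str]:
    """Show a social-norm nudge only when the certification is high risk,
    avoiding pop-up fatigue for everyone else."""
    if risk_score(certification) < RISK_THRESHOLD:
        return None  # low risk: no pop-up
    return (f"9 out of 10 people in {county} accurately report "
            f"earnings each week.")
```

With this structure, the predictive model decides *when* to intervene and the behavioral message decides *how*, which is the pairing the article describes.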
When predictive algorithms point to a high-risk situation, claimants randomized to see pop-up messages were shown a message to remind or inform. Borrowing from the “190 million pound sentence,” a claimant might see a social-norm-driven message on how “9 out of 10 people in <your county> accurately report earnings each week.” A dozen different messages were tested at first, using different principles rooted in behavioral science. New Mexico has the ability to measure and track the effectiveness of each message, allowing continual improvement as messages that work better are kept, while those that don’t hit the cutting-room floor. This example illustrates the last-mile problem mentioned earlier. New Mexico now has the ability to first identify high-risk individuals with predictive analytics and then subtly nudge them into more honest behavior with behavioral interventions. Conveniently, nudges encourage voluntary actions. Compared with the heavy-handed approach of cutting off benefits, claimants are self-reporting information, leading directly to fewer overpayments.
Claimants have to return each week to certify their eligibility, leading to a large sample size for us to study. In fact, we found statistically significant differences in behavior after only three days. Within a few months, we were able to determine which messages work more effectively at encouraging claimants to report earnings (see figure 2). Overall, claimants who see pop-up messages reported earnings 25 percent more often than the control group. They reported the same amount of earnings on average, but they reported more frequently.
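With sample sizes this large, a difference in reporting rates can be tested with a standard two-proportion z-test. The sketch below uses only the Python standard library and entirely made-up counts (a 25 percent reporting rate in the nudge group versus 20 percent in the control, mirroring the relative lift described above but not New Mexico’s actual data).

```python
from statistics import NormalDist

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Two-sided z-test for a difference in reporting rates.

    x1/n1: reporters and total claimants in the nudge group;
    x2/n2: the same for the control group.
    """
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)          # pooled rate under the null
    se = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Illustrative counts only (not New Mexico's data):
z, p = two_proportion_z_test(x1=250, n1=1000, x2=200, n2=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ~ 2.68, p ~ 0.007 for these counts
```

Because thousands of claimants certify every week, even modest differences in reporting rates clear conventional significance thresholds quickly, which is why differences emerged after only three days.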
Not surprisingly, messages with simpler language worked better. When statutes were referenced or penalties were described in legal language, earnings-reporting rates were comparable with those of the control group, who saw no pop-ups. On the other hand, claimants who saw simple messages such as “Reminder, if you worked last week, you are required to report these earnings even if you have not yet been paid” were 40 percent more likely to report earnings than the control group (message 4 in figure 2). The most effective message to date? A variant on our initial inspiration: “99 out of 100 people in <your county> report their earnings accurately. If you worked last week, please ensure you report these earnings.” If claimants saw this message (message 1 in figure 2), they were nearly twice as likely as the control group to report earnings.
The third key moment occurs when claimants report their work search activities. New Mexico requires that claimants conduct at least two “work search activities” each week in order to receive benefits. These activities can include dropping off resumes, interviewing for a position, or attending training courses. This may seem obvious, but when someone looks for a job, he or she is more likely to find a job. As in the other key moments, the idea is to help people better understand requirements. But we added a second wrinkle here—we implemented a commitment mechanism. Behavioral research suggests that people are more likely to follow a particular course of action, such as seeking a new job, if they articulate, or commit to, a specific goal or objective.27
By reminding people of their requirements and providing resources to look for jobs more effectively, the intent is to encourage people to search for, and find, jobs more quickly.
The UK BIT group has also experimented with using commitment mechanisms to encourage job seekers to find work more quickly.28 The team asked people to make a commitment about how they would look for work. The study found that people who set plans found work more quickly than the control group, getting off public assistance and back into the workforce. People who committed in advance were more likely to actually act on that commitment. In other words, the research showed that simple changes to how advisors talk to job seekers could help hundreds of thousands of people find a job more quickly. A nifty side benefit of the experiment? The staff working in the pilot training center reported higher levels of happiness and job satisfaction.
The New Mexico DWS implemented a variant of the same idea. They are influencing people to engage in job-seeking behavior by asking them to be more specific on what actions they will take each week. When filling out last week’s work search activities, claimants are shown what they had previously planned to do and how they actually stacked up, and then asked to indicate what they will do next week. By reminding people of their requirements and providing resources to look for jobs more effectively, the intent is to encourage people to search for, and find, jobs more quickly. Unlike the other nudges, this messaging is extended to everyone filing weekly certifications. All claimants are encouraged to make this plan.
By following these claimants and the plans they make, New Mexico can see which plans work better for finding jobs. It turns out that the plan matters. Claimants are encouraged to make a plan to inquire, interview, or apply for jobs, online, via phone, or in person. Claimants who plan to interview for a job the following week are 25 percent more likely to find a job than those who don’t plan to interview, while claimants who plan only to inquire about a job are approximately 18 percent less likely to find a job than those who don’t.29 Interestingly, claimants planning actions online may be more likely to find a job than those planning via phone or in-person activities. Although a formal test was not implemented, the DWS internal research shown in figure 3 suggests that claimants who are planning certain work search activities are finding jobs faster.
We no longer live in an age where combating UI fraud and reducing overpayments necessarily requires heavy resource spending and sweeping policy changes for frontline staff. Instead, we can reduce the high incidence of “small” fraud and reporting errors with smart, subtle changes in how we communicate.
By leveraging concepts from the field of behavioral economics, the New Mexico DWS was able to substantially influence claimants’ behavior. To improve the response accuracy for separation reasons, a better, more transparent choice architecture was designed to make the process less confusing and more straightforward for the claimants. For more honest earnings reporting, the new system humanized the process by intimating that the social norm is to accurately report earnings. And finally, in an effort to help people rejoin the workforce, commitment devices were put in place to help workers develop plans in pursuit of their next opportunity.
Perhaps the most powerful element of these findings is their flexibility for other issues. Whether you work for a UI agency or a fraud detection department, or simply want to induce more honest behavior in a process, the behavioral sciences can be an effective toolset. If looking to implement your own behavioral communications, consider these four dimensions:
The beauty of behavioral insights resides in their simplicity. Not every tough problem requires drastic solutions—some problems just need creative ones.