The craft of incentive prize design
Certain common elements in prize design can be thought of as ingredients, combinable in various ways to create prizes aimed at generating specific outcomes.
Designing a successful prize can be a daunting task. No one formula is adequate because each prize addresses a unique problem and set of potential participants whose incentives must be carefully understood.
Many public organizations do not possess all of the skills and capabilities needed to design an effective prize, such as online platform development or marketing expertise. In some cases, the necessary abilities involve distinct and highly specific insights into market dynamics or participant incentives. And in almost all cases, designers need help with problem definition, because a poorly defined problem statement can make it extremely difficult to achieve the desired outcomes.
Despite the unique nature of each problem, designers can rely on certain common elements. These can be thought of as ingredients, combinable in various ways to design prizes that generate specific outcomes. All of the elements matter, together forming an integrated and often complex set of strategic choices. How designers assemble and use them is at the heart of prize design.
There are many ways, for example, to craft a communications strategy to draw the attention of potential participants to a prize. But who should develop the communications campaign and its messaging? What channels should be used? How much time and money can be spent on the campaign? How can we measure its success? These are just some of the questions that designers must answer.
The strategic choices involved in challenge design can be grouped into five core design elements:
Designers consider these elements of prize design from the very early stages of problem definition to the period after the prize concludes, when sustaining participants’ energy and focus can significantly help to achieve outcomes. Below we discuss these elements and feature examples of how designers use them to create, implement, and ensure the legacy of their prizes.
There are four critical resource phases: design, implementation, award, and post-prize “legacy” activities. Depending on the desired outcome, these phases can be quite variable in terms of length, cost, and demand on resources. They can involve a few or many small contracts for vendor services as well as different types of partnerships. Most importantly, as each of these phases unfolds, designers learn a wealth of new information about what successful execution will require, with inevitable impacts on resource requirements and timing.
One major resource requirement, of course, is funding for the purse.1 Since the purse is often relatively small, it can be tempting to view prizes as less-expensive alternatives to more traditional grants and contracts. Even if no one wins the prize, however, its administration costs can be substantial, particularly if the goal is to achieve outcomes that could require significant commitments to marketing, mentorship, and networking. For example, LAUNCH, a global challenge led by NASA, USAID, the Department of State, and NIKE Inc., is intended to identify and support innovative work contributing to a sustainable future. The initiative focuses on spurring collaboration among innovators; it offers no monetary incentives, but instead invests its resources in helping participants develop and scale their solutions.2
Furthermore, prize administration involves significant costs that fall into different categories including, but not limited to, labor, technology platform, marketing, events, travel, and testing facilities.3 It requires a diverse set of abilities and experiences, obtained in-house or through in-kind support from partners and paid vendors. Each designer must define the right mix of in-house and external support by first assessing the organization’s abilities.
Labor costs are involved in developing prize rules, advertising the prize, connecting with participants, administering interactions among stakeholders, judging entries, and evaluating the success of the prize after award. These activities will require a diverse team, with subject-matter experts to develop, advertise, and judge the prize, and experienced administrators to run it.4 Effective designers should consider the labor resources required for each phase of the prize, such as estimating the number of potential submissions to ensure the availability of a sufficient number of judges. Bloomberg Philanthropies’ Mayors Challenge, for instance, assessed how many submissions it might receive by sending RSVP cards to potential competitors.5 The challenge also planned for and included labor costs that extended beyond award to establish a lasting legacy for the prize. For example, post-award coaching, technical assistance, and networking were provided in order to continue to spur action following the award.
The technology platform used to facilitate certain prizes also represents a major cost, as well as a critical component for success. Such online platforms can help target the right audiences, enforce rules, and standardize submissions. NASA’s Mapping Dark Matter challenge, for instance, sought an algorithm for mapping dark matter, an elusive task that has stumped astronomers for years. NASA partnered with the online challenge platform Kaggle, using its leaderboard feature to offer an environment allowing data scientists and mathematicians to collaborate and compete. Kaggle’s platform enabled the creation of a specialized community that ultimately included 73 teams. Within 10 days, a doctoral candidate in glaciology from Cambridge University had built an algorithm that outperformed NASA’s existing one.6 When considering different platforms, designers can evaluate a few key cost elements such as platform access fees and design consulting.7 Appendix D offers more information on online challenge platform vendors.
Additionally, certain administrative costs may be directed toward activities that improve or strengthen submissions, including standard, accessible data; consulting and coaching support; and testing facilities. For example, a number of US government agencies have provided easy access to data and data standards so that developers could improve entries in apps challenges such as DOE’s Apps for Energy and Apps for Vehicles.8 Support was provided more directly in the Progressive Insurance Automotive X PRIZE, where semi-finalist teams received vouchers for consulting services from private consulting firms and national laboratories to improve their designs.9 Testing facilities are another resource that many participants lack when developing prototypes; providing them helps participants improve and iterate their designs in a laboratory setting, and the US government has been a key source of such facilities. In the Wendy Schmidt Oil Cleanup X Challenge, a Department of the Interior testing facility hosted physical and laboratory testing of finalist prototype designs for high-performing oil cleanup equipment, and in the Progressive Insurance Automotive X PRIZE, Argonne National Laboratory provided dynamometer testing of the super-efficient finalist vehicles.10
Because prizes are still relatively novel, designers must often commit resources to mobilizing their own organizations, frequently by enlisting internal champions. Most champions are senior executives, but they can also be other employees with the networks and political capital needed to generate momentum. Champions can clear away significant internal barriers by clearly communicating to employees how solutions derived from the prize will supplement and support those developed within the organization.
Finally, designers should expend resources to find partners that can help fund prizes and play various strategic roles in execution. Many designers carefully assess their own internal capabilities to understand the kind of partner support they may need. As categorized by Raymond Tong and Karim Lakhani, partners can play a variety of roles across a spectrum: a “host” who develops and oversees the prize, a “coordinator” who solicits others to develop operational components, or a “contributor” who assists the hosts with these tasks.11 For example:
Many designers believe that partners from the private, public, and philanthropic sectors can help unleash the full potential of prizes.15 For example, the Hurricane Sandy Task Force launched Rebuild by Design, a multi-stage challenge to create designs that increase the resiliency of the regions affected by Hurricane Sandy. The challenge administration involved a mixture of partners from federal (Department of Housing and Urban Development, National Endowment for the Arts), academic (New York University Institute for Public Knowledge), and non-profit (Regional Plan Association, Municipal Art Society of New York, and Van Alen Institute) organizations. The $2,000,000 purse was funded entirely by the Department of Housing and Urban Development’s philanthropic partners, led by the Rockefeller Foundation. Through this integration of partners, the challenge resulted in the participation of 148 teams from more than 15 countries. Ten finalists received $200,000 and met with community leaders and stakeholder groups to receive feedback and compete for the opportunity to implement their designs.16
When selecting partners, designers often consider a number of factors, including what control may be ceded to partners in prize administration, and how their brands and support can help the prize succeed.
Evaluation includes a broad set of assessment and measurement activities that occur during every stage of a prize. It involves the initial determination of whether it is likely to be effective and appropriate, assessment of the quality of implementation processes, development of the criteria and mechanisms used to select winners (including providing feedback to participants during and after the prize), and evaluation of impact and overall value. Proper evaluation is critical because it can affect whether participants view the prize as fair, shape the validity of the results, and, thus, ultimately determine its success. Effective evaluation is also an essential input to strong prize management, both to improve implementation processes and to inform decisions about whether to use a prize again.
In the early stages of design, there are two useful evaluative techniques. The first, sometimes called “theory of change,” involves identifying how the prize, through its structure, rules, and activities, will incent participants to engage in the behaviors that will help solve the defined problem. For example, a monetary reward may prove to be a stronger incentive for some participants than the opportunity for professional networking or coaching. This is also a good time to determine how prize-generated incentives may be influenced by the external environment (that is, incentives from other domains, such as the market) and other interventions, such as previously existing challenges seeking similar outcomes.
Second, using research and logical analysis, it is important to check whether the planned challenge activities and outputs are likely to achieve the desired outcomes. This evaluative technique includes identifying other factors that would be likely to help or hinder the achievement of these outcomes. The major benefit of this early assessment is that the design can still be changed to address these factors, including adding activities to reduce risks or reinforce positive outputs, such as adding additional elements of a broad program that supports scaling up once the prize has identified winners. To properly evaluate the prize, designers should develop indicators consistent with their theory of change for the prize’s activities, milestones, outputs, and outcomes.
The quality of the implementation processes should be evaluated during and after the prize to determine whether discrete activities were actually successful. For example, some designers undertake special efforts to identify participants with particular characteristics. In some cases, this recruitment involves finding participants with specific technical expertise; in others, the goal may be to engage new and diverse individuals and organizations in the problem-solving space. In all cases, capturing good information about these processes during implementation can guide efforts to iteratively improve engagement activities for the current prize and provide insight into more effective engagement efforts for future prizes. Similarly, evaluation should include looking for patterns of who initially engages but then drops out or fails to continue through several rounds. It may be that the prize needs to be redesigned to provide additional support or that the current process is effectively winnowing out those who are unlikely to provide useful ideas or results.
A unique element of evaluation in prizes is defining the criteria used to select winner(s). In creating these criteria, designers are shaping how participants will work, preventing unintentional and undesirable outcomes and curbing potential fraud. Appropriate selection criteria are grounded in and consistent with the overarching view of how the prize will generate change or solve a problem. Because the wrong criteria could lead participants to submit solutions that do not actually address the fundamental problem, designers often review their selection criteria repeatedly, working with internal and external stakeholders to anticipate and account for all possible responses.
One helpful practice for designers to follow is to open up draft rules for a period of public comment, as was done by USAID recently for its potential challenge for desalination technologies, by the Department of Energy for its potential challenge on home hydrogen refueling technologies, and by NASA for its various Centennial Challenges.17
Designers should also carefully consider whether to use quantitative or qualitative criteria, or a mix of both. The Department of Defense’s HADR challenge, which seeks a kit for use in humanitarian assistance and disaster relief, sets specific quantitative criteria for acceptable solutions—weight of less than 500 pounds, constant one-kilowatt power production, production of 1,000 gallons of water per day, and so on.18
When quantitative criteria are not applicable or relevant, clear parameters and appropriate evaluation arrangements become even more critical. In the case of the Prize for Community College Excellence, the Aspen Institute needed to find a way to evaluate qualitative data about US community college performance. To make this process as rigorous and independent as possible, the institute employs a third-party evaluator that specializes in evaluation criteria framework design and in collecting and analyzing such data to ensure a strong basis for evaluation.19
To ensure validity and objectivity in the evaluation process, designers should determine who will judge submissions. Expert judging can be effective when the desired solution is highly technical, while crowdsourced voting is valuable when the goal is to engage public participation.20 Some organizations have begun to examine how crowdsourced selection can lead to viable solutions. For example, DARPA’s Experimental Crowd-Derived Combat-Support Vehicle Design Challenge solicited vehicle concepts from the public for different missions. The challenge also sought to examine the question, “How could crowdsourced selection contribute to the goals of defense manufacturing?”21 While crowdsourcing the evaluation of winners can work and, at the same time, draw publicity, expert judging provides two distinct benefits. Judges with particular domain expertise can lend credibility to the challenge results and can improve submission quality through formal and informal feedback, if it is built into the prize structure.
One of the important elements of high-quality evaluation is to revisit the criteria at the end of the prize and assess whether they were appropriate: Did they lead to the selection of the best winning solution(s)? If the winner did not perform well, and some unsuccessful participants seemed stronger, it might be that the criteria were not right or were not operationalized correctly. For example, if simple weighting is used to derive an overall score, a proposal which scores badly on one criterion and well on another might end up the winner overall, even though it was inadequate in a vital area.
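The weighting pitfall described above can be made concrete with a small scoring sketch. All criteria, weights, and scores below are invented for illustration; the floor-threshold safeguard is one common remedy, not a method prescribed by the source:

```python
# Hypothetical illustration of how simple weighted scoring can mask a fatal
# weakness: proposal B is inadequate on the vital "safety" criterion yet
# still beats proposal A on overall weighted score.
WEIGHTS = {"innovation": 0.4, "cost": 0.3, "safety": 0.3}
FLOORS = {"safety": 5}  # minimum thresholds on vital criteria guard against this

proposals = {
    "A": {"innovation": 7, "cost": 7, "safety": 8},    # solid across the board
    "B": {"innovation": 10, "cost": 10, "safety": 2},  # inadequate on safety
}

def weighted_score(scores):
    """Overall score as the weighted sum of per-criterion scores (0-10 scale)."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

def eligible(scores):
    """Disqualify any proposal that falls below a floor on a vital criterion."""
    return all(scores[c] >= floor for c, floor in FLOORS.items())

for name, scores in proposals.items():
    status = "eligible" if eligible(scores) else "disqualified"
    print(name, round(weighted_score(scores), 2), status)
```

Here B's overall score (7.6) exceeds A's (7.3) despite its failing score on safety; applying the floor before ranking removes B from contention, which is the kind of operationalization check the end-of-prize criteria review is meant to catch.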
Another major component of evaluation is measuring prize impact. Designers should develop measurable indicators of success before launching the prize. Without these indicators and corresponding impact evaluation approaches, the prize may conclude without producing a clear understanding of whether it achieved or at least advanced the organization’s goals, which can be disheartening to participants and designers alike. Thus “evaluability” should be an explicit objective of prize design.
Developing measures of success during the design phase can be helpful in several respects. It reinforces discipline in the design team to ensure that design elements link to desired outcomes. It shows skeptical stakeholders that the prize’s effectiveness can be gauged objectively. And it assists the organization in assessing its overall return on investment. In anticipation of end-of-prize impact evaluation, measures of success can be deployed for intermediate outcomes, such as milestones for building prototypes or website page impressions for raising awareness. In addition, designers can evaluate other important intermediate outcomes, such as strengthening the community of participants, improving their skills and knowledge, and mobilizing capital on their behalf.
Because measures of success can be both quantitative and qualitative, effective evaluation will typically include systems to gather both kinds of data systematically and also capture unexpected data, such as wider impacts of the prize process. Common approaches include:
To create these metrics, designers should consider what evaluation indicators and measures can be collected during the prize (that is, media impressions or surveys of competing teams that collect information regarding dollars/hours spent preparing solutions), and what outputs and outcomes should be assessed in the months and years following the challenge (that is, follow-on investment, change in public opinion, market adoption, scale, and behavior change decay rate). The latter measures may require significant investment of time and resources during the “legacy” phase post-award. Designers should also note that getting post-award data from participants may necessitate building reporting requirements into the prize rules to enforce compliance or allow access.
The use of objective, third-party data such as government statistics can increase the credibility of the prize evaluation process, but in almost all cases it is necessary for designers to obtain new data. The Aspen Prize for Community College Excellence, for instance, first worked with a data/metrics advisory panel to develop a model for selecting the top 120 US schools. The institute then asked the eligible institutions to submit applications featuring data about how they were advancing student learning. Working in tandem with the data/metrics advisory panel, the institute organized and analyzed these data to determine winners.25
There should be an overall evaluation of whether the prize was worth it. This is not a simple matter of comparing the direct cost of running the prize to the value of the solution produced. In some cases, a prize might have been unnecessary, and the solution would have come about through other means. In other cases, the wider impact on participants who do not win, including those who go on to develop new innovations because of what they learned during the prize, will be significant.
Measuring changes should not only be limited to positive impacts. Particularly for government agencies, there should be follow-up to explore whether there have been unintended negative impacts of the prize implementation. Return on investment calculations often leave out the wider costs incurred by other parties in the process. An overall “value for effort” calculation, taking into account positive and negative impacts on winners and losers as well as resources used by other parties, provides a more reliable and comprehensive view of the merit, worth, and value of a prize. In particular, such an analysis would be helpful in checking for wider potentially negative impacts—such as organizations becoming less inclined to participate in prizes because of the low return on their investment.
In addition to measuring the changes that have occurred, there should be some investigation of the extent to which change can be attributed to the prize. Experimental and quasi-experimental designs, involving a control group or comparison group of participants, may be feasible in some circumstances, but they are unlikely to be cost-effective or ethically acceptable given the human subjects involved. Instead, rigorous non-experimental approaches to causal attribution and contribution are useful for identifying possible alternative explanations for the impacts and determining whether they can be ruled out.
These various approaches to evaluation need more than a few simple metrics to track. Designers need to think carefully about what they are trying to assess, when and how, so that they can surface the most helpful insights for their current and future prizes. Designers sometimes create independent teams to assess the success of their work, as illustrated by the Rockefeller Foundation, which uses an evaluation group to study the impact of its innovation projects.26
Motivators spur participation and competition. These incentives should encourage the right participants in the right ways to do the work required by the prize. Successful designers use motivators to increase the participants’ return on their investments of time, effort, and resources.
The prize award itself is, of course, the most visible motivator, encouraging participation and channeling competitive behavior toward the desired outputs and outcomes. Historically, awards have included cash purses, public recognition, travel, capacity building (that is, structured feedback and skills development), networking opportunities (that is, trips to conferences), and commercial benefits (that is, investment and advance market commitments). Public sector challenges often feature diverse awards. At one end of the spectrum is the Department of Energy’s L-Prize, which offers a $10,000,000 cash award and an advance market commitment to those who develop the next-generation light bulb. At the other end is the Department of Health and Human Services’ Apps Against Abuse, which targets domestic violence and motivates participants with an award solely of a public winner announcement by government leaders.27
The size and type of award provide designers with important signaling effects and leverage opportunities. Designers typically try to ensure that the purse is commensurate with the magnitude of the problem, the types of participants required, the amount of time likely to be involved in reaching a solution, and the amount of media and public attention desired. Qualified participants are unlikely to compete if the prize offers a small purse but requires a year or more of effort on a hard problem. For prizes that require commercial participants, such as established companies or startups, the purse must be economically interesting in the sense that it could defray research and development costs, pay for certain types of risks and opportunity costs, or provide something companies can highlight for branding purposes, such as third-party validated performance data or a “badge” marking the company’s submission as successful in the prize. Large purses are also more likely to encourage the formation of new teams that include technicians, experts from relevant disciplines, and investors. For prizes seeking outcomes such as development of prototypes, pilots, or market stimulation, this element of design is critical because it helps designers attract outside capital.
Mentorship also can be a motivator and is used increasingly in prize design. Designers can incorporate mentorship in the prize structure, providing participants with access to experts, tools, leading practices, and other resources to accelerate the development of high-quality solutions and support the formation of communities of interest around the problem.28 Participants do not need to win to benefit from this experience.
Some designers pair winners with industry leaders to drive post-award momentum. The Apps 4 Africa challenge, established by Appfrica (one of Africa’s oldest acceleration programs), provides winning African technology entrepreneurs with mentors who help them with business development and product design. This mentorship has helped 11 new companies raise an average of more than $90,000 each in follow-on funding.29
Many designers are developing collaborative environments, enhancing knowledge sharing among participants by developing rules and evaluation criteria that encourage them to work together. Some intentionally develop opportunities for traditional participants to collaborate in problem solving, using virtual and in-person team summits and participant “bootcamps.”30
But collaboration in prizes is not always useful. Intentional matchmaking among participants can be tempting, but it can also lead participants or observers to think the prize is fixed or that its administrators are interfering too much in the prize’s outcomes. Furthermore, while collaboration may be appropriate for achieving certain outcomes, fierce competition can also be useful, particularly for shortening product development timelines. Designers should carefully evaluate this trade-off between collaborative and competitive motivations when thinking about the best path to a particular outcome. For example, if seeking a new prototype, the intensity of competition may need to be high to accelerate prototype performance on an aggressive timescale. If, however, the designer is seeking increased engagement among a population, then more collaboration may inspire others to begin participating in the prize.
Finally, for certain outcomes, intellectual property rights can serve as a powerful motivator. The prize sponsors’ degree of ownership over submissions is a key design consideration. Do they want to use the solution in a proprietary manner, require that solutions be made available to the public through an open source license, or simply have access to the solution in the marketplace? The options range from full retention of rights by participants to full retention of rights by the organization running the prize. One important consideration for US government leaders interested in stimulating innovation is how the America COMPETES authority protects participants’ intellectual property.31 Regardless of where the prize falls on this spectrum, clear, upfront terms of ownership are critical. The rules for the US Air Force’s Fuel Scrubber challenge, for example, clearly stated that winners will retain their intellectual property rights, signaling in advance that challenge participants can commercialize their winning solution and profit from it in the market.32
Structure, or prize architecture, is the set of constraints that determines the scale and scope of the prize, as well as who competes, how they compete, and what they need to do to win. A competition period that lasts too long risks losing participant interest, and one that ends too quickly may not give participants enough time to develop solutions. Winner-takes-all prizes can discourage participants with low risk tolerance. Those with well-defined phases and milestones can modulate competition, winnow participants at different stages, and reward only the most innovative solutions. Due to such considerations, successful designers devote significant time and effort to prize architecture.
Eligibility requirements shape the population of participants. Which participants should designers target—individuals, teams, organizations, established institutions, or even political entities such as cities or states? The choice involves at least two considerations. First, given the desired outcome, who is best positioned to solve the problem? Who has the right skills, resources, and interests? Second, if the desired outcome includes a form of engagement extending beyond the immediate pool of potential participants, how can participants influence the larger community or stakeholder group? It is worth noting that in the case of challenges sponsored by the US government, participant eligibility is shaped by the authorities under which the challenge is administered.
The Georgetown University Energy Prize, sponsored by the Joyce Foundation, the American Gas Foundation, and the Department of Energy, among other partners, challenges communities “to work together with their local governments and utilities in order to develop and begin implementing plans for innovative, replicable, scalable and continual reductions in the per capita energy consumed from local natural gas and electric utilities.”33 This example provides insight into how designers can structure eligibility requirements to shape team formation and expand the influence of the prize beyond individual citizens.
Successful designers often try to define their prizes in ways that will attract the largest and broadest pool of participants, as the most innovative solutions often come from those without previous exposure to the underlying problem. Even when casting a wide net, however, designers should be careful about eligibility. For some, the quality of submissions is more important than their quantity, or resource constraints may dictate a smaller participant pool, making restricted eligibility the best choice. For others, the variety and sheer quantity of submissions that can be obtained from broad eligibility requirements are more desirable. Narrow eligibility requirements thus may be best for a prize seeking a handful of thoughtful concept papers about a technical solution, while broad requirements could be better for a challenge seeking a new logo design.
If multiple types of participants are desired, designers should consider whether a certain team profile increases the possibility of a successful outcome. Additionally, designers must think about whether different types of participants should compete in one pool or be separated into different categories. For example, the US FIRST Robotics Competition hosts four age-based classes of challenges for students aged 6–18: Junior FIRST LEGO League, FIRST LEGO League, FIRST Tech Challenge, and FIRST Robotics Competition. The FIRST Robotics Competition requires a minimum of 15 high school students and 3–6 professional adult mentors per team.34
Prize length typically consists of two periods: one for submission development and one for judging. For the former, designers must determine the time likely needed to reach a particular outcome. For example, the Case Foundation’s Finding Fearless competition, which focused on generating ideas to solve chronic social challenges, gave participants only 20 days to submit their ideas. DARPA’s UAV Forge competition, by contrast, gave participants 152 days to showcase a working prototype of an unmanned aerial vehicle.35 Data on prize length is detailed in the following sections by outcome. Designers should note that the longer the prize runs, the higher the likelihood of administrative staff turnover; it is therefore critical that designers document the rationale and assumptions behind key design decisions and desired outcomes in case of staff transitions.
Designers often engage with subject-matter experts or potential participants to develop a realistic assessment of the time needed for solution development and the likely number of submissions. This information can also be used to estimate the appropriate number of judges needed to ensure a timely review. The selection of judges with the appropriate technical expertise and availability to commit their time for thorough reviews is critical for outcomes focused on developing prototypes and stimulating markets. Designers should estimate the time required for an individual judge to assess submissions, or for a panel of judges to reach consensus on the relative merits of prize submissions, and use those estimates to determine the number of part-time judges needed. If this approach yields an unwieldy number of part-time judges for challenge administrators to manage, designers should consider compensating a smaller number of judges to provide full-time evaluation support.
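The staffing arithmetic above can be sketched as a back-of-the-envelope calculation. The figures used here (review hours per submission, independent reviews per submission, hours each part-time judge can commit) are purely illustrative assumptions, not data from any particular prize:

```python
import math

def judges_needed(expected_submissions: int,
                  hours_per_review: float,
                  reviews_per_submission: int,
                  hours_per_judge: float) -> int:
    """Estimate the number of part-time judges required.

    Total review workload is spread across judges, each of whom can
    commit a fixed number of hours during the judging window.
    """
    total_hours = expected_submissions * hours_per_review * reviews_per_submission
    return math.ceil(total_hours / hours_per_judge)

# Illustrative: 300 submissions, 2 hours per review, 2 independent
# reviews each, and 40 hours available per part-time judge.
print(judges_needed(300, 2, 2, 40))  # → 30
```

If 30 part-time judges proves unwieldy, the same arithmetic shows the trade-off: three compensated full-time judges at 400 hours each cover the same 1,200-hour workload.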
Designers also consider various forms of challenge segmentation to encourage certain kinds of behavior. Dividing the challenge into rounds can allow participants to modify and improve their submissions, thus increasing their quality. As an example, the National Institute of Justice’s Ultra-High Speed Apps challenge has two phases, the first solely for generating app ideas and the second for actual software development.36
Some designers segment their prize structure by topic, with multiple related sub-challenges taking place concurrently. This can increase the prize’s impact by elevating the importance of certain topics and attracting a broader set of solutions. The EPA’s Campus RainWorks Challenge, for instance, invites students to design an innovative green infrastructure project for their campus, offering two topic areas. One category involves designing a master plan for a broad area of campus; the other seeks designs for a smaller location.37
Designers can also segment prizes by geography, with simultaneous challenges in separate locations (such as state challenges leading to a national final round). The Strong Cities, Strong Communities (SC2) Challenge is a federal interagency initiative seeking innovative ideas to incent economic development. The challenges are customized to the areas they are designed to help: Las Vegas, Nevada; Hartford, Connecticut; and Greensboro, North Carolina.38 Such a strategy can help manage larger-scale challenges and focus attention on site-specific solutions for targeted areas.
Communications serve several different strategic goals. They can attract participants, spur them to compete, and maintain their interest afterward. Also, communications keep partners and stakeholders informed about the purpose and progress of the prize, helping to secure their support and, in some cases, funding. For many designers, communications are also a mechanism for achieving certain specific outcomes, such as building market awareness of new capabilities or public enthusiasm for new behaviors that further the public good. Because communications are so important, designers should plan and invest carefully to build the right buzz.
Effective prizes use robust branding plans to build recognition and credibility among the participant and stakeholder communities. This can be achieved through press releases, social media, and targeted invitations, using the organization’s and partners’ networks where appropriate. During Bloomberg Philanthropies’ Mayors Challenge, for instance, challenge administrators sent personalized invitations to eligible cities outlining the challenge’s importance.39 Establishing a clear and powerful brand is critical to the post-award legacy of the challenge and will significantly affect sponsors’ ability to attract public attention and the desired participants to future rounds. Many broadly recognized challenges have dedicated significant time and resources to building a lasting brand, including the Mayors Challenge, the XPRIZE, and the NASA Centennial Challenges.
To build credibility, designers should clearly publicize rules and evaluation criteria and regularly update participants and stakeholders on the process. To facilitate these communications, external partners can provide expert advice and support. For example, Nesta has partnered with the UK Department for Business, Innovation and Skills and made use of their combined networks to market its Open Data Challenge Series to potential participants.40
Strong communications help designers to manage relationships with participants and partners during prize implementation. It’s useful to create regular check-ins with participants and provide them with effective communication channels to discuss any issues that may arise. Check-ins also provide participants with feedback that can lead to more effective solutions. For example, the Department of Energy’s National Geothermal Student Competition featured two phases. The first 30-day phase required an initial concept paper. Teams chosen for advancement were then required to participate in three biweekly review meetings and submit regular reports documenting their progress over the course of the challenge to ensure they were progressing toward a final product.41
Designers attempting to build communities or markets typically establish post-award messaging capabilities. This may involve periodic post-award webinars; publications summarizing lessons learned, data captured, and aggregate outputs from the prize; “road shows” to visit relevant conferences, agencies, legislators, and other stakeholders; and reunion conferences that encourage participants to discuss their progress or even online collaborative spaces. For example, the International Space App Challenge was a two-day “hackathon” that included 9,000 people who met at 83 locations as well as 8,300 remote participants. Together, they worked on 50 different NASA challenge topics and developed 770 solutions in the course of one weekend. After the global awards, local leads from each location facilitated the creation of Google Groups to serve as a medium for ongoing communication and idea sharing between the participants.42
In the last five years, public sector prize design has become increasingly diverse and sophisticated, with a shift in focus from prize types to outcomes. In the past, the selection and use of a prize type, such as a “point solution” prize for new technology, reflected a somewhat rigid belief that prize types and outcomes should match exactly. As designers have become more comfortable and flexible in crafting prizes, they are finding that it is better to begin with the outcomes they want to achieve and then assemble the right mix of design elements to achieve them.
In this section, we examine the six key outcomes designers most often pursue as well as the prize design elements that are critical for achieving these outcomes. While designers should recognize that prizes usually require all five of the elements of design introduced above, we highlight those elements that are most important to get right to ensure that the prize achieves its intended outcome. We also know that many prizes seek and achieve multiple outcomes. Consider the MIT Clean Energy Prize, which distributed $1,000,000 to its winning teams. While the prize explicitly solicited business plans, it has also stimulated the market by generating $85,000,000 in capital and research grants.43 Many advanced designers attempt to use prizes both to develop markets for a technology, good, or service as well as to create social impact. Appendix A offers more detailed guidance.
Advanced prize designs can reach a range of actors. For outcomes aligned to developing ideas, technologies, products, or services, designers typically focus on the participants who are creating models or tangible items to achieve a particular outcome. For prizes aimed at engaging people, organizations, and communities, designers are generally concerned with participants as well as a broader audience that may include people, groups, organizations, or even institutions.
As designers work with the elements of design to build a prize, they also consider its legacy. Using prizes or challenges more generally to achieve certain outcomes requires taking the long view. Designers evaluate how a prize will work with other problem-solving approaches, which their organizations may be able to deploy. They make plans to engage participants and broader audiences after the prize concludes to reinforce key messages, branding, or desired behaviors. They build post-prize activities and foster networking and learning opportunities to help participants strengthen and refine the innovations that were incented by the prize. When designers want to stimulate markets, they may develop a series of challenges that pull participants through different stages of the innovation process—first a prize to produce, test, and improve a model and then perhaps an advanced market commitment to help winning participants gain traction in an emerging market. Designers who ignore their post-prize legacy when trying to assemble the elements of design risk undermining their own desired outcomes.
Prizes allow designers to identify and expand on fresh, innovative ideas. They can focus the efforts and ideas of many people with widely varying viewpoints on a broad range of public problems. The prize can gather existing ideas, expand existing ideas, or help create new ones, especially if new participants are brought into the solution space, given additional resources, or stimulated with new ideas and connections. As Michael Smith from the Corporation for National & Community Service and formerly the Case Foundation put it, “Prizes give you a way to lift up an idea.”44 Idea outcomes may take the form of:
In order to generate useful submissions, effective designers often provide participants with context about why they are seeking ideas and what they intend to do with them. For example, the Rebuild by Design competition administered by the Hurricane Sandy Task Force used a multistage challenge to attract design proposals that increase the resiliency of regions affected by Hurricane Sandy. The designers quickly and effectively solicited concepts and communicated the end goal of employing the solutions to rebuild the Tri-State area.48 But, caveat emptor: The quality and workability of submissions will depend strongly on the selected design elements.
The fundamental design challenge for this outcome is to strike the right balance between numbers of concepts and techniques solicited, processes used to review them, and plans for what happens to winning ideas.
Outcome benefits
Critical design elements
Select your competitors. Designers typically seek one of three types of participants: the public; a broad mix of expertise; or specialized, often scientific, communities of interest. This choice strongly influences the quality and diversity of participant submissions, with the risk that a mismatch between the problem and participant pool may generate few workable ideas.
To manage this problem, it can be helpful to use a technology platform associated with specific types of participants. Today, multiple online platforms can help facilitate and run prizes, such as InnoCentive, which solicits ideas from the scientific community, and Ashoka, which engages social entrepreneurs. Such platforms can tap into particular communities of interest, facilitate collaboration among participants, and support prize-related communications. (See Appendix D for a list of technology platforms.)
Determine how you’ll use the idea. It’s tempting to measure challenge success simply by the number of responses. While it’s true that a large number of responses increases your odds of finding a good idea, the workability of those ideas is even more important. In the Stanford Social Innovation Review, Kevin Starr warns designers: “Most crowdsourced ideas prove unworkable, but even if good ones emerge, there is no implementation fairy out there, no army of social entrepreneurs eager to execute someone else’s idea.”51
The Air Force Research Lab (AFRL) provides a strong example of translating submissions into workable solutions. Specifically, AFRL challenges include submission evaluation criteria that can be validated and further refined through laboratory testing with a focus on the ultimate use of the idea.52 Additional examples for designers include criteria to evaluate the maturity of submissions, the speed at which the submissions can be developed into prototypes or pilots, and the cost and ease of implementing submissions given an organization’s resource constraints.
Be prepared to assess submissions efficiently. Good designers typically match the anticipated volume of submissions with an appropriate number of properly resourced judges. Given the relatively low barriers to entry for prizes seeking ideas, however, the sheer volume of submissions can sometimes surprise and even overwhelm. Designers can forecast the likely number of submissions by examining trends from past prizes, surveying the potential participant community, and sending invitations requiring RSVPs to targeted groups.
To maintain credibility with participants and sustain interest in the prize, successful designers often seek to reduce judging time. Many employ a two-step screening process: a larger, less specialized staff conducts an initial review before passing on the most promising ideas to expert judges. This review process, however, must be transparent to avoid perceptions of unfairness.
Recommended design tactics
Leaders of the G-20 countries in partnership with Ashoka Changemakers launched the Small and Medium Enterprise (SME) Finance Challenge to solicit groundbreaking ideas on how public interventions can unlock private finance for SMEs across the world.
Designers knew how these new ideas would be used after the challenge—the G-20 countries created a $558 million fund to scale and support these new ideas. The short time period between start and submission (only 41 days) as well as a $1,000 early entry prize maintained momentum and increased the number of participants.
Challenge designers lined up eight well-respected judges to work through the 333 participant submissions. As a non-monetary reward, the challenge winners attended the G-20 Seoul Summit as well as an SME conference in Germany.54
For prizes seeking to build prototypes or launch pilots, the goal is not simply to generate an idea that addresses an important public problem, but rather to realize a functional version of a technology, product, or service, and sometimes test it with its intended customers.
Building prototypes or launching pilots often entails the creation of new technologies and can be particularly effective for shepherding them through late-stage research and early-stage development, a difficult part of the innovation lifecycle sometimes called the “valley of death.”55 For example, the My Air, My Health Challenge run by the EPA and the HHS spurred the creation of sensor prototypes measuring pollution’s health impacts, but also required participants to demonstrate how environmental agencies and individual citizens could put these systems into practical use.56
This outcome is particularly attractive because it can provide access to a new range of useful products and services, while requiring the organization to pay only for those that meet its needs. Prizes leading to products have the added benefit of relatively quantifiable and objective metrics of success.
Designers focused on services can also require practical demonstrations of success. For example, in New York City, a School Choice Design Challenge recently asked participants to develop a new software application to help families select high school programs. If a winning app is selected, it will make it easier for New York City eighth graders to choose among more than 700 high school program options each year.58
An important consideration for designers focused on this outcome is providing participants access to facilities to test prototypes. The cost and logistics of creating an environment in which to iterate on solutions present a significant barrier to entry that can stifle innovation. Designers focused on this outcome should consider providing access to testing facilities in order to keep participants focused on research, innovation, and ideally future commercialization.59 For example, the Wendy Schmidt Oil Cleanup X CHALLENGE asked participants to develop solutions to clean surface oil from seawater. The challenge was valued at $1,400,000 and provided participants an opportunity to test their work at the National Oil Spill Response Research & Renewable Energy Test Facility.
Designers seeking to build prototypes or launch pilots should pay careful attention to problem definition as well as particular elements of design, such as motivators and structure. Expert designers can spend months defining the technical problem so that the prize is appropriately bounded. The Centers for Medicare and Medicaid Services’ Provider Screening Innovator Challenge, which asked competitors to develop screening software programs to help ensure that Medicaid funds are not diverted from the most vulnerable Americans, required more than a year to develop and ultimately involved 124 “mini-challenges” to attract the right solutions.60
Motivators and structure also matter because prize designers need to ensure that they attract the right kinds of participants, and that those participants are encouraged to compete in the right ways. Designers will often carefully study the motivations of distinct participant groups, including startup companies, large corporations, and academics, to ensure the challenge appeals to those most likely to compete.
Outcome benefits
Critical design elements
Be prepared with market analytics. Organizations often seek technical solutions unavailable in the commercial market. In these cases, prize development may require a relatively high operational budget to conduct a landscape review of immature market players, craft the problem statement, and design selection criteria. Partners that could make money from winning prototypes and are willing to invest in the prize can help cover some of these costs.
Tailor the purse to competitor risk and market conditions. To set the purse appropriately, designers typically investigate the costs of solution development as well as the potential market value of the new product or service. This requires economic and market analysis, a capability many public organizations lack and therefore engage vendors to complete.
The purse does not need to cover the entire cost of development, particularly if outside investors are interested in supporting participating teams, but it does need to cover at least some of the risk participants assume. If only a small purse is possible, designers can supplement it with other non-monetary benefits, such as access to data, strong intellectual property protections, and introductions to venture capitalists. Remember, though, that commercial participants are unlikely to devote money or time to develop new products or services unless they believe they can sell them into an existing or emerging market.
Make sure the winner selection is unambiguous. The selection criteria for the winning submission should be quantitative, rigorous, and testable, particularly for prizes with a technological focus. In the course of prize design, it is helpful to develop, vet, and test criteria with outside experts and potential participants and partners to avoid having to revisit selection criteria during the course of the prize.
Recommended design tactics
New York City’s Big Apps Challenge sought innovative software applications that made municipal data more accessible to city residents.
Designers tapped into the developer community to access external expertise. They considered analogous challenges to help set the $50,000 purse. Designers broke the challenge into 10 topics (for example, green, health and safety, and mobility) and posted clear requirements for each category. They included commercial benefits, inviting investors such as BMW to help judge the challenge. Finally, New York City included an “Investor’s Choice Winner” and allowed the grand prizewinner to demo the app at the New York Tech Meetup.
The Big Apps Challenge spurred the development of 96 apps using municipal data in new and innovative ways.64
If building prototypes or launching pilots seeks new technologies, products, and services, market stimulation seeks their commercialization. Public organizations often want to develop products or services not yet available in the market, or want to broadly encourage markets to sell innovative products or services that can achieve a public good. Using prizes to stimulate markets can be a powerfully and positively disruptive force. It can, for example, lead to new cybersecurity capabilities, or foster the creation of next-generation sustainable energy technologies that governments and ordinary citizens can buy.
One example of a challenge that stimulated a market was the NASA/Google Green Flight Challenge, which sought to create emission-free flight vehicles, and led participants to invest more than $6,000,000 in pursuit of a purse of only $1,650,000. The Green Flight Challenge energized this nascent market; the two winning companies continue to make waves in the industry.65 The first-place winner, Pipistrel, has developed additional ultralight aircraft models, with more than 350 of them flying around the world.66
Outcome benefits
Critical design elements
Make rewards large enough to sustain a business and stimulate the market. A large purse is required to support the high costs of market entry. Because market stimulation requires multiple participants to invest for an extended period (that is, the start-to-submission time is on average 604 days longer than for challenges focused on building prototypes or launching pilots), the purse should be structured to provide a substantial benefit for multiple winners. In fact, the size of the purse needed to stimulate a market can be over two orders of magnitude larger than for challenges focused on building prototypes or launching pilots as an outcome.72 By ensuring that multiple participants receive economic benefit and recognition as a part of the challenge, designers can encourage a larger, more diverse group to submit entries.
As noted previously, designers can incorporate commercial and networking benefits into their prize structures, such as inviting participants to trade conferences, promising advance market commitments and engaging end users and investors (such as venture capitalists) as judges. Doing so can expand participants’ long-term stakes in the prize, encourage them to compete again, and attract others to the new space.
Balance technical performance with the ability to implement and scale. When evaluating prize submissions focused on market stimulation, it may be necessary to look beyond technical performance to a more qualitative, nuanced assessment of how a given solution might perform in a market setting. Thus evaluation criteria should include considerations of market entry, adoption, implementation, scaling, and firms available to exploit the opportunity over the long term. As an example, the Gates Foundation and USAID Haiti Mobile Money Initiative offered financial rewards for companies reaching certain transaction milestones in creating a market for mobile money services in Haiti.73
Sustain your efforts with post-prize momentum. To stimulate markets beyond the conclusion of the challenge, designers use post-award features such as communications, marketing, support, and incentives that can help participants continue to grow the market or scale solutions. Leading practices include promoting partnerships with key stakeholders interested in scaling solutions, hosting follow-up webinars, distributing regular email newsletters, and building mentorship programs. Mentorship can take many different forms, including pairing winners with more established players in the business community to help them build their networks.74
Recommended design tactics
Oil dependence and the impact of burning fossil fuels on climate change have long stirred concerns about the sustainability of US transportation infrastructure. The Progressive Insurance Automotive XPRIZE, supported by the Department of Energy, sought to address these issues by reshaping the automotive industry. The challenge incented companies to create a new generation of viable, energy-efficient vehicles. Designers attempted to transform the market by using the prize as an opportunity to create and popularize a new consumer metric called MPGe (miles per gallon gasoline equivalent), which offers consumers a way to compare new vehicles that use a variety of energy sources with conventional vehicles. Using this metric and a series of other clearly defined technical specifications that integrated notions of safety, affordability, and desirability, designers created a multiple-round challenge, which allowed a wide range of participants to embrace different kinds of technology, yet still be judged in a transparent and fair manner. Designers awarded $10,000,000 to the top three companies—all of their vehicles had over 100 MPGe—to ensure that the new market would have multiple players.76
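The MPGe metric mentioned above converts a vehicle’s energy use into gasoline-equivalent terms; the EPA’s published equivalence is 33.7 kWh of electricity per gallon of gasoline. A minimal sketch of the conversion, with an illustrative (not actual prize) consumption figure:

```python
# EPA energy-equivalence figure: 33.7 kWh of electricity contains
# roughly the same energy as one gallon of gasoline.
KWH_PER_GALLON = 33.7

def mpge(miles_driven: float, kwh_consumed: float) -> float:
    """Miles per gallon gasoline equivalent (MPGe)."""
    gallons_equivalent = kwh_consumed / KWH_PER_GALLON
    return miles_driven / gallons_equivalent

# Illustrative: an electric vehicle consuming 25 kWh per 100 miles.
print(round(mpge(100, 25), 1))  # → 134.8
```

By this yardstick, the winning X PRIZE vehicles’ 100+ MPGe corresponds to using less than one gasoline-gallon’s worth of energy per 100 miles.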
For many public organizations, raising awareness of the public or key stakeholder groups is a central part of their mission. This can be part of a series of integrated goals or a primary objective, such as increasing public knowledge of a particular service, topic, or issue. Successful designers who wish to raise awareness typically choose design elements that engage large populations, involve robust marketing plans, and feature clear metrics for evaluating success.
To raise awareness using prizes, designers find it helpful to get specific about who is in their audience. For some challenges, such as the SunWise with Shade Poster Contest, the audience was quite focused—children under the age of 13. Effective design requires highly targeted marketing and communications to reach an audience like this.78 In other cases, however, the audience can be quite broad, such as for the Famine, War and Drought (FWD) Relief campaign sponsored by USAID, which generated awareness and donations for these types of crises.79 Designers are typically careful not to view broad audiences as undifferentiated or consisting of like-minded individuals who all have similar interests and goals. Rather, the larger the audience, the more important it is for designers to undertake audience segmentation, a type of marketing analysis that breaks large audiences into pieces, each of which has a common set of characteristics that can be targeted through specific media channels and with tailored messaging.
Outcome benefits
Critical design elements
Use a big megaphone as a reward. Challenges for raising awareness often have small purses because recognition is the primary reward. Successful designers use recognition to motivate participants by clearly communicating the types of acknowledgment winners will receive. The Small Business Administration’s (SBA’s) Small Business Week Video Challenge helped educate the public about how its programs and services can help entrepreneurs and small business owners start, scale, and succeed. Participants, in turn, used the challenge as an opportunity to market their small businesses and highlight how they had leveraged useful SBA programs. While no purse was offered, participants were incented to enter the challenge by the possibility of being profiled by both SBA Administrator Karen Mills and the White House through a Google+ Hangout session.80
Check whether the intended awareness is being achieved. Maintain a concerted focus on evaluating the demographics and characteristics of participants during the entire prize. While it’s important to select a winner, it is equally valuable to ensure that the appropriate participants and stakeholders are engaged and energized following award. Designers should develop metrics specific to the prize to confirm that their communication, marketing, and outreach efforts are working.
Partner with others to maximize reach. Successful designers invest time and money in marketing to build a prize’s profile. Often, this involves partnering with an organization whose network can promote the prize within a target community. Strategic marketing can further the positive perception and prestige of the prize, thereby enhancing the value of its award and the recognition winners receive.
Recommended design tactics
The Health Resources and Services Administration’s (HRSA) Maternal and Child Health Bureau, located within the Department of Health and Human Services (HHS), launched the Stop Bullying Video Challenge to help prevent and end bullying in schools and communities nationwide.
They worked with the Federal Partners for Bullying Prevention, an organization comprising 9 departments and 34 different offices, to tap into a diversity of experiences and take advantage of local outreach capabilities. They also made peer-to-peer communication an explicit goal of the challenge to build community and foster positive exchange. Finally, all videos became part of a larger tapestry of ideas and solutions for future campaigns to prevent and end bullying through the www.stopbullying.gov website.81
While raising awareness is essential for driving change, mobilizing action is a more ambitious outcome. This outcome achieves multiple goals: It helps participants interact in ways that improve submissions; generates enthusiasm and publicity for the prize; and builds community among diverse groups. As John Bracken from the Knight News Challenge put it, the human network that comes out of a challenge is the “currency we care most about.”83 Designers can use challenge mechanisms to encourage participation in capability building, networking events, mentorship activities, and workshops.
Just as designers identify audience segments when trying to raise awareness, they also carefully consider whom they are trying to mobilize, because different actors are compelled to behave in distinct ways. The “unit of mobilization” can vary dramatically, from individuals, teams, and groups to organizations, institutions, and subnational governments. Using different forms of analysis—consumer, market, regulatory, and organizational, to name a few—designers must evaluate the incentives and barriers to action for each of these actors to craft a prize that will mobilize them effectively. This analysis then informs the prize structure and, most importantly, its rules.
Action-oriented challenges are not necessarily trying to create collaboration among participants, unless it is useful for another outcome, such as developing a model or stimulating a market. In these cases, mobilizing action can look a little bit like private sector “coopetition,” in which participants are simultaneously rivals and peer mentors.
Mobilizing action can be especially valuable for designers trying to build networks or communities of participants. A good illustration is the Department of Veterans Affairs’ Blue Button for All Americans providers contest, which sought to encourage the use of Blue Button personal health records. The purse offered $50,000 to the first developer who coordinated the installation of Blue Button personal health records on the websites of 25,000 physicians and other clinical professionals.84 RelayHealth won the challenge by making a Blue Button personal health record system available to all patients, including veterans, for more than 25,000 physicians across America.85
Outcome benefits
Critical design elements
Amplify purses with recognition and networking benefits. Many prizes focused on mobilizing action and developing skills deemphasize the purse as the most important motivator. Instead, they find ways to highlight multiple participants in addition to winners, because recognition and network access also provide strong incentives to compete. For example, Facebook and the Gates Foundation hosted the HackEd 2.0 Hackathon, which assembled 24 teams of developers and educators to build educational applications addressing college readiness, social learning, and out-of-school learning.90 The event showcased the developers’ skills and gave them the opportunity to meet and interact with driven and passionate peers in an intense shared experience.
Help participants compete. Building adequate support structures for participants may require a larger operational budget. Funds can be allocated for workshops and conferences, mentorship resources during or after the challenge, and feedback sessions with partners who may also serve as judges. These interactions can provide powerful motivation, not only to get involved in the prize in the first place, but also to compete more intensely.
Start with a blitz and maintain communications post-award. To mobilize action and maximize impact across audiences, mount a branding, marketing, and media campaign focused on delivering the right messages to the right populations. Public organizations often lack the skills for this kind of strategic marketing and sometimes even the culture to embrace it. Without it, however, designers risk creating a powerful prize for which no participants, or the wrong ones, show up. Post-award communications are also critical, because a central output of most prizes is building community. Nurturing and championing this community will keep participants focused on the original problem well after the prize is awarded. Failing to continue the conversation and channel their energy will compromise the prize’s lasting impact.
Recommended design tactics
NASA’s Zero Robotics Challenge encourages high school students’ engagement in STEM. While the prize solicits algorithms to optimize the International Space Station’s solar energy collection, it is primarily focused on developing acumen and excitement for STEM research.
It achieves this goal by working to create an enriching experience for student participants, so they can leverage their new skills and networks to excel in STEM courses. Students gain access to MIT resources throughout the challenge and build community as teams from various schools interact through formal alliances. Finally, the winning team gets its algorithm deployed on the International Space Station.94
Through the use of these elements, designers have managed to make Zero Robotics an annual prize in both Europe and the United States. Several teams participate year after year—an indicator of the challenge’s brand strength and the health of the communities it fosters.
As the craft of incentive prize design becomes more nuanced and sophisticated, so too do the outcomes to which designers aspire. Perhaps the boldest involves inspiring transformation. While some might argue that the distinction between mobilizing action and inspiring transformation is simply a matter of degree, designers who build transformation-oriented prizes more often have grand visions about how to address complex, seemingly intractable problems.
Few prizes seek this outcome. The Aspen Prize for Community College Excellence, however, clearly illustrates how a refined design can generate fresh, powerful, and scalable ideas for reshaping community college education throughout the United States.95 Bloomberg Philanthropies’ Mayors Challenge, recently expanded to Europe, offers another excellent example, with an emerging, potentially global platform for driving municipal innovation and connecting innovative public officials.96 Both challenges inspire transformation by targeting participants—community colleges and city leaders, respectively—that can take significant action and develop new models for change ready for adoption by others.
To inspire transformation, designers typically focus on a few critical design elements, using a multiple-round process that helps to amplify the fundamental vision of the prize.
Outcome benefits
Critical design elements
Demonstrate performance through multiple rounds of competition. Successful designers use multiple competitive rounds to winnow the field of competitors. The Aspen Prize for Community College Excellence uses three rounds, employing quantitative and qualitative assessments as well as a finalist selection committee to reduce the field of entrants to one winner and four finalists with distinction. Because this process helps the institute gather and analyze a remarkable amount of educational data about community college performance, it can select winners whose educational solutions are proven to make a difference.100
Publicize the underlying issue. Those seeking to transform communities rely on robust marketing and communication plans that target different participant populations as well as the public through appropriate media channels.
The Knight Neighborhood Challenge is a case in point. When the Community Foundation of Central Georgia first launched the Knight Neighborhood Challenge competitive grant program to revitalize the College Hill Corridor neighborhood in Macon, Georgia, it thought that the challenge itself would have enough brand recognition to attract a range of viable applications. After initial enthusiasm for the challenge faded, however, the foundation ran two marketing campaigns with a public relations firm to spread the word about the prize through social media. The challenge is now in its fifth year.101
Recruit the right judges. To inspire transformation, designers often ask for innovations whose performance may not be easily or quantifiably measurable. While this poses a challenge, selection of the right judges can help. Designers typically look for high-profile judges—public officials, authors, well-known scientists, and even celebrities. The star power of the judges’ panel can help to establish the authority required to definitively select a winner. Famous judges also bring greater media attention to the prize, increasing its impact among participants as well as the public.
Recommended design tactics
Community colleges provide most of the nation’s continuing education and skills development. The Aspen Prize for Community College Excellence attempts to improve outcomes for community college students by identifying best practices and replicating them across the country.
To achieve this goal, Aspen’s team worked with data experts to create clear metrics (for example, labor market and learning outcomes) that helped colleges prioritize certain objectives. By recruiting former community college leaders as judges, Aspen added credibility to its measures. Aspen’s competition involved three rounds: The first scoped eligibility, the second winnowed 120 candidates to 10 finalists, and the third chose a winner. This structure allowed Aspen to collect different kinds of qualitative and quantitative data at each stage, producing a valuable dataset for future use. It also chose to make the prize recur every two years, extending stakeholder engagement and continuously promoting the new metrics.

Aspen also invested heavily in communications, working with the major community college associations to broadcast to their networks, build credibility, and publish reports that aggregated best practices identified during evaluations of competing schools. In addition, Aspen focused on raising the profile of every participant. For example, it sent model press releases and helped colleges publish them in local newspapers to build participant profiles within their communities.
With these steps, Aspen was able to elevate the profile of community colleges, redefine excellence for them, and disseminate leading practices that can drive success across the education sector.102
Read the full report on The craft of incentive prize design.