Three straightforward principles—purpose, perspective, and alignment with other actors—can help the social sector reinvent its approach to measuring social impact, turning data into an asset that benefits both philanthropic organizations and those they seek to help.
Social sector organizations tackle some of the world’s most difficult and complex challenges on a daily basis. And, just as in other industries, getting the right data and information at the right time is essential to understanding what an organization needs to achieve, whether it is doing what it set out to do, and what impact its efforts are actually having. Yet, despite marked advances in the tools and methods for monitoring, evaluation, and learning in the social sector, as well as a growing number of bright spots in practice emerging in the field, there is broad dissatisfaction across the sector about how data is—or is not—used.
Program staff find it difficult to quantify and prove results that they intuitively know are occurring, and they can find themselves confused by an alphabet soup of methodological options ranging from randomized controlled trials to developmental evaluation. Monitoring and evaluation (M&E) directors contend with others’ unrealistic expectations about the analyses they will be able to produce, and they are often frustrated by their organizations’ inability to integrate evaluation findings into concrete decisions about strategy. Donors and boards are disappointed that evaluation has too often overpromised and underdelivered in its efforts to measure organizations’ efficacy and impact. And nonprofits and social entrepreneurs are asked to spend time and money capturing data and reporting on outcomes that they feel serve nearly everyone’s purposes except their own.
Challenges like these are not limited to philanthropy. The same issues play out at corporate citizenship and sustainability groups within for-profit companies; with mission-driven investors and family offices; with senior executives who want to show their companies are making a social contribution while meeting or exceeding their financial objectives; and with government agencies that need evidence of impact from the use of public monies. This is not surprising: Social impact measurement is hard to do well.
Despite all this, however, there is also great cause for optimism. Advanced data analytic techniques are allowing social activists to identify patterns and understand systems like never before. Open data movements and integrated platforms are creating new opportunities for sharing and aggregating data. An emerging movement around “constituent voice” is actively engaging the people most affected by philanthropic programs in the process of designing and improving the services they receive. New behavioral economics principles are allowing for greater understanding of and influence over the real drivers behind decision-making.
These are just a few of the trends that are creating exciting possibilities for monitoring, evaluation, and learning in the social sector. The obstacles are less about methods and more about organizational and field-level challenges.
The “Reimagining measurement” initiative
Monitor Institute by Deloitte undertook a year-long, multi-funder “Reimagining measurement” initiative to highlight existing bright spots worth spreading and inspire experimentation with a range of next practices in the use of data and information in the social sector.
The approach was rooted in innovation principles and practices that have been adapted to spur new thinking, not just at individual organizations, but for the social sector as a whole.
For the initiative, we spoke with more than 125 social sector experts and practitioners, developed a “bright spots” catalog with 750 examples, and researched evaluation, monitoring, and learning practices as well as relevant trends in the field and adjacencies from outside philanthropy.
The goal is to hold up a mirror to the field—not to endorse any one approach or the views of any single institution or project sponsor—and, in the process, help individual organizations and the social sector as a whole explore and influence possible futures for measurement, evaluation, and learning.
Reimagining measurement
So what can be done to help social sector actors build a truly data-driven understanding of social impact—both at individual organizations and initiatives, and at a more holistic level? To investigate this question, Monitor Institute by Deloitte’s “Reimagining measurement” initiative explored where a diverse array of field leaders and experts expect monitoring, evaluation, and learning in the social sector to be in 10 years, as well as where they hope it will be (see sidebar, “The ‘Reimagining measurement’ initiative”). Comparing these expected and better futures enabled us to recognize practices that should be preserved and promoted, and to identify concrete steps that those who need, provide, and/or interpret data can take to increase the chances of achieving a better future.
Perhaps unsurprisingly, when the research team asked participants, “How do you expect the future of monitoring, evaluation, and learning to develop over the next decade?,” the answers were decidedly not optimistic. Without active intervention, the people we spoke with expected an expansion and deepening of what we already see emerging today. Data is becoming more accessible than ever; yet, figuring out how to effectively integrate information into decision-making remains a challenge. New data methods, tools, and analytics continue to flower and expand, but nonprofits struggle with this profusion of options and choices, and most find that they have insufficient resources—both monetary and human—to effectively choose among and use data analytic tools and techniques. Despite a growing movement to incorporate constituent voice into evaluation activities, these activities continue to aim more to serve foundations’ needs than those of grantees and the communities they serve. In essence, the participants’ expected future was marked simultaneously by hope, with the promise of greater understanding and impact—and despair, as they saw individual “bright spots” in the field multiply but not necessarily sum.
What was more heartening was hearing participants’ ideas for a better future. When asked what they hoped would happen in social sector monitoring, evaluation, and learning, our interviewees envisioned a future in which continuous learning is a core management tool; where foundations, as commentator Van Jones reportedly said, “stop giving grants and start funding experiments”;1 where foundations, grantees, and other groups share data, learning, and knowledge openly and widely; and where constituents’ feedback about what they need and what success looks like is central to strategy development and review.
It is a future that will take effort to create. And unfortunately, if things don’t change, the future of social sector organizations will likely be the future they expect, rather than the future they hope for.
Being clear on where the social sector wants to go with monitoring, learning, and evaluation, and on the difference between the expected and better future, can spur determination to change course and a willingness to embrace innovation and experimentation.
Based on our interviews, the research team identified three characteristics that participants within and outside the social sector believe should be the defining pillars of a better future for monitoring, evaluation, and learning: purpose, perspective, and alignment with other actors.
The first characteristic, purpose, is about the “why” of monitoring, evaluation, and learning. Our participants believe that data collection, analysis, and interpretation activities should aim to more effectively put decision-making at the center of monitoring, evaluation, and learning efforts. Simply put, measurement should aim to inform better strategic, operational, and portfolio decisions among both philanthropic funders and grantee organizations.
While this seems intuitively obvious, many organizations have historically struggled to define and track metrics that are meaningful for effective decision-making. When people want to judge the effectiveness of some effort, for instance, it is very common for them to simply ask what key performance indicators are available. This is a form of the “streetlight effect”: the observational bias of looking for data where it is easiest to search rather than where one actually should look. The tendency is to rely on the data at hand instead of data that, while more useful, would be harder to collect. Putting decision-making at the center means, as one of our interviewees said, practicing “decision-driven evidence making”: the discipline of being clear first about purpose, then about approach, and only then about the right indicators. Moreover, putting decision-making at the center involves not only generating data-driven insight, but also applying it at an important organizational moment to change participant behavior.
The second characteristic, perspective, speaks to the “who” of measurement and evaluation. Perspective calls on social sector participants to better empower constituents and promote diversity, equity, and inclusion; it is about reframing who gets to define what is needed, what constitutes success, and what impact interventions are having. Who benefits from and controls what data is collected and how it is used? If the social sector views constituents as active participants rather than passive recipients of interventions intended to create positive social impact, these constituents’ ability to provide input and obtain access to data will be seen as inherently vital and valuable. Those who fund programs and provide services on the ground have a useful perspective on impact, but theirs should be neither the only perspective nor the privileged perspective.
Although participatory and empowering data collection methods are important, perspective is about more than methods. It’s about using data to gain insight and serve equitable goals, to change organizational cultures to promote inclusion, and to put information and data tools in constituents’ hands so that they have agency and choice. Indeed, the very act of observing causes people to attend to some things and not others, and the act of recording those observations requires people to choose how to categorize or combine what they see. Because of this, the collection and use of data is itself infused with power dynamics—and the way it is done can either address or perpetuate inequities.
Alignment with other actors, the third characteristic, concerns the “what” of monitoring, evaluation, and learning. It is about more productively learning at scale: getting better at learning from and with other actors—about the good, bad, and inconclusive—to better match the scale and complexity of today’s social and environmental problems. For example, removing forced labor from the supply chain, curbing greenhouse gases, instilling a culture of health, or promoting gender and ethnic equity cannot be accomplished by any single organization, business, or government on its own. But although these actors may not have the opportunity to coordinate their social impact efforts, they can teach and learn from each other’s experiences. A great opportunity exists to make a bigger difference more quickly if the social sector can better combine insights across multiple organizations and many programs. New opportunities abound to develop collective knowledge and integrated data efforts that promote learning at the scale of the problems the world faces.
Reimagining measurement doesn’t necessarily mean inventing something entirely new. Central to any innovation process is to look for and learn from where innovation is already happening. In the case of measurement, many organizations are already integrating elements of the three characteristics just mentioned. These existing “bright spots” in the field can serve as important inspiration and a source of ideas for social sector organizations. Figuring out how to spread them and adapt them to new contexts can play a critical role in bridging the gap between the expected and better future.
Innovation tools for reimagining measurement
Innovation is not just the means for imagining a better future; it also provides the practices to test and learn one’s way into that better future. The innovation tools created through the “Reimagining measurement” initiative2 are meant to catalyze experimentation, action, and learning.3 They draw inspiration from diverse sources, including tools and practices in other sectors, and take into account opportunities created by emerging trends in technology and society. Most importantly, they aim to help users challenge assumptions about standard ways of doing things.
The case studies included in this report are just a few of the many bright spots emerging in social sector monitoring, evaluation, and learning.
Bright spots in the field offer critical models for innovation in monitoring, evaluation, and learning at particular organizations. But bridging the gap between the expected and better future will also require social sector participants to imagine new possibilities for experimentation, hypothesis-testing, and learning. In the “Reimagining measurement” initiative, we heard many promising new ideas, some of which can be adopted by individual organizations and some of which can be tested in collaboration. For example, what if the sector as a whole:
. . . were able to change grant reporting to require grantees to collect, monitor, and share data that is meaningful for grantees and constituents first and foremost?
. . . had the data collection and aggregation infrastructure to enable constituent feedback for decision-making, enabling the constituents themselves to make certain resource allocation and other initiative decisions?
. . . provided incentives to a group of grantees working in the same issue area, but with different theories of change, to spur them to aggregate learning and evaluation across their organizations?
To create a better future for measurement in the social sector, incremental change is not likely to be sufficient. The shifts that are needed require engaged action, not just among those charged with monitoring, evaluation, and learning activities, but across organizations and between funders and grantees. Getting to a better future will require ongoing exploration as social sector actors come together to further develop and test new ideas, engage in action learning,4 and share what is learned.
Nursing is incredibly hard work. So hard, in fact, that nearly two-thirds of nurses in a recent survey report some type of “nurse burnout.” Forty-three percent of all newly trained hospital nurses leave their jobs within three years. And work-related exhaustion among nurses, in turn, can have serious implications for patient health: increased numbers of medical errors, lower patient satisfaction, higher rates of healthcare-associated infections, and higher 30-day patient mortality rates.5
To address these challenges, HopeLab, a social innovation lab that designs science-based technologies to improve the health and well-being of teens and young adults, partnered with Dignity Health, a nonprofit hospital system, to identify ways to reduce stress and increase resilience among nurses. HopeLab identifies behaviors that support health and well-being, researches the psychology that motivates or inhibits those behaviors, and creates technologies designed to trigger adaptive behavior change.
The project began with detailed research and observation to help HopeLab understand the issues leading to nurse exhaustion and burnout. HopeLab personnel shadowed nurses on the job to gain insight into their work environment and surface potential intervention ideas. From this “human-centered” study, the team identified three different potential technological solutions that could help boost nurse resilience in the face of a difficult work environment.
For each concept, they developed initial prototypes and conducted a series of iterative, rapid-cycle tests with nurses and nursing leadership to improve the technology and land on a final solution. This robust, evidence-based testing process, combined with expert guidance and considerations of feasibility, scalability, and impact potential, helped Dignity Health to make an informed choice about which solution to pursue. The ultimate outcome was a new tool called Debriefing Codes, a system that can help nurses look back, identify practice improvements, and provide support for the staff’s emotional needs after a difficult emergency “code” situation occurs.
HopeLab’s development approach provides a vivid example of the importance of purpose in monitoring, evaluation, and learning. The organization uses evidence-driven design processes, including rapid randomized controlled tests of its prototypes, to make key decisions about product design and usage. Potential solutions are tested in rapid feedback cycles using user-centered design principles. HopeLab also performs outcome assessments to determine whether the use of its technologies results in measurable health improvements. Its research process is designed to inform concrete, real-time decisions that shape the organization’s health tools and the way they are used.
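To make this concrete, consider what the analysis behind one rapid, two-arm randomized test might look like. The sketch below, written in Python, compares outcome rates between two randomly assigned prototype groups; the data, group labels, and decision threshold are purely illustrative assumptions, not HopeLab’s actual method or results.

```python
import math

def two_proportion_ztest(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    """Z statistic for the difference between two groups' outcome rates.

    Values of |z| above about 1.96 suggest a real difference at roughly
    95 percent confidence.
    """
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical data: nurses randomly assigned to two prototype tools, with
# "success" meaning a nurse reported reduced stress after a shift
z = two_proportion_ztest(success_a=42, n_a=60, success_b=28, n_b=58)
print(f"z = {z:.2f}: {'prototype A looks better' if z > 1.96 else 'no clear winner yet'}")
```

The point is less the statistics than the cadence: each quick test produces a concrete, timely answer that feeds directly into the next design decision.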
At its heart, putting decision-making at the center means focusing on the purpose, or the “why,” of monitoring, evaluation, and learning. In organizations that put decision-making at the center, all data collection and analysis efforts aim to answer essential questions that can guide leaders in allocating resources and adjusting strategy.
The need for this type of data-based decision-making is not a new conversation in the social sector. However, organizations still find it difficult to put in place the capacities, incentives, and practices to create useful and meaningful evidence, integrate it effectively into the decision-making process, and change behavior in the desired direction.
How can organizations improve? One reason for optimism is that the data needed to generate useful insights has become easier than ever to create, capture, and analyze. What is knowable has been utterly transformed; the sheer speed, quantity, and accessibility of the data that organizations can produce have exploded. Organizations seeking ways to collect the right data to inform decisions can draw on a range of tools and sources of inspiration. Innovative organizations are learning from their peers as well as from advances in other sectors; they are adapting to emerging trends and overturning established orthodoxies to develop new ways of generating and using data-driven insight.6 Technology start-ups, for instance, have pioneered lean analytics methods in which teams use data to quickly iterate and learn. These fit into larger agile management approaches that prioritize rapid, incremental, and adaptive learning for improvement, with data creation clearly tied to ongoing decision-making.
HopeLab, of course, isn’t alone in trying to put decision-making at the center. Many other organizations are experimenting with other approaches that help them focus their monitoring and evaluation efforts on learning and improvement, and on how the information they create is ultimately put to use.
Take, for example, the DentaQuest Foundation, a corporate funder focused on promoting oral health in the United States. DentaQuest has chosen to lessen the reporting burden on grantees by paying significant attention to making its evaluation requirements useful to them. It gives grantees opportunities to shape the overall evaluation strategy and approach, invites (rather than requires) them to participate in learning-focused monitoring and evaluation efforts, and encourages them to develop reporting and evaluation products (such as videos and communication collateral) that let them share their impact not only with DentaQuest but with their local stakeholders. The intent is to balance accountability with learning and to make evaluation processes and products useful tools for grantees to advance their strategies. In effect, DentaQuest builds its reporting requirements into the kinds of data-collection efforts that grantees would likely have wanted to pursue anyway to guide decisions on interventions and methods of engagement.
Or consider the way the Open Society Foundations, the international philanthropic network founded by George Soros, is shifting the emphasis to learning and adaptation. The organization separates conversations focused on learning from conversations about strategy approval and funding allocation. Every two years, on a rolling basis, it conducts a “portfolio review” of each area of work with program staff and board members to self-critique program activities and assess what has worked and what has not. Program allocation decisions then occur separately as part of a strategy and budget review, which can occur up to two years later. This strategy and budget review reflects program performance and refinements to the approach that emerge from the earlier learning-focused portfolio review. Separating the discussion of “what worked and what didn’t” from the discussion of “how much should each program get and for what?” has encouraged program staff to surface information in the learning sessions without the immediate concern of how that information would affect funding. This information can then guide operational decisions such as whether to offer more flexible funding to certain grantees or adjust some grantees’ levels of support, allowing Open Society Foundations to more effectively pursue its mission.
Organizations like HopeLab, DentaQuest, and Open Society Foundations illustrate ways that both funders and the organizations they work with can integrate monitoring, evaluation, and learning activities with strategic, operational, and portfolio decision-making. Their experiences offer lessons for the broader field.
While innovations by particular organizations can provide inspiration, a number of broader, field-level efforts are also needed to spur more systematic change.7
Creating a better future where data is used with purpose—to guide meaningful decisions and prompt constructive action—will require focused experimentation and innovation around organizational practices and funder-grantee relationships. In that better future, monitoring, evaluation, and learning will be seen not as an add-on or burden, but as an essential tool for achieving social sector organizations’ missions and goals.
For a low-income mother with three kids who weren’t doing well in school, a standard philanthropic solution might be some sort of educational intervention. But when one such mother was asked what she felt her family needed, her answer was striking: a car.
One of her children had asthma, and when that child was having an asthma attack, she couldn’t accompany her other young children to school on the bus. As a result, all of her children missed multiple days of school. Thanks to favorable financing terms, the mother was able to purchase a car—and her kids’ school attendance and grades improved.
FII, the Family Independence Initiative, is a nonprofit focused on economic mobility for low-income communities. It leverages the power of technology and information to support families’ efforts to improve their own lives. Seventy-five percent of low-income families move above the federal poverty line within four years; yet 50 percent of those who do slip back into poverty within five years as they struggle to build the assets needed to weather crises.8 Many policies actually penalize families for trying to save by cutting off benefits if they manage to create even the smallest financial cushion. FII strives to close this gap by partnering with, learning from, and investing directly in families.
FII illustrates the principle of perspective in monitoring, evaluation, and learning: the use of data and information to empower constituents and promote equity in the social sector. FII has integrated constituent feedback into the core of its work to help direct how it deploys dollars to families and to empower families to make their own choices about improving their lives. To do this, FII has created a web-based data platform for families to set their own financial goals and connect with other families in the effort to find solutions to the challenges they face, from child care to saving for a home to affording tuition. FII’s platform helps families track their own progress, and FII matches their self-determined efforts with financial capital to accelerate attainment of their goals. In addition, FII develops aggregate data over time to better understand what works for its families to reduce poverty.
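As a rough illustration of how such a platform might represent self-determined goals and matched capital, consider the following sketch; the field names, match rule, and dollar figures are hypothetical assumptions, not FII’s actual data model.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class FamilyGoal:
    """One self-defined goal, as a family might record it on the platform.

    All field names and the match rule below are illustrative assumptions.
    """
    family_id: str
    description: str      # e.g., "save for a car"
    target_amount: float  # the family's own target, not the funder's
    saved_so_far: float

    @property
    def progress(self) -> float:
        """Fraction of the self-set target reached so far, capped at 100%."""
        return min(self.saved_so_far / self.target_amount, 1.0)

def match_capital(goal: FamilyGoal, match_rate: float = 0.25) -> float:
    """Hypothetical rule: match a fraction of what the family itself has saved."""
    return goal.saved_so_far * match_rate

goals = [
    FamilyGoal("fam-01", "save for a car", target_amount=4000, saved_so_far=2600),
    FamilyGoal("fam-02", "tuition fund", target_amount=2500, saved_so_far=500),
]

# Families see their own progress; the organization sees the aggregate
print(f"average progress across families: {mean(g.progress for g in goals):.0%}")
for g in goals:
    print(f"{g.family_id}: matched capital ${match_capital(g):,.2f}")
```

The key design choice is that the family, not the funder, defines the goal and the target; the organization’s role is to track progress and accelerate it.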
We need to understand what impact and success looks like for a community and not assume that we know what that is.—Foundation program director
Perspective—the principle of better empowering constituents and promoting diversity, equity, and inclusion—is about reframing who gets to define what is needed, what constitutes success, and what impact interventions are having. It is also about data as an asset, and who gets to benefit from and control that asset. In the context of monitoring, learning, and evaluation, the call to integrate constituent feedback must include an emphasis on diversity, equity, and inclusion in order to truly represent constituents’ views. Enabling constituents to define what success means and which interventions are working is an important path to inclusion and equity. Similarly, using an equity lens in the creation of data and knowledge opens up possibilities for engaging and empowering all constituents.
The social sector has made some strides in this direction. A growing embrace of methodological diversity and a focus on community-based evaluation offer multiple participatory evaluation methodologies, and the concept of cultural competence is widely accepted in the field. However, absent changes in incentives, most monitoring, evaluation, and learning efforts are expected to continue to be driven by funders and not widely shared with constituents or used to directly benefit them. Funders are the ones who allocate resources for monitoring, evaluation, and learning; more broadly, they act as the “keystone species” in determining how problems are defined, which individuals are seen as experts, and what constitutes evidence of success. Still, some organizations are creating something new by drawing on and adapting emerging trends from other sectors that are raising public expectations for constituent participation and voice. These trends include transformations in user-centered design, civic tech, social marketing, and customer experience.9
Some people in the United States have a life expectancy 20 years lower than others who live in neighborhoods just a few miles away from them because of differences in education, employment, housing, access to health care, and environment.10 This inequity is one of the things the Robert Wood Johnson Foundation (RWJ) aims to change.
RWJ focuses on helping people in the United States live longer, healthier lives. After decades of focusing on the US health care system, RWJ reoriented its strategy to address the complex social factors that have a powerful influence on American well-being. Through its focus on a “culture of health,” the organization is working to help expand the discussion about what influences health as well as set a new standard of health and well-being for all communities.
RWJ explicitly integrates equity goals into its efforts to promote a national culture of health in the United States, with programs aimed at creating healthier, more equitable communities. Rather than deploying a top-down approach, RWJ is creating community-based solutions that are developed locally. These solutions pay close attention to distinct community needs and include a range of systems that impact health.11
To assess community health, RWJ uses metrics that go beyond traditional health measures to include indicators such as housing affordability and residential segregation. RWJ has gathered baseline data that reveals differences in these broader community indicators, as well as disproportionate differences in health challenges such as access to health care, disease rates, and treatment outcomes, between lower- and higher-income communities.
RWJ is currently tracking measures across “sentinel communities,” a collection of 30 cities, counties, regions, and states selected to reflect the United States’ geographic and demographic diversity as well as different approaches to improving health. This will enable RWJ to learn which approaches are working where, in what contexts, and for which populations and communities.
From innovative organizations like these, the social sector can draw lessons about how to develop monitoring, evaluation, and learning practices that promote equity and integrate ongoing constituent feedback into both day-to-day management and longer-term assessments.
A better future in which equity is integral to monitoring, evaluation, and learning—and where constituents are consistently engaged in ongoing, systematic feedback that offers them choice and agency—requires deep changes in how funders and nonprofits use data to connect with and empower constituents. Getting there will take deliberate experimentation.
Most of all, integrating constituent perspectives into monitoring, evaluation, and learning activities requires a fundamental shift in the social sector’s implicit assumption that “the funder knows best.” In a better future, approaches that view those to be helped as an essential source of information and guidance can become powerful tools that enable constituents to define and design successful outcomes for themselves.
Affordable homeownership programs in the United States seek to build stable, inclusive communities by enabling income-eligible families to buy housing at below-market rates in return for limits on the resale value of their homes. Do the programs actually work?
In a comprehensive 2010 study, Urban Institute researchers examined multiple such programs and determined that they were indeed beneficial: they maintained affordability in neighborhoods, helped lower-income families build assets, and reduced foreclosure rates. In the programs studied, 90 percent of buyers were still homeowners five years later, compared with only half of buyers under traditional ownership conditions.12
However, gathering the evidence was laborious and time-consuming. Researchers had to collect client-level data for every sale, in many cases by searching for hard-copy forms and county records, interviewing participants for whom no data was available, and conducting research and interviews for program-level information. The programs differed in markets served, types of homebuyers, and the affordability formulas used, making comparisons challenging.
Affordable housing advocates realized that any ongoing efforts to track program effectiveness required a different approach. A standard tool was needed to enable affordable housing organizations to collect data for day-to-day decision-making as well as for long-term assessments of what was working in the field. As a result, the HomeKeeper program was born.
HomeKeeper is a data management system for affordable homeownership organizations that helps programs manage their ongoing activities. It is maintained by the Grounded Solutions Network, a national nonprofit that supports local jurisdictions and affordable housing organizations throughout the country. The HomeKeeper system captures the characteristics and activities of homebuyers, the features of the properties involved, and key elements of program performance. In addition, a subset of the data that individual organizations provide feeds into a national data hub that reports on how the affordable housing sector is performing more broadly.
HomeKeeper is one example of collective efforts to learn at scale. By using shared metrics and aggregating their data, the participating affordable homeownership programs are able to gain system-wide insight into trends in the field. And because they are continuously adding data, they are able to track outcomes and longer-term impacts over time.
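A minimal sketch can illustrate the underlying mechanics of shared metrics: each local program keeps its full operational records but contributes only an agreed-upon subset to the hub, which can then compute field-level indicators no single program could produce alone. The schema, field names, and figures below are illustrative assumptions, not HomeKeeper’s actual data model.

```python
from dataclasses import dataclass

@dataclass
class SaleRecord:
    """A local program's record of one affordable-home sale (illustrative schema)."""
    program_id: str
    buyer_income: float           # local operational detail, kept private
    sale_price: float             # agreed shared metric
    still_owner_after_5y: bool    # agreed shared outcome metric

def shared_subset(record: SaleRecord) -> dict:
    """Keep only the fields a program contributes to the national hub."""
    return {
        "program_id": record.program_id,
        "sale_price": record.sale_price,
        "still_owner_after_5y": record.still_owner_after_5y,
    }

def hub_retention_rate(records: list) -> float:
    """A field-level metric the hub can compute across all contributing programs."""
    return sum(r["still_owner_after_5y"] for r in records) / len(records)

local_records = [
    SaleRecord("prog-a", buyer_income=41000, sale_price=180000, still_owner_after_5y=True),
    SaleRecord("prog-a", buyer_income=38500, sale_price=165000, still_owner_after_5y=True),
    SaleRecord("prog-b", buyer_income=45000, sale_price=210000, still_owner_after_5y=False),
]

# Each program shares only the agreed subset; the hub aggregates across programs
hub_data = [shared_subset(r) for r in local_records]
print(f"sector-wide five-year ownership retention: {hub_retention_rate(hub_data):.0%}")
```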
Alignment with other actors—which can facilitate more productively learning at scale—encompasses the interrelated but distinct ideas of knowledge-sharing and collaborative learning. Knowledge-sharing involves individual programs and organizations offering what they are learning to others: the good, the bad, and the ugly. Knowledge-sharing allows the social sector to marshal its resources effectively by avoiding duplication of effort in articulating social problems, developing potential solutions, and determining what works in which contexts. Through knowledge-sharing, organizations can build on what has come before them rather than re-creating knowledge for individual use or replicating solutions and strategies that have previously been found insufficient.
Collaborative learning refers to cross-program or cross-organizational efforts to collectively create data and information that everyone can use. Collaborative learning is required for the social sector to develop field-level insights and support interventions at a larger scale. Complex, system-level problems require coordination and the development of a shared data infrastructure to promote broad hypothesis-testing and analysis.
Some professionals in the field take a skeptical view of the social sector’s prospects for learning at scale. Throughout the “Reimagining measurement” initiative, many participants expected knowledge silos to continue to hamper the social sector’s ability to take full advantage of the possibilities for field-level learning. The current “opt-in” culture for transparency and sharing was predicted to persist, with organizations likely to continue to share only those results that reflect positively on their own efforts. Shared data standards and integrated data systems may become more common, but without more systematic intervention, real hurdles will likely remain. Interest in big data and analytics will continue to grow, but without a push to develop a shared data infrastructure, participants expected datasets to remain small and historical.
In contrast, however, some organizations are finding creative solutions that take advantage of recent social and technological developments. Among these developments are technologies that have transformed the ease and cost of collecting, sharing, and aggregating data; new analytic techniques such as predictive analytics and machine learning; and the open data movement, which has made data more accessible to citizens.13
Foundations themselves are struggling. They don’t share evaluations across their own programs, let alone across a sector. They still rely heavily on calling each other up to make decisions, relying on networks, trying to shortcut the information overload by asking trusted partners what to read in order to feel as though they’ve done their due diligence.—Director of an organization serving foundations and nonprofits
From these initiatives, organizations can gain essential insights about how to create monitoring, evaluation, and learning practices that enable the social sector to learn together at a level that matches the scale of the problems it seeks to solve.
Overcoming knowledge and data silos to more productively learn at scale will require the social sector to embrace much more coordinated and integrated approaches to monitoring, evaluation, and learning.
Achieving alignment with other actors may not be simple or easy, but it is essential for enabling the social impact sector as a whole to learn from experience and make a bigger difference in the issues it seeks to address.