Calls to action
Inspiring innovation in monitoring, evaluation, and learning in the social sector
In this section of the Re-imagining Measurement toolkit, we list promising new ideas, some for individual organizations and some that require collaboration, for researching and testing hypotheses about how to create the future we hope for in monitoring, evaluation, and learning. This catalog of potential actions was developed through research, our innovation lab, and multiple convenings. It is intended as a starting point as we collectively come together to develop new ideas, engage in action learning, and share what we learn.
- Calls to action: Evidence-based decision making
- Calls to action: Promoting diversity, equity, and inclusion
- Calls to action: Open data
Calls to action for evidence-based decision making, promoting diversity, and the open data movement
Getting to a better future poses incentive-related, technical, cultural, and capacity challenges. The good news is that exciting and innovative work is already being done, in the social sector and adjacent fields, to overcome these challenges. As the expected future makes clear, however, much more intentional, transformative, and coordinated action is needed.
Through our research, innovation lab, and multiple convenings, we’ve collected promising new ideas, insights, and hypotheses—calls to action—to inspire exploration and further innovation. These calls to action include experiments that individual organizations can try, as well as actions for groups of organizations. They are meant to provoke further ideas, adaptations, and refinements.
This is meant to be an ongoing exploration for the field as we collectively come together to develop new ideas, engage in action learning, and share what we learn. If you’re already trying these ideas, share your experience!
Examples of calls to action for more effectively putting decision making at the center
- Innovate new ways of creating and sharing monitoring results: Grant reports typically demand a great deal of work from grantees, yet too often they go unread and are seldom used in significant ways by foundations for ongoing decision making. What could it look like if grant reporting were fundamentally rethought? What if a funder worked with grantees (individually or in related clusters) to use data that is meaningful for the grantees first and foremost, or data the grantees already collect, but that would still suffice for the foundation’s compliance and monitoring purposes?
- Create embedded technology capacity to develop widely needed tools: Insufficient, low-quality data is pervasive in the social sector. Technology tools and infrastructure development could help simplify monitoring, evaluation, and learning tasks for organizations, and cross-functional teams could help build internal capacity. What if a funder or funders promoted a “Code for America”-like model, in which monitoring, evaluation, and learning and data analytics teams commit to a year of service to develop digital tools? The team could be embedded in a single foundation, but would work on organizational-level tools and technologies relevant across an issue area.
Explore the complete starter list of calls to action for more effectively putting decision making at the center.
In a better future:
- Information for on-the-ground decision making is prioritized
- Learning is embedded and continuous
- There is greater investment in monitoring, evaluation, and learning capacity
- The data and methods needed to inform decisions are available
Examples of calls to action for better empowering constituents and promoting diversity, equity, and inclusion
- Develop asset-based resources: Many actors use data through a deficit frame, focusing entirely on the challenges communities face rather than also including the strengths and resources they have to draw upon. By focusing solely on deficits, funders often overlook real assets that can be used to help solve critical community challenges. What if we developed best-practice resources and a toolkit for asset-based monitoring, evaluation, and learning, including for the creation of relevant data?
- Create tools to help organizations systematically collect constituent insight: While momentum to gather constituent feedback exists, collecting constituent feedback and insights remains elusive for many organizations. Could a group of organizations create a "constituent insight toolkit" that helps social sector organizations navigate the range of available options (e.g., direct feedback, behavior tracking) and catalogues resources for quick and easy implementation?
Explore the complete starter list of calls to action for better empowering constituents and promoting diversity, equity, and inclusion.
In a better future:
- Equity is consistently considered in and supported by monitoring, evaluation, and learning efforts
- Constituent feedback is an essential practice
- Constituents are empowered to make their own choices
- Data rights are secured
Examples of calls to action for more productively learning at scale
- Overcome disincentives to share among nonprofits: Nonprofit programs are typically evaluated individually. What if a funder or group of funders provided incentives to a group of grantees working in the same issue area with different theories of change, supporting aggregated learning and evaluation across multiple organizations?
- Create a diagnostic to help groups learn together: Some issue areas are much further along than others in shared learning, data collaboration, and collective knowledge development. What if a funder supported the creation of a diagnostic that detailed and assessed the conditions and key choices required for collective learning in an issue area?
Explore the complete starter list of calls to action for more productively learning at scale.
In a better future:
- Data, learning, and knowledge are shared openly and widely
- Knowledge gaps are collaboratively identified, and learning agendas are collaboratively pursued
- Data is integrated at the scale needed to assess social impact
- Evaluation synthesis, replication, and meta-evaluation are supported