Learning

Innovation and systems change require an adaptive management approach to programs. Your Monitoring, Evaluation, and Learning plan must:

  • Prioritize ongoing learning and adaptation over end-of-project measurements. 
  • Rely on methods suited to experimentation, non-linear learning, and regular adaptation of plans, strategies, and technical approaches.
  • Incorporate consistent reflection and insight into decision-making for future interventions.
  • Employ participatory approaches to engage key stakeholders throughout the entire process.

Traditional monitoring and evaluation (M&E) approaches can be challenging when applied to innovative projects or systemic changes for several reasons:

Uncertainty and unpredictability

Innovations and systems change are usually characterized by high uncertainty and unpredictability. Traditional M&E relies on predefined indicators, specific goals, and precise trajectories. In contrast, innovative projects often involve exploration, adaptability, and learning, making outcomes hard to predict.

Non-linearity

Traditional M&E methods often assume a linear pathway from inputs to outputs to outcomes, while systems change and innovation are inherently non-linear. They involve trying different approaches, learning from failures, and iterating on ideas. Identifying direct causal relationships and measuring impacts with conventional methods can therefore be difficult.

Dynamic and complex environment

The environment in which innovation or systems change occurs is dynamic and complex. Change in one area can lead to unexpected consequences in others, and isolating the impact of a specific intervention from other factors can be challenging.

Long-term impacts

The impact of innovations or system changes often takes a long time to manifest and can be challenging to capture within the typical timelines of M&E cycles.

Risk of failure

Innovations and systems change efforts involve a higher risk of failure as they try something new or complex. Traditional M&E methods may not account for this risk and may view failures negatively rather than as opportunities for learning and improvement.

Qualitative aspects

Innovations and system changes often bring about qualitative changes that are not always effectively captured by the quantitative measures used in traditional M&E. For example, changes in an organization’s culture or the level of collaboration among stakeholders might be crucial outcomes of an innovation challenge but are difficult to measure with standard indicators.

For these reasons, monitoring and evaluating innovation or systems change requires a more flexible and adaptive approach, one that can deal with ambiguity, complexity, and uncertainty and capture both quantitative and qualitative changes.

Forming a Theory of Change

Each program will undoubtedly have its unique theory of change. However, your measurement strategy will likely focus on short-term outcomes that reasonably signal the potential for long-term impact. In the context of innovation and systems change, we acknowledge that transformation occurs through the collaborative design, testing, and sustained implementation of new concepts. This process requires cultivating improved relationships and developing new capabilities, which typically necessitate shifts in mindset and understanding.

Consequently, your theory of change might encompass the following sequential stages:

1. Shifts in Individual Mindsets

Participants’ perceptions and understanding of innovation and systems change will likely evolve due to their involvement in the program.

2. Growth in Individual Skills

Participants’ abilities should improve as they engage in capacity-building activities.

3. Changes in Organizational Mindsets, Practices, and Cultures

As individual participants apply their new learnings within their respective organizations or institutions, it’s plausible to observe shifts in the broader organizational mindsets, practices, and cultures.

4. Change in Practices

As mindsets shift, capabilities increase, and cultures change, you hope to see individuals and organizations doing things differently than before, including specific activities around systems sensing and mapping, co-design, prototyping, and testing.

5. Strengthened Relationships or Partnerships

Relationships or partnerships may be forged or reinforced through system-sensing and network-building activities.

While these activities may be mandated in the challenge, it will be informative to understand (a) whether the activities are working as intended in the challenge and (b) whether participants adopt their learnings beyond the requirements of the competition.

If you can observe and measure positive shifts in these outcomes, and you are confident that the systems innovation approaches significantly contributed to these changes, then these outcomes can serve as proxy indicators predicting the likelihood of long-term impacts.

Throughout this process, it is essential to remember that the trajectory of change may not be linear due to the nature of open innovation and systems change. The theory of change may need to be revised and adapted as the process unfolds and the approaches’ systemic impacts become more apparent. It’s also vital to recognize and account for the potential for unintended consequences, both positive and negative.

Possible Indicators to Consider for Each Phase

While each program will develop bespoke indicators, these generic indicators represent the results you hope to see in implementing a systems innovation program. You can use these as a starting point for your MEL plan.

  • Outputs
    • Systems mapped and analyses completed.
    • Dissemination and communication of systems mapping results.
    • Number of stakeholders mobilized for systems mapping and challenge design through the discovery phase.
    • Diversity of stakeholders participating (as organizations and/or individuals, e.g., local organizations, historically marginalized groups).
  • Outcome
    • Increased awareness of the systemic issue among participants and the wider network.
    • Shifts in mindsets around systems thinking and systems change among network participants.
    • Increased capacities for systems thinking and systems mapping among network participants.
  • Impact
    • Enhanced interest and commitment to working in a systems-informed way.

  • Outputs
    • Number of eligible proposals received.
    • Diversity of organizations and/or individuals submitting proposals (e.g., local organizations, historically marginalized groups).
    • Number of ideas generated relevant to the challenge objectives.
  • Outcome
    • Measurable changes in knowledge, attitudes, and/or mindsets among individuals and organizations in applying innovation and systems change methods to their work.
    • Enhanced capacity for systems innovation among participants.
    • Enhanced motivation for collaboration among participants.
    • Measurable changes among participants in applying a systems lens to planning and implementing their work.
    • Measurable changes among participants in applying innovation methods to their work.
  • Impact
    • Organizations and networks demonstrate ongoing engagement, innovation, and collaboration to address systems-level issues.

  • Outputs
    • Number and quality of new partnerships or consortia established.
    • Level of alignment of solutions with challenge objectives.
  • Outcome
    • Development and testing of innovative solutions addressing systemic issues.
  • Impact
    • Systems-informed solutions demonstrably support, catalyze, or scale innovations.

  • Outputs
    • Network purpose and charter established.
    • Network coordination functions activated.
    • Network resources identified for continued collaboration.
  • Outcome
    • Strengthened connections and collaboration among stakeholders relevant to the network’s purpose.
    • Network continues in a structured way beyond the challenge period.
  • Impact
    • Collective influence of network members on systems issues amplified.
    • Collective action on systems issues enhanced and increased.

Pause & Reflect

One method you might consider adapting and integrating to support non-linear adaptation is “Pause & Reflect.” The “Pause & Reflect” methodology provides an innovative approach for Monitoring, Evaluation, and Learning in the context of innovation and systems change. It encourages teams to take intentional breaks to reflect on their work, understand emerging patterns, adapt strategies, and document lessons learned. This approach fits well with the complexity and dynamic nature of systems innovation.

“Pause & Reflect” sessions help teams consider what is and isn’t working in their systems innovation processes. You might discuss unexpected outcomes, novel insights, or emerging challenges. For example, systems mapping or capacity-building sessions may be going differently than expected, or the team might encounter unanticipated reactions from participants and the network. These reflection sessions provide a formal opportunity to learn from ongoing work, refine your process iteratively, and adjust course as necessary.

For systems change initiatives, “Pause & Reflect” can provide an avenue to consider the broader impacts and implications of the work. Systems change often involves long-term, complex interventions with effects that ripple out in many directions. Reflective sessions can help teams identify these systemic effects and consider how they align with their goals. For instance, a team might realize that their intervention successfully impacts one part of the system but inadvertently reinforces a problematic aspect of another part. With this insight, you can redesign the approach to drive the desired change.

“Pause & Reflect” sessions should involve all relevant stakeholders, including your core team and partners, beneficiaries, or others affected by the work. This inclusive approach ensures diverse perspectives, enriching the reflection process and enabling more robust learning and adaptation. These sessions should be conducted regularly and embedded into the project cycle, creating a culture of continuous learning and improvement.

Monitoring, Evaluation, and Learning (MEL) Methods for Innovation and Systems Change

While traditional MEL methods may not be suitable, various methods designed to respond more effectively to complexity and uncertainty can help you understand your progress and impact. In addition to simple “Pause & Reflect” sessions, you might consider the following methods:

  • Developmental Evaluation

Innovation and systems change often involve trying new things and learning as you go. Developmental Evaluation can be your ally in this kind of work. Unlike traditional evaluation methods that assess predefined outcomes, Developmental Evaluation supports innovation by providing real-time feedback, helping you learn from what’s happening and adapt your strategies accordingly. Remember, it’s not about judging success or failure—it’s about constant learning and development.

  • Outcome Harvesting

You’re likely working in a complex system where cause-effect relationships are hard to pin down. Outcome Harvesting can help you navigate this complexity. Instead of starting with predefined outcomes, this method involves identifying changes that have occurred (the ‘outcomes’), figuring out what your project did to contribute to them, and learning from this. It’s a way of working backward to understand your impact, particularly useful when unexpected outcomes crop up.

  • Most Significant Change

Storytelling is powerful. The Most Significant Change (MSC) technique taps into this power by collecting stories about the most significant changes experienced by those you’re working with or serving. By discussing and analyzing these stories, you and your team can gain deep insights into the impacts of your work, including those that quantitative methods might miss.

Decisions that will set your direction

  • What is your theory of change? What can you measure in your theory of change? What can you NOT evaluate in your theory of change?
  • How will your MEL framework support ongoing adaptive management, learning, and decision-making? What methods will you use?
  • Who are the learning and evidence for? What audiences will you communicate the findings to?
  • How and when will you collect data? At what points in the challenge would data collection make sense?
  • What time, effort, and skills do you need to collect and analyze the data?
  • What are your risks and barriers in collecting the data?

People you will need to find your way

  • MEL Expert

Your core team should include a MEL expert in innovation and systems change who can design the framework and oversee its implementation.

  • Researchers and Data Analysts

You’ll need research assistants to support data collection and a data analyst to clean, analyze, and visualize the data. These team members should be familiar with both quantitative and qualitative methods.

  • MEL champions

MEL champions are people within each key stakeholder organization who understand data collection and analysis and can help you disseminate the tools and engage the right stakeholders for all MEL activities.

  • External MEL Experts

Consider identifying external MEL experts willing to support the design and assessment of the program approach by participating in discrete activities and convening throughout the four stages. This expertise is optional, but given the complexity of evaluating innovation and system changes, it would be an asset to your team.

Review your plan for these critical elements

  • Have you understood the impact you seek to generate and the potential causal links between the current situation and the expected improvements?
  • Have you clearly articulated what you can evaluate and learn through the program and what data you need?
  • What moments in the program have you identified as data collection points? What tools do you need to collect and analyze the data?
  • Have you informed your relevant stakeholders, particularly network and challenge participants, of the commitment you need from them to collect the data you will need?
  • Do you have a system to ensure ongoing reflection points to adapt the program according to continuous learning?

See the warning signs first

  • Don’t over-promise what outcomes and impacts you’ll be able to measure. Think carefully when identifying your learning and evaluation questions. You want them to be observable and measurable during your program, so they must be concrete and grounded.
  • Avoid too many questions or too many indicators. A focused MEL approach will provide you with better data and insights.
  • Be sure to balance the data you need against the commitment required to provide it. Think carefully about stakeholders’ time and interests before expecting their participation. Be creative in imagining ways to collect data.

These resources can help you on your journey

Before creating your MEL plan, review GKI’s Training Deck for MEL for Innovation and Systems Change.

Review the example materials from GKI’s Accelerating Innovation for Resilience Bangladesh for ideas on MEL tools and methods.

