Marcus Jenal introduces a new series of webinars on tools to measure systemic change, focusing on the USAID-funded project Feed the Future Uganda.

During the first webinar, Eric Derks and Leanne Rasmussen introduced the Feed the Future project, which aims to increase Ugandan farmers’ use of good-quality agro-inputs by fostering more inclusive systemic changes in the agro-inputs industry. We also heard how the team monitor change in the agricultural input markets. Several takeaway points stood out:

The project's theory of change explicitly includes systemic change as a route to sustainable impact. The theory of change is high-level and does not specify exactly how the envisioned change will be achieved. Rather, it focuses on what needs to change in the system for the market to be more effective. This gives the team flexibility to explore different ways of achieving change.

The theory of change is coupled with a so-called ‘intervention scheme’ which adds to the team’s understanding of how change happens in the market system. The scheme looks at the succession of different levels of change that need to happen, but it does not feature a clear causal chain that describes in detail how a specific intervention leads to specific results. The scheme is not a results chain. It shows how reactions of market actors to interventions can lead to new general behaviours emerging and predominating in the market system. This guides the monitoring framework as it specifies where the team need to look for change. At the same time, the team do not narrow their focus to the investigation of a specific causal link, but look at multiple ways the change can be influenced by the project's interventions.

The monitoring framework (referred to as the 'M&E Scheme') has three layers. It combines performance management, results assessment and attribution assessment. This is very similar to the three-step approach that was introduced at a past BEAM webinar on attributing systemic change. 

The Feed the Future team quickly found out that asking the right questions to assess systemic change is more important than finding innovative technologies to measure it. The team invested time and resources to figure out what they needed to know to show that the activity achieved the systemic changes they envisioned.

The project combines a multitude of tools to show change at the different levels of the M&E Scheme. They include formal review meetings and informal channels of communication such as a WhatsApp group and an open-plan office. The tools become more formal as they capture higher-level changes like outcomes or systemic change. Quarterly reviews of the data provide an opportunity for the team to discuss the project's strategy. The team went through an intense learning journey, testing a variety of tools to see whether they could answer their questions. Some tools, such as business benchmarking, were tried and discarded; others, like network analysis, were radically rethought to make them more practical.

A number of good questions came up after the presentation by Eric and Leanne. The first asked how the M&E Scheme enables the team to learn from data and base their decisions on it. Both Eric and Leanne were clear that the main aim of the tools is to generate data that establishes a feedback loop for the team to learn. There is also a culture of learning in the team that allows for critical discussion of progress. The team can collectively decide that a specific intervention does not show signs of change and abandon it. The importance of trying different things and failing was also raised; Eric said the team were inspired by the phrase: 'if you are not failing you are most probably not trying hard enough and we are probably not making much progress.'

Another interesting question was on attribution. Eric explained that within the project, attribution was part of the performance management element of the M&E Scheme. Up to a specific level in the M&E Scheme, changes can plausibly be attributed: one can still see businesses react to the project's interventions and trace the consequences. Everything beyond that, Eric said, is an exercise in correlation; it becomes difficult to clearly attribute, and both the project and USAID (the donor) acknowledge that some of the causal connections cannot be proven definitively. Leanne explained that in these situations it is important to reach a level of plausible likelihood to show that the project contributed to the observed change. To achieve this, the project uses deep-dive investigations to figure out why and how observed changes happen. These investigations draw on qualitative questions that dig into the causal pathway.

Marcus Jenal leads BEAM's M&E activities. As an independent consultant and Mesopartner Associate, Marcus has contributed to SEEP's Systemic M&E initiative and co-authored two papers on the topic.
