True or false: Adaptive management means documenting lessons learned and using them to continuously improve programme effectiveness.
ASI's answer? It's kind of true, but also…kind of false.
Adaptive management needs to go a step beyond using lessons learned, to focus on structured, intentional experimentation. However, as practitioners, it often seems we don't have the time to set up and execute well-thought-out experiments. We recognise that things are complex and unknowable, and our response is usually just to jump in and "start doing stuff."
Sometimes it’s good to jump in, but adding structure to experimentation can help us manage unknowability and complexity much better. It can give us better data, timelines, and expectations, to make timely decisions on what to do next. The key is to experiment with multiple possible options at once, in order to accelerate feedback loops and learning.
So how can we be intentional about experimentation? ASI decided to put together some guidelines, and then work with a few of our programmes to test them out. The goal is to understand how we can streamline structured experimentation into our work on a more regular basis.
We boiled it down to seven core steps:
1. Consider your programme context. This includes the risk appetite of the donor, and how much time you have.
2. Determine the objective. In other words, what do you hope to find out from the experiments? Make sure the objective is relevant to your programme strategy: adaptive management experimentation should not be thought of as an "add-on," but rather a core tool for building strong interventions.
3. Design micro-pilots to test various courses of action. Think of at least two different solutions, ideally even three or four. Often, experiments will reveal that multiple solutions are possible, so don’t necessarily expect a yes/no answer.
4. Set the micro-pilot parameters. Key questions include: how long will the experiments run? We want to learn just enough to make a decision in the minimum time required.
5. Make a simple plan to measure. Again, think minimal. What is the simplest way you can know what’s working? And how quickly can you realistically get that data? Engaging the MRM team from the very beginning will help to answer these questions effectively.
6. Stick to the plan, but adapt as needed. We suggest regular but short check-ins with other team members.
7. Make decisions. Knowing when you have enough information or when enough time has passed to prove a point is often the hardest part. When you reach the end date, make space to discuss what happened, and what to do next. Think about what will be the next adaptive management "learning loop" that the intervention needs to go through.
Here's a quick programme example to put those steps into context:
Objective: Introduce weigh scales at three different points in the supply chain to determine which, if any, are effective methods to improve trust between maize growers, aggregators and poultry companies.
Micro-pilots: Test introduction of weigh scales at the points where
1. Poultry companies purchase maize from aggregators
2. Aggregators purchase maize from farmers (either at their premises or farm gate)
3. Poultry companies purchase maize directly from farmers
Parameters: 3 months during harvest season, 3 locations (one for each micro-pilot), 2-3 poultry companies per location, 3-5 aggregators per poultry company
Measurement: Fortnightly follow-up by telephone/SMS or in-person discussion, plus focus group discussions (FGDs).
Quantitative: # of attendees who had/hadn't seen scales used before; # of farmers/aggregators/companies who adopt the use of scales; # of actors who feel that scales have improved working relationships at that point in the supply chain
Qualitative (key informant interviews): What difference has the use of scales made (trust, sales, ease of workload, increased income, formal contracts, etc.)? Would they recommend scales to other aggregators and/or poultry companies? What is the most effective method of communication about weigh scales, e.g. a demo event, printed materials, or radio?
A few other reflections on how the guideline field-testing is going so far:
• At one point, a colleague asked us, “What’s the difference between a micro-pilot and a pilot?” Good question. We threw in this term to emphasise that these are supposed to be short, time-bound experiments. Too often, supposed ‘pilots’ end up running for more than a year. That’s not what we’re after here.
• When we approached our ASI programmes about participating in the guideline testing, some programmes were concerned it would take too much time and money to run experiments. It’s really important that adaptive management experimentation is right-sized and centrally linked to programme strategy. It should not be a burden for programmes — ideally it should make our jobs easier at the end of the day.
• We've found that determining the objective (step 2) can be harder than it sounds. Sometimes, teams are faced with a multitude of options that, when evaluated, keep revealing more and more unknowns. This can make the process quite daunting, and can result in teams losing interest in the idea of adaptive management experimentation in the first place. We're working on sharpening our guidance to make this more manageable.
Remember that this is not "7 steps to doing everything that is adaptive management" — it's only seven steps to doing structured experimentation. Adaptive management involves a lot of other things which we're not tackling here, but if we can get better at this critical aspect of adaptive management, it could go a long way towards helping us use programme resources more efficiently and effectively. So far, we think the steps are helping us do that. We'll share the full set of guidelines and more programme examples once we learn more.
This blog is part of our series on adaptive management.