Good monitoring and results measurement (MRM) is possible when running adaptive management experiments.
Following on from our blog on adaptive management experimentation, here are our top tips for monitoring experiments effectively:
1. Avoid over-complicating. Forget about your nice-to-know questions. To borrow from Lean Start-Up language, think about the "minimum viable product" – what is the minimum you need to know, in order to understand which experiments are working? Think quick cycles of simple data.
2. Don't just bring in the MRM team for the monitoring part. Have them involved from the beginning, so they understand the objectives clearly and can help you monitor better.
3. Be realistic. Maybe you want to collect information on the experiments every week. Is that realistically going to happen? If you do need the data that frequently, make sure it is exceedingly simple and easily accessible, as delays will creep in otherwise.
4. Keep the budget simple, too. This should not look like an impact assessment. Make staff time your most valuable MRM resource, and keep it in-house wherever possible. Once you are more confident in which options you want to scale up going forward, then you can invest more in MRM.
5. Capture unintended outcomes. It’s good to have some numbers-based metrics, but something as simple as weekly 15-minute meetings to share observations can be enough to make sure you’re not missing rich and useful details.
6. Stick to your plan. It’s really easy to let the MRM component slide. One of the easiest ways to stay on top of the data is to make sure that the data is useful for decision-making, which will reinforce its importance for your team.
Now, all of this is ideal. Here’s what we’ve been finding when our guidelines interact with the "real world" of programmes:
It’s really, really easy to start over-complicating. On one of our programmes, we made a nice, neat table with about four key metrics that would help us determine the financial viability of a new business opportunity. The final partnership agreement then expanded the MRM questions about four-fold. The concept of a ‘minimum viable product’ is not always intuitive – especially for those of us with a research background! This probably just takes practice, as it requires us to "unlearn" some of our usual processes.
Our existing MRM systems are a disadvantage in adaptive management experimentation. This is related to the point on over-complication. On the one hand, we want these AM experiments to be streamlined into programme strategy. But on the other hand, if we streamline the monitoring into our "normal" MRM processes, we’ve found it complicates the collection and drags down the time to complete feedback loops. On another programme that’s helping us test the guidelines, when we tried to streamline into existing MRM tools, our experimentation questions fit into four different questionnaires administered on four different time frames. In the end, we just created our own spreadsheet. This raises the question: even when we’re not running experiments, do we really need or use all that data? (But that’s a whole other blog post.)
Simple, bi-weekly data collection with bi-weekly check-in meetings seems to work well. On one of our programmes, the adaptive management experiments focus on improving employee retention for distribution companies. Every two weeks, a staff person asks companies if they have hired, fired, or lost any staff. If so, we record it and, in some cases, follow up. If not, that’s the end of the MRM exercise. These conversations fit into ongoing engagement with the businesses, so when we have our bi-weekly check-ins, we probe why things are or aren’t improving based on what we’ve been seeing. The data keeps us grounded in what’s actually happening, and we supplement it with observations. This structure won’t make sense for every programme, depending on what you are monitoring, but so far it’s working in this context.
Separating programme data and experimentation data can be hard. Often, our logframes require us to count things like the number of people who attend events, or the number of events run, which doesn’t get at the real behavioural changes that we need to understand. These things are much easier to count, though, especially in a short time frame. We had to remind ourselves of what we really wanted to measure, and think of simple ways to assess it. It is more important to focus on measuring the number of actors who adopt the weigh scales, and to cut out the "vanity metrics" that give numbers without behavioural changes attached.
It’s okay to use perception-based measures, particularly in these rapid trials. For example, for one indicator, we are simply asking actors, "Do you feel that weigh scales have improved your working relationship with [the other market actor]?" Perceptions are very powerful in this case, even if they are not "scientific."
Measuring both rapidly and accurately is never easy, but we hope that creating these spaces for rapid MRM will help make our entire MRM systems more effective.
This blog is part of our series on adaptive management.