Module 6: Monitoring for adaptive management


Cycles, processes and documentation systems for adaptive monitoring

Review cycles for different programme levels

Market systems programmes are typically delivered as a portfolio of projects that seek to achieve change in one or more sectors through individual interventions. The pace and intensity of monitoring is likely to vary for each of these levels (portfolio, sector and intervention).

For instance:

  • A review of what is happening with the overall portfolio might be carried out every six months or annually
  • A sector team might review its activities quarterly or every six months
  • The progress of an individual intervention might be reviewed every month, or after specific key activities have been implemented.

These cycles are illustrated in the diagram below.

Review cycles
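To make the idea of layered review cycles concrete, the cadences above can be represented as a simple schedule. This is an illustrative sketch only: the level names and intervals are assumptions drawn from the examples above, and each programme would set its own rhythm.

```python
from datetime import date, timedelta

# Illustrative intervals only - each programme defines its own rhythm.
# These mirror the examples above (days are approximate).
REVIEW_INTERVALS = {
    "portfolio": 365,     # annual
    "sector": 91,         # quarterly
    "intervention": 30,   # monthly
}

def review_dates(level, start, count=3):
    """Return the next `count` scheduled review dates for a programme level."""
    interval = timedelta(days=REVIEW_INTERVALS[level])
    return [start + interval * i for i in range(1, count + 1)]

if __name__ == "__main__":
    for level in REVIEW_INTERVALS:
        dates = review_dates(level, date(2024, 1, 1))
        print(level, [d.isoformat() for d in dates])
```

In practice such a schedule would be overlaid with the ad-hoc reviews discussed next, since fixed cadences alone cannot handle decisions that will not wait for the next formal meeting.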

The review frequencies given here are indicative only, and each programme will need to define its own rhythm for monitoring at the different levels. Some additions to this cycle might also be required, for example:

  • The decision to stop an intervention, change a sector strategy, enter a new sector or pull out of an existing one should probably be taken on the basis of a review carried out specifically for that purpose.
  • It is also possible that the emergence of a new opportunity requires an immediate decision which cannot wait until the next formal review. This underlines the need for field staff to have some decision-making authority, to allow them to react quickly.

It is recommended that decision-makers at different levels of the programme participate actively in review meetings, in order to ensure that the results have broad support and are acted on. It is also advisable that review meetings begin early on in the programme, to establish a process of reflection and review and to build the capacity of the team to analyse, discuss and make effective decisions. 

Questions to guide a review process

These questions establish a common understanding of the work under review. Divergences from the plan should be particularly explored:

  • What was planned?
  • What was actually done?

These questions encourage reflection on successes and failures during the course of the activity. Answering the question 'why?' will help to make sense of what was observed:

  • Why were there differences?
  • What happened as a result?
  • What was unexpected?
  • Why?

This question helps the team turn reflection into clear recommendations on how to improve:

  • What would you do differently next time?

These questions focus on the strategy and logic for an intervention. They should lead to reflection about whether the intervention needs to be adapted, and whether the underlying assumptions about what is needed to bring about a desired change are well-founded or not:

  • Have we achieved what we set out to do?
  • Are we still doing the right things?

Tools for knowledge and learning

30 tools and techniques to facilitate improved knowledge and learning.

Different kinds of review session

It is also useful to consider the different kinds of review session that might be appropriate at different stages of an intervention. Two contrasting examples are described below.

After-action reviews 

An after-action review is useful for reflecting on a particular activity or group of related activities, and is appropriate when an intervention milestone has been reached. The purpose of the review should be for individual team members to reflect on what happened, and to discuss successes and failures in an open and honest fashion. The purpose is to learn from experience in order to accomplish similar tasks more effectively the next time. 

The whole process should be kept simple. There is also potential to experiment with the process and find a format that works for a particular team. As such, after-action reviews need to be kept fairly ‘light-touch’ and are more appropriate for reviewing progress within a small team rather than for assessing performance at intervention level or above. 

Strategic reviews 

These are regular events which provide the opportunity to review results at different programme levels, i.e. interventions, sectors or the portfolio as a whole. They can be organised to allow for discussion and constructive criticism between different teams. Such reviews should therefore be fairly structured events.

Issues to consider beforehand include deciding what preparation (e.g. reading or reviewing data) will be required prior to the review, what issues will be discussed at the meeting, and what decisions are needed. The review process itself can follow a simple logic:

  • What does the available monitoring data suggest about the issue in question?
  • What information is missing, and what more needs to be understood?
  • What decisions need to be made in light of the findings?

It is also important to document the conclusions drawn to support future assessment and learning about how a programme has adapted over time, and why.

In addition, a strategic review should provide the opportunity to revisit the theory of change and results chains. Market systems change over time (sometimes very quickly), so it is important not only to review the changes and results of interventions, but also to question the assumptions on which the programme and its interventions were developed. Doing this on a regular basis should also help to promote a culture of reflection and learning within the programme. 

Of sasquatches and flexible programming: a genuine sighting

How a programme in Cambodia used review meetings to review its theory of change and interventions.

Review of the use of theory of change in international development

How theory of change is used within DFID.

Documenting and accessing monitoring data

For monitoring data to be used it needs to be easily accessible by programme staff on a day-to-day basis, and not only during a formal review process. In highly dynamic situations, frontline staff may need to access information daily or weekly. Senior managers will also need timely access to data in order to oversee programme implementation and to communicate effectively with donors and other stakeholders.

It is important that data collection and management requirements do not overburden staff, and that the benefits of having and using data outweigh the work required to collect it. It should be the responsibility of both programme managers and the dedicated monitoring team to make the processes as lean as possible. Some ways to achieve this include:

  • Reviewing what information is collected, and dropping items where necessary
  • Timing documentation efforts so that they coincide either with internal review processes, or with other project milestones at which data is collected from beneficiaries or implementing partners
  • Having a quality assurance process in place to make sure that captured data is reliable, meaningful and usable
  • Making sure that staff are aware that the data they collect is valued and used.

Large market systems programmes will almost certainly want to use a dedicated IT system to store and manage information. While this can take considerable effort to set up and adapt to a programme’s specific needs, it is likely to make monitoring processes much more effective and also allow data to be tailored easily for different stakeholders. How sophisticated such a system needs to be will depend on the circumstances and complexity of a particular programme. Possible options include: 

  • A fully integrated platform, which allows electronic data collected in the field to be fed directly into the system, and which provides a range of functionality for easy cross-referencing and comparison of data. Examples of 'off-the-shelf' platforms are DevResults and ActivityInfo
  • 'Lighter' systems based on Microsoft Excel or another spreadsheet program, into which data is entered manually, and which use a manually adjusted dashboard to show what information is available at a given moment.
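A hedged sketch of what a 'lighter' system automates may help. The records, column names and checks below are hypothetical, but the pattern is typical: manually entered data, a lean quality-assurance pass to flag unusable records, and a simple summary 'dashboard'.

```python
import csv
import io

# Hypothetical monitoring records, as they might be entered into a spreadsheet.
RAW = """intervention,indicator,value,date
Seed supply,farmers_reached,120,2024-03-01
Seed supply,farmers_reached,,2024-04-01
Vet services,farmers_reached,85,2024-03-15
"""

def quality_check(rows):
    """Flag records that fail a basic check - a lean QA step, not a full system."""
    issues = []
    for i, row in enumerate(rows, start=2):  # row 1 is the header
        if not row["value"]:
            issues.append((i, "missing value"))
    return issues

def dashboard(rows):
    """Summarise usable records per intervention - a minimal 'dashboard' view."""
    totals = {}
    for row in rows:
        if row["value"]:
            totals[row["intervention"]] = totals.get(row["intervention"], 0) + int(row["value"])
    return totals

rows = list(csv.DictReader(io.StringIO(RAW)))
print(quality_check(rows))   # rows that need follow-up with field staff
print(dashboard(rows))       # headline totals per intervention
```

Even a sketch this small illustrates why the quality-assurance step listed earlier matters: the dashboard silently excludes the incomplete record, so flagging it for follow-up is what keeps the summary trustworthy.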

What is an Aid Information Management System (AIMS), and do you really need one?

Tools4Dev's pros and cons of using a bespoke monitoring system.

Whatever system is chosen, it is important to be aware of the risk of focusing too closely on the system itself, and forgetting about the fundamentals of monitoring – making sure that processes for regular reflection are established, and creating a learning culture within the organisation. 

For practical examples, read the monitoring story from the STARS programme.