Evidence of the effectiveness of the MSD approach is not strong enough - what can we do to change this?
In Hans Christian Andersen’s story The Emperor’s New Clothes, the monarch parades before his subjects in his new regalia and no one dares point out that he’s naked. It takes a child in the crowd to call out the charade.
Looking at barriers to effective learning at the Market Systems Symposium recently, we took inspiration from Andersen’s story. Cultural and structural obstacles that frustrate meaningful learning develop over time in any field. Every one of us involved in delivering MSD programmes needs to put ourselves in the place of the child in the crowd from time to time.
No, we do not think that the market systems approach is a charade. But the evidence base for the effectiveness of the MSD approach is not strong enough. We can undoubtedly do better, but only if we begin to ‘call out’ some of the entrenched issues that are hiding in plain sight.
Evidence matters because the market systems approach is increasingly visible in donor-funded private sector development, and also becoming significant in non-traditional social development sectors such as water and sanitation, health and education.
In our discussions we asked ourselves two key questions:
- Do we have robust impact evidence from which to learn about the effectiveness of applying market systems approaches?
- If better evidence were available, how could we learn better?
Do we have robust impact evidence from which to learn?
On the face of it, the suggestion that there is a lack of impact evidence about the effectiveness of market systems approaches is absurd. Since the first explicitly market systems project (the FinMark Trust, launched by DFID in South Africa in 2001), the BEAM Exchange Evidence Map has collected over 150 evidence documents on market systems interventions across 41 countries. Each of these documents reflects a large volume of monitoring data.
The problem is not with the quantity of evidence, but rather with its quality. Few of these analyses meet the minimum thresholds of evaluation rigour. BEAM’s Evidence Review in 2019 reported clear signs of publication bias, and we cannot learn from mistakes that we hide. In addition, most of the evidence was commissioned by implementation teams – so not strictly independent. Few of the evidence base documents are ex-post impact evaluations, which are the most reliable way to assess the overall performance of market systems programmes. By 2019 only 14 per cent of the Evidence Map comprised impact evaluations and external reviews.
So it appears not much has changed since Ruffer and Wach’s review of M4P programme evaluations in 2013 reported that ‘evaluations reviewed here are generally weak’ in terms of considering systemic changes, data quality, triangulation practices, use of theories of change and consistency of units.
Clearly there is a problem here. But do market systems projects perform any differently in this regard from other similarly complex development sectors? Part of the problem is structural to the aid sector as a whole. The need to show that taxpayers’ money is being spent effectively does not sit well alongside nuanced reporting of a complicated picture. In addition, the recipients of aid do not generally complain about poor services.
However, we should recognise that monitoring the impact of a market systems programme is harder than, say, building a school with aid funds. Market systems projects deliver, in a tangible sense, very little beyond diagnosis, facilitation and monitoring. Results are delivered (or not) by entrepreneurs adopting business innovations or public officials changing regulations, over whom the project has limited direct control. Getting accurate impact evidence from market systems projects is even harder than for other types of aid projects.
Improving accuracy is therefore partly a technical issue: applying counterfactuals to evaluations, taking the attribution of results more seriously, and undertaking ex-post evaluations. It is also related to a commitment to ‘serious monitoring’ (internal, longitudinal and so on); a decade of effort and experience has gone into developing the DCED’s Standard for Results Measurement, which includes independent auditing of programmes’ results measurement systems.
However, in my view, the root of the problem lies in the incentive frameworks created by the political economy of aid. Under pressure from donors to report rapid and extremely high impact-level results, combined with light-touch donor management of monitoring systems, project teams are incentivised to generate an optimistic view of their interventions. This tendency is only sharpened when payment by results modalities are used – where consultants’ payments are contingent upon the achievement of specific high-level results. The ICAI review of DFID’s private sector development work in 2014 gave an amber-red (meaning ‘performs relatively poorly’) score for its assessment of impact, linking this explicitly to the pressure to demonstrate results against measurable targets, rather than systemic change and broader growth and poverty reduction.
In short, everyone is incentivised to pretend that the Emperor is wearing beautiful clothes. I do not think this situation is inevitable. We need inspirational people to create the space and environment where development practitioners are incentivised to tell the truth about the results of their interventions and to report failures as well as successes. This is not an easy task, but it is vital and it is possible.
How can we learn better?
Assuming an environment is created that will generate sufficiently robust evidence to support learning, the question emerges – how can we learn better? We think this requires action at the cultural and the institutional level.
Even though humans are biologically wired to learn, institutional learning in development cooperation seems to be fragmented and owned by individuals. Many stakeholders recognise the importance of building a culture of learning but struggle to put it into practice. Happily, we already have a pretty good idea, from experienced MSD programme managers, of how to build high-performing teams with strong learning cultures.
Donors also have a role in either promoting or inhibiting the learning culture. In general, the donor approach has been to out-source learning to consultancy firms and platforms run by external entities such as the donor-financed BEAM Exchange, DCED or MarketLinks.
These online platforms perform a useful function in that they are a repository of evidence and can synthesise and evaluate this evidence with a degree of critical oversight. However, institutionally, we need to evolve from scattered independent evaluations and ad-hoc research about market systems topics into a robust and recognised field of learning that attracts independent researchers from different backgrounds.
From this viewpoint our current repositories of knowledge and evidence are not ideal. Instead we should be looking to create an enabling environment for serious learning around market systems.
First, market systems practitioners (implementing organisations) should make their demand for better evidence and knowledge effective by being prepared to pay for it. In this way, the online platforms can create a sustainable revenue stream that is independent of the pressures that come with donor funding.
Second, the distance between the market systems practitioner and academic worlds should be reduced. Market systems thinking has its conceptual roots in a respectable and currently vibrant academic critique of neo-liberal economics, drawing upon behavioural and evolutionary economics and the science of complex adaptive systems. (See, for example, Cunningham & Jenal on systems change, or Raworth’s work on doughnut economics.)
There is a window of opportunity for the market systems world to establish links with academic institutions in both donor and recipient countries in order to establish the latter as repositories of market systems thinking and application. For practitioners this solution offers an institutionalisation of knowledge which the private sector cannot replace. For academic institutions, engaging with market systems practitioners will yield a rich palette of empirical case studies and the promise of funding from a new source.
In conclusion, we need to be honest with ourselves that while the Emperor has new clothes, they still look a bit threadbare at present. We all have a role to play in taking learning more seriously.
Donors should engage with politicians to nudge the incentive frameworks that they are creating away from impact-level to outcome-level results.
Implementing agencies should value their ‘results’ not just as a way to demonstrate the effectiveness of a specific aid programme but as a valuable input to a broader learning process.
And academics should recognise that the MSD approach presents their institutions with an important opportunity to apply some of the most innovative thinking in a relevant and meaningful context.
Jonathan Mitchell is the Oxford Policy Management portfolio leader for financial and private sector development, and project director for the Decision Support Unit of the DFID private sector development programme in the Democratic Republic of the Congo.
Note that the views and opinions expressed in this article are those of the author and do not necessarily represent those of his employers, donor organisations, or the programmes he works with.