Evidence on market systems approaches is one of the main topics at the BEAM Conference next week. To prepare for this, Matthew Ripley of the ILO shares his perspective on the politics of evidence.
Everyone wants evidence about the impact of market systems development programmes. The cry is clear: please sir, can we have some more? Empirical proof will show that the approach works, help hold implementers accountable, and justify future funding.
But why bother? Of all the problems with the current framing of ‘evidence’ (and there are quite a few – see here and here), the biggest is the assumption of rationality: that people actually use the available body of facts to inform whether their position is valid (the dictionary definition of evidence, in case you’re wondering). We don’t need to be experts in behavioural science to recognise that the world of international development is often not only irrational, but predictably so. Decisions are not systematically made based on information about effectiveness, but on what one impact investor has called the dance of deceit: set to the music of politics, perception and persuasion.
An excessive focus on the content of the evidence itself ignores the political economy of its use.
In the mid-1980s, Steve Jobs and Apple CEO John Sculley fought over the future of the new Macintosh computer. Despite Jobs having the vision and arguably the data on his side (hindsight justified his faith in the Mac), he presented an arrogant case to a board that already held him in low esteem. He eventually lost what was in effect a proxy war for control of the company, and was demoted.
Systems of norms, cultures, incentives and rules shape whether evidence is likely to be used ‒ or misused. As Jobs found out, it’s not just the facts that convince, it’s the context of their reveal.
As a consequence, our obsession with methods ‒ and constant debates about rigour ‒ means many of us are left chasing a MacGuffin. Searching for the silver bullet of evidence, we measure, evaluate and aggregate to reach a magic evidence metric. A big, impressive-sounding number would be preferable: how do improved incomes for 2 million farmers sound? But such ‘evidence’ is rarely conclusive, nor even that meaningful. As Mike Field wrote in a recent blog, much of what we currently define as evidence is not really evidence, as it does not provide insight or understanding.
In criminal trials, there is seldom a moment when a single piece of evidence provides irrefutable proof of guilt (or innocence). Many fragments of evidence have to be pieced together around a narrative. Facts are framed – and often misrepresented and distorted – depending on who is presenting, and to whom. Look at how much effort goes into jury selection. Lawyers don’t just shove the evidence in front of people; they craft their argument using carefully selected evidence as a means of persuasion.
In development, evidence is not apolitical. It can be used subjectively, as a powerful stick and to reinforce existing prejudices. If we don’t like something, we can always play the ‘yes, but where is the evidence for that?’ card. Or see something as fair game for methodological criticism by questioning whether the evidence is hard, solid or robust ‒ whatever we mean by those terms ‒ and then dismiss it as anecdotal. Deep down, we know we’ll never find the murder weapon smothered in bloody fingerprints, but that doesn’t stop us asking for it.
There is no definitive proof that a market systems approach always works.
So what? After 300 years, there is no evidence that challenge funds are effective. Microfinance, so says the evidence, leads to indebtedness. Yet both remain donor darlings. See what I did there? It reminds me of a call with a prominent academic who declared that there was little point further evaluating enterprise development interventions as they have no impact. His evidence? A single experimental study on business management training.
The belief that evidence leads to more informed positions ‒ and if we build evidence then it will be properly used ‒ underpins much of the impact assessment work we do, including here at the ILO. But this puts the cart before a non-existent horse. In the reality of development, politics often drives evidence, not the other way around. Evidence is politics.
It’s not all doom and gloom.
By better understanding the different motivations, beliefs, principles and priorities shaping the demand for evidence, surely we can come up with a more appropriate supply. Are we looking for something that is accountability-, knowledge- or decision-focused? What kind of information ‒ if any ‒ would be compelling for our audience (and who is our audience)?
As a practitioner, I need evidence that helps a project add, drop or build on the many possible pathways towards systemic change. Are new behaviours, roles and innovations in the market meeting our sustainability (commercial viability), scale (barriers to entry) and value (pro-poor benefit) hypotheses? This may be a different need from that of my implementing agency (generalizable learning), or indeed my donor (holding the project accountable to top-line impact).
So of course we should bother with evidence. As long as we realise that in market systems, this evidence is going to be complex, context-specific and perhaps even contradictory: we do not operate in the simple realm of ‘what works’. And let’s stop trying to generate more evidence in the hope it will be used. Instead, let’s first try and unpack the ‘use context’ ‒ the irrationalities, tensions and trade-offs ‒ to see if, when and how evidence might actually be useful.
The views expressed here are those of the author and do not necessarily reflect the views of the International Labour Organisation.
Find out more about the BEAM Conference. Matthew will be speaking at the plenary on Friday 20 May, The good, the bad and the ugly: market systems and the politics of evidence.