I'm not clear what approach with abundant high-quality evidence we are comparing ourselves against, but nevertheless the oft-cited mantra in the echo chambers we inhabit is that the evidence isn't there and we need more. To me this points very much to a symptom rather than an underlying cause of the problem.

A market systems diagram can be used to examine almost anything. There is a core transaction, which can be split into supply, demand, and exchange, and there are supporting functions and rules which determine the quantity, quality, and price of those three aspects of the core transaction, as described in our recent paper on defining a system and systemic change.

Discussions at the recent BEAM conference in Lusaka led me to think that some clarity could be brought to discussions around 'evidence' by examining it as a system, with supporting functions and rules, some of which act as constraints on the quantity, quality, and price of that core transaction. In the evidence 'transaction', we are primarily concerned with issues of quality and quantity, but there are issues with each of the supply, demand, and exchange components.

On the supply side, a number of critical supporting functions and rules are not performing effectively. One such supporting function is the skills available to generate the evidence. There are whole consultancies whose business model is centred on evaluation but which do not possess some important impact assessment skills among their staff. That's not to say that the more complex (usually quantitative) impact assessment methods are always appropriate, but lacking the skills to conduct them, or the knowledge of how to employ them, limits the conception of what an appropriate methodology might be. Another supporting function on the supply side is finance. Are people able and willing to pay what it costs to generate the amount and quality of evidence required? I don't actually think finance is the most important constraint; it is perhaps less important than it is often portrayed.

There is also a key function in the supply of information for the production of evidence. Programmes assess interventions where they predict success; funders commission evaluations where they think there's a chance of a good news story. The quality of evidence is therefore highly constrained by selection bias. A different way to look at this is that, in an effective evidence system, there might be an independent function for the commissioning of evidence, or at least for the supply of data, to allow an independent assessment of where an impact assessment might be merited.

Key supply-side norms include what is considered acceptable evidence by those financing its production (rather than by those demanding the evidence for use). The lack of standards for the quality of evidence means that, as Elizabeth Dunn stated in her presentation at the conference, 'rigour' effectively means 'what I say goes and what you say doesn't'. There are also formal rules on the supply side when it comes to the question of why those with the requisite skills might engage. There are many with significant skills to provide the requisite quality of evidence, but for whom the incentives are not aligned to do so. Academics, for example, are strongly, and in some cases exclusively, motivated by the 'publish or perish' incentive, and only certain types of 'evidence' meet the requisite criteria.

In terms of exchange, skills are again a problem, but the nature of the skills is different. Where evidence of sufficient quality is produced, there is an issue with communication skills. Evidence is only useful where it is usable, and to be usable it has to be communicated in a way that can be understood.

The skills needed to produce high-quality evidence and the skills needed to communicate it are rarely found in the same people, which acts as a systemic constraint to the production and use of evidence. Another important supporting function of exchange is the platform on which it takes place. How many programmes have access to peer-reviewed journals? If evidence exists but is not accessible, it might as well not exist. Indeed, overcoming this systemic constraint might be as effective as improving the quality or quantity of supply. There are also formal rules which inhibit the quantity of evidence available: intellectual property rules restrict the sharing of journal articles, while contractual commitments, as well as commercial considerations, limit the sharing of programme-level data that would enable learning.

On the demand side, skills remain a problem. In this case, it is the skills required to consume the information which are often lacking: the skills to find and harvest existing knowledge relevant to the needs of the programme, as well as the skills to understand, digest, and use what evidence is available. There is little point in academics producing more methodologically rigorous evidence if those reading their papers cannot understand what it is, or how and why it was produced.

Finance could again be considered an issue here. If we cannot change the platform to one that is more accessible to consumers, then we need to alter consumers' ability to, for example, pay for journal subscriptions. A vital informal rule which affects the quality and quantity of evidence produced is the perception of what evidence is necessary, which is largely a consequence of the politics of evidence. Specifically, programmes, on occasion reflecting the concerns of their donors, insist on conducting their own data collection. It is rare for a programme to examine the evidence landscape to see what existing evidence it can utilise. Continually conducting its own surveys of beneficiaries or market players is costly and therefore compromises the quality of any additional evidence that can be generated.

In the evidence system, then, there are systemic constraints to improving the quantity, quality, and price of its supply, demand, and exchange. As with any systems analysis, the key to addressing these issues lies in looking at capacities and incentives: who does and who pays. Examining these issues in greater detail is beyond the scope of this piece and is likely to get far deeper into supporting markets of recruitment and issues around the politics of evidence. Suffice it to say, though, that conceptualising the supply of and demand for evidence as a system might enable those with a remit to address these issues, including DFID's R4D, USAID's LEO, or indeed BEAM, to focus more intently on how the flaws in the system might be sustainably addressed, rather than simply bemoaning the lack of ill-defined notions of evidence.


This is kindly republished here with permission from the author.
