“Management by results - like driving a car by looking in a rear view mirror.” (W. Edwards Deming)
The Isley Brothers' song "Here We Go Again" recently reminded me of painful lessons learned while monitoring agricultural programmes in the late 1980s and early 1990s.
My biggest mistake (described below) lay in attempting through meticulous analysis (such as incremental gross margin analysis) to statistically prove success in raising farmers’ yields and incomes. I assumed such information was credible for the funder and useful in enabling management to learn and adapt, but I was wrong.
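For readers unfamiliar with the technique, a gross margin is simply revenue minus variable costs per unit area, and the "incremental" comparison I was attempting looks something like the sketch below. All figures and names are illustrative assumptions, not data from the original surveys:

```python
# Sketch of incremental gross margin analysis (illustrative figures only,
# not the author's original survey data).

def gross_margin(yield_kg_ha, price_per_kg, variable_costs_per_ha):
    """Gross margin per hectare = crop revenue minus variable costs."""
    return yield_kg_ha * price_per_kg - variable_costs_per_ha

# "Without programme" vs "with programme" comparison for one crop
baseline = gross_margin(yield_kg_ha=1200, price_per_kg=0.25,
                        variable_costs_per_ha=150)      # 150.0 per ha
with_support = gross_margin(yield_kg_ha=1500, price_per_kg=0.25,
                            variable_costs_per_ha=210)  # 165.0 per ha

# The "incremental" margin is the figure I tried to prove statistically
incremental_margin = with_support - baseline            # 15.0 per ha
```

The arithmetic itself is trivial; the trouble, as the rest of this piece argues, lies in attributing any such difference to the programme rather than to weather, prices, or the dozens of other forces acting on smallholder farms.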
My error was later confirmed by a World Bank review of M&E policy and practice. Buttressed by other sources of evidence, it warned against using high-level results (e.g. crop yields or farmer incomes) as valid measures of success within relatively short (e.g. five-year) implementation periods.
It would be ironic if the donors and designers of monitoring systems today were reluctant to learn from past experience. Yet, evidence suggests this is the case.
Monitoring: a management perspective
The re-emergence of adaptive management (first documented by Rondinelli et al. in the 1980s) to cope with complexity and uncertainty makes it clear that high-level results (e.g. yields and incomes) should be treated as projections, not “targets”. Most defy prediction, and complex external forces acting on market system players and farmers may overwhelm the programme’s influence.
The great management thinker, W. Edwards Deming, observed the most important things to know are often the unknowns. He explained what he meant by this in The New Economics.
“It is wrong to suppose that if you can’t measure it, you can’t manage it – a costly myth.”
What Deming refers to as the unknowns are assumptions which, for monitoring purposes, matter as much as, if not more than, results themselves. Lessons from the World Bank’s review and Deming’s philosophy chime with my experience of carrying out annual crop production and yield surveys in Malawi.
In my case, after presenting the first round of survey results, I noticed how little use senior management made of the data. The Director of Agricultural Extension told me that, while the survey results were interesting, they did little to inform the actions of his and other departments (e.g. research, crops, and the women’s programme).
Instead, he told me they needed me to reveal what was more immediately unknown to them: farmer responses to extension support and how this varied. Understanding this would provide a basis for remedying rejection among farmers and replicating the successes of those who had adopted and retained the advice they received.
When we discussed what specific questions would meet this purpose, we reeled off four examples:
- What are the farmers’ impressions of extension agents’ performance?
- How many farmers adopt their messages and how does this vary by message, crop, and gender of the household head?
- Why do some farmers adopt message x? On how many of their plots do they adopt the message and for how many seasons?
- Why do others not adopt the same message and what are the multiplier effects of this rejection among neighbouring farmers?
In response, we revised our approach to focus on understanding the interaction between extension agents and their client farmers. We ensured that the survey treated farmers as subjects of conversations on issues that mattered to them, not objects of a survey that enumerated values of results determined by funders.
Challenges with using yields and incomes as targets
There are three reasons why treating improvements in yields and incomes as targets is problematic:
- Agricultural programmes work in highly uncertain environments. Context matters. Focussing on these high-level results may not accurately reflect the programme's impact and can lead to misleading conclusions.
- Claiming yield gains, let alone their contributions to farm and/or household income, requires a suspension of disbelief. It is statistically impossible to establish trends in crop yields in rain-fed smallholder farming systems within the implementation periods of direct delivery programmes, let alone attribute those trends to the programme and declare an association with additional farmer income.
- Farmers farm for a host of different reasons and adopt a variety of strategies. For example, the binding constraint facing many African farmers is labour, not land. In stark contrast to Asia, most increases in African agricultural production have been generated through an expansion of cultivated area, with adverse consequences for biodiversity. MSD programmes should do more to analyse farming systems and, in doing so, anticipate heterogeneity of circumstance and aspiration.
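The second point can be made concrete with a back-of-envelope calculation. The figures below are illustrative assumptions (a mean yield of 1,000 kg/ha and 25% inter-annual variation are plausible for rain-fed smallholder systems, but are not survey data):

```python
import math

# Why a 5-year series of rain-fed yields cannot establish a trend.
# Assume (illustratively) a mean yield of 1000 kg/ha with 25% year-to-year
# variation, observed once per year over years t = 1..5.
n = 5
mean_yield = 1000.0
sigma = 0.25 * mean_yield                  # inter-annual noise, kg/ha

t = list(range(1, n + 1))
t_bar = sum(t) / n
sxx = sum((ti - t_bar) ** 2 for ti in t)   # = 10.0 for t = 1..5

# Standard error of an OLS trend slope fitted to one observation per year
se_slope = sigma / math.sqrt(sxx)          # ~79 kg/ha per year

# A genuine 2% per year yield gain would be only 20 kg/ha per year
true_trend = 0.02 * mean_yield

print(f"slope standard error: {se_slope:.0f} kg/ha/yr vs "
      f"true trend: {true_trend:.0f} kg/ha/yr")
```

Under these assumptions the noise in the fitted trend is roughly four times the size of the signal a successful programme might plausibly generate, so no five-year survey series could distinguish real improvement from ordinary weather variation.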
Today's MSD programmes
The aim of MSD programmes can be summed up succinctly as ‘shifting the conditions that are holding a problem in place’. Nearly all MSD agriculture programmes work on the assumption that low productivity is the underlying cause, not the symptom, of the problem, and that services and technologies exist that can improve farm yields and so raise farm incomes.
A 2020 learning review of commercial agriculture programmes, which drew on evidence generated by 12 programme evaluations, showed how widely these assumptions are treated as so certain that there is no need to check them, as illustrated by the PropCom programme:
“….low productivity was identified as the main cause of low incomes and the research pieces identified both barriers in the supply and demand side of the market for services and inputs (the support market) that could increase productivity”
MSD programmes operate in complex environments. It is therefore striking that the review found none of the 12 programmes had correctly identified their key assumptions, and surprising given the emphasis MSD programmes place on developing results chains. Moreover, none sought to analyse whether those assumptions would hold. Data collection was focused exclusively on measuring movements in the relative values of results indicators. This is perhaps not surprising: indicators remain king given their political utility, while assumptions continue to be afforded limited attention and importance, even in the DCED guidelines.
I would argue that monitoring should be less about measuring results indicators and more about answering critical questions relevant to management. Understanding how targeted market system actors rate, respond to, and benefit from programme interventions should take precedence if the systemic rationale of MSD programmes is not to be compromised.
Over time, analysis of the interactions and relationships between these actors and their clients, smallholder farmers, can further enrich learning and adaptation. By doing this, modern monitoring practice can better support the effectiveness and impact of MSD programmes among the most significant investors in the agricultural sector – smallholder farmers themselves!