Why is monitoring and evaluation (M&E) still not being carried out effectively? This paper is a practitioner’s response to current debates on M&E and aid effectiveness. It examines the technical side of M&E and the latest thinking on complexity, arguing that current methodological debates are red herrings. It highlights the underlying incentive structures that create a need for M&E but no sincere demand for the impact assessments M&E is designed to produce. The aid industry’s default setting is lesson suppression, not lesson learning.
M&E reports are similar to the audited accounts provided to a company’s shareholders, but with one crucial difference: a shareholder has an interest in reading the audited accounts. Bad news is as valuable to shareholders as good news, and investor interest in the share price puts daily pressure on management to perform. None of this is true for an aid programme. Once the money has been given, an aid donor expects nothing more of it. Aid agencies and NGOs justify their existence in terms of results, but they must also consider the private interests of their political masters, their staff and their organisation’s own survival.
Many people believe that M&E has failed because it cannot handle complexity. However, it is not the complexity of developing countries that makes M&E so difficult. The problem is the incentive structures of international aid, which prevent M&E from being applied effectively:
- M&E is understood as a mechanical operation separate from daily programme operations. Few monitoring surveys are funded for long enough to measure impact against a baseline.
- Good news is preferred over bad, leading to a level of M&E that gives the minimum assurance of effectiveness without raising hard questions.
- Most aid programmes are built on a lowest-common-denominator agreement among major stakeholders, whose incentives vary.
- Aid intermediaries (country governments, civil society, NGOs) do not see themselves as agents but as independent actors, with their own entitlements and constituencies. They resist being held accountable for their performance.
- M&E debates are dominated by the question of measuring outcomes, even though attributing outcomes reliably to a project is endlessly problematic. Outputs, by contrast, are a robust, easily measured indicator of what the primary stakeholders believe the outcome will be.
Simple, standard M&E tools have been criticised, not because they were wrong but because they delivered the wrong message. Conversely, there has been a strong incentive to encourage research into new tools in the hope that they will get the message right. Today’s methodological debates and theories around complexity are red herrings. They help stakeholders avoid the key elements of M&E – monitoring, process and accountability. The issue of effective impact evaluation can be summed up very simply:
- Measured outcomes cannot, without attribution, prove anything about impact.
- Some form of programme logic, causal chain or theory of change is essential for any attribution.
- The context and the way it influences the causal chain must be fully understood when assessing attribution.