
GSDRC

Governance, social development, conflict and humanitarian knowledge services


Why We Will Never Learn: A Political Economy of Aid Effectiveness

Library
James Morton
2009

Summary

Why is monitoring and evaluation (M&E) still not being carried out effectively? This paper is a practitioner’s response to current debates on M&E and aid effectiveness. It examines the technical side of M&E and the latest thinking on complexity, arguing that current methodological debates are red herrings. It highlights the underlying incentive structures that create the need for M&E, but do not create a sincere demand for the impact assessments that M&E is designed to produce. The aid industry’s default setting is lesson suppression, not lesson learning.

M&E reports are similar to the audited accounts provided to a company’s shareholders. However, a shareholder has a genuine interest in reading the audited accounts: bad news is as valuable to shareholders as good news, and investor interest in the share price puts daily pressure on management to perform. None of this is true for an aid programme. Once the money has been given, a donor expects nothing more from it. Aid agencies and NGOs justify their existence in terms of results, but they must also weigh the private interests of their political masters, their staff and their organisation’s own survival.

Many people believe that M&E has failed because it cannot handle complexity. However, it is not the complexity of developing countries that makes M&E so difficult. The problem is the incentive structures of international aid, which lead to failure to apply M&E effectively:

  • M&E is understood as a mechanical operation separate from daily programme operations. Few monitoring surveys are funded for long enough to measure impact against a baseline.
  • Good news is preferred over bad, leading to a level of M&E that gives the minimum assurance of effectiveness without raising hard questions.
  • Most aid programmes are built on a lowest-common-denominator agreement between major stakeholders, whose incentives vary.
  • Aid intermediaries (country governments, civil society, NGOs) see themselves not as agents but as independent actors with their own entitlements and constituencies. They resist being held accountable for their performance.
  • M&E debates are dominated by the question of measuring outcomes, although attributing them reliably to a project is endlessly problematic. Yet outputs are a robust, easily measured indicator of what the primary stakeholders believe the outcome will be.

Simple, standard M&E tools have been criticised, not because they were wrong but because they delivered the wrong message. Conversely, there has been a strong incentive to encourage research into new tools in the hope that they will get the message right. Today’s methodological debates and theories around complexity are red herrings. They help stakeholders avoid the key elements of M&E – monitoring, process and accountability. The issue of effective impact evaluation can be summed up very simply:

  • Measured outcomes cannot, without attribution, prove anything about impact.
  • Some form of programme logic, causal chain or theory of change is essential for any attribution.
  • The context and the way it influences the causal chain must be fully understood when assessing attribution.

Source

Morton, J., 2009, 'Why We Will Never Learn: A Political Economy of Aid Effectiveness', James Morton & Co



Outputs supported by FCDO are © Crown Copyright 2023; outputs supported by the Australian Government are © Australian Government 2023; and outputs supported by the European Commission are © European Union 2023
