Planning Evaluability Assessments: A Synthesis of the Literature with Recommendations

Rick Davies
2013

Summary

This paper summarises the literature on Evaluability Assessments (EAs) and highlights the issues to consider in planning one. It notes that an EA should examine evaluability: (a) in principle, given the project design, and (b) in practice, given the available data and systems. It should also examine the probable usefulness of an evaluation. While there is limited systematic evidence on the effectiveness of EAs, their relatively low cost means they only need to make modest improvements to an evaluation for their costs to be recovered.

The paper uses the OECD-DAC’s definition of evaluability: “The extent to which an activity or project can be evaluated in a reliable and credible fashion”. Common steps in an Evaluability Assessment include: (i) identification of project boundaries and expected outputs of the assessment; (ii) identification of resources available for the assessment; (iii) review of the available documentation; (iv) engagement with stakeholders; (v) development of recommendations; and (vi) feedback of findings to stakeholders.
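Read as a workflow, these six steps amount to a simple sequential checklist. The sketch below is a minimal, purely illustrative rendering of that checklist in Python; the EAStep class and its fields are hypothetical, not drawn from the paper:

    from dataclasses import dataclass

    @dataclass
    class EAStep:
        """One step in an Evaluability Assessment plan (illustrative only)."""
        name: str
        completed: bool = False
        notes: str = ""

    def default_ea_plan() -> list[EAStep]:
        """The six common EA steps summarised above, in order."""
        return [
            EAStep("Identify project boundaries and expected outputs of the assessment"),
            EAStep("Identify resources available for the assessment"),
            EAStep("Review the available documentation"),
            EAStep("Engage with stakeholders"),
            EAStep("Develop recommendations"),
            EAStep("Feed back findings to stakeholders"),
        ]

    if __name__ == "__main__":
        for step in default_ea_plan():
            print(f"[{'x' if step.completed else ' '}] {step.name}")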

The paper notes that Evaluability Assessments should cover three broad types of issues: programme design; the availability of information; and the institutional context (a simple screening sketch across these areas follows the list below). Other findings include the following:

  • Locally commissioned Evaluability Assessments are likely to generate the most support and value. However, other complementary strategies may be useful, including centrally provided technical advice, screening of a random sample of projects in areas where little assessment work has been done, and mandatory assessments for projects with budgets above a designated size.
  • The timing of an EA will depend on its expected outcomes: to improve the project design prior to approval; to inform the design of an M&E framework in the inception period; to decide whether an evaluation should take place later; or to inform the design of an evaluation that is already planned.
  • An EA’s recommendations should cover: (i) project logic and design, (ii) M&E systems and capacity, (iii) evaluation questions of concern to stakeholders, (iv) possible evaluation designs.
  • EAs can be applied beyond specific projects, to portfolios of activities, legislation and other policy initiatives, country and sector strategies and partnerships.
  • Ideally, EAs would be carried out by independent third parties.
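
Taken together, the summary suggests an EA screens a project along three dimensions (evaluability in principle, evaluability in practice, and likely usefulness) against the three issue areas noted above. The sketch below renders that screening as a simple grid in Python; the pass/fail scoring scheme and all names are illustrative assumptions, not a method prescribed in the paper:

    # Hypothetical screening grid: the dimension and issue-area labels come
    # from the summary above; the pass/fail scoring itself is an assumption.
    DIMENSIONS = (
        "evaluability in principle (project design)",
        "evaluability in practice (data and systems)",
        "likely usefulness of an evaluation",
    )
    ISSUE_AREAS = (
        "programme design",
        "availability of information",
        "institutional context",
    )

    def flag_problems(answers: dict[tuple[str, str], bool]) -> list[tuple[str, str]]:
        """Return the (dimension, issue area) cells judged problematic."""
        return [cell for cell, ok in answers.items() if not ok]

    # Example: a project whose monitoring data cannot support a credible evaluation.
    answers = {(d, a): True for d in DIMENSIONS for a in ISSUE_AREAS}
    answers[(DIMENSIONS[1], ISSUE_AREAS[1])] = False
    print(flag_problems(answers))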

The biggest risk facing an Evaluability Assessment is likely to be excessive breadth of ambition: reaching into evaluation design, or into evaluation itself.

Source

Davies, R. (2013). Planning evaluability assessments: A synthesis of the literature with recommendations. Working Paper 40. London: UK Department for International Development.

