What can be learned from the many impact evaluations (IEs) carried out by development agencies in recent years? This review highlights the need and growing demand for greater and more strategic coordination of IE efforts, and notes insufficient attention to diverse methodological approaches to evaluation. It is important in all policy sectors to reflect on the suitability of methods to development questions and to invest in the development of impact evaluations informed by methodological pluralism. Developing country evaluation capacity, early stakeholder involvement in IEs, and the dissemination of clear policy implications should also be supported.
Assessing ‘impact’ involves looking at the broad effects of development interventions on their surroundings. This is in contrast to a monitoring and evaluation approach, which focuses on immediate goals and outputs.
The dynamics of the research-policy interface vary in different policy sectors. Analysis of similarities and differences in the patterns of impact evaluation production and use across sectors found important similarities. These include growing recognition that IEs should be used as part of a broader monitoring and evaluation system, that multiple stakeholders should be involved, and that alternative methodologies are worth exploring. Differences include the following:
- Knowledge base: Impact evaluations have a long history in the health and agriculture/NRM sectors, and thus a broader array of experiences from which to draw lessons.
- Suitability: In the health sector, and to a lesser degree in agriculture and natural resource management (NRM), there is recognition that IEs are strongly suited to providing robust evidence on a range of key questions. In social development, there is growing appreciation that mixed-method impact evaluation can provide valuable insights into intervention effectiveness. Views in the humanitarian and infrastructure sectors and in results-based aid are more cautious.
- Methodological innovation for IEs: This is strong in health and social development, but less so in the humanitarian and infrastructure sectors.
- Gestation of interventions: In human and social development and results-based aid, there is recognition that programme impacts often take several years to materialise; in agriculture/NRM and infrastructure, it is thought that impacts may take considerably longer.
- Choice of implementing agencies: The choice of IE agency can be critical in promoting greater stakeholder engagement and the use of evaluation findings. Work with developing country governments and local researchers is greatest in the health sector, and is emerging in social development.
- Commissioning IEs: This is largely supply-driven, except in health and social development.
- Communicating findings: There is limited communication of findings at national level in all sectors except health. At the international level, coordinating mechanisms have emerged to communicate findings. However, there is concern that learning opportunities are limited by a focus on positive results.
- Use of findings: In the health sector, and increasingly in social development, use of evaluation findings is widespread. In agriculture/NRM, there is concern that the use of findings is hindered by evaluations’ limited attention to context and programmatic variables. In infrastructure, there is a general perception that the use of results stops at meeting reporting and official accountability requirements.
While there is a growing recognition of the importance of IEs, their potential to shape donor investments and national-level policy decision making has yet to be realised. Recommendations for development agencies focus on five areas:
- Strategic coordination: There is a need for a broader framework for IE production and use that goes beyond projects and addresses policy-level challenges. Methodologically, it is important to move beyond narrow debates on specific experimental methodologies toward mixed approaches.
- Funding: This is critical in shaping evaluation practice and incentives. Greater attention is needed to encourage process changes, the involvement of evaluation staff in dissemination, and publication of negative as well as positive results.
- Knowledge management: Common agreements on database formatting would promote knowledge sharing. Greater awareness is needed of how IEs are used to influence policy and practice.
- Capacity strengthening mechanisms: To avoid IEs being donor-driven, developing country evaluation capacity should be supported. There is also a need for a decision framework outlining whether, when and what to evaluate.
- Improving IE communication and uptake: Key factors facilitating uptake of IEs are early stakeholder involvement, and the dissemination of clear policy implications.