How can impact be credibly attributed to a particular intervention? This report discusses the merits and limitations of various methods and offers practical guidance on impact evaluation. A rigorously conducted impact evaluation produces reliable impact estimates of an intervention through careful construction of the counterfactual using experimental or non-experimental approaches.
The impact evaluation literature has been growing rapidly. While most studies focus on education and health, other sectors such as infrastructure, agriculture, and microfinance have seen a rapid increase in both the number and rigour of evaluations.
An impact evaluation should indicate more than whether an intervention works or not. It should also indicate in what context the intervention does or does not work, and to what extent its impacts vary across different groups of beneficiaries.
The core challenge lies in credibly attributing observed changes in the outcome variables to the intervention being evaluated. Attribution is difficult because the intervention is typically not randomly assigned and individuals do not select into the programme at random. Simply comparing those ‘treated’ by the intervention with the untreated will therefore bias the evaluation results and can lead to misleading conclusions about the intervention’s impact.
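The attribution problem can be stated in the standard potential-outcomes notation (a common formalisation, introduced here for exposition rather than taken from the report). Let $Y(1)$ and $Y(0)$ denote an individual's outcome with and without the intervention, and let $T \in \{0,1\}$ indicate treatment status. The naive treated-versus-untreated comparison then decomposes as

$$
\underbrace{E[Y(1)\mid T=1] - E[Y(0)\mid T=0]}_{\text{naive comparison}}
= \underbrace{E[Y(1)-Y(0)\mid T=1]}_{\text{true impact on the treated}}
+ \underbrace{E[Y(0)\mid T=1] - E[Y(0)\mid T=0]}_{\text{selection bias}} ,
$$

and the selection-bias term vanishes only when treatment is as good as randomly assigned, which is precisely what the methods below attempt to achieve or approximate.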
Different methods based on observational data have been developed to overcome this attribution challenge in impact evaluation:
- The difference-in-differences (DID) approach is probably the most common, owing to its reliance on relatively few assumptions and its flexibility in application. DID is applicable when outcome data are available for both the treatment group and an untreated comparison group, in periods both before and after the programme.
- Randomised controlled trials (RCTs) allow impact evaluation to overcome the limitations of observational data by proactively collecting experimental data in the field for impact estimation. RCTs work well for programmes targeted at individuals or communities and for pilot projects intended to indicate whether a scale-up is desirable. Non-trial approaches remain essential in sectors such as transport and energy, where randomisation is rarely feasible.
- Econometric techniques such as regression discontinuity design (RDD) and instrumental variables (IV) are useful, but are applicable only in certain settings.
- Before-after and with-without comparisons are unlikely to produce credible estimates of programme impact.
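The DID logic described above can be sketched in a few lines: the programme impact is estimated as the change in the treatment group's outcome minus the change in the comparison group's, netting out trends common to both. The data and function name below are hypothetical, purely for exposition, and assume the parallel-trends condition holds.

```python
def did_estimate(treat_pre, treat_post, comp_pre, comp_post):
    """Difference-in-differences impact estimate:
    (treatment group's change over time) - (comparison group's change)."""
    mean = lambda xs: sum(xs) / len(xs)
    treatment_change = mean(treat_post) - mean(treat_pre)
    comparison_change = mean(comp_post) - mean(comp_pre)
    return treatment_change - comparison_change

# Hypothetical outcome data (e.g. household income) before and after a programme.
treat_pre  = [10.0, 12.0, 11.0]   # treated group, baseline
treat_post = [15.0, 17.0, 16.0]   # treated group, follow-up
comp_pre   = [10.0, 11.0, 12.0]   # comparison group, baseline
comp_post  = [12.0, 13.0, 14.0]   # comparison group, follow-up

print(did_estimate(treat_pre, treat_post, comp_pre, comp_post))  # → 3.0
```

Here the treatment group improves by 5 units and the comparison group by 2, so DID attributes 3 units to the programme; a naive before-after comparison of the treated alone would have overstated the impact as 5. In practice the same estimate is usually obtained by regressing the outcome on treatment, period, and their interaction.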
There is no single well-developed evaluation design that can universally be applied to all projects. The optimal design for a project varies across and within sectors. Further lessons include the following:
- The impact evaluation, including the choice of the most appropriate methodology, needs to be carefully planned at an early stage, often at the outset of a new programme or project. Early planning can also lower costs substantially.
- While evaluations with creative design and data analysis are desirable, it is also advisable to replicate project evaluations in different contexts, using both experimental and non-experimental methods.
- It is important to make every effort to obtain the right data, and to build the capacity of, and establish strong partnerships with, leading research institutions and NGOs.
- Although RCTs are important for obtaining credible impact estimates, many practical issues, such as sample attrition, partial compliance, and spillovers, need to be addressed when the evaluation is designed and carried out.