How do cost, time and data constraints affect the validity of evaluation approaches and conclusions? What are acceptable compromises, and what are the minimum methodological requirements for a study to be considered a quality impact evaluation? This booklet from the World Bank provides advice on conducting impact evaluations and selecting the most rigorous methods available within the constraints faced. It clarifies the nature of the trade-offs between evaluation rigour and budget, time and data constraints, and provides suggestions for reducing costs and increasing rigour.
A quality impact evaluation must: (a) define and measure project inputs, implementation processes, outputs, intended outcomes and impacts; (b) develop a sound counterfactual; (c) determine whether a project has contributed to the intended impacts and benefited the target population; (d) assess the distribution of benefits among the target population; (e) identify factors influencing the magnitude and distribution of the impacts; and (f) assess the sustainability of impacts.
The challenge of conducting evaluations under budget, time and data constraints is outlined below:
- The conventional evaluation design compares project and control groups both before and after the intervention. Simplifying this design can reduce costs and time, but involves trade-offs that affect evaluation quality and the validity of conclusions.
- Options for selecting comparison groups include matching areas, individuals or households on observables, pipeline sampling, regression discontinuity designs and propensity score matching (a matching sketch follows this list). Problems include self-selection bias and the difficulty of finding close matches.
- Secondary data can reduce costs, save time and help reconstruct baseline data and comparison groups, thereby strengthening counterfactuals.
- Post-intervention-only evaluations lack baseline data. Baseline conditions may be reconstructed from project and other records, respondent recall, Participatory Rapid Assessment (PRA) or key informants, and the information obtained from these sources should be compared systematically.
- Data collection costs can be reduced through fewer interviews, cost-sharing, shorter and simpler survey instruments, and electronic or community-level data collection. Interview costs can be reduced by using nurses, teachers or students as interviewers, or by using self-administered questionnaires.
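To make the matching options above concrete, the following is a minimal sketch of propensity score matching combined with a double-difference estimate, assuming household-level survey data with baseline and follow-up outcome measures. The column names, covariates, synthetic data and nearest-neighbour rule are illustrative assumptions, not the booklet's prescription.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_double_difference(df, covariates):
    """Match each project household to its nearest comparison household on the
    estimated propensity score, then average the double difference: the change
    in outcome for each project household minus the change for its match."""
    X = df[covariates].to_numpy()
    t = df["treated"].to_numpy()

    # 1. Estimate propensity scores P(participation | observables).
    scores = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

    treated = df[t == 1].assign(score=scores[t == 1])
    control = df[t == 0].assign(score=scores[t == 0])

    # 2. Nearest-neighbour matching (with replacement) on the propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(control[["score"]])
    _, idx = nn.kneighbors(treated[["score"]])
    matched = control.iloc[idx.ravel()]

    # 3. Double difference: gain of project households minus gain of their matches.
    gain_treated = treated["y_followup"].to_numpy() - treated["y_baseline"].to_numpy()
    gain_matched = matched["y_followup"].to_numpy() - matched["y_baseline"].to_numpy()
    return float(np.mean(gain_treated - gain_matched))

if __name__ == "__main__":
    # Synthetic, purely illustrative data: participation depends on observables.
    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({"household_size": rng.integers(1, 9, n),
                       "land_area": rng.gamma(2.0, 1.5, n)})
    p = 1 / (1 + np.exp(-(0.2 * df["household_size"] + 0.3 * df["land_area"] - 2)))
    df["treated"] = rng.binomial(1, p)
    df["y_baseline"] = 10 + 0.5 * df["land_area"] + rng.normal(0, 1, n)
    df["y_followup"] = df["y_baseline"] + 1 + 2.0 * df["treated"] + rng.normal(0, 1, n)
    print(psm_double_difference(df, ["household_size", "land_area"]))
```

Matching of this kind only controls for the observables included in the propensity model, so the self-selection problem noted above is reduced rather than eliminated.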
Whilst trade-offs between evaluation rigour and constrained resources are inevitable, there are a number of ways of minimising the effects of constraints on the validity of evaluation conclusions:
- Considering at the design stage the cost implications of techniques for addressing selection bias, such as instrumental variables (see the two-stage least squares sketch after this list), can strengthen overall evaluation quality. Addressing selection bias in project design is often more economical than correcting for it after implementation.
- Developing programme theory models can help identify the areas on which to concentrate resources. Peer review can help assess threats to validity when constraints must be addressed. Preparatory studies completed before the arrival of foreign consultants can help to address time constraints.
- When using smaller sample sizes, cost-effective mixed-method approaches can strengthen validity by providing multiple estimates of key indicators. Before reducing the sample size, statistical power analysis can confirm that the proposed samples remain large enough for the planned analysis (see the power calculation sketch after this list).
- Reconstructing baseline conditions and devoting sufficient time and resources to developing programme theory can strengthen theoretical frameworks and the validity of counterfactuals. Rapid assessment studies are cost-effective for developing programme theory models.
- Rapid assessment methods for identifying similarities and differences between project and comparison groups, together with contextual analysis of local factors, can strengthen the generalisability of conclusions.
- An evaluability assessment should be conducted to determine whether a quality impact assessment is possible. If not, the evaluation resources, time frame and objectives should be revised, or the evaluation cancelled.
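As an illustration of the instrumental variables approach mentioned above, the sketch below implements a basic two-stage least squares estimate. The variable names are hypothetical, and the method is only valid where a credible instrument is available, one that affects programme participation but does not affect outcomes directly.

```python
import numpy as np

def two_stage_least_squares(y, d, z, x):
    """y: outcome, d: participation (possibly self-selected),
    z: instrument(s), x: exogenous covariates; arrays of length n."""
    ones = np.ones(len(y))
    # First stage: predict participation from the instrument(s) and covariates.
    Z = np.column_stack([ones, z, x])
    d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
    # Second stage: regress the outcome on predicted participation and covariates.
    X2 = np.column_stack([ones, d_hat, x])
    beta = np.linalg.lstsq(X2, y, rcond=None)[0]
    return beta[1]  # coefficient on participation: the impact estimate

# Hypothetical use: distance to the programme office as the instrument.
# impact = two_stage_least_squares(income, participated, distance_to_office, age)
```

The sketch returns only the point estimate; in practice, standard errors and instrument strength would also need to be checked.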
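The statistical power analysis mentioned above can be run before fieldwork begins. The sketch below uses statsmodels to check whether a reduced sample can still detect an effect of a given size; the effect size, power target and significance level are chosen purely for illustration.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Smallest sample per group able to detect Cohen's d = 0.3 with 80% power at alpha = 0.05.
n_per_group = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8,
                                    ratio=1.0, alternative="two-sided")
print(f"required sample per group: {n_per_group:.0f}")

# Conversely: the power actually achieved if the budget limits each group to 120 units.
achieved = analysis.solve_power(effect_size=0.3, nobs1=120, alpha=0.05,
                                ratio=1.0, alternative="two-sided")
print(f"power with 120 per group: {achieved:.2f}")
```

If the achieved power falls well below the target, the options are to accept a larger minimum detectable effect, redesign the sampling strategy, or revisit the evaluation objectives, as the evaluability assessment above suggests.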
