There has been renewed interest in impact evaluation in recent years amongst development agencies and donors. Additional attention was drawn to the issue by a Center for Global Development (CGD) report calling for more rigorous impact evaluations, where ‘rigorous’ was taken to mean studies that tackle the selection bias aspect of the attribution problem. This argument was not universally well received in the development community, in part because of the mistaken belief that supporters of rigorous impact evaluations were pushing for an approach based solely on randomised control trials (RCTs).
While ‘randomisers’ have appeared to gain the upper hand in many of the debates, the CGD report in fact recognises a range of approaches, and the entity set up as a result of its efforts, 3ie, is moving even more strongly towards mixed methods. Other work underway on ‘measuring results’ and ‘using numbers’ recognises the need to find standard indicators which capture non-material impacts and which are sensitive to social difference. This work also stresses the importance of supplementing standard indicators with narrative that can capture those dimensions of poverty that are harder to measure.
This paper contributes to the ongoing debate on ‘more and better’ impact evaluations by highlighting experience of combining qualitative and quantitative methods for impact evaluation, to ensure that we measure the differential impacts of donor interventions on different groups of people, as well as the different dimensions of poverty, particularly those that are not readily quantified but which poor people themselves identify as important, such as dignity, respect, security and power. The paper also covers the use of the research process itself as a way of increasing the accountability and empowerment of the poor.
This paper defines and reviews the case for combining qualitative and quantitative approaches to impact evaluation. An important principle that emerges in this discussion is that of equity, or ‘equality of difference’. By promoting various forms of mixing, we move methodological discussion away from a norm in development research in which qualitative research plays ‘second fiddle’ to conventional empiricist investigation. This means, for example, that contextual studies should not be used simply to confirm or ‘window dress’ the findings of non-contextual surveys. Instead, they should play a more rigorous role in observing and evaluating impacts, even replacing, where appropriate, large-scale and lengthy surveys that can ‘overgenerate’ information too late to be of use to policy audiences.
The paper finds that the case for qualitative and combined methods is strong. Qualitative methods stand on an equal footing in impact evaluation and can generate sophisticated, robust and timely data and analysis. Combining qualitative research with quantitative instruments that offer greater breadth of coverage and generalisability can produce impact evaluations that make the most of the comparative advantages of each.
Operational staff need sufficient knowledge to weigh the methodological options available to them when identifying and designing impact evaluations. Perhaps even more significantly, they need to consider the political economy risks and opportunities for impact evaluation in any given policy context. Managing these risks and opportunities effectively will mean embedding the impact evaluation in a policy process that is locally owned, inclusive and sustainable.