This paper proposes a framework for evaluating social protection programmes that (1) assesses a broader range of impacts, and (2) uses a broader range of methods in a more holistic way. It draws on a case study of an evaluation of a cash transfer pilot programme in Tigray, Ethiopia.
Most evaluations of social protection programmes have two limitations. Firstly, they measure a limited set of outcomes – typically poverty, income, assets, food security, education and health – that are positive and can be quantified in terms of measurable changes over time. Secondly, they use a limited range of methods, with a bias towards large-scale household surveys administered to beneficiaries and a control group before and after a programme is introduced, in order to measure statistically significant differences that can be attributed to the programme.
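To make the logic of this standard design concrete, the comparison it relies on is a difference-in-differences: the change observed among beneficiaries minus the change observed in the control group. The sketch below is a minimal illustration of that arithmetic only; all figures, variable names and the outcome used are invented for the example and do not come from the Tigray pilot.

```python
# Hypothetical illustration of the standard 'before/after, beneficiaries
# vs control group' design: a difference-in-differences calculation.
# All numbers are invented for illustration.

def diff_in_diff(treat_before, treat_after, control_before, control_after):
    """Return the difference-in-differences estimate of programme impact:
    the change among beneficiaries minus the change in the control group."""
    treated_change = treat_after - treat_before
    control_change = control_after - control_before
    return treated_change - control_change

# Invented example: mean monthly household food expenditure.
impact = diff_in_diff(treat_before=100.0, treat_after=130.0,
                      control_before=100.0, control_after=110.0)
print(impact)  # 20.0
```

Subtracting the control group's change nets out background trends (e.g. a good harvest year) that would otherwise be wrongly attributed to the programme, which is why the framework discussed here retains control groups even as it broadens the range of methods.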
The alternative framework proposed in Devereux et al. (2013) argues that impact evaluations should be extended in three ways:
- The first innovation is to recognise that intervention design choices and implementation processes can directly affect programme outcomes. For example, how cash transfers are spent, and their impact on key outcomes such as children’s nutrition, can be substantially different depending on whether the cash is delivered to a male household head or a woman in the same household.
- The second extension to standard evaluation approaches is to incorporate unintended impacts of programmes, both positive (e.g. empowerment of women) and negative (e.g. beneficiaries being stigmatised), as well as their effects on social relations, between individuals within households and between households within communities.
- The third proposed innovation is to incorporate two feedback loops. The first loop – ‘recursive causality’ – recognises that programmes have multiple impacts and that one impact can either reinforce or undermine others. The second feedback loop is the ‘deliberate learning’ loop, whereby results from the monitoring and evaluation are systematically fed back into improved programme design and delivery.
The lessons learned from the analysis in this paper include the following:
- The evaluation of social protection programmes needs to draw upon a broad range of methodologies and tools. This is necessary for triangulation and verification of findings as well as for providing depth and texture to the analysis.
- Research methods should include a mix of both structured and more exploratory approaches, particularly to explore and capture unforeseen side-effects and to assess the more complex and intangible impacts of social protection programmes.
- Evaluations should integrate qualitative and quantitative research from design through to analysis and write-up, allowing for more ‘cross-fertilisation’ between methods than possible in the case of rigid sequencing of data collection and analysis.
- The use of control groups and panels is vital across all tools and methods, and should not be exclusive to the collection of quantitative data. The experiences and opinions of those not participating in the programme can provide crucial insights.
- Data collection and research efforts should extend beyond the time span of the social protection programme.