How can we ensure that impact evaluation of complex programmes is rigorous and inclusive, and triggers reflection and learning about contributions, while also remaining feasible? This paper reflects on piloting a Participatory Impact Assessment and Learning Approach (PIALA) to assess the impacts of two pilot government livelihood programmes. It explores trade-offs in rigour, inclusiveness and feasibility encountered when impact evaluations address complexity and enable meaningful stakeholder engagement. The pilots suggest that these trade-offs can be reduced by building sufficient research and learning capacity, clearly defining quality standards and providing guidance for every step in the evaluation process.
IFAD and the Gates Foundation commissioned the development of PIALA to help IFAD and its partners rigorously and collaboratively assess, explain and debate their contributions to reducing rural poverty. It was piloted first in Vietnam and then in Ghana, on two livelihood-focused programmes. Both pilots used the same standards for framing and evaluation, and insights from the first pilot informed how the second iteration of PIALA developed.
PIALA responds to challenges in impact evaluation by:
- Using alternative routes to rigorous causal inference, rather than relying on clean control groups (which are rare) and quantitative estimates of impacts on ‘treated’ versus ‘control’ groups (which are often inadequate).
- Focusing on programme contributions and contextual factors that influence results, rather than measuring performance against pre-set targets.
- Employing a systematic approach that asks ‘why’ and ‘how’ something works, not just ‘what’, to enable understanding that may lead to transformational or systemic change.
- Engaging stakeholders through participatory methods to investigate contribution, and encourage wider learning and broader responsibility.
PIALA is constructed around five key design elements: a systemic Theory of Change (ToC); multi-stage sampling centred on open systems; participatory mixed methods; configurational analysis; and participatory sensemaking. The ToC is critical: it structures the whole evaluation and informs a broader understanding of the significance of its conclusions.
Some of the trade-offs between rigour, inclusiveness and feasibility reveal how:
- Time and budget limitations may reduce inclusiveness in the earlier stages, which can later hinder identification of the findings critical to the strategic direction of the programme.
- Smaller budgets may mean that either scope or scale needs to be downsized. A ‘full scope – limited scale’ design emphasises learning about the whole programme, while a ‘limited scope – full scale’ design emphasises learning about the effectiveness of selected programme mechanisms across the entire population.
- Strong facilitation skills are needed to minimise positive bias when field mobilisation is less feasible, and to ensure that collective sensemaking contributes to the final analysis and reporting.
- Further training, coaching and supervision of local research staff can reduce compromises to the breadth or depth of participatory research approaches, and can create a mutual learning partnership rather than a technical consultancy.
- The approach to configurational analysis may need adapting when classical counterfactual approaches cannot be applied.
These trade-offs are not absolute, but they require definition early on and guidance throughout the process. The paper recommends:
- Be clear about how each standard will be met. This means being methodologically and analytically thorough and careful, and being explicit about whose voices inform the findings.
- Agree with commissioners on which standards are essential to serve the purpose. This will ease decision-making when confronted with trade-offs between rigour, inclusiveness and feasibility.
- Anticipate trade-offs while searching for win-wins, and identify the critical competencies required to conduct the evaluation.