How can development programmes that are thinking and working politically be assessed? This paper proposes that using a mixture of assessment methods, with an appreciation of the strengths and limitations of each, provides the most effective solution to the assessment challenge. It outlines a range of evaluation methods and illustrates their usefulness for different processes and interventions. It notes that single methods are more likely to privilege the voice and interests of particular groups and to capture only part of the developmental outcomes.
Political change is likely to be long-term, with intermediate steps that are important, but possibly intangible – such as changes in process, relationships and power. The context for such change is also likely to be dynamic and complex, making linear attribution between donor inputs and the results achieved even more difficult.
Quantitative methods such as randomised controlled trials can contribute to an understanding of the causal effects of a limited number of variants of an intervention over relatively short time periods. Qualitative methods such as outcome mapping or action-reflection approaches can help explain why change happened and how donor inputs influenced change or supported key actors in a given context.
Used together, with an understanding of the value of the data that they produce, quantitative and qualitative methods can provide a more comprehensive picture of the changes that have happened, and the reasons for those changes, across short, medium, and longer-term time frames. This is particularly the case for complex programmes: a mixed methods approach provides different perspectives that together facilitate a more rounded view of non-linear changes.
However, the application of mixed methods requires skill and care. Good quality performance assessment starts with asking the right questions, using an appropriate mix of good quality data collection methods, and then bringing diverse information together. Particular skill is required to effectively synthesise and analyse the data.
Some donors have been reluctant to use a mixed methods approach. The authors argue that this is partly because development practitioners are poorly informed about the various methodologies and their limitations. Further, evaluation is itself a political process: political interests, such as a focus on short-term results, can encourage an emphasis on single assessment methods.
Development agency staff can manage the methodological and political nature of monitoring and evaluation processes more effectively when they:
- Know the various purposes of their assessment systems, and how to ensure that all are attended to over time.
- Know the value of the information provided by particular methodologies, and the limitations inherent in any single methodological approach.
- Recognise the political nature of evaluative processes: by choosing single methods they are more likely to privilege the voice and interests of particular groups and focus on only part of the developmental outcomes.
- Are given incentives to assess the quality of assessment processes and are empowered to resist pressures to narrowly define ‘results’ and ‘value for money’. They need to be able to preserve space for longer-term, broader, and multiple processes of change, and for assessment systems that can capture the complexity of these processes.