Challenges
The following section draws substantially on OECD-DAC (2012, pp. 32-33), which identifies several challenges associated with the monitoring and evaluation (M&E) of security and justice programming, particularly in fragile and conflict-affected contexts.
A lack of stakeholder participation: Experience shows that when local actors participate in monitoring or evaluation processes, they are more likely to contribute to implementing the findings. There is often a perception amongst local actors that monitoring and evaluation reflects the priorities and objectives of donors rather than the needs of partner countries. Evaluation has tended to be led and managed by outsiders, with minimal contributions from citizens, civil society and governments. In such cases, local stakeholders become the objects of evaluations rather than key participants (OECD-DAC, 2007b, p. 242).
The risk of violence: Evaluators and the evaluated are often exposed to risks, with implications for the ability to collect material and data, recruit and retain local staff, meet interlocutors, publish findings, and disclose sources. The risk of harm may mean that information collected is biased, incomplete or censored (OECD-DAC, 2012).
Complex contexts and adapting programmes: It is difficult to identify what should be evaluated because change in complex contexts is multi-directional, highly unpredictable and marked by a general lack of information. Programmes often need to adapt to evolving environments, so implementation may differ from original plans and it can be difficult even to establish which activities are actually being implemented (OECD-DAC, 2012).
Multiple actors: There are often many actors, from political, security or developmental backgrounds, seeking to effect change. This adds further complexity and uncertainty (OECD-DAC, 2012).
Attribution: Attributing results (i.e. the causal link from interventions to outcomes) is difficult because of the fluidity and complexity of contexts and the often non-linear nature of change processes (OECD-DAC, 2012).
Data collection: Generating data in fragile and conflict-affected contexts is particularly challenging due to physical risks, restricted access to data sources and the risks of manipulation by those producing the data (SAS, 2013a). This is often compounded by weak statistical capabilities and the multiplicity of donors with incompatible data systems (OECD-DAC, 2012). Data collection in relation to VAWG is particularly difficult because survivors find it difficult to speak out and because of the need to ensure confidentiality and the safety of survivors (DFID, 2012c).
Intangible results: Results in security and justice programming are often intangible, such as changes in trust, attitudes and behaviours, or transformations in cultural and social norms, and such changes may rarely be achieved within the scope of individual programmes (Corlazzoli and White, 2013c).
Competing indicators: Security and justice actors are often measured against competing indicators formulated independently of one another by institutional leaders, ministries, multiple bilateral donors and international agencies. Those involved in front-line delivery can be resistant to externally formulated indicators that do not align with local-level ambitions and priorities. This may mean that indicators are not used effectively and that data is not collected accurately (Stone, 2011).
Approaches
Corlazzoli and White (2013b) argue that an explicit design process is essential for achieving desired outcomes and for developing rigorous M&E activities. Good design is based on (OECD-DAC, 2012; Corlazzoli & White, 2013b):
- Explicit and up-to-date conflict and context analysis. An understanding of changing dynamics contributes to flexibility, as the assumptions upon which programmes are based can be checked.
- Conflict sensitivity to ensure that the impacts of interventions on conflict dynamics, whether negative or positive, are taken into account. It also involves analysing the implications of conflict-affected contexts for programme design and implementation.
- Not being too ambitious and setting clear objectives that are specific, measurable, attainable, realistic and time-bound.
- Clearly articulating the ToC (see section 4.7).
- Mainstreaming gender (see sections 3.1 and 4.3).
Participatory monitoring and evaluation: Monitoring and evaluation that encourages learning and is not confrontational is an important way to overcome local stakeholder scepticism and negativity towards such processes. Demonstrating that the views of local actors can influence the design of donor programming can also lead to increased local ownership. A participatory process identifies core issues, produces agreement on those issues amongst local stakeholders, and finds evaluators who can work inclusively with local actors to set up frameworks for measuring results, proposing solutions to challenges, and promoting the implementation of recommendations (OECD-DAC, 2007b, p. 242).
Active and participatory indicators: Active indicators support local operational ambitions and involve local security and justice actors and civil society in their development. They describe outcomes in a common language understandable by those inside and outside institutions. They can be designed on the basis of participatory performance surveys or community scorecards, which ask citizens to rate the performance of security and justice providers. Such processes engage citizens in the process of data collection and in the control of standards embodied in the indicators (Stone, 2011).
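As a purely illustrative sketch, not drawn from Stone (2011) or the other sources cited here, the fragment below shows one way that ratings from a hypothetical community scorecard could be aggregated into a simple provider-level indicator; the provider names, scores and 1-5 rating scale are invented for demonstration.

```python
# Illustrative only: aggregating hypothetical community scorecard ratings
# (1-5 scale) of security and justice providers into a simple indicator.
from collections import defaultdict
from statistics import mean

# Each record pairs a provider with one citizen's rating on a 1-5 scale.
responses = [
    ("local police post", 3), ("local police post", 4), ("local police post", 2),
    ("magistrates' court", 4), ("magistrates' court", 5),
    ("community paralegal service", 5), ("community paralegal service", 4),
]

ratings = defaultdict(list)
for provider, score in responses:
    ratings[provider].append(score)

# Report the mean rating alongside the number of respondents, so providers
# with very few responses can be flagged rather than over-interpreted.
for provider, scores in sorted(ratings.items()):
    print(f"{provider}: mean rating {mean(scores):.1f} (n={len(scores)})")
```

Reporting respondent numbers alongside the average is one simple way to keep such an indicator transparent to the citizens and front-line providers who helped generate the data.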
Approaches to collecting data, especially in relation to VAWG, include providing more resources for local-level data collection and analysis, and greater support for building the capacity of local groups, civil society organisations and local state service providers (DFID, 2012c).
Tools and guidance
Guidance and best practice for design, monitoring and evaluation in fragile and conflict-affected contexts, including solutions to measurement challenges:
- Back to Basics: A Compilation of Best Practices in Design, Monitoring and Evaluation in Fragile and Conflict-affected Environments (see Corlazzoli and White, 2013b)
- Measuring the Un-Measurable: Solutions to Measurement Challenges in Fragile and Conflict-affected Environments (see Corlazzoli and White, 2013c)
- Evaluating Peacebuilding Activities in Settings of Conflict and Fragility: Improving Learning for Results (see OECD-DAC, 2012)
- DFID How to note: Results in Fragile and Conflict-Affected States and Situations (see DFID, 2012d)
- DFID How to note: Measuring and Using Results in Security and Justice Programmes (see DFID, 2010c)
- Guidance on Monitoring and Evaluation for Programming on Violence against Women and Girls (see DFID, 2012c)
- The OECD-DAC Handbook on Security System Reform. Section 8: Managing, monitoring, reviewing and evaluating SSR assistance programmes (see OECD-DAC, 2007b)
Measurement tools and guidance on data sources in fragile and conflict-affected states:
- Tools for measurement, monitoring and evaluation: Sources of conflict, crime and violence data (see SAS, 2013a)
- Tools for measurement, monitoring and evaluation: Making conflict, crime and violence data useable (see SAS, 2013b)
Guidance on active and participatory indicators:
- Problems of Power in the Design of Indicators of Safety and Justice in the Global South (see Stone, 2011)
Guidance on conflict sensitivity:
- Making the case for conflict sensitivity in security and justice sector reform programming (see Goldwyn, 2013)
- Monitoring and evaluating conflict sensitivity: Methodological challenges and practical solutions (see Goldwyn & Chigas, 2013)
References
- Corlazzoli, V., & White, J. (2013b). Back to Basics: A Compilation of Best Practices in Design, Monitoring and Evaluation in Fragile and Conflict-affected Environments. London: DFID / Search for Common Ground.
- Corlazzoli, V., & White, J. (2013c). Measuring the Un-Measurable: Solutions to Measurement Challenges in Fragile and Conflict-affected Environments. London: DFID / Search for Common Ground.
- DFID. (2010c). How to note: Measuring and Using Results in Security and Justice Programmes. London: DFID.
- DFID. (2012c). Guidance on Monitoring and Evaluation for Programming on Violence against Women and Girls. CHASE Guidance Note Series No. 3. London: DFID.
- DFID. (2012d). Results in Fragile and Conflict-Affected States and Situations. DFID How to note. London: DFID.
- Goldwyn, R. (2013). Making the case for conflict sensitivity in security and justice sector reform programming. London: DFID / Care.
- Goldwyn, R., & Chigas, D. (2013). Monitoring and evaluating conflict sensitivity: Methodological challenges and practical solutions. London: DFID / Care / CDA.
- OECD-DAC. (2007b). Handbook on Security System Reform: Supporting Security and Justice. Paris: OECD.
- OECD-DAC. (2012). Evaluating Peacebuilding Activities in Settings of Conflict and Fragility: Improving Learning for Results. Paris: OECD.
- SAS. (2013a). Tools for measurement, monitoring and evaluation: Sources of conflict, crime and violence data. London: DFID / Small Arms Survey.
- SAS. (2013b). Tools for measurement, monitoring and evaluation: Making conflict, crime and violence data useable. London: DFID / Small Arms Survey.
- Stone, C.E. (2011). Problems of Power in the Design of Indicators of Safety and Justice in the Global South. Cambridge, MA: Harvard Kennedy School.