Conflict-affected and fragile environments can change quickly. How can you ensure your programme is flexible enough to adapt? This practical guide focuses on the key elements of programme design in fragile and conflict-affected environments, and on the steps needed to ensure that effective monitoring and evaluation processes are built in from the start. It offers guidance and best practices for designing and implementing monitoring and evaluation systems.
It is meant to be a compilation of existing best practices, critical information, practical tips and further resources for all stages in programme cycles operating in fragile and conflict-affected environments.
Key findings:
- Design, Monitoring and Evaluation (DM&E) of peace & conflict and security & justice programmes poses numerous challenges. Best practices, though long established and accepted at the institutional and field-wide levels, are often lacking or missing altogether in programme practice. Conflict-affected and fragile environments can change quickly. This makes it all the more important that programme design is flexible, thoroughly thought through, and able to adapt to a changing environment. To achieve this, design and implementation require sound conflict or context analysis, a conflict-sensitive approach, and the mainstreaming of gender. The design hierarchy contains four main levels, each functioning as a logical building block for the structure of a programme: (1) Impact (commonly referred to as a goal), (2) Objectives, (3) Outputs, and (4) Activities.
- Robust monitoring practices that go beyond collecting data against indicators must be put in place. This includes revising indicators, logframes and theories of change if the context and conflict have changed. Robust monitoring strategies may also include monitoring for implementation, learning, quality programming, risk and value for money.
- Rigorous evaluation that balances accountability and learning is all the more essential given the still-maturing evidence base of ‘what works under what conditions’ and the need to demonstrate quality, impactful programming for both upward and downward accountability. The first step towards a good evaluation is to conduct an evaluability assessment, which determines the extent to which an activity or programme can be evaluated in a reliable and credible fashion. An evaluability assessment can be used to clarify programme logic in order to: 1) improve programme implementation, and 2) determine the feasibility of an evaluation. Rigorous evaluations can only take place if they are well budgeted and planned. This includes holding detailed discussions with key stakeholders on the scope and purpose of the evaluation, including evaluation criteria and lines of inquiry.
- The OECD-DAC's standardised criteria for evaluating international development assistance in conflict-affected and fragile settings are:
  - Relevance: the extent to which the objectives and activities of the intervention(s) respond to the needs of the beneficiaries and the peacebuilding process;
  - Effectiveness: the extent to which the intervention has met its intended objectives with respect to its immediate peacebuilding environment, or is likely to do so;
  - Efficiency: value for money;
  - Impact: the wider effects, positive or negative, produced directly or indirectly, intentionally or unintentionally;
  - Sustainability: the continuation of benefits after the end of assistance.