Page contents
- The changing context of monitoring and evaluation
- Common tools and approaches
- Terms and definitions
- Where is a good place to start?
The changing context of monitoring and evaluation
The international context for M&E is changing, with increasing focus on measuring results and critically analysing aid effectiveness. Several international initiatives and agreements over the past decade have implications for development evaluation:
- The Millennium Development Goals now form the basis of progress indicators at the national level, and have meant much greater interagency cooperation around data collection.
- The Monterrey Consensus (2002) provided the main impetus for results-based monitoring and evaluation, promoting the allocation of aid on the basis of demonstrated effectiveness and results.
- The Paris Declaration (2005) committed OECD countries to increasing country ownership of programmes, encouraging donors to harmonise and align with country monitoring and results frameworks.
- This commitment was re-affirmed at the Accra High Level Forum on Aid Effectiveness (2008), which highlighted the measurement of development results.
A number of new trends have emerged during this period:
- From monitoring and evaluating project processes, inputs and outputs, to an emphasis on measuring results, outcomes and impact.
- From monitoring and evaluating projects, to a new emphasis on evaluating the combined effects of aid – in part prompted by the increase in multi-donor programmes and sector-wide approaches.
- From M&E being predominantly donor-led to increased interest in country-led approaches, with evaluations increasingly conducted in partnership with a broader range of stakeholders, including the programmes’ intended beneficiaries.
- An increased emphasis on attribution issues and a growing commitment to high quality analysis that is capable of demonstrating ‘value for money’.
- A growing interest in participatory approaches to M&E.
- A growing focus on measuring less tangible aspects of development such as governance and empowerment, and an increased use of mixed methods to capture complexity.
- A renewed emphasis on sharing and using the results of evaluations effectively.
While there has been tangible progress in recent years, such as a growth in resources committed to M&E and growing independence of evaluation bodies, several challenges remain. Current key issues include the question of how to improve joint working with other donors and how to ensure more coherent follow up and post-evaluation activities.
OECD, 2010, ‘Better Aid: Evaluation in Development Agencies’, OECD DAC Network on Development Evaluation, OECD Publishing, Paris
How is evaluation managed and resourced in development agencies? What are the major trends and challenges in development evaluation? This report reviews evaluation in 23 bilateral aid donors and seven multilateral development banks. It finds that significant progress has been made towards creating credible, independent evaluation processes. Challenges include the need to improve collaboration with other donors and country partners and to support the use of findings and take-up of recommendations. The use of evaluations needs to be systematised, for instance, by integrating the consultation of relevant reports into the planning process for new programmes or country strategies. Lessons from evaluations need to be better formulated and targeted to specific audiences in accessible and useable ways.
Access full text: available online
Common tools and approaches
Monitoring and evaluation can take place at the project, programme, sector or policy level. It is essential to develop an M&E framework – the strategy for planning and organising M&E activities – during the design phase of a project or programme. Designing the evaluation alongside the programme can also make it easier to establish a counterfactual through the use of control groups.
Most projects or programmes formulate a ‘logical framework’ (logframe), which sets out the hypothesised causal chain from inputs and activities through outputs to outcomes and impact. Nevertheless, there is no universal approach to designing or implementing M&E, and a wide range of tools are used.
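The causal chain a logical framework hypothesises can be sketched as follows (an invented illustration, not drawn from any of the sources cited here):

```
Inputs      →  Activities         →  Outputs              →  Outcomes                  →  Impact
funds,         training courses      500 farmers trained     improved practices           higher household
staff          delivered                                     adopted on farms             incomes
```

Monitoring typically tracks the left-hand end of this chain (inputs, activities, outputs), while evaluation is concerned with the outcomes and impact on the right.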
World Bank Operations Evaluation Department, 2004, ‘Monitoring and Evaluation: Some Tools, Methods and Approaches’, The World Bank, Washington DC, Second Edition
Monitoring and Evaluation (M&E) is an area of growing importance for the development community. It allows those involved in development activities to learn from experience, to achieve better results and to be more accountable. This report provides an overview of some of the M&E tools, methods and approaches on offer to development practitioners.
Access full text: available online
DFID, 2009, ‘Guidance on Using the Revised Logical Framework’, How to note, Department for International Development, London
Access full text: available online
Terms and definitions
Aid agencies use their own distinct terminology to describe their M&E systems and processes. Nevertheless, the OECD DAC’s definitions of key terms are widely accepted:
Organisation for Economic Co-operation and Development – Development Assistance Committee (OECD-DAC), 2002, ‘Glossary of Key Terms in Evaluation and Results-based Management’, Working Party on Aid Evaluation, OECD-DAC, Paris
The DAC Working Party on Aid Evaluation (WP-EV) has developed this glossary of key terms in evaluation and results-based management to clarify concepts and reduce the terminological confusion frequently encountered in these areas. With this publication, the WP-EV hopes to facilitate and improve dialogue and understanding among all those who are involved in development activities and their evaluation. It should serve as a valuable reference guide in evaluation training and in practical development work.
Access full text: available online
Monitoring
‘Monitoring’ is the ongoing, systematic collection of information to assess progress towards the achievement of objectives, outcomes and impacts. It can signal potential weaknesses in programme design, allowing adjustments to be made. It is vital for checking any changes (positive or negative) to the target group that may be resulting from programme activities. It is usually an internal management activity conducted by the implementing agency.
Evaluation
‘Evaluation’ is defined by the OECD-DAC (2002) as:
“The systematic and objective assessment of an on-going or completed project, programme or policy, its design, implementation and results. The aim is to determine the relevance and fulfilment of objectives, development efficiency, effectiveness, impact and sustainability.”
Evaluations can be periodic/formative – conducted to review progress, predict a project’s likely impact and highlight any necessary adjustments in project design; or terminal/summative – carried out at the end of a project to assess project performance and overall impact. They can be conducted internally, externally by independent consultants (particularly in the case of impact evaluations), or as a joint internal/external partnership.
Evaluations are independent, and are published in order to share lessons, build accountability, and investigate the theory and assumptions behind an intervention. A review, by contrast, tends to be undertaken by people involved in a policy or programme. It focuses on whether or not a programme has met its stated objectives, and is usually intended for internal use.
Impact
‘Impact’ is defined by the OECD-DAC (2002) as:
“Positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended.”
‘Impact’ also has a second, conceptually different definition: the difference made by an intervention. An impact evaluation is “an evaluation concerned with establishing the counterfactual, i.e. the difference the project made (how indicators behaved with the project compared to how they would have been without it)” (White 2006). The OECD-DAC defines a counterfactual as “the situation or condition which hypothetically may prevail for individuals, organizations, or groups were there no development intervention” (2002). The conceptual difference matters: technically speaking, an impact evaluation can measure the difference a project or programme made in the short or longer term, and for intermediate or final, direct or indirect outcomes. The two definitions are not necessarily at odds, however, and have been brought together, for example, in the NoNIE guidelines on impact evaluation (Leeuw and Vaessen, 2009).
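The counterfactual logic above can be illustrated numerically. The following is a minimal sketch with invented outcome data (not taken from any cited evaluation), comparing mean outcomes for programme participants against a comparison group standing in for the counterfactual:

```python
# Minimal sketch of a counterfactual impact estimate (difference in means).
# All outcome values below are invented for illustration only.

def impact_estimate(treated, comparison):
    """Estimated impact = mean outcome with the intervention
    minus mean outcome of the comparison (counterfactual) group."""
    def mean(values):
        return sum(values) / len(values)
    return mean(treated) - mean(comparison)

# Hypothetical household incomes after a livelihoods programme:
with_project = [120, 135, 110, 140]     # participants
without_project = [100, 105, 95, 100]   # comparison group

print(impact_estimate(with_project, without_project))  # → 26.25
```

In practice, a raw difference in means only measures impact if the comparison group is genuinely equivalent to the participants; establishing a credible counterfactual normally requires randomisation or careful matching.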
Where is a good place to start?
For an introduction to both the context and application of M&E in development, see:
Kusek, J., and Rist, R., 2004, ‘Ten Steps to a Results-based Monitoring and Evaluation System’, World Bank, Washington DC
Governments and organisations face increasing internal and external pressures to demonstrate accountability, transparency and results. Results-based monitoring and evaluation (M&E) systems are a powerful public management tool to achieve these objectives. This handbook presents a ten-step model that provides extensive detail on building, maintaining and sustaining a results-based M&E system.
Access full text: available online