Do humanitarian organisations successfully measure the effect of their work, and what determines the extent to which they do? If additional efforts are needed to better capture the effect of humanitarian action, which specific gaps should they address?
This report addresses these questions, based on a review of all indicators used by member organisations of the Consortium of British Humanitarian Agencies (CBHA): 1,680 indicators provided by 11 agencies. Each indicator was reviewed for quality, and the overall set was reviewed for gaps; basic statistical analysis of indicator and sectoral focus was also carried out. To complement this, an online questionnaire was shared with all 15 CBHA members.
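To make the method concrete, the sketch below illustrates the kind of basic tallying such an analysis involves. It is not the review's actual analysis code (none is published in the report); it assumes, hypothetically, that the 1,680 indicators were compiled into a flat CSV with invented columns such as `sector` and `level`.

```python
# Hypothetical sketch of the basic statistical analysis described above.
# Assumes the compiled indicator set lives in a CSV with (invented)
# columns "sector" and "level" (output / outcome / impact).
import csv
from collections import Counter

sector_counts: Counter = Counter()
level_counts: Counter = Counter()

with open("cbha_indicators.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        sector_counts[row["sector"].strip().lower()] += 1
        level_counts[row["level"].strip().lower()] += 1

total = sum(sector_counts.values())
print(f"Indicators reviewed: {total}")
for name, n in sector_counts.most_common():
    print(f"  sector {name:<20} {n:>5}  ({n / total:.1%})")
for name, n in level_counts.most_common():
    print(f"  level  {name:<20} {n:>5}  ({n / total:.1%})")
```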
Four key purposes of humanitarian monitoring and evaluation (M&E) systems were identified:
- determining whether interventions meet needs, as defined by needs assessments, theories of change and, primarily, beneficiary perspectives
- determining the effectiveness of interventions in relation to their original design objectives and plans
- establishing levels of inclusivity and coverage
- facilitating learning.
Impact indicators remain limited: the M&E frameworks that humanitarian organisations currently use do not give them the means to measure what they are most interested in, namely the impact of their work.
Even if the indicators used cannot be classified as impact indicators, do they still provide sufficient information to assess quality? The analysis found three main limitations: limited disaggregation by social variables; a lack of qualitative indicators; and a lack of attention to cross-cutting issues (e.g. protection and gender-based violence).
Overall, around 70% of the sample was found to be of reasonable quality in terms of the key elements of a good indicator: clear phrasing, the ability to measure a result statement, placement at the appropriate level in the results chain, and not combining an activity and an indicator in the same sentence.
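The sketch below shows how a per-indicator assessment against these four elements could be tallied into an overall share. It is illustrative only: the review itself was a manual expert assessment, and all names here are invented.

```python
# Illustrative sketch of tallying the four quality criteria named above.
# The review was a manual expert assessment; this merely shows the arithmetic.
from dataclasses import dataclass

@dataclass
class IndicatorReview:
    clearly_phrased: bool      # phrasing is clear and unambiguous
    measures_result: bool      # able to measure its result statement
    right_chain_level: bool    # sits at the appropriate level of the results chain
    no_mixed_activity: bool    # does not combine an activity and an indicator

    def reasonable_quality(self) -> bool:
        # An indicator counts as "reasonable quality" if it meets all four elements.
        return (self.clearly_phrased and self.measures_result
                and self.right_chain_level and self.no_mixed_activity)

def share_of_reasonable(reviews: list[IndicatorReview]) -> float:
    return sum(r.reasonable_quality() for r in reviews) / len(reviews)

# For the reviewed sample, this share came out at roughly 70%.
```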
The review identified a number of areas and issues on which humanitarian organisations are in agreement. The clearest, perhaps, is the shared intention and interest in measuring the impact of their work, and the simultaneous acknowledgement that current M&E systems do not facilitate impact measurement. At a time of increasing pressure for greater transparency and accountability, this is important to recognise, given its potential to positively influence the shape of future discussions.
The diversity of indicators, even within individual sectors and areas of work, was another important finding of the review. Organisations consistently rely on a wide range of (mostly) output indicators that are rarely disaggregated, which limits both the depth and the inter-agency comparability of the data collected. The review also found limited use of qualitative indicators and of beneficiary input, as well as information management systems that intrinsically limit the visualisation and use of data by field staff.
Agencies clearly design and redesign their internal indicators within their own organisations, networks and partnerships; comprehensive collective action is largely absent. Changes in language will be required, and this review highlights the need for a move towards a more common set of indicators. The goal is not to reinforce ‘upward accountability’, but to allow humanitarian agencies to benchmark their performance.
Greater adoption of common outcome and impact indicators is necessary, if not at the level of individual organisations, then certainly for evaluating collective humanitarian efforts. But these must sit within an M&E framework that also carefully assembles a range of other indicators to provide a more complete picture. It is the M&E model, rather than its indicators, that may ultimately define the capacity of humanitarian organisations to talk about their contributions to people’s lives.
Incentives are a significant part of this environment. Refocusing humanitarian M&E systems to produce more meaningful data has to be linked to a system-wide rethink of the incentives for providing such data.