This paper argues that, to improve the practical benefit of real-time evaluation (RTE), humanitarian organizations need to be even more selective and modest in its use. Where RTE is used, it should prioritise endogenous learning within organizations over questions of accountability and control.
The study is based on a meta-evaluation of 44 RTE reports, using purposive sampling to select publicly accessible evaluation reports from influential aid agencies.
Key findings:
The analysis shows that the practice of RTE diverges from how the approach is typically characterized in theory. The main differences are:
- Organizations use RTE beyond the specific niche it was designed for;
- They expect RTE to assess results and impacts, going beyond its narrower intended focus on processes;
- Humanitarian actors apply RTE across all phases of the project life cycle, not only at the beginning of projects;
- The time required to conduct an RTE is longer than expected in the literature;
- Evaluators are still expected to provide external expert advice on operational issues;
- Humanitarian organizations use RTE for accountability purposes as well as to facilitate learning.
This points to a current practice in which RTE is not used to its full potential and, in some cases, is even misused for purposes it is ill-suited to. Most significantly, there is an imbalance between the light and agile setup described in theory and the comprehensive questions major organizations pose in their terms of reference. The scope is frequently very broad, amounting to wish lists of interesting aspects of an intervention, from sector-relevant results to longer-term impact. This is problematic for at least two reasons:
- First, RTE cannot provide solid analysis of results and impact with the simplified and quick set-up it usually entails. The knowledge it generates is neither as reliable nor as valid as the findings of regular evaluations or impact studies with more elaborate designs. Moreover, findings from RTE cannot be generalized beyond a fairly specific context.
- Second, as the more affordable option, RTE can crowd out full-fledged evaluation efforts, weakening the overall evaluation system in humanitarian assistance instead of merely supplementing it with a more agile approach.
Even when conducted in accordance with the theoretical descriptions of the approach, RTE has its limits. Accepting these limits would lead to a more modest and less formal approach: RTE should be seen as a helpful tool for organizational innovation and learning in complex environments rather than as a (humanitarian aid) evaluation methodology. The application of RTE therefore needs a number of corrections, and the development of future real-time evaluative approaches could benefit from the following suggestions:
- Place them in the right context and accept the limits;
- Do not use them to scrutinise implementing organisations;
- Use them as an in-house tool for organizational development and learning with staff on the ground;
- Trigger them by demand only;
- Harness relevant feedback from outside the organisation.