The methods for evaluating influencing programmes are not well developed, and carrying out such evaluations is considered challenging. Projects often use multiple approaches to influencing, which demand different approaches to evaluation, and strong evaluations often use multiple methodologies to triangulate their findings. While there are many publicly available evaluations, the methodologies are often not clearly stated and are rarely critiqued. As a result, there is little evidence about the effectiveness of evaluation techniques to measure influence.
There is an adequate body of literature discussing theoretical approaches to evaluating influencing work; however, there is debate about whether these tools can, in practice, adequately assess influencing projects. A small but growing number of critical evaluations can be found in peer-reviewed journals.
Key findings
- There are four main types of influencing (evidence and advice, advocacy and campaigning, lobbying and negotiating, and soft power), each of which requires a different method of evaluation. Evaluators therefore need to use multiple methods to critically analyse and triangulate findings.
- A large body of research states that the tools available for measuring influence do not support rigour or replicable findings, and that results are difficult to compare across organisations.
- A preference for quantitative metrics tends to lead to a focus on activities and outputs and to attempts to quantify qualitative information – methods which, it is argued, do not robustly document impact (Coe & Majot, 2013).
- Some experts argue that the present tools are adequate but that they are not applied rigorously.
- Organisations that have found a process for measuring their influencing work tend to focus their monitoring and learning on success stories, maintain a clear theory of change, and use anecdotal evidence to assess contribution.
- There are few examples of evaluations that reflect on the effectiveness of the evaluation process itself.