This paper explores how ICTs are helping to overcome common M&E challenges, including “real-world” challenges and methodological and conceptual challenges. It offers ideas on untested areas where ICTs could play a role in evaluation, and an in-depth discussion of some of the new challenges, problems and risks that arise when incorporating ICTs into the M&E process as a whole. Finally, it offers a checklist for thinking through the incorporation of ICTs into M&E.
Key findings:
ICTs are being used throughout the planning, monitoring and evaluation cycle, but there is little hard evidence of their effectiveness. There is a good deal of experimental use of ICTs in M&E, yet much of it is poorly documented in terms of its usefulness in overcoming operational and methodological challenges or in improving the M&E process. This lack of evidence can make it difficult to convince donors or management that a new tool is a worthwhile investment.
Some examples of how experimentation is happening in the various stages of the M&E cycle include:
- Diagnosis. ICTs are being used to bring new voices and broader participation into program diagnosis and to enable a wider range of input at reduced cost. They are also helping evaluators to manage large data sets and to identify possible trends within them.
- Planning. ICTs are being used to help achieve greater inclusion in planning processes. New technologies make it easier to compare and visualise data sets and to analyse data based on location so that resources can be better allocated. Data are also being aggregated more quickly and shared at various levels to improve participation in the planning process and to support better decisions. New software tools are being used to enhance the development and management of theories of change.
- Implementation and monitoring. ICTs are allowing for the collection of real-time data on participant experiences, behaviours and attitudes, meaning that analysis can be conducted early in the process and course corrections can be made to improve interventions and outcomes. Direct feedback from program participants is also being made possible through new ICTs, and it is assumed that this can help achieve greater transparency and accountability.
- Evaluation. ICTs can be integrated to increase the voice of vulnerable and underrepresented groups and broaden the types and volume of data being collected, combined, compared and analysed. New technologies may be able to help overcome challenges and constraints such as sample bias and poor data quality, and improve the understanding of complex sets of behaviour and data.
- Reporting, sharing and learning. ICTs are enabling wider circulation of evaluative learning, interactive sharing, and greater public engagement with evaluation findings.
A number of areas with potential have not been sufficiently explored. ICTs could plausibly play a role in improving the validity of methods such as sample selection strategies (for example, random routes), improving or reconstructing baselines, reducing sample bias, enhancing rating scales, supporting concept mapping, evaluating complex interventions and improving qualitative case study methods. Most of these ideas remain untested, however, and further work could be done to experiment with the ways ICTs might support these methods.
ICTs bring new challenges that evaluators need to prepare for and address. Some of these new challenges include:
- potential for selectivity bias when those who lack access to ICTs, or the capacity to use them, are left out
- potential for tool- or technology-driven M&E processes when M&E plans are adapted to ICT tools rather than ICT tools being selected because they help meet the needs of an M&E plan
- overreliance on digital tools, data and numerical indicators, which may lead to a loss of quality control, over-collection of data with little capacity to analyse or contextualise it, and the loss of the personal rapport and contextual understanding gained from project visits and face-to-face interviews when these are replaced with rapid, often remote, electronic data collection
- low institutional capacity and resistance to change, which are common challenges for organisations that lack the budget to fully train staff and integrate ICTs into their operations
- loss of privacy and increased risk for evaluation participants, which can result if data and privacy are not carefully protected, and if a thorough risk assessment is not conducted to plan for potential negative or unintended consequences.