How can the principles of optimal evaluation design be applied under real-world conditions with budget, time, data and political constraints? This paper, adapted from chapter 16 of RealWorld Evaluation: Working under Budget, Time, Data and Political Constraints, provides an overview of the RealWorld Evaluation (RWE) approach. It addresses these constraints through practical suggestions applicable to research in both developing and developed countries. Understanding the aims and purpose of the evaluation, as well as the local context, is critical.
RWE approaches can be applied at each stage of the design and implementation of a typical evaluation. This requires careful planning and design:
- Evaluators need a clear understanding of clients’ and stakeholders’ needs and expectations, and of the political environment of the evaluation. It is important to consider the power dynamics and relationships among the key players in the project. Philosophical or methodological orientations must also be recognised.
- Defining the explicit or implicit theory or logic model of the project to be evaluated is critical. This model should include the following phases: design, inputs, implementation process, outputs, outcomes, impacts and sustainability.
- The seven most widely used RWE designs range from longitudinal quasi-experimental designs, to pre-test–post-test designs without a control group, to simple end-of-project assessments with neither baseline nor comparison group. Less rigorous designs require techniques to cope with missing data.
- Further considerations include streamlining the evaluation design, identifying which analyses and comparisons are critical, and assessing the threats to validity and the adequacy of different designs.
- Evaluators should select the tools best suited to the needs of the client and the nature of the evaluation. In most situations the strongest and most robust evaluation design will probably combine both quantitative and qualitative approaches.
A number of steps can be taken to increase the effectiveness of an evaluation in real world situations:
- Evaluation designs can be strengthened by: basing the evaluation design on a programme theory model; complementing the results-based evaluation design with a process evaluation; incorporating contextual analysis; reconstructing baseline conditions to assess changes in key indicators over the life of the project; and triangulation, including obtaining the perspectives of a range of stakeholders.
- The evaluation team should include people with different experiences, skill sets and perspectives. External consultants should be limited to essential areas. At least one team member needs to be a content specialist. Data collectors can be resourced creatively to save costs.
- Data collection plans need to be simplified. Preparatory studies and reliable secondary data can save time and expense. Simple ways need to be found to collect data on sensitive topics and from difficult-to-reach populations.
- The analysis and report should focus on answering the key evaluation questions. Limitations in the validity of conclusions should be clearly acknowledged, with reference to the RealWorld Evaluation integrated checklist for assessing the adequacy and validity of quantitative, qualitative and mixed-method designs.
- Reports should be succinct, of direct practical utility to different stakeholders and communicated in appropriate ways to a range of audiences.
- A follow-up action plan should be developed with the client to help ensure that the findings and recommendations are put to good use.
