Emergency WASH

M.3 Evaluation

Evaluation can be defined as the systematic and objective examination of humanitarian action to determine the worth or significance of an activity, policy or programme. It is intended to draw lessons to improve policy and practice and enhance accountability. Key evaluation criteria are:

Relevance: asks whether the programme is doing the right things, e.g. is the hygiene promotion (HP) programme meeting the needs according to the context? Does the programme target the right people in terms of geographical areas as well as vulnerabilities to WASH-related health risks?

Effectiveness: analyses whether the programme has achieved its objectives and intended results and examines the factors influencing the achievement of those objectives, e.g. has the HP programme achieved its behavioural objectives of increasing handwashing with soap at critical times? To what extent can these changes be attributed to the programme? If intended results did not occur – why not?

Efficiency: measures both quantitative and qualitative outputs in relation to the inputs, e.g. how efficient is the distribution method for hygiene items? Does this method make the best use of the resources available? Were there alternative options to improve access to hygiene items? 

Impact: examines whether there were significant or lasting changes resulting from the programme and whether they were intended or unintended, positive or negative, e.g. has the goal of the programme been achieved? Have there been any changes in public health? Has the programme made a real difference to the affected population? 

Sustainability: evaluates the extent to which the net benefits of the intervention will continue or are likely to continue, e.g. have people been supported to continue using, maintaining and repairing the water facilities? What behaviours have changed as a result of the intervention and how likely are these changes to last? Has local capacity been strengthened?

Coherence: considers how well the intervention fits with existing country plans and local priorities, e.g. does the programme align with Government policies such as with Ministry of Health community outreach systems? 

There are numerous reasons for undertaking evaluations, including to review innovations, gather evidence, demonstrate successes or challenges as part of a learning process M.6, assess value for money and be accountable M.4 to key stakeholders such as donors and, especially, the affected population.

There are different types of evaluations depending on the objectives. Some evaluations are carried out at, or after, the end of the programme and aim to provide accountability M.4 and influence future policy and practice. Real-time evaluations are carried out during the programme; they are interactive, involve multiple stakeholders and use the evaluator as a facilitator to generate an overview of the programme and provide immediate feedback so that issues can be addressed during the response. Any type of evaluation can be external and independent, conducted by an agency with the support of an external evaluator, or carried out by staff members. It may be appropriate to do joint evaluations in collaboration with other programme staff, partners and other organisations (e.g. within the WASH cluster) to minimise the duplication of resources P.9. Some evaluations have a strong focus on accountability to the affected population (M.4 and F.23), empowering them to play a key role in carrying out and contributing to the process in order to strengthen ownership of the programme and ensure that they are in a position to make use of the findings M.5.

Existing national standards, Sphere standards, the Core Humanitarian Standard and the Code of Conduct can be used as references to assess the quality of the programme in conjunction with the programme objectives and indicators.

Process & Good Practice

  • Budget for an evaluation in the HP programme. Calculate costs such as evaluators' fees, interpreters, logistics (e.g. transport and accommodation) and dissemination (e.g. printing, community meetings and workshops).

  • Clarify the purpose of the evaluation, the type of information needed and develop a Terms of Reference with a timeline and budget. 

  • Establish a formal baseline at the start of the programme to identify gaps in data and understanding and to serve as a point of comparison at the end of the project. Baselines can also feed into a broader programme evaluation.

  • Develop a Logframe (A.9 and T.25) with indicators to enable an evaluation of the inputs (resources used), activities (what was done), outputs (what was delivered), outcomes (what was achieved) and impact (long term changes). 

  • Match the evaluation methods to the requirements of the evaluation and ensure they are accessible to and inclusive of marginalised groups. Examples include Key Informant Interviews T.23, Observation T.28 and Transect Walks T.52, Pocket Chart Voting T.31, questionnaire-based surveys (T.24 and A.8) and Community Mapping T.7.

  • Develop indicators which are disaggregated by age, gender and disability. Depending on the objectives of the programme, they are likely to include:

    • Hygiene practices: e.g. hand washing, disposal of excreta, water handling and storage and indicators to assess whether there have been any changes in behaviour, community perceptions and motivators
    • WASH facilities: access, use of and acceptability of water supplies, latrines/toilets for different groups
    • Community satisfaction, engagement and participation
    • Hygiene promotion methods: monitoring the effectiveness, appropriateness and acceptability of community mobilisation methods such as community meetings, Theatre T.6, Household Visit T.18 or posters (IEC, T.19)
    • Health data: e.g. trends in diarrhoea morbidity (such data is influenced by numerous factors, not just WASH, and should be used with care)
  • Collect qualitative and quantitative data A.4 from different sources (triangulation), analyse it using appropriate methods and compile the findings into a report. 

  • Avoid the common pitfalls of evaluations, including:

    • Focusing on easy to reach geographic areas
    • Not collecting baseline (‘before intervening’) data
    • Not respecting data protection and/or putting participants at risk, e.g. in insecure areas
    • Neglecting consultation with less visible groups, e.g. women, older people and persons with disabilities
    • Ignoring seasonal or geographical WASH differences
    • Collecting too much or unnecessary information, which consumes time and resources and does not answer the evaluation questions
    • Focusing the evaluation merely on outputs, not considering outcomes, behaviour change and impact
    • Not widely sharing the results, so the information is lost and not used to adapt programming
    • Not informing the target group about the results of the evaluation
     
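The baseline-versus-endline indicator comparison described above can be sketched in a few lines of analysis code. This is an illustrative example only, not part of any standard toolkit: the survey structure, group labels and figures are all hypothetical, and a real evaluation would work from a much larger, properly sampled dataset.

```python
# Illustrative sketch: comparing a baseline and an endline survey on one
# indicator (observed handwashing with soap at critical times), disaggregated
# by group. All field names and figures below are hypothetical.

def coverage(records):
    """Percentage of respondents, per group, observed practising the behaviour."""
    totals, positives = {}, {}
    for group, practises in records:
        totals[group] = totals.get(group, 0) + 1
        if practises:
            positives[group] = positives.get(group, 0) + 1
    return {g: round(100 * positives.get(g, 0) / n, 1) for g, n in totals.items()}

# (group, practises_handwashing) pairs from the two survey rounds
baseline = [("women", True), ("women", False), ("men", False), ("men", False)]
endline  = [("women", True), ("women", True), ("men", True), ("men", False)]

before, after = coverage(baseline), coverage(endline)
for group in sorted(before):
    print(f"{group}: {before[group]}% -> {after[group]}% "
          f"({after[group] - before[group]:+.1f} percentage points)")
```

In practice such before/after comparisons would be triangulated with qualitative findings and checked for statistical significance; the snippet only illustrates the disaggregated structure an evaluation dataset needs in order to answer the effectiveness questions above.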

Purpose

To examine what the project achieved, whether it achieved its stated goal and what changes occurred as a result of the intervention in order to be accountable to stakeholders and learn lessons to improve subsequent programming.

Important

  • An evaluation looks at the overall changes which can be attributed to a WASH programme and examines the outcomes achieved, the relevance, efficiency and wider impact on people’s lives. 

  • Evaluations can produce recommendations to improve the programme (including capacity strengthening if needed) and capture learning to inform future policy and practice.

  • Evaluations are an important aspect of Accountability (M.4) and sharing and using evaluation findings encourages transparency and learning (M.6 to M.8) in the sector.

  • Evaluations must be carefully planned and as systematic and objective as possible.

  • As with any data collection, the safety of participants and data collectors must be protected, e.g. by anonymising data, collecting data remotely or taking protective measures during epidemics such as COVID-19 (e.g. maintaining physical distance, interviewing in the open air or using masks).

  • A monitoring and evaluation framework identifies the specific information required to provide evidence of change. It is good practice to include all partners and other actors when developing the framework and, where possible, carry out joint monitoring. 

  • The results of the evaluation must be shared in an appropriate format with all key stakeholders so that the findings can be discussed and applied, e.g. through workshops, reports, presentations and community meetings.

References

Definition and explanation of evaluations

Cosgrave, J., Buchanan-Smith, M. et al. (2016): Evaluation of Humanitarian Action Guide, ALNAP, ODI

Basic guideline covering key aspects of evaluation for Hygiene Promotion

Mooijman, A.M. (2003): Evaluation of Hygiene Promotion, DFID Resource Centre for Water, Sanitation and Health

Ferron, S., Morgan, J. et al. (2007): Hygiene Promotion. A Practical Manual for Relief and Development, Practical Action Publishing. ISBN: 978-1853396410

Practical guidelines evaluating hygiene at all stages of a programme; assessment, planning, evaluation

Almedom, A., Blumenthal, U. et al. (1997): Hygiene Evaluation Procedures. Approaches and Methods for Assessing Water- and Sanitation-Related Hygiene Practices, ODA, INFDC, LSHTM, UNICEF

Information on involving communities and using participatory tools for WASH evaluations

Narayan, D. (1993): Participatory Evaluation. Tools for Managing Change in Water and Sanitation, World Bank Technical Paper No. 207

Hygiene Promotion minimum standards and indicators

Sphere Association (2018): The Sphere Handbook: Humanitarian Charter and Minimum Standards in Humanitarian Response 4th Edition

Suggestions for multi-sectoral evaluations and links to field examples of evaluations

GWC (undated): Global WASH Cluster Coordination Toolkit (CTK), Global WASH Cluster Advisory and Strategic Team (GWC CAST)

Remote data collection and how to protect participants and data collectors

Majorin, F., Watson, J., et al. (2020): Summary Report on Remote Data Collection, COVID-19 Hygiene Hub

General and COVID-19 specific information and resources on monitoring and evaluation

Freeman, M., White, S. et al. (2020): Summary Report on General Principles for Monitoring and Evaluating Covid-19 Prevention Projects, COVID-19 Hygiene Hub

Majorin, F., Hasund Thorseth, A. et al. (2020): Summary Report on Remote Quantitative and Qualitative Approaches for Understanding Covid-19 related Behaviours and Perceptions, COVID-19 Hygiene Hub
