
Monitoring and Evaluation

Last edited: December 20, 2019



Monitoring and evaluation refer to activities specifically developed to understand how an intervention or programme is being or has been implemented and what it has achieved. This can involve assessing the timeliness and/or quality of activities implemented, as well as the outputs, outcomes, and impact of a programme, along with its best practices and lessons learned.

 

Monitoring refers to the systematic and ongoing process of collecting, analyzing, and using information to track a programme’s progress toward reaching its objectives and to guide management decisions. This process tracks a programme’s performance over the course of its lifetime. Information is often collected on the frequency of an activity, the number of people the activity reached, whether the programme was successful, etc. In conflict and post-conflict settings, the Gender-Based Violence Information Management System (GBVIMS) is often utilized as a mechanism to collect standardized incident data, enabling post-conflict actors responding to GBV cases to analyze basic data reported by survivors.

 

Evaluation refers to the investigation of how well activities/interventions meet programme objectives and how achieved programme accomplishments compare with those expected. Different evaluation models can be used to measure different programme activities. The primary goal of a specific evaluation may be to examine the implementation, effectiveness, and/or efficiency of programme interventions.

 

 

Box 1: The GBVIMS

The GBVIMS is an inter-agency tool to standardize the collection of case management data, including creating standard classifications of VAWG cases for incident data across an emergency setting. The system also allows for safe sharing of sensitive de-identified data.

 

A mobile version of the system has been developed through the Protection-related Information Management project (Primero).

GBVIMS: http://www.gbvims.com/

Primero: http://www.primero.org/

 

Combined, monitoring and evaluation (M&E) represent the core of data collection and analysis for many operational NGOs working in conflict and post-conflict settings. The M&E design process helps programme implementers identify programme objectives, define parameters to measure the success of a programme, develop indicators to track progress, and identify appropriate data collection methods to track these processes. M&E can also help programme implementers conceptualize programme goals, facilitate the development of logic models such as causal pathways and logical frameworks, and clarify how the programme expects to create change within a population.

 

M&E for VAWG programmes can require additional thought and consideration compared to M&E for other, less sensitive programmes. The costs and benefits of collecting data on these subjects need to be assessed when designing an M&E system to minimize additional risks (further violence, re-traumatization, etc.) while still collecting data that ensure programmes are being appropriately implemented and are actually improving the lives of women and girls. Most M&E in conflict and post-conflict settings focuses on monitoring the performance of VAWG response programmes and risk mitigation. Less focus is placed on M&E for prevention programmes, though knowledge, attitude and practice (KAP) surveys and qualitative methods such as focus group discussions can be utilized to collect data in this space. Some common M&E tools and practices utilized by VAWG programmes in conflict and post-conflict settings, along with links to practical tools developed by the international community, include:

 

Risk Mitigation

  • Rapid assessments to understand the risks women and girls face
  • Use of safety audits and assessments of women's and girls' perceptions of safety

 

Response

  • Collection of incident data through the GBVIMS
  • Collection of data on activities and distributions of material support (e.g. hygiene kits, etc.)
  • Evaluation of the effectiveness of training health sector service providers to provide clinical management of rape (CMR) services
  • Collection of client satisfaction data for survivors accessing case management services

 

Prevention

 

Box 2: Logic Models and Causal Pathways

 

Monitoring, evaluation, and programme design tools such as logic models and causal pathways can help any programme designer conceptualize their pathways to change and measure the success of a programme.

To learn more about these tools, see the following resources:

HHS Logic Model Tip Sheet https://www.acf.hhs.gov/sites/default/files/fysb/prep-logic-model-ts.pdf

Community Tool Box: Developing a Logic Model or Theory of Change https://ctb.ku.edu/en/table-of-contents/overview/models-for-community-health-and-development/logic-model-development/main

W.K. Kellogg Foundation Logic Model Development Guide https://www.wkkf.org/resource-directory/resource/2006/02/wk-kellogg-foundation-logic-model-development-guide