Monitoring and evaluation are processes for collecting and assessing data to measure progress in efforts to end violence against women and girls. These processes evaluate the extent to which interventions and coordination are meeting the aims and objectives set for them. Although they are linked, monitoring and evaluation are distinct activities.
- Monitoring is an ongoing review of practices and processes and should be undertaken regularly as part of a coordinated response – some may do it at every meeting, others set aside time at periodic intervals.
- Evaluation is a systematic analysis of the impact and effectiveness of the policies, activities and partnerships within the coordinated response. It is most robust when undertaken by an independent person or organisation, such as a university or research body.
There are different types of evaluation, which are outlined in the box below.
Types of evaluations – process, outcome and impact

- Process evaluations document implementation processes, tracing whether the internal activities and the external context enabled the project to take shape as intended, and detailing stakeholder responses. They can also flag and provide early warning of any issues arising during implementation, making it possible to reflect on and correct them. The focus tends to be more on implementation and ongoing operation than on projected results or outcomes.
- Outcome evaluations measure whether or not programme objectives have been achieved. They can be used to assess changes in knowledge, behaviour, community norms, use of services and prevalence of violence against women, depending on the aims of a project. Data for this type of evaluation usually come from a special study and are collected periodically, not on a routine basis. That said, outcome evaluations also rely on projects collecting their own data in the form of inputs, outputs and outcomes. The goal is to show that the changes observed occurred as a result of the programme being implemented. To measure change, baseline data (i.e. data collected before the programme was implemented) should be available for comparison with data collected after implementation. Alternatively, change may be assessed using data from a comparison area where the programme was not implemented, or a combination of baseline and comparison data.
- Impact evaluations endeavour to show how much of the observed change can be attributed to the programme. These evaluations are harder to conduct and require specific study designs and technical expertise to measure the extent to which the observed change in the desired violence against women outcome can be attributed to the programme. For example, comparison or control groups may be used to show what would have happened if the programme had not been implemented. This type of evaluation also raises critical ethical considerations.
Adapted from Bloom, S. (2008) Violence Against Women and Girls: A Compendium of Monitoring and Evaluation Indicators, North Carolina: MEASURE Evaluation.
For more information, see the Monitoring & Evaluation section in Programming Essentials.