Monitoring (Process, Formative, Mid-term Evaluation)
Monitoring is a form of evaluation or assessment but, unlike outcome or impact evaluation, it takes place shortly after an intervention has begun (formative evaluation), throughout the course of the intervention (process evaluation), or midway through it (mid-term evaluation).
Monitoring is not an end in itself. It allows programmes to determine what is and is not working well, so that adjustments can be made along the way, and to assess what is actually happening against what was planned.
Monitoring allows programmes to:
- Implement remedial measures to get programmes back on track and remain accountable for the results they aim to achieve.
- Determine how funds should be distributed across the programme activities.
- Collect information that can be used in the evaluation process.
When monitoring activities are not carried out directly by the programme's decision-makers, it is crucial that the findings are coordinated and fed back to them.
Information from monitoring activities can also be disseminated to groups outside the organization, which promotes transparency and provides an opportunity to obtain feedback from key stakeholders.
There are no standard monitoring tools and methods; these vary according to the type of intervention and the objectives outlined in the programme. Examples of monitoring methods include:
- Activity monitoring reports
- Record reviews from service provision (e.g. police reports, case records, health intake forms and records, others)
- Exit interviews with clients (survivors)
- Qualitative techniques to measure the attitudes, knowledge, skills, behaviour and experiences of survivors, service providers, perpetrators and others targeted by the intervention.
- Statistical reviews from administrative databases (e.g. in the health, justice and interior sectors, shelters, social welfare offices and others)
- Other quantitative techniques.
Outcome Evaluation
Outcome evaluations measure programme results or outcomes, which can be short-term or long-term.
- For example, in a programme to strengthen health sector response to cases of violence against women, a short-term outcome may be the use of standardized protocols and procedures by practitioners in a health facility.
- A long-term outcome may be the sector- and system-wide integration of those protocols and procedures.
- It is important to be very clear from the beginning of a project or intervention about the expected objectives and outcomes, and to identify what specific changes are expected for which specific population.
Impact Evaluation
Impact evaluation measures the difference between what happened with the programme and what would have happened without it. It answers the question, “How much (if any) of the change observed in the target population occurred because of the programme or intervention?”
Rigorous research designs are needed for this level of evaluation. It is the most complex and intensive type of evaluation, incorporating methods such as random selection and the use of control and comparison groups.
These methods serve to:
- Establish causal links or relationships between the activities carried out and the desired outcomes.
- Identify and isolate any external factors that may influence the desired outcomes.
For example, an impact evaluation of an initiative aimed at preventing sexual assaults on women and girls in town x through infrastructural improvements (lighting, more visible walkways, etc.) might also look at data from a comparison community (town y) to assess whether reductions in the number of assaults seen at the end of the programme could be attributed to those improvements. The aim is to isolate other factors that might have influenced the reduction in assaults, such as training for police or new legislation.
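To illustrate the random-assignment element of such designs, here is a minimal sketch in Python, assuming a purely hypothetical list of candidate communities (the names, the even split and the fixed seed below are illustrative only, not part of any actual study protocol):

```python
import random

# Hypothetical candidate communities eligible for the programme
communities = ["town_a", "town_b", "town_c", "town_d", "town_e", "town_f"]

random.seed(42)              # fixed seed so the assignment is reproducible
random.shuffle(communities)  # randomize the order of the communities

midpoint = len(communities) // 2
intervention_group = communities[:midpoint]  # receive the programme
comparison_group = communities[midpoint:]    # serve as the comparison

print("Intervention:", intervention_group)
print("Comparison:  ", comparison_group)
```

Randomizing which communities receive the intervention helps ensure that external factors are, on average, balanced across the two groups, which is what allows differences observed at the end of the programme to be attributed to the intervention itself.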
While impact evaluations may be considered the “gold standard” for monitoring and evaluation, they are challenging and may not be feasible for many reasons, including:
- They require a significant amount of resources and time, which many organizations may not have.
- To be done properly, they also require the collection of data following specific statistical methodology, over a period of time, from a range of control and intervention groups, which may be difficult for some organizations.
Impact evaluations may not always be called for, and may not be appropriate for the needs of many programmes and interventions looking to monitor and evaluate their activities.
- To measure programme impact, an evaluation is typically conducted at the start (known as a baseline) and again at the end (known as an endline) of a programme. Measurements are also collected from a control group with similar characteristics to the target population, but that does not receive the intervention, so that the two can be compared.
- Attributing changes in outcomes to a particular intervention requires one to rule out all other possible explanations and control for all external or confounding factors that may account for the results.
For example, an evaluation of the impact of a campaign to raise awareness of the provisions of a recently enacted law on violence against women would need to incorporate:
- baseline data on awareness of the law’s provisions prior to the campaign for the intervention group;
- endline data on awareness of the law’s provisions after the campaign for the intervention group;
- baseline data on awareness of the law’s provisions prior to the campaign for a closely matched control group not exposed to the campaign; and
- endline data on awareness of the law’s provisions after the campaign for a closely matched control group not exposed to the campaign.
Endline data from the control group allows the programme to see whether external or additional factors influenced the level of awareness among those not exposed to the campaign. If the study design does not involve a randomly assigned control group, it is not possible to make a definitive statement regarding any differences in outcome between areas with the programme and areas without it.
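Taken together, these four data points support a simple difference-in-differences estimate of the campaign's impact. The notation and figures below are an illustrative sketch, not drawn from any actual campaign:

```latex
\text{Impact} \approx (E_I - B_I) - (E_C - B_C)
```

Here B_I and E_I denote baseline and endline awareness in the intervention group, and B_C and E_C the corresponding figures for the control group. If, for instance, awareness rose from 20% to 60% where the campaign ran but only from 20% to 30% in the control area, the estimated impact would be (60 − 20) − (30 − 20) = 30 percentage points.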
However, if statistically rigorous baseline studies with randomly assigned control groups cannot be conducted, very useful and valid baseline and endline information can still be collected.
Evaluation requires technical expertise and training. If the programme does not maintain this capacity in-house, external evaluators should be hired to assist.
- Guidance Note on Developing Terms of Reference (ToR) for Evaluations (UNIFEM, 2009). Available in English.
Once an evaluation is completed, a comprehensive report should be drafted to document the programme intervention’s results and findings.
- Guidance: Quality Criteria for Evaluation Reports (UNIFEM, 2009). Available in English.
The evaluation report (or a summary of the report, where appropriate) should be disseminated to staff, donors and other stakeholders.
- Guidance Note on Developing an Evaluation Dissemination Strategy (UNIFEM, 2009). Available in English.
Illustrative monitoring and evaluation reports:
Combating Violence against Women: Stocktaking Study on the Measures and Actions Taken in Council of Europe Member States (Council of Europe, 2006). Available in English and French.
For additional monitoring and evaluation reports by sector, see the relevant sector-specific sections.
Additional Resources:
M&E Fundamentals: A Self-Guided Minicourse (Frankel and Gage/MEASURE Evaluation, 2007). Available in English.
Monitoring and Evaluating Gender-based Violence Prevention and Mitigation Programs (USAID, MEASURE Evaluation and Inter-agency Gender Working Group). The PowerPoint presentation and handouts are available in English.
Monitoring and Evaluating Gender-Based Violence: A Technical Seminar Recognizing the 2008 '16 Days of Activism' (Inter-agency Gender Working Group/USAID, 2008). Presentations available in English.
Sexual and Intimate Partner Violence Prevention Programmes Evaluation Guide (Centers for Disease Control and Prevention). The guide presents information for planning and conducting evaluations; information on linking programme goals, objectives, activities, outcomes, and evaluation strategies; sources and techniques for data gathering; and tips on analyzing and interpreting the data collected and sharing the results. It is available for purchase in English.
A Practical Guide to Evaluating Domestic Violence Coordinating Councils (Allen and Hagen/National Resource Center on Domestic Violence, 2003). Available in English.
Building Data Systems for Monitoring and Responding to Violence Against Women (Centers for Disease Control and Prevention, 2000). Available in English.
Sexual Violence Surveillance: Uniform Definitions and Recommended Data Elements (Centers for Disease Control and Prevention, 2002). Available in English.
Using Mystery Clients: A Guide to Using Mystery Clients for Evaluation Input (Pathfinder, 2006). Available in English.
A Place to Start: A Resource Kit for Preventing Sexual Violence (Sexual Violence Prevention Programme of the Minnesota Department of Health). Evaluation tools available: Community Assessment Planning Tool; Evaluation Planning Tool; Opinions About Sexual Assault; Client Satisfaction Survey; Participant Feedback Form; Teacher/Staff Evaluation of School Presentation; and Program Dropout Form.
National Online Resource Center on Violence Against Women Evaluation page.
Gender Equality and Human Rights Responsive Evaluation (UN Women, 2010). Available in English. See also the UN Women online guide to gender equality and human rights responsive evaluation in English, French and Spanish.
Putting the IPPF Monitoring and Evaluation Policy into Practice: A Handbook on Collecting, Analyzing and Utilizing Data for Improved Performance (International Planned Parenthood Federation, 2009). Available in English.