Evaluation assesses the merit or worth of the campaign. It brings together monitoring data and findings from additional research to assess the effectiveness, or likely effectiveness (in formative evaluation), of a campaign and its different elements. Ideally, an appropriate baseline assessment and well-documented, regular monitoring form the basis for rigorous evaluation. Where such a basis has not been built, participatory approaches that explore stakeholders’ memories of activities, outcomes and challenges encountered are particularly important.
Evaluation can start with the very first campaign planning steps, i.e. the formative research needed to devise an appropriate campaign strategy. Mid-term, “real-time” or “developmental” evaluations, carried out while the campaign is running, include a strong formative element: a key purpose is to learn from previous campaign phases so as to improve the following ones or develop innovative approaches. Summative evaluations, conducted after the campaign ends, focus on campaign outcomes and impact.
Internal versus external
An evaluation can be conducted by the campaign team/alliance (self-evaluation or internal evaluation) or commissioned from an external actor. Both types of evaluation should involve campaign stakeholders, i.e. be conducted in a participatory manner, so as to obtain as comprehensive and accurate a picture of the realities of the campaign as possible.
- A self-evaluation can be seen as a way of learning and improving practice. It takes substantial capacity for open self-reflection to do this effectively, so it may often be beneficial to call in an outsider to facilitate the internal evaluation. Formative evaluations are often carried out by the campaign team itself.
- In an external evaluation, usually most appropriate for summative evaluations, an outside individual or team is chosen to carry out the evaluation. This can be a research institute or an experienced consultant with the knowledge and capacity to apply advanced techniques and handle more complex evaluation questions. The external evaluator should not have any direct stake in the campaign objectives, but should be familiar with the topic and with the ethical issues involved in researching VAW.
TOOLS:
‘What we know about…Evaluation Planning’ from the US Centers for Disease Control and Prevention (CDC) is a quick summary of what evaluation is and how to do it, using examples from a VAW campaign conducted in Western Australia.
Conducting a Participatory Evaluation from USAID is a tip-sheet on how to conduct an evaluation that actively involves all those with a stake in the program.