
Assessing impact in campaigns

Last edited: January 03, 2012

Impact denotes the “positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended” (OECD DAC, 2002). Impact is about long-term and lasting effects. The term is often used, confusingly, to designate shorter-term outcomes, such as the immediate response of a target audience to specific communication tools (e.g. measured through website hits and call-in rates). Such data are useful milestones – but the term “impact” should be reserved for longer-term effects.

Due to the multiplicity and complexity of factors contributing to VAW and its elimination, measuring the impact of campaigns can be difficult. In fact, some development practitioners argue it is impossible. It is a particular challenge in campaigns that aim to provoke change in values and behaviour that are deeply rooted in a society and its culture. Such change may be incremental and to some extent invisible to outside observers; it may take many years before a tangible transformation in society can be observed. In such campaigns, a single, brief evaluation exercise can only assess what outcomes the campaign has contributed to – which is a fair indicator of success.

Advanced research designs for impact assessment

To establish robust links between campaign exposure and impact in terms of attitude and behaviour change, the following methods can generate a reliable comparison between members of the target audience who have been exposed to the campaign and those who have not:

The experimental trial is widely considered the most accurate way to assess the impact of an intervention such as a communication campaign. Known in evaluation circles as the randomized controlled trial (RCT) and much used in scientific research, this method identifies the difference between what a campaign achieved and what would have been achieved without the campaign.


Example: The evaluation of a media campaign on reducing sexual violence in high schools could assess impact by conducting a formal experimental trial involving a control group and an experimental group. The evaluation could be conducted in two towns with similar socio-economic conditions. In each high school, 11th-grade males and females would complete an initial KAP (knowledge, attitudes and practices) survey on the topic (pre-test). After campaign exposure over a predetermined period of time at the “experimental” school (the students in the control high school would not be exposed to the campaign), 11th graders at both schools would complete the same survey again (post-test). Comparing the pre- and post-test results at the two schools allows certain conclusions to be drawn on the effectiveness of the campaign – bearing in mind that other factors not considered in the research design may have exerted an influence as well.

Source: Potter, S. (2008). Incorporating Evaluation into Media Campaign Design. Harrisburg, PA: VAWnet, a project of the National Resource Center on Domestic Violence/Pennsylvania Coalition Against Domestic Violence.


Experimental trials can produce reliable results if they are administered at a significant scale and with scientific rigour, which requires substantial resources in terms of time, skills and money. Ethical issues need to be taken into account as well – by definition, the control group will be excluded from campaign activities and any potential benefits involved.
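The arithmetic behind the pre-test/post-test design described above is a simple difference-in-differences calculation: the change observed at the exposed school, minus the change that occurred anyway at the control school. The sketch below illustrates this with entirely invented survey figures; the scores, schools and scale are hypothetical, not data from any actual evaluation.

```python
# Hypothetical mean KAP survey scores (e.g. % of 11th graders rejecting
# attitudes that condone sexual violence). All figures are invented.
experimental = {"pre": 42.0, "post": 61.0}  # school exposed to the campaign
control = {"pre": 44.0, "post": 47.0}       # school not exposed

# Change within each school between pre-test and post-test
change_exp = experimental["post"] - experimental["pre"]
change_ctrl = control["post"] - control["pre"]

# Difference-in-differences: the change at the exposed school beyond
# what happened without the campaign (other influences notwithstanding)
did = change_exp - change_ctrl
print(f"Estimated campaign effect: {did:.1f} percentage points")
```

The control-school change (here, 3 points) stands in for "what would have happened anyway", which is why the design needs two comparable sites rather than a single before/after measurement.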

There is a range of alternative methods to RCTs that are accepted as producing equally valid results while demanding less time and offering more flexibility:

  • Repeated Measures: The same measurement, e.g. the same questionnaire, is administered to the same individuals or groups at intervals, e.g. every six months, to verify how their response has evolved since the beginning and at different phases of the campaign.
  • Staged Implementation: If a campaign is rolled out in different phases with a substantial time lag, the evaluation can compare areas exposed to the campaign in its early stages to areas that have not yet been exposed.
  • Natural Variations in Treatment: In a large-scale campaign, implementation in some areas is bound to “fail” or not roll out exactly as intended. If these variations can be adequately tracked and measured, they can provide useful comparisons for impact assessment.
  • Self-Determination of Exposure: Some individuals in a targeted area will not be exposed to a campaign. For example, they might not have a television or listen to the radio or read the newspaper. These individuals can be invited to serve as a comparison group, e.g. by responding to the same questionnaire that is administered to individuals who have been exposed. (Adapted from Coffman, J., Harvard Family Research Project, 2002. Public Communication Campaign Evaluation)
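In its simplest form, the last approach (comparing respondents who self-report exposure with those who do not) reduces to comparing group means on the same survey item. The sketch below uses invented respondent records and an invented 1-to-5 attitude scale to show the comparison; it is an illustration of the logic, not a substitute for a properly sampled survey.

```python
# Hypothetical survey records: each respondent reports whether they recall
# the campaign ("exposed") and answers one attitude item scored 1-5
# (higher = stronger rejection of violence). All records are invented.
respondents = [
    {"exposed": True, "score": 4}, {"exposed": True, "score": 5},
    {"exposed": True, "score": 3}, {"exposed": False, "score": 3},
    {"exposed": False, "score": 2}, {"exposed": False, "score": 3},
]

def mean_score(group):
    """Average attitude score for a list of respondent records."""
    scores = [r["score"] for r in group]
    return sum(scores) / len(scores)

exposed = [r for r in respondents if r["exposed"]]
unexposed = [r for r in respondents if not r["exposed"]]

gap = mean_score(exposed) - mean_score(unexposed)
print(f"Exposed mean: {mean_score(exposed):.2f}, "
      f"unexposed mean: {mean_score(unexposed):.2f}, gap: {gap:.2f}")
```

Because exposure here is self-determined rather than randomly assigned, any gap can only suggest a contribution by the campaign; the unexposed group may differ from the exposed group in other ways as well.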

Attribution versus Contribution

It is difficult to measure precisely how and to what extent a campaign has impacted the target audience. The “how” can be inferred through more qualitative surveys, while the extent could be measured through randomized controlled trials (RCTs) (see above). Nonetheless, it will generally be extremely difficult or impossible for evaluations to precisely attribute change to the campaign, or to specific campaign activities. In most cases, it is more appropriate to focus on the contribution a campaign has made towards achieving its goal by producing its outcomes, acknowledging the multiplicity of factors that contribute to – or impede – change.

Cognitive variables such as knowledge and attitudes of the target audience can be measured, to a certain extent, as part of the baseline study, and then again during a later evaluation for comparison. If changes in these variables are observed, it is legitimate to assume that the campaign has contributed to these outcomes, even though it may be impossible to quantify impact. If no change or even negative change is observed, scanning the environment for factors external to the campaign that could have impeded goal attainment helps to determine whether failure is mainly attributable to the elements of the campaign or to external factors, e.g. a strong counter-campaign by a social movement that opposes gender equality.

In some cases, campaign evaluations have no access to baseline data and consequently rely on respondents’ retrospective self-assessment of change (e.g. through the Most Significant Change (MSC) technique) to evaluate whether exposure to the campaign has had any impact on the target audience. This can be an effective approach to assessing campaign effectiveness if no other methods are available.

“Ultimately there are the campaign goal(s) – the impact it seeks in changing the relations and structures of power that lead to gender violence. The changes it seeks occur in heterogeneous contexts, are indefinite in time, and depend on the actions and decisions of many more actors than the members of the campaign team. So, when there is a change that represents impact, who can assume credit for the change? Who is accountable for what changes (and does not change), and to whom and how? These problems of attribution and aggregation mean that a campaign at best will contribute indirectly and partially to impact.” - Ricardo Wilson-Grau, personal communication