Although coordinated responses to violence against women have been developing for at least three decades, the evidence base remains relatively weak. There are limited data on their effectiveness, and no consensus about which indicators are most appropriate for measuring what works in a response. Methodologists have raised questions about the failure to make comparisons with a control group or community (Klevens & Cox, 2008). In addition, some measures of ‘success’ may be unrealistic and therefore unlikely to produce significant findings. For example, some studies (Visher et al., 2008) have tried to measure whether women’s sense of safety increased at two and nine months after their case was closed, which may be too short a period in cases of intimate partner violence, given that post-separation violence is common in the first 12 months (Klevens & Cox, 2008).
The lack of a clear overall evaluative approach is partly because the structures of coordinated responses are so specific to local contexts and needs, and can vary widely depending on the particular form of violence against women being addressed. Yet conducting thorough and well-planned evaluations can only add to the overall knowledge base on coordinated responses and to the development of suitable frameworks for evaluating them.
To date, the success of coordinated responses has often been measured in terms of the achievement of the coordination itself, for example the setting up of a coordination body, rather than in terms of outcomes. Yet the existence of a coordinated response is not in itself evidence of improvement (Worden, 2001). Coordination refers both to the process of building a network of professionals and sectors and to the more holistic response to victims/survivors this aims to produce. A range of evaluation methods should therefore be used to assess whether the agreements and protocols of the coordinated response have led to actual improvements in multi-agency working and to better outcomes for victims/survivors.
Regular monitoring and evaluation will also highlight any issues in the implementation of programme objectives, making it possible to adjust practices or protocols. Staffing, resources and access are examples of areas that may require modification if gaps are revealed. Funders often require periodic progress reports, which should include monitoring and/or evaluation results.
As far as possible, the results of monitoring and evaluation should be made public, with due care taken to preserve the confidentiality of service users. This is important for ensuring transparency, but it is also essential to maintaining a dialogue with the communities, stakeholders and women and girls that coordinated responses are established to serve. Publicising this information can help build buy-in by acknowledging and celebrating achievements, and it also conveys the message that services are self-aware, responsive and open to self-scrutiny and learning.
Selected monitoring and evaluation results can be made public through:
- annual reports;
- research and evaluation reports; and
- websites and other online media.