Monitoring is a form of regular self-observation. It is used to highlight what is not working well or has gone wrong, and conversely what is working well, such as whether agreed protocols are being followed. Coordinated responses and the agencies participating in them can monitor their coordination practices through:
- annual collection and collation of statistics;
- weekly or monthly case tracking;
- court monitoring;
- victim/survivor feedback surveys;
- one-off surveys/studies of specific problems; and
- case analysis or review.
Because coordinated responses involve multiple sectors and operate at multiple levels, evaluating their effectiveness can seem a daunting task. Each of the agencies involved will have different internal priorities and objectives, and there are likely to be methodological issues in comparing different institutional systems.
Evaluations of coordinated responses have tended to adopt two main approaches:
- Measuring the impact of the individual components of a coordinated response, such as criminal justice agents or victim/survivor advocates; or
- Measuring the system-wide coordinated response.
A variety of data sources may be used to measure the effectiveness of the overall response, including:
- criminal justice statistics, such as reporting rates, prosecution rates, convictions and attrition;
- health data;
- interviews or focus groups with victims/survivors on their experiences with the services and agencies;
- observations of interventions provided;
- interviews with professionals on what is working and what the challenges are; and
- administering standardised tests, often involving control data (Shepard, 1999).
Where established, shared information systems, such as case or service user tracking databases, can provide valuable data about the overall response, and should minimise some of the methodological issues because the data is standardised.
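As a minimal illustration of how standardised records from a shared case-tracking system might be collated into monitoring indicators, consider the sketch below. The field names, categories and indicators are assumptions chosen for illustration only, not a prescribed schema or an existing system.

```python
# Illustrative sketch: collating standardised case-tracking records into
# simple monitoring indicators. All fields and categories are hypothetical.
from collections import Counter
from dataclasses import dataclass
from datetime import date


@dataclass
class CaseRecord:
    case_id: str
    date_reported: date
    referring_agency: str      # e.g. police, health service, NGO
    protocol_followed: bool    # was the agreed referral protocol applied?
    outcome: str               # e.g. "ongoing", "prosecuted", "withdrawn"


def monitoring_summary(records: list[CaseRecord]) -> dict:
    """Collate basic indicators from standardised case records."""
    total = len(records)
    return {
        "total_cases": total,
        "protocol_compliance_rate": (
            sum(r.protocol_followed for r in records) / total if total else None
        ),
        "cases_by_agency": dict(Counter(r.referring_agency for r in records)),
        "cases_by_outcome": dict(Counter(r.outcome for r in records)),
    }
```

Because every agency records the same fields in the same way, indicators such as protocol compliance can be compared across partners without the methodological issues that arise when each agency keeps data in its own format.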
For some measures it will be necessary to collect baseline data in order to track trends before and after the establishment of the coordinated response, or before and after a particular intervention. For some examples, see the box below:
| Programme objective | Data sources |
| --- | --- |
| Reduce incidence of violence against women | Incidence of violence against women before and after implementation of the coordinated response |
| Increase victim/survivor confidence in the criminal justice system | Feedback from victims/survivors (surveys, questionnaires) |
| Increase reporting | Reporting rates before and after implementation |
| Increase arrest, prosecution and sanction of perpetrators | Arrest, prosecution and conviction rates before and after implementation |
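As a simple illustration of a before-and-after comparison against baseline data, the sketch below computes the relative change in a reporting rate following implementation. The figures and period labels are hypothetical.

```python
# Illustrative before/after comparison for one programme objective
# (increased reporting). Values and labels are hypothetical.
def percentage_change(baseline: float, follow_up: float) -> float:
    """Relative change from baseline, expressed as a percentage."""
    return (follow_up - baseline) / baseline * 100


reports_per_100000 = {
    "baseline (year before coordinated response)": 42.0,
    "follow-up (year after implementation)": 55.0,
}

baseline, follow_up = reports_per_100000.values()
print(f"Change in reporting rate: {percentage_change(baseline, follow_up):+.1f}%")
```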
Process evaluations are also important, as they can help to show which components make an effective response, such as regular meetings, active involvement and strong leadership (Shepard, 1999). They can also reveal underlying issues or challenges:
- whether there is consistent ‘buy-in’ from all the partners, or whether some are lagging behind;
- whether there is consensus about key elements of the coordinated response, such as definitions, philosophy, protocols and information sharing;
- whether there are power agendas within/between partner agencies;
- which forms of violence against women are being included/excluded; or
- the influence of external factors – such as availability of funding/resources – on the ability of the coordinated response to function effectively.
National Coordination and Governance of Coordinated Response

| Elements | Quality Guidelines |
| --- | --- |
| Monitoring & evaluation (M&E) of coordination at national & local levels | Create and implement minimum standards for M&E for national & local levels. Minimum standards should include: |
| | Provide review & feedback of monitoring results to policymakers & local multi-disciplinary response teams |
| | Practice transparency while protecting victims & survivors' confidentiality & avoiding possible increased risk (see also Create consistent systems element, above) |