Although evaluations are often undertaken at the request of donors or as part of donor-driven proposals, is that the only reason to conduct them? As Emma Naughton and Kevin Kelpin point out, while quantifiable results may matter to donors, the assessment of the journey matters equally, if not more, to those doing the work. And in the end, isn’t that work really being done for the beneficiaries, not the donors?
Beneficiaries (or, more widely, the communities affected by our work, or their representatives) have a right to know what we tried to do on their behalf and whether we made a difference. If yes, what brought that about? And if no (or not yet), why not? Should we have approached things differently? Beneficiaries clearly have less power than donors, which makes it feel as though all the pressure for evaluation comes from donors; indeed, project evaluations do not always involve beneficiaries and are not always shared with them. But what happens when we have an entire evaluation and decision-making process that doesn’t include the very people we are trying to help?
A second important audience is made up of activists, staff, board members, members, volunteers, our peers and colleagues, and the wider human rights community. We, too, want to know if we made a real difference. We want to know which methods work, under which circumstances, and why. We need this information to intervene effectively in the future and to know that the risks have been worthwhile. Evaluations can feel risky to people working on human rights, because such processes question whether we have done the right things in the right ways. This can be demoralising to staff who have worked hard “at the coalface” and who may, at times, lose sight of the bigger picture.
In the worst-case scenario, evaluations can show that time and efforts have not made any difference (yet). But this knowledge of non-progress is just as important, and both processes and final results count in human rights work. Even though evaluation audiences may not always agree on what to measure, and although long-term social changes may not be achieved at the end of a specific project, small steps along the way can provide useful indications of progress.
Of course, donors and all those who provide support have a right to know what we did with the resources they provided. They also want to know if we have made a difference at all to the situation we were targeting. Almost all donors need evaluations to satisfy the needs of other audiences (beneficiaries, staff, etc.) and many will enter into an intelligent and constructive discussion about different evaluation methods that fit the programme and its constraints.
Although all three of these audiences are interested in what difference has been made, donors are more interested in concrete measurable results and proven successes. Within human rights organisations, for our own learning, we may be as interested in why things worked, or not, as in proving exactly how much they worked. We may also be more patient, being willing to work for ten years to see a real change emerge.
Flickr/Rainforest Action Network (Some rights reserved)
The human rights community "should embrace evaluation for our beneficiaries, on whose behalf we all work."
Beneficiaries’ evaluation interests, on the other hand, vary greatly. Some are concerned only that their issue has been voiced and heard. Others want to know that everything that can be done is being done to resolve or respond to the ongoing human rights abuses affecting them. Great strides have been made in participatory evaluation methods, which involve the beneficiaries of programmes in gathering and analysing feedback about them. As previously noted in this debate, however, many human rights contexts are more difficult than the development contexts where these methods have mostly been applied. The individuals or groups on whose behalf human rights work takes place may be in incommunicado detention, or at daily risk of physical attack or disappearance. Those working in governance to hold officials accountable for corruption face similar risks and threats. And people working on gender issues who gather the views of women experiencing domestic violence or rape confront very similar challenges.
So if the human rights community is still resistant to evaluation done solely for donors’ benefit, we should embrace evaluation for our beneficiaries, on whose behalf we all work. Human rights work is not done just to meet an international norm, system or standard. Human rights are valuable, ultimately, because they make people’s lives a little better in myriad ways. We should be creative in finding ways to identify beneficiaries, or those who could speak for them. We should also survey those in our intervention areas who did not benefit, as well as those who did, to make sure that we are not unknowingly leaving groups out or organising things in ways that leave some people feeling unable to participate. We should evaluate so we can learn what works and why, not just to put ticks on a list.
Finally, we should also consider evaluating the unintended impact of our work. What changes occurred that we did not anticipate, or that we did not want, or that someone else did not want? If we really want to evaluate our work, these are the tough questions that everyone needs to be asking.
Much as these audiences’ needs may differ, Minority Rights Group’s experience has been that if we lead the process, donors will accept our evaluations even if ours is not the method they would have chosen. Although they may have specific requirements, it is rare for donors to specify a particular method. This means that organisations can add beneficiary participation and learning elements to an evaluation that the donor requires only to validate results. Donors rarely cut or query evaluation costs, so evaluation plans can be ambitious and work intensive. Beyond this, many donors (e.g., SIDA) will ask an organisation to designate focus areas for an evaluation and to design its own evaluation methodology; they will simply want to confirm that they are happy with it. And even donors with rules will vary those rules when it is clear that the nature of the work does not fit the kind of evaluation that donor is used to seeing. We, the human rights organisations, need to pick up this baton and run with it: if we don’t start leading the evaluation process, donors’ upwards accountability demands will fill the vacuum. If that happens, everyone loses; we will be left with methods developed by and for others that won’t improve our work or help our beneficiaries.