Human rights work covers a huge range of interventions, and evaluation approaches will vary according to the type of work being assessed. There may be forms of human rights work that have discrete, “trackable” interventions where a results-based management type approach would make sense. But, more often than not, human rights advocacy—as with any influencing intervention in complex social and political environments—does not involve straightforward results that are easy to quantify and attribute causes to.
As others in this debate have suggested, a results-based lens is a problematic starting point when trying to understand social change. Change processes are complicated and unpredictable. In the constellation of actors and factors operating in all directions to influence how policy and political processes play out, the role of even the biggest, most active NGO is likely to be relatively small. In such contexts, the risk is that a demand for predictable, quantifiable results may lead to a misrepresentation of reality.
It may also constrain and pervert priorities, as encapsulated in a quote from a human rights defender cited in a report looking at the monitoring and evaluation approaches of three human rights organisations:
“I spent eight years defending political prisoners. There was no hope of their release. I lost every case. What was my [observable] impact? Zero. Should I have done it? I haven’t found one person who says no.”
Crudely applied, a results-based approach leaves no space for this sort of “zero impact” activity.
Baulking at the direction in which a results-based approach appears to lead human rights evaluation, Emma Naughton and Kevin Kelpin, Brian Root, and Tim Ludford and Clare Doube accord central importance to learning. These authors emphasise the value of “learning through regular, daily evaluative thinking”, highlighting the importance of reserving space for reflection. This approach also includes measuring the tiny, messy steps that together constitute change. These are all points that we would endorse.
However, expressing fears about overly quantitative results-based evaluation—however well founded—elides but does not resolve the challenge of making a case for an organisation's contribution to change. Organisations also need to demonstrate accountability to the communities and human rights defenders with whom they work, as well as to funders, partners and supporters.
Accountability can take organisations into uncomfortable territory, full of pressures to make the intangible tangible and the complex simplistic. But rather than settling on narrow, results-based modes of operating, we argue that it is possible to fulfil legitimate accountability needs, while also favouring learning. One way to do this is to focus on the strategic plausibility that an organisation's work has had, or will have, a contributory effect in achieving change.
A plausible strategy is one that is coherent and defensible, that takes an expansive but realistic view of how changes come about and the role an individual organisation can have within wider processes. It tolerates uncertainty and accepts complexity rather than presuming a narrow, overly linear trajectory of change. It also sets out, in a credible way, how interim objectives truly constitute incremental steps towards the change that the organisation is seeking. It’s important, of course, to recognise and celebrate interim outcomes. But it is equally important to seek grounds for confidence that any such outcomes represent movement towards—and have a good prospect of translating into—significant and lasting change in people’s lives.
In evaluation, attention to plausibility means interrogating and assessing the explicit—and often implicit—strategic logic of change that lies behind action. In assessing whether and to what degree there is a plausible connection between its efforts and human rights change, an organisation should ask: what does the concrete evidence reveal about what actually happened and why? What are the perspectives of different actors involved about the balance of influences at play? What do we understand about the motivations of, and pressures on, decision makers and what does that tell us about likely factors explaining the end result? What can we learn from comparative examples?
Human rights organisations put a high premium on “solidarity” actions, for example. For good reasons, public demonstrations that support human rights defenders and/or expose human rights violators are considered a part of the influencing repertoire. This is particularly true for international human rights organisations supporting local human rights defenders. Solidarity can have a positive place in human rights work, protecting, supporting and emboldening those on the front line. But although organisations express frustration at the lack of possible evaluation methods for such interventions, the real problem may sometimes lie with the faulty or limited strategic logic behind the approach. At times, a goal of “solidarity” may be used as a get-out clause of “at least we did something”, or as a way to fulfil supporter engagement objectives without obviously having a rationale beyond this. Strategic evaluation can oblige organisations to justify the value that solidarity has, to surface the plausibility of the logic underlying the strategy and approach, and to clarify when and where it is meaningful to take a stand and put opposition on record.
As a more straightforward influencing example: a human rights group focused on corporate accountability supported workers within the supply chain of a multinational corporation who were calling for improved working conditions. These demands were eventually met. A review of the situation found that the ongoing relationship between the NGO and the multinational corporation was a critical factor in achieving this result. Other possible explanations were examined, but the evidence pointed to a plausible contributory link between the efforts of the human rights group and the outcome.
Resisting the temptation of the simple truths promised by results-based evaluation means accepting that evaluations may remain inherently uncertain. But just because an assessment of strategic plausibility is tentative does not mean that it lacks rigour.
This sort of evaluation may raise as many questions as it provides answers, but it gets closer to important truths than simple assessment of results. In doing so, this method offers a way to derive a stronger sense of accountability from an evaluation, alongside the benefits accrued in terms of learning.