Finding balance: evaluating human rights work in complex environments

In human rights work, how and what you measure determines what kind of organisation you become. A contribution to the openGlobalRights debate on evaluation and human rights.

Tim Ludford and Clare Doube
6 August 2015

Recently on openGlobalRights, Emma Naughton and Kevin Kelpin argued that the evaluation of human rights progress requires a people-centred, light-touch approach that works with the grain of front-line staff’s work. This process, they suggest, must be flexible enough to account for the non-linear nature of human rights change, and it must find ways of including rights-holders. As Naughton and Kelpin rightly note, evaluating human rights work is about people.

Their conclusions echo much of our own thinking over the last few years within Amnesty International. Such conversations must, inevitably, begin with questions about what kind of impact we want to have and what kind of organisation we want to be. How and what results you measure has an effect on what kind of organisation you become. It is relatively easy, for example, to measure website hits, social media shares, or signatures on a petition. But it may be the actions that are harder or more expensive to measure comprehensively—such as the “hard yards” behind the scenes lobbying decision-makers—that are making the difference. Putting in place ways to measure and report on website hits, for instance, at the expense of other elements would encourage teams to see our work through that lens. This could change our sense of our own impact, the decisions we take, and the incentives that affect the future actions of staff.

We try to capture this nuanced, qualitative information in a central database, in which the teams running our projects can enter updates on a monthly basis. This information then informs biannual project reviews, in which project teams build on this data to reflect on how their projects are going and adjust strategy if required. However, we are currently preparing a new Monitoring, Evaluation and Learning (MEL) framework for our Strategic Goals 2016-2019, through which we will build on, expand, and deepen our ability to understand our impact.

Demotix/Theo Schneider (All rights reserved)

At Amnesty, like many human rights organisations, our currency is influence—on the media, on decision-makers at the national and international levels, and on other key players. We frequently achieve this through the actions of the activists who make up our movement, often in collaboration with the human rights defenders with whom we work. In turn, our actions are—or should be—influenced by all these actors. If we are to be the most effective organisation we can be, the newest iteration of our MEL framework must provide project teams with the “critical space”, tools and learning to help us understand that influence. It must also engage with the world outside Amnesty, and with how responsive we are to it.

Like our colleagues at Human Rights Watch, we want to emphasise learning, and in particular this implies thinking about how we help all staff to critically engage with their work. We must ensure that we generate the evidence to inform decision-making in our movement. This means being clear on what information we need to generate—where, how and when—as well as who needs to see it at what moment, so that we can make the best decisions to maximise our human rights impact.

But it also means ensuring we are able to tell the story to our supporters, to rights-holders and to the wider movement. Moving forward, we would also like to share more of the lessons we think we are learning with others, and to better integrate lessons learned externally.

At present these elements remain a challenge. Information recorded on the central database, for example, tends to be a record of outputs and positive impacts. And project reviews have tended to be discrete exercises rather than part of a wider framework that helps us learn about ourselves as an organisation and tell the story of our work.

We complement these processes by commissioning independent evaluations of strategically important projects, which help us engage stakeholders in a longer, more detailed reflection. These have also allowed us to explore different methodological approaches to assessing impact, such as outcome mapping, process tracing, outcome harvesting and “most significant change”. However, though extremely useful, such evaluations risk reinforcing rather than challenging the project-centric nature of our current work. Therefore, moving forward in the context of our new Strategic Goals, we are likely to renew our focus on generating learning that can transcend the project and inform the Amnesty movement as a whole (and beyond), and to do this within a framework that better defines what success means for us.

In a movement as large and diverse as Amnesty, too much happens for all the evidence of impact, or of where we lack impact, to be driven upwards. It is not enough to “want” to be a learning organisation. It is also necessary to think about how to sort through all this information so that it is used beyond those involved directly in individual projects. It is relatively easy to learn within a project team. It is learning across projects and programs that is hard. This means identifying who needs what information in their work, and using these strategic points to filter and analyse our evidence base.

Balancing this with the need to “tell our story” is not easy; contexts vary, and those seeking to incorporate learning need refined, specific information, whereas communicating our successes and failures outside the organisation must take a different, more general form. This is linked in part to another potential trade-off: keeping the burden on our project teams low, while holding ourselves accountable to the commitments represented by our Strategic Goals. Achieving human rights change requires many different interventions of many different types. If a project team were asked to analyse contributions at all levels across all actors, we would risk their time and energy being swallowed up, limiting their ability to carry out the work itself.

One solution is to focus their attention through specific learning questions that cut across our work but together reflect our organisational theory of change. This filters our evidence base through an analytical lens that we can use to learn, adapt and change, while also creating multiple threads that can be used to build a more communicable story.

Monitoring and evaluating human rights work is about balancing competing priorities. Just as defining what “impact” means in our work and how best to measure it is a big challenge, so too is learning how to manage these trade-offs within an organisation: between proving and improving, between accountability and learning, choosing neither one nor the other, but finding how to walk the line between them.
