I used to believe that certain evaluation challenges were unique to human rights. But while I was stuck in the Addis Ababa international airport, a fellow passenger asked what I did for a living. I explained that I help organizations know whether they make a difference, first for many years in a human rights organization, and now in international development. My fellow passenger then asked: “What’s the difference?”
The question was a good one. In fact, the work of many development organizations is closer to human rights work than we might think. For example, organizations like ActionAid have tried to transform power relations among development actors by using a human rights-based approach. This work is different from development interventions such as building schools or vaccination programmes. Instead, ActionAid supports communities in holding governments to account, often acting as a bridge between communities and governments, and raises awareness about rights and opportunities to participate in governance processes.
When it comes to challenges, there are several areas of overlap between evaluating human rights and evaluating international development. Dealing with complexity is one of them. Development interventions based on human rights-based approaches usually involve diverse actors, a range of indeterminate influences, and unpredictable results. Often, interventions are carried out with partner organizations, making it harder to know who influenced a change. As Muriel Asseraf recently noted in this debate, unintended results, whether positive or negative, can certainly occur and have long been a problem. Interventions to increase women’s earning potential in Papua New Guinea, for example, have been linked to increased domestic violence.

Flickr/U.S. Pacific Fleet (Some rights reserved)
An international NGO worker leads a clinic on family violence prevention in Bougainville, Papua New Guinea.
Identifying unintended results in evaluations can be technically difficult, and organizations can sometimes find it hard to learn uncomfortable lessons, whether they work on human rights or not. Evaluations tend to focus on the activities and outputs linked to interventions, and to exclude possible causes of other outcomes. Development organizations often concentrate on positive results, and it takes real willingness to learn lessons that challenge their own mission. Narrow forms of evaluation do not help. The key challenge is to develop forms of evaluation that go beyond the linear monitoring of predetermined indicators connected to interventions, and instead look more holistically at the potential outcomes of interventions. Indeed, a 2014 NORAD report found that fewer than 40% of the evaluation terms of reference for development projects funded by the Norwegian Government mentioned the need to assess unintended effects. The evaluators also found that references to unintended results varied by type of project: 63% of evaluations of governance projects considered unintended consequences, dropping to only 12% for climate change-related ones.
In development, as in human rights, there is also a need to engage with the tension between learning and accountability in evaluations. In principle, M&E professionals agree that evaluations should aim for both; indeed, there is no accountability without learning, and vice versa. In reality, however, many evaluations begin by asking whether they are for accountability or for learning. Moreover, evaluations tend to be motivated primarily by accountability to donors. Whether donors actually want this outcome, or whether it is simply a habit of organizations, is a subject for another article. It is fair to say, however, that donors’ requests about how to perform evaluations may prevent organizations from learning as much as they could.
The process of evaluating is itself as important as the evaluation results, even though the results often take priority. Take participation as an example. M&E specialists agree that participatory evaluation can be a valuable, if challenging, activity for both human rights and development organizations. Obstacles to participation go beyond physical restrictions on individuals (such as the incarceration of political prisoners). They also include social, economic, and political barriers, such as discrimination or reduced access to welfare, that can be as limiting as being behind bars. A participatory evaluation process can be empowering for those involved and can achieve useful outcomes.
For example, when I evaluated a campaign on domestic violence in Albania for a large human rights organization, I interviewed police officers, government officials, and development workers. But the breakthrough came when I spoke to women living in the country’s only domestic violence shelter at the time. One young woman told me: “Nobody asked us these questions before. Thank you.” When I asked a government official about this statement, she replied: “We already talk to NGOs, why should we talk to the women?” Yet there was a moment in this process when the young woman in the shelter felt she could contribute as more than just a victim. I wish I could have followed up with her on the longer-term impacts of this evaluation process, to understand whether that moment led to a further process of empowerment. To be clear, there are plenty of anecdotes about the empowering implications of evaluation. But it is hard to find widespread evidence of how the “process of evaluating” can help achieve an organization’s objectives.
There are already many areas of synergy between evaluations of human rights and of development. The results-based agenda advocated in this debate by Vincent Ploton, however, has given rise to further similarities and further challenges. Arguably, the results-based agenda has become the dominant paradigm in the evaluation world, and in some cases, possibly more so for development organizations, it is shaping the very essence of how evaluations are done. But this agenda makes those of us working at ActionAid uncomfortable. We find that it can focus excessively on predetermined indicators of impact, reducing the attention paid to the intricacies of shifting power among stakeholders and to the unintended consequences such shifts can generate. Moreover, this approach pays little attention to the empowering potential of the evaluation process itself. It can lead to objectifying individuals in the constant search for their “experienced outcome”.
Many organizations seem to be adopting the results-based agenda without discussing the reasons for doing so, or the implications of this approach. This is why, at ActionAid, we are currently talking to staff, external advisers, partners, and communities, among others, about the political implications of how we monitor and evaluate our work.
I strongly believe that the human rights evaluation community can learn from how development organizations promote and further the realisation of human rights. In turn, development organizations can certainly learn from the experience of human rights activists. Either way, we must be willing to look at both the positive and the negative impacts of our work. Together, our experience counts for more.
