As the advocacy efforts of human rights organizations have become more complex and arguably more sophisticated, important technical issues have arisen concerning the evaluation of those interventions. In fact, the particular requirements of evaluating advocacy campaigns have contributed to a wave of innovation in the field. A diverse menu of methodological options is now at the disposal of even small to medium-sized human rights organizations.
This innovation has been productive but has tended to focus the evaluation discussion, even within the human rights sector, on a series of technical and operational issues. The choice among evaluation approaches is not, however, a purely technical issue.
Organizations develop remarkably resilient cultural patterns, or patterns of social interaction in the work process. For example, since the work is never really done, in some organizations social pressure develops for staff to work very long hours at a frenetic pace. In others, it might be an organizational norm to carefully document all aspects of work undertaken, even if it means that other work is not completed. Evaluation methods that conflict with those patterns—no matter how technically appropriate—fail to yield the desired results. How an organization does its work must therefore be taken into account in the design of assessment approaches, the choice of external evaluators, and the way in which any exercise is implemented.
Similarly, relations of power—gender, class, race, ethnicity, etc.—exist and persist among the various actors on the human rights supply chain (or value chain, as some would prefer). These relations—by their nature, political—exist inside every organization, and among the various types of organizations that come into contact through human rights advocacy (e.g., donor organizations and grantees, professional NGOs and community organizations, women’s organizations and mixed gender organizations, etc.). These power relations add a political dimension to the choice of evaluation methodologies, as well as decisions about how such methods are implemented.
Therefore, both organizational culture and the political nature of the power relations surrounding any assessment process influence the way that process unfolds. Reducing the evaluation process to choice from a menu of “methods” risks obscuring the importance of these cultural and political factors.
Brian Root of Human Rights Watch emphasizes that his organization is searching for an evaluation approach that is consistent with its unique culture, arguing that, “Complex and burdensome [evaluation] processes simply will not succeed for [the] organization. We are dispensing with the idea that evaluation necessitates some specific level of rigorous documentation.” From this understanding emerges a call for “low-burden self-reflection for evaluation.” It remains to be seen if such an approach can enable the sort of organizational learning upon which Root appropriately places great emphasis.
According to Tim Ludford and Clare Doube, Amnesty International has identified some of the same cultural tensions in its work. The idea of “seeking balance” is at the center of Amnesty’s response to these challenges. They must balance the need to carefully process and analyze various types of information about organizational successes and failures, with an urgent need to “tell the story” of their impact in a straightforward, timely and compelling way. Similarly, they must balance another potentially difficult tradeoff: keeping the burden on their project teams low, while still holding themselves accountable to their own strategic goals and commitments.
This recurring notion of managing the “burden” of serious assessment suggests that in addition to being participatory, rigorous, holistic and evidence-based, successful human rights evaluation must also be sustainable. That is, it must be possible to carry out the exercise with a reasonable allocation of available financial and human resources. Sustainable assessment may look quite different for a well-funded international NGO than for the grassroots women’s organization with which it works in El Salvador. These differences must be a point of discussion in any evaluation process involving both organizations.
In her reflection on the experiences of Minority Rights Group (MRG), Claire Thomas points out that beneficiaries have far less power than donors, “which makes it feel like all the pressure for evaluation comes from donors”. Thomas references one of the most evident power relations in a sector that has come to rely heavily on external donations to fuel its activities.
Unequal relations of power can create risk in evaluation processes, especially for those whose work is being evaluated. Anyone who has been involved in evaluation for any length of time has seen processes that have, in some way, negatively affected some participant in the assessment. Such risk cannot be eliminated, but awareness of the relations at play can improve the design and the implementation of any given assessment—make it more participatory, for example—to the point that the risk to participants in the exercise is greatly reduced.
While this oGR exchange was underway, ActionAid circulated terms of reference for a consultant to help facilitate a so-called “Monitoring and Evaluation Political Debate.” According to the concept note accompanying the RFP, the explicit purpose of the debate was for the organization to fully examine the political implications of evaluation choices made in a previous strategic planning process.
Regardless of the impact of this particular debate, the very fact that organizational resources are being devoted to such a discussion is an indication that more than just technical issues are under consideration. And this in turn is a hopeful sign for the future of human rights evaluation.