In her article “More than just a pound of flesh?” Alice Welbourn remarked that researchers’ interests – and resulting findings – regarding care for women living with HIV may not always be truly relevant to the actual lives of the women concerned. She expanded on this theme during a session in the Women’s Networking Zone (WNZ) entitled “Beyond the evidence base: rights and justice for women”, arguing that findings from the grey literature on women’s own expressed needs must gain acceptance for orienting our thinking as scientists about how best to design interventions to benefit women living with HIV.
Her fellow panellists concurred that quantitative data should not be our only source of evidence, but they were concerned about the extent to which policy-makers and programme managers grant validity to qualitative studies or other data. Laura Ferguson noted the importance of identifying the purpose for which data will be used: statistical data may be required to assess drug efficacy, while other types of data can be used to document human rights violations in health care. Laura also commented that non-quantitative data collection, especially when it concerns sensitive issues, may be held to higher standards by those in charge of establishing health programme priorities and guidelines.
An issue that did not arise, but that definitely merits debate, is how we should cope with instances where policy-makers accept a lack of evidence in order to pursue actions that they believe will produce desired results. An example can be found with regard to new recommendations for women to receive antiretroviral therapy (ART) during the entire breastfeeding period in order to reduce vertical transmission of HIV. No data exist regarding the possible consequences for women’s own treatment effectiveness if they take ART for a limited perinatal period and then stop until they need it again to control their own HIV. Yet the new guidelines will be embraced by those who prioritise prevention of perinatal transmission.
The discussion on evidence continued at another session, “Planning action around the neglected sexual and reproductive rights of women living with HIV”. Participants in that discussion remarked that we should think not only about the purpose for which data are collected, but also about the target audience(s) to whom they will be presented. Medical personnel, for example, may not be receptive to hearing about why services should be provided or adapted to comply with women’s human rights; however, when they receive the same information couched in public health terms that address service availability, acceptability and uptake, they may be more willing to consider the evidence since it more easily fits their operational frameworks.
When some participants called for more use of women’s testimonies for advocacy, another discussant argued that policy-makers do not find anecdotal evidence reliable since anecdotes can be found to illustrate various viewpoints or situations. What was not pointed out, however, is the fact that policy-makers and advocates can also be very selective regarding the quantitative, peer-reviewed data that they consider reliable and valid. For example, whereas the World Health Organization and medical associations in various countries have conducted research reviews showing no evidence of associations between abortion and breast cancer, anti-abortion advocates cite other less rigorous studies to bolster their claims of this unproved connection.
A woman from Namibia remarked that politicians in her country argue that they need hard data on the prevalence and consequences of unsafe abortions in order to justify actions in line with implementation of the current abortion law (which permits pregnancy termination to preserve a woman’s life or health, and in cases of foetal malformation or rape), or amendments to that law to permit easier access to safe legal services. A colleague from Argentina, on the other hand, remarked that in countries of Latin America, excellent quantitative data exist on maternal morbidity and mortality due to unsafe abortion, yet politicians and policy-makers refuse to accept this evidence as a rationale for ensuring that women have more access to safe legal abortion.
It is these latter examples that should also be considered in our debates about what kinds of evidence we need. Some policy-makers and programme managers will consider different approaches to meeting women’s reproductive health needs upon receiving quantitative, qualitative or grey literature data, whether or not these are tailored to their particular interpretative frameworks. Others, however, will find no evidence of value if it does not coincide with the views that they already hold, or the views and interests that they perceive as important to the majority or most powerful of their supporters. We should consider that in some cases the only factors that truly influence policies and programmes are the demands made by those who vote policy-makers into office, and those who provide the funds.
In any event, assessments of how different types of evidence have led (or not) to women-centred care will be welcome additions to our continuing debates on evidence.