Last week ourBeeb posted a piece from the Carbon Brief blog complaining that, when it comes to climate science, the BBC continually pits believers against sceptics. Carbon Brief also critiqued a Radio 5 call-in show on climate change and the weather. Both pieces mentioned last year's BBC Trust report on the impartiality and accuracy of science coverage, chastising the BBC for not following its recommendations.
I worked on the empirical research which accompanied the BBC Trust science report. Here's an updated version of a post I wrote for my personal blog when the report was published last year, which I hope gives a bit more detail.
Like the Carbon Brief posts, a lot of the news coverage of the review focused on so-called 'false balance': the perceived problem of giving too much weight to fringe views in an (arguably misguided) attempt to avoid pushing one particular point of view. Just as a political piece might 'balance' itself with perspectives from left and right wings, a science story might, for example, include someone who is against vaccinations as well as someone who advocates them.
Our study was based on a slightly more diverse set of questions than simply balance, though, as we tried to gain a detailed empirical grip on these slippery issues of impartiality and accuracy. As a result, I think it threw up a rather more complex set of challenges. We also looked at more than just case studies that trended on Twitter, with a sample running across the summers of 2009 and 2010. It's easy to get annoyed by a bad example. A systematic content analysis can provide a more nuanced picture. That's not to say individual examples of bad practice don't matter, and there were all sorts of limits to our sample which mean we will have missed important cases. I personally have a lot of sympathy with those who found last week's Radio 5 programme frustrating. I also think there are specific concerns around the reporting of climate science which can get lost when you look at something as broad as science as a whole. Still, good science communication is more than simply broadcasting more scientific voices.
One of the key questions our work asked was: who is given a voice in science news? What’s their expertise, where do they come from and – arguably most importantly – what are we told about this sort of context? Less than 10% of broadcast items in our sample included comment from those presented as having no specialist professional knowledge of relevance to the issue. Moreover, lay voices tended not to be run alongside expert voices. So, that oft-made complaint that news reports rely on personal anecdote? Well, going by our sample, for the BBC this would not appear to be the case. It’s also worth noting that almost two thirds of the broadcast news sample - which included a lot of very short summary items - either relied on a single viewpoint or paraphrased alternative views. In items reporting on new research, this proportion rose to about 70%. So there was little room for alternative views here; for 'balance' or otherwise. I also thought it was significant that only about an eighth of broadcast news items about research, and almost two fifths of online news items about research, included comment from independent scientists (i.e. those with no connection to the research being reported).
Whether you think any of these various expert voices (when they were included) are the right ones is another matter. You can't just say scientists are the appropriate voices and leave it at that. Simply being 'a scientist’ doesn’t necessarily make you an expert on the topic at hand, and there are other areas of relevant expertise, especially when it comes to science policy issues. We had to think a lot about the rather large ‘other professional expertise’ category we considered alongside scientific, clinical, lay, non-science academic and 'unknown'. Other forms of professional expertise came from politicians, spokespeople for charities and representatives of industry. It’s important these experts are included in debates about science. For example, a scientist might be able to talk about their research, but know little about the policy surrounding it. Equally though, many scientists do have expertise in such areas as part of their research. It depends, which is rather the point about 'a scientist' being too blunt a description. We did notice a relative lack of direct comment from the UK government (less than 2% of items) as part of science stories, something I think is really worrying in terms of public debate over science policy.
Aside from the appropriateness of expertise being a rather slippery issue in itself, very little information is given about the expertise of a speaker. We found a lot of reliance on phrases such as ‘scientists have found’ and ‘experts say’. Personally I think we need to address this issue before we can even get on to matters of whether the experts are the right ones or not. Although expertise may be implied through editing, and TV in particular can flag up an institutional association and job title, we rarely saw a contributor’s disciplinary background specified. I thought it was especially significant that in broadcast reports about new research we found little explicit reference to whether or not a particular contributor was involved in the research being reported (online reports often refer to someone as ‘lead author’ or ‘co-author’). This lack of definition makes it hard for audiences to judge a contributor’s independence, whether they are speaking on a topic they have studied in depth, or whether they are simply working from anecdote. Listening to that Radio 5 call-in show, I was struck by how much of a problem that could be for call-ins as much as pre-prepared edited packages. (I do appreciate how hard it is to give this sort of context, but it's still a problem.)
One of the things I was personally excited to ask during the Trust review study was the institutional location of the voices of science. We found that they were twice as likely to be affiliated to universities as to any other type of organisation. There are perhaps also questions to be raised about the geographical distribution of these institutions. Cue headlines suggesting Welsh science is 'frozen out' by the BBC, but the dominance of researchers based in the south-east of England might be due to a range of other factors (e.g. the concentration of ‘Russell Group’ universities there).
When it came to coverage of recently published research, it was also interesting to ask which publications the stories came from. Nature, the Lancet and the British Medical Journal accounted for nearly all journal citations in the broadcast news. As with the location of research institutions, this is perhaps understandable considering their status, but the selection lacks diversity. On the reliance on particular journals, Charlie Petit was quick to note how little the American journal Science is mentioned, compared to the British equivalent, Nature. We thought this was interesting too, and wondered if this was a UK bias issue, but seeing as we found more coverage from Science when it came to online and non-news samples, it could simply be that Science's embargo times don't fit so easily with the BBC's broadcast news schedule.
If you combine the arguable lack of diversity of news sources with our concerns over the reliance on press releases it is tempting to think back to Andy Williams' point, based on his Mapping the Field study, about the 'low hanging fruit' of the almost ready-made content the highly professional press teams at these elite publications and research institutions push out. It’s hard to ascertain the reasons for this sort of coverage from our content analysis though. These sorts of questions, I believe, require studies which consider the processes of news-making, as well as the content analysis the BBC Trust asked us to carry out.
Again, listening to the Radio 5 call-in, I wondered if talking to James Delingpole was an example of taking low-hanging fruit (ditto Greenpeace, great as I thought Doug Parr was). I want interesting and informed people, not obvious ones. I appreciate that sort of clever booking takes time and expertise a researcher on a topical general show won't necessarily have. I also appreciate that the scientific community could be better at showing off its good voices, so journalists can find them. I suspect we need to work more on building relationships between scientists and journalists, and I'd like such relationship building to be done in the open as much as possible, so the process of choosing sources is more publicly accountable. I don't think that's going to be easy, but I do think it's something we need to worry about, and from a media scholarship point of view, I doubt it's something we can study just by looking at content. It's a slightly different topic - gender and science media - but the methodical approach taken by Chimba and Kitzinger in their Bimbo or Boffin work is great.
There is a lot more data on BBC science coverage in the report itself (pdf) - so do have a read. If you are especially interested in the balance issue with respect to climate change, chapter three of Matt Nisbet's Climate Shift report makes for interesting reading. I’d also add that Hargreaves and Lewis’ 2003 Towards a Better Map report on science news gives some historical context to the UK’s particular issues with respect to balance, in particular the way BSE led to framings of MMR (which one might argue, in turn, influences reporting of climate), and if you want a textbook, Gregory and Miller's 1998 Science in Public is still the best place to start.
Or, you can try the BBC report of the review, which may be brief and cheerleading, but does include a picture of Brian Cox blowing bubbles.
Alice Bell blogs at http://alicerosebell.wordpress.com.