This article is paired with 'Get your politics out of our research!...' by Thom Brooks.
Research culture in British universities has undergone a major shift over the last twenty years or so (as it happens, roughly the second half of my own full-time academic career). Many core values remain, but a newer set of dynamics has emerged around them. ‘Doing research’ and being a ‘researcher’ have become dominating markers of general academic life, altering the terms of professional self-worth and ambition.
The central reason for this shift is the intensive audits undertaken by the national Research Assessment Exercise (now the Research Excellence Framework) in order to establish markers by which Government can decide the distribution of funding in a way that ensures ‘value for money’. This audit, undertaken every four or five years since 1986, examines submissions from each university department regarding its staff publications and its profile of research students, research funding and research planning.
It has placed exchange value right at the centre of research culture. In particular, it has installed the idea of a potential score, a ranking, as something close to being the culminating moment of the research process, often having an implicit priority over any other outcome.
Widespread scepticism surrounds Research Assessment but, not surprisingly, most academics have learned to work within its terms, which receive regular institutional reinforcement. Alluring ideas of stardom have been generated, alongside new ways of feeling permanently uneasy. (There is particular unease around the politicization of research by the government, as discussed here by Thom Brooks, who is campaigning for the removal of the 'Big Society' from research funding priorities.)
The language within which the new order has been established mixes the superlative and the monitory. Mystical deployment of the idea of ‘excellence’ has been at the heart of the exercise, the term shifting from an accolade of distinction to something close to an expectation. The current description of ‘3 star quality’ is a good example. It is defined as ‘quality that is internationally excellent in terms of originality, significance and rigour but which falls short of the highest standards of excellence.’ This formulation, especially the turn achieved around ‘but’, characterises the strand of tortuous bad-faith that runs through the whole project (how helpful to be warned about those ‘lower standards of excellence’ waiting to mislead judgement in the unwary!).
Such an approach (which we can call cryptic elaboration) can only be countered by the integrity and the sheer awkwardness of at least some of those peer-reviewers charged with doing the assessing. In my own experience as a panellist on three exercises since the 1990s, I know well the frustrations encountered in trying to be ‘fair’ in the context of opaque guidelines and madly unrealistic procedures and schedules.
In these circumstances, it is useful to ask what structures might make it possible to talk more honestly about research obligations, research costs and research quality and use. This is not at all a matter of evading scrutiny, but the now familiar problem of trying to find means of audit that do not adversely impact upon what is being audited.
Raising fundamental questions about research values touches on issues about which the academic community is likely to feel sensitive and divided. Indeed, I feel the contradictory pulls myself. It is apparent that a lot of research, leaving aside its exchange value for assessment purposes, is primarily of benefit within fields of inquiry themselves, outside which it may not even be comprehensible. Yet defending this has come to seem less strategic than placing emphasis, sometimes rather desperately, on the contribution of research to economic and social development.
There is a ‘romance of research’, portraying a co-ordinated, cumulative progress, with all efforts (‘contributions’) playing their part in the general advancement of human knowledge. This is a heavily science-based view, but even here it risks misrepresentation. ‘Research’ is more typically made up of a profusion of esoteric activities at different levels of specialism across a spectrum of ‘fields’, with considerable areas of overlap, duplication, mutual ignorance and enmity. Within the regime of assessment, the sheer requirement to research and publish has joined the more traditional pattern of motives and goals.
All academics should have time for scholarship and inquiry, and the opportunity to win competitive funding. Research ‘horizons’ will continue to be inspiring for many, holding the promise of activities that give personal satisfaction as well as professional recognition. However, the present system of regular and detailed accountability has distorted the working life of departments, undercut collegiality, and added to intellectual enthusiasm and the ‘wish to know’ something more dutiful, more strategic and quite often more boring.
It has reduced the extent to which academic writing is an act of communication, however small the readership, and increased the passive sufficiency of the publication itself as listable object. These circumstances invite not only criticism but the asking of deeper questions. How does academic research fit within the broader economy of public and corporate knowledge? And what is the relationship between university values and the neo-liberal economic order?