
Why scorecards in arts and culture do more harm than good

Modern society has become addicted to ratings and league tables. But a new scorecard, which aims to give ‘good art’ a numerical ranking, is utterly wrong-headed.

Jul 03, 2017, updated Jul 03, 2017
The Record – a genre-less, story-less dance piece – would never fit into a standardised category. Photo: Maria Baranova

Foxes were introduced into Australia from Britain in the 19th century for the recreation of faux-English huntsmen. They destroyed dozens of native species. In the 21st century a parallel is at hand in an export of cultural metrics from Australia to the UK. The impact may be equally damaging.

We started writing about this from our experience of what makes Adelaide’s cultural life so durable and sustainable. How do we manage to do so much, so well, with so little money, you ask?

Well, we refuse to count the ways, but we certainly have plenty to tell you. And what we have to tell is gaining purchase in the UK, where an Australian-born system of cultural metrics is being rolled out by Arts Council England.

Culture Counts, developed in Western Australia by the Department of Culture and the Arts, is a dashboard-style data program designed to be used across art forms. Its aim, according to a Manchester-based pilot, is “a sector-led metrics framework to capture the quality and reach of arts and cultural productions”.

What is proposed is substantial, serious, and no doubt well-intentioned. Unusually for a government-led measurement scheme, arts practitioners as well as policy experts have helped develop it.

Yet we at Laboratory Adelaide – an ARC Linkage research project into culture’s value – view the venture with dismay. We argue that the approach is wrong-headed and open to political abuse.

In essence, Culture Counts is a quantitative scorecard for artistic quality, with a set of standardised categories translating a set of verbal descriptions into numbers.

For example, if a surveyed audience can be prompted to say of a cultural experience that “it felt new and different” or “it was full of surprises”, it would rate highly on a 5-point scale for “originality”. That number would then sit on the dashboard beside other numbers for “risk” and “relevance”.
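As a purely hypothetical illustration (the survey prompts shown are drawn from the article, but the scoring and aggregation method here are our assumptions, not Culture Counts’ actual implementation), the kind of collapse such a dashboard performs can be sketched as:

```python
from statistics import mean

# Hypothetical audience responses: each respondent rates prompts
# such as "it felt new and different" on a 1-5 scale. The dimension
# names follow the article; everything else is assumed.
responses = {
    "originality": [5, 4, 5, 3, 4],  # "it felt new and different"
    "risk":        [3, 4, 2, 3, 3],  # "it was full of surprises"
    "relevance":   [4, 4, 5, 4, 3],  # "it spoke to my experience"
}

# Collapse each dimension into a single dashboard figure.
dashboard = {dim: round(mean(scores), 2) for dim, scores in responses.items()}
print(dashboard)  # e.g. {'originality': 4.2, 'risk': 3.0, 'relevance': 4.0}
```

The very ease of this collapse is the point at issue: the single figure survives onto the dashboard, while the verbal prompts and the context behind it do not.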

The categories are nuanced enough to provide usable feedback for practitioners and bureaucrats with the time and desire to think hard about what the numbers mean. And we understand the pressure cultural organisations face to justify their activities in quantified ways.

But will funders analyse the numbers with care? Will artists resist the temptation to trumpet “a 92 in the ACE metric” any more than vice chancellors have refrained from boasting of their rankings in university league tables?

We think not. A quantitative approach to quality betrays naivety about how people look at dashboard data, privileging a master figure or, at best, two or three figures. Context is lost to the viewer, and the more authoritative a number is presumed to be, the more completely it is lost.

Numbers and culture can be dangerous bedfellows. Andy Maguire/flickr, CC BY

The second problem with a metric for artistic quality is the homogeneity of purpose it implies. A theatre in Thebarton, an orchestra in Melbourne and a gallery in Broome not only do different things in different places; their values are different too. They can be compared, but doing so requires critical assessment, not numerical scaling.

This view was discussed at length in the UK’s 2008 McMaster Review, Supporting Excellence in the Arts: From Measurement to Judgement. It is to be regretted that the current UK government has failed to heed the advice of this insightful document.

A third problem with the approach is the political manipulation it invites. Metrical scores look objective even when reflecting buried assumptions. If a government decides it wants to support (say) “innovation”, different projects can be surreptitiously graded by that criterion and “failures” de-funded. The following year the desideratum might be “excellence” and a different crunch would occur. Supposition is camouflaged by abstraction and the pseudo-neutrality of quantitative presentation.

Arts Council England’s metrics will be expensive. They will demand time, money and attention from resource-strapped cultural organisations that cannot spare them. Is it worth it? This is a vital point. The introduction of a new quantitative indicator should tell us something we didn’t know before. It is not enough to translate verbal descriptions into numbers as a matter of course. Doing so has to yield knowledge we didn’t already have.

A scene from The Record. Maria Baranova

If the only answer is “by using numbers we can benchmark cultural projects more easily”, then we have a fourth problem. The incommensurability inherent in concrete instances of creative practice is not something that will be addressed by improving standardised measurement techniques.

In fact, the more sophisticated the Council’s approach becomes, the more its numbers will stand out as two-dimensional. In this way a scorecard of artistic quality is not only misrepresentative, it is self-defeating.

We’ve written about this in The Conversation and in the British journal Cultural Trends (“Counting Culture to Death: An Australian Perspective on Culture Counts and Quality Metrics”), which is what academics do. More unusually, especially for Australians writing about culture, it has been picked up in the UK in places as disparate as Arts Professional and the Daily Telegraph.

This may just be a flurry of luvvies happy to think that what they do is something special, not to be traduced by nasty numbers. Or it might be a sign that the metric tide, which has flowed so strongly for a couple of decades, is turning. We hope the latter. Big data and new forms of quantification can be genuinely useful, but they will not do your thinking for you, and they cannot justify one art form against another. Sustaining value in culture requires judgement as well as measurement.

So what might this mean back here in Adelaide?

Smoochi/flickr, CC BY-NC

In Laboratory Adelaide, we think that energy shouldn’t be wasted in generating elaborate sets of numbers that few people understand and no-one really believes.

People will use the numbers they like and hide those they dislike, and little mutual understanding of why we all bother will result. Numbers only tell a story when set in a larger narrative for understanding the value of culture. We are the ideal size to build that story, and our recent experience suggests that we might be able to lead the world once we have it.

Laboratory Adelaide: the Value of Culture is an ARC-funded Linkage project based at Flinders University and led by Professor Julian Meyrick in collaboration with Associate Professor Robert Phiddian, Dr Tully Barnett and Heather Robinson (all Flinders), Associate Professor Steve Brown (Australian Institute of Business), Professor Stephen Boyle (UniSA), Alan Smith (State Library of South Australia), Karen Bryant (Adelaide Festival Corporation) and Rob Brookman (State Theatre Company of South Australia).

Read the article in full in The Conversation, by Flinders University Professor Julian Meyrick, Professor Richard Maltby, Associate Professor Robert Phiddian and Dr Tully Barnett.

Copyright © 2024 InDaily.
All rights reserved.