I remember a bit of a row on the Qmethod.org listserv a while back. There was a discussion about Q Assessor, a tool that allowed both the initiated and the unwashed to do Q studies faster. It clearly riled some members, who argued that Q was not something to be rushed. And in part I agree. But in the last year I have been looking at Twitter data as a source of statements for my concourse, and it has revealed to me reasons why we need to do Q faster. Let me explain.
Although I have used Q in a number of ways in the past, my main reason for using Q is to understand the subjectivity that surrounds policy ideas. Anybody remember the slogan ‘war on terror’, or the reframing of global warming and associated concerns as ‘climate change’? In the UK an example would be the ‘big society’, which seems to have had a three-year life expectancy despite being trailed as the Prime Minister’s ‘big idea for politics’. The current media consensus is that the idea is now dead and defunct. My hunch about all of this is that the formative stages of a policy idea’s life in the spotlight matter. Usually, once the launch is over, the report published and the press release issued, the policy communities take to Twitter to express their views. They do so often with humour and irony, and the popular tweets are cascaded through networks of followers, propelling the message in what some call going viral. Policy ideas live and die by the web.
Q methodologists go to great lengths to draw on multiple sources of concourse: newspaper archives, documents, observations, interviews and literature reviews. They bring them together, sample them down and then administer the Q sorts. Fine, and long may this continue. But whilst I was collecting Twitter data surrounding a recent policy idea, to have elected police commissioners in England and Wales, I noticed something interesting. If you take the tweets running up to the election as a whole, some 50,000 over a couple of weeks, you can see which words and terms occurred most commonly. They give you a sense of the common descriptors for how the policy was viewed. Focus on the data a day at a time and, as Steve Brown himself would say, we find ourselves turning up the microscope. The language varies day to day; certain new phrases come in and stick.
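Turning up the microscope in this way is simple to sketch in code. What follows is a minimal, hypothetical illustration of per-day term counting, not the pipeline actually used in the study: the example tweets, the stop-word list and the function name are all my own invention.

```python
from collections import Counter, defaultdict

# A tiny stop-word list for illustration; a real study would use a fuller one.
STOPWORDS = {"the", "a", "an", "to", "of", "in", "and", "is", "for", "on"}

def daily_term_counts(tweets):
    """Count term frequencies per day from (date, text) pairs."""
    counts = defaultdict(Counter)
    for date, text in tweets:
        # Crude tokenisation: split on whitespace, strip punctuation, lowercase.
        words = [w.strip("#.,!?\"'").lower() for w in text.split()]
        counts[date].update(w for w in words if w and w not in STOPWORDS)
    return counts

# Illustrative data only, not drawn from the actual PCC dataset.
tweets = [
    ("2012-11-12", "Plan to spoil my ballot in the PCC election"),
    ("2012-11-12", "Spoil your ballot! #PCC"),
    ("2012-11-13", "What a shambles this election is"),
]
counts = daily_term_counts(tweets)
print(counts["2012-11-12"].most_common(3))
```

Running the same counter over the whole fortnight gives the overall descriptors; running it a day at a time shows terms like these arriving and fading.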
Let me give you a few examples. ‘Spoil’ emerged as a frequently mentioned term as a campaign mobilised to encourage people to spoil their ballots. ‘Shambles’ came in to describe how the election was being administered. #MyPCC, the hashtag of the Home Office’s own campaign, almost disappeared. What I am trying to say is a simple point: that concourse, the volume of debate surrounding a topic, is not static. Perhaps we could grandly call these daily concourses, or micro-concourses; I don’t mind, but you get my point.
If we are to understand the formation of concourse around emergent ideas, policies, whatever, then we need the capability to capture voluminous discussion. We need tools that can take datasets of 20, 30 or 100 thousand tweets or Facebook posts and pull out potential statements. So maybe we do need to do Q faster. I’ll think on.
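As a very rough sketch of what such a tool might do, the snippet below collapses retweets and near-identical wordings and ranks the recurring phrasings as candidate statements. Everything here, the normalisation rules, the function name, the example tweets, is illustrative only; a real pipeline would need far more careful cleaning and human judgement in the sampling.

```python
import re
from collections import Counter

def candidate_statements(tweets, min_count=2):
    """Collapse retweets and near-duplicates, then rank recurring
    phrasings as candidate Q statements."""
    norm = Counter()
    for text in tweets:
        t = re.sub(r"^RT\s+@\w+:\s*", "", text)   # drop retweet prefix
        t = re.sub(r"https?://\S+", "", t)        # drop links
        t = re.sub(r"[@#](\w+)", r"\1", t)        # unwrap mentions/hashtags
        t = re.sub(r"\s+", " ", t).strip().lower()
        if t:
            norm[t] += 1
    return [(s, n) for s, n in norm.most_common() if n >= min_count]

# Illustrative data only.
tweets = [
    "The PCC election is a shambles",
    "RT @someone: The PCC election is a shambles",
    "Spoil your ballot on Thursday http://example.com",
]
print(candidate_statements(tweets))
```

Even something this crude turns tens of thousands of tweets into a shortlist a researcher can sample from; the point is the speed of the first pass, not replacing the methodologist.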