Policy Ideas in a Digital Age – new conference paper

Hi

Next week I am off to Grenoble to present a new paper in Session 3, "New Ideational Turns", part of Panel 84, "New Directions in the Study of Public Policy", convened by Peter John, Hellmut Wollmann and Daniel A. Mazmanian, at the 1st International Conference on Public Policy, Grenoble, France, 26-28 June. The session runs on Friday 28 June, 8.30-10.30, in the Sciences Library Auditorium.

Policy Ideas Paper

It is a draft paper – I welcome your comments or suggestions

 Abstract

This paper argues that the discussion of public policy online is offering new and exciting opportunities for public policy research exploring the role of policy ideas. Although considerable work focuses on political ideas at the macro or mid-range, specific policy ideas and initiatives are overlooked, thought to be "too narrow to be interesting" (Berman, 2009, p. 21). This paper argues that the prolific use of social media among policy communities means it is now possible to systematically study the micro-dynamics of how policy ideas are coined and fostered. Policy ideas are purposive, branded initiatives that are launched with gusto; flourish for around a thousand days; and then disappear with little trace as attention shifts to the latest and loudest. At best, media reports will document that Birmingham's Flourishing Neighbourhoods initiative has been "scrapped", Labour's Total Place programme has been "torn up", or the Coalition's big society policy is "dead". Save for a return to the policy termination literatures of the late 1980s, our impotence in conceptualising such death notices reveals how little effort has been invested in understanding and theorising the lifecycle of policy ideas. In response, this paper conceptualises policy ideas, their life, death and succession. The paper draws on a case of the recent Police and Crime Commissioner elections held across England and Wales in November 2012, and the attempts of the Home Office to coin and foster the hashtag #MyPcc.

Acknowledgement: The primary research reported here was funded by British Academy Grant SG112101, "The shape of ideas to come: harvesting and analysing online discussion about public policy", and by a University of Birmingham Roberts Fellowship (2008-2013). Heartfelt thanks to the research team: Gill Plumridge, Becky O'Neil, Tom Daniels, Pete Redford, Phoebe Mau, Diha Mulla, Misfa Begum, Sarah Jeffries and Osama Filali Naji for your empathetic coding, unwavering enthusiasm and crowd-like wisdom.

DOWNLOAD FULL PAPER HERE –

Policy Ideas in a Digital Age by Stephen Jeffares

Missing out? What proportion of conversation do I miss out on using the Twitter API versus PowerTrack – 99pc, 66pc or 10pc?

You hear rumblings that you miss up to 99pc of the conversation using the Twitter search API rather than PowerTrack, but you also hear that it depends on the velocity of the topic.

In the post below I conclude that the proportion of the conversation you miss out on using the API rather than PowerTrack depends on the velocity of the topic, and based on the example of Mrs Thatcher's funeral the answer is either a tenth or two-thirds.

At 8:15 on 17th April 2013 we set up a GNIP fetch and an API fetch for the following terms:

* Thatcher funeral
* Thatcher's
* funeral cost
* #thatcherfuneral

How did they compare?

* By 11:28, GNIP was at 65,869 and the API at 24,354 – around 36.9pc of the "full set".
* By 12:25, it was 96,636 for GNIP and 31,345 for the API – around 32.4pc.
* By 12:49, it was 105,022 for GNIP and 35,316 for the API – 33.6pc.
* By 13:24, it was 113,519 for GNIP and 41,842 for the API – 36.8pc.
* By 8:15 the following morning (18th April 2013), it was 190,240 for GNIP and 109,212 for the API – 57.4pc captured, or 42.5pc missing.

(Note: the figures quoted for the API include 10,168 historical tweets captured between 21:20 on 16th April 2013 and 8:14 on 17th April 2013.)
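For anyone wanting to reproduce these percentages, the arithmetic is simply the API count divided by the GNIP count. A minimal Python sketch, using the snapshot counts quoted above:

```python
# Coverage of the Twitter search API against the GNIP/PowerTrack "full set",
# using the snapshot counts quoted above (the API counts include the
# 10,168 historical tweets mentioned in the note).
snapshots = [
    ("11:28", 65869, 24354),
    ("12:25", 96636, 31345),
    ("12:49", 105022, 35316),
    ("13:24", 113519, 41842),
    ("08:15 next day", 190240, 109212),
]

for when, gnip, api in snapshots:
    coverage = api / gnip * 100
    print(f"{when}: API captured {coverage:.1f}pc ({100 - coverage:.1f}pc missed)")
```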

This shows that the API is not suitable for major issues or event Tweeting, where the rate climbs to 10,000 tweets per hour and beyond.

However, where rates are closer to 1,000 per hour, the API stands up fine. For example, during the same period we pulled in feeds for the words 'state & funeral': at 12:56 it was 3,914 for GNIP and 3,498 for the API (minus 835 historic tweets) – 89%.

So it depends on the velocity. Some will add that you get better-quality metadata with PowerTrack, but if you are researching a low-velocity topic over an extended period of time, you might be just fine with the API.
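For a low-velocity topic, that kind of periodic fetch is straightforward to script. Below is a minimal sketch against the 2013-era v1.1 search endpoint, assuming an application-only bearer token; the term list and storage step are placeholders, and this is illustrative rather than production code:

```python
"""Periodically poll the (v1.1-era) Twitter search API for a set of terms.
BEARER_TOKEN is a placeholder for your own app-only credential."""
import time
import requests

SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"
BEARER_TOKEN = "..."  # placeholder

def fetch_new_tweets(query, since_id=None):
    """Return tweets newer than since_id for a single search query."""
    params = {"q": query, "count": 100, "result_type": "recent"}
    if since_id:
        params["since_id"] = since_id
    resp = requests.get(SEARCH_URL, params=params,
                        headers={"Authorization": "Bearer " + BEARER_TOKEN})
    resp.raise_for_status()
    return resp.json().get("statuses", [])

terms = ["thatcher funeral", "#thatcherfuneral"]  # example terms
latest_seen = {t: None for t in terms}

while True:
    for term in terms:
        tweets = fetch_new_tweets(term, latest_seen[term])
        if tweets:
            latest_seen[term] = max(t["id"] for t in tweets)
            # ...store the tweets here (CSV, database, etc.)...
            print(term, len(tweets), "new tweets")
    time.sleep(60)  # poll gently; fine for topics running ~1,000 tweets/hour
```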

Now collecting…

For the last few months we have been collecting discussion of public policy on social media using DiscoverText. We are trying to understand how public policy is discussed online. To date we have collected just under half a million tweets, Facebook posts and YouTube comments on 26 different policies and issues.

Here is a list of the kind of things we have in the archive.

* GayMarriage (2,462 tweets, hashtag)
* Community budgets (1,800 tweets)
* 10p tax rate (43,924 de-duplicated tweets)
* Bedroom tax (8,155 tweets)
* Big society (2,247 tweets, hashtag)
* Bristolmayor (8,847 de-duplicated tweets, hashtag)
* Nick Clegg "Sorry" video (1,030 YouTube comments)
* Councillors for Hire (4,092 de-duplicated tweets)
* Lansley Rap YouTube song (1,037 YouTube comments)
* EUSpeech, David Cameron (40,000 tweets and 54,000 Facebook page posts)
* FakeXmas Twitter campaign (57 tweets)
* 2012 floods (1,498 tweets)
* Hospital food (1,974 de-duplicated tweets)
* HS2 (13,685 tweets, 100 Disqus posts)
* Local elections (16,926 tweets, ongoing)
* Mansion tax (6,025 tweets and 188 Google Plus posts)
* Minimum unit price, England (12,000 tweets)
* Neighbourhood planning (262 tweets, ongoing)
* PCC (45,000 tweets in the build-up and 50,000 post-election, 1,211 Facebook page posts)
* Birmingham cuts announcement (1,528 tweets)
* Rotherham, UKIP foster parents decision (6,970 tweets)
* Thatcherism, post-death (23,700 tweets)
* Thatcher funeral, morning of the 17th (100,000 tweets)
* Transforming Rehabilitation, probation service reform (350 Facebook posts, 2,900 tweets)
* Troubled families (4,000 tweets)
* Work Programme (10,343 tweets)

The work to understand the shape of the debate starts by de-duplicating exact and near duplicates; we then check that the tweets are on-topic and not just opportunist hashtag spam. Next we identify those that express opinion about the topic and divide them up by theme. We draw on a dispersed team of real-life human coders who code portions of the datasets, and we check for inter-coder agreement and validity. The human coding is then used to train custom machine classifiers to classify large portions of the datasets, reducing the need for human coding. One further way of getting a sense of the emerging shape of the discussion is to ask a group of people to Q sort a diverse sample of items using crowdsortq.com. The analysis identifies shared viewpoints and informs further rounds of coding.
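To make the first and last of the machine-assisted steps concrete, here is a rough sketch of de-duplication and classifier training. The project itself does this inside DiscoverText; the code below is an illustrative stand-in using scikit-learn, with toy data and function names of my own:

```python
"""Sketch of two pipeline steps: near-duplicate removal and training a
classifier on human-coded tweets (scikit-learn, toy data)."""
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def normalise(text):
    """Strip retweet prefixes, URLs, case and extra whitespace so that
    exact and near duplicates collapse to the same key."""
    text = re.sub(r"^rt @\w+:?\s*", "", text.lower())
    text = re.sub(r"https?://\S+", "", text)
    return re.sub(r"\s+", " ", text).strip()

def deduplicate(tweets):
    seen, unique = set(), []
    for t in tweets:
        key = normalise(t)
        if key not in seen:
            seen.add(key)
            unique.append(t)
    return unique

# Human-coded examples: 1 = expresses opinion, 0 = not (toy data).
coded_text = ["This policy is a waste of time",
              "Hustings for PCC candidates tonight at 7pm",
              "Privatisation by the back door if you ask me",
              "Full list of candidates on our website"]
coded_labels = [1, 0, 1, 0]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
classifier.fit(coded_text, coded_labels)

# Once trained on enough coded batches, the model can classify
# thousands of uncoded tweets in one pass.
print(classifier.predict(deduplicate(["What a shambles this election is",
                                      "RT @someone: What a shambles this election is"])))
```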

More to follow…

Why big data social scientists “want it all”

Here’s the big data fantasy project in a nutshell:

“We’re pulling in every tweet, every post, every outward link, every update…It is a huge challenge – the platforms make it difficult for us, they keep changing how we can get hold of it…The APIs don’t give us enough – especially when things trend –but there are ways around it. We find a way around it…We’ve got big data”.

There is a new kind of social scientist: the big data social scientist. Top of their Desert Island Discs choices might be Queen's "I Want It All" (you know the one – "I want it all, I want it all, I want it all, and I want it now!").

There is something in the "I want it all" aspiration that reminds me of when I travel on the train and see the man at the end of the platform: notebook and SLR camera around his neck, sandwiches and flask of coffee packed carefully in his knapsack. He has in his hand a book listing the numbers of every item of rolling stock currently in service. He has photographed and recorded 50% of them; he knows he has another 50% to go. He also wants it all. So that's him, the new generation of big data social scientists, and Freddie Mercury: they all want it all.

When I recently started to capture (or harvest, or some say 'scrape') tweets about the police and crime commissioner elections, I found myself with a spreadsheet of 100,000 rows and ten fields of metadata – that's 1m data points. For a first-timer to this world, I had myself big data. It was exciting. I had it all.

Then I started to learn more about the mechanism I was using to pull in these tweets. Blogs and websites were warning me that using the Twitter API to do this sometimes gives you as little as 1% of the actual tweets. The limit of around 1,500 an hour means that you can't get everything. For people who like to collect tweets about Obama or Occupy, there are times of day when you could easily end up with a tiny sample of a huge volume of tweets. But there are solutions, these people tell you – pay us a few hundred dollars and we will get it all for you. Yes, all of the tweets. No restrictions. You can have it all.

Meanwhile, imagine the big data social scientist, iPod strapped to his arm, out for a 5K run to relieve some pressure, singing along, mulling the proposition over…

"Not a man for compromise and where's and why's and living lies,
So I'm living it all, yes I'm living it all,
And I’m giving it all, and I’m giving it all,
It ain’t much I’m asking, if you want the truth,
Here’s to the future, hear the cry of youth,
I want it all, I want it all, I want it all, and I want it now”,

And you can have it now my friend. Just enter your card details and you can have it all.

Looking back at my spreadsheet of big data, it doesn't seem as big any more. I've just got an unknown sample of tweets. And on top of that I don't have their Facebook activity, or their LinkedIn, or what they wrote under the Guardian article or on the BBC News site, or their blog posts. I really don't have very much. I really only have a little bit.

The fantasy of having "it all" seems like a possibility because we have the technology, or at least we have come close to it. The new generation of big data social scientists will tell you it was easier a couple of years ago – the platforms were less protective, whereas now they are becoming risk-averse, or enlightened to how they can monetize and exploit their big data. But they battle on. "If you don't know how to hack or code, or have the means to pay", they will tell you, "then you need to think carefully before getting involved with the world of big data. You might be better suited to just regular 'data'."

But hang on. Let's look at that spreadsheet again – the one with the 1m data points. There's quite a bit in there. We should stop ourselves from judging our data by what we don't have and instead think about what we can learn from what we've got. It is a simple point, but the quality of your data depends on the questions you are asking and the claims you want to make. There are as many unanswered questions in this spreadsheet as there are tweets. The key is not to try to answer them all, nor to be led completely by the availability of data, but to be creative with our questions and to exploit what we have.

It’s time to be happy with our lot – time to change the playlist – what’s that tune by Bobby McFerrin?

On why we need to do Q faster

I remember a bit of a row on the qmethod.org listserv a while back. There was a discussion about Q Assessor as a tool that allowed both the initiated and the unwashed to do Q studies faster. It clearly riled some members, who argued that Q was not something to be rushed. And in part I agree. But in the last year I have started to look at Twitter data as a source of statements for my concourse, and it has revealed to me reasons why we need to do Q faster. Let me explain.

Although I have used Q in a number of ways in the past, my main reason for using Q is to understand the subjectivity that surrounds policy ideas. Anybody remember the slogan 'war on terror', or the reframing of global warming and associated concerns as 'climate change'? In the UK an example would be something like the 'big society', which seems to have had a three-year life expectancy despite being trailed as the Prime Minister's 'big idea for politics'. The current media consensus is that the idea is now dead and defunct. My hunch about all of this is that the formative stages of a policy idea's life in the spotlight matter. Usually, once the launch is over, the report published and the press release issued, the policy communities take to Twitter to express their views. They often do so with humour and irony, and the popular messages are cascaded through networks of followers, propelled in what some call going viral. Policy ideas live and die by the web.

Q methodologists go to great lengths to draw on multiple sources of concourse: newspaper archives, documents, observations, interviews and literature reviews. They bring them together, sample them down and then administer the Q sorts. Fine, and long may this continue. But whilst I was collecting Twitter data surrounding a recent policy idea, to have elected police commissioners in England and Wales, I noticed something interesting. If you took the tweets running up to the election as a whole, 50,000 over a couple of weeks, you could see which were the most commonly occurring words and terms. They give you a sense of the common descriptors for how the policy was viewed. But focus on the data a day at a time and, as Steve Brown himself might say, we find ourselves turning up the microscope. The language varies day to day; certain new phrases come in and stick.

Let me give you a few examples: "spoil" emerged as a frequently mentioned term as a campaign mobilised to encourage people to "spoil" their ballots; "shambles" came in to describe how the election was being administered; #MyPCC, the hashtag of the Home Office campaign responsible, almost disappeared. What I am trying to say is a simple point: the concourse, the volume of debate surrounding a topic, is not static. Perhaps we could grandly call these daily concourses, or micro-concourses. I don't mind, but you get my point.
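Watching a concourse shift like this is easy to sketch in code. A minimal example, assuming the tweets are available as (date, text) pairs; the stop-word list and the sample tweets are mine:

```python
"""Count the most frequent terms day by day, to watch a concourse shift.
Assumes tweets are available as (date, text) pairs."""
from collections import Counter, defaultdict
import re

STOPWORDS = {"the", "a", "to", "of", "and", "in", "is", "for", "on", "rt", "this"}

def daily_top_terms(tweets, n=10):
    by_day = defaultdict(Counter)
    for day, text in tweets:
        words = re.findall(r"#?\w+", text.lower())
        by_day[day].update(w for w in words if w not in STOPWORDS)
    return {day: counts.most_common(n) for day, counts in sorted(by_day.items())}

# Toy data; in practice each day holds thousands of tweets.
tweets = [("2012-11-12", "Planning to spoil my ballot #MyPCC"),
          ("2012-11-15", "What a shambles this election is #PCC")]
for day, top in daily_top_terms(tweets, n=5).items():
    print(day, top)
```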

If we are to understand the formation of concourse around emergent ideas, policies, whatever, then we need the capability to capture voluminous discussion. We need tools that can take datasets of 20, 30 or 100 thousand tweets or Facebook posts and pull out potential statements. So maybe we do need to do Q faster. I'll think on.
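What might such a tool look like? One hedged sketch: cluster the tweets by TF-IDF similarity and take the tweet nearest each cluster centre as a candidate statement. The cluster count and vectoriser settings here are arbitrary choices, not a recommendation, and this assumes a dataset large enough for clustering to be meaningful:

```python
"""Pull candidate Q statements from a large tweet set by clustering:
one representative tweet per cluster (scikit-learn; k is arbitrary)."""
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def candidate_statements(tweets, k=40):
    vectoriser = TfidfVectorizer(min_df=2, stop_words="english")
    X = vectoriser.fit_transform(tweets)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    statements = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        if len(members) == 0:
            continue
        # Distance from each member tweet to this cluster's centre.
        dists = km.transform(X[members])[:, c]
        statements.append(tweets[members[np.argmin(dists)]])
    return statements

# e.g. statements = candidate_statements(list_of_50000_tweets, k=40)
```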

Analysing the shape of policy ideas

Dr. Stephen Jeffares, University of Birmingham.

This blog post describes an ongoing research project sponsored by the British Academy called "The Shape of Ideas to Come".

This project studies Tweets that express opinion about policy ideas. By Tweets I mean those 140-character messages that people send over Twitter. By policy idea I mean anything from 'climate change' to 'big society'. In most cases they are deliberate policies invented by governments, policy makers or organisations. The interesting thing about policy ideas is that they tend to end up being discredited, usually within three years. The focus of this research is how users of Twitter express opinion about policy ideas.

The first job is to capture discussion around a particular idea. Let me give you an example. In November 2012 the Home Office was responsible for the election of 41 Police and Crime Commissioners. The voter turnout in the election was pretty poor, but nevertheless the election went ahead and there are now serving PCCs in every police area of England and Wales. The bit that is of interest to this project is how the Home Office developed the hashtag #MyPcc to focus Twitter discussions of the election. Within a few hours, users on Twitter were using the hashtag to criticise the election and the rationale for the policy of having elected commissioners. For the purposes of this project we collected 100,000 Tweets that included either #MyPCC or #PCC. We started the collection three weeks before the election. As you would imagine, most of the Tweets and discussion came in the final few days before the election, with almost half coming on the day after the election, while the results were being announced and the issue was prominent in the news cycle.

When we sat down to examine the 100,000 Tweets, the first thing we noticed is that many of them expressed opinion about the policy idea – but, importantly, not all. Many of the Tweets we found to be conversations between Twitter users or, alternatively, factual, where candidates and bloggers were publicising meetings or directing users to their websites. But alongside all of this conversation and broadcasting were relatively clear expressions of opinion. The kinds of opinionated Tweets included phrases like: "I think this policy is a waste of time"; "In my opinion this is privatisation by the back door"; "I imagine this will end up costing more than the previous approach"; "It is clear that nobody has a firm grip of what needs to be done"; "I think this is an important step forward and we need to embrace it".
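As a crude first pass, before any machine learning, you can flag Tweets containing opinion markers like the phrases just quoted. This heuristic is my illustration, not the project's method; the human coding described below is what actually does the separating:

```python
# Flag tweets that contain opinion-marker phrases like those quoted above.
# A crude heuristic only; the human coding described below does the real work.
OPINION_MARKERS = ["i think", "in my opinion", "i imagine", "it is clear",
                   "waste of time", "step forward"]

def looks_opinionated(tweet):
    text = tweet.lower()
    return any(marker in text for marker in OPINION_MARKERS)

print(looks_opinionated("I think this policy is a waste of time"))  # True
print(looks_opinionated("Hustings tonight at 7pm, all welcome"))    # False
```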

Although these opinionated Tweets vary, initial categorisation reveals overlapping themes, repeated phrases and use of metaphor and cliché. Although much can be learnt from isolating the opinionated Tweets from the others, how to go about separating them out for analysis is a major challenge facing this project. Thankfully there are software tools available that can automate much of the process, but because we are dealing with subjectivity it also needs human intervention. It requires analysts.

How it works is this: the analyst signs in to a secure website. They are given a coding scheme – usually something simple like "1. Opinion", "2. Not" – and a batch of Tweets. Once underway, the first Tweet flashes up full screen. Hit "1" for Opinion and "2" for Not. Once coded, the remaining Tweets flash up one by one until all items are coded or the analyst presses the Stop button. Because everybody signs in from their own device, several analysts can work on the same set of Tweets at any one time. Not everybody will agree on the categorisations, but it is through discussion of this disagreement that we can clarify our working definitions. Armed with clearer definitions we can move on to code new batches of Tweets with greater accuracy. Throughout the process the software is learning about the nuanced distinctions between opinionated and non-opinionated Tweets. Following further rounds of coding and review, the process of classification can be handed over to the machine. This automation opens up the potential to classify thousands of Tweets in a matter of seconds.
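Agreement between coders can be quantified before handing over to the machine. A standard statistic is Cohen's kappa, which discounts the agreement two coders would reach by chance. A minimal sketch for two coders with toy labels (scikit-learn's cohen_kappa_score does the same job):

```python
"""Cohen's kappa for two coders' Opinion/Not judgements on the same batch."""
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    # Agreement expected if both coders labelled at random at these rates.
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

coder_a = [1, 1, 0, 1, 0, 0, 1, 0]  # 1 = Opinion, 0 = Not (toy labels)
coder_b = [1, 0, 0, 1, 0, 1, 1, 0]
print(round(cohens_kappa(coder_a, coder_b), 2))  # 0.5
```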

Once the software is trained, the role of the analysts becomes one of devising a coding scheme to categorise the opinionated Tweets. This is an iterative process, but the aim is to identify key themes and overlaps, to remove duplicates, and to represent the range and diversity of the debate.

If you are interested in getting involved in the role of coding and classifying tweets about policy ideas please contact the Principal Investigator Dr Stephen Jeffares, University of Birmingham.