In the first part of what I hope will be an interesting series on freedom of speech and what I’ve decided to call ‘Platform Theory’, I talked a bit about what a platform is: something which can be used to amplify, legitimate and endorse others’ voices. In this post I want to cover what kind of rights you have with respect to platforms that you control.
I’m going to take two plausible claims about the rights which come with ownership of a platform, one negative and one positive:
The positive claim, let’s call it P1, goes: The owner of a platform may use that platform to amplify, legitimate or endorse any people or views that they wish.
The negative claim, P2, goes: Nobody can force the owner of a platform to amplify, legitimate or endorse people or views which they do not wish to amplify, legitimate or endorse.
At first blush both of these claims, which I would call ‘libertarian platform theory’, seem fairly reasonable. I think there are some fairly fundamental problems with both of them, which I’ll deal with in turn.
First, let’s talk about P1. The immediate problem with this claim is that there are certain views which it is literally illegal to espouse. In the USA, these are restricted to libel and some incitements to violence, although the latter are extremely limited in scope. In the UK and EU, we are more willing to trade off freedom of speech against other values, such as social harmony and security, and as such there are restrictions not only on libel, slander and incitement to violence, but also incitement to hatred of various kinds and, in some cases, blasphemy.
It’s nigh impossible to proscribe the actual speech act itself – without instituting a version of the Thought Police that Orwell could only have dreamt of, straight out of Minority Report, we cannot physically prevent people from saying things. Instead, the state can make certain speech acts costly to perform, as illustrated in the first section of the SEP article on Freedom of Speech*. The way that this is framed in economic language is interesting, but probably a subject for another time.
Essentially, making a speech act costly means imposing some kind of sanction on people who either espouse or amplify particular views. This can be done by the state, constituting an incursion into legal freedom of speech. However, the notion of costly speech is particularly interesting when it’s cashed out in social terms. We can make the amplification, promotion or legitimation of a particular view more costly through social opprobrium. For example, if someone within a friendship group continually makes racist remarks, they may risk being ostracised by the group, or at least find themselves on the receiving end of a verbal beatdown. That doesn’t mean that their freedom of speech is being infringed (and the interface between platform theory and debates about freedom of speech is a topic I’ll be covering in the near future), but it’s a clear example of how speech can be made costly in social terms.
An interesting case study here is the recent Dapper Laughs controversy. Daniel O’Reilly made rape jokes at gigs, as well as spouting homophobia and sexism in his ‘comedy’ on a regular basis. Rather than saying that he should be prosecuted – because he had done nothing illegal – activists put pressure on those who bankrolled him: ITV, who had given him a TV series; the various places which had agreed to host him on his tour; and the tour promoter, SJM. The argument they made was that by sponsoring O’Reilly’s work, these organisations endorsed the things he said, many of which were irresponsible and misogynistic. Eventually, his TV show was not renewed for a second series and his live tour was pulled. Whilst there was never any legal pressure, the social action – the sheer number of people who mobilised against him, and the targets they chose – resulted in his platforms being stripped from him. I would argue that in platforming O’Reilly, organisations were not necessarily endorsing his views, but they certainly legitimated them in some of the ways I talked about in the first post on this topic.
Revising the Positive Claim: Can you say what you like?
With this in mind, we can see that P1 needs some revision. We know that there are some speech acts which it is illegal to platform, and so we need to caveat these out. We also know that there are some speech acts which, whilst strictly speaking legal to platform, will almost certainly provoke social pressure – and with it potential social and/or financial costs – if you choose to platform them.
A revised P1*: The owner of a platform may use that platform to amplify, legitimate or endorse any view or person they wish, so long as it does not contravene the laws of the country this platforming occurs in. In addition, they may face costly backlash if they amplify, legitimate or endorse views which are socially unpopular.
This seems a fairly reasonable claim to make with regard to the positive rights one has to use one’s platform as one wishes. I’ll cover the responsibilities which may come as the corollaries to these rights in another post soon.
The Negative Claim: Can you make me give you a platform?
The negative claim as I framed it earlier is P2: Nobody can force the owner of a platform to amplify, legitimate or endorse people or views which they do not wish to amplify, legitimate or endorse.
First, I’ll illustrate what this means in practice. Once there’s a framework in place for the simple cases, I’ll move on to what happens when the control of a space is contested, as was the case in the aborted (heh) Oxford abortion debate last year, or in the case of the Charlie Hebdo comics, or the BBC.
Prima facie it seems reasonable that nobody can force me to give them or their views a platform which I am in control of. If somebody sends me a tweet reading ‘Pls RT this important message about EVIL GM broad beans #fuckmonsanto’, I am within my rights to ignore them, refuse, or send them back a tweet reading ‘just wait until you hear about the pumpkin conspiracy’ and gleefully imagine the look of sheer panic in their eyes.
Similarly, if I were the comment editor of the Daily Mail and somebody sent me an opinion piece which talked about how great the modern world is and how it’s fantastic that there are lots of people working towards gender equality and maybe we should stop valuing women purely on the basis of their physical appearance and hey let’s get rid of the sidebar of shame and stop blaming all of our problems on hordes of immigrants who mysteriously manage to steal our jobs at the same time as lazing around collecting benefits, I would be within my rights to reject it. The Daily Mail has an editorial policy of only publishing articles which are either inane or pure evil, and the editors, who control the various platforms which constitute the overall paper, have the right to reject articles which do not fit in with this ethos. Except when they are legally obliged to print particular things – for example, when they’re forced to print a retraction which clarifies that 4 out of 5 new nurses are not, in fact, foreign – they cannot be forced to amplify, legitimate or endorse views which they don’t want to.
Contested platforms – or, should we debate abortion culture, republish the Charlie Hebdo cartoons, and platform UKIP?
So far, so simple. Whoever has control of a platform gets to choose who gets to take advantage of that platform. But what about cases where control is unclear, or is contested? I think there are three main kinds of case like this. In the first, there is a conflict between different stakeholders who all have some degree of control over a platform. In the second, there is a conflict between the views of those who control the platform and those who do not control it, but have some stake or vested interest in what is platformed. In the third, there are legal regulations which may force the platform controller to act in certain ways.
The Aborted Oxford Abortion Debate
The first case can be illustrated by the Oxford abortion debate. Towards the end of 2014, the student society Oxford Students for Life (OSFL) had planned to hold a debate on abortion. It was entitled “This House believes that Britain’s abortion culture harms us all”. There were to be two speakers: Tim Stanley and Brendan O’Neill. The debate was to be held in Christ Church college, Oxford. In response to this, a group on Facebook was set up entitled “What the fuck is abortion culture?”, where around 300 people planned to protest the event. The debate was eventually cancelled because OSFL had booked the room too late, and the college Censors said that there was not enough time to assess the security concerns before the time of the event. OSFL tried and failed to find an alternative venue for the debate, and so it did not go ahead. It is worthy of note that they could have chosen to hold the debate in one of their bedrooms, or the street – however, they chose not to do this.
The popular interpretation of events was that pro-choice students had got the debate shut down. For the sake of argument, let’s pretend that’s true. This is a case of a contested platform. First, let’s make a small but important distinction: in many instances, the words ‘college’ and ‘university’ are interchangeable. In the context of Oxford and Cambridge, however, they are not. The ‘university’ is the institution at which one studies, with lecture halls etc. spread out across the space of the town. The ‘college’ is the place where students live, eat and socialise. This means that the above debate was due to take place in a space where students lived, and the conflict is thus one between two or more sets of students who are stakeholders in the platform that is the college. One set of students, presumably including some of OSFL, wanted the debate to go ahead. Another set did not. How should these cases be decided? Could the students of OSFL force the debate to happen against the wishes of the pro-choice students?
The answer probably lies in democracy. There are three ways of deciding it: through elected representatives, by majority rule, or through stakeholder analysis. In this instance, the democratically elected representatives of the student body, the JCR, said that they did not want the debate to go ahead. If they’d wanted to, students of the college could probably have called an Open Meeting to decide whether the debate should go ahead, and then there would be majority rule. This would likely be problematic because most people wouldn’t actually turn up to the meeting, and so it collapses into de facto stakeholder analysis. Under stakeholder analysis, the people who have the most interest in whether the debate goes ahead or not get to decide whether it does. In this instance the biggest stakeholders are OSFL and students who strongly feel that their college should not be host to a pro-life organisation’s debate on abortion. In an Open Meeting, these are the groups most likely to turn out in numbers, and so the vote would likely be decided by which of them could get the most support.
A stakeholder analysis could go one of two ways. One could argue that the harm done to residents of the college through the debate taking place there outweighs the utility that OSFL would get from the debate happening. Conversely, one could argue that the harm done to OSFL members in being denied this platform for their debate outweighs the emotional or mental cost to those students who did not want it to happen.
In reality, all of this analysis is somewhat unnecessary because the debate was cancelled for bureaucratic reasons (as is so often the case in this kind of controversy). However, it does serve to illuminate the issues that arise when the use of a platform is contested. Who gets to decide whether a view or person or debate should get the use of that platform? If some of the stakeholders don’t want it, should they get their way or just suck it up? It’s an interesting conundrum.
Should Newspapers Republish the Charlie Hebdo Cartoons?
In the wake of the Paris attacks, in which a number of people, including cartoonists from the satirical magazine Charlie Hebdo, were killed by Islamist extremists, there has been considerable pressure on the British press to republish cartoons from the magazine.
To be clear, the magazine satirised most groups in society, but the pressure in this instance is to republish specifically those cartoons which satirise Islam and Muslims, particularly cartoons which depict the Prophet Mohammed.
There are a number of reasons why papers might not wish to republish these cartoons. They might be worried about putting their own staff at risk of reprisal from extremists. They might be concerned about the racialised (and arguably racist) depictions of Muslims in the cartoons. They might not want to further vilify and victimise Muslim populations at a time when attacks on Muslims and their places of worship have seen a sharp uptick. They might just not want to republish the cartoons.
However, a lot of these papers’ readers really want them to republish the cartoons. Some of them have gone so far as to abuse and even threaten those papers which do not publish them. Leaving aside the irony of sending threats to people for refusing to publish cartoons in the name of freedom of the press, let’s look at the conflict of values here. In this instance, the clash is between the wishes of those who control the platforms – the editors of papers and TV channels – and some members of their audiences. Should these particularly vocal audience members be able to force press outlets to publish the cartoons?
My feeling on this is that they shouldn’t, because the editors have control of the platforms for a reason, and if they start to publish offensive cartoons purely because some people want them to in order to make a point, the entire purpose of freedom of the press is somewhat compromised. If readers are so appalled by this display of what they perceive as moral cowardice that they decide to take their business elsewhere, then there may be financial implications for the papers, and they may wish to reconsider in future. However, this form of economic disincentive excepted, I cannot see a decent reason for allowing the pressure of public opinion to force individual press outlets to publish the cartoons. It may be good, on the whole, if one or two outlets do choose to do so, but no individual paper should be forced into it. They should maintain control over whose voices they amplify, legitimate and endorse.
Must the BBC platform UKIP?
The short answer to the question above is ‘sadly, yes’. Even if the Controller of the BBC didn’t want to have Nigel Farage on Question Time ever again, the guidelines of the corporation oblige them to give representation to parties who have a certain degree of electoral support. This is a fairly cut-and-dried instantiation of legal or contractual obligations forcing those who control a platform to provide particular people or groups with access to that platform, regardless of their wishes.
Revising the Negative Claim
In light of the examples above, we need to revise P2 accordingly. There are clearly circumstances in which people who (partially) control a platform can be forced to give it to others against their own wishes. So:
P2*: Nothing, save legal or contractual obligations, can force the owner of a platform to amplify, legitimate or endorse people or views which they do not wish to amplify, legitimate or endorse. In cases where multiple people claim to control the platform, they must decide among themselves whether a view or person should be given that platform.
In this post I’ve tried to elucidate the rights that come with the ownership or control of a platform. I don’t think there’s anything overly contentious in here, though some may disagree with me that the ‘libertarian’ principles I proposed at the beginning need any revision whatsoever, and doubtless there will be some who disagree (wrongly) with my insinuation that the Daily Mail is the physical embodiment of the Platonic ideal of evil. However, I think – and I hope you agree – that Platform Theory gives us a number of useful tools with which to analyse the various problems that arise with regard to speech in society today. That’s clear from the way that it can be applied to a number of recent controversies without issue. I’m sure there is a great deal more analysis that could be done of the specific cases I’ve talked about. For example, does providing the Charlie Hebdo cartoons with a platform legitimate or endorse the views of their authors? Are there further issues in the way that the Oxford debate was dealt with – did providing Brendan O’Neill in particular with a platform mean that OSFL legitimated not only his views on the issue at hand but also his views on, for example, trans people? I’m not certain in either of these cases, and I’d welcome comments on these issues as well as on the framework as a whole.
*On a side note, the SEP Freedom of Speech article is a fairly lucid exposition of the problems associated with free speech as they relate to principles in philosophy. It primarily covers John Stuart Mill’s Harm Principle and how it relates to harmful speech in contrast to merely offensive speech, and seeks to understand whether free speech can be legally restricted on the basis of offence alone, finding few strong arguments in favour.