Censorship By Weaponizing Free Speech: Rethinking How The Marketplace Of Ideas Works
https://www.techdirt.com/articles/20180124/11124039076/censorship-weaponizing-free-speech-rethinking-how-marketplace-ideas-works.shtml
It should be no surprise that I'm an unabashed supporter of free speech.
Usually, essays that start that way are followed by a "but..." that undermines everything in the opening sentence. This is not such an essay. However, I am going to talk about some interesting challenges that have been facing our concepts of free speech over the past few years -- often in regard to how free speech and the internet interact. Back in 2015, at our Copia Summit, we had a panel that tried to lay out some of these challenges, acknowledging that our traditional concepts of free speech don't fully work in the internet age.
There are those who argue that internet platforms should never do any moderation at all, and that they should just let all content flow. And while that may be compelling at first pass, thinking it through shows it's unworkable for a very basic reason: spam. Almost everyone (outside of spammers, I guess) would agree that it makes sense to filter out/moderate/delete spam. It serves no useful purpose. It clutters inboxes/comments/forums with off-topic and annoying messages. So, as Dave Willner mentioned in that talk back in 2015, once you've admitted that spam can be filtered, you've admitted that some moderation is necessary for any functioning forum to exist. Then you get to the actual challenges of when and how that moderation should occur. And that's where things get really tricky. Because I think we all agree that when platforms do try to moderate speech... they tend to be really bad at it. And that leads to all sorts of stories that we like to cover of social media companies banning people for dumb reasons. But sometimes it crosses over into the absurd or dangerous -- like YouTube deleting channels that were documenting war crimes, because it's difficult to distinguish documentation of war crimes from terrorist propaganda (and, sometimes, they can be one and the same).
An even worse situation, obviously, is when governments take it upon themselves to mandate moderation. Such regimes are almost exclusively used to censor speech that should be protected -- as Germany is now learning with its terrible and ridiculous new social media censorship law.
But it's not that difficult to understand why people have been increasingly clamoring for these kinds of solutions -- either having platforms moderate more aggressively or demanding regulations that require them to do so. And it's because there's a ton of really, really crappy things happening on these platforms. And, as you know, there's always the xkcd free speech point that the concept of free speech is about protecting people from government action, not requiring everyone to suffer through whatever nonsense someone wants to scream.
But it is becoming clear that we need to think carefully about how we truly encourage free speech. Beyond the spam point above, another argument that has resonated with me over the years is that some platforms have enabled such levels of trolling (or, perhaps to be kinder, "vehement arguing") that they actually lead to less free speech, in that they scare off or silence those who also have valuable contributions to add to various discussions. And that, in turn, raises at least some questions about the "marketplace of ideas" model of understanding free speech. I've long been a supporter of this viewpoint -- that the best way to combat so-called "bad speech" is with "more speech." The belief, then, is that the best/smartest/most important ideas rise to the top and stomp out the bad ones. But what if the good ideas don't even have a chance? What if, because of the way these systems are set up, they're silenced before they're even spoken? That, too, would be an unfortunate result for free speech and the "marketplace of ideas."
In the past couple of months, two very interesting pieces have been written on this that are pushing my thinking further as well. The first is a Yale Law Journal piece by Nabiha Syed entitled Real Talk About Fake News: Towards a Better Theory for Platform Governance. Next week, we'll have Syed on our podcast to talk about this paper, but in it she points out that there are limitations and problems with the idea of the "marketplace of ideas" working the way many of us have assumed it should. She also notes that other frameworks for thinking about free speech appear to have similar deficiencies in an online world. In particular, the nature of the internet -- in which the scale, speed, and ability to amplify a message are so incredibly different from basically any other time in history -- enables a sort of "weaponizing" of these concepts.
That is, those who wish to abuse the marketplace of ideas by aggressively pushing misleading or deliberately misguided concepts are able to do so in a manner that short-circuits the marketplace itself -- all while claiming to support it.
The second piece, which is absolutely worth reading and thinking about carefully, is Zeynep Tufekci's Wired piece entitled It's the (Democracy-Poisoning) Golden Age of Free Speech. I was worried, from the title, that this might be the standard rant about free speech somehow being "dangerous" that has become tragically popular over the past few years. But (and this is not surprising, given Tufekci's careful consideration of these issues over the years) it's a truly thought-provoking piece, in some ways building upon the framework that Syed laid out, noting how some factions are, in effect, weaponizing the very concept of the "marketplace of ideas": insisting they support it while undermining the premise behind it (that "good" speech outweighs the bad).
In particular, she notes that while the previous scarcity was the ability to amplify speech, the current scarcity is attention -- and thus, the ability to flood the zone with bad/wrong/dangerous speech can effectively act as a denial-of-service attack on the supposedly corrective "good speech." Censorship used to work by stifling the message: traditional censorship blocks the ability to get the message out. Modern censorship, by contrast, leverages the platforms of free speech themselves to drown out other messages.
The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don't look much like the old forms of censorship at all. They look like viral or coordinated harassment campaigns, which harness the dynamics of viral outrage to impose an unbearable and disproportionate cost on the act of speaking out. They look like epidemics of disinformation, meant to undercut the credibility of valid information sources. They look like bot-fueled campaigns of trolling and distraction, or piecemeal leaks of hacked materials, meant to swamp the attention of traditional media.
There's a truth to that which needs to be reckoned with. As someone who has regularly talked about the marketplace of ideas and how "more speech" is the best way to respond to "bad speech," I find this challenging to confront. Tufekci highlights where those concepts break down:
These tactics usually don't break any laws or set off any First Amendment alarm bells. But they all serve the same purpose that the old forms of censorship did: They are the best available tools to stop ideas from spreading and gaining purchase. They can also make the big platforms a terrible place to interact with other people.
Many more of the most noble old ideas about free speech simply don't compute in the age of social media. John Stuart Mill's notion that a "marketplace of ideas" will elevate the truth is flatly belied by the virality of fake news. And the famous American saying that "the best cure for bad speech is more speech" -- a paraphrase of Supreme Court justice Louis Brandeis -- loses all its meaning when speech is at once mass but also nonpublic. How do you respond to what you cannot see? How can you cure the effects of "bad" speech with more speech when you have no means to target the same audience that received the original message?
As she notes, this is "not a call for nostalgia." It is quite clear that these platforms also have tremendous and incredibly important benefits. They have given voice to the formerly voiceless. There are, certainly, areas where the marketplace of ideas functions, and where debate and discourse actually work. Indeed, I'd argue that this probably happens much more often than people realize. But it's difficult to deny that some have weaponized these concepts in a manner designed to flood the marketplace of ideas and drown out the good ideas, or to strategically use the "more speech" response to amplify and reinforce the "bad speech" rather than correct it.
And that's something we need to reckon with.
It's also an area where I don't think there are necessarily easy solutions -- but having this discussion is important. I still think that companies will be bad at moderation. And I still think government mandates will make the problems significantly worse, not better. And I very much worry that some solutions may actually do more harm than good -- especially in dragging down or silencing important, but marginalized, voices. I also think it's dangerous that many people immediately jump to the platforms as the obvious place to put all responsibility here. There needs to be responsibility on the part of end users as well -- to be more critical, to have more media literacy.
And, of course, I think that there is a space for technology to potentially help solve some of these issues as well. As I've discussed in the past, greater transparency can help, as would putting more control into the hands of end users, rather than relying on the platforms to make these decisions.
But it is an area that raises some very real -- and very different -- challenges, especially for those of us who consider free speech and free expression an essential, core value. What do we do when free speech is being weaponized against free speech itself? How do you respond? Do you weaponize speech in response, flooding the zone right back against the "bad speech," or does that just create an arms race? What other ways are there to deal with this?
This is a discussion that was started a while back, but is increasingly important -- and I expect that we'll be writing a lot more about it in the near future.