Why I’m Not Especially Afraid of Free Speech on Twitter

Here’s a link to a serious take on why allowing free speech on Twitter is to be feared. I think it should be taken seriously. I’ll quote from that link at length below and intersperse some responses:

“Here are some kinds of content Twitter’s rules now prohibit that could return under Musk:

“Election misinformation
  • “Twitter’s current rules say, “You may not use Twitter’s services for the purpose of manipulating or interfering in elections or other civic processes. This includes posting or sharing content that may suppress participation or mislead people about when, where, or how to participate in a civic process.”

  • MyPillow CEO Mike Lindell lost his Twitter account after repeatedly claiming former President Trump had won the 2020 election.”

It’s easy enough to get a completely different crowd to see a problem with this by noting what else might get censored, depending on who’s implementing the rule:

  • Someone claiming that Trump cheated in 2016 or that the Electoral College is illegitimate and therefore Trump lost in 2016;
  • Someone claiming that Dubya (or the Supreme Court) cheated and Gore or Kerry won;
  • Someone urging people not to vote or to write in a name or to vote for a candidate not allowed in debates or mentioned by corporate media;
  • Someone denouncing an election system as broken and demanding reforms to it;
  • Someone holding a shadow or protest election;
  • Someone organizing a nonviolent demonstration to protest any “civic process” — one civic process is a military draft; another is sticking immigrant children in cages; another is executing people in prisons.

One alternative to having anyone implement such a rule would be to promote the speech that you consider useful and accurate, while critiquing what you find harmful.

Twitter’s algorithm may not provide a fair forum for free speech, but if not, then that is a reason to break up or take over or regulate Twitter, not a reason for Twitter to censor.

IMHO, people who cannot successfully mock the pillow man should not be blaming anyone else for anything!

 

“Medical misinformation
  • “Twitter has a policy against COVID-19 related misinformation. Medical misinformation, though, is not broadly illegal, and limiting moderation to the terms of the law could open up a host of false claims on everything from cancer cures to the safety of childhood vaccines.

  • “Twitter permanently suspended the personal account of Rep. Marjorie Taylor Greene (R-Ga.) in January for repeated violations of its COVID misinformation policies, including her false claim of “extremely high amounts of COVID vaccine deaths.””

The only thing I find remotely persuasive — and that many people clearly find enormously persuasive — about much of what is claimed about COVID and vaccines is that the people running corporations like Twitter want it banned. I happen to be reading a book about Alsace-Lorraine that mentions how the Nazis’ banning of the French language resulted in even German speakers learning more French and using more of it. I wonder if anyone’s noticed how banning Russian in Ukraine has worked out. As with all of the topics quoted above and in what follows, a ban can be counterproductive.

It can also encourage laziness. Why educate others well about what you believe if you can just censor what you don’t believe? And worse: why try to think hard about what to believe when Twitter can do your thinking for you? (And why should a government ban truly dangerous speech if it can let Twitter do that?)

I also think we want false claims recorded and citable in the future, not erased. How else can we hold people accountable for them? But, as with elections, this rule can and almost certainly will be used in ways that even the laziest sufferer of Kadavergehorsam (German for corpse-like obedience to authority) will find troubling, such as:

  • Ancient tweets from six months ago, when what the U.S. government recommended was in some way opposed to what it now recommends;
  • Theories later proven but not proven at the time;
  • Theories never proven but derived from some large group of people’s cherished and First-Amendment-protected superstition.

 

“Deepfakes and manipulated media
  • “”You may not deceptively share synthetic or manipulated media that are likely to cause harm,” Twitter’s rules say. “In addition, we may label Tweets containing synthetic and manipulated media to help people understand their authenticity and to provide additional context.” Most U.S. law does not yet address this issue.”

I’m not sure that labeling such things would be a bad idea. Labeling is not erasing. If Twitter starts warning people away from obvious satire, Twitter will look dumb to those who comprehend satire and noble to those who don’t.

The trouble is that Twitter could start labeling things falsely and allowing false things to go unlabeled. No corporation should have this sort of power at all. A government answerable to people (if we could get one of those) should have this power. But as long as we’re laying out rules for Twitter as it now exists, I think labeling this stuff, without impeding access to it in any way, is the way to go.

 

“Impersonating others
  • “Twitter says “You may not impersonate individuals, groups, or organizations to mislead, confuse, or deceive others, nor use a fake identity in a manner that disrupts the experience of others on Twitter.”

  • “In some cases, such as pretending to be someone else to commit fraud, such behavior could be illegal, but there are plenty of instances where it would not violate the law.”

Where this is damaging to someone, where the action has willfully or negligently harmed someone in a serious way, why the heck shouldn’t it be illegal? If you’re going to lobby Twitter on this, why not lobby a government? Neither one gives more of a damn than the other what you think — and only one has the power (advocated for by you) to censor you.

 

“Platform manipulation and spam
  • “”You may not use Twitter’s services in a manner intended to artificially amplify or suppress information or engage in behavior that manipulates or disrupts people’s experience on Twitter,” according to the site’s rules.

  • “Musk has criticized bots and other types of inauthentic behavior, but most instances of them aren’t specifically illegal.”

By all means, censor robots. They aren’t people.

 

“Targeted attacks and hateful conduct
  • “Twitter’s hateful conduct policy prohibits a wide range of behavior. Some, such as specific threats of violence, may be illegal, but Twitter’s policy goes far further.

  • “Among the practices that are not allowed on Twitter are displaying logos of hate groups, dehumanizing a group of people based on a wide range of characteristics including race, gender, religion or sexuality, as well as intentionally misgendering someone.

  • “Conspiracy theorist Alex Jones lost his Twitter account in 2018 after what the company said were repeated instances of abusive behavior.

  • “One-time Trump adviser Steve Bannon lost a Twitter account in 2021 for advocating the beheading of FBI director Christopher Wray and Anthony Fauci.”

Advocating beheading someone is, in my humble opinion, a form of advocating violence. It is and should be illegal.

Of course, it’s only illegal on a small scale. Advocating war is illegal under international law but perfectly acceptable to both the U.S. government and Twitter. In fact, advocating war seems to open up a license in corporate and social media to state falsehoods and unproven claims as fact.

Being hateful or bigoted or racist is something that should be addressed and corrected with wisdom and kindness, not censored. Where exactly to censor it will of course be open to wide interpretation. Facebook went so far as to change its rules to allow advocating violence if it were against Russians.

So, some sorts of bigotry are so permissible that they can allow MORE speech, while simple non-bigoted observations are likely to be seen as, and censored as, bigoted by anyone whose job is sniffing out bigotry.

 

“Graphic violence and adult content
  • “Twitter currently does not allow media that is “excessively gory” or that depicts sexual violence. That means there’s plenty of very graphic video currently prohibited that is not explicitly illegal.”

The internet is full of sites that have figured out how to separate children’s viewing from adult viewing. Twitter users have long since figured out how to separate what they want to see from what they do not. If they haven’t, they should try TweetDeck. The pretense that this is not so should not be used to justify censoring what U.S. tax dollars do in Yemen.

 

“Non-consensual nudity
  • “”You may not post or share intimate photos or videos of someone that were produced or distributed without their consent.” Some states have laws prohibiting such videos as “revenge porn.””

So should every other state.

 

“Suicide or self-harm
  • “Twitter’s rules say that “you may not promote or encourage suicide or self-harm.” While harassing someone to pressure them to hurt themselves can be illegal, glorifying or encouraging suicide broadly are not. The same holds for glorifying anorexia and other eating disorders.”

Movie-viewing sites hold dozens of movies glorifying suicide and millions of movies glorifying murder. This is a problem requiring a major cultural intervention, not the censorship of Twitter.

 

“Perpetrators of violent attacks
  • “Twitter, like Facebook and others, often removes the accounts of individuals who commit mass murders or other terrorist attacks, as well as “manifestos or other content produced by perpetrators.” This policy goes beyond what is required by law.

“Be smart: Laws vary from country to country. Pro-Nazi content, for example, is legal in the U.S. but illegal in Germany.”

 

We should get a government and do this through it.