706 Comments

Opponents of moderation also like to conflate the two. The debate is at its least sane and most viral when the terms of the debate are obscured, so this is probably a stable equilibrium.


These moderation features are often referred to as “Subreddits” and upvotes.

I find it kind of funny how often proposals for improving Twitter boil down to “Make it more like Reddit.”


In a fully decentralized system each user would select moderation options themselves and it would be impossible for any central authority to censor. But someone would complain about child porn, and governments wouldn't have the option of tolerating it.


Financial scams also tend to fall into the category of “both sender and recipient want to share this information”.


The reason why no social media app does this is that users are not the customer. Advertisers are the customer. It might be a better user experience to opt in or out of various forms of moderation, but the money comes from advertisers who simply do not want their ads to appear under somebody saying some heinous stuff. So heinous stuff gets moderated away for everyone.


The angle intentionally left out, though it's the standard elephant in the room, is the corrupting power of the moderator. Everything your outgroup says looks like an info hazard, and the more convincing it is, often because it's the more true, the more dangerous the info hazard seems.


There is also the "friends and family" argument for censorship, which I'm not sure fits in the examples given. If my friends and family are seeing lots of lizard-people content, I don't want to completely turn a blind eye to it. But if everyone is blocked from seeing it, then the problem goes away.


Welp. My long list of reasonable policies that will never get implemented just got a little longer. :'(


Moderation in this sense can de facto act as censorship if, for instance, the 'banned posts' channel consists of 99.9% pornbot spam and is therefore highly impractical to use.

A situation like this seems somewhat likely if we were to implement a minimum viable moderation product of this sort due to the 'seven zillion witches' problem as you laid out in https://slatestarcodex.com/2017/05/01/neutral-vs-conservative-the-eternal-struggle/


This is a good distinction.

Another factor is the need to drive clicks/engagement. More inflammatory content drives more engagement, which drives revenue, creating a strong incentive for the platforms to not offer such user-friendly opt-ins.


It doesn't even need to be single filters (and the ensuing arguments about who gets to decide what's disinformation or antisemitism or whatever). "Fox News says fake" and "NPR fact-check: failed" and "ADL says antisemitism" and "Proud Boys endorses" and "counter to CDC guidelines" and so on could all co-exist in the same ecosystem (ideally with cites available). It's the Good Housekeeping seal of approval on steroids. It requires some sort of annotation system and a way for users to decide which annotations they want to know about or filter based on.

I prefer an Internet where platforms give users information and the responsibility to use it, as opposed to trying to make blanket decisions on users' behalf.
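
A minimal sketch of what such an annotation layer might look like, in Python; the label names, posts, and user preferences below are all invented for illustration:

```python
# Sketch of an opt-in annotation/filter layer: posts carry labels from various
# sources, and each user decides which labels to display and which to hide on.
# Every label name and post here is hypothetical.

from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    labels: set = field(default_factory=set)  # e.g. {"counter_to_cdc_guidance"}

@dataclass
class UserPrefs:
    show_labels: set       # annotations the user wants displayed inline
    hide_if_labeled: set   # annotations that hide the post entirely

def render(post, prefs):
    """Return the post as this user would see it, or None if filtered out."""
    if post.labels & prefs.hide_if_labeled:
        return None
    visible = post.labels & prefs.show_labels
    tag = f" [{', '.join(sorted(visible))}]" if visible else ""
    return f"{post.author}: {post.text}{tag}"

posts = [
    Post("alice", "Read my miracle cure thread", {"counter_to_cdc_guidance"}),
    Post("bob", "Here is today's news roundup"),
]
prefs = UserPrefs(show_labels={"counter_to_cdc_guidance"}, hide_if_labeled=set())

for p in posts:
    line = render(p, prefs)
    if line:
        print(line)
```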


This feels like basically a shared and streamlined version of the kill files of old: https://en.m.wikipedia.org/wiki/Kill_file


I built a prototype about six years ago of a Chrome plug-in called The Grey Lady.  You could pull up an article on NYT.com and copy-edit it, and other users who followed you on TGL would be able to view your copy edited version of that article.

The idea was to be able to make this work on any website, and enable comments per copy-edited version and on the source article via TGL. Journalist friends at Reuters, Bloomberg etc. liked and hated it. There was zero monetization that we could figure out. I didn’t even bother trying to pitch other investors on it.


Have you asked Substack for this feature for ACX?


A huge thing that some of the (close-to-being-censored) people complain about is being shadowbanned or reach-limited. This idea is cool but I wonder if some folks would still be mad about being moved to the Bad Stuff channel, as it would limit their reach.


Under this model, is your governance of the comment threads here moderation or censorship?

Certainly any number of posts that people get banned for have very high engagement.


I see the pleasing elegance of this system on paper but I’m curious what it would look like in practice. If someone makes a “banworthy” post and gets marked as banned, their posts are invisible to the majority of the users of the website. Isn’t that basically a real-life version of the “shadow banning” phenomenon that so many on Twitter have already complained about?


I proposed an idea a while back (one Dorsey apparently had before me but didn't implement) that somewhat draws on this distinction. Twitter (or wherever) should allow custom algorithm creation. Users can go to a marketplace and choose an algorithm. Do you want HalakhahMax which promotes Jewish content and shuts down on shabbat and filters out anything even vaguely anti-Semitic? Great! Do you want Breadgorithm which only shows capitalist posts with appropriate socialist dunks and puts little corporate logos on all politicians and only shows ads from left-friendly corporations? Fine! Go nuts! Twitter would only enforce what was legally required. Basically a marketplace for moderation/content algorithms, with Twitter only handling the limits the government requires, like CP (which would be enforced over and above the algorithms).

This'd be a win win if your goal was solely revenue. What algorithm a person chooses is basically a giant targeting sign for advertisers. The person who chooses the HalakhahMax algorithm is probably Jewish or at least friendly and would be an ideal target for Israeli products or matza or whatever. (I'm being a bit silly but you get the idea.) And the person gets to customize their experience to a far greater degree. This could be incentivized pretty easily by Twitter doing revenue share with the algorithms too. Which would incentivize the algorithm makers to make popular and high value algorithms to appeal to advertisers. (I have a more complete treatment somewhere.)

The reason none of the social media companies do this, as far as I can tell, is that they're so based on the idea of an optimized algorithm to boost ad rates and curate discourse that the idea of opening it as a market never occurred to them. Algorithms are seen as closely guarded "special sauce" rather than something that could be a fungible value add.

But it ties into your point here: because it is a marketplace, an option, all the marketplace algorithms would be moderation and definitionally not censorship. If you're a raging Neo-Nazi and everyone opts into algorithms that block you then that legitimately isn't some central authority trying to take you down. It's that no one wants to hear your weird theories about Jews and is choosing an algorithm that excludes you. And if you insist that's effectively censorship then that's the liberal point about not having a right to a platform in its proper form.
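
As a rough sketch of the layering this "marketplace of algorithms" implies, assuming invented algorithm names, tags, and posts rather than anything Twitter actually exposes:

```python
# Sketch of the marketplace model: the platform enforces only a legally
# required baseline, and the user's chosen algorithm (a plain function)
# does the rest. All names, tags, and posts are hypothetical.

from typing import Callable, List

Post = dict  # e.g. {"author": "...", "text": "...", "tags": [...]}
Algorithm = Callable[[List[Post]], List[Post]]

ILLEGAL_TAGS = {"csam", "credible_threat"}  # baseline the platform always enforces

def legal_baseline(posts):
    return [p for p in posts if not (set(p.get("tags", [])) & ILLEGAL_TAGS)]

def halakhah_max(posts):
    # hypothetical marketplace algorithm: rank tagged content of interest first
    return sorted(posts, key=lambda p: "jewish_interest" not in p.get("tags", []))

def build_feed(posts, chosen):
    # the legal baseline applies over and above whatever algorithm was chosen
    return chosen(legal_baseline(posts))

feed = build_feed(
    [
        {"author": "a", "text": "matza recipe", "tags": ["jewish_interest"]},
        {"author": "b", "text": "ordinary post", "tags": []},
    ],
    chosen=halakhah_max,
)
print([p["text"] for p in feed])
```

The point of the sketch is just the layering: the user-chosen algorithm runs inside a baseline the platform always enforces, which is the "moderation, not censorship" split in code form.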


Fascinatingly, the trans community is already there.

Shinigami Eyes (https://shinigami-eyes.github.io/) is a Chrome/Firefox extension that allows a set of trusted users to mark users or websites as <insert bad thing here>, and then other users will see those users underlined in red. It’s completely decentralized, opt-in moderation that protects users without censorship.

It probably classifies some people in a way that would be objectionable to this community (I haven’t used it so I have no idea). But that’s fine! Who cares if the group that runs this extension doesn’t like you—they’re free to avoid you, and you’re free to keep posting.


I've pondered an analogous idea but instead of in the world of information/content, in the world of laws that constrain freedom in the name of protecting people -- e.g. drug criminalization, regulation of financial services, etc. What if we kept the system but those of us who want to trade our protection for freedom have a way of opting out?

And similarly, when I pitched this to a friend, as in this post, he suggested that although it might be complex, we could have the ability to opt out in different areas, e.g. health, financial, etc. As broad or specific as you want.


Assuming setting up a bunch of filters were even possible (false positives and false negatives happen) or politically feasible (why isn't there a filter for X niche issue?), there's the question of what default setting you're going to give to a new user.

The power of the default is immense, and let's be honest, the vast majority of people would not know or care. If you set all of them too high or all of them too low, how many users would leave instead of adjusting them? If you set it up in the registration flow, how many users wouldn't make it all the way through because of the friction? There is no benefit to a social media company in implementing this sort of function.


I think the people who hate Politician X the most (in both parties damn you) will be the ones who are unable to resist toggling on the "show banned posts" button. And they are also the loudest. Maybe we need two buttons - "hide banned posts" and "hide people who refuse to use the first button".


Isn’t this already happening, but just not in a single shared forum? Like, you already have the option of posting to (and reading from) totally uncensored forums, as well as forums that are censored in specific ways. It’s just messy, and inefficient.

One reason a single shared forum with the toggles you describe might not take off is that people often *like* to feel like they’re speaking to a specific community. Both for bad reasons, and for more sympathetic ones (like, you know what the norms are, or you expect discussions to be high quality given the other people there, or you’re less worried about context collapse).

One interesting experiment in this space is Radiopaper. The central artifact of the site is a public exchange between two people, but the exchange isn’t published until/unless the second person replies. (Other users can comment on existing conversations, but these comments aren’t published unless the commented-upon person replies or approves the comment.) It’s designed to minimize trolling and promote high quality discussions, but I also wonder if the ability to moderate who publicly engages with you (and not just what you see) will be attractive to people.


> And it would make the avoid-harassment side happier, since they could set their filters to stronger than the default setting, and see even less harassment than they do now.

Is this what the “avoid-harassment side” is worried about? Or are they worried about *anyone* seeing the harassment? Would the simple knowledge that the harassment exists and is potentially visible be a problem for many people?


Hrm... I think that's a useful distinction, though I think some activities that are generally accepted as "moderation" would fit your definition of censorship, unless "your customers" could also be read as "the customer base you're trying to have".

For instance, let's say you're trying to create a forum devoted to some particular topic. A bunch of people show up, enough that they may outweigh the small initial userbase, and start posting about a bunch of stuff that has no real connection to the focus topic, but ends up starting big discussions that don't go anywhere useful and sort of destroy the idea of the forum being a thing for that specific topic, thus killing the ability to really attract users that want to discuss that particular topic.

If "your customers" could be read as "the customers you were trying to have" or something, then it fits your moderation definition, else it fits your censorship definition, but I think it would be generally understood to be a moderation activity?


This essay addresses something often overlooked in policy debates generally: the power of good defaults, both in public policy and in society more generally.

The economist Nick Gruen has written about this well in relation to public policy at https://clubtroppo.com.au/2005/08/20/designed-defaults-introducing-the-backstop-state/ :

"Wherever possible, and before it resorts to coercion either through regulation or monetary incentives, the Backstop State will seek to assist its citizens by setting ‘designed defaults’. Citizens would remain free to make alternative arrangements. But they could also rest assured that, if they did not exercise this right to choose, they would fall back on a default option that reflected expert opinion about what was the most beneficial ‘default’ possible ... Of course we should retain the choice to take matters into our own hands. But if we do not choose to do so, it is efficient for experts to design a default which is as well suited to people’s circumstances as they can make it."

One part of the trick, of course, is to choose the appropriate defaults. Another part is to surface the alternatives in just the right way. But it can be done.


I'm hoping someone with a better memory can fill in the details on this. Back in the days of NNTP / usenet, you had a .kill file: a file that you maintained on your own computer that determined whose posts would get immediately deleted from your feed. This worked moderately well for a while, because email addresses were scarce, and spamming by creating a new email address for each post was impractical to most people.

Some time later (and this is where I get quite hazy) people started sharing their .kill files, and I vaguely remember a way of distributing .kill files.

I don't understand why platforms don't implement something similar today. If you are a Biden-is-an-amphibian person, then get your moderation done by people in your in-group; if you instead trust the Biden-is-a-lizard crowd, have others in that in-group provide your moderation.
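
For illustration, a minimal modern analogue of a shared kill file might look something like the sketch below; the blocklist entries and the example feed are made up:

```python
# Sketch of the old .kill-file idea: subscribe to a blocklist maintained by a
# group you trust and apply it client-side to your own feed.

import re

# A "kill file" someone in your in-group shares: authors and subject patterns to drop.
shared_kill_file = {
    "authors": {"spammer@example.com"},
    "subject_patterns": [re.compile(r"biden.*(lizard|amphibian)", re.I)],
}

def is_killed(post, kill_file):
    if post["author"] in kill_file["authors"]:
        return True
    return any(p.search(post["subject"]) for p in kill_file["subject_patterns"])

feed = [
    {"author": "friend@example.com", "subject": "Meetup on Thursday"},
    {"author": "rando@example.com", "subject": "PROOF Biden is a lizard"},
]
print([p["subject"] for p in feed if not is_killed(p, shared_kill_file)])
```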


The First Amendment allows for some exceptions to free speech, like direct calls for violence, obscenity, or noise regulations.


From 1994 in response to https://en.wikipedia.org/wiki/Serdar_Argic

"...at the time, there was a fear of the free use of third-party cancellations, as it was felt they could set a precedent for the cancellation of posts by anyone simply disagreeing with the messages. [...]

The Serdar Argic posts suddenly disappeared in April 1994, after Stefan Chakerian created a specific newsgroup (alt.cancel.bots) to carry only cancel messages specifically for any post from any machine downstream from the "anatolia" UUNET feed which carried Serdar Argic's messages. This dealt with the censorship complaints of direct cancellations, because carrying a newsgroup was always the option of the news feed, and no cancellations would propagate unless the news administrator intentionally carried the alt.cancel.bots group. If sites chose to carry the group, which most did, all of Serdar Argic's messages were removed from all newsgroups."


I do feel that all these free speech v censorship debates miss the main point.

There is a genuine free speech argument around Scientologists, Nazis, and pro-anorexia sites, but that doesn't seem to be the main issue.

Xi, Putin, and other censors seem most concerned about things which are either mainstream views or the truth.

There isn't really a compromise available in the vast majority of free speech debates where someone powerful wants to hide something and is willing to abuse the apparatus available to censor real information.


I think it's fine for a social media company to implement this type of plan, but I also think it's fine for it to moderate in any way it wishes, so long as its terms are clear. If Twitter did this and I were a Twitter user, I'd want to see all messages--I'm not worried about being offended and I'm curious--but if I saw that Twitter was a vehicle for activity I felt was legitimately illegal (sharing child porn), toxic to the political framework I value (spreading conspiracy theories designed specifically to undermine civic trust), or a debasement of social norms (normalizing routine discursive cruelty), I'd unsubscribe. Why would I want to pay money that supports and magnifies that type of behavior? If there are enough people like me, the platform can kiss its ambitions for growth goodbye.

I think there are legitimate grounds for censorship of online activities that are prima facie illegal (e.g., child porn; incitement to violent crime)--the government can require platforms to report and enforce bans on such activity. That's "censorship," but we've already sanctioned it by statute--we don't normally treat bans on child pornography as censorship.

A private company instituting its own bans is not censorship: it's a business model. I don't think the government should dictate what business models corporations adopt; I think users should, by opting in or out. If they demand their social media ban certain behaviors and unsubscribe if they don't, that's not censorship; that's freedom of speech and association.

Many people may not care that the social media they use for fun or convenience serve to amplify conspiracy theories or antisocial norms--they can use Scott's filters or not. But I think many customers will care, and if the filter buttons enable these negative effects, their business is likely to go to a service that bans those behaviors. The optimal result would be multiple social media platforms serving different audiences. If conspiracy-theorists want to spread fear and loathing, they'll be free to do so, just as they can now on Parler and throughout the dark web. I don't see any reason Twitter or any other private company should feel the slightest ethical obligation to accommodate them if they feel their posts are unethical.

PS: And if Scott wants to ban me for this post and erase it from the site, that's up to him. I promise not to complain about censorship or cancel culture (though I can't promise not to be surprised)!


All we ever wanted from social media sites was an infinite reverse-chronological list of the posts made by the people/organizations we follow.

We do not have that because it costs more than we are willing to pay for it ($0.00).


"The current level of moderation is a compromise. It makes no one happy. Allowing more personalized settings would make the free speech side happier [...] And it would make the avoid-harassment side happier"

I agree that allowing more personalized settings would be a boon, but I'm not convinced that the vast majority of users, who, I suspect, occupy neither the free speech nor the avoid-harassment side, are dissatisfied with the status quo. If, e. g., someone uses Twitter merely to follow their favorite celebrities and talk about sports, what do they care that another, less normal member might get banned for jokingly threatening to murder someone, or because the moderation algorithms mistook their mockery of racism for the real thing?

Also, I think a case can be made for not allowing malefactors further opportunities, even if they are stuck on a blacklist. Imagine a social media site where Donald Trump was present, but unable to be viewed by default: He'd still exert a massive social-gravitational pull, with many users hopping the content-fence to view his latest posts, such that doing so would become a requisite for understanding the latest discourse-brouhaha. Similarly, it's not as if someone with tens of thousands of followers couldn't orchestrate harassment-campaigns from behind a blacklist. An outright ban is a blunt instrument, but it has the virtue of not allowing someone to stick around and find ways to game the system.


I think one aspect of non-censorious moderation this misses is harassment in the form of "publishing private personal information that directly opens somebody up to harassment."

So, for example, if you post a nude picture of me with my personal address and phone number without my permission, I think that clearly falls on the side of "harassing me" even if I have a filter that keeps me from directly seeing your post, because the entire point is to enable people to harass me outside of the reach of the website's moderation policy, and because it isn't really expressing a "point of view" in any meaningful way that would make the value of this speech outweigh the harassment aspect.

As far as I know everyone who does moderation has to deal regularly with situations like this, and "censorship" doesn't seem like a useful word for it. (Obviously a policy like this *could* be used for censorship, if e.g. Biden decided that critical posts about his policies were personally harassing him and asked for them to be taken down. But the baseline feels fundamentally different.)


What’s to stop Twitter from arguing that they already do this? It feels like there’s a case to be made that the existence of Twitter/Parler/Tumblr/etc creates a range of “flags” about different kind of speech that are allowed/forbidden, just at a higher level of friction. Now agreed, Twitter could reduce friction by replicating the diversity of the internet within their site, but at some point “degrees of friction” seems like it muddies the waters between “moderation” and “censorship”


Ignoring the personalization/moderation side for a second, one of the more practical issues with the kind of "everything should be always up for debate for anyone willing to listen" is that nothing ever has any finality. There is never any "okay we are done for a while".

Similarly, the censorship debate is often argued as all or nothing, for all time.

Neither seems particularly optimal for society.

That is:

Take a thing that 99% of society agrees is *truly* horrible.

If you don't censor (we'll get to moderation in a second), every day you get to debate from first principles with the 78 million people (1% of 7.8 billion people) who feel the other way. That's a lot.

This isn't particularly optimal in value for society.

Never debating at all doesn't seem particularly optimal either.

But this is where we end up most of the time (IMHO) - either full censorship, or no censorship, for all time.

Moderation doesn't fix this. The problem is in the arguments for censorship in the first place. As you say, they do actually have some value - whether you opt in or not, it's still not particularly valuable to society (for most reasonable definitions of value) to *constantly* argue over whether the "truly horrible thing" is horrible or not. Even if people think it is fun to do!

A more reasonable position (to me) would be to change how often we accept re-debate on things, or the degree of re-debate, or ...

Maybe we only re-debate whether it's okay to run over a group of kids in the park for no reason every 5 years. Or maybe we only do it on "say any crazy thing you want Tuesday".

Maybe it varies depending on what percent of society agrees, or all sorts of factors.

Yes, this would be hard to get right, require lots of balance, hard choices, etc. Same as anything else in the world. IMHO, we go for the all-or-nothing extreme positions because they are seductively easy to achieve - turn it all on or off. Not whether they actually achieve the best result.

Lots of the above view could be transformed into "what should the defaults of moderation be", but it wouldn't change the basic point - if the goal of all of this debate, moderation, and censorship is to have some valuable outcome for society, we should be arguing about how to achieve the most value for society - that seems super-unlikely to be achieved through 0% censorship, or 100% censorship, as much as anything in the middle is painful to achieve.

I will also point out - we formed representative governments and similar representative structures because direct representation didn't scale in lots of ways, one of them being that direct debate on issues among the entire population was not just hard physically (and admittedly easy virtually now), but because it was not an effective way to get anywhere or decide anything even when you *did* get everyone there.

So it's also somewhat amusing to me that some folks seem to believe that direct debate among 7.8 billion people on every topic is a useful exercise in public discourse. If we are doing it for shits and giggles, sure, whatever. But as a mechanism of useful public debate and discourse? We already know it isn't. We've consistently rediscovered this in basically every society ever built, at every meaningful population size. Regardless of any level of censorship or moderation involved.


The difference is network effects and monopoly power.


What about the advertisers?

Your minimal viable product doesn’t seem advertiser-friendly at all, so it would have to be a literal paid product and not an ad-based free platform.

Is moderation that’s conforming to advertiser demands censorship?

They’re exerting control over communication where both the sender and recipient consent.


The problem with your ideas around moderation is that this Filter you're discussing has to be decided by someone, and getting filtered is the same thing as getting censored. Look at all the recent controversies on YouTube over getting videos age restricted.

Even the *smallest* barrier to your posts being seen is an equivalent to the free speech corner, and thus it's just a really clever way for censors to pretend they aren't censoring.


I don't like this distinction, and this isn't the argument I think should be made in the first place. Moderation is censorship. I cannot speak to the public on a platform in any manner I want because some third party (the mods) have decided I cannot violate their arbitrary rules.

The key point is the distribution of power. In a setting with very few global rules and many variations of local rules, individuals reign mostly supreme. Don't like how a locality runs their things and they won't change? Leave and make your own. This is like Reddit, where admins handle site-wide (but ideally limited) rules and violations of said rules, and volunteer moderators who have a stake in the group's success manage their own areas.

In contrast, a place where the global:local rule ratio is more equal (or just weighted more to the global side) is one that is engaging in censorship. This is equivalent to Facebook or Twitter, where one centralized ruleset governs everybody (leaving no one happy except those who agree with the status quo).

In my opinion, the argument should be "make Reddit-like segregation the norm".


While I agree that more user control over what I can and can’t see would be an improvement over the current “Twitter can ban anyone for any reason” scenario, I think there are two issues with that. The first is that advertisers are often the ones who want control over where their ads appear, and what content they appear next to. This is partly why I think charging for a social network leads to a better network than an open one driven by ad revenue.

The second problem is one many in the comments here have listed, which is that deciding what hits the filter and what doesn’t is itself the problematic issue, not whether or not the ability to filter exists. Here, I shockingly would say that it would be better for the government to ensure a “right to post” in the bill of rights, and to then set restrictions that are sensible and can be revisited by an elected body... NOT random, nameless Silicon Valley bureaucrats with no accountability or interest in the public good.


'I never found a thought yet that I was afraid to think, and if I ever do find such a thought I'll go ahead and think it just for spite.' -Dr. Seuss or free quote from memory of Soren Kierkegaard?


I come to the site on the premise that "it's your blog and you can moderate or ban in whatever the heck way you choose". (Is "heck" OK on this site?) What I would love to see on ANY commentable site is a detailed, nay explicit, map of moderation protocols, rules and regulations, WITH LISTS, so commenters know where they stand. Our problem is that you hold all the moderation/banning power. What might seem perfectly reasonable to everybody on the site may get your goat, and earn a ban. Knowing just where the (your) line is would be useful for civil discourse.

I comment on our online national newspaper and, after 5 years, I still cannot work out what I cannot say without getting my comment rejected. In an article on fascism, sometimes you can say fascist or fascism in the comment, other times it's deep-sixed and I have to delete the word. An opinion article on someone's racist comments or proposals will usually, but not always, be canned if I comment on that person's racist behaviour. Even replacing "racist" with "racial intolerance" doesn't always do the trick. It's a total lottery with no transparency. They also have auto-rejection for certain words, which I have laboriously compiled over time, but most are anodyne in the context. The funniest was an auto-reject for repeating the name of a South American river, the Rio Negro, which was in the article. The weirdest was my use of "hit" as in a hit song - the bot thought I implied violent action. Hint - do not use auto-reject bots as moderators.


I've previously toyed with even stronger positions than this (e.g., monopolies that host content should be restricted to only banning speech which the government can ban under the First Amendment, the argument being that anything else is a kind of abuse of monopoly power).

However your reasoning here seems wrong to me:

"Moderation is the normal business activity of ensuring that your customers like using your product. If a customer doesn’t want to receive harassing messages, or to be exposed to disinformation, then a business can provide them the service of a harassment-and-disinformation-free platform."

The position you go on to outline effectively assumes that customers won't decide to leave, so long as they can't see the offending content.

A lot of users will simply refuse to interact with a product if it has speech which, in their opinion, exceeds a certain threshold of offensiveness - even if they can't see it. They may also not want to be on the same platform as people who enjoy viewing such content, for various reasons. Hence attracting users, and the real customers, advertisers, may require completely removing speech. Brands in particular would not be comfortable with the argument "yes we have Nazis on the site, but don't worry, we've sent them to a seedy underbelly/dungeon". Under your definition then, completely deleting content can count as moderation, as it is the "business activity of ensuring that your customers like using your product."


A good point, well made.

But it does make me curious. If Substack allowed commenters to block each other, would you consider "moderation" of your own comment section completely unnecessary, even ethically undesirable?

I'm not trolling you, either. I don't know what I would do. I think it's a very difficult question to answer when it gets closer to home, and your name starts being associated with the conversation, even indirectly. (And I don't think the people who run FB or Twitter are entirely unconcerned with this point.)

When I look at places that resolutely refuse to moderate at all, like the comment section at Reason, I'm impressed by how often it descends to the sewer and one just doesn't want to bother wading through the shit soup to find the occasional bright gem -- and I have a very high tolerance for wading through shit, as well as a very thick skin.


Potentially even simpler and more elegant is for the platform to let third parties provide opt-in filters (moderation services) that users can subscribe to. This would let the platform completely avoid being the arbiter of what is acceptable, except as required by law. And it enables quality moderation without the platform having to eat the costs or pass them on to users.

Quality, politically-neutral filter offerings might charge a fee or run extra ads, but I can easily imagine various non-profits and political groups providing good "free" filters in hopes of nudging public discourse in their desired directions.

In the end, what probably happens is that very few people actually use more than the most basic free anti-spam filters. That still looks like a win to me, as it becomes harder for people to use moderation as a weapon and an excuse for taking offense. Among other things, this could help blunt cancel culture.


The problem is that in order to click that "see banned posts" button, you have to *want* to click the button. And in order to want to click the button, you have to know that there are banned posts worth reading. And you wouldn't know that, because you haven't clicked the button. And this describes 99.9% of the users, in my experience.


Let me see if I can present an intellectually honest version of the mainstream response to this idea. It would go like this: sure, optional filters are all well and good for NSFW content and other relatively minor use-cases. They're already used for such purposes and additional uses would be fine.

However, the CENTRAL issue with social media, the elephant in the room, is that as everyone now knows, if it isn't heavily filtered then populist authoritarians will use it to spread lies and conspiracy theories and seize power all over the world. And at least in the US, the populist authoritarians are telegraphing more and more openly that the next time they take power will be the *last* time -- i.e., that as soon as they're back in power they'll do as Curtis Yarvin advocates and stop the machinery of democracy itself, as did the Bolsheviks in 1917 or the Nazis in 1933. Right now, then, the whole survival of Enlightenment civilization depends on Silicon Valley, which still has some leaders smart and public-spirited enough to understand all this, putting its thumb on the scale and using its power to prevent the lies and conspiracy theories from getting the sort of traction that they unfortunately would in a "free marketplace of ideas." In the language of this post, what's needed is not merely moderation but censorship. Indeed, the stakes have become so clear that if Silicon Valley *won't* censor, the suspicion arises that it's because it secretly wants the populist authoritarians to win.

This is an ugly, cynical, and depressing theory -- there are excellent reasons why it's almost never stated so openly! Alas, I also give it at least ~25% probability that, a decade from now, we'll look back and say, damn, the theory was true.


Back when I read Slashdot regularly (as a lurker) it was vanishingly rare to see anything get outright deleted, and incredibly common to see posts get modded down to -1 where you couldn't see them in the default view. Sometimes I looked beneath the filter, and almost invariably agreed with the mods that these posts were not worth reading.

Of course, that's a different world from the one we live in now, pre-Great Awokening. It's chilling to look back on old threads and realize how offensive a lot of upmodded content was - and remember how I didn't find it offensive at the time.


Why make people opt in, though? Why not allow people to block users they don't want to see? That's the best option; maximum potential for communication, and shelter for snowflakes.

Specifically, in the twitter context, unless a person takes their account private/followers only, I think blocked accounts should still be able to view and respond to the blocking party; just because the account owner wants to block someone should not force that choice on others.


1. A few weeks before the 2020 election, Twitter prevented users from sharing a political article from the New York Post. People who wanted to see it could not. Was this action moderation or censorship?

2. On many occasions, federal agents have asked social networks to ban certain arguments related to COVID-19, and networks have done so. Is this moderation or censorship?


This seems like such a no brainer to me. I hope your platform can incept it into the public conversation. Nobody would even be mad if twitter did this. My personal preference is that the users themselves have the power to form polities and do their own moderation but this is at least an easy step in that direction. The reason I think it has to go to the users themselves is that if you don’t then you’re always going to have a really small body trying to police a much larger one (way beyond what seems reasonable) or you’re going to have to try to train some neural net or something on the moderation. But in either case people will still chafe because there is no adjudication.

My twitter product roadmap, in case there is anything interesting here, is around the idea of vesting the users themselves with the rights of a kind of digital citizenship.

https://extelligence.substack.com/p/my-twitter-product-roadmap


Hacker News has a system like this, that uses user downvotes as moderation -- first your post starts to gray out as it goes negative, and finally it becomes "dead" and not shown on the comments page. But there's a "showdead" option in the settings, and it really doesn't feel like censorship after that because you can see everything if you want to. Plus, there is a "vouch" option so users in good standing can help rescue things that were unfairly moderated.
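
A toy sketch of that mechanic; the thresholds below are invented for the example and are not Hacker News's actual values:

```python
# Sketch of score-based display: downvotes gray a comment out, heavily
# downvoted comments go "dead", and a showdead setting or a vouch from a
# user in good standing brings them back into view.

def display_state(score, vouched, showdead):
    if score <= -4 and not vouched:          # hypothetical "dead" threshold
        return "dead (visible)" if showdead else "hidden"
    if score < 0:
        return "grayed out"
    return "normal"

for score in (3, -1, -5):
    print(score, display_state(score, vouched=False, showdead=False))
print(-5, display_state(-5, vouched=False, showdead=True))   # reader opted in
print(-5, display_state(-5, vouched=True, showdead=False))   # rescued by vouch
```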


>If you wanted to get fancy, you could have a bunch of filters - harassing content, sexually explicit content, conspiracy theories - and let people toggle which ones they wanted to see vs. avoid.

The problem is that all of those categories are EXTREMELY subjective.

1. Who, precisely, gets to decide if a particular piece of content is "harassing"? A majority of users of the site? A majority of the employees of the federal government agency that is covertly meeting with the leadership of the site to determine policy?

2. Sexually explicit is straightforward enough, in the sense that I know it when I see it but cannot describe it.


Our research is exploring ways to put your argument into practice. Squadbox (https://homes.cs.washington.edu/~axz/squadbox.html , https://github.com/amyxzhang/squadbox ) is a platform that lets every individual email recipient decide what kind of email is harassment, and rely on their friends to block (only) that. Trustnet (http://trustnet.csail.mit.edu/about , https://people.csail.mit.edu/farnazj/pdfs/Leveraging_Structured_Trusted_Peer_Assessments_CSCW_22.pdf) is a platform and browser extension ( https://chrome.google.com/webstore/detail/trustnet/nphapibbiamgbhamgmfgdeiiekddoejo ) that lets every individual (i) decide who they trust, (ii) assess information for accuracy, and (iii) block information that has been assessed as inaccurate by sources they trust. Both tools give the user access to useful content flags but let the user decide for themselves what to do about the flags.


Scott, what you're calling "moderation" simply isn't what anyone in the history of social media platforms has called "moderation." You're using the word in a pretty idiosyncratic way.

Which doesn't mean that I don't like it! I think some version of this could be the best solution.

But what you're talking about is a capability for "fine-grained muting," not "moderation." The argument you're actually making is that, with sufficiently fine-grained muting, there is no need for censorship.

Here's the thing though. If you consider the predominant arguments in the mainstream media about, e.g., the impact of social media on the 2016 election, it's quite clear that the force of those arguments is about the impact even on those who are *willing* to see certain content, not simply on the experience of those who *don't*. This is the first and third of your (rejected) arguments for censorship, even if American journalists naturally don't use the c-word.


I think this distinction misses a layer. There are many forms of speech that I think ought to be permitted legally, but ought to be banned by social media.

A good example is misgendering. I think every social media site ought to ruthlessly ban anyone who engages in misgendering. But I don't think governments ought to ban that speech. If people want to privately email each other transphobic stuff, I think banning that would be overreach. But it should not be allowed to exist on a site viewable to the public.

Social media, to me, is not base level speech. It is "polite society." If I want to scream slurs, that ought to be allowed legally, but it ought to leave me disinvited to any dinner parties. And IMO, social media ought to be treated as analogous to the dinner party.

It takes a lot of resources and hard work to make social media available to me. Posting ought to be a privilege I earn through good behavior, not a fundamental right. If a company deploys their resources to allow speech to be publicly visible on their platform, they are in fact advocating that speech, and it is both permissible and necessary for them to filter out content they believe to be morally wrong. Failure to do so is dereliction of duty. If you want to be a silent content delivery system, you shouldn't make that content available to the public - it should be like email or SMS where the content is only visible to the specific intended recipient(s).


I think we are missing some separation between illegal content (regulated by the government and lawmakers) and legal content that the platform decides to prohibit. Child porn, financial scams, etc. are illegal by law, so the question should be what *legal* content should not be allowed. But then we are back in the grey area of what should/shouldn't be moderated.


This is a little bit like saying 'sure we use child slaves to harvest 90% of our cocoa beans in Nigeria, but if that really bugs you then we'll send you personally the chocolate harvested from Venezuela where we don't use actual slaves. So you should now have no problem buying from us and supporting our brand.'

You say:

>Moderation is the normal business activity of ensuring that your customers like using your product.

But many customers don't *just* care about the product as they experience it; they also care about whether they are supporting a company that does harm to the world, or does good in the world. This preference may be weakly expressed in cases where that harm is carefully obscured by being overseas and happening to people without good publicists; but it certainly seems like it's a very strong preference for many customers of social media websites.

If customers don't like using a product that they believe is harming the world and society, then mitigating what they see as harmful about it is ensuring they like the product, which is your definition of moderation.

You may well say 'that's a stupid/invalid preference for those customers to have,' but the great thing about free markets is that you don't get to decide what preferences customers have, customers get to express their own actual preferences and spend their money/time accordingly.

So I really don't think 'moderation is doing what your customers want' is going to gain you any ground here. Many customers are very clear about wanting the type of moderation you are trying to call censorship, and would not generally be happy using a product that works the way you outline.

If that weren't the case, companies could make a lot of money by moderating less. I don't think every social media company in the world is too stupid to think of the filtering idea; I think they know their customers don't want that, and it would lose them money.


A lot of platforms censor content in order to curate their customers. Consider a movie theatre that shows both pornographic and regular movies. Even if they moderate it, so that only those buying tickets to the porn are exposed to it, people would still not take their kids there because the overall vibe has shifted.

4chan might be the best example of this dynamic on the internet. There are specific boards where anything goes and if you stay away from them you can have a more sedate experience. However the mere existence of those boards drives away the normies and sets the overall tone of the site.


You can think of moderation without censorship as a matter of freedom of listening or freedom of reading.


Generally I feel that the social value of free speech should be seen as more about the freedom to *listen* than the freedom to *speak*. The freedom to speak is something nearly everyone agrees shouldn't be entirely absolute in its reach (malware linkspam probably being near the top of the list of things that should be removed from the default view). The freedom to listen, on the other hand, *does* have genuine arguments for being absolute, and I generally agree with these arguments on the principled level if sometimes not necessarily the object level of the things in question (after all, a censorship button that says "only for use on misinformed child bombers" is still, in the end, a censorship button... the promise that it will only be used in this way is only worth the ornate embossing it is printed on, and its mere existence still serves as just as much of an excuse for governments and others to request its usage in response to surveillance).

I like this post because it cuts straight to the crux of distinguishing the amount of burden a prospective listener faces to enter the Matrix -- from a simple flip of a switch on the "moderation" near-extreme to downloading an entirely different browser on the "censorship" near-extreme. The "myth of consensual communication"-ness of web 2.0 is one of a few reasons I've found myself increasingly yelling at the Cloud in recent years, and hoping that more decentralized and privacy-preserving systems become more popular to allow people to choose the level of redaction they wish to see.


An effort to distinguish moderation from censorship by pointing to the involvement of third parties misses that there is genuine confusion about which parties even *are* involved in any given exchange. Reputational concerns due to proximity are real; there is no cheap signal that lets you ignore that no matter how hard you might wish it to be so.

Even ignoring advertisers and the fact that if you aren't paying money you *definitely* aren't the customer, any social media host themselves has to worry about reputational concerns. The layman's publisher/platform distinction is not only legally bogus, it's a bad match to real user experience - there's a reason you avoid 8kun, dear reader, and it isn't because you've carefully siloed your impression of the site off from your impression of the conversation there. Avoiding that stink takes action on behalf of the host, and it is their prerogative to do so.

> The current level of moderation is a compromise. It makes no one happy.

The level of moderation on any given privately-owned site is entirely within the power and responsibility of that site's owners and controllers, barring some legally-regulated edge cases. If the level of moderation employed by a site owner is making that site owner unhappy, they have screwed up† as a straightforward matter of execution. If the level of moderation on someone else's site makes you unhappy, you can try and persuade the owner to adopt your position and/or you can leave.

†(Are there practical limitations on the "level of moderation"? *Absolutely* there are, and I'm sympathetic to the real resource investments required. But that's a distinct argument that has implications up and down this rhetorical branch, and it's disingenuous to specifically introduce it now.)

It is unforgivably sloppy to try and categorize "the current level of moderation" as a global phenomenon (Online? In the media? In "media"? In the English-speaking zeitgeist?) divorced from the owners of that moderation. There isn't a single example of actual bad moderators given in this post! I know there's a bias to attribute negative behavior to systems rather than individuals, but making the argument abstractly is merely trying to launder virtue out of zero data.


I agree that the opt-in scheme you describe is not censorship. If you allow the users to control what is shown, that is user-chosen curation. That's not censorship any more than it is for me to not subscribe to certain authors.

However, most social media companies do *not* have the "minimum viable product" you describe. They enforce their filters, so the curation is de facto censorship.

Free speech is not for the writer, it is for the reader. Most companies fall short of the MVP standard, and in so doing they restrict what people are able to read. That's censorship.


There are two important reasons to do "censorship" (as you call it here).

The first, which is my primary motive for moderating ('censoring'), is that it is a key part of maintaining a culture. Yes, some content is actively harmful to very broad norms (e.g. don't goad people into suicide) and the receiver does not want to see it. But if you want to uphold a more specific culture, then often you have to disincentivize and sometimes remove content that the sender and receiver are both net positive on seeing.

If I run a forum on tennis, and I have a user who likes tennis but is also really into environmentalism, they may start posting about environmentalism in tennis, posting about which brands are made in ways that don't use fossil fuels, and other similar tips. Suppose it finds traction, with many other users discussing this too, and then starts to snowball into discussions about environmentalism in other sports, then in other hobbies, then into the broader environmentalism activism space, with people getting really into it, until a point where there's more daily discussion of environmentalism than tennis. I think it's okay for those who run the space to say "We're not against environmentalism, but this is a space about tennis." and do things like delete user accounts who show up just to talk about environmentalism, or to not count karma accrued on posts about environmentalism, or to announce a site-wide ban on environmentalism content for 3 months. Ideally, much earlier in the process than the point where the site becomes primarily environmentalism content.

This is censorship (as you define it), and it also seems necessary to me for the functioning of walled-gardens that can maintain the integrity of their focus as they grow, and as more memetically fit ideas start to find their ways into the minds of their users and potential-users. I believe one of them must be chosen ('no removing content that sender+receiver consents to' or 'functional subcultures'), and I choose the latter.

The second reason is that *99% of users don't use personalized settings*. If I recall correctly, one of the reasons that Facebook doesn't give you lots of settings for what ads to see, is that they actually did that once, but nobody used it. My own experience of LessWrong is similar; we put filter-tags on the frontpage and most people do not use them (even I barely use them and I was one of the people who thought they were a good idea). So the addition of settings doesn't change 99% of people's experience. You made the general point yourself more eloquently than me [13 years ago](https://www.lesswrong.com/posts/reitXJgJXFzKpdKyd/beware-trivial-inconveniences), using basically the same example, but to the opposite conclusion.

If key information is hidden behind settings, most of the people you hope to get that info, will not get that info. So I am not so sure that simply doing 'moderation' is so cleanly separated from doing 'censorship', and that enacting the former is not to a significant extent enacting the latter.

(This is as per your definitions, I haven't thought about whether there are other definitions that more cleanly separate them and still capture most of what we're trying to talk about with censorship and moderation.)


I suggested this exact feature on reddit 8 years ago.

https://www.reddit.com/r/worldpolitics/comments/1z6kev/why_reddit_moderators_are_censoring_glenn/cfr1hol/

There's the thread if you're interested in how it was received by the users at the time. (Generally positive.)


So I'm actually gonna point out a big problem with this idea.

Adjacent communities bleed into each other.

If you set up a community with a Technology section and a Cute Kitties section, it biases both of those sections a little bit towards each other. You'll end up with more catlovers in the Technology section than you would have otherwise, and more tech people in the Cat section than you would have otherwise.

If you then add a Racial Slurs section, it's going to have much the same effect.

What you're proposing here isn't adding a Racial Slurs section. It's adding a way that people can fling racial slurs *directly at cat-lovers without the cat lovers being able to see it*. That's not just an adjacent community, that's an interlaced community, where no matter how pleasant what you're looking at is, you're one button-press away from people flaming it.

Reddit, IMO, has problems due to hosting a wide variety of communities dedicated solely to hating things (no, I don't mean necessarily racial hate; /r/fuckcars counts). The people who post in these communities naturally spread out and post in other subreddits, but they (inevitably) take their personality along. By catering to these hate communities you're naturally slightly increasing the amount of hatred in other communities as well - just take a look at /r/urbanhell to see the crossover.

I do not think your idea would work out well, unfortunately. It may reduce censorship, but I think you're going to have a really hard time *building a good community* on it.


"That it’s a social good to avert the spread of false ideas (and maybe even some true ideas that people can’t handle)."

Does anyone have an example of some true idea that people can't handle? I am suspicious of these types of claims. We have this notion that ideas are mind viruses, which makes some sense as a metaphor. But once you start taking that claim literally, you quickly get into the world of the highly speculative.

PS - I mean actual ideas and not specific pieces of information that could be used to do bad, like how to make a nuclear weapon or the home addresses of celebrities and politicians.


There are a lot of people that mostly seem to care about what other users get to see. I think most people agree on "avoid harassment", the real disagreement is around mis/dis/malinformation.


The true problem with censorship is when it silences certain ideas. Child porn, as you mentioned, is not an idea; it's a red herring, as nobody is truly arguing in favor of allowing it. The philosophical position that no ideas should be censored has been debated for centuries and it has a name: freedom of speech.

The problem is that today nobody really knows what freedom of speech actually is. The fact that moderation and censorship have been conflated is one problem, but so is the fact that the philosophical position has been conflated with the law (the First Amendment). It shows when people claim that freedom of speech is a right.

Freedom of speech was meant to safeguard heliocentrism, it wasn't meant to be a right of Galileo.

Expand full comment

Always worth pointing out that the laws of most countries are not nearly as permissive when it comes to speech, and an internet platform needs to follow the laws of the countries where it is used or it will be banned from that jurisdiction. And I'm not talking about repressive regimes here: expressing and promoting Nazi ideology is literally illegal in Germany (for rather obvious reasons), so giving people the option to be Nazis has to be geofenced.

Expand full comment

One problem with user-customizable fine-grained filters on a forum like Reddit or Twitter, is that everybody gets to see a different subset of the conversation.

Alice posts an argument in favor of anarcho-capitalism. Bob argues against it, but I don't see his argument because he also posts in #anchovis and I have a filter to block all people who like anchovis. Other people respond directly to Alice, but I can't make sense of what they're saying because they are taking it for granted that Bob's earlier point is part of the conversation. I respond to one of them, without being aware that I just copied almost word-for-word something Bob wrote earlier. Etc.

(The anchovis example is silly of course, but the idea of globally blocking users for having expressed bad opinions elsewhere is not. There are browser extensions etc which will hide posts from users who post in fora associated with certain hot-button topics, and they will hide those users even in other fora on unrelated topics.)
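
To make the fragmentation concrete, here's a toy sketch (the data model, tags, and names are invented for illustration, not any real platform's API) of how a reader's personal block list silently removes one branch of a thread, leaving later replies that seem to answer nothing:

```python
# Hypothetical per-user filtering: each reader's blocked tags decide which
# posts they see, so every reader gets a different version of the thread.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    tags: set = field(default_factory=set)

@dataclass
class User:
    name: str
    blocked_tags: set = field(default_factory=set)

def visible_thread(thread, user):
    """Return only the posts this particular user is shown."""
    return [p for p in thread if not (p.tags & user.blocked_tags)]

thread = [
    Post("Alice", "Anarcho-capitalism would work because...", {"politics"}),
    Post("Bob", "Counterpoint to Alice...", {"politics", "anchovis"}),
    Post("Carol", "Building on Bob's counterpoint...", {"politics"}),
]

me = User("me", blocked_tags={"anchovis"})
for post in visible_thread(thread, me):
    print(post.author, "-", post.text)
# Bob's post is hidden for me, so Carol's reply appears to answer nothing.
```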

Expand full comment

That's a perspective I probably haven't thought to apply before. But in my opinion, it comes down to authority. To wit, when does someone have the authority to tell you what to do or not do? The authority to restrict information from you is merely a derivative of that question.

The anarchist philosopher Robert Paul Wolff argued that on no grounds can someone have authority over another person, because it conflicts with personal autonomy. He had some persuasive arguments too.

But it is clearly not how most people see it. For example, responsible parents censor, not merely moderate, certain kinds of content from their children all the time. I would argue they have the authority to do so.

Back in the day, the Catholic Church used to keep an extensive list of banned books - it didn't work, since the fastest way to get people to read something is to ban it. But I see this as a parallel to the issues we face today: did they have the authority to do that? That's fuzzier.

Take the covid mandates and some of the terrible information about the pandemic provided by people with zero expertise. Do platforms have the authority to restrict people from seeing that?

It's more of a matrix: there's information you want to see that is good for you (that's easy). There's information you don't want to see that isn't good for you (that's moderation). But what about information you want to see that isn't good for you (fascist propaganda), and information you don't want to see that is actually good for you (a reasoned perspective from your political enemies)? Should platforms expose people to that information? More important, do they have the legitimate authority to do that, beyond "this is my platform and I make the rules"? It is an important question. Like all important questions, it has no easy answers.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

There are multiple equilibria that could arise from this solution. Whether 99% or 75% or 10% of people choose to see all-banned posts - if your social group is dominated by those choosing to see them, you'll feel compelled to join them. Whether that's friends, school peers or your business arena. It's less of a consensual choice than you imply. Many would prefer a hard ban that their peers can't co-opt them out of.

Expand full comment

I get the impression this post was written for a certain person - someone who has strong free-speech beliefs, and has recently come into possession of a giant moderation&censorship machine. It's quite an interesting proposal and I hope that person considers it - especially the potential for allowing 3rd parties to create moderation lists, like how spam is dealt with, which means the platform is responsible for a lot less ongoing effort of maintaining perfect politically-neutral censorship over worldwide discourse.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

Are you sure that the "we just want people to have a way of protecting themselves from harassment" position actually has a sizeable amount of support, compared to the good ol' anti-free-flow-of-information position?

When you find yourself struggling to understand the motivations behind someone's actions, take a look at the consequences - especially if they continue using the same strategy, even though it doesn't seem to get them closer to their stated goals.

If there is a broad consensus that people should have more control over what kind of information they want to expose themselves to, if we are in agreement that it is each person's personal right and responsibility to make that choice, and if we're willing to pay the social price for the irresponsible uses that informational self-determination allows for (I'm not talking about the examples you gave toward the end of your post - those, for me, fall squarely into the category of "clearly definable criminal actions that have to be illegal for everyone for the rule of law to function", and if you have a problem with that you would have to argue about your misgivings with the law), then the system you propose should be very desirable. But.

Don't a lot of people simply believe 1) that this is *not* a price worth paying, and 2) that, to begin with, there's no reason to give people "informational self-determination", insofar as experts and trustworthy institutions can do a much better job of determining by what information we should let ourselves be determined?

I agree with your proposed solution because I think of self-determination as a fundamental value for humans. For me, the argument really is as simple as that - it doesn't matter if people abuse their freedoms and the consequences are undesirable, because a world where people don't get to make their own choices for their own reasons is not a world I value, regardless of how much it would feel like a utopia if you were living in it.

But if you look at the fight against fake news, against untrustworthy sources, at no-platformings and cancellings, at all the discussions that people are deathly afraid of having take place, isn't it at least plausible that there's a much more straightforwardly anti-liberal mentality underlying all of this? That the people pushing for censorship aren't simply confused about how to get what they really want (for you and me to protect ourselves from information we think we want to be protected from), that, basically, they just don't hold personal self-determination as a fundamental value? Lots of people in big tech, journalism, academia, etc. believe that, if they can control the flow of information competently, the world will be a better place for it, so that's what they're trying to do.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

I think there's a fourth "argument for censorship": the people owning the platform might have their own personal boundaries to what they're willing to host. Maybe they're Jewish and don't want anti-semitic content on their servers even if in principle they didn't think the *government* ought to ban it from *all* social media platforms. (This is of course just the "bakers don't want to bake a gay wedding cake" issue at scale.)

Users are also not the only clients social media platforms have to satisfy in market terms; they're also bound to the wishes of advertisers, who have their own preferences about what they want their ads run alongside with. e.g. Tumblr doesn't have any moral opposition to porn, but its advertising partners didn't want their ads to run next to porn, so…

Expand full comment

Just to state the obvious: The moderation on ACX is a very, very good thing. Even I abstain from aggressive commenting. Here. Mostly. That's good. Compare comment section gone crazy on: mru :D

Expand full comment

Forgive my poor English.

I think Scott here implicitly pointed to an uncomfortable truth: traditionally, the default policy and political thought leaned heavily toward "censorship" as you listed in 1-3. The point was that we *need* these kinds of restrictions to have a well-ordered and functioning society.

That position was seen as self-evident, and I think it was not entirely wrong. (Actually it might be mostly true for any given pre-liberal-democracy polity.) The question was not why censorship might be necessary but why it was even possible to get rid of it without fostering disasters.

It was liberals who tried to argue that, beyond the rights people were naturally entitled to, we didn't need these restrictions, or at least not to the extent most authorities believed, to have a tolerable and functioning social and political order.

THAT WAS A NOVEL AND UNORTHODOX idea back then, and behind it was a whole set of new understandings of how social and political institutions work and organize themselves. If we look carefully into the classical texts written by the proponents of freedom of speech, what they were arguing was essentially that the PUBLIC interest was best served by reducing authorities' restrictive speech policies, and that everything bad that came with this was a rather small price to pay for society as a whole.

That is to say, the harassment problem that bothers people as private citizens was not at center stage in the debate historically, and it is not today either. The hyper-individualist (or rather narcissistic) vibe of modern America makes it all about particular persons, while in truth it is not.

And modern liberal democracy (with its brand of lightly censored publication/communication ecology) is still a very young thing compared to traditional modes of governance. And reading the 1st Amendment as something that grants unrestricted free speech came very late even by the standards of liberal democracy.

Historically, even liberal societies were *not* running on the idea of unbounded speech that we now hold so dear.

The "good old days" many lament as being destroyed by social media are anything but old. They worked by establishing de facto "gatekeepers" to contain the spillovers of theoretically unlimited speech without the need for FORMAL regulation. What made this possible was the simple fact that the cost of producing AND distributing ideas was high enough that elites with access to capital, higher education, social status, and professional reputation had a much greater say in shaping public opinion than ordinary people did. Laypersons could only channel their influence through the elite pipelines. Raw feelings and preferences were refined and moderated along the way.

To put it another way: we used to have an oligopolistic but competitive market of ideas while marketing it as a free market. The liberal arguments against a government-regulated ecology of speech were valid largely because the natural course of things did not run to its logical conclusion. And it served most people well.

Until the drastic reduction of those costs hugely disrupted the status quo. Suddenly the "free market of ideas" became a reality, where it used to be a helpful fiction. The authorities, the old guard, and a good portion of ordinary people are not happy with that, with good reason.

It is not the case that social media present some *new* challenge to an established "traditional" liberal order; rather, an old debate has been revived into public consciousness under a new technological environment.

Expand full comment

You seem to ignore the rather ubiquitous and obvious counter-argument to this:

> But my point is: nobody is debating these arguments now, because they don’t have to. Proponents of censorship have decided it’s easier to conflate censorship and moderation, and then argue for moderation.

Which is that many, many people conflate in the other direction, i.e., claim they're being "censored" when all that has happened is that comments which are offensive, dangerous, in violation of terms of service, or damaging to the product image/quality have been moderated.

Expand full comment

How are there no comments here about the fediverse/mastodon? The way picking an instance works and how instances federate/mute/block each other is, like, exactly this.

Expand full comment

Posts like these make me perversely hope Substack keeps being banned in China. Although I suppose under the described system, I could just toggle the "see anti-CCP posts" button for the Classic Scott Experience. Assuming they'd still get written in the first place...

I think one of the biggest values of theoretically being able to see all content all the time is easier quantification. Like, part of the problem with curating a fake public square is that participants get a hugely distorted idea of what actual median people think. Since those are the voices not participating at all, or banned if they do. Lotsa people *want* that sort of echo chamber, obviously. But when inferences about the territory start getting made from intentionally-misleading maps of that sort, well, then you get Twitter. Not every individual has to see all the shitty stuff...if only researchers bother to dive into that abyss, it's still better than having no idea at all what gets through post survivorship bias.

Expand full comment

One scenario that is not mentioned is sharing data about third parties - doxxing, stalking, revenge porn. Not seeing such content probably won't be enough for many people, and in many countries sharing such information could create liability for platforms.

But looking at the current landscape makes me wonder about one more thing - how much of the censorship-vs-moderation issue is caused by the scale of current platforms. The amount of spam on certain platforms would make "don't show" features unusable, and the sheer amount of content forces a push toward automated moderation/censorship, which causes its own set of issues.

Expand full comment

Chinese has a built-in workaround, at least for now. The word for “The West” and the President’s proper name are homophones. Criticizing The West for its corruption and state capitalism is a free action. Obviously that changes with the next emperor..er…President…

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

Before I talk about the territory, I'd like to challenge your map. I don't think that that's what those two words mean, and I think that inaccurately using a word with strong negative affect for the thing you want to oppose is cheating.

I think that the distinction between moderation and censorship in standard English usage is that if you're one private actor among many saying "you can't say that on my platform" you're doing moderation, if you're a state saying "you can't say that at all" you're doing censorship, and if you're a private actor so large that your platform is basically a monopoly it's a messy grey area (Google is approaching this point; Twitter definitely isn't).

As evidence that you know this, let me cite the fact that the things you've repeatedly referred to as "moderation" policies here and on SSC fit your definition of censorship, not moderation.

With that out of the way I'd also like to highlight a really important advantage of the kind of moderation you call censorship over the thing no-one actually does that you think would qualify as moderation: community curation. If your goal is to provide a platform for social media, where people - including strangers - can interact, and you want those interactions to be as likely as possible to be positive, then cracking down on posts and posters likely to provoke negative interactions is a really important tool. If you're a publisher (in the old-fashioned literal publisher-of-books/newspapers/magazines sense, not in the context of the platform-vs-publisher brouhaha) whose function is purely to let people put information out there, not to facilitate two-way communication, that may not matter, but for social media sites actively driving away people who have negative interactions with strangers is a strong positive good, because they're going to change and toxify the culture of your site /even if you let people choose to avoid seeing them/.

Conversely, I think that the three arguments for the kind of moderation you describe as censorship are less strong than they might be if advanced for genuine censorship, because of the distinction between "it doesn't happen here" and "it doesn't happen". But in practice, while saying "not here" doesn't stop those kinds of speech, empirically it probably does reduce them, so I agree those arguments aren't null and void, and the fact that there are overwhelmingly strong arguments /against/ genuine censorship that don't apply to TKOMYDAC often makes them a good halfway house.

Let me refer you to your own previously-expressed admiration for "archipelago" type community formation. For that, you definitely need TKOMYDAC, not the new kind you're proposing.

Expand full comment

You didn’t address the actual behind the scenes rationale for censorship and propaganda uncovered in the Intercept “Truth Cops” piece:

“Jen Easterly, Biden’s appointed director of CISA, swiftly made it clear that she would continue to shift resources in the agency to combat the spread of dangerous forms of information on social media. “One could argue we’re in the business of critical infrastructure, and the most critical infrastructure is our cognitive infrastructure, so building that resilience to misinformation and disinformation, I think, is incredibly important,” said Easterly, speaking at a conference in November 2021.”

According to this minister of truth no one has ever heard of, Uncle Sam owns our thoughts and feelings and has a duty to maintain that “cognitive infrastructure.” Our minds are apparently a public utility. So it necessarily follows that DHS gets to decide what does and does not enter them. Under the concept of “cognitive infrastructure,” there can be no objection to censorship. There can be only submission.

Expand full comment

I'm not sure that censorship and moderation are as neatly separated.

What about "curation"?

What about inadvertent "desensitization" and unwitting addiction? How many people have become addicted to cigarettes because of seemingly innocuous past advertisements, neither moderated nor censored, but curated in a particular way: content A next to content B in magazine C?

Censorship, which does no damage to solidarity, probably does not bother me as much as it should. Libertarianism is a dead end as far as I'm concerned.

Expand full comment

I think the actual difficulty is people using your social network to cause unpleasant things to happen IRL.

The line here seems hard to draw; it depends on how much of a following the actors have, how "will no one rid me of this turbulent priest" we're getting, and how high-profile/capable of dealing with things the targets are...

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

Sounds like these definitions mean:

Moderation is for the benefit of the user.

Censorship is for the benefit of the platform (but claimed to be for the user or their safety).

Imagine the wretched results if a platform had various categories of ban - too right wing, too left wing, sexual content, religious content, whatever you like. Then users could shape their news into that which they would want to read (and you know we already do that to ourselves by reading only those sites that tell us things we agree with!) We could all end up reading the news from what seemed like different planets, with total social disconnect. That's nightmare-fuel.

Expand full comment

Came here to say: this post feels different, in a way I didn’t enjoy. Like: “oh, there is an opinion stated as a fact” (when you said “that’s not true at all”; note I’m not making a claim about whether or not I agree with you).

I don’t like imagining this is the kind of article that triggers either boos or rallying chants. I prefer ACX less political, more observational.

Expand full comment

Very wise and well stated. I hope Elon or his people are listening.

Expand full comment

Consider /r/AskHistorians, which has notoriously strict moderation criteria. In many cases readers wouldn’t mind reading some of the comments removed for not quite being up to standards. Still, nobody calls this editorial policy “censorship”.

Expand full comment

Thinking about this a bit more, isn't this just advocating for the creation of a parallel, unmoderated social network for every existing social network?

Expand full comment

Did you really just say that CP isn’t a compelling thing to censor? The second order effects are intolerable!

Expand full comment

This is an interesting and new-to-me line of thought, and I think the idea of non-censorship or, at least, minimum-censorship moderation deserves exploration.

At first glance it seems like a Pareto improvement on the current equilibrium. But when I think more about it I'm less sure.

In one sense, non-censorship moderation gives too much freedom of speech. Every platform that uses it will have to accept that it provides tools for bad actors to do bad things. This is solvable by using minimal censorship instead. But that just passes the buck to deciding what has to be censored. Where is this minimal level? What exactly is the criterion we use to figure out that child pornography and bomb-making are okay to censor?

On the other hand, this system gives less freedom of speech, or at least makes it less meaningful. The whole point of why free speech is important is that winning in the marketplace of ideas is correlated with truth. We accept the possibility of false information spreading because we can't be sure that our own understanding of what is true and what is false is perfect. But with this system we can get multiple poorly connected marketplaces of ideas, wins in which would be less meaningful. If implemented poorly, the system will make engaging with opposing ideas even harder, creating an even more polarised society while also empowering communities based around false beliefs.

Expand full comment

The conflation trend also leads to the definition of moderation always creeping towards censorship: preference falsification happens, the dissenters can't vent so they co-opt more ways of covertly talking about the topic which then leads to the emergence of 'dog whistles' which of course means ever more moderation covering an even broader spectrum of terms.

Moderation is about having online speech conform to a minimum, clear standard: 'don't call for violence', 'don't repeatedly spam harassment', 'don't reveal others' personal data' (this is one kind of ban that no opt-in moderation could ever solve), et cetera. Simple, broad rules that can't be interpreted in different ways to skew an ongoing public discussion. Censorship is easily enacted by making similar standards blurred enough to allow selective application or having them be defined by 'impact'.

Expand full comment

I once went on Twitter and saw that someone had posted a picture of feces in response to a reasonable argument by a female journalist (Megan McArdle). And then I went off Twitter...something about the combination of insulting and disgusting and sexist really repelled me.

On the other hand, I have no problem with sexually explicit speech, "bad" words, or fringe opinions. I'd love something like the self-guided moderation rules proposed here. Is this a nod to Elon's "choose the experience you want to have" idea?

Expand full comment

Idk, some of the same problems still exist here. Think about the Hunter Biden story and some of the COVID stuff, which were really the most egregious cases of “moderation” becoming censorship. The whole focus of the outrage is that certain people were not exposed to these stories which could have been important in shifting public opinion one way or another. People *should* be seeing this stuff. If 90% of people have the moderation filter on by default then it doesn’t change the fact of public opinion being shaped by the whims of whoever’s making the moderation decisions. Even if they turn the filter off so they see the stories, they are still “tainted” by the hand of the mods. I don’t think there was any person who was actually unaware of the Hunter Biden laptop because of the ban. In fact it probably gave it a huge boost in exposure. But the fact that it was taken down caused anyone whose only heuristic is “trust the establishment” to automatically dismiss it. I feel like the only way this works is if you have the most honest, principled, and intelligent people making the decisions. However I’m skeptical that ANYONE at all is honest, principled, and smart enough for that impossible job.

Expand full comment

"Disinformation", wrongthink you mean.

Fair enough about harassment, but most of the time moderation is a way to shut down the side of the debate that they disagree with. Reddit is the embodiment of this.

Expand full comment

Wouldn't Bad Actors just start calling things that aren't spam spam? A lot of this happened in the first place because there was a reservoir population of Extremely Online obsessives who WOULD make a stink about everything they didn't like, and for some reason the PR departments thought these people represented "the Public". (Also, later, they happened to be using the same language as all the new hires).

For example, if there's a filter for Conspiracy Theories, doesn't everything not on CNN (Or whatever) just become a Conspiracy Theory? The Theory that there ISN'T a conspiracy among the police to kill black people for fun will become a Conspiracy Theory. Democratic Politicians That I Like Are Capable of Telling Lies? Conspiracy theory.

________________________________________________

The thing I like about the idea the most is that people can SEE the toggles on their end, and sticking yourself in a kiddie pool is a deliberate choice rather than a cattle chute.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

This may have been brought up in one of the 458 comments I skimmed but overlooked. There was an earlier comment policy along the lines of:

"be either true and necessary, true and kind, or kind and necessary"

Please put this or a version of it right above the comment box - or at least mine!

Expand full comment

The left believe that if the right are allowed to talk to each other without censorship they will whip each other into a genocidal fascist frenzy. So they must use censorship and even violence to maintain a liberal democracy.

Expand full comment

I completely agree. I want the "algorithmic news feeds" to work like this as well. I want to tell FB not to show me ANY political content... I really just want to see the best of humanity, not the polarized divisions of humanity. Taking this kind of content labeling one step farther and slightly off topic... I also wish Netflix would give me these toggles. If I'm okay with violence but don't want to see anything more than kissing, I should be able to watch GoT (condensed version) without nudity. This would be excellent for kids. Turn a rated R movie to PG by skipping content programmatically using ML. This is possible today, and I wish it was actualized.

Expand full comment

The discussion should be had outside 'business' activity. Moderation is censorship in every way (graded); the algorithm will moderate (choose) and will, therefore, censor. This is true of most human discourse, except that 'moderation' is usually exerted morally through a societal network of consent/exclusion - and often implies only self-censorship. The problem with 'business moderation' is the business part; business entities have no interest in truth; they are interested in business; the $ comes from conflict, not peace. That is the reason Twitter will throw opposite views into your TL, deliberately, i.e. to 'cause engagement'. The discussion to be had is: how can a business act in order to comply with the constitutional bearings of freedom of speech? This is not, as we recall, a problem with the internet (the medium) but a problem that arose from social networks themselves. The static field of speech (the internet) is different from the dynamic field of social networks. Regulation is, therefore, in order. Must be.

Expand full comment

"If the sender wants to send a message and the receiver wants to receive it, but some third party bans the exchange of information, that’s censorship." I think this is a useful way to define censorship. Under this definition, Twitter have never censored anything, because they have never banned anybody from sending information to anybody else. (They have blocked people from sending information *over Twitter*, but that's no different from me refusing to let random people upload their opinions on my personal website.)

Similarly, if the government makes it illegal to publish your book, that's censorship; if a particular publisher refuses to publish your book because they think the opinions you express in it are dumb and publishing it would reflect badly on them, that's fine.

To bring it back to the humorous image of Xi Jinping: if Xi Jinping makes it illegal for two people to have sex without his permission, that's bad. If two people ask Xi Jinping to pay for a hotel room for them to have sex in, and he says no, that's fine.

Ultimately, if you're using the word "censorship" to refer to Twitter refusing to pay to store your opinions on its servers, then what word do you have left to refer to real censorship, i.e. using the threat of violence or legal consequences to prevent somebody expressing an opinion?

Expand full comment

I kind of like the framework by the DHS.

https://theintercept.com/2022/10/31/social-media-disinformation-dhs/

Spreading misinformation: I mistakenly believe Joe Biden is a lizard person and tell others.

Spreading disinformation: I know full well that Joe Biden is not a lizard person and tell others to spread panic and fear.

Spreading malinformation: Joe Biden is in fact a lizard person, but me spreading that fact is, like... a totally irrelevant detail to the discussion at hand. And me pointing out that fact is in bad faith and against US strategic interests :)

The last category malinformation (factual information shared, typically out of context, with harmful intent) is especially neat. I as the censor get to determine what the context of a conversation is and ascribe intent to others.

Expand full comment

Only power that is hidden is power that endures. The real proponents of censorship aren't trying to argue with you. They want to own and control your cognitive infrastructure.

Expand full comment

I ran a popular website in the early 2010s that had a reddit-like community of people with a lot of time on their hands (mostly retirees and stay-at-home moms). Many participants seemed inexorably drawn toward in-fighting and arguing about hot-button topics, so much so that our feed showing "most recent posts" was often deluged by a delirious anger about something or another. To combat this, we created a forum called "Drama" that you had to opt into in order to view. Within a few months, most every community member had opted into it, because who _doesn't_ push the big red button that says "show me something naughty"? Thus, the vibe was "caustic hellscape" and the moderators had a much nastier set of messages to contend with than previously, because now everybody was operating in a zone *designated for* dramatic discussions.

It was not an experience that served anyone particularly well in my opinion, but it was my n=1 data point in trying to implement a moderation platform similar to the ideal described in this post.

Expand full comment

This reminds me of the perfect scissor-nature of the SomethingAwful.com forums. Back in the golden years, two different people with different priors could see two totally different things.

One would see a paradise, a well-tended garden of discourse and friendly good humor, and be cheered to see people being temporarily banned simply for saying idiotic things, or being jerks, or being lazy with spelling and capitalization.

The other would see a hellhole where all kinds of upsetting fringe ideas were being discussed seriously, and where people were allowed to be huge trolls and get away with it simply because they were funny. And then people trying to make a righteous, appropriately angry stand against atrocious ideas would be banned for their tone!

The Internet (including SomethingAwful) has gradually trended more and more in the direction of serving the second person and leaving the first person frustrated. In the olden days, most communities would boot you not for what you said, but for how you said it. Now, you're much more likely to have free rein in terms of tone and vitriol, but be moderated on the basis of the content of your communications. And now everybody agrees that everything is definitely worse, but somehow, some people think the solution is more content policing.

Expand full comment

The MVP really isn't viable, economically, because some things still are illegal and must be taken down, and that means you have to go over all the bad stuff essentially twice: both to decide whether to moderate it, and to decide whether to censor it. Conflating moderation and censorship means you only go over things once, you can stay well clear of the line of "actually illegal" instead of skirting it, and you can paper over a lot of the differences between the legal jurisdictions you operate in; you lose all these advantages with the "pure moderation MVP".

Expand full comment

When I signed up for Twitter, I would see posts from people I followed in reverse chronological order. They wanted me to see those posts; I wanted to see them (which is why I followed them). Now, Twitter decides which tweets by people I follow appear in my timeline. Is that censorship? Seems so according to the "If the sender wants to send a message and the receiver wants to receive it, but some third party bans the exchange of information" definition of censorship. Are all algorithmic timelines therefore censorship?

Expand full comment

What I'd like to see is a way to filter based on how intelligent the argument is. I'd delightfully read, e.g., a good argument written by a white supremacist just to get an idea of where actual thought is going. However, they usually boil down to lots of ad hominem arguments, motive-attribution and obscenity.

Expand full comment

Love it when an article not only illuminates a subject which I have been trying to get a handle on, but also gives me the tools to clearly express what were previously only a scramble of thoughts. Thank you.

Expand full comment

This is somewhat off topic but at least tangentially related.

So do you think demonstrating that you have no qualms about inciting a riot based on a lie that you knew was a lie that was dismissed by scores of court cases and your personal lawyers plus your own selected Attorney General (Barr: “I told him it was bullshit.”) should result in a ban?

Being a Free Speech absolutist has a sheen of nobility, but really, you need to leave the abstract and have a look at recent history and make adjustments to a great idea that could not have anticipated Twitter.

If millions of people are willing to accept as gospel whatever self-serving fantasy pops into one weird dude's head, you need to think it through.

Even Yuval Harari has suggested that there is at least a small chance that 2024 could be the last free election in the United States. (Last Friday on Bill Maher)

I had a link to the Financial Times but it was paywalled.

> Harari winced, and solemnly suggested that American democracy is now so troubled that “the next presidential election could be the last democratic election in US history”. He added: “It is not a high chance, but it could be the case.”

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

I would add a fourth possible - and probably most reasonable - point for censorship (under the definitions in this post). Let's say A wants to incite violence against N, and B, C and D want to read it. Now N has a really strong interest in this not being communicated, and it has little do with the discomfort of *reading* it. Even *laws* tend to crack down on incitement to crime, threats of violence, fraud, conspiracy and other acts that *are* speech. You don't automatically have to tell the platform to crack down on such illegal speech (you could just let the criminal system handle it), but it's also not automatically unreasonable to tell the platform to have it disallowed (after all, prevention of crime is preferable, and it seems unlikely that the justice system could handle any but the worst cases).

It's probably reasonable to have censorship against criminal activities, and moderation for everything else. Of course, being a European I know all too well that the laws can go a bit overboard - we do have weaker freedom of speech than the U.S., especially in the area of "hate speech".

Expand full comment

I don't really buy that this addresses the avoid-harassment side's worries. Is it really better to know that thousands of people are talking nasty shit about you, and you just don't have to see it? Like, if you're not on Twitter but you know you're going viral in a negative way, I think a lot of people are not going to feel very comforted, and might not even be able to resist "reading the comments section".

I don't think our brains are good at coping with large numbers of people telling us we suck, and we should strive for an information ecosystem that just reduces the volume of the yellers (I have no good ideas on how to do this without sacrificing other good things about virality/free speech in a way resistant to political manipulation).

Expand full comment

What is it called when the government directs a company's "moderators" to remove information which later turns out to be true?

Expand full comment

The categories were made for man, not man for the categories. That guy who sits there during a debate and prevents you from talking when it isn't your turn, and stops you from drifting too far off topic, he's a censor, right?

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

A big distinction missing here is that whether something counts as censorship depends on scale. You (i.e. ACX) can remove whatever comments/ban whoever you like and it's not censorship, same thing with any small publisher or non-top 3 social media. It's basically only when Facebook or Twitter or TikTok bans someone, or more so when there's a coordinated banning across platforms, that anything is plausibly getting censored (even then, it's not like people are unable to figure out what Trump has to say). So you could kind of see censorship as a feature of industry concentration and lack of competition as much as anything else (government censorship being where there is effectively only 1 information source in this framing).

Expand full comment

On the Internet we're often dealing with a relationship between *three* parties: the author, the audience, *and the host.* The Xi meme encourages us to overlook that, since its third party (Xi) has no need to be involved.

Surely though, where a private host is involved, their consent is morally relevant to some degree as well. I am free to converse with my spouse about anything we like in our own home, but our neighbor doesn't have to host those conversations if he doesn't feel like it. Vast networks like Facebook, Twitter, etc are still private for now, so I think their consent about what sort of business they want to be, and therefore what sort of content they want to host, does indeed matter to some degree.

I'm open to arguments that they're so big and all-encompassing that the state should step in, declare them a public service, and give them a variety of guarantees and protections in exchange for guaranteeing free speech and other civil liberties. I'm also open to arguments that they're so big and all-encompassing that they should just virtuously guarantee free speech of their own will.

Without either of those though, I don't see how we can get to a situation where the only consent that matters is the speaker and listener.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

I think Musk was responding to a similar suggestion recently, unless he got the idea straight from ACX? I guess it's worth a shot, though I'm not certain it'll solve the deeper problem of receding into echo chambers.

Expand full comment

While I overall think that having many user-configurable filters is better than censorship, I also think this might make filter bubbles even worse.

If there is one filter setting which filters out holocaust denial, and another which filters out any claims that the holocaust did in fact happen (and similar filters for Obama being a Muslim, QAnon, Trump's voting fraud claims, Anti-Vaxxers, Creationists, Flat-Earthers), the Consensus Reality will shrink to basically nothing. This might lead to some bad outcomes down the road when tribes with disjoint realities clash with each other.

Expand full comment

I understand that archiveofourown.org is a decent implementation of this principle. Works are extensively tagged and readers are trusted to decide what tags they want to opt out of/into seeing.

It's text-only, which is cheap enough to host that it can run on donations as a nonprofit, by and for people who were fed up with their writing getting censored.

The moral, if this post must have one, is that the distinction between moderation and censorship is easy; not getting eaten by moloch is hard.

Expand full comment

I held an almost absolutist anti-censorship position until I read your essay "The Toxoplasma of Rage" and short story "Sort By Controversial". The kinds of multipolar traps you describe there are real and very scary, while all the ad-hoc measures I can think of that prevent them from being maximally bad involve some form of censorship.

Expand full comment

Here’s a fascinating twitter thread about Moderation from the point of view of a previous Reddit CEO: https://twitter.com/yishan/status/1586955288061452289?s=20&t=sauXK7_fWVo1Wd2OMTuIGQ

Expand full comment

I'm assuming I'm not the only one to notice this, but this only works if you're comfortable hosting child pornography, revenge porn, etc.

I suspect there are at least some storage costs to holding large quantities of spam, and possibly even some room to DDoS a site that genuinely refused to delete anything.

You'd also be de-facto censoring what people can read without signing up - you don't want search engines indexing your spam and porn. Which means you're a half-inch from having some nice detailed logs on who turns off their filters, and that's pretty useful to a draconian dictatorship too. Not as nice as full censorship, but still enough to inspire some fear.

All that said, this is basically what Slashdot uses: normal moderation can merely downvote something to "hidden by default", but any user can override and browse those. Only spam and illegal content would actually get deleted.
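
A toy sketch of that "hide by default, let the reader override" mechanic (the scores, thresholds, and field names here are invented for illustration, not Slashdot's actual system), with a hard floor standing in for the genuinely deleted spam/illegal content:

```python
# Hypothetical two-line system: a removal floor (censorship) plus a
# per-reader visibility threshold (moderation).
REMOVE_FLOOR = -10          # below this, content is actually deleted
DEFAULT_THRESHOLD = 0       # hidden by default below this score

posts = [
    {"text": "Thoughtful comment", "score": 3},
    {"text": "Low-effort flame", "score": -4},
    {"text": "Obvious spam", "score": -12},
]

def visible(posts, threshold=DEFAULT_THRESHOLD):
    """Moderation only sets scores; each reader picks their own threshold."""
    kept = [p for p in posts if p["score"] > REMOVE_FLOOR]   # the censorship line
    return [p for p in kept if p["score"] >= threshold]      # the moderation line

print([p["text"] for p in visible(posts)])                # default reader
print([p["text"] for p in visible(posts, threshold=-9)])  # "show me everything" reader
```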

Expand full comment

This still wouldn't work. Anyone who wants to could turn their filters off with the click of a button, and then be exposed to a deluge of (Nazism/Communism/pornography/conspiracy theories/harassment/etc.). Anyone who wants to trash the site would make a presentation about how AWFUL it looks like when the filters are turned off, and the ensuing lawsuits/advertiser boycott/normie exodus would destroy the moderation-without-censorship social network. You actually already wrote about this: the relevant dynamics are described in https://slatestarcodex.com/2019/02/22/rip-culture-war-thread/.

These dynamics are why a bunch of high-profile mainstream websites turned off their comment sections, why Reddit banned a bunch of communities that were doing their own thing separate from everyone else (to be fair, all of those subreddits were terrible), and why "moderation", as understood on the internet, always means (or at least involves) censorship.

Expand full comment

I’ve often thought that there should be a cost to and a limit on downvoting, on places like Reddit. You get 5 downvotes a day. Use them wisely. If you have used them all but you really want to use another one, you get to do so by taking back one of your previous downvotes from the list (see the sketch below).
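
A toy sketch of that mechanic (the budget size and data structures are made up for illustration):

```python
# Hypothetical daily downvote budget: a sixth downvote is only possible
# by retracting one of today's earlier downvotes.
DAILY_BUDGET = 5

class Downvoter:
    def __init__(self):
        self.todays_downvotes = []   # post ids downvoted today

    def downvote(self, post_id, retract=None):
        """Spend a downvote; once the budget is used up, an earlier
        downvote must be named and retracted before the new one applies."""
        if len(self.todays_downvotes) >= DAILY_BUDGET:
            if retract not in self.todays_downvotes:
                raise ValueError("budget spent: pick an earlier downvote to retract")
            self.todays_downvotes.remove(retract)
        self.todays_downvotes.append(post_id)

u = Downvoter()
for pid in range(5):
    u.downvote(pid)
u.downvote(99, retract=2)     # sixth downvote only by taking one back
print(u.todays_downvotes)     # [0, 1, 3, 4, 99]
```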

Expand full comment

typo:

and then overthrow your your society

"your" is duplicated

Expand full comment

I think this take does not properly take into account the legitimate business interests of platforms. I find it reasonable for some platforms to refuse to host certain content if it thinks that it attracts people that are harmful to its userbase. I think there are two mechanisms:

The mere possibility of switching the "harmful" comments on reduces the experience of a normie user. If a young woman posts to Instagram and some guy posts a long comment about pride being sinful etc., she will not like this, even if the comment is hidden, since she'll fear that other people will peek behind the curtain and see her "slandered". Even if she's the only one who can see it, she'll dread the possibility of such comments existing on her photos.

Secondly, people do not want to share spaces with very different people. If a platform hosts literal Nazis (well behind moderation curtains), I have to suspect that everybody whom I engage with even in a normal context could be a literal Nazi. If I know that, due to tough moderation aka censorship, these people are not likely to stay on the platform, I can assume that strangers are not Nazis.

Expand full comment

There's already a Scuttlebutt protocol https://scuttlebutt.nz/ and it even has clients implemented for various platforms. It provides a decentralized, end-to-end encrypted social network, and the servers only play the role of helping those who don't have a public IP meet, or of caching the (encrypted) messages.

It's a really well-designed protocol and I wish it were more popular.

I think this is the largest obstacle for new social networks: social networks exhibit network effects, so it is difficult to start one.

Even more so if it is decentralized and not generating revenue, so there's nobody to promote it.

I wonder if the rationality community would be a good seed for it.

Expand full comment

Giving people more fine-grained control over content was what we tried with Google+, and it was a colossal failure. The population of people who want to do this is 1) very small and 2) do not post regularly on social media. Huge success with Linux enthusiasts, though.

Expand full comment
Nov 4, 2022·edited Nov 4, 2022

I've noticed that my opinions about content moderation now are way different from what they were before I saw how the sausage is made at Quora. I would expect that most people's opinions would also change were they to do so. There's a lot more I could say, but I don't think it would mean the same thing to someone who hasn't had that experience, so I won't (don't want to argue with a blind person about the color of my shirt), other than to say that moderation too cheap to meter would be fantastic, because at the moment it's shockingly expensive at scale.

Expand full comment

The fundamental problem with the kind of "personalized moderation" that you propose here is that, when taken to its logical conclusion, it further breaks the conversation into separate information silos in which different sides of an argument can each select a moderator such that they don't need to see or hear any opposing points of view. E.g. the red team can tune out anybody who claims that the 2020 election wasn't stolen, and the blue team can tune out anybody who promotes the "big lie". If one of the goals of free speech is to promote truth and honest discussion, then information silos are counterproductive.

Over the past few years, I personally have become much less enamored of the benefits of unrestricted "free speech", at least with respect to speech that is distributed via large corporate platforms. Back before social media and the internet, most information (especially news) was channelled through "gatekeepers" -- local newspapers, broadcast news, and the like, and those gatekeepers essentially set the rules of what information was true, or at least adhered to journalistic norms of truthiness, and was considered to be socially acceptable for public consumption.

Now that the gatekeepers have been eliminated, it is all too easy for society to splinter into competing factions, which don't even agree on a common reality. The situation has become so bad that Serious People are contemplating the end of the American republic, if the 2024 election turns into a constitutional crisis. Without gatekeepers, there is nothing to stop demagogues from using platforms to spread lies and disinformation, preying on ordinary people's irrational fears and biases in order to boost their own personal power and wealth. Sadly, the public at large has not proven to be very adept at separating fact from fiction.

I would never advocate for the kind of draconian restrictions on speech used in mainland China or Russia. However, the Chinese figured out something that I think western democracies don't properly appreciate: Controlling the flow of information is important for maintaining social stability. In the case of China, censorship is being used to crush dissent and prop up an autocratic regime, which is clearly very, very bad. However, it also works; social cohesion and patriotism in China are at levels that are unheard of in Western democracies.

I have come to the conclusion that the best way to save liberal democracy may be to re-establish gatekeepers that are committed to truth. The scientific community has standards of peer review, which are far from perfect, but are a potential model. The old-school journalism community also has (or had) such standards. In Britain, the BBC acts as a counterweight to the tabloids; it is funded by the government, but is a (mostly) independent voice that cannot be easily used for partisan political purposes. Depending on the whims of for-profit tech companies would not be my first choice, but IMO it is still better than no moderation at all.

Expand full comment

I agree that these sorts of measures are a good idea and refusal to provide them is a bad sign. I'm actually a big advocate for them. But I'm skeptical they're as powerful a tool to distinguish moderation from censorship as you're proposing.

Using Twitter is already voluntary, and in a sense, the Internet already implements what you want in the form of different websites. If Twitter had an unmoderated "4chan mode" you could switch into, most people would not do so, for the same reasons more people are on Twitter rather than 4chan in the first place, and so being banished to 4chan mode would remain an effective means of punishment/censorship.

*Most* debates about censorship online concern the abuse of voluntary systems of "moderation" to reduce the spread of ideas the moderators disagree with. There's almost no online government censorship in the West (and arguably, even what government censorship there is can be opted out of easily enough using tools like TOR, albeit at some personal risk). But people are still, quite understandably, frustrated when completely voluntary systems of moderation are abused and turned against them.

The most extreme example of this is Shinigami Eyes, a browser extension that adds its own layer of moderation to the Internet in the form of warning users (mostly trans people) if a given user is "transphobic" and should be avoided, by turning their name red. Not wanting to get called slurs is obviously reasonable, and this is the mildest, most voluntary thing imaginable. But in fact the people running it were *instantly* corrupted by this tiny shred of power and listed everyone they disagree with, including a ton of trans people in the Social Justice community they had subtle disagreements of doctrine with.

Exit rights are better than no rights at all, and a sufficiently free "market" in moderators might ultimately triumph. But it will always have to contend with good, popular teams of moderators either being taken over from the inside or corrupted by the temptation to censorship, and being hidden from any subset of users is always going to hurt.

Expand full comment

Orthodox?! On Ludicrous I had better not see any posts by anyone who isn't a sufficiently conservative rabbi to reject electricity as un-Talmudic!

Expand full comment
User was banned for this comment.
Expand full comment

I have always been sympathetic to the idea of having personalized filters of what you want to see, but I wouldn't use such loaded terms as "harassment" or "hate" for those filters. That's because everyone's threshold and idea of what constitutes harassment and hate are different. These are not objectively measurable entities. I am pretty sure lots of conservatives as well as independents feel harassed by the constant extreme name calling as well as outright bigotry by the left progressives, but I really doubt that if a filter for harassing posts is to be created they would actually filter out the progressive speech and only allow civil parliamentary discourse to be shown.

Expand full comment

So although I have not been a faithful and reliable reader, quickly pouncing on any new post, Scott Alexander has greatly influenced my life and way of thinking with his writing - essays like “I Can Tolerate Anything Except The Outgroup.” I feel that I have to say this, because I don’t really have a comment history here, and I have to be very critical of this essay. Hopefully I can provide a couple ‘tools of thought’ to help us all understand the world better, however.

The vast majority of censorship in both China and the US today is based on very extended, but widely acknowledged reasoning that aims to prevent violations of the non-aggression principle, or aims to prevent unjust and abusive treatment, very often all rolled into one.

If you want to go on Facebook and talk about how statistically, it is likely that Trump was defrauded of his rightful election win, you are doing so believing that Trump’s political rivals will treat the people he represents unjustly, and in fact, even if you are willing to lie to bolster Trump, you will usually honestly believe that his rivals are treating the people he represents unjustly.

Please do notice that at the very same time, those people banning claims of election fraud against Trump, are doing so to prevent what they view as election theft, which would be unjust, and to prevent the violent overthrow of the US government, which would violate the non-aggression principle.

Now you are probably not going to be okay with your critiques of fill-in-the-blank-politician being formally classified as being violent threats or insurrection attempts, and you are probably not going to be okay with musings on the limits of simple “inclusion” quotas in say, Cal Tech physics, to resolve inner city social problems, or transgender participation in women’s sports being formally classified as an abusive attempt to oppress people.

If any reader is considering disagreeing with me, then imagine what happens to your career (or the career of someone you know and sympathize with who must deal with the public) when someone can describe you as “having accumulated 48,000 posts advocating sexual oppression of women, and 19,000 posts advocating racism,” and will be able to quote-mine years of posts bearing this formal certification of villainy.

The problem is, if you are not okay with having your posts classified as abuse and incitement, with all the consequences that flow from that, then there is not in fact, any especial difference between “moderation” and “censorship.” The system proposed, with lots of additional levels and degrees of condemnation, can simply be expected to alienate a lot more people, and it would create a great many more cases where the person posting would be outraged by the moderator.

Expand full comment

One thing this taxonomy neglects is that harassment is often communication between people doing the harassing, and isn't something the subject of it needs to see at all for it to be harmful. The people libsofTikTok highlights probably don't see the Twitter comments about them, or at the very least, those are immaterial relative to the death threats in phone calls and emails those comments inspire.

Expand full comment

In federated social media (like Mastodon) there is an interesting way to choose your own moderation by choosing your server.
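For anyone curious what that looks like in practice: many Mastodon servers publish the list of remote domains they block or silence, so you can compare servers' moderation policies before picking one. Below is a minimal sketch, assuming the servers in question have opted to expose their block lists via the public /api/v1/instance/domain_blocks endpoint (an admin setting; otherwise the request fails); the server names are placeholders.

```python
# Compare the published domain-block lists of two Mastodon servers.
# Assumes both servers have made their block lists publicly visible;
# otherwise this endpoint returns an error.
import requests


def domain_blocks(server: str) -> set[str]:
    """Fetch the set of domains a Mastodon server blocks or silences."""
    resp = requests.get(f"https://{server}/api/v1/instance/domain_blocks", timeout=10)
    resp.raise_for_status()
    return {entry["domain"] for entry in resp.json()}


if __name__ == "__main__":
    # Placeholder names; substitute the servers you are choosing between.
    a, b = "mastodon.example", "other.example"
    blocks_a, blocks_b = domain_blocks(a), domain_blocks(b)
    print(f"{a} blocks {len(blocks_a)} domains; {b} blocks {len(blocks_b)}")
    print("Blocked only by", a, ":", sorted(blocks_a - blocks_b)[:10])
    print("Blocked only by", b, ":", sorted(blocks_b - blocks_a)[:10])
```

Picking the server whose list matches your preferences is, in effect, picking your moderation layer.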

Expand full comment

Something that's important to note in this conversation is that western governments whose constitutional commitment to free speech prevents them from engaging in overt, direct censorship are engaging in censorship by proxy: communicating to social media companies that regulatory legislation *they will really not like* may be incoming if they don't play ball and self-regulate. I believe there are already documented interactions between USGOV and social media companies about which ideas should be suppressed or removed (this is not a commentary on the actual worth of those ideas).

Expand full comment

"Or you could let users choose which fact-checking organization they trusted to flag content as 'disinformation'."

This is close to the correct long-term solution. But rather than just fact-checking, you need to unbundle the concepts of content propagation (i.e., something that the platform--Twitter, FB, TikTok, TruthSocial, whatever--provides as a basis for its business model, and the stream of content that its AI decides to push at a particular user) from moderation (i.e., something that the users, in cooperation with a service with which they contract, apply to the content stream). That doesn't necessarily prevent the content propagator from applying its own standards for its own business reasons, but it does provide a nice, simple way for the user to keep the slime at bay.

The biggest problems here are twofold:

1) The content propagator would prefer that the stream came through unvarnished, because its AI has figured out how to maximize attention, and any editing of that stream is therefore suboptimal. In addition, some important qualitative information about the user migrates downstream to the moderator, which impacts the quality of the ads targeted at the user.

2) Moderation is kind of an iffy business. It's extremely labor-intensive and the subscription fees that a user might be willing to pay are pretty limited. However, the moderator potentially has an ad revenue stream just like the content propagator.
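To make the unbundling described above concrete, here is a minimal sketch of the client-side half: the platform emits a raw content stream, the user's chosen moderation service labels each item, and the client decides what to hide or flag. All names here (FeedItem, ModerationProvider, strict_provider) are hypothetical illustrations, not any platform's actual API.

```python
# Sketch of decoupling content propagation from user-chosen moderation.
# Everything here is a stand-in, not a real service or platform API.
from dataclasses import dataclass
from typing import Callable, Iterable, Literal

Verdict = Literal["ok", "flag", "hide"]


@dataclass
class FeedItem:
    author: str
    text: str


# A moderation provider is just a function the user subscribes to;
# they could swap in a fact-checker, a church group, or nothing at all.
ModerationProvider = Callable[[FeedItem], Verdict]


def strict_provider(item: FeedItem) -> Verdict:
    """Toy provider: hides one category of content, flags another."""
    if "slur" in item.text.lower():
        return "hide"
    if "miracle cure" in item.text.lower():
        return "flag"
    return "ok"


def render_feed(stream: Iterable[FeedItem], provider: ModerationProvider) -> None:
    """Apply the user's chosen provider to the platform's raw stream."""
    for item in stream:
        verdict = provider(item)
        if verdict == "hide":
            continue  # the user opted never to see this
        prefix = "[flagged by your moderator] " if verdict == "flag" else ""
        print(f"{prefix}{item.author}: {item.text}")


# Usage: the platform supplies the stream; the user supplies the provider.
raw_stream = [
    FeedItem("alice", "Here is a miracle cure for everything!"),
    FeedItem("bob", "Nice weather today."),
]
render_feed(raw_stream, strict_provider)
```

The design point is that the provider is swappable per user, which is exactly what keeps the propagator's business interests separate from the user's moderation preferences.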

Expand full comment

I like to think about it in terms of product reviews, which are the most heavily censored politically neutral information you regularly encounter; they show it's possible to have bad moderation and total censorship simultaneously. You can have spam and unhelpful, irrelevant content, such as reviews from people who haven't even used the product (bad moderation), while all the reviews pointing out the product's flaws have been deleted (total censorship). In fact, you see this all the time!

Moderation and censorship are actually slightly orthogonal!

Expand full comment

Hmm, does this mean that if two of my friends want to have a fight in my personal [Discord server/Discord channel/Facebook comments/house], and I don't want them to do that because I find it annoying and I tell them to knock it off, this is censorship rather than moderation under your system, since the two of them both want to have the interaction and I'm telling them they can't do it in my space because I don't want to host it?

If not, at what point between [my house] and [Facebook] does this become censorship and bad?

Expand full comment

Most arguments about moderation forget that you have to have real, actual people moderating stuff, and that not only costs money but also puts a lot of stress on those people, who have to deal with it.

As such, flat-out banning bad actors actually has a ton of very positive effects on moderation. It turns out banning the worst offenders *greatly* decreases both the psychic load AND the workload of your moderation staff, which makes moderating a platform actually possible and frees moderators to pay closer attention to particular cases.

Moreover, by flat-out banning toxic people, you abort the networking that they would do that would bring in even MORE toxic, awful people that makes this even worse.

As such, it's actually often very valuable to just flat-out ban the worst actors, because there are major downstream effects on everything else you do because of the load they put on your staff.

Realistically speaking, your goal running a social media platform is to at least have it be economically sustainable, so having it actually be possible to moderate (and having your moderation staff not burn out after six months) is very important. In fact, it's probably one of the most important considerations.

Additionally, sculpting the tone of discussion and content you want on your platform is valuable as well. If you have a platform for scientific discussion, you don't want people who are being overly political or religious coming in and shrieking at everyone and derailing everything. Conversely, if your goal is to make a conservative Christian platform, you probably don't want a bunch of barely legal pornographers on it (or maybe you do, because they need to connect with their audience :V).

The other part of this is advertising. Free platforms pretty much have to be supported by advertising, and it turns out people shrieking "Kill the Jews" on your platform makes people not want to advertise on your platform unless they're skeezy cons, which drives off your users when their computers get infected with viruses from your ads. Keeping your advertisers happy is a big deal, and is a big argument in favor of banning toxic people rather than "moderating" them.

The only time this kind of "moderating" is useful is when two groups aren't toxic on their own but are toxic when mixed, like red and black ants.

Expand full comment

Imagine, for a moment, that social media existed in prior centuries. The heliocentric model of the universe could be suppressed and hidden; heck, the guy walking around the desert saying we should take care of poor people and love our enemies could have his reach suppressed because, hey, he’s just a fringe lunatic, right?

Don’t try to decide for others or influence others when it comes to information consumed. Complete epistemic openness is the only way to learn, grow, and better both yourself and humanity.

Expand full comment

Good post decoupling two overly-coupled ideas!

Expand full comment