That seems like a good idea, but it simplifies the issue way too much.
> but also allow all content,
Almost no platform survives without a bit of moderation. If you don't moderate, you'll get every kind of content, including spam, trolls, etc.
Add to that the fact that some people will push to boycott this kind of platform, and you'll have very few viable ways left to make it exist.
> but if someone posts something illegal, they take their share of legal responsibility for publishing it.
That's also kind of impossible. The law evolved to account for the impossibility of reviewing every piece of content, which is why the DMCA exists. Look at YouTube, which tries to filter its content much further than the law currently requires: it has HUGE teams of moderators, tens of thousands of people, plus some of the best neural networks, working on this, and yet it still fails so often.
The world isn't binary, we need a bit of both.
It could be an interesting experiment, though, to legally allow the kind of platform you suggest: some way to protect website owners from any legal retaliation. It would most probably look like 4chan, but it would still be interesting.
I believe the point the GP is making is that social media sites play both sides when it's in their best interest and neither when it isn't. So they moderate in the name of "public safety" when it suits them, but can't catch everything so they hide behind "impartial utility" to avoid responsibility.
They aren't walking a line, they are switching sides when it's in their best interest.
Oh I'll definitely agree I'm oversimplifying it, and there's room for improvement. But I think the core issue remains: if they are allowed to control discourse by choosing who can speak and who cannot, then they must be held responsible for who they allow to speak, and they need to be transparent about it.
And to your last point: as a decently frequent user of 4chan, that's pretty much what I like about it, though there absolutely IS moderation on most boards. People's minds go to /b/ and /pol/ when they think of 4chan, but there are several niche hobby communities on other boards which thrive in a (relatively) low-moderation, low-interference setting.
> if they are allowed to control discourse by choosing who can speak and who cannot, then they must be held responsible for who they allow to speak, and they need to be transparent about it.
Wouldn't this idea in essence eliminate the possibility of moderation of any kind? In order to do any moderation, every post by every user would have to be manually reviewed. This obviously doesn't scale, so in effect the law would eliminate the ability for website owners to moderate content on their own site. Also, what happens to sites like reddit and github where moderation is a feature offered by the product? How would people be able to find a moderated community if that's what they wanted? The idea seems totally unworkable.
Hey root_axis, I see you pop up pretty frequently on threads like this, where people really don't understand the consequences of forcing companies to host speech and removing their legal protections if they decide to moderate. Personally, I've seen the same dialog play out so many times that it's become exhausting to even reply.
I just want to thank you for continuing to fight for what's right.
Haha, probably a sign I'm spending too much time on HN, but it does surprise me to see this suggestion pop up so often, especially on this site, which happens to be a quintessential example of moderation as part of the product. Imagine what dang's workload would be like if he had to hand-review every one of these comments!
That’s not true though. What you mean is that the economics of moderation are not conducive to your desired outcome. YouTube, for example, could easily afford to review uploaded content before publishing it. They just wouldn’t make as much profit as they would like.
More than 300 hours of video are uploaded to YouTube every minute; the economics of comprehensive manual review are not "inconvenient", they are wholly implausible.
You’ve got it backwards. That much gets uploaded because it’s unmoderated. The vast majority of it is junk that everyone knows would never pass any sort of quality filter, so no one would bother to upload it.
That makes no sense. YouTube has no policy against "junk" videos, there is no reason why individual users would change their upload habits except for a tiny minority that knowingly upload violating videos.
No, there is that instant dopamine hit of uploading a video and seeing the likes in real time. Whereas if it were "upload a video and six months later it might be approved," people would be more diligent about it. This is very obvious human nature.
> If you don't moderate, you'll get any kind of content, including spam, trolls, etc.
There is a reasonable solution to this. Give people the tools to self moderate.
Things like reddit, for example, work pretty great, in that if you don't like a community, you can go to a different one with moderation rules that you prefer.
> Things like reddit, for example, work pretty great, in that if you don't like a community, you can go to a different one with moderation rules that you prefer.
The internet already works this way, if you don't like the moderation policies of a website you have the freedom to use a different website.
> Things like reddit, for example, work pretty great, in that if you don't like a community, you can go to a different one with moderation rules that you prefer
What you have described here is exactly how all websites work. If you don't like the site moderation you are free to use a different site, just like on reddit.com except you change a few more characters in the URL bar.
> What you have described here is exactly how all websites work
Not within the platform, no.
It is easy to create another subreddit. And if you create another subreddit, you have full access to all the same Reddit infrastructure as every other redditor.
I am talking about access to the platform.
> Except you change a few more characters
No, you would not have access to all the Reddit infrastructure, and access to all the cross site stuff, using the same Reddit account.
It is pretty easy to move across Reddit, and within Reddit, and get all the advantages of it. You don't get all those advantages, if it is another website.
Other people can't use, for example, the same mobile Reddit app, to access your website, and would have to download a new app.
They can't log in using the same account. They can't keep track of their posting history, all through the same user link.
There are numerous examples like that. There are lots and lots of benefits to using the actual Reddit website, compared to using a different website.
So no, you cannot just create a new website and get all of the very significant benefits that you would get from having it all on Reddit, using the same infrastructure, account, mobile app, posting history, follower list, etc., for example.
What does this sentence mean? No what? On what platform?
> It is easy to create another subreddit
It is easy to create another website.
> if you create another subreddit, you have full access to all the same Reddit infrastructure as every other redditor
So what? There is no distinction from a moderation perspective. If you don't like how the mods run a subreddit you can use a different subreddit, same as any other forum on the internet.
> So no, you cannot just create a new website and get all of the very significant benefits that you would get from having it all on Reddit, using the same infrastructure, account, mobile app, posting history, follower list, etc., for example.
Yes, you can just create a new website or use a different one. You don't "own" the users of reddit.com; there is no reason why one should be entitled to the infrastructure or the users.
You can't get all the same benefits of having it all on reddit. Things like being on the same mobile app, and having the same user account.
> Yes, you can just create a new website or use a different one. You don't "own" the users of reddit.com; there is no reason why one should be entitled to the infrastructure or the users.
This is called a barrier to entry. Regardless of who "owns" all of these benefits, it is still something that a person does not get if they simply create a different website.
The missing benefits would make "creating another website" significantly less useful, and they are the ones that I mentioned before. You would not be on the same mobile app, would not have the same follower list, would not have the same user account, etc.
These are huge benefits that one would not get if they simply created a different website. Who "owns" it, does not change the true fact that these benefits are large, and you would not get them if you merely created a new website.
> There is no distinction from a moderation perspective
Yes there is. The difference is that if you create a new website, you don't get all those significant benefits that I talked about. That is the distinction that I am talking about.
It's hard enough to get community moderators to sign up to do unpleasant work for no money, how do you expect to get anyone to do it after you impose liability for getting it wrong?
Actually, maybe there's something to this: Impose no liability on anybody for moderating as long as they're not moderating more than ten million users. If you are, the only way to avoid liability is to be a common carrier. Then you can actually have community moderation, but you can't have Zuckerberg deciding what a billion people don't get to see.
> how do you expect to get anyone to do it after you impose liability for getting it wrong?
I am suggesting that there would be no platform-wide moderation, but instead things like public block lists that users could voluntarily subscribe to.
For example, on Twitter, anyone could publish a "spam account list" or whatever, and people could choose whatever their preferred block list is.
Or they could choose not to follow any block lists, if they so desire.
Some blocklists might only block spammers, another might block Donald Trump, and another might block anyone who posts any swear words at all, and individual people would choose how they would like to view their content.
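A minimal sketch of how such subscribable blocklists could work (all names and structures here are my own hypothetical illustration, not any real platform's API): each list is just a published set of account IDs, and filtering happens per-user at read time.

```python
# Hypothetical sketch: user-subscribable blocklists. A blocklist is a
# published set of account IDs; each user opts into zero or more lists,
# and their feed is filtered against the union of their subscriptions.

class Blocklist:
    def __init__(self, name, blocked_accounts):
        self.name = name
        self.blocked = set(blocked_accounts)

class User:
    def __init__(self):
        self.subscriptions = []  # blocklists this user has opted into

    def subscribe(self, blocklist):
        self.subscriptions.append(blocklist)

    def visible_feed(self, posts):
        """Hide posts whose author appears on any subscribed blocklist."""
        blocked = set().union(*(bl.blocked for bl in self.subscriptions))
        return [p for p in posts if p["author"] not in blocked]

# One user subscribes to a spam list; another subscribes to nothing
# and therefore sees everything.
spam_list = Blocklist("spam-accounts", {"spammer1", "spammer2"})
posts = [
    {"author": "alice", "text": "hi"},
    {"author": "spammer1", "text": "BUY NOW"},
]

subscriber = User()
subscriber.subscribe(spam_list)
print([p["author"] for p in subscriber.visible_feed(posts)])  # ['alice']

unfiltered = User()
print([p["author"] for p in unfiltered.visible_feed(posts)])  # ['alice', 'spammer1']
```

The point of the design is that no moderation decision is global: the same post set yields different feeds depending on which lists each reader chose to trust.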
Killfiles did not work well, and are part of why Usenet died. Long-time members will have good killfiles and a relatively good experience, yes, but new members start with getting every message. Building up a killfile involved a long process of deciding who was worth reading, which is a large investment when you've just started getting into a channel.
That's the way the system had handled it for ages. Common carriers vs publishers. Tech companies decided that didn't apply to them, and they could have the best of both worlds.
Social media is a bit different than, say, the phone system, in its ability to widely broadcast things.
It's different from traditional publishers in that there are orders of magnitude more 'publishers', rather than, say, a newspaper and a few TV stations per city.
> Almost no platform survives without a bit of moderation. If you don't moderate, you'll get every kind of content, including spam, trolls, etc.
If you create your own Twitter account and post a bunch of spam there, no one will follow you; so it shouldn't matter. You can also limit someone's access to "discovery" mechanisms without deleting their content or preventing them from posting entirely so even their opted-in followers can't see their content.
The core problem arises when social media companies create mechanisms for users to bother each other without knowing each other, and there "spam" is only the tip of the iceberg: people routinely abuse and harass other people, which should be construed very, very broadly. If you had a child who died in a school shooting and someone likes to keep reminding you of it, taking glee in your pain, that is obviously abuse and harassment, and yet it isn't "illegal" and clearly isn't "spam", so it is almost always considered "totally fair game". These websites don't remove that content or punish people for it, and yet they remove photos of people breastfeeding, as if that were some crime against humanity.
What forcing websites to be platforms would do is fix social media by causing the people who make these websites to reconsider features that should probably not have existed in the first place.
> Almost no platform survives without a bit of moderation. If you don't moderate, you'll get every kind of content, including spam, trolls, etc.
There is moderation and there is censorship. Nobody is against filtering spam. Trolling is fine in my opinion. Just let the user have the option of blocking who they want to block and following who they want to follow.
> Add to that the fact that some people will push to boycott this kind of platform, and you'll have very few viable ways left to make it exist.
If that was true, twitter, reddit, facebook, google wouldn't exist in the first place.
> That's also kind of impossible.
No, it is not. Publishers don't find it impossible. Platforms don't find it impossible. If it were impossible, telecoms and publishers wouldn't exist.
> It could be an interesting experiment though to allow legally the kind of platform you suggest.
We already had this kind of platform.
> It would most probably look like 4chan, but still interesting.
No, it would look like 2009-2013 twitter, reddit, facebook, etc.
> There is moderation and there is censorship. Nobody is against filtering spam. Trolling is fine in my opinion. Just let the user have the option of blocking who they want to block and following who they want to follow.
In my experience, most people are not interested in sifting through mountains of garbage just to pick out a few morsels of a decent conversation. If you let trolls and bad-faith actors persist on your site, soon those people will be the only folks who are left.
> If that was true, HN would be infinitely more popular than reddit.
Not sure that was the best example. /r/programming is kind of notorious for being HN on a few-hour tape delay with a substantially diminished quality of conversation and fewer comments in general. But it's kind of a moot point because...
> twitter, reddit, facebook
All of these social networks are moderated to one degree or another. In fact, this entire post was spawned because of a Twitter moderation decision, and it is nowhere near the first time that this even happened.
More importantly, none of these social networks gained popularity because of lack of moderation. Twitter became popular because you could potentially win the lottery and talk to a famous person. Reddit became popular because Digg refugees needed somewhere to go and it had pornography on top of that. Facebook became popular because you could keep up with your buddies from college and everybody had real names and faces attached to them.
> No. If you let users block trolls and bad-faith actors, they go away.
This is so profoundly untrue that Twitter had to stop creating "egg" avatars for users who did not have them because the number of sock-puppet accounts made them block-on-sight.
> If that was true, HN would be infinitely more popular than reddit.
Reddit communities live and die by the strength of their moderation. Sure, Reddit as a whole is mountains of garbage. But the beauty (if that's the word) of the subreddit system is that to folks who want to talk about communism, hating women and minorities is garbage, and to folks who want to hate women and minorities, communism is garbage, and they both get the experience they want.
Reddit's popularity is due to the fact that a) people have multiple interests and so they want to hop communities with low activation energy (same high-level reason that GitHub got popular over individual git hosting sites: you already have an account) and b) there is some correlation between being a "bad-faith actor" across communities, regardless of their specific moderation worldview (e.g., neither /r/GamersRiseUp nor /r/FULLCOMMUNISM is interested in V1agr4), and so "you have some karma at all, regardless of source" is a useful filter.
> Once again, if you were right, twitter, reddit, facebook, etc wouldn't have grown to what they are today.
All of these systems put work into blocking abusive participants site-wide (including real humans who are very carefully and intentionally spewing vitriol) and are increasingly automatically blocking them.
> Reddit communities live and die by the strength of their moderation.
Hence why I wasn't against moderation: I'm against censorship. I'm all for limiting a "communism" subreddit to the topic of communism (moderation). However, I'm against the communism subreddit censoring people saying nasty things about Stalin or what have you (censorship).
Like how politics, atheism and other popular subreddits used to be open platforms for people to express how they truly feel. Until the shift happened and they turned into censored hellholes.
> All of these systems put work into blocking abusive participants site-wide (including real humans who are very carefully and intentionally spewing vitriol) and are increasingly automatically blocking them.
No. All of these systems put work into censoring people they disagree with. If "spewing vitriol" were truly the reason, then politics, worldnews, twoxchromosomes, atheism, and every major sub would be banned.
As long as the "vitriol" was pertinent to the topic, it should be allowed. After all, that's the point of the voting system right? If you don't like it, vote it down.
The 2009-2013 social media was great because everyone got to spew their vitriol so it evened things out. Now the vitriol is so concentrated that you have shitholes like politics and the_donald. Funnily enough, one is quarantined and the other isn't.
I don't agree with your core conceit of delineating between moderation and censorship, but this threw me:
> As long as the "vitriol" was pertinent to the topic, it should be allowed. After all, that's the point of the voting system right? If you don't like it, vote it down.
Voting systems as implemented by many popular sites are moderation/censorship via mob rule, and I'm surprised that you advocate for it.
I actually prefer having actual moderators to having a post voted down because five random people disagreed with my opinion and wanted to hide it in an attempt to control the narrative of the comment thread.
That's a problem even this site doesn't manage to avoid. Heck, look at your posts; in this comment thread, people are downvoting you in an attempt to hide your opinion, and I don't even agree with you.
Wait, what? 2009-2013 Twitter, Reddit, and Facebook were moderating content. We never had the kind of platform you're talking about. The closest was 4Chan, and even 4Chan heavily moderated individual boards. Even platforms like Gab still have moderation today.
Forums, usergroups, mailing lists, blog comments, etc... have always been moderated for spam, trolling, abuse, and just bad actors in general.
> Nobody is against filtering spam.
Repeal Section 230 and I give you 1 year, tops, before advertisers start making the case that filtering spam is censorship. After all, who decides what is and isn't spam? Advertisers wouldn't waste their time posting spam if people weren't clicking on it, so clearly the content is relevant to some people. Who are you to say that those advertisers shouldn't be able to reach their audience?
> Wait, what? 2009-2013 Twitter, Reddit, and Facebook were moderating content.
But not censoring. You could pretty much say and do anything on those platforms except for illegal content.
> Forums, usergroups, mailing lists, blog comments, etc... have always been moderated for spam, trolling, abuse, and just bad actors in general.
Which is different from censoring.
> Even platforms like Gab still have moderation today.
Gab always had moderation.
Either people haven't used 2009-2013 twitter, reddit, facebook, etc or people are pushing some heavy revisionist history here.
Reddit especially branded itself as the "free speech platform" in that time period.
I'm okay with moderation, I'm against censorship. For example, I'm all for a sports subreddit/community limiting the content to sports. And I'm for the users saying anything they want about the sports topic, even if it offends people.
See the difference?
It's funny how every response to me was by people who intentionally confused moderation with censorship.
And locking the wikileaks account isn't moderation, it's censorship.
What do you think the difference is between moderation and censorship?
Because Reddit/Twitter/Facebook in 2009-2013 didn't just remove illegal content. They removed tons of legal content too. They removed spam. Facebook removed pornography. Reddit in particular allowed individual subreddits to moderate/censor basically on any criteria whatsoever. If you went into a random forum in 2009 about dogs and started spouting nonsense about how we should all eat dogs, you would get kicked off of that forum. They wouldn't patiently hear out your controversial point-of-view.
Go back and read some of the usenet threads from this time period, there are people getting banned just as a joke; the paradigm of 'benevolent dictators' running forums was already pretty widely accepted.
What definition of censorship do you have that doesn't include removing explicit content, self-promotion, and off-topic posts?
> There is moderation and there is censorship. Nobody is against filtering spam.
This position has all the integrity, defensibility, and internal logic of "I can't define pornography, but I know it when I see it". Moderation and censorship are the same concept. It's just that "moderation" is metaphysically good, and "censorship" is metaphysically bad.
> No it is not. Publishers don't find it impossible. Platforms don't find it impossible. If it was impossible, telcoms and publishers wouldn't exist.
The problem is neither of them holds the middle ground.
Telecoms allow everything. Some of it is bad. Not having the bad stuff filtered is not great, but that's okay specifically because it can be filtered by somebody else. You don't need Comcast to do spam filtering on your email because Gmail can do it.
Publishers allow almost nothing. Most of what they publish is first-party. The editor of The New York Times can have their article published in The New York Times, but you generally can't.
Neither of those entity types primarily host user-generated content. Which of them do you propose is the appropriate model for moderating YouTube or Reddit?
That's the one you take away if you require a choice between having any moderation at all and having a safe harbor, because without the safe harbor the 5% they get wrong subjects them to liability.
No, moderation is censorship of user-submitted content by forum operators (whether owners or some other kind of host) or their agents. Censorship not directed at user-submitted content or carried out by other entities (e.g., the state) is not moderation.
It would definitely look like 4chan. 4chan is a straightforward example of unrestricted speech. And it's not a bad place, just different, but most people wouldn't enjoy being there.
Even places like this maintain a certain form of discourse by threats of bans.