The need for moderation isn't new

Content moderation on the web in general, and on social media sites in particular, has hit a tipping point similar to the Spam Deluge days of the early aughts. Back then, every spam-prevention step was decried as infringing on a business's ability to communicate openly.

While distasteful in the abstract — and monstrously complicated to layer on top of the Internet’s original vision of open, unrestricted data exchange — the reality was that meaningful email content had been completely overtaken by grift and chum.

The mechanisms are imperfect, and when they change to respond to new trends, marketers often scream bloody murder because they've been pushing right up against the existing boundaries. But the alternative is a scorched wasteland of pyramid schemes, 419 scams, penis pills, and home refi malware.

It’s easy to forget that Gmail’s 2004 launch was so successful in part due to its aggressive algorithmic filtering. “Did your message get lost in spam?” was a common question if an expected note didn’t arrive. It was annoying, but better than the alternative.

Are there better ways? Probably, yes. Does this kind of filtering and moderation constitute a philosophical insult to the free and unfettered exchange of information? Sure. But when the problem is consistently perceived as worse than any imaginable cure, people will take the cure.

The state of social media today is more dire than email's was two decades ago. Then it was a matter of losing needles in haystacks: missing important messages in the deluge of crap. Today, un- or poorly-moderated content is being used to explicitly attack individuals and groups.

To manipulate exploitative financial markets at a scale and speed the old scammers of the aughts would envy. To literally wage propaganda wars that build popular support for genocide.

So, it’s very difficult not to hear Nate Silver’s tongue-clucking concern about free expression vs. content moderation as an echo of the “Email Should Be Open” purists of the past generation. Some were naive, others were in on the grift.