One of the biggest failures of the tech industry in my generation is treating destructive patterns of targeted abuse as a context-free content categorization challenge: “objectionable language” in a specific tweet, “referring to a distinct group” in a FB message, and so on.
The obvious problem is the whack-a-mole dynamic. A quarter of a century ago, AOL tried to enforce banned words and topics to keep its chats from veering into PR-nightmare-and-FBI-sting territory. It was always a losing battle, and the chat guides knew it.
Less obvious is the problem of legal responsibility for content. Dumb-but-consistent “topic/ideology neutral” rules make it easy for a platform to argue that it is not legally responsible for what happens there, since it makes a ‘best effort’ to prevent illegal acts.
That gives us Twitter banning people who swear at neo-nazis, but not the neo-nazis who are better at dancing on the edge of the rules. It gives us YouTube shutting down home videos because a Beatles song is playing in the background, but leaving MRA harassment of public figures online.
The bigger, deeper problem, though, is that technology solutionists, and to a larger extent the neoliberal political consensus that drives perceptions of “acceptable discourse,” are unwilling to grapple with the impact of actions in their real social context.
Distributed harassment campaigns coordinated outside a social network, while the targets of the harassment are framed as the aggressors because they respond angrily? “Both sides are name-calling.”
White supremacists evangelize eugenics, and a black person summarizes it as “White people are the worst”? Same thing, says the neoliberal solutionist who’s terrified of anyone thinking he is taking a side.
Our industry has built platforms that repeatedly amplify structural violence, but insists on treating the resulting impact on vulnerable people’s lives as a content categorization problem.
I’m pissed, and I’m just ranting now, but lord. What a mess.