Purism’s David Seaward recently posted an article titled Curbing Harassment with User Empowerment. In it, he posits that “user empowerment” is the best way to handle harassment. Yet many of his suggestions do nothing to prevent or stop harassment; they only give a user ways to plug their ears while it occurs.
Trusting The Operator
David Seaward writes with the assumption that the operator is always untrustworthy. But what if the operator were someone you knew? Someone you could reach out to if there were any issues, and who could reach out to other operators? This is the case on the Fediverse, where Purism’s Librem Social operates. Within this system of federated networks, each node is run by a person or group of people, and those people receive reports in various forms. To remain trusted, the moderators of a server are expected to handle reports of spam, hate speech, and other negative interactions coming from other servers. Since the network is distributed, this tends to be sustainable.
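In ActivityPub terms, a report like this typically travels between servers as a Flag activity delivered to the remote server’s inbox. Here is a minimal sketch of what one could look like; every actor, URL, and message in it is a hypothetical placeholder, not taken from any real instance:

```python
# Sketch of a federated abuse report as an ActivityPub "Flag" activity.
# All actors, URLs, and content here are hypothetical placeholders.
import json

def build_report(reporter: str, offending_post: str, summary: str) -> dict:
    """Build a Flag activity that a server could deliver to the remote
    moderators so they can review the reported post."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Flag",
        "actor": reporter,          # the account (or server) filing the report
        "object": offending_post,   # the post being reported
        "content": summary,         # a human-readable explanation for the mods
    }

report = build_report(
    reporter="https://example.social/actor",
    offending_post="https://remote.example/notes/1",
    summary="Targeted harassment of one of our users",
)
print(json.dumps(report, indent=2))
```

The receiving server’s moderators see the `content` explanation alongside the reported object, which is what makes the human-to-human trust described above workable in practice.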
In practice, this means that as a moderator my users can send me reports about content that concerns them, and I can send messages to the moderators of other servers if something on their server concerns me or one of my users. If the operator of the other node breaches trust (e.g. by not responding, or by expressing support for bad actors), then I can choose to defederate from them. If I as a user find that my admin does not take action, I can move to a node that will. The end result is that there are multiple layers of trust:
- I can trust my admins to take action
- My admins can trust other admins to take action
This creates a system where, without lock-in, admins are incentivized to respond to things in good faith and in the best interests of their users.
User Empowerment And Active Admins
The system of trust above does not conflict with Purism’s goal of user empowerment. In fact, these two systems need to work together. Providing users tools to avoid harassment works in the short term, but admins need to take action to prevent harassment in the long term. There’s a very popular saying: with great power comes great responsibility. When you are an admin, you have both the power and responsibility to prevent harassment.
To continue using the Fediverse for this discussion, there are two ways harassment occurs in a federated system:
- A user on a remote instance harasses people
- A user on the local instance harasses people
Harassment comes in various forms, such as harassing speech, evading blocks, or sealioning. In all cases and forms, the local admin is expected to listen to reports and handle them accordingly. For local users, this can mean a stern warning or a ban. For remote users, the response could range from contacting the remote admin to blocking that instance outright; some Fediverse software also supports blocking individual remote accounts. Each action helps prevent the harasser from further harming people on your instance or other instances.
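The escalation ladder above can be sketched as a simple policy function. The action names and thresholds below are invented for illustration; they are not drawn from any particular Fediverse software:

```python
# Illustrative escalation policy for a moderator handling a harassment
# report. Action names and thresholds are invented for this sketch.
def choose_action(is_local: bool, prior_offenses: int,
                  remote_admin_responsive: bool = True) -> str:
    if is_local:
        # Local users: a stern warning first, a ban for repeat offenders.
        return "warn" if prior_offenses == 0 else "ban"
    if remote_admin_responsive:
        # Remote users: start by contacting their admin, escalate to
        # blocking the individual account if the behavior continues.
        return "contact_remote_admin" if prior_offenses == 0 else "block_account"
    # A remote admin who breaches trust: defederate from the whole instance.
    return "defederate"

print(choose_action(is_local=False, prior_offenses=2,
                    remote_admin_responsive=False))
```

The point of the sketch is the ordering: responses start with the least disruptive option and escalate only when the earlier layer of trust fails.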
Crowdsourcing Does Not Solve Harassment
One solution David proposes in the article is crowdsourced tagging. Earlier in the article he mentions that operators can be untrustworthy, but trusting everyone to tag things does not solve that problem. In fact, it can contribute to dogpiling and censorship. Let’s use an example to illustrate the issue. A trans woman posts about her experience with transphobia, and how transphobic people have harmed her. Her harassers can see this post and tag it with “#hatespeech”. They tell their friends to do the same, or use bots. Now anyone who filters “#hatespeech” has her post hidden – even people who would have supported her. Apply this pattern elsewhere, and crowdsourced tagging easily becomes a powerful tool for censoring the speech of marginalized people.
Overall, I’d say Purism needs to take a step back and review their stance on moderation and anti-harassment. It would also serve them well to take a minute to have conversations with the experts they cite.