Networks Of Trust: Dismantling And Preventing Harassment

Purism’s David Seaward recently posted an article titled Curbing Harassment with User Empowerment. In it, he posits that “user empowerment” is the best way to handle harassment. Yet many of his suggestions do nothing to prevent or stop harassment. Instead, they only give a user ways to plug their ears as it occurs.

Trusting The Operator

David Seaward writes with the assumption that the operator is always untrustworthy. But what if the operator were someone you knew? Someone you could reach out to if there were any issues, and who could reach out to other operators in turn? This is the case on the Fediverse, where Purism’s Librem Social operates. Within this system of federated networks, each node is run by a person or group of people who receive reports in various forms. To remain trusted, the moderators of a server are expected to handle reports of spam, hate speech, and other negative interactions coming from other servers. Because the network is distributed, this workload tends to be sustainable.

In practice, this means that as a moderator my users can send me things they’re concerned about, and I can message the moderators of other servers if something on their server concerns me or one of my users. If the operator of another node breaches trust (e.g. by not responding, or by expressing support for bad actors), I can choose to defederate from them. And if, as a user, I find that my admin does not take action, I can move to a node that will. The end result is that there are multiple layers of trust:

  • I can trust my admins to take action
  • My admins can trust other admins to take action

This creates a system where, without lock-in, admins are incentivized to respond to things in good faith and in the best interests of their users.
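To make those layers concrete, here is a minimal sketch in Python. The names and the threshold are entirely hypothetical – no real Fediverse software exposes this exact interface – but the shape of the trust relationship is the same:

```python
# Toy model of federated trust. An instance tracks reports its users
# file against remote instances; if a remote admin repeatedly ignores
# them, trust is broken and the instance defederates.

class Instance:
    def __init__(self, domain):
        self.domain = domain
        self.defederated = set()       # domains this instance no longer trusts
        self.unresolved_reports = {}   # remote domain -> ignored report count

    def report_remote(self, remote_domain):
        """A local user reports a problem on a remote instance."""
        self.unresolved_reports[remote_domain] = (
            self.unresolved_reports.get(remote_domain, 0) + 1
        )

    def remote_resolved(self, remote_domain):
        """The remote admin acted in good faith; trust is maintained."""
        self.unresolved_reports.pop(remote_domain, None)

    def review(self, threshold=3):
        """Defederate from instances whose admins keep ignoring reports."""
        for domain, ignored in list(self.unresolved_reports.items()):
            if ignored >= threshold:
                self.defederated.add(domain)
                del self.unresolved_reports[domain]

example = Instance("example.social")
for _ in range(3):
    example.report_remote("bad-actors.example")
example.review()
assert "bad-actors.example" in example.defederated
```

Nothing in this model is lock-in: a user unhappy with how their instance handles reports can simply move to another one, which is exactly the incentive described above.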

User Empowerment And Active Admins

The system of trust above does not conflict with Purism’s goal of user empowerment. In fact, the two need to work together. Giving users tools to avoid harassment works in the short term, but admins need to take action to prevent harassment in the long term. There’s a very popular saying: with great power comes great responsibility. As an admin, you have both the power and the responsibility to prevent harassment.

To continue using the Fediverse for this discussion, harassment occurs in a federated system in two ways:

  1. A user on a remote instance harasses people
  2. A user on the local instance harasses people

When harassment occurs, it comes in various forms: harassing speech, evading blocks, sealioning, and so on. In all cases and forms, the local admin is expected to listen to reports and handle them accordingly. For local users, this can mean a stern warning or a ban. For remote users, the response could range from contacting the remote admin to blocking that instance entirely. Some Fediverse software also supports blocking individual remote accounts. Each action helps prevent the harasser from further harming people on your instance or on other instances.
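As a rough illustration of that escalation, here is a sketch in Python. Everything here is hypothetical – the Report type, the handle_report function, and its parameters are made up for this post, and real moderation tooling varies by software:

```python
from dataclasses import dataclass

# Hypothetical report type; real Fediverse software models this differently.
@dataclass
class Report:
    account: str          # e.g. "troll@bad.example"
    account_domain: str   # e.g. "bad.example"
    reason: str

def handle_report(report, local_domain, repeat_offender, remote_admin_responsive):
    """Pick a response following the escalation described above."""
    if report.account_domain == local_domain:
        # Local user: the admin has direct authority over the account.
        return "ban" if repeat_offender else "stern warning"
    # Remote user: work through the layers of trust.
    if remote_admin_responsive:
        return "contact the remote admin"
    if not repeat_offender:
        # Some Fediverse software supports blocking a single remote account.
        return "block the individual remote account"
    return "defederate from the remote instance"

print(handle_report(Report("troll@bad.example", "bad.example", "sealioning"),
                    "example.social",
                    repeat_offender=True,
                    remote_admin_responsive=False))
# -> defederate from the remote instance
```

The point of the sketch is the ordering: cheaper, more targeted responses come first, and defederation is the last resort, reserved for admins who have already broken trust.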

Crowdsourcing Does Not Solve Harassment

One solution David proposes in the article is crowdsourced tagging. Earlier in the article he mentions that operators can be untrustworthy, but trusting everyone to tag things does not solve this. In fact, it can enable dogpiling and censorship. Let’s use an example to illustrate the issue, with a short simulation after it. A trans woman posts about her experience with transphobia, and how transphobic people have harmed her. Her harassers see this post and tag it with “#hatespeech”. They tell their friends to do the same, or use bots. Now anyone who filters “#hatespeech” has her post hidden – even people who would have supported her. Apply this to other topics and crowdsourced tagging easily becomes a powerful tool for censoring the speech of marginalized people.
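A toy simulation makes the failure mode concrete. The filter below is a hypothetical model – a naive reader-side mute that hides any post enough accounts have tagged – not any real platform’s implementation:

```python
# Toy model of crowdsourced tagging being weaponized.
# A tag "takes effect" once enough accounts apply it, and readers
# who mute that tag never see the post – regardless of its content.

TAG_THRESHOLD = 5  # hypothetical: tags applied by >= 5 accounts take effect

def visible_posts(posts, muted_tags):
    shown = []
    for post in posts:
        effective_tags = {
            tag for tag, taggers in post["tags"].items()
            if len(taggers) >= TAG_THRESHOLD
        }
        if not (effective_tags & muted_tags):
            shown.append(post)
    return shown

victim_post = {
    "author": "trans_woman",
    "text": "My experience with transphobia...",
    "tags": {},
}

# Harassers (or their bots) mass-tag the post as "#hatespeech".
victim_post["tags"]["#hatespeech"] = {f"bot{i}" for i in range(20)}

# A supportive reader who mutes "#hatespeech" never sees the post.
print(visible_posts([victim_post], muted_tags={"#hatespeech"}))
# -> []  (the victim's post is hidden from exactly the people
#         who would have supported her)
```

Within this model, raising the threshold doesn’t help: it only raises the number of bots the harassers need.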

Overall, I’d say Purism needs to take a step back and review their stance on moderation and anti-harassment. It would also do them well to take a minute and have conversations with the experts they cite.

The Paradox of Tolerance In Online Spaces

Author’s Note: I have never felt uncomfortable contributing to GNOME. This is a more general post about online communities, targeted at some of the other FOSS communities I’ve been interested in contributing to or have contributed to in the past.

In certain online spaces, the idea has gained traction that excluding people who are intolerant of others is itself a harmful form of intolerance. “These people” means those who post white supremacist content, misogyny, homophobia, transphobia, etc. A common argument against excluding them is the “slippery slope” – if you exclude these people for hate speech, soon you’ll exclude anyone for anything.

In reality, this slope does not exist. In healthy online communities, these people are kept out and the community continues to move forward. How? Because these communities could not be healthy and safe without the exclusion of harmful elements. This is where we hit the paradox of tolerance.

What Is The Paradox of Tolerance?

…if a society is tolerant without limit, its ability to be tolerant is eventually seized or destroyed by the intolerant.

This is a common summary of Karl Popper’s formulation of the paradox.

In online spaces, “tolerance” refers to whom you allow into the community. To be tolerant means to allow people from all walks of life into your space, regardless of race, sexual or gender identity, or other factors used to marginalize people within society. A good community should go further than mere tolerance: it should let people know that they are welcome and that they will not be marginalized within the community.

A person is marginalized when they are abused for their identity, or made to feel less important because of it. In real life, this manifests as workplace discrimination, housing discrimination, police brutality, and many other forms of oppression that treat the value of a victim’s life and livelihood as less important than the oppressor’s. In an online space, marginalization is more subtle. It would be a black person seeing someone use the “n word” – or worse, being called one – without repercussion. It would be a trans woman having to deal with someone saying that trans women are “men trying to invade women’s spaces”. It would be any woman having to deal with men making sexual remarks and unwanted advances. These things all make the victims uncomfortable, and the lack of action taken can make them feel unimportant.

Some communities like to think of themselves as “perfectly tolerant”. This means that they would tolerate people who take actions to make marginalized people uncomfortable. When a community does this, it is actually being intolerant, and is enabling abusers.

Isn’t It Intolerance To Keep Out The Intolerant?

Yes. In a very literal sense, it is intolerance to keep these people out of communities. However, the effect of this intolerance is that people who face real intolerance in their day-to-day lives feel safer in those communities. So it comes down to what you think is important. Do you think it’s more important to let people abuse others, or to have a safe and productive community? If you want to run a community, you have to make that choice.