With the incoming Online Safety Bill, a number of interesting topics and debates are arising. One thing that is clear from this bill is that online platforms are going to have to increase, and/or change, the way they approach moderation. When one thinks about changing moderation, the initial thoughts are about increasing the number of human moderators, taking down more content, or moving further towards AI moderation. However, more online platforms (and people) also need to be thinking about how we can make moderation more democratic, especially given the recent increase in AI moderation.
What does one mean by making moderation democratic? Well, if you read my Substack or follow me on Instagram, then you’ve probably noticed that people connected to the adult industry, myself included, are often censored on social media. I’m not just saying I work in the spice industry to confuse people, and no, I do not sell chillies. Indeed, my social media is as PG as I can possibly make it. There isn’t a single mention of porn, not even a bikini photo. Rather, it’s all comedy PowerPoints and street interviews. And yet, I’m still shadowbanned and profiled on the app. If I were to mention anything more detailed about my advocacy work, I could kiss my account goodbye.
Nonetheless, this article isn’t about how sex workers and porn industry members are unfairly censored on social media (that’s a conversation for another day). It’s about why we need to demand better appeal processes and more democratisation in moderation decisions.
Let’s take the only Instagram post of mine that was censored and taken down as an example. This reel was a street interview in Finland. In the 30-second clip, I asked people:
“Why do the Nordics have a high STD rate?”
The answers were about condom usage and too much trust, and one guy made a joke about liking seals that didn’t make much sense.
As with most things I post, what I anticipate will do well on social media doesn’t always do well, and the things I think will be merely average are sometimes the most viral, such as the video on male shoes… To be honest, I didn’t think this video would do too well in terms of analytics, but I never thought it would cause a problem. Within three minutes of posting, it hit over 100k views. The comments were coming in faster than I could read them, and the views kept climbing. My first thought was that this was great; our company needed that kind of exposure. My excitement was short-lived, however, as Instagram quickly took the post down. When I received the notification, I assumed the reasoning would be
‘of a sexual nature’, since it referred to STDs.
Once again, I was wrong.
It was taken down for ‘hate speech’.
Now this really made me laugh. It made absolutely no sense to me; there was nothing about that video that could be classified as hate speech. And so, I appealed the moderation decision. Naturally, nothing changed, and I received no response. There was no one at Instagram I could contact about the matter, and the way their moderation appeals are set up allows for neither a discussion nor a proper appeal.
As someone who was running a company heavily focused on moderation and how to make it better, this really upset me. The fact is, moderation mistakes are inevitable, whether they come from AI or from a human. They will happen on every platform, and all the AI in the world won’t change that: a moderator can’t always understand context, can misinterpret an image or video, or can simply make an error. That is why it is essential that, as companies now look to increase their moderator numbers and their reliance on AI, they also think about how to improve their appeal processes and, in a way, democratise their platforms more. If they fail to do so, they raise the risk of unfair censorship, whose dangerous effects can’t be overstated, especially since these platforms are now intertwined with our social and business lives and with how we read the news and follow politics. Yet very few journalists, politicians or people outside of the moderation space are raising this very important concern.
There are a few ways that platforms could improve their processes:
State more detailed reasons for moderation decisions beyond the generic category terms
Increase the number of allowed appeals from one to a minimum of two and a maximum of three
Enable users to offer detailed reasoning in their appeal submission, rather than only being able to click ‘appeal’
These are three basic, generic ways the process could be improved. Indeed, improvement needs to go even further than this, but it would be a good place for Meta, X and TikTok to start.
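To make those three points concrete, here is a minimal sketch of what such an appeal flow could look like in code. Everything in it is hypothetical: the names (ModerationDecision, Appeal, AppealTracker) and the MAX_APPEALS cap are illustrative, not any platform’s real API. It simply encodes the three improvements above: a detailed reason attached to every decision, a two-to-three-attempt appeal window, and free-text reasoning from the user.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical cap: users get a minimum of two and a maximum of three appeals.
MAX_APPEALS = 3

@dataclass
class ModerationDecision:
    post_id: str
    category: str         # generic label, e.g. "hate_speech"
    detailed_reason: str  # the specific rule and content span that triggered it
    decided_by: str       # "ai" or "human", so users know who judged them

@dataclass
class Appeal:
    decision: ModerationDecision
    user_reasoning: str   # free-text context from the user, not just a button click
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class AppealTracker:
    """Keeps appeal history per post and enforces the appeal limit."""

    def __init__(self) -> None:
        self._appeals: dict[str, list[Appeal]] = {}

    def submit(self, appeal: Appeal) -> str:
        history = self._appeals.setdefault(appeal.decision.post_id, [])
        if len(history) >= MAX_APPEALS:
            return "rejected: appeal limit reached"
        history.append(appeal)
        return f"accepted: appeal {len(history)} of {MAX_APPEALS}"

# Example: appealing the street-interview reel with actual reasoning attached.
decision = ModerationDecision(
    post_id="reel_123",
    category="hate_speech",
    detailed_reason="flagged phrase at 0:14 under rule 4.2 (illustrative only)",
    decided_by="ai",
)
tracker = AppealTracker()
print(tracker.submit(Appeal(decision, "This is a street interview about STD rates; no group is targeted.")))
```

The point of the sketch is the shape of the data, not the implementation: the decision carries a specific reason rather than a bare category, the appeal carries the user’s own argument, and the number of attempts becomes an explicit, inspectable rule rather than a black box.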