The solution to harmful content online: pre-moderation
And why big companies don't want to pre-moderate
Harmful & illegal content is one of the largest problems facing not just adult entertainment sites but also social media sites. In recent years, with the Mastercard regulations and the development of the Online Safety Bill, there have been increased discussions around how adult sites can reduce this type of material, and what responsibility companies have for it.
When it comes to a site's responsibility for third-party content, things can be quite tricky, particularly in the U.S., where Section 230 provides immunity for online computer services with respect to third-party content. However, this provision is quite controversial, as many believe that sites should have to take more responsibility for third-party content.
We are currently seeing a number of states & countries attempting to put measures in place to reduce harmful content on adult sites, such as the Mastercard regulations and government efforts to put age restrictions on adult sites. The issue is that these are not sound solutions to reducing harmful & illegal content; rather, they can push users towards more unregulated sites, and they harm the companies trying to make the industry safer by making their business almost inoperable.
When we were designing the platform for my first company, we were constantly thinking about how we could solve some of these issues and prevent harmful content from going live, and that was when we realized that there is a pretty clear solution; people just aren't doing it. And that solution is pre-moderation.
Moderation is a fairly new concept, as it only became a necessity with the rise of large social media platforms. In turn, these companies set the worldwide standard for what moderation is and what it should look like. Most companies moderate with a mixture of AI & human review, which is generally considered best practice. They then moderate what is reported and run periodic sweep checks. This has now been accepted as the way to moderate. But just because everyone has been doing something one way does not mean that it is the only way.
When developing the Freyja platform, we kept asking how we could reduce illegal & harmful content as much as possible. You will never be able to stop it completely, but there are things you can do to reduce it. That was when we decided to use a mixture of AI & pre-moderation. This is how it worked:
A creator uploads an image or video of a sexual nature.
The AI picks this up, and it is immediately sent to a moderator.
The moderator then checks that we have the performer's KYC, that it is the correct performer, that no one else appears who hasn't consented and isn't verified, and that it meets the guidelines.
The content is then approved and posted.
This process takes 2 minutes, and is nowhere near as slow as one might think. What is great about this type of pre-moderation process is that even if harmful content is uploaded, you can stop it from ever going live. Similarly, if there is a piece of content you are unsure about, or there is a gray area, this gives you time to have a discussion with the performer rather than simply censoring it, because the pressure is taken away when the content is not actually live.
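To make the flow concrete, here is a minimal sketch in Python of how a two-gate pipeline like this could be structured. The names (Upload, automated_check, human_review) and the specific checks are illustrative assumptions rather than Freyja's actual code; the point is simply that content never receives a public status until a human moderator approves it.

```python
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    PENDING = "pending"        # uploaded, not visible to anyone yet
    IN_REVIEW = "in_review"    # flagged by the automated check, waiting on a moderator
    APPROVED = "approved"      # passed every check and can be published
    REJECTED = "rejected"      # failed a check; never went live


@dataclass
class Upload:
    creator_id: str
    media_url: str
    status: Status = Status.PENDING
    notes: list[str] = field(default_factory=list)


def automated_check(upload: Upload) -> Upload:
    """First gate: an automated classifier routes every sexual upload to a human.

    Nothing is published at this stage; the content only moves into the review queue.
    """
    upload.status = Status.IN_REVIEW
    return upload


def human_review(upload: Upload, *, kyc_on_file: bool, correct_performer: bool,
                 all_participants_verified: bool, meets_guidelines: bool) -> Upload:
    """Second gate: a moderator confirms KYC, identity, consent and guidelines."""
    checks = {
        "KYC on file": kyc_on_file,
        "correct performer": correct_performer,
        "all participants consented and verified": all_participants_verified,
        "meets content guidelines": meets_guidelines,
    }
    failed = [name for name, passed in checks.items() if not passed]
    if failed:
        upload.status = Status.REJECTED
        upload.notes.extend(failed)      # kept for the follow-up conversation with the performer
    else:
        upload.status = Status.APPROVED  # only now does the content go live
    return upload


# Example: an upload reaches the public feed only after both gates pass.
post = automated_check(Upload(creator_id="creator-123", media_url="https://example.com/clip.mp4"))
post = human_review(post, kyc_on_file=True, correct_performer=True,
                    all_participants_verified=True, meets_guidelines=True)
print(post.status)  # Status.APPROVED
```

The useful property of this shape is that "live" is an explicit end state rather than the default: a rejected or gray-area upload simply stays in the queue with notes attached, which is what makes the conversation with the performer possible before anything is public.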
So, why do companies not want to pre-moderate?
Pre-moderation is so clearly a solution to the problems facing social media & adult sites, and a step toward making the internet safer & more democratic. However, this system worked for us because we built it from the ground up; we had the infrastructure in place from the start, which made it scalable and possible. By contrast, if a bigger fan site wanted to do the same thing and ensure the same level of safety, they would need to change their entire site infrastructure and remove the majority of their existing content and user base, which is something most companies aren't willing to do.
Overall, perhaps the solution is to look differently at the way companies moderate, rather than censoring and banning sites.