A significant challenge that Facebook, Twitter, Instagram and LinkedIn face today is acting as the custodians of the Internet while at the same time being the center of self-expression and user-generated content.
By allowing millions of users with diverse views to post their opinions every day, some of them offensive or harmful, platforms can quickly lose control without a content moderation plan.
Users demand to freely express their views on ongoing political, social, and economic issues on social media platforms without intervention and without being told their views are “inappropriate.”
So platforms, in some form or another, need to moderate content to protect individuals and their interests by removing unsuitable submissions.
How does content moderation work?
A team of social media content moderators keeps an eye on incoming content and removes anything inappropriate or illegal before it goes viral and reaches a wider audience. When a post is removed, it disappears from the platform entirely.
This means that some social media platforms let you view content immediately after posting, while others first send it to moderators to check its category and appropriateness before publishing.
So, here are the most common types of content moderation.
1. Pre-moderation
Whenever someone submits content to your website and you place it in a queue to be checked by a moderator before it is visible to all, you are pre-moderating.
Pre-moderation has the benefit of keeping content you deem undesirable, particularly libelous content, out of your visible community. It is also a common choice for online communities targeted at children, where it helps pick up on bullying or sexual grooming behavior.
The downside of pre-moderation is the high cost involved as your community grows: the volume of submissions can quickly become unmanageable for a small team of content moderators.
Pre-moderation is most suitable for communities with a high level of legal risk, such as celebrity-based ones, or communities where child protection is a must. Basically, if the content submitted is not conversational or time-sensitive, pre-moderation can be deployed easily.
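The pre-moderation flow described above can be sketched as a simple hold-and-review queue. This is a minimal illustration, assuming an in-memory store; names like `ModerationQueue` and `review_next` are made up for the example.

```python
from collections import deque

class ModerationQueue:
    """Holds submissions until a moderator approves or rejects them."""

    def __init__(self):
        self.pending = deque()   # awaiting review, not yet visible
        self.published = []      # approved and visible to the community

    def submit(self, author, text):
        # Content goes into the queue instead of straight onto the site.
        self.pending.append({"author": author, "text": text})

    def review_next(self, approve):
        # A moderator takes the oldest submission and decides its fate.
        item = self.pending.popleft()
        if approve:
            self.published.append(item)
        return item

queue = ModerationQueue()
queue.submit("alice", "Great article!")
queue.submit("bob", "Spammy link here")
queue.review_next(approve=True)    # alice's comment becomes visible
queue.review_next(approve=False)   # bob's comment is discarded
print([item["author"] for item in queue.published])  # → ['alice']
```

Nothing reaches `published` without passing a moderator, which is exactly why costs scale with submission volume.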
2. Post-moderation
Quite the opposite of pre-moderation, post-moderation is a better alternative from a user experience perspective: all content is displayed on your website or social media platform immediately after submission, but replicated in a queue for a content moderator to analyze afterwards.
The main benefit of this type of content moderation is that conversations take place in real time, which keeps the community agile and dynamic.
One downside of post-moderation is tied to the size of the community: as it grows, the cost of reviewing everything can become prohibitive.
As well as this, because each piece of content is viewed and then approved or rejected, the moderating organization legally becomes the publisher of the content, which can prove risky, especially if certain communities (gossip ones, for example) attract defamatory submissions.
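The publish-first, review-later flow can be sketched as follows. This is an illustrative toy, not a real platform API: content goes live immediately, a copy lands in a review queue, and rejected items are retracted after the fact.

```python
from collections import deque

published = {}          # post_id -> text, visible right away
review_queue = deque()  # moderators work through this afterwards
next_id = 0

def submit(text):
    global next_id
    post_id = next_id
    next_id += 1
    published[post_id] = text      # live immediately: real-time conversation
    review_queue.append(post_id)   # replicated for later review
    return post_id

def review(approve):
    post_id = review_queue.popleft()
    if not approve:
        published.pop(post_id, None)  # retract rejected content
    return post_id

submit("Welcome to the forum!")
submit("Off-topic rant")
review(approve=True)
review(approve=False)
print(sorted(published))  # → [0], only the approved post remains
```

Note the trade-off the section describes: between `submit` and `review`, undesirable content is briefly visible to everyone.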
3. Reactive moderation
Reactive moderation relies on your community members to flag content that is either not aligned with your house rules or that members of your community find inappropriate.
It goes hand in hand with both pre- and post-moderation as a “safety net” in case anything slips past the moderators through normal human error.
The process is simple: a reporting button sits on each piece of user-generated content, and clicking it files an alert and notifies the content moderation team.
However, your brand reputation could be at stake if you take the risk of leaving undesirable content available on your website, blog, or social media platform for a period of time, relying only on your community members to report it.
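The report-button mechanism can be sketched in a few lines. The threshold and function names are assumptions for illustration: here, a post only triggers an alert for the moderation team once enough members have flagged it.

```python
REPORT_THRESHOLD = 3    # reports needed before moderators are alerted

reports = {}        # post_id -> number of reports received
alert_queue = []    # posts awaiting a moderator's attention

def report(post_id):
    """Called when a community member clicks the report button."""
    reports[post_id] = reports.get(post_id, 0) + 1
    if reports[post_id] == REPORT_THRESHOLD:
        alert_queue.append(post_id)  # trigger the content moderation team

for _ in range(3):
    report(42)   # three members flag the same post
report(7)        # a single report is not yet an alert
print(alert_queue)  # → [42]
```

A threshold reduces noise from one-off malicious reports, but it also lengthens the window during which flagged content stays visible.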
4. Distributed moderation
Distributed moderation is a less common method of handling user-generated content.
It relies on a rating system which members of the community use to vote on whether submissions are in line with community expectations or within the rules of use. It can control comments or forum posts, usually with guidance from experienced senior moderators.
Expecting the community to self-moderate is a rare direction companies are willing to take, for legal and branding reasons.
For this reason, a distributed content moderation system can also be applied within an organization, using several members of the team to rate content and aggregate an average score that determines whether it should stay public or needs to be reviewed.
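The score-aggregation step above can be sketched as a simple average against a threshold. The 1-to-5 rating scale and the cut-off value are assumptions for the example; real systems would tune both.

```python
KEEP_THRESHOLD = 3.5    # average rating (1-5) needed to stay public

def decide(votes):
    """Aggregate member ratings into a keep/review decision."""
    average = sum(votes) / len(votes)
    return "keep" if average >= KEEP_THRESHOLD else "review"

print(decide([5, 4, 4, 5]))  # clearly in line with expectations → 'keep'
print(decide([2, 1, 3, 2]))  # below threshold → 'review'
```

Averaging across several raters is what makes the approach "distributed": no single member's vote removes content on its own.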
5. Automated moderation
In addition to all the above, automated moderation is a valuable weapon in any moderator’s arsenal.
It consists of deploying various technical tools to process user-generated content and apply defined rules to reject or approve submissions.
The most typical tool used for this type of content moderation is the word filter, which checks submissions against a list of banned words.
A similar tool is the IP ban list. More sophisticated tools are also being developed, such as the one supplied by Crisp Thinking, an engine that automatically analyzes conversational patterns and relationships between users.
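A basic word filter can be sketched in a few lines. The banned-word list and function name are made up for illustration; production systems layer far richer rules and pattern analysis on top of this.

```python
import re

BANNED_WORDS = {"spamword", "slur"}

def passes_filter(text):
    """Reject a submission if any banned word appears as a whole word."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return not any(token in BANNED_WORDS for token in tokens)

print(passes_filter("A perfectly friendly comment"))  # → True
print(passes_filter("buy now, spamword inside"))      # → False
```

Matching whole tokens rather than raw substrings avoids the classic problem of a filter rejecting innocent words that merely contain a banned string.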
6. No moderation
Nowadays, this simply isn’t an option.
I mean, where would we be without any moderation? In total chaos, to be honest.
Maybe you simply don’t have the resources or the finances to take this into account, or you don’t believe this could be a solution for your business and online community.
From a legal point of view, you might feel that your community is small enough to fly under the radar. Be that as it may, there are plenty of benefits to using one of the moderation types covered above.
Engage communities with content moderation
Without some form of moderation, your community will quickly descend into anarchy, and that will not do any good for your potential new users, followers, customers, or even future colleagues.