Content Moderation: Effects & Importance
Let's look into what content moderation is and what it means for companies.
Tech companies like Facebook, Twitter, and YouTube allow people to create content and share their views on a variety of topics. Often this leads to the sharing of illegal material, hate speech targeted at specific groups of people, and even criminal activity such as human trafficking. That's why governments push these social media companies to restrict and delete content that is 'unethical' or might compromise public safety.
This is where content moderation comes into the picture.
Many companies are trying to develop machine learning and AI algorithms to handle this task, but it is a difficult one to automate. How does one decide which content to remove and which to keep? Judging what users can and cannot see requires a level of understanding that machines alone don't have. So companies employ people as content moderators/reviewers who remove posts and content that they judge to be harmful or in violation of community guidelines.

The life of a content moderator is hard. There have been cases where people in this job have attempted suicide because of the content they are exposed to. The pay is low, little to no emphasis is placed on their mental health, and they are not allowed to discuss what they see even with their co-workers.

The content moderation solutions market reached an all-time high of $4.9B and is expected to keep growing. Companies like Sentropy are building AI systems to perform these tasks, but experts suggest that human intervention will still be required. Even if a model is trained to identify one kind of fraudulent material, people always find new ways to create and spread misinformation, and it is hard for a machine to keep up with that.
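To make this hybrid "AI plus human reviewer" idea concrete, here is a minimal sketch of how such a pipeline might route content. Everything in it is a hypothetical illustration, not any real platform's system: the toy training data, the thresholds, and the `moderate` function are all my own assumptions, written in Python with scikit-learn.

```python
# A minimal sketch of an AI-plus-human moderation pipeline.
# All data, labels, and thresholds are hypothetical illustrations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = violates guidelines, 0 = acceptable.
posts = [
    "I hate this group of people, they should disappear",
    "Buy illegal documents here, fast delivery",
    "Had a great time at the park today",
    "Check out my new recipe for banana bread",
]
labels = [1, 1, 0, 0]

# TF-IDF features + logistic regression: a deliberately simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

REMOVE_THRESHOLD = 0.9   # confident enough to auto-remove
REVIEW_THRESHOLD = 0.5   # uncertain: route to a human moderator

def moderate(post: str) -> str:
    """Return an action for a post: remove, send to a human, or keep."""
    p_violation = model.predict_proba([post])[0][1]
    if p_violation >= REMOVE_THRESHOLD:
        return "remove"
    if p_violation >= REVIEW_THRESHOLD:
        return "send to human moderator"
    return "keep"

print(moderate("I hate this group of people"))
```

The key design choice is the middle band: instead of forcing the model to decide everything, uncertain cases are routed to a human reviewer, which is exactly why experts say human intervention will still be needed even as these models improve.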
There are also newer companies like Parler and Gab that promise "freedom of speech" and enforce few or no guidelines on content and posts. People banned from more moderated social media platforms become their users, but this approach has clear downsides. Apple and Google removed Parler from their app stores, and Amazon stopped hosting the app, over concerns about the spread of misinformation and hate on the platform. In today's world we have to distinguish between free speech and responsible speech when moderating these platforms.
To conclude, I think companies should invest enough resources and money into moderating content. In light of recent events like the COVID-19 pandemic, the spread of misinformation and false rumors can be truly dangerous and may even lead to deaths. Responsibility for moderating content should also fall on users. The internet will always have people sharing harmful and false content, but as users we must take action against it. Companies like Reddit and YouTube provide a report option so that if users find misinformation, hateful posts, or videos, they have the power to report them. This involves users in improving content and standing up against wrong practices.