Today, content is easily accessible across multiple platforms. This content can be user-generated or created by companies.
User-generated content (UGC) includes text (such as comments, forum posts, reviews, ratings, and testimonials), photos, videos, audio (such as podcasts), links, and even documents. Much of this content circulates through content communities, which are not themselves part of the UGC model but welcome UGC in the form of questions and comments.
One consequence of UGC is the risk of users being exposed to inappropriate or irrelevant content. Because published content directly affects a company’s credibility and brand, this risk is significant.
Because this two-way communication is otherwise unsupervised, content moderation solutions are needed to check and monitor the content. The purpose of content moderation is to filter, validate, and monitor publicly available content in order to protect company credibility and professionalism. Today, these services are often provided to companies by third parties.
The need for content moderation
Internet World Stats estimates that there are currently 5.16 billion internet users in the world, and the Social Media Benchmark Report of 2021 estimates four billion social media users. UGC has grown significantly as a result. Content communities (hosted by companies) have also grown in popularity, mainly to provide users with quick access to technical information.
The abundance of public content, combined with a lack of adequate and appropriate moderation of this content, raises many risks, including:
Exposure to offensive content
A brand's reputation may be put at risk when unregulated UGC is posted. Such content could upset certain groups, leading to a chain reaction that damages the brand’s image.
Risk of unmonitored two-way interactions becoming abusive
Companies that enable two-way communication are at high risk of interactions getting out of control, exposing users to abuse through unmoderated text, images, and videos that may depict violence, hate speech, or drug use, or otherwise cause offense. Typical businesses in this category include delivery services, ride-hailing platforms, customer service platforms, online marketplaces where buyers and sellers meet, and gaming platforms with real-time multiplayer features.
Risk of incorrect content sharing among target groups/communities
Many companies find it critical to provide their internal focus groups or communities with accurate, verified answers to any questions raised on the system. If incorrect code, details, or information circulates on such platforms, it may adversely affect clients’ businesses. Focus groups are commonly used by technology companies, such as IT firms and technical service providers.
To safeguard brand image and prevent users from viewing inappropriate content on the web, companies often set up internal review teams to check content posted online. As volumes grow, specialized service providers are enlisted to handle this work more efficiently, accurately, and cost-effectively.
Expert Market Research estimates that the global content moderation solution market reached a value of $5.3 billion in 2020 and is expected to grow at a CAGR of 12.6% over the forecast period of 2021-2026.
Benefits of content moderation services in today’s scenario
AdWeek reports that 85% of users are more influenced by UGC than by content published directly by brands. For multinational companies and brands to succeed in the market, content moderation services need serious attention. The use cases below show how content moderation solutions can help protect and manage companies’ brand image.
Business cases for adopting content moderation services
The importance of user sentiment has grown with the rise of the internet and social media, and content moderation is relevant to most industries and sectors today. It plays a vital role in the following use cases:
Use Case 1:
Gaming platforms attract a large number of young users, including school and college students. Children are sometimes exposed to abusive comments, posts, and group mockery within the player community, which can leave a lasting psychological impact.
Solution:
The content on these platforms can be pre-moderated to protect vulnerable audiences. Chats and video interactions are closely monitored to ensure the safety of children and other users.
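To illustrate what pre-moderation means in practice, here is a minimal sketch of a hold-and-release queue; the function names and data structures are hypothetical, not any platform’s actual API:

    # Hypothetical pre-moderation queue: content is held and published only after approval.
    from collections import deque

    pending = deque()    # submissions awaiting human review
    published = []       # content visible to the community

    def submit(content: str) -> None:
        """Hold new content; nothing is visible until a moderator approves it."""
        pending.append(content)

    def review_next(approve: bool) -> None:
        """A moderator approves or rejects the oldest pending item."""
        content = pending.popleft()
        if approve:
            published.append(content)   # released to the community
        # rejected content is simply never published

    # Example: a chat message is held, reviewed, then released.
    submit("Good game everyone!")
    review_next(approve=True)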
Use Case 2:
Platforms such as matrimonial and dating sites, online product review sites, yellow-pages listings, appointment scheduling services, and recruitment platforms all allow users to interact with one another or rely on information provided by others. These companies need to ensure that reviews are not misleading and that users are not harassed.
Solution:
Interactions on such platforms are closely monitored, and users are notified if any suspicious activity is detected. Users are also protected from comments and interactions that are flagged as inappropriate.
Use Case 3:
Marketplaces where products are listed, sold, and delivered online raise both review and authenticity concerns. Interactions with delivery persons, salespersons, and customer service executives can turn into tough conversations and lead to outbursts on either side. This category includes electronics and food delivery, ride-sharing platforms, property rental and buy/sell apps, product resale platforms, home service apps, and more.
Solution:
Moderating content on such platforms ensures that reviews are authentic and not posted with malicious intent to harm a product or brand’s reputation. AI can help identify such cases faster and track the conversations between parties to ensure everyone’s safety.
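As one illustrative example of how AI can assist (a toy heuristic, not a production fake-review detector), a burst of near-duplicate reviews can be surfaced automatically for human inspection:

    # Hypothetical heuristic: surface near-duplicate reviews for human inspection.
    from difflib import SequenceMatcher

    def near_duplicates(reviews, threshold=0.9):
        """Return pairs of reviews whose text similarity exceeds the threshold."""
        flagged = []
        for i in range(len(reviews)):
            for j in range(i + 1, len(reviews)):
                if SequenceMatcher(None, reviews[i], reviews[j]).ratio() >= threshold:
                    flagged.append((reviews[i], reviews[j]))
        return flagged

    # The first two suspiciously similar reviews are flagged; the third passes.
    suspicious = near_duplicates([
        "Amazing product, changed my life, buy it now!",
        "Amazing product, changed my life, buy now!",
        "Decent quality, but shipping was slow.",
    ])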
Use Case 4:
Crowdsourced knowledge platforms let the public share articles, blogs, and views, or propose improvements and corrections to existing articles. Without checks to pre-authenticate and verify submissions, incorrect information is often published.
Solution:
Companies involved in knowledge sharing can benefit from pre-moderation, systematically verifying articles and corrections before publication in order to build a reliable knowledge base.
Opportunities
The COVID-19 pandemic has forced almost all companies worldwide to go online and expand their virtual presence among users and prospective customers. Growing access to business and product information online makes content moderation solutions all the more necessary, as people are exposed to significant manipulation and risk in online business information.
According to TrustYou, 95% of travelers read online reviews before booking a leisure trip. Per Website Builder and Tnooz, the average leisure traveler spends 30 minutes reading reviews before booking, and 10% spend more than an hour. A correct online image is therefore extremely important for industries like travel, restaurants, and hotels.
According to a report by eMarketer, consumers trust customer reviews 12 times more than manufacturers’ descriptions, while the Spiegel Research Center notes that online product reviews can increase conversion rates by more than 270%. Given how heavily user reviews shape perception, closely monitoring online reviews and what they indicate about companies is critical.
A Glassdoor study found that 83% of job seekers research company reviews and ratings online before applying for a job, and 33% would not consider a company rated three stars or below.
According to BrightLocal’s Local Consumer Review Survey 2020, 79% of healthcare consumers prefer online reviews over personal recommendations, and research firm Software Advice reports that 71% of patients read reviews online before choosing a doctor.
Challenges
Some fake and misleading reviews and suggestions escape detection, slipping past published guidelines, applied filters, and even AI systems.
Content moderators are constantly exposed to extreme, abusive, and malicious content for long periods, posing challenges to their mental and emotional wellbeing.
Content often reflects regional dialects or colloquial usage rooted in a particular geographic area, which may be acceptable in some regions but not in others. Content moderators need to be aware of this localization to take action at the right time.
Having a content moderation solution in place does not make it foolproof: users are located worldwide and speak different languages, each open to its own interpretation. The nature of content is not binary; it is open to perspective and carries a personal aspect. Even so, businesses without a content moderation solution will feel the impact sooner or later.
How Wipro helps companies leverage this opportunity
Wipro delivers content moderation services to clients through its global delivery centres, both offshore and onsite.
By creating a filtering and reviewing mechanism, Wipro provides a human-driven content moderation solution to a wide range of clients. Wipro also uses industry and domain experts for technical content, particularly for closed community groups within a company.
Wipro’s technology enablers that further strengthen our solution for our clients’ content moderation services include:
Hybrid content moderation process
Wipro’s content moderation solution uses a hybrid (AI and manual) process to filter content.
Figure 1: Wipro’s hybrid content moderation process
The dual filtration process ensures that the AI system continues to learn and improve with time.
Manually reviewed content is closely monitored and handled by a team of content experts. Data from this flagged content is also fed into the AI system database to detect similar content automatically in the future, without the need for users to flag it.
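To make this feedback loop concrete, here is a minimal sketch of how such a dual-filtration process might work; the thresholds, keyword stub, and function names are illustrative assumptions, not Wipro’s actual implementation:

    # Illustrative sketch of a hybrid (AI + manual) moderation loop.
    # Thresholds, names, and the keyword stub are assumptions, not Wipro's system.
    APPROVE_THRESHOLD = 0.9   # AI confident the content is safe
    REJECT_THRESHOLD = 0.1    # AI confident the content is unsafe

    review_queue = []         # uncertain items escalated to human moderators
    training_data = []        # human verdicts fed back to retrain the model

    def ai_safety_score(content: str) -> float:
        """Stand-in for a trained classifier returning P(content is safe)."""
        flagged = {"hate", "abuse", "violence"}
        hits = len(flagged & set(content.lower().split()))
        return 0.95 if hits == 0 else 0.5 if hits == 1 else 0.05

    def moderate(content: str) -> str:
        score = ai_safety_score(content)
        if score >= APPROVE_THRESHOLD:
            return "published"            # auto-approved by the AI filter
        if score <= REJECT_THRESHOLD:
            return "blocked"              # auto-rejected by the AI filter
        review_queue.append(content)      # second filter: human review
        return "pending_review"

    def record_human_verdict(content: str, is_safe: bool) -> None:
        # Each manual decision becomes a labeled example, so similar
        # content can be detected automatically in the future.
        training_data.append((content, is_safe))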
The manual review team is made up of experts skilled in content moderation. Wipro ensures that its employees receive pre-engagement training and refresher programs on content moderation, equipping them with the cultural nuance and domain-specific knowledge needed to exceed client expectations.
The future of content moderation
As the world becomes an increasingly interconnected virtual network and companies emphasize their online presence amid COVID-19, users will rely ever more on UGC and reviews, which will significantly impact business and brand image.
Content moderation aims to protect users by closely monitoring and filtering out malicious, spiteful, fake, and abusive content.
Wipro helps companies achieve this goal through manual content moderation solutions and AI-enabled technologies, resulting in a more efficient and faster process. Wipro’s flexibility in adapting to client requirements makes it a preferred partner for global companies.
For more details on Wipro’s highly effective content moderation approach, connect with us.
About the authors
Gayatri Athreyan
Practice Manager – Technical Publications, Wipro
Gayatri heads Wipro’s Digital Content Practice, which develops content solutions to meet the needs and challenges of customers. She has over 28 years of content experience in various roles and is passionate about how content is designed, delivered, and consumed by end-users.
Kunal Jain
Presales Consultant, Wipro
Kunal is a Presales Consultant with the digital content practice of Knowledge Services, where he supports business growth, GTM planning, and solution pitching. As part of Knowledge Services, he is involved in the Geo-Spatial Information Systems (GSIS) practice. He holds an MBA from the Indian Institute of Technology (IIT) Delhi.