Within the growing mass of User Generated Content (UGC) is a continuous influx of violent, exploitative, hateful, and otherwise harmful images, videos, and speech. This content defeats the purpose of social media as a platform for creative expression and personal connection. The work of protecting users from egregious content and restoring the Internet to its original purpose falls largely to a noble crop of content moderators. Content moderators are our digital guardians, our superheroes, tediously flagging objectionable content to make the Internet a safe space for everyone.
Potential wellbeing risks faced by content moderators
Currently, there is limited scientific data and research on the impact of content moderation on moderators' psychological wellbeing. However, extensive research is available on secondary stress, or vicarious trauma, triggered by direct or indirect exposure to someone else's traumatic experience in other professions, such as Internet child exploitation investigators and journalists covering traumatic events like wars, homicide, or natural disasters. This publication from the National Collaborating Centre for Mental Health (UK) underscores the long-term effects of secondary stress: “We know that within the broader population of people exposed to secondary trauma, 7.8% experience lifelong symptoms, whereas 3.6% will have a 12-month period at which they exhibit full criteria for PTSD”. Borrowing from this secondary traumatic stress literature, a preliminary understanding may be shaped of the potential psychological impact on content moderators.
The role of AI in content moderation
AI is a vital technology that continues to grow in its effectiveness in content moderation. Machine learning, optical recognition, and natural language processing are effective at identifying fallacious stories, flagging specified hate phrases, and spotting offensive images. However, the complexities of determining meaning, context, and intent are well beyond the current capabilities of pure automation. As a result, humans bear a large share of moderation duties, making it urgent to put in place a structured approach that keeps wellbeing at the heart of the content moderation process.
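To make this division of labor concrete, the sketch below shows, in simplified Python, how an automated first pass might act only on clear-cut cases and route everything ambiguous to human moderators, who judge meaning, context, and intent. The phrase list, scoring heuristic, and thresholds are hypothetical placeholders for illustration, not any platform's actual policy, model, or API.

```python
# Illustrative sketch only: a simplified hybrid moderation pipeline in which
# automated checks handle clear-cut cases and ambiguous items are escalated
# to human moderators. All phrases, scores, and thresholds are hypothetical.

from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    AUTO_REMOVE = "auto_remove"      # confident automated removal
    AUTO_ALLOW = "auto_allow"        # confident automated approval
    HUMAN_REVIEW = "human_review"    # meaning/context/intent unclear -> escalate


@dataclass
class ModerationResult:
    decision: Decision
    reason: str


# Hypothetical list of specified hate phrases; real systems maintain far
# larger, curated, multilingual policy lexicons.
BLOCKED_PHRASES = {"example hate phrase", "another banned phrase"}


def toxicity_score(text: str) -> float:
    """Stand-in for a trained classifier (e.g., an NLP model returning a
    probability of policy violation). Here, a crude keyword heuristic."""
    flagged_terms = sum(term in text.lower() for term in BLOCKED_PHRASES)
    return min(1.0, 0.5 * flagged_terms)


def moderate(text: str,
             remove_threshold: float = 0.9,
             allow_threshold: float = 0.2) -> ModerationResult:
    """Auto-act only when the score is confidently high or low; otherwise
    route the item to a human moderator."""
    score = toxicity_score(text)
    if score >= remove_threshold:
        return ModerationResult(Decision.AUTO_REMOVE, f"score={score:.2f}")
    if score <= allow_threshold:
        return ModerationResult(Decision.AUTO_ALLOW, f"score={score:.2f}")
    return ModerationResult(Decision.HUMAN_REVIEW, f"score={score:.2f}")


if __name__ == "__main__":
    for post in ["a friendly holiday photo caption",
                 "example hate phrase aimed at a group",
                 "sarcastic quote of an example hate phrase to condemn it"]:
        print(moderate(post))
```

Note that the second and third sample posts score identically even though one condemns the phrase it quotes; resolving that kind of contextual distinction is exactly the work that falls to human moderators.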
Furthermore, due to the sensitive nature of their work, content moderators may not be able to openly discuss their work with family and friends the way most of us can. Because content moderation is a relatively new profession, much remains to be understood about its impact. This calls attention to the psychological wellbeing of content moderators.
Preventing, protecting, and promoting wellbeing
On the brighter side, there is no need to wait for any potential impact of content moderation to set in before taking necessary action. Borrowing from the existing literature on similar trauma-exposure professions, any suspected impact of content moderation can be mitigated before it grows by providing wholesome wellbeing programs to content moderators.
It is recommended that organizations' wellbeing programs be governed by the Occupational & Psychological Health and Safety model. These programs must span the employee life cycle, with a focus on preventive, protective, and promotive approaches to employee wellbeing covering biopsychosocial aspects. Ideally, wellness initiatives should be multilevel and multifaceted, ranging from the individual and group levels to the organizational level. Research and data-driven decision-making must be central to the employee wellbeing approach. Programs must be chosen based on the felt or anticipated needs of employees.
It is important to build on the sense of purpose and pride in moderating content, and to equip content moderators with a psychological safety toolkit that gives them the right knowledge, attitudes, and behaviors toward mental wellbeing as they take on the role of digital guardians. Additionally, creating a sense of community support and normalizing mental wellbeing through thought leadership ensures that content moderators have a meaningful and positive work journey.
Dr. Aparna Samuel Balasundaram
Dr. Aparna Samuel Balasundaram is the Global Head for Well-being and Resilience for the Consumer Business for Wipro-iCORE. She successfully built and executed a clinical and evidence-based approach to employee well-being and psychological health, incorporating a DEI lens for this vertical. She is an award-winning and published thought leader and TEDx speaker with over 24 years of experience in mental health and well-being, with domain expertise in Trust and Safety. She has rich experience across corporate and behavioral healthcare settings, with training in management and clinical aspects from the University of Pennsylvania, New York University, NIMHANS, and TISS, India. She is also an adjunct lecturer at Columbia University, School of Social Work. She lives in Austin but is a global citizen, and makes time to garden and dance as her self-care practice.