Human-machine collaboration in online moderation

In the offline world, governments play a regulatory role: certain behaviors are discouraged through legislation and enforced with the support of supervisory bodies. Platforms are required to comply with the laws of every country in which they provide their services, but through online moderation they also hold decision-making power over the conduct users must maintain within their digital borders.

We can speak of regulatory platforms, albeit limited to their own space: they decide what can or cannot circulate, and in what forms of expression. Moderation on social networks and platforms in general involves two agents: the moderation algorithm and the human moderator. Each tries to compensate for the shortcomings of the other, in a supervision that is integrated but far from perfect.

Online moderation algorithms

Online moderation algorithms are automated systems that execute the directives of their programmers. They are most often based on machine learning: they compare elements already known to the algorithm with elements found in user-published content, looking for matches that suggest the content may be inappropriate. Predictive models go further still, attempting to identify new elements and learn to recognize them.
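
What follows is a minimal, hypothetical sketch in Python of this matching idea: a tiny text classifier trained on already-labelled examples that scores new posts by how closely they resemble content previously judged inappropriate. The training sentences, labels and scikit-learn pipeline are invented for illustration; no platform's real model works on so little data.

```python
# Minimal illustrative text-moderation classifier (not any platform's real system).
# It learns from a tiny labelled sample and scores new posts by how much their
# features resemble content already marked as inappropriate.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical reviewed posts: label 1 = violates policy, 0 = acceptable.
train_texts = [
    "buy followers cheap click here",
    "you are completely worthless",
    "lovely sunset at the beach today",
    "great match last night, what a goal",
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# New posts are scored; anything above a chosen threshold would be flagged.
for post in ["you are worthless", "what a sunset!"]:
    score = model.predict_proba([post])[0][1]   # probability of the "violation" class
    print(f"{post!r} -> violation probability {score:.2f}")
```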

Regulatory platforms have decision-making and sanctioning power over users' conduct within the limits of their digital space

Moderation algorithms are not so much a convenience as a real necessity, because the volume of content to be checked is enormous. Circumventing this type of filter, however, is often not difficult. You have probably come across posts with "disguised words", where an "o" is replaced by a 0, an "a" by a 4, and so on. Words the algorithm would consider inappropriate are not detected and therefore pass automatic checks unnoticed. Evasion techniques of this kind evolve continuously, often staying a step ahead of algorithmic supervision.
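
As a hedged illustration of how little it takes to slip past a keyword filter, the sketch below compares a naive check against a blocklist with one that first maps common character substitutions back to letters. The blocklist, the substitution table and the example post are invented.

```python
# Why "disguised words" evade a naive keyword filter, and how a simple
# normalisation step narrows the gap. All words and mappings are made up.
BLOCKLIST = {"spam", "scam"}
LEET_MAP = str.maketrans({"0": "o", "4": "a", "3": "e", "1": "i", "$": "s"})

def naive_filter(text: str) -> bool:
    # Flags a post only if a blocked word appears verbatim.
    return any(word in BLOCKLIST for word in text.lower().split())

def normalised_filter(text: str) -> bool:
    # Undoes common substitutions (0->o, 4->a, ...) before checking the blocklist.
    cleaned = text.lower().translate(LEET_MAP)
    return any(word in BLOCKLIST for word in cleaned.split())

post = "join this great sc4m right now"
print(naive_filter(post))       # False: the disguised word goes unnoticed
print(normalised_filter(post))  # True: normalisation restores "scam"
```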

Another major problem with moderation algorithms is the lack of context. The long-running ethical debate over how far machines can be trusted with choices finds significant material in the algorithms' inability to recognize context. For example, an artistic nude is not always distinguished from a pornographic one, and ends up being removed anyway. Other content escapes algorithmic moderation because it is not recognized as problematic: this is the case of images that comply with the guidelines but are published without the consent of the person portrayed, or of fake profiles created with stolen photographs.

Algorithmic discrimination: prejudice passed from human to machine

When Victoria's Secret model Candice Swanepoel posted a photo on her Instagram profile with her shirt unbuttoned and only her hand covering her breasts, the community responded with numerous likes. When Australian comedian Celeste Barber ironically reproduced the same photo, accompanied by a humorous caption, the online moderation algorithm censored the image.

This led to a series of accusations that the platform was using a fat-phobic algorithm. The only real difference between the two published images, in fact, lies in the physiques of the two women.

Millions of Instagram users have realized one thing that we in marginalized communities have known for some time: the Instagram algorithm prefers thin, white, cisgender people and in fact censors the rest.

Lacey-Jade Christie, British writer and activist

Instagram's leadership promptly apologized for the incident, announcing significant changes to its online moderation. Celeste Barber's is not the only case in which algorithms have "made discriminatory choices". Nyome Nicholas-Williams also had one of her artistic nude photos censored. In the image, the model sat with crossed legs and her breasts covered by her arms; it violated none of Instagram's guidelines.

Millions of photos of very thin white women can be found on Instagram every day, but a big Black woman celebrating her body gets banned? It was a shock for me; I feel like I've been silenced

Nyome Nicholas-Williams

In this case too, the platform's leadership apologized, stating that censoring minorities and specific communities is not their practice: it was supposedly just an unfortunate mistake. In reality, an algorithm's errors are a projection of the prejudices of whoever writes it. The very data that automated moderation processes draw on can be tainted by biases that already exist in offline society.

Machine learning algorithms rely on massive amounts of training data that can include large databases of photos, audio, and video. It is well known and documented that datasets are susceptible to intentional and unintentional bias. How specific concepts are represented in images, video and audio can be subject to bias in terms of race, gender, culture, ability, and more. In addition, multimedia content randomly sampled from real-world data can also contribute to amplifying real-world bias.

From this perspective, the quality of the data is crucial: it must be free from any form of bias capable of distorting the algorithm's evaluation. An error can lead to the removal of a post that complies with the platform's guidelines, or to far worse consequences, as happened to Robert Julian-Borchak Williams, a Black man arrested in Detroit because of an algorithmic error. The facial recognition algorithms used by US law enforcement to support investigations draw on databases composed mostly of white male faces, which makes it harder to correctly identify people with different physical characteristics, such as Robert's dark skin.
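
Disparities of this kind are commonly quantified with the false match rate (FMR): how often a system wrongly declares that two different people are the same person. The sketch below computes a per-group FMR over an invented evaluation set, purely to show how unbalanced training data surfaces as unequal error rates; none of the numbers come from a real system.

```python
# Per-group false match rate (FMR) on a toy, invented evaluation set.
# Each record: (demographic group, truly the same person?, system said same person?)
from collections import defaultdict

evaluations = [
    ("group_a", False, False), ("group_a", False, False), ("group_a", False, True),
    ("group_b", False, True),  ("group_b", False, True),  ("group_b", False, False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [false matches, impostor comparisons]
for group, same_person, predicted_same in evaluations:
    if not same_person:                 # only impostor pairs count towards FMR
        counts[group][1] += 1
        if predicted_same:              # the system wrongly declared a match
            counts[group][0] += 1

for group, (false_matches, impostor_pairs) in counts.items():
    print(f"{group}: FMR = {false_matches / impostor_pairs:.2f}")
```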

It is no surprise, then, that human intervention is urgently needed, and it does make up for algorithmic deficits quite effectively: but at what price?

Online moderation: when your job is to see the worst of the internet

There are more than 100,000 of them, working for social networks to scan for inappropriate content and remove it from users' view. They make no noise and remain in the shadows, bound by the strict confidentiality agreements signed upfront: they are the content moderators.

Sarah T. Roberts, a social media scholar and lecturer at the University of California, conducted ethnographic research on moderators. The results are collected in her book "Behind the Screen – Content Moderation in the Shadows of Social Media". Roberts describes the controversial world of these "digital referees", exposed for eight hours a day to the worst the internet can offer: a job with an enormous emotional load that could hardly be balanced even by a generous salary, which most of the time is absent.

We cannot identify the content moderator: we do not know who they are, nor what their actual position is with respect to the platform they work for (community manager? contractor?), and we do not even know whether it is really they who remove our posts deemed non-compliant (or could it be the algorithm instead?). What we do know about these mysterious figures we owe, in part, to a former moderator.

The case of Selena Scola

Selena Scola, a former Facebook moderator, marked the real turning point, opening the door to greater awareness of the neglected mental health of platform moderators. She sued Zuckerberg's company over the post-traumatic stress disorder she allegedly developed during her moderation work. Selena Scola held the role for nine months, during which she was exposed every day to raw images with strongly violent, offensive and explicit content: nine months that were enough to severely compromise her mental health.

Selena Scola reports, through her lawyers, that she has trouble carrying out ordinary activities, such as using a mouse; entering a cold room or hearing loud noises also causes her serious psychological distress. Following the complaint, Facebook allocated a large sum to compensate employees who developed psychological problems as a result of their moderation work for the company. The testimonies of numerous current and former moderators confirm a psychological condition that is particularly at risk, worn down by constant exposure to graphic and disturbing images.

Limiting the psychological harm to moderators

To try to limit the psychological damage to moderators, Facebook has implemented a series of ad hoc measures: the content to be reviewed is reproduced in black and white and stripped of its audio, so as to reduce the impact of exposure. The company also offers its moderators a psychological support service and the chance to take part in individual or group psychotherapy sessions.
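
To give a rough idea of what the black-and-white measure amounts to in practice, here is a minimal sketch using the Pillow imaging library to desaturate a flagged image before it reaches a reviewer's queue; the function and the file names are hypothetical.

```python
# Desaturating a flagged image before human review, to soften its visual impact.
# Assumes Pillow is installed; file names are placeholders.
from PIL import Image

def desaturate_for_review(source_path: str, review_path: str) -> None:
    # "L" mode is 8-bit grayscale: colour information is discarded entirely.
    with Image.open(source_path) as img:
        img.convert("L").save(review_path)

desaturate_for_review("flagged_upload.jpg", "flagged_upload_review.jpg")
```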

Artificial intelligence is another key element in protecting the human component. When the machine filters out a large share of the content that is not allowed, it reduces both the number of moderators required and the amount of content they have to view. This is where human-machine collaboration takes shape.
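
That division of labour can be pictured as a simple triage step: the model settles the near-certain cases at both extremes, and only the ambiguous middle band reaches a human moderator. The sketch below illustrates the idea with invented thresholds and a made-up violation score; it is not any platform's actual pipeline.

```python
# Toy triage: automated decisions at the extremes, human review in between.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    violation_score: float  # assumed output of an upstream model, in [0, 1]

AUTO_REMOVE = 0.95   # near-certain violations are removed automatically
AUTO_APPROVE = 0.05  # near-certain safe content is published without review

def triage(post: Post) -> str:
    if post.violation_score >= AUTO_REMOVE:
        return "removed automatically"
    if post.violation_score <= AUTO_APPROVE:
        return "published without review"
    return "queued for human review"  # only this band ever reaches a moderator

for p in (Post(1, 0.99), Post(2, 0.50), Post(3, 0.01)):
    print(p.post_id, "->", triage(p))
```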

Human-machine collaboration: imperfect but necessary

Social networks are expected to provide increasingly accurate online moderation, given the enormous quantity of material produced and their pervasiveness. The intertwining of our online and offline lives, and the effects of the digital sphere, which Selena Scola and the other moderators have shown to be concrete and tangible, make clear how important it is to supervise user-posted content effectively. It is an urgency that clashes with the problems described so far: on one side, the imperfect operation of the machine, unable to grasp context and easy to evade; on the other, the need to protect the health of the workers. The collaboration between human and machine once again takes on a compensatory and imperfect character.

Article by Ivana Lupo

The article Human-Machine Collaboration in Online Moderation comes from Tech CuE | Close-up Engineering.