All the challenges of moderation in the metaverse

The metaverse has been a hot topic in recent months, and we wanted to know more about the problems and challenges of moderating it. According to Meta, ineffective moderation of its virtual world could kill the project. It is therefore a major challenge for the development of these new spaces.

We interviewed Hervé Rigault, CEO of Netino by Webhelp, to share his vision on this topic. He is notably responsible for Netino's new offering, which provides brands with moderation and community management strategies on web3. A fascinating exchange at the crossroads of technological, political, and social debate.


It is hard to picture what moderation means in the metaverse. Can you explain to us what it consists of?

Broadly, it is the fight against reprehensible behavior in the spaces created by metaverse platforms. Unlike traditional social networks, the metaverse aims to create a hyper-immersive experience that engages as many of the senses as possible (even if it remains a digital experience).

Overall, we need to be able to predict and prevent all toxic and “deviant” behaviors among the users of these spaces. I use the term “deviant” with caution, because it also means defining a norm and which behaviors deviate from it.

What are the inherent difficulties of moderation in these virtual spaces?

Interactions take place live, which is a strong constraint to manage. Handling real time means giving users self-protection features, since it is obviously impossible to monitor every person with humans. Several such features already exist for platform users: mute, to silence a user; safety bubbles, to prevent other users from entering one's personal space; the ability to alert the community; and so on.
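To make these mechanics concrete, here is a minimal, purely hypothetical sketch of how such self-protection features could work; the `SafetySettings`, `Avatar`, and `can_interact` names are invented for illustration, and the safety bubble is modeled as a simple minimum-distance check, which has the advantage of running locally and instantly, matching the live constraint described above.

```python
import math
from dataclasses import dataclass, field

@dataclass
class SafetySettings:
    muted_users: set = field(default_factory=set)  # users this person has silenced
    bubble_radius: float = 1.5                     # personal space, in virtual meters

@dataclass
class Avatar:
    user_id: str
    x: float
    y: float

def can_interact(settings: SafetySettings, me: Avatar, other: Avatar) -> bool:
    """Block muted users, and anyone inside the safety bubble."""
    if other.user_id in settings.muted_users:
        return False
    distance = math.hypot(other.x - me.x, other.y - me.y)
    return distance >= settings.bubble_radius

settings = SafetySettings(muted_users={"troll42"})
me = Avatar("alice", 0.0, 0.0)
print(can_interact(settings, me, Avatar("troll42", 5.0, 0.0)))  # False: muted
print(can_interact(settings, me, Avatar("bob", 0.5, 0.5)))      # False: inside the bubble
print(can_interact(settings, me, Avatar("bob", 3.0, 0.0)))      # True: far enough away
```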

Some platforms lean towards this self-regulation strategy, where the community itself sanctions users: blocking for a few days, exile… For our part, on The Sandbox we have chosen to create a community of ambassadors who welcome newcomers and explain the rules and how the space works. We apply the same approach for the brands we work with that have spaces in the metaverse.

We therefore help users both discover this new space and maximize the quality of their experience, but our ambassadors are also present to manage reprehensible behavior.

What forms of harassment are users exposed to on these platforms?

In these spaces, the forms of harassment are manifold: they go far beyond “written” harassment and can resemble “physical” harassment. Anyone can sense the violence of receiving an aggressive harassing message: it is violent, even though it is only characters on a page. If the user wears a virtual reality headset and feels someone approaching, entering their intimate or personal space, the aggression becomes almost physical. Virtual reality headsets can create trauma quite close to what we experience in “real life”.

The metaverse must not be a lawless space that serves as an outlet for people who feel less and less free in everyday life, a place where they could free themselves from all constraints and all morality.

That would be destructive and dangerous. One of the most fundamental human needs is safety. So when you create a world like the metaverse, you have to take care of its inhabitants and consider the avatar an extension of the “real” human being.

This topic is deeply social, because it is about consent. Users must not be subjected to unwanted experiences. The metaverse is fabulous when it is lived as an experience, but users must keep control over that experience, and no behavior they consider toxic should be imposed on them.


Is moderation in the metaverse therefore a “technical” and human subject, but above all a political one?

It should be remembered that these spaces are created by private companies, so they are governed by those companies' own rules, by their visions of freedom of expression and of its limits. I think we should take a political approach to this topic and think of the metaverse as the organization of a city, of a common space. This is a major challenge: when platforms create worlds, they have to invent the rules of those worlds.

This is an issue that must also be considered with states and with national and transnational institutions, because the actors of the metaverse are global actors. Leaving it to private companies, ourselves included, to dictate the law in spaces that are less and less virtual poses a real political problem.

In my opinion, public authorities need to act very quickly at the European level, or at least at the national level. It took legislators nearly 20 years to regulate web2, even partially, and to decide on its moderation. We must be careful not to take another 10 or 15 years to deal with web3, as we did with web2.
If every platform, every world, has its own laws, those who want to behave deviantly will simply go to the most permissive one. And some platforms will deliberately be less strict in order to attract a large audience.

I have always believed that Netino, through its moderation activity, has a real political mission in the original sense of the term. But we must be careful not to become mere subcontractors doing whatever a transnational private entity, detached from local law, might ask of us. A vast topic, then, and much deeper than simple “moderation”!

As on social media, is this moderation increasingly handled by AI?

The Avia law against hateful content on the internet requires platforms to moderate illegal or hateful content within 24 hours. This obligation to process millions of pieces of content so quickly forced moderation to be automated. Today, at Netino, 90 to 95% of social network moderation is handled automatically.
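As an illustration of how such a largely automated pipeline can work, here is a hedged sketch: a scoring model decides high-confidence cases on its own and escalates the ambiguous middle band to human moderators. The `toxicity_score` function is a toy placeholder for a real model, and the thresholds are invented, not Netino's actual figures.

```python
def toxicity_score(message: str) -> float:
    """Toy placeholder for a real model: returns a probability-like score in [0, 1]."""
    toxic_markers = {"idiot", "hate", "kill"}
    words = message.lower().split()
    hits = sum(word.strip(".,!?") in toxic_markers for word in words)
    return min(1.0, 3 * hits / max(len(words), 1))

AUTO_REMOVE = 0.9   # confident enough to remove without a human
AUTO_APPROVE = 0.1  # confident enough to publish without a human

def moderate(message: str) -> str:
    score = toxicity_score(message)
    if score >= AUTO_REMOVE:
        return "removed"        # handled automatically
    if score <= AUTO_APPROVE:
        return "approved"       # handled automatically
    return "human_review"       # the ambiguous remainder goes to moderators

for msg in ["have a nice day",
            "I hate you, idiot",
            "what an idiot move by the devs yesterday honestly"]:
    print(msg, "->", moderate(msg))
```

The design point is that humans only see the cases the model cannot settle, which is how a team can claim that the vast majority of volume is handled automatically.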

There is a lot of work and development on automatic moderation in the metaverse. Platforms work in particular with film studios to reproduce aggressive behaviors with actors, so that the AI can be trained to recognize them. It is very much an ongoing topic.
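A minimal sketch of that training setup, under the assumption that staged interactions are reduced to simple movement features and labeled by hand; the features and data below are invented, and real systems would rely on far richer signals (voice, gestures, full trajectories).

```python
from sklearn.ensemble import RandomForestClassifier

# Each staged scene is reduced to a few movement features:
# [approach_speed_m_per_s, seconds_inside_personal_space, contact_attempts]
X_train = [
    [0.3, 0.0, 0],   # actors playing ordinary encounters
    [0.5, 1.0, 0],
    [0.8, 2.0, 1],
    [2.5, 8.0, 4],   # actors playing staged aggressions
    [3.0, 12.0, 6],
    [2.0, 6.0, 3],
]
y_train = [0, 0, 0, 1, 1, 1]  # 0 = benign, 1 = aggressive

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Score a new live interaction: a fast approach with prolonged intrusion.
print(model.predict([[2.8, 9.0, 5]]))  # likely [1], i.e. flag for action or review
```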

We used to adapt to the rules of the platform or the brand we were working with, but with the emergence of the metaverse, I think it is the end user who will have to decide what to accept and what they are willing to be exposed to. The user must have access to a list of behaviors they do or do not accept. I strongly believe in this approach, which seems to me the only effective one in the context of live interactions.
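One way to picture this user-side consent model, as a hypothetical sketch (the category names and the `UserPreferences` class are invented): each user declares the behavior categories they accept, and live events are filtered against that declaration before reaching them.

```python
class UserPreferences:
    """Behavior categories this user has explicitly opted into."""
    def __init__(self, accepted: set):
        self.accepted = accepted

    def allows(self, behavior: str) -> bool:
        return behavior in self.accepted

def deliver(event_behavior: str, prefs: UserPreferences) -> bool:
    """Only expose the user to behaviors they have consented to."""
    return prefs.allows(event_behavior)

# One user accepts trash talk in a combat game; another does not.
competitive_player = UserPreferences({"strong_language", "close_proximity"})
cautious_visitor = UserPreferences({"friendly_chat"})

print(deliver("strong_language", competitive_player))  # True: opted in
print(deliver("strong_language", cautious_visitor))    # False: filtered out
```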

You work a lot on The Sandbox. Is this expertise and way of working something that can be replicated on other platforms?

What we offer is much more comprehensive than moderation: we provide community management and engagement. It is something of a return to the beginnings of community management, with the desire to humanize these spaces. We have about a hundred “real” collaborators who put on virtual reality headsets to explore the different spaces and engage users.

We currently have a hundred people working on The Sandbox space, and this is obviously replicable for other spaces. Beyond the platforms, there is also a real need among brands. They want to create spaces, but they don't necessarily know how to engage their “classic” social media communities on web3. We therefore help them with the transition between the two worlds.

