Cyberbullying In The Metaverse


One of the biggest ideas in product design right now is the metaverse. Numerous articles explain what the metaverse is and how it might replace the current internet. Far fewer discuss the risks that this technology poses to our civilization.

If we don’t carefully build the metaverse with the public’s greater good in mind, it could lead to a number of issues.

One of the biggest issues with the internet today is cyberbullying. An estimated 37% of young people between the ages of 12 and 17 have experienced online bullying, and cyberbullying increases the risk of self-harm and suicide among young people.


The metaverse, however, could escalate this issue. Reading derogatory comments on Twitter or Facebook is one thing; facing an abuser right in front of you in cyberspace is quite another.

As the virtual world immerses us more deeply, the borders between our real and virtual identities will blur.

As a result, another person’s disrespectful or abusive actions will feel far more personal. If it is not properly moderated, cyberbullying in the metaverse could leave many more victims with psychological trauma.

But moderation is impossible without clear guidelines for acceptable behavior. The designers of the metaverse will probably create a set of guidelines that will enable moderators and AI-based tools to assess user conduct in the virtual environment and respond appropriately.

The guidelines will probably be based on ethical standards found in the actual world, with some consideration for how people interact in virtual environments.
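To make that idea concrete, here is a minimal sketch in Python of how an automated conduct check might map guideline thresholds onto moderation actions. Everything in it is hypothetical: the thresholds, the toy severity scores, and the function names stand in for the trained classifiers and escalation pipelines a real platform would build.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    WARN = "warn"
    ESCALATE = "escalate"  # route to a human moderator

# Hypothetical thresholds; a real platform would tune these against
# its own Community Guidelines and use a trained toxicity classifier.
WARN_THRESHOLD = 0.5
ESCALATE_THRESHOLD = 0.8

# Toy severity scores standing in for a real toxicity model.
SEVERITY = {"insult": 0.6, "threat": 0.9, "slur": 0.95}

@dataclass
class ChatEvent:
    user_id: str
    text: str

def severity_score(event: ChatEvent) -> float:
    """Return the highest severity found in the message (0.0 = clean)."""
    words = event.text.lower().split()
    return max((SEVERITY.get(w, 0.0) for w in words), default=0.0)

def assess(event: ChatEvent) -> Action:
    """Map a message's severity score onto a moderation action."""
    s = severity_score(event)
    if s >= ESCALATE_THRESHOLD:
        return Action.ESCALATE
    if s >= WARN_THRESHOLD:
        return Action.WARN
    return Action.ALLOW

print(assess(ChatEvent("avatar_42", "nice build!")))       # Action.ALLOW
print(assess(ChatEvent("avatar_07", "that is a threat")))  # Action.ESCALATE
```

The key design point is the split between automated triage and human judgment: low-severity conduct gets an automatic warning, while the most serious cases are escalated to a moderator rather than punished by the machine alone.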

According to an internal memo seen by The New York Times, Meta encouraged its staff to volunteer to test the metaverse.

According to a company spokeswoman, an unknown person recently touched the avatar of one Horizon Worlds tester.

Tracking misbehavior in virtual reality is typically challenging, since incidents often happen in real time and are never recorded.

Titania Jordan, the chief parent officer of Bark, a company that uses artificial intelligence to monitor children’s devices for safety concerns, expressed particular worry about what kids might encounter in the metaverse.

She claimed that offenders might speak to youngsters through their headphones or send them chat messages in a game, behaviors that are impossible to record.
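One way platforms could address the no-recording problem is to keep a short rolling buffer of recent activity on the user’s device, so that an abuse report can attach the context that would otherwise be lost. The sketch below is a hypothetical illustration in Python; the RollingBuffer class, its parameters, and the event format are assumptions for this example, not any platform’s actual API.

```python
import time
from collections import deque

class RollingBuffer:
    """Keep only the last `window_seconds` of events in memory.

    Nothing is persisted unless the user files an abuse report,
    at which point the recent context is exported as evidence.
    """

    def __init__(self, window_seconds: float = 120.0):
        self.window = window_seconds
        self.events = deque()  # (timestamp, event) pairs

    def record(self, event: str) -> None:
        now = time.monotonic()
        self.events.append((now, event))
        # Drop anything older than the retention window.
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()

    def export_report(self) -> list:
        """Snapshot the buffer when the user files an abuse report."""
        return [event for _, event in self.events]

buf = RollingBuffer(window_seconds=120.0)
buf.record("voice: <utterance from avatar_17>")
buf.record("chat: <message from avatar_17>")
print(buf.export_report())  # the last two minutes of context
```

Because the buffer holds only the last couple of minutes and is exported only when a report is filed, it can capture evidence of harassment without subjecting every user to continuous surveillance.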

Different organizations are designing the metaverse with different procedures in place to prevent interpersonal victimization, but for the moments when people are inevitably targeted and hurt, norms and rules must be in place – and rigorously enforced.

To put it another way, any virtual environment must have a thorough (and regularly updated) set of Community Guidelines that spell out acceptable behavior and the disciplinary measures that follow violations.

