VRChat Institutes a New Safety and Trust System to Combat Griefers


In response to high levels of trolling, griefing, and harassment, the VRChat platform is instituting an incredibly detailed Safety and Trust System:

The VRChat Trust and Safety system is a new extension of the currently-implemented VRChat Trust system. It is designed to keep users safe from nuisance users who employ things like screen-space shaders, loud sounds or microphones, visually noisy or malicious particle effects, and other methods that someone may use to detract from your experience in VRChat.

This system is designed to give control back to the user, allowing users to determine where, when, and how they see various avatar features that may be distracting or malicious if used improperly.

The Trust and Safety system is designed so that, even when left on default settings, the system will ensure that someone can’t attack you with malicious avatar features. Malicious users won’t have these features shown, so you can have a good experience in the metaverse.

Basically, every VRChat user is automatically assigned to one of six levels, based on their past behaviour (e.g. exploring, making friends, creating content):

  • Veteran User
  • Trusted User
  • Known User
  • User
  • New User
  • Visitor (the default rank for brand-new users)

Visitors will not be able to upload content to VRChat until they are promoted to the New User rank.

Additionally, there exists a special rank called “Nuisance”. These users have caused problems for others, and will have an indicator above their nameplate when your quick menu is open. Most of the time, these users’ avatars will be completely blocked. In a future release, users who are sliding toward the “Nuisance” rank will be notified.

Finally, there exists a “VRChat Team” rank, which is only usable by VRChat Team members. When a VRChat Team member has their “DEV” tag on, you’ll see this rank in the quick menu when you select them. If you have doubts that a user with a “DEV” tag is actually on the VRChat Team, just open your Quick Menu, select them, and check out their Trust Rank. If it doesn’t say “VRChat Team” under the avatar thumbnail, then that user is not a member of the VRChat Team, and is likely trying to confuse users. Feel free to take a screenshot and report them to the Moderation team!
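To make the rank ladder concrete, here is a minimal sketch in Python of how the six public ranks and the two special ranks might be modeled. Everything here (the names, the numeric ordering, the helper function) is my own invention for illustration; VRChat has not published how its system is actually implemented.

```python
from enum import IntEnum

class TrustRank(IntEnum):
    # The six public ranks, lowest to highest. The numeric ordering
    # is an assumption; VRChat has not disclosed its implementation.
    VISITOR = 0       # default rank for brand-new users
    NEW_USER = 1      # first rank allowed to upload content
    USER = 2
    KNOWN_USER = 3
    TRUSTED_USER = 4
    VETERAN_USER = 5

# Special ranks that sit outside the normal ladder.
NUISANCE = "Nuisance"        # problem users; their avatars are usually blocked
VRCHAT_TEAM = "VRChat Team"  # staff only, shown when the "DEV" tag is on

def can_upload_content(rank: TrustRank) -> bool:
    """Visitors are the only public rank barred from uploading."""
    return rank >= TrustRank.NEW_USER
```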

For each level of user, you can set what aspects of their avatar will be visible/audible to you in an extremely detailed Safety System:

“Safety” is a new menu tab that allows you to configure how users of each rank are displayed to you in VRChat. This affects many aspects of a user’s presence in VRChat:

  • Voice — Mutes or unmutes a user’s microphone (voice chat)
  • Avatar — Hides or shows a user’s avatar as well as all avatar features. When an avatar is hidden, it shows a “muted” avatar
  • Avatar Audio — Enables or disables sound effects from a user’s avatar (not their microphone)
  • Animations — Enables or disables custom animations on a user’s avatar
  • Shaders — When disabled, all shaders on a user’s avatar are reverted to Standard
  • Particles and Lights — Enables or disables particle systems on a user’s avatar, as well as any light sources. This will also block Line and Trail Renderer components.
[Image: The new VRChat Safety System]
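Conceptually, the Safety tab is a matrix: one row per trust rank, one column per avatar feature. Here is a minimal sketch of that data structure; the field names mirror the six toggles above, but the defaults are invented, and the real settings live inside the VRChat client:

```python
from dataclasses import dataclass

@dataclass
class SafetySettings:
    # Which avatar features of users at a given rank are shown to you.
    # Field names mirror the six toggles in the Safety tab.
    voice: bool = True                 # hear their microphone
    avatar: bool = True                # show their avatar (else a "muted" stand-in)
    avatar_audio: bool = True          # avatar sound effects (not the microphone)
    animations: bool = True            # custom avatar animations
    shaders: bool = True               # custom shaders (else reverted to Standard)
    particles_and_lights: bool = True  # particles, lights, line/trail renderers

# One row per rank; these defaults are invented for illustration only.
safety_matrix = {
    "Veteran User": SafetySettings(),  # show everything
    "Visitor": SafetySettings(animations=False, shaders=False,
                              particles_and_lights=False),
    "Nuisance": SafetySettings(voice=False, avatar=False, avatar_audio=False,
                               animations=False, shaders=False,
                               particles_and_lights=False),
}
```

A client would then look up the other user’s rank in this matrix before rendering each feature of their avatar.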

There is much, much more information on the new Safety and Trust System in their blog post. The team behind VRChat have obviously put a lot of time and energy into designing this system, and I can say it is now the most comprehensive suite of tools to combat griefing, trolling, and harassment that I have seen in any social VR space or virtual world, and a model for other platforms to emulate.

After a huge surge in usage in the early part of this year (mainly due to the promotion of the platform by various well-known livestreamers), the number of simultaneous users in VRChat has stayed relatively steady at around 6,000:

[Image: VRChat simultaneous-user statistics, 27 September 2018]

This makes VRChat the most popular of the newer social VR platforms. The new Safety and Trust system will go a long way towards improving users’ experiences in VRChat.


Trolling, Griefing, and Harassment in Virtual Worlds: What the Newer Social VR Platforms Are Doing to Combat It

[Image: How do you deal with a troll? (Image by Anaterate on Pixabay)]

There was a particularly irritating troll at Alfy’s Voices of Sansar competition this past Saturday. Trying to find and mute her (muting being the only tool currently available to us in Sansar) was an exercise in frustration: I had to hover my cursor over each avatar in the crowd watching the show until I found her. Gindipple has released some software that might help us the next time we get hit by a troll at an event:

[Image: Gindipple’s eject-or-ban tool, 14 May 2018]

We’ve been pretty lucky in Sansar so far; we haven’t seen anything like the levels of trolling and harassment that occur in the more popular social VR spaces like VRChat and AltspaceVR. (VRChat, in particular, is infamous for its griefing.) But we Sansarians all know the onslaught of trolls is coming, and every social VR platform is going to have to come up with its own technical solutions to the problem.

So, how are the other social VR platforms dealing with this issue?

 

Sinespace

Sinespace has pretty limited options as well. You can basically report and ignore other avatars around you:

[Image: Sinespace’s ignore and report options, 14 May 2018]

 

VRChat

VRChat is taking the most controversial step: banning new users from uploading avatars or worlds until certain (unspecified) conditions are met, and revoking those privileges from established users who misbehave:

Hello, VRChat! We’ve been working on some new “Trust” systems to help make VRChat a friendlier place. These systems will be used to help gate various features until users have proven themselves to be a friendly member of the community. One of the first parts of the Trust system is called “Content Gating”. This system is designed to reduce abusive or annoying behavior involving avatars or other content.

Here’s generally how it works. When a user first creates a new VRChat account, they will be unable to upload custom content like worlds or avatars. After spending some time in the app and having positive interactions with other users, they will eventually receive in-app and email notifications that their account has access to world and avatar creation capability. This time may vary from user to user depending on various factors.

If the new user chooses to spend time in VRChat behaving badly or maliciously against other users, they may lose the capability to upload content. They will receive a notification in-app and via email that they have lost access to content uploading. If they spend more time in the app and follow the Community Guidelines, then they will eventually regain access to these systems. Again, this time may vary depending on various factors.
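The description above boils down to a gate that opens and closes as behaviour accumulates. Here is a minimal sketch of that logic; the thresholds and field names are invented, since VRChat deliberately leaves its actual criteria unspecified:

```python
from dataclasses import dataclass

@dataclass
class Account:
    positive_interactions: int = 0
    infractions: int = 0
    can_upload: bool = False  # new accounts start without upload access

# Invented thresholds -- the real criteria are unspecified and "may vary".
GRANT_THRESHOLD = 50
REVOKE_THRESHOLD = 5

def update_upload_access(account: Account) -> None:
    # Revoke first: bad behaviour overrides accumulated goodwill.
    # A real system would also notify the user in-app and by email.
    if account.infractions >= REVOKE_THRESHOLD:
        account.can_upload = False
    elif account.positive_interactions >= GRANT_THRESHOLD:
        account.can_upload = True
```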

The CEO of at least one competing metaverse corporation has said that he doubts this step will actually work as intended. In addition to these new sanctions, VRChat lets you mute (so you can’t hear) and block (so you can’t see) other avatars from its pop-up user interface, and offers a “safe mode”, a sort of nuclear option that mutes and blocks all avatars that are not on your friends list.

VRChat is also temp-banning people who troll, although sometimes innocent bystanders get caught in the crossfire. I seem to remember there is also a feature that lets the users sharing your world instance vote “yes” or “no” on ejecting a misbehaving user.
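That vote-to-eject mechanic is easy to picture as a simple majority tally among the users sharing an instance. A hypothetical sketch (this is not VRChat’s actual code):

```python
def should_eject(votes: dict[str, bool], instance_population: int) -> bool:
    # votes maps each voter's user ID to True ("yes") or False ("no").
    # Eject only if a strict majority of everyone present voted yes.
    yes_votes = sum(votes.values())
    return yes_votes > instance_population / 2

# Example: 2 "yes" votes out of 5 people present is not a majority.
print(should_eject({"alice": True, "bob": True, "carol": False}, 5))  # False
```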

So, all in all, VRChat has developed the most mature set of tools for dealing with trolling. But then again, they’ve been forced to.

 

AltspaceVR

Back in 2016, AltspaceVR introduced a “space bubble” to keep other avatars from invading your personal space, and you can also mute other avatars who are annoying you. There is no option to block offensive avatars in AltspaceVR, but then again, you don’t really have much choice of avatar anyway; the selection is so limited!

I would load and run AltspaceVR to check all these features out, but the latest version of the client software (where you get to choose your new “home” location) has completely locked up my high-end PC THREE. TIMES. tonight and I am not going to risk trying it again! AltspaceVR seems to be experiencing some major growing pains. Seriously not impressed.

 

High Fidelity

High Fidelity has a Bubble icon on its tablet user interface that works similarly to the AltspaceVR space bubble:

[Image: High Fidelity’s Bubble feature, 14 May 2018]

You can also mute nearby avatars, or set them to “ignore” so they can’t message you in-world. These are pretty much the same limited tools that all the newer social VR spaces offer.
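Both platforms’ space bubbles amount to the same idea: a distance check, with the client hiding or fading any avatar that comes within your bubble radius. A minimal sketch, with an invented radius (neither platform documents the actual value):

```python
import math

BUBBLE_RADIUS = 1.2  # metres; an invented value for illustration

def inside_bubble(my_pos: tuple[float, float, float],
                  other_pos: tuple[float, float, float]) -> bool:
    # True if the other avatar has entered my personal-space bubble,
    # in which case the client would stop rendering (or fade) them.
    return math.dist(my_pos, other_pos) < BUBBLE_RADIUS
```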

 

General Issues in Dealing with Trolling and Griefing

So, let’s move from specific technical solutions to a broader discussion: what is the best way to deal with griefing, trolling, and harassment in online communities?

Dr. Mark Dombeck, in an article on the website MentalHelp.net, neatly outlines some of the issues in community and game design that affect trolling:

In my experience, manipulating perpetrator anonymity is an important factor in controlling griefers’/trolls’ antisocial behavior. The more easily community members can be identified and held accountable for their actions, the fewer instances of bad behavior you tend to see.

Allied with the idea of altering perpetrator anonymity is the idea of altering the expectation of punishment. Accountability enables easier punishment. There are several ways that punishment can take place, however. Punishment can be very informal, where community members heap scorn on other members who violate the social contract or simply ignore them (by using filters within the community to literally make their presence invisible). This sort of informal punishment is what makes accountability effective all by itself. Accountability can also enable more formal varieties of punishment, such as entry bans. In my experience, bans are the most useful way to discourage the really hardcore antisocial behavior that happens in communities. Punishment can never hope to eradicate all griefer/troll behavior, however, because the really hardcore griefers will thrive on punishment, seeing attempts by the management to eject them as high praise for their work.

Here are a few other elements of the community or game that can be manipulated and which might have an impact on reducing griefing/trolling behavior.

Setting up Initiation Barriers probably would affect griefing behavior. The easier it is to get into a community, the more likely that community is to become a target for griefers. In part this has to do with helping people to identify with and value the community and not take it for granted. When you have to do a lot of work to get into a community you are more likely to care for that community and not want to harm it. The problem here is that the same barriers that might keep out griefers also keep out legitimate members. It is difficult to set a barrier high enough to keep out one group without also keeping out the other group.

I’d expect that the more opportunity there is to act out griefer behaviors with a group of other griefers, the more often the behavior would happen. People tend to take less responsibility for individual actions when they are acting as part of a group or mob. This social psychological principle goes by several names including the bystander effect, and diffusion of responsibility. The solution here would be to limit people’s ability to socialize, but as that utterly defeats the purpose of the community it isn’t really much of a solution.

I would expect that manipulating the frame of the community or game can increase or decrease the chance that griefer behavior will occur. The frame of a game or community has to do with its identity – how members think of what they are doing when engaged in the game or community. If an interaction is thought of as a game and therefore not something that is real or important it is easier to self-justify doing mayhem. If an interaction is thought of as a more serious behavior such as part of a support group interaction, the urge to do mayhem is maybe less strong (for some at least). The Wired article talks about this issue somewhat indirectly, noting that Second Life members don’t think of what they do in Second Life as being part of a game but rather view it as a more serious community. The “non-game” frame of Second Life participants makes such participants more likely to view griefing behavior taking place within Second Life in non-game ways, such as considering it to be actual theft or terrorism.

Second Life has often been an arena for trolling because it is very easy to create a free, throwaway account with which to be offensive. If one account gets banned, the griefer can simply create another. All the newer social VR spaces share this problem, since they don’t want to discourage people from signing up and (hopefully) staying and generating income for the company.

There are no easy answers here. The best we can do is try various solutions and see if they prove effective or not. In these early days of the metaverse, we’re all still learning the best ways to design our communities to chain the trolls.


Jessica Outlaw’s Survey on Virtual Harassment: Half of All Women Surveyed Have Experienced At Least One Instance of Sexual Harassment in Social VR

[Image: Photo by Mihai Surdu on Unsplash]

Jessica Outlaw, whose earlier research on women and social VR I have written about before, has published the results of her latest project: a survey of over 600 people of all genders on their experiences with harassment in social VR.

She reports:

Harassment is commonplace in VR. In past qualitative research, I studied sexual harassment of women. In my new project, in partnership with Pluto VR, I surveyed 600+ people who regularly use VR (Rift, Vive, PSVR, or Microsoft Windows Mixed Reality). It turns out that all genders are subject to multiple types of harassment in VR:

  • 49% of women reported having experienced at least one instance of sexual harassment
  • 30% of male respondents reported racist or homophobic comments
  • 20% of males have experienced violent comments or threats

The full report can be viewed here. She summarizes her findings as follows:

  • People want to be with their friends in VR
  • 70% of those who have used multiplayer VR agree that it’s better with people they know
  • People use single-player apps to avoid harassment
  • Many avoid social VR spaces entirely

Thanks to Enrico Speranza, who told me about this report!