A Slippery Slope? Sony Files Patent for Shadow Banning Misbehaving Social VR Users

Shadow banning: the practice of blocking or partially blocking a user or their content from an online community so that it will not be readily apparent to the user that they have been banned.

—Source: Wikipedia

Much has already been written about the behaviour-monitoring system in the upcoming Facebook Horizon social VR platform, which is intended to prevent inappropriate behaviour; see, for example, this RoadtoVR article from last August:

First, all the users in Horizon are involuntarily recording each other. The last few minutes of everything that users see and hear is recorded on a rolling basis. Facebook says this recording is stored on the headset itself, unless one user reports another, at which point the recording may be sent to Facebook to check for rule violations. The company says that the recording will be deleted once the report is concluded.

Second, anyone you interact with can invite an invisible observer from Facebook to come surveil you and your conversations in real-time to make sure you don’t break any rules. The company says this can happen when one user reports another or when other “signals” are detected, such as several players blocking or muting each other in quick succession. Users will not be notified when they’re being watched.

And third, everything you say, do, and build in Horizon is subject to Facebook’s Community Standards. So while in a public space you’re free to talk about anything you want, in Horizon there are many perfectly legal topics that you can’t discuss without fear of punitive action being taken against your account.

But Sony has filed a patent for a similar way of monitoring users in social VR, where you won’t necessarily be notified if you run afoul of the rules. The abstract for the patent reads as follows:

Shadow banning a participant within a social VR system includes: receiving and forwarding an identity of the participant, who may be shadow banned; recognizing and tracking inappropriate behaviors including inappropriate language and comments, inappropriate gestures, and inappropriate movements; receiving and processing the recognized and tracked inappropriate behaviors of the participant; generating a safety rating based on the processed inappropriate behaviors; comparing the safety rating to a threshold value; and outputting a signal to label the participant as a griefer and shadow ban the griefer when the safety rating is greater than the threshold value.
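The patent abstract reads like a straightforward scoring pipeline: recognize behaviours, generate a rating, compare it to a threshold, and ban automatically. Here is a minimal sketch of that logic in Python; the behaviour categories come from the abstract, but the weights, the threshold value, and all function names are my own invented placeholders, not anything from the actual filing:

```python
# Hypothetical sketch of the pipeline described in the patent abstract.
# Behaviour categories are from the abstract; weights, threshold, and
# function names are invented placeholders for illustration only.

BEHAVIOR_WEIGHTS = {
    "inappropriate_language": 1.0,
    "inappropriate_gesture": 2.0,
    "inappropriate_movement": 1.5,
}

SHADOW_BAN_THRESHOLD = 5.0  # arbitrary example value

def safety_rating(tracked_behaviors):
    """Generate a safety rating from the recognized and tracked behaviours."""
    return sum(BEHAVIOR_WEIGHTS.get(b, 0.0) for b in tracked_behaviors)

def evaluate_participant(participant_id, tracked_behaviors):
    """Label the participant a griefer and shadow ban them if the rating
    exceeds the threshold -- with no human report anywhere in the loop."""
    rating = safety_rating(tracked_behaviors)
    banned = rating > SHADOW_BAN_THRESHOLD
    return {"participant": participant_id, "griefer": banned, "shadow_banned": banned}
```

The key point is the last step: once the rating crosses the threshold, the participant is labelled a griefer and shadow banned automatically.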

So it sounds as though, if somebody makes an obscene gesture towards another avatar on a future social VR platform where this system is implemented (e.g. flips them the bird, or grinds up against them in a sexual way), they could then be shadow banned, perhaps even becoming invisible to other users. What sets this proposed system apart from Facebook Horizon’s is that it would be triggered WITHOUT input from someone reporting the griefer.

Stop and think about that for a moment. Who gets to decide what constitutes an inappropriate gesture, or inappropriate behaviour? The rudeness of various hand gestures varies by culture around the world; will American rules and codes of conduct take precedence over those of, say, Italy or India, which might differ? Could you be flagged just for staring at another person for longer than a few seconds? What is the dispute mechanism if you discover you have been shadow banned, and will it be similarly automated? This is a slippery slope, people.

An article about the patent by Jamie Feltham on UploadVR states:

Interestingly, one proposal for this solution includes “a system configured entirely with hardware” that specifically mentions tracking the user’s movement and even their gaze. Presumably, these would be features included in the headset itself. Another suggestion mentions using an “agent” placed within the application to judge any possible offenses.

While features like these may be necessary as VR expands, it also calls into question the security and privacy of any user’s actions within that social VR experience. Figuring out that balance will no doubt be a challenge for social VR app makers in the future.

It’s also interesting to note that Sony filed this document after PSVR’s release in 2016 and that the company doesn’t really have any big social apps to its own name on the platform. Could this be an indicator that Sony is indeed planning to launch a more robust social VR feature for the upcoming PS5 VR headset? We did report last month that the company had renewed the trademark for its PS3-era social VR service, PlayStation Home, so anything’s possible.

So perhaps Sony has a future social VR platform for PSVR users up its sleeve?

Another question which arises is: if Sony’s patent is awarded, will they be able to go after platforms like Facebook Horizon, which might use features similar enough to constitute patent infringement? The mind boggles at the possibilities.

One thing is clear: the social VR marketplace is evolving so quickly that laws and regulations are struggling to catch up. Facebook, for one, is collecting all kinds of personal data about your use of Oculus VR devices such as the Quest 2 (here’s the complete list, just for the Oculus app on your iPhone).

The more data collected and analyzed, the greater the chances that you could be branded a griefer and shadow banned!

In the future, if you look at another avatar the wrong way, you might end up shadow banned! (Image source: What Is Shadow Banning? on imge)

Thanks to Rob Crasco for alerting me to this patent!

Two Virtual Reality Designers Discuss Techniques and Strategies for Implementing Safer Social VR (Including an Example from the Forthcoming Facebook Horizon Platform)

Photo by Mihai Surdu on Unsplash

Back at the start of November, two VR designers, Michelle Cortese and Andrea Zeller, wrote an article for Immerse on aspects of designing safer social VR spaces. That article was recently reprinted on The Next Web news site, titled How to protect users from harassment in social VR spaces, and it’s an excellent read on the subject, which I highly recommend.

Research conducted by Jessica Outlaw and others has shown that female-identifying users of social VR platforms are often the victims of sexual harassment. Michelle Cortese writes:

As female designers working in VR, my co-worker Andrea Zeller and I decided to join forces on our own time and write a comprehensive paper. We wrote about the potential threat of virtual harassment, instructing readers on how to use body sovereignty and consent ideology to design safer virtual spaces from the ground up. The text will soon become a chapter in the upcoming book: Ethics in Design and Communication: New Critical Perspectives (Bloomsbury Visual Arts: London).

After years of flagging potentially-triggering social VR interactions to male co-workers in critiques, it seemed prime time to solidify this design practice into documented research. This article is the product of our journey.

The well-known immersive aspect of virtual reality—the VR hardware and software tricking your brain into believing what it is seeing is “real”—means that when someone threatens or violates your personal space, or your virtual body, it feels real.

This is particularly worrisome, as harassment on the internet is a long-running issue, from trolling in chat rooms in the ’90s to cyber-bullying on various social media platforms today. When there’s no accountability on new platforms, abuse has often followed — and the innate physicality of VR gives harassers troubling new ways to attack. The visceral quality of VR abuse can be especially triggering for survivors of violent physical assault.

Cortese and Zeller stress that safety needs to be built into our social VR environments: “Safety and inclusion need to be virtual status quo.”

The article goes into a discussion of proxemics, which I will not attempt to summarize here; I would instead strongly urge you to go to the source and read it all for yourself, as it is very clearly laid out. A lot of research has already been done in this area, which can now be applied as we build new platforms.

And one of those new social VR platforms just happens to be Facebook Horizon, a project on which both Michelle Cortese and Andrea Zeller have been working!

What I found particularly interesting in this article was an example the authors provided of how this user safety research is being put to use in the Facebook Horizon social VR platform, which will be launching in closed beta early this year. Apparently, there will be a button you can press to immediately remove yourself from a situation where you do not feel comfortable:

We designed the upcoming Facebook Horizon with easy-to-access shortcuts for moments when people would need quick-action remediation in tough situations. A one-touch button can quickly remove you from a situation. You simply touch the button and you land in a space where you can take a break and access your controls to adjust your experience.

Once safely away from the harasser, you can optionally choose to mute, block, or report them to the admins while in your “safe space”:

Handy features such as these, plus Facebook’s insistence on linking your personally-identifying account on the Facebook social network to your Facebook Horizon account (thus making it very difficult to be anonymous), will probably go a long way towards making women (and other minorities such as LGBTQ folks) feel safer in Facebook Horizon.

Of course, griefers, harassers and trolls will always try to find ways around the safeguards put in place, such as setting up dummy alternative accounts (Second Life and other virtual worlds have had to deal with such problems for years). We can also expect “swatting”-type attacks, where innocent people are falsely painted as troublemakers using the legitimate reporting tools provided (something we’ve unfortunately already seen happen in a few instances in Sansar).

Some rather bitter lessons on what does and doesn’t work have been learned in the “wild, wild west” of earlier-generation virtual worlds and social VR platforms, such as the never-ending free-for-all of Second Life (and of course, the cheerful anarchy of VRChat, especially in the days before they were forced to implement their nuanced Trust and Safety System due to a tidal wave of harassment, trolling and griefing).

But I am extremely glad to see that Facebook has hired VR designers like Michelle Cortese and Andrea Zeller, and that the company is treating user safety in social VR as a non-negotiable tenet from the earliest design stages of the Horizon project, instead of scrambling to address it as an afterthought as VRChat did. More social VR platforms need to do this.

I’m quite looking forward to seeing how this all plays out in 2020! I and many other observers will be watching Facebook Horizon carefully to see how well all these new security and safety features roll out and are embraced by users.

Sinespace Learns a Lesson the Hard Way: Pay-to-Play Marketing Can Backfire

Trilo Byte (a.k.a. TriloByte Zanzibar, one of the people behind the virtual fashion brand BlakOpal Designs, started in Second Life and now operating in Sinespace) reports on his blog that Sinespace has had a marketing scheme backfire on them, and it has created a serious griefer problem.

The problem is that at least one of the marketing companies Sinespace contracted with started offering what are called pay-to-play inducements, where new users are paid in IMVU credits or Roblox currency (Robux) if they download the app, create an account on Sinespace and use the program for a minimum length of time (e.g. 30 minutes).

This has apparently led to an unwelcome surplus of trolls, griefers, and online harassment in Sinespace:

According to Sine Wave, what is happening in-world is the result of a single marketing agency who they have already complained to about the practice. However, we’re still seeing these users coming in, often referred to by shady sites like this one, this one, and this one too (and those are just the sites users are posting links to in chat).

By virtue of being offered payment in another game’s currency, they are confirming from the outset that they have no interest outside of getting currency to spend on another platform. Do they really expect users coming in for IMVU or Roblox currency to abandon everything they’ve built? The promise of Sinespace may be great, but the world is far from finished.

It isn’t just a matter of setting themselves up for failure. It’s much worse. On top of bringing in a bunch of people who are very unlikely to join the community and even less likely to become economic participants, it creates the Sinespace griefing problem.

Now, other virtual worlds have made similar mistakes. For example, Linden Lab set up a Twitch bounty program which paid livestreamers to visit Sansar, which was abused by several people who trolled the platform (I’m not certain if that program was suspended or not).

What is clear is that companies in the social VR/virtual world marketplace need to think carefully about the unintended consequences of offering financial inducements to entice new users on to their platforms. This is an embarrassing episode for Sinespace, one from which I hope they recover quickly. Sometimes you just have to learn a lesson the hard way.

Thanks to Joseph Zazulak for the news tip!

VRChat Institutes a New Safety and Trust System to Combat Griefers


In response to high levels of trolling, griefing, and harassment, the VRChat platform is instituting an incredibly detailed Safety and Trust System:

The VRChat Trust and Safety system is a new extension of the currently-implemented VRChat Trust system. It is designed to keep users safe from nuisance users using things like screen-space shaders, loud sounds or microphones, visually noisy or malicious particle effects, and other methods that someone may use to detract from your experience in VRChat.

This system is designed to give control back to the user, allowing users to determine where, when, and how they see various avatar features that may be distracting or malicious if used improperly.

The Trust and Safety system is designed so that, even when left on default settings, the system will ensure that someone can’t attack you with malicious avatar features. Malicious users won’t have these features shown, so you can have a good experience in the metaverse.

Basically, every VRChat user is automatically assigned to one of six levels, based on their past behaviour (e.g. exploring, making friends, creating content):

  • Veteran User
  • Trusted User
  • Known User
  • User
  • New User
  • Visitor (the default rank for brand-new users)

Visitors will not be able to upload content to VRChat until they are promoted to the New User rank. The blogpost continues:

Additionally, there exists a special rank called “Nuisance”. These users have caused problems for others, and will have an indicator above their nameplate when your quick menu is open. Most of the time, these users’ avatars will be completely blocked. In a future release, users who are sliding toward the “Nuisance” rank will be notified.

Finally, there exists a “VRChat Team” rank, which is only usable by VRChat Team members. When a VRChat Team member has their “DEV” tag on, you’ll see this rank in the quick menu when you select them. If you have doubts that a user with a “DEV” tag is actually on the VRChat Team, just open your Quick Menu, select them, and check out their Trust Rank. If it doesn’t say “VRChat Team” under the avatar thumbnail, then that user is not a member of the VRChat Team, and is likely trying to confuse users. Feel free to take a screenshot and report them to the Moderation team!
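The rank system described above is essentially an ordered scale with a promotion gate on content uploads. A hypothetical model in Python: the rank names come from VRChat’s published list, but the numeric ordering and the upload rule are my own modelling, not VRChat’s implementation:

```python
# Illustrative model only: rank names are from VRChat's published list;
# the ordering and the can_upload_content rule are my own assumptions.

TRUST_RANKS = [
    "Visitor",       # default rank for brand-new users
    "New User",
    "User",
    "Known User",
    "Trusted User",
    "Veteran User",
]

def rank_index(rank):
    """Position of a rank on the trust scale (higher = more trusted)."""
    return TRUST_RANKS.index(rank)

def can_upload_content(rank):
    # Visitors cannot upload content until promoted to New User.
    return rank_index(rank) >= rank_index("New User")
```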

For each level of user, you can set what aspects of their avatar will be visible/audible to you in an extremely detailed Safety System:

“Safety” is a new menu tab that allows you to configure how users of each rank are treated in regards to how they display for you in VRChat. This affects many aspects of a user’s presence in VRChat:

  • Voice — Mutes or unmutes a user’s microphone (voice chat)
  • Avatar — Hides or shows a user’s avatar as well as all avatar features. When an avatar is hidden, it shows a “muted” avatar
  • Avatar Audio — Enables or disables sound effects from a user’s avatar (not their microphone)
  • Animations — Enables or disables custom animations on a user’s avatar
  • Shaders — When disabled, all shaders on a user’s avatar are reverted to Standard
  • Particles and Lights — Enables or disables particle systems on a user’s avatar, as well as any light sources. This will also block Line and Trail Renderer components.
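In other words, the Safety menu amounts to a per-rank table of feature toggles consulted whenever another user’s avatar is rendered. A hypothetical sketch of how such a lookup might work; the feature names are taken from the list above, but the example defaults and the function itself are invented for illustration, not VRChat’s actual defaults:

```python
# Hypothetical per-rank settings table modelled on the Safety menu
# described above; these example defaults are invented, not VRChat's.

AVATAR_FEATURES = [
    "voice", "avatar", "avatar_audio",
    "animations", "shaders", "particles_and_lights",
]

# Example: show everything from Trusted Users, but only voice and a
# basic avatar from Visitors.
safety_settings = {
    "Trusted User": {f: True for f in AVATAR_FEATURES},
    "Visitor": {
        "voice": True, "avatar": True, "avatar_audio": False,
        "animations": False, "shaders": False, "particles_and_lights": False,
    },
}

def feature_shown(viewer_settings, other_user_rank, feature):
    """Decide whether a given avatar feature of another user is shown/heard."""
    per_rank = viewer_settings.get(other_user_rank, {})
    return per_rank.get(feature, False)  # default to hidden for unknown ranks
```

Defaulting to hidden for unconfigured ranks mirrors the stated design goal: even on default settings, a malicious user’s avatar features simply aren’t shown.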

The New VRChat Safety System

There is much, much more information on the new Safety and Trust System in their blogpost. The team behind VRChat have obviously put a lot of time and energy into designing this system, and I can say that this is now the most comprehensive suite of tools to combat griefing, trolling, and harassment that I have seen in any social VR space or virtual world, and a model for other platforms to emulate.

After a huge surge in usage in the early part of this year (mainly due to the promotion of the platform by various well-known livestreamers), the number of simultaneous users in VRChat has stayed relatively steady at around 6,000:

VRChat Concurrent User Statistics, 27 September 2018

This makes VRChat the most popular of the newer social VR platforms. The new Safety and Trust system will go a long way towards improving users’ experiences in VRChat.