After News Reports of Sexual Harassment, Meta Implements a Four-Foot Personal Boundary for Avatars in Horizon Worlds and Horizon Venues

Unfortunately, sexual harassment online is pervasive, happening in such disparate venues as social media, chat rooms, Discord servers, and role-playing games. Virtual worlds and social VR are no exception. Again, this is not a new problem; I have been writing about trolling, griefing and harassment in the metaverse, and how companies are responding to it, since May of 2018 on this blog.

There have been several recent news reports about women who reported being groped or otherwise harassed in Meta’s social VR platforms Horizon Worlds and Horizon Venues. For example, the U.K.’s Daily Mail had this report about a woman who was assaulted after logging into Horizon Venues:

Nina Jane Patel watched and listened in horror through a virtual-reality headset as her avatar – a moving, talking, computer-generated version of herself – was groped aggressively in a sustained attack by three realistic male characters.

On a visit this month, the mother-of-four entered the ‘lobby’ – a virtual space serving as an entry point. But within seconds she was pursued by the men’s avatars, who groped her, subjected her to a stream of sexual innuendo and took screen shots of the attack for several minutes as she tried to flee.

Alex Heath of The Verge reported on December 9th, 2021:

Earlier this month, a beta tester posted in the official Horizon group on Facebook about how her avatar was groped by a stranger. “Sexual harassment is no joke on the regular internet, but being in VR adds another layer that makes the event more intense,” she wrote. “Not only was I groped last night, but there were other people there who supported this behavior which made me feel isolated in the Plaza.”

[Vivek] Sharma [Meta’s VP of Horizon] calls the incident “absolutely unfortunate” and says that after Meta reviewed the incident, the company determined that the beta tester didn’t utilize the safety features built into Horizon Worlds, including the ability to block someone from interacting with you. (When you’re in Horizon, a rolling buffer of what you see is saved locally on your Oculus headset and then sent to Meta for human review if an incident is reported.) “That’s good feedback still for us because I want to make [the blocking feature] trivially easy and findable,” he says.
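As an aside, the rolling buffer Sharma describes is a standard pattern: keep only the last few minutes of footage on the device, continuously overwriting the oldest material, and upload it only if a report is actually filed. Here is a minimal Python sketch of the idea (the class name, frame rate, and buffer size are my own hypothetical choices, not details of Meta’s implementation):

```python
from collections import deque

class RollingEvidenceBuffer:
    """Keep only the most recent frames on-device; nothing is sent
    anywhere unless the user files a report. (Illustrative sketch only;
    all names and sizes are hypothetical, not Meta's code.)"""

    def __init__(self, max_frames: int = 5400):  # e.g. ~3 minutes at 30 fps
        self._frames = deque(maxlen=max_frames)  # oldest frames drop off automatically

    def record(self, frame: bytes) -> None:
        self._frames.append(frame)

    def flush_for_report(self) -> list[bytes]:
        """Called only when an incident is reported: hand the buffered
        footage off for human review, then clear it from local storage."""
        evidence = list(self._frames)
        self._frames.clear()
        return evidence
```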

The incident was widely reported by a variety of news sources, ranging from the New York Post to the MIT Technology Review. Victor Tangermann wrote in a December 16th, 2021 Futurism article titled Sexual Assault Is Already Happening in the Metaverse:

Rather than ensuring Horizon Worlds doesn’t foster a culture of strangers groping each other in VR, Meta is hoping to make the problem go away by making adjustments to its tools. The company says users can turn on a feature called “Safe Zone,” which creates an impenetrable bubble around the user when they want more space.

But personal space is likely to be a galling problem for social VR applications.

“I think people should keep in mind that sexual harassment has never had to be a physical thing,” Jesse Fox, an associate professor at Ohio State University, told MIT Technology Review. “It can be verbal, and yes, it can be a virtual experience as well.”

Bloomberg columnist Parmy Olson wasn’t exactly impressed by Meta’s VR experience, either. Once in the VR lobby of Horizon Venues — Meta’s VR events platform serving as Horizon Worlds’ precursor — she was surrounded by a “group of male avatars” who started taking pictures of her.

“One by one, they began handing the photos to me,” Olson writes. “The experience was awkward and I felt a bit like a specimen.”

Meta may have thought they would avoid these kinds of problems by deliberately designing their avatars to have no body below the waist. No genitals, no problem, right? WRONG. It’s not what the avatars look like that’s the issue here; it’s how the people using the avatars behave towards each other.

Note also Parmy Olson’s incident in the previous quote: in her case, the group of male avatars were using Horizon Venues’ built-in camera feature to make her feel uncomfortable. Harassment can take many forms, and may involve the abuse of features which the developers never dreamed would be misused in this way.

On February 4th, 2022, no doubt in response to these and other news reports and the negative publicity they generated, Meta announced a Personal Boundary feature:

Today, we’re announcing Personal Boundary for Horizon Worlds and Horizon Venues. Personal Boundary prevents avatars from coming within a set distance of each other, creating more personal space for people and making it easier to avoid unwanted interactions. Personal Boundary will begin rolling out today everywhere inside of Horizon Worlds and Horizon Venues, and will by default make it feel like there is an almost 4-foot distance between your avatar and others.

This Personal Boundary feature is hard-coded, at least for now; you cannot turn it off or adjust the distance. According to the press release:

We are intentionally rolling out Personal Boundary as always on, by default, because we think this will help to set behavioral norms—and that’s important for a relatively new medium like VR. In the future, we’ll explore the possibility of adding in new controls and UI changes, like letting people customize the size of their Personal Boundary.

Note that because Personal Boundary is the default experience, you’ll need to extend your arms to be able to high-five or fist bump other people’s avatars in Horizon Worlds or in Horizon Venues.

Adi Robertson of The Verge clarifies that “it gives everyone a two-foot radius of virtual personal space, creating the equivalent of four virtual feet between avatars”, adding:

Meta spokesperson Kristina Milian confirmed that users can’t choose to disable their personal boundaries since the system is intended to establish standard norms for how people interact in VR. However, future changes could let people customize the size of the radius.

If someone tries to walk or teleport within your personal space, their forward motion will stop. However, Milian says that you can still move past another avatar, so users can’t do things like use their bubbles to block entrances or trap people in virtual space.
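To make the mechanics concrete: a boundary like this can be implemented as a simple distance constraint applied when movement is resolved. A move that would end inside someone else’s bubble is projected back to the bubble’s edge rather than rejected outright, which is what lets avatars still slide past one another. Here is a rough Python sketch of that behaviour (my own reconstruction from the press descriptions above, not Meta’s actual code):

```python
import math

PERSONAL_RADIUS_FT = 2.0  # per avatar; two radii give the ~4-foot gap

def resolve_move(target: tuple[float, float],
                 others: list[tuple[float, float]]) -> tuple[float, float]:
    """Clamp a proposed destination so it never ends up inside another
    avatar's personal boundary. Pushing the destination out to the edge
    (instead of cancelling the move) lets avatars slip past each other,
    so bubbles can't be used to block doorways or trap people.
    (A reconstruction from press descriptions, not Meta's code.)"""
    min_gap = 2 * PERSONAL_RADIUS_FT
    for ox, oy in others:
        dx, dy = target[0] - ox, target[1] - oy
        dist = math.hypot(dx, dy)
        if 0 < dist < min_gap:
            scale = min_gap / dist
            target = (ox + dx * scale, oy + dy * scale)  # push out to the edge
    return target

# Example: walking straight at an avatar standing at (0, 0) stops ~4 feet away.
print(resolve_move((1.0, 0.0), [(0.0, 0.0)]))  # (4.0, 0.0)
```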

Contrast Meta’s approach with other platforms such as Sansar, which gives the user control over whether or not to set up personal space between themselves and other avatars, allowing them to set one distance for people on their friends list (or to turn it off completely) and another for non-friends and strangers (see the Comfort Zone settings in the image below).

And, of course, VRChat has an elaborate, six-level Trust and Safety system, where you can make adjustments to mute/hide avatars, among other settings.
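The key design difference here is configurability. Modelled as data, a Sansar-style Comfort Zone is little more than a distance per relationship category, with zero meaning “off”. A sketch (the field names are hypothetical, not either platform’s actual data model):

```python
from dataclasses import dataclass

@dataclass
class ComfortZoneSettings:
    """Per-user personal-space distances by relationship category.
    A value of 0.0 disables the boundary for that category.
    (Illustrative only; not Sansar's or VRChat's actual data model.)"""
    friends_ft: float = 0.0    # e.g. no bubble for people on your friends list
    strangers_ft: float = 4.0  # larger bubble for non-friends and strangers

    def boundary_for(self, is_friend: bool) -> float:
        return self.friends_ft if is_friend else self.strangers_ft
```

Meta’s launch version, by contrast, is the equivalent of a single fixed value for everyone, with no user-facing setting at all.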

A few thoughts about all this. Because Meta is such a large, well-known company, it was perhaps inevitable that such reports would be considered newsworthy—even though sexual harassment has been around for decades in virtual worlds, dating back to Active Worlds, founded over a quarter-century ago!

Also, the immersive nature of virtual reality can make such harassment feel more invasive. Jessica Outlaw has researched and written at length about women’s experience of harassment in virtual reality (here and here).

Finally, like all the metaverse platforms which came before it, Meta is learning and making adjustments to its social VR platforms over time. This is common and is to be expected. For example, Second Life has had a long history of discovering and addressing problems which arose during its 18+ years of existence. Some fixes are good; others cause their own problems, and require further tinkering.

I personally believe that the best solution to the continuing problem of sexual harassment in the metaverse requires a deft mix of social and community rules and expectations with software solutions such as the Personal Boundary feature, and muting/blocking avatars. There is no easy fix; we learn as we go.

A Slippery Slope? Sony Files Patent for Shadow Banning Misbehaving Social VR Users

Shadow banning: the practice of blocking or partially blocking a user or their content from an online community so that it will not be readily apparent to the user that they have been banned.

—Source: Wikipedia

Much has already been written about the behaviour-monitoring system in the upcoming Facebook Horizon social VR platform, which is intended to prevent inappropriate behaviour; see, for example, this RoadtoVR article from last August:

First, all the users in Horizon are involuntarily recording each other. The last few minutes of everything that users see and hear is recorded on a rolling basis. Facebook says this recording is stored on the headset itself, unless one user reports another, at which point the recording may be sent to Facebook to check for rule violations. The company says that the recording will be deleted once the report is concluded.

Second, anyone you interact with can invite an invisible observer from Facebook to come surveil you and your conversations in real-time to make sure you don’t break any rules. The company says this can happen when one user reports another or when other “signals” are detected, such as several players blocking or muting each other in quick succession. Users will not be notified when they’re being watched.

And third, everything you say, do, and build in Horizon is subject to Facebook’s Community Standards. So while in a public space you’re free to talk about anything you want, in Horizon there are many perfectly legal topics that you can’t discuss without fear of punitive action being taken against your account.

But Sony has filed a patent for a similar way of monitoring users in social VR, where you won’t necessarily be notified if you run afoul of the rules. The abstract for the patent reads as follows:

Shadow banning a participant within a social VR system includes: receiving and forwarding an identity of the participant, who may be shadow banned; recognizing and tracking inappropriate behaviors including inappropriate language and comments, inappropriate gestures, and inappropriate movements; receiving and processing the recognized and tracked inappropriate behaviors of the participant; generating a safety rating based on the processed inappropriate behaviors; comparing the safety rating to a threshold value; and outputting a signal to label the participant as a griefer and shadow ban the griefer when the safety rating is greater than the threshold value.

So, it sounds as though, if somebody makes an obscene gesture towards another avatar in a future social VR platform where this system is implemented (e.g. flips them the bird, or grinds up against them in a sexual way), they would then be shadow banned, perhaps even becoming invisible to other users. What sets this proposed system apart from Facebook Horizon’s is that it would be triggered WITHOUT input from someone who reports the griefer.
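Reduced to code, the flow the abstract describes is just a running score compared against a threshold, with no human report anywhere in the loop. A deliberately simplified Python sketch (the behaviour categories come from the abstract; the weights, threshold, and function names are my own hypothetical choices, not from Sony’s filing):

```python
# Behaviour categories are taken from the patent abstract; the weights
# and threshold are invented for illustration, not from Sony's filing.
BEHAVIOUR_WEIGHTS = {
    "inappropriate_language": 1.0,
    "inappropriate_gesture": 2.0,
    "inappropriate_movement": 1.5,
}
SHADOW_BAN_THRESHOLD = 10.0

def update_safety_rating(rating: float, observed: list[str]) -> float:
    """Accumulate the weight of each automatically recognized behaviour."""
    return rating + sum(BEHAVIOUR_WEIGHTS.get(b, 0.0) for b in observed)

def should_shadow_ban(rating: float) -> bool:
    """Per the abstract: label the participant a griefer and shadow ban
    them once the rating exceeds the threshold. Note that no report
    from a victim is consulted anywhere in this loop."""
    return rating > SHADOW_BAN_THRESHOLD
```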

Stop and think about that for a moment. Who is to decide what counts as an inappropriate gesture, or inappropriate behaviour? The rudeness of various hand gestures varies by culture around the world; will American rules and codes of conduct take precedence over those of, say, Italy or India, which might differ? Can you be flagged just for staring at another person for longer than a few seconds? What is the dispute mechanism if you discover you are shadow banned, and will it be similarly automated? This is just a slippery slope, people.

An article about the patent by Jamie Feltham on UploadVR states:

Interestingly, one proposal for this solution includes “a system configured entirely with hardware” that specifically mentions tracking the user’s movement and even their gaze. Presumably, these would be features included in the headset itself. Another suggestion mentions using an “agent” placed within the application to judge any possible offenses.

While features like these may be necessary as VR expands, it also calls into question the security and privacy of any user’s actions within that social VR experience. Figuring out that balance will no doubt be a challenge for social VR app makers in the future.

It’s also interesting to note that Sony filed this document after PSVR’s release in 2016 and that the company doesn’t really have any big social apps to its own name on the platform. Could this be an indicator that Sony is indeed planning to launch a more robust social VR feature for the upcoming PS5 VR headset? We did report last month that the company had renewed the trademark for its PS3-era social VR service, PlayStation Home, so anything’s possible.

So perhaps Sony has a future social VR platform for PSVR users up its sleeve?

Another question which arises: if Sony’s patent is awarded, will the company be able to go after platforms like Facebook Horizon, which might use features similar enough to constitute patent infringement? The mind boggles at the possibilities.

One thing is clear: the social VR marketplace is evolving so quickly that laws and regulations are struggling to catch up. Facebook, for one, is collecting all kinds of personal data about your use of Oculus VR devices such as the Quest 2 (here’s the complete list, just for the Oculus app on your iPhone).

The more data collected and analyzed, the greater the chances that you could be branded a griefer and shadow banned!

In the future, if you look at another avatar the wrong way, you might end up shadow banned! (Image source: What Is Shadow Banning? on imge)

Thanks to Rob Crasco for alerting me to this patent!

Two Virtual Reality Designers Discuss Techniques and Strategies for Implementing Safer Social VR (Including an Example from the Forthcoming Facebook Horizon Platform)

Photo by Mihai Surdu on Unsplash

Back at the start of November, two VR designers, Michelle Cortese and Andrea Zeller, wrote an article for Immerse on aspects of designing safer social VR spaces. That article was recently reprinted on The Next Web news site, titled How to protect users from harassment in social VR spaces, and it’s an excellent read on the subject, which I highly recommend.

In particular, research conducted by Jessica Outlaw and others has shown that female-identifying users of social VR platforms are often the victims of sexual harassment. Michelle Cortese writes:

As female designers working in VR, my co-worker Andrea Zeller and I decided to join forces on our own time and write a comprehensive paper. We wrote about the potential threat of virtual harassment, instructing readers on how to use body sovereignty and consent ideology to design safer virtual spaces from the ground up. The text will soon become a chapter in the upcoming book: Ethics in Design and Communication: New Critical Perspectives (Bloomsbury Visual Arts: London).

After years of flagging potentially-triggering social VR interactions to male co-workers in critiques, it seemed prime time to solidify this design practice into documented research. This article is the product of our journey.

The well-known immersive aspect of virtual reality—the VR hardware and software tricking your brain into believing what it is seeing is “real”—means that when someone threatens or violates your personal space, or your virtual body, it feels real.

This is particularly worrisome as harassment on the internet is a long-running issue, from trolling in chat rooms in the ’90s to cyber-bullying on various social media platforms today. When there’s no accountability on new platforms, abuse has often followed — and the innate physicality of VR gives harassers troubling new ways to attack. The visceral quality of VR abuse can be especially triggering for survivors of violent physical assault.

Cortese and Zeller stress that safety needs to be built into our social VR environments: “Safety and inclusion need to be virtual status quo.”

The article goes into a discussion of proxemics, which I will not attempt to summarize here; I would instead strongly urge you to go to the source and read it all for yourself, as it is very clearly laid out. A lot of research has already been done in this area, which can now be applied as we build new platforms.

And one of those new social VR platforms just happens to be Facebook Horizon, a project on which both Michelle Cortese and Andrea Zeller have been working!

What I found particularly interesting in this report was an example the authors provided of how this user safety research is being put to use in the Facebook Horizon social VR platform, which will be launching in closed beta early this year. Apparently, there will be a button you can press to immediately remove yourself from a situation where you do not feel comfortable:

We designed the upcoming Facebook Horizon with easy-to-access shortcuts for moments when people would need quick-action remediation in tough situations. A one-touch button can quickly remove you from a situation. You simply touch the button and you land in a space where you can take a break and access your controls to adjust your experience.

Once safely away from the harasser, you can optionally choose to mute, block, or report them to the admins while in your “safe space”.
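Mechanically, there is not much to such a button: remove the user from the shared instance, drop them somewhere the harasser cannot follow, and then surface the moderation options. A toy Python sketch of that flow (everything here is hypothetical; Horizon’s actual API is not public):

```python
class SafeSpaceEscape:
    """Toy sketch of a one-touch escape flow like the one quoted above.
    All names are hypothetical; this is not Facebook Horizon's code."""

    def __init__(self):
        self.location: dict[str, str] = {}  # user -> current space

    def panic(self, user: str) -> list[str]:
        # Step 1: instantly pull the user out of the shared space and
        # into a private instance that no other avatar can enter.
        self.location[user] = "private_safe_space"
        # Step 2: once they are alone, offer the follow-up actions.
        return ["mute", "block", "report"]

escape = SafeSpaceEscape()
print(escape.panic("alice"))  # ['mute', 'block', 'report']
```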

Handy features such as these, plus Facebook’s insistence on linking your personally-identifying account on the Facebook social network to your Facebook Horizon account (thus making it very difficult to be anonymous), will probably go a long way towards making women (and other minorities such as LGBTQ folks) feel safer in Facebook Horizon.

Of course, griefers, harassers and trolls will always try to find ways around the safeguards put in place, such as setting up dummy alternative accounts (Second Life and other virtual worlds have had to deal with such problems for years). We can also expect “swatting”-type attacks, where innocent people are falsely painted as troublemakers using the legitimate reporting tools provided (something we’ve unfortunately already seen happen in a few instances in Sansar).

Some rather bitter lessons on what does and doesn’t work have been learned in the “wild, wild west” of earlier-generation virtual worlds and social VR platforms, such as the never-ending free-for-all of Second Life (and of course, the cheerful anarchy of VRChat, especially in the days before they were forced to implement their nuanced Trust and Safety System due to a tidal wave of harassment, trolling and griefing).

But I am extremely glad to see that Facebook has hired VR designers like Michelle Cortese and Andrea Zeller, and that the company is treating user safety in social VR as a non-negotiable tenet from the earliest design stages of the Horizon project, instead of scrambling to address it as an afterthought, as VRChat did. More social VR platforms need to do this.

I’m quite looking forward to seeing how this all plays out in 2020! I and many other observers will be watching Facebook Horizon carefully to see how well all these new security and safety features roll out and are embraced by users.

Sinespace Learns a Lesson the Hard Way: Pay-to-Play Marketing Can Backfire

Trilo Byte (a.k.a. TriloByte Zanzibar, one of the people behind the virtual fashion brand BlakOpal Designs, which started in Second Life and now operates in Sinespace) reports on his blog that a Sinespace marketing scheme has backfired, creating a serious griefer problem.

The problem is that at least one of the marketing companies Sinespace contracted with started offering what are called pay-to-play inducements, where new users are paid in IMVU credits or Roblox currency (Robux) if they download the app, create an account on Sinespace and use the program for a minimum length of time (e.g. 30 minutes).

This has apparently led to an unwelcome surplus of trolls, griefers, and online harassment in Sinespace:

According to Sine Wave, what is happening in-world is the result of a single marketing agency who they have already complained to about the practice. However, we’re still seeing these users coming in, often referred to by shady sites like this one, this one, and this one too (and those are just the sites users are posting links to in chat).

By virtue of being offered payment in another game’s currency, they are confirming from the onset that they have no interest outside of getting currency to spend on another platform. Do they really expect users coming in for IMVU or Roblox currency to abandon everything they’ve built? The promise of Sinespace may be great, but the world is far from finished.

It isn’t just a matter of setting themselves up for failure. It’s much worse. On top of bringing in a bunch of people who are very unlikely to join the community and even less likely to become economic participants, it creates the Sinespace griefing problem.

Now, other virtual worlds have made similar mistakes. For example, Linden Lab set up a Twitch bounty program which paid livestreamers to visit Sansar; it was abused by several people who trolled the platform (I’m not certain whether that program has since been suspended).

What is clear is that companies in the social VR/virtual world marketplace need to think carefully about the unintended consequences of offering financial inducements to entice new users on to their platforms. This is an embarrassing episode for Sinespace, one from which I hope they recover quickly. Sometimes you just have to learn a lesson the hard way.

Thanks to Joseph Zazulak for the news tip!