Back at the start of November, two VR designers, Michelle Cortese and Andrea Zeller, wrote an article for Immerse on aspects of designing safer social VR spaces. That article was recently reprinted on The Next Web news site, titled How to protect users from harassment in social VR spaces, and it’s an excellent read on the subject, which I highly recommend.
Research conducted by Jessica Outlaw and others has shown that female-identifying users of social VR platforms are frequently the targets of sexual harassment. Michelle Cortese writes:
As female designers working in VR, my co-worker Andrea Zeller and I decided to join forces on our own time and write a comprehensive paper. We wrote about the potential threat of virtual harassment, instructing readers on how to use body sovereignty and consent ideology to design safer virtual spaces from the ground up. The text will soon become a chapter in the upcoming book: Ethics in Design and Communication: New Critical Perspectives (Bloomsbury Visual Arts: London).
After years of flagging potentially-triggering social VR interactions to male co-workers in critiques, it seemed prime time to solidify this design practice into documented research. This article is the product of our journey.
The well-known immersive aspect of virtual reality—the VR hardware and software tricking your brain into believing what it is seeing is “real”—means that when someone threatens or violates your personal space, or your virtual body, it feels real.
This is particularly worrisome, as harassment on the internet is a long-running issue, from trolling in chat rooms in the ’90s to cyber-bullying on various social media platforms today. When there’s no accountability on new platforms, abuse has often followed — and the innate physicality of VR gives harassers troubling new ways to attack. The visceral quality of VR abuse can be especially triggering for survivors of violent physical assault.
Cortese and Zeller stress that safety needs to be built into our social VR environments: “Safety and inclusion need to be virtual status quo.”
The article goes into a discussion of proxemics, which I will not attempt to summarize here; I would instead strongly urge you to go to the source and read it all for yourself, as it is very clearly laid out. A lot of research has already been done in this area, which can now be applied as we build new platforms.
And one of those new social VR platforms just happens to be Facebook Horizon, a project on which both Michelle Cortese and Andrea Zeller have been working!
What I found especially interesting in this article was an example the authors provided of how this user safety research is being put to use in the Facebook Horizon social VR platform, which will be launching in closed beta early this year. Apparently, there will be a button you can press to immediately remove yourself from any situation where you do not feel comfortable:
We designed the upcoming Facebook Horizon with easy-to-access shortcuts for moments when people would need quick-action remediation in tough situations. A one-touch button can quickly remove you from a situation. You simply touch the button and you land in a space where you can take a break and access your controls to adjust your experience.
Once safely away from the harasser, you can then choose to mute, block, or report them to the admins from within your “safe space”.
Handy features such as these, plus Facebook’s insistence on linking your personally-identifying account on the Facebook social network to your Facebook Horizon account (thus making it very difficult to be anonymous), will probably go a long way toward making women, and other frequently-targeted groups such as LGBTQ folks, feel safer in Facebook Horizon.
Of course, griefers, harassers and trolls will always try to find ways around the safeguards put in place, such as setting up dummy alternative accounts (Second Life and other virtual worlds have had to deal with such problems for years). We can also expect “swatting”-type attacks, where innocent people are falsely painted as troublemakers using the legitimate reporting tools provided (something we’ve unfortunately already seen happen in a few instances in Sansar).
Some rather bitter lessons on what does and doesn’t work have been learned in the “wild, wild west” of earlier-generation virtual worlds and social VR platforms, such as the never-ending free-for-all of Second Life (and of course, the cheerful anarchy of VRChat, especially in the days before they were forced to implement their nuanced Trust and Safety System due to a tidal wave of harassment, trolling and griefing).
But I am extremely glad to see that Facebook has hired VR designers like Michelle Cortese and Andrea Zeller, and that the company is treating user safety in social VR as a non-negotiable tenet from the earliest design stages of the Horizon project, instead of scrambling to address it as an afterthought as VRChat did. More social VR platforms need to do this.
I’m quite looking forward to seeing how this all plays out in 2020! I and many other observers will be watching Facebook Horizon carefully to see how well all these new security and safety features roll out and are embraced by users.