VRChat’s Latest Security Update, Incorporating Easy Anti-Cheat, Is Causing Controversy Among Users

Yesterday, VRChat posted the following to their official blog, under the title The VRChat Security Update:

“Modified clients” are a large problem for VRChat in a variety of ways. Malicious modified clients allow users to attack and harass others, causing a huge amount of moderation issues. Even seemingly non-malicious modifications complicate the support and development of VRChat, and make it impossible for VRChat creators to work within the expected, documented bounds of VRChat.

In order to prevent that, we’ve implemented Easy Anti Cheat (EAC) into VRChat.

If you’ve played Apex Legends, Fortnite, Gears of War, Elden Ring, or many more, you’ve seen Easy Anti-Cheat (EAC).

EAC is the industry-leading anti-cheat service. It’s lightweight, effective, and privacy-focused. In short, for any game or platform looking to prevent malicious users from breaking the rules, it’s a powerful solution.

The integration of EAC means that all modified clients are blocked. The problems mentioned above will be minimized if not outright eliminated, improving the VRChat experience for users and creators.

Malicious client modifications are responsible for a massive amount of issues for both our team and our users. We’ve been listening to you cry out for a solution to being harassed, griefed, and constantly crashed, so we’re taking further steps to address one of the roots of the problem.

Our Trust & Safety and User Support teams witness first hand how much damage modified clients do to the platform. 

Every month, thousands of users have their accounts stolen, often due to running a modified client that is silently logging their keystrokes as well as other information. These users – often without even realizing it! – run the risk of losing their account, or having their computers become part of a larger botnet. 

These networks of modified clients perform malicious actions without informing users – such as reporting back user locations to harassers or stalkers, ripping and archiving avatars, allowing mass harassment of users via automated actions, and even acting as nodes for distributed “zombie” botnets. We’ve directly observed this happening innumerable times, and it alarms us!

Additionally, all modified clients – even ones that aren’t malicious – are a burden for creators. We regularly speak to many that have spent hours (or days) debugging user issues, only to realize that the culprit is a modified client. This frustration ultimately has a chilling effect on VRChat creators, hurting their enthusiasm and preventing them from building awesome things. 

This pain extends to VRChat support too – any time we update, we get a massive amount of bug reports that end up just being broken modifications. In addition to burning developer time, this support burden also frustrates less technically-inclined users who didn’t know what they were getting into by installing these modifications.

Now, keep in mind that it has always been against the VRChat Terms of Service to make modifications to the official VRChat client. Those who break the ToS risk being banned from the platform, but (much like earlier flatscreen virtual worlds, e.g. Second Life), there’s really very little stopping an infringer from creating a brand new account to get around the ban.

However, it appears that many users are unhappy with this latest move by the company, which will impact useful mods as well. Among the ways users are voicing their displeasure is by review-bombing VRChat on Steam:

However, this has not gone down well with the game’s community. Modding was a large part of the VRChat experience despite it being technically disallowed. Mods are currently used to address the game’s poor performance as well as to add missing accessibility features such as speech-to-text (via PCGamer).

Recent Steam reviews for VRChat are currently sitting at “mostly negative” as thousands of negative reviews are flooding in from displeased community members.

One such negative review lists the outcomes its author expects EAC to bring to VRChat. “What EAC will do for VRChat: Lower framerates, increase instability, stop script kiddies, stop “wholesome” mods, accessibility mods, and quality of life mods, Stop [GPU software] from improving your framerate…”

One user with almost 9,000 hours left a sarcastic positive review, stating that the uninstall button works great on the game. Another simply stated, “horrible devs, entirely disconnected from the community and what they want.”

The VRChat development team has yet to address the ongoing community backlash to their decision.

A reporter from the website TheGamer writes:

As highlighted by a ResetEra thread, there have been over 5,000 negative reviews filed since July 1. This has brought the recent reviews score down to Mostly Negative despite the game’s Very Positive overall rating. Mods are against VRChat’s terms of service but the community use a slew of client-side addons to fix a lot of the bugs while also adding key quality of life features. But these mods do a lot more than that, making the game safer, and they’re set to break with the anti-cheat update.

Mods such as AdvancedSafety, LagFreeScreenshots, JoinNotifier, UIExpansionKit, and CameraMinus will all be flagged by the anti-cheat, resulting in a ban for users. The community has voiced concern over this with the Discord racking up complaints and backlash as a feedback post on the official website accumulates over 18,600 upvotes.

An anonymous source has shared with me the following message which is circulating among the VRChat hacker/mod community:

As some of you may have noticed, VRChat’s next big update is regarding their new EAC (Easy Anti Cheat).

This means that VRChat is requiring you to get rid of any mods you may have been using with this next update. You won’t be able to launch the game until you do.

Still waiting on more information from VRChat regarding this entire situation. I will be keeping an eye out and will update here as well.

As you may have noticed, VRChat has released an open beta that includes Easy Anti-Cheat, which prevents use of mods.

Playing a cat-and-mouse game with anti-cheat developers is not something a modding community as big and open as ours can win, so if this open beta makes it to release as-is, it would mean the end of wholesome modding.

Now is your chance to tell VRChat that this is a dumb change. It does not solve ripping or crasher avatars. It probably won’t stop malicious mods, as they’re far smaller and can evade anti-cheat more easily. Nor are those mods open source, meaning you never really know what you’re running, and you risk getting your account stolen, or worse.

However, it prevents you from having unlimited avatar favourites. It prevents you from using anti-crash mods. It prevents you from using all other mod features you’ve come to enjoy. It prevents you from using safe, open-source mods that never made anyone’s experience worse.

So, we recommend that you cancel your VRC+ if you have it, and do not launch VRChat for at least a week to produce a visible player count drop. Encourage your friends to do the same, even if they don’t use mods themselves. Anti-community measures like these from a greedy corporation should be protested as loudly as possible. You can take the time off VRChat to explore alternative platforms – Neos provides a different experience, and ChilloutVR aims to be similar to what you know (you can even auto-convert some of your avatars!). Or you can choose to experience other VR and flatscreen games together with friends you’ve made in VRChat.

This is not really a “security update”. It’s much more of a “we’re too afraid of people doing our job better than us” update.

You can upvote a Canny post here.

Stay tuned—it looks like things are going to get interesting! 😉

UPDATE 9:39 p.m.: ThrillSeeker spends the first ten minutes of his 15-minute weekly VR news update discussing this story, making an excellent case that VRChat should have implemented the accessibility features provided by some mods before implementing EAC and cutting off thousands of users who relied on things such as speech-to-text:

He makes some excellent points, and ones that VRChat should listen to.

UPDATE July 28th, 2022: Gaming news website Kotaku has published an update on the situation, titled The World’s Most Popular Social VR Game Is In Turmoil:

Whether this week’s Security Update will go down as a decisive turning point in the history of the VRChat community or just a larger-than-usual blip remains to be seen. But two things are certain: A lot of players are angry, and the Security Update is here to stay.

Two Virtual Reality Designers Discuss Techniques and Strategies for Implementing Safer Social VR (Including an Example from the Forthcoming Facebook Horizon Platform)

Photo by Mihai Surdu on Unsplash

Back at the start of November, two VR designers, Michelle Cortese and Andrea Zeller, wrote an article for Immerse on aspects of designing safer social VR spaces. That article was recently reprinted on The Next Web news site, titled How to protect users from harassment in social VR spaces, and it’s an excellent read on the subject, which I highly recommend.

In particular, research conducted by Jessica Outlaw and others has shown that female-identifying users of social VR platforms are often the victims of sexual harassment. Michelle Cortese writes:

As female designers working in VR, my co-worker Andrea Zeller and I decided to join forces on our own time and write a comprehensive paper. We wrote about the potential threat of virtual harassment, instructing readers on how to use body sovereignty and consent ideology to design safer virtual spaces from the ground up. The text will soon become a chapter in the upcoming book: Ethics in Design and Communication: New Critical Perspectives (Bloomsbury Visual Arts: London).

After years of flagging potentially-triggering social VR interactions to male co-workers in critiques, it seemed prime time to solidify this design practice into documented research. This article is the product of our journey.

The well-known immersive aspect of virtual reality—the VR hardware and software tricking your brain into believing what it is seeing is “real”—means that when someone threatens or violates your personal space, or your virtual body, it feels real.

This is particularly worrisome as harassment on the internet is a long-running issue; from trolling in chat rooms in the ’90s to cyber-bullying on various social media platforms today. When there’s no accountability on new platforms, abuse has often followed — and the innate physicality of VR gives harassers troubling new ways to attack. The visceral quality of VR abuse can be especially triggering for survivors of violent physical assault.

Cortese and Zeller stress that safety needs to be built into our social VR environments: “Safety and inclusion need to be virtual status quo.”

The article goes into a discussion of proxemics, which I will not attempt to summarize here; I would instead strongly urge you to go to the source and read it all for yourself, as it is very clearly laid out. A lot of research has already been done in this area, which can now be applied as we build new platforms.

And one of those new social VR platforms just happens to be Facebook Horizon, a project on which both Michelle Cortese and Andrea Zeller have been working!

What I found particularly interesting in this article was an example the authors provided of how this user safety research is being put to use in the Facebook Horizon social VR platform, which will be launching in closed beta early this year. Apparently, there will be a button you can press to immediately remove yourself from a situation where you do not feel comfortable:

We designed the upcoming Facebook Horizon with easy-to-access shortcuts for moments when people would need quick-action remediation in tough situations. A one-touch button can quickly remove you from a situation. You simply touch the button and you land in a space where you can take a break and access your controls to adjust your experience.

Once safely away from the harasser, you can optionally choose to mute, block, or report them to the admins while in your “safe space”:

Handy features such as these, plus Facebook’s insistence on linking your personally-identifying account on the Facebook social network to your Facebook Horizon account (thus making it very difficult to be anonymous), will probably go a long way towards making women (and other minorities such as LGBTQ folks) feel safer in Facebook Horizon.

Of course, griefers, harassers and trolls will always try to find ways around the safeguards put in place, such as setting up dummy alternative accounts (Second Life and other virtual worlds have had to deal with such problems for years). We can also expect “swatting”-type attacks, where innocent people are falsely painted as troublemakers using the legitimate reporting tools provided (something we’ve unfortunately already seen happen in a few instances in Sansar).

Some rather bitter lessons on what does and doesn’t work have been learned in the “wild, wild west” of earlier-generation virtual worlds and social VR platforms, such as the never-ending free-for-all of Second Life (and of course, the cheerful anarchy of VRChat, especially in the days before they were forced to implement their nuanced Trust and Safety System due to a tidal wave of harassment, trolling and griefing).

But I am extremely glad to see that Facebook has hired VR designers like Michelle Cortese and Andrea Zeller, and that the company is treating user safety in social VR as a non-negotiable tenet from the earliest design stages of the Horizon project, instead of scrambling to address it as an after-thought as VRChat did. More social VR platforms need to do this.

I’m quite looking forward to seeing how this all plays out in 2020! I and many other observers will be watching Facebook Horizon carefully to see how well all these new security and safety features roll out and are embraced by users.