UPDATED! Academic Research in Social VR: Crowdsourcing Virtual Reality Experiments Using VRChat at Northeastern University

I am still working away on my presentation on the various uses of social VR in higher education, which I am to deliver on Sept. 8th, 2021 to my university’s senate committee on academic computing. Over the summer I have highlighted a number of interesting and innovative projects at various universities and colleges (you can find a number of them here, all tagged “Higher Education”). And I am especially heartened to see more and more published academic research on virtual reality, spurred by the increasing uptake of consumer-market VR headsets!

Conducting experiments in VR can sometimes be difficult, involving the purchase and setup of sometimes expensive hardware (particularly if multiple headsets need to be bought). University budgets can only go so far, even at the best of times. One way to get around this is to use existing commercial social VR platforms and their users as volunteers (who, of course, already have their own equipment).

This is a different form of what is called crowdsourcing: dividing up a task among a larger group of volunteers. In this case, researchers at Northeastern University in Boston, Massachusetts ran a small demonstration experiment to show that recruiting study volunteers via VRChat was feasible, publishing a paper at a computer science conference held last year. The research paper is unfortunately not free to access and read, but you can always use your friendly local public or academic library to obtain a copy of it! Here’s the citation:


Saffo, D., Yildirim, C., Di Bartolomeo, S., & Dunne, C. (2020). Crowdsourcing virtual reality experiments using VRChat. Conference on Human Factors in Computing Systems – Proceedings, 1–8. https://doi.org/10.1145/3334480.3382829


According to the conference paper’s abstract:

Research involving Virtual Reality (VR) headsets is becoming more and more popular. However, scaling VR experiments is challenging as researchers are often limited to using one or a small number of headsets for in-lab studies. One general way to scale experiments is through crowdsourcing so as to have access to a large pool of diverse participants with relatively little expense of time and money. Unfortunately, there is no easy way to crowdsource VR experiments. We demonstrate that it is possible to implement and run crowdsourced VR experiments using a preexisting massively multiplayer online VR social platform—VRChat. Our small (n = 10) demonstration experiment required participants to navigate a maze in VR. Participants searched for two targets then returned to the exit while we captured completion time and position over time. While there are some limitations with using VRChat, overall we have demonstrated a promising approach for running crowdsourced VR experiments.

One of many delightful images illustrating this research paper!

One of the features which attracted the researchers to VRChat was the ability to build custom virtual worlds or rooms:

VRChat also has a special feature that sparked our interest: it allows users to upload custom rooms built with Unity by using a proprietary VRChat SDK. The SDK contains special triggers and event handlers that can be triggered by users, in addition to giving the possibility to upload rooms made of and containing any kind of 3D models made by a creator. We started asking ourselves if we could leverage the vast amount of VRChat users who already own VR equipment and use them as experiment participants by building a custom room that contained the implementation of our experiment, in order to run crowdsourced experiments in VRChat.

And so they built a maze and ran a simple experiment:

The participants in the experiment were asked to run through a VR maze, find two targets inside the maze, and go back to the exit. The experiment was run using two point of views, immersive and non-immersive, and compared the timing between a group of self-declared gamers and non-gamers. Our reasoning for choosing this experiment over others was that it was simple enough to avoid having too many variables influencing the results, and it would give us a quick way to evaluate the process of conducting a user study on the platform.
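As a rough illustration of the kind of between-group comparison described above (this is not the researchers’ actual analysis code, and the completion times below are invented purely for the example), a gamers-versus-non-gamers timing comparison might look like this in Python:

```python
from statistics import mean

# Hypothetical maze completion times in seconds, keyed by
# self-reported group (the study compared gamers vs. non-gamers).
completion_times = {
    "gamers": [48.2, 52.7, 61.0, 45.9, 55.3],
    "non-gamers": [72.4, 68.1, 80.5, 75.0, 66.8],
}

# Report the mean completion time for each group.
for group, times in completion_times.items():
    print(f"{group}: mean completion time = {mean(times):.1f} s")
```

A real analysis would, of course, also involve significance testing and the immersive versus non-immersive conditions the paper describes; this sketch only shows the basic shape of the comparison.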

A researcher would then visit a public world in VRChat, asking the users present if they would be willing to run the maze.

After joining a public world, we began by looking for users using HMDs. We did this by asking users directly if they were using VR, or by observing their in-game movements as VR users have full head and sometimes hand tracking. We found that most users we approached were willing and eager to participate. After users had joined our world, they would spawn in a waiting room where we could give them further instructions. At this stage researchers conducting a user study may also present digital consent forms for participants to read and sign.

The researchers noted that, at the time of the proof-of-concept experiment, they were somewhat limited by the relatively narrow scope of what they could build using the then-available version of the VRChat SDK (software development kit). However, they noted that the next-generation graphical SDK (called Udon) offered the ability to build more complex interactive worlds, thereby expanding the possible uses for VR experiments.
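To give a flavour of the trigger-and-event-handler style the researchers describe, here is a minimal sketch in Python (purely for illustration; real VRChat worlds are scripted in Unity with the VRChat SDK, and the class and method names here are hypothetical):

```python
import time


class MazeTimerSketch:
    """Hypothetical sketch of the experiment logic the paper describes:
    entrance, target, and exit triggers fire event handlers that time a
    participant's run through the maze."""

    def __init__(self, targets_required: int = 2):
        self.targets_required = targets_required
        self.start_time = None
        self.targets_found = 0
        self.completion_time = None

    def on_enter_maze(self) -> None:
        # Entrance trigger: start the clock.
        self.start_time = time.monotonic()

    def on_target_found(self) -> None:
        # Target trigger: count the targets the participant has located.
        self.targets_found += 1

    def on_reach_exit(self) -> None:
        # Exit trigger: record a completion time only for a valid run
        # (participant entered the maze and found both targets).
        if self.start_time is not None and self.targets_found >= self.targets_required:
            self.completion_time = time.monotonic() - self.start_time
```

The point of the sketch is simply that a handful of trigger-fired handlers is enough to capture the completion-time measure the study used; the position-over-time capture would follow the same pattern with a periodic sampling handler.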

The researchers also noted the relative ease and cost effectiveness with which VRChat could be used for academic research into the growing field of social or collaborative virtual reality:

It is particularly exciting to note that VRChat can also be used to implement collaborative VR studies. Previously, such studies would require custom multiplayer platform development. VRChat not only provides an SDK to create worlds but also all the network capabilities to have several concurrent users all in the same virtual space.

UPDATE 2:02 p.m.: I’ve just discovered a recent five-minute YouTube video featuring the Northeastern researchers, explaining the concept of using existing social VR platforms for their experiments:

This video mentions and summarizes a second, follow-up research paper, which I have not yet read (again, you will have to pay to access this conference paper; you should be able to obtain a copy via your local public or academic library). Here’s the citation for you:


Saffo, D., Di Bartolomeo, S., Yildirim, C., & Dunne, C. (2021). Remote and collaborative virtual reality experiments via social VR platforms. Conference on Human Factors in Computing Systems – Proceedings. https://doi.org/10.1145/3411764.3445426


I’m quite eager to read this second research paper! According to the description of the YouTube video, a preprint of this conference paper and all supplemental materials are available at the following URL: osf.io/c2amz (so you might not need to pay for a copy via interlibrary loan/document delivery from your local library, after all).

Of course, it’s not just VRChat that could be repurposed as an academic testbed. Any number of commercially available social VR platforms can be used as cost-effective platforms to conduct VR experiments! The researchers at Northeastern University are to be commended for their proof-of-concept work, and I very much look forward to seeing other uses of social VR platforms in various areas of academic virtual reality research.

Nonverbal Communication in Social VR: Recent Academic Research

Gestures (like this peace sign) are an example of nonverbal communication (Photo by Dan Burton on Unsplash)

In the real world, much of our communication is non-verbal: facial expression, gaze, gestures, body movements, even spatial distance (proxemics).

While older, flat-screen virtual worlds such as Second Life are somewhat limited in the forms of nonverbal communication available (most people rely on text or voice chat), modern VR equipment and social VR platforms allow for more options:

  • Hand/finger movement: most VR headsets have hand controllers; the Valve Index has Knuckles hand controllers which allow you to move your fingers as well as your hands;
  • Body movement: the Vive pucks can be attached to your waist, hips, feet, and other parts of your body to track their movement in real time;
  • Eye movements/gaze: for example, the Vive Pro Eye VR headset can track the blinking and movement of the eyes;
  • Facial expression: add-ons such as the Vive Facial Tracker (which attaches to your VR headset) allow you to convey lower face and mouth movements on your avatar.

In addition, many social VR platforms also employ emoticons, which can be pulled up via a menu and displayed over the head of the avatar (e.g. the applause emoji in AltspaceVR), as well as full-body pre-recorded animations (e.g. doing a backflip in VRChat). The use of all these tools, in combination or alone, allows users in social VR to approach the level of non-verbal communication found in real life, provided they have the right equipment and are on a platform which supports that equipment (e.g. NeosVR, where you can combine all these into an avatar which faithfully mimics your facial and body movements).

Two recently published research papers investigate nonverbal communication on social VR platforms, adding to the growing academic literature on social VR. (I am happy to see that social VR is starting to become a topic of academic research!)


Maloney, D., Freeman, G., & Wohn, D. Y. (2020). “Talking without a Voice”: Understanding Non-Verbal Communication in Social Virtual Reality. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2). https://doi.org/10.1145/3415246

Unfortunately, there is no open-access version of this conference proceeding available; you’ll have to obtain a copy from your local academic or public library. This paper, by Divine Maloney and Guo Freeman of Clemson University and Donghee Yvette Wohn of the New Jersey Institute of Technology, consists of two parts:

  • conducting unobtrusive observations of 61 public events held in AltspaceVR over the span of four weeks, to see what non-verbal interactions were being used naturally on the platform; and
  • interviewing 30 users of social VR platforms (of which I was one!), where the paper’s authors read through the transcribed interview data to acquire a picture of how social VR users used, perceived, and experienced non-verbal communication for further analysis.

In the first study of the two, the authors noted the following different kinds of nonverbal communication:

  • the use of movement to indicate that someone was paying attention. These included nodding behaviors and moving the body or head toward the person or object that was the subject of attention;
  • the use of applause to indicate approval;
  • pointing and patting one’s own chest as a form of directing attention either at a remote object/person or oneself;
  • behaviours such as waving, dancing, and kissing, which were mostly used in social grooming contexts (dancing was also used as entertainment);
  • and finally, the behaviour of trolls: interpersonal provocation and social disruptions.

With respect to the thirty interviews conducted, the transcripts were analyzed as follows to answer two research questions:

Using quotes from users’ own accounts, in this section we present our findings as two parts. First, to answer RQ2 (How do people perceive and understand non-verbal communication in social VR?), we identified three common themes that demonstrated how users perceive and understand non-verbal communication in social VR: as more immersive and embodied interactions for body language; as a similar form of communication to offline face-to-face interaction in terms of spatial behavior, hand behavior, and facial expressions; and as a natural way to initiate communication with online strangers.

Second, to answer RQ3 (How, if at all, does non-verbal communication affect interaction outcomes in social VR?), we described the social consequences of interacting through non-verbal communication in social VR for various user groups, including marginalized users such as cis women, trans women, and disabled users. We specially highlighted how non-verbal communication in social VR afforded privacy and social comfort as well as acted as a protection for marginalized users.

Unsurprisingly, the researchers discovered that most participants considered non-verbal communication to be a positive aspect of their social VR experience. Those surveyed highly praised body tracking (either just the hands and head, or in some cases the whole body), as it allowed for a more immersive and embodied form of non-verbal communication than that found in traditional, flatscreen virtual worlds.

In addition to supporting more immersive and embodied interactions for body language, participants also considered non-verbal communication in social VR similar to offline face-to-face interaction in terms of spatial behavior, hand behavior, and facial expressions. This familiarity and naturalness greatly contributed to their generally positive perceptions.

Participants also viewed non-verbal communication in social VR as positive and effective because it became a less invasive way to start interactions with online strangers (e.g. waving hello at someone you’ve just met). Nonverbal communication also afforded some users a sense of privacy and social comfort, and in some cases became an effective protection for them to avoid unwanted interactions, attention, and behaviors (especially for LGBTQ people and women).

The paper made three design recommendations for improved nonverbal communication in social VR platforms: providing support for facial tracking (which is already on its way with products like the Vive Facial Tracker); supporting more accurate hand and finger tracking (again, already underway with the Knuckles controllers for the Valve Index); and enabling alternative modes of control, especially for users with physical disabilities. While most of the study participants highly praised full body tracking in social VR, disabled users in fact complained about this feature and demanded alternatives.

The conference paper concludes:

Recently, commercial social VR applications have emerged as increasingly popular digital social spaces that afford more naturally embodied interaction. How do these novel systems shape the role of non-verbal communication in our online social lives? Our investigation has yielded three key findings. First, offline non-verbal communication modalities are being used in social VR and can simulate experiences that are similar to offline face-to-face interactions. Second, non-verbal communication in social VR is perceived overall positive. Third, non-verbal interactions affect social interaction consequences in social VR by providing privacy control, social comfort, and protection for marginalized users.


Tanenbaum, T. J., Hartoonian, N., & Bryan, J. (2020). “How do I make this thing smile?”: An Inventory of Expressive Nonverbal Communication in Commercial Social Virtual Reality Platforms. Conference on Human Factors in Computing Systems – Proceedings, 1–13. https://doi.org/10.1145/3313831.3376606

This paper is available free to all via Open Access. In this conference proceeding, Theresa Jean Tanenbaum, Nazely Hartoonian, and Jeffrey Bryan of the Transformative Play Lab at the Department of Informatics at the University of California, Irvine, did a study of ten social VR platforms:

  • VRChat
  • AltspaceVR
  • High Fidelity (which shut down in January of 2020)
  • Sansar
  • TheWave VR (this social VR platform shut down in early 2021)
  • vTime XR
  • Rec Room
  • Facebook Spaces (since shut down and replaced by Facebook Horizon)
  • Anyland
  • EmbodyMe

For each platform, investigators answered the following eight questions:

  1. Can the user control facial expressions, and if so, how? (Pre-baked emotes, puppeteering, etc.)
  2. Can the user control body language, and if so, how? (Pre-baked emotes, puppeteering, postures, etc.)
  3. Can the user control proxemic spacing (avatar position), and if so, how? (Teleport, hotspots, real-world positioning, etc.) How is collision handled between avatars? (Do they overlap, push each other, etc.)
  4. How is voice communication handled? (Is audio spatialized, do lips move, is there a speaker indicator, etc.)
  5. How is eye fixation/gaze handled? (Do avatars lock and maintain gaze, is targeting gaze automatic, intentional, or some sort of hybrid, do eyes blink, saccade, etc.)
  6. Are different emotions/moods/affects supported, and how are they implemented? (Are different affective states possible, and do they combine with other nonverbal communications, etc.)
  7. Can avatars interact physically, and if so, how? (Hugging, holding hands, dancing, etc.) What degree of negotiation/consent is needed for multi-avatar interactions? (One-party, two-party, none at all?)
  8. Are there any other kinds of nonverbal communication possible in the system that have not been described in the answers to the above questions?

The result was a rather complete inventory of nonverbal communication in social VR, with the goal of cataloguing common design elements for avatar expression and identifying gaps and opportunities for future design innovation. Here is the table from the paper (which can be viewed at full size at the top of page 6 of the document).

An inventory of non-verbal communication in ten social VR platforms (source)

VR development is proliferating rapidly, but very few interaction design strategies have become standardized…

We view this inventory as a first step towards establishing a more comprehensive guide to the commercial design space of NVC [non-verbal communication] in VR. As a design tool this has two immediate implications for designers. First, it provides a menu of common (and less common) design strategies, and their variations, from which designers may choose when determining how to approach supporting any given kind of NVC within their platform. Second, it calls attention to a set of important social signals and NVC elements that designers must take into consideration when designing for Social VR. By grounding this data in the most commonly used commercial systems, our framework can help designers anticipate the likelihood that a potential user will be acquainted with a given interaction schema, so that they may provide appropriate guidance and support.

Our dataset also highlights some surprising gaps within the current feature space for expressive NVC. While much social signaling relies upon control of facial expression, we found that the designed affordances for this aspect of NVC to be mired in interaction paradigms inherited from virtual worlds. Facial expression control is often hidden within multiple layers of menus (as in the case of vTime), cannot be isolated from more complex emotes (as in the case of VR Chat), hidden behind opaque controller movement (as in Facebook Spaces), or unsupported entirely. In particular, we found that with the exception of dynamic lip-sync, there were no systems with a design that would allow a user to directly control the face of their avatar through a range of emotions while simultaneously engaging in other forms of socialization.

The authors go on to say that they observed no capacity in any of the systems to recombine and blend the various forms of nonverbal communication, such as can be done in the real world:

As we saw in our consideration of the foundations of NVC in general, and Laban Movement Analysis in particular, much NVC operates by layering together multiple social signals that modify, contextualize, and reinforce other social signals. Consider, for instance, that it is possible to smile regretfully, laugh maliciously, and weep with joy. People are capable of using their posture to indicate excitement, hesitation, protectiveness, and many other emotional states, all while performing more overt discourse acts that inherit meaning from the gestalt of the communicative context.

The conference paper concludes:

As is evident in the scholarly work around social VR, improving the design space for NVC in VR has the potential to facilitate deeper social connection between people in virtual reality. We also argue that certain kinds of participatory entertainment such as virtual performance will benefit greatly from a more robust interaction design space for emotional expression through digital avatars. We’ve identified both common and obscure design strategies for NVC in VR, including design conventions for movement and proxemic spacing, facial control, gesture and posture, and several strategies unique to avatar mediated socialization online. Drawing on previous literature around NVC in virtual worlds, we have identified some significant challenges and opportunities for designers and scholars concerned with the future of socialization in virtual environments. Specifically, we identify facial expression control, and unconscious body posture as two critical social signals that are currently poorly supported within today’s commercial social VR platforms.

It is interesting to note that both papers cite the need to properly convey facial expressions as key to expanding the ability of avatars in social VR to convey non-verbal communication!

UPDATED! Nanome: A Brief Introduction to a Social VR Platform for Exploring Chemistry

Nanome: “The Future of Molecular Design” (VRFocus)

Virtual reality is finding application to many fields, and among them is chemistry. For example, in the spring of 2020, Harvard University used Oculus Quest VR headsets in an undergraduate-level biochemistry class to help students to observe, manipulate, and build molecules and explore the shapes of proteins and drug compounds. (Here’s a link to the recently-published paper in the Journal of Chemical Education. Unfortunately, you’ll have to buy the full-text article, or get a copy via your local public or university library. Remember, librarians are your friends!)

VR use in chemistry is not just for students learning about the basics of chemistry, however; it also has application to research scientists working in the laboratory. A good example of how social VR can be used in cutting-edge, collaborative chemistry research is Nanome, a startup co-founded in 2015 by some engineering students at University of California San Diego, who saw a need for 3D visualization tools to help medicinal and computational chemists and structural biologists reduce their time to market and increase the efficacy of new drugs (a process that can cost billions of dollars per drug).

Nanome recently announced that it had closed a successful funding round, raising $3 million from several venture capital firms:

“Since our founding, we’ve had a compelling vision about what scientific collaboration should look like and a goal to equip our real-life superheroes — scientists who are discovering ways to combat disease, address climate change and improve people’s lives — with an intuitive virtual interface where they can experiment, design and learn at the nanoscale,” said Steve McCloskey, Nanome CEO and Founder in a statement. “We made huge strides toward realizing that vision in 2020, and this funding gives us firepower to increase our impact, support more research initiatives and continue to revolutionize biotech and scientific research.”

Initially starting as a visualization tool to facilitate research and development by medicinal and computational chemists and structural biologists, Nanome has grown as an open platform for virtual collaboration. During the pandemic organizations have used Nanome’s platform “to assess candidate molecules’ ability to bind viral proteins in 3D,” the company notes.

In fact, Nanome became the first American company to join a coordinated supercomputing project funded by the European Union (EU) Commission to screen chemical libraries for potential activity against SARS-CoV-2, the coronavirus that causes COVID-19! (Here’s the press release.)

Nanome is being used in the search for drugs to fight COVID-19

And the best part is, you can try Nanome out for free! Nanome is free to download for personal use via Steam, Viveport, SideQuest, and the Oculus store, supporting the Oculus Rift, HTC Vive and Valve Index headsets. For academic or commercial use there are various licensing structures; for more details, visit the pricing page on their website.

For more information on Nanome, visit their website or follow the company on social media: Facebook, Twitter, Instagram, LinkedIn, or YouTube. I will be adding Nanome to my ever-expanding comprehensive list of social VR and virtual worlds.

UPDATE July 9th, 2021: Here’s an interesting article about Nanome, from a website called LabCompare: VR for Science: Drug Discovery and More in the Virtual World, with some great illustrations!

UPDATE Oct. 14th, 2021: A Sept. 7th, 2021 Wall Street Journal article by Sara Castellanos titled Virtual Reality Puts Drug Researchers Inside the Molecules They Study (original; archived version) is a highly recommended read if you want to learn more about Nanome and how it is being used in research.

Social VR in Higher Education: A Survey and a Presentation (Be Careful What You Wish For!)

Photo by Jaredd Craig on Unsplash

Be very careful what you wish for, people.

I have been toiling away on this blog for four years now, in (relative) obscurity, focusing from time to time on academic research into various aspects of social VR, and hoping and dreaming of a day when it becomes more mainstream technology in universities and colleges. I even began a research project myself, which I unfortunately had to suspend because its scope proved wildly over-ambitious (librarians at the University of Manitoba are members of the faculty union, and have an opportunity to pursue research projects).

Well, guess what? I have been asked to give a half-hour presentation to my university’s senate committee on academic computing, on the applications of social VR to higher education. It’s to take place in early September, so I have a couple of months to research and prepare my slide deck.

And I am terrified!

Why? Because this is an important, high-level university committee, and I have never given a presentation to these kinds of people before. Sure, I have given all kinds of presentations to undergraduate and graduate students at my university, and of course, I have slipped on a VR headset and given presentations in places like ENGAGE and AltspaceVR. In fact, it was my presentation on social VR for the Students in VR group in AltspaceVR earlier this year that led my director of libraries to recommend me for this one!

So I was feeling major impostor syndrome, people. Until I gave my head a shake and told myself: Ryan, you’ve got this. You’ve been passionately blogging about this for four years now. If anybody can talk about social VR, it’s you!

So, my first step was to send out a message to all the social VR Discord servers I belong to:

Hey everybody! I have been asked to give a half-hour presentation at my university about the uses of social VR in higher education (colleges, universities, etc.). I would be interested in learning more about specific university/college partnerships and projects on social VR platforms, if you know of any could you please tell me about them? Thank you!

And I have been collating responses for the past 24 hours! I want to thank everybody who has responded to me so far. I hope to include many of the projects I hear about in my presentation, as examples of how higher education is using social VR platforms for teaching and research. (I will also blog about many of the projects I find, here on my blog.)

So, if you are aware of any specific university and college projects involving social virtual reality (either building a platform from scratch or using an existing social VR platform like NeosVR, ENGAGE, etc.), I would love to hear about them!

Please send me a message via my Contact Me page, or leave a comment here on this blogpost, thanks!