Creating Virtual Learning Spaces Using Mozilla Hubs at the New School

The New School, a private university in New York City, has recently launched a program using the social VR platform Mozilla Hubs as a component of classes. This is an initiative of the New School’s XReality Center, a new research centre and testbed with four core components:

  1. Immersive Learning: Create a resource hub for inspiring XR initiatives within the university with the focus on developing new learning models, design, storytelling, performing arts and the future of learning;
  2. XR and HCI Labs: Learn, design and experience what immersive worlds, XR and Human Computer Interaction (HCI) interfaces offer through our workshops, events, virtual and lab environments;
  3. Research: Lead and conduct research to create new knowledge and better understand the efficacy and impact of immersive and emerging technology in education and across industries; and
  4. Partnerships: Develop XR projects and products with internal and external clients and partners.

Starting in 2020, the XReality Center began a program to create virtual spaces using Mozilla Hubs to enhance the learning experience:

Starting in Spring 2020, the XReality Center has embarked on creating a set of virtual spaces in Mozilla Hubs as a way to enhance student engagement and provide new opportunities for collaboration. During the fall semester the XReality Center will host virtual events, class visits and other social activities in these spaces. The XReality Center is interested in partnering with faculty, programs, schools, and administrative departments to develop and offer virtual teaching and learning initiatives.

The State of XR and Immersive Learning Outlook Report 2021 (available to download here), recently published by the Immersive Learning Research Network, describes one such application:

In a year marked by lack of access to VR labs, Mozilla Hubs gave opportunities for students to use collaborative tools and explore creating together in social worlds. At Parsons School of Design in Fall 2020, over 100 students enrolled in the Immersive Storytelling course met weekly in Mozilla Hubs to co-create virtual narratives, play, and build worlds.

Inspired by the work at the XR and HCI Innovation Labs at The New School, faculty and students from across the Parsons Art Media, Technology, and Fashion schools exhibited their 3D models and presented projects in virtual galleries using audio, video, and an abundance of student creativity. While students acknowledge the limitations of the Mozilla Hub interface, most reported that they enjoyed the opportunity to be in a shared space. One of the students summed it up: “I think it is very fun to be in a virtual world, for me it is a place where I can explore my ideas that may not be possible to create in the real world.”

Four examples of student-created worlds from the Immersive Storytelling course at The New School’s Parsons School of Design (image source: iLRN State of XR and Immersive Learning Outlook Report 2021)

The use of a simple, accessible platform such as Mozilla Hubs makes it possible for the university to experiment quickly and cheaply, without a massive investment of time and money in building its own platform. I believe we can expect to see more institutions of higher education set up programs similar to The New School's XReality Center, as a way to incorporate XR technology into the courses they teach.

Second Life Steals, Deals, and Freebies: New Addams VIP Group and L$500 Store Credit!

The popular Second Life womenswear store Addams is celebrating its 7th birthday with a brand new store group, the Addams Store VIP group, which you can join for only L$5! With your group membership, you can pick up a full fatpack of the Tiara bikini, which comes with an open shirt to wear over top, as well as L$500 in store credit to spend on whatever you wish (the credit is enough to get you two pieces of clothing or footwear):

The Addams store is extremely busy at the moment, but you can also teleport into the sim next door, Addams Land, join the group (just search for “Addams Store VIP” under Groups in Search), pick up the new group gift outfit and the L$500 credit, and do a little camshopping at the Addams store in the next sim over (here are detailed instructions on how to camshop). Two sims, no waiting! 😉

Here’s an example of an outfit you can pick up for that L$500, the Penelope black leather jacket and top paired with Penelope black jeans, plus a free necklace to complete the ensemble! You can use the included HUD to change the top under the jacket to one of dozens of different colours, as well as make it either opaque or transparent (with an optional bra underneath). Lots of options!

You only have five days to use the L$500 in store credit before it expires, so don’t delay! Happy shopping!

VRchaeology: Using Virtual Reality to Teach Archaeology Skills at the University of Illinois at Urbana-Champaign

I am spending this summer doing a deep-dive into the use of virtual reality (including social VR) in higher education, partly for this blog, and partly to find good examples of such usage for my presentation in September to my university’s senate committee on academic computing.

One such project that got my attention was an interesting virtual archaeology program, called VRchaeology. While it may not be social VR, it is certainly an innovative way to introduce archaeology skills to new students, and much more immersive than textbooks, lectures, and videos!

The State of XR and Immersive Learning Outlook Report 2021 (available to download here), recently published by the Immersive Learning Research Network, describes the project as follows:

The University of Illinois at Urbana-Champaign is using XR to teach archaeology and address the challenges of students finding time and funding for fieldwork, an essential requirement of that discipline. The VR laboratory, VRchaeology, is one such project, funded by a National Science Foundation grant and designed by anthropology professor Laura Shackelford; educational policy, organization, and leadership professor David Huang; and computer science graduate student Cameron Merrill. The virtual experience is based on an actual North American cave site excavated in the 1930s and includes over 110 virtual artifacts, many of which are based on objects in the University’s own collection.

These immersive experiences cover complex activities that comprise the work of a professional archaeologist, from initially mapping a site to creating an excavation grid and using ground-penetrating radar. Students learn how to dig for artifacts, record data, collaborate with each other, and reach scientific conclusions. Importantly, students have the agency to manage their own learning journey. In addition, their possible miscalculations and missteps do not impact the value of the historical artifacts, nor alter the significance of an actual site; instead, they help them develop and apply a deeper understanding to students learning to become expert archaeologists in their own right. With the virtual cave lowering the barriers to the fieldwork requirement, it also opens up the discipline to lower income students who may be unable to travel to an actual site.

Image source: iLRN State of XR and Immersive Learning Outlook Report 2021
UIUC anthropology professor Laura Shackelford; educational policy, organization and leadership professor David Huang; and computer science graduate student Cameron Merrill, the creators of VRchaeology (source)

Much like the Egyptian tomb of Queen Nefertari, which was recreated in the former social VR platform High Fidelity, one purpose of VRchaeology is to provide access to potentially fragile places and objects that would not be suited to real-life visits by hundreds or thousands of students. The project’s website clearly spells out what VRchaeology is, and is not:

It is…

• A semester long course to introduce field and lab methods
• A replicable, controlled teaching environment
• A way to introduce archaeology to a new audience
• An active, immersive environment

It is not…

• A replacement for field school
• A complete lesson in methods for Majors or graduate students
• A stand-alone game
• A passive experience

VRchaeology is used by undergraduate students who might not have the time or money to attend a real-world dig site, and the course it is used in satisfies the field school requirement for those pursuing an archaeology degree at UIUC. An article from the Illinois news bureau about the project adds:

“Field school is a requirement of most archaeological programs across the country,” said Illinois anthropology professor Laura Shackelford, who led development of the class with…Wenhao David Huang and computer science graduate student Cameron Merrill. “But traveling to a field school site can cost anywhere from $500 to $5,000.”

This, and the fact that traditional field school expeditions are often scheduled during breaks, makes it difficult or impossible for many students to attend – cutting them off from the study of archaeology altogether.  

“This class makes it possible for many more students to get an education or explore a career in archaeology,” Shackelford said. The class is also accessible to students with physical limitations who are unable to travel to or navigate a field site…

The students learn the archaeological techniques required in any excavation. They set up a research grid on the cave floor and systematically locate and record any artifacts they find on the surface. They draw a map with all of the surface details and then decide where to excavate. They take photos of special features or finds. They dig. They collect artifacts. They conduct laboratory analyses. They keep track of their progress in a field notebook.

All of these tasks are accomplished in the virtual world.

Here’s a two-minute YouTube video overview of the VRchaeology platform:

If you are interested in learning more about the VRchaeology project, here is a list of academic publications and recent press articles. Dig in! (Get it? Dig in?!??)

Nonverbal Communication in Social VR: Recent Academic Research

Gestures (like this peace sign) are an example of nonverbal communication (Photo by Dan Burton on Unsplash)

In the real world, much of our communication is non-verbal: facial expression, gaze, gestures, body movements, even spatial distance (proxemics).

While older, flat-screen virtual worlds such as Second Life are somewhat limited in the forms of nonverbal communication available (most people rely on text or voice chat), modern VR equipment and social VR platforms allow for more options:

  • Hand/finger movement: most VR headsets have hand controllers; the Valve Index has Knuckles hand controllers which allow you to move your fingers as well as your hands;
  • Body movement: the Vive pucks can be attached to your waist, hips, feet, and other parts of your body to track their movement in real time;
  • Eye movements/gaze: for example, the Vive Pro Eye VR headset can track the blinking and movement of the eyes;
  • Facial expression: add-ons such as the Vive Facial Tracker (which attaches to your VR headset) allow you to convey lower face and mouth movements on your avatar.

In addition, many social VR platforms also employ emoticons, which can be pulled up via a menu and displayed over the head of the avatar (e.g. the applause emoji in AltspaceVR), as well as full-body pre-recorded animations (e.g. doing a backflip in VRChat). The use of all these tools, in combination or alone, allows users in social VR to approach the level of non-verbal communication found in real life, provided they have the right equipment and are on a platform which supports that equipment (e.g. NeosVR, where you can combine all these into an avatar which faithfully mimics your facial and body movements).

Two recently published research papers investigate nonverbal communication on social VR platforms, adding to the growing academic literature on social VR. (I am happy to see that social VR is starting to become a topic of academic research!)


Maloney, D., Freeman, G., & Wohn, D. Y. (2020). “Talking without a Voice”: Understanding Non-Verbal Communication in Social Virtual Reality. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2). https://doi.org/10.1145/3415246

Unfortunately, there is no open-access version of this conference proceeding available; you’ll have to obtain a copy from your local academic or public library. This paper, by Divine Maloney and Guo Freeman of Clemson University and Donghee Yvette Wohn of the New Jersey Institute of Technology, consists of two parts:

  • conducting unobtrusive observations of 61 public events held in AltspaceVR over the span of four weeks, to see what non-verbal interactions were being used naturally on the platform; and
  • interviewing 30 users of social VR platforms (of which I was one!), then reading through the transcribed interview data to build a picture of how social VR users used, perceived, and experienced non-verbal communication.

In the first study of the two, the authors noted the following different kinds of nonverbal communication:

  • the use of movement to indicate that someone was paying attention, including nodding behaviours and moving the body or head toward the person or object that was the subject of attention;
  • the use of applause to indicate approval;
  • pointing and patting one’s own chest as a form of directing attention either at a remote object/person or oneself;
  • behaviours such as waving, dancing, and kissing, which were mostly used in social grooming contexts (dancing was also used as entertainment);
  • and finally, the behaviour of trolls: interpersonal provocation and social disruption.

With respect to the thirty interviews conducted, the transcripts were analyzed to answer two research questions:

Using quotes from users’ own accounts, in this section we present our findings as two parts. First, to answer RQ2 (How do people perceive and understand non-verbal communication in social VR?), we identified three common themes that demonstrated how users perceive and understand non-verbal communication in social VR: as more immersive and embodied interactions for body language; as a similar form of communication to offline face-to-face interaction in terms of spatial behavior, hand behavior, and facial expressions; and as a natural way to initiate communication with online strangers.

Second, to answer RQ3 (How, if at all, does non-verbal communication affect interaction outcomes in social VR?), we described the social consequences of interacting through non-verbal communication in social VR for various user groups, including marginalized users such as cis women, trans women, and disabled users. We specially highlighted how non-verbal communication in social VR afforded privacy and social comfort as well as acted as a protection for marginalized users.

Unsurprisingly, the researchers discovered that most participants considered non-verbal communication to be a positive aspect of their social VR experience. Those interviewed highly praised body tracking (either just the hands and head, or in some cases the whole body), as it allowed for a more immersive and embodied form of non-verbal communication than those in traditional, flatscreen virtual worlds.

In addition to supporting more immersive and embodied interactions for body language, participants also considered non-verbal communication in social VR similar to offline face-to-face interaction in terms of spatial behavior, hand behavior, and facial expressions. This familiarity and naturalness greatly contributed to their generally positive perceptions.

Participants also viewed non-verbal communication in social VR as positive and effective because it became a less invasive way to start interactions with online strangers (e.g. waving hello at someone you’ve just met). Nonverbal communication also afforded some users a sense of privacy and social comfort, and in some cases, became an effective protection for them to avoid unwanted interactions, attention, and behaviors (especially with LGBTQ people and women).

The paper made three design recommendations for improved nonverbal communication in social VR platforms: providing support for facial tracking (which is already on its way with products like the Vive Facial Tracker); supporting more accurate hand and finger tracking (again, already underway with the Knuckles controllers for the Valve Index); and enabling alternative modes of control, especially for users with physical disabilities. While most of the study participants highly praised full body tracking in social VR, disabled users in fact complained about this feature and demanded alternatives.

The conference paper concludes:

Recently, commercial social VR applications have emerged as increasingly popular digital social spaces that afford more naturally embodied interaction. How do these novel systems shape the role of non-verbal communication in our online social lives? Our investigation has yielded three key findings. First, offline non-verbal communication modalities are being used in social VR and can simulate experiences that are similar to offline face-to-face interactions. Second, non-verbal communication in social VR is perceived overall positive. Third, non-verbal interactions affect social interaction consequences in social VR by providing privacy control, social comfort, and protection for marginalized users.


Tanenbaum, T. J., Hartoonian, N., & Bryan, J. (2020). “How do I make this thing smile?”: An Inventory of Expressive Nonverbal Communication in Commercial Social Virtual Reality Platforms. Conference on Human Factors in Computing Systems – Proceedings, 1–13. https://doi.org/10.1145/3313831.3376606

This paper is available free to all via Open Access. In this conference proceeding, Theresa Jean Tanenbaum, Nazely Hartoonian, and Jeffrey Bryan of the Transformative Play Lab at the Department of Informatics at the University of California, Irvine, did a study of ten social VR platforms:

  • VRChat
  • AltspaceVR
  • High Fidelity (which shut down in January of 2020)
  • Sansar
  • TheWave VR (this social VR platform shut down in early 2021)
  • vTime XR
  • Rec Room
  • Facebook Spaces (since shut down and replaced by Facebook Horizon)
  • Anyland
  • EmbodyMe

For each platform, investigators answered the following eight questions:

  1. Can the user control facial expressions, and if so, how? (Pre-baked emotes, puppeteering, etc.)
  2. Can the user control body language, and if so, how? (Pre-baked emotes, puppeteering, postures, etc.)
  3. Can the user control proxemic spacing (avatar position), and if so, how? (Teleport, hotspots, real world positioning, etc.) How is collision handled between avatars? (Do they overlap, push each other, etc.)
  4. How is voice communication handled? Is audio spatialized, do lips move, is there a speaker indicator, etc.
  5. How is eye fixation/gaze handled? (Do avatars lock and maintain gaze, is targeting gaze automatic, or intentional, or some sort of hybrid, do eyes blink, saccade, etc.)
  6. Are different emotions/moods/affects supported, and how are they implemented? (Are different affective states possible, and do they combine with other nonverbal communications, etc.)
  6. Can avatars interact physically, and if so, how? (Hugging, holding hands, dancing, etc.) What degree of negotiation/consent is needed for multi-avatar interactions? (One-party, two-party, none at all?)
  8. Are there any other kinds of nonverbal communication possible in the system that have not been described in the answers to the above questions?

The results were a rather complete inventory of nonverbal communication in social VR, with the goal of cataloguing common design elements for avatar expression and identifying gaps and opportunities for future design innovation. Here is the table from the paper (which can be viewed in full size at the top of page 6 of the document).

An inventory of non-verbal communication in ten social VR platforms (source)

VR development is proliferating rapidly, but very few interaction design strategies have become standardized…

We view this inventory as a first step towards establishing a more comprehensive guide to the commercial design space of NVC [non-verbal communication] in VR. As a design tool this has two immediate implications for designers. First, it provides a menu of common (and less common) design strategies, and their variations, from which designers may choose when determining how to approach supporting any given kind of NVC within their platform. Second, it calls attention to a set of important social signals and NVC elements that designers must take into consideration when designing for Social VR. By grounding this data in the most commonly used commercial systems, our framework can help designers anticipate the likelihood that a potential user will be acquainted with a given interaction schema, so that they may provide appropriate guidance and support.

Our dataset also highlights some surprising gaps within the current feature space for expressive NVC. While much social signaling relies upon control of facial expression, we found that the designed affordances for this aspect of NVC to be mired in interaction paradigms inherited from virtual worlds. Facial expression control is often hidden within multiple layers of menus (as in the case of vTime), cannot be isolated from more complex emotes (as in the case of VR Chat), hidden behind opaque controller movement (as in Facebook Spaces), or unsupported entirely. In particular, we found that with the exception of dynamic lip-sync, there were no systems with a design that would allow a user to directly control the face of their avatar through a range of emotions while simultaneously engaging in other forms of socialization.

The authors go on to say that they observed no capacity in any of the systems to recombine and blend the various forms of nonverbal communication, such as can be done in the real world:

As we saw in our consideration of the foundations of NVC in general, and Laban Movement Analysis in particular, much NVC operates by layering together multiple social signals that modify, contextualize, and reinforce other social signals. Consider, for instance, that it is possible to smile regretfully, laugh maliciously, and weep with joy. People are capable of using their posture to indicate excitement, hesitation, protectiveness, and many other emotional states, all while performing more overt discourse acts that inherit meaning from the gestalt of the communicative context.

The conference paper concludes:

As is evident in the scholarly work around social VR, improving the design space for NVC in VR has the potential to facilitate deeper social connection between people in virtual reality. We also argue that certain kinds of participatory entertainment such as virtual performance will benefit greatly from a more robust interaction design space for emotional expression through digital avatars. We’ve identified both common and obscure design strategies for NVC in VR, including design conventions for movement and proxemic spacing, facial control, gesture and posture, and several strategies unique to avatar mediated socialization online. Drawing on previous literature around NVC in virtual worlds, we have identified some significant challenges and opportunities for designers and scholars concerned with the future of socialization in virtual environments. Specifically, we identify facial expression control, and unconscious body posture as two critical social signals that are currently poorly supported within today’s commercial social VR platforms.

It is interesting to note that both papers cite the need to properly convey facial expressions as key to expanding the ability of avatars in social VR to convey non-verbal communication!