Steven King is an associate professor of multimedia journalism and emerging technologies at the University of North Carolina at Chapel Hill, holding a joint appointment with the UNC Hussman School of Journalism and Media and the Kenan-Flagler Business School. In his work, King combines computer science concepts, human-centered design, and storytelling to create new ways to present information through emerging technologies such as virtual and augmented reality, artificial intelligence, and other interactive media forms like interactive data-driven graphics.
Ask a UNC student what their remote classroom experience has consisted of, and they will likely tell you about video lectures over Zoom. But students in Steven King’s class are experiencing remote learning differently: through virtual reality.
“I’m always trying to figure out a better way to teach and communicate,” King, a professor at the Hussman School of Journalism and Media, said. “I know virtual reality is an immersive experience.”
King built a virtual 3D version of his classroom, which allows his students to walk around in the classroom and break out into groups.
He said he has tested many different platforms for hosting 3D classrooms; the first experience was through Mozilla Hubs. But King said his class will likely stick with AltspaceVR because of how pleased the students have been with it.
“When you’re faced with a crisis, these are times to step up and figure things out and make new discoveries,” King said. “We don’t need to limit ourselves to the tools we have. We need to develop new tools to move us forward.”
King sent Oculus Go virtual reality headsets to his 28 students to use at home. King and the students built their own avatars, and they all attend class together in a virtual world as robots, panda bears, ducks, and other characters. King chose the superhero Iron Man as his avatar.
The emerging technologies class was tailor-made for this type of experiment, King said. Students had become familiar with the technology throughout the semester while learning about artificial intelligence and augmented reality.
To help the students prepare for class, I gave them an assignment to be completed before the first class hosted in AltspaceVR. I asked every student to sign up for an account, go through the tutorial in their home space, and visit the InfoZone, a tutorial in the form of a social fair about attending events. The final step of the assignment was to send me a friend request. I also recorded a video on how to enter the room/event…
This assignment was critical to the success of the next class. I needed the students to work through any technical issues on their own and to feel confident in another social VR environment. Once I got a friend request, I added them to the group so they could see the private event…
Most students arrived early and were ready to go. I let them spend several minutes interacting and exploring the space. There was lots of personal chatting, like I would see before an in-person class, which has been absent in my Zoom class.
In the real world, much of our communication is non-verbal: facial expression, gaze, gestures, body movements, even spatial distance (proxemics).
While older, flat-screen virtual worlds such as Second Life are somewhat limited in the forms of nonverbal communication available (most people rely on text or voice chat), modern VR equipment and social VR platforms allow for more options:
Hand/finger movement: most VR headsets have hand controllers; the Valve Index has Knuckles hand controllers which allow you to move your fingers as well as your hands;
Body movement: the Vive pucks can be attached to your waist, hips, feet, and other parts of your body to track their movement in real time;
Eye movements/gaze: for example, the Vive Pro Eye VR headset can track the blinking and movement of the eyes;
Facial expression: add-ons such as the Vive Facial Tracker (which attaches to your VR headset) allow you to convey lower face and mouth movements on your avatar.
In addition, many social VR platforms also employ emoticons, which can be pulled up via a menu and displayed over the head of the avatar (e.g. the applause emoji in AltspaceVR), as well as full-body pre-recorded animations (e.g. doing a backflip in VRChat). The use of all these tools, in combination or alone, allows users in social VR to approach the level of non-verbal communication found in real life, provided they have the right equipment and are on a platform which supports that equipment (e.g. NeosVR, where you can combine all these into an avatar which faithfully mimics your facial and body movements).
Two recently published research papers investigate nonverbal communication on social VR platforms, adding to the growing academic literature on social VR. (I am happy to see that social VR is starting to become a topic of academic research!)
Maloney, D., Freeman, G., & Wohn, D. Y. (2020). “Talking without a Voice”: Understanding Non-Verbal Communication in Social Virtual Reality. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2). https://doi.org/10.1145/3415246
Unfortunately, there is no open-access version of this paper available; you’ll have to obtain a copy from your local academic or public library. The paper, by Divine Maloney and Guo Freeman of Clemson University and Donghee Yvette Wohn of the New Jersey Institute of Technology, consists of two parts:
conducting unobtrusive observations of 61 public events held in AltspaceVR over the span of four weeks, to see what non-verbal interactions were being used naturally on the platform; and
interviewing 30 users of social VR platforms (of which I was one!), with the paper’s authors reading through the transcribed interview data to build a picture of how social VR users used, perceived, and experienced non-verbal communication for further analysis.
In the first study of the two, the authors noted the following different kinds of nonverbal communication:
the use of movement to indicate that someone was paying attention. These included nodding behaviors and moving the body or head toward the person or object that was subject of attention;
the use of applause to indicate approval;
pointing and patting one’s own chest as a form of directing attention either at a remote object/person or oneself;
behaviors such as waving, dancing, and kissing, which were mostly used in social grooming contexts (dancing was also used as entertainment);
and finally, the behavior of trolls: interpersonal provocation and social disruption.
The thirty interviews conducted were analyzed as follows to answer two research questions:
Using quotes from users’ own accounts, in this section we present our findings as two parts. First, to answer RQ2 (How do people perceive and understand non-verbal communication in social VR?), we identified three common themes that demonstrated how users perceive and understand non-verbal communication in social VR: as more immersive and embodied interactions for body language; as a similar form of communication to offline face-to-face interaction in terms of spatial behavior, hand behavior, and facial expressions; and as a natural way to initiate communication with online strangers.
Second, to answer RQ3 (How, if at all, does non-verbal communication affect interaction outcomes in social VR?), we described the social consequences of interacting through non-verbal communication in social VR for various user groups, including marginalized users such as cis women, trans women, and disabled users. We specially highlighted how non-verbal communication in social VR afforded privacy and social comfort as well as acted as a protection for marginalized users.
Unsurprisingly, the researchers discovered that most participants considered non-verbal communication to be a positive aspect of their social VR experience. Those surveyed highly praised body tracking (either just the hands and head, or in some cases the whole body), as it allowed for a more immersive and embodied form of non-verbal communication than in traditional, flat-screen virtual worlds.
In addition to supporting more immersive and embodied interactions for body language, participants also considered non-verbal communication in social VR similar to offline face-to-face interaction in terms of spatial behavior, hand behavior, and facial expressions. This familiarity and naturalness greatly contributed to their generally positive perceptions.
Participants also viewed non-verbal communication in social VR as positive and effective because it offered a less invasive way to start interactions with online strangers (e.g. waving hello at someone you’ve just met). Nonverbal communication also afforded some users a sense of privacy and social comfort, and in some cases became an effective protection against unwanted interactions, attention, and behaviors (especially for LGBTQ people and women).
The paper made three design recommendations for improved nonverbal communication in social VR platforms: providing support for facial tracking (which is already on its way with products like the Vive Facial Tracker); supporting more accurate hand and finger tracking (again, already underway with the Knuckles controllers for the Valve Index); and enabling alternative modes of control, especially for users with physical disabilities. While most of the study participants highly praised full body tracking in social VR, disabled users in fact complained about this feature and demanded alternatives.
The conference paper concludes:
Recently, commercial social VR applications have emerged as increasingly popular digital social spaces that afford more naturally embodied interaction. How do these novel systems shape the role of non-verbal communication in our online social lives? Our investigation has yielded three key findings. First, offline non-verbal communication modalities are being used in social VR and can simulate experiences that are similar to offline face-to-face interactions. Second, non-verbal communication in social VR is perceived overall positive. Third, non-verbal interactions affect social interaction consequences in social VR by providing privacy control, social comfort, and protection for marginalized users.
Tanenbaum, T. J., Hartoonian, N., & Bryan, J. (2020). “How do I make this thing smile?”: An Inventory of Expressive Nonverbal Communication in Commercial Social Virtual Reality Platforms. Conference on Human Factors in Computing Systems – Proceedings, 1–13. https://doi.org/10.1145/3313831.3376606
This paper is available free to all via Open Access. In this conference proceeding, Theresa Jean Tanenbaum, Nazely Hartoonian, and Jeffrey Bryan of the Transformative Play Lab at the Department of Informatics at the University of California, Irvine, did a study of ten social VR platforms:
High Fidelity (which shut down in January of 2020)
TheWave VR (this social VR platform shut down in early 2021)
Facebook Spaces (since shut down and replaced by Facebook Horizon)
For each platform, investigators answered the following eight questions:
Can the user control facial expressions, and if so, how? (Pre-baked emotes, puppeteering, etc.)
Can the user control body language, and if so, how? (Pre-baked emotes, puppeteering, postures, etc.)
Can the user control proxemic spacing (avatar position), and if so, how? (Teleport, hotspots, real world positioning, etc.) How is collision handled between avatars? (Do they overlap, push each other, etc.)
How is voice communication handled? Is audio spatialized, do lips move, is there a speaker indicator, etc.
How is eye fixation/gaze handled? (Do avatars lock and maintain gaze, is targeting gaze automatic, or intentional, or some sort of hybrid, do eyes blink, saccade, etc.)
Are different emotions/moods/affects supported, and how are they implemented? (Are different affective states possible, and do they combine with other nonverbal communications, etc.)
Can avatars interact physically, and if so, how? (Hugging, holding hands, dancing, etc.) What degree of negotiation/consent is needed for multi-avatar interactions? (One-party, two-party, none at all?)
Are there any other kinds of nonverbal communication possible in the system that have not been described in the answers to the above questions?
VR development is proliferating rapidly, but very few interaction design strategies have become standardized…
We view this inventory as a first step towards establishing a more comprehensive guide to the commercial design space of NVC [non-verbal communication] in VR. As a design tool this has two immediate implications for designers. First, it provides a menu of common (and less common) design strategies, and their variations, from which designers may choose when determining how to approach supporting any given kind of NVC within their platform. Second, it calls attention to a set of important social signals and NVC elements that designers must take into consideration when designing for Social VR. By grounding this data in the most commonly used commercial systems, our framework can help designers anticipate the likelihood that a potential user will be acquainted with a given interaction schema, so that they may provide appropriate guidance and support.
Our dataset also highlights some surprising gaps within the current feature space for expressive NVC. While much social signaling relies upon control of facial expression, we found that the designed affordances for this aspect of NVC to be mired in interaction paradigms inherited from virtual worlds. Facial expression control is often hidden within multiple layers of menus (as in the case of vTime), cannot be isolated from more complex emotes (as in the case of VR Chat), hidden behind opaque controller movement (as in Facebook Spaces), or unsupported entirely. In particular, we found that with the exception of dynamic lip-sync, there were no systems with a design that would allow a user to directly control the face of their avatar through a range of emotions while simultaneously engaging in other forms of socialization.
The authors go on to say that they observed no capacity in any of the systems to recombine and blend the various forms of nonverbal communication, such as can be done in the real world:
As we saw in our consideration of the foundations of NVC in general, and Laban Movement Analysis in particular, much NVC operates by layering together multiple social signals that modify, contextualize, and reinforce other social signals. Consider, for instance, that it is possible to smile regretfully, laugh maliciously, and weep with joy. People are capable of using their posture to indicate excitement, hesitation, protectiveness, and many other emotional states, all while performing more overt discourse acts that inherit meaning from the gestalt of the communicative context.
The conference paper concludes:
As is evident in the scholarly work around social VR, improving the design space for NVC in VR has the potential to facilitate deeper social connection between people in virtual reality. We also argue that certain kinds of participatory entertainment such as virtual performance will benefit greatly from a more robust interaction design space for emotional expression through digital avatars. We’ve identified both common and obscure design strategies for NVC in VR, including design conventions for movement and proxemic spacing, facial control, gesture and posture, and several strategies unique to avatar mediated socialization online. Drawing on previous literature around NVC in virtual worlds, we have identified some significant challenges and opportunities for designers and scholars concerned with the future of socialization in virtual environments. Specifically, we identify facial expression control, and unconscious body posture as two critical social signals that are currently poorly supported within today’s commercial social VR platforms.
It is interesting to note that both papers cite the need to properly convey facial expressions as key to expanding the ability of avatars in social VR to convey non-verbal communication!
The inspiration for one of the early pilots came from a session Rob attended on English as a second language. People were learning to order food inside a virtual coffee shop. He knew this approach would be a perfect fit for Georgian’s Indigenous language program. Michele O’Brien, program coordinator for all Indigenous programming, was quick to see the potential benefits.
The first module of language lessons in the program is based around the home. Using AltspaceVR, Rob built a house and furnished it and put information buttons on all the items in the house. Faculty member Angeline King and Elder Ernestine Baldwin translated a word list for everything so that when a student clicks on the button, the Anishnaabemowin word pops up.
The program has proven so successful that Jonathon Richter, CEO/President of the Immersive Learning Research Network (iLRN)…invited Michele and Angeline, along with other Indigenous groups, to attend a panel during the iLRN World Conference 2021 for a session on the iLRN House of Language, Culture, and Heritage – Teaching Native Language and Culture Using XR.
In addition to AltspaceVR, the program has used the educational social VR platform ENGAGE. From a press release:
There is also a second house, built using the ENGAGE software, which includes voiceover translations with either King, Baldwin, or another faculty member, Mitchell Ackerman, providing the pronunciation.
Georgian College is also making the virtual reality assets they’re building for language learning open source, so that they can be used by other Indigenous programs across Canada and around the world (please contact Rob Theriault for more information).
If you’re in the mood for music, you’re in luck! There’s an event taking place tomorrow, Sunday, June 13th, at 10:00 a.m. Pacific/1:00 p.m. Eastern/5:00 p.m. GMT on the social VR platform AltspaceVR, run by Global Music Festivals (which had previously held events in Sansar).
AN INNOVATIVE SHOW-PARTY WITH DJ’S AND FLYING CAM-DRONES. A LIVE VIDEO-ART IN WHICH YOU ARE THE SUPERSTAR!
Welcome to Global Music Festivals’ newest event in AltspaceVR, with Mad Paddy & DJ Celeste: a music fest filmed by a professional camera crew.
Our party will take place in a surprising version of the famous LOVEHOTEL, created in a collaboration by SHUSHU & MATOcolori – with two dance floors, secret romantic rooms, online cameras with large video screens – you will feel like [you are] in a live video-clip!
Global Music Festivals runs an international festival within VR and IRL featuring DJs, with a dedicated producer and camera crew in VR. Our best DJs will be playing EDM, TRANCE, GOA, TECHNO, PSY, HOUSE, and NU JAZZ.
DJ Mad Paddy has been in the music industry since he was 11 and has performed at many festivals; he is also a qualified sound engineer and lighting specialist. His love for music and technology brought him into the world of VR, where he has performed at many VR events.
DJ Celeste’s unshakable love for music and performance has led her to devote nearly two decades of disciplined study and practice to the art and science of music production, DJing, and sound engineering in her continuing quest to take her listeners on an irresistibly exciting and inspirational, next-level … She is also one of the new generation of DJs at the forefront of virtual DJing.
According to the producer of the event, Carlos Austin:
We have the two DJs and four cameramen in-world, plus of course the public. Beautiful world created by Shushu and Marcello… [It] will be streamed to the Global Music Festivals Twitch channel. Would you write a small post on your blog about it? We will start with AltspaceVR, then tour to other metaverses this coming year.