The New Ready Player Me Hub: The Ability to Import Your Avatar to Any Supported Platform!

An example of the avatars you can create using the new Ready Player Me Hub (source)

Wolf3D’s Ready Player Me, the customizable avatar system I have written about before here, here, and here, has issued a brand new update! In the email I received yesterday:

The Ready Player Me Hub lets you create one or multiple avatars and use them in all apps that support Ready Player Me. With one click, you can import your existing avatar or create a new one and add it to apps like VRChat, LIV, and Somnium Space.

According to the official blogpost announcing the update (which I recommend you read in full):

Ever since we launched Ready Player Me back in May last year, our goal was to create a cross-game avatar platform for the metaverse – one that gives you a consistent digital identity everywhere you and your avatar go. Think of it as a passport that gives you access to thousands of virtual worlds. Today, we are making the metaverse passport real with the launch of the Ready Player Me Hub.

When you sign in to the Ready Player Me Hub, you can see all your avatars and connect them to your favorite applications in one click. To import your avatar into a partner app that uses Ready Player Me, all you need to do is sign in with your account.

VentureBeat reports:

Wolf3D’s platform allows users to travel between video games, virtual reality experiences, and other apps using a single virtual identity, said cofounder Timmu Tõke in an interview with GamesBeat.

“We’re trying to build a cross-game service to enable a lot of virtual worlds to exist,” Tõke said. “We see more people spending more and more time in virtual worlds. The metaverse is kind of happening around us. But most of it isn’t happening in one world or one app. It’s a network of many different worlds that people visit for work and play and collaboration. And doesn’t really make sense for the end user to create a new avatar identity for each of those experiences. It makes sense to have one portable entity that travels with you across many different games and apps and experiences.”

In fact, 300 games, apps, and social VR/virtual worlds now support the Ready Player Me avatar system, including the following platforms (all links below redirect you to blogposts I have previously written about each platform, which might be somewhat out-of-date, as I am covering so many different platforms on this blog!):

You can peruse the complete list of Ready Player Me partners here.

If you are interested in trying out the Ready Player Me Hub to create an avatar (either from a selfie or from scratch), you can access it here. You have a choice of making either a full-body avatar or a head-and-shoulders avatar:

The Ready Player Me Hub starting screen

When your avatar is ready, simply click Next in the top right corner of the website. You will be redirected to the new Hub interface. To save your avatar, click Claim now and sign in with your email address. You will receive a one-time login code, which you then enter in the Hub. You can use the Hub to connect your avatars to available apps in the Discover Apps tab. To import your avatar into a new app, all you need to do is click Connect avatar (some applications may require a few extra steps).

We are getting ever closer to the dream of having a consistent avatar which you can use in multiple social VR platforms! Be sure to give the Ready Player Me Hub a try.

Nonverbal Communication in Social VR: Recent Academic Research

Gestures (like this peace sign) are an example of nonverbal communication (Photo by Dan Burton on Unsplash)

In the real world, much of our communication is non-verbal: facial expression, gaze, gestures, body movements, even spatial distance (proxemics).

While older, flat-screen virtual worlds such as Second Life are somewhat limited in the forms of nonverbal communication available (most people rely on text or voice chat), modern VR equipment and social VR platforms allow for more options:

  • Hand/finger movement: most VR headsets have hand controllers; the Valve Index has Knuckles hand controllers which allow you to move your fingers as well as your hands;
  • Body movement: the Vive pucks can be attached to your waist, hips, feet, and other parts of your body to track their movement in real time;
  • Eye movements/gaze: for example, the Vive Pro Eye VR headset can track the blinking and movement of the eyes;
  • Facial expression: add-ons such as the Vive Facial Tracker (which attaches to your VR headset) allow you to convey lower face and mouth movements on your avatar.

In addition, many social VR platforms also employ emoticons, which can be pulled up via a menu and displayed over the head of the avatar (e.g. the applause emoji in AltspaceVR), as well as full-body pre-recorded animations (e.g. doing a backflip in VRChat). The use of all these tools, in combination or alone, allows users in social VR to approach the level of non-verbal communication found in real life, provided they have the right equipment and are on a platform which supports that equipment (e.g. NeosVR, where you can combine all these into an avatar which faithfully mimics your facial and body movements).
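To make the combination of channels more concrete, here is a minimal TypeScript sketch of what a per-frame avatar update might look like on such a platform. Every type and function name below is hypothetical, invented purely for illustration; this is not the API of NeosVR or any other actual platform.

```typescript
// Hypothetical sketch: merging several tracked nonverbal channels into
// a single avatar update each frame. All names are invented for
// illustration and do not belong to any real platform's API.

type Vec3 = [number, number, number];

interface NonverbalState {
  fingerCurl?: { left: number[]; right: number[] }; // Knuckles-style per-finger tracking
  trackerPositions: Map<string, Vec3>;              // Vive-puck-style body trackers, e.g. "waist"
  gazeDirection?: Vec3;                             // eye tracking, where the headset supports it
  blink?: number;                                   // 0 = eyes open, 1 = eyes closed
  faceBlendshapes: Map<string, number>;             // e.g. "jawOpen" -> 0.4 from a face tracker
  emote?: string;                                   // menu-triggered, e.g. "applause"
}

interface Avatar {
  setFingers(left: number[], right: number[]): void;
  setBoneTarget(bone: string, position: Vec3): void;
  lookToward(direction: Vec3): void;
  setBlendshape(name: string, weight: number): void;
  playEmote(name: string): void;
}

// Channels degrade gracefully: a user with only a headset and hand
// controllers still gets head and hand movement, while eye and face
// data are applied only when the corresponding hardware is present.
function applyNonverbalState(avatar: Avatar, s: NonverbalState): void {
  if (s.fingerCurl) avatar.setFingers(s.fingerCurl.left, s.fingerCurl.right);
  s.trackerPositions.forEach((pos, bone) => avatar.setBoneTarget(bone, pos));
  if (s.gazeDirection) avatar.lookToward(s.gazeDirection);
  if (s.blink !== undefined) avatar.setBlendshape("eyesClosed", s.blink);
  s.faceBlendshapes.forEach((weight, name) => avatar.setBlendshape(name, weight));
  if (s.emote) avatar.playEmote(s.emote); // pre-baked animation layered on top
}
```

The point of the sketch is graceful degradation: the more tracking hardware you own, the more of your real-world body language makes it onto your avatar, which is exactly the equipment dependency described above.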

Two recently published research papers investigate nonverbal communication on social VR platforms, adding to the growing academic literature on social VR. (I am happy to see that social VR is starting to become a topic of academic research!)


Maloney, D., Freeman, G., & Wohn, D. Y. (2020). “Talking without a Voice”: Understanding Non-Verbal Communication in Social Virtual Reality. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2). https://doi.org/10.1145/3415246

Unfortunately, there is no open-access version of this paper available; you’ll have to obtain a copy from your local academic or public library. The paper, by Divine Maloney and Guo Freeman of Clemson University and Donghee Yvette Wohn of the New Jersey Institute of Technology, consists of two parts:

  • conducting unobtrusive observations of 61 public events held in AltspaceVR over the span of four weeks, to see what non-verbal interactions were being used naturally on the platform; and
  • interviewing 30 users of social VR platforms (of which I was one!), where the paper’s authors read through the transcribed interview data to build a picture of how social VR users used, perceived, and experienced non-verbal communication.

In the first study of the two, the authors noted the following different kinds of nonverbal communication:

  • the use of movement to indicate that someone was paying attention. These included nodding behaviors and moving the body or head toward the person or object that was the subject of attention;
  • the use of applause to indicate approval;
  • pointing and patting one’s own chest as a form of directing attention either at a remote object/person or oneself;
  • behaviours such as waving, dancing, and kissing, which were mostly used in social grooming contexts (dancing was also used as entertainment);
  • and finally, the behaviour of trolls: interpersonal provocation and social disruptions.

With respect to the thirty interviews conducted, the responses were analyzed as follows to answer two research questions:

Using quotes from users’ own accounts, in this section we present our findings as two parts. First, to answer RQ2 (How do people perceive and understand non-verbal communication in social VR?), we identified three common themes that demonstrated how users perceive and understand non-verbal communication in social VR: as more immersive and embodied interactions for body language; as a similar form of communication to offline face-to-face interaction in terms of spatial behavior, hand behavior, and facial expressions; and as a natural way to initiate communication with online strangers.

Second, to answer RQ3 (How, if at all, does non-verbal communication affect interaction outcomes in social VR?), we described the social consequences of interacting through non-verbal communication in social VR for various user groups, including marginalized users such as cis women, trans women, and disabled users. We specially highlighted how non-verbal communication in social VR afforded privacy and social comfort as well as acted as a protection for marginalized users.

Unsurprisingly, the researchers discovered that most participants considered non-verbal communication to be a positive aspect of their social VR experience. Those surveyed highly praised body tracking (either just the hands and head, or in some cases the whole body), as it allowed for a more immersive and embodied form of non-verbal communication than is possible in traditional, flat-screen virtual worlds.

In addition to supporting more immersive and embodied interactions for body language, participants also considered non-verbal communication in social VR similar to offline face-to-face interaction in terms of spatial behavior, hand behavior, and facial expressions. This familiarity and naturalness greatly contributed to their generally positive perceptions.

Participants also viewed non-verbal communication in social VR as positive and effective because it offered a less invasive way to start interactions with online strangers (e.g. waving hello at someone you’ve just met). Nonverbal communication also afforded some users a sense of privacy and social comfort, and in some cases became an effective protection against unwanted interactions, attention, and behaviors (especially for LGBTQ people and women).

The paper made three design recommendations for improved nonverbal communication in social VR platforms: providing support for facial tracking (which is already on its way with products like the Vive Facial Tracker); supporting more accurate hand and finger tracking (again, already underway with the Knuckles controllers for the Valve Index); and enabling alternative modes of control, especially for users with physical disabilities. While most of the study participants highly praised full body tracking in social VR, disabled users in fact complained about this feature and demanded alternatives.

The conference paper concludes:

Recently, commercial social VR applications have emerged as increasingly popular digital social spaces that afford more naturally embodied interaction. How do these novel systems shape the role of non-verbal communication in our online social lives? Our investigation has yielded three key findings. First, offline non-verbal communication modalities are being used in social VR and can simulate experiences that are similar to offline face-to-face interactions. Second, non-verbal communication in social VR is perceived overall positive. Third, non-verbal interactions affect social interaction consequences in social VR by providing privacy control, social comfort, and protection for marginalized users.


Tanenbaum, T. J., Hartoonian, N., & Bryan, J. (2020). “How do I make this thing smile?”: An Inventory of Expressive Nonverbal Communication in Commercial Social Virtual Reality Platforms. Conference on Human Factors in Computing Systems – Proceedings, 1–13. https://doi.org/10.1145/3313831.3376606

This paper is available free to all via Open Access. In this conference proceeding, Theresa Jean Tanenbaum, Nazely Hartoonian, and Jeffrey Bryan of the Transformative Play Lab at the Department of Informatics at the University of California, Irvine, did a study of ten social VR platforms:

  • VRChat
  • AltspaceVR
  • High Fidelity (which shut down in January of 2020)
  • Sansar
  • TheWave VR (this social VR platform shut down in early 2021)
  • vTime XR
  • Rec Room
  • Facebook Spaces (since shut down and replaced by Facebook Horizon)
  • Anyland
  • EmbodyMe

For each platform, investigators answered the following eight questions:

  1. Can the user control facial expressions, and if so, how? (Pre-baked emotes, puppeteering, etc.)
  2. Can the user control body language, and if so, how? (Pre-baked emotes, puppeteering, postures, etc.)
  3. Can the user control proxemic spacing (avatar position), and if so, how? (Teleport, hotspots, real world positioning, etc.) How is collision handled between avatars? (Do they overlap, push each other, etc.)
  4. How is voice communication handled? (Is audio spatialized, do lips move, is there a speaker indicator, etc.)
  5. How is eye fixation/gaze handled? (Do avatars lock and maintain gaze, is targeting gaze automatic, or intentional, or some sort of hybrid, do eyes blink, saccade, etc.)
  6. Are different emotions/moods/affects supported, and how are they implemented? (Are different affective states possible, and do they combine with other nonverbal communications, etc.)
  7. Can avatars interact physically, and if so, how? (Hugging, holding hands, dancing, etc.) What degree of negotiation/consent is needed for multi-avatar interactions? (One-party, two-party, none at all?)
  8. Are there any other kinds of nonverbal communication possible in the system that have not been described in the answers to the above questions?

The result was a rather complete inventory of nonverbal communication in social VR, with the goal of cataloguing common design elements for avatar expression and identifying gaps and opportunities for future design innovation. Here is the table from the paper (which can be viewed in full size at the top of page 6 of the document).

An inventory of non-verbal communication in ten social VR platforms (source)

VR development is proliferating rapidly, but very few interaction design strategies have become standardized…

We view this inventory as a first step towards establishing a more comprehensive guide to the commercial design space of NVC [non-verbal communication] in VR. As a design tool this has two immediate implications for designers. First, it provides a menu of common (and less common) design strategies, and their variations, from which designers may choose when determining how to approach supporting any given kind of NVC within their platform. Second, it calls attention to a set of important social signals and NVC elements that designers must take into consideration when designing for Social VR. By grounding this data in the most commonly used commercial systems, our framework can help designers anticipate the likelihood that a potential user will be acquainted with a given interaction schema, so that they may provide appropriate guidance and support.

Our dataset also highlights some surprising gaps within the current feature space for expressive NVC. While much social signaling relies upon control of facial expression, we found the designed affordances for this aspect of NVC to be mired in interaction paradigms inherited from virtual worlds. Facial expression control is often hidden within multiple layers of menus (as in the case of vTime), cannot be isolated from more complex emotes (as in the case of VR Chat), hidden behind opaque controller movement (as in Facebook Spaces), or unsupported entirely. In particular, we found that with the exception of dynamic lip-sync, there were no systems with a design that would allow a user to directly control the face of their avatar through a range of emotions while simultaneously engaging in other forms of socialization.

The authors go on to say that they observed no capacity in any of the systems to recombine and blend the various forms of nonverbal communication, as we can in the real world:

As we saw in our consideration of the foundations of NVC in general, and Laban Movement Analysis in particular, much NVC operates by layering together multiple social signals that modify, contextualize, and reinforce other social signals. Consider, for instance, that it is possible to smile regretfully, laugh maliciously, and weep with joy. People are capable of using their posture to indicate excitement, hesitation, protectiveness, and many other emotional states, all while performing more overt discourse acts that inherit meaning from the gestalt of the communicative context.
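To make the authors’ point concrete, here is a small, purely illustrative TypeScript sketch (my own, not from the paper) of what such layering could look like if a platform treated facial expressions as weighted, combinable affect layers rather than single pre-baked emotes:

```typescript
// Illustrative only: blending several simultaneous affect layers into
// one set of facial blendshape weights. The shape names follow common
// blendshape conventions but are not taken from any specific platform.

type BlendshapeWeights = Map<string, number>;

// Sum the weights contributed by each active layer, clamping each
// blendshape to the valid [0, 1] range.
function layerAffects(layers: BlendshapeWeights[]): BlendshapeWeights {
  const combined: BlendshapeWeights = new Map();
  for (const layer of layers) {
    layer.forEach((weight, shape) => {
      const sum = (combined.get(shape) ?? 0) + weight;
      combined.set(shape, Math.max(0, Math.min(1, sum)));
    });
  }
  return combined;
}

// "Smiling regretfully": a smile layer combined with a brow layer that
// would normally belong to a different canned expression.
const smile: BlendshapeWeights = new Map([["mouthSmile", 0.8]]);
const regret: BlendshapeWeights = new Map([
  ["browInnerUp", 0.6],
  ["mouthSmile", -0.2], // soften the smile rather than replace it
]);
const face = layerAffects([smile, regret]);
```

Because the layers combine instead of replacing one another, a user could hold that regretful smile while simultaneously gesturing or speaking, which is precisely the kind of recombination the authors found missing from all ten platforms.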

The conference paper concludes:

As is evident in the scholarly work around social VR, improving the design space for NVC in VR has the potential to facilitate deeper social connection between people in virtual reality. We also argue that certain kinds of participatory entertainment such as virtual performance will benefit greatly from a more robust interaction design space for emotional expression through digital avatars. We’ve identified both common and obscure design strategies for NVC in VR, including design conventions for movement and proxemic spacing, facial control, gesture and posture, and several strategies unique to avatar mediated socialization online. Drawing on previous literature around NVC in virtual worlds, we have identified some significant challenges and opportunities for designers and scholars concerned with the future of socialization in virtual environments. Specifically, we identify facial expression control, and unconscious body posture as two critical social signals that are currently poorly supported within today’s commercial social VR platforms.

It is interesting to note that both papers cite the need to properly convey facial expressions as key to expanding the ability of avatars in social VR to convey non-verbal communication!

UPDATED: Second Life Founder and High Fidelity CEO Philip Rosedale Will Do an AMA (Ask Me Anything) on Reddit on February 23rd, 2021

Philip shared the following photo when posting about his Reddit AMA on Twitter (source)

Mark your calendars! Philip tweeted late tonight:

Join me for a Reddit AMA on Feb. 23rd from 11:00 a.m. – 2:00 p.m. Pacific Time. Ask me about Spatial Audio, VR, virtual worlds and virtual economies, avatars, and … anything.

So if you have any burning questions you’ve wanted to ask Philip, this is your perfect opportunity! When the AMA starts tomorrow, I will link to it here.

See you there!

UPDATE Feb. 23rd, 2021, 3:51 p.m.: Please accept my apologies for not linking to this AMA sooner; I was so tired that I lay down for a nap and ended up sleeping through the entire event!

Here’s the link to the Ask Me Anything posted to the r/IAmA subReddit, which includes the following introduction, plus the above photo as proof that he is, indeed, THE Philip Rosedale!

Hi Reddit!

I am the founder of the virtual civilization Second Life, populated by one million active users, and am now CEO and co-founder of High Fidelity — which has just released a real-time spatial audio API for apps, games, and websites. If you want to check it out, I’d love to hear what you think: highfidelity.com/api

High Fidelity’s Spatial Audio was initially built for our VR platform — we have been obsessive about audio quality from day one, spending our resources lowering latency and nailing spatialization.

Ask me about immersive spatial audio, VR, virtual worlds and spaces, avatars, and … anything.

(With me today I have /u/MaiaHighFidelity and /u/Valefox to answer technical questions about the API, too.)
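If you are wondering what audio spatialization on the web actually involves, here is a generic sketch using the browser’s standard Web Audio API. To be clear, this is not High Fidelity’s Spatial Audio API; it simply illustrates the underlying concept of placing a voice at a position in 3D space:

```typescript
// Generic Web Audio API illustration of a spatialized voice. This is
// NOT High Fidelity's Spatial Audio API; it only shows the concept.

const ctx = new AudioContext();

// Place the sound source in 3D space relative to the listener.
const panner = new PannerNode(ctx, {
  panningModel: "HRTF",     // head-related transfer function for convincing 3D placement
  distanceModel: "inverse", // volume falls off naturally with distance
  positionX: 2,             // two metres to the listener's right...
  positionY: 0,
  positionZ: -1,            // ...and one metre in front
});

// Route a remote participant's voice stream through the panner.
async function playSpatializedVoice(stream: MediaStream): Promise<void> {
  const source = ctx.createMediaStreamSource(stream);
  source.connect(panner).connect(ctx.destination);
  await ctx.resume(); // browsers require a user gesture before audio plays
}
```

As Philip notes above, the hard engineering lies not in this per-source positioning but in doing it in real time with low latency, which is where he says High Fidelity has spent its resources.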

This AMA has also been reposted to the r/secondlife, r/HighFidelity, r/WebRTC, and r/GameAudio subReddits.

UPDATE 4:26 p.m.: I have been informed that the AMA is still going on, as of this writing!

UPDATED: Tivoli Cloud VR Has Integrated Wolf3D’s Ready Player Me Avatar Creator: Now You Can Create a Tivoli Cloud Avatar from a Selfie!

Today, on a bitterly cold, -20°C winter day up here in the frosty Canadian prairie hinterlands (which felt more like -30°C when you factored in the wind chill from a strong wind), I was able to spend a convivial hour sitting around a campfire on a warm, tropical desert island, chatting with Caitlyn Meeks of Tivoli Cloud VR and a few other avatars (including a personable, OpenAI-controlled toaster named Toastgenie Craftsby, who every so often would spit out some toast, or even a delicious rain of hot waffles, during our delightful, wide-ranging conversation!).

Tivoli Cloud VR’s Desert Island (picture by Caitlyn Meeks)

Tivoli Cloud VR, a successor platform to the now-shuttered original High Fidelity social VR platform created by Philip Rosedale’s company of the same name (and based on HiFi’s open-source software code), has had a few new developments since the last time I visited, back in September! Among them is the full integration of Wolf3D’s Ready Player Me avatar creation system, as demonstrated in this two-minute YouTube video by Tivoli Cloud ambassador and well-known social VR personality XaosPrincess:

This is the same Wolf3D system which I first reported on back in September 2019, when High Fidelity issued an app called Virtual You, where you could take a selfie on your mobile device and then use that picture to create a HiFi avatar. As a matter of fact, I still have the avatar I created using Virtual You saved to my hard drive, and I hope to upload and resurrect him as one of my avatars on Tivoli Cloud VR! In the case of Tivoli Cloud, the app is fully integrated into the client software; there’s no need for a separate app!

Note that Wolf3D’s Ready Player Me avatar creation system is also used by Mozilla Hubs, although the Mozilla Hubs avatars are only head-and-torso, as opposed to the full-body, rigged avatars used in Tivoli Cloud. In fact, one of the people sitting around that campfire today was animating his avatar’s hands and fingers using a Leap Motion Controller! It was amazing to sit across the campfire from Max and watch him wiggle his avatar’s fingers in real time.

Max Huet, Caitlyn Meeks, and Roxie sitting around the campfire (all three avatars were created using Wolf3D’s Ready Player Me software)

Here’s a closer look at some Ready Player Me-created avatars, provided by Caitlyn Meeks of Tivoli Cloud VR

Using Ready Player Me, it is possible to create endlessly customizable human avatars—and Caitlyn tells me that you don’t even need to start from a selfie! You can just jump right into the program (as shown in the video above) and start creating your perfect virtual representation!
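For developers, part of Ready Player Me’s appeal is that the avatars it produces are standard rigged glTF (.glb) models, which makes them straightforward to load into an engine. As a purely hypothetical sketch (the avatar URL below is a placeholder, not a real endpoint), here is how such a model could be loaded into a three.js scene on the web:

```typescript
// Hypothetical sketch: loading a rigged .glb avatar into a three.js
// scene. The avatar URL is a placeholder, not a real endpoint.

import * as THREE from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";

const scene = new THREE.Scene();
const loader = new GLTFLoader();

loader.load(
  "https://example.com/my-avatar.glb", // placeholder avatar URL
  (gltf) => {
    scene.add(gltf.scene); // the rigged avatar, ready to be animated
  },
  undefined, // no progress callback needed for this sketch
  (error) => console.error("Avatar failed to load:", error)
);
```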

Here’s a thirty-minute interview with Timmu Tõke, the co-founder and CEO of Wolf3D (the creators of Ready Player Me), where he talks with Cristian-Emanuel Anton, the co-founder and CEO of MeetinVR, about VR avatars, meetings in virtual reality, and the metaverse. (MeetinVR is yet another social VR platform using Wolf3D’s avatar system to create their own head-and-torso-with-hands avatars!)

I suspect that we will see other platforms join Mozilla Hubs, MeetinVR, and Tivoli Cloud VR in using Ready Player Me avatars! Such corporate partnerships bode well for the future of the metaverse we will all live, work, and play in.

If you are interested in Tivoli Cloud VR, you can visit their website, join their Discord server, or follow them on Twitter to learn more. As I expect I will be writing more often about Tivoli Cloud VR, I have created a new blog category called (surprise!) Tivoli Cloud VR on the RyanSchultz.com blog (and I will go back and add all my previous blogposts about the platform to that new category).

UPDATE Feb. 10th, 2021: Daniel Marcinkowski of Ready Player Me has just published an interview with Caitlyn Meeks, the CEO of Tivoli Cloud VR, about the recent integration of Ready Player Me avatars, which you can read here.


Thank you to Caitlyn Meeks and XaosPrincess of Tivoli Cloud VR, and thanks to Rainwolf for the heads up on the interesting Timmu Tõke interview!