Metaverse Newscast Episode 5: My Interview with Chris McBride, Winner of the Best Avatar Contest in High Fidelity

Last October (before the FUTVRE LANDS VR Festival was held), High Fidelity held a Best Avatar Contest at one of its monthly stress-testing events. The winner of that contest was Chris McBride, for his Ganesha elephant-god avatar. In episode 5 of the Metaverse Newscast, I interview Chris about his creations in his domain, Ozone:

Enjoy! And I’d like to take this opportunity to thank my producer and cameraman Andrew William for all his tireless work in pulling this video series together.

Facebook Reality Labs Gives Us a Preview of What Your Avatar Could Look Like in the Future

Have you ever wondered what your virtual-world avatar could look like 10 to 20 years from now?

A recently published article in WIRED covers the work of Facebook Reality Labs, which is developing stunningly lifelike virtual reality avatars, called codec avatars, capable of recreating the full gamut of facial expressions:

Examples of Facebook Reality Labs’ Codec Avatars

For years now, people have been interacting in virtual reality via avatars, computer-generated characters that represent us. Because VR headsets and hand controllers are trackable, our real-life head and hand movements carry into those virtual conversations, the unconscious mannerisms adding crucial texture. Yet even as our virtual interactions have become more naturalistic, technical constraints have forced them to remain visually simple. Social VR apps like Rec Room and AltspaceVR abstract us into caricatures, with expressions that rarely (if ever) map to what we’re really doing with our faces. Facebook’s Spaces is able to generate a reasonable cartoon approximation of you from your social media photos but depends on buttons and thumb-sticks to trigger certain expressions. Even a more technically demanding platform like High Fidelity, which allows you to import a scanned 3D model of yourself, is a long way from being able to make an avatar feel like you.

That’s why I’m here in Pittsburgh on a ridiculously cold, early March morning inside a building very few outsiders have ever stepped foot in. Yaser Sheikh and his team are finally ready to let me in on what they’ve been working on since they first rented a tiny office in the city’s East Liberty neighborhood. (They’ve since moved to a larger space on the Carnegie Mellon campus, with plans to expand again in the next year or two.) Codec Avatars, as Facebook Reality Labs calls them, are the result of a process that uses machine learning to collect, learn, and re-create human social expression. They’re also nowhere near being ready for the public. At best, they’re years away—if they end up being something that Facebook deploys at all. But the FRL team is ready to get this conversation started. “It’ll be big if we can get this finished,” Sheikh says with the not-at-all contained smile of a man who has no doubts they’ll get it finished. “We want to get it out. We want to talk about it.”

The results (which you can see more of in the photos and videos in the WIRED article) are impressive, but they require a huge amount of data capture beforehand: 180 gigabytes of data every second! So don’t expect this to be coming out anytime soon. But it is a fascinating glimpse of the future.
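To put that 180-gigabytes-per-second figure in perspective, here is a quick back-of-envelope calculation (the 10-minute session length is just an assumed example; the WIRED article doesn't say how long a capture session lasts):

```python
# Back-of-envelope: raw data produced by one capture session at the
# 180 GB/s rate quoted in the WIRED article.

CAPTURE_RATE_GB_PER_SEC = 180   # figure quoted in the article
SESSION_MINUTES = 10            # assumed example, not from the article

total_gb = CAPTURE_RATE_GB_PER_SEC * SESSION_MINUTES * 60
print(f"{SESSION_MINUTES} minutes of capture ≈ {total_gb:,} GB (≈ {total_gb / 1000:,.0f} TB)")
# 10 minutes of capture ≈ 108,000 GB (≈ 108 TB)
```

Even at a small fraction of that rate, it's easy to see why this kind of capture pipeline is nowhere near consumer hardware yet.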

Would you want your avatar in a virtual world to look exactly like you, and have its face move exactly like your face, with all your unique expressions? Some people would find this creepy. Others would embrace it. Many people would probably prefer an avatar that looks nothing like their real-life selves. What do you think of Facebook’s research? Please feel free to leave a comment on this blog post, thanks!