The results are impressive, giving an avatar human-driven, lifelike animation not only of the lower face but also of the upper face, which of course is covered by the VR headset:
This is light-years ahead of current avatar facial animation technology, such as the webcam-driven avatar facial animation in Sinespace. Imagine being able to conduct a conversation in VR where you can convey the full gamut of facial expressions while you talk! This is a potential game-changer for sure. It’s not clear when we can expect to see this technology actually applied to Oculus VR hardware, however; it might still be many years away. But it is exciting!
Have you ever wondered what your virtual-world avatar could look like, 10 to 20 years from now?
A recently published article in WIRED covers the work of Facebook Reality Labs, which is developing stunningly lifelike virtual reality avatars, called Codec Avatars, that can recreate the full gamut of facial expressions:
For years now, people have been interacting in virtual reality via avatars, computer-generated characters that represent us. Because VR headsets and hand controllers are trackable, our real-life head and hand movements carry into those virtual conversations, the unconscious mannerisms adding crucial texture. Yet even as our virtual interactions have become more naturalistic, technical constraints have forced them to remain visually simple. Social VR apps like Rec Room and AltspaceVR abstract us into caricatures, with expressions that rarely (if ever) map to what we’re really doing with our faces. Facebook’s Spaces is able to generate a reasonable cartoon approximation of you from your social media photos but depends on buttons and thumb-sticks to trigger certain expressions. Even a more technically demanding platform like High Fidelity, which allows you to import a scanned 3D model of yourself, is a long way from being able to make an avatar feel like you.
That’s why I’m here in Pittsburgh on a ridiculously cold, early March morning inside a building very few outsiders have ever stepped foot in. Yaser Sheikh and his team are finally ready to let me in on what they’ve been working on since they first rented a tiny office in the city’s East Liberty neighborhood. (They’ve since moved to a larger space on the Carnegie Mellon campus, with plans to expand again in the next year or two.) Codec Avatars, as Facebook Reality Labs calls them, are the result of a process that uses machine learning to collect, learn, and re-create human social expression. They’re also nowhere near being ready for the public. At best, they’re years away—if they end up being something that Facebook deploys at all. But the FRL team is ready to get this conversation started. “It’ll be big if we can get this finished,” Sheikh says with the not-at-all contained smile of a man who has no doubts they’ll get it finished. “We want to get it out. We want to talk about it.”
Would you want your avatar in a virtual world to look exactly like you, and have their face move exactly like your face, with all your unique expressions? Some people would find this creepy; others would embrace it. Many people would probably prefer an avatar who looks nothing like their real-life selves. What do you think of Facebook’s research? Please feel free to leave a comment on this blog post, thanks!