My Twitter feed keeps delivering news nuggets this week! This is an update to a blog post I wrote earlier this year on this technology.
Facebook Reality Labs has published a research article in the journal ACM Transactions on Graphics describing cutting-edge avatar facial animation driven by multiple cameras attached to a VR headset, together with a new multiview image-processing technique. (The full paper is free to download from the link above.) The researchers also presented the work at the SIGGRAPH 2019 computer graphics conference in Los Angeles.
The results are impressive, giving an avatar human-driven, lifelike animation not only of the lower face but also of the upper face, which of course is covered by the VR headset:
This is light years ahead of current avatar facial animation technology, such as the avatar facial driver in Sinespace, which works from your webcam. Imagine being able to conduct a conversation in VR while conveying the full gamut of facial expressions as you talk! This is a potential game changer for sure. It's not yet clear when we can expect to see this technology actually applied to Oculus VR hardware, however; it might still be many years away. But it is exciting!