December 12, 2018
Digital Domain’s quest to craft the ultimate digital human
As technology edges ever closer to producing digital humans that fool the eye into thinking they’re real, advances in facial capture technology are key. Audiences instinctively know what real facial muscle, skin, and eye movement look like, which makes the process of animating digital faces particularly challenging. On top of that, the facial capture system needs to retain the uniqueness of the actor’s performance.
Over the last decade, the Digital Human Group has been dedicated to developing new technologies to generate realistic digital humans. Using Unreal Engine, they’re close to producing a real-time system that could seriously disrupt the way directors, tech teams, and actors collaborate on set.
Evolving the production process with virtual production
“One of the areas we’re really the most excited about is this whole avenue of virtual production,” explains Darren Hendler, director of the Digital Human Group. “We’ve only just started doing some of that on sets, where we’ve been able to, in a more offline way, give an actor an example of what their character is going to look like. This becomes a hugely powerful tool for them, because they can really see what their performance is going to look like and how it’s going to translate.
“But now, what if we could do that live and on set with actors, with the directors, and everybody?
“It’s all coming, and it’s important for people to understand what, very, very soon, is going to be possible with some of this new technology.”
AI and real-time technology merge
Pushing deeper into the realm of virtual production, the team found that Unreal Engine provided the framework needed to take their work to the next level of realism in real time. The Digital Human Group’s active research into deep learning and artificial intelligence has also played a pivotal role in the forward momentum of this technology, explains Hendler.
The traditional process of creating and animating a rig forces an artist to work within the rig’s limitations. Instead, the Digital Human Group found that using a camera to capture facial movements at very high resolution not only returned a better performance, but also gave them the opportunity to use AI and deep learning to enhance the process in ways never seen before.
Hendler is excited that his team can now automatically train these systems to recognize, understand, and recreate a face without losing what makes the actor’s performance unique. “And now we’re able to do it, not in weeks or months like before,” he says. “Now we’re able to do it instantaneously. It’s getting fast enough that we can start to do real-time, very highly realistic performing faces, which is amazing.”
Curious to learn more? Watch the full video case study above, and then head over to our new virtual production hub for even more content about this exciting new approach to media production.