January 23, 2019
How virtual production brought dolls to life in “Welcome to Marwen”
Welcome to Marwen is based on the true story of artist Mark Hogancamp who, after suffering a brutal assault, turned to photography as a means of healing. The subject of his photos was a 1:6 scale WWII-era village that he assembled in his backyard, including dozens of carefully-posed dolls representing his friends, family, and attackers as well as Hogancamp himself.
“We just finally figured out how to really do it, to make sure that the emotion is completely translated to the avatar,” says Zemeckis. “We got another tool in the movie toolbox.”

From nocap to mocap
Welcome to Marwen includes 46 minutes of doll animation. “How can we get believable human-like performances for this much of the movie at this scale and also have them mesh completely with the real physical world at the same time?” asks Kevin Baillie, visual effects supervisor on Marwen, echoing a question he asked when Zemeckis first gave him the script. “Those two things really seemed at odds with each other.”
Zemeckis wanted the dolls’ performances, especially facial emotions, to reflect the actors’ actual portrayals. Performance capture—motion capture that records the actor’s full acting performance, including facial expressions—seemed like the obvious solution, especially since Zemeckis had pioneered such techniques in The Polar Express back in 2004. But the director, concerned about the limitations of such systems, wanted the VFX team to come up with something else.
Baillie tried different solutions, including filming Steve Carell, the actor who plays Hogancamp, then modifying and switching out Carell’s body parts to more closely match doll proportions. “It just looked horrible,” says Baillie, “like somebody in a high-end Halloween costume.”
Baillie then tried a reverse approach: instead of augmenting the live actor with doll parts, they augmented Carell’s digital doll with live actor parts by projecting a warped version of Carell’s eyes and mouth from his live performance onto his digital doll’s face. When they watched the results, they knew they had a winner.
“It was just like ‘Wow, this reads like a human performance coming through completely clearly but it looks totally believable as a doll, too!’ ” says Baillie. “That’s the test that got the film greenlit, and the methodology that we ended up using for the film.”
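To make the projection idea a little more concrete, here is a minimal sketch of the general approach in Python with OpenCV: warp the eye and mouth region of a live-action frame into the doll’s face space, then blend it onto the doll render. The file names, landmark coordinates, and mask shape are illustrative assumptions for the sketch, not details from the Marwen pipeline.

```python
# Illustrative sketch of projecting a warped live-action facial region onto a
# rendered doll face. File names, landmarks, and mask are hypothetical.
import cv2
import numpy as np

actor = cv2.imread("actor_frame.png")    # live-action plate (placeholder file)
doll = cv2.imread("doll_render.png")     # rendered doll face (placeholder file)

# Three matched landmarks (e.g., eye corners and mouth center) in each image.
# In practice these would come from a facial tracker; here they are made up.
src_pts = np.float32([[320, 240], [420, 245], [370, 360]])   # on the actor
dst_pts = np.float32([[180, 150], [240, 152], [210, 220]])   # on the doll

# Affine warp maps the actor's facial region into the doll's face space.
warp = cv2.getAffineTransform(src_pts, dst_pts)
warped_actor = cv2.warpAffine(actor, warp, (doll.shape[1], doll.shape[0]))

# Soft mask over the eyes and mouth so only those regions carry live footage.
mask = np.zeros(doll.shape[:2], dtype=np.uint8)
cv2.ellipse(mask, (210, 185), (70, 55), 0, 0, 360, 255, -1)

# Seamless cloning blends the warped footage onto the doll render.
composite = cv2.seamlessClone(warped_actor, doll, mask, (210, 185), cv2.NORMAL_CLONE)
cv2.imwrite("doll_with_projected_face.png", composite)
```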
This approach required the team to record the actors’ facial performances on a mocap stage, but with an unusual twist. “Because we were using the face footage... they couldn't have any facial markers in any of the areas, they couldn't wear head cams,” says Baillie. “And not only that, but we had to light them to match exactly to what the final shot was going to look like.”
Lighting with virtual production
The need for an exact match between the physical and digital lighting drove the team to an essential preparatory step: they built out all the doll scenes in Unreal Engine before shooting began. In this way, the director of photography (DP) was able to work within UE4 to design all the lighting long before the mocap shoot, trying out different scenarios to get the look he wanted for each shot.

“It made it really, really important that on the mocap stage, we really nailed the lighting, which was a big challenge for our DP C. Kim Miles because he’s lighting in a big gray void,” says Baillie. “That was pretty tricky as well, and involved a lot of virtual production techniques to give him the insight into how the decisions he was making on the mocap stage would actually affect the entire environment in the end result.”
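As a rough illustration of how a pre-built UE4 set can be dressed and re-lit from script, the sketch below uses Unreal’s editor Python API (UE 4.x) to spawn a point light and dial in its brightness and color. The placement and values are hypothetical stand-ins for the kind of look-development iteration described above, not the production’s actual setup.

```python
# Hedged sketch: spawn and tune a key light in a UE4 level via the editor
# Python API. Location, intensity, and color values are illustrative only.
import unreal

# Place a point light in the pre-built doll set (coordinates are made up).
location = unreal.Vector(120.0, -40.0, 200.0)
rotation = unreal.Rotator(0.0, 0.0, 0.0)
key_light = unreal.EditorLevelLibrary.spawn_actor_from_class(
    unreal.PointLight, location, rotation)

# Dial brightness and color toward the look being tested for the shot.
key_light.set_brightness(5000.0)
key_light.set_light_color(unreal.LinearColor(1.0, 0.92, 0.85, 1.0))
```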
During each motion capture session, if the director wanted to try out different lighting, blocking, or camera angles, the UE4 team was on hand to update the corresponding digital scene and give a preview of the changes.
Since the actors’ performances were such an important part of the mocap process, keeping the actors engaged in the process was crucial. “If an actor performed a rehearsal and went back and looked at the monitor and wanted to change something about the blocking of the scene, or if Bob [Zemeckis] wanted to change something, then we were able to do that and see exactly how it looked,” says Baillie. “It put the actors more in a world where it’s less, ‘Trust me, it’s going to work out,’ and it actually showed them how it was going to work out.”
Zemeckis credits the real-time feedback from Unreal Engine with speeding up communication between himself, the crew, and the actors. “It’s kind of like a very elaborate moving storyboard,” he says, “which makes you able to communicate very simply to everyone, including the actors, what you’re trying to do.”
Once all the performances were recorded, the VFX team combined the footage with hand-keyframed animation on the doll head rigs, going back and forth between the 3D scan of the actor’s head and the doll model to find the right mix of face footage and keyframing. Through this process, the team applied believable, recognizable actor performances to the dolls’ faces.
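In spirit, that mix can be thought of as a per-frame, per-region blend between the projected face footage and the keyframed doll animation. The snippet below is a purely illustrative weighting of two rendered face layers; the arrays and weight value are placeholders, not the studio’s rig or compositing setup.

```python
# Illustrative only: mix a projected-footage face layer with a keyframed doll
# face layer. The inputs here are placeholder images standing in for renders.
import numpy as np

def blend_face(projected: np.ndarray, keyframed: np.ndarray, weight: float) -> np.ndarray:
    """Linear mix of two face renders; weight=1.0 keeps only the projected footage."""
    weight = float(np.clip(weight, 0.0, 1.0))
    return weight * projected + (1.0 - weight) * keyframed

# Placeholder 512x512 RGB "renders" for the two sources.
frame_projected = np.zeros((512, 512, 3), dtype=np.float32)
frame_keyframed = np.ones((512, 512, 3), dtype=np.float32)

# Lean toward the live footage (e.g., around the eyes and mouth).
result = blend_face(frame_projected, frame_keyframed, weight=0.7)
```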
Virtual production for better filmmaking
Beyond providing a means to record and transfer actors’ performances to animated characters, virtual production gives filmmaking crews opportunities for creativity that just weren’t possible before.

“What virtual production does, is it really does allow us to be collaborative and show people the fruits of their labors right then and there, so that they can actually understand how some decision that they made today is going to affect the ultimate outcome,” says Baillie. “That sort of ownership...is so key to allowing not just the director, but every department to have their stamp on the final movie in as direct a way as possible.”
Zemeckis, never one to shy away from technology, concurs. “Technology is magnificent. Technology has to be embraced because it allows the filmmaker to make the best possible movie. It’s all there to serve the character and the story.”
Visit our Virtual Production page to get more great content from our Visual Disruptors series!