What is virtual production?

Courtesy of Framestore
Over the last few years, a wave of new production tools and workflows has been changing the face of storytelling across the media and entertainment industry, from live-action film and television, through animated content, and on to broadcast and live events.

Blending traditional techniques with digital innovations that leverage real-time game engines, this phenomenon is called virtual production—and it’s impacting everyone from producers and directors to stunt coordinators and grips. 

But what exactly is virtual production? Is it the same thing as in-camera VFX? And what about previs, pitchvis, stuntvis, techvis, and finally postvis? It’s a lot to get your head around!

In this Real-Time Explainer, we’ll aim to demystify all of these terms and a few more, as well as look at the role Unreal Engine is playing in the evolving world of virtual production. We’ll also explore the benefits associated with this new set of filmmaking techniques.
 

What is virtual production?

Virtual production uses technology to join the digital world with the physical world in real time. It blends traditional filmmaking techniques with modern technology to help creatives achieve their vision. 

Virtual production has been used in the broadcast industry for many years to produce real-time graphics live to air, for example on sports shows and election coverage, where input data is constantly changing and graphics have to be updated on the fly.

Today, technological advances centered on real-time game engines like Unreal Engine mean that high-fidelity photorealistic or stylized real-time graphics are becoming a key part not just of final pixels for live broadcast and events, but also of animated content creation, and of every part of the live-action film production process before, during, and after principal photography takes place on set.
 

The different types of virtual production

Virtual production includes previs, pitchvis, techvis, stuntvis (also known as action design), postvis, and live compositing (also known as Simulcam), as well as virtual scouting, VR location scouting, and in-camera VFX (also known as on-set virtual production). We’ll explore each of these later in this article.
 

How is virtual production different from traditional film production?

Traditional film production is a linear process that moves from pre-production (concepting and planning), through production (filming), and finally to post-production (editing, color grading, and visual effects). Since you don’t get to see how it all comes together until the very end of the process, making changes is time-consuming and costly; you sometimes even have to start over from scratch. The result is that filmmakers’ ability to make creative decisions on the fly is largely constrained.

In contrast, virtual production erodes the boundaries between pre-production and the final result, enabling directors, cinematographers, and producers to see a representation of the finished look much earlier in the production process, and therefore to iterate quickly and inexpensively. The result is that they can refine the narrative to reflect their creative intent, ultimately telling better stories while simultaneously saving time and money.

Using virtual production for visualization

Virtual production enables filmmakers to visualize various aspects of their film before, during, and after production, for both live-action elements and visual effects or animated content. These visualization processes include:
  • Previs
  • Pitchvis
  • Techvis
  • Stuntvis (also called action design)
  • Postvis
  • Live compositing (also called Simulcam)
 

Previs

Previs, short for previsualization, has been around in one form or another for longer than modern virtual production. 3D previsualization is the next step up from hand-drawn storyboards.

Simply put, it means using computer-generated imagery (CGI) to rough out the main visuals and action before shooting begins, in order to define how the script will be presented visually and to try out different scenarios. In an action scene with minimal dialogue, previs also maps out the story elements that move the narrative forward.

Previs provides a reference for what will need to be shot live. It also provides the basis for VFX bidding, where studios compete with each other for the contract to complete some or all of the shots.
Courtesy of The Third Floor
Originally, low-resolution characters and assets were quickly rendered in offline software packages before being cut together. Today, with game engines like Unreal Engine able to render full-resolution, photoreal assets in real time, filmmakers can see their vision brought to life more vividly, iterate on the fly, and make better decisions as a result.

You can find out below how Zoic Studios created the action-packed visual effects sequences for Warner Bros’ TV series Superman & Lois at film quality but on an episodic TV timeline, with help from previs in Unreal Engine.
  

The role of virtual cameras in visualization

Previs, along with many other stages of virtual production, may involve the use of virtual cameras, or VCams. These enable filmmakers to pilot a camera in Unreal Engine from an external device, often an iPad, and record that movement. The resulting camera moves can be used for blocking, where they serve as starting points for manual adjustment; to preview depth of field and decide camera settings for live action; and even as final cameras that are taken all the way through to post-production.
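
To make that a little more concrete, here’s a minimal sketch of scripting a cine camera with Unreal Editor’s Python API. This is not Epic’s VCam implementation; driving the camera from the Python console, and the specific location and lens values, are illustrative assumptions.

```python
# Minimal sketch: spawn a CineCameraActor and dial in lens values to judge
# depth of field, using Unreal Editor's Python scripting. Run from the
# editor's Python console; the location and lens values are illustrative.
import unreal

camera = unreal.EditorLevelLibrary.spawn_actor_from_class(
    unreal.CineCameraActor,
    unreal.Vector(0.0, 0.0, 150.0),   # spawn 150 cm above the origin
    unreal.Rotator(0.0, 0.0, 0.0),
)

cine = camera.get_cine_camera_component()
cine.set_editor_property("current_focal_length", 50.0)  # mm
cine.set_editor_property("current_aperture", 1.8)       # f-stop

# Struct properties come back as copies, so modify and write back.
focus = cine.get_editor_property("focus_settings")
focus.manual_focus_distance = 300.0                     # cm
cine.set_editor_property("focus_settings", focus)
```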

Many filmmakers find working with a tactile, physical device much more appealing and familiar than interacting through a software user interface, and it results in more realistic camera moves in the digital world.
 

Pitchvis

Pitchvis is a form of previs that usually occurs before a project is even greenlit. It’s designed to gain stakeholder buy-in by demonstrating the intended look and feel of a project before a major investment has been committed.

Unreal Engine has been used to successfully pitch projects as varied as Chris Sanders’ 2020 version of The Call of the Wild, where it was instrumental in getting the halted project put back into production, and indie filmmaker HaZ Dulull’s animated series Battlesuit. You can see what HaZ has to say below.

Techvis

Techvis is used to work out the precise technical requirements of practical and digital shots before committing to crew and equipment, including cameras, cranes, and motion-control rigs, on the production day. Techvis can be used to explore the feasibility of shot designs within the confines of a specific location—such as where the crane will go, or even if it will fit on the set—as well as blocking involving virtual elements. It can also be used to determine how much of a physical set is required versus digital set extensions. 

Techvis often uses much more basic assets than previs, since it is not being used to make creative decisions—ideally, those have already been made at this point. Its real value is in providing practical information that the crew can rely on to make the shoot more efficient. 
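
For a flavor of the kind of practical arithmetic techvis answers, here’s a toy sketch using standard pinhole-camera math. The scenario and numbers are illustrative, not drawn from any specific production tool: given a lens and sensor, how much floor space does the camera need to frame a set piece?

```python
# Toy techvis check: how far back must the camera sit to frame a set piece
# of a given width? Standard pinhole-camera math; numbers are illustrative.
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field of view of a pinhole camera, in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def min_camera_distance(subject_width_m, sensor_width_mm, focal_length_mm):
    """Closest camera distance (meters) that still frames the full subject."""
    fov = math.radians(horizontal_fov_deg(sensor_width_mm, focal_length_mm))
    return (subject_width_m / 2) / math.tan(fov / 2)

# A 6 m set piece on a 35 mm lens with a Super 35 sensor (~24.9 mm wide):
print(min_camera_distance(6.0, 24.89, 35.0))  # ~8.4 m of floor space needed
```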
 
Courtesy of PROXi Virtual Production

Stuntvis / action design

Stuntvis—also known as action design—is a combination of previs and techvis for physical stunt work and action sequences. It requires a high degree of accuracy so that action designers can choreograph shots with precision, get creative buy-in, and maintain a high safety level for stunt personnel and performers.

Unreal Engine incorporates real-world physical simulation, enabling stunt crews to rehearse and execute simulated shots that mirror reality as fully as possible. This results in much higher efficiency during production, which can mean fewer shooting days.

Check out this presentation from PROXi Virtual Production founders Guy and Harrison Norris to find out how they are using cutting-edge techniques in Unreal Engine for action design and techvis on films like The Suicide Squad.

Postvis

Postvis happens after the physical shoot has finished. It’s used where a shot will have visual effects added to the live-action footage, but those are not yet complete (or in some cases, not even started). These shots will usually have a green-screen element that will be replaced with CGI in the final product. Postvis often reuses the original previs of VFX elements, combining it with the real-world footage. 

Postvis can provide the filmmakers’ vision to the visual effects team, as well as offering a more accurate version of any unfinished VFX-intensive shots for the editor as they assemble their edit. It enables filmmakers to assess the cut with visual reference—even if final VFX is still a long way from completion—and ensure buy-in from all stakeholders. It can even be used to make early test screenings more representative of the intended final product.
 
Left: postvis. Right: final. Courtesy of SOKRISPYMEDIA.

Live compositing / Simulcam

There’s one more type of visualization we need to discuss. Essentially, live compositing is similar to postvis in that it combines the live-action footage with a representation of the CGI, as a reference; however, in this case, it happens during the shoot. Disappointingly, it’s not called “duringvis”! 

One of the earliest uses of live compositing on a film set was in the making of Avatar by James Cameron, who coined the term “Simulcam” for it. With this technique, the filmmaker can see a representation of the CG elements composited over the live action while shooting, giving them a better sense of what’s being captured.

Typically, it is used to visualize CG characters that are driven by performance capture data, which can be live, prerecorded, or both. Performance capture is a form of motion capture in which actors’ performances, including subtle facial and finger movements, are captured by specialized systems and can then be transferred onto CG characters. Many virtual production processes involve the use of performance capture, but one example of its use with live compositing is the Netflix show Dance Monsters. It was also used to great effect in Robert Zemeckis’ Welcome to Marwen.

Virtual scouting

Another facet of virtual production is virtual scouting. This powerful tool enables key creatives like the director, cinematographer, and production designer to review a virtual location in virtual reality (VR), so they can immerse themselves in the set and get a real sense of its scale. 

This can help teams to design particularly challenging sets and choreograph complex scenes, as was done on John Wick: Chapter 3 - Parabellum, as you can see below.

Another common use for virtual scouting is set dressing: teams can move digital assets around while experiencing the environment at human scale.

Virtual scouting sessions are often collaborative, with multiple stakeholders participating in the review at the same time. They can explore the set together, and identify and sometimes bookmark areas of particular interest for filming certain scenes. They may also call out which props will be built physically and which will remain virtual.

VR location scouting

An extension of virtual scouting is VR location scouting. In this case, the set is a real place. One team member may be sent ahead to capture the scene using photogrammetry tools such as RealityCapture, and then the other team members can review it remotely using virtual scouting in Unreal Engine. This can save significant time and money on travel, especially if there are multiple locations to choose from.

What is in-camera VFX (ICVFX)?

In-camera VFX, also sometimes referred to as on-set virtual production or OSVP, is one form of virtual production that has been getting a lot of press recently, with high-profile shows like The Mandalorian taking advantage of it. With this technique, live-action shooting takes place on a huge LED stage or volume, which displays an environment generated by the real-time engine on its walls and ceiling. The most sophisticated LED volumes are hemispherical domes that completely enclose the stage, while simpler stages might be half cubes, or even just a single wall. 

It’s much more than just background replacement; the real-time scene is live and interactive, and it moves in relation to the camera to create a true-to-life parallax effect, so everything appears in the correct perspective. The LED displays also cast realistic lighting and reflections on the actors, so they appear to be fully immersed in the scene.
Courtesy of Alter Ego, Pixomondo, William F. White, Virtual Production Academy, and Caledon FC
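
The core of that parallax effect is simple geometry: each virtual point has to be drawn at the spot where the tracked camera’s line of sight through that point meets the LED surface. The toy sketch below illustrates the idea for a single flat wall; it is a deliberate simplification of the engine’s real off-axis projection, and the function name and numbers are illustrative.

```python
# Toy illustration of camera-driven parallax on a flat LED wall at y = wall_y.
# A virtual point 'behind' the wall is drawn where the line from the tracked
# camera through that point intersects the wall plane.

def project_to_wall(camera, point, wall_y):
    """Return the (x, z) spot on the wall where `point` should be drawn so it
    looks correct from `camera`. Positions are (x, y, z) tuples in meters,
    with y as the depth axis."""
    cx, cy, cz = camera
    px, py, pz = point
    t = (wall_y - cy) / (py - cy)        # parameter along the sight line
    return (cx + t * (px - cx), cz + t * (pz - cz))

# As the tracked camera moves sideways, the same virtual point lands on a
# different part of the wall; that shift is the parallax the lens sees.
print(project_to_wall((0.0, 0.0, 1.7), (2.0, 12.0, 3.0), 5.0))  # (0.83, 2.24)
print(project_to_wall((1.0, 0.0, 1.7), (2.0, 12.0, 3.0), 5.0))  # (1.42, 2.24)
```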

What are the benefits of in-camera VFX?

In traditional production, where live-action footage will eventually be combined with computer-generated imagery (CGI), actors are shot against a green-screen background. This background can then be replaced with the digital elements in post-production. The actors and director have to imagine the final environment. In contrast, with on-set virtual production, the cast and crew are working in context, making it easier to deliver convincing performances and make better creative decisions.

In some cases, for simpler VFX shots, in-camera VFX can actually produce the final pixels without the need to go into post-production, saving both time and money: the live-action footage and CGI are combined and captured in camera to produce the finished frames. These are known as in-camera finals.

In other cases, in-camera VFX can mean that what would have been medium-difficulty shots require only a small amount of post-production work, and that otherwise difficult shots are much less time-consuming to finish in post than they would have been.

Another huge benefit of working on LED stages is that the lighting and time of day can be easily controlled—no more waiting for that golden hour or ideal weather. Continuity is also easier: scenes can be recalled in exactly the state they were left, in a matter of minutes, not hours.
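
As a small taste of that control, here’s a minimal sketch using Unreal Editor’s Python API to swing the level’s sun toward a lower angle. It assumes the environment is lit by a single DirectionalLight actor, and the chosen angles are illustrative.

```python
# Minimal sketch: push the level's 'sun' toward a golden-hour angle from the
# editor's Python console. Assumes one DirectionalLight lights the scene;
# the pitch and yaw values below are illustrative.
import unreal

for actor in unreal.EditorLevelLibrary.get_all_level_actors():
    if isinstance(actor, unreal.DirectionalLight):
        # Rotator takes (roll, pitch, yaw); a shallow negative pitch puts
        # the sun low on the horizon.
        actor.set_actor_rotation(unreal.Rotator(0.0, -10.0, 45.0), False)
```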

Unreal Engine offers a complete toolset to support ICVFX. You can take a look at it being tested out by some cutting-edge filmmakers below.

Virtual production resources

We hope you’ve enjoyed this whistle-stop tour of the exciting world of virtual production and ICVFX, and that it’s helped to make everything a bit easier to understand. Still want to learn more about virtual production? Check out some handy resources below.

The Virtual Production Field Guide

The Virtual Production Field Guide is a free, two-volume resource that offers in-depth insights into how the technology is transforming the art and craft of filmmaking, together with practical advice for using the techniques in your own production. 

The Epic Developer Community

The Epic Developer Community is your go-to destination to ask and answer questions, show off your work, get inspiration from others, and access hundreds of hours of free learning content. Here are some resources that are particularly useful to creators interested in virtual production:
 

Virtual Production Hub

Our Virtual Production Hub showcases news and case studies from around the globe. Check it out for the latest virtual production information, insights, and inspiration. 

The Virtual Production Glossary

Created with input from the Visual Effects Society, the American Society of Cinematographers, Epic Games, and Netflix, the Virtual Production Glossary is a great resource to help you understand the terminology you may encounter as you explore this field.

More Real-Time Explainers