Unreal Engine 4.20 Released
July 16, 2018

By Jeff Wilson

What’s New

Unreal Engine 4.20 delivers on our promises to give developers the scalable tools they need to succeed. Create a future-focused mobile game, explore the impact of Niagara, breathe life into compelling, believable digital humans, and take advantage of workflow optimizations on all platforms.

You can now build lifelike digital characters and believable worlds with unparalleled realism. Take your visual effects to the next level with Unreal Engine’s new Niagara particle editor to add amazing detail to all aspects of your project. Use the new Digital Humans technology powering the “Meet Mike” and “Siren” demos to raise the bar on realism. With the new Cinematic Depth of Field, you can achieve cinema-quality camera effects in real time.

Unreal Engine empowers you to make things your way by giving you the tools to customize the creation process to your preferred style and workflow. With the new Editor Scripting and Automation Libraries, you can create completely customized tools and workflows. Make the lives of designers and artists easier by adding new actions to apply to Actors or assets thanks to scripted extensions for Actor and Content Browser context menus.

Battle-tested mobile and console support means you can create once and play on any device to deliver experiences anywhere users want to enjoy them. Epic has rallied around the mobile release of Fortnite to optimize Unreal Engine for mobile game development. We have made tons of performance improvements including implementing both hardware and software occlusion queries to limit the amount of work the hardware needs to do. Proxy LOD is now production-ready and can further reduce the complexity of the geometry that needs to be rendered at any time. 

In addition to all of the updates from Epic, this release includes 165 improvements submitted by the incredible community of Unreal Engine developers on GitHub! Thanks to each of these contributors to Unreal Engine 4.20: 

Adam Moss (adamnv), Akihiro Kayama (kayama-shift), Alan Edwardes (alanedwardes), Alan Liu (PicaroonX), Andrew (XenonicDev), Andrew Haselgrove (Dimpl), Anton Rassadin (Antonrr), arkiruthis, Begounet, Brandon Wilson (Brandon-Wilson), c4tnt, Changmin (cmheo), Christian Loock (Brainshack), Clinton Freeman (freemancw), Daniel Assuncao (dani9bma), David Payne (dwrpayne), Deep Silver Dambuster Studios (DSDambuster), Derek van Vliet (derekvanvliet), Eduard Gelbling (NachtMahr87), frankie-dipietro-epic, Gautier Boëda (Goutye), George Erfesoglou (nonlin), Giovanny Gutiérrez (bakjos), Gregor Gullwi (ggsharkmob), Hannah Gamiel (hgamiel), Hyuk Kim (Hybrid0), Ibraheem Alhashim (ialhashim), Ilya (ill), Jacob Nelson (JacobNelsonGames), Jaden Evanger (cyberblaststudios), Jared Taylor (Vaei), Jesse Yeh (jesseyeh), Jia Li (shrimpy56), Jørgen P. Tjernø (jorgenpt), June Rhodes (hach-que), Junichi Kimura (junkimu), Kalle Hämäläinen (kallehamalainen), kinolaev, Kory Postma (korypostma), krill-o-tron, Kryofenix, Lallapallooza, Layla (aylaylay), Lee Berger (IntegralLee), Leon Rosengarten (lion03), Lirrec, malavon, Marat Radchenko (slonopotamus), Marat Yakupov (moadib), Mathias L. Baumann (Marenz), Matt Hoffman (LordNed), Matthew Davey (reapazor), Maxime Turmel (maxtunel), Michael Allar (Allar), Michael Kösel (TheCodez), Michael Puskas (Mmpuskas), Mikayla Hutchinson (mhutch), mimattr, Mitsuhiro Koga (shiena), Muhammad A.Moniem (mamoniem), nakapon, Nicolas Lebedenco (nlebedenco), Paul Eremeeff (PaulEremeeff), Phillip Baxter (PhilBax), projectgheist, Rama (EverNewJoy), redfeatherplusplus, Rei-halycon, Robert Khalikov (nbjk667), Roman Chehowski (RChehowski), S-Marais, Sam Bonifacio (Acren), Satheesh  (ryanjon2040), Scott Freeman (gsfreema), SculptrVR, Sebastian Aaltonen, Sébastien Rombauts (SRombauts), Seokmin Hong (SeokminHong), Sertaç Ogan (SertacOgan), stephenwhittle, Temaran, Thomas Miller (tmiv), Trond Abusdal (trond), TWIDan, Tyler (tstaples), Usagi Ito (usagi), yama2akira, Yang Xiangyun (pdlogingithub), yehaike, Zachary Burke (error454)

Major Features

New: Optimizations and Improvements for Shipping on Mobile Platforms

Unreal Engine 4.20 brings well over 100 mobile optimizations developed for Fortnite on iOS and Android, marking a major shift for developers in terms of ability to more easily ship games and seamlessly optimize gameplay across platforms. Major enhancements include improved Android debugging, mobile landscape improvements, and occlusion queries on mobile.

  
 

Hardware and Software Occlusion Queries on Mobile

Hardware occlusion queries are now supported on high-end iOS and Android devices that support ES 3.1 or Vulkan, using the GPU to cull occluded primitives. They are enabled by default on any device that supports them.

Software occlusion queries are an experimental feature that uses the CPU to cull primitive components from the scene. Because the approach is conservative, it can be used on any mobile device.

   

Left - r.Mobile.AllowSoftwareOcclusion 1, r.SO.VisualizeBuffer 1; Right - Render frozen showing occluded parts

To enable Software Occlusion Queries, follow these steps:

  1. Set r.Mobile.AllowSoftwareOcclusion to 1.
  2. Set r.AllowOcclusionQueries to 0 to disable hardware occlusion queries.
  3. Enable any primitive to act as an occluder by setting LOD for Occluder Mesh to true in the Static Mesh Editor.

You can visualize the results in the Mobile Previewer: preview with High-End Mobile, then set r.SO.VisualizeBuffer to 1.
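
If you would rather drive these console variables from a script than type them into the console, the short editor-Python sketch below is one way to do it. This is only a minimal illustration, assuming the Python Editor Script Plugin is enabled; the console variable names are the ones listed above, and passing None as the world context is an assumption.

    import unreal

    # Minimal sketch: set the software-occlusion console variables from editor Python
    # instead of typing them into the console. Assumes the Python Editor Script Plugin
    # is enabled; passing None as the world context object is an assumption.
    for command in ("r.Mobile.AllowSoftwareOcclusion 1",   # enable CPU occlusion culling
                    "r.AllowOcclusionQueries 0",           # disable hardware occlusion queries
                    "r.SO.VisualizeBuffer 1"):             # visualize the occlusion buffer
        unreal.SystemLibrary.execute_console_command(None, command)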

Platform Material Stats

Quickly profile and optimize your Materials using the new Platform Stats window inside the Material Editor! You can now see stats for multiple shader platforms and quality levels. For mobile platforms, we use an offline shader compiler to give more accurate instruction and Texture usage information.

Improved Android Debugging

Iterate and debug on Android without having to repackage the UE4 project! When compiling for Android, we now generate a Gradle project file which can be opened in Android Studio. You can place breakpoints in C++ and Java code and use Android Studio to launch a debug session. You can also make changes to C++ source code and recompile. If you start a new debug session, Android Studio will notice the change and quickly upload the new shared library to your device.

Mobile Landscape Improvements

Make your terrains on mobile more interesting now that you can have unlimited Landscape Material layers on mobile devices! While three layers is still the best-optimized case, any number of Landscape layers is supported, provided there are sufficient Texture Samplers available.

You can now use the Feature Level Switch Material nodes in Landscape Materials enabling you to create a single Landscape Material for all platforms.

1 - Mobile Landscape; 2 - PC Landscape

Miscellaneous Mobile Improvements 

The following improvements were made to ship Fortnite on mobile and brought into Unreal Engine 4.20 to benefit all developers:

  • Minimum Static Mesh LOD per platform
  • Minimum Skeletal Mesh LOD per platform
  • Hardware occlusion improvements
  • HLOD tools and workflow optimizations
  • Audio quality node
  • Audio variation culling
  • Audio downsampling  per platform
  • Audio compression quality per platform
  • Shading model tweaks to better match PC
  • Reflection capture brightness fix
  • Landscape support for four layers
  • Landscape tessellation improvements
  • No memory cost for unused LODs, including:
    • Static Meshes
    • Skeletal Meshes
    • Material quality levels
    • Grass and foliage
    • High detail components and meshes
    • High detail emitters in Cascade
  • Settings based on device memory
  • Material memory reduction
  • Editor scriptability for bulk asset changes
  • Particle component pooling
  • Reduced Material Parameter Collection update cost

New: Optimizations and Improvements for Shipping on Nintendo Switch

We have significantly improved Nintendo Switch development by releasing tons of performance and memory improvements built for Fortnite on Nintendo Switch to all Unreal Engine developers!

This includes the following:
  • Support for Dynamic Resolution and Temporal Upsampling
  • Low Latency Frame Syncing for Controller Input
  • Significant CPU Rendering Optimizations
  • Improvements to Threading
  • Better Texture Compression
  • Support for Memory Profiling
  • Backbuffer support for 1080p while in docked mode
  • And many other fixes!

New: Proxy LOD Improvements

The new Proxy LOD tool has graduated from “Experimental” to production-ready! This tool provides performance advantages by reducing rendering costs due to poly count, draw calls, and material complexity, which results in significant gains when developing for mobile and console platforms. It provides an alternative to the third-party package Simplygon and can be used in conjunction with the Level of Detail (LOD) systems in Unreal Engine.

The Proxy LOD tool produces a simpler representation by creating a proxy in the form of a single low-poly parameterized mesh and associated textures that visually approximate a collection of more complex source geometry models. This proxy can then be displayed at runtime when a reduction in model quality is acceptable - for example, when geometry only occupies a small number of pixels on screen.

Note: The Proxy LOD tool is currently only available in Unreal Editor on Windows.

The above image shows the buildings and parking lots in Fortnite Battle Royale constructed using the Proxy LOD tool where both Gap-Filling and Hard-Edge Splitting were in use.

The production-ready version of the Proxy LOD tool has several enhancements over the Experimental version found in 4.19, most notably improved user control over the Normals on the Proxy Geometry and the ability to generate much simpler proxies by using gap-filling to automatically close doors and windows.

Improved Normal Control: Hard Edge Split Normals

The extreme constraints on Fortnite memory usage call for highly efficient use of LODs. For most proxies, very small base color textures are generated and no Normal map is used; this approach requires the highest possible quality Normals on the proxy mesh itself.

1 - Hard Edge Angle = 80; 2 - Hard Edge Angle = 0

The above gif shows the effect of hard-edge splitting for vertex normals. Image 2 shows smooth vertex normals, as calculated in the 4.19 Experimental version of the Plugin; the dark regions near the bottom of the house are indicative of its shortcomings. Compare this with image 1, which shows hard-edge vertex normal splitting with a user-supplied hard-edge cutoff angle.

In addition to the hard-edge cutoff angle, the user may now specify the method used to compute vertex normals, choosing among Angle Weighted, Area Weighted, and Equal Weighted.

Gap Filling

For watertight geometry, the Proxy system automatically discards any inaccessible structures (for example, interior walls or furniture within a closed house). For ideal results, source geometry should be constructed or altered with this in mind, but due to game production constraints that isn’t always feasible. To facilitate the generation of efficient Proxy LODs from source geometry that is nearly watertight, the Proxy LOD tool can optionally use the level-set-based techniques of dilation and erosion to close gaps. The intended use case is primarily doors and windows in distant buildings.

1 - Original Mesh; 2 - No Gap Filling; 3 - Gap Filling

The above gif shows the effect of using Gap Filling. All images were constrained to use a fixed small amount of texture space. Image 2 is the result of Proxy LOD on a building without using Gap Filling, in which case the LOD includes the interior of the building (at the cost of unseen triangles and texels). Image 3 is the same building with Gap Filling used to automatically close the doors and windows of the buildings, resulting in fewer total triangles and a better use of the limited texture resource.

New: Cinematic Depth of Field

The new Cinematic Depth of Field (DoF) enables you to achieve your vision of rendering cinema quality scenes in a real-time environment! This new method is designed as a higher-quality replacement for the Circle DoF method and is faster than most other DoF methods, such as Bokeh. With Cinematic DoF, the depth of field effect is cleaner, providing a cinematic appearance with the use of a procedural Bokeh simulation. This new DoF implementation also supports alpha channel, dynamic resolution stability, and includes settings to scale it down for console projects.

 

1 - Cinematic Depth of Field enabled; 2 - Depth of Field disabled

Cinematic Depth of Field is enabled by default and replaces the current selection for the Circle DoF method in the Camera and Post Process settings.

  • Cinematic DoF supports the following Platforms:
    • D3D11 SM5, D3D12 SM5, Vulkan SM5, PlayStation 4, Xbox One, and Mac.
  • The procedural Bokeh simulation supports the following features:
    • Configuring the number of blades for the Diaphragm.
    • Configuring the curvature of the blades directly with the Lens’ largest aperture (Minimal F-stop).
    • Configurable controls available in the Camera settings of the Post Process Volume, Camera Actor, and Cine Camera Actor.
  • Many scalability settings, customizable through r.DOF.* console variables, let you scale the effect according to your project’s needs on hardware with finite resources.

For additional information, please see the Depth of Field documentation.

New: Niagara Visual Effects Editor (Beta)

The Niagara visual effects (VFX) Editor is now available as a Beta plugin! Try out a Beta version of the all-new visual effects tool that will eventually replace Unreal Cascade. Watch this GDC talk for a deeper dive on the vision for Niagara.

Note: The Beta nature of this feature means that we are far enough along in development that we want to share it with our customers and get as much feedback as possible before it becomes a standard UE4 Feature. Beta does not mean that Niagara is production ready as we still have quite a bit of performance optimization and bug fixing that needs to be done before you can consider using this tool for production. However, we hope that effects developers begin investing in learning Niagara and work with us to make it the best VFX editor that it can be.

For an overview of Niagara, please watch the GDC 2018 presentation Programmable VFX with Unreal Engine’s Niagara and read the Niagara documentation.

Improvements to Effect Design and Creation


Left - Particle system utilizing dynamic input module; Right - Dynamic input module

  • Skeletal Meshes can now emit particles from their surface, driven either by Material name or by a named bone’s influence region.
  • Specifying default values in Modules has been improved, allowing a wide variety of behaviors from calling functions to using default dynamic inputs.
  • Mesh particles now support Angular Velocity.
  • Beams support has been added to the Ribbon renderer with new corresponding Modules.
  • Dependencies between Modules can now be defined, so users are informed when they are putting the stack in a bad configuration and are given options to auto-fix it.
  • Multiple improvements have been made to merging System Emitters and Base Emitters, enhancing overall stability.
  • Modules can now be moved up and down the stack via drag-and-drop. Inherited Modules cannot be moved because doing so complicates merging.
  • Modules can now be enabled/disabled within the stack. This will also work for inheritance.
  • Sequencer and Blueprint support for setting Niagara User Namespace variables has been added.
  • You can drive parameters by custom HLSL Expressions, Dynamic Inputs (graph snippets), links to other variables, or by value.
  • Optionally, particles can now have a Persistent ID, which is guaranteed to be unique for that emitter.
  • Multiple renderers of each type can be applied to an emitter. Each instance can adjust where it gets the values for a given parameter. For example, an emitter could have two sprite renderers, one pulling its position from a particle’s position and the other pulling its position from a particle’s offset position.
  • The Niagara Extras Plugin also contains a debug Material that routes various per-particle parameters to a dialog-like display.
  • Houdini has provided a simple CSV importer to Niagara, enabling demo content for GDC 2018.
  • A wide variety of functionality for Niagara has been added under the Automated Testing system.

Updated User Interface

The Niagara interface has been designed to make complex effects intuitive to create. It uses a stack metaphor as its primary method of combining pieces of script logic. Inside the stack, you will find a Timeline to control aspects of the effect over time, a Parameters Panel for easy access to variables available in the effect, and an Attribute Spreadsheet to quickly find and react to information as the effect is running.

New Modules

All of Niagara’s Modules have been updated or rewritten to support commonly used behaviors in building effects for games and adhere to a consistent set of coding standards. New UI features have also been added for the Niagara stack that mimic the options developers have with UProperties in C++, enabling inline enable/disable or variable display based on the state of another variable. 

GPU Simulation

Niagara now has support for GPU Simulation when used on DX11, PS4, Xbox One, OpenGL (ES3.1), and Metal platforms. There are plans for Vulkan and Switch to support GPU Simulation in a future release. Current limitations and known issues with GPU simulation are described below:

  • Full support for Niagara requires the ability to read back data from the GPU. Currently, only our DX11 and PS4 rendering interfaces support this functionality; OpenGL and Metal support is in progress.
  • Collision, Curves, and Curl Noise Fields are supported on the GPU. Meshes, Skinned Meshes, Spline Components, and more specialized data interfaces are not yet supported. The API for GPU shaders to interact with UNiagaraDataInterfaces has been redesigned as well.
  • Sprite and Instanced Static Mesh rendering from particles is supported on GPU simulations. At this time, Light Generation from Particles and Ribbons from Particles do not work on the GPU.
  • Events only work on the CPU and will be undergoing significant changes after Unreal Engine 4.20.

CPU Simulation & Compilation

Niagara CPU Simulation now works on PC, PS4, Xbox One, OpenGL (ES3.1) and Metal. At this time, Vulkan and Switch are not supported.

  • The CPU virtual machine (VM) now compiles its contents to the DDC on a background thread, significantly improving overall compilation speed and team efficiency. Further work is required to make the final and expensive VM optimization step occur in ShaderCompileWorker because it depends on non-thread safe libraries. Compilation dependencies are properly tracked across Modules, clearly identifying when we need to recompile certain parts of the stack.
  • Physics simulation on the CPU should properly model the Physics Material values for friction and restitution (bounciness).
  • Emitters will now simulate in parallel on worker threads.

New: Digital Human Improvements

As part of Epic’s character explorations into Digital Humans, which started with the Photorealistic Character bust, many rendering improvements have been made to create realistic, believable characters that come to life.


While developing these characters, the following rendering improvements have been made for Skin, Eyes, Lighting, and Subsurface Scattering:
  • Added a new Specular model with the Double Beckman Dual Lobe method.
  • Light Transmission using Backscatter for Subsurface Profiles.
  • Better contact shadowing for Subsurface Scattering with Boundary Bleed Color.
  • Short Distance Dynamic Global Illumination through Post Process Materials.
  • Added detail for eyes using a separate normal map for the Iris.

For additional information, see Digital Humans.

New: Rectangular Area Lights

Rectangular Area Lights enable you to make more realistic lighting for environments containing large light sources, such as fluorescent overhead lights, televisions, lit signs, and more! Rectangular Area Lights are accessible from the Modes panel along with the other light types.

  • Currently only supports the Deferred Renderer.
  • Acts mostly like a Point Light, except it has Source Width and Height to control the area emitting light.
  • Shadowing for Static and Stationary mobility works like an area light source, while Movable dynamic shadowing currently works more like a point light with no area.

Performance Considerations: 

  • More expensive overall than Point or Spot Lights, with the dominant cost incurred when the light is Movable and casting shadows. Shadowing generally has the same cost as for other light types.
  • Lights with Stationary mobility or with shadow casting disabled can be roughly double the cost of a Point Light, with the cost scaling depending on the platform. Static Rectangular Area Lights are effectively free at runtime.

New: Mixed Reality Capture (Beta)

Create compelling spectating experiences for mixed reality applications using the new Mixed Reality Capture functionality, which makes it easy to composite real players into a virtual play space!

The beta Mixed Reality Capture support has three components: video input, calibration, and in-game compositing. We have a list of supported webcams and HDMI capture devices that enable you to pull real-world, green-screened video into the Unreal Engine from a variety of sources. If you have a Vive Tracker or similar tracking device, Mixed Reality Capture can match your camera location to the in-game camera to make shots more dynamic and interesting. Setup and calibration are done through a standalone calibration tool that can be reused across Unreal Engine 4 titles. Once you set up your filming location, you can use it across all applications.

While feature support is in beta, we’re looking forward to getting feedback as we continue to improve the system. More information about Mixed Reality Capture setup can be found in the Mixed Reality Development documentation.

New: nDisplay Flexible, Multi-Display Rendering

Effortlessly create video walls for large visualization installations using the new nDisplay system! Automatically launch any number of Unreal Engine instances - locked firmly together, with deterministic content and frame-accurate time synchronization - across any number of host computers, each instance driving its own projector or monitor display. Use active or passive stereoscopic rendering to enhance the viewer’s sense of immersion in the 3D scene and built-in VRPN support to drive the system from mobile VR controllers.

For more information, please see the documentation.

New: Submix Audio Recording

In the new audio engine, we’ve added the ability to record Engine output - or any individual Submix’s output - to a *.wav file or SoundWave Asset.

Exporting Submix output to a SoundWave Asset.


Exporting Submix output to a *.wav file.

New: Shared Skeletal Mesh LOD Setting

Set LOD settings once and reuse them across multiple Skeletal Mesh assets using the new LOD Settings asset! Inside the Asset Details panel for a Skeletal Mesh, under LOD Settings, you can now select an LOD Settings asset to use, or you can generate a new asset based on the current settings.

Please see the Sharing LOD Settings section of the Skeletal Mesh Asset Details page for more information.

You can also assign the LOD setting and regenerate LODs from Blueprint using a Blutility. 

New: Streaming GeomCache and Improved Alembic importer (Experimental)

We continue to make stability and performance improvements to the geometry cache system, as noted in the following:

  • Individual vertex animation frames are now compressed using an intra-frame codec based on Huffman encoding. Compressed data is streamed from disk, enabling playback of longer sequences with a low amount of memory overhead. The new implementation is still very experimental and is not ready for use in production.
  • The Alembic importer has been changed to iteratively import frames rather than importing all frames in bulk. This should improve the PCA pipeline and overall stability and speed.

New: Scripted Extensions for Actor and Content Browser Context Menu

Easily create in-context tools and workflow enhancements without writing a line of code by extending the context menus for Actors and for Assets in the Content Browser using Blueprint Utilities, or Blutilities.

  • Create a new Blutility using one of the new parent classes - AssetActionUtility (for Content Browser extensions) or ActorActionUtility (for Actor extensions).
  • You can specify what types of Actors or Assets the actions apply to with the GetSupportedClass function.
  • Add logic in events (or functions) with no return value, marking them as “Call In Editor” so they show up in the context menu. When the event is triggered, a pop-up dialog lets you fill in values for any parameters you define on your events (a rough scripting sketch of this pattern is shown below).
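
The workflow above is Blueprint-based, but for readers who script the Editor, here is a hypothetical editor-Python sketch of the same pattern. The class and function names mirror the parent classes described above; whether they are exposed to Python subclassing in this release is an assumption.

    import unreal

    # Hypothetical sketch of a Content Browser context-menu action written in editor
    # Python rather than Blueprint. Whether AssetActionUtility and EditorUtilityLibrary
    # are exposed to Python in this release is an assumption; the Blutility route
    # described above is the documented path.
    @unreal.uclass()
    class LogSelectedAssetsAction(unreal.AssetActionUtility):

        # Mirrors GetSupportedClass: only offer this action for Static Mesh assets.
        @unreal.ufunction(override=True)
        def get_supported_class(self):
            return unreal.StaticMesh

        # Functions marked Call In Editor appear in the right-click context menu.
        @unreal.ufunction(call_in_editor=True)
        def log_selected_assets(self):
            for asset in unreal.EditorUtilityLibrary.get_selected_assets():
                unreal.log("Selected asset: {}".format(asset.get_name()))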

New: Animation Retarget Manager Improvements

Animation Retarget Manager now supports saving and loading of the mapping data, so you can save and reuse mapping data on multiple meshes. You can also quickly save multiple rig data for different animations and reuse them with this feature.

Please see the Retarget Manager page for more information.

New: RigidBody Anim Node Improvements

When using ‘Local Space’ simulation, which offers greater stability, simulated bodies can now respond to the Skeletal Mesh Component being moved around in the world. We have added options to read the linear velocity and acceleration of the component in world space and apply them (scaled and clamped) to the local-space simulation.

We also added the option for any joint to be the base of simulation, and added support for dynamics to easily be reset.

New: Clothing Improvements

Physics Assets now support tapered capsules for collision in clothing simulation.

Note: These are not supported for collision in rigid body simulations.

You can also now copy Skeletal Mesh vertex colors to any selected Clothing Parameter Mask. 

New: Garbage Collection Improvements

Garbage collection performance has been optimized, reducing some operations by as much as 13x! Specifically, we made the following improvements:

  • The “Mark” phase has been optimized and is now multithreaded. On machines with multiple cores, the cost of marking Objects as unreachable has been reduced from 8 ms to 0.6 ms for approximately 500,000 Objects.
  • The “BeginDestroy” phase (unhashing Objects) now runs across multiple frames, using no more than 2 ms per frame. The cost of unhashing Objects will no longer be included in the same frame as the “Mark” phase and reachability analysis.
  • Garbage Collection assumption verification, which runs in development builds, now uses the same multithreaded code as reference-gathering. As a result, development builds will see an improvement in Garbage Collection times. In Epic's tests, sample timings for about 500,000 Objects reduced from over 320 ms to under 80 ms.

New: Visual Studio 2017 

UE4 now uses the Visual Studio 2017 compiler, and the Engine will generate project files for Visual Studio 2017 by default. Visual Studio 2015 is still supported but requires some configuration. Additionally, we’ve added support for the Windows 10 SDK.

Note: Visual Studio 2017 supports the installation of multiple compiler versions side-by-side.

See our Hardware & Software Specifications for more information.

New: Development Streams on GitHub

Unreal Engine development streams are now updated live on GitHub. If you want the latest version of development code, you can now pull these streams directly, without waiting for Epic to merge changes from the development teams into our main branch. Note that these streams are live and have not been vetted by our QA team, unlike our binary releases and the main branch, which typically are.

To learn more, check out our blog post.

New: UMG Safe Zone Improvements 

The Screen Sizes you select in UMG and Play-In-Editor (PIE) settings are now linked with Device Profiles, which also takes into account the Mobile Content Scale Factor, meaning that the final resolution and DPI scale will change based on the device screen size selected. 

 

The following improvements have been made for UMG Safe Zone workflow:

  • Safe Zone previewing is now automatically enabled for Debug Title Safe Zone when using a value less than 1 to test screen sizes for TVs and Monitors.
  • The r.MobileContentScaleFactor console command now scales phone and tablet resolutions in UMG previews and PIE modes.
  • Non-uniform safe zones are now supported for devices like the iPhone X, where parts of the screen are inaccessible.
  • Safe Zones, Scale Boxes, and Common Border Widgets react correctly to non-uniform safe zones and UMG Designer sizes.
  • UMG now displays the selected device, its screen size, and uniform scaling factor for easy reference in the Designer Graph.

For additional information, see UMG Safe Zones.

New: Curve Atlases in Materials

Materials can now use a Curve Atlas asset to store and access linear color curve data, with additional support provided through Blueprint. The Curve Atlas uses the same Linear Color Curves as before, except you can use as many of them as the size of your specified Atlas allows.

To create a new Curve Atlas, use the Content Browser to select Add New > Miscellaneous and select Curve Atlas.

When you open a Curve Asset Editor, you’ll be able to adjust the Hue, Saturation, Brightness, Vibrance, and Alpha clamps of any individual curve. Additionally, the Preview thumbnails in the Content Browser will display the gradient set by the curve.

For additional information, see Curve Atlases in Materials.

New: Mesh Description Mesh Format 

UE4 is moving to a new higher-level intermediate format which can represent any type of mesh asset in the Engine. This is a gradual process that will improve workflow and enable us to provide some great new features. 

The goals of moving to a new mesh format are:

  • All meshes (Static, Skeletal, and potential other mesh-like objects such as terrain and BSP) can have the same internal representation with some interchangeability, to a certain degree.
  • Most UE4 geometry tools will work on any type of mesh based on the geometry format.
  • Any mesh using the new format can be examined and modified using a standard API enabling runtime, native or scripted modification, opening up many possibilities for procedurally generated content.
  • Meshes will be imported directly to the format with the ability to preserve higher-level mesh representations, such as quads or edge hardness. Currently, these are lost when importing a Static or Skeletal Mesh.
  • The new mesh format is structured internally so that modifications can be made in real-time, even to the most complicated meshes. This forms the basis of a work-in-progress mesh editing feature, which is also scriptable, that will be developed for a future release.

In this release, only Static Mesh has been converted to use the new mesh format. Users will not notice any difference to their everyday workflow and the assets themselves will not change. Currently, the new data is automatically created from the old format and cached in the DDC.

New: Label Saved Colors in Color Picker

Colors saved in your Theme Bar or Theme Menu can now have labels for identification purposes! Labels can easily be set by right-clicking the saved color swatch and entering a name for the saved color.

For additional information, see Color Picker.

New: Recently Opened Filter in Content Browser

Quickly find recently viewed Assets in the Content Browser using the new Recently Opened filter! This filter lists the 20 most recently opened assets.

 

You can find the Recently Opened filter in the Filters list under Other Filters. You can change the number of recently opened assets listed in Editor Preferences > Content Browser with Number of Assets to Keep in the Recently Opened Filter.

For additional information, see Content Browser Filters.

New: Shotgun Integration (Beta)

Streamline your production pipeline using the new Shotgun integration for Unreal Engine 4! 

Features include: 

  • It adds the Unreal Editor to your Shotgun launcher, so artists can reliably open the right version of Unreal for the Shotgun project.
  • You can open the Shotgun panel in the Unreal Editor interface, so you can stay up to date with the activity in the Shotgun project as you work.
  • It hooks into the Shotgun loader, so you can easily bring assets into your Unreal Project, and control where they end up in your Content Browser.
  • It even adds Shotgun interaction commands to the contextual menus you get when you right-click Actors in a Level, or assets in the Content Browser.

Note: We're working out the last details before we can share our integration on GitHub. Check back soon for updates and documentation!

New: Editor Scripting and Automation Libraries

The Editor Scripting Utilities Plugin is now available to all Unreal Engine users. This Plugin offers simplified interfaces for scripting and automating the Unreal Editor, working with assets in the Content Browser, working with Actors in the current Level, editing the properties of Static Mesh assets, and more.
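
As a small taste of what the libraries expose, here is a minimal editor-Python sketch that walks assets under a Content Browser path and Actors in the current Level. It assumes the Editor Scripting Utilities and Python Editor Script plugins are enabled; the content path and label prefix are made-up examples.

    import unreal

    # Minimal sketch using the Editor Scripting Utilities from editor Python. The
    # /Game/Props path and the "prop_" label prefix are made-up examples.
    def list_props_and_level_actors():
        # Enumerate assets under a Content Browser path and load the Static Meshes.
        for asset_path in unreal.EditorAssetLibrary.list_assets("/Game/Props", recursive=True):
            asset = unreal.EditorAssetLibrary.load_asset(asset_path)
            if isinstance(asset, unreal.StaticMesh):
                unreal.log("Static Mesh asset: {}".format(asset.get_name()))

        # Enumerate Actors in the currently loaded Level.
        for actor in unreal.EditorLevelLibrary.get_all_level_actors():
            if actor.get_actor_label().startswith("prop_"):
                unreal.log("Level actor: {}".format(actor.get_actor_label()))

    list_props_and_level_actors()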

For details, see Scripting and Automating the Editor.

New: Import Asset Metadata through FBX

When you import an FBX file into Unreal, any FbxProperty data that is saved in that file is now imported as well. You can access this metadata in Blueprint or Python scripts that you run in the Unreal Editor. This can help you customize your own asset management pipelines for Unreal based on information about your assets that comes from your content creation tools. 
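
As a rough illustration, the editor-Python sketch below loads an asset and reads its metadata tags. The asset path and the "FBX.MyCustomProperty" tag name are made-up examples; the actual tag names depend on the FbxProperty data in your source file.

    import unreal

    # Rough sketch: read imported FBX metadata tags from editor Python. The asset
    # path and tag name are hypothetical examples.
    asset = unreal.EditorAssetLibrary.load_asset("/Game/Props/SM_Chair")
    if asset:
        # Dump every metadata tag/value pair stored on the asset.
        unreal.log(str(unreal.EditorAssetLibrary.get_metadata_tag_values(asset)))
        # Read a single tag by name.
        unreal.log(unreal.EditorAssetLibrary.get_metadata_tag(asset, "FBX.MyCustomProperty"))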

For details, see FBX Asset Metadata Pipeline.

New: Improved Script Access to Static Meshes for LODs and Collisions 

Blueprint and Python scripts that you run in the Unreal Editor can now modify more properties of your Static Mesh assets. This allows you to automate some of the tools offered by the user interface of the Static Mesh Editor, for example querying LODs or rebuilding simple collision in bulk, as in the sketch below.
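
The sketch below is a rough editor-Python example of that kind of automation: it reads the LOD count of a Static Mesh and rebuilds its simple collision. It assumes the Editor Scripting Utilities plugin is enabled, and the asset path and exact function names are best-effort assumptions rather than a definitive reference.

    import unreal

    # Rough sketch: query LODs and rebuild simple collision on a Static Mesh asset.
    # The asset path is hypothetical; function and enum names are best-effort
    # assumptions about the Editor Scripting Utilities API.
    mesh = unreal.EditorAssetLibrary.load_asset("/Game/Props/SM_Chair")
    if mesh:
        unreal.log("LOD count: {}".format(unreal.EditorStaticMeshLibrary.get_lod_count(mesh)))

        # Replace any existing simple collision with a single box primitive.
        unreal.EditorStaticMeshLibrary.remove_collisions(mesh)
        unreal.EditorStaticMeshLibrary.add_simple_collisions(
            mesh, unreal.ScriptingCollisionShapeType.BOX)

        unreal.EditorAssetLibrary.save_loaded_asset(mesh)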

New: Blueprint Bookmarks

The Blueprint Bookmarks feature provides the ability to create named Bookmarks in any function graph in the Blueprint Editor. Bookmarks you create are listed in a new UI window, where you can click them to restore the position and zoom level of the Viewport (as well as the active tab you were viewing). In addition to the Bookmarks you create, you can also quickly jump to any Comment node in your Blueprint by selecting the comment from a separate list. Bookmarks are stored locally on your machine, so they won't affect the Blueprints themselves, and syncing content will not overwrite your Bookmarks with those of another user.

New: Blueprint Watch Window

The Blueprint Watch Window is designed to speed up debugging by giving you access to the variables and nodes that you want to watch, even across multiple Blueprints. Watch data from every Blueprint that you open in the Editor, and that is part of the current call stack, will be consolidated into a single list, enabling you to inspect variables and function outputs. Also, you can jump between Blueprints with ease. You can click on an entry in the "Node Name" column to go to the named node in any Blueprint, while selecting entries in the "Object Name" column will select the instance of the object associated with that entry. Arrays, Sets, Maps, and other data structures can be expanded, making a drill-down examination of any data they contain quick and convenient.

New: Navigation System Code Moved to a Module

Most Navigation System-related code has been moved out of the Engine code and into a new Navigation System Module. Game-specific code using navigation system functionality might need to be updated.

A Python (3.5) script is available to parse your project’s source code and point out lines that need updating. Optionally, the script can perform the changes for you, but use this option with caution and with a version control system in place. Script options can be found at the top of the file.

Please see the Programming Upgrade Notes section for details on upgrading your project to work with these changes.

New: Improved Mobile Specular Lighting Model

Mobile specular response has been changed to use the GGX Lighting Model by default. This improves mobile specular quality and better matches SM5 but adds a small cost to shader processing time. 

1 - 4.20 Default GGX Specular; 2 - 4.19 Spherical Gaussian Specular

The previous Spherical Gaussian Specular model is still accessible via the ‘Use legacy shading mode’ project option and can be found under Rendering > Mobile.  

New: Mobile Skylight Reflections

The Mobile Renderer now uses a Skylight Cubemap for Specular Reflections when no Reflection Captures are relevant.

1 - Mobile, no reflection captures; 2 - PC, no reflection captures

New: Replication Driver / Replication Graph

The Replication Graph Plugin provides a replication system optimized for games with large Actor and player counts. The system works by building a series of customized nodes that can centralize data and computation. These nodes persist across multiple frames and can be shared by client connections, cutting down on redundant CPU work and enabling Actors to be grouped together in nodes based on game-specific update rules. We may make changes to the API, so this is considered Experimental in 4.20, but it is in use in Fortnite Battle Royale and it will be a fully supported feature. 

New: Steam Authentication

Steam Authentication has been added! Games can now add a packet handler component that interfaces with Steam’s authentication APIs, enabling them to advertise their servers properly, handle VAC/publisher bans, and provide better validation of clients. If enabled, clients joining a server now have to be authenticated by Steam before being allowed into gameplay. By default, clients who fail authentication are kicked from the server.

New: Virtual Camera Plugin

New to 4.20, the Virtual Camera Plugin enables a user to drive a Cine Camera in Unreal Engine 4 (UE4) using an iPad Pro in a virtual production environment. With ARKit, a Vive Tracker, or an optical motion capture system such as Vicon or Optitrack, the position and rotation of the iPad is broadcast wirelessly to the PC, with the PC sending video back to the iPad.

Camera settings such as focal length, aperture, focus distance, and stabilization can be adjusted using touch input. Additionally, the virtual camera can be used for taking high-res screenshots, setting waypoints, recording camera motion and other tasks related to virtual production.

On the Learn tab of the Epic Games Launcher under the Engine Feature Samples section, there is a Virtual Camera project which includes a sample scene and project set up for use with the Virtual Camera Plugin.

For more information, please see the Virtual Camera Plugin documentation.

New: Frame Accuracy Improvements for Sequencer

Sequencer now stores all internal time data as integers, enabling robust support of frame-accuracy in situations where it is a necessity. Keys, section bounds, and other data are now always locked to the underlying user-controllable sequence resolution; this can be as fine or as coarse as the use-case demands. Very high resolutions will support greater fidelity of key placement and sub-frames, while reducing overall sequence range.

Key Updates:

  • The time cursor in Sequencer is now represented as a block that spans the full range of the currently evaluated Tick, showing very clearly which keys are evaluated and which are not for any given frame.
  • “Force Fixed Frame Interval” playback has been rebranded as “Frame Locked”, setting the Engine max FPS to the Sequence’s display rate and locking time to whole frame numbers (no sub-frame interpolation).
  • Sub frame evaluation remains fully supported for situations where frame accuracy is not a consideration (such as UMG animation).
  • Various time sources are now supported for runtime evaluation such as the Engine clock (supporting world-pause), audio clock and platform clock.
  • The UI can now be viewed in Non Drop Frame (NDF) Timecode and Drop Frame (DF) Timecode. NDF Timecode is available to all frame rates and directly converts the frame number to hours, minutes, seconds, and remaining frames. DF Timecode is only supported on NTSC Rates (23.976, 29.97, 59.94). The display format can be changed with the Ctrl + T keyboard combination or with the framerate UI menu.

Please see the new Sequencer Time Refactor Notes page for more information.

New: Media Track for Sequencer

Sequencer has a new track for playing media sources. It is like the audio track, but for movies. Simply drag-and-drop a Media Source asset into the track view or create a Media Track from the Add Track menu. This feature currently works best with Image Sequences, especially EXR. Image Sequences in the Media Track will accurately sync frames with rendered output.

Please see the Using Media Tracks page for more information.

New: Sequencer Curve Editor and Evaluation Enhancements

Several enhancements have been made to the Curve Editor and Evaluation in Sequencer including:

Weighted tangents are now supported on float curves.

Using weighted curves in the sequencer curve editor

Added support for continuous Euler Angle changes when changing rotations. Euler angles are no longer limited to -180,180, which is necessary to avoid flips in animation.

You can now turn on Quaternion Rotation on a 3D Transform Section via the track’s Properties menu to utilize quaternion interpolation to smoothly interpolate between two rotations. This is similar to the feature previously available in Matinee.

New: Animating Variables on Anim Instances in Sequencer 

It is now possible to animate variables on Anim Instances through possessables, enabling direct control of Anim Blueprint variables, functions, and other content. To add an Anim Instance binding to Sequencer, look for its name in the [+Track] button for Skeletal Animation Components. Any variables that are exposed to cinematics will be shown on its track picker.

Please see the Controlling Anim Instances with Sequencer page for more information.

New: Final Cut Pro 7 XML Import/Export in Sequencer

Sequencer movie scene data can now be exported to and imported from the Final Cut Pro 7 XML format. This can be used to round-trip data to Adobe Premiere Pro and other editing software that supports FCP 7 XML. You can trim and offset shots in editing software and map those changes back to Sequencer automatically during import.

Note: Audio is not supported at this time.

New: Sequence Recorder Improvements 

Sequence Recorder now supports a profile system that is stored in the Persistent Level. Recording profiles enable you to store which actors you wish to record and their settings, as well as the output path to store the recorded data in. Sequence Recorder also now supports recording multiple takes for each of the selected actors.

Please see the Sequence Recorder page for more information.

New: Sequencer Track Usability Improvements 

Several updates have been made to improve the usability of Tracks within Sequencer. Tracks, Actors, and Folders can now be reordered; Event Track names are displayed next to the event keyframe; sections can be resized to their source duration; individual transform channels can be masked; Pose Assets can be created from the blended pose; and more.

Please see the new Working with Tracks in Sequencer page for more information.

New: Translucency Support for Instanced Stereo Rendering

We’ve taken the improvements to the Instanced Stereo Rendering (ISR) path that we made for Robo Recall, and improved them to work across more features in the engine. Unreal Engine 4.20 adds support for performing the translucency rendering pass using Instanced Stereo Rendering, which can significantly reduce CPU cost on translucency-heavy scenes. No content changes are needed; any project with Instanced Stereo enabled in the project settings will automatically get the benefits of Instanced Stereo Rendering.

New: Magic Leap One™ Beta Support

At GDC, we announced Beta support for Magic Leap One™: Creator Edition, a software toolkit for early development of experiences for Magic Leap's personal spatial computing platform, as part of a larger partnership between the two companies. As of Unreal Engine 4.20, you can develop for the Magic Leap One™ using the fully supported release of Unreal Engine. 

Unreal Engine 4 support for Magic Leap One uses our built-in frameworks for things like camera control, world meshing, motion controllers, and forward and deferred rendering. We’ve also added more robust support for features like eye tracking and gestures.

Developers can download the Magic Leap software development kit and simulator at developer.magicleap.com.  For those developers with access to hardware, Unreal Engine 4.20 can deploy and run on the device in addition to supporting Zero Iteration workflows through Play In Editor. 

New: Apple ARKit 2.0 Support

We’ve added support for Apple’s ARKit 2.0, which includes better tracking quality, support for vertical plane detection, face tracking, 2D image detection, 3D object detection, persistent AR experiences and shared AR experiences. Support for these new features enables you to place AR objects on more surfaces, track the position and orientation of a face, recognize and bring 2D images to life, detect 3D objects, and facilitate new types of collaborative AR experiences.

New: Google ARCore 1.2 Support 

We’ve added support for Google’s ARCore 1.2, which includes support for vertical plane detection, Augmented Images, and Cloud Anchors. Support for these new features enables you to place AR objects on more surfaces, recognize and bring images to life, and facilitate new types of collaborative AR experiences.

New: Platform SDK Upgrades

In every release, we update the Engine to support the latest SDK releases from platform partners. 

 
  • IDE Version the Build farm compiles against
    • Visual Studio:  Visual Studio 2017 v15.6.3 toolchain (14.13.26128) and Windows 10 SDK (10.0.16299.0)
      • Minimum supported versions
        • Visual Studio 2017 v15.6
        • Visual Studio 2015 Update 3
    • Xcode:  Xcode 9.4
  • Android:  
    • NDK 12b (New CodeWorks for Android 1r6u1 installer will replace previous CodeWorks for Android 1R5 before release, still on NDK 12b)
  • HTML5: Emscripten 1.37.19
  • Linux: v11_clang-5.0.0-centos7
  • Lumin: 0.12.0
  • Steam: 1.39
  • SteamVR: 1.39
  • Oculus Runtime: 1.25
  • Switch:
    • SDK 4.5.0 + optional NEX 4.2.1 (Firmware 4.1.0-1.0)
    • SDK 5.3.0 + optional NEX 4.4.2 (Firmware 5.0.0-4.0)
    • Supported IDE: VS 2015 / 2017
  • PS4:
    • 5.508.031
    • Firmware Version 5.530.011
    • Supported IDE: Visual Studio 2015, Visual Studio 2017
  • Xbox One (XB1, XB1-S, XB1-X):
    • XDK: April 2018
    • Firmware Version: April 2018 (version 10.0.17133.2020)
    • Supported IDE: Visual Studio 2017
  • macOS: SDK 10.13
  • iOS: SDK 11
  • tvOS: SDK 11
To view the full list of release notes, visit our forum or docs pages.