September 4, 2019

Unreal Engine 4.23 released!

By Jeff Wilson

What's New

Thanks to our next-gen virtual production tools and enhanced real-time ray tracing, film and TV production is transformed. Now you can achieve final shots live on set, with LED walls powered by nDisplay that not only place real-world actors and props within UE4 environments, but also light and cast reflections onto them (Beta). We've also added VR scouting tools (Beta), enhanced Live Link real-time data streaming, and the ability to remotely control UE4 from an iPad or other device (Beta). Ray tracing has received numerous enhancements to improve stability and performance, and to support additional material and geometry types—including landscape geometry, instanced static meshes, procedural meshes, and Niagara sprite particles.

Unreal Engine lets you build realistic worlds without bounds. Fracture, shatter, and demolish massive-scale scenes at cinematic quality with unprecedented levels of artistic control using the new Chaos physics and destruction system. Paint stunning vistas for users to experience using runtime Virtual Texturing, non-destructive Landscape editing, and interactive Actor placement using the Foliage tool.

We have optimized systems, provided new tools, and added features to help you do more for less. Virtual Texturing reduces texture memory overhead for light maps and detailed artist-created textures, and improves rendering performance for procedural or layered materials. Animation streaming enables more animations to be used by limiting the runtime memory impact to only those currently in use. Use Unreal Insights to collect, analyze, and visualize data on UE4 behavior for profiling, helping you understand engine performance from either live or pre-recorded sessions.

This release includes 192 improvements submitted by the incredible community of Unreal Engine developers on GitHub! Thanks to each of these contributors to Unreal Engine 4.23:

Doug Richardson "drichardson", Morva Kristóf "KristofMorva", Reece Dunham "RDIL", "projectgheist", Jorgen P. Tjerno "jorgenpt", Ondrej Hrusovsky "Skylonxe", Miguel Fernandez "muit", Geordie Hall "geordiemhall", Artem Umerov "umerov1999", Marat Radchenko "slonopotamus", "AgentOttsel", Eric Spevacek "Mouthlessbobcat", Danny de Bruijne "danskidb", Sertaç Ogan "SertacOgan", Trond Abusdal "trond", Joe Best-Rotheray "cajoebestrotheray", Nick Edwards "NEdwards-SumoDigital", Marcel "Zaratusa", Mark Whitty "Mosel3y", "YuchenMei", Branislav Grujic "grujicbr", "Rei-halycon", Michael Hills "MichaelHills", Nick Pearson "Nick-Pearson", "mastercoms", Zhi Kang Shao "ZKShao", Nick "eezstreet", "temporalflux", Vladislav Dmitrievich Turbanov "vladipus", Daniel Marshall "SuperWig", Brian Marshall "TurtleSimos", Sergey Vikhirev "Bormor", Robert Rouhani "Robmaister", Maxime Griot "yamashi", Igor Karatayev "yatagarasu25", "Zeblote", Hesham Wahba "druidsbane", MoRunChang "MoRunChang2015", Sébastien Rombauts "SRombauts", JinWook Kim "zelon", Riley Labrecque "rlabrecque", Дмитрий "Yakim3396", "DanMilwardSumo", Wesley Barlow "Wesxdz", Franco Pulido "Franco Pulido", Kimmo Hernborg "KimmoHernborg", John Dunbar "Volbard", Michał Siejak "Nadrin", kalle Hämäläinen "kallehamalainen", "KaosSpectrum", Evan Hart "ehartNV", Skyler Clark "sclark39", Thomas Miller "tmiv", Stephen A. Imhoff "Clockwork-Muse", David Payne "davidpayne-cv", "CyberKatana", "roidanton", Milan Šťastný "aknarts", Alex P-B "chozabu", Marco Antonio Alvarez "surakin", "Taikatou", Doğa Can Yanıkoğlu "dyanikoglu", "Kalmalyzer", "phi16", Mikhail Zakharov "zz77", Paul Hampson "TBBle", "NextTurn", "Punlord", Robert Pröpper "rproepp", Yohann Martel "ymartel06", Francis J. Sun "francisjsun", Eric Wasylishen "ericwa", Phillip Baxter "PhilBax", Alan Liu "PicaroonX", Mathias Hübscher "user37337", Daisuke Ban "exceed-alae", Brandon Wilson "Brandon-Wilson", Marcin Gorzel "mgorzel", "prolenorm"

Major Features

New: Chaos - Destruction (Beta)

Revealed in a demo at GDC 2019, Chaos is Unreal Engine's new high-performance physics and destruction system available to preview in Beta form with the 4.23 release. With Chaos, users can achieve cinematic-quality visuals in real-time in scenes with massive-scale levels of destruction and unprecedented artist control over content creation.

Chaos functionality in Unreal Engine 4.23 must be enabled and compiled using a source build. See this guide for instructions on enabling Chaos.

For more information on Chaos Destruction, refer to the Chaos Destruction documentation pages. We have also added a Chaos Destruction Demo sample to the Learn Tab in the launcher to demonstrate how to set up various types of simulations and effects.

Geometry Collections

These are a new type of asset in Unreal for destructible objects. They can be built from one or more Static Meshes, including those gathered together in Blueprints or even nested Blueprints. Geometry Collections let the artist choose what to simulate and they also offer flexibility in terms of how you organize and author your destruction.

Left - One Wall Section - 31 Geometry Collections; Right - Exploded view of Static Mesh parts

Fracturing

Once you have a Geometry Collection, you can break it into pieces using the Fracturing tools. You can fracture each part individually, or apply one pattern across multiple pieces. In addition to standard Voronoi fractures, you can use Radial fractures, Clustered Voronoi fractures, and Planar Cutting using noise to get more natural results.

Left - Original Geometry Collection; Center - Fracture Across Entire Mesh; Right - Sub-fracturing Only Large Pieces

Clustering

With optimization in mind, Sub-fracturing allows you to control where to add complexity. Each time you sub-fracture, an extra Level is added to the Geometry Collection. The Chaos system keeps track of each subsequent Level and stores that information into something you can control called a Cluster. Below is an example of a mesh where each fracture Level is combined into its own set of Clusters.

Left - Level 1: 6 Objects; Center - Level 3: 50 Objects; Right - Level 5: 513 Objects

Connection Graph

This is a lightweight connectivity map that is a bit of a different paradigm for destruction simulation. In the image below, we have a few statically anchored pieces, but everything else in the scene is potentially dynamic.

Blue - Potential for Chaos; Yellow - Anchored Nodes

Rather than swapping from kinematic to dynamic, the Chaos system applies strain, which in turn breaks connections, and Chaos destruction ensues. This is a great way to maximize interactivity while retaining control over the number of active rigid bodies.

Fields

Fields are the way you directly interact with and control simulations. Fields can be used to control any attribute on any part of your Geometry Collection: if you want to vary the mass, make something static, make the corner more breakable than the middle, or apply a force, all of this can be controlled with Fields.

Cached Simulations

With caching, high-fidelity simulations can be pre-cached and played back in real time, resulting in a kinematic Geometry Collection. This means that you can author large-scale destruction events and still allow interactivity with the player and environment.

Niagara Integration

The Chaos system is a first class citizen of UE4, and as such, lives alongside all other systems that simulate your world including Niagara. Incorporating visual effects into your simulations can add a lot of depth and realism to the world. For example, when a building breaks apart it generates a large amount of dust and smoke. To create the marriage between destruction and VFX, data from the physics system can be sent to Niagara when an object collides or breaks apart, and that data can be used to generate interesting secondary effects.

New: Real-Time Ray Tracing Improvements (Beta)

Ray Tracing support has received many optimizations and stability improvements in addition to several important new features.

Performance and Stability

A large focus this release has been on improving stability, performance, and quality of Ray Tracing features in Unreal Engine. This means:

  • Expanded DirectX 12 Support for Unreal Engine as a whole
  • Improved Denoiser quality for Ray Traced Features
  • Increased Ray Traced Global Illumination (RTGI) quality

Additional Geometry and Material Support

We now support additional geometry and material types, such as:

  • Landscape Terrain (Measured Performance on a 2080Ti in KiteDemo: ~2ms geometry update time and ~500MB video memory)
  • Hierarchical Instanced Static Meshes (HISM) and Instanced Static Meshes (ISM)
  • Procedural Meshes
  • Transmission with SubSurface Materials
  • World Position Offset (WPO) support for Landscape and Skeletal Mesh geometries

Multi-Bounce Reflection Fallback

We've improved support for multi-bounce Ray Traced Reflections (RTR) by falling back to Reflection Captures in the scene. Intra-reflections (reflections inside of reflections) that would otherwise display black, either because the bounce limit was reached or because you've set a max reflection distance, now fall back to these raster techniques instead.

This can subtly improve the quality of reflections without using multiple raytraced bounces, greatly increasing performance.

1 - Single RTR Bounce; 2 - Two RTR Bounces; 3 - Single RTR Bounce with Reflection Capture Fallback for last bounce

New: Virtual Texturing (Beta)

With this release, Virtual Texturing beta support enables you to create and use large textures for a lower and more constant memory footprint at runtime.

Streaming Virtual Texturing

Streaming Virtual Texturing uses Virtual Texture assets to offer an alternative way to stream textures from disk compared to existing Mip-based streaming. Traditional Mip-based texture streaming performs offline analysis of Material UV usage, then at runtime decides which Mip levels of a texture to load based on object visibility and camera distance. For Virtual Textures, all Mip levels are split into tiles of a small fixed size; at runtime, the GPU determines which Virtual Texture tiles are accessed by all visible pixels on the screen, and only those tiles are loaded.

Streaming Virtual Texturing can reduce texture memory overhead and increase performance when using very large textures, including Lightmaps and UDIM textures. However, sampling from a Virtual Texture is more expensive than sampling a regular texture.

For full details, see the Streaming Virtual Texturing documentation.

Runtime Virtual Texturing

Runtime Virtual Texturing uses a Runtime Virtual Texture asset with a volume placed in the level. It works similarly to traditional texture mapping except that it's rendered on demand using the GPU at runtime. Runtime Virtual Textures can be used to cache shading data over large areas making them a good fit for Landscape shading.

For full details, see the Runtime Virtual Texturing documentation.

New: Unreal Insights (Beta)

Unreal Insights (currently in Beta) enables developers to collect and analyze data about Unreal Engine's behavior in a uniform fashion. This system has two main components:

  • The Trace System API gathers information from runtime systems in a consistent format and captures it for later processing. Multiple live sessions can contribute data at the same time.
  • The Unreal Insights Tool provides interactive visualization of data processed through the Analysis API, giving developers a unified interface for stats, logs, and metrics from their application.

You can connect to one or more live sessions, or select live or pre-recorded session data to view, in the Trace Sessions window (under the Unreal Insights tab).

Once you have selected the session data you want to examine, you can use the Timing Insights or Asset Loading Insights tabs to browse through it.
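
For example, game code can emit its own named timing events that show up in Timing Insights. Below is a minimal sketch; it assumes the Trace system's CPU profiler scope macro (TRACE_CPUPROFILER_EVENT_SCOPE) is available in your build and that tracing is enabled, and UpdateCrowd is a placeholder name:

    #include "ProfilingDebugging/CpuProfilerTrace.h"

    void UpdateCrowd()
    {
        // Emits a named scope that appears as a timing event on this
        // thread's track in Timing Insights.
        TRACE_CPUPROFILER_EVENT_SCOPE(UpdateCrowd);

        // ... expensive per-frame work measured by the scope above ...
    }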

New: HoloLens 2 Native Support (Beta)

Developers are now able to begin creating for the HoloLens 2. You'll have access to APIs for the platform's unique features, such as streaming and native deployment, finger tracking, gesture recognition, meshing, voice input, spatial anchor pinning, and more. Build an AR game or an enterprise application. And with robust emulator support, it doesn't matter if you have a device already or are still eagerly awaiting delivery - you can get started right away with UE4 and HoloLens 2 development.

For more information, see Microsoft HoloLens 2 Development.

New: Virtual Production Pipeline Improvements

Unreal Engine continues to lead the way with advancements in what is possible in a virtual production pipeline! Virtually scout environments and compose shots, use the virtual world to light the real world, connect live broadcast elements with digital representations to build a seamless experience, and control it all remotely using custom-built interfaces.

In-Camera VFX (Beta)

Using improvements for In-Camera VFX, you can achieve final shots live on set that combine real-world actors and props with Unreal Engine environment backgrounds, using an LED wall that can either display an Unreal Engine scene, or a digital greenscreen for real-time compositing in UE4.

Camera frustum-based rendering enables real-world actors and props to receive lighting and reflections from the CG environment, and in some cases eliminates post-production workflows, significantly accelerating overall production. Save time by quickly placing greenscreens digitally on an LED wall with a click of a button instead of physically setting them up on stage. The entire solution can scale to LED walls of virtually any size or configuration thanks to nDisplay multi-display technology.

VR Scouting for Filmmakers (Beta)

The new VR Scouting tools give filmmakers in virtual production environments new ways to navigate and interact with the virtual world in VR, helping them make better creative decisions.

Directors and DOPs can easily find locations, compose shots, set up scene blocking, and get an accurate representation of the filming location, while artists and set designers can experience the location in VR while building it, using measurement and interaction tools to check distances and modify the world. You can capture images from the virtual world, helping the whole production team track the decisions made during the VR session. Controllers and settings can be customized in Blueprints, without needing to go into C++ and rebuild the Engine.

For details, see Virtual Scouting.

Live Link Datatypes and UX Improvements

The Live Link Plugin now handles more kinds of information, and it is easier to apply the synchronized data to scene elements in Unreal! You can now drive character animation, cameras, lights, and basic 3D transforms dynamically from other applications and data sources in your production pipeline.

You can assign a role to each scene element, which determines the information that Live Link synchronizes for that element. The Live Link Plugin offers built-in roles for character animation, cameras, lights, and basic 3D transforms. You can also drive an Actor in your Unreal Engine Level more easily from any Live Link source by assigning the Actor a new Live Link controller Component.

Additional improvements:

  • Pre-process the data coming through Live Link before it gets applied to your scene (for example, to apply axis conversions) and control how incoming data is transformed when you map a source from one role to another. You can also create and assign your own custom pre-processor and translator classes.
  • Combine multiple sources into a Virtual Subject, which lets you drive a single scene element in Unreal based on information coming through multiple Live Link sources.
  • Save and load presets for Live Link setups that you need to return to often.
  • Status indicators show you at a glance what Live Link sources are currently sending data to the Unreal Engine.
  • We have added support for ART tracking through Live Link, enabling you to leverage ART technology for various tracking purposes in applications such as VR, augmented reality, and motion capture.

For details, see the Live Link Plugin documentation.

Remote Control over HTTP (Beta)

You can now send commands to Unreal Engine and Unreal Editor remotely over HTTP!

This makes it possible to create your own customized web user interfaces that trigger changes in your project's content. You can control Unreal Engine from any browser or custom app that supports modern web standards, and you can integrate your controls into other custom panels that you use to control other applications in your environment.

By calling different endpoints provided by the remote control interface, you can set and get properties on Actors and Assets and call any function that is exposed to Blueprints and Python.
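
As an illustration, the sketch below sends one such request from C++ using the engine's HTTP module. The endpoint path, default port (30010), and JSON body shape are assumptions drawn from the Web Remote Control docs, and the object path and function name are placeholders; any HTTP client or browser fetch would work equally well:

    #include "HttpModule.h"
    #include "Interfaces/IHttpRequest.h"

    void CallRemoteFunction()
    {
        // Build a PUT request against the remote control endpoint.
        TSharedRef<IHttpRequest> Request = FHttpModule::Get().CreateRequest();
        Request->SetURL(TEXT("http://localhost:30010/remote/object/call"));
        Request->SetVerb(TEXT("PUT"));
        Request->SetHeader(TEXT("Content-Type"), TEXT("application/json"));
        // Call a Blueprint-exposed function on an Actor by object path.
        Request->SetContentAsString(TEXT(
            "{\"objectPath\":\"/Game/Maps/Demo.Demo:PersistentLevel.MyActor\","
            "\"functionName\":\"SetActorScale3D\","
            "\"parameters\":{\"NewScale3D\":{\"X\":2,\"Y\":2,\"Z\":2}}}"));
        Request->ProcessRequest();
    }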

For details, see Web Remote Control.

New: nDisplay Warp and Blend for Curved Surfaces

You can now use nDisplay to project your Unreal Engine content onto a wider variety of physical installations, including curved and spherical screens, and scenarios that involve complex blending between overlapping projections.

nDisplay offers built-in integrations for two common ways of expressing how to project and blend 2D images from multiple projectors to warped and curved surfaces:

  • The Scalable Mesh File (.smf) format, developed by Scalable Display Technologies.
  • The Multiple Projection Common Data Interchange (MPCDI) standard, developed by VESA.

In addition, third-party developers can now customize the way nDisplay renders UE4 content by implementing an extended set of rendering interfaces.

For more information on nDisplay, see Rendering to Multiple Displays with nDisplay.

New: Niagara Improvements (Beta)

Integration into Chaos Physics

Niagara particle systems can now be generated by the physics simulation in Chaos! Whenever an object fractures, you can generate smoke and dust as well as more tiny fractured bits that enhance the physics simulation's visuals. There is now a Chaos destruction listener, which sends events to associated particle systems and provides information about Chaos interaction events such as break, collision, or trailing. Examples are available in the Chaos Destruction Content Example hallway.

GPU Simulation Improvements

Performance of GPU simulation has been significantly improved to reduce idle time by providing better data management and more explicit synchronization primitives between compute jobs. This enables overlapping significant amounts of GPU work, which greatly increases throughput and allows for more opportunities to run compute shaders in parallel with other computing work.

GPU Support for Static and Skeletal Mesh Sampling

GPU simulations can now sample the surface of a mesh, grab the UV coordinates, and sample from a texture, then use that capability to drive complex simulation logic. This narrows the gap between what is possible on the CPU and what is possible on the GPU, and enables effects like the fairies in the GDC Troll demo, which spawned 600,000 particles a second.

Ray Tracing Support for Niagara Sprite Emitters

Niagara simulations can now generate geometry that is used by reflections and shadows when ray tracing, such as the fairies in the GDC Troll demo, which contribute to the reflections in the water as they fly over.

Currently only sprite emitter particles are supported.

Emitter Inheritance

When you create an emitter in Niagara, you can now inherit from an existing emitter in your project. Artists can reuse content more easily, since you can create inheritance trees of functionality for emitters with similar purposes.

As an example, if you create an emitter for a basic muzzle flash used in weapon effects, you might then want to have a heavy and light variant of that muzzle flash. Through inheritance, you can create new emitters for light and heavy variations, but which both inherit the same base functionality and renderers as the original muzzle flash.

Compiling and Cooking Improvements

You can now flush stale data and avoid unnecessary recompilation of Niagara Assets during load and cook operations using two new console commands:

  • fx.PreventSystemRecompile flushes data for a single System and all Emitters it depends on.
  • fx.PreventAllSystemRecompiles finds every Niagara System in the project and flushes those Systems and the Emitters they depend on.

After flushing and resaving the affected Assets, your load and cook processes will be faster and free from potential failures that stem from stale data.

Improved Error Reporting

A new Niagara Message Log panel in the Script and Emitter/System editor displays messages, warnings and errors from the last compile and allows you to navigate to the nodes or pins in your Script Graphs that cause warnings or errors.

This is particularly useful for technical and VFX Artists who write new functionality and behaviors through Niagara Scripts, or who composite Niagara Scripts into Systems, as it improves your ability to rapidly iterate in the Script Editor. Moreover, these messages act as hyperlinks to the source of the error, enabling you to quickly find and resolve issues in Scripts or Systems.

Static Switches

Niagara now supports Static Switch nodes to reduce compile time and improve runtime performance by dropping instructions and parameters that don't affect the graph's final output. Static Switch nodes support several unique features, such as value propagation, metadata support, parameter culling, and compiler constant evaluation.

Increased Feature Parity with Cascade

This release increases the feature parity between Cascade and Niagara. Here are some features Cascade has that are now available in Niagara:

  • Sprite cutouts enable artists to reduce overdraw by creating a smaller bounding shape for the particle instead of rendering a full quad.
  • GPU sorting enables transparent objects to be sorted appropriately within the emitter.
  • AnimNotify events enable you to create particles in animation tracks, and manage their lifetime appropriately.
  • Standardized Activate/Deactivate makes it easier to drop Niagara effects into existing pipelines.
  • The ability to set Static Mesh and Skeletal Mesh targets in Blueprints likewise makes it easier to drop Niagara effects into existing pipelines.
  • Sampled Bone and Socket transforms enable effects that use the Skeletal Mesh to run on lower end target hardware.

New: Platform Extensions

We have added the concept of Platform Extensions, which are similar in nature to plugins: they encapsulate all code for an individual platform in a single location, decoupling it from the main engine code. This has changed some fundamental ways that the engine and its build system work, and may affect you, depending on how much you have modified the engine. See this guide for more details on Platform Extensions.

New: Skin Weight Profiles

The new Skin Weight Profile system enables you to override the original Skin Weights that are stored with a Skeletal Mesh and improve visual fidelity on certain platforms where dynamic character parts are disabled for performance reasons. You can import a profile from the Skeletal Mesh Editor; you will need to provide the FBX file containing the different Skin Weights to use, a name for the profile, and optionally an LOD index. You can also assign a Skin Weight Profile for a Skinned Mesh Component (or any child class) using the new Blueprint-exposed API.
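
For example, a profile might be toggled at runtime from C++ or Blueprint. The following is a minimal sketch; it assumes the Blueprint-exposed functions on USkinnedMeshComponent are named SetSkinWeightProfile and ClearSkinWeightProfile (check the 4.23 API reference for the exact names), and "LowEnd" is a placeholder profile name:

    #include "Components/SkinnedMeshComponent.h"

    void ApplyLowEndWeights(USkinnedMeshComponent* Mesh)
    {
        // Swap to the imported low-end Skin Weight Profile by name.
        if (!Mesh->SetSkinWeightProfile(TEXT("LowEnd")))
        {
            // Profile missing: make sure the original weights are active.
            Mesh->ClearSkinWeightProfile();
        }
    }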

For more information, please see the Skin Weight Profiles documentation.

New: Animation Streaming (Experimental)

Animations can now be streamed for improved memory usage and memory management, which is particularly useful when playing back animation data in long cinematics.

The Animation Compression system can now compress animations into chunks to make them stream-friendly. The new Animation Stream Manager manages the loading and holding of streamed animation data in memory, and a new UAnimStreamable asset represents an animation that can be streamed.

New: Dynamic Animation Graphs (Experimental)

Dynamic Anim Graphs provide dynamic switching of sub-sections of an Animation Graph via Layers, which are separate Graphs that can be plugged into the Anim Graph via Layer Nodes. This enables multi-user collaboration and potential memory savings for functionality that is no longer needed.

Layers are defined by an Animation Layer Interface asset, which is an Animation Blueprint that has restricted functionality and is analogous to a Blueprint Interface. It defines the number of Layers, their names, which group they belong to, and their inputs.

Additionally, Sub Instance nodes have gained the ability to have their running class dynamically switched using SetSubInstanceClassByTag in UAnimInstance. Sub Instance nodes can also now expose multiple input poses from sub-graphs.

Both the Animation Blueprint and Sub Animation Blueprint(s) have to implement an Animation Layer Interface to be able to create Layer graphs, and instantiate Layer Nodes.

New: Open Sound Control (Beta)

The Open Sound Control (OSC) plugin is a native implementation of the Open Sound Control standard, which provides a common framework for inter-application and inter-machine communication for real-time parameter control over UDP.

New: Wavetable Synthesis Plugin

We added a new monophonic Wavetable synthesizer that leverages the curve editor in Unreal Engine to author the time-domain wavetables, enabling a wide range of sound design capabilities that can be driven by gameplay parameters. The table index as well as all other parameters are controllable from C++ and Blueprint.

New: CSVToSVG Tool (Beta)

CSVToSVG is a new command-line tool that helps visualize performance data by building vector graphics image files (.SVG) from .CSV files. The tool handles data from any numerical stat, and supports smoothing, budget lines, and stacked graphs. Developers can use this to produce graphs of any data output by the performance tools included with Unreal Engine, or data from their own tools.

New: Sequencer Curve Editor Improvements (Beta)

The Sequencer Curve Editor has been significantly improved with several highly requested features and extensibility support! Artists and developers can now extend the Curve Editor by adding new tool modes, new toolbar buttons, and custom data filters (such as smoothing) without modifying engine code. In addition, the Curve Editor can now be docked separately from Sequencer, and includes a transform tool that supports scaling, as well as a key retiming tool.

To use the updated Curve Editor, just open Sequencer or a UMG (Unreal Motion Graphics) Animation and click the Curve Editor button. The new Transform and Retime tools are provided through a plugin that is enabled by default. You can now undock the Curve Editor window and dock it anywhere you want; its position is saved when you close Sequencer or the Editor and restored when you reopen Sequencer.

Extending the Curve Editor

The updated Curve Editor can be extended by plugins without modifying the engine code. There are three main ways to extend the curve editor:

  • Tool Modes - Tool modes are exclusive modes that intercept user input to the Editor first, letting you create a wide variety of tools. There are two example tools: FCurveEditorTransformTool and FCurveEditorRetimeTool. To create a new custom tool, implement ICurveEditorToolExtension and register your extension with the Curve Editor module (see FCurveEditorToolsModule::StartupModule() for an example).
  • Toolbar Buttons - Toolbar Buttons allow you to insert new buttons into the Curve Editor toolbar, and these buttons can manipulate the Curve Editor in any way you want. There is an example which implements additional options for focusing on data (instead of just the default Focus Selected). To create a new toolbar button, implement ICurveEditorExtension and register it with the Curve Editor module (see FCurveEditorToolsModule::StartupModule() for an example).
  • Data Filters - Data Filters are a way to create new ways to manipulate selected curves as a singular action with user-specified settings. We provide three data filters by default which operate on your current selection:
    • Bake (UCurveEditorBakeFilter)
    • Simplify (UCurveEditorReduceFilter)
    • Smoothing (UCurveEditorFFTFilter)

A new filter can be implemented by deriving a class from UCurveEditorFilterBase. No registration with the module is needed. New implementations will automatically appear inside the User Filter dialog. Any UPROPERTY marked as EditAnywhere will automatically appear, enabling you to manipulate them before applying the filter.
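
For instance, a new filter class might look like the sketch below. The base class name and the UPROPERTY behavior follow the description above; the header path is an assumption, and the actual filtering virtual to override should be taken from UCurveEditorFilterBase in the engine source:

    #include "Filters/CurveEditorFilterBase.h"
    #include "DampenFilter.generated.h"

    UCLASS()
    class UDampenFilter : public UCurveEditorFilterBase
    {
        GENERATED_BODY()

    public:
        // EditAnywhere properties automatically appear in the User Filter
        // dialog so users can adjust them before applying the filter.
        UPROPERTY(EditAnywhere, Category="Settings")
        float DampenAmount = 0.5f;

        // Override the filter entry point declared in UCurveEditorFilterBase
        // here to scale the selected keys by DampenAmount. No registration
        // with the module is needed.
    };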

View Modes

View your curves in new and exciting ways! Visible curves can be viewed in the tree view on the left side of the Curve Editor, and then you can choose how to view selected curves by clicking the View Mode button as shown below. There are three View Modes:

  • Absolute View shows all of your curves plotted in absolute space - this matches legacy behavior that everyone is used to.
  • Stacked View normalizes each curve and then draws them non-overlapping.
  • Normalized View normalizes each curve (like Stacked View), but draws them all on top of each other.

New Filter Menu

The new Filter menu applies different filters to your selection with a full Details panel and a wide range of settings. It persists after you click the Apply button, so you can easily iterate on your settings. You can add new options in this dialog by implementing a function on a new class which derives from UCurveEditorFilterBase.

Multiple Framing Options

Multiple different framing options are now available.

  • Frame Selection (or Frame All if there is no selection) is built-in and defaults to F.
  • Frame on Playback Cursor and Frame on Playback Range are new options. These are good examples of how you can extend the Toolbar using a plugin.
  • The old Frame Horizontal/Frame Vertical options have been replaced with Focus All followed by ALT + SHIFT + RMB to zoom each axis independently. You can also follow Focus All with ALT + RMB to scale proportionally.

Retiming Tool

The Retiming Tool creates a one-dimensional lattice and lets you adjust the timing of your keys. It supports grid snap, multi-select and more! This tool is provided by the default plugin.

Transform Tool

The Transform tool supports translating selected keys and scaling on both X and Y axes (where appropriate). Press ALT to change the anchor point from the opposite edge to the center.

New: Sequencer Usability Improvements

A number of workflow and usability improvements have been added including:

  • Filter and Search for Tracks - You can now filter and search for tracks and actors.
  • Stretch/Shrink Frames - You can now add or reduce the amount of time between sections and keys as long as there are no overlapping elements.
  • Adding and Editing Multiple Tracks - You can now group-add and edit multiple tracks by using Shift or Control.
  • Blend Audio Sections on Multiple Rows - You can now blend Audio sections which enables you to achieve crossfading effects. This functions similarly to blending Transform sections.
  • Restore State or Keep State for Skeletal Animation Poses - This option uses the When Finished property to either restore the Skeletal Mesh to its bind pose after the animation section has been evaluated, or, keep the animation pose of the last animation section.
  • UMG Multibinding - This feature existed prior to 4.23; however, you can now add multiple widget bindings to an existing binding. This lets you animate one widget, which can then be reused by other widgets.

New: Data Validation Extensibility

The Data Validation plugin has been extended to support C++, Blueprint, and Python-based rules for asset validation. This enables tech artists who are working in Blueprints or Python to create asset validation scripts, instead of requesting C++ implementations. For developers, default engine classes are now able to go through a validation path.

New: DebugCameraController Improvements

In this release, we have added new features to the DebugCameraController:

  • Orbit functionality allows the user to orbit around a selected position or the center of a selected Actor so you can review assets more thoroughly.
  • A buffer visualization overview, with an option to select buffers for fullscreen view, makes it possible to examine the contents of the graphics card buffers.
  • View mode cycling gives you the ability to examine the different types of scene data being processed.

The new features augment in-game debugging capabilities when you are using the Debug Camera Controller in Play-In-Editor (PIE). The ability to examine view modes and graphics buffers helps you diagnose unexpected scene results in-game.

To open the debug camera controller in PIE, enter ToggleDebugCamera on the console command line, or use the new semicolon hotkey.

New: Multi-User Editing Improvements (Beta)

Multi-User Editing is significantly improved with the goal of making it a better fit for real-world production use cases.

  • We've streamlined the user interface to remove unnecessary dialogs, and to bring all the information and controls you need to manage your sessions into one place.

(The Multi-User Editing icon in the Toolbar is now hidden by default. You can open the new Multi-User Editing Browser from the Window > Developer Tools menu, or re-enable the Toolbar icon in the Project Settings dialog. See the Multi-User Editing Reference.)

  • Reliability of the transaction system across all session participants is greatly improved.
  • We have minimized the possibility of accidentally losing session data through user error by prompting the owner of a session to persist the changes made in the session before they leave. The server now also automatically archives all live sessions it's currently running when it shuts down, and you can recover the data from that archive later, if needed.
  • We've improved the Asset locking system to make it easier for multiple users to work on the same Assets. You'll now receive a notification when you try to modify an Asset that is already locked by another user in your session. In addition, if that other user releases the lock, you'll automatically get a lock on that Asset until the next time you save it.

For more information on using the Multi-User Editing system, see the documentation.

New: Disaster Recovery (Experimental)

The new opt-in Disaster Recovery system augments the Editor's existing Auto-Save system to increase your confidence that you'll be able to recover changes you make in your project content, even in the event of an unexpected shutdown.

As you work, it records the edits you make, using the same transaction system that underlies the Multi-User Editing system. If the Editor shuts down with unsaved changes, then the next time you open your project you'll be shown a list of all the transactions you made since the last save. You can restore all the listed changes, or all changes up to any recovery point you select.

New: Drag and Drop to Fill Array

You can now select multiple Assets from the Content Browser and drag and drop them onto an array header to fill the array. Select any Assets of the same type as the array, and drag them from the Content Browser into the array header. This works for any array that stores Assets, and simplifies the workflow when working with large arrays.

See the Array Control page for more information.

New: EditConditions Metadata Improvements (Beta)

You can now use simple boolean expressions to enable or disable property editing in the Details panel using the new expression parser for the EditCondition metadata of the UPROPERTY system. Support for enums and numeric types is also included, so programmers can write relatively complicated expressions with minimal overhead.

Programmers writing classes that are intended to be edited by users, such as Actors or Components, where certain properties are only conditionally valid, can quickly create more intuitive UIs without needing to write a fully detailed customization class.

The EditCondition meta tag now accepts nearly any valid C++ expression. For example, here is a valid property definition after this change:

UPROPERTY(EditAnywhere, Category=EditCondition, meta=( EditCondition="IntegerEditCondition >= 5" ))

Additional examples of valid expressions:

  • MyInteger > 5 && MyFloat <= 10
  • MyEnum == EnumType::B
  • MyBool == true || MyInteger == MyFloat + 5
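
Putting these together, here is a minimal sketch of a class using the new expressions (the class, enum, and property names are illustrative only):

    #include "GameFramework/Actor.h"
    #include "MyConditionalActor.generated.h"

    UENUM()
    enum class EMyEnum : uint8 { A, B };

    UCLASS()
    class AMyConditionalActor : public AActor
    {
        GENERATED_BODY()

    public:
        UPROPERTY(EditAnywhere, Category=EditCondition)
        int32 MyInteger = 0;

        UPROPERTY(EditAnywhere, Category=EditCondition)
        float MyFloat = 0.f;

        UPROPERTY(EditAnywhere, Category=EditCondition)
        EMyEnum MyEnum = EMyEnum::A;

        // Editable only while MyInteger > 5 and MyFloat <= 10.
        UPROPERTY(EditAnywhere, Category=EditCondition,
                  meta=(EditCondition="MyInteger > 5 && MyFloat <= 10"))
        FString NumericGatedValue;

        // Editable only while MyEnum is set to B.
        UPROPERTY(EditAnywhere, Category=EditCondition,
                  meta=(EditCondition="MyEnum == EMyEnum::B"))
        FString EnumGatedValue;
    };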

New: Editor Utility Blueprints Updates (Beta)

Editor Utility Blueprints have been updated to improve the Editor's extensibility with Blueprints, bringing Blueprints extensibility more in line with Python and C++ extensibility. Editor Utility Widgets are now available to enable deeper customization of UIs than the previous auto-generated buttons for Editor Utility Blueprints.

Improvements include:

  • Parent classes can be any editor-only class that isn't a widget.
  • Can create instances of Editor Utilities that need instancing for function calls.
  • The ability to add a Run function to any Editor Utility Blueprint or Editor Utility Widget which will run scripts on Editor start. This will create an instance and allow for binding to stateful editor events.

New: Hide Unrelated Nodes

Users can now select a node and use the Hide Unrelated feature to dim all other nodes not linked to the selected node, so Materials and Blueprints can be debugged and understood in a much cleaner and more direct way.

New: Landscape Splines Improvements

You can now use ALT + Left Mouse Button (LMB) Drag to create a new spline control point and segment. This is in addition to the existing CTRL + Left-Click action to add new splines. The advantage of ALT + LMB Drag is that as you drag the cursor, you can see what the new spline will look like. You can also split splines with this action.

To add a new control point, select an existing control point and ALT + LMB Drag in the direction you want to place the new point. To split an existing spline, select a spline point on either side of the segment and ALT + LMB Drag the cursor towards an existing segment to split the path.

New: Editor Performance Improvements

If you regularly work with very large scenes, you should notice some significant performance improvements in the Unreal Editor aimed at making your work faster and smoother. It's much faster to select and de-select hundreds or thousands of separate Actors at a time, to show and hide Layers that contain thousands of Actors, to work with a selected Actor that is a parent of thousands of other Actors, to load Levels that contain thousands of Actors, and more.

New: Material Editor Updates

Material Editor and Material Instance Editor workflows are improved, and we added increased scriptability for Materials and Material Instances.

  • There is now a Hierarchy button in the Material Editor toolbar that displays a menu showing all immediate children and enables quick access to edit them.
  • The Hierarchy menu in the Material Instance Editor now shows all immediate children in addition to the parent chain of Materials and Material Instances.
  • The following nodes were added to the material scripting library, for use in Editor Utility Widgets, Editor Utility Blueprints, Python, and C++ (see the sketch after this list).
    • For material instances, GetStaticSwitchParameterValue.
    • For materials, GetMaterialDefaultScalarParameterValue, GetMaterialDefaultVectorParameterValue, GetMaterialDefaultTextureParameterValue, GetMaterialDefaultStaticSwitchParameterValue, HasMaterialUsage (for checking whether or not a material has a given usage flag), GetChildInstances, and the Get___ParameterNames and Get___ParameterSource families (one each for Scalar, Vector, Texture, and Static Switch parameters; the former returns an array of parameter names, the latter finds the source asset, whether function or material, where the parameter is defined).
  • When you mouse over a parameter name in a Material Instance, you will see the name of the Asset where that parameter was defined. This makes it easier to work with Materials that have many levels of nested functions.
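
For instance, reading a default scalar parameter from an editor script might look like the sketch below. It assumes these functions live on UMaterialEditingLibrary in an editor-only module, and "Roughness" is a placeholder parameter name:

    #include "Materials/Material.h"
    #include "MaterialEditingLibrary.h"

    // Editor-only: read the default value of a scalar parameter from a
    // Material asset.
    float ReadDefaultRoughness(UMaterial* Material)
    {
        return UMaterialEditingLibrary::GetMaterialDefaultScalarParameterValue(
            Material, TEXT("Roughness"));
    }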

New: UMG Accessibility Screen Reader Support (Experimental)

UE4 now supports third party screen readers for Windows or VoiceOver on iOS, which enables you to make sure your game UI is accessible and helps you comply with CVAA standards. Screen readers, such as NVDA and JAWS, allow a software application's UI to be narrated to the user. This is a critical feature that enables those who are visually impaired to use and navigate software applications.

As of 4.23, there are now APIs included in UE4 to allow the use of third-party screen readers to read UI text. This supports a number of common UMG widgets, such as Text Block, Editable Text Box, Slider, Button, and Checkbox. This built-in functionality removes the need to implement custom text-to-speech technology, making screen readers easier to support.

To enable screen reader support, you need to go into either your project or Engine Console Variable configuration file. Once in the file, add the variable Accessibility.Enable=1.
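
For example, a minimal entry in your ConsoleVariables.ini might look like the following (the [Startup] section is the usual place for startup console variables; confirm the exact file and section for your project):

    [Startup]
    ; Enables the experimental screen reader support.
    Accessibility.Enable=1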

For more information, see Supporting Screen Readers.

New: Wacom Tablet Support (Experimental)

Programmers working on features like a painting plugin or modeling tools can now take advantage of stylus inputs, such as pen pressure and tilt, using a new plugin that provides access to the additional inputs that Wacom-style tablet and stylus systems provide.

Not all tablets support all possible values that the API supports - the subsystem has intentionally been written to expose a superset of all supported values. It is up to the user of the subsystem to determine what values they are interested in.

This is an experimental feature, and it is not supported by any Unreal Engine tools as of yet.

New: UMG Widget Diffing

We have expanded and improved Blueprint Diffing to support Widget Blueprints as well as Actor and Animation Blueprints! The new tools also show changes made to the structure of the Blueprint, adding property and function flags, class settings, parent class, and added Components, in addition to default property values (which now include the default properties of Widgets and Widget Slots) and changes to Blueprint Graphs.

If you have written a custom Blueprint subclass that you would like to diff, you can override the FindDiffs function, which enables you to list specific changes you want to show, and to request subobjects for diffing.

New: Non-Destructive Landscape Editing (Experimental)

Landscape Heightmaps and paint layers can now be edited with non-destructive layers. You can add multiple layers to your landscape that can be edited independently from each other. These new layers function as a foundation for sculpting and painting a landscape, which allows you to manipulate and maintain landscapes more efficiently.

As you add layers, you can lock layers you don't want to change and focus on editing one layer at a time. You can also hide layers to help focus on a specific layer, or see what the landscape looks like without a certain layer. Lastly, by enabling Layer Contribution, the layer highlights in the viewport and you can see all of the sculpting and painting within that layer, even as you add to the layer. There are other new options, such as ordering layers and adjusting the layer alpha blending.

Landscape splines terrain deformation and painting can now be a non-destructive process by reserving a layer for splines. This means that you can now edit, change, and move splines non-destructively, and the landscape updates accordingly, even for long roads or paths that cut through your landscape.

For more information, see Non-Destructive Landscape Layers and Splines.

New: Place Interactive Actors with Foliage Tool

The Foliage Tool now supports populating a scene with interactive Actors in addition to Static Meshes. Actors placed by the Foliage Tool will behave the same way that Static Meshes do, automatically scattering when you sculpt the terrain or move the Static Meshes that you painted them on.

For example, in this video, the scattered trees are Blueprint Actors that contain interaction logic along with their Static Mesh Components. And when the terrain is later modified, these Tree Actors are automatically updated to match the new height of the terrain.

Foliage Actors are not rendered as Instanced Meshes because they are treated by the renderer as if they were individually placed by hand into the Level.

For additional information, read more about Foliage Mode.

New: HDRI Backdrop Actor

The new HDRI Backdrop Actor makes it faster and easier to create a realistic background and lighting environment for your Level from a single HDRI image. Place the Actor into your Level and assign it the HDRI texture you want. You get your image projected onto a backdrop, a Sky Light automatically set up to provide ambient light drawn from the image, accurate reflections, and a floor surface that captures shadows cast from your Level content.

For details on the workflow, and on all the settings that you can use to control the projection, see HDRI Backdrop.

New: Dual Height Fog for Exponential Height Fog

We improved control over the fog with additional parameters for Fog Density, Height Falloff, and Height Offset of an additional fog layer when using an Exponential Height Fog Volume.

1 - No Exponential Height Fog; 2 - Exponential Height Fog; 3 - Dual Exponential Height Fog

New: Dynamic Shadow Bias Improvement

We improved shadow biasing for movable lights in this release by adding some new parameters that can be set per-light or globally per-light type using a console variable. In addition to the constant Shadow Bias parameter, we've added the Slope Bias parameter to help resolve some (but not all) issues with shadow artifacts and acne. For Directional Lights, we added an extra depth bias parameter with Shadow Cascade Bias Distribution to control the bias strength across cascades.

1 - Before Slope Bias and Shadow Cascade Bias Distribution; 2 - After Adjustments to Slope Bias and Shadow Cascade Bias Distribution

There are two parts to consider: the Slope Bias is used during shadow map rendering, and the Receiver Bias during shadow map fetching. The Slope Bias is controllable per light and is proportional to the Shadow (Constant) Bias. With these two parameters (the Constant and Slope Bias), there is a trade-off between shadow quality and accuracy that will resolve some artifacts.

To control the Receiver Bias, use the console variables under r.Shadow.* to set a value between 0 (better accuracy with more artifacts) and 1 (less accuracy with fewer artifacts), which sets the Receiver Bias globally per light type. You can also set any light type's constant and slope bias globally using console variables.

New: Pre-Skinned Local Bounds Material Expression

We've added a new Material Expression to return the Pre-Skinned Local Bounds of Skeletal Meshes. This expression enables you to calculate a relative position inside the bounds of an Actor through the Material. Since these bounds don't change - even when animated - it enables you to apply a material in a consistent or fixed manner, such as a pattern like a decorative wrap applied to a weapon or vehicle. In the Animation Editors, you can visualize the Pre-Skinned Bounds of any mesh from the Character drop-down under the Mesh category.

New: Storing Custom Per-Primitive Data

We now support storing data on a per-primitive level instead of per-Material Instance, enabling primitives to be automatically considered for dynamic instance drawing.

Storing data per-primitive with Use Custom Primitive Data has the advantage of lowering the number of draw calls required for similar geometry, even if each primitive has its own custom data.

New: From Material Expression Shading Model

We've added support for multiple shading models to be used in a single Material using the new From Material Expression Shading Model!

Previous workflows using the Material Output and Material Attributes node are supported, and common workflows using If-Statements, BlendMaterialAttributes, Static Switches, and Material Instances are also fully supported.

This can be an opportunity to optimize: if a single asset uses two separate Materials, that means two draw calls for each instance of that asset in the level. Using a single Material with two shading models can reduce this to a single draw call.

For additional details, see the From Material Expression documentation.

New: IES Profile Improvements

With this release, we've made some improvements to working with IES Profiles in the Editor and with different light types:

  • Selected Point and Spot lights with an assigned IES Texture now provide a 3D visualization of photometric data.
  • There is better support for Type C IES files, improving axial symmetry display without artifacts.
  • IES Texture icons in the Content Browser now give a preview of the photometric data.

For additional details, see IES Light Profiles.

New: Render Dependency Graph

The Render Dependency Graph (RDG), or simply "Render Graph", is designed to take advantage of modern graphics APIs to improve performance through the use of automatic asynchronous compute scheduling, as well as more efficient memory and barrier management.

Unreal Engine's renderer is being actively ported to RDG, which is now the primary API for authoring render passes for new features going forward. The implementation is still in early development and currently lacks many of the performance benefits, but the API is stable enough to use in production.

For additional information, see Render Dependency Graph.

New: Composure Improvements

In this release we have added some additional options for Composure:

  • AlphaHoldOut Blend Mode - This new Material Blend Mode enables objects to hold out the alpha in the Material, punching a hole through objects behind it.
  • Color Grading - Composure now supports color grading and white balance using floating point post processing lookup tables (LUTs).
  • Composure Layer inherits Post Process Parameters from Scene Camera - Composure Layers can now inherit Post Processing parameters from the scene camera, enabling color correction controls to be optionally enabled separately for each CG Layer.

The Composure sample available from the Learn Tab in the launcher has also been updated to reflect the latest workflows for compositing in Unreal Engine!

New: Python Import/Export FBX

Python scripting now supports importing and exporting FBX animations.

New: Pro Video Codecs

The Unreal Engine now supports additional video codecs, making it easier to integrate Unreal into professional video production pipelines and workflows.

You can now export Pro Media files to Apple ProRes Encoder:

  • All formats of the codec are supported: 4444 XQ, 4444, 422 HQ, 422, 422 LT, 422 Proxy
  • Multiple frame rates and resolutions.
  • Embedded timecode track is supported.
  • No embedded audio. Audio is mixed down and exported to a separate .wav file.
  • Supported on Windows platforms only.

For details, see the new how-to guide, Exporting Pro Media Files to Apple ProRes.

The Media Framework can now play back files encoded with HAP Codecs.

  • All formats of the codec are supported: HAP, HAP Alpha, HAP Q, HAP Q Alpha
  • Supports playback of one 4K 60 FPS movie or two 4K 30 FPS movies, which can be stretched to two 4K 60 FPS movies.
  • Full support for alpha channels.
  • Multiple frame rates and resolutions.
  • No embedded audio or timecode support.
  • 8K and 16K are not supported at this time.

For details, see the HAP Codec Playback Support section in Media Framework Technical Reference.

New: Stereo Panoramic Capture Tool Improvements (Experimental)

With the updates to the Stereo Panoramic Capture tool, it's much easier to capture high-quality stereoscopic stills and videos of the virtual world in industry-standard formats, and to view those captures in an Oculus or GearVR headset. You have expanded control over render settings, bit depth, and quality; you can also choose the graphics buffers you want to capture, making it possible to post-process and composite the images in other applications.

New: Platform SDK Upgrades

In every release, we update the Engine to support the latest SDK releases from platform partners.

  • IDE Version the Build farm compiles against
    • Visual Studio - Visual Studio 2017 v15.9.11 toolchain (14.16.27023) and Windows 10 SDK (10.0.16299.0)
      • Minimum Supported versions
        • Visual Studio 2017 v15.6
      • Requires .NET 4.6.2 Targeting Pack
    • Xcode - Xcode 10.3
  • Android
    • Android NDK r14b (New CodeWorks for Android 1r7u1 installer will replace previous CodeWorks on Windows and Mac; Linux will use 1r6u1 plus modifications)
  • ARCore
    • 1.7
  • HTML5
    • Emscripten 1.38.31
  • Linux "SDK" (cross-toolchain)
  • Oculus Runtime
    • 1.37
  • OpenXR
    • 1.0
  • Google Stadia
    • 1.34
  • Lumin
    • 0.19.0
  • Steam
    • 1.42
  • SteamVR
    • 1.5.17
  • Switch
    • SDK 8.3.0 + optional NEX 4.6.3 (Firmware 7.x.x-x.x)
    • Supported IDE: Visual Studio 2017, Visual Studio 2015
  • PS4
    • 6.508.001
    • Firmware Version 6.510.011
    • Supported IDE: Visual Studio 2017, Visual Studio 2015
  • XBoxOne
    • XDK: July 2019 QFE-9
    • Firmware Version: May 2019 10.0.18362.3055
    • Supported IDE: Visual Studio 2017
  • MacOS
    • SDK 10.14
  • iOS
    • SDK 12
  • tvOS
    • SDK 12

Upgrade Notes

Editor

Matinee

  • With the release of Unreal Engine 4.23, Matinee is no longer supported and will be removed from the engine in an upcoming release. Once removed, you will no longer be able to access or open Matinee files. Please use the Matinee to Sequencer Conversion Tool to convert any Matinee sequences to Sequencer Sequences as soon as possible.

VR Editor

  • The VR Mesh Editor presented at GDC 2017 is no longer supported. The Mesh Editor plugin will be removed from the engine in an upcoming release.

Platforms

HTML5

  • HTML5 platform support will be migrated to GitHub as a community-supported Platform Extension and will no longer be officially supported by Epic in upcoming releases.

iOS

  • Support for iOS 10 has been removed. iOS 11 is now the minimum supported version.
  • OpenGL on iOS will be removed in an upcoming release, potentially as early as 4.24. Once removed, Metal will be the only rendering path for iOS devices going forward.