Unreal Engine 4.11 Released!
March 31, 2016


By Alexander Paschall

This release brings hundreds of updates for Unreal Engine 4, including 92 improvements submitted by the community of Unreal Engine developers on GitHub! Thanks to all of these contributors to Unreal Engine 4.11:

Anton Olkhovik (Sektor), Alessandro Osima (AlessandroOsima), Alex Widener, Alexandru Pană (alexpana), Andreas Axelsson (judgeaxl), Andreas Schultes (andreasschultes), Artem (umerov1999), Artem V. Navrotskiy (bozaro), Ben Reeves (BGR360), bensch228, Black Phoenix (PhoenixBlack), Brandon Wamboldt (brandonwamboldt), Cameron Angus (kamrann), Christoph Becher (chbecher) ,Clay Chai (chaiyuntian), Dan Ogles (dogles), David Baack (davidbaack), Don Williamson (dwilliamson), Eli Tayrien (ETayrienHBO), Eren Pinaz (erenpinaz), G4m4, Hannah Gamiel (hgamiel), Hevedy (Hevedy), Hyeon-Cheol Cho (crocuis), Igor Karatayev (yatagarsu25), Jason (Abatron), Jefferson Pinheiro (Ixiguis), kallehamalainen, Kiqras, Konstantin Nosov (gildor2), Leon Rosengarten (lion03), Manny (Manny-MADE), Marat Radchenko (slonopotamus), Markus Breyer (pluranium), marynate, Matthias Huerbe (MatzeOGH), Maxim Pestun (maxpestun), Moritz Wundke (moritz-wundke), Mustafa TOP (MSTF), Nikos Tsatsalmas (ntk4), Pablo Zurita (pzurita), Pavel Dudrenov (dudrenov), Peter Oost (Sirrah), Piotr Bąk (Pierdek), projectgheist, Rama (EverNewJoy), Rene Rivera (grafikrobot), Rob Ray (robdeadtech), Robert Khalikov (nbjk667), sackyhack, sankasan, Sébastien Rombauts (SRombauts), Simon Taylor (simontaylor81), Skylonxe, Spencer Brown (JedTheKrampus), Tam Armstrong (tamarmstrong), Thomas Mayer (tommybear), Thomas McVay (ThomasMcVay), unktomi, Verdoso, ZehM4tt.

What's New

This release is packed with many new features and numerous performance optimizations. Performance has been a big focus for us as we prepare to ship our next game, Paragon. Several new rendering and animation features focused on rendering realistic characters are in 4.11, as are new audio features and tools improvements. UE4 continues to push VR forward with improvements to VR rendering and support for the latest SDKs so that you can ship your games as VR hardware becomes available to consumers.

Major Features

Performance and Multithreading

A major effort for us over the last several months has been optimizing UE4 in order to get our next game, Paragon, running at 60fps on PC and PlayStation 4.

Paragon offered a unique set of challenges to the team. In Paragon, we need to support animating 10 heroes and 120+ minions at a time, along with tons of FX, all while rendering a beautiful, detailed map with long sightlines at 60fps. Paragon pushes the engine to its limits, especially when it comes to animation and rendering.


Thousands of small optimizations throughout the engine have resulted in a big increase in performance for Paragon and should benefit all games built with UE4. Many of those optimizations are in 4.11, and more will be coming in future releases. Here are some of the larger optimizations we've been working on.

Parallelization. Multicore scaling is crucial to achieving high frame rates on modern PCs and consoles, so we have improved our threading architecture in several ways. We've reduced the cost of creating tasks, added support for high priority tasks, and removed many synchronization points.

Rendering performance. The renderer now does a better job balancing the size of its worker tasks and the command buffers generated for the GPU in order to achieve maximum parallelism without adding overhead on the GPU. We've also worked to remove synchronization points in the renderer so that we can better utilize all available cores.

Cloth simulation is now dramatically faster and makes better use of multi-threading. We now call the APEX solver directly for each asset from a worker thread, which allows for much better scheduling and eliminates many sync points and overhead. Clothing is now updated immediately after animation when blending is not needed; otherwise it updates after the skeletal mesh component has updated.

Faster garbage collection. We now support garbage collection "clusters", which allows the engine to treat groups of objects as a single unit, drastically reducing the number of objects that need to be considered. Currently, only subobjects for Materials and Particle Systems are clustered. Additionally, the mark and destroy phases are more cache-coherent, resulting in a 9x reduction in time, and memory churn has been reduced during reachability analysis.

Multi-threaded animation. Animation Graph updates can now run on worker threads, allowing the number of animated characters to scale with the number of cores. Check out the Upgrade Notes, as we've deprecated many animation-related APIs and there are limitations on which animation graphs can run on worker threads.

Instant animation variable access. We've added a 'fast-path' for variable access in the Animation Graph update. This allows us to simply copy parameters internally rather than executing Blueprint code. The compiler can currently optimize the following constructs: member variables, negated boolean member variables, and members of a nested structure.
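
As a rough illustration (the class and variable names below are hypothetical, and the same applies to variables created directly in the Anim Blueprint), these are the kinds of members the fast path can copy when the graph reads them directly:

    // Illustrative only -- members like these are eligible for fast-path copying
    // when the Animation Graph reads them without extra Blueprint logic in between.
    #include "Animation/AnimInstance.h"
    #include "FastPathAnimInstance.generated.h"

    UCLASS()
    class UFastPathAnimInstance : public UAnimInstance
    {
        GENERATED_BODY()

    public:
        // Plain member variable: eligible for the fast path.
        UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Locomotion")
        float Speed;

        // Boolean member: both "bIsInAir" and a negated "NOT bIsInAir" access
        // can be optimized by the compiler.
        UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Locomotion")
        bool bIsInAir;
    };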

Additive animation 'baking'. We now have an option to turn on baked additive animations, which makes using additive animations roughly 3x faster. The work involved in calculating the delta pose for an additive animation is done at cook time rather than at run time. This saves not only the calculation work involved in creating the additive deltas, but also the memory accesses and allocations involved in decompressing the base animation, which is no longer needed at runtime. This feature increases cook times, so you'll need to enable it by setting the cvar "a.UseBakedAdditiveAnimations" to 1. Future versions of the engine will rework animation cooking to avoid this cost and permanently enable this optimization.
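
Since this is a console variable, it can be toggled from the console for testing or, as a sketch of a project-wide setup, set in your config (assuming the cvar is picked up from the [SystemSettings] section, as other engine cvars are):

    ; DefaultEngine.ini -- bake additive animations at cook time
    [SystemSettings]
    a.UseBakedAdditiveAnimations=1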

New: Realistic Hair Shading

We've added a physically based shading model for realistic hair based on the latest research from film.


It models two specular lobes, transmission, and scattering. To use this feature, simply choose Hair from the list of shading models in the material editor.

New: Realistic Eye Shading

You can now give your characters highly realistic eyes using Unreal Engine's new physically based shading model for eyes.


The shading model approximates subsurface scattering through the sclera, caustics on the iris, and specular on the wet layer. It is intended to be used in conjunction with the provided eye material and eyeball geometry; together these additionally model refraction through the cornea and darkening of the limbal ring, with controls for dilating the pupils.

New: Improved Skin Shading

We've improved the quality and performance of the Subsurface Scattering Profile shading model for realistic skin.


The updated shading model runs in half resolution and requires less GPU memory. The scattering is resolution independent and there is no longer a color shift on object edges. Texture and lighting details are better preserved by storing the diffuse and specular lighting separately in a checkerboard pattern rather than packing both into a single pixel.

New: Realistic Cloth Shading

We've added a physically based shading model for cloth. This simulates a fuzz layer and produces more realistic results for cloth than were achievable before. To use it, choose the Cloth shading model in the material editor.


New: Capsule Shadows

Unreal Engine now supports very soft indirect shadows cast by a capsule representation of the character:


Normally, when characters are only indirectly lit, they will not have any shadow except for screen-space ambient occlusion. Indirect shadowing needs to be very soft because indirect lighting comes from many directions, so traditional shadow maps don't work well. The indirect shadow direction and softness come from the Volume Lighting Samples placed and computed by Lightmass during a lighting build.

In game, indirect capsule shadows serve to ground characters to the environment:


You can use capsules for direct shadows too. The light's Source Radius or Source Angle determines how soft they will be. This can be used to achieve extremely soft character shadows in an efficient baked lighting environment, which wasn't possible before.


This capsule shadow implementation is very efficient as it computes shadowing at half resolution with a depth-aware upsample and uses screen tile culling to limit the shadowing work to where it is needed.

The GPU cost is proportional to the number of capsules and the number of pixels affected by the cast shadows.

How to enable Capsule Shadows

  1. Create a new physics asset using only Sphyl bodies (capsules). Spheres also work but are not as flexible. The capsules should overlap slightly at joints. Foot capsules are the most important to tweak so that the character looks grounded. Arms are often not needed unless you can go into cover or crawl on the ground.
  2. Assign the physics asset to the Skeletal Mesh asset's Shadow Physics Asset slot.
  3. Finally, enable capsule indirect shadows on the Skeletal Mesh Component (a code sketch follows below).
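
For reference, here is a minimal C++ sketch of steps 2 and 3. The property names used here (ShadowPhysicsAsset, bCastCapsuleIndirectShadow, bCastCapsuleDirectShadow) are assumptions to verify against your engine version, and the same options are exposed in the editor Details panels:

    // Hypothetical helper -- assign a capsule-only physics asset and turn on capsule shadows.
    #include "Components/SkeletalMeshComponent.h"
    #include "Engine/SkeletalMesh.h"
    #include "PhysicsEngine/PhysicsAsset.h"

    void EnableCapsuleShadows(USkeletalMeshComponent* MeshComp, UPhysicsAsset* CapsuleAsset)
    {
        if (MeshComp && MeshComp->SkeletalMesh && CapsuleAsset)
        {
            // Step 2: the capsule representation used only for shadowing.
            MeshComp->SkeletalMesh->ShadowPhysicsAsset = CapsuleAsset;

            // Step 3: soft indirect shadows driven by Volume Lighting Samples;
            // direct capsule shadows can be enabled separately if desired.
            MeshComp->bCastCapsuleIndirectShadow = true;
            MeshComp->bCastCapsuleDirectShadow = false;

            MeshComp->MarkRenderStateDirty();
        }
    }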

New: Particle Depth of Field

New material functions allow small, out-of-focus particles to be expanded for depth of field the same way that opaque particles would be rendered.


The left image shows a simple scene with a lot of particles placed on the ground. The right image has circle depth-of-field enabled and the new material function renders the particles out of focus like all other geometry. The quality is even better as they don't suffer from noise artifacts. We always expand out-of-focus particles by more than a pixel to avoid shimmering.

This feature requires you to make a change in your particle material:


New: Dithered Opacity Mask

You can now use Dithered Opacity Mask to emulate a translucent surface using an opaque material.


The "Dithered Opacity Mask" checkbox in the material editor provides a stochastic form of order independent translucency when temporal anti-aliasing is enabled. It exploits temporal AA to blend the foreground object with the background over several frames. This can allow semi-transparent objects to use all of our deferred shading features at the cost of some noise and ghosting.

New: Dithered LOD Crossfades

Static meshes can now smoothly crossfade between levels-of-detail using an animated dither pattern!


Note: This feature must be enabled on the material as there is a small performance cost to enabling it.


New: Improved Hierarchical LOD

This release features major improvements to the Hierarchical Level-of-Detail (HLOD) system. Hierarchical LOD can automatically replace large numbers of detailed meshes with a few simple meshes when they are far away. This helps you achieve much higher quality levels when viewing objects up close, and faster overall performance for your level.


Paragon's Agora -- 2.56 Million triangles, 5690 draw calls (reduced from 3.94 million, 7060 draw calls)

In order to get the most benefit from HLOD, you will need the Simplygon SDK (requires a Simplygon license). Simplygon is required to generate a proxy mesh with a reduced number of polygons. Without it, the system will only bake out and combine sections that use different materials into a single draw call.

Also check out the new Hierarchical LOD Outliner feature, which has many new settings to help you set up your level's HLODs.

New: VR Instanced Stereo Rendering

Instanced Stereo Rendering is an optimization that makes it more efficient for the engine to render stereoscopic images for VR headsets.

Previously, the engine rendered a stereoscopic image by drawing everything for the left eye, and then drawing everything for the right eye. With Instanced Stereo Rendering, we render both eyes at the same time, which significantly cuts down on the work done by the CPU, and improves efficiency in the GPU. Here are the two techniques running side-by-side:


Using Bullet Train as our test content, we saw about a 14% improvement in CPU time and about a 7% improvement on the GPU with no extra work required! Note that while most rendering features work with Instanced Stereo, there are a handful that are not supported yet (DFAO, for example).

To enable this feature in your project, go to your Project Settings in the editor, and check the "Instanced Stereo" box.
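
Checking the box writes a renderer setting into your project's config. As a sketch (assuming the option is stored with the other renderer settings, which is where rendering project settings normally live), the resulting DefaultEngine.ini entry looks like this:

    ; DefaultEngine.ini -- requires an editor restart and a shader recompile
    [/Script/Engine.RendererSettings]
    vr.InstancedStereo=True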


New: Anim Dynamics (Fast Physics Simulation for Characters)

Anim Dynamics is a brand new self-contained and simple physics simulation node for Animation Blueprints which allows dynamic motion to be procedurally added to skeletal meshes without having to use a full physics solution:


Here's an overview of the main features of Anim Dynamics:

  • Simplified rigid body simulation
    • Runs in animation evaluation step.
    • Runs in component-space to react to animations.
    • No collision for faster simulation
    • Only boxes are supported for inertia calculations
  • Rigid body constraints
    • Angular - A two axis constraint with the third being locked. The rigid body can rotate within given angle ranges around the two free axes. Works in conjunction with Prismatic and Planar constraints
    • Cone - A free rotational constraint that keeps the rigid body within a specified angle of its constraint. Works in conjunction with Prismatic and Planar constraints but replaces angular if selected
    • Prismatic - A three axis linear constraint allowing movement along all three principal axes within specified limits.
    • Planar - The planar constraint is a list of infinite extent planes that the rigid bodies cannot cross, this can be used as a ground plane for hanging objects or to stop an object from penetrating a character. Each plane can either be placed in world space or have its transform driven from a bone on the character
  • Chains
    • Each node can either represent a single dynamic bone or a continuous chain of dynamic bones sharing similar constraint data. This allows for more realistic behavior when simulating larger numbers of connected bodies.
    • With single nodes we only push forces down the chain and never propagate them back; with chain mode we propagate forces in both directions for nicer-looking chains.
  • Spring Targets
    • Linear and angular springs can be used to get more bouncy effects. These springs are configurable independently and can have different spring constants.
  • Wind
    • Anim Dynamics can be used with the same wind source actors that affect APEX cloth objects. This can be toggled on or off per node and can be scaled to create the perfect wind response.
  • Adaptive sub-stepping
    • The simulation can either run with normal tick settings taken from the physics project settings or run using an adaptive substep.
    • The simulation can be configured per node - so if one node needs more iterations for its simulation to converge, it can be configured to have as many as necessary without affecting other simulations.
    • In this mode we track the amount of time the game has ticked separately from the time the simulation has actually run; if we start to fall behind, we maintain a time debt and run the simulation multiple times. This is capped to stop spiralling problems, but can add stability to complicated simulations.
  • Visualisation
    • There are options on the nodes to visualise various things when the node is selected. Available visualisers cover:
      • Angular limits
      • Prismatic limits
      • Planar limits
      • Planar exclusion methods (sphere-like collision with the planar limits)

New: Live Animation Recording from Gameplay

You can now record an animation of a skeletal mesh during live gameplay and save it as an Anim Sequence asset!


This asset can be used in-engine or exported as an FBX for use in a 3rd party tool. This should work in any active game scenario, either live or while watching a replay.

How to use this feature:

  • To record an animation, open the console and type: RecordAnimation MyActorClass_05 /Content/Foo/RecordedAnimation
  • To stop recording, type: StopRecordingAnimation MyActorClass_05 or StopRecordingAnimation all
  • If you omit the asset path or provide an invalid one, you'll get a picker popup where you can choose. You can use the World Outliner to find the name of the actor you're interested in (hover over the actor name to see its "ID Name"). A short sketch of issuing the command from game code follows below.
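
A minimal sketch, assuming you issue the same console command from game code via a player controller (the actor name and asset path are just the placeholders from the example above):

    #include "GameFramework/PlayerController.h"

    // From inside an actor or component with world access:
    if (APlayerController* PC = GetWorld()->GetFirstPlayerController())
    {
        PC->ConsoleCommand(TEXT("RecordAnimation MyActorClass_05 /Content/Foo/RecordedAnimation"));
        // ...later, stop recording and save the Anim Sequence asset:
        // PC->ConsoleCommand(TEXT("StopRecordingAnimation MyActorClass_05"));
    }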

New: Higher Quality Depth of Field

You can now increase the sample count of depth of field ("Circle DoF") to increase quality by reducing noise, at some additional performance cost.


New: Platform and SDK Updates

Along with the usual updates, we've updated all our major VR platforms to use their latest SDKs in preparation for titles shipping in the launch window of the various VR platforms. Along with these updates has been a concentration on stability and final polish, so any UE4 title on 4.11 should be "VR ship ready!"


Platform highlights in this release:

  • Oculus Rift 1.3.0 SDK is coming soon in Unreal Engine 4.11.1 hotfix. (Oculus Rift SDK 0.8.0 beta in Unreal Engine 4.11.0)
  • Oculus Mobile SDK 1.01
  • PlayStation VR SDK 3
  • SteamVR 0.9.12
  • PS4 SDK 3.008.201 (w/ PSVR)
  • Xbox One XDK November QFE 1
  • HTML5 SDK (Emscripten) 1.35.0
  • Linux Clang 3.7.0
  • Apple tvOS 9.0 support (GitHub only)

New: Improved DirectX 12


We've integrated updates to DirectX 12 in Unreal Engine from Microsoft that allow better CPU utilization while generating rendering commands in parallel. Other improvements include support for multiple root signatures, an asynchronous pipeline state disk cache (now enabled by default), a reduced memory footprint with leaks fixed, optimized resource transitions, faster memory allocations, and less GPU starvation thanks to flushing work during idle GPU time.

DirectX 12 for Xbox One

Microsoft engineers have worked to add experimental support for DirectX 12 on Xbox One!

Some steps are required to enable this feature:

  • Set bBuildForD3D12 to true in the XboxOneRuntimeSettings section of BaseEngine.ini (see the snippet after this list)
  • Set D3D12_ROOT_SIGNATURE to 1 in XboxOneShaderCompiler.cpp
  • Comment out the use of GetSamplePosition in PostProcessSelectionOutline.usf (not supported on Xbox One yet)
  • Rebuild and restart!
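
As a sketch of the first step (the exact section header in BaseEngine.ini may be fully qualified in your engine version, so verify before editing):

    ; BaseEngine.ini
    [XboxOneRuntimeSettings]
    bBuildForD3D12=true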

Due to its experimental nature, there may be rendering and/or stability issues with this enabled.

New: Metal Rendering on Mac OS X

Metal is now the default graphics API on Mac OS X El Capitan!


Epic has worked closely with Apple, AMD, Nvidia, and Intel to integrate Metal for Mac, and in 4.11 it replaces OpenGL as the primary graphics API for OS X El Capitan. The 4.11 release provides the same rendering features across Metal and OpenGL by default. Metal provides a streamlined, low-overhead API with precompiled shaders and efficient multi-threading support to maximize the processing power of the GPU. We will continue to improve and extend support for Metal on Mac and look for ways to leverage new API features in upcoming versions of the engine.

There is also experimental Metal support for Shader Model 5 features; try it out using the "-metalsm5" command-line switch.
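
For example, when launching a packaged Mac game from a terminal (the app name below is a placeholder for your own project; the same switch can be passed to the editor):

    # Launch with experimental Metal Shader Model 5 support
    ./MyGame.app/Contents/MacOS/MyGame -metalsm5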

New: Faster Lighting Builds (Intel Embree support)

We've integrated Intel's Embree ray tracing library into Lightmass and have dramatically improved lighting build times with it.

The majority of lighting build time goes toward tracing rays to figure out how light is bouncing. As a test case, the "Sun Temple" level lighting now builds 2.4x faster (from 45 seconds to 18 seconds) by using Embree. The results are visually identical, with Indirect Lighting Quality set to 4.

New: Lightmass Portals

Skylight quality indoors can now be massively improved by setting up Portal actors over openings.


Portals tell Lightmass where to look for incoming lighting; they don't actually emit light themselves. Portals are best used covering small openings that are important to the final lighting. This yields higher quality light and shadows, as Lightmass can focus rays toward the incoming light. (Below left: without portals. Below right: with portals.)


Several other improvements have been made to Lightmass quality:

  • Fixed light leak artifacts with Point / Spot / Directional lights and small values for Static Lighting Level Scale
  • SkyLight indirect lighting has been improved

New: Animation Posing by Copying from Meshes

We have added a new Animation Node that copies a pose between Skeletal Mesh Components inside the Animation Graph. This is an improved version of Master Pose Component because you can now blend the source animation with new animations.


In the above example, the Gauntlet is using the Copy Mesh Pose node in its own Anim Blueprint to copy the hand and arm transforms from a Source Mesh, the female character. At the same time, the Gauntlet's spikes are animating independently.


We set the Source Mesh to copy pose from another mesh component. It will only copy transforms where the bone names match. Once you get the pose from the node, you can blend other animations in.

The source mesh has to animate prior to the target, which happens automatically when the target is attached to the source, and the joint names need to match. Also note that bone transforms are copied prior to physics simulation, which means it won't work if the source mesh simulates, although this is not a problem if the target mesh animates after physics runs.

New: LOD Bone Reduction Tool

You can now remove bones from a Skeletal Mesh LOD, and skeletal weighting will automatically be updated! This is an easy way to improve the performance of character animations in your game.


Use the new 'Remove Children' option in the Skeleton Tree's right click menu to disable bones for any given LOD.


Previously, this was tightly coupled to the mesh reduction tool. Now you simply view the LOD you want to remove bones from, select the desired joint and remove it, including children if you wish. You are also given the option of removing from only the current LOD or including all lower LODs.

Once you remove the bones, they will turn grey, indicating that they are not skinned at the current LOD. The LOD preview mode in Persona also shows bone names in grey when disabled.

Finally, there is a new "LOD Bones" display option drop-down in the Skeleton Tree toolbar for filtering which bones are visible.


New: Particle Cutouts (Fast Flipbook Particle Rendering)

Particle cutouts allow for flipbook particles to render as much as three times faster!

Particles using flipbook animations (Sub-UV Animation module) tend to have quite a bit of wasted overdraw - areas where the pixel shader had to be executed, but the final opacity was zero. As an example, the texture below is mostly comprised of transparent pixels.


We can now render particles with much tighter bounding geometry, cutting out the invisible areas, instead of using a full quad regardless of what frame of the animation is playing.


Setup

The engine can't use Particle Cutouts by default because the material graph allows any logic to create the particle's opacity - it might not even come from a texture. Artists have to opt in to using Particle Cutouts by setting it up:

  1. Create a new 'SubUV Animation' asset off of the flipbook texture (right click texture in Content Browser)


  2. Open the 'SubUV Animation' asset and make sure the Sub Images properties are set correctly. These first two steps only have to be done once per flipbook texture.


  3. In Cascade, find the SubUV module and assign the Animation asset.


Performance results


Shader complexity viewmode shows much less overdraw from the particle system using cutouts. Using the cutout geometry reduced the GPU cost of this particle system by 2-3x!

New: Per-vertex Translucent Lighting

Lit translucency can now be rendered much faster using new per-vertex translucent lighting settings!


There are two new translucency lighting modes available in the material editor which compute lighting per vertex:


  • Per-vertex lighting modes use only half as many shader instructions and texture lookups
  • The "Volumetric PerVertex NonDirectional" setting is extremely fast -- nearly the same as an Unlit material!
  • We recommend using per-vertex translucent lighting as often as possible. The exception is for very large particles or triangles, since lighting will be interpolated across each triangle.

New: Lighting Channels

Lighting channels allow dynamic lights to affect only those objects whose lighting channels overlap with the light's. We now support up to 3 lighting channels.


You can set which channels a Primitive Component or a Light Component belongs to. (A code sketch follows after the list below.)


  • These are really useful for cinematics! For example, you can have a rim light on a character that doesn't affect the surrounding environment.
  • Lighting channel influence is applied dynamically -- that means it won't work with Static Lights. You'll need to use either Stationary or Movable lights. Also, lighting channels will only affect direct lighting on opaque materials.
  • Lighting channels add only a small GPU cost per light when used.
  • Please note, this is available for Deferred Rendering only, so it will not work on Mobile or on Macs that are not using SM5 or Metal.
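
A rough C++ sketch of assigning channels at runtime, assuming the LightingChannels struct and its bChannel0-2 flags are exposed on primitive and light components in your engine version (the same options appear in each component's Details panel):

    // Hypothetical helper: put a character's mesh and a rim light together on channel 1.
    // Property names (LightingChannels, bChannel0/1/2) are assumptions -- verify in your build.
    #include "Components/PrimitiveComponent.h"
    #include "Components/LightComponent.h"

    void MoveToRimLightChannel(UPrimitiveComponent* Mesh, ULightComponent* RimLight)
    {
        if (Mesh && RimLight)
        {
            Mesh->LightingChannels.bChannel0 = true;   // still lit by normal lights
            Mesh->LightingChannels.bChannel1 = true;   // also lit by the rim light
            Mesh->MarkRenderStateDirty();

            RimLight->LightingChannels.bChannel0 = false;
            RimLight->LightingChannels.bChannel1 = true; // affects only channel 1 objects
            RimLight->MarkRenderStateDirty();
        }
    }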

New: Stereo Spatialization

3D spatialization is now possible for stereo assets on PC, Xbox One and PS4 platforms. Stereo spatialization essentially spatializes each input source channel (e.g. Left and Right channels) as if they were mono-sources. The positions of the left and right channels are determined by the sound's emitter position offset by a 3D Stereo Spread, a new parameter in Sound Attenuation Settings.


The 3D Stereo Spread parameter defines the distance in game units between the left and right channels, measured along a vector perpendicular to the listener-emitter vector. The spread is therefore in world coordinates, and the source naturally collapses to a mono point source (with left and right channels summed) the further the listener is from the emitter.
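
Conceptually (this is an illustrative sketch, not the engine's implementation, and the function name is hypothetical), the two channel positions can be thought of as:

    // Split a stereo emitter into left/right virtual positions using the 3D Stereo Spread.
    #include "Math/Vector.h"

    void ComputeStereoChannelPositions(const FVector& EmitterLocation, const FVector& ListenerLocation,
                                       float StereoSpread, FVector& OutLeft, FVector& OutRight)
    {
        // Direction from the listener toward the emitter.
        const FVector ToEmitter = (EmitterLocation - ListenerLocation).GetSafeNormal();

        // A vector perpendicular to the listener-emitter vector (using world up as a reference).
        const FVector Perpendicular = FVector::CrossProduct(ToEmitter, FVector::UpVector).GetSafeNormal();

        // Offset each channel by half the spread; the farther the listener is, the smaller
        // the angular separation, so the source collapses toward mono with distance.
        OutLeft  = EmitterLocation - Perpendicular * (StereoSpread * 0.5f);
        OutRight = EmitterLocation + Perpendicular * (StereoSpread * 0.5f);
    }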


New: Sound Focus

The Sound Attenuation settings asset now supports sound focus, a new feature which allows sound designers to control various parameters automatically based on the direction of the sound relative to the listener.

The following diagram illustrates the azimuth settings:


Sound designers specify azimuth angle values that define when a sound is in and out of focus. Relative sound positions that are between these azimuth angles are interpolated to blend between the regions. The new settings are specified in the Sound Attenuation Settings UStruct:


The focus and non-focus values are used to modify sounds in three ways: Distance Scaling, Volume Scaling, and Priority Scaling. Distance scaling is useful to create a boom-mic or zoom-mic effect by scaling the apparent distance of sounds that are in focus or out of focus. Volume scaling can be used to add another attenuation to sounds based on their visibility. Priority scaling is used to reduce (or enhance) the priority of sounds for sound concurrency.

New: Sound Occlusion

UE4 now supports simple raycast-based sound occlusion. To enable occlusion calculations on a sound, simply specify it in the Sound Attenuation Settings asset as shown in the following picture:


If occlusion is enabled for a Sound Attenuation object, sounds playing with that object will perform raycasts against collision geometry to determine whether the sound is occluded. If the sound is determined to be occluded, it will apply the given Low Pass Filter Frequency value and volume attenuation. Because the occlusion system is based on a binary raycast (occluded versus not occluded), an optional occlusion interpolation time is provided, which interpolates between the unoccluded and occluded values. An optional checkbox is provided to enable collision checks against complex geometry.

New: Sound Concurrency

The new concurrency system removes the concurrency data from sound instances and instead puts it into its own asset. You can create these by selecting "Sound Concurrency" under the "New Asset" menu in the Content Browser.

In addition to the previous Max Count and Resolution Rule, the new Sound Concurrency object adds a few other concurrency-related bells and whistles which will be described below.

Limit To Owner

This box indicates that any sound played with this concurrency setting should attempt to do its concurrency counting per the sound's owner. If the sound has no owner (i.e. it wasn't played on an actor or through an audio component), then it will count concurrency as if the checkbox were not checked. This is intended to support per-actor concurrency limits.

Volume Scale

This feature groups sounds into concurrency groups for volume management. When multiple sounds in a concurrency group are playing at the same time, the older sounds will become quieter. The formula for a sound's volume scaling under this feature is:

VolumeScale = VolumeScaleSetting ^ NewerSoundCount

where VolumeScaleSetting is a user-set value, and NewerSoundCount is the number of currently-playing sounds in the concurrency group that started after this one. For example, if sounds A, B, and C were in the same concurrency group and were played in rapid succession with a VolumeScaleSetting of 0.9, they would be adjusted as follows:

A's VolumeScale = 0.9 ^ 2 = 0.81

B's VolumeScale = 0.9 ^ 1 = 0.9

C's VolumeScale = 0.9 ^ 0 = 1.0

In other words, the most recent sound will not be affected, but each older sound will be ducked so that its volume exponentially decays, effectively acting as an automatic ducking mechanism for sounds that are grouped together.
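
As a quick illustration (not engine source; the function name is hypothetical), the scaling described above amounts to:

    // Each older sound in a concurrency group is scaled by VolumeScaleSetting
    // raised to the number of sounds that started after it.
    #include "Math/UnrealMathUtility.h"

    float ComputeConcurrencyVolumeScale(float VolumeScaleSetting, int32 NewerSoundCount)
    {
        // e.g. 0.9 ^ 2 = 0.81 for the oldest of three sounds played in quick succession
        return FMath::Pow(VolumeScaleSetting, (float)NewerSoundCount);
    }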

New Resolution Rules: Stop Lowest Priority and Stop Quietest

The new concurrency object also introduces two new concurrency resolution rules: Stop Lowest Priority and Stop Quietest.

The Stop Lowest Priority rule uses the priority value set on the USoundBase object (SoundCue, SoundWave, etc). Once a concurrency group's limit is reached, the system will run through all active sounds in the group and stop the lowest priority sound (or not play the new sound if it would be the lowest priority).

The Stop Quietest rule does what its name suggests: instead of stopping based on distance (which is often correlated to volume, but not necessarily), it will stop the sound in the group that is quietest or not play the new sound if that new sound would be the quietest. The quietest sound is defined as the sound with the smallest volume scale product after all gain stages of the sound have been evaluated.

Applying the Sound Concurrency Object to Sounds

To use the Sound Concurrency object, you simply supply it to any of your sound calls or sound objects in much the same way that you would the Sound Attenuation Setting object. Below is the details panel of a Sound Wave asset.


The new section highlighted above allows you to drop in an asset reference to the new Sound Concurrency asset.

Overriding Sound Concurrency

Both for backwards compatibility and for cases where the old behavior is preferred, you can choose to override the concurrency setting object with a local set of data on the asset. In this case, the sound asset becomes its own concurrency group and the older behavior of limiting concurrency by asset instance is achieved. Older projects that are converted to 4.11 will automatically use the override setting for their sound assets.

New: Marker-Based Animation Syncing


Animations can now be synchronized using markers in the animation data. In previous releases the only method of syncing two animations was based on time - animations were just scaled so their lengths matched. You can add marker data by right clicking on a notify track and selecting Add Notify->New Sync Marker.

The feature set for the initial release is as follows:

  • Only animations within the same SyncGroup are synced. The leader drives the positions of followers within the same sync group.
  • No play rate adjustment is performed; the play rate is always that of the master animation.
  • Only markers common to all animations within a group are synced. For example, if one animation is missing the 'Right Foot Down' markers, those markers will be ignored for all animations when determining that frame's position.
  • Position is synced based on the relative position of the Leader with respect to its common markers. For example, if the Leader is 25% of the way between its left and right foot markers, then the followers will be synced to 25% of the way between their respective left and right markers.
  • Marker based sync is used automatically when animations in a sync group have enough matching markers. Otherwise the original scaled length syncing behavior is used.
  • Montages also support marker-based sync while blending out, so you can transition back to other animations seamlessly. You can find the Sync Group setting in the Montage.

New: Curve Blending for Animation Montages

Montages now support curve blending. The Blend In and Blend Out options on the Montage control how the Montage should be blended when it plays. Note that if an additional Montage is played, its Blend In settings will be used.


New: Hierarchical LOD Outliner

The new Hierarchical LOD Outliner allows you to visualize the clustering of mesh actors and enables editing of the various HLOD settings and generation processes.


Material Generation

The material merging process now supports more material properties such as per-vertex colors, vertex positions, and vertex normals. This means that techniques such as world space texturing, blending in snow or moss based on surface normal, and masking material layers by vertex colors now work correctly with HLOD.

Some features are still not supported, such as per-pixel normals and the World-to-Tangent transform. In these cases, or in other cases where you want to use a different or simplified material for HLOD, the Material Proxy Replace node can be used. It functions much like the Lightmass Replace node: the "real-time" input is used at runtime, and the "material proxy" input is only used during the generation process.

Settings for the material merging process can now be found in the HLOD Outliner panel, in addition to the World Settings panel.

Proxy Mesh Generation

The proxy mesh generation has been streamlined and features the latest integration of the Simplygon SDK. Settings for the mesh generation process can now also be found in the HLOD Outliner panel.

Other improvements:

  • Spline meshes are now supported by the HLOD system.
  • Proxy meshes/clusters are now only shown, generated and built for visible (sub-)levels.
  • Emissive colors are now supported by the material merging process.
  • Static meshes with opaque materials are now supported by the material merging process.

New: Complex Text Rendering (Experimental)

We're working on adding right-to-left and bi-directional text support to Slate, including support for complex shaped text (such as Arabic).


This feature is still very early. You're encouraged to check it out and provide feedback, although some areas may be a bit rough around the edges in this release.

Known issues:

  • Single-line editable text blocks do not currently support complex text (only multi-line editable text and static plain/rich text do).
  • There is no support for Text Actors or Canvas.
  • Complex text rendering is currently only available for Windows, Mac, and PS4 builds.

New: Advanced Blueprint Search

The Blueprint search tool has been updated to support more advanced search functionality (to get more targeted results).


  • You can now target specific elements in your search, such as: nodes, pins, graphs, functions, macros, variables, properties, and components. Full documentation here: https://docs.unrealengine.com/latest/INT/Engine/Blueprints/Search/index.html
  • Supports And (&&) and Or (||) logic operations, as well as Tag/Value matching (using the syntax: Tag=Value).
  • Variables' "Find References" option now leverages this improved functionality for a more precise search.

New: VR Head Mounted Display Camera Improvements

The camera system for Head Mounted Displays has been refactored to make it more versatile and easier to use. We've made it so that your active Camera Component will be offset in the engine in the same way your headset in the real world is offset from its origin. That means you can easily calculate the position of the VR headset in your world, attach meshes and other objects directly to it, and simplify your VR game's control scheme.

In addition, anything attached to the camera component will be "late updated," which means it will use the most up-to-date positional data possible for rendering to reduce latency. Meshes, effects, and sprites attached to the camera will be locked solid there, and updated every frame in the same way that we update the Head Mounted Display itself.

Check out our updated documentation for instructions on how to migrate your current project to use the new system!

New: VR Stereo Layers

Stereo Layers allow you to draw a quad with a texture at any position in the world on a separate layer that is fed directly into the VR compositor. This allows you to make UI that is more readable and less distorted. Currently, this feature is only implemented for the Oculus Rift headset, but it will be coming shortly to other platforms!

New: Major Progress on Sequencer (Experimental)

Sequencer is our new non-linear cinematic animation system. Sequencer is still under heavy development and we don't recommend using it for production work quite yet, but you're welcome to try it out and send us feedback! Expect to hear a lot more about Sequencer in an upcoming UE4 release.

Notable new features in Sequencer for 4.11:

  • New tracks: Shot/director, play rate, slomo, fade, material, particle parameter tracks.
  • Improved movie rendering; .EXR rendering support.
  • Improved keyframing behaviors, copy/paste keyframes, copy keys from Matinee, 3D keyframe path display.
  • Master sequence workflow, so you can have sub-scenes within a larger sequence
  • Support for "Spawnables" (actors that exist within your cinematic asset)
  • UI improvements: track coloring, keyframe shapes/coloring, track filtering.

You can turn on Sequencer by opening the Plugins panel and enabling the "Level Sequence Editor" plugin, then restarting the editor.

Release Notes

AI

  • New: A new EQS scoring function, Square Root, has been added.
  • New: A simple "Current Location" EQS generator has been added to support reasoning about AI's current location.
  • New: Added an EQS test for geometry overlaps.
  • New: Added Blackboard Keys support to EQS' NamedParams.
  • New: EQS tests can now be configured to normalize results against declared ideal value rather than max value.
  • New: Made EnvQuery a blueprint type, which enables storing EQS templates in blueprint classes and AI blackboards.
  • New: Added a silent auto-conversion of Controllers to Pawns when calling blueprint RunEQS function.
    • This is usually the desired behavior. But it can be disabled with AISystem.AllowControllersAsEQSQuerier flag in Project Settings
  • New: The way EQS test scoring gets previewed in the EQS editor has been improved, both for higher display resolution and for flexibility for future expansion.
  • New: AI debug display for several subsystems has been improved by disabling their scene outline drawing in the editor.
  • New: Added a feature to the BT MoveTo task enabling it to observe and react to changes to an indicated blackboard entry storing move goal location or actor.
  • New: Added ability to specify the number of points generated by the OnCircle generator.
  • New: Added time slicing, logging and additional profiling stats to AISense_Sight to help debug and manage performance cost for AI on servers.
    • Exposed two new AI Perception parameters: MaxTimeSlicePerTick (in seconds) and MinQueriesPerTimeSliceCheck.
  • New: Pathfollowing parameters responsible for path's mid-points reachability acceptance have been exposed for configuration via AISystem settings.
  • New: Stopping AIController's logic during pawn's unpossess event has been made optional. Use AIController's StopAILogicOnUnposses property to control this mechanism.
  • Bugfix: The blackboard editor no longer crashes when adding new keys while having a search filter applied to keys list.
  • Bugfix: A crash in garbage collection related to a bug in EQS query template caching in certain use cases has been fixed.
  • Bugfix: Fixed a crash in EQS where wrappers could be garbage-collected during queries.
  • Bugfix: Blueprint-implemented EQS generators no longer crash when trying to add a generated value of a wrong type (vector/actor mismatch).
  • Bugfix: A bug in EQS resulting in FEnvQueryInstance's result never getting set to "failed" has been fixed.
  • Bugfix: AIPerceptionSystem.UnregisterSource no longer crashes when perception agents being unregistered don't use all of the defined senses.
  • Bugfix: The AI perception system will no longer allow AIs to bypass visibility-cone checks.
  • Bugfix: AIController.MoveTo will no longer ignore its navigation data's default querying extent.
  • Bugfix: AIController's pawn possession no longer overrides cached GameplayTasksComponent references, which could break AI's gameplay tasks usage.
  • Bugfix: Blueprint interface functions to PawnActions have been fixed to return appropriate values.
  • Bugfix: Fixed AI stimuli never expiring, which resulted in AI never forgetting perceived actors.
  • Bugfix: BTService_BlueprintBase will now correctly trigger "deactivation" events that are implemented in derived blueprint classes, but not the base class.
  • A safety feature has been added to AI's possession logic. AIControllers will no longer try to possess soon-to-be-destroyed pawns.
  • EQS Query instances that run longer than expected will output once to the log, and then once more when the query finishes. The previous behavior was to log repeatedly until the query finished.
  • EQSTestingPawn's ticking has been disabled in game worlds. This can be re-enabled with the bTickDuringGame property.
Behavior Tree
  • New: BTService_RunEQS, a new BT service for regular EQS query execution, has been added.
  • New: Added new property to all EnvQueryTests: TestComment. This property is a simple, optional string used to label the purpose for a test. For now, it only shows up in the details view, but it is still useful for helping to document the reasons for tests, especially when using more than one of the same type.
  • New: Expanded the RunEQSQuery BT task so that users can configure it to use a query template indicated by a key in blackboard.
  • Changed the color of root level decorator nodes in the behavior tree editor to distinguish them from regular decorators.
  • Bugfix: Stopping behavior tree with instantly aborting parallel tasks no longer crashes.
  • Bugfix: Handled an edge case in BT's parallel tasks removal that could result in not cleanly finishing all of its tasks, and potentially hanging.
  • Bugfix: Behavior tree focus service reacts to blackboard changes appropriately.
  • Bugfix: Behavior tree's blackboard filters work correctly when a single object uses multiple configurable filters of the same class.
  • Bugfix: BTDecorator_Blackboard now handles user-requested condition inversion properly.
  • Bugfix: Fixed behavior tree decorators not observing in LowerPriority mode when search flow enters and leaves their branch without finding a task to run.
  • Bugfix: Prevented behavior trees from starting new task execution just to be abandoned in the next tick due to decorators firing between abort and execute calls.
  • Bugfix: Fixed cleanup of active blueprint actions for blueprint based behavior tree tasks.
  • Bugfix: Fixed missing Tick event in aborting behavior tree tasks from an abandoned subtree.
  • Bugfix: Copy and paste operations in behavior tree's composite decorators subgraphs now work properly.
  • Bugfix: Fixed node memory allocations for injected behavior tree decorators.
  • Bugfix: RotateToFaceBBEntry no longer gets stuck when passed a blackboard key with FRotator value.
  • Bugfix: Fixed order of behavior tree graph nodes when a node is being moved.
  • Bugfix: GameplayDebugger's default settings are now applied properly.
  • Bugfix: Triangulation errors in some convex polygons recorded by visual logger have been fixed.
Navigation
  • New: CompositeNavModifiers have been expanded to support use of convex and sphere physics shapes.
  • New: Enabled custom export of navigable geometry for foliage with InstancedStaticMesh type.
  • Bugfix: Streaming out of a level containing a RecastNavMesh instance no longer crashes the engine.
  • Bugfix: A bug in the editor-time navigation system, which resulted in all saved data in a static navmesh being removed on map load, has been fixed.
  • Bugfix: A memory leak in Recast's heightfield layers building has been fixed.
  • Bugfix: Navigation bounds gathering no longer produces unexpected results while streaming levels with certain streaming setups.
  • Bugfix: NavLinkProxy no longer marks instances containing only smart links as navigation-irrelevant.
  • Bugfix: A missing NavigationSystem notification has been added to let it know about editor-time changes to ActorComponent.CanEverAffectNavigation property. This fixes situations where toggling this flag wouldn't result in rebuilding navmesh.
  • Bugfix: Added a safety feature to address navmesh generation related crashes when BP-implemented actors get their construction script re-run just after navmesh generation finishes.
  • Bugfix: "Navmesh build in progress" editor notifier no longer hangs indefinitely if it's created just after navmesh has finished building.
  • Bugfix: Crowd simulation works properly with auto-possessed pawns placed in a level.
  • Bugfix: Collision data for navigation built from mirrored PxConvexMesh now exports correctly.
  • Bugfix: Fixed geometry projection for navmesh walking mode when a pawn moves far away from the geometry on the Z axis.
  • Bugfix: Corrected HitNormal values reported by navmesh raycasts.
  • Bugfix: Fixed the influence of navmesh edges on crowd simulation near the ends of paths.
  • Bugfix: Navigation export of destructible mesh without collisions enabled now works as expected.
  • Bugfix: Fixed navigation export of NavRelevantComponent attached to an actor not affecting navmesh generation.
  • Bugfix: Navmesh tiles are no longer drawn in red, or not drawn at all, after navmesh generation finishes.
  • Bugfix: Reuse of navigation paths now updates all flags appropriately.
  • Bugfix: Corrected scoring of navmesh boundary segments in detour's crowd simulation.
  • Bugfix: Fixed UNavRelevantComponent not being included in navmesh generation when it's not attached to actor with collision component.
  • Navmesh raycasts will use the nearest poly containing the ray origin instead of just the closest one.
  • Added Z check to detour's crowd avoidance segment gathering.
  • RVO avoidance now takes the height of agents into account when culling obstacles.

Animation

  • New: Level-of-Detail Threshold for Animation Graphs
    • Many animation graph nodes now have an LOD Threshold option for performance optimization:
      • Apply Additive
      • Apply Mesh Additive
      • Aim Offset
      • All Skeletal Control nodes
    • Whenever the Skeletal Mesh instance drops below this LOD, the node will not execute.


  • New: Per-Animation Compression Settings
    • You can now set animation compression on a per-animation basis.