In this article, I’d like to give some advice about developing Gear VR applications, and VR experiences in general, using Unreal Engine 4. We’ll also reveal some sneaky tricks we had to use to achieve certain effects in our game, Escape Velocity.
Lessons learned
Early optimization is the root of a good Gear VR experience
The advertised theoretical performance of the Mali-T760 MP8 GPU in the Samsung Galaxy S6 (our target platform) is around 210 GFLOPS and 5.2 GTexels/s. Compare that to the recommended PC GPU for VR, the Nvidia GeForce GTX 970, which offers 3494 GFLOPS of raw performance and 109.2 GTexels/s of fillrate. Considering that UE4 renders more pixels on a Gear VR device than on an Oculus DK2 (2.097M vs 2.073M), you can clearly see that there is vastly less computational power and bandwidth at hand. The only thing that works slightly to your advantage is framerate: Gear VR applications should hold 60 fps, while the Oculus DK2 expects 75 fps and Crescent Bay targets 90 fps.
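To make the gap more concrete, here is a rough per-pixel shading budget worked out from the figures above (raw GFLOPS divided by pixels per frame times target framerate; crude, but illustrative):

$$
\frac{210\ \mathrm{GFLOPS}}{2.097\ \mathrm{Mpix} \times 60\ \mathrm{fps}} \approx 1{,}700\ \mathrm{FLOPs/pixel}
\qquad \text{vs.} \qquad
\frac{3494\ \mathrm{GFLOPS}}{2.073\ \mathrm{Mpix} \times 75\ \mathrm{fps}} \approx 22{,}500\ \mathrm{FLOPs/pixel}
$$

That is roughly a thirteen-fold smaller per-pixel budget, before memory bandwidth even enters the picture.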
With this data in mind, there is only one sensible approach to performance: designing for it upfront. If you think you can defer performance improvements to a later “optimization pass”, you’re wrong: with that approach, you’ll end up painfully cutting assets and features.
For VR, it’s simply better to think about performance at every step of development. When designing a feature or a scene, think about the number of distinct objects that have to be rendered, the effects that may be needed, and the graphical fidelity of everything. Prototype with the same render setup but with placeholder assets. If something takes you below the acceptable framerate, revert it and either drop it or make it performant in a separate branch.
Before Escape Velocity, our team had UE4 experience only on PC. We naively thought we’d be able to get the game to a constant 60 fps later in development. Sleepless nights and abandoned features proved us wrong. With that experience behind us, we have adjusted our development methodology, and newer projects don’t suffer such profound performance problems. Every new idea is screened for possible performance issues and thoroughly prototyped.
THE MYTHICAL DRAW CALL
We’ve found that the number of draw calls per frame is a metric you should watch carefully during development. Mobile GPUs are quite capable in terms of the number of triangles they can push, but they can start behaving badly when there are too many driver state changes incurred by draw calls for objects with different materials.
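If you want to keep an eye on this metric during development, UE4’s built-in stat displays can show draw call counts and per-frame timings directly on the device, for example:
- stat unit (frame, game, draw and GPU times)
- stat scenerendering (mesh draw call counts)
- stat rhi (RHI draw primitive calls and memory)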
That’s why our station started out neatly divided into parts and ended up as one big mesh with all textures and maps merged. Our tests showed that LODs couldn’t help us: even with all the modules replaced with blockout models (~30 tris each), the Mali-T760 MP8 in the Galaxy S6 still couldn’t maintain the required 60 fps.
Playing with occlusion culling didn’t help much either, because the freedom given to the player in EV leads to situations where the whole station is visible on screen.
Also, we had to forget about any particle effects because UE4 wasn’t able to use instancing on the Galaxy S6 at the time, so draw calls would go through the roof. However, this is now supported in 4.11.
For future projects, we are eagerly awaiting support for the recently released Vulkan API on mobile GPUs. Epic’s Zen Garden demo shows that lower-level APIs like Metal (and, hopefully, Vulkan) can unleash the true potential of these devices and let developers forget about some of these limitations.
Motion sickness is in the eye of the beholder
The subject of motion sickness has accompanied discussions about modern VR since the first Oculus prototype was Kickstarted in 2012. The general consensus is that you should reduce it as much as possible, even if that means sacrificing gameplay elements.
Our experiences with EV show that it’s not that simple. It seems that as long as motion sickness comes from the very nature of the experience and not from technical deficiencies, people are OK with it.
Alongside EV, Setapp released another Gear VR application, the puzzle game Neverout. With this title, people weren’t expecting anything inherently nausea-inducing, yet we got reports from users feeling bad even with its relatively mild movement.
On the other hand, Escape Velocity inspired numerous comments that can be summarized as, “Omg, I barfed all over my carpet, this is awesome!”
With EV, people expect the experience to be uncomfortable, because in reality it probably would be. Keep in mind, however, that we included a way to immediately stop the most unpleasant part of the experience (i.e. applying too much rotational force with the jetpack) so that the user stays in control.
Tricks
Postprocess box
As of version 4.10, Mobile HDR is not supported on Gear VR. This means that most post process effects are absent, notably screen fading.
We needed this effect, among other things, to fade from the intro sequence. What we used is a small hack:
We anchored a static mesh (a box with flipped normals) at the camera position. It has a special material that is unlit and not depth-tested, so we can be sure it will always be drawn over any other geometry.
Normally, this object is not visible on screen. It’s only shown briefly, when its opacity is animated with a timeline.
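For reference, here’s a minimal C++ sketch of how such a fade box can be driven (in EV we did this with a Blueprint timeline; the class, member and parameter names below are illustrative, not engine API):

```cpp
// FadeBox is a UStaticMeshComponent* member (an inward-facing box attached to
// the camera) whose material is unlit, translucent and has depth testing
// disabled. FadeMID is a UMaterialInstanceDynamic* member. Names are ours.

void AEVPawn::InitFadeBox()
{
    // A dynamic material instance lets us drive the opacity parameter at runtime.
    FadeMID = FadeBox->CreateDynamicMaterialInstance(0);
    if (FadeMID)
    {
        FadeMID->SetScalarParameterValue(TEXT("Opacity"), 0.0f); // start fully transparent
    }
}

// Called from the timeline (or Tick) with Alpha going 0 -> 1 to fade to black,
// or 1 -> 0 to fade back into the scene.
void AEVPawn::SetFadeAlpha(float Alpha)
{
    if (FadeMID)
    {
        FadeMID->SetScalarParameterValue(TEXT("Opacity"), Alpha);
    }
}
```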
The box cannot be too small, because the HMD’s HeadModel offsets the per-eye cameras and they may clip through it.
Be aware, however, that this approach causes the whole screen to be drawn over again. Due to the rather limited pixel fillrate of the Galaxy S6 GPU, this may cause framerate drops if there are other large translucent elements on the screen.
Gear VR head model
The "HeadModel" describes the offset from the neck to the position right between the eyes. The Camera Actor that you use in the game sits at the position of this virtual “neck”. The actual position of each eye used for rendering is calculated by applying the following transformations to that “neck” position:
- Add Head tracking offset (n/a on Gear VR).
- Rotate based on input from head tracking.
- Add HeadModel offset.
- Separate eyes by Inter Pupillary Distance (IPD).
For Gear VR, UE4 uses a fixed HeadModel and IPD (see the FSettings constructor in GearVR.cpp). On the Rift, this info is read from user preferences.
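Putting those steps together, the per-eye view position works out roughly like this (a simplified sketch with our own variable names; see GearVR.cpp and HeadMountedDisplayCommon.cpp for the real implementation, and remember the positional tracking term is zero on Gear VR):

```cpp
// Simplified sketch of how a per-eye camera position is derived from the "neck"
// (Camera Actor) position. Variable names are ours.
FVector CalcEyePosition(const FVector& NeckPosition,     // Camera Actor location
                        const FQuat&   HeadOrientation,  // from head tracking
                        const FVector& TrackingOffset,   // positional tracking; zero on Gear VR
                        const FVector& HeadModel,        // neck -> eye-center offset
                        float          IPD,              // inter-pupillary distance
                        int32          EyeIndex)         // 0 = left, 1 = right
{
    // The HeadModel and the eye separation are defined in head space, so they
    // are rotated by the tracked head orientation before being added.
    const float   EyeSign        = (EyeIndex == 0) ? -0.5f : 0.5f;
    const FVector EyeOffsetLocal = HeadModel + FVector(0.0f, EyeSign * IPD, 0.0f);

    return NeckPosition
         + TrackingOffset                                 // head tracking offset (n/a on Gear VR)
         + HeadOrientation.RotateVector(EyeOffsetLocal);  // rotated HeadModel + IPD split
}
```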
We found that the default values for Gear VR were too big for our setup: when rotating the astronaut’s head inside the helmet, one eye could clip through it. Also, if you want to simulate a very small character, merely scaling your character actor won’t suffice, because you’ll still have the same HeadModel and IPD. It will give the impression of a normal-sized person lying on the floor.
To alleviate this, the whole HeadModel must be scaled. To scale the HeadModel, IPD and head tracking offsets by 0.1, issue the following console commands:
- STEREO CS=0.1
- STEREO PS=0.1
These commands scale the camera (HeadModel and IPD) and positional tracking offsets down to one tenth of their defaults, giving the impression that the user is very small.
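The same commands can also be issued from code, for example from a player controller (a small sketch, assuming you have a valid world and player controller):

```cpp
// Scale the HeadModel/IPD and positional tracking offsets to one tenth of their
// defaults, same as typing the commands into the console.
if (APlayerController* PC = GetWorld()->GetFirstPlayerController())
{
    PC->ConsoleCommand(TEXT("STEREO CS=0.1"));
    PC->ConsoleCommand(TEXT("STEREO PS=0.1"));
}
```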
To discover more HMD runtime parameters that you can manipulate, check FHeadMountedDisplay::Exec() in Engine/Source/ThirdParty/Oculus/Common/HeadMountedDisplayCommon.cpp.
Tweaks in UE4 for a better (Gear) VR Experience
We ran into some hurdles in development due to the state of specific features but found clever ways to overcome them. Here's what we encountered:
UMG and UI in general
Working with UI elements in VR is a challenging task. These specific items were problematic for us:
DEPTH TESTING AND RENDER TO TEXTURE
We wanted to make certain elements of the UI look like they are displayed on the glass of the helmet. However, when we placed a UMG 3D widget near the glass, it ended up far too close to the camera: on Gear VR, users had to uncomfortably cross their eyes to bring the UI into focus.
After that, we tried pushing the UI elements a couple of meters away and making them bigger. However, this caused them to clip through the geometry of the station.
We needed these elements to be drawn no matter what. With standard objects, we could simply disable depth testing in the material settings. For UMG 3D widgets, however, the list of possible materials is fixed, so we ended up extending 3D widgets with an option to disable depth testing.
We would really love the ability to render UMG widgets to textures. This would give much more freedom than 3D widgets, especially in a VR setting, where UI elements should blend naturally into objects in the environment. A request for this has been sent to Epic.
GAZE INPUT
There’s no easy way to pass gaze input to UMG 3D widgets right now. You either have to extend 3D widgets and synthesize mouse input, or set up a contrived system of line traces against a set of separate objects in the scene, effectively sidestepping UMG completely.
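For reference, the trace-based approach boils down to something like this (a simplified sketch; the IGazeTarget interface and the surrounding class names are our own illustrative inventions, not engine API):

```cpp
// Simplified gaze "cursor": trace from the camera along the view direction and
// notify whatever we hit.
void AEVPlayerController::UpdateGaze()
{
    FVector  CamLoc;
    FRotator CamRot;
    GetPlayerViewPoint(CamLoc, CamRot);

    const float   GazeTraceLength = 10000.0f; // trace length in Unreal units
    const FVector TraceEnd        = CamLoc + CamRot.Vector() * GazeTraceLength;

    FHitResult Hit;
    FCollisionQueryParams Params(TEXT("GazeTrace"), /*bTraceComplex=*/ false, GetPawn());

    if (GetWorld()->LineTraceSingleByChannel(Hit, CamLoc, TraceEnd, ECC_Visibility, Params))
    {
        if (IGazeTarget* Target = Cast<IGazeTarget>(Hit.GetActor()))
        {
            Target->OnGazeFocus(Hit); // e.g. highlight the object or start a gaze timer
        }
    }
}
```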
A solution that combined UMG-to-texture rendering with gaze input support (say, a line trace mapped into widget UV space and translated into mouse events) would be a terrific thing for VR. This feature request has also been sent to Epic.
Matinee-driven player animation
Skeletal mesh animation in Matinee can only be driven on ASkeletalMeshActor and classes derived from it. We wanted to animate our astronaut in the first and last sequences of the game with Matinee because of the flexibility of the tool. However, actors deriving from ACharacter cannot have their skeletal mesh animation directly controlled by Matinee.
Normally, when doing cinematics in FPS games, you simply fade to a specially crafted scene where the player character is hidden and a set of dedicated skeletal mesh actors is spawned. In VR, we don’t want that kind of immersion-breaking fade. We wanted to reuse the same character object both for Matinee sequences and for the player to control, and at the end of a sequence we wanted the character to seamlessly switch to player-controlled mode.
In EV, we had to use a dummy skeletal mesh that is animated by Matinee, and attach our real character (hidden) to this mesh. At the end of the sequence, we hide the dummy and show the real character. Since it wasn’t possible to match both meshes’ positions exactly, you may notice a small discontinuity when the switch is made.
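In code, the hand-off at the end of a sequence amounts to something like the following (a rough sketch of the idea using the newer UE4 attachment API for brevity; class and function names are illustrative):

```cpp
// DummyMesh is the ASkeletalMeshActor that Matinee animates; Character is the
// real, playable character.

void AEVGameMode::BeginMatineeSequence(ASkeletalMeshActor* DummyMesh, ACharacter* Character)
{
    // Hide the playable character and let the dummy carry the animation.
    Character->SetActorHiddenInGame(true);
    Character->AttachToActor(DummyMesh, FAttachmentTransformRules::SnapToTargetNotIncludingScale);
}

void AEVGameMode::EndMatineeSequence(ASkeletalMeshActor* DummyMesh, ACharacter* Character)
{
    // Swap visibility and detach; the poses never match perfectly, hence the
    // small discontinuity mentioned above.
    Character->DetachFromActor(FDetachmentTransformRules::KeepWorldTransform);
    Character->SetActorHiddenInGame(false);
    DummyMesh->SetActorHiddenInGame(true);
}
```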
Closing words
Developing for VR poses a completely new set of challenges for developers and designers, especially on mobile. Many widely used patterns and solutions stop working and require you to step off the beaten path. There is still a tremendous number of issues to be resolved, especially around user interaction.
Mobile VR is especially difficult due to the severe limitations of the devices. You’ll have to get creative with performance tuning to get the best visuals while maintaining a consistent framerate.
Unreal Engine helped us transform our vision into a solid product. It’s still challenging to achieve some things, but considering that VR (especially mobile VR) is still in its infancy, that’s understandable. Thanks to the recent developments in UE4’s VR and mobile features, we are looking forward to working on our new projects.
If you have more questions related to Escape Velocity or VR in UE4 in general, feel free to send me a PM on Unreal Engine forums.