Sumo Digital shares how it created Spyder’s sophisticated animation systems

Sumo Digital Senior Programmers Nick Edwards and Andy Chapman

Founded in 2003, award-winning Sumo Digital employs over 700 staff across its nine studios in Sheffield, Nottingham, Newcastle, Brighton (The Chinese Room), Leeds (Red Kite Games), Leamington Spa (Sumo Leamington and Lab42), and Warrington in the UK, and Pune, India. Developing successful games across all platforms and genres, Sumo is recognized for its versatility, proprietary technology and creativity across a portfolio of games featuring titles for major publishers including Microsoft, Sony, SEGA, and 2K.

Sumo Digital is part of Sumo Group plc.

For more information visit: sumo-digital.com.
Hi, Sumo Digital Senior Programmers Nick Edwards and Andy Chapman here. Sumo Digital regularly runs internal game jams and gives winning entries the potential to get further development time to expand upon the concept. Unreal Engine-powered Snake Pass is an example of a previous game jam winner that made it into production, achieving a successful multi-platform launch.

Spyder is the latest entry to emerge from this process. Having secured a game jam win, the original team of three designers was expanded and went on to produce an engaging pitch demo that secured Spyder a place on Apple Arcade; this meant ensuring the game could run on modest mobile devices like the iPhone 6. When the title entered production, the team was expanded further. We came on board to rewrite the existing Blueprint-based Agent 8 prototype in C++, with the intent of making it more characterful, maintainable, and performant.

Here you will find a breakdown of what was required to reach those goals.
 

Animation

First steps
Before we began development of the final version of our protagonist, Agent 8, we reviewed the earlier iterations of the character produced for the Game Jam and the prototype of Spyder. Our artists and animators wanted to push the characterization of the spider, but it would be difficult to do with the procedural animation employed in our earlier versions.
 

Moving away from the procedurally animated approach posed myriad challenges. We required a skeleton with enough bones for an X-legged character, but few enough that we could use GPU skinning on our mobile target platforms. We needed to avoid creating a vast animation set for locomotion to keep us within development, disk, and memory budgets. Therefore, we would need to adapt the limited set of animations to cope with navigating around a varying environment with complete user control.
 
Equipping Agent 8 with an (exo)skeleton
One of the crucial requirements for pursuing an art-driven approach for the character was to create a skeleton for the spider so that our animators could bring Agent 8 to life. Creating this skeleton posed a challenge, as the number of legs pushed the bone count beyond the 75-bone limit our target mobile platforms imposed for GPU skinning. Past this limit, Unreal Engine falls back to CPU skinning, which incurs a hefty performance penalty.

Fortunately, we were able to dig into the engine code and make modifications to add support for up to 256 bones on ES3.1 platforms using the Metal API. You can find the changes we made in a Pull Request on GitHub (login required). With Spyder being an Apple Arcade title, we could rely on all our target platforms using the Metal API and therefore lay this issue to rest.
 
Finding our feet
The most challenging part of animating Agent 8 was dealing with the number of legs, and how they needed to interact with the environment. In most games, characters move on a single plane. When moving up and down slopes or stairs, the capsule stays upright, and developers employ Inverse Kinematics (IK) to prevent feet from animating through the ground. A physics trace from some distance above where the foot currently is to some distance below may find a surface that should constrain the foot.

With Agent 8, a different solution was needed to deal with the character being able to move across multiple different planes in the environment at once. It wouldn’t be good enough to do “reactive” IK, as we would end up with feet dragging along surfaces in situations where Agent 8 should instead be adapting leg movement arcs in anticipation of moving onto a different surface, such as when walking onto a perpendicular wall. The many possible combinations of geometry meant that we could only produce animation sets for a single flat XY plane. Therefore, we needed to develop “predictive” IK to solve cases like the wall traversal example above, where we want to anticipate moving onto a different surface and warp the animation accordingly.
An illustration showing the basic result we wanted to achieve.
While standing still on an external corner, finding the right positions to IK Agent 8’s feet is a relatively straightforward task. When moving, more problems arise. We need to estimate where to place each foot so that we can warp the motion of the walk cycle to curve up a wall or over an edge. Otherwise, the character would drag their feet through the wall they were moving onto, or the corner they were moving over.
Agent 8’s predictive IK anticipates where a foot is going to land.
Animation Modifiers were used on the locomotion animations to compute and store where the feet leave from and arrive at, in component space. First, our animators produced all the animations with root motion, so we could determine the motion for every frame of the animations. Next, the Animation Modifier marks the period(s) of the animation where the foot is planted (the foot bone at or near the Z=0 XY plane). For each frame, we can then compute the relative transform between the current root transform and the root transform at the recorded moments where the foot is on the floor. We multiply this relative transform with the component-space location of the foot when placed or about to raise, producing the final values. We deconstruct these location vectors into their X, Y, and Z components, and store them in separate float curves in the animation. Three float curves for each arrival and three for each departure location for each foot results in 36 float curves.
Target foot locations are encoded as float curves for each leg and vector component.
Arriving and departing foot locations are generated by the Animation Modifier.
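As an illustration, a minimal Animation Modifier along these lines might look like the sketch below. The two helper functions and the curve naming are illustrative stand-ins, not our shipping code, which does considerably more bookkeeping per contact period.

```cpp
// FootTargetModifier.h - a minimal sketch, not the shipping implementation.
#pragma once

#include "AnimationModifier.h"
#include "AnimationBlueprintLibrary.h"
#include "FootTargetModifier.generated.h"

UCLASS()
class UFootTargetModifier : public UAnimationModifier
{
    GENERATED_BODY()

public:
    // Feet to bake curves for, e.g. "leg_l1_foot".
    UPROPERTY(EditAnywhere, Category = "Feet")
    TArray<FName> FootBoneNames;

    virtual void OnApply_Implementation(UAnimSequence* Sequence) override
    {
        static const TCHAR* Axes[] = { TEXT("X"), TEXT("Y"), TEXT("Z") };
        for (const FName& Foot : FootBoneNames)
        {
            // Find the time the foot plants (bone at or near the Z=0 plane)
            // and its component-space location at that moment.
            const float PlantTime = FindFootPlantTime(Sequence, Foot);
            const FVector Arrive = SampleComponentSpaceFootLocation(Sequence, Foot, PlantTime);

            // One float curve per vector component, per arrival location.
            for (int32 Axis = 0; Axis < 3; ++Axis)
            {
                const FName Curve(*FString::Printf(TEXT("%s_Arrive_%s"), *Foot.ToString(), Axes[Axis]));
                UAnimationBlueprintLibrary::AddCurve(Sequence, Curve);
                UAnimationBlueprintLibrary::AddFloatCurveKey(Sequence, Curve, PlantTime, Arrive[Axis]);
            }
        }
    }

private:
    // Illustrative stand-ins; the real modifier scans the root-motion data.
    float FindFootPlantTime(const UAnimSequence* Sequence, FName Foot) const { return 0.0f; }
    FVector SampleComponentSpaceFootLocation(const UAnimSequence* Sequence, FName Foot, float Time) const { return FVector::ZeroVector; }
};
```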
Tracing all over the place
With the data on hand to figure out where the feet will lift from and fall, predictive IK is now possible. All the logic for finding IK target locations is contained within a custom AnimGraph node. Initially, we did it in the AnimInstance, but this introduced latency, which produced inaccurate results. We needed to allow the relevant animations to tick for up-to-date curve data to be available to sample, and to then do our traces and calculations before updating Virtual Bones, which the LegIK node could pull data from. The virtual bones serve as a scratch area for the transforms to be shared between different AnimGraph nodes. Ideally, we would have factored out the AnimGraph node into more re-usable sections, and we can see Control Rig being a solution for this in the future, once it is production-ready.

Diving deeper into the implementation of how our custom AnimGraph node uses the data, we first defined a static reference socket for each leg on the skeleton of Agent 8 from which to perform swept sphere traces to the departure and arrival locations. These were in the rough location of each coxa joint. With the spider on a single flat surface, the result of these traces should be their endpoints. On a surface next to a perpendicular surface, the result of the traces for some of the legs will hit the surface. On a surface next to an edge, the result of the traces for some of the legs will hit nothing. 
An illustration showing three of the many traces we employ in surface discovery.
If the first trace doesn’t hit, we perform additional traces to find a surface. We use a series of predefined techniques, sorted in the order in which they are most likely to succeed. The first additional trace is performed from the arrival/departure location straight down, to find a surface on slightly curving geometry such as a large sphere. If this trace fails, we do another from the endpoints of the previous trace back inwards towards the spider, to handle many simple external corners. We do further traces until we get a hit. If we don’t find anything, we return early and do nothing.
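In code, this cascade boils down to an ordered list of sweeps that we walk until one hits. Below is a simplified sketch rather than our shipping node: the channel, radii, and distances are illustrative, and the real implementation has further strategies.

```cpp
// Simplified surface-discovery cascade: try sweeps in order of likelihood,
// stop at the first hit. Channel, radius, and distances are illustrative.
bool FindFootSurface(UWorld* World, const FVector& CoxaLocation,
                     const FVector& TargetLocation, const FVector& SpiderUp,
                     FHitResult& OutHit)
{
    const FCollisionShape Sphere = FCollisionShape::MakeSphere(2.0f);
    FCollisionQueryParams Params(TEXT("FootIK"), /*bTraceComplex=*/true);

    // 1) Straight from the coxa socket to the departure/arrival location.
    if (World->SweepSingleByChannel(OutHit, CoxaLocation, TargetLocation,
            FQuat::Identity, ECC_Visibility, Sphere, Params))
    {
        return true;
    }

    // 2) From the target location straight "down" (along -up), to find a
    //    surface on gently curving geometry such as a large sphere.
    const FVector Below = TargetLocation - SpiderUp * 20.0f;
    if (World->SweepSingleByChannel(OutHit, TargetLocation, Below,
            FQuat::Identity, ECC_Visibility, Sphere, Params))
    {
        return true;
    }

    // 3) From the end of the previous sweep back in towards the spider, to
    //    catch simple external corners. Further strategies follow the same
    //    pattern; if everything misses, we return early and do nothing.
    if (World->SweepSingleByChannel(OutHit, Below, CoxaLocation,
            FQuat::Identity, ECC_Visibility, Sphere, Params))
    {
        return true;
    }

    return false;
}
```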

Once we find a surface, we do some math to figure out where to place the foot along the plane, according to several variables. One such variable is how far away we could sensibly place the foot: we use the current distance between the coxa and the arrival foot location to estimate a range for how far the foot can go without introducing hyperextension in the leg.
Hyperextension is the undesirable stretching of legs.
After “projecting” the target foot location along the impact surface, we then do additional physics traces to determine whether the foot is still on a surface. If it isn’t, we repeat the process from above to find another surface, and then try to project along that surface instead. Our projection logic is robust enough to handle surfaces with normal vectors facing entirely away from us, so we deal well with determining a foot location in cases where Agent 8 reaches one of its legs over and around onto a surface completely hidden from its view.
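The projection step itself is mostly plane math. Here is a minimal sketch of the idea, with the hyperextension limit reduced to a simple distance clamp from the coxa; our real range estimate is derived as described above.

```cpp
// Project the desired foot location onto the discovered surface plane, then
// clamp its distance from the coxa so the leg can't hyperextend.
// MaxReach is illustrative; we derived ours from the coxa-to-arrival distance.
FVector ProjectFootOntoSurface(const FVector& DesiredFoot, const FVector& Coxa,
                               const FHitResult& Surface, float MaxReach)
{
    // Slide the desired location onto the plane of the hit surface.
    FVector OnPlane = FVector::PointPlaneProject(
        DesiredFoot, Surface.ImpactPoint, Surface.ImpactNormal);

    // Hyperextension guard: pull the foot back towards the coxa if needed.
    const FVector ToFoot = OnPlane - Coxa;
    if (ToFoot.Size() > MaxReach)
    {
        OnPlane = Coxa + ToFoot.GetSafeNormal() * MaxReach;
    }
    return OnPlane;
}
```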

One thing to note is that, in most cases, we double up our traces. Spyder utilizes two separate complex collision meshes for most of the environment and objects. One is a “simplified” approximation of the render mesh (still a tri-mesh, but with less detail). The other is the full render mesh collision itself. The character movement and camera systems are the primary users of the simplified collision, with the render mesh collision only explicitly used by the animation system. The simplified collision was mostly associated with a “walkable” collision channel, with the render mesh remaining associated with the visibility channel. The traces for IK use both; the simplified collision is more useful for getting a broad sense of where the foot might want to be placed, due to things like rivets and other environmental elements not being represented in this collision. We use the render mesh collision to make sure the feet are on visible geometry.
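In practice, that doubling looks like two sweeps per strategy, one per channel. A sketch, with ECC_Walkable standing in for a project-defined trace channel:

```cpp
// Sketch: query the simplified "walkable" collision for a broad answer, then
// confirm against the full render-mesh collision (visibility channel).
// ECC_Walkable stands in for a project-defined custom trace channel.
#define ECC_Walkable ECC_GameTraceChannel1

bool TraceBothCollisions(UWorld* World, const FVector& Start, const FVector& End,
                         FHitResult& OutBroad, FHitResult& OutRender)
{
    const FCollisionShape Sphere = FCollisionShape::MakeSphere(2.0f);
    FCollisionQueryParams Params(TEXT("FootIKDual"), /*bTraceComplex=*/true);

    const bool bBroad = World->SweepSingleByChannel(
        OutBroad, Start, End, FQuat::Identity, ECC_Walkable, Sphere, Params);
    const bool bRender = World->SweepSingleByChannel(
        OutRender, Start, End, FQuat::Identity, ECC_Visibility, Sphere, Params);

    // Only accept a placement both representations agree on: the simplified
    // mesh says "roughly here", the render mesh says "on visible geometry".
    return bBroad && bRender;
}
```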

A note on performance: we were initially quite cautious about doing too many traces, given the specs of some of our target platforms and the complexity of the collision we query. So cautious, in fact, that a lot of early IK work used a single trace for each leg each frame, switching trace targets using AnimNotifies. We even considered only tracing once when the target foot location changes, and not tracing each frame at all. In the end, we found the cost of performing the traces wasn’t as significant as we feared. In our shipping implementation, we trace against the location each foot is departing from and going to arrive at every frame, for both types of collision, and until a trace strategy succeeds. As a result, the animation system can execute up to 200 physics queries a frame when Agent 8 is traversing complex geometry. If necessary, we would’ve optimized this, but with only a single character executing this logic, it didn’t contribute much to our overall frame time.
 
Motion warping
With the logic in place for finding the departure and arrival foot transforms, we need to use these to affect the animation in some way. We approached this with the idea of warping the raw animation’s ground plane onto a spline which we generate using the transforms. We still have the reference points for the departure/arrival positions in component space, and by using these, we can figure out an alpha value for where the current position of the foot is between these two positions. Using this alpha value, we can determine a point on the spline to use, which becomes our new “ground” position. Next, we calculate the relative transformation between the reference ground position and where the foot currently is, and then apply this relative transformation to the new spline ground position. The result is where the foot should be, using the spline as our new warped ground “plane.”
The same set of locomotion animations are used regardless of the geometry being traversed.
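Stripped of the spline bookkeeping, the warp is a change of basis from the flat reference ground to the generated spline. A condensed sketch, with a spline-evaluation callback standing in for our actual spline:

```cpp
// Warp a foot position from the animation's flat ground plane onto the
// generated spline "ground". EvaluateGroundSpline is a stand-in for our
// spline sampling; Departure/Arrival are the component-space references
// baked by the Animation Modifier. Assumes the flat reference ground frame
// is identity-aligned (component space, Z-up floor).
FVector WarpFootToSpline(const FVector& AnimFoot, const FVector& Departure,
                         const FVector& Arrival,
                         TFunctionRef<FTransform(float)> EvaluateGroundSpline)
{
    // How far along the stride is the foot, on the flat reference ground?
    const FVector Stride = Arrival - Departure;
    if (Stride.IsNearlyZero())
    {
        return AnimFoot;   // standing still: nothing to warp
    }
    const float Alpha = FMath::Clamp(
        FVector::DotProduct(AnimFoot - Departure, Stride) / Stride.SizeSquared(),
        0.0f, 1.0f);

    // The matching point (and orientation) on the warped ground spline.
    const FTransform SplineGround = EvaluateGroundSpline(Alpha);

    // Offset of the foot relative to the flat reference ground at this alpha...
    const FVector FlatGround = FMath::Lerp(Departure, Arrival, Alpha);
    const FVector RelativeOffset = AnimFoot - FlatGround;

    // ...re-applied in the spline ground's frame.
    return SplineGround.TransformPosition(RelativeOffset);
}
```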
After doing this process for all the spider’s feet, we shove the results of these computations into a series of virtual bones that pass them along to the LegIK node, which implements the FABRIK IK solver. The LegIK node worked out of the box with our spider’s four-bone-per-leg chains.
Surface discovery is critical in figuring out locations for placing feet.
Dynamics
Alongside the leg work, we made sure to spend time finessing other aspects of the character animation as well, primarily in how other parts of Agent 8's body respond when moving around the environment.

Antennae
One such area is the radio antennae mounted in Agent 8's rear hatch. These not only provide our protagonist with a means of receiving radio communications, but also help reinforce the orientation of the character. As such, it was important that we could introduce some dynamism to the antennae according to their orientation relative to gravity. To do this, we made use of the AnimDynamics node to provide motion on the chain of bones belonging to each antenna. Our animator was able to play around with the many settings on the node to produce the results he desired, without us having to write any custom code. Not only did we get the antennae tilting according to gravity, but we also gained nice secondary animation for free when Agent 8 locomotes around or plays their idles. This adds some nice bounce and additional interest to the character, and as development progressed, the antennae became affectionately referred to as deeley boppers.
 
Agent 8’s antennae respond to changes in orientation.

Body

Once the antennae had been dynamized and the leg IK had matured far enough, we reflected on how we could apply physical reactions to the body of Agent 8. This started with a desire to replicate the effects of gravity on the body of the spider, so that it would hang further away from the ground when upside down. This meant modifying the transform of the primary body bone to supply translation and rotation offsets. Without any other systems in place, this would've resulted in the entire mesh being offset, legs included. With component-space IK, this isn't an issue, as the legs reorient to wherever the body has been moved.

After we'd proven out the process of affecting the body with gravity, we then looked at implementing additional effects. From the start of the project, a frequently discussed idea within the character team was giving a sense of weight to Agent 8. When starting and stopping, or while sitting still on a moving object, the body would shift in response. To do this, we utilize a vector spring targeting the actual location of the spider, interpolating towards it from the previous body spring location. We impart a force on the body spring location in certain situations (such as when stopping) to increase the amount of wobble the body receives. The current body spring offset is also used to derive pitch and roll rotations, and these, along with the translation offset, are accumulated with the gravity effects before being applied to the body bone.
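A minimal version of such a body spring, integrated with semi-implicit Euler, might look like this; the stiffness and damping values are illustrative, and our shipping code additionally derives pitch and roll from the offset and folds in the gravity effects.

```cpp
// Minimal vector spring for the body wobble. Stiffness/damping values are
// illustrative; tune to taste. Update once per frame.
struct FBodySpring
{
    FVector Location = FVector::ZeroVector;   // current sprung location
    FVector Velocity = FVector::ZeroVector;

    float Stiffness = 120.0f;
    float Damping = 12.0f;

    void Update(const FVector& Target, float DeltaTime)
    {
        // Semi-implicit Euler: accelerate towards the target, damp velocity.
        const FVector Accel = Stiffness * (Target - Location) - Damping * Velocity;
        Velocity += Accel * DeltaTime;
        Location += Velocity * DeltaTime;
    }

    // Kick the spring (e.g. when the spider stops) to exaggerate the wobble.
    void AddImpulse(const FVector& Impulse) { Velocity += Impulse; }

    // Offset applied to the body bone; also used to derive pitch/roll.
    FVector GetOffset(const FVector& Target) const { return Location - Target; }
};
```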
 
Agent 8’s body spring results in fun, dynamic movement.
Gadgets
One of the primary methods through which Agent 8 interacts with the world is using gadgets. There are several of them in the game, and each one helps to provide the player with a tactile experience when progressing through each level. Each gadget provides a unique experience and is therefore controlled and animated in a different way. We wrote the gadgets such that the gameplay behavior is decoupled from the animation itself, which means we have full control over how they look when in use.
How does it even fit in there? Spider science.
Each gadget has an associated actor class that contains its physical representation, and each gadget comprises at least two skeletal mesh components – one for the arm, and one for the gadget head. The arm’s skeleton is shared between all gadgets, and each head has its own unique skeleton. We make use of the Copy Pose From Mesh AnimGraph node to propagate shared bone transforms from Agent 8 to the arm, and from the arm to the head. Most of the gadgets have Animation Blueprints implementing different types of manually calculated IK to produce the desired effect.
 
Emotional response
With full control over how Agent 8 animates, we wanted to push characterization even further with the development of an emotional response system. Taking specific actions or being in certain locations in the game can evoke emotions that trigger a different facial expression when walking around or idle/fidget animations when standing still. We have at least one idle and fidget animation for each of the 12 emotions in the game. We play them via instances of a custom node in the AnimGraph. The custom node is like the Random Sequence Player node but selects a single animation out of a list based on chance, loops a random number of times at a random play rate, and then stops playing. This allows us to transition between two instances of the node, each with a different bucket of animations, chance percentages, loop times, and play rates.
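The selection logic inside that custom node boils down to a weighted random pick plus randomized looping. A simplified sketch of just the selection, outside of any AnimGraph machinery; the field names and ranges are illustrative:

```cpp
// Weighted random pick over an emotion's animation bucket, with a random
// loop count and play rate. Names and ranges are illustrative.
struct FEmotionAnimEntry
{
    UAnimSequence* Sequence = nullptr;
    float Chance = 1.0f;          // relative weight
};

struct FEmotionAnimChoice
{
    UAnimSequence* Sequence = nullptr;
    int32 LoopCount = 1;
    float PlayRate = 1.0f;
};

FEmotionAnimChoice PickEmotionAnim(const TArray<FEmotionAnimEntry>& Bucket,
                                   FRandomStream& Rand)
{
    float TotalWeight = 0.0f;
    for (const FEmotionAnimEntry& Entry : Bucket) { TotalWeight += Entry.Chance; }

    // Walk the bucket until the accumulated weight passes the roll.
    float Roll = Rand.FRandRange(0.0f, TotalWeight);
    FEmotionAnimChoice Choice;
    for (const FEmotionAnimEntry& Entry : Bucket)
    {
        Roll -= Entry.Chance;
        if (Roll <= 0.0f)
        {
            Choice.Sequence = Entry.Sequence;
            break;
        }
    }
    if (!Choice.Sequence && Bucket.Num() > 0)
    {
        Choice.Sequence = Bucket.Last().Sequence;   // floating-point fallback
    }

    Choice.LoopCount = Rand.RandRange(1, 3);        // loop a random number of times
    Choice.PlayRate = Rand.FRandRange(0.9f, 1.1f);  // at a random play rate
    return Choice;
}
```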

Ensuring that IK worked well enough for each of the idle animations was key. They differ substantially from the locomotion animations we tested against when developing the IK systems. Thankfully, the solution we created only required a few tweaks and fixes to get these new animations working nicely.
See all 12 of Agent 8’s emotions.

Locomotion

Having 360-degree navigation and freedom of movement is a blessing and a curse. While it gives the player numerous freedoms not always afforded to them in traditional three-dimensional games, it can give the developer several potential headaches. Common systems used in orthodox first- and third-person games such as controls and cameras break down when the player starts to walk on walls and ceilings.
 
The Problem
We came up with a list of restrictions early in the project; these were a combination of design requirements and man-hour limitations (we had a deadline to meet and a chunk of level content to generate).
 
  • No level mark-up or hints (in other words: “This is a wall,” “This is an edge,” “This is a corner,” and so on). This was a man-hours limitation. There was no way we'd be able to keep on top of the evolving level layouts.
  • Relatively freeform geometry: because the real world isn't grid-based.
  • Moving surfaces: where's the fun in being stationary?

This cemented the fact that, given the potential complexity and occasionally dynamic nature of the level geometry we were planning, we would have to discover the spider’s local environment in real time.
 
Simple Beginnings
Knowing that we would need to perform some kind of tracing to determine the surfaces around the spider that he could walk on, we started out with the floating pawn movement component as a base and added gravity and a single call to multi sphere trace to it.

As a side note, we did have an issue when performing a sphere trace of zero length: the engine would detect this and force the length to be 1 on the x-axis (IIRC). We commented this out, and it didn't seem to cause any problems.

This floating-pawn solution provided promising results on the quickly thrown-together test level. We were getting quite a bit of information from that trace call. As it turns out, this was because the level was built from lots of individual cubes: a multi sphere trace returns multiple hit results, but only one hit result per object, which was fine for our first test level.
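For reference, that first pass amounted to something like the single sweep below; the channel, radius, and the one-unit sweep length are illustrative rather than our exact values.

```cpp
// First pass at surface discovery: one multi sweep around the spider.
// Returns at most one hit per overlapped object.
TArray<FHitResult> DiscoverNearbySurfaces(UWorld* World, const FVector& SpiderLocation,
                                          const FVector& GravityDir, float ScanRadius)
{
    TArray<FHitResult> Hits;
    const FCollisionShape Sphere = FCollisionShape::MakeSphere(ScanRadius);
    FCollisionQueryParams Params(TEXT("SurfaceScan"), /*bTraceComplex=*/true);

    // A short sweep "down" along gravity; a zero-length sweep is rejected by
    // the engine (it forces a small X offset), which we patched out locally.
    World->SweepMultiByChannel(Hits, SpiderLocation,
        SpiderLocation + GravityDir * 1.0f, FQuat::Identity,
        ECC_Visibility, Sphere, Params);
    return Hits;
}
```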

As soon as we tried this approach on a single static mesh, we were down to one hit result from the trace and not enough information.
 
Getting Cross
The next obvious approach was to just increase the number of sphere traces; this consisted of a single trace down the vertical axis of the spider and four compass traces: north, east, south, and west relative to the spider.

This improved things greatly and gave us literally five times more information. We now had to work out what to do with it.
 
Flying Carpet
I think we missed a trick in our variable naming here; we use a virtual "movement surface" (flying carpet would've been a much better name) that is calculated from the array of found surfaces we gathered when tracing.
Discovered Surfaces – The gray squares indicate a surface; the red arrow shows us how much influence that surface has.
We iterated over the list of newly discovered surfaces. First, we discarded any unsuitable faces: ones the spider could not possibly walk on (such as faces above the spider, or facing away from it). The remaining faces were then awarded a weight for how much we think they should contribute to the spider’s orientation, and a weight for how much they should influence the spider’s location. Various formulas were tried and tested to calculate this weight; the final formula consisted of a weighted sum of the following variables:
 
  1. The distance between the surface hit location and the spider’s origin normalized according to the maximum distance we scan to find the surface.
  2. The dot product of the surface hit normal and the spider’s up vector.
  3. The dot product of the spider’s forward vector and the direction to the hit point of the detected surface.

The first of these gave us a curve like the one below:
A graph showing how a surface's influence on the spider’s orientation diminishes over distance.
And by incorporating the second and third, we got a complex 3D surface that allowed us to bias the spider’s rotation towards surfaces we considered more important.
Examples of forward bias in the surface influence weights.
We then used the contents of our refined list to calculate the location and orientation of the virtual "movement surface."
The virtual movement surface, shown here in yellow.
This movement surface is not used as a target for the spider’s location and orientation, but as a guide to determine the spider’s current frame of reference. This information is used to tell the spider in which direction to "cling," and how to orient itself.
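Condensed to code, the weighting and averaging look something like the sketch below. We actually computed separate orientation and location influences; for brevity this sketch uses a single weight, and the per-term blend factors are illustrative rather than our shipped tuning.

```cpp
// Compute a surface's influence weight from the three variables above, then
// average the surviving surfaces into the virtual "movement surface".
struct FMovementSurface
{
    FVector Location = FVector::ZeroVector;
    FVector Normal = FVector::UpVector;
};

float SurfaceWeight(const FHitResult& Hit, const FTransform& SpiderTM, float MaxScanDist)
{
    const FVector ToHit = Hit.ImpactPoint - SpiderTM.GetLocation();

    // 1) Closer surfaces matter more.
    const float DistTerm = 1.0f - FMath::Clamp(ToHit.Size() / MaxScanDist, 0.0f, 1.0f);
    // 2) Surfaces aligned with our current "up" matter more.
    const float UpTerm = FVector::DotProduct(Hit.ImpactNormal, SpiderTM.GetUnitAxis(EAxis::Z));
    // 3) Surfaces ahead of us matter more (forward bias).
    const float FwdTerm = FVector::DotProduct(SpiderTM.GetUnitAxis(EAxis::X), ToHit.GetSafeNormal());

    // Illustrative blend; unsuitable faces fall out at zero weight.
    return FMath::Max(0.0f, 0.5f * DistTerm + 0.3f * UpTerm + 0.2f * FwdTerm);
}

FMovementSurface BuildMovementSurface(const TArray<FHitResult>& Surfaces,
                                      const FTransform& SpiderTM, float MaxScanDist)
{
    FMovementSurface Result;
    FVector LocationSum = FVector::ZeroVector;
    FVector NormalSum = FVector::ZeroVector;
    float TotalWeight = 0.0f;

    for (const FHitResult& Hit : Surfaces)
    {
        const float W = SurfaceWeight(Hit, SpiderTM, MaxScanDist);
        if (W <= 0.0f) { continue; }   // unsuitable or irrelevant face
        LocationSum += Hit.ImpactPoint * W;
        NormalSum += Hit.ImpactNormal * W;
        TotalWeight += W;
    }

    if (TotalWeight > 0.0f)
    {
        Result.Location = LocationSum / TotalWeight;
        Result.Normal = NormalSum.GetSafeNormal();
    }
    return Result;
}
```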

At this point, we had a pretty good system, and the traversal worked well around the current sandbox test level. Unfortunately, our test level was missing a literal edge case.

The movement system worked nicely over the miniature hills and vales, cliffs, and walls of our rather boxy initial test level. Problems started to appear when we started tackling more representative level geometry created by our art team.

The issue we called “ping-ponging” came from a feedback loop in the spider’s orientation system. Because we needed the system to react quickly and orient the spider correctly when traversing the levels, we couldn't overly damp this rotational movement.
The other big issue came from external corners greater than about 270 degrees. When the spider was traveling at full speed, it was possible to fly off the geometry and out into the wild blue yonder like a coyote being tricked by a roadrunner.

The apparent issue was that our current set of sphere traces wasn't "seeing" the cliff face you were expected to cling to as you shot off it.
This happened because we were relying on the spider’s orientation being adjusted as you hit the cliff edge to wrap you around it. That adjustment would allow the spider’s vertical trace to pick up the cliff, but at the speed the spider was traveling, it didn't happen. More traces were needed.
 
Head massager
Both problems were eventually solved by performing those additional traces, although we didn't make the jump to the final system in one big leap; as is usually the case, this was an iterative process.

At this point, the system used a table of spider-relative directional vectors, adding an extra trace to the existing system required adding an extra vector to each existing entry in this array. This solved the initial test case we had for the ping-ponging fault, but QA soon found another.
A visualization of the traces performed; a red trace indicates an intersection with world geometry.
By creating traces that meet underneath the spider, we appeared to have fixed the issue. But adding those extra traces highlighted that the system was quite clunky to update. Making any changes to the shape of the traces was quite labor-intensive and required us to calculate the individual vectors for each of the directions we were tracing. Basically, it needed refining to make the whole updating process a lot more streamlined.

A small refactor of this code to allow us to perform multiple traces through an array of connected vectors meant we could effectively trace a curve, and with a little extra code on top of that, we were able to perform a rotational sweep of those individual traces for the full 360 degrees around the spider.
The escalation of traces, from left to right – 1, 4, and 8 traces being performed.
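Generating the full sweep is straightforward once the trace shape is a polyline: define the curve once in the spider's local frame, then rotate it around the spider's up axis. A sketch of the idea, using line traces for brevity (we swept spheres), with illustrative counts and shapes:

```cpp
// Build the 360-degree fan of polyline traces: one local-space curve,
// rotated around the spider's up axis.
void SweepTraceFan(UWorld* World, const FTransform& SpiderTM,
                   const TArray<FVector>& LocalCurvePoints, int32 NumDirections,
                   TArray<FHitResult>& OutHits)
{
    FCollisionQueryParams Params(TEXT("TraceFan"), /*bTraceComplex=*/true);

    for (int32 Dir = 0; Dir < NumDirections; ++Dir)
    {
        // Rotate the whole curve about the spider's local up (Z) axis.
        const float AngleDeg = 360.0f * Dir / NumDirections;
        const FQuat Spin(FVector::UpVector, FMath::DegreesToRadians(AngleDeg));

        // Trace each segment of the rotated polyline in world space.
        for (int32 i = 0; i + 1 < LocalCurvePoints.Num(); ++i)
        {
            const FVector Start = SpiderTM.TransformPosition(Spin.RotateVector(LocalCurvePoints[i]));
            const FVector End = SpiderTM.TransformPosition(Spin.RotateVector(LocalCurvePoints[i + 1]));

            FHitResult Hit;
            if (World->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility, Params))
            {
                OutHits.Add(Hit);
                break;   // stop this polyline at the first intersection
            }
        }
    }
}
```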
Cache'ing in
As it happens, all the traces in the world still didn't completely fix our ping-ponging rotational oscillation problem. They reduced it drastically, but it was still there, and it was still noticeable.

The underlying cause was this feedback loop:
The spider orientation logic feedback loop.
Because we required the spider’s orientation to be responsive to changes in surface geometry, we couldn't raise the level of damping on the spider’s rotation any higher.

Unable to slow progress through the feedback loop, our solution was to break the loop when the problem was most noticeable: while the spider was stationary.

The red arrow on the diagram below shows where we break the loop.
The now interrupted spider orientation logic feedback loop.
To do this, we kept a cache of the detected surfaces. These were valid until the spider had moved more than a fixed distance or rotated more than a specified amount, or one of the cached surfaces had moved. When this happened, we cleared the cache and started again.
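The cache itself needs little more than the last scan results plus the pose they were captured at. A sketch with illustrative thresholds:

```cpp
// Cached scan results, invalidated when the spider moves or rotates too far;
// callers also clear the cache if a cached surface's owner moves.
struct FSurfaceCache
{
    TArray<FHitResult> Surfaces;
    FVector CachedLocation = FVector::ZeroVector;
    FQuat CachedRotation = FQuat::Identity;
    bool bValid = false;

    bool IsStillValid(const FVector& Location, const FQuat& Rotation) const
    {
        if (!bValid) { return false; }
        const bool bMoved = FVector::DistSquared(Location, CachedLocation) > FMath::Square(5.0f);
        const bool bTurned = Rotation.AngularDistance(CachedRotation) > FMath::DegreesToRadians(5.0f);
        return !bMoved && !bTurned;
    }

    void Store(const TArray<FHitResult>& InSurfaces, const FVector& Location, const FQuat& Rotation)
    {
        Surfaces = InSurfaces;
        CachedLocation = Location;
        CachedRotation = Rotation;
        bValid = true;
    }
};
```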

This finally fixed the issues we were seeing, and a pleasant side effect of this addition is that we vastly reduced the amount of scanning performed over time.
 
Let's not get physical
By basing our spider movement system on the floating pawn movement sample code, we were fairly committed to having a kinematic character. The original game design didn't have any call for physics-related shenanigans, and gravity would always be towards the surface we were walking over.

So, when the call came in for some physics-related additions to be made to the character’s capabilities (in the interests of gameplay), we were faced with a choice. We could either:
 
  • Enable physics on the main character and convert all the movement code to use forces.

Or
 
  • Simulate the forces we require in a more gameplay-friendly way.

We chose the latter.

The main influencing factor in guiding that decision was the spider character we had started the project with.

That spider was written in Blueprint and had evolved from the original internal Sumo Digital Game Jam Blueprint prototype. It was built upon a physics object and all the movement, rotation, and surface attraction were implemented with the addition of forces to the character. Everyone who had used it had said how tricky it had been to balance a physically accurate simulation for fun and engaging gameplay.

This would be no good for us. We had to have a system that was easy to tweak and balance, so we went for the second option.
 

Drawing The Rest Of The Owl

As has been mentioned, we needed to add some extra capabilities to the spider.

Design wanted us to expand on the physical materials in Unreal Engine; specifically, the way friction was simulated.

Because we were already doing our own physics simulation on the spider, it was fairly easy to expand the friction calculations within this to support the sticky surfaces that had been requested.
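Because the "physics" was already ours, extending friction meant reading the physical material off the contact and folding an extra coefficient into our own velocity update. A sketch, with a project-defined "sticky" surface type standing in; note that FHitResult only carries the physical material if bReturnPhysicalMaterial was set on the originating query.

```cpp
// Apply our own friction model using the contact's physical material.
// SurfaceType1 stands in for a project-defined "Sticky" EPhysicalSurface.
#include "PhysicalMaterials/PhysicalMaterial.h"

void ApplyCustomFriction(FVector& Velocity, const FHitResult& Contact, float DeltaTime)
{
    float Friction = 1.0f;
    if (const UPhysicalMaterial* PhysMat = Contact.PhysMaterial.Get())
    {
        Friction = PhysMat->Friction;
        // Our extension: sticky surfaces brake far harder than the default.
        if (UPhysicalMaterial::DetermineSurfaceType(PhysMat) == SurfaceType1 /* "Sticky" */)
        {
            Friction *= 4.0f;   // illustrative multiplier
        }
    }
    // Simple per-frame damping driven by the friction coefficient.
    Velocity *= FMath::Clamp(1.0f - Friction * DeltaTime, 0.0f, 1.0f);
}
```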

The second request was for support for our custom in-game wind system.

This custom wind system had been developed to allow us to blow gameplay objects around in the level, and we needed to get the spider to react to it in the same way. Moving the spider around using this system was another simple addition to the code; we had already added implementations of AddForce and AddImpulse functions to the spider, so it was just a matter of hooking these up.

It wasn't all smooth sailing, though: getting this system to be consistent across a range of frame rates proved to be a lot trickier than we anticipated.

The system worked well on our development PCs running at quite high frame rates. However, it was a different matter when we got it onto the lower-end devices we were supporting.

This wouldn't necessarily have been a problem if we had only been applying wind to the spider, but we were moving gameplay objects around using the same system, and this gave us discrepancies between the behavior of the two.

We spent quite a bit of time debugging and refining our own physics simulation before eventually implementing physics sub-stepping within it. This last addition got us the parity in the results we needed.
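The sub-stepping in question is the standard fixed-timestep accumulator: integrate the simulated forces in fixed slices so high and low frame rates produce the same result. A minimal sketch, with an illustrative step size:

```cpp
// Fixed-timestep sub-stepping for a hand-rolled force simulation, so wind
// moves the spider and gameplay objects identically at 30 or 120 fps.
void SimulateWithSubstepping(float DeltaTime, FVector& Velocity, FVector& Location,
                             const FVector& AccumulatedForce, float Mass)
{
    static constexpr float MaxSubstepDt = 1.0f / 60.0f;   // illustrative

    float Remaining = DeltaTime;
    while (Remaining > KINDA_SMALL_NUMBER)
    {
        const float StepDt = FMath::Min(Remaining, MaxSubstepDt);

        // Same integration as before, just in fixed-size slices.
        Velocity += (AccumulatedForce / Mass) * StepDt;
        Location += Velocity * StepDt;

        Remaining -= StepDt;
    }
}
```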
 

Further work

With Spyder shipped, the Code team had an opportunity to reflect on the development journey and the knowledge accrued, and to consider open questions.

In terms of potential future work, two features come to mind:
 
  • Make the spider a physics object.
    • This would be more aligned with Unreal Engine’s framework. In doing this, we could empower our designers (who are working solely in Blueprints) and give them the ability to prototype more freely using currently available out-of-the-box Unreal Engine functionality.
  • Alternate uses for this sticky locomotion tech.
    • Wipeout-style hover racing.
    • Super Mario Galaxy-style platformer.
    • Shadow of the Colossus gameplay.

Animation is an actively developed area of the engine, with some interesting new tech worth investigating as part of future work. We already mentioned Control Rig, the experimental scriptable rigging system, which could be a good candidate for reimplementing our IK logic in a more modular and powerful way. Inertial blending could provide gains in performance and blend quality. Animation Insights, the new profiling tool for inspecting gameplay state and live animation behavior, looks useful for development. There is a lot here that could provide avenues for future improvements.

In terms of future iterations of our animation systems, we would look at improving the temporal stability of our generated splines to reduce the chance of legs and feet flicking when traces return wildly different results from frame to frame. There is room for finessing our animation blending behavior to avoid blending between poses too quickly, such as when coming to a stop.

We hope you've enjoyed learning about our development journey with Spyder, and if you want to save the world with Agent 8, check out Spyder on Apple Arcade!
 
