Animation: Clips
August 6, 2023
This post is a continuation of the skeleton topic, covering how animation clips are imported into engine runtime formats. Originally I had planned to cover this in the last post, but just going over skinning data and skeletons got pretty lengthy!
First let's go over what an animation clip is. In its simplest form it's just a list of bone transforms for each frame in the animation. A frame usually represents a 1/30th-of-a-second snapshot of the bone transforms. Most game animations are still authored and exported at 30 FPS, and interpolation is used at runtime to fill in the gaps between frames at faster frame rates. The bone transforms could be exported in any of the spaces I talked about in my last post; normally you'll see them exported in model space, local space, or additive to the t-pose. Using model space saves you matrix multiplications, since you don't need to traverse the bone hierarchy to get the model space transforms during/after your animation processing. Using local space can make things like animation compression more effective, since the bone transforms will be in a more compact range (pelvis transforms are around 0 cm instead of 60 cm above the "ground"). Local space does mean you have to traverse the skeleton and multiply each bone by its parent's transform when you want to get to model space. Additive to the t-pose essentially means each animation is additive to the skeleton's t-pose; it has similar compression benefits to local space, but instead of needing to traverse the entire bone hierarchy to get to model space, each bone can just be multiplied by its t-pose transform.
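To make the local-space traversal concrete, here's a minimal sketch of converting local transforms to model space. It assumes parents are ordered before their children in the bone array (a common convention); the types and names are illustrative, not Chronicle's actual code.

```cpp
#include <cmath>
#include <vector>

// Minimal quaternion + translation transform; illustrative types only.
struct Quat { float w = 1, x = 0, y = 0, z = 0; };
struct Vec3 { float x = 0, y = 0, z = 0; };

// Hamilton product of two quaternions.
Quat QMul(const Quat& a, const Quat& b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Rotate a vector by a unit quaternion: v' = q * (0, v) * q^-1.
Vec3 QRotate(const Quat& q, const Vec3& v) {
    Quat p{0, v.x, v.y, v.z};
    Quat inv{q.w, -q.x, -q.y, -q.z};   // inverse of a unit quaternion
    Quat r = QMul(QMul(q, p), inv);
    return { r.x, r.y, r.z };
}

struct BoneTransform { Quat rot; Vec3 pos; };

// Assumes parent[i] < i (parents before children) and parent[0] == -1.
std::vector<BoneTransform> LocalToModel(const std::vector<BoneTransform>& local,
                                        const std::vector<int>& parent) {
    std::vector<BoneTransform> model(local.size());
    for (size_t i = 0; i < local.size(); ++i) {
        if (parent[i] < 0) { model[i] = local[i]; continue; }
        const BoneTransform& p = model[parent[i]];
        Vec3 rotated = QRotate(p.rot, local[i].pos);
        model[i].pos = { p.pos.x + rotated.x, p.pos.y + rotated.y, p.pos.z + rotated.z };
        model[i].rot = QMul(p.rot, local[i].rot);
    }
    return model;
}
```

With this layout, a pelvis bone whose local translation is 60 cm above its parent ends up at its full model-space height only after the traversal, which is exactly the work model-space export avoids.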
Chronicle animations
Now let's take a look at Chronicle's animation clips. Currently clips are exported to runtime in a local space format, so to get model space transforms the bone hierarchy is traversed to compute the final transforms.
Animation clips are managed through the Chronicle Resource Manager, and as such the clip class is derived from ResourceBase.
First up is the KeyFrame class. This is just a basic class that contains a bone's transform (Quaternion and Vector) and its key (the frame number it's part of). Arrays of this class are used to store the clip data in the resource, and are also used at runtime to store poses extracted from a clip at a specific time, as well as blend results in the animation graph (this will be covered in the future).
Below you'll find a picture of the AnimClipResource class. You'll see that KeyFrame is used to store the majority of the data, mostly in mKeyFrames, which is a 2D array (frames x bones). You'll also see mRootMotion, which stores any root motion from the clip separately from the bone transforms, so it's easy to process or ignore root motion separately from the animation pose. The other thing you'll see that hasn't been covered yet is what I call MetaData. This contains extra data for the animation that isn't from the source animation file (fbx), such as flags (looping, partial, additive), triggers, and sync markers. Triggers are events that can be placed on a frame of an animation; more detail is provided below.
The main runtime functions to take note of are GetKeysForTime, GetRootMotion, and GetActiveTriggers, which have pictures below. For tools there is WriteAnimClipToFile, which will be covered in the export section.
Extracting the pose is pretty straightforward: find the key(s) that correspond to the time. If the time lands exactly on one frame you can just return that frame's data; if it's between two frames you have to go through each bone, interpolate between the two frames, and return that result. Currently I don't have any compression, so every frame and every bone will have data in an animation, and I don't have to check for holes or missing bones.
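A minimal sketch of that extraction might look like the following, using a normalized lerp for rotations (a cheap stand-in for slerp that works well over small frame gaps). The names and the frames-x-bones layout follow the post; the exact implementation is my own assumption.

```cpp
#include <cmath>
#include <vector>

// Illustrative types mirroring the post's description, not Chronicle's exact code.
struct Quat { float w = 1, x = 0, y = 0, z = 0; };
struct Vec3 { float x = 0, y = 0, z = 0; };
struct KeyFrame { Quat rot; Vec3 pos; int key = 0; };

Vec3 Lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
}

// Normalized lerp: negate one input if needed to take the shorter arc.
Quat NLerp(const Quat& a, const Quat& b, float t) {
    float dot = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
    float sign = dot < 0 ? -1.f : 1.f;
    Quat r{ a.w + (sign*b.w - a.w)*t, a.x + (sign*b.x - a.x)*t,
            a.y + (sign*b.y - a.y)*t, a.z + (sign*b.z - a.z)*t };
    float len = std::sqrt(r.w*r.w + r.x*r.x + r.y*r.y + r.z*r.z);
    return { r.w/len, r.x/len, r.y/len, r.z/len };
}

// keyFrames[frame][bone]; time is in frames (e.g. 2.5 is halfway between 2 and 3).
std::vector<KeyFrame> GetKeysForTime(
        const std::vector<std::vector<KeyFrame>>& keyFrames, float time) {
    int frameA = static_cast<int>(time);
    float t = time - static_cast<float>(frameA);
    if (t <= 0.0f)                       // landed exactly on a frame
        return keyFrames[frameA];
    int frameB = frameA + 1;             // caller keeps time inside the clip
    std::vector<KeyFrame> out(keyFrames[frameA].size());
    for (size_t bone = 0; bone < out.size(); ++bone) {
        out[bone].pos = Lerp(keyFrames[frameA][bone].pos, keyFrames[frameB][bone].pos, t);
        out[bone].rot = NLerp(keyFrames[frameA][bone].rot, keyFrames[frameB][bone].rot, t);
    }
    return out;
}
```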
Extracting the root motion, while similar, is a bit different than getting the pose. Root motion keyframes are deltas from the previous frame, so you need a range to extract root motion over instead of a single time. These ranges can span more than one frame, so you find the start frame, start percent, end frame, and end percent, and accumulate the deltas over that range.
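The accumulation over a fractional range could be sketched like this, weighting the first and last deltas by how much of each frame the range actually covers. This is my interpretation of the start/end percent handling described above (translation only, looping wrap-around left out).

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

// rootDeltas[f] holds the root translation delta from frame f-1 to frame f
// (rootDeltas[0] is unused and stays zero). Times are in frames, prevTime <= currTime.
Vec3 GetRootMotion(const std::vector<Vec3>& rootDeltas, float prevTime, float currTime) {
    Vec3 sum{};
    int first = static_cast<int>(prevTime) + 1;        // first delta the range touches
    int last  = static_cast<int>(std::ceil(currTime)); // last delta the range touches
    for (int f = first; f <= last && f < static_cast<int>(rootDeltas.size()); ++f) {
        // fraction of the (f-1, f) interval covered by (prevTime, currTime)
        float cover = std::min(currTime, static_cast<float>(f)) -
                      std::max(prevTime, static_cast<float>(f - 1));
        sum.x += rootDeltas[f].x * cover;
        sum.y += rootDeltas[f].y * cover;
        sum.z += rootDeltas[f].z * cover;
    }
    return sum;
}
```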
Collecting the active triggers, like root motion, takes a previous and current time. Triggers are single-frame events, so to prevent firing a trigger two game frames in a row when the clip time ends up between keyframes, you exclude any triggers that would have been active on the previousTime frame and add any that are on the currentTime frame.
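One way to express that exclusion is to fire any trigger whose frame falls in the half-open range after the previous frame and up to the current one. This is a hedged sketch of the idea, not Chronicle's exact GetActiveTriggers.

```cpp
#include <string>
#include <vector>

// Illustrative trigger type; the real class also carries payloads and sync flags.
struct Trigger { std::string name; int frame = 0; };

// Fires each trigger at most once: a trigger on prevTime's frame was already
// reported last game frame, so only frames in (prevFrame, currFrame] count.
std::vector<Trigger> GetActiveTriggers(const std::vector<Trigger>& triggers,
                                       float prevTime, float currTime) {
    int prevFrame = static_cast<int>(prevTime);
    int currFrame = static_cast<int>(currTime);
    std::vector<Trigger> active;
    for (const Trigger& t : triggers)
        if (t.frame > prevFrame && t.frame <= currFrame)
            active.push_back(t);
    return active;
}
```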
Clip Meta Data
Below you'll see the widget used to edit a clip's meta data. This meta data is extra runtime data for an animation clip that is stored separately from the source file. In the tools this meta data is loaded from an xml file, but on export it's embedded into the runtime file format and read from there in game. This is done to prevent losing data when re-importing a clip's source file (fbx); instead the animation is re-imported and then the meta data is re-applied from the xml file. You can see an example meta data file here: AnimX Example
Clip flags are pretty straightforward: just a list of bools/flags that can affect how an animation is used at runtime. Currently only Looping is used, which flags an animation as a cycle so that the animation graph knows to wrap around when hitting the end of the animation.
Next up are triggers, which are the more interesting part of the meta data. As mentioned above, triggers are "events" placed on frames of an animation. Triggers have a name and an assigned frame number, and special triggers can be flagged as sync markers. Sync marker triggers are used to sync up timing between blending animations, but that will be covered when I get around to making a post about Chronicle's animation blend trees and state machines. An animation can have any number of triggers, and a single frame can have multiple triggers as well. Currently the widget looks like the above, but it should really be a timeline so it's easier to understand. Triggers can also have a list of what I call payloads. These payloads can be defined in engine or game code. Payloads don't contain any logic internally; they just store extra data on a trigger/animation that other systems can check for and use to execute logic. For example, the Sound Payload contains a resource reference to the set sound and the bone name the sound should be played from. The SoundEmitterComponent on an object will check for any active sound payloads during its update and then use that data to play the sound at the bone's location.
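The data-only payload pattern might look roughly like this. The class names, fields, and lookup helper are assumptions based on the description above, not Chronicle's real API; the point is that payloads carry data and the consuming system (like a sound emitter) supplies the logic.

```cpp
#include <memory>
#include <string>
#include <vector>

// Base payload: pure data, no behavior. Derived types are defined in engine
// or game code; systems identify the ones they care about by type.
struct Payload {
    virtual ~Payload() = default;
    virtual const char* Type() const = 0;
};

// Hypothetical sound payload matching the post's description.
struct SoundPayload : Payload {
    std::string soundResource;   // resource reference, simplified to a name here
    std::string boneName;        // bone the sound should play from
    const char* Type() const override { return "Sound"; }
};

struct Trigger {
    std::string name;
    int frame = 0;
    bool isSyncMarker = false;
    std::vector<std::unique_ptr<Payload>> payloads;
};

// A consuming system scans an active trigger for the payload type it handles.
const SoundPayload* FindSoundPayload(const Trigger& trigger) {
    for (const auto& p : trigger.payloads)
        if (std::string(p->Type()) == "Sound")
            return static_cast<const SoundPayload*>(p.get());
    return nullptr;
}
```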
Clip exporting
Clip exporting starts right where the last post about skinned mesh and skeleton exporting ended. If a mesh was exported skinned and a skeleton was successfully exported, the exporter continues on to export any animations associated with said skeleton. Currently it re-exports every animation whenever the mesh is exported, since I'm not usually making changes to an individual animation source file, but for a real pipeline you wouldn't want this workflow. From the last post we know that all of the animation source files are collected by the Export Mesh widget; this list of files is then iterated over and processed with AssImp, also using some of the information already processed from the mesh and skeleton.
Like with the mesh, first we need to import the file into an aiScene with the same flags as used for the mesh source file. After that it's a bit simpler to get to the data we care about: the scene has an animation count and a corresponding array of animations that can be looped over. We start by collecting the basic information (bone count, tick rate, and animation length), and from that we create any arrays/vectors needed to store frame and bone data.
Now, using the skeleton we already processed, we start processing the data for each bone and frame in the animation. Since we converted bone names to bone indices during skeleton processing, we use that info to find the aiNodeAnim that corresponds to each index by name. If a node can't be found, all the data for that bone is set to identity.
If a valid node is found, we can process the frame data for that bone. Each node stores the rotation and translation in different arrays that may have different lengths if there are frame gaps in the source file, so they are processed separately. Below you'll find the translation processing for a boneIndex and keyIndex (frame number).
Then rotation is processed.
The aiNodeAnim also contains scale keys, but Chronicle doesn't currently support animated scale, so that is skipped over. If the Override scale is set on the mesh, that is handled next by simply scaling all the translation data by the override.
If the bone is the root bone (index 0), its transforms are also stored into a rootTransforms array so that, after the keyframes have been processed, root motion can be extracted if there is any.
The root motion processing is a bit more involved, so I'll go into some of the details here. First off, in most animation pipelines each of your skeletons will have a dedicated root bone that is used for storing root motion and isn't necessarily included directly in the main bone tree. Most of my data comes from Mixamo, where the root motion is directly on the hips, so I make a few assumptions here based on that fact.
Frame 0 is skipped, as it's assumed frame 0 of an animation won't have root motion; frame 0 is used as the reference point. Each frame for bone 0 is updated starting at frame 1 by calculating the delta from the current frame to the previous one; that delta is stored in the root motion data for that frame, and then the transform for the frame is set to match frame 0. You'll notice there are some exceptions for Y translation, roll, and pitch. Again, because of the lack of a dedicated root bone, I decided to only handle X/Z translation and heading root motion. The hips have some Y movement in almost every animation, so instead of having a moving threshold for deciding whether Y translation counts as root motion or a normal animation delta, I chose to ignore it.
And finally after root motion has been processed the animation can be written out.
Like the mesh and skeleton, the animation clip has tools-only functions for setting all of the data, and a WriteToFile to convert the data to a binary file format.
Finally we have the WriteAnimClipToFile function. Like Chronicle's other binary file formats, it first writes out a file version to handle loading/updating older data. It's pretty straightforward: write out the translation and rotation for each frame and bone, then the root motion if there is any, and finally you can see where we grab the MetaData and embed the xml string directly into the file, so that on load in game the flags and triggers can be read from that xml just like in the tools, but from the file buffer instead of a separate file read.
That concludes the overview of how Chronicle handles the skinned mesh and animation pipeline. As noted in both posts, things make some assumptions based on the data I'm currently using and definitely wouldn't hold up in a production pipeline. As always, I hope you enjoyed the posts, and please contact me with any questions.
I'm currently expanding Chronicle's animation graph functionality, but hopefully my next post will be about animation blend trees and/or state machines and won't take 8 months to post :).