Exporting metadata out of animations


I’m currently working on a 2D video game, and have decided to use synfig for the art, as it is a) awesome, and b) able to render from the command line. I’ve spent the last couple of days thinking about how I can embed helpful metadata into the artwork, and how to get it out at render time. The two primary goals are:

  • detect when events happen (such as when a footstep occurs)
  • track the position of a point throughout the animation (to pin facial expressions to the right place on a body, for example)

To solve the first problem, I have been using keyframe descriptions and manually parsing the .sif, but it seems like doing this for the second problem would be significantly harder. I figure I might as well do this in synfig itself, where there is already rendering code.
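For what it’s worth, the keyframe-scraping side only takes a few lines of XML parsing. This is a minimal sketch, assuming keyframes are serialized as `<keyframe time="...">description</keyframe>` elements in a plain .sif (a .sifz is the same XML, gzipped); verify the element layout against your own files:

```python
import gzip
import xml.etree.ElementTree as ET

def keyframe_events(path):
    """Collect (time, description) pairs for every keyframe that has a
    non-empty description.  Assumes <keyframe time="...">desc</keyframe>
    elements, which is what plain .sif files appear to contain."""
    opener = gzip.open if path.endswith(".sifz") else open
    with opener(path, "rb") as f:
        root = ET.parse(f).getroot()
    return [(kf.get("time"), kf.text.strip())
            for kf in root.iter("keyframe")
            if kf.text and kf.text.strip()]
```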

Before I start I’m wondering whether any functionality like this already exists, and if not, if the community at large would have any use for such features?


Regarding tracking the position of a point: you can export the parameter, and later parse the exported value node (its id is unique within the document) to see its movement over time. You still need to parse the sif file, though.
As for knowing when something has happened at a certain time, I don’t have a better idea than the one you’re already using.
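To illustrate, here is a hedged sketch of pulling the waypoints of an exported animated vector out of a .sif. The exact serialization depends on the value node type; in files I’ve seen, an exported animated origin shows up as an `<animated type="vector" id="...">` element with `<waypoint>` children, but verify against your own output:

```python
import xml.etree.ElementTree as ET

def exported_vector_waypoints(path, node_id):
    """Return [(time, (x, y)), ...] for an exported animated vector node.

    Assumes the node is serialized as <animated type="vector" id="...">
    containing <waypoint time="..."> children, each holding a
    <vector><x>..</x><y>..</y></vector> payload."""
    root = ET.parse(path).getroot()
    for node in root.iter("animated"):
        if node.get("id") == node_id and node.get("type") == "vector":
            return [(wp.get("time"),
                     (float(wp.find("vector").findtext("x")),
                      float(wp.find("vector").findtext("y"))))
                    for wp in node.iter("waypoint")]
    raise KeyError(node_id)
```

Note this only yields the waypoints, not the interpolated per-frame values, which is part of why a render-time export is attractive.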

I will take a look at this tonight; it seems to be what I need. Many thanks.

Sorry for the double post but I have a rudimentary version of this working and thought I’d post it here in the hope that it might be helpful sometime down the road.

Currently it dumps any keyframes with a description into a lua table, as well as tracking the pixel position of any exported origins. To enable tracking for an element, you must add a metadata entry with the element’s name as the id and “track” as the value.
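Reading that convention back out of a file is also cheap. A sketch, assuming the tracking flag above ends up stored as standard `<meta name="ELEMENT_ID" content="track"/>` entries in the canvas (hypothetical convention, not a synfig built-in):

```python
import xml.etree.ElementTree as ET

def tracked_ids(path):
    """Return the set of exported-element ids marked for tracking via
    <meta name="ELEMENT_ID" content="track"/> entries (the hypothetical
    convention described above)."""
    root = ET.parse(path).getroot()
    return {m.get("name") for m in root.iter("meta")
            if m.get("content") == "track"}
```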

Very interesting!

Could this be made to work the other way around? - Take tracking data from one of the 2D tracking tools (Tracksperanto, Blender etc.) and apply the motion to the origin of a layer?

Quadrochave has some experience creating animated sif files based on predefined structures (explosions, smoke, etc.).

I don’t know if he has worked further or released any version of that though.

After some further tests, I don’t think this avenue is going to work. The following situation is problematic:

  • Encapsulate a circle and export its origin
  • Animate a rotation transformation to spin the circle
  • The circle’s origin stays in local space and thus never changes - despite the movement of the circle

This seems to me like a bug - why does exporting a value give its value in local canvas space, rather than in world space? Or maybe I misunderstand what export does.

Is there a way to expose the world space of an object? If not I’m going to look at hacking it into the synfig renderer tomorrow :slight_smile:

Can you post a sample file?

Attached is my test file. Notice how when you scrub through it, the exported circle’s origin never changes, even though the circle moves.

EDIT: I spent a chunk of the afternoon looking through how the rendering system works, and I realize why my test file doesn’t work [the rotate layer renders the circle, and then rerenders it without actually changing anything]. I notice however that there is a Transform::perform method which if called recursively for each transformation would presumably work. Are there any existing mechanisms to determine all of the transformations applied to a layer?
circle.sifz (819 Bytes)

The way synfig works is a bit special.
Synfig defines its layers using vectors, but when a layer is passed to another layer to perform a transformation, it is passed in raster form. This is good and bad at the same time. It is good because it allows transformations of the rendered results of the layers, and a great set of effects can be done this way. It is bad because the time to perform an effect increases a lot, since the data is in raster form rather than vector form.
When you use the rotate layer to rotate something, you effectively rotate the raster output, and each layer is still rendered in its own local coordinate system.
Although you see the origin of the circle rotate (as well as the radius duck), that is done by an internal transformation (rotate, scale, translate) via the Transform Stack. It is not accessible from the information in the sif file because it is done internally.

There are also transformations that cannot be translated to ducks at all. For example, the noise distort layer can’t pass its transformation to the ducks of the transformed layers.

On the other hand, Synfig has a very powerful value node system that allows you to perform complex movements using the Value Node Convert feature.

For instance, if you want the origin of the layer to actually take on the rotated value, you have to do this:

  1. Convert the Origin to Composite. This gives you the x and y components of the Origin.
  2. Convert the X (Y) component to the Cosine (Sine) type.
  3. Link the provided Angle sub-parameter to an angle duck (for instance, a star layer’s).
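Numerically, that convert chain just builds the rotation formula by hand. A sketch of what the composited origin evaluates to (center, radius, and angle being whatever you wire into the convert’s sub-parameters):

```python
import math

def rotated_origin(center, radius, angle_deg):
    """World-space origin produced by the Composite + Cosine/Sine chain:
    x = center_x + radius * cos(angle), y = center_y + radius * sin(angle)."""
    a = math.radians(angle_deg)
    return (center[0] + radius * math.cos(a),
            center[1] + radius * math.sin(a))
```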

More complex convert types exist, and they can perform lots of geometrical transformations. See ‘Convert’ on the wiki (ugh, it is down right now!).

This won’t give you the numeric X and Y positions of the circle’s origin in the sif file, though, because Synfig only stores the formula there.

In the future, another output format for the animation could be coded; call it “raw data”. It could store current frame vs. value for each exported value node, so the Synfig engine could be used as input for other applications.

If you are very interested in this, and I have time, I could try to code that kind of special export format. It is not a two-line patch, but it could be interesting for Synfig’s integration into an animator / game-creator tool-chain.


Hi Genete,

While your method for rotating the circle would work, I fear it might be too much for my artists - especially when the path isn’t necessarily geometric. Your description of exported raw data, however, sounds like exactly what I need.

I understand it won’t be a trivial patch, but as a rough estimate, how much work would you say this feature would require? If it’s not too big and scary, I would be more than willing to contribute and help implement it.


To implement it, you need to create a new target in the render system, store the data internally, and then flush it to the user-selected file.
I can’t estimate an implementation time, since I have very little free time. The problem is that there are lots of other bug fixes and features more useful to the whole artist community (like the new Cairo render support, sound support, or a GUI for the bone system) that would otherwise go unimplemented or unfixed.

Luckily, lately I am not the only one coding here, so digging further into the implementation of this feature could make it possible to have it in a coming release.

Consider that implementing a feature needs a lot of user feedback to sustain the enthusiasm of the volunteer coders. Especially the output format is something that should be defined first, to allow a straightforward implementation instead of trial and error.

Also, you’re welcome to take a look at the code and propose a patch. Please don’t hesitate to ask whatever questions you might have about the Synfig code structure. I’ll help where I can.


I’ve forked synfig and have a (very) basic start on this feature.

I made a new layer, with some logic inside accelerated_render, in an effort to get the raster-space position of a duck. This kind of works, but occasionally renderdesc’s tl and br aren’t the bounds of the image, which throws off the pixel-detection code (which I stole from geometry/circle). What’s more perplexing is that accelerated_render is never called if the layer is encapsulated. That kind of explodes my brain.

My initial approach to solving this problem is as follows:

  • Add a track_point layer with a position duck
  • Make the track_point’s render code find the pixel in raster space, and save it somewhere in the render stack
  • Update the relevant transformation layers to transform these tracked points in the render stack
  • Have a locking mechanism so encapsulated transforms can’t touch parent tracked points
  • At the end of each frame, output these points somewhere, and start fresh
  • At the end of rendering, collate all of these calculated points and export them (maybe as XML?)

What are your thoughts on this? Is there an easier way to do it?


The only thing I don’t see clearly in this approach is the following:

To be able to do that, you shouldn’t go the raster way (examining pixels) but instead perform the reverse operations of the transformation layers that affect the current layer.

Each layer has a hit_check() function that takes a point in 2D space and asks the layer whether that point lies on a non-transparent part of the layer. You can see that this virtual function is implemented in all the geometry layers. They return themselves when the color generated by the layer at the given point is not transparent; otherwise, the hit_check call is passed down to the layer’s context, to see if there is another layer below that could be hit by the point.

Something similar could be done, but in reverse. I don’t know exactly how yet, but each layer should be able to ask the layer immediately above it what value it returns for a given point. This way the recursive call would end up at the top layer, which would return the fully calculated point to the asking layer.
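That ask-the-layer-above scheme could be sketched like this (hypothetical names and a Python stand-in; the real version would live in the C++ layer classes):

```python
import math

class Layer:
    """Minimal stand-in for a synfig layer.  `above` is the layer
    immediately above this one in the layer stack."""
    def __init__(self, above=None):
        self.above = above

    def local_transform(self, point):
        return point  # identity for non-transform layers

    def to_world(self, point):
        """Apply this layer's own transform, then recurse upward; the
        topmost layer returns the fully transformed (world-space) point."""
        point = self.local_transform(point)
        return self.above.to_world(point) if self.above else point

class Rotate(Layer):
    def __init__(self, angle_deg, above=None):
        super().__init__(above)
        self.a = math.radians(angle_deg)

    def local_transform(self, point):
        x, y = point
        c, s = math.cos(self.a), math.sin(self.a)
        return (c * x - s * y, s * x + c * y)
```

For example, a circle sitting below a 90° rotate layer would ask upward: `Layer(above=Rotate(90)).to_world((1, 0))` lands near `(0, 1)`.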

Then, once a layer knows how to obtain the global coordinates of a local value, you can extend the target class with another target type that retrieves only the specially marked data from the layers (each parameter could carry a flag marking it for export as data to a file).

This way, the user would only need to mark which parameters they want to export to a file, and the target would ask each layer to retrieve the global coordinates of each such parameter and write them to the output file. Of course, parameters that aren’t spatial transformations (color, for example) could also be exported.

I hope it helps.