Motion Distortion Layers

So I was trying out Synfig for a commercial kinetic typography project, and it looks like Synfig isn’t cut out for that kind of work, with plenty of things lacking (for instance, I literally had to add blur layers, duplicate layers, solid-color layers, and all the different kinds of settings, AND animate each letter of a twenty-word text… and it was painful). I realize that kinetic typography tools might not be a priority for the developers right now, so I won’t start a thread about it just yet.

But while working with the Noise Layer, I realized something that could be a useful feature in some cases to automate some kinds of animations that would otherwise need manual tweaking.

How about Distortion layers that affect the “Motions” of the object, but not the general shape?

For example: the Noise layer. When you animate from A to B, the motion is usually a straight line, no matter how you interpolate. Sure, you can place keys midway or even build a “path” to make the transition shaky, but that’s too much work when you have to do it manually for, say, a hundred different elements.

So why not just have a random “noise motion layer” that can sway the motion of the objects, like water currents affecting a boat, randomly before they reach the destination? Here’s a visual:

Noise Distortion Layer + Displacement parameter converted to Random =


Exactly what you need, I think!?

Synfig is great… Synfig is big… Synfig is giant… :wink: sometimes too much :open_mouth:

Nota bene:

Just start it, at least to record and share your ideas… From my point of view, I’d like Synfig to be able to do some kinetic things.

I don’t think that’s what I need. xD See, I need the “motion” to distort, but NOT the shape. This is especially useful when dealing with complex shapes, such as faces, whose surfaces you DON’T want deformed but which you still want to shake randomly. Motion distortion could also be helpful for creating camera-shake effects. :slight_smile:

That’s perfectly possible, but there is no good interface for it.
“Motion” is always defined by waypoints, right? A waypoint is just a holder for the following information:

- Value (*)
- Interpolation type (in and out)

(*) This value is a value node, which in this case is constant. But in theory it can be any type of value node (constant, animated, or “converted”).
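To make the idea concrete, here is a minimal Python sketch of the information a waypoint holds, as described above. All names here are hypothetical (Synfig’s real classes are C++ and named differently); the value is modeled as a zero-argument callable so it can stand in for a constant, animated, or converted value node.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Waypoint:
    """Hypothetical sketch of a waypoint: a time, a value node, and
    in/out interpolation types. Not Synfig's actual API."""
    time: float
    value: Callable[[], float]          # value node: constant, animated, or converted
    interpolation_in: str = "clamped"   # e.g. "tcb", "linear", "constant", "ease"
    interpolation_out: str = "clamped"

# A waypoint whose value node is constant (the common case):
wp = Waypoint(time=0.0, value=lambda: 1.5)
print(wp.value())  # 1.5
```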

You can do this recipe:

  1. Create your motion based on waypoints. Group it.
  2. Duplicate your group (name the two copies Original and Copy).
  3. Right-click each waypoint of Original and export it as wp1, wp2, etc.
  4. Right-click each waypoint of Copy and convert it to a Random value node.
  5. Right-click each waypoint of Copy again and export it as cwp1, cwp2, etc.
  6. Create one extra layer (name it Extra) with a parameter that can hold cwp1, cwp2, etc.
  7. For each pair cwpi, wpi:
    7.1) Connect Extra’s parameter to cwpi.
    7.2) Connect cwpi’s Link subparameter to wpi.
  8. Additionally, you can export and connect the Speed and Radius of the Random converted value nodes.

This way, in theory, you have the original motion (Original) and a random-driven motion controlled by Original plus the Speed and Radius value nodes.
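If the recipe worked, the end result would behave roughly like this minimal Python sketch (all names are hypothetical; Synfig is C++ and none of this is its API): the Copy motion follows the Original motion, displaced by a random offset whose magnitude is bounded by a radius and which is refreshed at a rate set by a speed.

```python
import math
import random

def original_motion(t):
    # Stand-in for the exported, waypoint-driven Original motion.
    return (t * 100.0, math.sin(t) * 50.0)

def random_driven(t, radius=5.0, speed=2.0, seed=42):
    """Copy motion: Original plus a bounded random offset.
    A new offset is drawn once per 1/speed-second time bucket, and the
    generator is seeded per bucket so playback is repeatable."""
    rng = random.Random(seed * 100003 + int(t * speed))
    angle = rng.uniform(0.0, 2.0 * math.pi)
    r = rng.uniform(0.0, radius)
    ox, oy = original_motion(t)
    return (ox + r * math.cos(angle), oy + r * math.sin(angle))
```

Here Speed and Radius play the same role as the exported subparameters in step 8: Radius bounds how far the random motion can stray, Speed sets how often it changes.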


This is not feasible at the moment because:

  1. The interface doesn’t exist. Exporting, connecting, etc. is very tedious, and you need intermediate layers to make the connections between the waypoints’ value nodes and their subparameters.
  2. Waypoints that have been converted and have exported value nodes among their child parameters cannot be loaded: a name-resolution conflict in the code prevents the saved file from loading. It is a bug that appears because a backdoor to the waypoint’s value node has been reached and the application wasn’t ready for that.

Alternatively, instead of fixing the above, a new interpolation type could be created: Random. It probably couldn’t be created directly, but it could be based on an existing waypoint. Internally it would work like the scheme described above, without needing to keep an Original motion alive: it would just store the base motion internally and show the random result instead. Of course, an interface for Speed, Radius, etc. would have to be created, as well as a way to disable the random result so the base motion can be edited properly.

I don’t show an example of the first solution because the resulting file can’t be loaded.


I agree with tushantin that adding some variation in a transition is perfectly achievable in Synfig, but lacks a bit of user-friendliness.

What I’d appreciate is if “convert > random” used the current (animated) value node as its “link”; that way I could simply animate the object, then use “convert to random” to just “add” some variation on top of the animation… without losing the work of animating the value node.

A reverse convert, “simplify into 1st term” and “simplify into 2nd term”, could be useful some day too, e.g. to “clean” the variations.

Attached is an example where I distort the paths of 3 moving objects by adding some randomness:

  • green circle position: decompose > animate x > convert y to random
  • red circle position: random > export link (to be able to animate it) > animate link (3 key waypoints along the path in black, which is only shown as a reference)
  • blue circle position: similar to red, but converted to a switch first, to enable/disable the variation when the circle should not be moving; link off is set to the red circle’s position at t=0, and link on is set to the animated+variation value (similar to the red circle above). A similar effect could be achieved more easily by animating the random “radius” of the variation (0 = no variation).
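The green-circle setup (decompose, animate x, convert y to random) can be sketched like this in Python. This is a hypothetical illustration of the behavior, not Synfig code: x remains a normal animated value while y becomes a random value node varying within a radius around its link.

```python
import random

def animated_x(t):
    # x stays a normal animated value node (linear motion here).
    return t * 60.0

def random_y(t, link=0.0, radius=10.0, speed=4.0, seed=7):
    """y converted to random: a value within +/- radius of the link,
    redrawn speed times per second, seeded per bucket for repeatability."""
    rng = random.Random(seed * 100003 + int(t * speed))
    return link + rng.uniform(-radius, radius)

def green_circle_position(t):
    # decompose: x and y are driven independently
    return (animated_x(t), random_y(t))
```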

How does this sound?
semi-random-tests.sifz (2.1 KB)

Hmm, I “think” there might already be a way to switch specific parameters to “Random” mode, but to what extent, I don’t know, since I haven’t tested it. My intention was that if you can distort shapes (and thus positions), then you should also be able to distort “only” the position, without the shape. The easiest example I can come up with is this:

Motion Scale. Rather than placing a Scale layer and expanding all the objects INCLUDING the space between them, a Motion Scale would expand just the space between them without touching the objects. This could easily be used to demonstrate “expansion of space” concepts in astronomy without much hard work. If we can do that with Scale, then we should theoretically be able to do it with any distortion layer.

The same can be done with “Rotate”, where we rotate the position of an object without the object itself rotating.
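The geometry behind both ideas is simple: transform each object’s origin about a common center while leaving the object’s own shape untouched. A hypothetical Python sketch (these are not Synfig layers, just the math they would apply):

```python
import math

def motion_scale(origins, center, factor):
    """Scale only the space between objects: each origin moves away from
    the center by the factor, but the objects themselves keep their size."""
    cx, cy = center
    return [(cx + (x - cx) * factor, cy + (y - cy) * factor)
            for x, y in origins]

def motion_rotate(origins, center, angle):
    """Rotate only the positions: each origin orbits the center, but the
    objects themselves keep their orientation."""
    cx, cy = center
    c, s = math.cos(angle), math.sin(angle)
    return [(cx + (x - cx) * c - (y - cy) * s,
             cy + (x - cx) * s + (y - cy) * c)
            for x, y in origins]

# "Expansion of space": origins spread apart, objects stay the same size.
print(motion_scale([(1, 0), (2, 0)], (0, 0), 2.0))  # [(2.0, 0.0), (4.0, 0.0)]
```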

But I see what you mean about there being no interface for this… What interface would we need to build?

It can be done via a layer, but a new one. To understand this, you have to know how layers work in Synfig.

There are two kinds of layers: Composite layers and Non-composite layers.

Composite layers do this:
They receive rendering information (x size, y size, quality, etc.) and a surface to render on.
First, the layer asks its context (the layers below it) to render themselves onto the given surface, and waits for the result of that request. Once the context has rendered onto the surface, the layer (which knows its own parameters) renders its shape (or whatever it represents) onto the resulting surface using its current blending method, taking into account the rendering information passed in (the rendering parameters).
Once rendered, it returns the resulting surface to the caller, usually another layer.
This goes on until the first layer’s call returns and the process ends.
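The composite rendering chain described above can be sketched in a few lines of Python. This is a deliberately simplified model (Synfig’s renderer is C++ and far more involved); surfaces are stand-in lists and blending is a stand-in append.

```python
class CompositeLayer:
    """Sketch of a composite layer: render the context first, then
    blend this layer's own content onto the result."""
    def __init__(self, name, below=None):
        self.name = name
        self.below = below  # the layer directly beneath, or None

    def render(self, surface, params):
        # 1. ask the context (the layers below) to render onto the surface
        if self.below is not None:
            surface = self.below.render(surface, params)
        # 2. blend this layer's own content onto the returned result
        return surface + [self.name]  # stand-in for a real blend operation

stack = CompositeLayer("top", CompositeLayer("bottom"))
print(stack.render([], {}))  # ['bottom', 'top']
```

Note how the bottom-most layer draws first and the requesting layer blends last, matching the call order described above.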

Non-composite layers do this:
They receive rendering information (x size, y size, quality, etc.) and a surface to render on.
First, the layer asks its context (the layers below it) to render themselves onto the given surface. The non-composite layer may need the context to render with slightly different rendering information, so it can modify the rendering information it passes down to the context. The layer waits for the result of that request. Once the context has rendered onto the surface, the layer performs a raster operation on the pixels it receives, producing a completely new (modified) surface. That surface may or may not be blended with the context’s render result, but the final (distorted) surface is returned to the caller.

Some non-composite layers do only this: pass new rendering parameters to the context and ask it to render onto the surface. This is what the Scale, Translate, Rotate and Stretch layers do.
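A Translate-style layer of that second kind can be sketched like this (again hypothetical Python, not Synfig’s API): it does no raster work of its own, it only shifts the rendering parameters it hands down to the context.

```python
class Leaf:
    """Stand-in for the context: just reports where it was asked to render."""
    def render(self, surface, params):
        return params["origin"]

class TranslateLayer:
    """Sketch of a parameter-only non-composite layer: shifting the
    viewport the context renders into makes the content appear moved."""
    def __init__(self, dx, dy, below):
        self.dx, self.dy = dx, dy
        self.below = below

    def render(self, surface, params):
        shifted = dict(params)  # don't mutate the caller's parameters
        ox, oy = params["origin"]
        shifted["origin"] = (ox - self.dx, oy - self.dy)
        return self.below.render(surface, shifted)

layer = TranslateLayer(3.0, 4.0, Leaf())
print(layer.render(None, {"origin": (0.0, 0.0)}))  # (-3.0, -4.0)
```

Moving the viewport in the opposite direction is what makes the rendered content appear translated by (dx, dy) in the final image.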

To perform operations like “scale origins only” or “rotate origins only”, some extra operations have to be sent with the rendering request. If you can produce the desired result with a combination of two or more layers, it is possible to create a single layer that performs the same thing in one operation.

The problem is that you pass information to a “context”, and that information is common to all kinds of layers. Some new kind of rendering information would have to be created, and the layers would have to know how to handle it. It is possible work, but it would be heavy, because it would require touching a lot of layers.

For example, “rotate only origin” could be performed by a rotation of the origin plus an anti-rotation of the shape by the same amount around the origin. The problem is that the “Rotate only origin” layer doesn’t know where the origins of the context layers are, so every kind of layer that should work with the new “rotate only origin” layer would have to learn to handle that parameter.
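The rotation-plus-anti-rotation trick can be verified with a short calculation. This sketch (hypothetical names, plain math) rotates an object’s origin around a pivot and then cancels the orientation change around the new origin, so only the position moves:

```python
import math

def rotate_point(p, pivot, angle):
    """Rotate point p around pivot by angle (radians)."""
    c, s = math.cos(angle), math.sin(angle)
    x, y = p[0] - pivot[0], p[1] - pivot[1]
    return (pivot[0] + x * c - y * s, pivot[1] + x * s + y * c)

def rotate_only_origin(origin, shape_angle, pivot, angle):
    """Rotate the origin around the pivot; the shape's own orientation
    would become shape_angle + angle, but the anti-rotation around the
    new origin subtracts angle again, so orientation is preserved."""
    new_origin = rotate_point(origin, pivot, angle)
    new_shape_angle = (shape_angle + angle) - angle  # net shape rotation: 0
    return new_origin, new_shape_angle
```

Note that the function needs the object’s origin as an input, which is exactly the information the hypothetical layer cannot see in the current rendering pipeline.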