Suppose you had a convert type which took two blines A[1…n] and B[1…n] and two scalars a and b, and zipped them together linearly, so that the result is C[1…n], where C[i] = aA[i] + bB[i] (vector addition of the points, obviously). (You could do this now, but it would be very tedious to set up.)
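To make the idea concrete, here is a minimal sketch of what that convert type would compute (the function name and the representation of a bline as a list of (x, y) control points are just illustrative, not Synfig's API):

```python
# Hypothetical sketch of the proposed convert type: zip two blines of
# equal length with scalar weights a and b.

def zip_blines(A, B, a, b):
    """Return C with C[i] = a*A[i] + b*B[i]; points are (x, y) tuples."""
    if len(A) != len(B):
        raise ValueError("blines must have the same number of points")
    return [(a * ax + b * bx, a * ay + b * by)
            for (ax, ay), (bx, by) in zip(A, B)]
```

With a = b = 0.5 this gives the midpoint of the two curves; with a + b = 1 and a sliding from 1 to 0 you get a smooth cross-fade.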

Then, is this a sensible way to merge two (or more) different animations of a bline together?

You can smoothly interpolate between two fixed positions by varying a and b (and keeping a+b = 1).

Or you can switch smoothly between two different animations, say walking -> running (but you’d need to get the moment of the switch about right).

Or combine animation of different parts of the same curve (a == b == 1), say the head and tail of a worm, doing different things.

In the advanced version, a and b can be functions of the bline amount parameter, so the behaviour can be programmed for specific portions of the curve…
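The advanced version might look something like this sketch, where the weights are functions of the position t along the curve (again, names and representation are hypothetical):

```python
# Sketch of the "advanced" version: the weights a and b are functions
# of the bline amount parameter t in [0, 1], so the blend can vary
# along the curve (e.g. head follows A, tail follows B).

def zip_blines_varying(A, B, a_fn, b_fn):
    """Blend two equal-length blines with position-dependent weights."""
    n = len(A)
    assert len(B) == n, "blines must have the same number of points"
    out = []
    for i, ((ax, ay), (bx, by)) in enumerate(zip(A, B)):
        t = i / (n - 1) if n > 1 else 0.0  # normalised position on curve
        a, b = a_fn(t), b_fn(t)
        out.append((a * ax + b * bx, a * ay + b * by))
    return out
```

For the worm example, a_fn = lambda t: 1 - t and b_fn = lambda t: t would let the head end of the curve follow animation A while the tail end follows B.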

Hi!
This idea is not bad but let me refine it a bit to see where we can arrive:

1) A and B blines have to have the same number of blinepoints.

2) If 1) is true, then there is no difference between that convert type and the current keyframe/waypoint/interpolation method.

3) With the algebraic sum you have more control over the merge of the two blines than with the current way of interpolation.

A long time ago I had the wish for a library of “poses” (exact values for a set of layers) where the user can create new poses based on a combination of any of them. Poses would be stored per canvas or inline canvas, so you cannot combine two poses from two different canvases, because there is no way to know how to merge them. Canvas poses (or any of their linear combinations) could be inserted in the timeline inside the current canvas (or inline canvas), and you could even select the interpolation type used to travel from pose to pose.
I think that the poses approach is much more general and versatile than the sum of two beziers, which must have the same number of blinepoints.

Thanks for the reply. I think you are right – and yes, my fundamental idea is that you can work separately on a number of different poses and animations, then combine them later.

With my idea you can, for example, take two extreme poses (e.g. smile, frown) and use another control to select how much of each you want at any time (so it’s not just on/off, but can be anywhere in between). Regarding your point 2), I don’t know how you’d do this with existing interpolations.
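The smile/frown blend is just a linear combination of poses. A minimal sketch, assuming a pose is a mapping from parameter names to values (this data structure is my invention for illustration, not how Synfig stores anything):

```python
# Hypothetical pose blending: a pose is a dict of parameter values,
# and a single weight w selects how much of each pose you get.

def blend_poses(pose_a, pose_b, w):
    """Return (1 - w) * pose_a + w * pose_b for matching parameters."""
    return {k: (1 - w) * pose_a[k] + w * pose_b[k] for k in pose_a}
```

Driving w with a slider (or a waypoint curve) gives the “how much of each” control: w = 0 is pure smile, w = 1 pure frown, and anything in between is a mix.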

Anyway, your idea would be more powerful than this. You could merge two canvases provided they have a similar structure. (Labelling the layers might help to work out which ones have to match up.)

But still at the bottom you’ll have some BLines that need to be interpolated, and I suppose the question is “how?”

Suppose A has m points and B has n. Perhaps the m points of A have to be spread out along B so that A[0] --> B[0] and A[m] --> B[n], but the intermediate points are not necessarily exactly mapped, e.g. A[3] --> B[4.5]!! (This would be like the “amount” parameter when following a BLine path, if you divided by n.) But the mapping would have to be done very carefully, so that complex parts of A (with many points giving small turns and detail) are all mapped to almost the same place in B.
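One naive way to get a fractional mapping like A[3] --> B[4.5] is to resample B at the normalised index positions of A's points, interpolating linearly between B's control points. This is only a sketch of the index arithmetic (it ignores bezier tangents and the careful detail-preserving mapping described above):

```python
# Sketch: resample bline B (n points) down/up to m points by mapping
# index i of A to the fractional index i * (n-1)/(m-1) in B, then
# linearly interpolating between B's neighbouring control points.

def resample(B, m):
    """Return m points sampled along B's control polygon."""
    n = len(B)
    out = []
    for i in range(m):
        f = i * (n - 1) / (m - 1) if m > 1 else 0.0  # fractional index in B
        lo = int(f)
        hi = min(lo + 1, n - 1)
        frac = f - lo
        (x0, y0), (x1, y1) = B[lo], B[hi]
        out.append((x0 + frac * (x1 - x0), y0 + frac * (y1 - y0)))
    return out
```

For example, with m = 7 and n = 10, point A[3] maps to fractional index 3 * 9/6 = 4.5, i.e. halfway between B[4] and B[5] – exactly the A[3] --> B[4.5] case above. Once both blines have m points, the linear zip from the start of the thread applies directly.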

I can imagine doing this automatically, but perhaps the user needs to be given manual control too.

Finally, I think you can already achieve your “poses” idea. Set up pose A at time 0s, and pose B at time 1s. Use all linear waypoints. Now you want to achieve pose 0.5(A + B). Move the time to 0.5s and select “add waypoint” for the canvas (if this doesn’t work … it ought to!!!) Then copy that waypoint to wherever you need it.