My origins were created elsewhere. How can I move an origin without moving any of the other ducks?
Unfortunately you can't just move the origin.
You have to move all the vertices the layer contains along with the origin, then move the whole layer back into position.
Really? That is hard to believe and somehow sad…
Sad but true.
It is possible to make the artist's life easier when they want to move the origin without moving the content. But what should Synfig Studio do if either the origin or the content is animated?
Move in relation to its new position. If you reposition the origin, it's up to you to fix whatever problems that causes. I think the benefits of this feature would be far greater than the negatives.
I also think that way. It's very comparable to Blender. There you have object mode (origin) and edit mode for the mesh (bline). Applying shape keys is strongly comparable to an animated bline, even though you can't add new vertices (control points/ducks). Changing/moving the origin usually does not affect the mesh itself. Instead, moving the origin will translate all vertices in the opposite direction for every key (or only the base key, if in relative mode).
The only problem could be linked vertices with different origins. Maybe they should act more like constraints that can work in world space (like the origin) or in local space (relative to the origin).
That makes me wonder why Synfig does not use transform matrices. They would allow the user to translate, scale and rotate an object (outline, …) directly. Shape keys would also be a great idea, since reusing those keys would make some things much easier. From my experience with Blender I can tell that it is much faster and easier to set up animations this way. The only things Blender misses in comparison to Synfig are the same approach to compositing and support for filled/outlined curves rather than meshes. Otherwise I would call its approach superior, even though it wasn't designed to be a 2D animation tool.
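To illustrate the idea, a 2D affine transform matrix can express translate, rotate and scale in one operation, so an object's vertices never have to be rewritten when the object is repositioned. A minimal sketch (the function names are hypothetical, not Synfig's actual API):

```python
import math

def make_transform(tx, ty, angle, sx, sy):
    """Build a row-major 3x3 affine matrix: translate * rotate * scale."""
    c, s = math.cos(angle), math.sin(angle)
    return [
        [c * sx, -s * sy, tx],
        [s * sx,  c * sy, ty],
        [0.0,     0.0,    1.0],
    ]

def apply_transform(m, point):
    """Apply the affine matrix to a 2D point."""
    x, y = point
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# Rotate the vertex (1, 0) by 90 degrees and move it by (2, 0):
m = make_transform(2.0, 0.0, math.pi / 2, 1.0, 1.0)
print(apply_transform(m, (1.0, 0.0)))  # close to (2.0, 1.0)
```

Composing matrices like this is also what makes parent-child transforms cheap: a child's matrix is just multiplied by its parent's.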
My words shouldn't be understood as polemic criticism, but as encouragement on how to improve Synfig for the user.
Regarding modifying the origin without modifying the absolute position of the layer's contents, and mixing the comments from rylleman and niabot:
It would be possible (though I can't promise it would be easy right now) that when moving one or several origins, the movement action is calculated to apply the opposite movement to the layer's contents. That can be done, in a tricky way, even if the layer's contents are already animated. The program needs to act on the layer's parameters affected by the origin movement in this way:
1) If the parameter is not animated (i.e. it is a constant value node, so it has no waypoints), there are two cases:
1.a) In animation mode: add a waypoint with the needed value at the current frame. (Add a normal waypoint to the origin too?)
1.b) In non animation mode: modify the constant parameter value to the needed value.
2) If the parameter is animated (has waypoints), there are two cases:
2.a) In non-animation mode, edit each waypoint of the parameter and subtract the origin offset from it to keep everything in place.
2.b) In animation mode, do the same as in non-animation mode (edit the waypoints), plus add a new waypoint at the current frame with the resulting calculated value.
The origin would act as a parameter in both cases; it just gets the offset added instead of subtracted.
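The four cases above could be sketched like this (a hypothetical data model, not Synfig's real value-node code; values are 1-D for brevity, where real positions would be 2-D vectors). Moving the origin by `offset` subtracts that offset from the affected parameter so everything stays in place:

```python
def move_origin(waypoints, value, offset, animation_mode, current_frame):
    """Return the adjusted (waypoints, constant_value) for one affected
    parameter after the origin moves by `offset`.

    waypoints: dict {frame: value}; empty means a constant parameter.
    """
    if not waypoints:                       # case 1: constant parameter
        if animation_mode:                  # 1.a: add a waypoint at the current frame
            return {current_frame: value - offset}, value
        return {}, value - offset           # 1.b: just modify the constant value
    # case 2: animated parameter -- edit every waypoint (2.a)
    adjusted = {f: v - offset for f, v in waypoints.items()}
    if animation_mode:                      # 2.b: also pin the current frame
        here = waypoints.get(current_frame, value)
        adjusted[current_frame] = here - offset
    return adjusted, value

# Non-animation mode, animated parameter: every waypoint shifts by -offset.
wps, _ = move_origin({0: 10.0, 24: 20.0}, 10.0, 5.0, False, 12)
print(wps)  # → {0: 5.0, 24: 15.0}
```

The origin itself would run through the same logic with `+offset` instead of `-offset`.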
But as mentioned, the user might want to just move all the content around (the current behavior) when the origin is moved. So there needs to be "something" the user can select to do one thing at one time (re-center the origin) and another thing at another time (offset the whole layer). That "something" could be:
- A new tool or a modification of the current one.
- Another animation mode switch button. “Relative/Absolute” <<< my preference
- Something related to the origin itself (like the static option for any parameter): "Relative/Absolute"
Which do you think would be more useful/intuitive?
The solution for that is to use bones. The current Synfig approach is that layers know how to render themselves in an untransformed space, and the transformations are made at the raster level (pixel by pixel). That's good and bad: good because it gives great flexibility in the filters you can apply to any kind of layer or layers, and bad because of its slow render speed.
With bones, all the transformations (scale, rotate and translate, and their combinations in a weighted manner) are performed at the vector level and without layer restrictions as a whole (a translation of the origin would affect all the layers' parameters at the same time and by the same amount). It is my intention to continue the bones branch and release a development snapshot with very primitive bones features. Recently we passed a roadblock that caused bones development to be abandoned some years ago. Now it has been recovered.
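The "weighted manner" part can be sketched as follows: each vertex is moved by the weight-normalized sum of its bones' transformations (translation only here for brevity; the real system would blend full transforms). This is a hypothetical illustration, not code from the bones branch:

```python
def skin_vertex(vertex, bone_offsets, weights):
    """Move a 2D vertex by the weighted blend of its bones' translations."""
    total = sum(weights)
    x, y = vertex
    for (dx, dy), w in zip(bone_offsets, weights):
        x += dx * w / total
        y += dy * w / total
    return (x, y)

# A vertex influenced equally by two bones, one moving +2 in x, one staying put:
print(skin_vertex((0.0, 0.0), [(2.0, 0.0), (0.0, 0.0)], [1.0, 1.0]))  # → (1.0, 0.0)
```

Because this runs on the vectors themselves rather than on rendered pixels, the deformation happens before rasterization, which is what avoids the raster-level speed penalty described above.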
Please be a bit patient with me and let me finish the 0.62.02 release. After that I want to continue developing the bones branch.
I have so little free time. Any help is welcome as always (in any area).
Of course! And I can’t express how happy we are having you and users like you (both) in the Synfig community.
I was just thinking about two things in Blender:
First, you have the child-parent relationship, which is very similar to the layers in Synfig, except that origin, scale and rotation are also applied to the children. It would make sense to implement this behavior in Synfig too. (It would act as a transformation for every vertex inside a layer/object.)
In Blender, bones are provided by an armature that can move vertices along with it, using vertex groups/weights to deform the mesh. I guess this is not what you were talking about when mentioning a bone system. It would also not make much sense as long as you don't have enough vertices to interpolate for smooth transitions. (Inserting invisible vertices between visible vertices might be an option.)
The proof is in the pudding: youtube.com/watch?v=qNOPR_1UAz0
The core bone system is already there in a separate development branch of the code. It needs major GUI improvements, though. Currently, all the bone-to-point binding has to be done manually.
Yes, it won't be so difficult (*) but it would imply some big collateral changes. For instance, I think layers shouldn't show all those parameters (as well as the common Z depth and amount (alpha)) all the time. We should implement some sort of parameter visibility filter system to make the parameter list more friendly. In the current code, each parameter has a set of properties (mostly not user-accessible) that qualify the parameters for that kind of thing. The problem is that it has never been implemented.
(*) that's the typical risky phrase in the mouth of a careless developer