Hello, everyone!
There's a lot of news from the last week, so I have split it into sections. Let's start…
BUGFIXES
Ivan dedicated the beginning of the week to fixing bugs. The results are:
- Smooth Move tool now works correctly
- Transformation widget now keeps the same size, independently of canvas dimensions
- Fixed #593 - TimeLoop layer remains active even when disabled
- Fixed #597 - Stroboscope layer remains active even when disabled
- Handles won't disappear anymore when you zoom during Spline construction (although the fix still needs some polishing)
Meanwhile, Yu Chen finished reworking the interface of the Tool Options Panel, and his changes were merged into master.
Carlos has fixed most of the warnings for the OSX 10.9 build.
This week I am going to prepare new development snapshots so you can get all the fixes and improvements.
BONE-DRIVEN DISTORTION
Last week we started work on a new priority - "Bone-driven image distortion" (see similar features in AnimeStudio and ToonBoom).
Initially we planned to make a minimal implementation as follows:
- Make a Polygon Distort layer, which works with the same algorithm as GIMP's Cage Tool.
- Use a Skeleton layer to control the polygon's points and thus distort the image underneath. The user can quickly link the polygon to the skeleton automatically using the "Link to Skeleton" feature.
The disadvantage of this is that the user needs to build a construction of TWO layers - "Polygon Distort" with "Skeleton" on top - and an additional action is needed to link "Polygon Distort" to "Skeleton". This is not a smooth workflow.
In addition, a speed problem remains - it is unclear how much we will be able to optimize GIMP's Cage algorithm.
As I outlined in my initial announcement, the main problem here is making it work fast (which was a mandatory requirement of this feature request).
So, on April 3rd I had a brainstorming discussion with Ivan, and we came to the conclusion that it would be possible to implement a "Skeleton Deform" layer, which would directly distort any image below it, much like any other distort layer does - without any intermediate polygon deformation layer.
There are two problems here:
- We need to figure out the actual math formula that takes into account each bone's influence area, so we can map pixels from the source surface to the resulting image depending on the bones' offsets. Right now we are investigating possible approaches here. Most probably we will borrow the math from Blender's "Automatic weights" algorithm (see below).
- All distort layers are very slow in Synfig, and the "Skeleton Deform" layer will be slow as well, because we take every pixel of the destination surface and then apply the transformation formula to get the color from the source surface. This takes a lot of time.
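To illustrate why the naive approach is slow, here is a minimal sketch of per-pixel bone-weighted mapping. Everything in it is an assumption for illustration: the inverse-distance falloff is a stand-in (Blender's "Automatic weights" uses a more elaborate heat-based method), and the function names are hypothetical, not Synfig API. The point is the cost: the weighted transform runs once for every pixel of the destination surface.

```python
import math

def bone_weight(point, bone_origin, falloff=10.0):
    """Influence of one bone on a point: decays with distance.
    Illustrative inverse-distance falloff, NOT the actual algorithm."""
    d = math.dist(point, bone_origin)
    return 1.0 / (1.0 + d / falloff)

def deform_point(point, bones):
    """bones: list of (origin, offset) pairs.
    Maps a pixel position through the weighted sum of bone offsets
    (sign conventions are illustrative)."""
    weights = [bone_weight(point, origin) for origin, _ in bones]
    total = sum(weights)
    dx = sum(w * off[0] for w, (_, off) in zip(weights, bones)) / total
    dy = sum(w * off[1] for w, (_, off) in zip(weights, bones)) / total
    return (point[0] + dx, point[1] + dy)

def render_naive(width, height, bones):
    """The slow path: deform_point runs width * height times,
    once per pixel of the destination surface."""
    return [[deform_point((x, y), bones) for x in range(width)]
            for y in range(height)]
```

For a 1920x1080 frame with a handful of bones, that inner call runs over two million times per frame, which is why a per-pixel formula alone cannot give interactive speed.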
Our idea is to go a different way.
We can split the source surface into squares, so we get something similar to a mesh in Blender:
[attachment=1]synfig-deform-concept-1.png[/attachment]
So, instead of transforming each pixel, we will transform whole squares:
[attachment=0]synfig-deform-concept-2.png[/attachment]
Transformation of squares opens two possibilities for optimization:
- The user will be able to set the mesh density to get a faster or slower (but more accurate) result. The density will be dynamic, so it will be possible to change its value after the animation is done (no need to re-rig anything).
- We can optimize even further by calculating the deformed mesh as an OpenGL surface. That way (I believe) we can get real-time updates, in the same way as we have in Blender. This will be quite tricky, but I believe the results deserve the effort. I am attaching the sample Blender file for reference.
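The idea above can be sketched as follows: evaluate the expensive transform only at the grid vertices, then map any pixel inside a square by bilinear interpolation of its four transformed corners. This is a hypothetical sketch, not Synfig code; the function names and the square-grid layout are assumptions.

```python
def make_grid(width, height, density):
    """Grid vertices covering a width x height surface;
    density = number of squares per axis."""
    step_x = width / density
    step_y = height / density
    return [[(x * step_x, y * step_y) for x in range(density + 1)]
            for y in range(density + 1)]

def deform_grid(grid, transform):
    """Apply the expensive per-point transform ONLY at grid vertices:
    (density + 1)^2 evaluations instead of width * height."""
    return [[transform(p) for p in row] for row in grid]

def warp_point(deformed, width, height, density, px, py):
    """Map a point through the deformed grid by bilinearly
    interpolating the four transformed corners of its square."""
    gx = min(int(px / width * density), density - 1)
    gy = min(int(py / height * density), density - 1)
    # Local coordinates inside the square, in [0, 1].
    u = px / (width / density) - gx
    v = py / (height / density) - gy
    p00 = deformed[gy][gx]
    p10 = deformed[gy][gx + 1]
    p01 = deformed[gy + 1][gx]
    p11 = deformed[gy + 1][gx + 1]
    x = (p00[0] * (1 - u) + p10[0] * u) * (1 - v) \
        + (p01[0] * (1 - u) + p11[0] * u) * v
    y = (p00[1] * (1 - u) + p10[1] * u) * (1 - v) \
        + (p01[1] * (1 - u) + p11[1] * u) * v
    return (x, y)
```

With this scheme the per-pixel work is a cheap interpolation, and raising the density only increases the number of transform evaluations at the vertices, which is exactly the speed/accuracy trade-off described above.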
Finally, if we get Skeleton Deform implemented in this way, then the same approach could be applied to ALL OTHER distortion layers, and we would get them all optimized at once! Considering that distortion layers are the slowest ones in Synfig, that would be an epic speed boost.
We have made a quick test file in Blender to prove the concept and see if the "Automatic weights" algorithm will fit our case.
Here’s the video - http://www.youtube.com/watch?v=NMH6JRGAu6A
As you can see, the result is pretty good even for very sparse subdivision.
Well, we will continue this work and keep you updated.
OTHER NEWS
We have started a fundraising campaign for May. Please help us spread the word and get the next month funded!