On your short-term plans: could you gather some information about how Synfig currently builds that you find relevant, and post it on the wiki page I just made?
I’ve been working on the Cairo render (core) implementation and I’m very excited about the progress I’m making.
I haven’t written a single line of code yet, but I now have a much better idea of the internal design of the current Synfig renderer engine and of how the Cairo engine could be applied.
This morning I had an idea that keeps me very excited:
When facing the Cairo render implementation, we have to deal with the limitations of the Cairo API to produce the effects that Synfig can achieve. Converting from a Cairo surface to a synfig::Surface and back every time a layer is not supported by the Cairo libraries worries me a lot, because it might ruin the speed-up we want from Cairo.
So my idea is this:
synfig::Surface is an instantiation of etl::surface, a template for a generic surface, instantiated for the Color type (RGBA float). What if we create a synfig::CairoSurface class that inherits from the generic etl::surface class and overrides the basic etl::surface painting members with the corresponding Cairo ones? Then a layer could act like this:
The layer receives an etl::surface.
If the pointer casts to synfig::Surface, use the software render method.
If the pointer casts to synfig::CairoSurface, then:
either use the native Cairo methods, or use the basic etl::surface methods to write pixels (which, by inheritance, would go through the Cairo pixel-writing methods).
This way, the layers that are not supported natively by a high-level Cairo API can still be rendered using the Cairo pixel format and the current etl::surface methods. Doing it this way, we avoid the back-and-forth pixel conversion between the Cairo surface format (ARGB32, 4 bytes per pixel) and synfig::Surface (RGBA floats, 16 bytes per pixel).
This would imply adding the stride concept to our current etl::surface and using it to access the data in memory. The stride would be zero (no extra row padding) for a synfig::Surface and Cairo’s stride for a synfig::CairoSurface.
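A minimal sketch of what this could look like, assuming a virtual pixel-access interface (all class, member, and function names below are illustrative stand-ins, not the actual etl/synfig API):

```cpp
#include <cstdint>
#include <vector>

struct ColorRGBA { float r, g, b, a; };  // stand-in for synfig::Color

// Stand-in for a generic etl::surface whose pixel accessors can be overridden.
class GenericSurface {
public:
    virtual ~GenericSurface() {}
    virtual void put_pixel(int x, int y, const ColorRGBA& c) = 0;
    virtual ColorRGBA get_pixel(int x, int y) const = 0;
};

// Float surface: tightly packed rows, like synfig::Surface.
class FloatSurface : public GenericSurface {
    int w_, h_;
    std::vector<ColorRGBA> data_;
public:
    FloatSurface(int w, int h) : w_(w), h_(h), data_(w * h) {}
    void put_pixel(int x, int y, const ColorRGBA& c) override { data_[y * w_ + x] = c; }
    ColorRGBA get_pixel(int x, int y) const override { return data_[y * w_ + x]; }
};

// Cairo-style surface: one byte per channel, rows separated by `stride` bytes,
// B,G,R,A byte order as in CAIRO_FORMAT_ARGB32 on little-endian machines.
class CairoLikeSurface : public GenericSurface {
    int w_, h_, stride_;
    std::vector<uint8_t> data_;
public:
    CairoLikeSurface(int w, int h, int stride)
        : w_(w), h_(h), stride_(stride), data_(stride * h) {}
    void put_pixel(int x, int y, const ColorRGBA& c) override {
        uint8_t* p = &data_[y * stride_ + x * 4];
        p[0] = uint8_t(c.b * 255); p[1] = uint8_t(c.g * 255);
        p[2] = uint8_t(c.r * 255); p[3] = uint8_t(c.a * 255);
    }
    ColorRGBA get_pixel(int x, int y) const override {
        const uint8_t* p = &data_[y * stride_ + x * 4];
        return ColorRGBA{ p[2] / 255.0f, p[1] / 255.0f, p[0] / 255.0f, p[3] / 255.0f };
    }
};

// A software-only layer draws through the base interface and never needs to
// know which backing store it got -- no back-and-forth pixel conversion.
void fill_rect(GenericSurface& s, int x0, int y0, int x1, int y1, const ColorRGBA& c) {
    for (int y = y0; y < y1; ++y)
        for (int x = x0; x < x1; ++x)
            s.put_pixel(x, y, c);
}
```

The point is that fill_rect works unchanged on both surface kinds, which is exactly the fallback path described above for layers without a native Cairo implementation.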
in which the same computation is done three times in a row (in addition to every other time the code runs), when it could be done this way:
const float d175a255 = 175.0 / 255.0; /* Only performed once. */
...
cr->set_source_rgb(d175a255, d175a255, d175a255);
or maybe even better, the programmer calculates it and uses it like so:
Oh, please! Send us patches for things like that! I don’t have enough time to fix those kinds of things when my brain is about to explode with this Cairo render project…
-G
Is that so? I assumed the compiler is smart enough to optimize this (i.e. the calculation is done at compile time, all three times, and the value 0.686… is hardcoded into the binary).
Yeah, I wouldn’t expect any modern compiler to be that dumb, haha.
Anyway, a named constant would be an improvement for code readability. What I wouldn’t do is define the constant as the precomputed value, as that’s a step backwards in readability, and any compiler will compute it at compile time anyway.
I’d go with a more specific name, though.
const float RGB_MIDGRAY = 175.0 / 255.0;
What’s important is where to place the constant definition so it can be reused as much as possible: it has to go in the correct section of the correct file.
If you have time, try to search for similar operations that could be converted into constants, and put them all together in a definitions file, or in whatever the topmost relevant file in the inclusion hierarchy is.
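As a sketch of that suggestion, such a definitions header could look like this (the file name, guard, and second constant are made up for illustration; only the 175.0/255.0 value comes from the discussion above):

```cpp
// Hypothetical synfig color-constants header: collect magic ratios like
// 175.0/255.0 in one place, high in the inclusion hierarchy, so every call
// site reuses the same named value.
#ifndef SYNFIG_COLOR_CONSTANTS_H
#define SYNFIG_COLOR_CONSTANTS_H

namespace synfig {
    // Each expression folds to a literal at compile time.
    const float RGB_MIDGRAY   = 175.0f / 255.0f;
    const float RGB_LIGHTGRAY = 240.0f / 255.0f;  // illustrative second entry
}

#endif // SYNFIG_COLOR_CONSTANTS_H
```

Call sites would then read e.g. cr->set_source_rgb(RGB_MIDGRAY, RGB_MIDGRAY, RGB_MIDGRAY), which documents intent as well as avoiding the repeated arithmetic.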
Would it be a good idea to use GEGL in the Synfig renderer? (GEGL is a node-based compositing library developed for use in GIMP.)
GEGL mostly operates on raster graphics, so it can’t draw BLines for us, but it can probably handle blend methods. Using GEGL opens the possibility of re-rendering only the layers that were just modified and then compositing them with cached copies of the other layers. GEGL also supports high bit depths and HDR.
Cairo has good primitives for drawing paths/regions, but its compositing isn’t as sophisticated, so we’d likely have to implement blend methods and layer compositing ourselves. On the other hand, Cairo is very widespread and wouldn’t introduce any new dependencies, and because it’s widely used it’s well optimized.
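To make "implement blend methods ourselves" concrete, here is a rough sketch of one such per-pixel routine: a straight-alpha "over" composite on float pixels. The struct and function names are invented for illustration and are not Synfig’s actual Color API:

```cpp
// One hand-written blend method: source-over compositing on straight-alpha
// RGBA floats -- the kind of routine Synfig keeps in its own code for blend
// modes that a backend's built-in operators don't cover.
struct Col { float r, g, b, a; };

Col blend_over(const Col& src, const Col& dst) {
    // Resulting coverage: src plus whatever of dst shows through.
    float a = src.a + dst.a * (1.0f - src.a);
    if (a <= 0.0f) return Col{0.0f, 0.0f, 0.0f, 0.0f};
    // Straight (non-premultiplied) alpha: weight each color by its coverage,
    // then divide back out by the resulting alpha.
    Col out;
    out.r = (src.r * src.a + dst.r * dst.a * (1.0f - src.a)) / a;
    out.g = (src.g * src.a + dst.g * dst.a * (1.0f - src.a)) / a;
    out.b = (src.b * src.a + dst.b * dst.a * (1.0f - src.a)) / a;
    out.a = a;
    return out;
}
```

Synfig’s full set of blend methods (Multiply, Screen, etc.) would each be a variant of this inner loop, which is why delegating as many as possible to a library is attractive.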
Would it be a good idea to use babl in the renderer? (babl is the library GEGL uses for color conversions.) It would let us scrap our custom code for converting color formats in favor of a more optimized and extensible implementation.
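For context, this is the kind of hand-written conversion routine babl could replace: straight-alpha RGBA floats packed into a Cairo-style premultiplied ARGB32 word. This is a sketch of the custom code path, not babl’s API:

```cpp
#include <cstdint>

// Convert one straight-alpha RGBA float pixel to premultiplied ARGB32,
// packed as 0xAARRGGBB in a single uint32_t (Cairo's ARGB32 layout).
uint32_t rgba_float_to_argb32(float r, float g, float b, float a) {
    // Premultiply color by alpha, scale to 0..255, round to nearest.
    uint32_t A = uint32_t(a * 255.0f + 0.5f);
    uint32_t R = uint32_t(r * a * 255.0f + 0.5f);
    uint32_t G = uint32_t(g * a * 255.0f + 0.5f);
    uint32_t B = uint32_t(b * a * 255.0f + 0.5f);
    return (A << 24) | (R << 16) | (G << 8) | B;
}
```

Multiply this by every pixel format pair the renderer has to support and the appeal of a dedicated, optimized conversion library becomes clear.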
Would it be a good idea to change the renderer so that transformations are applied to the vector versions of objects rather than the raster versions? (Try rendering a rotate layer on top of a rectangle at minimum quality to see the problem.)
Regarding GEGL (and from the little I’ve read), I think it would be worth using it as the renderer for raster effects as well as for blending; in any case it should be faster than our equivalent routines. What I don’t know is how much of Synfig’s raster operations GEGL can cover. It would also offer other benefits for raster operations that we don’t have yet.
I can say similar things about babl. From a quick look at the documentation I couldn’t confirm that 32-bit ARGB is natively supported, but it claims to be easily extensible. The more we rely on robust libraries, the more we can focus on our own goals.
-G
EDIT: I forgot to talk about transforms. Yes, (simple) transformations could be passed to layers at the vector level so that each layer performs the transformation on its own. When a layer that receives the transformation needs to read its context in order to modify it (filters), the transformation has to be passed down to that context as well so it can handle it too. (*) There will be problems with Paste Canvas layers whose canvas parameter is an exported canvas: passing the transformation down to them modifies all the instances of the exported canvas, which would produce uncontrolled results. I noticed this when I implemented the Outline Grow parameter (which is a kind of transformation value). Maybe it can be solved, but I haven’t found a good way to do it yet.
(*) Transforming at the vector level and then filtering at the raster level is not the same as filtering at the raster level and then transforming (at the raster level); the results differ.
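A tiny numeric illustration of that footnote, reduced to one dimension: a 3-tap box blur (standing in for a raster filter) and a nearest-neighbour 2x upscale (standing in for a transform) do not commute. Both functions here are toy stand-ins, not renderer code:

```cpp
#include <vector>
#include <cstddef>

// Nearest-neighbour 2x upscale of a 1-D signal (stand-in for a transform).
std::vector<float> upscale2x(const std::vector<float>& v) {
    std::vector<float> out;
    for (float x : v) { out.push_back(x); out.push_back(x); }
    return out;
}

// 3-tap box blur with edge clamping (stand-in for a raster filter).
std::vector<float> box_blur(const std::vector<float>& v) {
    std::vector<float> out(v.size());
    for (std::size_t i = 0; i < v.size(); ++i) {
        float l = (i > 0) ? v[i - 1] : v[i];
        float r = (i + 1 < v.size()) ? v[i + 1] : v[i];
        out[i] = (l + v[i] + r) / 3.0f;
    }
    return out;
}
```

Blur-then-upscale of the impulse {0, 1, 0} gives a flat {1/3, 1/3, 1/3, 1/3, 1/3, 1/3}, while upscale-then-blur gives {0, 1/3, 2/3, 2/3, 1/3, 0}: the order of filter and transform changes the result, which is exactly the issue with pushing transformations down past filter layers.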