What connects synfig-core modules to the synfig-studio UI?

Hello again,

I’ve been studying the Synfig source code lately to try to understand its structure a little better. Specifically, I’ve been trying to work out exactly how the UI elements of the GUI application know what shapes to draw. Here’s what I understand so far. Please correct me if I’m mistaken anywhere.

  • The app has a toolbox, which is a Gtk ToolPalette object defined in synfig-studio/src/gui/docks/dock_toolbox.cpp
  • When a tool such as the spline tool is clicked, the user state changes. These states determine the outcome of certain user inputs. States are defined in synfig-studio/src/gui/states/
  • When an input is made, such as clicking within the canvas, an action is triggered, like creating a new layer. Actions are defined in synfig-studio/src/synfigapp/actions/ and managed in synfig-studio/src/synfigapp/action.cpp (I’ve sketched my rough mental model of this state-to-action hand-off just after this list)
  • A LayerAdd action creates a new custom Gtk object that contains all the metadata for that layer; it is defined in synfig-studio/src/gui/docks/dock_layers.cpp
  • Meanwhile, the algorithms that describe how to draw different objects are contained in synfig-core/src/modules/
  • I believe these modules are built independently during compilation into components that can be accessed by name, such as “advanced_outline”
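
Based on that, here is my guess at how a state handler might invoke one of those actions. This is only a sketch to check my mental model, not code copied from the real state files; the action name “LayerAdd” comes from the list above, but the parameter names and the helper function are my own assumptions.

```cpp
#include <synfig/layer.h>
#include <synfigapp/action.h>
#include <synfigapp/canvasinterface.h>
#include <synfigapp/instance.h>

// Hypothetical helper, not real Studio code: a state reacting to a canvas click
// by firing the LayerAdd action through synfigapp.
void on_canvas_click(etl::handle<synfigapp::CanvasInterface> canvas_interface,
                     synfig::Layer::Handle new_layer)
{
    // Look the action up by its registered name...
    synfigapp::Action::Handle action(synfigapp::Action::create("LayerAdd"));

    // ...give it the objects it needs (these parameter names are assumptions)...
    action->set_param("canvas", canvas_interface->get_canvas());
    action->set_param("canvas_interface", canvas_interface);
    action->set_param("new", new_layer);

    // ...and let the Instance perform it so it ends up on the undo stack.
    canvas_interface->get_instance()->perform_action(action);
}
```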

Here is where I’m stuck. I can’t find the code that shows how the layer objects call the synfig-core modules to populate the parameter tree or draw the shape on the canvas. I suspect the answer might be hiding amid the code base’s many preprocessor macros, but they don’t play nice with Qt Creator’s navigation tools, so I’m still in the dark. Could someone point me in the right direction?

Here’s what I learned last year while reading the same code and tracing the connection between the frontend (GUI) and backend (core).

All layers, to some extent, have their own parameter description instanced from the RendDesc class, so that object is where Synfig’s parameter panel gets its data when the user selects or adds a layer.
I believe the code that triggers this is along the lines of refresh_/rebuild_ param_tree, called when a layer selection event has been triggered, a layer has been modified from the canvas, or a new layer has been added.

As for drawing on the canvas, I suspect one has to be versed in computer graphics to really understand how mouse drag events end up triggering the backend to rasterize pixels on our monitors. I know nothing about that area for now :frowning:

Glad to “read” you again, GreorysonofCarl! :slight_smile: Long time no read!

Just for correctness, it would be the “workspace” state, not “user”.

No. It creates a new Layer object.
The Synfig core library is “completely” independent of Gtk.
The workspace displays “the main” Canvas. A Canvas is composed of layers. Canvas handling via the GUI (Synfig Studio) is normally done through an interface called… CanvasInterface, which acts as the glue between the Canvas object/data and the GUI interaction.
So, in your example:

  1. The LayerAdd action adds a Layer to the selected Canvas
  2. The Canvas node is modified and emits a signal to broadcast this change; that is how Studio knows the Canvas drawing in the Gtk DrawingArea widget of the WorkArea should be refreshed
  3. To tell “anyone” about the explicit event of a layer being added to a Canvas, the LayerAdd action uses the CanvasInterface object to signal the layer-added event (signal_layer_inserted())
  4. Panels like the LayerTree and Sound Panel catch this signal and reflect the change in their views (see the small sketch just after this list)
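
To make steps 3 and 4 a bit more concrete, here is a small sketch of how a panel can subscribe to that CanvasInterface signal. It is not the real LayerTree code, just the usual sigc++ pattern, and the exact arguments carried by signal_layer_inserted() are my assumption here.

```cpp
#include <sigc++/sigc++.h>
#include <synfig/layer.h>
#include <synfigapp/canvasinterface.h>

// "MyPanel" is a made-up stand-in for something like the LayerTree panel.
class MyPanel : public sigc::trackable
{
public:
    void connect_signals(etl::handle<synfigapp::CanvasInterface> canvas_interface)
    {
        // Subscribe to the layer-added broadcast from CanvasInterface.
        canvas_interface->signal_layer_inserted().connect(
            sigc::mem_fun(*this, &MyPanel::on_layer_inserted));
    }

private:
    void on_layer_inserted(synfig::Layer::Handle layer, int depth)
    {
        // Refresh this panel's view of the Canvas here.
    }
};
```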

The ParameterTree is filled with the info provided by the layers themselves via ParamVocab data (Layer::get_param_vocab()), which describes the parameter properties (name, value type, tooltip, etc.); the current values are then retrieved, for example, via Layer::get_param().
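
For illustration, a layer module advertises and serves its parameters roughly like this. It is a simplified sketch of the pattern, not code from an actual module; “MyLayer”, “radius” and param_radius are placeholders.

```cpp
// Fragment of a hypothetical layer module in synfig-core (sketch only).
Layer::Vocab MyLayer::get_param_vocab() const
{
    // Start from the parameters inherited from the base layer class.
    Layer::Vocab ret(Layer_Composite::get_param_vocab());

    // Describe each parameter: internal name, label shown in the panel, tooltip...
    ret.push_back(ParamDesc("radius")
        .set_local_name(_("Radius"))
        .set_description(_("Radius of the shape")));

    return ret;
}

ValueBase MyLayer::get_param(const String &param) const
{
    // Return the current value so the ParameterTree can display it.
    if (param == "radius")
        return param_radius;
    return Layer_Composite::get_param(param);
}
```

Studio builds the ParameterTree rows from those ParamDesc entries and then fills in the current values via get_param().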

Oh, well. That’s complex and my English is poor lol


Thanks! I’m happy I finally managed to make my way back here. It’s been much too long.

Also, thanks for the corrections. There’s a lot to learn here and I’m grateful for your feedback.

Oh, okay, that’s promising. It looks like the GUI is accessing that information through an instance of the handle class template, which is defined in the ETL library. I’m not sure I 100% understand how the data of a given module ends up inside any given handle, but at least I finally know that a connection between the frontend and backend exists within the ETL library.
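
Just to check my understanding, here is roughly how I now picture the GUI getting hold of a core layer through such a handle. This is only my own sketch, not code taken from Studio.

```cpp
#include <synfig/layer.h>

// Layer::Handle is an etl::handle<Layer>, ETL's reference-counted smart pointer.
// Layer::create() looks up the layer type that a core module registered under
// its name, e.g. "advanced_outline".
synfig::Layer::Handle layer = synfig::Layer::create("advanced_outline");

if (layer)
{
    // The GUI then talks to the core object through this handle.
    synfig::String name = layer->get_local_name();
}
```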

Ha ha, well, I’m not actually looking to understand the entire journey a line of code takes to get all the way to the LEDs in my computer monitor, at least not right now. I’m more interested in the connection between the core and the GUI, in a similar manner to Layer::get_param_vocab(), but with respect to the DrawingArea this time instead of the Parameter Tree. I’m going to guess that it also has something to do with the handle class template.