Game Development Platform

Hello Synfig Studio,

My company, MousePaw Games, has just abandoned Adobe Flash as the platform for our upcoming educational software game, and after some research, I decided it would be best if we adopted Synfig Studio as our animation software in its place.

Herein lies our problem - due to the nature of our game’s design, we need our animation and our interactive game interface to live on the same platform. Essentially, we need to replace Flash’s game design functionality. We are targeting the desktop, NOT the web browser.

We have our own scripting language, Ratscript, under active development. We also have our own game engine, Trailcrest. Both are in C++. Essentially, our goal is this - we need to connect Synfig Studio to Ratscript, so that we can design and code the game interface in Synfig Studio, allowing it to coexist with our animation. My entire programming team will be working on this, but we’re new to the code. Do you all have any tips as we start in on this?

Of course, I’ll ensure my team is looking for places where the code and interface can be optimized, improved, and debugged. I believe strongly in giving back. It is worth noting that my team is C++ and GTK focused. This would be a long-term commitment, as we want a platform we can use for many decades to come.

Since we’re relying on the open source community, I want to be honest: because of the nature of our particular industry, we have decided that Ratscript and Trailcrest will not be open technologies, and quite probably non-free. However, we do believe strongly in the open source movement. We create open-source software via our Labs division, and use our platform to heavily promote open-source projects.

To ensure we don’t create any sort of dependency on non-free technology, we can A) keep our work in a fork, or B) ensure the “bridge” can connect to any scripting language, not just Ratscript.

Any advice, guidance, or assistance in this is appreciated. I know this is far off of Synfig Studio’s development path, but as far as my company is concerned, someone needs to create an open-source-friendly replacement for Adobe Flash, and we want to help do that. We believe Synfig has what it takes.

Hi Jason,
it is difficult to answer your whole request, so let me explain how Synfig works at the moment while I try to better understand your final scenario.

The Synfig project basically consists of two tools: a render engine, aka the Synfig CLI (to which the document specification is tightly bound), and an animation editor, aka Synfig Studio.

The first, the CLI (command line interface), depends on the synfig-core code, which is the real renderer of Synfig documents. Synfig Studio adds the GUI code and is closely connected to the core of Synfig through an interface called ‘synfigstudio’.

With the CLI you can read a file and produce the animation output for that file (with specific options).
With Synfig Studio you can create Synfig documents and also invoke the core renderer to produce the animation within the GUI.

How is the core of Synfig organized? Synfig has the following concepts:

  1. Layers. Everything in a Synfig document is a layer: shape, effect, time control, etc. Layers are placed in a stack system, organized into Groups with their own scope. Layers affect what is below them within their group context. Layers can produce shapes (circle, spline, rectangle, …), change the render specifications of the context (rotate, scale, zoom, …), distort the context at the pixel level (blur, spherize, twirl, noise distort), or create raster images of their own (noise gradient, gradients, bitmap image layer). Recent layers can do new things, like distorting shapes based on region influence (bones) or holding audio (audio layer). One special layer is the Group layer, which, apart from limiting the context of the layers inside it, can change their timing, their coordinate system, or their visibility based on z depth.
  2. Each layer has parameters. Synfig has a powerful parameter system based on Value Nodes. There are three types of value nodes: Constant, Animated, and Linkable. Linkable value nodes are based on other value nodes through a math operation or an internal calculation. You can combine them however you like, nesting linkables inside linkables of each type. Also, value nodes can be ‘labeled’ (exported) and ‘referenced’ (linked) by other value nodes, so it is possible to do complex driven animations with them.
  3. The render engine is floating-point based, which means it currently uses its own render system, with its good and bad points. The good is the excellent color and antialiasing results. The bad is the render performance.
  4. In the GUI, (almost) every operation you do to the document is undoable. That is thanks to Synfig Studio’s action system. Actions take parameters of the document, perform the operation, and keep track of the modified data in case it is needed for an undo. Undo history is lost when the document is closed. Since actions are the mechanism for modifying the document, they are the entry point for a scripting system for Synfig Studio (again, one that creates or modifies Synfig documents).
  5. A Synfig document is a type of XML, but there is no document schema that would make it easy to create one outside Synfig Studio. The document specification is all inside the code, not externally documented.
  6. The render engine currently supports the following outputs: image sequences (png, jpeg, gif), some on its own and some through external libraries (cairo, etc.); ImageMagick/Magick++ outputs through its corresponding library; FFmpeg outputs through the ffmpeg CLI executable (we want to migrate to libav); yuv420; etc.
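To illustrate concept 1 above, here is a minimal sketch of the layer-stack idea in C++. All names here are hypothetical (Context, ShapeLayer, ScaleLayer, GroupLayer) and stand in for Synfig’s much richer real classes; the point is only that layers act on the composite of what is below them, and a Group scopes that context to its own children:

```cpp
#include <memory>
#include <vector>

// Hypothetical illustration of the layer-stack idea, NOT Synfig's real API.
struct Context {
    double value = 0;  // stand-in for the pixel data composited so far
};

struct Layer {
    virtual ~Layer() = default;
    virtual void render(Context& ctx) const = 0;
};

// A "shape" layer contributes its own content to the context.
struct ShapeLayer : Layer {
    double contribution;
    explicit ShapeLayer(double c) : contribution(c) {}
    void render(Context& ctx) const override { ctx.value += contribution; }
};

// An "effect" layer transforms everything below it in its context.
struct ScaleLayer : Layer {
    double factor;
    explicit ScaleLayer(double f) : factor(f) {}
    void render(Context& ctx) const override { ctx.value *= factor; }
};

// A Group gives its children their own scoped context (bottom first),
// then composites the result onto the outer context.
struct GroupLayer : Layer {
    std::vector<std::unique_ptr<Layer>> children;
    void render(Context& ctx) const override {
        Context scoped;
        for (const auto& child : children) child->render(scoped);
        ctx.value += scoped.value;
    }
};
```

The key design point mirrored here is that the effect layer only sees the scoped context of its own group, matching the description of layers affecting what is below them within their group.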

Please describe better the working scenario you’re targeting, and ask whatever you need to better understand the way Synfig works.

Cheers!
-G

First option that comes to mind is to simply render animations to PNG sequences or spritesheets and then use the script engine to display the sprites with the right timing, like so (not done by me):
skias.free.fr/bestiole/
skias.free.fr/bestiole/oreille1.png
skias.free.fr/bestiole/oeil.png
skias.free.fr/bestiole/bouche.png

But I’m guessing that you guys want to generate the animation frames on the fly, to allow programmatically modifying parameters of the animation. In that case, what you want is to somehow use the Synfig Core renderer from the script in order to generate the frames to display.

From what I remember, the renderer can generate a frame with a given resolution at a specified time. The script engine could perhaps plug into that, using the time value to “jump” into different animations. The different animations would need to be laid out on different parts of the timeline.

Another option would be to have different groups for different animations and somehow turn them on or off from the script engine (there’s a Switch Group for this, I think) to access different animations.

Note that Synfig’s input file uses purely declarative syntax. There is no state, there are no variables. It’s like a directed acyclic graph, evaluation starts at the root canvas element and must eventually end (hence the acyclic requirement).
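The declarative, stateless evaluation described above can be sketched in a few lines of C++. These types (ValueNode, Constant, TimeValue, Add) are hypothetical stand-ins, not Synfig’s actual classes; they only show the shape of the idea: evaluation walks a directed acyclic graph of nodes, and the sole input is the current time:

```cpp
// Hypothetical sketch of stateless DAG evaluation, NOT Synfig's real API.
struct ValueNode {
    virtual ~ValueNode() = default;
    virtual double eval(double time) const = 0;  // time is the only input
};

// A constant node ignores time entirely.
struct Constant : ValueNode {
    double v;
    explicit Constant(double v) : v(v) {}
    double eval(double) const override { return v; }
};

// A node whose value is the current time itself.
struct TimeValue : ValueNode {
    double eval(double time) const override { return time; }
};

// A "linkable" node: derives its value from other nodes via an operation.
// The acyclic requirement guarantees this recursion terminates.
struct Add : ValueNode {
    const ValueNode* a;
    const ValueNode* b;
    Add(const ValueNode* a, const ValueNode* b) : a(a), b(b) {}
    double eval(double time) const override {
        return a->eval(time) + b->eval(time);
    }
};
```

Evaluating the same graph at the same time always yields the same result, which is exactly the "no state, no variables" property described above.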

Hmm, this is going to be interesting, to say the least, but I don’t see any alternative at this point. We’ll have to find some way to pull this off. Here are some thoughts, organized as best as I can.

SCRIPTING

Adobe Flash Professional basically had all the vector items as “objects” that could be either manipulated via the IDE or modified via code on the timeline, in an object-oriented fashion. It sounds to me like layers behave very similarly. I’m wondering if we can accomplish something similar here.

Ratscript’s main feature is that it is a “runtime interpreted language.” The first step on that front would be to create a “script layer” in Synfig, which can have Ratscript code (or another language’s code, though I’ll refer to Ratscript exclusively herein for brevity) linked to the timeline, much like the sound layer. Thus, the code is actually stored in the Synfig document, and taken into account during rendering (see below.)

If Ratscript has programmatic access to the various other layers, it could be used to directly modify their properties. The main purpose of this is to change the vectors based on various conditions. Another role of Ratscript would be to change the flow of the timeline - stopping playback, jumping to a particular frame, etc. (Yoyobuae outlined both of these quite well, thank you.)

Ratscript’s runtime interpreted nature plays to our advantage. All we need is for an object to exist at the moment the script is scheduled to be executed in order for it to work, thus it should tie in quite nicely with the declarative structure.

EVENT LISTENERS

Runtime interactivity is a crucial aspect, otherwise we still only have a movie. GTK is almost entirely useless in this respect, as we actually need to be able to click, drag, and drop animatable objects, and have them respond based on that action. We also need the game to respond to keyboard events.

Somehow, I get the feeling this isn’t as hard as it sounds. If Synfig Core knows A) the position and size of every layer, and B) the position of the mouse cursor, it can mathematically determine mouse collision with an object.

Furthermore, runtime drag functionality can be duplicated by way of 1) detecting mouse collision with the object, 2) detecting a mouse down at that position, 3) calculating the distance from the cursor to the “registration point” of the object (the point that determines the object’s x/y position), and 4) moving the object to the mouse position, adjusting using the numbers from step 3.
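The four steps above reduce to a little arithmetic. Here is a minimal sketch with hypothetical helper types (Point, hitTest, grabOffset, dragTo are all invented names, not Synfig functions): hit-test the cursor against a bounding box, record the cursor-to-registration-point offset at mouse-down, then hold that offset constant while dragging:

```cpp
// Hypothetical drag-and-drop math; names are illustrative only.
struct Point { double x, y; };

// Steps 1-2: is the cursor inside the object's bounding box?
bool hitTest(Point cursor, Point topLeft, double width, double height) {
    return cursor.x >= topLeft.x && cursor.x <= topLeft.x + width &&
           cursor.y >= topLeft.y && cursor.y <= topLeft.y + height;
}

// Step 3: offset from the cursor to the registration point at mouse-down.
Point grabOffset(Point cursor, Point registration) {
    return {cursor.x - registration.x, cursor.y - registration.y};
}

// Step 4: new registration point as the cursor moves, adjusted by the offset
// so the object doesn't "snap" its registration point to the cursor.
Point dragTo(Point cursor, Point offset) {
    return {cursor.x - offset.x, cursor.y - offset.y};
}
```

A real implementation would use each layer’s actual bounds rather than an axis-aligned box, but the offset trick is the same.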

Keyboard events are trivial, honestly. The only possible obstacle here is that we would need a text input object in Synfig Studio…but if we can modify any layer’s properties, we can certainly modify a text layer based on key presses.

RENDERING

On the topic of rendering, a strange thought…since this can render animations via Cairo, couldn’t we actually generate code from a Synfig animation/project that duplicates all of the vectors and their movement, separately from Synfig, in a GTK/Cairo GUI? In other words, it renders at runtime.

Now, all that said, one other option (and probably an easier one) is to actually write a piece of software that renders synfig XML data, in the same manner that Synfig Studio does. Basically, we just need to give it the same specs that Synfig Studio has hard-coded in. This piece of software would essentially be to this as Flash Player is to Adobe Flash - it unpacks the file format and displays everything in the proper order.

I bring that up because my team would need SWF-like functionality.

GUI ADJUSTMENTS

As you mentioned, most of this makes more sense in the core. However, in order to make Synfig a viable game design tool, it would need the scripting tools built into the GUI. Again, I know GTK fairly well - that’s my company’s go-to GUI toolkit - and we’ve been working with Flash Professional long enough to know what works well in that context, and what is simply a pain in the rear.

The GUI features for scripting, of course, should not interfere with the animation workflow in any way.

MY BACKGROUND

I should note a little bit about my background with game design. I have been custom-building a lot of these features (such as drag-and-drop, movement, and animation) in various languages and GUI toolkits, including Visual Basic/WinForms (that was not fun), Python/wxWidgets, Python/PyGTK, and Adobe Flash (they don’t have everything prebuilt). Thus, I think all of the above is doable, as long as the plan is tailored around how Synfig is already built.

By the by, XML is my favorite file format, bar none.

Synfig does something like this when rendering an animation:

[code]canvas->set_time(1); canvas->render();
canvas->set_time(2); canvas->render();
canvas->set_time(3); canvas->render();
...[/code]
On each set_time() call the code detects that the time has changed and recalculates all parameters for all layers. Once that’s done, the layers are rendered one on top of the other. So what I said above about there being no variables and no state ain’t entirely true. There’s exactly one variable: time. Synfig assumes that layers need to be re-evaluated whenever time (i.e. the only state variable) changes.

So if the script runtime changes some value somewhere then it would need to force the recalculation of parameters. The duplicate layer in Synfig gets around this by doing:

[code]context.set_time(time_cur+1);
context.set_time(time_cur);[/code]
Not very pretty. :laughing: So the script runtime will need a better way to trigger the recalculation of values.

Another thing is that layers are usually expected to only have an effect on layers below them. It would be a bit weird if this script layer could modify other layers which are above it.

An alternative to having a script layer which programmatically modifies other layers’ parameters is to have a script valuenode. Layer parameters can be converted into various valuenodes, and those valuenodes are evaluated to find the parameter’s value. The script would run each time the valuenode is evaluated, allowing it to feed values from the script into the layer.
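The script-valuenode idea can be sketched very simply. This is hypothetical (there is no such class in Synfig today), and std::function stands in for a call into an embedded interpreter such as Ratscript; the point is just that the script runs on every evaluation and its result becomes the parameter’s value:

```cpp
#include <functional>

// Hypothetical "script valuenode" sketch; not an existing Synfig class.
// The held script runs each time the node is evaluated, so the layer
// parameter it backs is driven directly by script logic.
struct ScriptValueNode {
    std::function<double(double)> script;  // receives the current time
    double eval(double time) const { return script(time); }
};
```

Because the script re-runs on every evaluation, this approach sidesteps the "when do we recalculate?" question for the value itself, though re-rendering in response to external events is still the open concern mentioned below.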

As above, the concern is triggering the recalculation and re-rendering on the Synfig side in response to events happening.

Not all operations on Synfig are implemented as pure Cairo. Some layers are implemented by accessing the pixels directly.

Some kind of player would be needed to implement interactive functionality anyway. The GUI would also need a way to test the animation before exporting it (i.e. check whether buttons respond to clicks, etc.).

Okay, well, that gives us some direction to start in.

Perhaps I’m oversimplifying, and I’m just theorizing since I haven’t seen the code, but if we isolate the code from set_time() that handles Synfig’s re-rendering assumption into a function of its own, it can be called from both set_time() and a new function, force_rerender(). As an added benefit, we wouldn’t need the clunky workaround.
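In sketch form, the proposed refactor might look like the following. All names here (Canvas, recalculate, force_rerender, recalc_count) are hypothetical; the real Synfig code is organized differently. The idea is only that the recalculation triggered inside set_time() gets its own method, which a new force_rerender() can call without the set_time(t+1)/set_time(t) dance:

```cpp
// Hypothetical sketch of the proposed refactor; NOT Synfig's real code.
class Canvas {
public:
    void set_time(double t) {
        if (t != time_) {            // the existing "time changed" check
            time_ = t;
            recalculate();
        }
    }
    // New entry point: e.g. a script changed a value, so force a recalc
    // without touching the current time.
    void force_rerender() { recalculate(); }

    double time() const { return time_; }
    int recalc_count() const { return recalc_count_; }

private:
    void recalculate() {
        ++recalc_count_;  // stand-in for re-evaluating all layer parameters
    }
    double time_ = 0;
    int recalc_count_ = 0;
};
```

With this split, the duplicate-layer workaround becomes a single force_rerender() call, and set_time() keeps its existing behavior.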

Of course, I can be a lot more intelligent about my ideas if I’m looking at the code. Approximately where would I find this functionality in Synfig Core? This strikes me as the first task that needs doing.

Excellent idea. That sounds like the best course of action. That is quite handy about how layers modify one another. Keeps things sane.

We’ll go with that plan then, unless something better comes around. I’d like to contribute a name for that player, borrowing from what would have been our own SWF projector. Seeing as that project was scrapped when we abandoned Flash, the name is up for grabs: Lightrift.

Ratscript has (or will have…it’s under active development) full code-checking abilities, so catching errors should not be hard.

Ratscript is rather unique in that it is interacted with purely as a (hidden) command-line application, so all I/O takes place as if a human user were typing into the interpreter. That gives us the benefit of automatically routing all output and errors back to Synfig Studio for initial error checking. Some of this same functionality can be duplicated in the player.

We might want to limit what all is tested, however. In Adobe Flash, event listeners were hooked up to event handlers that contained the responses. A missing event listener would not be flagged, for obvious reasons. However, a missing event handler would become apparent through an invalid function call from the event listener. That should simplify what we actually have to test.

IMPORTANT: I need to ask how we should go about starting work. I want to contribute to the main project, if you all are okay with that. (However, we will fork it if that is what y’all prefer we do.) Again, I will ensure our work is not Ratscript-dependent, so an open-source language can be used instead if desired.

If we do indeed work on the main project, I assume we would need to make our own branch on GitHub, so that our changes can be integrated at the project administrators’ discretion.

This is one of the calls of set_time() at one of the renderers:
github.com/synfig/synfig/blob/m … e.cpp#L133

But you need to scrub up the call stack to find the structure that triggers that Target::render() function:

Here is where render is called in the CLI:
github.com/synfig/synfig/blob/m … r.cpp#L241

And here is where render is called in the GUI:
github.com/synfig/synfig/blob/m … r.cpp#L873

One mistake I made on my first post: the interface between the core and the GUI where the actions are stored is called synfigapp not synfigstudio.

-G

I think you can perfectly fork Synfig for that. If you prepare the code to be a module for Synfig (i.e. the Synfig suite can be compiled with or without the game plugins) and the modifications don’t interfere with the main usage of Synfig (that is, creating animations), your changes are welcome.

Synfig has recently gained lots of new features, and with them many bugs. So please be patient with the development if you find some usage bugs in the current development branch.

One important question: are Ratscript and Trailcrest available on all platforms? It is known that the Windows version generally has more issues than the Linux or OSX versions.
-G

Hi Genete,

I’ll have my team fork the project for our experimentation, to prevent altering the main project. With your permission, we’ll also branch the main project, so we can offer bugfixes and optimizations that we come up with as we work. As to new features, we’ll propose integration once we ensure they are stable and modular, as requested.

As I mentioned, this is a long-term commitment for us, so we’ll undoubtedly be contributing heavily to the main project for many years to come.

Trailcrest and Ratscript are designed to be highly cross-platform, and we’re aiming for them to be as backwards compatible as possible. They’re both built on C++ GCC, and we’re working hard to optimize memory usage. Since most schools have outdated software and hardware, we want our game to work on their systems. We also plan to offer discounts for Linux users, to help encourage the schools to adopt open-source platforms for their computer labs.

That’s excellent for the Synfig project and for its community.
Sincerely thanks in advance!
-G

A quick update: I’ve forked Synfig on GitHub, and I’ll entrust y’all to merge the changes you want merged. I’m more experienced with Subversion, so it’ll take some getting used to. Our fork is at github.com/mousepawgames/synfig.

I’ll also try and keep everyone up to date here on the forums and on the bug tracker as we fix, improve, and build things.

I have a meeting in 10 minutes to get my team started, so they’ll be joining shortly no doubt.

(P.S. If you’re interested, my company just updated the news page regarding Synfig. Check our website to see it.)

Git is even easier than Subversion. Just start a new branch and add code there. Once ready, it can be merged or rebased onto master again if master has evolved quicker. In most cases there may be conflicts, which should be resolved by the one requesting the pull.

There are heaps of tutorials. This one is good.
atlassian.com/git/
And this one too
gitimmersion.com/lab_01.html
Good luck!
-G

Hello, Jason!

My name is Konstantin Dmitriev, I am an administrator/maintainer of Synfig project. Apologies for delayed reply, it takes some time to handle everything. :slight_smile:

I am happy to see your interest in implementing game platform features for Synfig. A long time ago I had experience programming for Flash 4 and Flash 5 (MX). I think maybe the best approach here would be to try to mimic the same architecture as in Flash.

Let me note that my experience with Flash was many years ago, so I don’t remember everything in detail. Also, I haven’t followed Adobe’s developments in the latest versions. So my statements might not be precise enough. Feel free to correct me.

Some comments regarding the discussion above:

1. Where to put the scripts?

I think the best approach would be to attach script code to a particular frame on the timeline. When the time cursor reaches that frame, the script is executed, much as it’s done in Flash.

For example, the script at the first frame can do initial setup - attach functions to events (like onEveryFrame(), onExit() and so on).

Also, we will need buttons and other interactive controls. As a low-level solution we can add the possibility of assigning event handlers to any layer. For example, adding an onClick() event handler to some layer would make it act as a button. Such low-level elements could be combined into complex widgets, including text entry and others.
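The per-layer event handler idea could be sketched like this. Everything here is hypothetical (LayerEvents, on, fire are invented names, not a Synfig or proposed-patch API); it only shows the low-level mechanism: a layer carries a map of named handlers, and attaching an "onClick" handler is what turns a plain layer into a button:

```cpp
#include <functional>
#include <map>
#include <string>

// Hypothetical per-layer event registry; names are illustrative only.
struct LayerEvents {
    std::map<std::string, std::function<void()>> handlers;

    // Assign (or replace) a handler for a named event.
    void on(const std::string& event, std::function<void()> fn) {
        handlers[event] = std::move(fn);
    }

    // Fire an event; layers without a handler for it simply ignore it.
    // Returns whether a handler actually ran.
    bool fire(const std::string& event) const {
        auto it = handlers.find(event);
        if (it == handlers.end()) return false;
        it->second();
        return true;
    }
};
```

The player would call fire("onClick") on whichever layer the hit test (discussed earlier in the thread) identified under the cursor.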

Of course, all that requires the script to access/modify the current document, its layer tree, the current object and its parameters.

I am not familiar enough with Ratscript yet to have any further comments for now.

Relevant info for ActionScript: help.adobe.com/ru_RU/AS2LCR/Flas … 00393.html

2. Playback/Execution of the game

The execution should take place when user hits play in the Synfig Studio.

For example, we have the following (pseudo)script attached to frame 0:

[code]document.layers[0].transformation.x = document.layers[0].transformation.x + 1;[/code]

And the following script attached to frame 1:

[code]gotoAndPlay(0);[/code]

When the user hits play, the following happens:

  • Playhead reaches frame 0
  • Script at frame 0 executed - first layer moved by 1 px.
  • Frame 0 rendered and displayed on the screen
  • Playhead reaches frame 1
  • Script at frame 1 executed - playhead set to frame 0. No rendering takes place, because the playhead position has just changed.
  • Playhead reaches frame 0
  • Script at frame 0 executed - first layer moved by 1 px.

Thus we have a nice scripted animation.
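The playback rules just listed can be condensed into a small simulation. This is a hypothetical sketch (Player, tick, gotoAndPlay here are invented for illustration, not proposed APIs): run the script attached to the current frame, skip rendering if the script moved the playhead, otherwise render and advance:

```cpp
#include <functional>
#include <map>
#include <vector>

// Hypothetical playback loop; simplified for illustration.
struct Player {
    std::map<int, std::function<void(Player&)>> scripts;  // per-frame scripts
    std::vector<int> rendered;                            // frames actually drawn
    int frame = 0;
    int jump = -1;

    void gotoAndPlay(int f) { jump = f; }

    void tick() {
        // 1. Run the script attached to the current frame, if any.
        if (auto it = scripts.find(frame); it != scripts.end()) it->second(*this);
        // 2. If the script moved the playhead, skip rendering this tick.
        if (jump >= 0) { frame = jump; jump = -1; return; }
        // 3. Otherwise render the frame and advance.
        rendered.push_back(frame);
        ++frame;
    }
};
```

Running the two-frame example from above through this loop, frame 0 is rendered once per pass and frame 1 is never rendered, exactly as the bullet list describes.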

Later, the playback functionality could be split out into a separate application (a player). This would be similar to the Synfig Studio editor, but without editing functionality. I guess the execution framework will be a separate library in the end.

3. Rendering speed.

If we are willing to have realtime playback, then we run into the rendering speed problem. The current rendering is slow. We have prepared the framework for OpenGL-optimized rendering, but there is still a lot of work to do. In any case, this problem should be resolved to get the interactivity features.

4. MovieClip analogue

In Flash there is the MovieClip concept, which allows playing back (and controlling) different animation pieces separately - http://help.adobe.com/ru_RU/AS2LCR/Flash_10.0/00000240.html. I think in Synfig we have a similar concept, called “Exported Canvases”.

5. Working with code.

Yesterday we pushed a lot of changes into the main source repository. The major change is the migration to the Gtk3 library. Please update your repository to keep your changes up to date.

Hi Zelgadis,

Thanks for the reply! I really appreciate it.

There are certainly benefits to this; however, Flash had two glitches that accompanied this structure. First, there was what I dubbed the “frame-one bug” - certain complex circumstances led variables defined on frame 1 to spontaneously reinitialize, erasing any information that had been stored in them at that moment.

The second was when a script on a frame caused the entire timeline to freeze and start over…at least where audio was concerned…and then ignore all animation and scripts and just play the sound.

We would have to be very careful to avoid these (and similar glitches). I think I know how they both worked, or at least how the first worked, so that’s doable.

As to Ratscript, it is currently under development, so we could (theoretically) make it do whatever we wanted. At the moment, it looks a little like this:

[code]>>Define a variable to store the name of the item. Let's store "whale" initially.
make obj as string

>>Output a message based on the item name.
if(@obj == "whale")
:print("Hello, what am I doing here? Who am I?..")
else if(@obj == "petunia")
:print("Oh no, not again.")[/code]

Our proposed syntax is designed to be very easy to learn.

Based on our current proposed syntax, and some of the better characteristics of ActionScript 3, transformation code might look a little like this…

[code]>>Assume a layer named sphericalCow.
@sphericalCow.width += 5
@sphericalCow.height += 5
@sphericalCow.x += 5
@sphericalCow.y += 5
gotoAndPlay(0)[/code]

Those commands would simply hook in behind the scenes to the .transform function you were referring to, thus cleaning up the code and making it easier to learn (and, honestly, guess).

Further complicating things, my company’s target market is schools - specifically schools with outdated hardware, who cannot afford to upgrade. We’re trying to build everything to work on 512MB of RAM, with an ideal goal of 256MB of RAM. It is an ambitious goal, but not an impossible one. As modern programmers, we take RAM for granted. Simply by optimizing our code as far as possible, I believe we can achieve superior backwards compatibility.

MovieClips were quite useful in Adobe Flash, especially for organizing code to create buttons and whatnot. If I’m reading the right link, exported canvases sound like they’ll fit the bill.

Important Side Note: At the moment, I am seriously considering the viability of a “community license” for Ratscript. Though it is closed-source and non-free, as I mentioned before, I do believe in supporting open source software (obviously, since I’m here). The Community License would allow anyone to use Ratscript for free, so long as their finished product was either open-source or Creative Commons. (And, again, my team will be writing Synfig’s scripting engine to be compatible with other languages, not just Ratscript.)

I think it would be better to have a proper init script which runs one time when animation is loaded (or animation is reset, if player has that option). Faking it with a frame 1 script doesn’t seem right.

Proper playback control from the scripts should also be possible. If the animation needs to pause in order to wait for the user to press a button or something, it should really pause, not fake it with an infinite loop over a frame. Of course, that means that non-frame-triggered scripts need to be able to run even when the animation is paused at some frame.

Yes, there should be an onStart() event.
The general idea is to have scripts hooked to events: onInit(), onEnterFrame(), onClick(), etc.
It is possible to put all the code into a single place and create all event handlers there. But from a UI point of view, it might be useful if script chunks are attached to the corresponding objects - frames (much like it’s done in Flash) and layers. So, if you delete a layer that has some code written as an onClick() handler, the corresponding chunk of code is deleted automatically.

Exactly the same is possible in Flash with the gotoAndStop() function. In our case it doesn’t really stop, but puts the timeline into a paused state.

In the (artificial) example above I used the loop just to move the circle. This could be done without a loop by adding an onEveryFrame() handler to the root timeline.

I believe this is just a specific of the feature’s implementation in Flash. We don’t want to (and cannot) copy the Flash implementation; we can just borrow the general approach.

The “frame-one” bug could be solved by linking the initialization script to the onInit() event (as pointed out by Yoyobuae). :slight_smile:

No, this is not really the right link. :slight_smile: Please take a look at this document for a description of how exported canvases work - wiki.synfig.org/wiki/Doc:Reuse_Animations

About Ratscript: I have to be honest, I don’t really understand the purpose of introducing a new scripting language. Personally I would rather stick with Python. But I guess this is an internal requirement of your company, and as long as you plan to support several scripting languages, this is OK. :slight_smile:

Agreed. That would make life a lot easier, that’s for sure.

Hmm, I think this will rather come in handy…being able to store “MovieClips” as separate files. Saves soooooo much time and effort! Flash often crashed when you tried to import a MovieClip from one project into another. The only question is, how deep can you make the nesting without trouble?

SIDE NOTE: We would need some way to detect a script execution that is taking too long and abort it…preferably with a user-modifiable script timeout option. (It may be worth mentioning that Ratscript has some infinite loop detection features.)
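One simple way such a timeout could work is a deadline check between executed statements. This is a hypothetical sketch (runWithTimeout and the step callback are invented for illustration; a real interpreter would check the deadline inside its own dispatch loop):

```cpp
#include <chrono>
#include <functional>

// Hypothetical script watchdog: step() executes one statement and returns
// true while work remains; the loop aborts once the deadline passes.
// Returns true if the script finished in time, false if it was aborted.
bool runWithTimeout(const std::function<bool()>& step,
                    std::chrono::milliseconds limit) {
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now() + limit;
    while (step()) {
        if (clock::now() > deadline) return false;  // script ran too long
    }
    return true;
}
```

The limit itself would come from the user-modifiable timeout option; a monotonic clock (steady_clock) is used so that system clock adjustments cannot spoof the deadline.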

Yes, we’ll support others. There are a few in-company reasons for Ratscript, not least of which are the connections into our game engine, Trailcrest. That said, there are three main reasons for the language outside the company. First, it is a “bridge language”, which means it can (or will be able to) serve as a runtime bridge between any two NativeProcess-capable platforms, allowing full access to one another’s functions and variables. Version 2.0 was originally designed as a link between Flash and C++, but as you can guess, that particular purpose has shifted.

Second, it is intended to have a shallow learning curve. My content development department needs scripting capabilities, but they aren’t programmers by any means. Granted, many other languages are arguably easy to learn as well, but there are a few conventions in Ratscript that just make it easier to read and understand.

Third, Ratscript is runtime-interpreted, meaning that it is a ready-to-go runtime scripting language. The first version of Ratscript powered an in-game console that let me execute my game’s functions and modify its public variables directly, to speed up debugging. Playing a linear-progression game for 30 minutes every time you want to test a particular area is quite annoying.

Anyway, all that to say, Ratscript introduces some innovations of its own. It isn’t right for every project - Python might suit someone else’s project better - but I believe it has the design and capabilities to take the place of ActionScript in this context.

Synfig doesn’t really have the concept of a frame. There are objects with an associated time value (i.e. waypoints, keyframes), but they are not tied to a specific frame object. So frame scripts would have to be something similar: a time value and a script associated together (maybe defined per canvas).
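One way to model that (time, script) association is an ordered multimap per canvas. This is a hypothetical sketch (CanvasScripts and runBetween are invented names): playback fires every script whose time falls within the interval since the previous evaluation, which also handles non-integer times and multiple scripts at the same time value:

```cpp
#include <functional>
#include <map>

// Hypothetical per-canvas storage of (time, script) pairs, as suggested
// above; there are no frame objects, only time values.
struct CanvasScripts {
    std::multimap<double, std::function<void()>> byTime;

    // Run all scripts with prev < time <= cur, in time order.
    // Returns how many scripts ran.
    int runBetween(double prev, double cur) {
        int ran = 0;
        for (auto it = byTime.upper_bound(prev);
             it != byTime.end() && it->first <= cur; ++it) {
            it->second();
            ++ran;
        }
        return ran;
    }
};
```

The half-open interval means a script is fired exactly once even if playback steps land on either side of its time value.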

As for attaching scripts to layers, I personally wouldn’t put a chunk of code inside a handler inside the layer. Instead, just put a function call there and keep the handler code as small as possible. The benefit of having code deleted automatically is not worth the cost of having a mess of code spread all over the place.

This is starting to mirror the age-old debate in the ActionScript 3 world about whether code should ever be put on the timeline at all. The strongest argument for it is that it is pretty easy to figure out where you parked something, so long as you have a system.

My team always put the bulk of the script on Frame 1 of its own layer, time-specific code on the appropriate frame in that layer, and object-related code on the layer (group) that held the object. Obviously, that convention does not hold true for Synfig, but it goes to show that things can be organized logically.

However, the downside is that, if one does not exercise such discipline, it is as you said, yoyobuae, we get a mess of code all over the place.

Crazy idea (if you haven’t noticed, I’m infamous for those): what if we throw out all of the established ways of doing things and have one place for global code, affecting the entire project? All of the initialization code, functions, variables, et cetera, would live here. It would have access to all of the project’s layers, for transformations. It would also have access to the timeline.

The fundamental difference is that we would attach the scripts in the complete opposite fashion from what we’ve been discussing. Instead of the timeline calling the code directly, we rely purely on “listeners” that exist in the code. These listeners could be triggered by any manner of events - user interaction, timeline position, values of a layer. Every time we render, these conditions are checked…but only if they were defined to be checked.

Here’s a very rough example off the top of my head, in Ratscript syntax. >> is a comment, and “:” indicates the code belongs to the preceding function, if statement, etc.

“listen()” would monitor the given property (arg 1) for the given condition (arg 2) and then execute the given function (arg 3). The fundamental difference between this and a traditional “if” is that listen tests its condition every time we render, until it is destroyed, perhaps with an analogous “mute()” function.

[code]>>Listen monitors the (1) given property for the (2) given condition, and then executes the (3) given function.
listen(timeline.frame, 10, moveCow)
listen(sphericalCow.x, 100, popCow)

>>Assume a layer called sphericalCow.
make moveCow()
:sphericalCow.x += 10
:sphericalCow.y += 10
:timeline.playFrom(1)
end make

make popCow()
:sphericalCow.hide()
:mute(timeline.frame, 10, moveCow)

>>Jump to cow explosion animation.
:timeline.playFrom(11)
end make[/code]

The advantages to this method are, A) we have all the code in one place, B) the script engine is a lot easier to “unplug” from Synfig, and C) we don’t have to make massive changes to the GUI to do it.
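On the engine side, the listen()/mute() model above could be backed by a registry like the following. This is a hypothetical C++ sketch (Listeners, listen, mute, checkAll are invented names, not proposed Synfig APIs): each listener pairs a property probe with a target value and a handler, the whole list is checked once per render tick, and muting drops the listener:

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Hypothetical engine-side registry backing the listen()/mute() idea.
struct Listeners {
    struct Entry {
        int id;
        std::function<double()> property;  // reads the monitored value
        double target;                     // condition: property() == target
        std::function<void()> handler;     // function to execute on match
    };
    std::vector<Entry> entries;
    int nextId = 0;

    // listen(property, condition, function) - returns an id for mute().
    int listen(std::function<double()> prop, double target,
               std::function<void()> fn) {
        entries.push_back({nextId, std::move(prop), target, std::move(fn)});
        return nextId++;
    }

    // mute() destroys a listener so its condition is no longer checked.
    void mute(int id) {
        entries.erase(std::remove_if(entries.begin(), entries.end(),
                          [id](const Entry& e) { return e.id == id; }),
                      entries.end());
    }

    // Called once every render: test each listener's condition.
    void checkAll() {
        for (auto& e : entries)
            if (e.property() == e.target) e.handler();
    }
};
```

A real version would support richer conditions than equality (ranges, edge triggers so a handler fires once per crossing, and so on), but the unplug-friendly shape is the same: the registry is the only coupling point between the script engine and the renderer.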

thanks