I have made a few short animations, but the problem I am having is with voices. Not with animating the mouth, but with getting it to open and close at the right time. As Synfig has no audio support, it is hard to lipsync the voices correctly. Any help is appreciated, thanks!
Have you had a look at the development version? It now has JACK integration, so you can play a soundtrack synced with the Synfig timeline.
The development version with JACK integration has not been released as a package yet, so you will have to build it yourself from the source code (some familiarity with building from source is needed).
I’m not familiar with building from source either, so I open my sound file in Audacity or Kdenlive (or any video or audio editor) to determine the points in time at which I need to animate my objects.
To make this easier, use frames as the timeline unit instead of seconds, so you don’t have to do extra calculations to convert times between programs (you can change this in Synfig under Edit > Preferences > Misc. > Timestamp: FFf).
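If you do end up converting between the two units, the arithmetic is simple. Here is a minimal sketch; the 24 fps default is an assumption, so substitute your own project’s frame rate:

```python
# Sketch of seconds <-> frames conversion for matching timings
# between an audio editor and Synfig. The fps value is an
# assumption (Synfig projects commonly use 24 fps).

def seconds_to_frame(seconds: float, fps: float = 24.0) -> int:
    """Round a timestamp in seconds to the nearest frame number."""
    return round(seconds * fps)

def frame_to_seconds(frame: int, fps: float = 24.0) -> float:
    """Convert a frame number back to a time in seconds."""
    return frame / fps

# A word starting at 1.5 s in Audacity lands on frame 36 at 24 fps.
print(seconds_to_frame(1.5))  # -> 36
```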
Having worked in a professional production pipeline (Synfig included), I’ve actually found it much faster (though not necessarily easier) to be able to line up storyboards and voices in Blender’s video editor, then just write down lip signatures “per frame” for each scene. Not the most convenient way to lip-sync, but it’s worked for me.
Just added a new plugin that lets you import audio timing from Audacity into Synfig: plugin to import keyframes - audio synchronisation
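For anyone curious how such an import can work: Audacity can export a label track as a plain-text file with one tab-separated line per label (start time, end time, label text, times in seconds). This is a hedged sketch of parsing that format into frame numbers, not the plugin’s actual code; the fps default and sample labels are made up for illustration:

```python
# Sketch: parse Audacity label-track export text
# ("start<TAB>end<TAB>label" per line, times in seconds)
# into (frame, label) pairs usable as keyframe timings.
# The fps value is an assumption for illustration.

def labels_to_frames(text: str, fps: float = 24.0):
    """Return a list of (frame_number, label) pairs."""
    keyframes = []
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        start, _end, label = line.split("\t")
        keyframes.append((round(float(start) * fps), label))
    return keyframes

# Example label export (hypothetical contents):
labels = "0.500000\t0.700000\thello\n1.250000\t1.400000\tworld\n"
print(labels_to_frames(labels))  # -> [(12, 'hello'), (30, 'world')]
```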
Papagayo and Audacity work like a charm for me.