Any workaround for the lack of a sound layer for lipsync?

Has anyone figured out a way to emulate a sound layer in another program?

I’m trying to play an audio file at the exact time I play the frames of animation. Is there any way to bind an audio program to a keyboard shortcut so it plays a sound file on demand?

Or is there any other way to do this? I’m stumped on how to lipsync quickly and efficiently.

If you prepare the audio in Audacity, you get a graphical representation of the sound along a timeline. Do the audio first and you’ll have all the times you need to match against your animation timetrack, then mix the two in a movie editor.
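A minimal sketch of the bookkeeping this involves: converting the timestamps you read off the Audacity timeline (in seconds) into Synfig frame numbers. The 24 fps rate and the example timestamps are assumptions, not anything Synfig or Audacity give you automatically.

```python
# Sketch: convert Audacity timestamps (seconds) to Synfig frame numbers
# so you know where to place your mouth waypoints. The frame rate and
# the example timestamps below are placeholders for your own project.

FPS = 24  # match your Synfig project's frame rate

def seconds_to_frame(seconds, fps=FPS):
    """Round an Audacity timestamp (in seconds) to the nearest frame."""
    return round(seconds * fps)

# Times (in seconds) read off the Audacity timeline where each mouth
# shape should change.
mouth_changes = [0.0, 0.45, 0.90, 1.35]

for t in mouth_changes:
    print(f"{t:6.2f} s  ->  frame {seconds_to_frame(t)}")
```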

Sounds good to me.

JLipSync and Papagayo are two nice lipsyncing programs with outputs that work with Synfig. I’m also contemplating making an HTML5-based one.
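If you end up with one of those exports, a hedged sketch of how you might read it: this assumes the simple two-column frame/phoneme layout Papagayo uses for its Moho-style timesheet export; the file name is hypothetical, and you’d still set the mouth waypoints in Synfig by hand from the printed list.

```python
# Sketch: read a frame/phoneme timesheet (header line followed by
# "frame phoneme" pairs, as in Papagayo's Moho export) into a list
# you can walk while keying mouth shapes in Synfig.

def load_lipsync(path):
    """Return a list of (frame_number, phoneme) tuples."""
    cues = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            # Skip the header and anything that isn't "frame phoneme".
            if len(parts) == 2 and parts[0].isdigit():
                cues.append((int(parts[0]), parts[1]))
    return cues

for frame, phoneme in load_lipsync("dialogue.dat"):  # hypothetical file
    print(f"frame {frame:4d}: switch mouth to '{phoneme}'")
```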

Thanks

The Audacity timeline approach is good, but I’m looking for something faster than that.

Neither JLipSync nor Papagayo will run on Ubuntu 12.04 for me.

Papagayo is not running on my 64-bit Linux system :frowning:
I have used a trick for lipsync: first I made an encapsulated canvas layer with a speaking mouth (using a time loop) and another layer with a silent mouth. Then I loaded the WAV file into Cinelerra and set up its timeline to show time in the same format as Synfig (min:sec:frame). When I found the position where the character starts speaking, I went to Synfig, turned off the visibility of the silent mouth and turned on the speaking mouth. Then I switched back to Cinelerra, looked for the position where the character stops speaking, returned to Synfig, turned off the speaking mouth and turned the silent mouth back on. And so on until the end.
Here is a rough example: youtube.com/watch?v=9bSzq7iLVRI
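A small sketch of the frame arithmetic behind that toggling: take the speech intervals noted on the Cinelerra timeline (min:sec:frame) and print the Synfig frames where the silent and speaking mouth layers should swap. The 24 fps rate and the example intervals are assumptions, substitute your own.

```python
# Sketch: convert Cinelerra "min:sec:frame" timecodes into absolute
# Synfig frame numbers and list the visibility toggles to set by hand.

FPS = 24  # assumed frame rate; match your project

def to_frame(timecode, fps=FPS):
    """Convert a 'min:sec:frame' string to an absolute frame number."""
    minutes, seconds, frames = (int(p) for p in timecode.split(":"))
    return (minutes * 60 + seconds) * fps + frames

# (start, end) of each spoken phrase, read off the Cinelerra timeline.
speech_intervals = [("0:01:12", "0:03:06"), ("0:04:00", "0:06:18")]

for start, end in speech_intervals:
    print(f"frame {to_frame(start)}: speaking mouth ON, silent mouth OFF")
    print(f"frame {to_frame(end)}: speaking mouth OFF, silent mouth ON")
```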