Autoanimate a skeleton using real video

Hello there,

Wouldn't it be nice to be able to record someone moving with your phone camera and automatically get an animated Synfig skeleton?

I thought so a couple of weeks ago and started to code it. I now have a first version, so I wanted to show you guys the PoC, and if people are interested maybe we could improve it (right now the app is a total mess; I think it should be redone).

PoCAutoPose

The skeleton in this .sif was created automatically from a video recording; I just had to draw the limbs and link them to the corresponding bones.
PoCAutoPose.sif (609.8 KB)

After completing it I saw that something like this already exists (https://pose-animator-demo.firebaseapp.com/), but not for Synfig as far as I can tell, so I think it would still be a nice feature for this community :slight_smile:


The idea is pretty amazing.

Congratulations! Keep going!!!

This is great. Can you tell us more about your approach? Is your autoanimate app based on tensorflow.js like the pose-animator-demo?

For such experiments you can also check out the posts of our friend @bazza

Even if such a tool would be great indeed, integrating it into Synfig could be another source of problems.
Maybe as a plugin or a complementary app.

As the source code is free and open source, here is some information for those who are interested (I am :P)


That looks great.

Now I’m also working with this one, which creates intermediate pictures using neural networks and Vulkan


As I coded it directly for Android, I didn’t use the TensorFlow approach, but there’s a machine learning library for Android with that option as well: ML Kit. I suspect it uses the same (or a very similar) model anyway.

The approach is simple:
-Get the feed from the camera
-Run the pose detection model on each frame
-Translate the detected points to Synfig coordinates
-Rewrite the .sif file with new waypoints for every frame
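The coordinate-translation step above can be sketched in a few lines. This is a hypothetical illustration, not the app's actual code: pose detectors typically return normalized image coordinates (x, y in [0, 1] with y pointing down), while Synfig uses units centered on the canvas with y pointing up. The canvas span values here are assumptions for illustration.

```python
# Hypothetical sketch: map a normalized pose-detector keypoint
# (x, y in [0, 1], y pointing down) to Synfig canvas units
# (origin at canvas center, y pointing up).
# span_x / span_y are assumed canvas dimensions in Synfig units.

def detector_to_synfig(x, y, span_x=8.0, span_y=4.5):
    """Map a normalized keypoint to Synfig canvas units."""
    sx = (x - 0.5) * span_x   # shift origin to the canvas center
    sy = (0.5 - y) * span_y   # flip the y axis (Synfig y points up)
    return sx, sy

# A keypoint at the image center lands at the canvas origin:
print(detector_to_synfig(0.5, 0.5))  # -> (0.0, 0.0)
```

The axis flip is the part that is easy to get wrong: without it, the whole skeleton ends up mirrored upside down.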

But I was thinking now of coding a PC version in python and tensorflow indeed.

Even if such a tool would be great indeed, integrating it into Synfig could be another source of problems.
Maybe as a plugin or a complementary app.
As the source code is free and open source, here is some information for those who are interested (I am :P)

Actually I don’t think there would be any problem integrating it. Right now it is a standalone Android app: you record the scene and it generates a .sif with an animated skeleton. For the next version I was thinking about making it a plugin: you put the video you want to extract the movement from in a folder the plugin can access, open Synfig, run the plugin, and you automatically get an animated skeleton. For this version I also wanted to provide shapes at the joints to ease the later rigging of the character. I think it would be a nice plugin for Synfig; I don’t know if other programs have something similar. A good thing about working with Python and TensorFlow is that you can detect more than one person, while my current version is limited to one pose.

But I don’t feel like doing it alone now (with the Android version working correctly, my motivation went down a bit). So if anyone wants to build this together, that would be cool.

Also for the fun (and maybe some inspiration)
https://sketch.metademolab.com/

I am interested in collaborating. I’m familiar with Python, but not with Synfig or the .sif format. I would appreciate it if someone could provide some links so I can learn about implementing Synfig plugins. @DSan, it would be great if you could share your Android code so I can study it.

You can learn about the .sif format here, thanks to @rodolforg:
https://synfig-docs-dev.readthedocs.io/en/latest/common/sif_file_format.html

Plugins are just Python programs that are given the file as an argument; you perform operations on the file itself.
You can find more information here:
https://synfig.readthedocs.io/en/latest/plugins.html

Also check out previously existing plugins, such as the Lottie exporter (built-in), the plugin by Glax, the Joystick plugin, etc., to get an idea of how plugins work.

As this is going to be based on other input, you need to create an importer plugin and then create a .sif file from it.
Importer and exporter plugins are only supported in the development version 1.5.x.


Great resources! Thanks a lot

Hi @thang,
nice to see someone else fancies developing this. I can’t upload the Android Studio package here as it exceeds the size limit. If you are interested I can send it via email or something, but I really think the code is a mess and we would be better off starting from zero in Python: I modified the ML library’s sample app to do what I wanted, so a lot of the code is useless for us but can’t easily be removed. Besides, I edited the .sif file manually instead of using a package such as ElementTree.

With Python I would use MoveNet (the “ultra fast and accurate pose detection model” on TensorFlow Hub)
and parse the .sif file with ElementTree.
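For the ElementTree side, here is a heavily hedged sketch of building an animated bone parameter with per-frame waypoints. The element and attribute names (`animated`, `waypoint`, `before`/`after` interpolation, a `vector` with `x`/`y` children) follow my reading of the .sif format docs and are assumptions to be checked against a file saved by Synfig itself.

```python
# Hedged sketch: build an animated "origin" parameter with one
# waypoint per frame using ElementTree. Element/attribute names are
# assumptions based on the .sif format docs; verify against a real
# .sif file exported by Synfig before relying on them.
import xml.etree.ElementTree as ET

def animated_origin(points, fps=24):
    """points: list of (x, y) Synfig coordinates, one per frame."""
    param = ET.Element("param", name="origin")
    animated = ET.SubElement(param, "animated", type="vector")
    for frame, (x, y) in enumerate(points):
        wp = ET.SubElement(animated, "waypoint",
                           time=f"{frame / fps}s",
                           before="clamped", after="clamped")
        vec = ET.SubElement(wp, "vector")
        ET.SubElement(vec, "x").text = f"{x:.6f}"
        ET.SubElement(vec, "y").text = f"{y:.6f}"
    return param

xml_str = ET.tostring(animated_origin([(0.0, 0.0), (0.1, 0.2)]),
                      encoding="unicode")
```

The same pattern would repeat for each bone's `angle` and `scale` parameters, just with scalar value nodes instead of vectors.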


Thanks for the pointers.

So let’s say this feature is technically viable (I’m still figuring that out, though). Let’s discuss what the optimal workflow should look like. On second thought, I don’t think it necessarily has to be a Synfig plugin: we could develop a desktop or web app where users drag in an input video as well as the character’s body parts, preview the output, and save the result as a .sif file.

What do you think?

It certainly is technically viable hehe, all the pieces are already there; they just have to be put together.
The workflow of the program would be:
-Open the video file frame by frame
-For each frame, run the pose detection TensorFlow model and get the joint points
-Use those coordinates to build the .sif file (for that you need to understand the attributes of an animated skeleton in Synfig: basically you have to create a skeleton layer with the 14 bones, I think, and for each bone update the origin, angle, and scale attributes frame by frame)
-Done
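The per-bone update in step 3 above reduces to a bit of trigonometry: given the detected positions of a bone's two joints, the bone's origin is the parent joint, its angle is the direction from parent to child, and its scale is the current length relative to a rest length. A small illustrative sketch (the rest-length normalization is my assumption about how Synfig bone scaling works):

```python
# Sketch of deriving one bone's origin, angle and scale from the two
# detected joint points (parent joint and child joint).
# rest_length is an assumed rest-pose bone length used to normalize
# the scale; check how Synfig actually defines bone scale.
import math

def bone_params(parent, child, rest_length=1.0):
    dx = child[0] - parent[0]
    dy = child[1] - parent[1]
    angle = math.degrees(math.atan2(dy, dx))   # bone orientation in degrees
    scale = math.hypot(dx, dy) / rest_length   # stretch relative to rest pose
    return parent, angle, scale

origin, angle, scale = bone_params((0.0, 0.0), (1.0, 1.0))
```

Running this for all 14 bones on every frame yields the values the waypoints need.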

It indeed doesn’t need to be a Synfig plugin (the Android version I made proves it), but since it is specific to Synfig, a plugin makes sense and shouldn’t be much harder.


Thanks for your detailed description of the code’s logic. I’m sorry I wasn’t clear: by workflow I mean the workflow users of Synfig follow when they use this “autoanimate” feature to create scenes, so I’m talking about UI design…