Wouldn't it be nice to record someone moving with your phone camera and automatically get an animated Synfig skeleton?
I thought so a couple of weeks ago and started coding it. I now have a first version, so I wanted to show you the proof of concept, and if people are interested maybe we could improve it together (right now the app is a total mess; I think it should be redone).
The skeleton in this .sif was created automatically from a video recording; I just had to draw the limbs and link them to the corresponding bones. PoCAutoPose.sif (609.8 KB)
Since I coded it directly for Android, I didn't use the TensorFlow approach; there's a machine-learning library for Android with a pose-detection option as well: ML Kit. I suspect it uses the same (or a very similar) model anyway.
The approach is simple:
- Get the feed from the camera
- Pass each frame through the pose-detector model
- Translate the detected points to Synfig coordinates
- Rewrite the .sif file with new waypoints for every frame
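The coordinate-translation step is the only purely mathematical one, so here is a minimal sketch of what it might look like in Python. Pose detectors typically return normalized image coordinates (0..1, origin at the top left, y pointing down), while Synfig uses units centered on the canvas with y pointing up. The function name `to_synfig` and the default view box of 8 × 4.5 Synfig units (Synfig's default 16:9 canvas) are my assumptions, not code from the app:

```python
def to_synfig(nx, ny, view_w=8.0, view_h=4.5):
    """Map a normalized detector point (0..1, origin top-left, y down)
    to Synfig units (origin at canvas center, y up).

    view_w/view_h are assumed to match the canvas view box; adjust
    them to your project's aspect ratio.
    """
    x = (nx - 0.5) * view_w   # shift origin to center, scale to units
    y = (0.5 - ny) * view_h   # flip y axis (detector y grows downward)
    return x, y

# A detected point in the middle of the image lands at the canvas origin:
# to_synfig(0.5, 0.5) -> (0.0, 0.0)
```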
But now I'm thinking of coding a PC version in Python with TensorFlow.
Such a tool would indeed be great, but it would also be another source of integration problems for Synfig.
Maybe as a plugin or a complementary app.
As the source code is free and open-source, here's some info for those who are interested (I am :P).
Actually I don’t think there would be any problem integrating it. Right now it is a standalone Android app: you record the scene and it generates a .sif with an animated skeleton. For the next version I was thinking of making it a plugin: you put the video you want to extract the movement from in the plugin's folder so it can access it, open Synfig, run the plugin, and you automatically get an animated skeleton. For that version I also wanted to provide shapes at the joints to ease the later rigging of the character. I think it would be a nice plugin for Synfig; I don't know whether other programs have something similar. A good thing about working with Python and TensorFlow: you can detect more than one person, while my current version is limited to a single pose.
But I don’t feel like doing it alone now (since the Android version is working correctly, my motivation has dropped a bit). So if anyone wants to build this together, that would be cool.
I am interested in collaborating. I’m familiar with Python, but not with Synfig or the .sif format. I would appreciate it if someone could provide some links so I can learn about implementing Synfig plugins. @DSan, it would be great if you could share your Android code so I can study it.
Nice to see someone else fancies developing this. I can’t upload the Android Studio package here as it exceeds the size limit. If you are interested I can send it via email or similar, but I really think the code is a mess and we would be better off starting from zero in Python: I modified the ML library's sample app to do what I wanted, so a lot of the code is useless for us but can’t easily be removed. Besides, I edited the .sif file manually instead of using a package such as ElementTree.
So let’s say this feature is technically viable (I’m still figuring that out, though). Let’s discuss what the optimal workflow should look like. On second thought, I don’t think it necessarily has to be a Synfig plugin: we could develop a desktop or web app where users drag in an input video as well as character body parts, preview the output, and save the result as a .sif file.
It certainly is technically viable hehe; all the pieces are already there, they just have to be put together.
The workflow of the program would be:
- Open the video file frame by frame
- For each frame, run the pose-detection TensorFlow module and get the joint points
- Use those coordinates to build the .sif file (for that you need to understand the tags of an animated skeleton in Synfig: basically you create a skeleton layer with the 14 bones, I think, and for each bone update the origin, angle and scale parameters frame by frame)
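The last step, writing per-frame waypoints, could be done with Python's built-in `xml.etree.ElementTree` instead of editing the file by hand. Below is a minimal sketch of building an animated vector parameter, based on the `<animated>`/`<waypoint>`/`<vector>` structure as I understand it from .sif files; the `"clamped"` interpolation and the helper name `animated_vector` are assumptions to verify against a real file exported from Synfig:

```python
import xml.etree.ElementTree as ET

def animated_vector(track):
    """Build a Synfig-style <animated type="vector"> element from a
    list of (time_string, x, y) waypoints, e.g. [("0s", 0.0, 0.0), ...].
    """
    anim = ET.Element("animated", type="vector")
    for t, x, y in track:
        # one <waypoint> per frame; interpolation mode is an assumption
        wp = ET.SubElement(anim, "waypoint",
                           time=t, before="clamped", after="clamped")
        vec = ET.SubElement(wp, "vector")
        ET.SubElement(vec, "x").text = f"{x:.10f}"
        ET.SubElement(vec, "y").text = f"{y:.10f}"
    return anim

# The resulting element can then replace a bone's static origin node
# inside the skeleton layer before serializing the tree back to .sif.
```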
It indeed doesn’t need to be a Synfig plugin (the Android version I made proves that), but since the tool is specific to Synfig, I think a plugin makes sense, and it shouldn’t be that much harder.
Thanks for your detailed description of the code’s logic. I’m sorry I wasn’t clear: by workflow I meant the workflow a Synfig user would follow when using this “autoanimate” feature to create scenes, so I’m talking about UI design…