As I coded it directly for Android I didn’t use the TensorFlow approach; there’s a machine-learning library for Android with that option as well: ML Kit. I suspect they use the same (or a very similar) model anyway.
The approach is simple:
-Get the feed from the camera
-Pass each frame to the pose detector model
-Translate the detected points to Synfig coordinates
-Rewrite the .sif file with new waypoints every frame
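The coordinate-translation step can be sketched in a few lines of Python. This is a hedged sketch, not the actual app code: it assumes the detector returns landmarks normalized to [0, 1] image coordinates (y pointing down, origin top-left), and that the target canvas uses Synfig's default scale of 60 pixels per unit with the origin at the canvas center and the y axis pointing up. Adjust both assumptions to your canvas settings.

```python
def to_synfig(nx, ny, img_w, img_h, pixels_per_unit=60.0):
    """Map a normalized landmark (nx, ny in [0, 1], y down) to Synfig
    units (origin at canvas center, y up).

    pixels_per_unit=60.0 is assumed here as Synfig's default scale;
    change it to match the actual canvas.
    """
    px = nx * img_w                             # pixel position, origin top-left
    py = ny * img_h
    x = (px - img_w / 2.0) / pixels_per_unit    # center the origin horizontally
    y = (img_h / 2.0 - py) / pixels_per_unit    # flip y so up is positive
    return x, y

# A landmark at the exact image center maps to the Synfig origin:
print(to_synfig(0.5, 0.5, 640, 480))  # → (0.0, 0.0)
```

Running this once per landmark per frame gives you the point list that the waypoint-writing step consumes.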
But now I am thinking of coding a PC version in Python with TensorFlow.
Even if such a tool would indeed be great, it could be another source of integration problems in Synfig.
Maybe as a plugin or a complementary app.
As the source code is free and open-source, here is some info for those interested (I am :P)
Actually I don’t think there would be any problem integrating it. Right now it is a standalone Android app: you record the scene and it generates a .sif with an animated skeleton. For the next version I was thinking about making it a plugin: you put the video you want to extract the movement from in the plugin’s folder, open Synfig, run the plugin, and you automatically get an animated skeleton (for this version I also wanted to provide shapes at the joints, to ease the later rigging of the character). I think it would be a nice plugin for Synfig; I don’t know if other programs have something similar. A good thing about working with Python and TensorFlow is that you can detect more than one person; my current version is limited to a single pose.
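For the waypoint-writing step, here is a minimal standard-library sketch of how a plugin could build the animated part of the .sif. The element layout (an `<animated type="vector">` param filled with `<waypoint>` elements) is an assumption modeled on files saved by Synfig Studio, and the `animated_origin` name, the `fps` default, and the `clamped` interpolation are all mine; verify the structure against a real .sif before relying on it.

```python
import xml.etree.ElementTree as ET

def animated_origin(points, fps=24.0):
    """Build an animated 'origin' param from a list of (x, y) Synfig
    coordinates, one waypoint per frame.

    The XML layout is an assumption based on .sif files saved by
    Synfig Studio; check it against an actual file.
    """
    param = ET.Element("param", name="origin")
    anim = ET.SubElement(param, "animated", type="vector")
    for frame, (x, y) in enumerate(points):
        wp = ET.SubElement(anim, "waypoint",
                           time=f"{frame / fps}s",
                           before="clamped", after="clamped")
        vec = ET.SubElement(wp, "vector")
        ET.SubElement(vec, "x").text = f"{x:.10f}"
        ET.SubElement(vec, "y").text = f"{y:.10f}"
    return param

# One waypoint per tracked frame for a single bone origin:
xml = ET.tostring(animated_origin([(0.0, 0.0), (0.1, 0.2)]), encoding="unicode")
print(xml)
```

The plugin would splice one such param per bone into the skeleton layer of the .sif it generates.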
But I don’t feel like doing it alone right now (as the Android version is working correctly, my motivation has dropped a bit). So if anyone wants to build this together, that would be cool.