When the total render time is large, it makes sense to use a modern multicore CPU in parallel. As you know, the best way to export an animation is as a PNG sequence: it also lets you split the render across several sessions or (when available) across several cores in parallel.
I'm starting this thread to create a script that performs the render in parallel.
If anyone wants to help, you're welcome. I'm trying bash as a first attempt; Python, Java, or Perl versions would also be awesome.
In the PNG case, I'd like each processor to render one subset of the image sequence. How does the above code do that?
Does it mean that I have to name the output file "star"? And how are the frames divided among the processors when I invoke "make" with "-j4"? (Sorry for my ignorance and laziness.)
'star' is just the example animation I used while getting this working; you can change it to any name.
make understands 'jobs' and runs X jobs as separate sub-processes; it relies on the operating system (Linux/Darwin/Windows/BSD/etc.) to distribute those processes over the available CPU cores. In this Makefile I make each job render one frame of the animation. The -j option specifies the number of jobs to run in parallel.
I tell synfig to render frame X when the filename of the output image is star.X.png.
I tell make that star.ogg depends on all the star.X.png images existing and to run ffmpeg2theora once they do exist.
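The Makefile described above might look something like the following minimal sketch. The project name 'star', the frame count, and the exact synfig flags are illustrative; the real Makefile from the thread may differ. Note that make recipes must be indented with tabs.

```make
# Sketch of the Makefile described above (names and frame count illustrative).
FRAMES := $(shell seq 0 102)
PNGS   := $(FRAMES:%=star.%.png)

# star.ogg depends on every star.X.png; ffmpeg2theora runs once they all exist.
star.ogg: $(PNGS)
	ffmpeg2theora -o $@ -f image2 star.%d.png

# Pattern rule: render frame X when the requested output file is star.X.png.
star.%.png: star.sif
	synfig -t png -o $@ --begin-time $*f --end-time $*f star.sif
```

Running `make -j4 star.ogg` then lets make schedule up to four of these one-frame render jobs at a time.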
Here is a simpler version in bash:
#!/bin/bash
name=star
start=0
end=102
frames=$(($end - $start + 1))
cores=5
frames_per_core=$(($frames / ($cores - 1)))
for i in $(seq 0 $(($cores - 1))); do
    b=$(($start + $i * $frames_per_core))
    e=$(($start + ($i + 1) * $frames_per_core - 1))
    test $e -gt $end && e=$end
    test $b -gt $end && continue   # nothing left for this chunk
    # run each chunk in the background so the renders happen in parallel
    synfig -t png --begin-time "$b"f --end-time "$e"f $name.sif* &
done
wait   # block until every background render has finished
ffmpeg2theora -o $name.ogg -f image2 $name.%04d.png
I couldn't figure out how to set $frames_per_core optimally, though; in some cases one core is left with relatively fewer frames to render.
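One way to balance the chunks more evenly is ceiling division: divide the frame count by the number of cores, rounding up, so every core gets at most $frames_per_core frames and only the last one picks up the (small) remainder. A minimal sketch with the same example numbers (variable names mirror the script above):

```shell
#!/bin/bash
# Sketch: balance 103 frames (0..102) across 5 cores with ceiling division.
start=0
end=102
cores=5
frames=$((end - start + 1))
per=$(((frames + cores - 1) / cores))   # ceiling division: 21 frames per core
for i in $(seq 0 $((cores - 1))); do
    b=$((start + i * per))
    e=$((start + (i + 1) * per - 1))
    test $e -gt $end && e=$end
    echo "core $i renders frames $b..$e"
done
```

With these numbers the chunks come out as 21, 21, 21, 21, and 19 frames, instead of four chunks of 25 and one of 3.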
I know it only works on Unix systems, since it uses multiple processes (and pipes) instead of multiple threads, but otherwise it should work. Shouldn't it?
Note: the process-forking code is wrong in at least one place, but luckily it doesn't manifest as a bug.
It also shows another problem with that piece of code: since there is no wait() system call in the synfig code, all the child processes become (and remain) zombies until the main synfig process dies and the init process finally reaps them all (rather like Buffy!).
That's a great idea, although pardon me… the posts don't make sense to me.
Are you saying it would be a simple matter to create a .BAT file (if on Windows) by changing the filename of the Synfig project file within the .BAT? Or is it more complicated than that?
Hi Tushantin.
The idea here is to split the rendering of frames, or sets of frames, across multiple CPUs rather than having one CPU do everything serially. The problem under Windows is that we can't easily or programmatically target specific CPUs/cores for a specific set of frames to render. So yes, you could use different PCs to render sets of frames, but if you render on one PC under Windows, you'll end up using just one CPU/core/thread, at which point you might as well render as normal.