Motion Blur layer: Linear transparency of frames?

Hi. I have a query that’s difficult to explain; hopefully you can follow me. I’m trying to do something with the Motion Blur layer. I reckon I’ve got a fair handle on how to use it, but something about how it works has caught my attention. As I understand it, the Motion Blur layer merges a set of, let’s call them, “subframes”, rendered into the past as far as the Aperture parameter specifies. (Aperture would probably be more correctly named “shutter speed”, but I digress.)

What I notice is that these subframes are merged together with linear decay; that is, the subframe at time aperture in the past is completely transparent, and the subframes between that and zero time (relative to the frame) are composited with linearly increasing opacity. When I look at the output of using the layer, that does appear to be what is happening - the past motion of the frame seems to fade out. In my opinion, it doesn’t look very believable.

It strikes me that this is not how film works at all. Film collects light for however long the shutter is open, but the “older” colours don’t “fade” away like this layer’s output seems to. For each frame of film, it doesn’t matter at what point during the exposure the light arrived on the film: all times of arrival are weighted equally. Look at this as an example.

Am I missing something? Perhaps there’s a subtlety in the moving shutters of film cameras, that I’m unaware of, that might justify the linear decay. But if not, I would think a constant opacity for each subframe would be more logical. The result would then simply be an average of all the subframes, rather than an (unnecessarily complex) linearly graded opacity.
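To illustrate the difference I mean, here’s a toy sketch (not Synfig code; all names are made up for the example) of blending a stack of subframe values with constant weights versus linearly decaying weights:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Composite N subframe values into one pixel using per-subframe
// weights, then normalize by the weight sum.
double blend(const std::vector<double>& subframes,
             const std::vector<double>& weights)
{
    double sum = 0.0, divisor = 0.0;
    for (size_t i = 0; i < subframes.size(); ++i) {
        sum     += subframes[i] * weights[i];
        divisor += weights[i];
    }
    return sum / divisor;
}

// Constant weighting: every subframe counts equally, so the result is
// simply the mean -- matching how film accumulates light over the
// whole time the shutter is open.
double blend_constant(const std::vector<double>& subframes)
{
    return blend(subframes, std::vector<double>(subframes.size(), 1.0));
}

// Linear decay: the oldest subframe gets a weight near 0 and the
// newest a weight of 1, so past motion fades out -- the behaviour
// I’m questioning above.
double blend_linear_decay(const std::vector<double>& subframes)
{
    std::vector<double> w(subframes.size());
    for (size_t i = 0; i < w.size(); ++i)
        w[i] = double(i + 1) / double(w.size());  // oldest subframe first
    return blend(subframes, w);
}
```

With constant weights, a pixel that is bright for half the exposure and dark for the other half comes out exactly 50% bright; with linear decay, the newer half dominates.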

Thoughts? In my opinion this is a bug, but I figured I’d better ask. If not to just fix this, perhaps the kind of blur could be a new parameter for the Motion Blur layer?

Hi Tachyon,
The current motion blur layer behaviour is not unintentional. Here is the commit where it was made: … 719fa6e426
The fact is that you’re right: the primitive motion blur from the original designer was a constant decay along the aperture time. (By the way, the parameter name “Aperture” was the one given by the original Synfig designer, and it has a text description of “Shutter Time”; unfortunately, parameter description tooltips don’t work, and that should be fixed.)
I don’t know dooglus’s reason for using a linear decay, but the fact is that normal motion blur on any film is constant.
The original approach was a bit odd too, because it did a centred time blur: it took subsamples at past and future times within a period of ±amount*0.5. I think that was a mistake as well (or was it?).

I’ve taken note of your comments; when I have more time, the behaviour should be reverted, and more parameters should be added to the layer, as requested on the feature request tracker.

Apart from “Aperture” (or some better human-readable name) and the number of samples, which other parameters would be interesting to add? (I understand that the amount and the blend methods can be achieved by encapsulating the whole motion-blurred composition in a paste canvas.)


Thanks Genete. Heh, about the only place I didn’t look was on the Sourceforge trackers. I didn’t realise you were using them for features. :slight_smile:

I was thinking that perhaps a parameter specifying whether to use linear, constant, or some other method(s) (if implemented) would be appropriate. Although, looking at that feature request, I see that it would work how I expect it to if I could set this proposed aperture transparency parameter to 100%.

I was considering poking at the code myself, and seeing how simple/complex it is. I couldn’t promise anything, though. :slight_smile:

You’re absolutely welcome to!
Please, if you do, use the forum, the patch tracker, or the mailing list to send your patch. I’ll look at it.

The fix for the particular issue I was having with the “linear” transparency (which I’m now not sure is even really linear) seemed far too simple. Then I had the idea of generating subframes adaptively: parts that didn’t change much wouldn’t have to be rendered, saving render time. So I spent some time working on this idea.

It was a beautiful failure. The idea was to separate frames into blocks and render each block adaptively, using recursion: if the difference between two subblocks (separated in time by less than the aperture/shutter speed) was above a threshold, then a subblock halfway between them in time would be rendered, and so on recursively.
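The recursive scheme I just described can be sketched like this (a simplification, not my actual patch; `render_at` is a stand-in that reduces a rendered block at a given time to a single comparable value):

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <vector>

// Adaptive subdivision sketch: evaluate the two ends of a time
// interval, and only subdivide -- adding a subframe halfway between
// them -- when the two renders differ by more than a threshold.
void adaptive_subframes(const std::function<double(double)>& render_at,
                        double t0, double t1, double threshold,
                        int max_depth, std::vector<double>& times)
{
    if (max_depth <= 0)
        return;
    double a = render_at(t0);
    double b = render_at(t1);
    if (std::fabs(a - b) > threshold) {
        double mid = 0.5 * (t0 + t1);
        times.push_back(mid);  // this midpoint subframe is needed
        adaptive_subframes(render_at, t0, mid, threshold, max_depth - 1, times);
        adaptive_subframes(render_at, mid, t1, threshold, max_depth - 1, times);
    }
}
```

In the real layer, each `render_at` call means setting the context time and re-rendering, which is exactly the expensive operation that killed this approach, as the timings below show.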

I was halfway through implementing this when I realised that the processing time it takes to set the context time, thus causing the recalculation of positions and so on, could actually be significant. It turns out that hunch was right. I finished up a working version and did a test with an image I’m working on for something else, with these results (using the “time” shell command):

Original Synfig 0.62 build (Debian package), motion blur off:
real 0m35.519s
user 0m31.670s
sys 0m1.484s

Original build, motion blur on:
real 0m46.501s
user 0m41.999s
sys 0m1.532s

So 24 samples, very nice quality, adds a bit more than 10 seconds to the frame. Most of the time is taken loading and decompressing the file.

My algorithm, motion blur off:
real 0m33.054s
user 0m29.354s
sys 0m1.500s

That’s consistent with the original build, so that’s good.

My algorithm, adaptive, limited to 32 samples, motion blur on:
real 15m40.255s
user 14m39.967s
sys 0m5.316s


The numbers speak for themselves. Add to this the fact that my new algorithm has a bug: it doesn’t actually do the equal composition that I wanted it to do in the first place!

I would attach the diff, for the morbidly curious, but apparently this forum doesn’t let you attach files with “.diff”, “.patch”, or blank extensions, and I’m not going to bother figuring out what extensions it does allow.

sigh I’ll start again and implement the simple solution later.

You don’t need to save a diff file with a .diff extension; just zip it and it will be allowed. The SourceForge tracker is also a good place for it, as it keeps the list of patches better organised than here. I’m absolutely curious to see your implementation :smiley:

My idea of how it should work is this:
The motion blur layer should have four parameters:
-Aperture (or whatever it should be called)
-Number of subsamples
-Amount of the subsamples
-Whether the amount of the subsamples should be calculated automatically from the number of subsamples, or taken from the parameter

So the user can select the motion blur to produce other effects depending on the parameters.

Like I said, I wasn’t going to try to work out what files it will allow me to upload. Here’s a zip of the diff for your curiosity. I wouldn’t dare submit it to the SF tracker, given that, well, it makes things considerably slower and doesn’t actually fix anything.

Regarding your suggested parameters, I was thinking something roughly along those lines, but I’m not sure how best to present the “Amount” curve. The current implementation appears to be (maybe) hyperbolic rather than linear, but I haven’t stared at the maths involved long enough to figure that out for sure; at the least, if it really is linear, it’s not obvious. I would like to make it so that the user could choose whatever amount calculation/curve for the subsamples they wanted. Is that possible? Does that expose too much complexity to the user? Another option is to just enumerate some options: “Old style” (or “Hyperbolic”, or whatever curve it actually turns out to be), “Constant”, “[INSERT OTHER CRAZY THING HERE]”… some of which might require extra specific parameters. I prefer this idea, but I’m not so sure what to do about those extra parameters.

Another problem is how to take the render quality into account. I expect people will be confused when they specify X subsamples and it only renders X/4, or whatever, because the global quality is not at full. I haven’t noticed any other situation where this kind of thing occurs (though there may be some). A more sensible idea would probably be a “Subsampling Quality” parameter specific to the motion blur layer. The default of 1.0 would correspond to, say, 32 samples at highest quality, as is the case now, and the number of subsamples would be proportional to this parameter. Taking the global quality into account would then basically be a multiplication (roughly speaking). (3.19 KB)
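The arithmetic I have in mind is roughly this (a sketch with made-up names, not actual Synfig code; the baseline of 32 and the clamping are my assumptions):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Proposed mapping: subsample_quality = 1.0 corresponds to a baseline
// of 32 samples at full global quality, and the global quality scales
// that down multiplicatively.
int effective_samples(double subsample_quality, double global_quality)
{
    const int baseline = 32;
    int n = int(std::round(baseline * subsample_quality * global_quality));
    return std::max(n, 1);  // always render at least one subframe
}
```

So a user rendering a preview at half global quality with the default subsampling quality would get 16 subframes instead of 32.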

Yes, some sort of drop-down list of subsampling methods could be a good idea, so the user just selects whatever fits them best.
Regarding the quality system, I’m not too happy with it right now. I wish it offered more control for the users who want it, and fewer levels for day-to-day usage. For example, I wish we could:
a) Enable/disable rendering of specific layers individually.
b) Enable/disable specific layer parameters, or specify a quick value for certain parameters.
c) Offer, to the user who doesn’t want to control all that stuff, just four global quality settings:
Preview, High Quality, Normal Quality, Low Quality
which would be overridden by the specific quality settings.
A visual system that tells you what’s being rendered at any moment would be good too, so nobody would wonder why the motion blur isn’t working or the Duplicate layer isn’t duplicating.

Had some time for this today; here’s what I’ve come up with. Weighting curve types are enumerated; at the moment the options are constant (the default), hyperbolic (the original one - I’m convinced it isn’t linear), and linear. Also new is a quality parameter that simply multiplies the number of subsamples.

Lastly, there are starting and ending amount parameters. At the moment these only affect the linear curve - I don’t know a sensible way for them to affect the hyperbolic curve, and they should have no effect on the constant curve. (It occurs to me that the constant curve could be implemented by using equal nonzero relative weights on a linear curve, but for the moment it’s just a separate option.) I don’t know how to disable parameters when they cannot be used, as with the constant curve, or whether that’s even possible.
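The shape of the weighting is roughly like this (an illustrative sketch, not the patch itself; the exact hyperbolic formula is my guess at a plausible shape, since I still haven’t pinned down the original maths):

```cpp
#include <cassert>
#include <cmath>

// The three weighting curve types described above.
enum SubsamplingCurve {
    SUBSAMPLING_CONSTANT,
    SUBSAMPLING_LINEAR,
    SUBSAMPLING_HYPERBOLIC
};

// pos runs from 0 (oldest subframe, at time_cur - aperture) to
// 1 (the current frame, at time_cur). Weights get normalized by
// their sum afterwards, so only their relative sizes matter.
double subframe_weight(SubsamplingCurve curve, double pos,
                       double subsample_start, double subsample_end)
{
    switch (curve) {
    case SUBSAMPLING_LINEAR:
        // interpolate between the user-chosen start and end amounts
        return (1.0 - pos) * subsample_start + pos * subsample_end;
    case SUBSAMPLING_HYPERBOLIC:
        // guessed shape: weight grows sharply toward the current
        // time (the epsilon avoids a divide by zero at pos == 1)
        return 1.0 / (1.0 - pos + 1e-6);
    case SUBSAMPLING_CONSTANT:
    default:
        return 1.0;  // every subframe counts equally: a plain average
    }
}
```

Note that the start/end amounts are simply ignored for the constant and hyperbolic curves here, matching what I said above about them only affecting the linear curve.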

This solves my original problem with the motion blur layer, and implements some of the functionality suggested by the feature request on the tracker. See what you reckon. (1.68 KB)

It looks awesome!!! Thank you very much for the patch! I look forward to applying it to the new release. I’ll play with it a bit more before applying it to the master branch, if you don’t mind.
Great! :smiley:

Go right ahead. :slight_smile:

I would like to rename “Subsamples Factor” to “Density” or similar.
Setting both subsample_start and subsample_end to 0 produces a black frame. That should be checked.

Things TODO (reminder for myself):
*Reverse the quality system: instead of a fixed number of samples, use a factor of the number-of-samples parameter. Like what you did for your parameter, but the inverse: parameter = number of samples; rendered samples = number of samples * factor, where the factor depends on the quality.
*Turn this layer into a composite type and set its default blend method to Straight. The user should have that option.
*Find a way to implement the get_color member based on the layer’s parameters. That will allow further filters to be applied properly.


case SUBSAMPLING_LINEAR: scale = ipos*subsample_start + pos*subsample_end +0.000001;
This modification avoids the black frame. Can you find a better way to solve it?

Please post your full name for the credits. :wink:

I agree, density is a better term.

Fine idea, although the documentation needs to make it clear enough that users won’t think the layer is broken when only a couple of the dozens of subsamples they asked for actually get rendered (because they’re using a low-quality preview).

You mean a blank (i.e. transparent) frame rather than a black frame, right? This problem hadn’t escaped me, but I took the easy solution. The thing is, I think we should guarantee to the user that at time_cur-aperture and at time_cur the scale for the corresponding subframe will be exactly what the user specifies (subsample_start and subsample_end). Your solution intentionally doesn’t guarantee that.

What we actually want, rather, is to not waste time generating a subframe that will be added with zero scale, and instead generate subframes at different times. This means handling the subsample_start==0 case (or possibly even the subsample_end==0 case, if someone wants to use a kind of leading blur) as a special condition.

See the attached patch (apply it on top of the previous one). It recognises conditions where the scale will be zero and adds extra subframes to compensate, then doesn’t bother rendering the frames whose scale is zero. IMHO, this is the correct solution.
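The idea of the patch can be sketched like this (a simplification of the logic, not the diff itself; the function name and shape are made up for illustration):

```cpp
#include <cassert>
#include <vector>

// When subsample_start (or subsample_end) is 0, the end subframe would
// contribute nothing, so instead of rendering it we add an extra sample
// slot and skip the zero-weight end, keeping the requested number of
// subframes that actually contribute.
std::vector<double> sample_positions(int samples,
                                     bool start_is_zero, bool end_is_zero)
{
    // positions in [0,1]; 0 = time_cur - aperture, 1 = time_cur
    int extra = (start_is_zero ? 1 : 0) + (end_is_zero ? 1 : 0);
    int total = samples + extra;  // compensate for the skipped ends
    std::vector<double> pos;
    for (int i = 0; i < total; ++i) {
        double p = double(i) / double(total - 1);
        if ((start_is_zero && i == 0) || (end_is_zero && i == total - 1))
            continue;  // weight would be zero: don't render this one
        pos.push_back(p);
    }
    return pos;
}
```

So with subsample_start==0, the oldest rendered subframe sits one step inside the aperture instead of exactly at time_cur-aperture, and no render time is spent on a subframe that would be multiplied by zero anyway.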

Although I’m also of the opinion that anything other than constant scaling is nonsense, anyway. :wink:

Nice. Of course, what that doesn’t show is that Synfig is now capable of simulating realistic motion blur, using constant curve and aperture <= 1f. That should be good for displaying fast motion in an otherwise normal sequence.

By the way, I’m this guy. Since you asked. :slight_smile: (910 Bytes)

No, I mean a black frame (I don’t have any context below the motion-blurred layer). The canvas is completely black, RGB=(0,0,0). Either way, it’s not a good result.
I’ll take a look at the patch.

Your solution still gives me a black (or fully transparent, if there is context behind) frame when subsample_start and subsample_end are both 0.0.
If you set them to:

it gives exactly the same render as for:

which is logical, because when one of them is set to zero, the other is treated as the full value, i.e. an alpha of 1/samples. Remember that you scale down by “scale” but later divide by “divisor”, which is the sum of the scales.

scale is set to zero at the beginning and remains 0 if both subsample amounts are zero. So scale is zero for all samples, and so is divisor. Multiplying alpha by zero and then dividing it by a divisor of 0 is not a good idea. That’s why I set it to a tiny value when both are zero: that way there is continuity in the values, despite the small error it introduces when both are nonzero.


Oh, I see. That’s a different problem from the one I fixed in patch 2. I would argue, though, that setting both the start and end values to zero should give you a black frame. The user is saying: “I want no contribution from the first subsample, no contribution from the last subsample, and linear interpolation between the two for everything else.” Of course the result will be 0, so it’s logical that the frame is black.

But I agree that division by zero is bad. In this case I would suggest checking prior to rendering: if subsample_start==0 && subsample_end==0, then return a black frame (my choice), or render one sample at time_cur (perhaps your choice), or something. Or it might be easier to check for divisor==0 inside the loop. Either way, having both values set to zero is fairly nonsensical input, so we should probably just validate it like you would any user input where they might put in something nonsensical. (Come to think of it, we might need to validate negative values as well; I’m not sure they produce anything remotely sensible.)
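The validation I have in mind is tiny; something along these lines (illustrative only, with made-up names):

```cpp
#include <cassert>

// Clamp negative amounts to zero, then check whether the start/end
// amounts can produce a meaningful blend at all. If both end up zero,
// every weight is zero and the divisor would be zero too, so the
// caller should bail out (black frame, or a single sample at
// time_cur) before dividing.
bool weights_usable(double subsample_start, double subsample_end)
{
    double s = subsample_start > 0.0 ? subsample_start : 0.0;
    double e = subsample_end   > 0.0 ? subsample_end   : 0.0;
    return (s + e) > 0.0;
}
```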

That’s my opinion, of course. The decision is up to you. :slight_smile: