On Monday, 4 August 2008, percy tiglao wrote:
> On Sun, Aug 3, 2008 at 4:05 PM, <[EMAIL PROTECTED]> wrote:
> > Quoting percy tiglao <[EMAIL PROTECTED]>:
> >> Bug #1: Put two tracks on top of each other. Have the top one at, say,
> >> 50% fade (any fade that isn't 100% shows the bug). Have the bottom at
> >> 0% fade. The bottom track is visible, although it shouldn't be. The
> >> lower the top track's fade, the more noticeable the bug.
> >>
> >> Bug #2: Put two tracks on top of each other. Have the clip on the
> >> bottom track start a few seconds after the first. Have the top one at
> >> 50% fade. Mute the bottom track. There is a noticeable change in
> >> brightness when the clip on the bottom track starts.
> >>
> >> Bug #3: Put two tracks on top of each other. Have the top one at 50%
> >> fade. Change the fade of the bottom track between 0% and 50%. There is
> >> no difference in the output despite the changes to the bottom track's
> >> fade.
> >>
> >> Three bugs that I found... but they all have to do with the same few
> >> lines of code in the render engine. Unfortunately, there is a ton of
> >> copy/paste code in this source file, so most likely this bug exists in
> >> other forms or with other render settings. However, this patch fixes
> >> Cinelerra as far as my current project is concerned.
> >
> > One premise under which I buy your argument is that, after your patch,
> > the software renderer (the "X11" playback driver) and the OpenGL
> > renderer (the "X11 OpenGL" playback driver) produce exactly the same
> > result (in the Composition window). I have reason to believe that they
> > do not, although I have to meditate a bit more on this subject. But I'm
> > on vacation currently, and without access to my Cinelerra box.
As far as the render pipeline is concerned, I mostly agree with your
analysis; it basically also matches what I found out:

    http://bugs.cinelerra.org/show_bug.cgi?id=470#c4

But there is one complication. The render pipeline is used for two things:
(1) to render new media, and (2) to render an image for the compositor. At
least in case (2), there is one render step that is not performed by the
pipeline but in cmodel_transfer(), when the image is computed from the
rendered result. In my tests, I believe the function was
transfer_RGBA8888_to_RGB888(), reached via the macro
TRANSFER_FRAME_DEFAULT(), via cmodel_default(), via cmodel_transfer(), via
some tool in guicast that transforms a VFrame into an image.

As you can see, that function applies the alpha value to RGB. But this step
only happens when we are using the X11 or X11-XV driver, not with the
X11-OpenGL driver, because the latter has its own pipeline; it uses straight
OpenGL calls, which do not modify RGB values when they are transferred into
the display frame.

Therefore, even though I think your patch goes in the right direction, it is
not fully thought through yet and solves only a special case.

> Actually, upon further calculation... I am 100% sure there is a bug in
> my code. The alpha value is set to max, which is wrong. I'm supposed
> to set alpha to my predicted value of a2 * (1-a1) and then divide r, g,
> and b by this new alpha.

I don't immediately see that this is the correct alpha value. And why do
you divide the RGB values? OpenGL certainly never does that.

> Let me explain why I'm 100% sure, and why I think it is only a slight
> bug. The model that I'm using for the overlay is:
>
>   i1 * a1 / max + i2 * a2 * (1-a1) / max + i3 * a3 * (1 - a2 * (1-a1)) / max
> ...

Where do you get this from? If I derive the compositing formula for three
layers, I get a somewhat more complicated result. Is yours so simple because
you set the alpha composited from two layers to a2 * (1-a1) (your a_eq
below)? Is this a suitable alpha value?
This site says something different:

    http://en.wikipedia.org/wiki/Alpha_compositing

(And OpenGL's default blending formula is different again, but that can be
fixed with a shader program, to whatever formula we agree upon.)

> Where i1 is a pixel, and a1 is the alpha value for the top pixel. i2
> and a2 are the values for a lower pixel, and i3 and a3 are the values
> for the lowest pixel. The job of this bit of code is to combine i1 and
> i2, and a1 and a2, into an equivalent statement.
>
> Therefore... we are aiming at something along the lines of:
>
>   i_eq * a_eq / max + i3 * a3 * (1 - a_eq) / max
>     == i1 * a1 / max + i2 * a2 * (1-a1) / max + i3 * a3 * (1 - a2 * (1-a1)) / max
>
> Basically, i_eq and a_eq replace i1 and i2. A bit of arithmetic and...
>
>   i_eq * a_eq = i1 * a1 + i2 * a2 * (1-a1)

Your left-hand side is a pre-multiplied color value. I haven't thought
through whether the render pipeline uses pre-multiplied colors; it seems to
make sense. In that case:

- we shouldn't multiply by the alpha value again in the compositing stage;
- we don't need to divide by alpha at all, which also spares us a check for
  division by zero;
- it is then incorrect to use the transfer_RGBA8888_to_RGB888() function to
  display the resulting image, because it multiplies by alpha again.

There is another issue to think about: are the RGB values of (source) media
that have an alpha channel pre-multiplied by their alpha values or not?
Shouldn't we use the same convention when we write (rendered) media? I
don't know what is customary.

And another one: for media that do not have an alpha channel, the first
layer must be multiplied by the fade value and the alpha channel must be
set up accordingly. Does this happen at all times?

> a_eq = a2 * (1-a1)

I think the correct alpha value must be

  a_eq = a1 + a2 * (1-a1)

-- 
Hannes

_______________________________________________
Cinelerra mailing list
Cinelerra@skolelinux.no
https://init.linpro.no/mailman/skolelinux.no/listinfo/cinelerra