Thanks cwright! I get it now. I wasn't putting the gl_FragColor line
in the right place.... really obvious oversight.
I'm attaching the "working as expected" shader. (I'm still
scratching my head a little bit about why this is necessary, if the
color already has alpha=0 in the vertex shader and my source texture
is composited over alpha.)
I think you're thinking in more complicated texturing terms than is
strictly necessary -- the blend mode affects the entire object (the
final compositing of whatever the fragment shader spits out), not the
input texture or the input color. Alpha blend mode (unofficial)
simply tells the GPU to do the premultiplication itself. This is a
bit slower, but simplifies writing shaders. (And in fact, if you need
to do the premultiplication in the fragment shader anyway, it's
actually the same speed -- the premultiplication needs to happen
somewhere, so that cost gets paid somewhere).
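To make that concrete, here's a minimal sketch of doing the premultiplication in the fragment shader yourself (the sampler name and texture coordinate here are just placeholders, not from your composition):

    uniform sampler2D tex;

    void main()
    {
        // sample a straight-alpha (non-premultiplied) texture
        vec4 color = texture2D(tex, gl_TexCoord[0].xy);
        // premultiply before writing the output -- this is what Over blend mode expects
        gl_FragColor = vec4(color.rgb * color.a, color.a);
    }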
To do alpha blending, you perform this mathematical operation (Alpha
blend mode tells the GPU to use this equation):

outputColor = destinationColor * (1. - sourceAlpha) + sourceColor * sourceAlpha;

(you provide sourceColor and sourceAlpha directly by setting
gl_FragColor or gl_FragData[0])
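(Just to spell that equation out -- the blend unit does this for you after your shader runs; you don't write it in the shader yourself. This is purely illustrative GLSL:)

    vec4 alphaBlend(vec4 dst, vec4 src)  // src is straight (non-premultiplied)
    {
        return dst * (1.0 - src.a) + src * src.a;
    }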
This costs two multiplies and an add per pixel ["1. - sourceAlpha" is
often "free", since it's usually just a binary inversion in 8-bit mode]
(back in the day, this was a gigantic cost - I remember writing
assembly blitters that did this sort of thing when I was a kid, before
graphics hardware was around to make this irrelevant).
So some people thought about the problem, and they realized that you
could pre-calculate the "sourceColor * sourceAlpha" part, since it's
constant for a given texel (it doesn't depend on whatever you're
blending onto). Thus, "premultiplication" was born. Your textures don't
store sourceColor anymore; they store sourceColor * sourceAlpha.
This, of course, has the negative side effect of losing color fidelity
when alpha is less than 1 -- for example, when alpha is 0, the color is 0
(you've lost all the color information). In practice, this typically
isn't a problem, and it saves a multiply per pixel, which is a lot of
work.
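(For a concrete example: with alpha 0.25, a color channel of 37 stores as 9 (37 * 0.25 rounds to 9); dividing back out gives 36, not 37. And at alpha 0, every channel stores as 0, so the original color is unrecoverable.)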
The equation then becomes the following (Over blend mode uses this):

outputColor = destinationColor * (1. - sourceAlpha) + sourceColor;
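(Same illustrative-GLSL treatment, assuming src has already been premultiplied:)

    vec4 overBlend(vec4 dst, vec4 srcPremult)
    {
        // srcPremult.rgb already contains color * alpha
        return dst * (1.0 - srcPremult.a) + srcPremult;
    }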
So if you feed it a sourceColor that _isn't_ premultiplied, the
results are wrong (they kinda blend, but they're kinda opaque too).
Add blend mode uses the following:

outputColor = destinationColor + sourceColor;

This is even simpler (0 multiplies!), but it disregards alpha entirely.
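(Or, in the same illustrative form:)

    vec4 addBlend(vec4 dst, vec4 src)
    {
        return dst + src;  // no multiplies, and alpha is ignored
    }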
So, when you're setting colors in QC, it typically does the
premultiplication for you, and you never need to know or care. However,
it cannot do that in the shader for you, since you're taking control of
the GPU directly at that point.
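So, given some straight-alpha color you've computed in the fragment shader (the name is just a placeholder), the two workable endings are roughly:

    gl_FragColor = color;                               // straight alpha: pair with Alpha blend mode
    gl_FragColor = vec4(color.rgb * color.a, color.a);  // premultiplied: pair with Over blend mode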
Too bad that the shader is blowing up your machine (I wonder if the
classic vertexnoise GLSL would do the same, as it's largely the same).
Not sure -- I commented out all the dead code, and it started working,
so I have no idea. *shrugs*
--
[ christopher wright ]
[email protected]
http://kineme.net/