Hi Mical,

> I recently have been trying to get a deferred rendering system to work
> using osg.

Lots of fun to come! :-)

>    First of all, in almost all the tutorials I've seen so far, the normal
> vector was multiplied by the normal matrix (gl_NormalMatrix), but when I did
> so in my geometry vertex shader, I had strange results (the normals were
> changing while I was moving in the scene) ... I later discovered that by
> leaving the gl_Normal untouched, I was getting the right result :|

The thing you have to remember when doing deferred rendering is what coordinate space you're in, and what coordinate space you want to be in. This forces you to really know what the built-in matrices in GLSL do, and which ones you should and should not use. Most tutorials are written with forward rendering in mind, so they make assumptions that you can't make here.

(Digression: this is one of the really nice things about OpenGL 3+; the built-ins are gone, so it forces you to know which matrices you need to pass in, and so on.)

Generally, what you want in deferred rendering is to store world-space values. That way, everything is in a common coordinate space, and once you do your light passes you will be able to use the same data for each light.

BUT, gl_NormalMatrix and gl_ModelViewMatrix bring normals and positions (respectively) into eye space, not world space (and gl_ModelViewProjectionMatrix goes all the way to clip space). The same goes for ftransform(): you still need it for gl_Position, but don't store its result in the G-buffer. That's why you saw the normals change when you moved the camera: they didn't actually change, it's just that they were in eye space, and eye space changes when you move the camera.
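To make the contrast concrete, here's a small illustrative snippet (these variable names are made up, it's just to show the two spaces side by side):

    // Eye space: this changes whenever the camera moves, which is
    // why the normals appeared to change as you moved around.
    vec3 eyeNormal = gl_NormalMatrix * gl_Normal;

    // Object space: stable under camera motion, but it ignores the
    // object's own modeling transform.
    vec3 objNormal = gl_Normal;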

So using the vertex normal directly is almost correct; at least it's more correct than using gl_NormalMatrix. But it won't work correctly if your object is rotated by its modeling transform (the transforms above that object in the scene graph). You would have seen this, for example, if you had animated your object to rotate continuously: the lighting on it wouldn't change, even though it's oriented differently with respect to the light source, because you're using object-space normals instead of transforming them to world space.

What you need in most cases is the model matrix. In the G-buffer (main camera) pass, you can reconstruct it by using a cull callback on your G-buffer camera to put the camera's inverse view matrix into a uniform. Then, in your vertex shader, you do:

    mat4 modelMatrix = u_ViewMatrixInverse * gl_ModelViewMatrix;
    mat3 modelMatrix3x3 = mat3(modelMatrix);

    // world space
    // (note: using mat3(modelMatrix) for normals assumes no non-uniform
    // scaling in the model matrix; otherwise use the inverse transpose)
    vWorldVertex = modelMatrix * gl_Vertex;
    vWorldNormal = modelMatrix3x3 * gl_Normal;

Then you can pass those to your G-buffer fragment shaders as varyings.
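For completeness, the matching G-buffer fragment shader could look something like this (a minimal sketch; the varying names match the vertex shader above, and the two gl_FragData outputs assume you've set up two render target textures via MRT):

    varying vec4 vWorldVertex;
    varying vec3 vWorldNormal;

    void main()
    {
        // MRT: attachment 0 holds world-space position, attachment 1
        // the world-space normal. The normal is renormalized because
        // interpolation changes its length, and remapped from [-1, 1]
        // to [0, 1] so it survives storage in a fixed-point texture.
        gl_FragData[0] = vWorldVertex;
        gl_FragData[1] = vec4(normalize(vWorldNormal) * 0.5 + 0.5, 1.0);
    }

If you render to floating-point textures instead, you can skip the [0, 1] remapping and store the values directly.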

Then, in the deferred pass(es), you'll need to do the rest of the work. Do your lighting calculations in world space. Remember, once again, which space you're in: the gl_* matrices won't be of any use to you in the deferred passes, since at that point you're probably rendering a full-screen quad with an ortho projection, so those matrices no longer hold the values from your main pass!

In particular, be careful to pass your light source's position/direction in world space in your own uniforms. OpenGL specifies that gl_LightSource[n].* are in eye space, but as I said the eye space of your deferred passes is not the same as the eye space of your G-buffer pass!
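As a sketch of what a deferred light pass might look like (all the uniform and texture names here are made up for illustration; u_LightWorldPos is a uniform you'd set yourself from the application, in world space):

    uniform sampler2D u_PositionTex;  // world-space positions from the G-buffer
    uniform sampler2D u_NormalTex;    // world-space normals, stored in [0, 1]
    uniform vec3 u_LightWorldPos;     // set by the application, in WORLD space

    void main()
    {
        // Assumes the full-screen quad's vertex shader wrote gl_TexCoord[0].
        vec2 uv = gl_TexCoord[0].st;
        vec3 P = texture2D(u_PositionTex, uv).xyz;
        vec3 N = normalize(texture2D(u_NormalTex, uv).xyz * 2.0 - 1.0);

        // Everything below is in world space - no gl_* matrices involved.
        vec3 L = normalize(u_LightWorldPos - P);
        float diffuse = max(dot(N, L), 0.0);
        gl_FragColor = vec4(vec3(diffuse), 1.0);
    }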

So it's really important to have a clear understanding of which space you're in at each step of your calculations, in each of your shaders. It's a bit complicated, but it's essential to pulling off this technique.

Hope this helps,

J-S
--
______________________________________________________
Jean-Sebastien Guay    jean-sebastien.g...@cm-labs.com
                               http://www.cm-labs.com/
                    http://whitestar02.dyndns-web.com/