Hello,

I am trying to read the "real" (eye-space) distance out of a scene's depth 
buffer, and I have a couple of questions. I am using Wang Rui's code as a 
starting point for working with depth buffers (specifically his code from the 
OpenSceneGraph Cookbook, Chapter 6, Recipe 4). 

What I am trying to do, as a start, is read the distance values from the depth 
buffer and dump them to standard output. I've attached an osg::Image to the 
viewer's camera's depth buffer, and I query the image values during a click 
event in a pick handler (the reason for the pick handler is that I would 
eventually like to compare the distance I get from an Intersector against the 
value from the depth buffer; a skeleton of the handler is sketched further 
below).

Code:

viewer.getCamera()->attach(osg::Camera::DEPTH_BUFFER, image);
image->allocateImage(32, 32, 1, GL_DEPTH_COMPONENT, GL_FLOAT);

So I am using a 32x32 image, and I am querying the image values with:

Code:

for (int k = 0; k < 32; ++k)          // row
{
    for (int l = 0; l < 32; ++l)      // column
    {
        // osg::Image::data() takes (column, row)
        float z_buffer = ((float*)image->data(l, k))[0];
        float z = -1.0f / (z_buffer - 1.0f);  // approximate distance, see below
        std::cout << " " << z << " ";
    }
    std::cout << std::endl;
}
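
For completeness, the pick handler that this loop lives in is structured 
roughly like this (heavily simplified; DepthQueryHandler and its members are 
placeholder names rather than my exact class):

Code:

#include <osgGA/GUIEventHandler>
#include <osg/Image>

class DepthQueryHandler : public osgGA::GUIEventHandler
{
public:
    DepthQueryHandler(osg::Image* image) : _image(image) {}

    virtual bool handle(const osgGA::GUIEventAdapter& ea,
                        osgGA::GUIActionAdapter& aa)
    {
        // On a mouse click, dump the current contents of the depth image
        if (ea.getEventType() == osgGA::GUIEventAdapter::PUSH)
        {
            // ... the 32x32 dump loop from above goes here ...
        }
        return false;  // don't consume the event
    }

protected:
    osg::ref_ptr<osg::Image> _image;
};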




I understand that the z-buffer values are not linear in distance, so following 
the article "Love your Z-Buffer" by Steve Baker (I can't post links yet because 
I don't have enough posts), I came up with the formula z = -1/(z_buffer - 1) 
for converting a z-buffer value to an approximate real distance value for a 
24-bit depth buffer, with zNear at 1 m and zFar at 10,000 m.
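
For reference, the exact inversion (rather than the large-zFar approximation) 
works out to the function below, assuming the standard OpenGL perspective 
depth mapping; this is my own derivation from the article, so please correct 
me if I got it wrong:

Code:

// Invert a window-space depth value d in [0,1] back to eye-space distance,
// assuming a standard OpenGL perspective projection with the given planes.
// With zNear = 1 and zFar = 10000 this reduces to roughly 1/(1 - d),
// which is where my z = -1/(z_buffer - 1) comes from.
float depthToDistance(float d, float zNear, float zFar)
{
    return zNear * zFar / (zFar - d * (zFar - zNear));
}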

However, the values I get when querying the resulting image don't make much 
sense to me. When I am standing on my terrain in first-person mode, I visually 
see the top half of my screen filled with the sky and the bottom half with the 
terrain, so I would expect very large distance values in the top portion of 
the depth image, and smaller and smaller values towards the bottom. Instead, 
all of the values in the 32x32 depth image are around 2 m. 

I would really appreciate it if someone could take a look at the code and let 
me know where I am going wrong. 


Eventually, I would like to render the depth buffer as a grayscale image, akin 
to what you would see from a ranging sensor such as a lidar or a Kinect. Is 
this even a good approach for this kind of visualization, or am I better off 
trying some sort of ray-tracing solution? My only concern with ray tracing is 
that I need to maintain "soft" real-time performance, and I am not sure that I 
can achieve that with ray tracing. I would appreciate any advice.
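
For what it's worth, the grayscale conversion I have in mind is something 
along these lines (untested sketch; makeRangeImage and maxRange are 
placeholder names):

Code:

#include <osg/Image>
#include <osg/Math>

// Convert the float depth image into a GL_LUMINANCE image, mapping
// eye-space distance linearly onto [0,1] up to some maximum range.
osg::ref_ptr<osg::Image> makeRangeImage(osg::Image* depth,
                                        float zNear, float zFar,
                                        float maxRange)
{
    int w = depth->s(), h = depth->t();
    osg::ref_ptr<osg::Image> gray = new osg::Image;
    gray->allocateImage(w, h, 1, GL_LUMINANCE, GL_FLOAT);
    for (int row = 0; row < h; ++row)
    {
        for (int col = 0; col < w; ++col)
        {
            float d = ((float*)depth->data(col, row))[0];
            float dist = zNear * zFar / (zFar - d * (zFar - zNear));
            ((float*)gray->data(col, row))[0] =
                osg::clampTo(dist / maxRange, 0.0f, 1.0f);
        }
    }
    return gray;
}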

Thank you!


Andrey
