Hi Sergey

I would like to understand exactly how the graphics pipeline works, and I am 
struggling to figure this out by digging through the code (are there free books 
that cover OSG and its implementation details in depth? I don't mean the Quick 
Start Guide by Paul Martz...). My understanding of OpenGL is also lacking in 
places, especially implementation details, so perhaps this will help clear 
things up for me.

In the prerender example we have something like this:

0: camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
1: camera->attach(osg::Camera::BufferComponent(osg::Camera::COLOR_BUFFER0), image);
2: camera->setPostDrawCallback(new MyCameraPostDrawCallback(image));
3: textureRect[tex_to_get]->setImage(0, image);

As far as I understand, this works as follows:

0: This generates an FBO which has no data storage until I attach storage to it 
(next step)? Is this true? And what happens when I attach a color attachment 
but don't attach a depth attachment? Does OSG automatically generate one for 
me? I ask because in regular OpenGL, when you don't attach a depth attachment 
to an FBO, rendering to the FBO with glEnable(GL_DEPTH_TEST) produces 
incorrect results...
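For what it's worth, this is the kind of explicit depth attachment I would expect to need if OSG does not add one for me (just a sketch, assuming the osg::Camera::attach overload that takes an internal format; I don't know whether it is actually required):

```cpp
// Sketch: attach a depth renderbuffer explicitly rather than relying on
// OSG creating one implicitly. The attach(BufferComponent, GLenum)
// overload and GL_DEPTH_COMPONENT24 format are my assumptions here.
camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
camera->attach(osg::Camera::DEPTH_BUFFER, GL_DEPTH_COMPONENT24);
camera->attach(osg::Camera::BufferComponent(osg::Camera::COLOR_BUFFER0), image);
```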

1: The image attached to the camera is the data storage for the FBO created in 
step 0. If that is the case, then the FBO's storage sits in RAM and not on the 
GPU. Is that true?

2: Here we set a post-render callback using the image as its input. As far as I 
understand, this means the image (which sits in RAM) is edited in place. So as 
long as we don't need the data for actual rendering, we are being efficient, 
because we don't copy anything to the GPU. Which brings up the next line...

3: We specify that the texture uses the image as its image data. I don't 
entirely understand what this means. When we create the original texture, 
OpenGL allocates space on the GPU for it. By calling setImage(0, image), does 
that mean that whatever is in the image (which sits in RAM) must always be 
copied to the GPU texture before the texture can be used at render time?
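To make my mental model concrete, this is how I imagine a CPU-side update would look (a sketch; I'm assuming osg::Image::dirty() is what triggers the re-upload on the next frame):

```cpp
// Sketch: edit the CPU-side pixel buffer in place, then flag the image
// as modified so the texture that references it re-uploads it on its
// next apply. The dirty() mechanism is my assumption here.
unsigned char* data = image->data();
// ... edit pixels in 'data' ...
image->dirty();  // bump the modified count; texture re-uploads on next use
```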

All in all I feel quite confused, and I would like to understand how to do the 
above process efficiently.
Ideally I would like to: pass dynamic geometry to the GPU every frame and have 
it rendered to textures that stay on the GPU. Occasionally I would like to ask 
the GPU to send the RTT textures back to the CPU, update them on the CPU, and 
then send them back to the GPU.
How should I do that in the most efficient way possible?
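My guess at the shape of this, based on the headers, is below (very much a sketch: the Texture attach overload, readImageFromCurrentTexture, and the width/height/contextID variables are all assumptions on my part):

```cpp
#include <osg/Camera>
#include <osg/TextureRectangle>
#include <osg/Image>

// Sketch: attach a Texture (not an Image) to the camera so the RTT
// result stays on the GPU, with no per-frame CPU copy.
osg::ref_ptr<osg::TextureRectangle> tex = new osg::TextureRectangle;
tex->setTextureSize(width, height);
tex->setInternalFormat(GL_RGBA);

camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
camera->attach(osg::Camera::COLOR_BUFFER0, tex.get());

// Occasionally, with a current GL context (e.g. inside a draw callback
// where the texture is bound), pull the texels back to the CPU:
osg::ref_ptr<osg::Image> snapshot = new osg::Image;
snapshot->readImageFromCurrentTexture(contextID, false);  // GPU -> CPU
// ... update 'snapshot' on the CPU ...
tex->setImage(0, snapshot.get());  // re-upload on next apply
```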



Thank you!

Cheers,
aaron

------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=41103#41103





_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
