Hi,

I have a very large scene graph containing many geodes.

Each of these geodes has a StateSet with a GPU program attached, containing 
a vertex and a fragment shader.

The GPU program instance is shared by the whole application - I am not 
recreating the program over and over again for every geode, which would be an 
easy, silly mistake.
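
In case it helps, here is roughly the setup; vertexSource, fragmentSource and 
the geodes container stand in for my real code:

    #include <osg/Program>
    #include <osg/Shader>
    #include <osg/Geode>
    #include <osg/ref_ptr>
    #include <vector>

    // One program instance, created once for the whole application.
    osg::ref_ptr<osg::Program> program = new osg::Program;
    program->addShader(new osg::Shader(osg::Shader::VERTEX,   vertexSource));
    program->addShader(new osg::Shader(osg::Shader::FRAGMENT, fragmentSource));

    // The same instance is attached to every geode's StateSet.
    std::vector< osg::ref_ptr<osg::Geode> > geodes; // filled elsewhere
    for (size_t i = 0; i < geodes.size(); ++i)
        geodes[i]->getOrCreateStateSet()->setAttributeAndModes(
            program.get(), osg::StateAttribute::ON);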

I have done performance tests, and having a StateSet assigned to each Geode 
causes a MASSIVE performance drop. For a given model containing thousands of 
geodes, the framerate drops from 64 to 30 fps merely by calling 
getOrCreateStateSet() on every geode, so that each individual geode carries 
its own StateSet.

An OpenGL trace shows that OSG is clever enough to avoid calling 
glUseProgram(id) / glUseProgram(0) between geodes. Declaring the GPU program 
once, on the topmost node of the graph, instead of on every geode, produces 
the same GL trace with no performance degradation. This suggests the cost is 
on the CPU side: OSG appears to spend a lot of time 'diffing' the state 
between geodes.
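
Concretely, the fast variant is nothing more than this (with root being my 
scene root, and program the shared instance from the snippet above):

    // Program set once at the top of the graph; the geodes carry
    // no StateSets of their own.
    root->getOrCreateStateSet()->setAttributeAndModes(
        program.get(), osg::StateAttribute::ON);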

Is there a way, maybe by fiddling with the OVERRIDE/PROTECTED flags, to 
improve the way OSG handles these state changes and keep good performance?
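
What I have in mind is something along these lines; I am not sure whether the 
OVERRIDE would actually short-circuit the per-geode state diffing, or only 
change which attribute wins (specialGeode and otherProgram below are 
hypothetical):

    // Root declares the program with OVERRIDE, so per-geode settings
    // are ignored during the draw traversal...
    root->getOrCreateStateSet()->setAttributeAndModes(
        program.get(),
        osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE);

    // ...unless a geode really needs its own program and marks it
    // PROTECTED, which wins over the parent's OVERRIDE.
    // (specialGeode/otherProgram are placeholders, not my real code.)
    specialGeode->getOrCreateStateSet()->setAttributeAndModes(
        otherProgram.get(),
        osg::StateAttribute::ON | osg::StateAttribute::PROTECTED);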

Cheers,
Fred
