Hi, I have an OSG-based simulation that I'm considering retrofitting to decouple the actual simulation rate from the render frame rate. It's not viewer-based. A simplified example would be an application that listens for a network message identifying two world positions, performs an intersection test, and returns a response, while rendering/updating the scene. Currently this happens serially, but I want to put the network listener and intersection testing on a separate thread. Obviously the shared data here is (a portion of) the scenegraph, which may be modified by the render/update thread. I want the network/query thread to be able to respond as quickly as possible, rather than being limited by how fast the scene can be drawn, as it is currently.
Does anyone have any advice/experience/examples for how to best approach this kind of problem? Is it even feasible? Thanks, Craig
_______________________________________________
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org