Thanks!

Some questions:
If I do not use a separate camera for each texture, will the texture
objects store the past frames after the camera has been rebound and is
rendering to a different object?
If I do use separate cameras, will I need to remove them from the scene
graph (or in some other way disable them) during frames in which I do not
want their corresponding textures updated? If so, will the texture
attached to them render correctly while the camera is not active?
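
The per-slot bookkeeping behind these questions can be sketched independently of the rendering calls. Below is a minimal, hypothetical sketch of the ring-buffer indexing only; the OSG-specific setup (one RTT camera and one texture per slot, with inactive cameras toggled via something like setNodeMask()) is indicated in comments as an assumption, not verified against any particular OSG version:

```cpp
#include <cassert>
#include <cstddef>

// Ring buffer of N render-target slots. In the scene graph this would map
// to N textures (and, per Robert's suggestion, N RTT cameras), e.g.:
//   cameras[i]->attach(osg::CameraNode::COLOR_BUFFER, textures[i].get());
// (OSG calls assumed, not verified.) Only the indexing logic is shown.
class DelayRing {
public:
    explicit DelayRing(std::size_t size) : size_(size), frame_(0) {}

    // Slot the "live" RTT camera renders into this frame.
    std::size_t writeIndex() const { return frame_ % size_; }

    // Slot the display camera samples: the frame rendered `delay` frames
    // ago. Only meaningful once at least `delay` frames have been rendered.
    std::size_t readIndex(std::size_t delay) const {
        assert(delay < size_);
        return (frame_ + size_ - delay) % size_;
    }

    // Advance after both cameras have drawn this frame.
    void nextFrame() { ++frame_; }

private:
    std::size_t size_;
    std::size_t frame_;
};
```

With this scheme nothing is rebound per frame: each slot keeps its own texture, so past frames persist in their texture objects, and the display camera merely switches which texture it samples.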

I previously used a solution similar to what you describe (simulation
offset), but the decision was made to use an actual time delay. This
approach more closely mirrors the system we are simulating (vehicle
teleoperation with latency, discrete sensor packets, limited
communications bandwidth/reliability etc).
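
For comparison, the simulation-time alternative Robert describes reduces to a subtraction: render each frame at a simulation time that lags the reference (wall-clock) time by the link latency. The viewer call named in the comment is an assumption about the SVN-era API, not verified:

```cpp
#include <cassert>

// Hypothetical latency model for the simulation-time approach: compute
// the delayed simulation time for the current frame. In OSG this value
// would then be handed to the viewer each frame (API assumed, e.g. a
// frame(simTime)-style overload in the SVN version).
double delayedSimulationTime(double referenceTime, double latencySeconds) {
    double simTime = referenceTime - latencySeconds;
    // Clamp so we never request a frame from before the simulation began.
    return simTime < 0.0 ? 0.0 : simTime;
}
```

This gives a continuously variable delay, though as noted above it does not capture discrete sensor packets or lossy links the way a ring buffer of actual rendered frames does.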

-Nicholas

> Hi Nicholas,
>
> For render to texture, just attach a texture rather than an image
> to the RTT Camera.  You'll need to use a separate texture for each of
> the entries in your ring buffer.  I'd also use a separate Camera for
> each entry in your ring buffer.
>
> But... I really don't see why you want to go to this length, when you
> could just render the image on demand with the simulation time you
> want.  Note, the SVN version of the OSG has a new concept of
> simulation time as well as reference time; simulation time can be set
> per frame and can advance at a different rate from the reference time
> - with the reference time now being used purely to specify the actual
> computer time at which that frame occurs.
>
> Robert.
>
> On 4/2/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>> Hello. I am a bit new to OSG and had a question.
>>
>> What I want to do is offset a camera's display by a variable time
>> delay. I currently use a camera to render to an image and then a
>> separate camera to render that image to the screen. In the first
>> camera's post draw callback I swap images into a ring buffer so that
>> the image displayed by the second camera is a previously rendered
>> frame.
>>
>> This worked fine as long as I copied the image data; however, for
>> efficiency reasons I wanted to switch to an implementation that
>> simply rebound the images to the appropriate cameras during the
>> callback. After some testing I found that, while the second camera
>> was rendering the correct frame in the ring buffer to the screen, the
>> first camera was not rendering to the correct image. Specifically, I
>> could not get it to render to any image other than the first one I
>> gave it.
>>
>> This is what I am using to change the render target:
>>
>> osgImage = newImage.get();
>> sceneCameraNode->detach(osg::CameraNode::COLOR_BUFFER);
>> sceneCameraNode->attach(osg::CameraNode::COLOR_BUFFER, osgImage.get());
>> osgTexture->setImage(osgImage.get());
>> osgTexture->dirtyTextureObject();
>>
>> newImage is the (initialized) desired render target. osgImage is the
>> prior render target. newImage == osgImage fails.
>> sceneCameraNode is the CameraNode I want to render onto osgImage.
>> The image is bound to osgTexture.
>>
>> Currently the system continues to render to the first image object I
>> give it. Because I use a ring buffer, this means I only see one out
>> of every N frames, where N is the size of the ring buffer.
>>
>> I am unsure if the above code is correct or even if this is the best way
>> to go about this. Any help would be appreciated.
>>
>> -Nicholas
>>
>>
>> _______________________________________________
>> osg-users mailing list
>> [email protected]
>> http://openscenegraph.net/mailman/listinfo/osg-users
>> http://www.openscenegraph.org/
>>

