Re: [osg-users] Resetting OSG state when managing own context

2012-08-27 Thread Neil Clayton
Yup, the shared context is certainly possible; it's what I'm doing now.

I'm going to investigate the use of deleteAllGLObjects when I'm back in the
office with a second screen connected to test with (I'm using the scenario
of moving a window between two screens to force a CAOpenGLLayer context
refresh).

If that works, I'll likely stay with the shared-context, single-threaded
approach.
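
For the record, the reset handling I'm planning to try looks roughly like
this (untested sketch; handleContextReset is my own name, and I'm guessing
osg::discardAllGLObjects is the right call rather than
osg::deleteAllGLObjects, since the old context's GL names are already gone):

#include <osg/GLObjects>
#include <osgViewer/GraphicsWindow>
#include <osgViewer/Viewer>

// Called after CAOpenGLLayer has thrown away the old context and
// created a new one.
void handleContextReset(osgViewer::Viewer& viewer,
                        osgViewer::GraphicsWindowEmbedded* window)
{
    osg::State* state = window->getState();
    unsigned int contextID = state->getContextID();

    // Make the scene graph forget its cached texture/buffer IDs so
    // they get recreated lazily on the next frame.
    if (viewer.getSceneData())
        viewer.getSceneData()->releaseGLObjects(state);

    // Drop pending deletions without touching GL - the names belonged
    // to a context that no longer exists.
    osg::discardAllGLObjects(contextID);

    // Reset OSG's mirror of the GL state machine.
    state->reset();
}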

--
Neil

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=49605#49605





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Resetting OSG state when managing own context

2012-08-26 Thread Sergey Polischuk
Hi, Neil

I feel like option (b) is the clear winner here. If it is possible to make
the OSG context shared with the CAOpenGLLayer's context, you'll be in good
shape, running without GPU->RAM->GPU transfers. Otherwise it all depends on
how much the texture round trip costs you in framerate.
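
At the CGL level the sharing would look something like this (just a sketch;
layerContext and layerPixelFormat are whatever you get in the layer's
copyCGLContextForPixelFormat: override):

#include <OpenGL/OpenGL.h>

// Create a second context in the same share group as the layer's
// context, so texture/buffer names created by OSG are valid in both.
CGLContextObj createSharedContext(CGLContextObj layerContext,
                                  CGLPixelFormatObj layerPixelFormat)
{
    CGLContextObj osgContext = NULL;
    CGLError err = CGLCreateContext(layerPixelFormat, layerContext,
                                    &osgContext);
    return (err == kCGLNoError) ? osgContext : NULL;
}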

Cheers,
Sergey.

24.08.2012, 18:36, "Neil Clayton" :
> Hi All,
>
> I've a question or two regarding rendering contexts. While the usage is OSX 
> specific, the question is not.
>
> I'm rendering into a CAOpenGLLayer (I can't render directly to an
> NSOpenGLView because I'm also using CALayer objects within the view, and
> these conflict with the NSOpenGLView, at least in my experience).
>
> In essence this means I manage the OpenGL context myself, create an
> osg::Viewer and use the setUpViewerAsEmbeddedInWindow method. I have no
> problem using the context created by CAOpenGLLayer (I'm basing my code on
> the existing CocoaViewer, which essentially does the same thing), but I do
> have problems when the context that OSG is using needs to be reset.
>
> In CAOpenGLLayer, a context reset can be caused by (for example) a user 
> moving the window to another screen.  In this case the context might be 
> recreated because CAOpenGLLayer wants to use either a different graphics card 
> or a different pixel format.
>
> From what I can gather, while I'm managing the real OpenGL context myself,
> OSG is managing its own texture IDs (and I presume other scene OpenGL
> resources). When the real OpenGL context is reset, OSG still believes all
> of its existing resources to be valid, because I've not explicitly told it
> otherwise.
>
> From this I have two questions:
> a) How do I tell OSG to throw away its managed resources (texture IDs,
> etc.) so that they are recreated on the next render pass? Can I do this?
> Is it even wise? Or should I throw away the viewer and recreate it?
>
> b) Instead of the above, should I rather have OSG manage its own context
> and then, after each OSG render pass, copy the rendered result to the
> layer?
>
> For (b) I was considering rendering to a texture-backed FBO, then DMA'ing
> that texture to RAM. I'd then DMA it back into the CAOpenGLLayer context.
> Thus OSG wouldn't have to be concerned at all with what the layer does
> with its context, nor with context resets. This (to me) sounds like a
> reasonable idea. It'd keep the OSG engine separated from the CAOpenGLLayer
> side of things, and would allow OSG to manage everything about its
> rendering pipeline. The only part I don't like is the additional texture
> copy to get the result into another context.
>
> P.S.: The scene being drawn is small, with only some hundreds of vertices
> and fewer than 100 textures.
>
> Thank you!
>
> Cheers,
> Neil
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=49508#49508
>
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] Resetting OSG state when managing own context

2012-08-24 Thread Neil Clayton
Hi All,

I've a question or two regarding rendering contexts. While the usage is OSX 
specific, the question is not.

I'm rendering into a CAOpenGLLayer (I can't render directly to an
NSOpenGLView because I'm also using CALayer objects within the view, and
these conflict with the NSOpenGLView, at least in my experience).

In essence this means I manage the OpenGL context myself, create an
osg::Viewer and use the setUpViewerAsEmbeddedInWindow method. I have no
problem using the context created by CAOpenGLLayer (I'm basing my code on
the existing CocoaViewer, which essentially does the same thing), but I do
have problems when the context that OSG is using needs to be reset.
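
For concreteness, the setup is roughly this (simplified sketch;
makeEmbeddedViewer is just an illustrative name):

#include <osg/Node>
#include <osg/ref_ptr>
#include <osgViewer/Viewer>

// OSG creates no window or context of its own here; it just renders
// into whatever context is current when frame() is called.
osg::ref_ptr<osgViewer::Viewer> makeEmbeddedViewer(osg::Node* scene,
                                                   int width, int height)
{
    osg::ref_ptr<osgViewer::Viewer> viewer = new osgViewer::Viewer;
    viewer->setUpViewerAsEmbeddedInWindow(0, 0, width, height);
    viewer->setSceneData(scene);
    viewer->realize();
    return viewer;
}

// Then, inside the layer's drawInCGLContext: (the layer's context is
// already current at that point):
//     viewer->frame();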

In CAOpenGLLayer, a context reset can be caused by (for example) a user moving 
the window to another screen.  In this case the context might be recreated 
because CAOpenGLLayer wants to use either a different graphics card or a 
different pixel format.

From what I can gather, while I'm managing the real OpenGL context myself,
OSG is managing its own texture IDs (and I presume other scene OpenGL
resources). When the real OpenGL context is reset, OSG still believes all of
its existing resources to be valid, because I've not explicitly told it
otherwise.

From this I have two questions:
a) How do I tell OSG to throw away its managed resources (texture IDs,
etc.) so that they are recreated on the next render pass? Can I do this? Is
it even wise? Or should I throw away the viewer and recreate it?

b) Instead of the above, should I rather have OSG manage its own context
and then, after each OSG render pass, copy the rendered result to the layer?

For (b) I was considering rendering to a texture-backed FBO, then DMA'ing
that texture to RAM. I'd then DMA it back into the CAOpenGLLayer context.
Thus OSG wouldn't have to be concerned at all with what the layer does with
its context, nor with context resets. This (to me) sounds like a reasonable
idea. It'd keep the OSG engine separated from the CAOpenGLLayer side of
things, and would allow OSG to manage everything about its rendering
pipeline. The only part I don't like is the additional texture copy to get
the result into another context.
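
Concretely, I was imagining something like the following for the OSG side
(rough, untested sketch; makeReadbackCamera is an illustrative name, and
the readback into an osg::Image is the GPU->RAM half, with the upload of
that image in the layer's context being the RAM->GPU half):

#include <osg/Camera>
#include <osg/Image>
#include <osg/Node>
#include <osg/ref_ptr>

// Pre-render camera that draws the scene into an FBO and reads the
// colour buffer back into "target" after each render.
osg::ref_ptr<osg::Camera> makeReadbackCamera(osg::Node* scene,
                                             osg::Image* target,
                                             int width, int height)
{
    osg::ref_ptr<osg::Camera> camera = new osg::Camera;
    camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
    camera->setRenderOrder(osg::Camera::PRE_RENDER);
    camera->setViewport(0, 0, width, height);

    // Attaching an osg::Image makes OSG glReadPixels the result into
    // it once the camera has rendered.
    target->allocateImage(width, height, 1, GL_RGBA, GL_UNSIGNED_BYTE);
    camera->attach(osg::Camera::COLOR_BUFFER, target);

    camera->addChild(scene);
    return camera;
}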

P.S.: The scene being drawn is small, with only some hundreds of vertices
and fewer than 100 textures.


Thank you!

Cheers,
Neil

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=49508#49508





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

