Re: [osg-users] Resizing FBO camera while rendering

2018-10-02 Thread Rômulo Cerqueira


> Is it only the FBO that is forcing you to do this?


Yes.

Do you have any tips on how I can solve this issue?

I found similar problems in the links below, but I could not achieve the same 
success.

https://groups.google.com/forum/#!topic/osg-users/Oxb9QF8Myyo
https://groups.google.com/forum/#!topic/osg-users/ScUN5VSj6W8

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=75025#75025







Re: [osg-users] Resizing FBO camera while rendering

2018-10-02 Thread Robert Osfield
Hi Rômulo,

On Tue, 2 Oct 2018 at 18:45, Rômulo Cerqueira wrote:
> I use the method setupViewer() "to resize" the FBO as well (by instantiating 
> the viewer, camera, texture and callback again). This approach was the best 
> way so far to minimize this problem.

What I was trying to work out is whether calling setupViewer() was
just done because of the FBO resize or whether it was being done for
other reasons as well.

As a general approach I'd recommend sticking with a single
GraphicsWindow where possible and just handling the resize of the FBO,
or at least mapping the resize onto the FBO in other ways.  For
instance, one approach you could take is to create an FBO at the
maximum window size and then just use a viewport to select which part
is active.  Another approach is to force a rebuild of the FBO by
clearing the RenderingCache via camera->setRenderingCache(0).
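
To make that concrete, here is a minimal, untested sketch of the
second approach; resizeRTTCamera() and its parameters are illustrative
names of my own, not part of the OSG API:

    #include <osg/Camera>
    #include <osg/Texture2D>

    void resizeRTTCamera(osg::Camera* camera, osg::Texture2D* texture,
                         int width, int height)
    {
        texture->setTextureSize(width, height);    // new FBO dimensions
        texture->dirtyTextureObject();             // discard the old GL texture object
        camera->setViewport(0, 0, width, height);  // keep the viewport in step
        camera->setRenderingCache(0);              // force the FBO to be rebuilt
    }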

Robert.


Re: [osg-users] Resizing FBO camera while rendering

2018-10-02 Thread Robert Osfield
On Tue, 2 Oct 2018 at 16:20, Rômulo Cerqueira wrote:
> > So you are setting a whole new graphics context and associated data on
> > each resize?

Is it only the FBO that is forcing you to do this?

Robert.


Re: [osg-users] Moving from 3.4.1 to 3.5.7 breaks my "hardware instancing"

2018-10-02 Thread Andrew Cunningham
Hi Robert,
Not sure where the 3.5.7 came from - another dev must have grabbed it from GIT 
a while ago.
I'll update to 3.6.3 and see what happens.
Thanks

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=75020#75020







Re: [osg-users] Resizing FBO camera while rendering

2018-10-02 Thread Rômulo Cerqueira
Hi,


> So you are setting a whole new graphics context and associated data on
> each resize?


Yes, Robert.

... 

Thank you!

Cheers,
Rômulo

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=75019#75019







Re: [osg-users] Resizing FBO camera while rendering

2018-10-02 Thread Robert Osfield
Hi Rômulo,

On Tue, 2 Oct 2018 at 12:40, Rômulo Cerqueira wrote:
> I use the method setupViewer() "to resize" the FBO as well (by instantiating 
> the viewer, camera, texture and callback again). This approach was the best 
> way so far to minimize this problem.

So you are setting a whole new graphics context and associated data on
each resize?

With this you are getting the GL error?

Robert.


Re: [osg-users] Problems porting from osg-3.4.0 to osg-3.6.0

2018-10-02 Thread Robert Osfield
Hi Herman,

With the change to subclassing Drawable from Node rather than Object,
the dirty flag is now inherited from Node. It's a bit awkward as it's
now _boundingSphereComputed, but it's essentially the same thing.

However, your own code shouldn't worry about the dirty flag; that's
the job of the getBound()/getBoundingBox() methods, and this has
always been the case - it's an example of the Template Method Design
Pattern.  The getBound*() method provides the high-level management
of the dirty status that always needs to be handled coherently, and
the compute*() method provides the part that differs.

So in your code just delete the line:

   _boundingBoxComputed=true;

It will work fine for 3.4.x and 3.6.x without this line as it was
never needed in the first place :-)
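
For reference, a sketch of your method with that line removed,
building the box locally rather than writing to _boundingBox directly
(untested; FBox3, m_pDynGeom and v2s come from your own code):

    osg::BoundingBox OsgDynMesh::computeBoundingBox() const
    {
        FBox3 box;
        m_pDynGeom->DoCalcBoundBox(box);

        // convert it to OSG bounds; getBoundingBox() takes care of
        // caching the result and managing the dirty state
        osg::BoundingBox bbox;
        v2s(box.min, bbox._min);
        v2s(box.max, bbox._max);
        return bbox;
    }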

Robert.
On Tue, 2 Oct 2018 at 12:25, Herman Varma  wrote:
>
> Hi,
>
>
> I have encountered another problem: what is the replacement for
> _boundingBoxComputed in osg-3.6.2?
>
> in osg-3.4.0
> It was defined in osg\Drawable
>
>
> BoundingBox _initialBound;
> ref_ptr<ComputeBoundingBoxCallback> _computeBoundCallback;
> mutable BoundingBox _boundingBox;
> mutable bool _boundingBoxComputed;
>
> in osg-3.6.2
> it  is not defined in osg\Drawable
>
> BoundingBox _initialBoundingBox;
> ref_ptr<ComputeBoundingBoxCallback> _computeBoundingBoxCallback;
> mutable BoundingBox _boundingBox;
>
>
>
> The code to be ported is
>
> osg::BoundingBox OsgDynMesh::computeBoundingBox() const
> {
> FBox3 box;
> m_pDynGeom->DoCalcBoundBox(box);
>
> // convert it to OSG bounds
> v2s(box.min, _boundingBox._min);
> v2s(box.max, _boundingBox._max);
>
> _boundingBoxComputed=true;
> return _boundingBox;
> }
>
> Thank you!
>
> Cheers,
> Herman
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=75006#75006


Re: [osg-users] Resizing FBO camera while rendering

2018-10-02 Thread Rômulo Cerqueira
Hi Robert,

I use the method setupViewer() "to resize" the FBO as well (by instantiating 
the viewer, camera, texture and callback again). This approach was the best way 
so far to minimize this problem.
 
... 

Thank you!

Cheers,
Rômulo

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=75016#75016








Re: [osg-users] Resizing FBO camera while rendering

2018-10-02 Thread Robert Osfield
Hi Rômulo,

I didn't spot any code you posted that handles resizing, did I miss it?

Robert.
On Tue, 2 Oct 2018 at 08:10, Rômulo Cerqueira wrote:
>
> Hi,
>
> I render an FBO camera to an image by using a callback (as seen in the 
> code below); however, some OpenGL warnings/errors are raised when I resize at 
> runtime via the setupViewer() method. I debugged the code by using
>
>
> Code:
> export OSG_GL_ERROR_CHECKING=ON
>
>
>
> and got the following error:
>
>
> Code:
> Warning: detected OpenGL error 'invalid operation' after applying attribute 
> Viewport 0x7fb35406e500
>
>
>
> How can I properly resize my FBO camera?
>
>
>
> Code:
> // create a RTT (render to texture) camera
> osg::Camera *ImageViewerCaptureTool::createRTTCamera(osg::Camera* cam, 
> osg::Camera::BufferComponent buffer, osg::Texture2D *tex, 
> osg::GraphicsContext *gfxc)
> {
> osg::ref_ptr<osg::Camera> camera = cam;
> camera->setClearColor(osg::Vec4(0, 0, 0, 1));
> camera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
> camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
> camera->setRenderOrder(osg::Camera::PRE_RENDER, 0);
> camera->setViewport(0, 0, tex->getTextureWidth(), 
> tex->getTextureHeight());
> camera->setGraphicsContext(gfxc);
> camera->setDrawBuffer(GL_FRONT);
> camera->setComputeNearFarMode(osg::CullSettings::DO_NOT_COMPUTE_NEAR_FAR);
> camera->attach(buffer, tex);
> return camera.release();
> }
>
> // create float textures to be rendered in FBO
> osg::Texture2D* ImageViewerCaptureTool::createFloatTexture(uint width, uint 
> height)
> {
> osg::ref_ptr<osg::Texture2D> tex2D = new osg::Texture2D;
> tex2D->setTextureSize( width, height );
> tex2D->setInternalFormat( GL_RGB32F_ARB );
> tex2D->setSourceFormat( GL_RGBA );
> tex2D->setSourceType( GL_FLOAT );
> tex2D->setResizeNonPowerOfTwoHint( false );
> tex2D->setFilter( osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR );
> tex2D->setFilter( osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR );
> return tex2D.release();
> }
>
> void ImageViewerCaptureTool::setupViewer(uint width, uint height, double fovY)
> {
> // set graphics contexts
> osg::ref_ptr<osg::GraphicsContext::Traits> traits = new 
> osg::GraphicsContext::Traits;
> traits->x = 0;
> traits->y = 0;
> traits->width = width;
> traits->height = height;
> traits->pbuffer = true;
> traits->readDISPLAY();
> osg::ref_ptr<osg::GraphicsContext> gfxc = 
> osg::GraphicsContext::createGraphicsContext(traits.get());
>
> // set the main camera
> _viewer = new osgViewer::Viewer;
> osg::ref_ptr<osg::Texture2D> tex = createFloatTexture(width, height);
> osg::ref_ptr<osg::Camera> cam = createRTTCamera(_viewer->getCamera(), 
> osg::Camera::COLOR_BUFFER0, tex, gfxc);
> cam->setProjectionMatrixAsPerspective(osg::RadiansToDegrees(fovY), (width 
> * 1.0 / height), 0.1, 1000);
>
> // render texture to image
> _capture = new WindowCaptureScreen(gfxc, tex);
> cam->setFinalDrawCallback(_capture);
> }
>
> osg::ref_ptr<osg::Image> 
> ImageViewerCaptureTool::grabImage(osg::ref_ptr<osg::Node> node)
> {
> // set the current node
> _viewer->setSceneData(node);
>
> // if the view matrix is invalid (NaN), use the identity
> if (_viewer->getCamera()->getViewMatrix().isNaN())
> _viewer->getCamera()->setViewMatrix(osg::Matrixd::identity());
>
> // grab the current frame
> _viewer->frame();
> return _capture->captureImage();
> }
>
> 
> WindowCaptureScreen METHODS
> 
>
> WindowCaptureScreen::WindowCaptureScreen(osg::ref_ptr<osg::GraphicsContext> 
> gfxc, osg::Texture2D* tex) {
> _mutex = new OpenThreads::Mutex();
> _condition = new OpenThreads::Condition();
> _image = new osg::Image();
>
> // checks the GraphicsContext from the camera viewer
> if (gfxc->getTraits()) {
> _tex = tex;
> int width = gfxc->getTraits()->width;
> int height = gfxc->getTraits()->height;
> _image->allocateImage(width, height, 1, GL_RGBA, GL_FLOAT);
> }
> }
>
> WindowCaptureScreen::~WindowCaptureScreen() {
> delete (_condition);
> delete (_mutex);
> }
>
> osg::ref_ptr<osg::Image> WindowCaptureScreen::captureImage() {
> // wait until the callback finishes capturing the image
> _condition->wait(_mutex);
> return _image;
> }
>
> void WindowCaptureScreen::operator ()(osg::RenderInfo& renderInfo) const {
> osg::ref_ptr<osg::GraphicsContext> gfxc = 
> renderInfo.getState()->getGraphicsContext();
>
> if (gfxc->getTraits()) {
> _mutex->lock();
>
> // read the color buffer as 32-bit floating point
> renderInfo.getState()->applyTextureAttribute(0, _tex);
> _image->readImageFromCurrentTexture(renderInfo.getContextID(), true, 
> GL_FLOAT);
>
> // grant access to the image
> _condition->signal();
> _mutex->unlock();
> }
> }
>
>
>
> ...
>
> Thanks in advance,
>
> Cheers,
> Rômulo
>

Re: [osg-users] osg apps on gpu cluster

2018-10-02 Thread Robert Osfield
Hi Nick & Per,

On Tue, 2 Oct 2018 at 06:12, Per Nordqvist  wrote:
> Nick and I are working to utilize as much of the GPUs as possible, either on 
> a single machine or a cluster.
> So the hardware is not yet decided, but let's assume Ubuntu 16+, multiple 
> modern Nvidia gaming cards, but still a single screen.

osgViewer has been written from the ground up to support multiple GPUs
on a single machine with a single application.

The basic concept is that the View's master Camera controls the overall
view, and a series of slave Cameras assigned to the View handle the
rendering for each graphics card/display.  The osgwindows example is
the simplest example of this in action.  A search for addSlave in
the OSG codebase will reveal lots of other examples of it in action -
it can be used for a wide range of tasks.
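
As a rough sketch of that pattern (assuming Linux/X11 with two screens,
one per card - the screen numbers, window sizes, model file and
projection offsets are placeholders to adapt to your setup):

    #include <osg/Camera>
    #include <osgDB/ReadFile>
    #include <osgViewer/Viewer>

    int main()
    {
        osgViewer::Viewer viewer;

        for (unsigned int screen = 0; screen < 2; ++screen)
        {
            osg::ref_ptr<osg::GraphicsContext::Traits> traits =
                new osg::GraphicsContext::Traits;
            traits->screenNum = screen;   // one context per GPU/display
            traits->x = 0;  traits->y = 0;
            traits->width = 1280;  traits->height = 1024;
            traits->windowDecoration = true;
            traits->doubleBuffer = true;

            osg::ref_ptr<osg::GraphicsContext> gc =
                osg::GraphicsContext::createGraphicsContext(traits.get());

            osg::ref_ptr<osg::Camera> camera = new osg::Camera;
            camera->setGraphicsContext(gc.get());
            camera->setViewport(new osg::Viewport(0, 0, traits->width, traits->height));
            GLenum buffer = traits->doubleBuffer ? GL_BACK : GL_FRONT;
            camera->setDrawBuffer(buffer);
            camera->setReadBuffer(buffer);

            // offset the master Camera's projection so each slave draws
            // its own half of the overall view
            viewer.addSlave(camera.get(),
                            osg::Matrixd::translate(screen == 0 ? 1.0 : -1.0, 0.0, 0.0),
                            osg::Matrixd());
        }

        viewer.setSceneData(osgDB::readNodeFile("cow.osgt"));
        return viewer.run();
    }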

Out of the box the OSG will default to the DrawThreadPerContext
ThreadingModel on modern machines; you might find
CullDrawThreadPerContext more appropriate, and you could even try
CullThreadPerCameraDrawThreadPerContext if you have plenty of cores to
throw at it.

In 3.6.x you also have support for explicitly controlling Affinity so
you can lock various threads to particular cores.

Another variable you could play with: you can set up the OS/desktop so
that one single graphics context spans multiple cards.

Modern graphics cards are beasts, so you might well be able to handle
quite a few displays from just one card.

Robert.


Re: [osg-users] Moving from 3.4.1 to 3.5.7 breaks my "hardware instancing"

2018-10-02 Thread Robert Osfield
Hi Andrew,

On Tue, 2 Oct 2018 at 03:45, Andrew Cunningham  wrote:
> I upgraded from 3.4.1 to 3.5.7 and my code that uses "hardware instancing" 
> (to render a large number of simple geometry) renders nothing now.

3.5.7 is a developer release and is unsupported; I wouldn't recommend
using an old developer release when later stable releases are
available.  Please upgrade to 3.6.3 and then see if the problem
persists - we have spent a lot of time testing and debugging in the
3.6.x stable release cycle, so it should be a good base to work from.

If the problem persists with 3.6.3, then creating a small test program
that illustrates it would be helpful: if there is an issue
with your usage we can spot it, and if there is a bug in the OSG we can
use the program as a unit test for reproducing the regression and
confirming that it's fixed once the bug is found.

Cheers,
Robert.


Re: [osg-users] Latency

2018-10-02 Thread Robert Osfield
Hi David,

It's not possible to know specifically what is wrong without having
your hardware, software and data available to test, and even with these
one might well need insight from NVidia to unravel what is going on.

From your description it sounds like you are running three separate
slave applications and a master that controls them.  If so, have you
tried a single application with a single Viewer and three slave
Cameras, each with its own GraphicsWindow?

Within your application, are you setting up a swap-ready and
swap-buffers mechanism?

If you can, try running your application on another OS, especially one
where you can get closer to the metal and switch off compositing.
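
If a single application is feasible, here is a speculative sketch of
the sync-relevant viewer configuration, assuming osgViewer's
end-of-frame barrier API (setEndBarrierPosition()) - the idea being
that no channel can swap ahead of the others:

    #include <osgViewer/Viewer>

    void configureFrameSync(osgViewer::Viewer& viewer)
    {
        // one draw thread per graphics context, all joined every frame
        viewer.setThreadingModel(osgViewer::Viewer::CullDrawThreadPerContext);

        // draw threads block at a barrier until every context is ready,
        // then all swap buffers together
        viewer.setEndBarrierPosition(osgViewer::Viewer::BeforeSwapBuffers);
    }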

Robert.


On Mon, 1 Oct 2018 at 18:25, David Heitbrink  wrote:
>
> I currently have an odd problem I am stuck on. I have a system with 2 Quadro 
> P5000 cards- driving 3 displays that are warped and blended on a 120 degree 
> dome. Running Windows 10.  The application is ground vehicle simulation -  I 
> have pretty high rates of optical flow. Each display is running its own 
> process, they receive a UDP broadcast with position update information. What 
> I am seeing is 1 of the displays is off by 1 frame 95% of the time. When this 
> happens... my blending fails and I get a large seam in my scene. I added 
> logging as to the eye point position as well as high frequency counter time.
>
> From what I can tell from the logs, the return from the VSync's 
> (viewer->frame() ) are all within 200 microseconds, and the eye point 
> position and data frame number (i.e. the frame number for my incoming data) 
> is the same across all of the channels.
>
> So I strongly suspect this has something to do with the graphics 
> card/driver's own internal frame buffering... and there is not a lot I can do 
> about it.
>
> This leaves me with a couple of real issues.
> 1) Programmatically I cannot tell if a channel is a frame behind or not. 
> Basically I added buffering for the other 2 channels for position 
> information... and my seam goes away 95% of the time.
> 2) Since things are not 100% the same... I randomly get a seam 5% of the 
> time (assuming I am buffering).
>
> At this point I don't know what to do... I have talked to NVidia about this, 
> they mentioned making sure DPI scaling in Windows is set to 100% and/or 
> setting up the app to be DPI aware. I have done this... but I get the same 
> result.
>
>
> Any advice and/or speculative guesses on this would be great.
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=75001#75001


Re: [osg-users] Latency

2018-10-02 Thread Voerman, L.
Hi David,
are your cards running in "quadro mosaic" mode or are they configured as
independent cards? Our win7 machine is driving 6 channels from 2 cards for
a cylindrical theatre, and we have no problem running in full sync (stereo as
well as monoscopic view).

On Mon, Oct 1, 2018 at 7:25 PM David Heitbrink 
wrote:

> I currently have an odd problem I am stuck on. I have a system with 2
> Quadro P5000 cards- driving 3 displays that are warped and blended on a 120
> degree dome. Running Windows 10.  The application is ground vehicle
> simulation -  I have pretty high rates of optical flow. Each display is
> running its own process, they receive a UDP broadcast with position update
> information. What I am seeing is 1 of the displays is off by 1 frame 95% of
> the time. When this happens... my blending fails and I get a large seam in
> my scene. I added logging as to the eye point position as well as high
> frequency counter time.
>
> From what I can tell from the logs, the return from the VSync's
> (viewer->frame() ) are all within 200 microseconds, and the eye point
> position and data frame number (i.e. the frame number for my incoming data)
> is the same across all of the channels.
>
> So I strongly suspect this has something to do with the graphics
> card/driver's own internal frame buffering... and there is not a lot I can
> do about it.
>
> This leaves me with a couple of real issues.
> 1) Programmatically I cannot tell if a channel is a frame behind or not.
> Basically I added buffering for the other 2 channels for position
> information... and my seam goes away 95% of the time.
> 2) Since things are not 100% the same... I randomly get a seam 5% of
> the time (assuming I am buffering).
>
> At this point I don't know what to do... I have talked to NVidia about
> this, they mentioned making sure DPI scaling in Windows is set to 100%
> and/or setting up the app to be DPI aware. I have done this... but I get
> the same result.
>
>
> Any advice and/or speculative guesses on this would be great.
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=75001#75001