[osg-users] FBO FrameBufferObject and stencil support

2008-11-12 Thread Himar Carmona
Hi,

   I have two questions regarding FBOs and the stencil buffer. Perhaps someone
could clear up my thoughts.

   1) Does OSG 2.6.1 support FBOs with a stencil buffer attachment?

   2) Does it support EXT_PACKED_DEPTH_STENCIL?


Explanation: I use an FBO to capture the rendering. Recently I wanted to use the
stencil buffer to render concave polygons, but it doesn't work. It seems that I
must also attach the stencil buffer, but I don't know exactly how to achieve that,
and I'm getting an "FBO setup failed (0x8cd6)" warning from RenderStage::runCameraSetUp.
The graphics card supports PACKED_DEPTH_STENCIL, and it seems I must also attach
a depth buffer, but I don't know how.

Now I'm lost and a bit confused by this whole mess of buffers. Any help
will be really appreciated.

Thanks in advance and best regards
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] Re: FBO FrameBufferObject and stencil support

2008-11-14 Thread Himar Carmona
Hi,

   using the trunk version works fine. I suspect my graphics card doesn't
support separate ("unpacked") depth and stencil buffers (NVIDIA GeForce 8600 GT).

   Only one comment about its implementation inside OSG:

 In RenderStage::runCameraSetUp, when the FBO is being initialized, if the
camera has two attachments (in my example COLOR_BUFFER and
PACKED_DEPTH_STENCIL_BUFFER), the second one isn't detected as a DEPTH_BUFFER,
and the depthAttached flag remains false. After that, OSG initializes
another renderbuffer for the depth buffer and attaches it. So there are three
attachments: COLOR_BUFFER, PACKED_DEPTH_STENCIL_BUFFER and DEPTH_BUFFER
(added automatically by runCameraSetUp). I think this still works because
PACKED_DEPTH_STENCIL_BUFFER is attached last and overrides the DEPTH_BUFFER one.
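
As a rough illustration for readers of this thread, the two-attachment setup
discussed above might look like the sketch below (the colorTexture name and the
GL_DEPTH24_STENCIL8_EXT internal format are assumptions of this example, and
PACKED_DEPTH_STENCIL_BUFFER needs 2.7.5/trunk):

    // RTT camera rendering into an FBO with a colour texture plus a packed
    // depth/stencil renderbuffer (OSG creates the renderbuffer itself).
    camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
    camera->attach(osg::Camera::COLOR_BUFFER, colorTexture.get());
    camera->attach(osg::Camera::PACKED_DEPTH_STENCIL_BUFFER,
                   GL_DEPTH24_STENCIL8_EXT);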

Perhaps these details could be helpful for someone having problems with, or
debugging, OSG's FBO support.

Thanks for the quick responses!!!
Himar.



2008/11/12 Robert Osfield <[EMAIL PROTECTED]>

> Hi Himar,
>
> On Wed, Nov 12, 2008 at 10:56 AM, Himar Carmona <[EMAIL PROTECTED]>
> wrote:
> >1) Does OSG 2.6.1 supports FBO with stencil buffer attachment?
>
> It should do.  I haven't personally tested it, but the code paths are all
> there.
>
> >2) Does it supports EXT_PACKED_DEPTH_STENCIL?
>
> 2.6.1 doesn't support this extension, but I merged a submission for
> this last week so it's in 2.7.5 and svn/trunk.
>
> Robert.
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] Offscreen rendering without visible window

2007-11-26 Thread Himar Carmona
Hi,

I have some questions regarding pbuffers, FBOs, and offscreen rendering. I'm
not showing the final render in a window, but retrieving it with glReadPixels
for CPU processing.

   1) Is there any way to create a GraphicsContext without any visible
window, other than setting pbuffer = true in the traits (on Windows XP)?

I have read that the Frame Buffer Object (FBO) replaces much of the
functionality of pbuffers (Simon Green's slides about FBOs) and that it has
better performance.

   2) Can FBOs be used without a pbuffer to establish the
GraphicsContext? Since they still need a context, how?

Since FBOs are set up on Camera objects, is there any method to read
back the rendered pixels (other than doing it myself with glReadPixels)? Is
osg::Image perhaps the correct path?

   3) If the desired size of the rendered result changes, I have to recreate
the pbuffer manually, creating a new graphics context with the new size. If I
use an FBO, will I have to resize it manually, or is that done by the camera?

I have already achieved this with pbuffers, but I will also do the
rendering with an FBO and compare both methods (until there are replies, I'm
using a pbuffer to create the GC).
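
For reference, a minimal sketch of the pbuffer route mentioned above (the sizes
and trait values here are assumptions of this example, not taken from the post):

    #include <osg/GraphicsContext>
    #include <osgViewer/Viewer>

    osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
    traits->x = 0;        traits->y = 0;
    traits->width = 1024; traits->height = 768;
    traits->pbuffer = true;          // no visible window is created
    traits->doubleBuffer = false;

    osg::ref_ptr<osg::GraphicsContext> gc =
        osg::GraphicsContext::createGraphicsContext(traits.get());

    osgViewer::Viewer viewer;
    viewer.getCamera()->setGraphicsContext(gc.get());
    viewer.getCamera()->setViewport(new osg::Viewport(0, 0, traits->width, traits->height));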


Thanks a lot.

PS: StatsHandler fails to create the HUD camera if the main camera's
graphics context is a PixelBufferWin32 instead of a GraphicsWindowWin32; it
doesn't find a valid GraphicsContext.
Is this perhaps a bug?

PPS: Any idea when the programming guide book will be available?
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] DynamicLibrary::failed loading

2007-11-28 Thread Himar Carmona
Hi,

  just guessing, but perhaps adding OSG_ROOT (and/or the osgPlugins-2.2.0
directory) to the PATH environment variable may help?

Good luck!

2007/11/29, Will Sistar <[EMAIL PROTECTED]>:
>
> I am a new user and am trying to use osgviewer (specifically osgviewer
> cow.osg).  I have not built OSG; I used the Win32 Binaries Installer.  I
> set the paths as I found in Getting Started and I modified the
> osgShell.bat file to turn on debugging. When the osgshell starts it prints
> out this:
>
> OSG_ROOT = C:\Program Files\OpenSceneGraph
> OSG_FILE_PATH = C:\OpenSceneGraph-Data
> OSG_NOTIFY_LEVEL = DEBUG
>
> after typing in "osgviewer cow.osg" part of what I get follows:
>
> itr='C:\Program Files\OpenSceneGraph\bin'
> FindFileInPath() : trying C:\Program Files\OpenSceneGraph\bin\osgPlugins-
> 2.2.0\osgdb_osg.dll ...
> FindFileInPath() : USING C:\Program Files\OpenSceneGraph\bin\osgPlugins-
> 2.2.0\osgdb_osg.dll
> DynamicLibrary::failed loading " osgPlugins-2.2.0/osgdb_osg.dll"
> Warning: Could not find plugin to read objects from file "cow.osg".
> osgviewer: No data loaded
>
> The dll is in the osgPlugins-2.2.0 directory as it should be, as I read in
> a previous post similar to this.
>
> I noticed that I could load the library manually so I copied the
> osgdb_osg.dll to the bin directory and typed
> "osgviewer -l osgdb_osg.dll cow.osg" and it worked. (sort of, it didn't
> load the osgdg_rgb.dll although it tried like above)
>
> itr='C:\Program Files\OpenSceneGraph\bin'
> FindFileInPath() : trying C:\Program
> Files\OpenSceneGraph\bin\osgdb_osg.dll ...
> FindFileInPath() : USING C:\Program Files\OpenSceneGraph\bin\osgdb_osg.dll
> CullSettings::readEnvironmentalVariables()
>
> I tried manually loading the dll like this "osgviewer -l
> osgPlugins-2.2.0\osgdb_osg.dll cow.osg" and that didn't work.
>
> Any suggestions would be helpful.
> Will
>
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] FBO resizing problem

2007-11-30 Thread Himar Carmona
Hi,

  I'm using an FBO to render offscreen and I have found a little problem when I
need to resize the target object.

  I have the following scenario:

  1. I set up a camera to use FRAME_BUFFER_OBJECT (as in osgprerender).
  2. I attach an Image to it with camera.attach (leaving OSG to manage the
RenderBuffer object).
  3. I render once (viewer.frame()).
  4. I take a snapshot of the image using osgDB::writeImageFile.
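
For readers following along, a minimal sketch of steps 1-4 (the 1024x768 size
and the output file name are assumptions; "viewer" is an osgViewer::Viewer with
the scene already set):

    #include <osg/Image>
    #include <osgDB/WriteFile>

    osg::Camera* camera = viewer.getCamera();
    camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);  // step 1
    camera->setViewport(0, 0, 1024, 768);

    osg::ref_ptr<osg::Image> image = new osg::Image;
    camera->attach(osg::Camera::COLOR_BUFFER, image.get());                   // step 2: OSG manages the RenderBuffer

    viewer.frame();                                                           // step 3
    osgDB::writeImageFile(*image, "snapshot.png");                            // step 4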

Up to here everything runs OK: I have a nice file with the rendered frame as
expected. But since I want to be able to resize the target (the Image) between
rendered frames, I next do the following:

 5. Simulate a resize (growing), changing the image size (and hence the
internal buffer) or creating a new Image with a different size.
 6. Set the camera's viewport to the desired size.
 7. Attach the new image to the camera (or the original, resized one).
 8. Render once again (viewer.frame()).
 9. Take a snapshot of the image.

The problem is that the internal FBO that RenderStage controls doesn't
notice the attachment change, and therefore doesn't call fbo.setAttachment,
so what I get in the second snapshot (step 9) is only a piece of the rendered
image.

 If I have understood the framebuffer_object spec correctly, the FBO has no
internal size fields; the final size is governed by the attachments (here, the
image).

 I have the 'feeling' that RenderStage only calls fbo.setAttachment when the FBO
is initialized. If that is the case, how can I force RenderStage to
reattach the FBO without creating it again? I will try to use fbo.dirtyAll,
but first I must figure out how to get at the FBO.

 Of course, I could also have missed something. I have test code that shows the
problem, so if it needs to be tested, I can upload it.

 Another side effect of RenderStage (independent of this problem) is that
it also attaches another renderbuffer for the depth component internally,
whether asked to or not. Is this avoidable?

Thanks.
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] FBO resizing problem

2007-12-01 Thread Himar Carmona
Hi Robert,

 I only resize (i.e. recreate) the FBO when a resize operation is performed
(once it is detected as finished); 99% of the time this will not happen.
Actually, my app's performance is dropping because of the readback operation
every frame. In my case this is acceptable, as I don't need a constant frame rate
(I render on demand; my app is a CAD-like one). OSG is so fast doing
its work that the response time is good enough.

  This is my workaround for the problem I mentioned in my post:

   osgViewer::Renderer* renderer =
       (osgViewer::Renderer*)camera->getRenderer();

   renderer->getSceneView(0)->getRenderStage()->setCameraRequiresSetUp(true);
   renderer->getSceneView(0)->getRenderStage()->setFrameBufferObject(NULL);

  These lines force the RenderStage to recreate the FBO, but this only
happens when the resize operation ends.

  Your idea is better, of course. But my app must be able to handle a
multi-monitor system. How can I determine, in this situation, the
largest region I need? Is this region independent of the multi-monitor
configuration?


Thank you for your advices, Robert. I really appreciate them.
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] OsgViewer in form, aspect ratio issue and error on close

2007-12-01 Thread Himar Carmona
 Hi,

 I'm also using .NET (with C++/CLI) and I'm also new to OSG (a few months,
like you). Perhaps this can help you:

1) The projection matrix is recomputed in
GraphicsContext::resizedImplementation (which GraphicsWindow derives from).
You can override it (setResizedCallback), or look at how it is done. The code
uses the traits structure to calculate the aspect ratio, so be sure you are
initializing the traits correctly.

Another way is to set up the viewer with setUpViewerAsEmbeddedInWindow. There
are messages in the list archives explaining it; I remember at least one. I
recommend you look at the osgviewerXXX examples (osgviewerGLUT,
osgviewerQT, ...); osgviewerMFC is perhaps not the best one.
setUpViewerAsEmbeddedInWindow looks like the best method for separating the
viewer from the GUI framework.
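
A small sketch of that route (the size and the handler names are assumptions of
this example; the GUI toolkit drives the loop):

    #include <osgViewer/Viewer>

    osgViewer::Viewer viewer;
    osgViewer::GraphicsWindowEmbedded* window =
        viewer.setUpViewerAsEmbeddedInWindow(0, 0, 800, 600);

    // in the GUI resize handler:
    window->resized(0, 0, newWidth, newHeight);
    window->getEventQueue()->windowResize(0, 0, newWidth, newHeight);

    // in the GUI paint/idle handler:
    viewer.frame();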

2) It seems that the window handle is being released before you close the
GraphicsContext. In .NET, Form::OnClosed is called AFTER the window has
been destroyed, so it looks like a timing error. Try OnClosing instead, which is
called before the window handle is released; beware which one you use. If
this is not the problem, try setting the viewer's threadingModel to
SingleThreaded and see if it is a threading issue.

Hope this helps.
Himar.


2007/11/30, Miao <[EMAIL PROTECTED]>:
>
> hi Christophe,
>
> > 1) it seems that the aspect ratio used for the rendering is based on the
> > ratio of the form size.
> About your first problem,when i reuse code of  osgviewerMFC ,
> I met the same problem.
> I found  in  osgViewer::Viewer's base class osg::view 's constructor ,
> it'll new a master camera and set ProjectionMatrix with screen ratio.
> and when you add camera with  addSlave() , it will call the updateSlave(),
> and set ProjectionMatrix using master camera' projection.
>
> so my solution is set projection matrix to own camera,
> and using setCamera() to set master camera  instead of using addSlave().
> i am not sure it is a good solution or not, but i works.
>
> by the way, i use  osg 2.2 .
> Hope it also works with osg2.0.
>
> some code from osg::View
> View::View(): Object(true)
> {
>..
>   _camera = new osg::Camera;
>   _camera->setView(this);
>
>   double height = osg::DisplaySettings::instance()->getScreenHeight();
>   double width = osg::DisplaySettings::instance()->getScreenWidth();
>   double distance = osg::DisplaySettings::instance()->getScreenDistance();
>   double vfov = osg::RadiansToDegrees(atan2(height/2.0f,distance)*2.0);
>   _camera->setProjectionMatrixAsPerspective( vfov, width/height,
> 1.0f,1.0f);
>   _camera->setClearColor(osg::Vec4f(0.2f, 0.2f, 0.4f, 1.0f));
> 
> }
>
> void View::updateSlave(unsigned int i)
> {
>   if (i >= _slaves.size() || !_camera) return;
>
>   Slave& slave = _slaves[i];
>
>   if (slave._camera->getReferenceFrame()==osg::Transform::RELATIVE_RF)
>   {
>slave._camera->setProjectionMatrix(_camera->getProjectionMatrix() *
> slave._projectionOffset);
>slave._camera->setViewMatrix(_camera->getViewMatrix() *
> slave._viewOffset);
>   }
>
>   slave._camera->inheritCullSettings(*_camera);
>   if (slave._camera->getInheritanceMask() &
> osg::CullSettings::CLEAR_COLOR)
> slave._camera->setClearColor(_camera->getClearColor());
> }
>
> Miao
>
> - Original Message -
> From: "LOREK Christophe" <[EMAIL PROTECTED]>
> To: 
> Sent: Friday, November 30, 2007 12:10 AM
> Subject: [osg-users] OsgViewer in form, aspect ratio issue and error on
> close
>
>
> > hi everyone,
> >
> > I'm new to this mailing list, although I have been using OpenSceneGraph
> > for a couple of monthes.
> >
> > I'm using the OsgViewer from OSG 2.0 and I would like to have it use a
> > .NET form for display (using managed C++, not C#). I followed  the
> > advice given by Glenn Waldron in his reply to Romain Blanchais on the
> > 20th of November, where he gave the following sample code :
> >
> > HWND hwnd = (HWND)control->Handle.ToPointer();
> >
> > osg::ref_ptr<osg::GraphicsContext::Traits> traits =
> >new osg::GraphicsContext::Traits();
> >
> > traits->inheritedWindowData = new
> > osgViewer::GraphicsWindowWin32::WindowData( hwnd );
> > traits->setInheritedWindowPixelFormat = true;
> > traits->doubleBuffer = true;
> > traits->windowDecoration = false;
> > traits->sharedContext = 0;
> > traits->supportsResize = true;
> >
> > RECT rect;
> > GetWindowRect( hwnd, &rect );
> > traits->x = 0;
> > traits->y = 0;
> > traits->width = rect.right - rect.left;
> > traits->height = rect.bottom - rect.top;
> >
> > osg::ref_ptr<osg::GraphicsContext> gc =
> > osg::GraphicsContext::createGraphicsContext(
> >traits.get() );
> >
> > viewer = new osgViewer::Viewer();
> >
> > osg::ref_ptr<osg::Camera> cam = new osg::Camera();
> > cam->setGraphicsContext( gc.get() );
> > cam->setViewport( new osg::Viewport( traits->x, traits->y,
> > traits->width, traits->height ) );
> > viewer->addSlave( cam.get() );
> >
> > I have succeeded in obtaining a rendering of the OsgViewer in my form,
> > but I have the following issues :
> >
> > 1) it seems that the aspect ratio used for the rendering is based on the
> > ratio of the form size.

Re: [osg-users] Win32 crash

2007-12-02 Thread Himar Carmona
Hello Panagiotis,

    sometime ago I had a similar issue. nvoglnt.dll is the NVIDIA driver; in
my case it was a driver problem. NVIDIA has a known uninstall issue: it
doesn't clean up correctly when you upgrade the driver version (even if you
uninstall the previous version and then install the new one). I found a page that
explains how to cleanly uninstall the NVIDIA driver (in fact, by manually deleting
some of the DLLs it installs). I did that, and the exceptions were gone.

   Perhaps doing the same may help you: uninstall, verify that there isn't
anything left that looks like an NVIDIA driver (search for nvogl.dll and similar
ones), and reinstall the drivers.


   Hope this helps.
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] FBO resizing problem

2007-12-02 Thread Himar Carmona
 Hi Robert,

 thanks again. For now I will stick with my workaround. I'm also running against
a deadline and really have no time for anything other than my project (I think
there must be some sort of pandemic disease among us programmers; I will not
believe the day a programmer comes along and says "I have a lot of time to
spend on this project...").

  Well, jokes aside, is there any place where I could document the
proposed changes to the FBO functionality in RenderStage, so they remain there
until I (or anyone else) have time to try to incorporate them into the OSG
distribution? Something like osg-submissions, maybe?

 Best regards,
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render To Texture problem in a Reflection algorithm

2007-12-02 Thread Himar Carmona
Hi,

   Do both graphics cards support frame buffer objects? If they don't, the
problem could be in the render fallback (i.e. PIXEL_BUFFER_RTT). Try
the different RTT methods if that seems to be the problem. Also, I think
you must set the viewport for the FBO to work (I suppose it is also set
somewhere else). Finally, you can set OSG's notify level to WARN or DEBUG and
see what is really happening.
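
As a concrete sketch of those two suggestions (rttcamera refers to the camera
in the quoted code below; DEBUG_INFO is just one possible level):

    #include <osg/Notify>

    rttcamera->setViewport(0, 0, 512, 512);   // match the 512x512 texture size
    osg::setNotifyLevel(osg::DEBUG_INFO);     // or set OSG_NOTIFY_LEVEL=DEBUG in the environment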

Good luck.
Himar.


2007/12/3, zhangguilian <[EMAIL PROTECTED]>:
>
>  Hi,
>  In an algorithm of reflection I used Render To Texture, part of the code
> is:
>
>  Texture2D* reflectionTex=new Texture2D;//rtt texture
>   reflectionTex->setTextureSize(512, 512);
>   reflectionTex->setInternalFormat(GL_RGBA);
>
>   reflectionTex->setFilter(osg::Texture2D::MIN_FILTER,osg::Texture2D::LINEAR);
>
>   reflectionTex->setFilter(osg::Texture2D::MAG_FILTER,osg::Texture2D::LINEAR);
>
>   
> reflectionTex->setWrap(osg::Texture2D::WRAP_S,osg::Texture2D::CLAMP_TO_EDGE);
>
>   
> reflectionTex->setWrap(osg::Texture2D::WRAP_T,osg::Texture2D::CLAMP_TO_EDGE);
>
>   CameraNode * rttcamera=new CameraNode;//rttcam
>   rttcamera->setCullingMode(CullSettings::NO_CULLING);
>   rttcamera->setCullingActive(false);
>   rttcamera->setClearMask(GL_DEPTH_BUFFER_BIT|GL_COLOR_BUFFER_BIT);
>  // rttcamera->setViewport(0,0,512,512);
>   rttcamera->setRenderOrder(CameraNode::PRE_RENDER);
>
>   
> rttcamera->setRenderTargetImplementation(CameraNode::FRAME_BUFFER_OBJECT,CameraNode::PIXEL_BUFFER_RTT);
>   rttcamera->attach(osg::CameraNode::COLOR_BUFFER,reflectionTex);
>
>   osg::ClipNode* clipNode = new osg::ClipNode;
>   osg::ClipPlane* clipplane = new osg::ClipPlane;
>   Plane pl(normal,point);
>   clipplane->setClipPlane(pl);
>   clipplane->setClipPlaneNum(0);
>   clipNode->addClipPlane(clipplane);
>   clipNode->addChild(reflectTransform);
>   rttcamera->addChild(clipNode);
>
>  I don't know why it running well on a GeForce 8800 GTS/PCI/SSE2
> from NVIDIA Corporation but doesn't have any reflection effect on a GeForce4
> MX 440/AGP/SSE2 from NVIDIA Corporation,
> and while I add a line:rttcamera->setViewport(0,0,512,512),the later
> machine(GeForce4 MX 440/AGP/SSE2 from NVIDIA Corporation) can have reflect
> effect,but a strange phenomena appears:
> an arbitrary window(relative or irrelative with the program) over it will
> destroy the reflction map and the area destroyed will move with the
> window over it
> (once the window over it moves ,the destroyed area move too) , I don't
> know what have happend and I eagerly to know how to solve the problem.
>
> Thanks very much!
> zhangguilian
> 2007.12.3
>
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] FBO resizing problem

2007-12-03 Thread Himar Carmona
Hi,

  I'm trying the second approach of always having a larger FBO. Is there an
easy way to set up an FBO and copy only a portion of it back to memory? Using an
FBO, an Image to control the transfer to main memory, and the requirement that
the Image copy the pixels to a specified location (a given pointer), I
haven't achieved it. If I set the image size to 2048x2048 (for example) so
that the FBO takes this size, and then (before a viewer.frame() call) set the
viewport to the required size, Image::readPixels allocates its own buffer,
because the sizes differ. The result is a new buffer of the viewport size,
controlled by the Image object, with my own buffer untouched.

   Any ideas?

Thanks,
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] FBO resizing problem

2007-12-03 Thread Himar Carmona
 Hi,

  this little hack makes the approach work:

  Change to Image (header): added a bool allocateIfRequired parameter to
readPixels. It controls whether Image is free to allocate the required memory
buffer or whether the app wants to take care of it (true means Image has
control). The app must guarantee that _data is initialized and has the proper
format (and size), or a memory problem could happen.

void readPixels(int x, int y, int width, int height,
                GLenum pixelFormat, GLenum type, bool allocateIfRequired = true);


  Change to Image.cpp: make the call to allocateImage dependent on the flag.

void Image::readPixels(int x, int y, int width, int height,
                       GLenum format, GLenum type, bool allocateIfRequired)
{
    if (allocateIfRequired)
        allocateImage(width, height, 1, format, type);

    glPixelStorei(GL_PACK_ALIGNMENT, _packing);
    glReadPixels(x, y, width, height, format, type, _data);
}

Change to RenderStage.cpp (drawInner): added false to the call to
readPixels, meaning the app has control of (and responsibility for) the memory.

itr->second._image->readPixels(static_cast<int>(_viewport->x()),
                               static_cast<int>(_viewport->y()),
                               static_cast<int>(_viewport->width()),
                               static_cast<int>(_viewport->height()),
                               pixelFormat, dataType,
                               false);

   Now, by simply setting the image to the desired FBO size, the viewport of
the camera controls which region is transferred.
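
A hypothetical usage of the patched readPixels above (image, camera, viewer and
the sizes are assumptions of this example):

    image->allocateImage(2048, 2048, 1, GL_RGBA, GL_UNSIGNED_BYTE);  // FBO-sized buffer, owned by the app
    camera->attach(osg::Camera::COLOR_BUFFER, image.get());
    camera->setViewport(0, 0, desiredWidth, desiredHeight);          // only this region is transferred
    viewer.frame();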

   I know this is not the right place to comment on that, but this way the
thread is now complete for everyone who reads it.

  Summary for future readers: the main topic was "how to use an FBO with
resizing requirements and read the rendered results back to main memory (for new
users like me)".

  Problem: RenderStage is not sensitive to changes in the FBO or its
attachments (w.r.t. size, at least) once it is created in runCameraSetUp.

  Alternative 1: Recreate the FBO on resize so it is always of the desired size
(the first workaround in this thread).
  Alternative 2: Set up an FBO larger than the maximum size needed and let the
camera viewport control which region is used (needs the hack in this message).
  Alternative 3: (Not yet implemented.) Change RenderStage and implement the
desired behaviour (if an image attached to an FBO changes size, let the FBO
refresh its attachments).

  None is perfect; 1 and 2 have pros and cons. 3 seems best, but it requires
more knowledge of OSG than I have yet.

Thanks for the wise replies.
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Win32 crash

2007-12-04 Thread Himar Carmona
Hi,


ATI and nvoglnt.dll? Isn't the latter associated with the NVIDIA ICD?

   Well, you can try gDEBugger and debug the OpenGL calls. With VS you
can also do step-by-step debugging. And OSG has its notify level and OpenGL
error checking; try setting the latter to ONCE_PER_ATTRIBUTE instead of the
default ONCE_PER_FRAME. VS has memory inspectors; if you use buffers, make
sure you don't have any memory overrun (buffer overflow). Try BoundsChecker
or even GlowCode to check that memory isn't being overwritten. VS also compiles
with memory checking: in debug mode it fills the app's memory with 0xCDCDCDCD
and 0xCC instead of clearing it with zeros.
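
For example (a sketch; assumes a realized, single-context osgViewer::Viewer
named viewer):

    osg::State* state = viewer.getCamera()->getGraphicsContext()->getState();
    state->setCheckForGLErrors(osg::State::ONCE_PER_ATTRIBUTE);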


Good luck.
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Camera manipulation

2007-12-04 Thread Himar Carmona
Hello,

   The math involved depends on whether the rectangle is perpendicular to
the camera view direction (no rotation needed), whether the projection is
orthographic or perspective, and whether it is symmetric or not. In either
case you must take into account which edges of the rectangle you want to fit
to the view edges (in OSG I suppose this is what ProjectionResizePolicy is
meant for) if the rectangle aspect ratio (width by height) differs from the
camera's. The goal of the algebra must be a new view matrix and a new
projection matrix (but I suppose you knew that already). With these two
matrices you can position the camera where you want.

   Assuming the simpler case (the rectangle is perpendicular to the view
direction and the projection is symmetric orthographic), the center of the
rectangle gives you the new center of the view matrix. If the camera has a
perspective projection, make the new frustum's near plane be your new rectangle.

  Once you have the new view matrix and the new projection matrix, you can set
them on the camera with setViewMatrix and setProjectionMatrix.

   Of course, this is only one method; you can also orient your math towards
computing the vectors needed for setViewMatrixAsLookAt. The eye vector will
lie along the rectangle's plane normal, the center can be computed with the
projection matrix of the camera, and the up vector is just the cross product
(or the camera's up vector, if it doesn't roll).
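
A sketch of the simple case described above (the rectangle's center, normal, up
vector and half-sizes are made-up values for illustration; camera is the
osg::Camera to position):

    osg::Vec3d center(0.0, 0.0, 0.0);       // center of the rectangle
    osg::Vec3d normal(0.0, -1.0, 0.0);      // rectangle plane normal, pointing towards the eye
    osg::Vec3d up(0.0, 0.0, 1.0);
    double halfWidth = 5.0, halfHeight = 3.0, distance = 10.0;

    camera->setProjectionMatrixAsOrtho(-halfWidth, halfWidth,
                                       -halfHeight, halfHeight,
                                       0.1, 2.0 * distance);
    camera->setViewMatrixAsLookAt(center + normal * distance,   // eye
                                  center,                       // look-at point
                                  up);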

   A good reference for the 3D camera model is the OpenGL Programming
Guide (the official OpenGL guide). OSG's Camera class is just a thin wrapper
around the OpenGL machinery w.r.t. position and orientation.

  I think this will not help you much if your problem is with the math
itself, but maybe your problem is exactly that. If that is your case
(it was mine a few months ago), take a good 3D math book (Alan Watt's "3D
Computer Graphics", for example), a hot coffee or tea and a lot of patience,
and work until you understand 3D algebra perfectly, or you will have problems in
the future (if you keep working with 3D, of course). Please don't
misunderstand my words; I think sooner or later we all have to read "the
avoided math chapter"...

   If you were asking whether OSG has some method that can help you, I haven't
found one yet, but the right place to start looking for something similar
is the camera manipulator classes. Of course, they contain more algebra than
plain OSG method calls...

Best regards.
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Adding an alpha channel to a texture?

2007-12-04 Thread Himar Carmona
Hi,

 I think you are asking about the *texture blending* functions. In OSG, look at
BlendFunc, BlendEquation and BlendColor; surely there is a combination of
blending factors that achieves the effect you want. See osgblendequation. You
can also write a shader, of course. Another trick could be applying a
transparent texture by default (at a scene node that is a parent of your
geometry) when you don't have one, and applying the real texture to your
geometry with Override on.
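
For instance, a minimal BlendFunc sketch (geometryNode is whatever node carries
the textured geometry; the classic source-alpha factors are just one choice):

    #include <osg/BlendFunc>

    osg::StateSet* ss = geometryNode->getOrCreateStateSet();
    ss->setAttributeAndModes(new osg::BlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA),
                             osg::StateAttribute::ON);
    ss->setRenderingHint(osg::StateSet::TRANSPARENT_BIN);   // draw after the opaque geometry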

Hope this helps.
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] osgText quality

2007-12-06 Thread Himar Carmona
Hi,

  Does anybody have any hints about which parameter values give the best quality
with osgText? In my app I must show some text (normally 10-20 pixels high),
and the quality I have been able to achieve is not good. I also noticed
that osgText clips some characters (most visibly the 'e' and 'c'
characters, even at a large font resolution).

  I was also looking for a way to inspect the texture that osgText
creates, but I haven't found one.


Thanks a lot,
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Taking an image of an intermediate node.

2007-12-06 Thread Himar Carmona
Hi,

  I'm not completely sure, but I think that

shot->allocateImage(width, height, 24, GL_RGB, GL_UNSIGNED_BYTE)

must be

shot->allocateImage(width, height, 1, GL_RGB, GL_UNSIGNED_BYTE), i.e.
changing the 24 to 1. The third parameter is the 3rd dimension (depth) of the
image.

Also, you may not need to call allocateImage at all; I think it is done by OSG,
which will set the correct format and size.

The initialization of the camera seems OK; perhaps your error is in setting the
view or projection matrix? I would first test by pointing the camera at a
known place instead of calculating the node position.


Hope this helps.
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] WPF and OpenGL popup window problem

2009-03-06 Thread Himar Carmona
Hi,

    I also use WPF, so you are not the only one. The problem in WPF is surely
the "airspace" issue: in WPF you can't have overlapping rendering
technologies in the same window. And if the window is a popup window with
transparency enabled, you will also experience the same problem (though this
time it depends on the graphics card). This is an issue on Windows XP, not on
Windows Vista, and it is due to the windowing management.

Windows XP has also had a problem with transparent (overlay) windows for a
long time, since it was not designed to render transparent windows (where
the color of a pixel depends on more than one window). I think tooltips are
windows (like menus, context menus and so on), and the tooltip has transparency
enabled.

W.r.t. whether you are a fool to use WPF or Qt or wxWidgets or whatever
windowing technology: they are tools, and you as the programmer are the
artisan. I would choose the best one for the project at hand. All have problems
and limitations and different design philosophies, and... the list is long
enough to be discussed ad infinitum. Really, evangelism is the problem, and an
open mind the solution. (My apologies to everyone; I'm not good at writing
English, and I don't want to offend anyone, this is just a personal point of
view.) Just kidding a bit...

  Thank you, OSG community; I think all of you are open-minded.

Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] WPF and OpenGL popup window problem

2009-03-08 Thread Himar Carmona
Hi Jean-Sébastien,

  Sorry, my intention was not to correct you either. Don't misunderstand my
words; I understand "fool" is not an insult. My words were a bit rude, but
that was not my intention. Perhaps I also forgot my smiley :-). I was
joking too. We speak the same language. It's a shame that companies' interests
and profits don't let us integrate today's technologies better (and I am a
bit tired of that, aren't you?).

You are welcome, both for your OSG knowledge and for your jokes. :-)
Thank you.




2009/3/7 Jean-Sébastien Guay 

> Hi Himar,
>
>W.r.t. whether you are fool to use WPF or Qt or wxWidgets or whatever
>> windowing technology you use: They are tools, you as programmer are the
>> artisan.
>>
>
> Sorry if it looked as if I were trying to imply anyone was a fool, that's
> not the case. I have used this same argument a lot in the past (and in the
> recent past on this very mailing list) hence the smiley at the end of my
> sentence... :-)  It was meant as a joke.
>
> Thanks for the concrete info on this issue, it's useful to know about these
> issues.
>
> J-S
> --
> __
> Jean-Sebastien Guayjean-sebastien.g...@cm-labs.com
>   http://www.cm-labs.com/
>http://whitestar02.webhop.org/
>
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] What is everyone doing for GUIs?

2009-05-19 Thread Himar Carmona
We also use WPF (Windows only), with C++/CLI for the integration with .NET.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Stereo Problem

2009-05-26 Thread Himar Carmona
Hi,

   Interesting thread. Recently my team started experimenting with stereo
and the new NVIDIA glasses. We use OSG, of course, and we were wondering how we
could use the glasses with OpenGL.

   Here are our impressions and results (for those of you who are
interested):

   We have a Quadro FX 370 (I think this is the lowest-end card of the Quadro
series). It has stereo support (activation required via the control panel),
i.e. OSG can use QUAD_BUFFER, but it does not have the stereo plug, so we
use this VGA stereo adapter to generate the sync signal
(http://www.int03.co.uk/crema/hardware/stereo/); it uses pins 10 and 14 of the
VGA DB15 connector, that is, the vertical sync. Recently I discovered that
Lightspeed (DepthQ's vendor) sells a VGA stereo adapter. I suppose there must be
other vendors that provide these, aren't there?

   Of course (thanks, NVIDIA), the 3D Vision driver does not like Quadro
drivers, so it does not install. Therefore we use another PC with a GeForce
card, where we install the driver and the IR emitter. On this PC we
configure the driver to use an HDTV DLP or a generic CRT, otherwise the IR
emitter refuses to use the mini-jack sync-in signal. Also, for the IR
emitter to work, a fullscreen DirectX application must be running (we
use the test from the control panel because we run on Windows), otherwise it
stays in standby mode. After some experimentation we achieved it and we
were able to "see" in stereo with OSG on the other PC with the Quadro.

   Our impressions:

- Though the glasses are "VESA compatible", the IR emitter doesn't work unless
it is connected to the PC via USB AND a DirectX application is running
fullscreen.
- The IR emitter does not use the sync-in signal if it is connected to a
"3D ready" display (either the Samsung 120Hz LCD or the DepthQ projector); it
seems to prioritize the USB signal in these cases.
- The glasses didn't work for us at a 75Hz display vertical sync. We used
60Hz and all went OK, but with a 75Hz refresh rate we experienced sync
problems.
- RivaTuner and the 3D Vision drivers aren't compatible with each other; you
can't install them both (again, on the Windows platform).

Only one question remains unanswered: will the NVIDIA shutter glasses work with
a standalone DLP TV with 3D support? Where would the IR emitter be connected,
via the sync-in signal to the TV or via a USB cable to a PC? We will never know;
the glasses are a bit of a disappointment.

So, although we were able to make it work, it seems that there are only two
options: either NVIDIA adds support for quad buffering in their drivers (as
stated at http://www.nvidia.com/object/quadro_stereo_technology.html), or
we had better stick with the "professional" shutter glasses and forget about
the NVIDIA "only for games" ones.

Ah, maybe someone out there with a Quadro FX with the stereo plug and the
NVIDIA shutter glasses can answer me: do the glasses work with it?

Best Regards,
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Stereo Problem

2009-05-26 Thread Himar Carmona
2009/5/26 Jan Ciger 

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Himar Carmona wrote:
> >Our impressions:
> >
> > Though the glasses are "VESA compatible", the ir emitter doesn't
> > work if it isn't connected to the pc via USB AND a DirectX application
> > is running fullscreen.
> > The ir emitter does not use the sync in signal if it is connected to
> > a "3d ready" display (either samsung 120Hz lcd and DepthQ projector). It
> > seems to prioritize the usb  signal in these cases.
>
> Isn't the USB connection used just to get the power for the signal
> converter? The stereo sync signal is normally emitted on the VGA
> connector directly.
>

I don't know exactly how it works, but the IR emitter certainly uses the USB
connection for data as well. In a "normal" configuration (i.e. without using
the sync-in plug), the IR emitter is connected to the PC only by the USB
cable and does not need to be plugged into the VGA connector. So I deduce
that if it receives the DDC signal, it is the driver that routes that
signal. I also suppose that it uses some sync protocol other than a
simple square wave.


>
> That it doesn't work right with the displays you mention could be due to
> the fact that they use the DDC signal for the original purpose -
> communicating the monitor properties to the PC and - and therefore would
> interfere with the stereo sync.
>


On Windows, when you install the 3D Vision driver, it installs a new
group of options in the NVIDIA control panel to configure the stereoscopic
display. It has a setup wizard and a test application to help in the
process. If the graphics card detects a "3D ready" monitor (like the Samsung
120Hz LCD or the DepthQ projector), it disables the option to select a
generic CRT display or HDTV DLP, and the only configuration I
found was to tell the driver to use one of these. In those cases, if you run
the test application, it tells you with an on-screen message that it does not
detect the sync signal, so it forces you to connect via the stereo connector's
sync-in plug. So I supposed that:

   1. If the graphics card is connected to a 3D-ready display, the IR
emitter will receive the sync via USB.
   2. Otherwise, if the driver is configured for a CRT or DLP, it will use
the external sync-in signal (another plug on the IR emitter).

  With the vertical sync of the VGA connector injected through this plug
(with the circuitry I mentioned in my post) and the driver configured as in 2,
the glasses work like (VESA compatible?) ones.

   Notice also that we use two PCs: one with the Quadro and OSG installed (and
without the 3D Vision driver, because it won't install for a Quadro card), and
the other with the 3D Vision driver and the IR emitter plugged in.

  If you plug in the IR emitter without installing the drivers, it doesn't work
(it blinks red), so I can't plug it into the first PC. I don't know how other
stereoscopic systems (glasses + IR emitters) work or how they are
configured; this one seems to be tightly integrated with the software
drivers.

Regards,
Himar.


>
> Regards,
>
> Jan
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.9 (GNU/Linux)
> Comment: Using GnuPG with Mandriva - http://enigmail.mozdev.org
>
> iD8DBQFKG8tbn11XseNj94gRAsX2AKCXY57SRXgaNn8pWz+NPxHgcCfVdwCfXNwx
> RaKlxS06CKxziKwquvKpw5k=
> =25NN
>  -END PGP SIGNATURE-
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Stereo Problem

2009-05-26 Thread Himar Carmona
Thank you for the info, it is certainly valuable. I'll dig a bit more into DDC
and sync. W.r.t. the NVIDIA shutter glasses, it seems that NVIDIA's objective
was to get stereo "transparently" into DirectX apps, not to be fully compliant
with stereoscopic standards. Hmm. Not too useful for general stereo software
like OSG (unless they add the announced support for quad buffering).

Best regards,
Himar.

2009/5/26 Jan Ciger 

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Himar Carmona wrote:
>
> >> Don't know exactly how it works, but for sure the ir emitter use the USB
> >> connection also for data. In a "normal" configuration (i.e. without
> >> using the sync in plug), the ir emitter is connected to the pc only by
> >> the usb cable and does not need to be plugged to the VGA connector. So,
> >> i deduce that if it receives the DDC signal, then it is the driver that
> >> enroute that signal. I also suppose that it uses some sync protocol
> >> different than a simply square wave.
>
> That looks like some kind of NVIDIA proprietary solution :(
>
> EDIT: Scratch that. I have checked the link you have sent before
> (http://www.int03.co.uk/crema/hardware/stereo/) and looked at the
> schematics. LOL, that is certainly not anything USB-based :-p
>
> The whole device just gets your stereo sync from the VSYNC signal and
> the only thing it needs USB for is 5V power for the flip-flop chip
> inside. You could replace that with a standard wall-wart power adaptor
> or a battery if you do not want USB there.
>
>
> >>   If you plug the ir emitter without installing the drivers it doesn't
> >> work (blinks red), so i can't plug it in the first pc. I don't know
> >> other stereoscopic systems (glasses + ir emitters), how they work or how
> >> they are configured. This ones seems to be tightly integrated with the
> >> software drivers.
>
> The "normal" way of doing this (e.g. the CrystalEyes glasses originally
> used by SGI, "VESA" stereo) and all the Quadro line that supports stereo
> work in a much simpler way.
>
> As soon as the quad buffer stereo visual is enabled by the application,
>  the card starts producing the stereo sync signal which is just a simple
> 5V TTL level square wave output to the IR emitter. That is generated
> directly by the hardware, there is nothing for the driver to do, just
> turn it on - the signal flips depending on which framebuffer is selected
> for rendering in the application. The emitters are connected using the
> 3pin mini DIN plug - one pin 5V power for the emitter, one pin ground
> and one pin the TTL sync signal (http://geektechnique.org/images/1927.jpg
> ).
>
> Nvidia drivers require the stereo support being enabled in the settings
> (in Windows in the profile, in Linux in xorg.conf), but that is all. For
> Nvidia hardware you can also choose how to emit the signal - either
> through the VGA connector or via the mini DIN plug - e.g. HMDs need it
> on the VGA connector, like the popular eMagin Z800, despite having an
> USB plug, that one is used only for power and the built-in tracker.
>
> I even have an old pair of Asus shutter glasses that are using
> non-standard 3.5mm headphone jack, but the signals are the same 5V TTL
> level (measured it with a scope). Asus shipped these with some of their
> old GeForce 4 cards. On that card you had to use the old "consumer
> stereo" drivers from Nvidia to get a stereo effect in Windows, however
> the output is constantly on, there is nothing to enable or disable (the
> glasses are on all the time). Most likely they have simply wired the
> connector to the VSYNC signal. This has worked in Linux just fine
> (render left/right frames alternating), but you do not have any control
> over the sync, so sometimes the stereo polarity is wrong (eyes get
> swapped due to application latency, e.g. opening a menu).
>
> Regards,
>
> Jan
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.9 (GNU/Linux)
> Comment: Using GnuPG with Mandriva - http://enigmail.mozdev.org
>
> iD8DBQFKHEzun11XseNj94gRAqZ3AKCfAbAFXry5bzXbNvdG0+sCkCug6wCfSHlE
> QVBedWVlC5oAOcQ40iZUcH0=
> =7lGQ
>  -END PGP SIGNATURE-
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Windows forms and c#

2009-06-22 Thread Himar Carmona
Hi Ba,

    Maybe I can recommend two books about C++/CLI:

First, "Pro Visual C++/CLI and the .NET 2.0 Platform" by Stephen R.G.
Fraser. This is an introductory manual to .NET from the C++/CLI perspective.
It's like a beginner C# book, but instead using c#, it uses C++/CLI. Here
you will find how to use the managed .NET API from C++/CLI.

   Second, "C++/CLI in Action" by Nishant Sivakumar. This book complements
the first one, and it speaks more about interop and has many examples and
test cases, for example, how to build a managed wrapper around a native
library (the way to go with OSG and .NET).

   A side note: Microsoft changed managed C++ drastically from .NET 1.1 to 2.0,
so there are two managed C++ language specifications. The first one (already
obsolete) is Managed C++; this was Microsoft's first attempt to "bridge
the gap", and it is not very nice. The second one, C++/CLI, is the latest
attempt and is more mature and easier to grasp. Forget everything about Managed
C++ and use only C++/CLI. This confused me at first, so if you find references
to Managed C++, ignore them. I think "Managed C++" and "C++/CLI" are
the proper terms to search for and to tell them apart.

  Of course, if you are new to C++, also grab a good book about C++ and the
STL. It will help you understand OSG better.

  Sorry OSG community for this off-topic message.

  Hope this can help you. Best regards.
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] thousands spheres renedering with OSG

2009-07-02 Thread Himar Carmona
Perhaps I'm being foolish, but...

Might disabling culling help? If your bottleneck is culling, perhaps the GPU
would eat it all without problems. Of course you will render a lot of
geometry you will not see, but you will avoid a lot of CPU computation. It is a
quick and dirty test; if you are not happy, you can then take the other
approaches.

The problem could also be in visiting a huge scene graph...?


Best regards,
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] thousands spheres renedering with OSG

2009-07-02 Thread Himar Carmona
Hi Robert,

  thanks for the explanation. I always confuse culling (not visible because
it is outside the view frustum?) with occlusion (not visible because an object
is in front of it?).

Best regards,
Himar.



2009/7/2 Robert Osfield 

> Hi Himar,
>
> On Thu, Jul 2, 2009 at 6:08 PM, Himar Carmona
> wrote:
> > Disabling culling may help? If your bottleneck is culling, perhaps the
> GPU
> > would eat them all without problems. Of course you will render a lot of
> > geometry you will not see, but you will avoid a lot of CPU computation.
> Fast
> > and dirty test. If you are not happy, then you can take the other
> approachs.
>
> Disabling culling will do nothing as the OSG already only does cull
> tests when subgraphs are partially culled - if the subgraph is
> entirely within the view frustum the view frustum test if switched
> off.
>
> The majority of the cull traversal time in Mikhail's case will simple
> by traversing a big scene graph and they creating a big rendering
> graph to rendering it.
>
> Robert.
>  ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] how can I ignore the UpdateCallback

2009-10-16 Thread Himar Carmona
Hi,

Specifically, AnimationPathCallback has a setPause method.

More generally, you can control which nodes OSG updates (i.e. visits
during an update traversal) using node masks. Look at the methods
osg::NodeVisitor::setTraversalMask and validNodeMask for an explanation of
how this works. In short, visitors have a bit mask (defaulting to
0xffffffff) and nodes have a bit mask (defaulting to 0xffffffff). If the
bitwise AND of the two gives a result other than 0, the node
will be visited; if the result is 0, the node will not be visited. (Really
the node will be visited, but the update callback will not be called.) Node
visitors also have a node mask override that allows you to "override" the node
mask and force the node to be visited, ignoring its mask.

 Hope this helps.
Himar.
2009/10/15 Paul Martz 

> AnimationPath runs off of sim time, so if you mod your app to not increment
> sim time, then the animation will not play.
>
> Paul Martz
> Skew Matrix Software LLC
> _http://www.skew-matrix.com_ 
> +1 303 859 9466
>
>
>
>
> Martin Großer wrote:
>
>> Hello,
>>
>> my short question is. When I have a Node with an UpdateCallback function
>> (maybe a animation), can I skip this function?
>> Also my Problem is, I export a graph from 3D Max with an animation. The
>> animation is in a update callback function like the following lines:
>>
>> [...]
>> MatrixTransform {
>>DataVariance DYNAMIC
>>name "Kugel01"
>>nodeMask 0xff
>>cullingActive TRUE
>>UpdateCallbacks {
>>  AnimationPathCallback {
>>DataVariance DYNAMIC
>>pivotPoint 0 0 0
>>timeOffset 0
>>timeMultiplier 1.0
>>AnimationPath {
>>  DataVariance DYNAMIC
>>  LoopMode LOOP
>>  ControlPoints {
>> [...]
>>
>> But I would not start the animation right at the beginning. I think the
>> viewer calls the update function (update visitor) every frame. I want to
>> start manually the animation. There are an easy way?
>>
>> Cheers,
>>
>> Martin
>>
>> ___
>> osg-users mailing list
>> osg-users@lists.openscenegraph.org
>> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>>
>>
>> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] how can I ignore the UpdateCallback

2009-10-16 Thread Himar Carmona
The viewer didn't draw the node because you set the node mask to a value
that also prevents the cull visitor from processing (visiting) the node?

You can't set it to 0, because then you get exactly that behavior. Take a
look at the method ViewerBase::frame. To render a frame, it does three
traversals of the scene graph: the event traversal, the update traversal and
the rendering traversal. It uses a different visitor for each traversal, but
each visitor sees the same node mask (because it is a node attribute, not a
visitor one). Each visitor has a mask that defaults to 0xffffffff (all ones),
and each node mask defaults to 0xffffffff. So, for each visitor, the
validNodeMask test gives a result different from 0. I suppose that what you want
to achieve is to control the update visitor but leave the event and rendering
visitors untouched. So you need to set the node masks and the visitor
masks to particular values to achieve this behavior. If you set the node mask to
0, all visitors will ignore the node (not what you want); you also need to
set the visitor mask (using NodeVisitor::setTraversalMask).

   Try these values:

  1) Set the update visitor's traversal mask to 0x00000001 with
viewer.getUpdateVisitor()->setTraversalMask(0x00000001).

  2) Set the node mask of the node you don't want visited during the update
traversal to a value with the last bit cleared (0xfffffffe):
myNode->setNodeMask(0xfffffffe).

Then the other two visitors will keep the default value (0xffffffff).
The result:

- Event traversal: traversalMask & nodeMask = 0xffffffff & 0xfffffffe =
0xfffffffe. Not equal to 0, so the node is visited.
- Update traversal: traversalMask & nodeMask = 0x00000001 & 0xfffffffe =
0. So the node is not visited (the update callback isn't called).
- Rendering traversal: traversalMask & nodeMask = 0xffffffff &
0xfffffffe = 0xfffffffe. Not equal to 0, so the node is rendered.
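
In code, the two settings from points 1 and 2 above would look like this
(viewer and myNode are assumptions of the example):

    const osg::Node::NodeMask UPDATE_ONLY_BIT = 0x00000001;

    viewer.getUpdateVisitor()->setTraversalMask(UPDATE_ONLY_BIT);
    myNode->setNodeMask(~UPDATE_ONLY_BIT);   // 0xfffffffe: still culled/rendered and receives
                                             // events, but skipped by the update traversal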

   Of course, I'm just guessing about what you have coded; try this if you
want. I suppose your idea could also work, but the difficulty will be how to
define a new event.

Best regards,
Himar.
2009/10/16 Martin Großer 

> Hello,
>
> Paul, when I stop the sim time then stop all animation. Or is it a wrong
> guess?
>
> Himar, I tried to change the NodeMask and the viewer did not draw the node.
> I think all functions were ignored, but I would ignore the update callback
> only.
>
> I have an idea. Can I Copy the update callback to event callback and after
> this I remove the udate callback. Now I can define a "event" to run the
> event callback?
>
> Cheers, Martin
>
> Am 16.10.2009 08:59, schrieb Himar Carmona:
>
>   Hi,
>
> Specifically, AnimationPathCallback has a setPause method.
>
> More generally, you can control which nodes OSG updates (i.e. visits
> during an updateTraversal) using node masks. Look at the methods
> osg::NodeVisitor::setTraversalMask and validNodeMask for an explanation
> about how it works. In short, visitors have a bit mask (defaults to
> 0x) and nodes have a bit mask (defaults to 0x). If the
> bitwise "and" operation between them gives a result other than 0, the node
> will be visited. If the result is 0, the node will not be visited. (Really
> the node wil be visited but the update callback will not be called). Node
> visitors has also a Node mask override that allow you to "override" the node
> mask to force the node to be visited ignoring its mask.
>
>  Hope this helps.
> Himar.
> 2009/10/15 Paul Martz 
>
>> AnimationPath runs off of sim time, so if you mod your app to not
>> increment sim time, then the animation will not play.
>>
>> Paul Martz
>> Skew Matrix Software LLC
>> _http://www.skew-matrix.com_ <http://www.skew-matrix.com/>
>> +1 303 859 9466
>>
>>
>>
>> Martin Großer wrote:
>>
>>> Hello,
>>>
>>> my short question is. When I have a Node with an UpdateCallback function
>>> (maybe a animation), can I skip this function?
>>> Also my Problem is, I export a graph from 3D Max with an animation. The
>>> animation is in a update callback function like the following lines:
>>>
>>> [...]
>>> MatrixTransform {
>>>DataVariance DYNAMIC
>>>name "Kugel01"
>>>nodeMask 0xff
>>>cullingActive TRUE
>>>UpdateCallbacks {
>>>  AnimationPathCallback {
>>>DataVariance DYNAMIC
>>>pivotPoint 0 0 0
>>>timeOffset 0
>>>timeMultiplier 1.0
>>>AnimationPath {
>>>  DataVariance DYNAMIC
>>>  LoopMode LOOP
>>&

[osg-users] About checkNeedToDoFrame, rendering on demand and _requestRedraw

2009-10-21 Thread Himar Carmona
 Hello,

recently (developer release 2.9.5) I noticed that the implementations of
checkNeedToDoFrame in Viewer and CompositeViewer have different
conditions: Viewer checks whether the camera or any node has an
UpdateCallback, while CompositeViewer doesn't. Is there a rationale
behind that?

I'm asking about it because we use rendering on demand, and all went OK
until we needed to render based on the manipulator's behavior, i.e. on the
_requestRedraw flag. We didn't use checkNeedToDoFrame; instead we checked for
events and database pager requests (like checkNeedToDoFrame does). But now we
need to check _requestRedraw, which is protected and has no getter, so
we changed the code to call checkNeedToDoFrame. But this
method returns true if there is any UpdateCallback. Unluckily, we have many
UpdateCallbacks, and the end result is continuous rendering (not what we want;
the problem is aggravated by the high CPU consumption of the
render, perhaps related to NVIDIA's busy waiting).

   I'm searching for a way to implement a method similar to
checkNeedToDoFrame outside of the Viewer:

 - Is there an alternative way to test whether _requestRedraw is set? How
about adding a getRequestRedraw method to ViewerBase? Would it have any
implications or side effects?
 - Do we need to check for UpdateCallbacks? In our case, we use
update callbacks to check an external condition; they do nothing 90% of the
time.
 - Does the implementation of checkNeedToDoFrame need to be
different between Viewer and CompositeViewer, or is this a bug?
Thanks in advance and best regards,
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Call for Increased Participation

2009-10-23 Thread Himar Carmona
Hi,

  Is there anything a "sheep" can do for the "shepherds"? I read this
thread and it seems the leaders are discussing how to organize the work
to free Robert up a bit. So forgive me if I'm not targeting the real
discussion, but I want to know whether a person like me, without much
time and with "meagre" knowledge of OSG (in comparison to yours)
but with a desire to collaborate, can do something. Perhaps committing
some docs or examples, fixing some bugs, or simply doing code
review?

    Perhaps there are some people in the same situation as me,
wanting to offer their help but too "shy" to take a step forward. And
perhaps two or three people like me, working together temporarily on
a little parcel of the field, could let the synergy start to grow. I
don't know, but for sure there are many people in the same situation
who only need a concrete (and affordable) task in order to step
forward. Any more volunteers? :)


   Thanks everybody and best regards,
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Breadth-first-search

2009-10-23 Thread Himar Carmona
Hi,

   I agree with both of you; a queue seems to be necessary. I coded a
BFS node visitor in response to this thread. Here is the code:

#include <queue>
#include <iostream>

#include <osg/Group>
#include <osg/NodeVisitor>

class BFSNodeVisitor : public osg::NodeVisitor
{
public:

    // queue of nodes that still have to be visited
    std::queue<osg::Node*> _pendingNodesToVisit;

    BFSNodeVisitor() :
        osg::NodeVisitor( osg::NodeVisitor::TRAVERSE_ALL_CHILDREN )
    {
    }

    virtual void apply( osg::Node& node )
    {
        // leaf node: just process it
        std::cout << "Processing " << node.getName() << std::endl;
    }

    virtual void apply( osg::Group& group )
    {
        // process the group, then enqueue its children so that siblings
        // are visited before any grandchildren (breadth-first order)
        std::cout << "Processing " << group.getName() << std::endl;
        for (unsigned int i = 0; i < group.getNumChildren(); ++i)
        {
            _pendingNodesToVisit.push(group.getChild(i));
        }
        while (!_pendingNodesToVisit.empty())
        {
            osg::Node* node = _pendingNodesToVisit.front();
            _pendingNodesToVisit.pop();
            node->accept(*this);
        }
    }
};
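
For anyone who wants to try it, here is a minimal usage sketch (the model
file name is just an example):

#include <osg/ref_ptr>
#include <osgDB/ReadFile>

int main()
{
    // load any scene; "cow.osg" is just an example file name
    osg::ref_ptr<osg::Node> scene = osgDB::readNodeFile("cow.osg");
    if (!scene.valid()) return 1;

    BFSNodeVisitor bfs;
    scene->accept(bfs);   // prints the node names in breadth-first order
    return 0;
}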

I tested it with a little test scene and it seems to do the job (but it
has not been tested on a truly complex graph). This code does a BFS
traversal through the scene graph, with two noteworthy points to take into
account:

1) As a node can have more than one parent, this code will traverse the
node as many times as it has parents (the same behaviour as the
depth-first traversal implemented in OSG).

2) The scene graph must be acyclic, since this code has no visited flag or
stop condition. If there were a cycle in the scene graph, the queue would
never be emptied and the while loop would never stop.

It works exactly as Art Tevs says: visit the node and then add its
children to a queue. I suppose Art will already have his own code
complete, but I post it for other interested readers (or Art may have
another solution). Just half an hour of coding (ughhh), so take it as it
is: no more than a possible starting point. Perhaps someone smarter can
optimize it a bit, for example by keeping the queue from growing too much
(see the sketch below for one possible refinement), but remember a tenet
of optimization: optimizing memory normally costs more code (and hence
execution time), and vice versa. :)
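
As a possible refinement for points 1 and 2 above (my addition, untested
on a large graph), here is a variant that tracks visited nodes, so that a
node with several parents is processed only once and a cyclic graph cannot
hang the loop:

#include <set>

class UniqueBFSNodeVisitor : public BFSNodeVisitor
{
public:

    virtual void apply( osg::Node& node )
    {
        if (!_visited.insert(&node).second) return;  // already processed
        std::cout << "Processing " << node.getName() << std::endl;
    }

    virtual void apply( osg::Group& group )
    {
        if (!_visited.insert(&group).second) return;  // already processed
        std::cout << "Processing " << group.getName() << std::endl;
        for (unsigned int i = 0; i < group.getNumChildren(); ++i)
        {
            _pendingNodesToVisit.push(group.getChild(i));
        }
        while (!_pendingNodesToVisit.empty())
        {
            osg::Node* node = _pendingNodesToVisit.front();
            _pendingNodesToVisit.pop();
            node->accept(*this);
        }
    }

private:
    std::set<const osg::Node*> _visited;  // nodes already printed
};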

Hope this helps and best regards,
Himar.

2009/10/23 Jason Daly :
> Art Tevs wrote:
>>
>> Indeed, the current osg implementation uses recursion to traverse the
>> graph, which is very suitable for DFS. Implementing BFS with recursion is
>> not as easy as DFS. The way I would solve the problem is to write a
>> method which iteratively collects nodes in a queue and then applies the
>> new visitor on each node. Does anybody have other ideas? Or maybe
>> somebody can correct me if I am making mistakes here.
>>
>
> A BFS will always involve a queue, by its very nature, in the same way that
> a DFS always involves a stack (usually the stack of recursive calls, but you
> can also do a DFS iteratively by implementing your own stack).
>
> You're right, it's not easy (if it's even possible) to implement a BFS
> recursively.
>
> --"J"
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] Side-effect with two cameras sharing Viewports

2009-10-26 Thread Himar Carmona
Hi,

   I noticed a strange (buggy?) behaviour with two cameras sharing the
same Viewport instance in version 2.9.5 (Windows XP).

   Situation: Main camera, adding a slave camera (with addSlave) and
setting the slave camera's viewport with this line of code:

  slaveCamera->setViewport( viewer.getCamera()->getViewport());

   The objective is to have two cameras rendering to the whole window.
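
   For illustration, a rough reproduction sketch (the model file name and
window size are just examples; clear masks, render order and the like are
omitted):

#include <osg/Camera>
#include <osgDB/ReadFile>
#include <osgViewer/Viewer>

int main()
{
    osgViewer::Viewer viewer;
    viewer.setUpViewInWindow(50, 50, 800, 600);
    viewer.setSceneData(osgDB::readNodeFile("cow.osg"));

    osg::ref_ptr<osg::Camera> slaveCamera = new osg::Camera;
    slaveCamera->setGraphicsContext(viewer.getCamera()->getGraphicsContext());

    // sharing the very same Viewport instance is what triggers the problem
    // on window resize
    slaveCamera->setViewport(viewer.getCamera()->getViewport());

    viewer.addSlave(slaveCamera.get());
    return viewer.run();
}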

   Problem: If the viewer is in window mode (setUpViewInWindow) and
the window is resized, the viewport updates incorrectly. In my case,
it doesn't fill the window.

   I suspect this behaviour is due to
GraphicsContext::resizedImplementation updating the viewport twice, since
it is shared between the two cameras.

   Workaround: Don't share the viewport between two cameras :)

   slaveCamera->setViewport(new
osg::Viewport(*(viewer.getCamera()->getViewport())));

   or

   osg::Viewport* vp = viewer.getCamera()->getViewport();
   slaveCamera->setViewport(vp->x(), vp->y(), vp->width(),
vp->height());

If this isn't the expected use (i.e. I'm using it wrongly), or if
resizedImplementation needs to be fixed, then I could try to do the job
and submit a patch for this odd behaviour. At the very least I could
modify Camera and add some comments regarding this behaviour. But first I
wanted to ask the experts' advice about it.


   Best regards,
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Video as background

2009-10-26 Thread Himar Carmona
  Hi,

   If your system is Vista, run the installer as Administrator; perhaps you
don't have permission to write to Program Files. Try copying the file there
manually if you can. Also make sure no application is using the dll.

2009/10/26 Nectarios Pelekanos :
> Thanks,
>
> I have tried to install osgART but I had a problem during the final step of
> the installation. I don't know if someone can help, but I tried to install
> osgART 2.0 RC3, and after the CMake and the build install I get the
> following error:
>
> file INSTALL cannot copy file
>  "C:/Users/.../osgART_2.0_RC3/build/lib/Debug/osg55-osgART_debug.dll"
>  to "C:/Program Files/osgART/bin/osg55-osgART_debug.dll".
>
> Any suggestions??
>
> Thank you!
>
> Cheers,
> Nectarios
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=18716#18716
>
>
>
>
>
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] osgWidget suggestions

2009-10-27 Thread Himar Carmona
Hi Jeremy,

Some days ago you asked for feedback on osgWidget. Since I read your
message I have been looking at osgWidget and I have some suggestions and
comments about how it works and possible future lines of work. And no, I'm
not using it; osgWidget came too late for me, so I have left it out (for
the moment). I use OSG for a project that keeps growing, so now I'm looking
at it more closely, and osgWidget is something I will use.

First, I must say it's a good job. As far as I have seen it looks
promising. But do you have any documentation regarding its design and/or
future plans? So I don't spend hours thinking about something you have
already thought through! :) The first question I would ask is (read beyond
the obvious): what is the concrete objective of osgWidget? I'm not fooling
around here. I mean: is its objective to be a thin abstraction layer that
allows some basic UI programmability with OSG, or is it to be a complete
UI framework for user interfaces on OSG?

Are you still open to suggestions?

   PS: Your optimism is commendable! Good for you. I wish you the best!
Keep going!

Best regards,
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] osgWidget suggestions

2009-10-30 Thread Himar Carmona
 Hi again,

I have been thinking, and trying to put the ideas in my mind in order so I
can share them with you in an organized manner.

From your message, I guess you implicitly have objectives in mind. I will
comment on some ideas I had; perhaps they are out of reach for now, but
I'm trying to grasp the big picture behind osgWidget (past, present and
future).

As far as I understand, osgWidget is a windowing toolkit, mainly 2D, to
overlay on top of a scene (ortho camera), i.e. it works around the common
abstraction of windows (main containers) and widgets (aka controls). These
are, to date, 2D in nature, and one can't do much to control their
appearance other than styling, texturing and composing them
hierarchically. Also, there are three specific Widget "aspects" or uses
(Browser, PdfReader and VncClient) that osgWidget gives you. osgWidget
provides event handling, focus management, sizing and z-order (in the
sense of a 2D windowing toolkit). Also, to build a window it uses a
ScriptEngine that uses either Python or Lua to externalize the window
"template". Am I right so far?

   That's all I have grasped from the code. Now my ideas:

   There are many, many "evils" that osgWidget (or any windowing toolkit
in general) has to face: separation of concerns, input handling, ease of
use (of the toolkit), difficult abstractions, extensibility (endless
window contents and aspects), ...

   1) Among windowing toolkits (X11, Qt, Windows Forms and lately Flex
components and WPF), there seems to be a tendency towards a declarative
programming model versus an imperative one (oh dear, I come from the
Windows world :) ). Flex (Flash components) and WPF share many concepts
along this line: they both use XML for the "presentation", combined with
code-behind "logic". I suppose this came from the web world (HTML +
JavaScript or server-side code: PHP, JSP, ASP, ...). This has some real
pros: separation of logic and visuals, extensibility, ease of building
forms (windows). So, to the recent discussion about the use of C++ or Lua
for osgWidget, I add another option: the possibility of a declarative
programming model using some hierarchical domain-specific language. Using
a scene graph file for this (.osg or .ive) could be a good starting point.
It's mature, it's here to stay, and it's very extensible and hierarchical;
perhaps in the near future it will have an XML export file format for
friendly offline editing. With a new ScriptEngine acting as an adaptor,
this objective is not as far away as it seems. Of course, since C++ is an
imperative language, an adaptor or interpreter is needed to parse and
realize a "declarative approach".

   2) A declarative model tends to have a logic tree (example: a window
composed of a tabular set of buttons) and a visual tree (how a button is
represented or rendered: its border, its content, its color, ...). This
separation helps when designing a UI, in that you can first decide what
you need and then how you will render it.

   3) Container vs. window: a window is no more than a "widget" that is a
"widget container" with a specific appearance. Required by the windowing
platform, it is necessary, but its logic can be more abstract: it is a
specialization of a container. So the concepts of "Container" and
"Content" are really useful here; it is the Composite pattern in the GoF
sense. Also, as far as I know, osgWidget lacks the concept of a "listbox"
(a container of an indeterminate number of items).

   4) Layout: ouch, the sword of Damocles of UI. All of us want the
windowing toolkit to lay out the content automatically in a resizable
container, with many options for alignment, ordering, tabbing, sizing,
anchoring and scrollbars, but it must also be easy to use and nearly
transparent. So having this aspect extensible is good (unless you want to
spend many years programming layouts). osgWidget already has it with
Table, Frame, Box and Canvas. I have noticed that in other toolkits (Flex,
WPF) layout is implemented as a two-pass process (traversal): one pass for
measuring and one for positioning (a toy sketch of this pattern follows
the list below). Basically, a parent has a definable area in which to
position its resizable children, plus an intent about how to position them
relative to each other. I think this generalization of layout is what has
allowed them to build a very well designed, extensible layout model.

  5) Why couldn't a window be positioned absolutely in 3D space (for
example, following a node or moving in sync with a scene)? There is a good
base to start from.

  6) What about data binding?
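
   To illustrate point 4, here is a toy sketch of the two-pass
(measure/arrange) layout pattern. This is not osgWidget code, just an
illustration of the general idea:

#include <algorithm>
#include <cstddef>
#include <vector>

struct Size { float w, h; };
struct Rect { float x, y, w, h; };

class LayoutElement
{
public:
    virtual ~LayoutElement() {}
    virtual Size measure(const Size& available) = 0;  // pass 1: how big would you like to be?
    virtual void arrange(const Rect& finalRect) = 0;  // pass 2: here is the space you actually get
};

// Example container: a vertical box that stacks its children top to bottom.
class VBox : public LayoutElement
{
public:
    std::vector<LayoutElement*> children;

    virtual Size measure(const Size& available)
    {
        _desired.clear();
        Size total = { 0.0f, 0.0f };
        for (std::size_t i = 0; i < children.size(); ++i)
        {
            Size s = children[i]->measure(available);
            total.w = std::max(total.w, s.w);
            total.h += s.h;
            _desired.push_back(s);
        }
        return total;
    }

    virtual void arrange(const Rect& finalRect)
    {
        float y = finalRect.y;
        for (std::size_t i = 0; i < children.size(); ++i)
        {
            Rect slot = { finalRect.x, y, finalRect.w, _desired[i].h };
            children[i]->arrange(slot);
            y += _desired[i].h;
        }
    }

private:
    std::vector<Size> _desired;  // results of the measure pass, reused in arrange
};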

  So far, I'm being very ambitious with all these aspects. But this is not
a list of objectives; they are just ideas. As you said, a full-blown UI
nodekit is out of reach at the moment. But I think there can at least be a
solid base for growing, step by step, along these lines. You have laid the
first stone (and not a bad one). For sure, now that you have coded
osgWidget, your expertise and your knowledge about what is achievable and
what is not are very precise.

   Think over what I have

[osg-users] OSG uniforms in GLSL shaders

2009-11-01 Thread Himar Carmona
Hi,

   How can I use the uniform osg_ViewMatrixInverse in my shaders? My
shader compiles fine, but it seems I am getting the wrong inverse view
matrix. I'm not sure whether OSG is initializing this uniform for me at
runtime. In fact, I don't know exactly what I have to take into account to
use it. Can anyone give me some hints regarding its (recommended) use?
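
   As a fallback, I suppose I could manage my own uniform and refresh it
each frame from the camera, independent of whatever built-in uniform
support the OSG version provides; the uniform name u_viewMatrixInverse and
the helper function below are just illustrative, and I would prefer to use
the built-in one:

#include <osg/StateSet>
#include <osg/Uniform>
#include <osgViewer/Viewer>

void runWithViewInverseUniform(osgViewer::Viewer& viewer,
                               osg::StateSet* shaderStateSet)
{
    osg::ref_ptr<osg::Uniform> viewInv =
        new osg::Uniform("u_viewMatrixInverse", osg::Matrixf());
    shaderStateSet->addUniform(viewInv.get());

    while (!viewer.done())
    {
        // getInverseViewMatrix() is the view-to-world transform, i.e. the
        // inverse of the camera's view matrix.
        viewInv->set(osg::Matrixf(viewer.getCamera()->getInverseViewMatrix()));
        viewer.frame();
    }
}

In the shader the uniform would then be declared as "uniform mat4
u_viewMatrixInverse;" and used in place of osg_ViewMatrixInverse.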

Thanks and best regards,
Himar.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org