Re: [osg-users] Using shared OpenGL contexts

2009-08-17 Thread Raphael Sebbe
Thank you Robert and Paul for this information.
To further clarify, my goal is to be able to use multiple GL contexts
representing the same scene from different camera angles, and those contexts
could well be GL views in windows, FBOs, or even Core Animation GL layers
(Mac). As said previously, possible applications include having textures on 3D
objects that are the result of some GPU image processing (Core Image), or the
other way around, using the result of 3D rendering as input to GPU filters
(Core Image). But all of these require some flexibility in the way you
create/declare (possibly shared) contexts, to avoid duplicating VBOs, display
lists and other GPU resources all over.
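
To make that concrete, here is a minimal sketch of the kind of setup I have in
mind, staying entirely inside osgViewer for now (the window sizes and model
file are placeholders; I am assuming Traits::sharedContext is the way to ask
OSG-created contexts to share their GL objects, and I am not sure whether the
State contextID also needs special handling -- please correct me if so):

#include <osgDB/ReadFile>
#include <osgGA/TrackballManipulator>
#include <osgViewer/CompositeViewer>
#include <osgViewer/View>

int main()
{
    osg::ref_ptr<osg::Node> scene = osgDB::readNodeFile("scene.osg"); // placeholder model

    osgViewer::CompositeViewer viewer;

    // First window/context.
    osg::ref_ptr<osg::GraphicsContext::Traits> traits1 = new osg::GraphicsContext::Traits;
    traits1->x = 50;  traits1->y = 50;
    traits1->width = 640;  traits1->height = 480;
    traits1->windowDecoration = true;
    traits1->doubleBuffer = true;
    osg::ref_ptr<osg::GraphicsContext> gc1 =
        osg::GraphicsContext::createGraphicsContext(traits1.get());

    // Second window/context, requesting GL object sharing with the first one.
    osg::ref_ptr<osg::GraphicsContext::Traits> traits2 = new osg::GraphicsContext::Traits;
    traits2->x = 720;  traits2->y = 50;
    traits2->width = 640;  traits2->height = 480;
    traits2->windowDecoration = true;
    traits2->doubleBuffer = true;
    traits2->sharedContext = gc1.get();
    osg::ref_ptr<osg::GraphicsContext> gc2 =
        osg::GraphicsContext::createGraphicsContext(traits2.get());

    // One view per context, both showing the same scene from their own camera.
    osg::GraphicsContext* contexts[2] = { gc1.get(), gc2.get() };
    for (unsigned int i = 0; i < 2; ++i)
    {
        osg::ref_ptr<osgViewer::View> view = new osgViewer::View;
        view->setSceneData(scene.get());
        view->getCamera()->setGraphicsContext(contexts[i]);
        view->getCamera()->setViewport(new osg::Viewport(0, 0, 640, 480));
        view->setCameraManipulator(new osgGA::TrackballManipulator);
        viewer.addView(view.get());
    }

    return viewer.run();
}

What I would like is the same picture, but with gc1/gc2 replaced by contexts
created and owned by the host application (Cocoa views, FBOs, CA layers).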

A nice extension to OSG would be to make it possible to work with external
contexts, even if some constraints had to be enforced (such as preserving
state across OSG calls, or other threading constraints). But this is probably
easier said than done. I will dig some more into the source to find out.
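
For reference, this is roughly how I drive OSG from an externally created
context today, via GraphicsWindowEmbedded (a sketch; the sizes, the model file
and the host callbacks are placeholders -- the make-current call is whatever
the host toolkit provides, an NSOpenGLContext in my case):

#include <osgDB/ReadFile>
#include <osgViewer/Viewer>

// One viewer per external context; OSG renders into whatever GL context the
// host application has made current before frame() is called.
osgViewer::Viewer viewer;                       // global only for brevity

void setupViewer()
{
    viewer.setUpViewerAsEmbeddedInWindow(0, 0, 800, 600);
    viewer.setSceneData(osgDB::readNodeFile("scene.osg"));   // placeholder model
    viewer.realize();
}

// Called by the host once per frame, with its GL context already current
// (e.g. from an NSOpenGLView drawRect: or a display-link callback).
void drawFrame()
{
    viewer.frame();
}

As Robert notes below, this route forfeits osgViewer's threading and
multi-context management, which is precisely the part I would like to keep.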

Thanks,

Raphael


On Fri, Aug 14, 2009 at 6:33 PM, Paul Martz  wrote:

>  Robert -- Thanks for replying to this; I saw the post and was curious
> myself as to how this could be done.
>
> Let's assume the app has created its own contexts and done so with object
> sharing set up correctly. Taking just DLs as an example, could the app
> enforce its own DL sharing with a Drawable draw callback? Seems like this
> would be possible, but I'd have to dig further.
>
> Not sure at all how an app would do this with shared buffer objects,
> texture objects, etc.
>
>  Paul Martz
> Skew Matrix Software LLC
> *http://www.skew-matrix.com* <http://www.skew-matrix.com/>
> +1 303 859 9466
>
>
> Robert Osfield wrote:
>
> Hi Raphael,
>
> On Tue, Aug 11, 2009 at 6:36 PM, Raphael Sebbe wrote:
>
>
>  Is it possible in OSG to use shared OpenGL contexts that are created outside
> of OSG, and yet declare to OSG that they are sharing their texture spaces
> (and VBO, etc)? I searched the documentation, but could not find a way to do
> that.
>
>
>  There isn't any support for doing this directly.  If you use the
> GraphicsWindowEmbedded feature of osgViewer then you could probably do
> it, but you'd lose threading and multi-context support provided by
> osgViewer.  If you just need a single context and single threading
> then this would be OK.  See the osgviewerSDL and osgviewerGLUT for
> examples of this in action.
>
> In general though, the OSG is designed to be the primary way you drive
> OpenGL apps rather than something that is secondary and piggybacking
> on another part of the app that does OpenGL.  Lazy state
> updating, state sorting, threading and multi-context management
> that make the scene graph so useful are all things that make mixing
> with 3rd party OpenGL code awkward.
>
> Robert.


[osg-users] Using shared OpenGL contexts

2009-08-11 Thread Raphael Sebbe
Hi,
Is it possible in OSG to use shared OpenGL contexts that are created outside
of OSG, and yet declare to OSG that they share their texture spaces
(and VBOs, etc.)? I searched the documentation, but could not find a way to do
that.

Thank you,

Raphael


Re: [osg-users] Using GL textures generated outside of OSG, in a shared GL context

2009-07-27 Thread Raphael Sebbe
Hi Robert,
OK, I will investigate that route.

The main idea is that some work gets done on the GPU with another API, and
it would lose all benefit if I had to read the result back over the bus to
CPU memory only to send it back to OSG to recreate a GL texture from it.
Lots of things can be done on the GPU these days...
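
For the record, here is the kind of minimal wrapper I have in mind after
Robert's suggestion below (a sketch only: the GLuint is assumed to come from
the other API and to live in the same, or a shared, context, and it bypasses
OSG's own texture object management entirely):

#include <osg/State>
#include <osg/Texture2D>

// Texture2D subclass that binds an externally created texture object instead
// of letting OSG allocate and manage its own.
class ExternalTexture2D : public osg::Texture2D
{
public:
    explicit ExternalTexture2D(GLuint externalID) : _externalID(externalID) {}

    virtual void apply(osg::State& /*state*/) const
    {
        glBindTexture(GL_TEXTURE_2D, _externalID);
    }

private:
    GLuint _externalID;
};

It would then be attached like any other texture, e.g.
stateSet->setTextureAttributeAndModes(0, new ExternalTexture2D(id)).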

Thank you,

Raphael

On Mon, Jul 27, 2009 at 10:26 AM, Robert Osfield
wrote:

> Hi Raphael,
>
> It's technically possible, but the OSG is not designed out of the box
> to facilitate this, so you'll need to wrap up the texture ID via a
> custom subclass of osg::Texture.
>
> Personally I would avoid playing games like this, as managing threading
> and multiple contexts outwith the full control of the OSG will require
> very careful management.
>
> Robert.
>
> On Sun, Jul 26, 2009 at 6:45 PM, Raphael Sebbe
> wrote:
> > Hi all,
> > In my app, I would like to use an OpenGL texture (with an existing GLuint
> > id) as a texture map on an object of an OSG scene. The texture is generated
> > outside of OSG, but the OSG GL context and the texture GL context are the
> > same (or shared).
> > Could someone tell me if this can be achieved in OSG, and if it is, how?
> > Thank you in advance,
> > Raphael


[osg-users] Using GL textures generated outside of OSG, in a shared GL context

2009-07-26 Thread Raphael Sebbe
Hi all,
In my app, I would like to use an OpenGL texture (with an existing GLuint id)
as a texture map on an object of an OSG scene. The texture is generated outside
of OSG, but the OSG GL context and the texture GL context are the same (or
shared).

Could someone tell me if this can be achieved in OSG, and if it is, how?

Thank you in advance,

Raphael


Re: [osg-users] Picking Problem PolytopeIntersector

2009-04-28 Thread Raphael Sebbe
Hi,

I second Paul on that. Generally speaking, you need some kind of tolerance
(or margin) when picking for the best user experience: users don't want to
spend time trying to hit an object exactly, and that's exactly what the
polytope intersector is about. But if your application only has objects that
are large in pixel size, a line intersector may be just fine.
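
For what it's worth, a tolerant pick of that kind looks roughly like this
(a sketch: the pick() signature and the 5-pixel half-width are arbitrary
choices of mine):

#include <osgUtil/IntersectionVisitor>
#include <osgUtil/PolytopeIntersector>
#include <osgViewer/View>

// Pick with a small window-space box centred on the mouse position, so the
// user does not have to hit the geometry exactly.
void pick(osgViewer::View* view, float mouseX, float mouseY)
{
    const double w = 5.0;   // half-width of the pick box, in window coordinates
    osg::ref_ptr<osgUtil::PolytopeIntersector> picker =
        new osgUtil::PolytopeIntersector(osgUtil::Intersector::WINDOW,
                                         mouseX - w, mouseY - w,
                                         mouseX + w, mouseY + w);
    osgUtil::IntersectionVisitor iv(picker.get());
    view->getCamera()->accept(iv);

    if (picker->containsIntersections())
    {
        const osgUtil::PolytopeIntersector::Intersection& hit =
            *picker->getIntersections().begin();
        // hit.nodePath leads to the picked node, hit.drawable to its geometry.
    }
}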

Raphael

On Fri, Apr 24, 2009 at 8:04 PM, Jean-Sébastien Guay <
jean-sebastien.g...@cm-labs.com> wrote:

> Hi Paul,
>
>  When you do a mouse pick, you generally are trying to pick an object that
>> falls into a small screen-space box around the cursor. In a perspective
>> view, the box has smaller world-space extents at the near plane, and
>> larger
>> world-space extents at the far plane. In essence, it looks like a view
>> frustum. PolytopeIntersector is the only intersector that accurately
>> represents this pick volume.
>>
>
> Yes, but this is only a problem if the user clicks on the pixels at the
> edge of the object (or the limit case, if the object occupies only one pixel
> on screen). The chances that the user will click just at the edge of the
> object and the ray will miss are really small. Generally the object is big
> enough on screen (even if the object occupies 5x5 pixels I'd be surprised to
> see a user click on the edge, most of the time they'll click in the middle).
>
> Every developer is free to make the choices they want. I consider this case
> much too infrequent, and it has not been a problem for us. I have not had
> one user tell me "hey, I should have selected the object then but I didn't".
>
> J-S
> --
> __
> Jean-Sebastien Guay    jean-sebastien.g...@cm-labs.com
>                        http://www.cm-labs.com/
>                        http://whitestar02.webhop.org/
>


Re: [osg-users] SGI declares bankruptcy (UNCLASSIFIED)

2009-04-05 Thread Raphael Sebbe
For Pixar involvement, I just re-read it; it is in the Renderman Companion,
in the last part of Pat Hanrahan's foreword (May 1989).

"... *During this time, we also began a joint project with Silicon Graphics
to work on a three-dimensional graphics library usable for both interactive
graphics and high-quality rendering* ..."

(The time he's referring to there is when the RISpec was built.)
Not sure, though, whether that joint project was IrisGL or something else
entirely...

Thanks for sharing!

Raphael

On Sun, Apr 5, 2009 at 12:25 AM, Paul Martz  wrote:

>  Not sure about Pixar's involvement.
>
> The Genesis of OpenGL varies depending on who you ask. People inside SGI
> say that OpenGL was driven by customer demand. Their users wanted to see
> IrisGL deployed on non-SGI platforms. SGI tried this with a few hardware
> vendors, but the support was spotty, different vendors supported IrisGL in
> different ways, and the same app did not run the same way across platforms.
> So SGI came up with OpenGL and designed it in such a way that anyone could
> support it well.
>
> However, if you talk to anyone in the PHIGS/PEX camp during the late
> 80s/early 90s, they'll tell you that SGI was feeling threatened by the
> widespread adoption of PHIGS/PEX, and decided to turn IrisGL into an open
> standard ("OpenGL") in order to kill off PHIGS/PEX.
>
> PEX was an open standard adopted by just about anyone with X Windows on
> their box, but SGI had not been invited to join the PEX consortium. So there
> was some animosity there.
>
> Paul Martz
> *Skew Matrix Software LLC*
> http://www.skew-matrix.com
> +1 303 859 9466
>
>
>  --
> *From:* osg-users-boun...@lists.openscenegraph.org [mailto:
> osg-users-boun...@lists.openscenegraph.org] *On Behalf Of *Raphael Sebbe
> *Sent:* Saturday, April 04, 2009 3:28 AM
> *To:* OpenSceneGraph Users
> *Subject:* Re: [osg-users] SGI declares bankruptcy (UNCLASSIFIED)
>
> Jumping in...
> Well, I would definitely be interested in getting more history background
> about the original design of the OpenGL API. There was a collaboration
> between Pixar and SGI in the late 80s, probably around the same time that
> both IrisGL and the Renderman specification emerged. Pixar had been working
> on this for some time, yet, I am unsure of how the whole process of OpenGL
> (IrisGL) happened. Did Pixar propose the Renderman specification to SGI,
> which then proposed a real-time api inspired on that? Or did SGI develop
> this all internally and by chance came to some similar design?
>
> Raphael
>
>
>
> On Fri, Apr 3, 2009 at 10:42 PM, Paul Martz wrote:
>
>> As the British punk band The Stranglers used to say, "Everybody loves you
>> when you're dead."
>>
>> Sure, SGI did some things pretty well, but let us not forget they also did
>> some things pretty poorly, and in my opinion were really not much better
>> or
>> worse than many other hardware vendors.
>>
>> Things they did well: OpenGL 1.0 was a masterpiece of design that took the
>> graphics world by storm and utterly crushed several other competing APIs
>> of
>> the era. This is an API that is so well-accepted that it has outlived its
>> creator.
>>
>> Things they didn't do well: Too much focus on the high end. Bad business
>> decisions. No focus on open standards until absolutely forced to do so.
>> Not
>> able to keep pace with the industry (look where they are now).
>>
>> And marketing faux pas... When OpenGL 1.0 first came out, SGI marketing
>> constantly repeated the notion that immediate mode was the most important
>> thing in the world. (This was a direct slam against the PHIGS/PEX APIs,
>> which were focused on retained mode.) At the same time, SGI was promoting
>> their 1.1 million triangles/second hardware. However, all their demos had
>> a
>> performance HUD that clearly showed significantly less than 1.1m tris/sec.
>> When pressed on this, they eventually published a benchmark that
>> demonstrated 1.1m tris/sec, but it used display lists. Oops.
>>
>> Paul Martz
>> Skew Matrix Software LLC
>> http://www.skew-matrix.com
>> +1 303 859 9466
>>


Re: [osg-users] SGI declares bankruptcy (UNCLASSIFIED)

2009-04-04 Thread Raphael Sebbe
Jumping in...
Well, I would definitely be interested in getting more history background
about the original design of the OpenGL API. There was a collaboration
between Pixar and SGI in the late 80s, probably around the same time that
both IrisGL and the Renderman specification emerged. Pixar had been working
on this for some time, yet, I am unsure of how the whole process of OpenGL
(IrisGL) happened. Did Pixar propose the Renderman specification to SGI,
which then proposed a real-time API inspired by it? Or did SGI develop
this all internally and arrive at a similar design by chance?

Raphael



On Fri, Apr 3, 2009 at 10:42 PM, Paul Martz  wrote:

> As the British punk band The Stranglers used to say, "Everybody loves you
> when you're dead."
>
> Sure, SGI did some things pretty well, but let us not forget they also did
> some things pretty poorly, and in my opinion were really not much better or
> worse than many other hardware vendors.
>
> Things they did well: OpenGL 1.0 was a masterpiece of design that took the
> graphics world by storm and utterly crushed several other competing APIs of
> the era. This is an API that is so well-accepted that it has outlived its
> creator.
>
> Things they didn't do well: Too much focus on the high end. Bad business
> decisions. No focus on open standards until absolutely forced to do so. Not
> able to keep pace with the industry (look where they are now).
>
> And marketing faux pas... When OpenGL 1.0 first came out, SGI marketing
> constantly repeated the notion that immediate mode was the most important
> thing in the world. (This was a direct slam against the PHIGS/PEX APIs,
> which were focused on retained mode.) At the same time, SGI was promoting
> their 1.1 million triangles/second hardware. However, all their demos had a
> performance HUD that clearly showed significantly less than 1.1m tris/sec.
> When pressed on this, they eventually published a benchmark that
> demonstrated 1.1m tris/sec, but it used display lists. Oops.
>
> Paul Martz
> Skew Matrix Software LLC
> http://www.skew-matrix.com
> +1 303 859 9466
>


Re: [osg-users] MAC

2008-11-19 Thread Raphael Sebbe
I think it's right in the middle of moving from Xcode-based projects to the
CMake build system, which currently generates dylibs instead of frameworks
(I may not be up to date; I checked a couple of weeks ago, and it's a moving
target). Once you get a successful CMake build with the proper build flags,
you shouldn't have many surprises on the Mac.
Raphael

On Wed, Nov 19, 2008 at 1:46 AM, Ulrich Hertlein <[EMAIL PROTECTED]> wrote:

> Quoting Don Leich <[EMAIL PROTECTED]>:
> > My experience with Mac development has not been quite so smooth.  Getting
> > cmake to build was a battle and beyond the scope of this thread.  OSG itself
> > has some apparent rough edges with X11 integration. I am working with
>
> Odd, it was a complete breeze for me getting OSG/cmake/Makefiles (cmake 2.6
> from
> DarwinPorts) compiled on OS X.
>
> I don't know why you have to use X11 (Qt?) but that could be the cause of
> your
> problems.  You'd probably see similar issues running OSG/X11 on Windows.
>
> /ulrich


Re: [osg-users] multisampling in embedded window?

2008-11-18 Thread Raphael Sebbe
Some conclusions: the OpenGL context created in Interface Builder did not
respect the requested settings (multisampling and sample count). Providing a
pixel format manually fixed the issue. As you said, nothing special has to
be done on the OSG side.
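
In case it helps someone else, the manual pixel format looks roughly like this
(a sketch using the CGL C API; the attribute values are simply the ones I
picked, and error checking is omitted):

#include <OpenGL/OpenGL.h>
#include <OpenGL/gl.h>

// Build a multisampled context explicitly instead of relying on the pixel
// format configured in Interface Builder.
CGLContextObj createMultisampledContext()
{
    CGLPixelFormatAttribute attributes[] =
    {
        kCGLPFAAccelerated,
        kCGLPFADoubleBuffer,
        kCGLPFADepthSize,     (CGLPixelFormatAttribute)24,
        kCGLPFAMultisample,
        kCGLPFASampleBuffers, (CGLPixelFormatAttribute)1,
        kCGLPFASamples,       (CGLPixelFormatAttribute)4,
        (CGLPixelFormatAttribute)0
    };

    CGLPixelFormatObj pixelFormat = 0;
    GLint numPixelFormats = 0;
    CGLChoosePixelFormat(attributes, &pixelFormat, &numPixelFormats);

    CGLContextObj context = 0;
    CGLCreateContext(pixelFormat, 0, &context);
    CGLDestroyPixelFormat(pixelFormat);
    return context;
}

(With a Cocoa NSOpenGLView the equivalent is an NSOpenGLPixelFormat built with
NSOpenGLPFASampleBuffers and NSOpenGLPFASamples.)
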
Thanks,

Raphael

On Tue, Nov 18, 2008 at 11:22 AM, Raphael Sebbe <[EMAIL PROTECTED]> wrote:

> Hi Robert,
> Thank you, I was not aware of that. I'll experiment some more.
>
> Raphael
>
>
> On Tue, Nov 18, 2008 at 11:14 AM, Robert Osfield <[EMAIL PROTECTED]
> > wrote:
>
>> Hi Raphael,
>>
>> If you are embedding the OSG within an existing OpenGL graphics context
>> then no OSG settings will make any difference, you'll have to request
>> a multi-sample visual from whatever toolkit you are using to create
>> the graphic context.
>>
>> Robert.
>>
>> On Tue, Nov 18, 2008 at 10:01 AM, Raphael Sebbe <[EMAIL PROTECTED]>
>> wrote:
>> > Hi,
>> > I am trying to enable multisampling in an embedded viewer, with no luck
>> so
>> > far. It works great on stand alone Viewer though.
>> > Here is what I do:
>> >
>> > Create an OpenGL buffer with multisampling enabled. Then, this:
>> > osg::DisplaySettings * ds = osg::DisplaySettings::instance();
>> > ds->setNumMultiSamples(4);
>> > _osgViewData->window =
>> >
>> _osgViewData->viewer->setUpViewerAsEmbeddedInWindow(100,100,bounds.width,
>> > bounds.height);
>> >
>> > But this doesn't make it. I also tried this:
>> > ref_ptr< DisplaySettings > displaySettings = new DisplaySettings;
>> > displaySettings->setNumMultiSamples(4);
>> > _osgViewData->viewer->setDisplaySettings( displaySettings.get() );
>> > _osgViewData->window =
>> >
>> _osgViewData->viewer->setUpViewerAsEmbeddedInWindow(100,100,bounds.width,
>> > bounds.height);
>> > Same result.
>> > Do you have any recommendation on what to check to make multisampling
>> > anti-aliasing work?
>> > Thank you,
>> > Raphael


Re: [osg-users] multisampling in embedded window?

2008-11-18 Thread Raphael Sebbe
Hi Robert,
Thank you, I was not aware of that. I'll experiment some more.

Raphael

On Tue, Nov 18, 2008 at 11:14 AM, Robert Osfield
<[EMAIL PROTECTED]> wrote:

> Hi Raphael,
>
> If you are embedding the OSG within an existing OpenGL graphics context
> then no OSG settings will make any difference, you'll have to request
> a multi-sample visual from whatever toolkit you are using to create
> the graphic context.
>
> Robert.
>
> On Tue, Nov 18, 2008 at 10:01 AM, Raphael Sebbe <[EMAIL PROTECTED]>
> wrote:
> > Hi,
> > I am trying to enable multisampling in an embedded viewer, with no luck
> so
> > far. It works great on stand alone Viewer though.
> > Here is what I do:
> >
> > Create an OpenGL buffer with multisampling enabled. Then, this:
> > osg::DisplaySettings * ds = osg::DisplaySettings::instance();
> > ds->setNumMultiSamples(4);
> > _osgViewData->window =
> > _osgViewData->viewer->setUpViewerAsEmbeddedInWindow(100,100,bounds.width,
> > bounds.height);
> >
> > But this doesn't make it. I also tried this:
> > ref_ptr< DisplaySettings > displaySettings = new DisplaySettings;
> > displaySettings->setNumMultiSamples(4);
> > _osgViewData->viewer->setDisplaySettings( displaySettings.get() );
> > _osgViewData->window =
> > _osgViewData->viewer->setUpViewerAsEmbeddedInWindow(100,100,bounds.width,
> > bounds.height);
> > Same result.
> > Do you have any recommendation on what to check to make multisampling
> > anti-aliasing work?
> > Thank you,
> > Raphael


[osg-users] multisampling in embedded window?

2008-11-18 Thread Raphael Sebbe
Hi,
I am trying to enable multisampling in an embedded viewer, with no luck so
far. It works great with a standalone Viewer, though.

Here is what I do:

Create an OpenGL buffer with multisampling enabled. Then, this:
osg::DisplaySettings* ds = osg::DisplaySettings::instance();
ds->setNumMultiSamples(4);
_osgViewData->window =
    _osgViewData->viewer->setUpViewerAsEmbeddedInWindow(100, 100, bounds.width, bounds.height);

But this doesn't make it. I also tried this:
ref_ptr<DisplaySettings> displaySettings = new DisplaySettings;
displaySettings->setNumMultiSamples(4);
_osgViewData->viewer->setDisplaySettings(displaySettings.get());
_osgViewData->window =
    _osgViewData->viewer->setUpViewerAsEmbeddedInWindow(100, 100, bounds.width, bounds.height);

Same result.

Do you have any recommendation on what to check to make multisampling
anti-aliasing work?

Thank you,

Raphael


Re: [osg-users] deadlock in independent contexts

2008-07-31 Thread Raphael Sebbe
Hi,

I have some additional information.
I moved to OSG 2.6 rc1 and built the .dylib with CMake. I still get the same
deadlock situation, with the same thread backtraces (I had to create 7
windows; the logging makes it more difficult to get into that situation).

I uncommented this line in getCompileContext:
osg::notify(osg::NOTICE)<<"GraphicsContext::getCompileContext "<<contextID<<std::endl;

> Hi,
> I have come across a problem when performing multi-threaded rendering.
>
> The application setup is as follows: 4 windows with 4 independent OpenGL
> contexts, with 4 associated GraphicsWindowEmbedded. There are 4 threads,
> invoking Viewer->frame().
>
> After some time, 3 out of 4 threads are blocked, using gdb, I get the
> deadlock locations:
>
> 2 threads are in this:
>
> #0 0x90c5a4ee in semaphore_wait_signal_trap
> #1 0x90c61fc5 in pthread_mutex_lock
> #2 0x00299ffc in std::_Rb_tree ContextData>, std::_Select1st >,
> std::less, std::allocator ContextData> > >::insert_unique
> #3 0x00295b18 in osg::GraphicsContext::getCompileContext
> #4 0x0051ed05 in osgViewer::Renderer::cull_draw
> #5 0x00297350 in osg::GraphicsContext::runOperations
> #6 0x00521d20 in osgViewer::ViewerBase::renderingTraversals
> #7 0x3b91 in -[OSGView drawRect:] at OSGView.mm:367
> #8 0x3d7d in -[OSGView displayFrame:] at OSGView.mm:400
> #9 0x3007 in MEDisplayLinkCallback at OSGView.mm:102
> #10 0x955a8013 in CVDisplayLink::performIO
> #11 0x955a863f in CVDisplayLink::runIOThread
> #12 0x90c8b6f5 in _pthread_start
> #13 0x90c8b5b2 in thread_start
>
>
> Another one in this:
>
> #0 0x90c5a4ee in semaphore_wait_signal_trap
> #1 0x90c61fc5 in pthread_mutex_lock
> #2 0x00299ffc in std::_Rb_tree ContextData>, std::_Select1st >,
> std::less, std::allocator ContextData> > >::insert_unique
> #3 0x00295b18 in osg::GraphicsContext::getCompileContext
> #4 0x00060060 in osgDB::DatabasePager::requiresExternalCompileGLObjects
> #5 0x0051ef88 in osgViewer::Renderer::cull_draw
> #6 0x00297350 in osg::GraphicsContext::runOperations
> #7 0x00521d20 in osgViewer::ViewerBase::renderingTraversals
> #8 0x3b91 in -[OSGView drawRect:] at OSGView.mm:367
> #9 0x3d7d in -[OSGView displayFrame:] at OSGView.mm:400
> #10 0x3007 in MEDisplayLinkCallback at OSGView.mm:102
> #11 0x955a8013 in CVDisplayLink::performIO
> #12 0x955a863f in CVDisplayLink::runIOThread
> #13 0x90c8b6f5 in _pthread_start
> #14 0x90c8b5b2 in thread_start
>
> Seems to be in there:
> GraphicsContext* GraphicsContext::getCompileContext(unsigned int contextID)
> {
> // osg::notify(osg::NOTICE)<<"GraphicsContext::getCompileContext "<<contextID<<std::endl;
> OpenThreads::ScopedLock<OpenThreads::Mutex> lock(s_contextIDMapMutex);
> ContextIDMap::iterator itr = s_contextIDMap.find(contextID);
> if (itr != s_contextIDMap.end()) return
> itr->second._compileContext.get();
> else return 0;
> }
>
>
> I did not configure any thread specific flags in OSG (perhaps I should have
> done that?), I am not familiar with how OSG handles threading.
> I also cannot instrument OSG to debug this as I am using precompiled
> binaries (OSX frameworks), OSG 2.2.0-10.5SDK.
>
> I can workaround this by making all rendering mutually exclusive, but this
> is obviously not what I want.
>
> Any advice would be greatly appreciated. Thanks,
>
> Raphael
>
>
>


[osg-users] deadlock in independent contexts

2008-07-29 Thread Raphael Sebbe
Hi,
I have come across a problem when performing multi-threaded rendering.

The application setup is as follows: 4 windows with 4 independent OpenGL
contexts, with 4 associated GraphicsWindowEmbedded. There are 4 threads,
invoking Viewer->frame().

After some time, 3 out of 4 threads are blocked; using gdb, I get the
deadlock locations:

2 threads are in this:

#0 0x90c5a4ee in semaphore_wait_signal_trap
#1 0x90c61fc5 in pthread_mutex_lock
#2 0x00299ffc in std::_Rb_tree, std::_Select1st >,
std::less, std::allocator > >::insert_unique
#3 0x00295b18 in osg::GraphicsContext::getCompileContext
#4 0x0051ed05 in osgViewer::Renderer::cull_draw
#5 0x00297350 in osg::GraphicsContext::runOperations
#6 0x00521d20 in osgViewer::ViewerBase::renderingTraversals
#7 0x3b91 in -[OSGView drawRect:] at OSGView.mm:367
#8 0x3d7d in -[OSGView displayFrame:] at OSGView.mm:400
#9 0x3007 in MEDisplayLinkCallback at OSGView.mm:102
#10 0x955a8013 in CVDisplayLink::performIO
#11 0x955a863f in CVDisplayLink::runIOThread
#12 0x90c8b6f5 in _pthread_start
#13 0x90c8b5b2 in thread_start


Another one in this:

#0 0x90c5a4ee in semaphore_wait_signal_trap
#1 0x90c61fc5 in pthread_mutex_lock
#2 0x00299ffc in std::_Rb_tree, std::_Select1st >,
std::less, std::allocator > >::insert_unique
#3 0x00295b18 in osg::GraphicsContext::getCompileContext
#4 0x00060060 in osgDB::DatabasePager::requiresExternalCompileGLObjects
#5 0x0051ef88 in osgViewer::Renderer::cull_draw
#6 0x00297350 in osg::GraphicsContext::runOperations
#7 0x00521d20 in osgViewer::ViewerBase::renderingTraversals
#8 0x3b91 in -[OSGView drawRect:] at OSGView.mm:367
#9 0x3d7d in -[OSGView displayFrame:] at OSGView.mm:400
#10 0x3007 in MEDisplayLinkCallback at OSGView.mm:102
#11 0x955a8013 in CVDisplayLink::performIO
#12 0x955a863f in CVDisplayLink::runIOThread
#13 0x90c8b6f5 in _pthread_start
#14 0x90c8b5b2 in thread_start

Seems to be in there:
GraphicsContext* GraphicsContext::getCompileContext(unsigned int contextID)
{
    // osg::notify(osg::NOTICE)<<"GraphicsContext::getCompileContext "<<contextID<<std::endl;
    OpenThreads::ScopedLock<OpenThreads::Mutex> lock(s_contextIDMapMutex);
    ContextIDMap::iterator itr = s_contextIDMap.find(contextID);
    if (itr != s_contextIDMap.end()) return itr->second._compileContext.get();
    else return 0;
}


I did not configure any thread-specific flags in OSG (perhaps I should have
done that?); I am not familiar with how OSG handles threading.
I also cannot instrument OSG to debug this as I am using precompiled
binaries (OSX frameworks), OSG 2.2.0-10.5SDK.
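
For completeness, each viewer is set up roughly like this; I have not touched
the threading model at all, so the last line is only a guess at what I perhaps
should be doing (SingleThreaded per viewer, since my own threads already drive
frame()), and the helper name is hypothetical:

#include <osg/ref_ptr>
#include <osgViewer/Viewer>

// Hypothetical per-window setup; each viewer is later driven from its own
// display-link thread by the host application.
osg::ref_ptr<osgViewer::Viewer> createEmbeddedViewer(int width, int height)
{
    osg::ref_ptr<osgViewer::Viewer> viewer = new osgViewer::Viewer;
    viewer->setUpViewerAsEmbeddedInWindow(0, 0, width, height);

    // Guess: since my own threads call frame() directly, each viewer should
    // probably stay single-threaded rather than spawn rendering threads.
    viewer->setThreadingModel(osgViewer::Viewer::SingleThreaded);
    return viewer;
}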

I can workaround this by making all rendering mutually exclusive, but this
is obviously not what I want.

Any advice would be greatly appreciated. Thanks,

Raphael


Re: [osg-users] Mac OpenGL integration / CGLMacro.h

2008-03-21 Thread Raphael Sebbe
Hi Mike,
OK, I understand your point. You're right about the ARB, I guess the focus
for them is on the next opengl version (3.0), and who knows, it may well
expose the context explicitly if that makes sense for other platforms as
well.

Best,

Raphael

On Fri, Mar 21, 2008 at 6:21 AM, Mike Weiblen <[EMAIL PROTECTED]>
wrote:

> Hi Raphael,
>
> I strongly agree w/ Robert on declining to incorporate that Apple
> extension into OSG.
>
> The problem with this "feature" is, while it seems to "enhance" the
> OpenGL API, it _totally_breaks_ the OpenGL ABI.
>
> There are many OpenGL shim libraries (such as gDEBugger, glIntercept,
> BuGLe and Chromium, to name just a few) that depend on mimicking the
> existing OpenGL dynamic library ABI.  That entire class of tools is
> rendered unusable by the magic insertion of an additional parameter.
>
> For that reason, I believe this Apple extension should be actively
> *avoided* by any GL developers having any thoughts of portability.
> Except for vendor lock-in, IMHO Apple would be insane to "require" it of
> applications in the future.
>
> And I agree w/ Richard that this seems to be a "feature" of little
> practical merit.  In quality OpenGL applications, MakeCurrent() just
> isn't a bottleneck in the pipeline that needs fixing.
>
> Not to speak for Robert, but I suggest a simple test for considering any
> OpenGL extension for OSG: would the ARB officially sanction the
> extension?  I believe the chances in this case are 0.
>
> cheers
> -- mew
>
>
>
>
> > -Original Message-
> > From: [EMAIL PROTECTED] [mailto:osg-users-
> > [EMAIL PROTECTED] On Behalf Of
> > [EMAIL PROTECTED]
> > Sent: Thursday, March 20, 2008 10:41 AM
> > To: OpenSceneGraph Users
> > Subject: Re: [osg-users] Mac OpenGL integration / CGLMacro.h
> >
> > While Robert makes a good and valid point about corrupting the API
> (and
> > reality distortion fields in general), I don't think the original
> > poster
> > adequately explained this "feature" of OpenGL on Apple platforms.
> >
> > All it amounts to is a #ifdef __APPLE__ and an extra include file in
> > all the
> > OSG headers (just like including gl.h) plus a single function call
> > tucked
> > away in the OpenGL initialization code. It doesn't change the OpenGL
> > API as
> > far as the source code or programmer is concerned. The original poster
> > sort
> > of made it sound like each OpenGL call now has an extra parameter
> > passed
> > in... which it does, but this is "hidden" by the macro header. No
> > changes to
> > OpenGL code required (unless you consider an extra header file an
> undue
> > burden and pollution of the API ;-). It does speed up each OpenGL call
> > on
> > the CPU side, and can help (God forbid) immediate mode code
> > considerably.
> >
> > It would be fairly simple to add to OSG, albeit tedious, and Robert
> > (who
> > obviously is not a Mac user ;-) would have to test it, or have someone
> > close
> > to him test it (hundreds of files would be affected).
> >
> > Even though I am fully immersed in the Apple reality distortion field,
> > I
> > would have to express doubt however that it is worth the change. In
> the
> > age
> > of batched geometry submission (as opposed to immediate mode), and the
> > increasing reliance on shaders rather than the OpenGL state machine
> the
> > value of this feature over time gets increasingly smaller. I have
> made
> > this
> > retro-fit many times to my own projects without any problems... what I
> > have
> > failed to see however is any significant performance benefit to well
> > written
> > OpenGL code in the first place.
> >
> > Richard
> >
> >
> >
> > Robert Osfield writes:
> >
> > > On Thu, Mar 20, 2008 at 1:35 PM, Raphael Sebbe
> > <[EMAIL PROTECTED]> wrote:
> > >> thanks for answering. I understand your point regarding cross
> > platform
> > >> complexity. However, I am pretty convinced that passing the context
> > to
> > >> drawing functions makes sense these days, especially considering
> the
> > many
> > >> contexts and threads running in parallel, and I don't get this as a
> > vendor
> > >> lock-in strategy, although this can be a side-effect of course.
> > >
> > > Ahhh the Steve Job reality distortion field...
> > >
> > > The vendor lock-in comes from getting developers to start off on
> > Appl

Re: [osg-users] Mac OpenGL integration / CGLMacro.h

2008-03-20 Thread Raphael Sebbe
On Thu, Mar 20, 2008 at 4:40 PM, <[EMAIL PROTECTED]> wrote:

> While Robert makes a good and valid point about corrupting the API (and
> reality distortion fields in general), I don't think the original poster
> adequately explained this "feature" of OpenGL on Apple platforms.
>
> All it amounts to is a #ifdef __APPLE__ and an extra include file in all
> the
> OSG headers (just like including gl.h) plus a single function call tucked
> away in the OpenGL initialization code. It doesn't change the OpenGL API
> as
> far as the source code or programmer is concerned. The original poster
> sort
> of made it sound like each OpenGL call now has an extra parameter passed
> in... which it does, but this is "hidden" by the macro header. No changes
> to
> OpenGL code required (unless you consider an extra header file an undue
> burden and pollution of the API ;-). It does speed up each OpenGL call on
> the CPU side, and can help (God forbid) immediate mode code considerably.
>


Well... you're right, the gl* calls themselves wouldn't change. But what would
change is how you define the context they need: either by adding a
member variable to the classes performing rendering, or through an additional
parameter in the methods doing the rendering. This is quite a change, I
believe...

Alternatively, you could simply define that context by getting the current
context locally, as expressed just below, but then there would be no point in
changing the code, as this would be equivalent to the usual gl functions:
CGLContextObj cgl_ctx = CGLGetCurrentContext();
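
To illustrate the member-variable variant, a minimal sketch (the class and its
names are hypothetical, just to show where cgl_ctx would come from):

#include <OpenGL/OpenGL.h>
#include <OpenGL/CGLMacro.h>   // the gl* calls below expand to take cgl_ctx implicitly

// The context is stored once; naming the local variable cgl_ctx is what lets
// the CGLMacro-expanded gl* calls pick it up.
class LineDrawer
{
public:
    explicit LineDrawer(CGLContextObj context) : _context(context) {}

    void draw() const
    {
        CGLContextObj cgl_ctx = _context;   // name required by CGLMacro.h
        glBegin(GL_LINES);
        glVertex2f(0.0f, 0.0f);
        glVertex2f(1.0f, 1.0f);
        glEnd();
    }

private:
    CGLContextObj _context;
};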


Raphael




>
> It would be fairly simple to add to OSG, albeit tedious, and Robert (who
> obviously is not a Mac user ;-) would have to test it, or have someone
> close
> to him test it (hundreds of files would be affected).
>
> Even though I am fully immersed in the Apple reality distortion field, I
> would have to express doubt however that it is worth the change. In the
> age
> of batched geometry submission (as opposed to immediate mode), and the
> increasing reliance on shaders rather than the OpenGL state machine the
> value of this feature over time gets increasingly smaller. I have made
> this
> retro-fit many times to my own projects without any problems... what I have
> failed to see however is any significant performance benefit to well
> written
> OpenGL code in the first place.
>
> Richard
>
>
>
> Robert Osfield writes:
>
> > On Thu, Mar 20, 2008 at 1:35 PM, Raphael Sebbe <[EMAIL PROTECTED]>
> wrote:
> >> thanks for answering. I understand your point regarding cross platform
> >> complexity. However, I am pretty convinced that passing the context to
> >> drawing functions makes sense these days, especially considering the
> many
> >> contexts and threads running in parallel, and I don't get this as a
> vendor
> >> lock-in strategy, although this can be a side-effect of course.
> >
> > Ahhh the Steve Job reality distortion field...
> >
> > The vendor lock-in comes from getting developers to start off on Apple
> > then get them hooked on non portable API's, then come the day when you
> > want to port to another platform your whole architecture is tied up
> > around these non portable API's.   Porting to other platforms becomes
> > a prohibitive task.
> >
> > Apple and Microsoft do it all the time.  Think how non standard all
> > MS's and Apple's API's are.  They even make porting back to previous
> > versions of their OS's hard.  MS did 3D graphics with Direct3D
> > rather than the industry-standard OpenGL purely for lock-in, and hey,
> > now Apple are trying to use OSX-specific extensions to OpenGL that do
> > the same thing.  MS and Apple offer sweeteners, make it easy to be
> > seduced by their API's with slick and "plausible" marketing, and
> > before you know you're hooked and tied in.
> >
> > The OSG is so portable because I and others resist such vendor lock-in;
> > any platform specifics are kept well encapsulated, enabling the end
> > users to easily move between platforms.  Portability is sweet for a
> > project like the OSG not only because it gives freedom to end users,
> > but also because it means its diverse development community can help each other.
> > A Windows user writes an OpenGL extension that is portable and submits it; I
> > compile it under Linux, then others under OSX, then Solaris, etc., and the
> > same works the other way.  All users get to benefit from each other.
> >
> > Extensions like Apple's don't benefit anyone except Apple users tied
> > to the Apple platform.  Integrating 

Re: [osg-users] Mac OpenGL integration / CGLMacro.h

2008-03-20 Thread Raphael Sebbe
Well, this is way off topic I guess, but interesting anyway ;-)
I think Apple works pretty well with standards and open source generally,
and OpenGL is no exception. I even think that Apple supporting OpenGL is a
good thing for both the OpenGL and Apple dev communities.

Answering inline:

On Thu, Mar 20, 2008 at 2:52 PM, Robert Osfield <[EMAIL PROTECTED]>
wrote:

> On Thu, Mar 20, 2008 at 1:35 PM, Raphael Sebbe <[EMAIL PROTECTED]>
> wrote:
> > thanks for answering. I understand your point regarding cross platform
> > complexity. However, I am pretty convinced that passing the context to
> > drawing functions makes sense these days, especially considering the
> many
> > contexts and threads running in parallel, and I don't get this as a
> vendor
> > lock-in strategy, although this can be a side-effect of course.
>
> Ahhh the Steve Job reality distortion field...
>
> The vendor lock-in comes from getting developers to start off on Apple
> then get them hooked on non portable API's, then come the day when you
> want to port to another platform your whole architecture is tied up
> around these non portable API's.   Porting to other platforms becomes
> a prohibitive task.


> Apple and Microsoft do it all the time.  Think how non standard all
> MS's and Apple's API's are.  They even make porting back to previous
> versions of their OS's hard.  MS did 3D graphics with Direct3D
> rather than the industry-standard OpenGL purely for lock-in, and hey,
> now Apple are trying to use OSX-specific extensions to OpenGL that do
> the same thing.  MS and Apple offer sweeteners, make it easy to be
> seduced by their API's with slick and "plausible" marketing, and
> before you know you're hooked and tied in.
>

Do you really think Apple introduces new extensions to lock in developers?
My understanding is they do it when they need it, although I agree they
don't care whether other vendors follow them or not...


>
> The OSG is so portable because I and others resist such vendor lock-in;
> any platform specifics are kept well encapsulated, enabling the end
> users to easily move between platforms.  Portability is sweet for a
> project like the OSG not only because it gives freedom to end users,
> but also because it means its diverse development community can help each other.
> A Windows user writes an OpenGL extension that is portable and submits it; I
> compile it under Linux, then others under OSX, then Solaris, etc., and the
> same works the other way.  All users get to benefit from each other.
>

Yes, and this is great. What I suggested was not to break compatibility but
to improve it on one platform.


>
> Extensions like Apple's don't benefit anyone except Apple users tied
> to the Apple platform.  Integrating it into the OSG or other
> applications/API's harms *all* other platforms because it takes
> developer resources away from things that benefit all platforms, and it
> adds complexity and hence maintenance burden, which further weighs down
> the whole product/project/community.  To be successful as a software
> engineer you have to be vigilant about these issues; if you aren't, you
> can get sucked in by the slick marketing and end up forgetting that
> you're developing software for the benefit of all your users, not for a
> particular vendor.
>


Supporting multiple paradigms definitely takes up some dev resources, and
I guess it is your role to decide whether to accept that or not.

For the second part, I have to disagree. It depends on your
project/business, and who your users are.

Raphael



>
> Robert.


Re: [osg-users] Mac OpenGL integration / CGLMacro.h

2008-03-20 Thread Raphael Sebbe
Hi Robert,
thanks for answering. I understand your point regarding cross-platform
complexity. However, I am pretty convinced that passing the context to
drawing functions makes sense these days, especially considering the many
contexts and threads running in parallel, and I don't see this as a vendor
lock-in strategy, although it can be a side effect, of course.

Raphael

On Thu, Mar 20, 2008 at 1:32 PM, Robert Osfield <[EMAIL PROTECTED]>
wrote:

> Hi Raphael,
>
> The chances of the OSG using this tricked-out, non-standard vendor
> lock-in mechanism are 0.
>
> If such an extension was available across platforms it might make some
> sense, but given the massive amount of changes it'd require, for a
> minor platform like OSX, and the likely small performance delta they
> might give anyway, I think it would be a huge waste of resources and
> added code complexity, associated bugs and testing.  It would in the
> end be an utter nightmare to maintain for little gain for a small
> number of users.
>
> Far more interesting are items like OpenGL ES and OpenGL 3.x; this is
> the future of OpenGL, not a silly little detour Apple hopes to hook
> you into to make your apps even more tied to a single platform.
>
> Robert.
>
> On Thu, Mar 20, 2008 at 11:22 AM, Raphael Sebbe <[EMAIL PROTECTED]>
> wrote:
> > Hi all,
> >
> > On Mac OS X, there is a possibility of using a special mechanism for
> OpenGL
> > calls, that is, you include CGLMacros.h in your source, and all gl***
> calls
> > take an implicit additional parameter, the context. This makes the use
> of a
> > "current active context" unnecessary, gives better performance and it
> works
> > better with threads. Developers are encouraged to use that mechanism.
> >
> > Some apps (thinking of Quartz Composer) strongly suggest that this is
> used
> > for their plugins, drawing to the given context, and discourage changing
> the
> > current OpenGL context. I guess we're not far from a "suggest" becoming
> a
> > "require" in the future.
> >
> >
> > The way it works is that you have to define a local variable with the
> name
> > cgl_ctx, and that variable is passed as first argument to the special gl
> > calls:
> >
> > CGLContextObj cgl_ctx = aContext;
> > glBegin(GL_LINES);   // which actually is like _blaBegin(cgl_ctx,
> GL_LINES),
> > because of CGLMacro.h inclusion
> > // this won't compile if cgl_ctx is not defined
> >
> >
> >
> >
> > So my question is: what is your thought about integrating this mechanism
> in
> > OSG?
> >
> > Thanks,
> >
> > Raphael
> >
> >
> >


[osg-users] Mac OpenGL integration / CGLMacro.h

2008-03-20 Thread Raphael Sebbe
Hi all,
On Mac OS X, there is a possibility of using a special mechanism for OpenGL
calls: you include CGLMacro.h in your source, and all gl*** calls then
take an implicit additional parameter, the context. This makes the use of a
"current active context" unnecessary, gives better performance, and works
better with threads. Developers are encouraged to use that mechanism.

Some apps (thinking of Quartz Composer) strongly suggest that this is used
for their plugins, drawing to the given context, and discourage changing the
current OpenGL context. I guess we're not far from a "suggest" becoming a
"require" in the future.

The way it works is that you have to define a local variable with the name
cgl_ctx, and that variable is passed as first argument to the special gl
calls:

CGLContextObj cgl_ctx = aContext;
glBegin(GL_LINES);   // actually like _blaBegin(cgl_ctx, GL_LINES), because of the CGLMacro.h inclusion
// this won't compile if cgl_ctx is not defined



So my question is: what are your thoughts about integrating this mechanism
into OSG?

Thanks,

Raphael


[osg-users] Syntax coloring of OSG headers in Xcode

2008-03-13 Thread Raphael Sebbe
Hi,
does anyone know a way to make OSG headers syntax-colored in Xcode? I can
change the file type to sourcecode.cpp for individual files in a given
project and make them colored, but I am interested in an automatic way. I
saw there was some support for other IDEs on the OSG website.

Thanks,

Raphael


Re: [osg-users] New to OSG, some questions (selection buffer, parametric curves)

2008-02-11 Thread Raphael Sebbe
I didn't know a software rasterizer was used... hmm, but thinking about it, it
makes sense, as retrieving info from the GPU has not always been easy, yet the
selection buffer has existed for a very long time. Anyway, as you said,
higher-level knowledge of the scene is definitely an advantage.
I'll try to reproduce the same configuration as I had before with the selection
buffer (selecting a square of given width around the click in view space; I
think I saw a pyramid-like intersection primitive for doing this in OSG).

Thanks for that info,

Raphael

On Feb 12, 2008 2:52 AM, Paul Martz <[EMAIL PROTECTED]> wrote:

>  - OpenGL selection buffer seems not recommended as a way of picking in
> OSG, explicit primitive intersections are used instead. Could someone please
> comment on this (why...)?
>
> OpenGL's selection render mode is almost universally implemented via
> software rendering, and is therefore painfully slow for any moderate
> geometry load. It also tends to be O(n) -- your app just renders everything,
> even stuff nowhere near the pick point. You can sidestep these issues by
> reducing or simplifying the geometry being tested.
>
> Selection is the only (sort of direct) support OpenGL has for picking, so
> if you're using OpenGL and don't want to code a better method, then you're
> stuck with selection.
>
> Fortunately OSG has a superior alternative. One thing scene graphs are
> good at is spatial organization. As a result, pick testing in a scene graph
> is O(log(n)). Although it also runs in software, it performs simple
> ray/sphere intersection tests, which are much more efficient than software
> rasterization.
>
>  Paul Martz
> *Skew Matrix Software LLC*
> http://www.skew-matrix.com
> 303 859 9466
>


Re: [osg-users] New to OSG, some questions (selection buffer, parametric curves)

2008-02-11 Thread Raphael Sebbe
Hi J-S,

Yes, geometry shaders would be a must for these (and also for subdivision
surfaces). I'd be interested in contributing to these (time permitting, you
know that...), although I first have to catch up with OSG.
Thank you!

Raphael

On Feb 11, 2008 10:29 PM, Jean-Sébastien Guay <
[EMAIL PROTECTED]> wrote:

> Hello Raphael,
>
> Welcome! Hope you enjoy your time working with OSG, and the community
> around it!
>
> > - OpenGL selection buffer seems not recommended as a way of picking in
> > OSG, explicit primitive intersections are used instead. Could someone
> > please comment on this (why...)?
>
> I'll let others answer this one, I don't know myself. I have always
> found OpenGL selection buffers cumbersome to use, but with a good
> wrapper it should be possible to make something that works well and is
> easy to use I imagine. So I don't know why that was not done.
>
> > - Support for parametric curves / surfaces (Bezier, NURBs and the
> > likes). Are there any plan to support this directly (through GLU or
> > other), or should I export those as polygonal data instead?
>
> I would like to eventually help in adding support for parametric curves
> and surfaces in OSG. I have used the GLU interface before and would have
> used that as the first step, followed by an implementation using
> Geometry Shaders, with the ability to select the implementation you want
> to use at runtime and a fallback mechanism. I think it would be really
> cool to support that. But I haven't gotten to it thus far... :-(
>
> If you want to start doing something in this direction, I will be glad
> to test and comment. And search the archives, there was a discussion
> related to this not too long ago, so there was obviously interest in this.
>
> Hope that helps, and again welcome.
>
> J-S
> --
> __
> Jean-Sebastien Guay    [EMAIL PROTECTED]
>                        http://www.cm-labs.com/
>                        http://whitestar02.webhop.org/


[osg-users] New to OSG, some questions (selection buffer, parametric curves)

2008-02-11 Thread Raphael Sebbe
Hi everyone,
I am new to OSG, yet intend to use it for some projects. I've been through
implementing some scene graph things before, and I find OSG very interesting; I
appreciate the tight OpenGL integration (no cumbersome abstractions) and the
clean class design. So far I have a few questions regarding some aspects of OSG:

- The OpenGL selection buffer seems not to be recommended as a way of picking
in OSG; explicit primitive intersections are used instead. Could someone please
comment on this (why...)?
- Support for parametric curves / surfaces (Bezier, NURBS and the like):
are there any plans to support this directly (through GLU or otherwise), or
should I export those as polygonal data instead?

Thank you very much,

Raphael