Hi André,

On 1/18/07, André Garneau <[EMAIL PROTECTED]> wrote:
I've played with this issue some more and here's the exact behavior:

1. For some windows (not all), moving them from one screen to another will
lose texturing (they will be shaded but not textured). Other windows will
work just fine.

2. When a window is losing texturing, the effect is visible as soon as it
overlaps into the other screen (I mentioned previously that it happened only
when the window was fully inside the target screen - this was incorrect, or
at least I did not see that behavior again today).

3. Using the osgwindows cow.osg application as an example, I can move one
window to another screen without losing texturing. Moving the other window
will always lose texturing. There does not seem to be any order in which
this happens (i.e. it's not always the first window moved that works).

What happens when you move across to a screen on a different graphics card?

The fact that it works at all is slightly encouraging; perhaps it is
just an issue of the graphics context's current OpenGL state getting
corrupted, and an osg::State::reset might get the OSG to reapply things
- perhaps this could be called in a custom
GraphicsWindowWin32::resizedImplementation().

Note that under OSX Stephan has had to add an update call to make sure
that the graphics context hangs together on a resize.  Dragging a
window will also produce a resizedImplementation call, so it could
perhaps be reused for the same tweaks.
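To illustrate the idea, here is a minimal sketch of such an override. Note the types below are stand-in stubs, not the real osg/osgViewer classes; only the reset-on-resize pattern from the discussion is shown, and ResettingGraphicsWindow is a hypothetical name.

```cpp
#include <cassert>

// Stand-in stub for osg::State; the real osg::State::reset() dirties the
// tracked OpenGL state so it gets reapplied on the next frame.
struct State
{
    bool resetCalled = false;
    void reset() { resetCalled = true; }
};

// Stand-in stub for the Win32 graphics window implementation.
struct GraphicsWindowWin32
{
    State _state;
    State* getState() { return &_state; }
    virtual void resizedImplementation(int /*x*/, int /*y*/,
                                       int /*width*/, int /*height*/) {}
    virtual ~GraphicsWindowWin32() {}
};

// Custom window that resets the state tracking whenever the window is
// resized or dragged, forcing textures etc. to be reapplied.
struct ResettingGraphicsWindow : GraphicsWindowWin32
{
    void resizedImplementation(int x, int y, int width, int height) override
    {
        GraphicsWindowWin32::resizedImplementation(x, y, width, height);
        getState()->reset();  // ask OSG to reapply the OpenGL state
    }
};
```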

The thread blocking is not caused by the lack of a message but by the fact
that one message dispatch does not return until the mouse button has been
released (i.e. when the window repositioning has been completed). There is
already a check (PeekMessage) to exit the loop whenever the message queue
is empty. The only real practical solution to this is to go with a separate
thread. That thread will still block on the call, but at least the
rendering loop will keep running meanwhile.

If it's what you have to do, then you have to do it...  The only thing
I'd add is the importance of being able to cleanly kill the thread on
exit - we need to keep in mind that people will be opening and closing
graphics windows and viewers, and even unloading the OSG libraries
completely and then reloading them later.
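A generic sketch of a message-pump thread with a clean shutdown path might look like the following. The Win32-specific GetMessage/PeekMessage dispatch is replaced by a condition variable and an int "message" so the example is self-contained; the dispatched counter stands in for queuing into the osgGA event queue, and MessagePump is a hypothetical class name.

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

class MessagePump
{
public:
    explicit MessagePump(std::atomic<int>& dispatched)
        : _dispatched(dispatched), _done(false),
          _thread(&MessagePump::run, this) {}

    // Clean kill on exit: flag the thread, wake it, and join before the
    // object (or the library it lives in) goes away.
    ~MessagePump()
    {
        {
            std::lock_guard<std::mutex> lock(_mutex);
            _done = true;
        }
        _condition.notify_one();
        _thread.join();
    }

    void post(int message)
    {
        {
            std::lock_guard<std::mutex> lock(_mutex);
            _messages.push(message);
        }
        _condition.notify_one();
    }

private:
    void run()
    {
        std::unique_lock<std::mutex> lock(_mutex);
        for (;;)
        {
            _condition.wait(lock,
                [this] { return _done || !_messages.empty(); });
            while (!_messages.empty())
            {
                _messages.pop();
                ++_dispatched;   // stand-in for dispatching the event
            }
            if (_done) return;   // queue drained before exiting
        }
    }

    std::atomic<int>& _dispatched;
    std::mutex _mutex;
    std::condition_variable _condition;
    std::queue<int> _messages;
    bool _done;
    std::thread _thread;
};
```

The key design point is that the destructor always drains any remaining messages and joins the worker, so destroying the pump (e.g. on window close or library unload) never leaves a detached thread behind.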

>> I don't know if this helps, but I have been considering adding an
>> osg::GraphicsContext* and osg::View* pointer (possibly an
>> observer_ptr<>) to osgGA::GUIEventAdapter to allow apps and windows to
>> specify where the events came from.  This might help if you are piling
>> all the events into one EventQueue and wish to later filter it so that
>> events end up in the appropriate window's event queue during
>> checkEvents.

I think that would be a good addition. This feature is not required to
implement the planned solution though since the message pumping thread would
still queue in the proper osgGA event queue.

I'll ping the list once I add them.  Both EventQueue and the viewers
will need modifying to populate the pointers - the EventQueue to assign
the source graphics window, and the viewers to assign the view they are
associated with.
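The filtering this enables could be sketched as below. These are stand-in stubs: in OSG the source pointer would live on osgGA::GUIEventAdapter (possibly as an observer_ptr<>), and routeEvents is a hypothetical helper showing the checkEvents-time split.

```cpp
#include <cstddef>
#include <map>
#include <vector>

struct GraphicsWindow { int id; };

struct Event
{
    GraphicsWindow* source;  // proposed "where did this come from" pointer
    int payload;
};

// During checkEvents, a single combined queue can be split so each event
// ends up in the event queue of the window it originated from.
void routeEvents(std::vector<Event>& combined,
                 std::map<GraphicsWindow*, std::vector<Event> >& perWindow)
{
    for (std::size_t i = 0; i < combined.size(); ++i)
        perWindow[combined[i].source].push_back(combined[i]);
    combined.clear();
}
```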


Note that you already have this problem since it is the main thread that
would block on those X11 calls. No matter what is done, if these calls
block, the only possible choice we have is to pick which thread will
block and code around it.

Some X11 calls will block, but most won't.  The ones that
GraphicsWindowX11 uses don't block, so I'm not expecting any
problems.

Yes, there is an API to do it. But this won't be required since in the next
release these display devices will be accessible through the
GraphicsContextWin32 class. The only thing remaining to figure out is how
this will be specified in the Traits (right now the only way to create a
GraphicsContext is by specifying that we want a pbuffer). We may need to add
a new trait (:-) for just that...

OK, I'm a bit confused now: what exactly are these display devices that
aren't attached to the desktop?  Are they bona fide outputs that you
can attach a monitor or projector to?  Or are you talking about just
pbuffers or unmapped windows?

As for adding extra Traits, I'm open to proposals, but since I really
don't know what problem you're trying to address yet, I remain rather
perplexed as to how it might fit into the greater scheme of things.

--

On the topic of pbuffers, I haven't tackled the GraphicsContextX11
implementation yet. I'm expecting a number of areas where there will be
opportunities for sharing code between GraphicsWindowX11 and
GraphicsContextX11, but how to take advantage of this is the challenge.

What I certainly don't want to do is mix up the derivation of
GraphicsContext and GraphicsWindow; these need to be kept separate. A
pbuffer is just a graphics context, and all the window functions
present on the GraphicsWindow implementation are inappropriate - and you
don't want end users finding that a dynamic_cast<GraphicsWindow*> on a
pbuffer succeeds - so the pbuffer X11 implementation shouldn't subclass
from GraphicsWindowX11.  Also, GraphicsWindowX11 can't inherit from both
GraphicsContextX11 and GraphicsWindow, unless you start playing games
with virtual inheritance of GraphicsContext.

So I'm wondering if an X11 helper class might be in order to help
manage the common elements between GraphicsContextX11 and
GraphicsWindowX11.  This could also inform what approach to take under
Windows and OSX, so I'd suggest holding back on pbuffer work till I
have a bash at pbuffer support under X11, and see what lessons I
learn and what bits of the general framework I need to tweak to get
things to play nicely together.
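The hierarchy being argued for could be sketched as follows. Class bodies are illustrative stubs, not the real OSG API, and X11ContextHelper and PixelBufferX11 are assumed names: a pbuffer is a GraphicsContext but not a GraphicsWindow, and the shared X11 plumbing lives in a helper used by composition so neither concrete class inherits from the other.

```cpp
struct GraphicsContext { virtual ~GraphicsContext() {} };

struct GraphicsWindow : GraphicsContext
{
    // window-only API (resize, event handling, ...) would live here
};

// Common X11 elements (Display*, visual selection, GLX context handling)
// factored out and shared by composition.
struct X11ContextHelper { /* Display*, XVisualInfo*, GLXContext, ... */ };

struct GraphicsWindowX11 : GraphicsWindow  { X11ContextHelper _helper; };
struct PixelBufferX11   : GraphicsContext { X11ContextHelper _helper; };
```

With this layout a dynamic_cast<GraphicsWindow*> on a pbuffer returns null, which is exactly the behaviour the separation is meant to guarantee.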

Cheers,
Robert.
_______________________________________________
osg-users mailing list
osg-users@openscenegraph.net
http://openscenegraph.net/mailman/listinfo/osg-users
http://www.openscenegraph.org/
