Re: [osg-users] Vicon tracking

2010-03-12 Thread Mel Av
Thanks for your answers. 
I understand the theory behind this, which means that I will need to set 
projection matrices for the left and right eye cameras according to my distance 
to the display surface. The problem is, can these cameras be retrieved and have 
their projection matrices modified before each viewer.frame() call? As I have 
mentioned, at the moment I am using the --stereo command line argument to 
enable stereo. I read in an older thread (around July 2009) that cameras could 
not be retrieved through the viewer object, but that this was going to change. 
Now I see a getCameras() method in the Viewer class. Will that work for me, or 
will setting the camera matrices interfere with the already implemented stereo 
rendering? In that case, will I need to turn the stereo settings off and 
implement stereo manually using slave cameras, as was suggested in that thread?
By the way, the stereo settings wiki:
http://www.openscenegraph.org/projects/osg/wiki/Support/UserGuides/StereoSettings
mentions:
If the user is planning to use head tracked stereo, or a cave then it is 
currently recommend to set it up via a VR toolkit such as VRjuggler, in this 
case refer to the VR toolkits handling of stereo, and keep all the OSG's stereo 
specific environment variables (below) set to OFF

Does anyone know if this is still the case, or is that note outdated?

Thanks a lot

Mel

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=25587#25587





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Vicon tracking

2010-03-12 Thread Jason Daly

Mel Av wrote:
Thanks for your answers. 
I understand the theory behind this, which means that I will need to set projection matrices for the left and right eye cameras according to my distance to the display surface. The problem is, can these cameras be retrieved and have their projection matrices modified before each viewer.frame() call? As I have mentioned, at the moment I am using the --stereo command line argument to enable stereo. I read in an older thread (around July 2009) that cameras could not be retrieved through the viewer object, but that this was going to change. Now I see a getCameras() method in the Viewer class. Will that work for me, or will setting the camera matrices interfere with the already implemented stereo rendering? In that case, will I need to turn the stereo settings off and implement stereo manually using slave cameras, as was suggested in that thread?

By the way, the stereo settings wiki:
http://www.openscenegraph.org/projects/osg/wiki/Support/UserGuides/StereoSettings
mentions:
If the user is planning to use head tracked stereo, or a cave then it is currently 
recommend to set it up via a VR toolkit such as VRjuggler, in this case refer to the VR 
toolkits handling of stereo, and keep all the OSG's stereo specific environment variables 
(below) set to OFF
  


I'm pretty sure VRJuggler is still using the old SceneView class, so I 
wouldn't put too much faith in that statement.  You'll still need to 
enable quad-buffer stereo (whether you do it via environment variable or 
in code directly using DisplaySettings doesn't really matter).


You should be able to get the cameras via the getCamera() or 
getCameras() call in Viewer (I think you only have one camera, though, 
right?), and you can definitely set the projection matrix on the 
Camera(s) to correspond to your head tracking and view.  I don't know 
how the Viewer sets up the stereo projections, though (I tried to find 
it in the code, but it's buried pretty deep).  Someone else will have to 
answer that question, I guess.


--J






Re: [osg-users] Vicon tracking

2010-03-08 Thread Jason Daly

ted morris wrote:


yup, you will need to re-compute the proper frustum dynamically.  To 
do that, you will need to know the position/orientation of the sensor 
coordinate system w.r.t. the 'window'.


Ah, I missed the part about the stereo projector.  Yes, you will need to 
adjust the projection matrix on the fly as well.  There have been a 
number of papers about this over the years.  I got several hits when I 
Googled for "head tracked stereo projection matrix" (without the quotes).


It'll probably take several attempts to get it right.  Good luck!

--J



Re: [osg-users] Vicon tracking

2010-03-07 Thread ted morris
On Wed, Mar 3, 2010 at 10:56 AM, Mel Av melinos...@hotmail.com wrote:

 Hey all,

 Many thanks for your answers.
 The Vicon system uses a z-up coordinate system, with millimeter units. It
 sends x,y,z coordinates for a given tracked object(in that case the cap you
 are wearing) as well as rotation information in axis/angle form. The client
 I'm using converts this to three different forms I can use interchangeably:
 1. Euler angles
 2. Global rotation matrix
 3. Quaternion

 Right now I am just using the viewer->getCamera()->setViewMatrix() call
 before the viewer.frame() call. The problem seems to be that the matrix I
 pass to setViewMatrix() is not correct. I use the x,y,z coordinates for the
 cap to set the translation matrix ( tr.makeTranslate() ) and the quaternion
 to set the rotation matrix ( rot.makeRotate() ). I then multiply tr * rot
 and pass this to setViewMatrix(). The result is that when I move closer to
 the projected image, the rendered objects seem to come closer, which is
 correct behavior, but when I rotate my head the rendered objects rotate in
 the opposite direction. The code I use is from
 the OSG Quick start guide and it was supposed to change the view according
 to its description. However this does not seem to be the case for two
 reasons, from what I found out last night reading one old camera tutorial:
 1. You have to invert the tr*rot result first
 2. After you do that, you have to rotate -90 degrees about the x-axis
 because the Matrix classes use a Y-up coordinate frame whereas the viewer
 uses a Z-up coordinate frame.

 Will these two corrections solve the problem? Will I also need to do
 something more advanced like dynamically changing the frustum according to
 the head position?


yup, you will need to re-compute the proper frustum dynamically. To do
that, you will need to know the position/orientation of the sensor
coordinate system w.r.t. the 'window'. Then you need to transform where
your head is with respect to the window. Finally, you will need to know
where your 'window' (what you see the world through) is w.r.t. your model
world coordinate system. And of course you will need to know this for the
left/right eye positions. Solving for these transformations properly might
take a little thought/time depending on your set up, but as you seem to
have figured out already, it can all be realized pretty easily with OSG
matrix objects once you obtain the needed transformations.

Sometimes, as you start to chain your transformations together, it makes
things easier to break the chain into smaller steps and test what you see
at each step; with so many different coordinate conventions in play, it is
easy to get one link wrong.





 Thank you!

 Cheers,
 Mel

 P.S. Sorry if my questions seem noob. I just thought I had the necessary
 background for understanding rotation and translation transformations but
 I'm completely confused by how OSG handles these and why it uses different
 coordinate frames in different situations. Anyway, perhaps I'm thrown off
 by DirectX, where camera manipulation was much easier thanks to the
 separation of the modelview matrix into two different matrices, whereas in
 OpenGL you may be moving/rotating objects instead of the actual camera
 viewpoint.




[osg-users] Vicon tracking

2010-03-03 Thread Mel Av
Hey,

I was wondering if anyone knows what would be the best solution for building an 
application where the camera is headtracked using motion captured data provided 
in realtime from a Vicon system in OpenSceneGraph. I can get the position and 
rotation of the 'real' camera (cap with infrared markers) at every frame but 
was not successful in using these to move the camera in OpenSceneGraph. I 
should also mention that the application is rendered on a stereo projector. 
Right now I'm only using the --stereo QUAD_BUFFER command line argument, but 
I'm not sure if this is the appropriate way to do it.
I apologise if this has been answered somewhere else.
Any help is much appreciated.

Thank you!

Cheers,
Mel

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=25082#25082







Re: [osg-users] Vicon tracking

2010-03-03 Thread ted morris
I did something similar a very long time ago with OSG.

I set the ModelView and Projection matrix of the SceneView directly:

setProjectionMatrix( _mtx_proj )
setViewMatrix(vmtx)

For the view matrix you would calculate osg::Matrix::makeLookAt(), and for
the projection matrix you need to compute your frustum. How you set these
up depends on the application.

OSGers-- I'm talking *very old versions* of OSG-- is there now a bundled
'convenience' class that
takes care of this monkey business?

t



On Tue, Mar 2, 2010 at 6:09 PM, Mel Av melinos...@hotmail.com wrote:

 Hey,

 I was wondering if anyone knows what would be the best solution for
 building an application where the camera is headtracked using motion
 captured data provided in realtime from a Vicon system in OpenSceneGraph. I
 can get the position and rotation of the 'real' camera (cap with infrared
 markers) at every frame but was not successful in using these to move the
 camera in OpenSceneGraph. Also I have to mention that the application is
 rendered in a stereo projector. Right now I'm only using the --stereo
 QUAD_BUFFER command line argument but I'm not sure if this is the
 appropriate way to do this.
 I apologise if this has been answered somewhere else.
 Any help is much appreciated.

 Thank you!

 Cheers,
 Mel




Re: [osg-users] Vicon tracking

2010-03-03 Thread Jason Daly

ted morris wrote:
OSGers-- I'm talking *very old versions* of OSG-- is there now a bundled 
'convenience' class that takes care of this monkey business?


I'd think all you should need to do is call setViewMatrix() on the 
Camera node.  The sticky part is usually that the tracking system's 
coordinate frame is different from what your OSG app is expecting.


Do you know what the coordinate system is for your tracking space? Also, in 
what format does the Vicon system send its data? Do you get a vector and 
Euler angles, a matrix, or a quaternion? You said that you weren't 
successful in using the tracking data to move the OSG camera. Can you 
elaborate?


--J

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



Re: [osg-users] Vicon tracking

2010-03-03 Thread Mel Av
Hey all,

Many thanks for your answers.
The Vicon system uses a z-up coordinate system, with millimeter units. It sends 
x,y,z coordinates for a given tracked object(in that case the cap you are 
wearing) as well as rotation information in axis/angle form. The client I'm 
using converts this to three different forms I can use interchangeably:
1. Euler angles
2. Global rotation matrix
3. Quaternion

Right now I am just using the viewer->getCamera()->setViewMatrix() call before 
the viewer.frame() call. The problem seems to be that the matrix I pass to 
setViewMatrix() is not correct. I use the x,y,z coordinates for the cap to set 
the translation matrix ( tr.makeTranslate() ) and the quaternion to set the 
rotation matrix ( rot.makeRotate() ). I then multiply tr * rot and pass this 
to setViewMatrix(). The result is that when I move closer to the projected 
image, the rendered objects seem to come closer, which is correct behavior, 
but when I rotate my head the rendered objects rotate in the opposite 
direction. The code I use is from the OSG Quick start guide and it was 
supposed to change the view according to its description. However this does 
not seem to be the case, for two reasons, from what I found out last night 
reading an old camera tutorial:
1. You have to invert the tr*rot result first
2. After you do that, you have to rotate -90 degrees about the x-axis because 
the Matrix classes use a Y-up coordinate frame whereas the viewer uses a Z-up 
coordinate frame.

Will these two corrections solve the problem? Will I also need to do something 
more advanced like dynamically changing the frustum according to the head 
position? 

Thank you!

Cheers,
Mel

P.S. Sorry if my questions seem noob. I just thought I had the necessary 
background for understanding rotation and translation transformations but I'm 
completely confused by how OSG handles these and why it uses different 
coordinate frames in different situations. Anyway, perhaps I'm thrown off by 
DirectX, where camera manipulation was much easier thanks to the separation of 
the modelview matrix into two different matrices, whereas in OpenGL you may be 
moving/rotating objects instead of the actual camera viewpoint.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=25121#25121







Re: [osg-users] Vicon tracking

2010-03-03 Thread Jason Daly


Hi, Mel,


Mel Av wrote:

1. You have to invert the tr*rot result first


This probably comes from the fact that you're setting a View matrix, 
which is essentially the inverse of a model matrix.  The transform in 
the OSG scene forms the Model matrix, which is combined with the view 
matrix in the camera to get the overall modelview matrix passed to OpenGL.


If you're already getting accurate position tracking, though, I don't 
think you'll get correct behavior if you invert the result.




2. After you do that, you have to rotate -90 degrees about the x-axis because 
the Matrix classes use a Y-up coordinate frame whereas the viewer uses a Z-up 
coordinate frame.


There's no inherent coordinate frame in the Matrix classes, or in 
OpenGL, or OSG.  The coordinate frame depends primarily on the scene 
that you create.  Typically, OSG scenes are Z-up, but this isn't 
necessarily true.


If you do need to convert a rotation between coordinate systems, the 
correct way to do this is  M * R * Minv, where M is the matrix needed to 
rotate one coordinate system to the other, Minv is the inverse of that 
matrix, and R is the rotation that needs conversion.


Will these two corrections solve the problem? 


I don't think so, simply because you're already getting correct behavior 
from your position tracking.  It's only the orientation tracking where 
you're having issues.


Oftentimes, tracking systems use one coordinate system for the overall 
tracking space, and a different one to report the orientation.  Are you 
sure you've got the correct coordinate system for the tracked object?  
From what you said above, it sounds to me like your tracked object 
might be Z-down (even though the overall tracking space is Z-up).




Will I also need to do something more advanced like dynamically changing the 
frustum according to the head position?


No, you shouldn't need anything like that.

--J