On Wed, Mar 3, 2010 at 10:56 AM, Mel Av <melinos...@hotmail.com> wrote:

> Hey all,
>
> Many thanks for your answers.
> The Vicon system uses a z-up coordinate system, with millimeter units. It
> sends x, y, z coordinates for a given tracked object (in this case the cap
> you are wearing) as well as rotation information in axis/angle form. The
> client I'm using converts this to three forms I can use interchangeably:
> 1. Euler angles
> 2. Global rotation matrix
> 3. Quaternion
>
> Right now I am just calling viewer->getCamera()->setViewMatrix() before
> the viewer.frame() call. The problem seems to be that the matrix I pass to
> setViewMatrix() is not correct. I use the x, y, z coordinates of the cap
> to set the translation matrix ( tr.makeTranslate() ) and the quaternion to
> set the rotation matrix ( rot.makeRotate() ). I then multiply tr * rot and
> pass the result to setViewMatrix(). When I move closer to the projected
> image, the rendered objects seem to come closer, which is correct
> behavior, but when I rotate my head the rendered objects rotate in the
> opposite direction. The code I use is from the OSG Quick Start Guide and
> it was supposed to change the view according to its description. However,
> this does not seem to be the case, for two reasons I found out last night
> while reading an old camera tutorial:
> 1. You have to invert the tr * rot result first.
> 2. After that, you have to rotate -90 degrees about the x-axis, because
> the Matrix classes use a Y-up coordinate frame whereas the viewer uses a
> Z-up coordinate frame.
>
> Will these two corrections solve the problem? Will I also need to do
> something more advanced like dynamically changing the frustum according to
> the head position?
>

Yup, you will need to re-compute the proper frustum dynamically. To do
that, you will need to know the position/orientation of the sensor
coordinate system w.r.t. the 'window'. Then you need to transform where
your head is with respect to the window. Finally, you will need to know
where your 'window' (what you see the world through) is w.r.t. your model
world coordinate system. And of course you will need to know this for the
left/right eye positions. Solving for these transformations well might
take you a little thought/time depending on your setup. But, as it seems
you have figured out as much, it can all be realized pretty easily with
OSG matrix objects once you obtain the needed transformations.
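For the head-tracked frustum, the near-plane extents fall out of similar
triangles between the head and the edges of the physical screen. Here is a
minimal sketch in plain C++ (the function name and the screen-centered
coordinate convention are my own, for illustration; this is not an OSG API):

```cpp
#include <cmath>

struct Frustum { double left, right, bottom, top; };

// Off-axis frustum for a screen of half-width hw and half-height hh,
// viewed from a head at (hx, hy) offset from the screen center, at
// perpendicular distance hz (all in the same units, e.g. millimeters).
// Similar triangles scale the screen edges onto the near plane at znear.
Frustum offAxisFrustum(double hx, double hy, double hz,
                       double hw, double hh, double znear) {
    double s = znear / hz;   // scale factor from screen plane to near plane
    return { s * (-hw - hx), s * (hw - hx),
             s * (-hh - hy), s * (hh - hy) };
}
```

You would then feed the four values, together with your near/far planes,
into camera->setProjectionMatrixAsFrustum() each frame as the head moves.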

Sometimes, as you start to chain your transformations together, it makes
things easier to break the chain down into smaller steps and test what you
see after each one. With so many different conventions in play, it is easy
to get a sign or an axis wrong.

>
> Thank you!
>
> Cheers,
> Mel
>
> P.S. Sorry if my questions seem noob. I thought I had the necessary
> background for understanding rotation and translation transformations,
> but I'm completely confused by how OSG handles them and why it uses
> different coordinate frames in different situations. Anyway, perhaps I'm
> confused because of DirectX: camera manipulation was much easier there,
> with the modelview matrix split into two separate matrices, whereas in
> OpenGL you may be moving/rotating objects instead of the actual camera
> viewpoint.
>
> ------------------
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=25121#25121
>
>
>
>
>
> _______________________________________________
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
