Hi Mukund,

On Fri, Mar 18, 2011 at 6:57 PM, Mukund Keshav <[email protected]> wrote:
> Thanks a lot. This has been really helpful. So, basically i build two 
> cameras(virtual cameras i mean).

Yep.

> One camera matching left eye intrinsics and one camera matching right eye 
> intrinsics. And making them separated by a certain distance.

I found it best to separate the computation of the eye positions from
the computation of the projection and view matrices: first write code
that computes the correct projection and view matrices for a given
eye position, then separately invoke that code for wherever you compute
the left and right eye positions to be.
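To make the "per-eye position" part concrete, here's a minimal sketch of the off-axis frustum maths for one eye, assuming the physical screen lies in the plane z = 0 of the display's coordinate frame and spans [xMin,xMax] x [yMin,yMax], with the eye at z > 0. The struct and function names are just illustration, not OSG API:

```cpp
#include <cassert>
#include <cmath>

// Minimal 3D vector for the sketch.
struct Vec3 { double x, y, z; };

// Frustum parameters in the form glFrustum/
// osg::Camera::setProjectionMatrixAsFrustum expect.
struct Frustum { double left, right, bottom, top, zNear, zFar; };

// Compute an off-axis frustum for one eye. The screen rectangle is
// assumed axis-aligned in the plane z = 0, and eye.z > 0 (assumptions
// of this sketch, not something OSG imposes).
Frustum eyeFrustum(const Vec3& eye,
                   double xMin, double xMax,
                   double yMin, double yMax,
                   double zNear, double zFar)
{
    // Scale the screen-plane extents, measured relative to the eye,
    // from the eye-to-screen distance down to the near plane.
    const double s = zNear / eye.z;
    Frustum f;
    f.left   = (xMin - eye.x) * s;
    f.right  = (xMax - eye.x) * s;
    f.bottom = (yMin - eye.y) * s;
    f.top    = (yMax - eye.y) * s;
    f.zNear  = zNear;
    f.zFar   = zFar;
    return f;
}
```

You'd then feed the result to `camera->setProjectionMatrixAsFrustum(f.left, f.right, f.bottom, f.top, f.zNear, f.zFar)`, and set a view matrix that translates the world by -eye (for a screen-aligned frame). Calling `eyeFrustum()` twice with the two eye positions, offset by the interocular distance, gives you the left and right cameras.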

> Also, in here, we intend to let the use reach out to the object etc. So, we 
> will need to match the exact human eye intrinsics. Am i right about that?

You need to calibrate the trackers for the hand and the glasses to the
coordinate frame of the display system if it's an external display
system, i.e. a bench/powerwall/monitor, which is what I've been assuming.
I've just realized that you could be using a head mounted display, in
which case the way you set things up is very different and is rather
more straightforward w.r.t. projection/view matrices.


> PS: i went through some examples of slave cameras used to render in two 
> windows. how do i go about rendering the two views in the same window?

You simply assign the same GraphicsWindow to both Cameras, then set
the viewports to render into the appropriate halves.
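The viewport split itself is just pixel arithmetic; a small sketch (plain C++, names are illustrative, not OSG API):

```cpp
#include <cassert>

// Viewport rectangle in window pixel coordinates.
struct Rect { int x, y, width, height; };

// Split a window of size (w, h) into left/right halves for side-by-side
// stereo. In OSG each Rect would become an osg::Viewport on the
// corresponding Camera, after assigning both cameras the same
// graphics context.
Rect leftHalf(int w, int h)  { return Rect{0,     0, w / 2,     h}; }
Rect rightHalf(int w, int h) { return Rect{w / 2, 0, w - w / 2, h}; }
```

So for a 1920x1080 window the left camera gets viewport (0, 0, 960, 1080) and the right camera (960, 0, 960, 1080), with the right half absorbing the extra pixel if the width is odd.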

Robert.
_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
