Hi all, I'm (very) new to OSG, though I do have some prior graphics experience (which could use brushing up, I'll admit). However, I just can't seem to find the best solution to this problem.
I'm working on an application that renders an aircraft HUD view in one window. Originally a single osg::Viewer with one camera was used, which works fine. Now I'm trying to add what are essentially "peripheral" views, rendered to the left and right of the main view in the same window. (I'm not using a single wide camera because these peripheral views will be compressed horizontally, allowing the user to set an arbitrary total FOV, up to 180 degrees across all three views.) That part works, per se; the problem is jarring artifacts at the seams between the viewports. Objects that straddle two viewports show slight but very noticeable discrepancies in offset and scale, so scale and immersion break whenever something crosses a boundary.

I've attempted two solutions in order to compare them. In both, all views currently have the same field of view and are drawn with equal size in the window.

In the first, I add two slave cameras to the viewer (whose rotation is set via a manipulator; this was the implementation before I started the project), each with a view offset of +/- the main camera's horizontal field of view rotated about the Y-axis. I obtain the horizontal FOV by multiplying the vertical FOV by the aspect ratio, both retrieved via getProjectionMatrixAsPerspective(). From my understanding, since the main camera's view matrix is post-multiplied by the slave's offset to produce the final slave view matrix, this should be a rotation about the local axis and give the correct result. In practice, not only is there a slight scale/position issue, but the angle of the horizon drifts further off as the roll increases (the rotation itself does seem to be about the correct axis; tests with other axes were definitely incorrect).
I've used both matrix and quaternion rotations in case there are precision/accuracy differences, with no change in results. I've also taken a look at setUpViewFor3DSphericalDisplay(), which seems to use a very similar method (though I don't quite understand its rotation about the Z-axis).

In my other (naive, first) implementation, I use a CompositeViewer with three identical cameras, each with its own manipulator. Each frame I set all three to identical rotations and calculate pitch and azimuth offsets for the peripheral views based on the aircraft roll, which is also recalculated each frame. This actually seems to work better than the single-viewer solution, but it's not what I want and certainly doesn't feel like the "correct" solution.

I'll add code if/when I can (it needs quite a bit of cleanup, as I've tried this about twenty different ways) and will post more information as often as I can. Thanks for any help you can give!

-Adam

------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=44488#44488