Hi Robert,
Sorry for the delayed response, I did not read your e-mail until today,
gmail messed up the notifications :(
Regarding the multiview approach, it doesn't matter how you set the layout,
I usually use a grid because it is more compact when doing the RTT. The stereo
computing is always
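The grid layout Rafa describes can be sketched in plain C++ (this is a hypothetical helper, not code from the thread, and it leaves out the OSG slave Camera setup that would consume these viewports):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One tile of the shared RTT target, in texel coordinates.
struct Viewport { int x, y, width, height; };

// Packs numViews views row-major into a near-square cols x rows grid,
// the compact layout mentioned for rendering all views to one texture.
std::vector<Viewport> gridLayout(int numViews, int texWidth, int texHeight)
{
    int cols = static_cast<int>(std::ceil(std::sqrt(static_cast<double>(numViews))));
    int rows = (numViews + cols - 1) / cols;   // ceil(numViews / cols)
    int w = texWidth / cols, h = texHeight / rows;

    std::vector<Viewport> tiles;
    for (int i = 0; i < numViews; ++i)
        tiles.push_back({ (i % cols) * w, (i / cols) * h, w, h });
    return tiles;
}
```

For a 9-view auto-stereoscopic display this yields a 3x3 grid; each tile would become one slave Camera's viewport into the shared texture.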
Hi Rafa,
On 13 April 2013 09:23, Rafa Gaitan rafa.gai...@gmail.com wrote:
Sorry for the delayed response, I did not read your e-mail until today,
gmail messed up the notifications :(
Regarding the multiview approach, it doesn't matter how you set the
layout, I usually use a grid because it is
Hi Robert,
It sounds like you've been rendering all the cameras to a single texture?
I am still weighing up whether multiple textures or a shared texture
would be appropriate. Multiple textures, i.e. one per eye camera would be
the most scalable.
Yes, I don't remember exactly the
Hi Rafa,
On 3 April 2013 09:48, Rafa Gaitan rafa.gai...@gmail.com wrote:
I'm not sure if the refactor will affect the way that stereo will be
managed, but auto-stereoscopic displays also exist now, which means
that you find not only the classical two-view stereo but many views (8 or 9 are
Hi Robert,
I'm not sure if the refactor will affect the way that stereo will be
managed, but auto-stereoscopic displays also exist now, which means
that you find not only the classical two-view stereo but many views (8 or 9 are
the most common), and even on mobile devices it's also possible to
Hi Robert,
2013/4/3 Robert Osfield robert.osfi...@gmail.com
The new approach should be able to encompass novel displays that require
multiple cameras and then a second pass to render the result. In essence it
won't be any different to how stereo will be done - a master Camera on the
On 04/03/2013 10:58 AM, Robert Osfield wrote:
The question for me is how best to wire up the interface for the stereo
support and any other schemes. One approach is to leave it low level
viewer configuration where the user or a configuration file describes
exactly what slave Cameras are required,
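The per-eye slave Camera configuration Robert mentions ultimately reduces to giving each eye an asymmetric (off-axis) frustum. A minimal sketch of that maths, in plain C++ with hypothetical names rather than actual OSG code:

```cpp
#include <cassert>
#include <cmath>

// Frustum parameters as glFrustum/osg::Matrix::frustum would take them.
struct Frustum { double left, right, bottom, top, zNear, zFar; };

// eye = -1 for the left eye, +1 for the right; fovy in radians;
// 'focal' is the zero-parallax (screen) distance.
Frustum eyeFrustum(int eye, double fovy, double aspect,
                   double zNear, double zFar,
                   double eyeSeparation, double focal)
{
    double halfH = zNear * std::tan(fovy * 0.5);
    double halfW = halfH * aspect;
    // Shift the frustum window opposite to the eye's lateral offset so
    // both eyes converge on the same screen rectangle at 'focal'.
    double shift = -eye * 0.5 * eyeSeparation * zNear / focal;
    return { -halfW + shift, halfW + shift, -halfH, halfH, zNear, zFar };
}
```

Each slave Camera would pair this projection with a view matrix translated by half the eye separation; a viewer configuration file would only need to supply the eye separation and focal distance.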
Why not just build an approximate mesh based on the Oculus projection
formula? ;)
Christian
2013/4/3 Jan Ciger jan.ci...@gmail.com
On 04/03/2013 10:58 AM, Robert Osfield wrote:
The question for me is how best to wire up the interface for the stereo
support and any other schemes. One
On 04/03/2013 03:47 PM, Christian Buchner wrote:
Why not just build an approximate mesh based on the Oculus projection
formula? ;)
Christian
Why do that when the formula including the distortion coefficients is
provided in the SDK? Do you have a parametric modeller that could do it?
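The SDK-style formula Jan refers to is a radial polynomial in r^2 applied to the distance from the lens centre. A sketch, with illustrative placeholder coefficients rather than the real values shipped with any SDK:

```cpp
#include <cassert>

// Scales a radius r from the lens centre by a polynomial in r^2:
//   r' = r * (k0 + k1*r^2 + k2*r^4 + k3*r^6)
// k0 is normally 1; positive k1..k3 push samples outward (barrel).
double distortRadius(double r, const double k[4])
{
    double r2 = r * r;
    return r * (k[0] + r2 * (k[1] + r2 * (k[2] + r2 * k[3])));
}
```

Evaluating this per vertex is what makes a mesh-based approximation cheap: the polynomial runs once per grid point instead of once per fragment.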
Hi All,
With the work on integration of keystone distortion correction support in
osgViewer I have been thinking about the general Render To Texture (RTT)
approach that it uses. I think there are direct parallels to other effects, not
just distortion correction. Other areas that are potentially
Would supporting barrel predistortion for the Oculus Rift VR headset touch
this area of work as well? Mathematically speaking it's slightly more
advanced than keystoning, but both could be implemented with a distortion
mesh.
Christian
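The distortion-mesh idea both messages converge on can be sketched as follows: a regular grid of screen-space vertices whose texture coordinates are warped by a radial polynomial, so drawing the RTT colour texture through this mesh pre-distorts the image. This is a hypothetical plain-C++ sketch (coefficients illustrative, texcoords unclamped, OSG geometry setup omitted):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct MeshVertex { double x, y;   // position in [-1, 1] screen space
                    double u, v; };// texcoord into the RTT colour texture

// Builds a rows x cols grid; each vertex samples the texture at a radius
// scaled by k0 + k1*r^2 + k2*r^4 + k3*r^6, i.e. barrel pre-distortion.
std::vector<MeshVertex> buildDistortionMesh(int rows, int cols, const double k[4])
{
    std::vector<MeshVertex> verts;
    for (int j = 0; j < rows; ++j)
    for (int i = 0; i < cols; ++i)
    {
        double x = -1.0 + 2.0 * i / (cols - 1);
        double y = -1.0 + 2.0 * j / (rows - 1);
        double r2 = x * x + y * y;
        double scale = k[0] + r2 * (k[1] + r2 * (k[2] + r2 * k[3]));
        // Sample further from the centre as r grows, then map [-1,1] -> [0,1].
        verts.push_back({ x, y, 0.5 * (1.0 + x * scale),
                                0.5 * (1.0 + y * scale) });
    }
    return verts;
}
```

The same mesh machinery covers keystone correction too: only the per-vertex warp function changes (projective instead of radial), which is why both fit the one RTT pass being discussed.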
2013/3/29 Robert Osfield robert.osfi...@gmail.com
Hi
Hi Christian,
On 29 March 2013 13:30, Christian Buchner christian.buch...@gmail.com wrote:
Would supporting barrel predistortion for the Oculus Rift VR headset touch
this area of work as well? Mathematically speaking it's slightly more
advanced than keystoning, but both could be implemented
Hi All,
I'm currently working on a project that needs keystone correction for
using projectors off axis when doing a portable stereo visualization
setup. As part of this work I've written a simple example,
osgkeystone, that I've used to prototype the UI, maths and viewer
setup to do the adjust