I'm also pretty excited about using the Rift for scientific/technical
visualization (I have a DK1), but so far that hasn't been reflected in
an appropriate amount of spare hacking time.

If you haven't already, I'd recommend checking out the Vrui YouTube
demos (for example, this one is pretty sweet:
https://www.youtube.com/watch?v=IERHs7yYsWI).  The Vrui folks really
seem to have nailed a lot of the data-set navigation issues, as well as
various VR-specific issues like properly embedding the menu system
in 3D.  I suspect this is because they'd been working on such things
for years before the Rift came along and brought the technology to
the general public at a reasonable price :)
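On the distortion idea in the thread below: for what it's worth, the usual
Brown-Conrady radial model is simple enough to sketch in a few lines of
NumPy.  The coefficients and camera intrinsics here are made up, just to
show the shape of the computation:

```python
import numpy as np

def distort(points, k1, k2, fc, cc):
    """Apply Brown-Conrady radial distortion to Nx2 pixel coordinates.

    k1, k2: radial distortion coefficients.
    fc: focal length in pixels, cc: principal point in pixels.
    """
    x = (points - cc) / fc                   # normalize to camera coordinates
    r2 = np.sum(x ** 2, axis=1, keepdims=True)
    x_d = x * (1.0 + k1 * r2 + k2 * r2 ** 2)  # radial scaling per point
    return x_d * fc + cc                     # back to pixel coordinates

pts = np.array([[320.0, 240.0],   # at the principal point: unchanged
                [420.0, 240.0]])  # off-center: pushed outward for k1 > 0
out = distort(pts, k1=0.1, k2=0.0,
              fc=np.array([500.0, 500.0]),
              cc=np.array([320.0, 240.0]))
```

In a real render pipeline you'd typically apply this (or its inverse) as a
post-process fragment shader rather than on the CPU, which is roughly what
the Rift SDK does for its lens correction.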

~Chris

On Tue, May 20, 2014 at 6:31 AM, Simon Danisch <sdani...@gmail.com> wrote:
> Hi Kevin,
> this is actually something I wanted to look into after GSoC, as I'm really
> psyched about integrating the Oculus Rift into the visualization engine,
> which would include having different lens models and camera distortions in
> the render pipeline ;)
>
> Am Montag, 19. Mai 2014 18:07:59 UTC+2 schrieb Kevin Squire:
>>
>> In my spare time, I've recently been exploring some augmented reality
>> projects.  One challenge with existing systems is the disconnect between
>> computer vision systems and 3D renderers.  In particular, it would be really
>> nice if it were easy (or at least possible) to add a camera distortion model
>> to the renderer, so as to "redistort" a rendered model to match a video.
>>
>> Cheers!
>>    Kevin
>>
>> On Monday, May 19, 2014, Jason Grout <jason...@creativetrax.com> wrote:
>>>
>>> On 5/19/14, 10:52, Jason Grout wrote:
>>>>
>>>> (see http://sagecell.sagemath.org/?q=vtjadv for an example of a point
>>>> tracking the mouse on a sphere)
>>>
>>>
>>> Sorry; the correct link for the example is
>>> http://sagecell.sagemath.org/?q=yiarsu
>>>
>>> Thanks,
>>>
>>> Jason
>>>
>
