@Chris Foster and @Andreas Lobinger:
Very good points! Maybe I should describe what I already have, to channel
the discussion a little.
I want to take some more time for that, and maybe make a little graphic.
Right now I don't have much time, so I might do that later ;)
Thanks for all the feedback, this discussion is helping a lot! =)


2014-05-19 22:31 GMT+02:00 Simon Danisch <[email protected]>:

> Hi Kevin,
> this is actually something I wanted to look into after GSoC, as I'm really
> psyched about integrating the Oculus Rift into the visualization engine,
> which would include having different lens models and camera distortions in
> the render pipeline ;)
>
> Am Montag, 19. Mai 2014 18:07:59 UTC+2 schrieb Kevin Squire:
>>
>> In my spare time, I've recently been exploring some augmented reality
>> projects.  One challenge with existing systems is the disconnect between
>> computer vision systems and 3D renderers.  In particular, it would be
>> really nice if it were easy (or at least possible) to add a camera
>> distortion model to the renderer, so as to "redistort" a rendered model to
>> match a video.
>>
>> Cheers!
>>    Kevin
>>
>> On Monday, May 19, 2014, Jason Grout <[email protected]> wrote:
>>
>>> On 5/19/14, 10:52, Jason Grout wrote:
>>>
>>>> (see http://sagecell.sagemath.org/?q=vtjadv for an example of a point
>>>> tracking the mouse on a sphere)
>>>>
>>>
>>> Sorry; the correct link for the example is
>>> http://sagecell.sagemath.org/?q=yiarsu
>>>
>>> Thanks,
>>>
>>> Jason
>>>
>>>

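For anyone following along: the camera-distortion step Kevin describes is usually modeled with a Brown-Conrady radial distortion applied to the renderer's projected points. A minimal sketch (the function name and coefficient values here are illustrative, not from any of the projects mentioned; real coefficients come from calibrating the physical camera):

```python
# "Redistort" rendered points so they line up with real camera footage.
# k1, k2 are radial distortion coefficients, normally obtained from a
# camera calibration step.

def distort(x, y, k1, k2):
    """Apply Brown-Conrady radial distortion to a point given in
    normalized image coordinates (origin at the principal point)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# A point near the image edge gets pushed outward or pulled inward
# (pincushion or barrel distortion, depending on the sign of k1).
print(distort(0.5, 0.5, 0.1, 0.01))
```

Applying this per-vertex (or per-pixel in a post-process shader) to the renderer's output is one way to make a rendered overlay register with distorted video.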