Hi Michael,

This is more or less where I am stuck: the geo is not meshed yet, and I am
still waiting for the LIDAR guys to provide exactly that.

I am able to solve multiple pano locations using one of the 6-pack cubic
images once I have solved the camera positions of the still images. I was
hoping to solve the still camera positions relative to the LIDAR data
without using the actual meshed geo, for example by manually feeding in XYZ
coordinates, or some other way of getting the LIDAR information involved
during the still-image solving process.
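
As a sanity check outside of PFTrack, a quick PnP solve (e.g. OpenCV's
solvePnP) on a handful of picked LIDAR points should give roughly the same
camera position; everything below (point values, pixel positions, rough
intrinsics) is just a placeholder sketch of the idea:

import numpy as np
import cv2

# LIDAR points (world space, metres) and their pixel positions in one still.
# All values are placeholders, not from the actual scan.
object_pts = np.array([[1.2, 0.4, 5.1], [3.7, 0.9, 6.3], [2.5, 2.1, 4.8],
                       [0.8, 1.7, 7.0], [4.1, 0.2, 5.5], [2.9, 1.3, 6.8]],
                      dtype=np.float64)
image_pts = np.array([[1510.0, 822.0], [2304.0, 790.0], [1985.0, 600.0],
                      [1200.0, 655.0], [2550.0, 910.0], [2100.0, 700.0]],
                     dtype=np.float64)

# Rough intrinsics: focal length in pixels plus the principal point.
f, cx, cy = 3200.0, 2000.0, 1500.0
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)
cam_pos = (-R.T @ tvec).ravel()   # camera centre in LIDAR/world coordinates
print("camera position in LIDAR space:", cam_pos)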

Currently, I am tracking a few user tracks along with the auto-track in the
CameraTracker. Once it's solved, I set the actual distance based on the
LIDAR data to line up the two worlds. I need to experiment a bit further to
see how solid the result is.
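
The line-up itself is just the ratio of the LIDAR distance to the solved
distance between the same two points, i.e. something like this (placeholder
numbers):

import numpy as np

# two user-track points as solved by CameraTracker (arbitrary solve units)
p_solved_a = np.array([0.12, 0.03, 1.45])
p_solved_b = np.array([0.98, 0.07, 1.62])

# distance between the same two points measured in the LIDAR data (metres)
lidar_dist = 4.27

scale = lidar_dist / np.linalg.norm(p_solved_b - p_solved_a)
print("uniform scale to apply to the solved scene:", scale)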

Cheers,
Jason


On Thu, Jan 29, 2015 at 8:07 PM, Michael Garrett <michaeld...@gmail.com>
wrote:

> Hi Jason,
>
> From what I understand of what you're saying, I've done this exact thing
> in the past using the ProjectionSolver. That's now been replaced by the
> updated CameraTracker, although I haven't checked the workflow. Of course
> you can still access the ProjectionSolver if need be.
>
> It sounds like you need to mesh the Lidar data and then decimate it, or get
> a modeler to use it as a snapping reference to create a lightweight version.
> Once it was in Nuke, I used the ProjectionSolver with the ref images and the
> Lidar to pick corresponding points, and from there it generated a camera at
> the nodal position for projecting the pano back onto the Lidar geometry. In
> practice I just used the camera's position: I set up a cubic projection rig
> (camera sixpack) at that position to project a full 360 pano split into six
> cubic faces.
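>
> In case it helps, a rough Nuke Python sketch of that sixpack (the nodal
> position below is a placeholder, the face labels are just the usual
> convention, and wiring each camera plus its face image into a Project3D
> onto the Lidar geo is left out):
>
> import nuke
>
> nodal = (12.4, 1.6, -3.2)   # solved nodal position (placeholder values)
>
> # yaw/pitch per cube face: front, right, back, left, up, down
> faces = {'front': (0, 0, 0),   'right': (0, -90, 0), 'back': (0, 180, 0),
>          'left':  (0, 90, 0),  'up':    (90, 0, 0),  'down': (-90, 0, 0)}
>
> for name, rot in faces.items():
>     cam = nuke.nodes.Camera2(name='pano_cam_' + name)
>     cam['translate'].setValue(list(nodal))
>     cam['rotate'].setValue(list(rot))
>     # 90 degree FOV per face: haperture = 2 * focal * tan(45 deg) = 2 * focal
>     cam['focal'].setValue(18)
>     cam['haperture'].setValue(36)
>     cam['vaperture'].setValue(36)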
>
> Michael
>
> On 28 January 2015 at 14:43, Jason Huang <jasonhuang1...@gmail.com> wrote:
>
>> Hi guys,
>>
>> I am trying to solve the cameras of still images together with acquired
>> LIDAR data. Ideally I would like to solve each still camera's position,
>> lock it in with the LIDAR data in terms of scale and accuracy, and then
>> triangulate the positions of multiple HDR panos.
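>>
>> (Just to show what I mean by that last step, a tiny numpy sketch of
>> triangulating the pano position from two solved stills, with placeholder
>> camera positions and ray directions:)
>>
>> import numpy as np
>>
>> def midpoint_of_rays(c1, d1, c2, d2):
>>     """Closest point between two viewing rays (camera centre + direction)."""
>>     d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
>>     w0 = c1 - c2
>>     a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
>>     d, e = d1 @ w0, d2 @ w0
>>     denom = a * c - b * b          # near zero means the rays are parallel
>>     s = (b * e - c * d) / denom
>>     t = (a * e - b * d) / denom
>>     return 0.5 * ((c1 + s * d1) + (c2 + t * d2))
>>
>> # two solved stills that both see the pano tripod head (placeholder values)
>> pano = midpoint_of_rays(np.array([0.0, 1.6, 0.0]), np.array([0.3, 0.0, 1.0]),
>>                         np.array([2.0, 1.6, 0.5]), np.array([-0.2, 0.0, 1.0]))
>> print("triangulated pano position:", pano)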
>>
>> I first created 6+ user tracks across a few frames at the beginning of the
>> sequence and gave each of them XYZ values read out from the corresponding
>> LIDAR points (which are cached in PFTrack), then flagged them as solved.
>> After that I either auto-tracked with them and solved, or auto-tracked
>> without them to solve the still camera positions; the result either failed
>> outright or was completely out of whack.
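>>
>> (One thing that might be worth checking is whether the handful of LIDAR
>> points I typed in sit too close to a single plane or line, since a
>> near-degenerate layout could plausibly throw a survey-constrained solve
>> off. A quick numpy check, placeholder values:)
>>
>> import numpy as np
>>
>> # the LIDAR XYZ values given to the user tracks (placeholders)
>> survey = np.array([[1.2, 0.4, 5.1], [3.7, 0.9, 6.3], [2.5, 2.1, 4.8],
>>                    [0.8, 1.7, 7.0], [4.1, 0.2, 5.5], [2.9, 1.3, 6.8]])
>>
>> # singular values of the centred points: if the smallest is near zero,
>> # the points lie close to a plane (or a line)
>> spread = np.linalg.svd(survey - survey.mean(axis=0), compute_uv=False)
>> print("spread along the three principal axes:", spread)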
>>
>> There is a section in the documentation about using 3D survey points for
>> the camera solve, but it seems tied to 3D geometry/point positions that are
>> already generated from an existing still-camera solve, whereas I wanted to
>> integrate external 3D survey points like the LIDAR...
>>
>> Maybe I didn't create enough user tracks across enough frames, or are there
>> specific steps I should take?
>>
>> Btw, if anyone knows how to bring 8 GB of LIDAR data (170 million points)
>> into Nuke, I am all ears.
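>>
>> (I assume some blunt decimation would be needed first anyway, e.g. keeping
>> every Nth point of an ASCII export; a throwaway sketch with placeholder
>> paths and step:)
>>
>> # keep every Nth point of an ASCII .xyz export (placeholder paths/step)
>> STEP = 200   # 170M points / 200 is roughly 850k points
>> with open('/path/to/scan.xyz') as src, open('/path/to/scan_small.xyz', 'w') as dst:
>>     for i, line in enumerate(src):
>>         if i % STEP == 0:
>>             dst.write(line)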
>>
>> Thanks!
>>
>>
>>
>>
>
>
_______________________________________________
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
