Hi Ivan,
thanks for your help. I'm not sure, however, that I understand you correctly
and that we're talking about the same thing ;)
So let me try to explain better. Here's what I'm doing:
1.) Make a Vector4 for the bottom-left corner of a card that sits at z=-1:
pt=nuke.math.Vector4(-0.5,-0.5,-1,1)
2.) Get the projection matrix for my camera, which sits at z=3. As you can see
below, the camera transforms are already baked into the matrix:
cameraMat=nukescripts.snap3d.cameraProjectionMatrix(nuke.toNode("Camera1"))
{512, 0, 0, 0, 0, 512, 0, 0, -128, -128, -1.00002, -1, 384, 384, 2.80006, 3}
3.) Project the point by multiplying the vector by the matrix:
result=cameraMat*pt
{256, 256, 3.80008, 4}
Then I want to invert the whole thing, so I do the following:
1.) Invert the camera projection matrix:
cameraMatInv=cameraMat.inverse()
2.) Unproject by multiplying the inverse matrix by the result vector from
above:
ptNew=cameraMatInv*result
{-0.5, -0.5, -1, 1}
Everything works fine up to here. When I interpret my result vector, it seems
that w = distance from camera to object, which is exactly my deep value. To get
x and y pixel coordinates I divide x/w and y/w, as you said. In this process I
seem to drop my z value, as I don't need it.
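Since nuke.math is only available inside a Nuke session, here's the same
walkthrough as a plain-Python sketch. That the flat list Nuke prints for a
Matrix4 is column-major is my assumption, but with that layout the sketch
reproduces the numbers above exactly:

```python
# Plain-Python sketch of the projection walkthrough above.
# Assumption (mine): the flat 16-value list Nuke prints for a Matrix4
# is column-major; with that layout the numbers in this mail come out.

def mat_from_flat(vals):
    """Reshape the flat column-major list into row-major rows."""
    return [[vals[col * 4 + row] for col in range(4)] for row in range(4)]

def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a Vector4."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

cameraMat = mat_from_flat([512, 0, 0, 0,
                           0, 512, 0, 0,
                           -128, -128, -1.00002, -1,
                           384, 384, 2.80006, 3])

pt = [-0.5, -0.5, -1.0, 1.0]     # bottom-left corner of the card at z=-1
result = mat_vec(cameraMat, pt)  # ~[256, 256, 3.80008, 4]

# w is the camera-to-point distance (my deep value); dividing x and y
# by it gives the pixel coordinates, and z is dropped.
pixel = [result[0] / result[3], result[1] / result[3]]  # [64.0, 64.0]
```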
If, however, I now have a deep image and the corresponding camera, I have the
following values: x=64, y=64, deep=4, z=0.25.
To build my Vector4 in homogeneous coordinates, I would normally do the
following:
x=x*deep=64*4=256
y=y*deep=64*4=256
z=z*deep=0.25*4=1
w=deep
so the resulting Vector is
px=nuke.math.Vector4(256,256,1,4)
{256,256,1,4}
That already doesn't look like the result vector from above, so unprojecting it
with
cameraMatInv*px
doesn't return the point I'm looking for:
{-0.5, -0.5, 41.0008, 15.0003}
So now my question is: when I have a deep image and know x, y, zDepth, and
deep, as well as all the camera parameters, how do I calculate the z of my
Vector4?
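If it helps, here's a sketch of the relation I currently suspect, under an
unconfirmed assumption: that the camera uses a standard OpenGL-style
perspective projection with near clip n and far clip f. Nuke's default
clipping planes (n=0.1, f=10000) do reproduce the -1.00002 and 2.80006 entries
in the matrix above, and the formula gives the right z for this one case, but
I'd like to confirm it in general:

```python
# Sketch of the relation I suspect (unconfirmed assumption: a standard
# OpenGL-style projection with near clip n and far clip f; Nuke's
# defaults n=0.1, f=10000 match the -1.00002 / 2.80006 matrix entries).
# The camera looks down -z, so camera-space z = -deep, and the third
# row of the projection then gives:
#   z_clip = -((f + n) / (f - n)) * z_cam - 2*f*n / (f - n)

def clip_z_from_deep(deep, n=0.1, f=10000.0):
    return ((f + n) * deep - 2.0 * f * n) / (f - n)

z = clip_z_from_deep(4.0)            # ~3.80008, matching the projected
                                     # vector {256, 256, 3.80008, 4}
px = [64 * 4.0, 64 * 4.0, z, 4.0]    # should unproject to (-0.5, -0.5, -1)
```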
I hope that's clearer now.
Thanks a lot for your help!
cheers,
Patrick
----- Original Message -----
From: [email protected]
To: [email protected]
Date: 04.11.2012 02:19:14
Subject: Re: [Nuke-python] Projection matrix question
> Hi Patrick,
>
> If I understand your question correctly, you'll want to follow these steps:
>
> 1. un-project your 2D coord into 3D space (using cam projection matrix)
> 2. divide the resulting vector by its 4th component (w coordinate)
> 3. Scale the vector so that its Z == -depth_sample
> 4. Apply camera transformations (if any) to go from camera to world
> space (using cam transform matrix)
>
> Hope that helps.
>
> Cheers,
> Ivan
>
> On Fri, Nov 2, 2012 at 3:30 AM, Patrick Heinen
> <[email protected]> wrote:
>> Hi everyone,
>>
>> I'm trying to project a point from a deep image back to 3d space. I'm
>> currently taking the camera projection matrix, inverting it and multiplying
>> it with a Vector 4 {pixelX*deep,pixelY*deep,deep,deep}.
>> However this doesn't give me the exact position.
>> When projecting from 3d to 2d the 3rd position of the resulting vector is
>> not equal to the 4th position, the value changes as a function of the
>> distance from camera to object and the near crop.
>> To get the exact point in 3d I now need to be able to reconstruct this 3rd
>> position in the vector.
>> Does anyone know how it is calculated?
>>
>> Thanks
>> Patrick
>> _______________________________________________
>> Nuke-python mailing list
>> [email protected], http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-python