Sorry, a few corrections. I was just typing that off the top of my head, but
before I send you down the wrong path, please note the following:

- You'll want to multiply your vectors by the camera projection matrix, not
the inverse.
- If you use cameraProjectionMatrix(cameraNode), there's no need to scale the
result up to the image resolution. The function handles that already (it uses
the root format).
- Use Vector4 instead of Vector3, with the w component set to 1:
Vector4(x, y, z, 1)
- Multiply that by the camera matrix
- The resulting vector will be in homogeneous coordinates, so divide by w.
In pseudo-code:

screenPos = cam_matrix * Vector4(x,y,z,1)
screenPos /= screenPos.w
# Now you should have your screen coordinates in screenPos.x and screenPos.y
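
Spelled out in plain Python, with the matrix written as a flat row-major list
of 16 floats (an assumption for illustration; inside Nuke you'd use
nuke.math.Vector4 and the Matrix4 returned by cameraProjectionMatrix, and the
whole thing is just the two pseudo-code lines above):

```python
def project_point(cam_matrix, x, y, z):
    """Project a world-space point through a 4x4 camera projection matrix
    (given row-major as a flat list of 16 floats) and do the homogeneous
    divide. Returns (screen_x, screen_y)."""
    # Multiply the matrix by the column vector (x, y, z, 1)
    px = cam_matrix[0] * x + cam_matrix[1] * y + cam_matrix[2] * z + cam_matrix[3]
    py = cam_matrix[4] * x + cam_matrix[5] * y + cam_matrix[6] * z + cam_matrix[7]
    # The bottom row gives the homogeneous w component
    pw = cam_matrix[12] * x + cam_matrix[13] * y + cam_matrix[14] * z + cam_matrix[15]
    # Homogeneous divide: screen coordinates are x/w and y/w
    return (px / pw, py / pw)
```

(project_point is a made-up helper name; the point is just to show which
matrix rows feed x, y and w before the divide.)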

Once you get this right, the biggest pain will be getting the camera matrix
for an animated camera over a certain frame range. For that, you have two
options:

- Look at what nukescripts.snap3d.cameraProjectionMatrix() is doing, and
rewrite it so you can get the matrix at different frames (e.g. by reading the
camera's knobs with getValueAt(frame))
- Use that function as is, but you'll need a hack to loop over frames and
force everything to evaluate on each frame. The popular choice for that is
to use a temporary "CurveTool" node and make it execute on each frame.
Awkward, but it works :)
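
For the first option, the baking loop itself is simple once you have a
per-frame matrix. Here's a sketch in plain Python, where matrix_at is a
hypothetical stand-in for a frame-aware rewrite of cameraProjectionMatrix
(in Nuke you'd build it by sampling the camera's knobs with
knob.getValueAt(frame)), and matrices are flat row-major lists of 16 floats:

```python
def bake_screen_track(matrix_at, point, first, last):
    """Bake a 3D point into per-frame 2D screen coordinates.

    matrix_at(frame) must return the camera projection matrix for that
    frame (hypothetical stand-in for a frame-aware version of
    cameraProjectionMatrix). Returns {frame: (screen_x, screen_y)}.
    """
    x, y, z = point
    track = {}
    for frame in range(first, last + 1):
        m = matrix_at(frame)
        # Project (x, y, z, 1) and do the homogeneous divide
        w = m[12] * x + m[13] * y + m[14] * z + m[15]
        sx = (m[0] * x + m[1] * y + m[2] * z + m[3]) / w
        sy = (m[4] * x + m[5] * y + m[6] * z + m[7]) / w
        track[frame] = (sx, sy)
    return track
```

The baked dict can then be written into a Tracker node frame by frame (e.g.
with setValueAt), which avoids the CurveTool trick entirely.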


P.S. I just realized you were already using Reconcile3D to do all that work
for you. Why don't you just re-use that technique for each point in the
selection? What problems are you facing?

Cheers,
Ivan


On Tue, Mar 1, 2011 at 9:05 PM, Ivan Busquets <[email protected]> wrote:

> Hey Jacob,
>
> Since you're already using the snap3d module, have a look at
> "projectSelectedPoints". That should give you the selected points projected
> into screen-space coordinates.
>
> But I guess you'll ultimately want to get an animated curve for each point.
> For that, you may need to do a bit more work and handle the transformations
> in a loop yourself. Still, there are other functions in there that can
> help:
>
> - cameraProjectionMatrix(cameraNode) -> gives you a matrix object the
> inverse of which you can use to convert your Vector3 from world to screen
> space.
>
> If it's of any help, I've done something very similar but using matrices
> from the metadata of prman renders.
>
> http://www.nukepedia.com/gizmos/gizmo-downloads/metadata/prmantracker/
>
> The methods should be the same, though (get the inverse of your camera's
> projection matrix, use that to get normalized screen coordinates, scale that
> to your format's resolution, and copy the resulting XY coordinates into your
> tracker / cornerpin node).
>
> Hope that helps. Can't check any of that now, but I'm happy to elaborate if
> this sounds confusing.
>
> Cheers,
> Ivan
>
>
>
> On Tue, Mar 1, 2011 at 8:30 PM, Jacob Harris <[email protected]> wrote:
>
>> I've written a script to semi-automate a Reconcile3d workflow. It uses
>>
>> for pt in nukescripts.snap3d.selectedPoints():
>>    pos = nuke.math.Vector3(pt.x, pt.y, pt.z)
>>
>> to grab the currently selected point, feeds that to Reconcile3d,
>> executes it, and copies the output to a Tracker node.
>>
>> What I want to do now is grab multiple points and get a separate
>> output curve from each one in turn. I seem to have hit a wall, though.
>> I've picked apart the snap3d module as best I could as well as any
>> post containing the words "Ivan Busquets" ;) Not sure if this requires
>> extensive camera matrix calculations or if I'm missing something
>> easier.
>> _______________________________________________
>> Nuke-python mailing list
>> [email protected]
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-python
>>
>
>