Re: [Nuke-users] 3d CameraShake

2011-09-06 Thread Farhad Mohasseb
The real answer is just to grab any kind of camera, shoot something while
holding it by hand, and then solve that footage: you can't get more
realistic than real ;)

If you don't feel like doing that, you could always do a 2D track, use it
at varying strengths on the x and y axes, and drive z with a cosine-curve
expression, multiplying it up or down as needed.

The technical way is to set up a series of expressions on the three
translation axes and the rotation axes, where you amplify random() and its
frequency based on a seed and the frame. If you take apart the CameraShake
gizmo and look at its Transform, you can see what I'm talking about.

Hope this helps. If you get stuck let me know and I can dig up the one I
wrote ...

Cheers,
Farhad
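
A rough Python sketch of that seeded-random idea, just to show the shape of it. Inside Nuke you would put random()-based expressions directly on the Camera's translate/rotate knobs; the function name and all values below are made up for illustration:

```python
import random

def shake(frame, seed, amplitude, frequency=1.0):
    # One axis of shake: a deterministic pseudo-random value per frame.
    # Seeding from (seed, frame) keeps the curve stable across renders;
    # the result is remapped to the range [-amplitude, amplitude].
    rng = random.Random(seed * 100003 + int(frame * frequency))
    return amplitude * (2.0 * rng.random() - 1.0)

# Hypothetical per-axis settings: translation shakes harder than rotation.
frame = 101
offsets = {
    "tx": shake(frame, seed=1, amplitude=0.05, frequency=2.0),
    "ty": shake(frame, seed=2, amplitude=0.05, frequency=2.0),
    "rz": shake(frame, seed=3, amplitude=0.3),
}
```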
On Sep 6, 2011 6:13 PM, "Spider"  wrote:
> Hi Everyone,
>
> I would like to know the best way to simulate camera hand shake, but
> with a 3D camera.
> I already tried to search whether anybody has asked this, and I also tried
> converting the 2D CameraShake gizmo node into a group
> in order to copy its expressions to the z axis...
>
> Any ideas?
>
> Thanks
>
> Spider
>
> --
> *Luddnel Spider Magne **|** Director - Lead Motion Compositor*
> 555Lab – Alchemy between you and us
> 24 rue du Pré St-Gervais 93500 Pantin
> Office (+33)148 453 555 | Fax (+33)171 864 387
> Mobile (+33)699 434 555 | 555lab.com 
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: Passing an object track through multiple transforms

2011-09-06 Thread Ivan Busquets
> Nuke's appears to be not be an integer, but the values in your tree appear
> to either be 0.0 or 0.5, which is slightly odd
>

Bbox boundaries in Nuke are also integers (just like Shake's DOD). The
output value is always n.0 or n.5 because I'm averaging the center of the
bbox.

Of course this is by no means an accurate way of getting the transformation
of a given point, but more an idea of something he could do without
resorting to the NDK.

Ideally, you'd want to write a plugin for that, I agree. Either one that
exposes the concatenated matrix, or one where you could plug a stack of
transforms and directly apply the result to one or more points. A
"Reconcile2D" ? :)


On Tue, Sep 6, 2011 at 8:47 PM, Ben Dickson  wrote:

> Heh, I remember trying the exact same thing in Shake years ago, to
> transform a roto point instead of using a 4-point stablise - the problem is
> the dod was an integer
>
> Nuke's appears to be not be an integer, but the values in your tree appear
> to either be 0.0 or 0.5, which is slightly odd
>
> Seems like it'd be fairly simple to make a plugin which exposes the 2D
> transform matrix,
> http://docs.thefoundry.co.uk/nuke/63/ndkdevguide/knobs-and-handles/output-knobs.html
>
> Ivan Busquets wrote:
>
>>it looks like nuke gives a rounding error using that setup (far
>>values are .99902 instead of 1.0).  probably negligible but I like
>>1.0 betta.
>>
>>
>> One small thing about both those UV-map generation methods. Keep in mind
>> that STMap samples pixels at the center, so you'll need to account for that
>> half-pixel difference in your expression. Otherwise the resulting map is
>> going to introduce a bit of unnecessary filtering when you feed it to an
>> STmap.
>>
>> An expression like this should give you a 1-to-1 result when you feed it
>> into an STMap:
>> --------
>> set cut_paste_input [stack 0]
>> version 6.3 v2
>> push $cut_paste_input
>> Expression {
>>  expr0 (x+0.5)/(width)
>>  expr1 (y+0.5)/(height)
>>  name Expression2
>>  selected true
>>  xpos -92
>>  ypos -143
>> }
>> --------
>>
>> With regards to the original question, though, it's a shame that one
>> doesn't have access to the concatenated 2d matrix from 2D transform nodes
>> within expressions. Otherwise you could just multiply your source point by
>> the concatenated matrix and get its final position. This information is
>> indeed passed down the tree, but it's not accessible for anything but
>> plugins (that I know).
>>
>> You could probably take advantage of the fact that the bbox is transformed
>> the same way as your image, and you CAN ask for the bbox boundaries using
>> expressions. So, you could have something with a very small bbox centered
>> around your point of interest, transform that using the same transforms
>> you're using for your kites, and then get the center of the transformed
>> bbox, if that makes sense. It's a bit convoluted, but it might do the trick
>> for you.
>>
>> Here's an example:
>> --------
>> set cut_paste_input [stack 0]
>> version 6.3 v2
>> push $cut_paste_input
>> Group {
>>  name INPUT_POSITION
>>  selected true
>>  xpos -883
>>  ypos -588
>>  addUserKnob {20 User}
>>  addUserKnob {12 position}
>>  position {1053.5 592}
>> }
>>  Input {
>>  inputs 0
>>  name Input1
>>  xpos -469
>>  ypos -265
>>  }
>>  Rectangle {
>>  area {{parent.position.x i x1 962} {parent.position.y i x1 391} {area.x+1
>> i} {area.y+1 i}}
>>  name Rectangle1
>>  selected true
>>  xpos -469
>>  ypos -223
>>  }
>>  Output {
>>  name Output1
>>  xpos -469
>>  ypos -125
>>  }
>> end_group
>> Transform {
>>  translate {36 0}
>>  center {1052 592}
>>  shutteroffset centred
>>  name Transform1
>>  selected true
>>  xpos -883
>>  ypos -523
>> }
>> set C48d17580 [stack 0]
>> Transform {
>>  translate {0 -11}
>>  rotate -34
>>  center {1052 592}
>>  shutteroffset centred
>>  name Transform2
>>  selected true
>>  xpos -883
>>  ypos -497
>> }
>> set C4489ddc0 [stack 0]
>> Transform {
>>  scale 1.36
>>  center {1052 592}
>>  shutteroffset centred
>>  name Transform3
>>  selected true
>>  xpos -883
>>  ypos -471
>> }
>> set C4d2c2290 [stack 0]
>> Group {
>>  name OUT_POSITION
>>  selected true
>>  xpos -883
>>  ypos -409
>>  addUserKnob {20 User}
>>  addUserKnob {12 out_position}
>>  out_position {{"(input.bbox.x + input.bbox.r) / 2"} {"(input.bbox.y +
>> input.bbox.t) / 2"}}
>> }
>>  Input {
>>  inputs 0
>>  name Input1
>>  selected true
>>  xpos -469
>>  ypos -265
>>  }
>>  Output {
>>  name Output1
>>  xpos -469
>>  ypos -125
>>  }
>> end_group
>> CheckerBoard2 {
>>  inputs 0
>>  name CheckerBoard2
>>  selected true
>>  xpos -563
>>  ypos -623
>> }
>> clone $C48d17580 {
>>  xpos -563
>>  ypos -521
>>  selected true
>> }
>> clone $C4489ddc0 {
>>  xpos -563
>>  ypos -495
>>  select

Re: [Nuke-users] Re: Passing an object track through multiple transforms

2011-09-06 Thread Ben Dickson
Heh, I remember trying the exact same thing in Shake years ago, to 
transform a roto point instead of using a 4-point stabilise - the problem 
was that the DOD was an integer.


Nuke's appears not to be an integer, but the values in your tree 
appear to be either 0.0 or 0.5, which is slightly odd.


Seems like it'd be fairly simple to make a plugin which exposes the 2D 
transform matrix,

http://docs.thefoundry.co.uk/nuke/63/ndkdevguide/knobs-and-handles/output-knobs.html

Ivan Busquets wrote:

it looks like nuke gives a rounding error using that setup (far
values are .99902 instead of 1.0).  probably negligible but I like
1.0 betta.


One small thing about both those UV-map generation methods. Keep in mind 
that STMap samples pixels at the center, so you'll need to account for 
that half-pixel difference in your expression. Otherwise the resulting 
map is going to introduce a bit of unnecessary filtering when you feed 
it to an STmap.


An expression like this should give you a 1-to-1 result when you feed it 
into an STMap:


set cut_paste_input [stack 0]
version 6.3 v2
push $cut_paste_input
Expression {
 expr0 (x+0.5)/(width)
 expr1 (y+0.5)/(height)
 name Expression2
 selected true
 xpos -92
 ypos -143
}


With regards to the original question, though, it's a shame that one 
doesn't have access to the concatenated 2d matrix from 2D transform 
nodes within expressions. Otherwise you could just multiply your source 
point by the concatenated matrix and get its final position. This 
information is indeed passed down the tree, but it's not accessible for 
anything but plugins (that I know).


You could probably take advantage of the fact that the bbox is 
transformed the same way as your image, and you CAN ask for the bbox 
boundaries using expressions. So, you could have something with a very 
small bbox centered around your point of interest, transform that using 
the same transforms you're using for your kites, and then get the center 
of the transformed bbox, if that makes sense. It's a bit convoluted, but 
it might do the trick for you.


Here's an example:

set cut_paste_input [stack 0]
version 6.3 v2
push $cut_paste_input
Group {
 name INPUT_POSITION
 selected true
 xpos -883
 ypos -588
 addUserKnob {20 User}
 addUserKnob {12 position}
 position {1053.5 592}
}
 Input {
  inputs 0
  name Input1
  xpos -469
  ypos -265
 }
 Rectangle {
  area {{parent.position.x i x1 962} {parent.position.y i x1 391} 
{area.x+1 i} {area.y+1 i}}

  name Rectangle1
  selected true
  xpos -469
  ypos -223
 }
 Output {
  name Output1
  xpos -469
  ypos -125
 }
end_group
Transform {
 translate {36 0}
 center {1052 592}
 shutteroffset centred
 name Transform1
 selected true
 xpos -883
 ypos -523
}
set C48d17580 [stack 0]
Transform {
 translate {0 -11}
 rotate -34
 center {1052 592}
 shutteroffset centred
 name Transform2
 selected true
 xpos -883
 ypos -497
}
set C4489ddc0 [stack 0]
Transform {
 scale 1.36
 center {1052 592}
 shutteroffset centred
 name Transform3
 selected true
 xpos -883
 ypos -471
}
set C4d2c2290 [stack 0]
Group {
 name OUT_POSITION
 selected true
 xpos -883
 ypos -409
 addUserKnob {20 User}
 addUserKnob {12 out_position}
 out_position {{"(input.bbox.x + input.bbox.r) / 2"} {"(input.bbox.y + 
input.bbox.t) / 2"}}

}
 Input {
  inputs 0
  name Input1
  selected true
  xpos -469
  ypos -265
 }
 Output {
  name Output1
  xpos -469
  ypos -125
 }
end_group
CheckerBoard2 {
 inputs 0
 name CheckerBoard2
 selected true
 xpos -563
 ypos -623
}
clone $C48d17580 {
 xpos -563
 ypos -521
 selected true
}
clone $C4489ddc0 {
 xpos -563
 ypos -495
 selected true
}
clone $C4d2c2290 {
 xpos -563
 ypos -469
 selected true
}


Cheers,
Ivan






On Tue, Sep 6, 2011 at 6:09 PM, J Bills wrote:


sure - looks even cleaner than the ramps crap done from memory -
actually, now that I look at it for some reason it looks like nuke
gives a rounding error using that setup (far values are .99902
instead of 1.0).  probably negligible but I like 1.0 betta.  nice
one AK. 


so play around with this, joshua -

 
set cut_paste_input [stack 0]

version 6.2 v4

Constant {
 inputs 0
 channels rgb
 name Constant2
 selected true
 xpos 184
 ypos -174

}
Expression {
 expr0 x/(width-1)
 expr1 y/(height-1)
 name Expression2
 selected true
 xpos 184
 ypos -71

}
NoOp {
 name WARP_GOES_HERE
 tile_color 0xff00ff
 selected true
 xpos 184
 ypos 11

}
Shuffle {
 out motion
 name Shuffle
 label "choose motion\nor other output\nchannel"
 selected true
 xpos 184
 ypos 83

}
push 0
STMap {
 inputs 2
 channels motion
 name STMap1
 select

Re: [Nuke-users] Re: Passing an object track through multiple transforms

2011-09-06 Thread Ivan Busquets
>
> it looks like nuke gives a rounding error using that setup (far values are
> .99902 instead of 1.0).  probably negligible but I like 1.0 betta.
>

One small thing about both those UV-map generation methods. Keep in mind
that STMap samples pixels at the center, so you'll need to account for that
half-pixel difference in your expression. Otherwise the resulting map is
going to introduce a bit of unnecessary filtering when you feed it to an
STmap.

An expression like this should give you a 1-to-1 result when you feed it
into an STMap:

set cut_paste_input [stack 0]
version 6.3 v2
push $cut_paste_input
Expression {
 expr0 (x+0.5)/(width)
 expr1 (y+0.5)/(height)
 name Expression2
 selected true
 xpos -92
 ypos -143
}
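
A quick Python check of the half-pixel reasoning (a sketch of the coordinate math, not Nuke code): an STMap turns the stored value back into a pixel coordinate by multiplying by the image size, and with the (x+0.5)/width form that lands exactly on each pixel's center, so no filtering is introduced:

```python
width = 1920

def st_value(x, width):
    # What the Expression node above stores for pixel column x.
    return (x + 0.5) / width

def lookup_coord(v, width):
    # Continuous coordinate an STMap samples at; pixel x spans [x, x+1),
    # so its center sits at x + 0.5.
    return v * width

# Round trip lands on pixel centers (up to float rounding).
for x in (0, 959, 1919):
    assert abs(lookup_coord(st_value(x, width), width) - (x + 0.5)) < 1e-9

# With x/(width-1) instead, the last pixel stores 1.0, which maps to
# coordinate 1920.0: half a pixel past the last center (1919.5), so the
# STMap has to filter between neighboring pixels.
```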


With regards to the original question, though, it's a shame that one doesn't
have access to the concatenated 2d matrix from 2D transform nodes within
expressions. Otherwise you could just multiply your source point by the
concatenated matrix and get its final position. This information is indeed
passed down the tree, but it's not accessible for anything but plugins (that
I know).

You could probably take advantage of the fact that the bbox is transformed
the same way as your image, and you CAN ask for the bbox boundaries using
expressions. So, you could have something with a very small bbox centered
around your point of interest, transform that using the same transforms
you're using for your kites, and then get the center of the transformed
bbox, if that makes sense. It's a bit convoluted, but it might do the trick
for you.

Here's an example:

set cut_paste_input [stack 0]
version 6.3 v2
push $cut_paste_input
Group {
 name INPUT_POSITION
 selected true
 xpos -883
 ypos -588
 addUserKnob {20 User}
 addUserKnob {12 position}
 position {1053.5 592}
}
 Input {
  inputs 0
  name Input1
  xpos -469
  ypos -265
 }
 Rectangle {
  area {{parent.position.x i x1 962} {parent.position.y i x1 391} {area.x+1
i} {area.y+1 i}}
  name Rectangle1
  selected true
  xpos -469
  ypos -223
 }
 Output {
  name Output1
  xpos -469
  ypos -125
 }
end_group
Transform {
 translate {36 0}
 center {1052 592}
 shutteroffset centred
 name Transform1
 selected true
 xpos -883
 ypos -523
}
set C48d17580 [stack 0]
Transform {
 translate {0 -11}
 rotate -34
 center {1052 592}
 shutteroffset centred
 name Transform2
 selected true
 xpos -883
 ypos -497
}
set C4489ddc0 [stack 0]
Transform {
 scale 1.36
 center {1052 592}
 shutteroffset centred
 name Transform3
 selected true
 xpos -883
 ypos -471
}
set C4d2c2290 [stack 0]
Group {
 name OUT_POSITION
 selected true
 xpos -883
 ypos -409
 addUserKnob {20 User}
 addUserKnob {12 out_position}
 out_position {{"(input.bbox.x + input.bbox.r) / 2"} {"(input.bbox.y +
input.bbox.t) / 2"}}
}
 Input {
  inputs 0
  name Input1
  selected true
  xpos -469
  ypos -265
 }
 Output {
  name Output1
  xpos -469
  ypos -125
 }
end_group
CheckerBoard2 {
 inputs 0
 name CheckerBoard2
 selected true
 xpos -563
 ypos -623
}
clone $C48d17580 {
 xpos -563
 ypos -521
 selected true
}
clone $C4489ddc0 {
 xpos -563
 ypos -495
 selected true
}
clone $C4d2c2290 {
 xpos -563
 ypos -469
 selected true
}


Cheers,
Ivan






On Tue, Sep 6, 2011 at 6:09 PM, J Bills  wrote:

> sure - looks even cleaner than the ramps crap done from memory - actually,
> now that I look at it for some reason it looks like nuke gives a rounding
> error using that setup (far values are .99902 instead of 1.0).  probably
> negligible but I like 1.0 betta.  nice one AK.
>
> so play around with this, joshua -
>
>
> set cut_paste_input [stack 0]
> version 6.2 v4
>
> Constant {
>  inputs 0
>  channels rgb
>  name Constant2
>  selected true
>  xpos 184
>  ypos -174
>
> }
> Expression {
>  expr0 x/(width-1)
>  expr1 y/(height-1)
>  name Expression2
>  selected true
>  xpos 184
>  ypos -71
>
> }
> NoOp {
>  name WARP_GOES_HERE
>  tile_color 0xff00ff
>  selected true
>  xpos 184
>  ypos 11
>
> }
> Shuffle {
>  out motion
>  name Shuffle
>  label "choose motion\nor other output\nchannel"
>  selected true
>  xpos 184
>  ypos 83
>
> }
> push 0
> STMap {
>  inputs 2
>  channels motion
>  name STMap1
>  selected true
>  xpos 307
>  ypos 209
>
> }
>
>
>
>
>
> On Tue, Sep 6, 2011 at 5:23 PM, Anthony Kramer 
> wrote:
>
>> Heres a 1-node UVmap for you:
>>
>> set cut_paste_input [stack 0]
>> version 6.3 v2
>> push $cut_paste_input
>> Expression {
>>  expr0 x/(width-1)
>>  expr1 y/(height-1)
>>  name Expression2
>>  selected true
>>  xpos -480
>>  ypos 2079
>> }
>>
>>
>>
>> On Tue, Sep 6, 2011 at 4:46 PM, J Bills  wrote:
>>
>>> sure - that's what he's saying.  think of the uv map as creating a
>>> "blueprint" of your transforms or distortions.
>>>
>>> after you have that blueprint, you can run whatever you want through the
>>> same distortion and repu

Re: [Nuke-users] nuke renders and server loads

2011-09-06 Thread Ivan Busquets
Or, to go even further and remove any differences between multi-channel and
non-multi-channel exrs, have a look at this script instead (attached).

Even when you're reading in the same multi-channel exr, my experience is
that shuffling out to rgba and doing the merges in rgba only uses fewer
resources than picking the channels in the merges.


On Tue, Sep 6, 2011 at 6:37 PM, Ivan Busquets wrote:

> Sure, I understand what you're saying.
> The example is only bundled that way because I didn't want to send a huge
> multi-channel exr file.
> But if you were to write out each generator to a file, plus a multi-channel
> exr at the end of all the Copy nodes, and then redo those trees with actual
> inputs, the results are pretty much the same.
>
> At least, that's what I used in my original test.
>
> Sorry the example was half baked. Does that make sense?
>
>
>
> On Tue, Sep 6, 2011 at 6:32 PM, Deke Kincaid wrote:
>
>> Hi Ivan
>>
>> The thing is in your slower one in red of your example your first
>> copying/shuffling everything to another channel before merging them from
>> their respective channels.  The fast one there isn't any shuffling around of
>> channels first.  Your going in the opposite direction(shuffling to other
>> channels instead of to rgba).  The act of actually moving channels around is
>> what causes the hit no matter which direction your going.
>>
>> To make the test equal you would need to use generators that allow you to
>> create in a specific channel.  The Checkerboard and Colorbars in your
>> example doesn't have this ability.
>>
>>  -deke
>>
>> On Mon, Sep 5, 2011 at 23:06, Ivan Busquets wrote:
>>
>>> Hi,
>>>
>>> Found the script I sent a while back as an example of picking layers in
>>> merges using up more resources.
>>> Just tried it in 6.3, and I still get similar results.
>>>
>>> Script attached for reference. Try viewing/rendering each of the two
>>> groups while keeping an eye on memory usage of your Nuke process.
>>>
>>> Cheers,
>>> Ivan
>>>
>>>
>>>
>>> On Mon, Sep 5, 2011 at 11:42 AM, Ivan Busquets 
>>> wrote:
>>>
 Another thing is it sounds like you are shuffling out the channels to
> the rgb before you merge them.  This also does quite a hit in speed.  It 
> is
> far faster to merge and pick the channels you need rather then shuffling
> them out first.
>

 That's interesting. My experience has usually been quite the opposite. I
 find the same operations done in Merges after shuffling to rgb are faster,
 and definitely use less resources, than picking the relevant layers inside
 the Merge nodes.

 Back in v5, I sent a script to support as an example of this behavior,
 more specifically how using layers within the Merge nodes caused memory
 usage to go through the roof (and not respect the memory limit in the
 preferences). At the time, this was logged as a memory leak bug. I don't
 think this was ever resolved, but to be fair this is probably less of an
 issue nowadays with higher-specced workstations.

 Hearing that you find it faster to pick layers in a merge node than
 shuffling & merging makes me very curious, though. I wonder if, given 
 enough
 memory (so it's not depleted by the mentioned leak/overhead), some scripts
 may indeed run faster that way. Do you have any examples?

 And going back to the original topic, my experience with multi-channel
 exr files is:

 - Separate exr sequences for each aov/layer is faster than a single
 multi-channel exr, yes. As you mentioned, exr stores additional
 channels/layers in an interleaved fashion, so the reader has to step 
 through
 all of them before going to the next scanline, even if you're not using 
 them
 all. Even if you read each layer separately and copy them all into layers 
 in
 your script (so you get the equivalent of a multi-channel exr), this is
 still faster than using a multi-channel exr file.

 - When merging different layers coming from the same stream, I find
 performance to be better when shuffling layers to rgba and keeping merges 
 to
 operate on rgba. (although this is the opposite of what Deke said, so your
 mileage may vary)

 Cheers,
 Ivan

 On Thu, Sep 1, 2011 at 1:55 PM, Deke Kincaid wrote:

> Exr files are interleaved.  So when you look at some scanlines, you
> need to read in every single channel in the EXR from those scanlines even 
> if
> you only need one of them.  So if you have a multichannel file with 40
> channels but you only use rgba and one or two matte channels, then your
> going to incur a large hit.
>
> Another thing is it sounds like you are shuffling out the channels to
> the rgb before you merge them.  This also does quite a hit in speed.  It 
> is
> far faster to merge and pick the channels you need rather then shuffling
> them out first.
>
>>

Re: [Nuke-users] nuke renders and server loads

2011-09-06 Thread Ivan Busquets
Sure, I understand what you're saying.
The example is only bundled that way because I didn't want to send a huge
multi-channel exr file.
But if you were to write out each generator to a file, plus a multi-channel
exr at the end of all the Copy nodes, and then redo those trees with actual
inputs, the results are pretty much the same.

At least, that's what I used in my original test.

Sorry the example was half baked. Does that make sense?


On Tue, Sep 6, 2011 at 6:32 PM, Deke Kincaid  wrote:

> Hi Ivan
>
> The thing is in your slower one in red of your example your first
> copying/shuffling everything to another channel before merging them from
> their respective channels.  The fast one there isn't any shuffling around of
> channels first.  Your going in the opposite direction(shuffling to other
> channels instead of to rgba).  The act of actually moving channels around is
> what causes the hit no matter which direction your going.
>
> To make the test equal you would need to use generators that allow you to
> create in a specific channel.  The Checkerboard and Colorbars in your
> example doesn't have this ability.
>
> -deke
>
> On Mon, Sep 5, 2011 at 23:06, Ivan Busquets wrote:
>
>> Hi,
>>
>> Found the script I sent a while back as an example of picking layers in
>> merges using up more resources.
>> Just tried it in 6.3, and I still get similar results.
>>
>> Script attached for reference. Try viewing/rendering each of the two
>> groups while keeping an eye on memory usage of your Nuke process.
>>
>> Cheers,
>> Ivan
>>
>>
>>
>> On Mon, Sep 5, 2011 at 11:42 AM, Ivan Busquets wrote:
>>
>>> Another thing is it sounds like you are shuffling out the channels to the
 rgb before you merge them.  This also does quite a hit in speed.  It is far
 faster to merge and pick the channels you need rather then shuffling them
 out first.

>>>
>>> That's interesting. My experience has usually been quite the opposite. I
>>> find the same operations done in Merges after shuffling to rgb are faster,
>>> and definitely use less resources, than picking the relevant layers inside
>>> the Merge nodes.
>>>
>>> Back in v5, I sent a script to support as an example of this behavior,
>>> more specifically how using layers within the Merge nodes caused memory
>>> usage to go through the roof (and not respect the memory limit in the
>>> preferences). At the time, this was logged as a memory leak bug. I don't
>>> think this was ever resolved, but to be fair this is probably less of an
>>> issue nowadays with higher-specced workstations.
>>>
>>> Hearing that you find it faster to pick layers in a merge node than
>>> shuffling & merging makes me very curious, though. I wonder if, given enough
>>> memory (so it's not depleted by the mentioned leak/overhead), some scripts
>>> may indeed run faster that way. Do you have any examples?
>>>
>>> And going back to the original topic, my experience with multi-channel
>>> exr files is:
>>>
>>> - Separate exr sequences for each aov/layer is faster than a single
>>> multi-channel exr, yes. As you mentioned, exr stores additional
>>> channels/layers in an interleaved fashion, so the reader has to step through
>>> all of them before going to the next scanline, even if you're not using them
>>> all. Even if you read each layer separately and copy them all into layers in
>>> your script (so you get the equivalent of a multi-channel exr), this is
>>> still faster than using a multi-channel exr file.
>>>
>>> - When merging different layers coming from the same stream, I find
>>> performance to be better when shuffling layers to rgba and keeping merges to
>>> operate on rgba. (although this is the opposite of what Deke said, so your
>>> mileage may vary)
>>>
>>> Cheers,
>>> Ivan
>>>
>>> On Thu, Sep 1, 2011 at 1:55 PM, Deke Kincaid wrote:
>>>
 Exr files are interleaved.  So when you look at some scanlines, you need
 to read in every single channel in the EXR from those scanlines even if you
 only need one of them.  So if you have a multichannel file with 40 channels
 but you only use rgba and one or two matte channels, then your going to
 incur a large hit.

 Another thing is it sounds like you are shuffling out the channels to
 the rgb before you merge them.  This also does quite a hit in speed.  It is
 far faster to merge and pick the channels you need rather then shuffling
 them out first.

 -deke

 On Thu, Sep 1, 2011 at 12:37, Ryan O'Phelan wrote:

> Recently I've been trying to evaluate the load of nuke renders on our
> file server, and ran a few tests comparing multichannel vs. 
> non-multichannel
> reads, and my initial test results were opposite of what I was expecting.
> My tests showed that multichannel comps rendered about 20-25% slower,
> and made about 25% more load on the server in terms of disk reads. I was
> expecting the opposite, since there are fewer files being called with
> multi

Re: [Nuke-users] nuke renders and server loads

2011-09-06 Thread Deke Kincaid
Hi Ivan

The thing is, in the slower (red) one in your example, you're first
copying/shuffling everything to another channel before merging them from
their respective channels.  The fast one doesn't do any shuffling of
channels first.  You're going in the opposite direction (shuffling to other
channels instead of to rgba).  The act of actually moving channels around is
what causes the hit, no matter which direction you're going.

To make the test equal you would need to use generators that allow you to
create in a specific channel.  The CheckerBoard and ColorBars in your
example don't have this ability.

-deke

On Mon, Sep 5, 2011 at 23:06, Ivan Busquets  wrote:

> Hi,
>
> Found the script I sent a while back as an example of picking layers in
> merges using up more resources.
> Just tried it in 6.3, and I still get similar results.
>
> Script attached for reference. Try viewing/rendering each of the two groups
> while keeping an eye on memory usage of your Nuke process.
>
> Cheers,
> Ivan
>
>
>
> On Mon, Sep 5, 2011 at 11:42 AM, Ivan Busquets wrote:
>
>> Another thing is it sounds like you are shuffling out the channels to the
>>> rgb before you merge them.  This also does quite a hit in speed.  It is far
>>> faster to merge and pick the channels you need rather then shuffling them
>>> out first.
>>>
>>
>> That's interesting. My experience has usually been quite the opposite. I
>> find the same operations done in Merges after shuffling to rgb are faster,
>> and definitely use less resources, than picking the relevant layers inside
>> the Merge nodes.
>>
>> Back in v5, I sent a script to support as an example of this behavior,
>> more specifically how using layers within the Merge nodes caused memory
>> usage to go through the roof (and not respect the memory limit in the
>> preferences). At the time, this was logged as a memory leak bug. I don't
>> think this was ever resolved, but to be fair this is probably less of an
>> issue nowadays with higher-specced workstations.
>>
>> Hearing that you find it faster to pick layers in a merge node than
>> shuffling & merging makes me very curious, though. I wonder if, given enough
>> memory (so it's not depleted by the mentioned leak/overhead), some scripts
>> may indeed run faster that way. Do you have any examples?
>>
>> And going back to the original topic, my experience with multi-channel exr
>> files is:
>>
>> - Separate exr sequences for each aov/layer is faster than a single
>> multi-channel exr, yes. As you mentioned, exr stores additional
>> channels/layers in an interleaved fashion, so the reader has to step through
>> all of them before going to the next scanline, even if you're not using them
>> all. Even if you read each layer separately and copy them all into layers in
>> your script (so you get the equivalent of a multi-channel exr), this is
>> still faster than using a multi-channel exr file.
>>
>> - When merging different layers coming from the same stream, I find
>> performance to be better when shuffling layers to rgba and keeping merges to
>> operate on rgba. (although this is the opposite of what Deke said, so your
>> mileage may vary)
>>
>> Cheers,
>> Ivan
>>
>> On Thu, Sep 1, 2011 at 1:55 PM, Deke Kincaid wrote:
>>
>>> Exr files are interleaved.  So when you look at some scanlines, you need
>>> to read in every single channel in the EXR from those scanlines even if you
>>> only need one of them.  So if you have a multichannel file with 40 channels
>>> but you only use rgba and one or two matte channels, then your going to
>>> incur a large hit.
>>>
>>> Another thing is it sounds like you are shuffling out the channels to the
>>> rgb before you merge them.  This also does quite a hit in speed.  It is far
>>> faster to merge and pick the channels you need rather then shuffling them
>>> out first.
>>>
>>> -deke
>>>
>>> On Thu, Sep 1, 2011 at 12:37, Ryan O'Phelan wrote:
>>>
 Recently I've been trying to evaluate the load of nuke renders on our
 file server, and ran a few tests comparing multichannel vs. 
 non-multichannel
 reads, and my initial test results were opposite of what I was expecting.
 My tests showed that multichannel comps rendered about 20-25% slower,
 and made about 25% more load on the server in terms of disk reads. I was
 expecting the opposite, since there are fewer files being called with
 multichannel reads.

 For what it's worth, all reads were zip1 compressed EXRs and I tested
 real comps, as well as extremely simplified comps where the multichannel
 files were branched and then fed into a contact sheet. I was monitoring
 performance using the performance monitor on the file server using only 20
 nodes and with almost nobody using the server.

 Can anyone explain this? Or am I wrong and need to redo these tests?

 Thanks,
 Ryan




[Nuke-users] 3d CameraShake

2011-09-06 Thread Spider
Hi Everyone,

I would like to know the best way to simulate camera hand shake, but
with a 3D camera.
I already tried to search whether anybody has asked this, and I also tried
converting the 2D CameraShake gizmo node into a group
in order to copy its expressions to the z axis...

Any ideas?

Thanks

Spider

-- 
*Luddnel Spider Magne **|** Director - Lead Motion Compositor*
555Lab – Alchemy between you and us
24 rue du Pré St-Gervais 93500 Pantin
Office (+33)148 453 555 | Fax (+33)171 864 387
Mobile (+33)699 434 555 | 555lab.com 

Re: [Nuke-users] Re: Passing an object track through multiple transforms

2011-09-06 Thread J Bills
sure - looks even cleaner than the ramps crap done from memory - actually,
now that I look at it for some reason it looks like nuke gives a rounding
error using that setup (far values are .99902 instead of 1.0).  probably
negligible but I like 1.0 betta.  nice one AK.

so play around with this, joshua -

set cut_paste_input [stack 0]
version 6.2 v4
Constant {
 inputs 0
 channels rgb
 name Constant2
 selected true
 xpos 184
 ypos -174
}
Expression {
 expr0 x/(width-1)
 expr1 y/(height-1)
 name Expression2
 selected true
 xpos 184
 ypos -71
}
NoOp {
 name WARP_GOES_HERE
 tile_color 0xff00ff
 selected true
 xpos 184
 ypos 11
}
Shuffle {
 out motion
 name Shuffle
 label "choose motion\nor other output\nchannel"
 selected true
 xpos 184
 ypos 83
}
push 0
STMap {
 inputs 2
 channels motion
 name STMap1
 selected true
 xpos 307
 ypos 209
}





On Tue, Sep 6, 2011 at 5:23 PM, Anthony Kramer wrote:

> Heres a 1-node UVmap for you:
>
> set cut_paste_input [stack 0]
> version 6.3 v2
> push $cut_paste_input
> Expression {
>  expr0 x/(width-1)
>  expr1 y/(height-1)
>  name Expression2
>  selected true
>  xpos -480
>  ypos 2079
> }
>
>
>
> On Tue, Sep 6, 2011 at 4:46 PM, J Bills  wrote:
>
>> sure - that's what he's saying.  think of the uv map as creating a
>> "blueprint" of your transforms or distortions.
>>
>> after you have that blueprint, you can run whatever you want through the
>> same distortion and repurpose it all day long.  so if you need it for some
>> utility purpose, to know where pixel 1234x735 ends up, just UV-map those
>> distortions, put a little paint dot at 1234x735, and run it through the
>> STMap using that uvmap.  1234x735 will be magically whisked away to
>> 1414x644 or wherever your transforms take it.
>>
>> here's an example for you to plug your xforms into, and then your paint
>> blob or whatever would go into the stmap:
>>
>> set cut_paste_input [stack 0]
>> version 6.3 v2
>> Constant {
>>  inputs 0
>>  channels rgb
>>  name Constant2
>>  selected true
>>  xpos 2862
>>  ypos 1292
>> }
>> set Ncc0e6650 [stack 0]
>> Ramp {
>>  output {rgba.red -rgba.green -rgba.blue -rgba.alpha}
>>  p0 {{width-width} 100}
>>  p1 {{width} 100}
>>  name Ramp3
>>  selected true
>>  xpos 2970
>>  ypos 1387
>> }
>> push $Ncc0e6650
>> Ramp {
>>  output {-rgba.red rgba.green -rgba.blue -rgba.alpha}
>>  p0 {100 {height-height}}
>>  p1 {100 {height}}
>>  name Ramp4
>>  selected true
>>  xpos 2862
>>  ypos 1384
>> }
>> Copy {
>>  inputs 2
>>  from0 rgba.red
>>  to0 rgba.red
>>  name Copy3
>>  selected true
>>  xpos 2862
>>  ypos 1449
>> }
>> NoOp {
>>  name WARP_GOES_HERE
>>  tile_color 0xff00ff
>>  selected true
>>  xpos 2862
>>  ypos 1519
>> }
>> Shuffle {
>>  out motion
>>  name Shuffle
>>  label "choose output\nchannel"
>>  selected true
>>  xpos 2862
>>  ypos 1565
>> }
>> push 0
>> STMap {
>>  inputs 2
>>  uv motion
>>  name STMap1
>>  selected true
>>  xpos 2985
>>  ypos 1675
>> }
>>
>>
>>
>> On Tue, Sep 6, 2011 at 3:59 PM, Joshua LaCross <
>> nuke-users-re...@thefoundry.co.uk> wrote:
>>
>>> **
>>> Thanks Anthony. The kites are already in the scene. I'm trying to get the
>>> positional x and y values after going through all the additional transforms
>>>
>>> ___
>>> Nuke-users mailing list
>>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>>
>>
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: Passing an object track through multiple transforms

2011-09-06 Thread Anthony Kramer
Heres a 1-node UVmap for you:

set cut_paste_input [stack 0]
version 6.3 v2
push $cut_paste_input
Expression {
 expr0 x/(width-1)
 expr1 y/(height-1)
 name Expression2
 selected true
 xpos -480
 ypos 2079
}



On Tue, Sep 6, 2011 at 4:46 PM, J Bills  wrote:

> sure - that's what he's saying.  think of the uv map as creating a
> "blueprint" of your transforms or distortions.
>
> after you have that blueprint, you can run whatever you want through the
> same distortion and repurpose it all day long.  so if you need it for some
> utility purpose, to know where pixel 1234x735 ends up, just UV-map those
> distortions, put a little paint dot at 1234x735, and run it through the
> STMap using that uvmap.  1234x735 will be magically whisked away to
> 1414x644 or wherever your transforms take it.
>
> here's an example for you to plug your xforms into, and then your paint
> blob or whatever would go into the stmap:
>
> set cut_paste_input [stack 0]
> version 6.3 v2
> Constant {
>  inputs 0
>  channels rgb
>  name Constant2
>  selected true
>  xpos 2862
>  ypos 1292
> }
> set Ncc0e6650 [stack 0]
> Ramp {
>  output {rgba.red -rgba.green -rgba.blue -rgba.alpha}
>  p0 {{width-width} 100}
>  p1 {{width} 100}
>  name Ramp3
>  selected true
>  xpos 2970
>  ypos 1387
> }
> push $Ncc0e6650
> Ramp {
>  output {-rgba.red rgba.green -rgba.blue -rgba.alpha}
>  p0 {100 {height-height}}
>  p1 {100 {height}}
>  name Ramp4
>  selected true
>  xpos 2862
>  ypos 1384
> }
> Copy {
>  inputs 2
>  from0 rgba.red
>  to0 rgba.red
>  name Copy3
>  selected true
>  xpos 2862
>  ypos 1449
> }
> NoOp {
>  name WARP_GOES_HERE
>  tile_color 0xff00ff
>  selected true
>  xpos 2862
>  ypos 1519
> }
> Shuffle {
>  out motion
>  name Shuffle
>  label "choose output\nchannel"
>  selected true
>  xpos 2862
>  ypos 1565
> }
> push 0
> STMap {
>  inputs 2
>  uv motion
>  name STMap1
>  selected true
>  xpos 2985
>  ypos 1675
> }
>
>
> On Tue, Sep 6, 2011 at 3:59 PM, Joshua LaCross <
> nuke-users-re...@thefoundry.co.uk> wrote:
>
>> **
>> Thanks Anthony. The kites are already in the scene. I'm trying to get the
>> positional x and y values after going through all the additional transforms
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] nuke renders and server loads

2011-09-06 Thread Ryan O'Phelan
Thanks Ivan,
That's some really good info. I'll check it out.

Ryan

On Tue, Sep 6, 2011 at 2:06 AM, Ivan Busquets wrote:

> Hi,
>
> Found the script I sent a while back as an example of picking layers in
> merges using up more resources.
> Just tried it in 6.3, and I still get similar results.
>
> Script attached for reference. Try viewing/rendering each of the two groups
> while keeping an eye on memory usage of your Nuke process.
>
> Cheers,
> Ivan
>
>
>
> On Mon, Sep 5, 2011 at 11:42 AM, Ivan Busquets wrote:
>
>>> Another thing is it sounds like you are shuffling out the channels to
>>> rgb before you merge them.  This also takes quite a hit in speed.  It is far
>>> faster to merge and pick the channels you need rather than shuffling them
>>> out first.
>>>
>>
>> That's interesting. My experience has usually been quite the opposite. I
>> find the same operations done in Merges after shuffling to rgb are faster,
>> and definitely use less resources, than picking the relevant layers inside
>> the Merge nodes.
>>
>> Back in v5, I sent a script to support as an example of this behavior,
>> more specifically how using layers within the Merge nodes caused memory
>> usage to go through the roof (and not respect the memory limit in the
>> preferences). At the time, this was logged as a memory leak bug. I don't
>> think this was ever resolved, but to be fair this is probably less of an
>> issue nowadays with higher-specced workstations.
>>
>> Hearing that you find it faster to pick layers in a merge node than
>> shuffling & merging makes me very curious, though. I wonder if, given enough
>> memory (so it's not depleted by the mentioned leak/overhead), some scripts
>> may indeed run faster that way. Do you have any examples?
>>
>> And going back to the original topic, my experience with multi-channel exr
>> files is:
>>
>> - Separate exr sequences for each aov/layer is faster than a single
>> multi-channel exr, yes. As you mentioned, exr stores additional
>> channels/layers in an interleaved fashion, so the reader has to step through
>> all of them before going to the next scanline, even if you're not using them
>> all. Even if you read each layer separately and copy them all into layers in
>> your script (so you get the equivalent of a multi-channel exr), this is
>> still faster than using a multi-channel exr file.
>>
>> - When merging different layers coming from the same stream, I find
>> performance to be better when shuffling layers to rgba and keeping merges to
>> operate on rgba. (although this is the opposite of what Deke said, so your
>> mileage may vary)
>>
>> Cheers,
>> Ivan
>>
>> On Thu, Sep 1, 2011 at 1:55 PM, Deke Kincaid wrote:
>>
>>> Exr files are interleaved.  So when you look at some scanlines, you need
>>> to read in every single channel in the EXR for those scanlines, even if you
>>> only need one of them.  So if you have a multichannel file with 40 channels
>>> but you only use rgba and one or two matte channels, then you're going to
>>> incur a large hit.
>>>
>>> Another thing is it sounds like you are shuffling out the channels to
>>> rgb before you merge them.  This also takes quite a hit in speed.  It is far
>>> faster to merge and pick the channels you need rather than shuffling them
>>> out first.
>>>
>>> -deke
>>>
>>> On Thu, Sep 1, 2011 at 12:37, Ryan O'Phelan wrote:
>>>
 Recently I've been trying to evaluate the load of nuke renders on our
 file server, and ran a few tests comparing multichannel vs. 
 non-multichannel
 reads, and my initial test results were opposite of what I was expecting.
 My tests showed that multichannel comps rendered about 20-25% slower,
 and made about 25% more load on the server in terms of disk reads. I was
 expecting the opposite, since there are fewer files being called with
 multichannel reads.

 For what it's worth, all reads were zip1 compressed EXRs and I tested
 real comps, as well as extremely simplified comps where the multichannel
 files were branched and then fed into a contact sheet. I was monitoring
 performance using the performance monitor on the file server using only 20
 nodes and with almost nobody using the server.

 Can anyone explain this? Or am I wrong and need to redo these tests?

 Thanks,
 Ryan



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

>>>
>>>
>>> ___
>>> Nuke-users mailing list
>>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>>
>>
>>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.

Re: [Nuke-users] Re: Passing an object track through multiple transforms

2011-09-06 Thread J Bills
sure - that's what he's saying.  think of the uv map as creating a
"blueprint" of your transforms or distortions.

after you have that blueprint, you can run whatever you want through the
same distortion and repurpose it all day long.  so if you need it for some
utility purpose, to know where pixel 1234x735 ends up, just UV-map those
distortions, put a little paint dot at 1234x735, and run it through the
STMap using that uvmap.  1234x735 will be magically whisked away to
1414x644 or wherever your transforms take it.
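Stripped of nodes and channels, the trick is just: warp an identity ST map, then read the warped map at a destination pixel to see which source pixel lands there. A minimal pure-Python sketch (no Nuke; a simple translate stands in for the warp, and the frame size and offsets are arbitrary):

```python
def identity_stmap(width, height):
    # Each pixel stores its own normalized (s, t) coordinate, exactly
    # what the Expression node builds: x/(width-1), y/(height-1).
    return [[(x / (width - 1), y / (height - 1)) for x in range(width)]
            for y in range(height)]

def warp(stmap, dx, dy):
    # Stand-in for the transform chain: shift the map's content by
    # (dx, dy), clamping lookups at the frame edge.
    h, w = len(stmap), len(stmap[0])
    return [[stmap[min(max(y - dy, 0), h - 1)][min(max(x - dx, 0), w - 1)]
             for x in range(w)] for y in range(h)]

width, height = 1024, 778
warped = warp(identity_stmap(width, height), dx=90, dy=-45)

# STMap-style lookup: read the warped map at a destination pixel to see
# which source pixel the warp carries there.
s, t = warped[367][617]
src_x, src_y = round(s * (width - 1)), round(t * (height - 1))
print(src_x, src_y)  # 527 412 -> pixel (527, 412) lands at (617, 367)
```

Any warp works in place of the shift, which is why the blueprint can be reused for whatever needs to ride through the same distortion.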

here's an example for you to plug your xforms into, and then your paint blob
or whatever would go into the stmap:

set cut_paste_input [stack 0]
version 6.3 v2
Constant {
 inputs 0
 channels rgb
 name Constant2
 selected true
 xpos 2862
 ypos 1292
}
set Ncc0e6650 [stack 0]
Ramp {
 output {rgba.red -rgba.green -rgba.blue -rgba.alpha}
 p0 {{width-width} 100}
 p1 {{width} 100}
 name Ramp3
 selected true
 xpos 2970
 ypos 1387
}
push $Ncc0e6650
Ramp {
 output {-rgba.red rgba.green -rgba.blue -rgba.alpha}
 p0 {100 {height-height}}
 p1 {100 {height}}
 name Ramp4
 selected true
 xpos 2862
 ypos 1384
}
Copy {
 inputs 2
 from0 rgba.red
 to0 rgba.red
 name Copy3
 selected true
 xpos 2862
 ypos 1449
}
NoOp {
 name WARP_GOES_HERE
 tile_color 0xff00ff
 selected true
 xpos 2862
 ypos 1519
}
Shuffle {
 out motion
 name Shuffle
 label "choose output\nchannel"
 selected true
 xpos 2862
 ypos 1565
}
push 0
STMap {
 inputs 2
 uv motion
 name STMap1
 selected true
 xpos 2985
 ypos 1675
}



On Tue, Sep 6, 2011 at 3:59 PM, Joshua LaCross <
nuke-users-re...@thefoundry.co.uk> wrote:

> **
> Thanks Anthony. The kites are already in the scene. I'm trying to get the
> positional x and y values after going through all the additional transforms
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

[Nuke-users] Re: Passing an object track through multiple transforms

2011-09-06 Thread Joshua LaCross
Thanks Anthony. The kites are already in the scene. I'm trying to get the
positional x and y values after going through all the additional transforms.



___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Passing an object track through multiple transforms

2011-09-06 Thread Anthony Kramer
You could create a UV map, do all your transformations/cam projections/etc. to
that, and then use an STMap to map your kite to the new position of your
UV map.

-ak

On Tue, Sep 6, 2011 at 10:36 AM, Joshua LaCross <
nuke-users-re...@thefoundry.co.uk> wrote:

> **
> I've object tracked a kite and I've populated my scene with that same kite
> several times at various positions, scales and rotations. I'm wondering if
> there is a way to pass that position information through the scene just as I
> would a pixel.
>
> For example: If I have a pixel at x:1234 and y:735 and then I add a
> transform with a translate of y:-5, well that's an easy expression, but what
> if I add several transforms with various rotations and scales - where is it
> now? What if this pixel goes through a camera? Is there an expression or
> tool I'm unaware of that can pass that through?
>
> A workaround would be to have a white pixel in a separate channel that
> represents the track and then use the CurveTool to track it again after
> going through all the transforms, but that's no fun.
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

[Nuke-users] Passing an object track through multiple transforms

2011-09-06 Thread Joshua LaCross
I've object-tracked a kite and populated my scene with that same kite
several times at various positions, scales and rotations. I'm wondering if
there is a way to pass that position information through the scene just as I
would a pixel.

For example: If I have a pixel at x:1234 and y:735 and then I add a transform
with a translate of y:-5, well that's an easy expression, but what if I add
several transforms with various rotations and scales - where is it now? What if
this pixel goes through a camera? Is there an expression or tool I'm unaware
of that can pass that through?

A workaround would be to have a white pixel in a separate channel that
represents the track and then use the CurveTool to track it again after going
through all the transforms, but that's no fun.
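For what it's worth, the expression route does generalize: each 2D transform is a 3x3 matrix, and concatenating the matrices pushes the tracked point through the whole chain in one go. A sketch in plain Python (generic matrix math, not Nuke API; the example chain and its numbers are invented):

```python
import math

def matmul(a, b):
    # 3x3 matrix product; chain = matmul(second, first) applies `first`
    # to the point before `second`.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(deg, cx=0, cy=0):
    # Rotate about a center point, like a Transform node's center knob.
    r = math.radians(deg)
    m = [[math.cos(r), -math.sin(r), 0],
         [math.sin(r),  math.cos(r), 0],
         [0, 0, 1]]
    return matmul(translate(cx, cy), matmul(m, translate(-cx, -cy)))

def scale(sx, sy, cx=0, cy=0):
    m = [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]
    return matmul(translate(cx, cy), matmul(m, translate(-cx, -cy)))

def apply(m, x, y):
    # Push a pixel position through the concatenated chain.
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# The example pixel, pushed through an invented chain: translate y by -5,
# then rotate 90 degrees about a 2K frame center, then scale 2x.
chain = matmul(scale(2, 2), matmul(rotate(90, cx=1024, cy=778), translate(0, -5)))
print(apply(chain, 1234, 735))  # ~(2144.0, 1976.0)
```

This handles any stack of translates, rotates and scales; a camera projection adds a perspective divide on top, which is where the UV-map trick becomes the easier route.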



___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users