Re: [Nuke-users] nuBridge launch event

2017-04-11 Thread Ivan Busquets
Congrats! And sorry I never replied to your last email.

Basically, I realized I couldn't even help testing through a proxy
connection, because our setup goes beyond that. Our production machines are
literally air-gapped (sigh), and the only way to the outside world is
through virtual machines that don't have access to our production
disks/network.

Anyway, I still wish you a successful launch. :)
Cheers!


On Tue, Apr 11, 2017 at 2:43 AM, Frank Rueter|OHUfx  wrote:

> Hi all,
>
> sorry for the spam, but after years of battling to find enough time, I am
> finally in the last throes and aiming to release the nuBridge in the next
> few weeks.
>
> As a launch event we have started another Most Valued Contributor
> competition with generous support by the Foundry.
>
> Vote for your most valuable contributor and be in the draw to win a
> NukeStudio, CaraVR & nuBridge license.
> Lots of prizes for the winners as well of course!
>
> http://www.nukepedia.com/vote-your-favourite-contributor
>
> Cheers,
> frank
>
>
> --
>
> over 1,000 free tools for Nuke
>
> full access from within... coming soon
>
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] deepholdout issue

2015-12-14 Thread Ivan Busquets
Results should be identical to the equivalent DeepMerge, provided every
element is properly held out by the rest.

Can't really tell what differences you may be seeing, but for overlapping
volumetric samples it's possible that you may get some artifacts from a
DeepMerge set to "holdout". You could try with a "DeepHoldout" instead,
which should already produce a flat held-out image.

For simple cases like the one you sent earlier, though, the results should
be identical. Is that not what you're seeing?


On Mon, Dec 14, 2015 at 11:23 AM, Patrick Heinen <
mailingli...@patrickheinen.com> wrote:

> Thanks Ivan! That seems to give pretty good results! Still not exactly the
> same as deepMerging them unfortunately, but pretty close.
> Should it in theory look exactly the same? Or is there no way to get it to
> be the same? Bothers me a little bit that it's not perfect ;)
> I don't want to deep merge it because I basically just want to hold my
> render out with the actors from the plate.
>
>
> Ivan Busquets wrote on 14.12.2015 11:10:
>
> > If you're combining 2 elements that are already pre-held out by each
> other, you'd probably want to use a disjoint-over instead of a regular over.
> >
> >
> > Either that, or DeepMerge them directly before you go DeepToImage.
> >
> >
> > On Mon, Dec 14, 2015 at 10:43 AM, Patrick Heinen <
> mailingli...@patrickheinen.com <mailto:mailingli...@patrickheinen.com> >
> wrote:
> >> Hey everyone,
> >>
> >> I thought I'd use deep again for a few shots on my current show, but am
> now running into some issues I had never noticed before.
> >> I'm using a roto on a card to hold out my cg renders, but doing that is
> creating a dark edge, as the alpha seems to get held out too much.
> >> Maybe it's just too early on Monday morning and I'm doing something
> wrong. But I can't for the life of me find my error.
> >> I attached a script that recreates the issue with two rotos. Hope
> someone can point me in the right direction.
> >>
> >> Thanks!
> >> Patrick
> >
> >

Re: [Nuke-users] deepholdout issue

2015-12-14 Thread Ivan Busquets
If you're combining 2 elements that are already pre-held out by each other,
you'd probably want to use a disjoint-over instead of a regular over.

Either that, or DeepMerge them directly before you go DeepToImage.
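In numbers, the difference between the two operations looks like this. A minimal Python sketch, using the standard premultiplied merge formulas rather than Nuke's actual source:

```python
def over(fg, fg_a, bg):
    # Standard premultiplied over: A + B * (1 - alphaA)
    return fg + bg * (1.0 - fg_a)

def disjoint_over(fg, fg_a, bg, bg_a):
    # Disjoint-over assumes the two elements cover separate regions
    # (e.g. each is already held out by the other), so the background
    # is only attenuated once the combined coverage exceeds 1.
    if fg_a + bg_a <= 1.0:
        return fg + bg
    return fg + bg * (1.0 - fg_a) / bg_a

# Two pre-held-out elements, each with alpha 0.5 and premultiplied
# value 0.5: a regular over darkens the seam, disjoint-over does not.
print(over(0.5, 0.5, 0.5))                 # 0.75 -> the dark-edge look
print(disjoint_over(0.5, 0.5, 0.5, 0.5))   # 1.0
```

Because pre-held-out elements tile rather than overlap, their alphas should sum back to full coverage, which is exactly what disjoint-over restores.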


On Mon, Dec 14, 2015 at 10:43 AM, Patrick Heinen <
mailingli...@patrickheinen.com> wrote:

> Hey everyone,
>
> I thought I'd use deep again for a few shots on my current show, but am
> now running into some issues I had never noticed before.
> I'm using a roto on a card to hold out my cg renders, but doing that is
> creating a dark edge, as the alpha seems to get held out too much.
> Maybe it's just too early on Monday morning and I'm doing something wrong.
> But I can't for the life of me find my error.
> I attached a script that recreates the issue with two rotos. Hope someone
> can point me in the right direction.
>
> Thanks!
> Patrick

Re: [Nuke-users] Blink mini-project

2014-05-18 Thread Ivan Busquets
Hey Jed,

Coincidentally, I have a simple Voronoi-cell noise generator made in Blink.

This was done as a test ahead of a more complete set of noise-generation
tools, so it was never polished and was more of a proof of concept.
It's mostly a straight conversion of the Voronoi generator from the libNoise
library into BlinkScript.

It should work as an example, though.

I've uploaded it to the new Blink section in Nukepedia.
http://www.nukepedia.com/blink/image/voronoi
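This isn't the BlinkScript itself, but the cell-noise idea behind it is small enough to sketch in Python (one jittered feature point per integer cell; the hash constants here are arbitrary choices of mine):

```python
import math
import random

def voronoi_f1(x, y, seed=0):
    """Worley/Voronoi F1: distance to the nearest feature point.

    Each integer cell gets one feature point placed by a per-cell
    seeded RNG, so the same (x, y, seed) always returns the same value.
    """
    xi, yi = math.floor(x), math.floor(y)
    best = float("inf")
    # The nearest feature point always lies in the 3x3 block of cells
    # around the sample.
    for j in range(yi - 1, yi + 2):
        for i in range(xi - 1, xi + 2):
            rng = random.Random((i * 73856093) ^ (j * 19349663) ^ seed)
            fx, fy = i + rng.random(), j + rng.random()
            best = min(best, math.hypot(x - fx, y - fy))
    return best
```

In a Blink kernel the structure is the same per pixel, with the RNG swapped for a stateless integer hash so the kernel stays deterministic on the GPU.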

Cheers,
Ivan


On Sun, May 18, 2014 at 3:16 PM, Jed Smith jedy...@gmail.com wrote:

  One thing that I would love to see would be a more versatile noise
 generator.

 Maybe something with options for Voronoi noise, tiled shapes, hexagons,
 other types of useful noise that I'm not aware of?
 This Nuke plugin exists for Voronoi noise, but I could never get it to
 compile:
 https://bitbucket.org/katisss/projects/src/4ca881133b7b/Nuke/Plugins/Voronoi.cpp

 I think that would be super useful and perhaps not insanely difficult to
 make.

 What other types of noise are there that would be useful to have
 generators for?

 Another suggestion might be a simple 2d slice volumetric noise generator
 that functioned in the 3d system. This should be possible with blink right?

 On Friday, 2014-05-16 at 8:34a, Neil Rögnvaldr Scholes wrote:

  Oooh Anamorphic Lens Flares...:)


 Neil Rögnvaldr Scholes
 www.neilscholes.com

 On 16/05/14 16:19, Nik Yotis wrote:

  Hi,

  any ideas/suggestions for a mini Blink project people'd like to
 see live?
 Dev time is 2 weeks; I have a basic understanding of the Blink | NDK API.

 cheers

  --

 Nik Yotis | Software Engineer/3D Graphics R&D

 BlueBolt Ltd | 15-16 Margaret Street | London W1W 8RW | T: +44 (0)20 7637
 5575 | F: +44 (0)20 7637 3296 | www.blue-bolt.com |








Re: [Nuke-users] day rates in the UK

2014-03-22 Thread Ivan Busquets
In my first job in the industry I had the chance to work with a great
editor. He taught me something I still remember almost on a daily basis.

He had made the transition from physical film-cutting to non-linear editing
systems, and had this opinion about the many benefits that non-linear
editing brought to the table.

"It's obviously great and makes my job so much easier, and I wouldn't want
to ever look back. However, it is now so easy to make a cut that a lot of
editors/directors never commit to one. They'll cut on a certain frame, then
try a couple of frames later, then a couple of frames earlier, then one
more, then leave it there temporarily to revisit later.
When you're physically cutting a reel of film, there's something permanent
about it that urges you to THINK about why you want to cut on that frame
and not on any other, and then COMMIT to that decision."

I firmly believe that the analogy applies to many technological advances in
our industry.
There is a growing belief that some changes in post are fast/cheap enough
that the exercise of THINKING and COMMITTING just keeps getting delayed.
The process then becomes reactive, with clients/supervisors spending more
time reacting to what they're seeing than directing what they would like to
see. And with it comes the frustration when, iteration after iteration,
they're still not seeing something they like.

We've all seen it:
- I don't know what kind of look I'm going to want for this, so I'll just
shoot it as neutral as possible and choose between different looks later.
- I want to keep the edit open as you guys work on these shots, so I can
make the decisions on what should be in or out LATER, because it's so much
easier to do once I see how these shots are coming together.
- I can't judge this animation until it has proper motion blur and lighting,
and I can see it integrated in the plate. (This one is particularly
infuriating, and makes me wonder how these people are able to judge
storyboards before they shoot the whole thing.)

Studios have learnt to protect themselves a bit against this, managing
clients' expectations, planning staged deliveries, etc. But ultimately, our
line of work is very subjective, so it always takes someone with a strong
vision and the ability to convey that vision for things to go more or less
smoothly.

The most successful projects I've ever worked on have a few things in
common:

- A clear vision from a very early stage.
- Strong leadership.
- Very little or no micromanaging.

Every once in a blue moon, those 3 line up and you are reminded of how much
fun this job can be.




On Thu, Mar 20, 2014 at 5:29 PM, Frank Rueter|OHUfx fr...@ohufx.com wrote:

  Totally agree. The extra flexibility we have in post has created a
 culture of creative micromanagement that is equivalent to manhandling
 actors on set rather than letting them act.




 On 3/21/14, 12:25 PM, matt estela wrote:


 On 21 March 2014 10:09, Elias Ericsson Rydberg 
 elias.ericsson.rydb...@gmail.com wrote:

  In all kinds of productions there seems to be a heavy reliance on the
 director. That's the standard I guess. Should not we, the vfx-artists, be
 the authority of our own domain?


  I do wonder if non cg fx heavy films of the past were as reliant on
 director approval as they are today. Using raiders as the example again,
 was Spielberg really approving every rock, every mine cart that was created
 for the mine chase sequence, sending shots back 10, 50, 100 times for
 revisions? Or as I suspect, was there the simple reality of 'we need to
 make these things, that takes time, you really can't change much once we
 start shooting miniatures.'? The ability for digital to change anything and
 everything is both the best and worst thing that happened to post
 production.






 --
   [image: ohufxLogo 50x50] http://www.ohufx.com *vfx compositing
 http://ohufx.com/index.php/vfx-compositing | workflow customisation and
 consulting http://ohufx.com/index.php/vfx-customising *


Re: [Nuke-users] oldest card for nuke 8

2013-12-31 Thread Ivan Busquets
Randy,

check this list:
https://developer.nvidia.com/cuda-gpus

For Nuke 8, you're going to want anything that has a compute capability of
2.0 or higher. Your 285 seems to be rated for 1.3


You told me my card wouldn't be supported before since it's too old.


From my Android phone on T-Mobile. The first nationwide 4G network.


 Original message 
From: Deke Kincaid
Date:12/31/2013 7:15 PM (GMT-05:00)
To: Nuke user discussion
Subject: Re: [Nuke-users] oldest card for nuke 8

Randy: this is an Nvidia issue.  They haven't released cuda drivers for
your card under Mavericks yet.  There are tons of threads about this if you
google for "cuda mavericks nvidia 285".

-deke

On Tuesday, December 31, 2013, Randy Little wrote:

 But Nuke 8 doesn't support older cards I have been told.   My 285 no
 longer shows up but this is my old box so  it could be Nuke and it
 could be 10.9 or both.  I was told the 285 isn't supported.   Since I do
 very little from home and try not to work from home I just keep the bare
 minimum system for when I have to bring work home.   Something about
 something 2.0 vs 1.0 precision in some cuda lib is all I can remember.
 Will ask Deke later.


 Randy S. Little
 http://www.rslittle.com/
 http://www.imdb.com/name/nm2325729/




 On Tue, Dec 31, 2013 at 3:30 PM, Nathan Rusch nathan_ru...@hotmail.com wrote:

 There has been talk of supporting OpenCL as well as (or possibly in place
 of) CUDA at some point, but for now, even if Blink is capable of generating
 OpenCL code, Nuke still limits support to CUDA-enabled cards.

 -Nathan


 -Original Message- From: Martin Winkler
 Sent: Tuesday, December 31, 2013 12:18 PM
 To: Nuke user discussion
 Subject: Re: [Nuke-users] oldest card for nuke 8


 On Tue, Dec 31, 2013 at 9:06 PM, Nathan Rusch nathan_ru...@hotmail.com
 wrote:

 Nuke still only supports CUDA for GPU acceleration.


 Isn't Blink supposed to do OpenCL?

 Regards,


 --
 Martin Winkler, Geschäftsführer
 Grey Matter Visual Effects GmbH
 Georg-Friedrich-Str.1
 76530 Baden-Baden
 Tel. 07221 972 95 31
 HRB 700934 Amtsgericht Mannheim
 Ust-ID Nr.DE249482509




-- 
--
Deke Kincaid
Creative Specialist
The Foundry
Skype: dekekincaid
Tel: (310) 399 4555 - Mobile: (310) 883 4313
Web: www.thefoundry.co.uk
Email: d...@thefoundry.co.uk



[Nuke-users] How does DeepRecolor distribute a target opacity across multiple samples?

2013-12-17 Thread Ivan Busquets
Hi,

Sorry for the repost. I sent this to the development list yesterday, but
posting over here as well to cast a broader net.

Has anyone dug around the "target input alpha" option in the DeepRecolor
node, and has some insight on how it retargets each sample internally?

Long story short, I'm trying to implement a procedure to re-target the
opacity of each sample in a deep pixel, akin to what happens in a
DeepRecolor node when "target input alpha" is checked.

I've got this to a point where it's working ok, but I think I might be
missing something, as my results differ from those you'd get in a
DeepRecolor.

My re-targeting algorithm is based on the assumption that the relative
opacity between samples should be preserved, but DeepRecolor clearly uses a
different approach.

Example:

Say you have a deep pixel with 2 samples, and the following opacities:

Samp1 :   0.4
Samp2 :   0.2

The accumulated opacity is 0.52  (Samp1 over Samp2). Note that Samp1
deliberately has an opacity of 2 times Samp2.

Now, let's say we want to re-target those samples to an accumulated opacity
of 0.9.

What I'm trying to do is calculate new opacities for Samp1 and Samp2 in
such a way that both of these conditions are met:

a) Samp1 == 2 * Samp2
b) Samp1 over Samp2 == 0.9

This gives me the following re-targeted values:

Samp1 :   0.829284

Samp2 :   0.414642
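The ratio-preserving approach reduces to a quadratic for the two-sample case. A quick sketch (my own code, simply restating conditions a) and b); it reproduces the values above to within rounding):

```python
import math

def retarget_two(a1, a2, target):
    """New opacities (s1, s2) with s1/s2 == a1/a2 and
    (s1 over s2) == s1 + (1 - s1) * s2 == target."""
    r = a1 / a2                      # condition a): s1 = r * s2
    # Substituting into b): r*s2 + (1 - r*s2)*s2 = target
    #                  ->   r*s2^2 - (r + 1)*s2 + target = 0
    disc = (r + 1.0) ** 2 - 4.0 * r * target
    s2 = (r + 1.0 - math.sqrt(disc)) / (2.0 * r)  # smaller root, stays <= 1
    return r * s2, s2

s1, s2 = retarget_two(0.4, 0.2, 0.9)
print(round(s1, 4), round(s2, 4))  # roughly 0.8292 and 0.4146
```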


I'm happy with those, but it bugs me that DeepRecolor throws different
results:


Samp1 :   0.798617
Samp2 :   0.503434

Which meets the second condition (Samp1 over Samp2 == 0.9), but does not
preserve the relative opacities of the original samples.

It seems to me like DeepRecolor is applying some sort of non-linear
function to weight each of the original samples differently, but I
haven't been able to figure out the logic of that weighting, or a reason
why it's done that way.

Does anyone have any insight/ideas on what DeepRecolor might be doing
internally?
Or a reason why you might want to distribute the target alpha in a
non-linear way?

Thanks in advance,
Ivan

Re: [Nuke-users] DepthToPoints

2013-12-04 Thread Ivan Busquets
I don't think that's right, Deke.

DepthToPoints expects 1/z by default, not a normalized input.
Same as the output from ScanlineRender.

The tooltip of the "invert depth" knob states that as well:

"Invert the depth before processing. Useful if the depth is z instead of
the expected 1/z"
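Converting a straight-distance pass is then just a guarded reciprocal, e.g. an Expression node with something like depth.Z > 0 ? 1/depth.Z : 0. The same idea as a Python sketch (function name is mine):

```python
def z_to_inverse(z, eps=1e-20):
    """Convert distance depth (z) to the 1/z convention used by
    ScanlineRender and expected by DepthToPoints.

    z == 0 (empty background) stays 0.0, which in 1/z terms reads
    as 'infinitely far away'.
    """
    return 0.0 if z <= eps else 1.0 / z

print(z_to_inverse(4.0))  # 0.25 -> nearer points get larger 1/z values
print(z_to_inverse(0.0))  # 0.0
```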



On Wed, Dec 4, 2013 at 2:45 PM, Deke Kincaid d...@thefoundry.co.uk wrote:

 1 is near though there is an invert depth option in depth to points.

 Am I wrong?

 Nuke is 32 bit floating point so it shouldn't matter that much as long as
 the original image was a float.   Precision would only matter if you were
 working in a 8/16 bit int box.

 -deke

 On Wednesday, December 4, 2013, Ron Ganbar wrote:

 0 is near?

 Normalised values aren't precise, though. They're very subjective to what
 was decided in the render. It won't create a very precise point cloud.
 Am I wrong?



 Ron Ganbar
 email: ron...@gmail.com
 tel: +44 (0)7968 007 309 [UK]
  +972 (0)54 255 9765 [Israel]
 url: http://ronganbar.wordpress.com/


 On Wed, Dec 4, 2013 at 11:51 PM, Deke Kincaid d...@thefoundry.co.uk wrote:

 It's just looking for 0-1.  You can do it with an expression node or 
 Jack
 has a handy J_Maths node in J_Ops which converts depth maps between
 types really easily.

 -deke


 On Wednesday, December 4, 2013, Ron Ganbar wrote:

 Hi all,
 for DepthToPoints to work, what kind of depth do I need to feed into
 it? 1/distance? distance? normalised?
 And how do I convert what comes out of Maya's built in Mental Ray so it
 will work?

 Thanks!
 Ron Ganbar
 email: ron...@gmail.com
 tel: +44 (0)7968 007 309 [UK]
  +972 (0)54 255 9765 [Israel]
 url: http://ronganbar.wordpress.com/



 --
 --
 Deke Kincaid
 Creative Specialist
 The Foundry
 Skype: dekekincaid
 Tel: (310) 399 4555 - Mobile: (310) 883 4313
 Web: www.thefoundry.co.uk
 Email: d...@thefoundry.co.uk






 --
 --
 Deke Kincaid
 Creative Specialist
 The Foundry
 Skype: dekekincaid
 Tel: (310) 399 4555 - Mobile: (310) 883 4313
 Web: www.thefoundry.co.uk
 Email: d...@thefoundry.co.uk



Re: [Nuke-users] DepthToPoints

2013-12-04 Thread Ivan Busquets
Not sure if I follow, or what combination you used in your test, but the
standard depth output of ScanlineRender (1/z) is what DepthToPoints wants
as an input by default.


set cut_paste_input [stack 0]
version 7.0 v8
push $cut_paste_input
Camera2 {
 name Camera1
 selected true
 xpos 1009
 ypos -63
}
set N73f6770 [stack 0]
push $N73f6770
CheckerBoard2 {
 inputs 0
 name CheckerBoard1
 selected true
 xpos 832
 ypos -236
}
Sphere {
 translate {0 0 -6.44809}
 name Sphere1
 selected true
 xpos 832
 ypos -131
}
push 0
ScanlineRender {
 inputs 3
 shutteroffset centred
 motion_vectors_type distance
 name ScanlineRender1
 selected true
 xpos 830
 ypos -43
}
DepthToPoints {
 inputs 2
 name DepthToPoints1
 selected true
 xpos 830
 ypos 91
 depth depth.Z
 N_channel none
}





On Wed, Dec 4, 2013 at 3:11 PM, Deke Kincaid d...@thefoundry.co.uk wrote:

 Actually I think we are both wrong.  I was just playing with a camera from
 the 3d scene with depth and it needs to be distance to match.  1/z gives
 you the reversed coming out of a little window look.

 -deke


 On Wednesday, December 4, 2013, Ivan Busquets wrote:

 I don't think that's right, Deke.

 DepthToPoints expects 1/z by default, not a normalized input.
 Same as the output from ScanlineRender.

 The tooltip of the "invert depth" knob states that as well:

 "Invert the depth before processing. Useful if the depth is z instead of
 the expected 1/z"



 On Wed, Dec 4, 2013 at 2:45 PM, Deke Kincaid d...@thefoundry.co.uk wrote:

 1 is near though there is an invert depth option in depth to points.

 Am I wrong?

 Nuke is 32 bit floating point so it shouldn't matter that much as long as
 the original image was a float.   Precision would only matter if you were
 working in a 8/16 bit int box.

 -deke

 On Wednesday, December 4, 2013, Ron Ganbar wrote:

 0 is near?

 Normalised values aren't precise, though. They're very subjective to what
 was decided in the render. It won't create a very precise point cloud.
 Am I wrong?



 Ron Ganbar
 email: ron...@gmail.com
 tel: +44 (0)7968 007 309 [UK]
  +972 (0)54 255 9765 [Israel]
 url: http://ronganbar.wordpress.com/


 On Wed, Dec 4, 2013 at 11:51 PM, Deke Kincaid d...@thefoundry.co.uk wrote:

 It's just looking for 0-1.  You can do it with an expression node or Jack
 has a handy J_Maths node in J_Ops which converts depth maps between
 types really easily.

 -deke


 On Wednesday, December 4, 2013, Ron Ganbar wrote:

 Hi all,
 for DepthToPoints to work, what kind of depth do I need to feed into it?
 1/distance? distance? normalised?
 And how do I convert what comes out of Maya's built in Mental Ray so it
 will work?

 Thanks!
 Ron Ganbar
 email: ron...@gmail.com
 tel: +44 (0)7968 007 309 [UK]
  +972 (0)54 255 9765 [Israel]
 url: http://ronganbar.wordpress.com/



 --
 --
 Deke Kincaid
 Creative Specialist
 The Foundry
 Skype: dekekincaid
 Tel: (310) 399 4555 - Mobile: (310) 883 4313
 Web: www.thefoundry.co.uk
 Email: d...@thefoundry.co.uk







 --
 --
 Deke Kincaid
 Creative Specialist
 The Foundry
 Skype: dekekincaid
 Tel: (310) 399 4555 - Mobile: (310) 883 4313
 Web: www.thefoundry.co.uk
 Email: d...@thefoundry.co.uk



Re: [Nuke-users] Calculate Distorted Pixel Position from STMap Sampled Pixel Values

2013-09-02 Thread Ivan Busquets
No worries!
Glad it worked out for you.



On Mon, Sep 2, 2013 at 12:14 AM, Jed Smith jedy...@gmail.com wrote:

  Ivan, thank you very much for your helpful wisdom! :)

 I think I finally intricately understand how uv maps work. With your
 awesome technique for inverting them, my gizmo seems to be working as
 expected, and with much better accuracy.

 It is updated here if anyone wants to take a look:
 https://gist.github.com/jedypod/6302723

 On Sunday, 2013-09-01 at 5:47p, Ivan Busquets wrote:

 Hi Jed,

 I believe the problem in your approach is in the assumption of how STMap
 works.
 STMap does a lookup for each pixel in the output image to find the
 coordinates in the input it needs to sample from to produce the final pixel
 value. In other words, it does not push pixels from the input into
 discrete output locations, but pulls pixels from the input for each pixel
 in the output. It's the difference between forward and backward warping.

 To do what you're trying to do you would effectively need a UV map that's
 the inverse of that distortion. You can get an approximation of such an
 inverse UV map by displacing the vertices of a card, which would be a way
 of forward-warping. The only caveat is that you'll need a card that has as
 many subdivisions/vertices as possible, since the distortion values will be
 interpolated between vertices. That's why it's only an approximation at
 best. But given enough subdivisions, it should get you close enough.

 Once you have that inverse UV map, your distorted XY coordinate should
 just be the UV value at your undistorted coordinate, multiplied by width
 and height. (script attached as an example).

 P.S. The other minor thing you might want to look into is the way you're
 generating your UV map. The expressions you're using, x/width and
 y/height, will result in a UV map that displaces the image by half a pixel
 from scratch when fed into an STMap. STMap samples pixels from the input at
 their centre (x+0.5, y+0.5), so for a more accurate UV map you should use
 U = (x+0.5)/width and V = (y+0.5)/height.

 Hope that helps.

 Cheers,
 Ivan







 On Sat, Aug 31, 2013 at 8:18 PM, Jed Smith jedy...@gmail.com wrote:

  Greetings!

 *The Problem*
 I am trying to write a tool to distort tracking data through a distortion
 map output by a LensDistortion node. I have everything working, except
 there seems to be inaccuracy in my method of calculating the distorted
 pixel position from the sampled values of the uv distortion map, when
 compared to a visual check.

 *My Method*
 Say there is a pixel value at 1792,476 in a 1080p frame. I have a standard
 UV Map, modified with a grade node through a mask, creating a localized
 distortion when this map is plugged into an STMap node. The distorted pixel
 value is 1821,484.

 The sampled uv map pixel values at the source pixel location is 0.916767,
 0.432918 (for width, and height offset, respectively).

 I am going on the assumption that the uvmap pixel values represent the
 distorted location of that pixel, with the location being a floating point
 percentage of the frame width and height. So a value of 0.916767, 0.432918
 would basically be telling the STMap node to set the output pixel location
 for this pixel to a value of the difference between the 'unity' uvmap value
 that would result in no transformation and the sampled uv value, multiplied
 by the frame width.

 For horizontal distortion offset, this would be:
 (pixel_coordinate_x / frame_width - uvmap_red) * frame_width, or (1792 /
 1920 - 0.916767) * 1920 = 31.807
 This would result in a distorted horizontal value of 1792+31.807 =
 1823.807. This value is close, but almost 3 pixels off.

 *Help!*
 Can anyone here provide some insight into how exactly the math for the
 STMap works to determine the output location of a pixel from the incoming
 pixel values? I have attached a small nuke script demonstrating what I am
 talking about. See the Test_STMAP_Distortion_Calculations node to see
 the output results of the above algorithm.

 And if anyone is curious to check out the DistortTracks gizmo as it
 exists so far, it lives here https://gist.github.com/jedypod/6302723.

 Thanks very much!



 Attachments:
  - inverse_UV.nk




Re: [Nuke-users] Calculate Distorted Pixel Position from STMap Sampled Pixel Values

2013-09-01 Thread Ivan Busquets
Hi Jed,

I believe the problem in your approach is in the assumption of how STMap
works.
STMap does a lookup for each pixel in the output image to find the
coordinates in the input it needs to sample from to produce the final pixel
value. In other words, it does not push pixels from the input into
discrete output locations, but pulls pixels from the input for each pixel
in the output. It's the difference between forward and backward warping.

To do what you're trying to do you would effectively need a UV map that's
the inverse of that distortion. You can get an approximation of such an
inverse UV map by displacing the vertices of a card, which would be a way
of forward-warping. The only caveat is that you'll need a card that has as
many subdivisions/vertices as possible, since the distortion values will be
interpolated between vertices. That's why it's only an approximation at
best. But given enough subdivisions, it should get you close enough.

Once you have that inverse UV map, your distorted XY coordinate should just
be the UV value at your undistorted coordinate, multiplied by width and
height. (script attached as an example).

P.S. The other minor thing you might want to look into is the way you're
generating your UV map. The expressions you're using, x/width and
y/height, will result in a UV map that displaces the image by half a pixel
from scratch when fed into an STMap. STMap samples pixels from the input at
their centre (x+0.5, y+0.5), so for a more accurate UV map you should use
U = (x+0.5)/width and V = (y+0.5)/height.
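The two conventions in that paragraph can be sketched like this (function names are mine, not Nuke API; this assumes the inverse UV map described above):

```python
def identity_uv(x, y, width, height):
    """UV value a no-op ST map should hold at integer pixel (x, y).

    STMap samples the input at pixel centres, hence the half-pixel
    offsets in both directions.
    """
    return (x + 0.5) / width, (y + 0.5) / height

def distorted_xy(u, v, width, height):
    """Pixel position encoded by an inverse UV map value: the UV value
    found at the undistorted coordinate, scaled back to pixels."""
    return u * width, v * height

# Round-tripping through an identity map returns the pixel centre.
u, v = identity_uv(1792, 476, 1920, 1080)
px, py = distorted_xy(u, v, 1920, 1080)
print(round(px, 4), round(py, 4))  # 1792.5 476.5
```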

Hope that helps.

Cheers,
Ivan







On Sat, Aug 31, 2013 at 8:18 PM, Jed Smith jedy...@gmail.com wrote:

  Greetings!

 *The Problem*
 I am trying to write a tool to distort tracking data through a distortion
 map output by a LensDistortion node. I have everything working, except
 there seems to be inaccuracy in my method of calculating the distorted
 pixel position from the sampled values of the uv distortion map, when
 compared to a visual check.

 *My Method*
 Say there is a pixel value at 1792,476 in a 1080p frame. I have a standard
 UV Map, modified with a grade node through a mask, creating a localized
 distortion when this map is plugged into an STMap node. The distorted pixel
 value is 1821,484.

 The sampled uv map pixel values at the source pixel location is 0.916767,
 0.432918 (for width, and height offset, respectively).

 I am going on the assumption that the uvmap pixel values represent the
 distorted location of that pixel, with the location being a floating point
 percentage of the frame width and height. So a value of 0.916767, 0.432918
 would basically be telling the STMap node to set the output pixel location
 for this pixel to a value of the difference between the 'unity' uvmap value
 that would result in no transformation and the sampled uv value, multiplied
 by the frame width.

 For horizontal distortion offset, this would be:
 (pixel_coordinate_x / frame_width - uvmap_red) * frame_width, or (1792 /
 1920 - 0.916767) * 1920 = 31.807
 This would result in a distorted horizontal value of 1792+31.807 =
 1823.807. This value is close, but almost 3 pixels off.

 *Help!*
 Can anyone here provide some insight into how exactly the math for the
 STMap works to determine the output location of a pixel from the incoming
 pixel values? I have attached a small nuke script demonstrating what I am
 talking about. See the Test_STMAP_Distortion_Calculations node to see
 the output results of the above algorithm.

 And if anyone is curious to check out the DistortTracks gizmo as it
 exists so far, it lives here https://gist.github.com/jedypod/6302723.

 Thanks very much!


 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



inverse_UV.nk
Description: Binary data

Re: [Nuke-users] Roto Bezier motion to Vector blur

2013-08-15 Thread Ivan Busquets
David,

Not sure if this will give you exactly what you're after, but this is a
technique I've used in the past to replicate motionblur from the motion of
a shape.

One way to get motion vectors that represent the motion of your shape is to
do a splinewarp that warps between your shape and the same shape offset by
1 frame. Then apply this SplineWarp to a coordinates map (not normalized,
but actual pixel coordinates), substract the original map from the
SplineWarp result, and use that as your motion vector information inside
VectorBlur.

Attached is an example script. Hope it helps.
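The subtract step can be illustrated outside Nuke with a toy coordinate map (plain Python dictionaries standing in for the pixel-coordinate image and the SplineWarp result; the names and values here are made up for the example):

```python
def motion_vectors(original, warped):
    """Per-pixel motion = warped coordinate map minus the original map.
    Each map is {(x, y): (px, py)} in actual pixel coordinates; the
    differences are what you'd feed a VectorBlur as u/v motion."""
    return {p: (warped[p][0] - original[p][0],
                warped[p][1] - original[p][1]) for p in original}

# Identity coordinate map for a tiny 2x2 image.
orig = {(x, y): (float(x), float(y)) for x in range(2) for y in range(2)}
# Pretend the SplineWarp pushed everything 3 px right and 1 px up.
warp = {p: (px + 3.0, py + 1.0) for p, (px, py) in orig.items()}
vectors = motion_vectors(orig, warp)
```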





On Thu, Aug 15, 2013 at 11:26 AM, Julik Tarkhanov
ju...@hecticelectric.nlwrote:

 I don't think you can do that with roto, since the only place where it can
 interpolate motion is at the edges (but not inside the shapes). So in
 theory you could get a kind of a vector map per pixel but that map would be
 limited to where the roto edge is located, since a roto shape is
 post-filled. You could try to extract (guess) the vectors from roto motion
 but this is prone to artifacts. If you want to add moblur inside of an area
 I would use Kronos with different plates used as source and warp.

 On 15 aug. 2013, at 19:39, David Yu dave...@gmail.com wrote:

  I turn on motion blur to see the effect which i want to use as the
 vectors that will drive the vector blur
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



roto_to_vectorblur_test.nk
Description: Binary data

Re: [Nuke-users] Nuke 7.0v3 or 7.0v1 ?

2013-04-22 Thread Ivan Busquets
 Anyone have the bug number for that regression or the file browser issue?

Sounds very similar to Bug ID 30168, with the file browser not identifying
sequences of certain file types (like dtex).
That one got resolved during the beta, but it sounds like having no file
extension at all leads to the same behaviour.




On Mon, Apr 22, 2013 at 2:22 PM, Deke Kincaid d...@thefoundry.co.uk wrote:

 Assist is the Roto/Paint version of Nuke you get 2 copies of with NukeX.
  Though, as you said, you may have so many licenses it doesn't matter.


 http://www.thefoundry.co.uk/articles/2013/03/18/496/assist-tool-for-nukex-available-now/

 Easier to see what nodes are enabled in the release notes.

 http://thefoundry.s3.amazonaws.com/products/nuke/releases/7.0v6/Nuke_7.0v6_ReleaseNotes.pdf

 Anyone have the bug number for that regression or the file browser issue?

 -deke

 On Monday, April 22, 2013, wrote:

 hey Deke,

 refresh my memory on the assist seats ?  might that help such a large
 group when we already have such a large base of licenses ?

 and what say you regarding any prominent 7.0v6 issues ?

 Lastly, do you know if the file browser issue with displaying numbered
 files as sequences is still an issue ?  (recall we have a proprietary
 image format that only has numbers as filenames, no prefix/suffix at
 all)... as of Nuke7 the file browser changed.  This would cause all
 artists to manually set the start/end frames explicitly in the read node
 as a result.

 thx,
 Ari
 Blue Sky


  The big thing your missing out on 7.0v6 is the extra Assist seats for
 any
  NukeX lics you have.  I'm not sure if that matters much to you guys
  though.
 
  -deke
 
 
 
  On Monday, April 22, 2013, wrote:
 
  so we saying 7.0v5 is the latest 'dependable' version ?
  anyone have a particular issue with 7.0v5 ?  last call for us.
 
  thx for the notes thus far... big help
 
  Ari
  Blue Sky
 
 
 
    No, you're right. 7.0v6 also has at least one regression, though
    it will only cause problems for people doing specific things with
    Python. At this point, 7.0v5 seems to be the best choice for a 7.0 release.
  
   -Nathan
  
  
  
   From: Richard Bobo
   Sent: Friday, April 19, 2013 1:06 PM
   To: Nuke user discussion
   Subject: Re: [Nuke-users] Nuke 7.0v3 or 7.0v1 ?
  
   Ari,
  
  
   I believe that 7.0v3 was pulled from distribution with a semi-serious
  bug.
   You should go to 7.0v4 or higher. Someone please correct me if I'm
   wrong...
  
  
   Rich
  
  
  
   Rich Bobo
   Senior VFX Compositor
   Armstrong-White
   http://armstrong-white.com/
  
   Email:  richb...@mac.com javascript:;
   Mobile:  (248) 840-2665
   Web:  http://richbobo.com/
  
  
   A man should never be ashamed to own that he has been in the wrong,
  which
   is but saying that he is wiser today than he was yesterday.
   - Alexander Pope (1688-1744) English Poet
  
  
  
  
  
   On Apr 19, 2013, at 3:54 PM, a...@curvstudios.com javascript:;
 wrote:
  
  
 We're considering rolling out Nuke 7.0v3, but I'm curious if there
  was
 anything major which makes 7.0v1 a better choice for now ?
  
 Also, has there been a fix to the file browser's change for
 numbered
   files ?
 ie. our proprietary file format has no prefix nor suffix, only
  numbered
 frames. As of Nuke7's release, the file browser won't display
  numbered
 files (sans prefix/suffix) as singular sequences.  This presents a
  major
 workflow inconvenience where the comper has to explicity set the
  frame
 range in and out in every read node.  Multiply that times over
 1,700
   shots
 in a film... and oy.
  
 thx,
 Ari
 Blue Sky
  
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk javascript:;,
  http://forums.thefoundry.co.uk/
  
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
  
  
  
  
  
  
 
 
   ___
   Nuke-users mailing list
   Nuke-users@support.thefoundry.co.uk javascript:;,
  http://forums.thefoundry.co.uk/
  
 
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users___
   Nuke-users mailing list
   Nuke-users@support.thefoundry.co.uk javascript:;,
  http://forums.thefoundry.co.uk/
   http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
 
 
  ___
  Nuke-users mailing list
  Nuke-users@support.thefoundry.co.uk javascript:;,
  http://forums.thefoundry.co.uk/
  http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
 
 
 
  --
  -
  Deke Kincaid
  Creative Specialist
  The Foundry
  Mobile: (310) 883 4313
  Tel: (310) 399 4555 - Fax: (310) 450 4516
 
  The Foundry Visionmongers Ltd.
  Registered in England and Wales No: 4642027

Re: AW: AW: [Nuke-users] Alexa Artifacts

2013-03-29 Thread Ivan Busquets
I'm with Jonathan in that this looks like a resizing filter.

You said it's most obvious on even backgrounds. To me that's yet another
sign that it's a resizing artifact.

What resolution are your source files? I don't know the specific details,
but I believe you can only get 1920x1080 ProRes quicktimes from the Alexa
(or 2K in newer firmwares).
The Alexa sensor being 2880x1620, there has to be some kind of downsampling.




On Fri, Mar 29, 2013 at 9:43 AM, Howard Jones mrhowardjo...@yahoo.comwrote:

 It was the same here - shot directly in ProRes 444.
 No idea what though


 Howard

   --
 *From:* Igor Majdandzic subscripti...@badgerfx.com
 *To:* 'Nuke user discussion' nuke-users@support.thefoundry.co.uk
 *Sent:* Friday, 29 March 2013, 16:29
 *Subject:* AW: AW: [Nuke-users] Alexa Artifacts

 Do you know what caused them?

 --
 igor majdandzic
 compositor |
 i...@badgerfx.com
 BadgerFX | www.badgerfx.com

  *From:* nuke-users-boun...@support.thefoundry.co.uk [mailto:
  nuke-users-boun...@support.thefoundry.co.uk] *On behalf of* Magno Borgo
  *Sent:* Friday, 29 March 2013 14:06
  *To:* Nuke user discussion
  *Subject:* Re: AW: [Nuke-users] Alexa Artifacts

  I've seen exactly the same artifacts when working on a film shot on Alexa.
  These are nasty, especially when keying... same issue, shot directly in
  ProRes.

 Magno.




 We've been having some problems with noise on some footages from Alexa,
 but nothing remotely near to that.

 diogo

 On Wed, Mar 27, 2013 at 9:50 PM, Jonathan Egstad jegs...@earthlink.net
 wrote:
 No idea, but it looks an awful lot like filtering from a slight resize
 operation.

 -jonathan

 On Mar 27, 2013, at 5:29 PM, Igor Majdandzic subscripti...@badgerfx.com
 wrote:

 do you mean in camera? because that was from the original qt footage

 --
 igor majdandzic
 compositor |
 i...@badgerfx.com
 BadgerFX | www.badgerfx.com

  *From:* nuke-users-boun...@support.thefoundry.co.uk [mailto:
  nuke-users-boun...@support.thefoundry.co.uk] *On behalf of* Jonathan
  Egstad
  *Sent:* Thursday, 28 March 2013 01:10
  *To:* Nuke user discussion
  *Cc:* Nuke user discussion
  *Subject:* Re: [Nuke-users] Alexa Artifacts

  Looks like a very slight resize was done.
 -jonathan

 On Mar 27, 2013, at 4:56 PM, Igor Majdandzic subscripti...@badgerfx.com
 wrote:

 Hey guys,
  we got footage from a shoot with the Alexa as the camera. It was shot in
  ProRes 444. The problem is: the picture has some artifacts, which confuses
  me given the codec is 444. I attached some images which show some of the
  grain patterns. Is this normal?

 thx,
 Igor



 --
 igor majdandzic
 compositor |
 i...@badgerfx.com
 BadgerFX | www.badgerfx.com

  *From:* nuke-users-boun...@support.thefoundry.co.uk [mailto:
  nuke-users-boun...@support.thefoundry.co.uk]
  *On behalf of:* Deke Kincaid
  *Sent:* Wednesday, 27 March 2013 23:47
  *To:* Nuke user discussion
  *Subject:* Re: [Nuke-users] FusionI/O and Nuke

 Hi Michael
  I'm actually testing this right now as Fusionio just gave us a bunch of
  them.  Early tests reveal that with dpx it's awesome, but with openexr
  zip-compressed files it is spending more time on compression; not sure if
  it is cpu bound or what (needs more study, but it's slower).  Openexr
  uncompressed files, though, are considerably super fast, but of course the
  issue is that it is 18 meg a frame.  These are single layer rgba exr files.

 -
 Deke Kincaid
 Creative Specialist
 The Foundry
 Mobile: (310) 883 4313
 Tel: (310) 399 4555 - Fax: (310) 450 4516

 The Foundry Visionmongers Ltd.
 Registered in England and Wales No: 4642027

 On Wed, Mar 27, 2013 at 3:26 PM, Michael Garrett michaeld...@gmail.com
 wrote:
 I'm evaluating one of these at the moment and am interested to know if
 others have got it working with Nuke nicely, meaning, have you been able to
 really utilise the insane bandwidth of this card to massively accelerate
 any part of your day to day compositing?

 So far, I've found it has no benefit when localising all Reads in a
 somewhat heavy comp, or even playing back a sequence of exr's or deep
 files, compared to localised sequences on a 10K Raptor drive also in my
 workstation - hopefully I'm missing something big though, this is day one
 after all.

 There may be real tangible benefits to putting the Nuke cache on it though
 - I'll see how it goes.

 I'm also guessing that as gpu processing becomes more prevalent in Nuke
 that we will see a real speed advantage handing data from a card like this
 straight to the gpu.

 Thanks,
 Michael

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


 crop-plate.jpg

 crop-plate_areas.jpg

 crop-plate_areas-edgeDetect.jpg


Re: [Nuke-users] 32 bit float - 16 bit half float expression?

2013-02-17 Thread Ivan Busquets
I'm not sure that Posterize would give the desired result in this case,
since it can still produce values that are not possible to represent as a
half-precision float.

You'd probably want to re-create a 16 bit half-float value by breaking the
32 bit float into its exponent and mantissa(significant) values, and then
truncating those.

The following expression might work (not thoroughly tested, though)

set cut_paste_input [stack 0]
version 7.0 v4
push $cut_paste_input
add_layer {alpha alpha.red alpha.beta}
Expression {
 expr0 exponent(r) > 16 ? r*inf : ldexp(rint(mantissa(r)*(2**11))/(2**11), exponent(r))
 expr1 exponent(g) > 16 ? g*inf : ldexp(rint(mantissa(g)*(2**11))/(2**11), exponent(g))
 expr2 exponent(b) > 16 ? b*inf : ldexp(rint(mantissa(b)*(2**11))/(2**11), exponent(b))
 channel3 alpha
 name Float_To_Half
 selected true
 xpos 292
 ypos 226
}

Hope that helps.
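The same truncation can be checked outside Nuke in plain Python, with math.frexp/ldexp playing the role of the expression's mantissa()/exponent(). This is only a sketch: subnormals and values right at the half-float maximum aren't handled.

```python
import math
import struct

def float_to_half(v):
    """Round a float to the nearest representable half, mirroring the
    Expression node above. frexp returns m in [0.5, 1), so rounding
    m * 2**11 keeps 11 significant bits, matching half precision's
    10 explicit + 1 implicit mantissa bits."""
    if v == 0.0:
        return 0.0
    m, e = math.frexp(v)
    if e > 16:  # beyond half's exponent range -> overflow to infinity
        return math.copysign(math.inf, v)
    return math.ldexp(round(m * 2**11) / 2**11, e)

def half_reference(v):
    """Reference: round-trip through struct's IEEE 754 binary16 format."""
    return struct.unpack('<e', struct.pack('<e', v))[0]
```

For normal-range values the two agree; e.g. both map 3.14159 to 3.140625.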

PS. Mark, hope you're doing well! :)




On Sun, Feb 17, 2013 at 5:35 AM, Shailendra Pandey shail...@gmail.comwrote:

 well actually 281474976710656 -1
 281474976710655
 Hope that helps



 On Sun, Feb 17, 2013 at 9:25 PM, Shailendra Pandey shail...@gmail.comwrote:

 Hi Mark

 You can use a posterize node
 with a value of 281474976710656
 which is 2 to the power(16*3)



 Cheers
 Shail

 On Sun, Feb 17, 2013 at 6:25 AM, Mark Nettleton mnettle...@ilm.comwrote:

 **
 I'm generating an ST map within Nuke, that needs to line up with 16 bit
 half float ST map images on disk.

 Is there a way I can generate 16bit half float values within nuke? Or
 convert 32 bit values to 16 bit half? (without writing to disk and reading
 back)

 Thanks

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users




 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] DeepToPoints bug?

2013-02-11 Thread Ivan Busquets
Hi Michael,

Hope all is going well. Can't verify this as I'm not in front of Nuke, but
that does sound like a bug to me (although I've never observed such
behaviour)

Patrick, even if the near and far clipping planes contribute to the
projection matrix, they would not alter it in such a way that projecting a
point on to a given depth would give different results.
The direction of the output vector should be the same (barring minor
rounding errors), so the point of that vector that lies at a certain depth
from camera will also remain the same.
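That claim can be sanity-checked numerically with a toy pinhole unprojection (not Nuke's camera code; the fov and aspect values are arbitrary): the near clip only scales the point on the near plane, so the normalized ray direction through a given screen coordinate is unchanged.

```python
import math

def view_ray(ndc_x, ndc_y, fov_deg, aspect, near):
    """Camera-space direction through a screen point for a pinhole camera.
    The near-plane point is near * (x*t*aspect, y*t, -1) with
    t = tan(fov/2); 'near' only scales it, so it drops out on normalize."""
    t = math.tan(math.radians(fov_deg) / 2.0)
    p = (ndc_x * t * aspect * near, ndc_y * t * near, -near)
    length = math.sqrt(sum(c * c for c in p))
    return tuple(c / length for c in p)

# Same screen point, wildly different near clips: identical direction.
ray_a = view_ray(0.3, -0.2, 40.0, 16.0 / 9.0, 0.1)
ray_b = view_ray(0.3, -0.2, 40.0, 16.0 / 9.0, 50.0)
```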




On Mon, Feb 11, 2013 at 4:46 PM, Patrick Heinen 
mailingli...@patrickheinen.com wrote:

 Hey Michael,

 One bug I know of in combination with the DeepToPoints is that if the
 bounding box is not equal to the format, it can cause weird behaviour
 similar to the stretching you mention. The near clipping plane changing
 the position in 3D space actually seems to be normal behaviour rather
 than a bug. The clipping planes influence the camera projection matrix,
 and thus it is normal to get different results from DeepToPoints, as it
 multiplies your vector with the inverse of the projection matrix to get
 the position in world space.

 So export your camera from your 3d application or use the information
 rendered to the metadata to build your cam and don't change the settings of
 it.
 Hope that helps, if you need it more detailed I can explain it further
 tomorrow.
 I'm actually using vrst files aswell ;)

 cheers
 Patrick

 Am 11.02.2013 um 20:07 schrieb Michael Garrett:

  Hey,
 
  I've found on Nuke 6.3v8 on Windows that the near clipping plane of a
 Camera plugged into DeepToPoints will affect how the point cloud is cast.
 The depth of samples gets thrown off, ie, they go non-linear and stretch
 out, when the near clipping plane is reduced, as if you've done a gamma
 curve on the deep data.
 
  I want to try this in 6.3v9 and 7.x as soon as I can, to see if it's a
 bug that's been fixed. We're using .vrst files in this case, not that it
 should make any difference.
 
  Typically I just re-checked the scene and it's working fine now...
 
  Has anyone else experienced this?
 
  Thanks,
  Michael
  ___
  Nuke-users mailing list
  Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
  http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] light linking as maya

2013-02-11 Thread Ivan Busquets
Excluding an object from being shaded by lights in the scene can be done
using a standard shader (basicmat, etc) and setting both the diffuse and
specular rates to 0.

However, having different objects affected by different lights within the
same scene is not possible as far as I know. (there should be a feature
request for that)



On Mon, Feb 11, 2013 at 4:32 PM, Gustaf Nilsson gus...@laserpanda.comwrote:

 yeah, no, it doesnt work like that. only solution i can think of right off
 my toes is to have two scanline renderers


 On Mon, Feb 11, 2013 at 10:05 PM, Marten Blumen mar...@gmail.com wrote:

 I couldn't get that to work. what am I missing?
 [image: Inline images 1]


 On 12 February 2013 09:45, Randy Little randyslit...@gmail.com wrote:

 just plug that card and that light into its own scene and the plug that
 scene into your next scene with your other card.


 Randy S. Little
 http://www.rslittle.com http://reel.rslittle.com
 http://www.imdb.com/name/nm2325729/




 On Mon, Feb 11, 2013 at 12:14 PM, Marten Blumen mar...@gmail.comwrote:

 not that I know of- you can use FillMat to turn shading off for each
 card on an ad hoc basis.




 On 9 February 2013 18:41, nandkishor19 
 nuke-users-re...@thefoundry.co.uk wrote:

 **
 I have two card in my scene. I am adding one light. This light should
 effect only on one card not to the other and i am using only one scene. Is
 it possible to do light linking technique in nuke?
 Thanks

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users




 --
 ■ ■ ■ ■ ■ ■ ■ ■ ■ ■

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] Re: How to prioritize cards layering in nuke?

2012-11-25 Thread Ivan Busquets
If your 2 cards are just a copy of each other, you might be better off
using a MergeMaterial set to over to layer both projections onto the same
card.




On Thu, Nov 22, 2012 at 12:41 AM, itaibachar 
nuke-users-re...@thefoundry.co.uk wrote:

 **
 Thanks Deke for the great tips!
 though I still get the tearing artifact in my setup for some reason.
 I played with the clipping plane with numbers all over the range, on all
 cameras (the projection cameras and the scene camera) and it always shows
 this tearing.
 Changing the piping order into the scene node seems to work but it is
 difficult to see underneath the tearing.
 what can it be?
 thanks
 itai

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] PositionPass to camera

2012-11-25 Thread Ivan Busquets
+1

Renderman has had a standard and consistent way of writing those out for a
while, but throw other renderers into the mix and it's very hard to write
tools that will work for any given render.

It would be great to see this standardized for sure.



On Wed, Nov 21, 2012 at 5:36 PM, Johannes Saam johannes.s...@gmail.comwrote:

 The header would be such a perfect way to deal with this, if only we could
 have ONE standard to do it. Anyone up for a challange to standardize it?
 Come up with ONE way and persuade renderes to do it all over?
 I am on your side :)
 jo


 On Mon, Nov 19, 2012 at 1:14 PM, Deke Kincaid dekekinc...@gmail.comwrote:

  Adding on top of what Michael mentioned: if you happen to use Vray then
  you're in luck, as the camera matrix is embedded in the metadata.  I can't
  remember if it was an in-house script or not, but I have also seen it built
  into Arnold and Prman exr files.  With MR, though, you're probably SOL.

 -deke


 On Sun, Nov 18, 2012 at 4:09 PM, Michael Ralla 
 michaelisbackfromh...@gmail.com wrote:

 In case you are handed exr's, I'd have quick look first if there's
 possibly camera data in the header of your xyz/pworld pass. There's a good
 chance you might find a translation and projection matrix you might be able
 to use to generate a camera that should match the camera the sequence was
 rendered with - without having to resort to xyz-pass trickery...

 Cheers, m.


 On Sun, Nov 18, 2012 at 5:33 AM, Howard Jones 
 mrhowardjo...@yahoo.comwrote:

 Hi

 Has anyone got a method of creating a camera from a world position pass?
 I'm thinking of scenarios where your 3D dept is struggling to create a
 usable camera, either through laziness, lack of knowledge or just
 reluctance.
 You have got a ppass (eventually), so it should be possible to retrofit
 a camera, as all the info is essentially there on a plate.

 Obviously not something a big facility has to deal with, but I have
 found an issue in the past, and possibly next week.

 Cheers
 Howard

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] point cloud from deep data where two objects intersect

2012-11-25 Thread Ivan Busquets
I think I've seen this behaviour before where (I assume due to a precision
error) two samples are not fully held out from each other. I imagine that,
even if the FG sample shows an opacity of 1, it might be something like
0.999. In the past, I've had some success by either forcing all FG
samples to have an alpha of 1 (in a deep expression), or multiply their
alpha up a bit until they fully occlude the samples in the BG.

Not the cleanest, but I think that would help in your example setup.
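The residue from an almost-but-not-quite-1 alpha is easy to see with the plain premultiplied "over" formula (a toy flat-image example, not Deep-specific; the 0.9999 is just an illustrative stand-in for the precision error):

```python
def over(fg_rgb, fg_a, bg_rgb, bg_a):
    """Premultiplied 'over': result = FG + BG * (1 - FG alpha)."""
    rgb = tuple(f + b * (1.0 - fg_a) for f, b in zip(fg_rgb, bg_rgb))
    return rgb, fg_a + bg_a * (1.0 - fg_a)

# FG alpha that displays as 1 but isn't quite: the BG still leaks through.
leaky, _ = over((0.5, 0.5, 0.5), 0.9999, (1.0, 1.0, 1.0), 1.0)
# Forcing the FG alpha to exactly 1 (the expression trick) removes it.
clean, _ = over((0.5, 0.5, 0.5), 1.0, (1.0, 1.0, 1.0), 1.0)
```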

Cheers,
Ivan


On Sun, Nov 25, 2012 at 11:55 AM, Frank Rueter fr...@beingfrank.infowrote:

 isn't that the same approach with less control?


 On 26/11/12 8:05 AM, Ari Rubenstein wrote:

 How about forgetting the deep data approach, and instead doing a
 Zintersect and feeding that into Nathan's old PixelGeo tool to generate
 just the geo from the intersection... then use that as the particle emitter?

 Although the PixelGeo tool hasn't been updated for recent Nuke... but it
 would be great if NFX plugins got an update ... any news of such I might
 have missed ... anyone?

 Ari
 Blue Sky

 Sent from my iPhone

 On Nov 24, 2012, at 9:29 PM, Frank Rueter fr...@beingfrank.info wrote:

  Hi everybody,

 I'm messing around with deep data to see if I can produce a point cloud
 of an object where it intersects the ground plane.
 In my test setup, I offset the ground plane a little bit to be closer to
 camera, then hold out the cylinder with it (leaving only the bits above the
 offset ground). I then hold out the original cylinder with the previous
 result to only get the bits underneath the offset ground.
 Lastly I hold out the result with a second version of the ground plane
 which is offset in the opposite direction, effectively sandwiching the
 cylinder in a user defined thickness of the ground.

 The problem I'm currently seeing is that the last hold out still lets
 deep samples of the cylinder through even though it should be fully covered
 by the ground. I have sent a support mail for this (not sure if it's me or
 Nuke).

 Anyway, I'm wondering if people have done something similar and
 found a more elegant solution for this?
 I'd also be happy to get an intersection map for the ground plane.

 The general idea is to spawn particles from where one object intersects
 another, be it from a point cloud or a textured ground.

 Any thoughts?

 Cheers,
 frank
 DeepTest.nk





Re: [Nuke-users] UV un-wrapping - re-wrapping issue

2012-11-12 Thread Ivan Busquets
Mmm, not really. Project3D won't change the projection mode of a camera,
whereas ScanlineRender can.

I was just trying to explain that the results stated in the original post
are not due to a caching issue, or Nuke ignoring the second ScanlineRender.
But Nathan is right, there's no need to connect a camera to a
ScanlineRender set to uv, so that's probably the easiest and safest
approach :)



On Mon, Nov 12, 2012 at 11:05 AM, Marten Blumen mar...@gmail.com wrote:

 Ya, but it has to be connected to the project3d, which is piped into the
 Scanline set to UV, which has the same effect to the camera I imagine.

 There is no need to connect a camera to a ScanlineRender that is set to
 'uv' projection mode. The only thing that will affect its output is the
 format of the 'bg' input.


 On 13 November 2012 08:00, Nathan Rusch nathan_ru...@hotmail.com wrote:

   There is no need to connect a camera to a ScanlineRender that is set
 to 'uv' projection mode. The only thing that will affect its output is the
 format of the 'bg' input.

 -Nathan


  *From:* Justin Ball blamsamm...@gmail.com
 *Sent:* Sunday, November 11, 2012 2:33 PM
 *To:* Nuke user discussion nuke-users@support.thefoundry.co.uk
 *Subject:* Re: [Nuke-users] UV un-wrapping - re-wrapping issue


 Well I should not need to manipulate cameras, just no point in having
 multiple copies of them in my opinion.

 Breaking them out did work though.  ran it through the farm and came out
 properly.  Now I can see all the problems in the matchmove.  :)

 I do not clone things out of principle, or well... lack of trust from
 Nuke 5, when it did not work and exploded scripts all the time.  I still
 flinch when I think about that.

 Thanks for the tip.  It really seems to have solved the issue for now!

 (not sure why though... seems like too much info would be traveling
 up-stream to the cameras)

 Thanks!

 Justin
 On Sun, Nov 11, 2012 at 4:23 PM, Marten Blumen mar...@gmail.com wrote:

 I think cloned cameras work als, which, keeps everything live.


 On 12 November 2012 11:14, Justin Ball blamsamm...@gmail.com wrote:

 I do have all the scanlines linked to the same camera, because, well,
 why wouldn't I.

 Breaking them up into separate cameras seems to have helped.
 I'm going to run it through the farm now and see if it sticks.  Could
 help with the other issue I was having where one scanline was rendering an
 output that wasn't even in its tree.

 A little annoying.

 Thanks!

 Ill let you know how it goes.

 Justin


 On Sun, Nov 11, 2012 at 4:08 PM, Marten Blumen mar...@gmail.comwrote:

 not sure the exact problem but there is an issue when using the same
 camera for both Scanline Render nodes. Try duplicating the camera and use
 the individual ones for each Scanline.




 On 12 November 2012 11:04, Justin Ball blamsamm...@gmail.com wrote:

 Hey Guys,

 Having a funky issue here.

 I'm using the old Mummy technique of match moving blood to an
 actor's face.  Using the 3d model, I'm rendering the scanline to UV space,
 using a grid-warp to touch up the fit and then re-wrapping that animated
 image to the 3d and rendered through a render camera.

 I am doing this all inline, and Nuke apparently does not like this: when
 trying to view the re-wrap over the plate at the end, the scanline will
 render the upstream output of the UV scanline instead of the updated
 information.

 I'm sure others have had this issue before, but what would be the fix?

 I'm using 6.3v8 x 64 on windows.

 I've tried throwing a crop or a grade node in between the 2 scanline
 render node process to break concatenation, but it does not seem to work.
 It seems like a caching issue.


 Any thoughts?
 --
 Justin Ball VFX
 VFX Supervisor, Comp, Effects, Pipeline and more...
 jus...@justinballvfx.com
 818.384.0923









 --
 Justin Ball VFX
 VFX Supervisor, Comp, Effects, Pipeline and more...
 jus...@justinballvfx.com
 818.384.0923









 --
 Justin Ball VFX
 VFX Supervisor, Comp, Effects, Pipeline and more...
 jus...@justinballvfx.com
 818.384.0923


Re: [Nuke-users] blut in camera

2012-11-06 Thread Ivan Busquets
You can change the focus diameter in the multisample tab of the scanline
render, and turn up the samples.
This will introduce a bit of jitter-rotation about the focal point of the
camera for each sample, effectively giving you an in-camera DOF effect.
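The geometry behind this can be sketched in plain Python (a toy pinhole model, purely illustrative and outside Nuke): jitter-rotating the view about the focal point leaves points at the focal distance projecting to the same place across samples, while points off the focal plane spread out, so averaging the samples produces defocus blur.

```python
import math

def project(p, fl=1.0):
    # simple pinhole: camera at the origin looking down -Z
    x, y, z = p
    return (fl * x / -z, fl * y / -z)

def jitter_about_focus(p, focus_dist, theta):
    # rotate the point about the focal point in the X-Z plane,
    # which is equivalent to jitter-rotating the camera about it
    fz = -focus_dist
    dx, dz = p[0], p[2] - fz
    c, s = math.cos(theta), math.sin(theta)
    return (c * dx + s * dz, p[1], fz - s * dx + c * dz)

def sample_spread(p, focus_dist, samples=10, max_jitter=0.05):
    # spread of the projected x position across jittered samples
    xs = [project(jitter_about_focus(p, focus_dist,
                  max_jitter * (2 * i / (samples - 1) - 1)))[0]
          for i in range(samples)]
    return max(xs) - min(xs)

in_focus = (0.0, 0.0, -1.2)    # sits exactly at the focal distance
out_focus = (0.0, 0.0, -2.4)   # twice as far away
print(sample_spread(in_focus, 1.2))   # ~0: stays sharp
print(sample_spread(out_focus, 1.2))  # > 0: gets blurred
```

The in-focus point never moves under the jitter, which is why only out-of-focus geometry blurs when the samples are averaged.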

set cut_paste_input [stack 0]
version 6.3 v8
Camera2 {
 inputs 0
 translate {0 0 0.625871}
 focal_point 1.2
 name Camera1
 selected true
 xpos -1375
 ypos -92
}
Text {
 inputs 0
 message C
 font \[python nuke.defaultFontPathname()]
 size 100
 xjustify center
 yjustify center
 box {536 301 1608 904}
 center {1072 603}
 name Text3
 selected true
 xpos -1090
 ypos -402
}
Card2 {
 translate {0.0939678 0 -0.547781}
 control_points {3 3 3 6

1 {-0.5 -0.5 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0.166865 0} 0 {0 0
0} 0 {0 0 0}
1 {0 -0.5 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0.166865
0} 0 {0 0 0} 0 {0.5 0 0}
1 {0.5 -0.5 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0.166865 0} 0 {0 0
0} 0 {1 0 0}
1 {-0.5 0 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0.166716 0} 0 {0
-0.166716 0} 0 {0 0.5 0}
1 {0 0 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0.166716 0} 0
{0 -0.166716 0} 0 {0.5 0.5 0}
1 {0.5 0 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0.166716 0} 0 {0
-0.166716 0} 0 {1 0.5 0}
1 {-0.5 0.5 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0 0} 0 {0 -0.166865
0} 0 {0 1 0}
1 {0 0.5 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0 0} 0 {0
-0.166865 0} 0 {0.5 1 0}
1 {0.5 0.5 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0 0} 0 {0 -0.166865
0} 0 {1 1 0} }
 name Card3
 selected true
 xpos -1090
 ypos -299
}
Text {
 inputs 0
 message B
 font \[python nuke.defaultFontPathname()]
 size 100
 xjustify center
 yjustify center
 box {536 301 1608 904}
 center {1072 603}
 name Text2
 selected true
 xpos -1211
 ypos -395
}
Card2 {
 translate {0.0379878 0 -0.280012}
 control_points {3 3 3 6

1 {-0.5 -0.5 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0.166865 0} 0 {0 0
0} 0 {0 0 0}
1 {0 -0.5 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0.166865
0} 0 {0 0 0} 0 {0.5 0 0}
1 {0.5 -0.5 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0.166865 0} 0 {0 0
0} 0 {1 0 0}
1 {-0.5 0 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0.166716 0} 0 {0
-0.166716 0} 0 {0 0.5 0}
1 {0 0 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0.166716 0} 0
{0 -0.166716 0} 0 {0.5 0.5 0}
1 {0.5 0 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0.166716 0} 0 {0
-0.166716 0} 0 {1 0.5 0}
1 {-0.5 0.5 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0 0} 0 {0 -0.166865
0} 0 {0 1 0}
1 {0 0.5 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0 0} 0 {0
-0.166865 0} 0 {0.5 1 0}
1 {0.5 0.5 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0 0} 0 {0 -0.166865
0} 0 {1 1 0} }
 name Card2
 selected true
 xpos -1211
 ypos -300
}
push $cut_paste_input
Text {
 message A
 font \[python nuke.defaultFontPathname()]
 size 100
 xjustify center
 yjustify center
 box {536 301 1608 904}
 center {1072 603}
 name Text1
 selected true
 xpos -1347
 ypos -392
}
Card2 {
 control_points {3 3 3 6

1 {-0.5 -0.5 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0.166865 0} 0 {0 0
0} 0 {0 0 0}
1 {0 -0.5 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0.166865
0} 0 {0 0 0} 0 {0.5 0 0}
1 {0.5 -0.5 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0.166865 0} 0 {0 0
0} 0 {1 0 0}
1 {-0.5 0 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0.166716 0} 0 {0
-0.166716 0} 0 {0 0.5 0}
1 {0 0 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0.166716 0} 0
{0 -0.166716 0} 0 {0.5 0.5 0}
1 {0.5 0 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0.166716 0} 0 {0
-0.166716 0} 0 {1 0.5 0}
1 {-0.5 0.5 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0 0} 0 {0 -0.166865
0} 0 {0 1 0}
1 {0 0.5 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0 0} 0 {0
-0.166865 0} 0 {0.5 1 0}
1 {0.5 0.5 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0 0} 0 {0 -0.166865
0} 0 {1 1 0} }
 name Card1
 selected true
 xpos -1347
 ypos -298
}
Scene {
 inputs 3
 name Scene1
 selected true
 xpos -1201
 ypos -212
}
push 0
ScanlineRender {
 inputs 3
 overscan 50
 samples 10
 shutter 0.47916667
 shutteroffset centred
 focal_jitter 0.2
 output_motion_vectors_type off
 MB_channel none
 name ScanlineRender1
 selected true
 xpos -1211
 ypos -72
}


On Tue, Nov 6, 2012 at 8:12 AM, Gabriel Dinis gabriel.di...@hotmail.com wrote:


 Hi there!

 Does anybody know the best way to get defocus blur when we change the
 focal distance in camera?

 Thanks in advance!
 Gab


Re: [Nuke-users] Normalize Viewer

2012-10-13 Thread Ivan Busquets
I'd also cast my vote for having this built into the viewer, maybe as a
dropdown under the cliptest/zebra pattern option, for the sake of
convenience.

However, in terms of a more efficient way to do a custom one, there are
ways around having to sample the image (with tcl or python), or having to
pre-analyze, avoiding the notable overhead that goes with it.

Taking Diogo's Dilate Min/Max approach, for example, there's no need to
sample the image afterwards, since you can do all the scaling
and offsetting using regular merges.

Ex:
set cut_paste_input [stack 0]
version 6.3 v8
push $cut_paste_input
Ramp {
 p0 {0 0}
 p1 {2048 0}
 color 1000
 name Ramp2
 label 0 to 1000
 selected true
 xpos 1112
 ypos -322
}
Group {
 name Normalize
 tile_color 0x7aa9
 selected true
 xpos 1112
 ypos -216
}
 Input {
  inputs 0
  name Input
  xpos -450
  ypos -312
 }
set N18046380 [stack 0]
push $N18046380
 Dilate {
  size {{-max(input.format.w, input.format.h)}}
  name Dilate2
  label Min
  xpos -376
  ypos -200
 }
 CopyBBox {
  inputs 2
  name CopyBBox2
  xpos -376
  ypos -76
 }
set N1a498300 [stack 0]
push $N18046380
 Merge2 {
  inputs 2
  operation from
  name Merge4
  xpos -450
  ypos 59
 }
push $N1a498300
push $N18046380
push $N18046380
 Dilate {
  size {{max(input.format.w, input.format.h)}}
  name Dilate1
  label Max
  xpos -281
  ypos -323
 }
 CopyBBox {
  inputs 2
  name CopyBBox1
  xpos -281
  ypos -173
 }
 Merge2 {
  inputs 2
  operation from
  name Merge1
  xpos -281
  ypos -76
 }
 Merge2 {
  inputs 2
  operation divide
  name Merge3
  xpos -281
  ypos 59
 }
 Output {
  name Output1
  xpos -281
  ypos 137
 }
end_group
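For reference, the Dilate/Merge chain in the group above amounts to (input - min) / (max - min). A minimal plain-Python sketch of that arithmetic (not a Nuke node, just the scaling and offsetting):

```python
def normalize(pixels):
    # offset by the image minimum, then scale by the (max - min)
    # range, so the output always spans 0-1
    lo, hi = min(pixels), max(pixels)
    if hi == lo:               # flat image: avoid dividing by zero
        return [0.0 for _ in pixels]
    return [(v - lo) / (hi - lo) for v in pixels]

ramp = [0.0, 250.0, 500.0, 1000.0]   # like the 0-1000 Ramp above
print(normalize(ramp))               # [0.0, 0.25, 0.5, 1.0]
```

Note this is genuinely different from the `1/(r+1)/10` expressions posted earlier in the thread, which remap values into 0-1 but invert and compress them rather than stretching the actual min/max range.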





On Sat, Oct 13, 2012 at 6:22 PM, Frank Rueter fr...@beingfrank.info wrote:

  None of those solutions actually produce what we're after though (some of
 your solutions seem to invert the input).

 We need something that can compresses the input to a 0-1 range by
 offsetting and scaling based on the image's min and max values (so the
 resulting range is 0-1). You can totally do this with a Grade or Expression
 node and a bit of tcl or python (or the CurveTool if you want to
 pre-compute), but that's not efficient.

 I reckon this should be a feature built into the viewer for ease-of-use
 and speed.






 On 14/10/12 1:04 PM, Marten Blumen wrote:

 and this group does all channels (rgba, depth, motion) using the expressions.
 Should be quite fast as an input process

 set cut_paste_input [stack 0]
 version 7.0 v1b74
 push $cut_paste_input
 Group {
  name Normalised_channels
  selected true
  xpos -526
  ypos 270
 }
  Input {
   inputs 0
   name Input1
   xpos -458
   ypos 189
  }
  Expression {
   expr0 mantissa (abs(r))
   expr1 mantissa (abs(g))
   expr2 mantissa (abs(b))
   channel3 depth
   expr3 mantissa (abs(z))
   name Normalized_Technical1
   tile_color 0xb200
   label rgbz
   note_font Helvetica
   xpos -458
   ypos 229
  }
  Expression {
   channel0 alpha
   expr0 mantissa (abs(a))
   channel1 {forward.u -forward.v -backward.u forward.u}
   expr1 mantissa (abs(u))
   channel2 {-forward.u forward.v -backward.u forward.v}
   expr2 mantissa (abs(v))
   channel3 depth
   name Normalized_Motion1
   tile_color 0xb200
   label a, motion u  v
   note_font Helvetica
   xpos -458
   ypos 270
  }
  Output {
   name Output1
   xpos -458
   ypos 370
  }
 end_group


 On 14 October 2012 11:29, Marten Blumen mar...@gmail.com wrote:

 And one that looks technical or techni-color!


 set cut_paste_input [stack 0]
 version 7.0 v1b74
 push $cut_paste_input
 Expression {
   expr0 mantissa (abs(r))
  expr1 mantissa (abs(g))
  expr2 mantissa (abs(b))
  channel3 depth
  expr3 mantissa (abs(z))
  name Normalized_Technical
  tile_color 0xb200

  label Normalized\n
  note_font Helvetica
  selected true
   xpos -286
  ypos -49

 }


 On 14 October 2012 10:46, Marten Blumen mar...@gmail.com wrote:

 This works for rgb & depth. Pop it into the ViewerProcess for normalized
 viewing. It seems to work with all values, free polygon cube to anyone who
 breaks it ;)

 Who knows the expression node; can we just apply the formula to all the
 present channels?


 set cut_paste_input [stack 0]
 version 7.0 v1b74
 push $cut_paste_input
  Expression {
  expr0 1/(r+1)/10
  expr1 1/(g+1)/10
  expr2 1/(b+1)/10
   channel3 depth
  expr3 1/(z+1)/10
  name RGBDEPTH
  label Normalized\n
  note_font Helvetica
  selected true
  xpos -220
  ypos 50

 }


 On 14 October 2012 10:24, Marten Blumen mar...@gmail.com wrote:

 A normalised expression node:


 set cut_paste_input [stack 0]
 version 7.0 v1b74
 push $cut_paste_input
  Expression {
  expr0 1/(r+1)/10
  expr1 1/(g+1)/10
  expr2 1/(b+1)/10
  name Expression6
  label Normalize Me\n
  note_font Helvetica
  selected true
  xpos -306
  ypos 83

 }


 On 14 October 2012 09:33, Marten Blumen mar...@gmail.com wrote:

 + 1

 as a side note, don't the SoftClip and Toe nodes do dynamic normalising
 of the RGB channels?

 set cut_paste_input [stack 0]
 version 7.0 v1b74
 push 

Re: [Nuke-users] Re: Baking camera and axis animation together

2012-10-01 Thread Ivan Busquets
Hi Johannes,

That's a different monster, in my opinion, which is to try and match the
camera (and motionblur) from an existing set of renders. In this case,
matching your renders is probably more important than a) keeping the camera
animatable, and b) having accurate motionblur.

If I had to guess, I'd say there are two possible reasons why you're getting a
better match by setting the local_matrix directly to the one in the exr's
metadata, instead of converting that back to Euler rotations:

1. No way to know the original rotation order from a transformation matrix
alone. So, if you're converting to Euler, you'd have to choose an arbitrary
rotation order, which may or may not match the one of the original camera.
Of course, you could have known the correct rotation order beforehand, in
which case this shouldn't be an issue.
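Point 1 is easy to demonstrate in plain Python (hypothetical angles, nothing Nuke-specific): the same two Euler angles applied in different orders produce different matrices, so decomposing a matrix back to angles without knowing the original order can reconstruct the wrong camera.

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul(a, b):
    # 3x3 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

rx, ry = rot_x(math.radians(30)), rot_y(math.radians(60))
xy = matmul(rx, ry)   # "XY" rotation order
yx = matmul(ry, rx)   # "YX" rotation order

# same two angles, different order -> different matrices, so a
# matrix-to-Euler conversion must assume some order
print(xy == yx)       # False
```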

2. How is your renderer (the one that produced the exrs) handling
motionblur? Assuming you're using Renderman, is subframe MotionBlur turned
on? Otherwise the renderer might just be doing a linear interpolation
between the camera position/rotation at each integer frame, which is the
same as you'll get in Nuke when explicitly setting a local_matrix.
I'm not an expert in Renderman, though, so someone with more insight might
be able to confirm or deny this.

Having said that, I've used both approaches to re-create a camera from
Renderman metadata, and I've rarely had motionblur issues with one or the
other. The few occasions where I have found differences have always been due
to a different rotation order.

Hope this helps.

Cheers,
Ivan




On Mon, Oct 1, 2012 at 12:14 AM, Johannes Hezer j.he...@studiorakete.de wrote:

  Hi Ivan,

 that is interesting with the motionblur.
 In my experience so far, when getting cameras into Nuke via exrs it was
 always best to use the matrix on the camera instead of converting everything
 back to Euler values in the rotation knobs.
 It was more accurate, and motionblur issues were gone.
 I know that is not exactly what you stated, but I would be interested to
 know if you experienced the same thing with the cam data from exrs.

 cheers



 On 10/1/12 2:22 AM, Ivan Busquets wrote:

 Might be splitting hairs, but since this comes up every now and then, I
 think it's worth noting that there are some important caveats to using the
 local_matrix knob to do that for animated cameras:

  - You lose the ability to tweak the animation afterwards.

  - Inaccurate motionblur. If you bake animated transform knobs into a
 single animated matrix, you're effectively losing the ability to
 interpolate curved paths correctly. The matrix values will interpolate
 between frames, but there's no guarantee that the result of that
 interpolation will match the transformation you'd get by interpolating the
 original rotation/translation/scale values.

  Getting back to the use case of the original post, I would recommend
 keeping the two separate transforms when exporting out to Maya.
 For one, if you use the forced local matrix approach you'll have no easy
 way to transfer that to Maya (as in, it won't export correctly when writing
 an FBX file, for example).
 But also, if you're planning to refine animation later on, it might be
 easier to do so on the original transformations.




 On Sun, Sep 30, 2012 at 2:18 PM, Marten Blumen mar...@gmail.com wrote:

 Stoked, solved it. Very easy thanks to the exposed World and Local
 Matrices.

 Attached is a verbose tutorial nuke script; all instructions included in
 stickies. Hit me up if it needs more work. thanks!





 On 29 September 2012 16:08, Marten Blumen mar...@gmail.com wrote:

 I just retested my test and it only worked on simple setups. Probably
 need expert equations to make it work properly!

  On 29 September 2012 15:39, C_Sander nuke-users-re...@thefoundry.co.uk
  wrote:

  I guess now would be a good time to learn expressions. I'll check
 that out, thanks!














Re: [Nuke-users] Re: Baking camera and axis animation together

2012-09-30 Thread Ivan Busquets
Might be splitting hairs, but since this comes up every now and then, I
think it's worth noting that there are some important caveats to using the
local_matrix knob to do that for animated cameras:

- You lose the ability to tweak the animation afterwards.

- Inaccurate motionblur. If you bake animated transform knobs into a single
animated matrix, you're effectively losing the ability to interpolate
curved paths correctly. The matrix values will interpolate between
frames, but there's no guarantee that the result of that interpolation will
match the transformation you'd get by interpolating the original
rotation/translation/scale values.
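The interpolation caveat can be shown in plain Python (a 2D toy, not Nuke's actual sampling): interpolating a baked matrix element-wise is not the same as rotating at the interpolated angle; at the midpoint of a 90-degree turn the lerped matrix isn't even a pure rotation any more.

```python
import math

def rot_z(a):
    # 2D rotation matrix
    c, s = math.cos(a), math.sin(a)
    return [[c, -s], [s, c]]

def lerp_mat(m0, m1, t):
    # element-wise interpolation, as a baked animated matrix would do
    return [[(1 - t) * m0[i][j] + t * m1[i][j] for j in range(2)]
            for i in range(2)]

a0, a1 = 0.0, math.radians(90)
baked = lerp_mat(rot_z(a0), rot_z(a1), 0.5)  # what a baked matrix gives
true = rot_z((a0 + a1) / 2)                  # interpolating the angle

# the lerped matrix's basis vectors have shrunk to length ~0.707
# instead of 1, so the geometry it transforms also shrinks
print(baked[0][0], true[0][0])   # 0.5 vs ~0.7071
```

With motionblur sampled at sub-frame times, this mismatch is exactly where baked-matrix cameras drift from the original animated transforms on curved paths.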

Getting back to the use case of the original post, I would recommend
keeping the two separate transforms when exporting out to Maya.
For one, if you use the forced local matrix approach you'll have no easy
way to transfer that to Maya (as in, it won't export correctly when writing
an FBX file, for example).
But also, if you're planning to refine animation later on, it might be
easier to do so on the original transformations.




On Sun, Sep 30, 2012 at 2:18 PM, Marten Blumen mar...@gmail.com wrote:

 Stoked, solved it. Very easy thanks to the exposed World and Local
 Matrices.

 Attached is a verbose tutorial nuke script; all instructions included in
 stickies. Hit me up if it needs more work. thanks!





 On 29 September 2012 16:08, Marten Blumen mar...@gmail.com wrote:

 I just retested my test and it only worked on simple setups. Probably
 need expert equations to make it work properly!

 On 29 September 2012 15:39, C_Sander 
 nuke-users-re...@thefoundry.co.uk wrote:

 **
 I guess now would be a good time to learn expressions. I'll check that
 out, thanks!







Re: [Nuke-users] Re: Baking camera and axis animation together

2012-09-30 Thread Ivan Busquets
Sure, you could enter a sub-frame increment when generating/baking the
curve.

But still, the point is that, if you need to bring that camera to a
different app, that's not going to help either.
And if you're keeping it in Nuke, you'd just get a camera that weighs more
than the original Axis+Camera stack, and is a lot harder to do any
animation on.

That's just my opinion, though, and the techniques mentioned before might
still be useful from an experimental point of view, or for very specific
scenarios.

Cheers,
Ivan

On Sun, Sep 30, 2012 at 7:25 PM, Marten Blumen mar...@gmail.com wrote:

 Is it possible to bake sub-frame samples to have more accurate motion-blur?


 On 1 October 2012 13:22, Ivan Busquets ivanbusqu...@gmail.com wrote:

 Might be splitting hairs, but since this comes up every now and then, I
 think it's worth noting that there are some important caveats to using the
 local_matrix knob to do that for animated cameras:

 - You lose the ability to tweak the animation afterwards.

 - Inaccurate motionblur. If you bake animated transform knobs into a
 single animated matrix, you're effectively losing the ability to
 interpolate curved paths correctly. The matrix values will interpolate
 between frames, but there's no guarantee that the result of that
 interpolation will match the transformation you'd get by interpolating the
 original rotation/translation/scale values.

 Getting back to the use case of the original post, I would recommend
 keeping the two separate transforms when exporting out to Maya.
 For one, if you use the forced local matrix approach you'll have no
 easy way to transfer that to Maya (as in, it won't export correctly when
 writing an FBX file, for example).
 But also, if you're planning to refine animation later on, it might be
 easier to do so on the original transformations.




 On Sun, Sep 30, 2012 at 2:18 PM, Marten Blumen mar...@gmail.com wrote:

 Stoked, solved it. Very easy thanks to the exposed World and Local
 Matrices.

 Attached is a verbose tutorial nuke script; all instructions included in
 stickies. Hit me up if it needs more work. thanks!





 On 29 September 2012 16:08, Marten Blumen mar...@gmail.com wrote:

 I just retested my test and it only worked on simple setups. Probably
 need expert equations to make it work properly!

 On 29 September 2012 15:39, C_Sander nuke-users-re...@thefoundry.co.uk
  wrote:

 **
 I guess now would be a good time to learn expressions. I'll check that
 out, thanks!













Re: [Nuke-users] Re: UVProject not sticking?

2012-09-20 Thread Ivan Busquets
Hi Thoma,

My problem is that I'm using an FBX with animated geo


If I understand your situation correctly, I'm not sure UVProject has ever
worked the way you expect it to.

I think UVProject does do its job correctly. The reason you don't get your
textures to stick is that your UV-baking is happening for each frame of
the animated geo. As in, on every frame, UV project is doing its job
correctly, but on the next frame the baked UVs will be replaced again (with
those corresponding to the projection onto the new position of the geo).
The wording is a bit confusing, but hope it makes sense.

If you want to bake UVs based on the projection at a certain reference
frame (and therefore have the textures stick to the animated geo), you can
try StickyProject from Nukepedia.

http://www.nukepedia.com/plugins/3d/stickyproject/
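A toy plain-Python model of the difference described above (a hypothetical 1D "projection", nothing to do with Nuke's internals): re-projecting every frame re-bakes the UVs as the geo moves, whereas baking once at a reference frame is what makes the texture stick.

```python
def project_uv(point_x, cam_x=0.0):
    # toy 1D "projection": uv is just the point's offset from the camera
    return point_x - cam_x

geo_x_per_frame = [0.0, 0.5, 1.0]   # animated geo position over 3 frames

# per-frame baking (what UVProject does): uv follows the motion
per_frame_uvs = [project_uv(x) for x in geo_x_per_frame]

# reference-frame baking (the "sticky" approach): uv frozen at frame 1
ref_uv = project_uv(geo_x_per_frame[0])
sticky_uvs = [ref_uv for _ in geo_x_per_frame]

print(per_frame_uvs)  # [0.0, 0.5, 1.0] -> texture slides on the geo
print(sticky_uvs)     # [0.0, 0.0, 0.0] -> texture sticks to the geo
```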

Hope that helps.

Ivan


On Thu, Sep 20, 2012 at 11:15 AM, thoma
nuke-users-re...@thefoundry.co.uk wrote:

 **
 Hi Deke,

 I can't post exactly what I'm working on but I'll provide a general
 illustration of what I'm talking about below. My problem is that I'm using
 an FBX with animated geo and the transforms for that geo aren't accessible
 seperately within nuke. Plus it's the principle that this node doesn't seem
 to work anymore! So before anyone says it - my real world scenario doesn't
 allow for the parented projector camera example below

 *Code:*

 set cut_paste_input [stack 0]
 version 6.3 v4
 BackdropNode {
  inputs 0
  name BackdropNode3
  tile_color 0x99ff
  note_font_size 25
  selected true
  xpos -3153
  ypos 2802
  bdwidth 1339
  bdheight 760
 }
 Camera2 {
  inputs 0
  name Camera12
  selected true
  xpos -2278
  ypos 3327
 }
 push $cut_paste_input
 Axis2 {
  translate {{curve i x1 0 x20 0.2} {curve i x1 0} {curve i x1 0 x20 0}}
  name Axis3
  selected true
  xpos -2278
  ypos 3046
 }
 set N2f2977d0 [stack 0]
 push $N2f2977d0
 Camera2 {
  name Camera13
  selected true
  xpos -2311
  ypos 3147
 }
 CheckerBoard2 {
  inputs 0
  name CheckerBoard4
  selected true
  xpos -2152
  ypos 3030
 }
 RotoPaint {
  curves {AnimTree:  {
  Version: 1.2
  Flag: 0
  RootNode: 1
  Node: {
   NodeName: Root {
Flag: 512
NodeType: 1
Transform: 0 0 S 0 0 S 0 0 S 0 0 S 0 1 S 0 1 S 0 0 S 0 1024 S 0 578
NumOfAttributes: 11
vis S 0 1 opc S 0 1 mbo S 0 1 mb S 0 1 mbs S 0 0.5 fo S 0 1
 fx S 0 0 fy S 0 0 ff S 0 1 ft S 0 0 pt S 0 0
   }
   NumOfChildren: 1
   Node: {
NodeName: Bezier1 {
 Flag: 576
 NodeType: 3
 CurveGroup:  {
  Transform: 0 0 S 1 120 0 S 1 120 0 S 1 120 0 S 1 120 1 S 1 120 1 S 1
 120 0 S 1 120 1274.83 S 1 120 730.333
  Flag: 0
  NumOfCubicCurves: 2
  CubicCurve:  {
   Type: 0 Flag: 8192 Dim: 2
   NumOfPoints: 36
   0 S 1 120 0 S 1 120 2 0 0 S 1 120 1524 S 1 120 880 0 0 S 1 120 0 S 1
 120 -2 0 0 S 1 120 4 S 1 120 -2 0 0 S 1 120 1434 S 1 120 944 0 0 S 1 120 -4
 S 1 120 2 0 0 S 1 120 34 S 1 120 32 0 0 S 1 120 1206 S 1 120 870 0 0 S 1
 120 -34 S 1 120 -32 0 0 S 1 120 14 S 1 120 10 0 0 S 1 120 1128 S 1 120 800
 0 0 S 1 120 -14 S 1 120 -10 0 0 S 1 120 32 S 1 120 20 0 0 S 1 120 1062 S 1
 120 762 0 0 S 1 120 -32 S 1 120 -20 0 0 S 1 120 -8 S 1 120 8 0 0 S 1 120
 1016 S 1 120 632 0 0 S 1 120 8 S 1 120 -8 0 0 S 1 120 -8 S 1 120 4 0 0 S 1
 120 1042 S 1 120 606 0 0 S 1 120 8 S 1 120 -4 0 0 S 1 120 -14 S 1 120 -8 0
 0 S 1 120 1178 S 1 120 582 0 0 S 1 120 14 S 1 120 8 0 0 S 1 120 -14 S 1 120
 -4 0 0 S 1 120 1206 S 1 120 596 0 0 S 1 120 14 S 1 120 4 0 0 S 1 120 -212 S
 1 120 30 0 0 S 1 120 1352 S 1 120 644 0 0 S 1 120 212 S 1 120 -30 0 0 S 1
 120 -4 S 1 120 -28 0 0 S 1 120 1594 S 1 120 676 0 0 S 1 120 4 S 1 120 28 0
 0 S 1 120 6 S 1 120 -6 0 0 S 1 120 1556 S 1 120 772 0 0 S 1 120 -6 S 1 120
 6 0
  }
  CubicCurve:  {
   Type: 0 Flag: 8192 Dim: 2
   NumOfPoints: 36
   0 S 1 120 0 S 1 120 2 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 0 S 1 120
 -2 0 0 S 1 120 4 S 1 120 -2 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 -4 S 1 120
 2 0 0 S 1 120 34 S 1 120 32 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 -34 S 1 120
 -32 0 0 S 1 120 14 S 1 120 10 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 -14 S 1
 120 -10 0 0 S 1 120 32 S 1 120 20 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 -32 S
 1 120 -20 0 0 S 1 120 -8 S 1 120 8 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 8 S
 1 120 -8 0 0 S 1 120 -8 S 1 120 4 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 8 S 1
 120 -4 0 0 S 1 120 -14 S 1 120 -8 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 14 S
 1 120 8 0 0 S 1 120 -14 S 1 120 -4 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 14 S
 1 120 4 0 0 S 1 120 -212 S 1 120 30 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 212
 S 1 120 -30 0 0 S 1 120 -4 S 1 120 -28 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120
 4 S 1 120 28 0 0 S 1 120 6 S 1 120 -6 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120
 -6 S 1 120 6 0
  }
  NumOfAttributes: 44
  vis S 0 1 r S 0 1 g S 0 1 b S 0 1 a S 0 1 ro S 0 0 go S
 0 0 bo S 0 0 ao S 0 0 opc S 0 1 bm S 0 0 inv S 0 0 mbo S 0 0
 mb S 0 1 mbs S 0 0.5 mbsot S 0 0 mbso S 0 0 fo S 0 1 fx S 0 0
 fy S 0 0 ff S 0 1 ft S 0 0 

Re: [Nuke-users] Re: UVProject not sticking?

2012-09-20 Thread Ivan Busquets
You can try reading that FBX into an Axis node instead. Usually you
should be able to get the local transforms for any given object. If
there are parented transforms, you can chain up different Axis nodes to
replicate the same hierarchy you had in Maya.

That failing... try StickyProject ;-)


On Thu, Sep 20, 2012 at 3:06 PM, thoma
nuke-users-re...@thefoundry.co.uk wrote:
 ahhh yes. I guess I was remembering UVProject as having a bit more
 functionality than it does. In that case...not having used stickyProject,
 does anyone know how to get transformational data out of an fbx that doesn't
 include it in the dropdowns? I have some geo with an animated parent
 transform in maya but the parent node/transformation matrix doesn't show in
 nuke. It only is accessible by enabling 'all objects' and 'read transform
 from file' on the fbx node. Is there any way to pull it out? (in this case
 to make my own little sticky projection)

 Thanks
 Thomas



Re: [Nuke-users] J_Ops 2.0 available - adding a rigid body physics engine for Nuke's 3D system

2012-08-20 Thread Ivan Busquets
A little late to the party, but just wanted to add my thanks to Jack for
sharing this.
This is a really awesome addition to J_Ops, and it has a great performance
too!

As an idea, and seeing how some of the above problems came from the
auto-calculated center of mass, maybe you could add a visual representation
(like a non-interactive viewer handle) of where the CoM is when it's not
overridden by the user?
That way it would at least be easier to detect the cases where it's off.

Cheers,
Ivan

On Sun, Aug 19, 2012 at 3:44 PM, Frank Rueter fr...@beingfrank.info wrote:

 Hi Jack,

 thanks, but that was still giving odd results. I have adjusted the CoM a
 bit more (linked to an axis for better control), and that seems to give the
 expected result:

 set cut_paste_input [stack 0]
 version 6.3 v8

 push $cut_paste_input
 Cube {
  cube {-0.2 -0.2 -0.2 0.2 0.5 0.2}
  translate {0 -0.5 0}
  rotate {35.26261719 0 0}
  pivot {0 0.5 0}
  name torso1
  selected true
  xpos 21
  ypos -130
 }
 J_MulletBody {
  bodydamping {0.09 0.09}
  bodycenterofmass {{parent.Axis1.translate x1 0} {parent.Axis1.translate
 x1 -0.167977} {parent.Axis1.translate x1 -0.109994}}

  bodycenterofmassoverride true
  labelset true
  name J_MulletBody6
  label \[value this.bodytype]-\[value this.coltype]
  selected true
  xpos 21
  ypos -72

 }
 J_MulletConstraint {
  conbodycount One
  conbodypreview true
  labelset true
  name J_MulletConstraint1
  label \[value this.contype]
  selected true
  xpos 21
  ypos -22

 }
 J_MulletSolver {
  name J_MulletSolver1
  selected true
  xpos 21
  ypos 45
 }
 Axis2 {
  inputs 0
  translate {0 -0.4 -0.29}
  name Axis1
  selected true
  xpos 197
  ypos -99

 }




 On 17/08/12 7:34 PM, Jack Binks wrote:

 Hey Gents,

 Will have to investigate further, but I think what you're seeing is
 related to the auto calculated center of mass. Does the below
 amendment make it more what you expect (body has CoM overridden)?

 set cut_paste_input [stack 0]
 version 6.3 v1
 push $cut_paste_input
 Cube {
   cube {-0.2 -0.2 -0.2 0.2 0.5 0.2}
   translate {0 -0.5 0}
   rotate {35.26261719 0 0}
   pivot {0 0.5 0}
   name torso1
   selected true
   xpos -224
   ypos -283
 }
 J_MulletBody {
   bodydamping {0.09 0.09}
   bodycenterofmass {0.15 -0.5 -0.4}
   bodycenterofmassoverride true
   labelset true
   name J_MulletBody6
   label \[value this.bodytype]-\[value this.coltype]
   selected true
   xpos -224
   ypos -225
 }
 J_MulletConstraint {
   conbodycount One
   conbodypreview true
   labelset true
   name J_MulletConstraint1
   label \[value this.contype]
   selected true
   xpos -224
   ypos -175
 }
 J_MulletSolver {
   name J_MulletSolver1
   selected true
   xpos -224
   ypos -108
 }

 Cheers
 Jack

 On 16 August 2012 23:41, Marten Blumen mar...@gmail.com wrote:

 that's what I got; I couldn't solve it properly before the deadline. It
 appeared to be some combination of the initial object position and the
 constraint axis.

 Luckily this fit my shot. Karabiners can shift within the bolt hanger when
 attached to the rock wall; it added to the realism!


 On 17 August 2012 10:18, Frank Rueter fr...@beingfrank.info wrote:

 I just had a play with this sort of simple constraint as well and am not
 getting the expected result (the box is not swinging around the constraint
 point).
 Am I doing something wrong?


 Cube {
   cube {-0.2 -0.2 -0.2 0.2 0.5 0.2}
   translate {0 -0.5 0}
   rotate {35.26261719 0 0}
   pivot {0 0.5 0}
   name torso1
   selected true
   xpos -464
   ypos -197
 }
 J_MulletBody {
   bodydamping {0.09 0.09}
   labelset true
   name J_MulletBody6
   label \[value this.bodytype]-\[value this.coltype]
   selected true
   xpos -464
   ypos -139
 }
 J_MulletConstraint {
   conbodycount One
   conbodypreview true
   labelset true
   name J_MulletConstraint1
   label \[value this.contype]
   selected true
   xpos -464
   ypos -89
 }
 J_MulletSolver {
   name J_MulletSolver1
   selected true
   xpos -464
   ypos -22

 }




 On 17/08/12 9:03 AM, Marten Blumen wrote:

 Cool - I had about 12-16 of them swinging on a wall, modeled and painted:
 6 hero ones and the rest in the distance.

 I had to bodge the whole thing; I didn't have time to learn it with the
 looming shot deadline.

 Would really like to have an RBD rope, split into segments, pulling at
 them to make them move.



 On 17 August 2012 08:52, Jack Binks jackbi...@gmail.com wrote:

 Cracking, thanks Marten, will have a play!


 On 16 Aug 2012, at 19:48, Marten Blumen mar...@gmail.com wrote:

 Yeah - it's an awesome bit of kit to have in the Nuke toolbox. Concept
 attached.

 The shot was a bit dead so I wanted to add sun glints off karabiners
 swinging on the wall. I could have animated it by hand but no need now!

 Super simple / amazing to be able to do it in Nuke.

 On 17 August 2012 06:35, Jack Binks jackbi...@gmail.com wrote:

 Sounds great + completely understand.
 Still, first production use I know of :)
 Cheers
 Jack

 On 16 August 2012 

Re: [Nuke-users] Re: Projecting Alpha channel = making holes in card?

2012-08-19 Thread Ivan Busquets
Unless I'm misreading your question, a MergeMaterial set to "stencil"
should be all you need to combine them.



On Sun, Aug 19, 2012 at 3:15 PM, kafkaz
nuke-users-re...@thefoundry.co.ukwrote:

 **
 I am not sure if I made myself clear.

 I want to project two textures on single card. First texture is RGB
 component, the second is alpha channel which would make holes in that card.

 Is it possible?

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: Projecting Alpha channel = making holes in card?

2012-08-19 Thread Ivan Busquets
Sorry, didn't see your previous reply, Marten.
What is it that didn't work for you using a MergeMat?

As long as one of the projected textures has an alpha channel, a MergeMat
set to "stencil" (with the cutout texture plugged to the A input) should do
the job.
Is your setup any different?



On Sun, Aug 19, 2012 at 3:22 PM, Marten Blumen mar...@gmail.com wrote:

 I couldn't make it work combining 2 x Project3D with a MergeMat. I'm sure
 there is a way somehow though.

 On 20 August 2012 10:15, kafkaz nuke-users-re...@thefoundry.co.uk wrote:

 **
 I am not sure if I made myself clear.

 I want to project two textures on single card. First texture is RGB
 component, the second is alpha channel which would make holes in that card.

 Is it possible?

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: Projecting Alpha channel = making holes in card?

2012-08-19 Thread Ivan Busquets
Ah, I see.

If you want to see other parts of the same scene through the cutout hole,
you'll need to add a BlendMaterial (set to "over") to tell the shader how it
needs to interact with stuff in the BG.


On Sun, Aug 19, 2012 at 3:59 PM, Marten Blumen mar...@gmail.com wrote:

 I'm sure I'm doing it wrong. Attached is my test.


 On 20 August 2012 10:44, Ivan Busquets ivanbusqu...@gmail.com wrote:

 Sorry, didn't see your previous reply, Marten.
 What is it that didn't work for you using a MergeMat?

 As long as one of the projected textures has an alpha channel, a MergeMat
 set to stencil (with the cutout texture plugged to the A input) should do
 the job.
 Is your setup any different?




 On Sun, Aug 19, 2012 at 3:22 PM, Marten Blumen mar...@gmail.com wrote:

 I couldn't make it work combining 2 x Project3D with a MergeMat. I'm
 sure there is a way somehow though.

 On 20 August 2012 10:15, kafkaz nuke-users-re...@thefoundry.co.ukwrote:

 **
 I am not sure if I made myself clear.

 I want to project two textures on single card. First texture is RGB
 component, the second is alpha channel which would make holes in that card.

 Is it possible?

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: Separate particle channel from another object channel

2012-08-19 Thread Ivan Busquets
You could use a FillMat on either the card or the particles to make it a
holdout of the rest of the scene. Or you could even have two
ScanlineRenders, one with the card held out, and one with the particles
held out, to have full control over both before merging them together.

Or, along the same lines as Frank suggested, you could use additional channels
to create an id pass for each part of your scene and use that as a matte for
your color corrections.


Attached is an example of both setups. Hope it helps.


On Fri, Aug 17, 2012 at 3:55 PM, Marten Blumen mar...@gmail.com wrote:

 I'm not sure how you can have a bad depth channel. Can you post an image?

 On 18 August 2012 10:33, poiboy nuke-users-re...@thefoundry.co.uk wrote:

 **
 Marten,

  I actually thought of that, but the depth channel for the particles is
  pretty fubar, as is the card for the projected image.

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



particles_and_card.nk
Description: Binary data
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] unwanted label on gizmo/group?

2012-07-24 Thread Ivan Busquets
Are any of the knobs in your gizmo called output, channels,
maskChannelInput or unpremult?

autolabel.py explicitly looks for those as part of the automatic labeling,
so if you have any knobs named like that they will be picked up.

If that's the case, you can either rename them to something else, or
override autolabel() if you'd rather not have that behaviour.
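A tiny standalone illustration of the kind of name check involved (the reserved names are the ones listed above; the helper itself is hypothetical, not from autolabel.py):

```python
# Knob names that Nuke's autolabel.py picks up for automatic labelling
RESERVED_KNOB_NAMES = {"output", "channels", "maskChannelInput", "unpremult"}

def safe_knob_name(name):
    """Suffix a knob name if autolabel would otherwise pick it up."""
    return name + "_" if name in RESERVED_KNOB_NAMES else name

print(safe_knob_name("output"))  # -> output_
print(safe_knob_name("size"))    # -> size
```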


Hope that helps.

Ivan



On Tue, Jul 24, 2012 at 7:51 PM, Jordan Olson jorxs...@gmail.com wrote:

 hey guys!
 I was making a group node today, adding expressions, linking knobs,
 etc. Then when I checked it out in the main node graph, I noticed it
 had a label underneath the name which read "(- / false)".
 Where is this one coming from, and how can I get rid of this label?
 can't seem to figure this one out.
 All I have internally is three nodes, two of which have expressions on
 their disable knobs.

 cheers,
 Jordan
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] DeepFromImage: what depth.Z range is it expecting?

2012-05-08 Thread Ivan Busquets
When using the depth input, it expects the depth channel to conform to
Nuke's standard (1/distance).

If your Arnold depth shader is outputting real depth values, you should be
able to just add an expression node to turn depth.Z into 1/depth.Z.
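As a plain-Python sketch of that conversion (outside Nuke; the zero-guard mirrors what you'd add to an Expression node, and treating zero as "empty pixel" is an assumption about the render):

```python
def to_nuke_depth(z_distance):
    """Convert a real camera-space distance to Nuke's depth convention.

    Nuke stores depth.Z as 1/distance, so larger values are closer to
    camera and 0.0 reads as infinitely far away.
    """
    if z_distance == 0:
        return 0.0  # guard divide-by-zero on empty/background pixels
    return 1.0 / z_distance

print(to_nuke_depth(100.0))  # a point 100 units from camera -> 0.01
```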


On Tue, May 8, 2012 at 7:01 PM, Paul Hudson phudson1...@gmail.com wrote:

 Hi all,

 I am attempting to use the DeepFromImage node with a render from
 Arnold.  My objects are about 100 units from camera.  I cannot get
 anything that looks close to correct until I rescale my depth pass to
 be between 1 and 0 (with 1 being 0 units from camera and 0 being
 infinity).  Is the DeepFromImage node setup to be convenient with
 Noise, Ramp, etc nodes but not actual renders?


 -Paul
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] python [topnode] equivalent

2012-04-23 Thread Ivan Busquets
What Hugo said.
You can find more info here:

http://docs.python.org/library/stdtypes.html#string-formatting

As for the 16 in the int() command (or 0 in the example I sent), that
is the base for the string-to-integer conversion. If the argument is not
given, it uses a base of 10 by default. You can pass it a value of 16 so it
will interpret hex characters correctly, or 0 to let python guess the best
base to use for the given string. In this case, this works because the
string will always start with '0x', which Python will interpret as an
hexadecimal.

More info:
http://docs.python.org/library/functions.html#int

Hope that clarifies it a bit.
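For anyone wanting to poke at this outside Nuke, a quick standalone illustration of the base argument:

```python
# int() converts a string using the given base; base 0 auto-detects
# the base from prefixes like '0x' (hexadecimal)
print(int("ff", 16))    # 255
print(int("0xff", 16))  # the '0x' prefix is accepted with base 16
print(int("0xff", 0))   # base 0 guesses hexadecimal from the prefix
print(int("42"))        # default base is 10
```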

Cheers,
Ivan

On Mon, Apr 23, 2012 at 12:12 PM, Hugo Léveillé hu...@fastmail.net wrote:

   Its called string formatting

  ex:
  age = '16'
 print "Hi, I am " + age + " years old"

  is the same as:

  print "Hi, I am %s years old" % age

  It has the advantage of making string construction clearer, as well as
 converting ints and floats to strings automatically

  ex:

  "I am %s years old and I have %s dollars" % (10, 3.5)
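The same examples written out runnable (using print as a function, which also works on newer Python):

```python
age = 16
dollars = 3.5
# %s calls str() on each argument, so ints and floats need no casting
print("I am %s years old and I have %s dollars" % (age, dollars))
# -> I am 16 years old and I have 3.5 dollars
```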




On Mon, Apr 23, 2012, at 12:00, Adam Hazard wrote:

  ok, cool, I think I understand it better now. Thanks, guys. Also, if you
 don't mind, another question: what exactly is the '%' doing in this code?
 And I have used 'int' before, and seen the '16' posted around; what exactly
 are those doing? I am guessing that is what converts the value from string
 to integer.

 -Adam

 On 04/23/2012 10:48 AM, Nathan Rusch wrote:

   The problem isn’t hex vs. int; the value you’re getting back from the
 Python knob is identical to the hex value returned by the nuke.tcl call.
 The issue you’re running into is that the nuke.tcl call is returning the
 hex value as a string, so you need to cast it to a numeric type before you
 can actually use it.

  n = nuke.selectedNode()
  tile_color = int(nuke.tcl('value [topnode %s].tile_color' % n.name()),
 16)


  -Nathan
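If it helps to see why the integer form is useful: as I understand it, Nuke packs tile_color as a single 0xRRGGBBAA integer, so once cast you can pull the channels apart with bit shifts (a sketch; the helper name is made up):

```python
def tile_color_to_rgba(value):
    """Split a Nuke-style 0xRRGGBBAA packed integer into byte channels."""
    return ((value >> 24) & 0xFF,  # red
            (value >> 16) & 0xFF,  # green
            (value >> 8) & 0xFF,   # blue
            value & 0xFF)          # alpha

print(tile_color_to_rgba(0x8E8E3800))  # -> (142, 142, 56, 0)
```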


  From: Adam Hazard ahaz...@tippett.com
  Sent: Monday, April 23, 2012 10:12 AM
  To: Nuke user discussion nuke-users@support.thefoundry.co.uk
  Subject: Re: [Nuke-users] python [topnode] equivalent

  Thanks Ivan.
 This was pretty much exactly what I was looking for. However, I had to
 change it a little bit because this was returning the tile color hex value,
 if I understand all this correctly, and my function needs just the integer
 value, as I can't assign or set a tile_color using hex (or at least I
 haven't been able to figure out how).

 Anyways, for whatever reason this does the trick, kinda mixing your code
 with what I had before. I am still not very sure why the tile_color has 2
 different value formats.

 n = nuke.selectedNode()
  topnode_name = nuke.tcl("full_name [topnode %s]" % n.name())
 topnode = nuke.toNode(topnode_name)
 tile_col = topnode['tile_color'].value()

 Thanks again and much appreciated.
 Adam


 On 04/20/2012 06:47 PM, Ivan Busquets wrote: Or if you just want the
 tile_color of the top node, you could of course do:

 n = nuke.selectedNode()

  tile_color = nuke.tcl("value [topnode %s].tile_color" % n.name())

 Hope that helps


  On Fri, Apr 20, 2012 at 6:41 PM, Ivan Busquets ivanbusqu...@gmail.comwrote:
 You can use nuke.tcl() within python to execute a tcl command.

 So, in your case, something like this should work:

n = nuke.selectedNode()

  topnode_name = nuke.tcl("full_name [topnode %s]" % n.name())

 topnode = nuke.toNode(topnode_name)



  On Fri, Apr 20, 2012 at 6:30 PM, Adam Hazard ahaz...@tippett.com wrote:
 Hopefully a quick question,

 If I currently have a node selected somewhere in a tree, and I want to
 access the topnodes tile color using python, how would I do so? Using
 [[topnode].tile_color] doesn't seem to work as it is tcl? Looking around it
 seems you need to check dependecies of all the nodes or something, but I
 haven't been able to get anything to work.  Is there no way to convert the
 tcl function to work in python?

 Thanks in advance for any help,
 Adam
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users








 --
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users






Re: [Nuke-users] python [topnode] equivalent

2012-04-20 Thread Ivan Busquets
You can use nuke.tcl() within python to execute a tcl command.

So, in your case, something like this should work:

  n = nuke.selectedNode()

topnode_name = nuke.tcl("full_name [topnode %s]" % n.name())

topnode = nuke.toNode(topnode_name)



On Fri, Apr 20, 2012 at 6:30 PM, Adam Hazard ahaz...@tippett.com wrote:

 Hopefully a quick question,

 If I currently have a node selected somewhere in a tree, and I want to
  access the topnode's tile color using python, how would I do so? Using
  [[topnode].tile_color] doesn't seem to work as it is tcl? Looking around it
  seems you need to check dependencies of all the nodes or something, but I
 haven't been able to get anything to work.  Is there no way to convert the
 tcl function to work in python?

 Thanks in advance for any help,
 Adam

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] python [topnode] equivalent

2012-04-20 Thread Ivan Busquets
Or if you just want the tile_color of the top node, you could of course do:

n = nuke.selectedNode()

tile_color = nuke.tcl("value [topnode %s].tile_color" % n.name())

Hope that helps


On Fri, Apr 20, 2012 at 6:41 PM, Ivan Busquets ivanbusqu...@gmail.comwrote:

 You can use nuke.tcl() within python to execute a tcl command.

 So, in your case, something like this should work:

   n = nuke.selectedNode()

  topnode_name = nuke.tcl("full_name [topnode %s]" % n.name())

 topnode = nuke.toNode(topnode_name)



 On Fri, Apr 20, 2012 at 6:30 PM, Adam Hazard ahaz...@tippett.com wrote:

 Hopefully a quick question,

 If I currently have a node selected somewhere in a tree, and I want to
  access the topnode's tile color using python, how would I do so? Using
  [[topnode].tile_color] doesn't seem to work as it is tcl? Looking around it
  seems you need to check dependencies of all the nodes or something, but I
 haven't been able to get anything to work.  Is there no way to convert the
 tcl function to work in python?

 Thanks in advance for any help,
 Adam



___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: Creating infinate zoom and image nesting

2012-04-10 Thread Ivan Busquets
Ok, so here's a few pointers that should let you make it work with either
of the examples.

1. Common to all 3 examples. Keep your transform nodes stacked together.
Adding the Grade node between transforms is breaking concatenation between
them.

2. In the first example, use a cloned transform on each one of the
branches, instead of doing it after everything is merged.

3. The Card3D example, as Randy suggested, should work similar to the
cloned transform example, with the convenience that you don't have to clone
anything. You would plug one camera to all the Card3D nodes, and drive your
zoom with that camera

4. For the full 3D setup to work, you need to use MergeMat nodes, not
regular Merges. Have a look at the example I sent before.


Here's your own script with those fixes in place, in case something isn't
clear. Hope that helps.



set cut_paste_input [stack 0]
version 6.3 v1
BackdropNode {
 inputs 0
 name BackdropNode1
 tile_color 0x8e8e3800
 label Regular Transform
 note_font_size 42
 selected true
 xpos -6788
 ypos -569
 bdwidth 724
 bdheight 867
}
BackdropNode {
 inputs 0
 name BackdropNode2
 tile_color 0x7171c600
 label Card 3D
 note_font_size 42
 selected true
 xpos -5908
 ypos -569
 bdwidth 680
 bdheight 869
}
BackdropNode {
 inputs 0
 name BackdropNode3
 tile_color 0x8e8e3800
 label 3D Scanline render
 note_font_size 42
 selected true
 xpos -5124
 ypos -569
 bdwidth 691
 bdheight 871
}
CheckerBoard2 {
 inputs 0
 boxsize 300
 centerlinewidth 10
 name CheckerBoard4
 selected true
 xpos -5458
 ypos -419
}
Grade {
 white {3 1 1 1}
 name Grade4
 selected true
 xpos -5458
 ypos -347
 postage_stamp true
}
Transform {
 rotate 45
 center {1024 778}
 name Transform9
 selected true
 xpos -5458
 ypos -273
}
set N34d05e80 [stack 0]
Dot {
 name Dot7
 selected true
 xpos -5284
 ypos -150
}
Dot {
 name Dot3
 label Original close up size
 selected true
 xpos -5284
 ypos 234
}
CheckerBoard2 {
 inputs 0
 boxsize 300
 centerlinewidth 10
 name CheckerBoard2
 selected true
 xpos -6338
 ypos -337
}
Grade {
 white {3 1 1 1}
 name Grade1
 selected true
 xpos -6338
 ypos -265
 postage_stamp true
}
Transform {
 rotate 45
 center {1024 778}
 name Transform18
 selected true
 xpos -6338
 ypos -147
}
set N68125270 [stack 0]
Dot {
 name Dot9
 selected true
 xpos -6149
 ypos -150
}
Dot {
 name Dot1
 label Original close up size
 selected true
 xpos -6149
 ypos 234
}
push $N68125270
Transform {
 scale 0.333
 center {1024 778}
 name Transform19
 selected true
 xpos -6338
 ypos -89
}
clone node12feb1e10|Transform|77695 Transform {
 scale 3
 center {1024 778}
 name Transform23
 label Use this as your master zoom
 selected true
 xpos -6335
 ypos -21
}
set C2feb1e10 [stack 0]
CheckerBoard2 {
 inputs 0
 boxsize 200
 centerlinewidth 10
 name CheckerBoard3
 selected true
 xpos -6558
 ypos -425
}
Grade {
 white {1 3 1 1}
 name Grade2
 selected true
 xpos -6558
 ypos -353
 postage_stamp true
}
Transform {
 rotate 45
 center {1024 778}
 name Transform20
 selected true
 xpos -6558
 ypos -260
}
Transform {
 scale 0.5
 center {1024 778}
 name Transform21
 selected true
 xpos -6558
 ypos -185
}
clone $C2feb1e10 {
 xpos -6556
 ypos -100
 selected true
}
CheckerBoard2 {
 inputs 0
 boxsize 100
 centerlinewidth 10
 name CheckerBoard10
 selected true
 xpos -6778
 ypos -488
}
Grade {
 white {1 1 3 1}
 name Grade3
 selected true
 xpos -6778
 ypos -416
 postage_stamp true
}
Transform {
 rotate 45
 center {1024 778}
 name Transform22
 selected true
 xpos -6778
 ypos -269
}
clone $C2feb1e10 {
 xpos -6778
 ypos -139
 selected true
}
Merge2 {
 inputs 2
 name Merge1
 selected true
 xpos -6558
 ypos 7
}
Merge2 {
 inputs 2
 name Merge2
 selected true
 xpos -6338
 ypos 103
}
Dot {
 name Dot2
 label Re-sized close up
 selected true
 xpos -6310
 ypos 253
}
push $cut_paste_input
Camera2 {
 translate {0 0 0.677}
 name Camera2
 selected true
 xpos -4960
 ypos 149
}
CheckerBoard2 {
 inputs 0
 boxsize 300
 centerlinewidth 10
 name CheckerBoard8
 selected true
 xpos -4674
 ypos -473
}
Grade {
 white {3 1 1 1}
 name Grade8
 selected true
 xpos -4674
 ypos -401
 postage_stamp true
}
Transform {
 rotate 45
 center {1024 778}
 name Transform13
 selected true
 xpos -4674
 ypos -305
}
set N61e7bfd0 [stack 0]
Transform {
 scale 0.333
 center {1024 778}
 name Transform14
 selected true
 xpos -4674
 ypos -137
}
CheckerBoard2 {
 inputs 0
 boxsize 200
 centerlinewidth 10
 name CheckerBoard9
 selected true
 xpos -4894
 ypos -473
}
Grade {
 white {1 3 1 1}
 name Grade9
 selected true
 xpos -4894
 ypos -401
 postage_stamp true
}
Transform {
 rotate 45
 center {1024 778}
 name Transform15
 selected true
 xpos -4894
 ypos -311
}
Transform {
 scale 0.5
 center {1024 778}
 name Transform16
 selected true
 xpos -4894
 ypos -233
}
CheckerBoard2 {
 inputs 0
 boxsize 100
 centerlinewidth 10
 name CheckerBoard7
 selected true
 xpos -5114
 ypos -489
}
Grade {
 white {1 1 3 1}
 name Grade7
 selected true
 xpos -5114
 ypos -417
 postage_stamp true
}
Transform {
 rotate 45
 center {1024 

Re: [Nuke-users] Creating infinate zoom and image nesting

2012-04-09 Thread Ivan Busquets
Hi,

In Nuke, I think there's 2 ways you could go about this:

1 - Keep all your transforms together for each image, before merging them
together. That means you'll probably want to have one master transform that
drives the camera move, and have it cloned (or expression linked) as the
last transform of each one of your images. Then, above that transform, just
position, scale each image to line them up. And if you need to do any
masking to blend them together, make sure you do that before the transforms.

2 - The workflow you describe from AE & Flame can be achieved by moving to
a 3D setup instead. One of the really cool things about Nuke that doesn't
get enough rep is the fact that geometry honors concatenation when looking
up its textures. So, say you have an image, scale it way down, and put it
on a card. If you render that through a camera that gets very close to the
card (and therefore scales it up again), you'll see that it's still
concatenating with the transformations before the card. And, most
importantly, this stays true even when you use MergeMaterial nodes.
So, for the case of the Earth Zoom, you could use a setup like this  (and
if you're going to be adding cloud layers, etc, they might need to have
some parallax, so I would definitely recommend the 3D setup in this case):

set cut_paste_input [stack 0]
version 6.3 v4
StickyNote {
 inputs 0
 name StickyNote1
 label because of the way geometry textures itself honoring\nconcatenation
of transforms, you can get close to your scaled\ndown images without
losing detail down here
 selected true
 xpos -1564
 ypos 342
}
push $cut_paste_input
Camera2 {
 translate {{curve x1 0} {curve x1 0} {curve x1 -0.528 x20 1.528}}
 name Camera1
 selected true
 xpos -1586
 ypos 250
}
CheckerBoard2 {
 inputs 0
 name CheckerBoard1
 selected true
 xpos -1221
 ypos -27
}
Transform {
 scale 0.1
 center {1024 778}
 name Transform8
 label scaled way down
 selected true
 xpos -1228
 ypos 61
}
CheckerBoard2 {
 inputs 0
 name CheckerBoard7
 selected true
 xpos -1318
 ypos -144
}
Transform {
 scale 0.3
 center {1024 778}
 name Transform9
 label scaled way down
 selected true
 xpos -1318
 ypos -55
}
ColorWheel {
 inputs 0
 gamma 0.45
 name ColorWheel1
 label BG = widest image
 selected true
 xpos -1428
 ypos -230
}
MergeMat {
 inputs 2
 name MergeMat2
 selected true
 xpos -1428
 ypos -49
}
MergeMat {
 inputs 2
 name MergeMat1
 selected true
 xpos -1428
 ypos 67
}
Card2 {
 translate {0 0 -0.713866}
 control_points {3 3 3 6

1 {-0.5 -0.5 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0.166865 0} 0 {0 0
0} 0 {0 0 0}
1 {0 -0.5 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0.166865
0} 0 {0 0 0} 0 {0.5 0 0}
1 {0.5 -0.5 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0.166865 0} 0 {0 0
0} 0 {1 0 0}
1 {-0.5 0 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0.166716 0} 0 {0
-0.166716 0} 0 {0 0.5 0}
1 {0 0 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0.166716 0} 0
{0 -0.166716 0} 0 {0.5 0.5 0}
1 {0.5 0 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0.166716 0} 0 {0
-0.166716 0} 0 {1 0.5 0}
1 {-0.5 0.5 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0 0} 0 {0 -0.166865
0} 0 {0 1 0}
1 {0 0.5 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0 0} 0 {0
-0.166865 0} 0 {0.5 1 0}
1 {0.5 0.5 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0 0} 0 {0 -0.166865
0} 0 {1 1 0} }
 name Card1
 selected true
 xpos -1428
 ypos 158
}
push 0
ScanlineRender {
 inputs 3
 output_motion_vectors_type accurate
 name ScanlineRender1
 selected true
 xpos -1428
 ypos 270
}


Hope that helps.

Cheers,
Ivan



On Mon, Apr 9, 2012 at 6:09 PM, mesropa
nuke-users-re...@thefoundry.co.ukwrote:

 **
 I have been trying for the last few days in creating an Earth Zoom also
 known as a Cosmic Zoom or as I like to think of it simply as image
 nesting. I found a tutorial for it in AE and it seams straight forward. It
 can also be done in Flame with the same logical steps, however I have been
 unable to do the same thing using Nuke. The problem is that once a node
 passes through a merge the pixels are baked down. You can have transform
 nodes one after the other doing inverse things and because of CONCATENATING
 they will cancel each other out without effect. but if you scale something
  down using a transform and merge it with another plate, the output cannot
  be inversely scaled back up without degradation. Short of creating giant
  30K and larger images (using a reformat to nest them), I can't make
 something work as efficiently as possible. Below is the tutorial of the
  After Effects setup. If anyone can give some pointers, that would be an
  amazing help.

 http://www.videocopilot.net/tutorial/earth_zoom/

 Thanks

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list

Re: [Nuke-users] Re: Ocula Disparity from Depth Channels

2012-04-03 Thread Ivan Busquets
A little late to the party, but just wanted to add my 2 cents in case
it helps someone make a full dollar :)

@Thomas: the process Michael described above is exactly what you need
if your starting point is already a world position pass. You should
only need the pass rendered for one eye and the opposite camera to get
disparity data.

Unfortunately, this is a lot more tedious to do with standard nodes
than it would be by writing a dedicated plugin, especially if you want
to account for any possible variation with the cameras.
For example, you can get the transformation matrix of your cameras
from their world_matrix knob, but you can't get the projection matrix
(unless you're writing a plugin, that is). So, you need to manually
figure out the camera-to-screen transformation using the knobs from
the camera's projection tab. For simple cases, you can use just the
focal and horizontal aperture values, but if you need to account for
window_translate, window_scale and roll changes, then it gets messy
very easily.

That said, I've put together a little Nuke script (attached) to go
from world position to disparity. It could be more compact, but it's
split out to show the different transforms and conversions between
coordinate systems one by one, so hopefully it'll be easier to
understand. Keep in mind that, as stated previously, this one doesn't
account for changes to the win_translate or win_scale knobs, though.

Hope that helps.
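For the simplest case (identity camera rotation, cameras offset only in x, no window translate/scale, square pixels - all simplifying assumptions, as noted above), the per-point math reduces to something like this sketch (function names are made up for illustration):

```python
def screen_x(world_pt, cam_x, focal, haperture, width):
    """Project a world-space point through a pinhole camera sitting at
    (cam_x, 0, 0), looking down -Z with no rotation (a simplification)."""
    x, _y, z = world_pt
    ndc_x = (focal / haperture) * ((x - cam_x) / -z)  # -0.5..0.5 across frame
    return (ndc_x + 0.5) * width                      # to pixel coordinates

def disparity_x(world_pt, left_x, right_x, focal, haperture, width):
    """Horizontal disparity stored in the left eye: the pixel offset to the
    matching point in the right-eye image (sign convention may vary)."""
    return (screen_x(world_pt, right_x, focal, haperture, width)
            - screen_x(world_pt, left_x, focal, haperture, width))

# a point 10 units in front of cameras with a 0.1 interaxial
print(disparity_x((0, 0, -10), -0.05, 0.05, 50.0, 25.0, 2048))
```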

Cheers,
Ivan



On Sun, Apr 1, 2012 at 12:52 PM, Michael Habenicht m...@tinitron.de wrote:
 Hi Thomas,

 you are right the pworld pass is already the first part. We have the screen
 space and the coresponding world position. But to be able to calculate the
 disparity you need the screen space position for this particular point
 viewed through the second camera. It is possible to calculate this based on
 the camera for sure as this is what the scanline render node and reconcile3d
 node do. But don't ask me about the math.

 Best regards,
 Michael

 Am 01.04.2012 18:08, schrieb thoma:

 Hi Michael,

 We're using Arnold. If i have my stereo Pworld passes and stereo cameras
 in nuke couldn't i make this work? When you say world position projected
 to screen space isn't that essentially what a Ppass is or are you
 talking about something more? I tried doing some logic operations on the
 left and right eyes of the Ppass to see if it would yield anything
 meaningful but no luck

 Thanks
 Thomas


 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


world_to_disparity.nk
Description: Binary data
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: difference between uv project and project 3d

2012-03-24 Thread Ivan Busquets
Thanks guys, I also think we have a great community, and it's a pleasure to
share when possible, as much as it is to learn from everyone who
participates.

@Thorsten: thanks for the Windows compile. I'll upload it to Nukepedia as
well.

Cheers,
Ivan

On Fri, Mar 23, 2012 at 5:44 PM, Michael Garrett michaeld...@gmail.comwrote:

 Totally agree it's made all the difference since the Shake days.  Thanks
 Ivan for contributing this (and all the other stuff!).

 Ari, I do have a gizmo version of a depth to Pworld conversion but it
 assumes raw planar depth from camera.  Though once you start factoring in
 near and far clipping planes, and different depth formats, it gets a bit
 more complicated.  Ivan may have something to say on this.

 Michael




 On 23 March 2012 03:16, ari Rubenstein a...@curvstudios.com wrote:

 Wow, much appreciated.


  Thinking back to how artists and studios in the film industry used to
  hold their techniques tight for leverage and advantage, it's great to see
 how much this comp community encourages and props up one another for
 creative advancement for all.

 Thanks again Ivan

 Ari


 Sent from my iPhone

 On Mar 23, 2012, at 3:16 AM, Ivan Busquets ivanbusqu...@gmail.com
 wrote:

 Hey Ari,

 Here's the plugin I mentioned before.

 http://www.nukepedia.com/gizmos/plugins/3d/stickyproject/

 There's only compiled versions for Nuke 6.3 (MacOS and Linux64), but I've
 uploaded the source code as well, so someone else can compile it for
 Windows if needed

 Hope it proves useful.
 Cheers,
 Ivan

 On Wed, Mar 21, 2012 at 2:55 PM, a...@curvstudios.com wrote:

 thanks Frank for the clarification.

  thanks Ivan for digging that plugin up if ya can.  I have a solution I
  wrapped into a tool as well, but I'd love to see your approach too.



  2)  if you've imported an obj sequence with UV's already on (for an
  animated, deformable piece of geo)... and your using a static camera
  (say
  a single frame of your shot camera)... is there a way to do something
  akin
  to Maya's texture reference object whereby the UV's are changed
 based
  on
  this static camera, for all the subsequent frames of the obj sequence
 ?
 
 
  I've got a plugin that does exactly that. I'll see if I can share on
  Nukepedia soon.
 
  Cheers,
  Ivan
 
  On Wed, Mar 21, 2012 at 2:31 PM, Frank Rueter fr...@beingfrank.info
  wrote:
 
  1 - UVProject creates UVs from scratch, with 0,0 in the lower left of
  the
  camera frustum and1,1 in the upper right.
 
  2 - been waiting for that feature a long time ;).It should be logged
 as
  a
  feature request but would certainly be good to report again to make
 sure
  (and to push it in priority)
 
 
 
 
  On 3/22/12 8:28 AM, a...@curvstudios.com wrote:
 
  couple more questions:
 
  1)  if imported geo does not already have UV's, will UVproject
 create a
  new set or does it require them to...replace them ?
 
  2)  if you've imported an obj sequence with UV's already on (for an
  animated, deformable piece of geo)... and your using a static camera
  (say
  a single frame of your shot camera)... is there a way to do something
  akin
  to Maya's texture reference object whereby the UV's are changed
 based
  on
  this static camera, for all the subsequent frames of the obj
 sequence ?
 
  ..sorry if I'm too verbose...that was sort of a stream of
 consciousness
  question.  Basically I'm asking if there is an easier way then my
  current
  method where I export an obj sequence with UV's, project3D on a
 single
  frame, render with scanline to unwrapped UV, then input that into the
  full
  obj sequence to get my paint to stick throughout.
 
  oy, sorry again.
 
  Ari
  Blue Sky
 
 
 
   ivanbusquets wrote:
 
  You can think of UVProject as a baked or sticky projection.
 
  The main difference is how they'll behave if you transform/deform
  your
  geometry AFTER your projection.
 
  UVProject bakes the UV values into each vertex, so if you
 transform
  those vertices later on, they'll still pull the textures from the
  same
  coordinate.
 
  The other difference between UVProject and Project3D is how they
  behave
  when the aspect ratio of the camera window is different than the
  aspect
  ratio of the projected image.
  With UVProject, projection is defined by both the horizontal and
  vertical aperture. Project3D only takes the horizontal aperture,
 and
  preserves the aspect ratio of whatever image you're projecting.
 
 
  Hope that makes sense.
 
  Cheers,
  Ivan
 
 
  On Tue, Mar 20, 2012 at 4:50 PM, coolchippernuke  wrote:
 
  hey Nukers, may be a very basic question, but i
  wanted
  to know
 
  what is the difference between the two, i do a lot of clean up
  work everyday and i am kind of confused when to use the uv project
  node an when to go for a project 3d node.i know that uv project
  project a mapping coordinates to a mesh, in one of frank videos he
  used the uv project node to clean up dolly tracks,that might have
  been done using the project

Re: [Nuke-users] Re: difference between uv project and project 3d

2012-03-23 Thread Ivan Busquets
Hey Ari,

Here's the plugin I mentioned before.

http://www.nukepedia.com/gizmos/plugins/3d/stickyproject/

There are only compiled versions for Nuke 6.3 (MacOS and Linux64), but I've
uploaded the source code as well, so someone else can compile it for
Windows if needed.

Hope it proves useful.
Cheers,
Ivan

On Wed, Mar 21, 2012 at 2:55 PM, a...@curvstudios.com wrote:

 thanks Frank for the clarification.

 thanks Ivan for digging that plugin up if ya can.  i have a solution I
 wrapped into a tool as well, but I'd love to see your approach as well.



  2)  if you've imported an obj sequence with UV's already on (for an
  animated, deformable piece of geo)... and your using a static camera
  (say
  a single frame of your shot camera)... is there a way to do something
  akin
  to Maya's texture reference object whereby the UV's are changed based
  on
  this static camera, for all the subsequent frames of the obj sequence ?
 
 
  I've got a plugin that does exactly that. I'll see if I can share on
  Nukepedia soon.
 
  Cheers,
  Ivan
 
  On Wed, Mar 21, 2012 at 2:31 PM, Frank Rueter fr...@beingfrank.info
  wrote:
 
  1 - UVProject creates UVs from scratch, with 0,0 in the lower left of
  the
  camera frustum and1,1 in the upper right.
 
  2 - been waiting for that feature a long time ;).It should be logged as
  a
  feature request but would certainly be good to report again to make sure
  (and to push it in priority)
 
 
 
 
  On 3/22/12 8:28 AM, a...@curvstudios.com wrote:
 
  couple more questions:
 
  1)  if imported geo does not already have UV's, will UVproject create a
  new set or does it require them to...replace them ?
 
  2)  if you've imported an obj sequence with UV's already on (for an
  animated, deformable piece of geo)... and your using a static camera
  (say
  a single frame of your shot camera)... is there a way to do something
  akin
  to Maya's texture reference object whereby the UV's are changed based
  on
  this static camera, for all the subsequent frames of the obj sequence ?
 
  ..sorry if I'm too verbose...that was sort of a stream of consciousness
  question.  Basically I'm asking if there is an easier way then my
  current
  method where I export an obj sequence with UV's, project3D on a single
  frame, render with scanline to unwrapped UV, then input that into the
  full
  obj sequence to get my paint to stick throughout.
 
  oy, sorry again.
 
  Ari
  Blue Sky
 
 
 
   ivanbusquets wrote:
 
  You can think of UVProject as a baked or sticky projection.
 
  The main difference is how they'll behave if you transform/deform
  your
  geometry AFTER your projection.
 
  UVProject bakes the UV values into each vertex, so if you transform
  those vertices later on, they'll still pull the textures from the
  same
  coordinate.
 
  The other difference between UVProject and Project3D is how they
  behave
  when the aspect ratio of the camera window is different than the
  aspect
  ratio of the projected image.
  With UVProject, projection is defined by both the horizontal and
  vertical aperture. Project3D only takes the horizontal aperture, and
  preserves the aspect ratio of whatever image you're projecting.
 
 
  Hope that makes sense.
 
  Cheers,
  Ivan
 
 
  On Tue, Mar 20, 2012 at 4:50 PM, coolchippernuke  wrote:
 
  hey Nukers, may be a very basic question, but i
  wanted
  to know
 
  what is the difference between the two, i do a lot of clean up
  work everyday and i am kind of confused when to use the uv project
  node an when to go for a project 3d node.i know that uv project
  project a mapping coordinates to a mesh, in one of frank videos he
  used the uv project node to clean up dolly tracks,that might have
  been done using the project 3d node too, so whats the difference
  in using uv project node for cleanup work? thanks ..
  [img][/img]
 
 
 
 
 
 
  Thanks Ivan.
 
 
 
  ___
  Nuke-users mailing list
  Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
  http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
 

Re: [Nuke-users] Re: difference between uv project and project 3d

2012-03-21 Thread Ivan Busquets

 2)  if you've imported an obj sequence with UV's already on (for an
 animated, deformable piece of geo)... and your using a static camera (say
 a single frame of your shot camera)... is there a way to do something akin
 to Maya's texture reference object whereby the UV's are changed based on
 this static camera, for all the subsequent frames of the obj sequence ?


I've got a plugin that does exactly that. I'll see if I can share on
Nukepedia soon.

Cheers,
Ivan

On Wed, Mar 21, 2012 at 2:31 PM, Frank Rueter fr...@beingfrank.info wrote:

 1 - UVProject creates UVs from scratch, with 0,0 in the lower left of the
 camera frustum and 1,1 in the upper right.

 2 - been waiting for that feature a long time ;). It should be logged as a
 feature request, but it would certainly be good to report it again to make
 sure (and to push up its priority)




 On 3/22/12 8:28 AM, a...@curvstudios.com wrote:

 couple more questions:

 1)  if imported geo does not already have UV's, will UVproject create a
 new set or does it require them to...replace them ?

 2)  if you've imported an obj sequence with UV's already on (for an
 animated, deformable piece of geo)... and your using a static camera (say
 a single frame of your shot camera)... is there a way to do something akin
 to Maya's texture reference object whereby the UV's are changed based on
 this static camera, for all the subsequent frames of the obj sequence ?

 ..sorry if I'm too verbose...that was sort of a stream of consciousness
 question.  Basically I'm asking if there is an easier way then my current
 method where I export an obj sequence with UV's, project3D on a single
 frame, render with scanline to unwrapped UV, then input that into the full
 obj sequence to get my paint to stick throughout.

 oy, sorry again.

 Ari
 Blue Sky



  ivanbusquets wrote:

 You can think of UVProject as a baked or sticky projection.

 The main difference is how they'll behave if you transform/deform your
 geometry AFTER your projection.

 UVProject bakes the UV values into each vertex, so if you transform
 those vertices later on, they'll still pull the textures from the same
 coordinate.

 The other difference between UVProject and Project3D is how they behave
 when the aspect ratio of the camera window is different than the aspect
 ratio of the projected image.
 With UVProject, projection is defined by both the horizontal and
 vertical aperture. Project3D only takes the horizontal aperture, and
 preserves the aspect ratio of whatever image you're projecting.


 Hope that makes sense.

 Cheers,
 Ivan


 On Tue, Mar 20, 2012 at 4:50 PM, coolchippernuke  wrote:

 hey Nukers, may be a very basic question, but i wanted
 to know

 what is the difference between the two, i do a lot of clean up
 work everyday and i am kind of confused when to use the uv project
 node an when to go for a project 3d node.i know that uv project
 project a mapping coordinates to a mesh, in one of frank videos he
 used the uv project node to clean up dolly tracks,that might have
 been done using the project 3d node too, so whats the difference
 in using uv project node for cleanup work? thanks ..
 [img][/img]






 Thanks Ivan.




___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: difference between uv project and project 3d

2012-03-21 Thread Ivan Busquets
It's just a modified version of UVProject that freezes the context of the
projection, so each vertex carries the UV as it was on the frame where it
was frozen.
So it's essentially the same thing you would get by rendering in UV space
and re-plugging that into the animated geo, but it all happens in a single
3D scene instead, and you avoid the double filtering.
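
The "freeze at a reference frame" idea can be sketched in plain Python (a toy pinhole projection, purely illustrative; the camera parameters and projection math here are simplified assumptions, not the plugin's actual code):

```python
# Toy sketch of the "sticky" idea: compute UVs once, at a reference
# frame, then keep them fixed as the vertices deform on later frames.
def project_uv(point, cam_z=10.0, focal=1.0):
    # Trivial pinhole projection into 0-1 UV space (hypothetical camera
    # looking down -z from z=cam_z).
    x, y, z = point
    d = cam_z - z  # distance from camera to the point
    return (focal * x / d + 0.5, focal * y / d + 0.5)

def bake_uvs(points_at_reference_frame):
    # The equivalent of UVProject evaluated once, at the frozen frame.
    return [project_uv(p) for p in points_at_reference_frame]

# UVs baked at the reference frame are simply re-used on every later
# frame, however the points move, so the texture "sticks" to the surface.
uvs = bake_uvs([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])
```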




On Wed, Mar 21, 2012 at 2:55 PM, a...@curvstudios.com wrote:

 thanks Frank for the clarification.

 thanks Ivan for digging that plugin up if ya can.  i have a solution I
 wrapped into a tool as well, but I'd love to see your approach as well.



  2)  if you've imported an obj sequence with UV's already on (for an
  animated, deformable piece of geo)... and your using a static camera
  (say
  a single frame of your shot camera)... is there a way to do something
  akin
  to Maya's texture reference object whereby the UV's are changed based
  on
  this static camera, for all the subsequent frames of the obj sequence ?
 
 
  I've got a plugin that does exactly that. I'll see if I can share on
  Nukepedia soon.
 
  Cheers,
  Ivan
 
  On Wed, Mar 21, 2012 at 2:31 PM, Frank Rueter fr...@beingfrank.info
  wrote:
 
  1 - UVProject creates UVs from scratch, with 0,0 in the lower left of
  the
  camera frustum and1,1 in the upper right.
 
  2 - been waiting for that feature a long time ;).It should be logged as
  a
  feature request but would certainly be good to report again to make sure
  (and to push it in priority)
 
 
 
 
  On 3/22/12 8:28 AM, a...@curvstudios.com wrote:
 
  couple more questions:
 
  1)  if imported geo does not already have UV's, will UVproject create a
  new set or does it require them to...replace them ?
 
  2)  if you've imported an obj sequence with UV's already on (for an
  animated, deformable piece of geo)... and your using a static camera
  (say
  a single frame of your shot camera)... is there a way to do something
  akin
  to Maya's texture reference object whereby the UV's are changed based
  on
  this static camera, for all the subsequent frames of the obj sequence ?
 
  ..sorry if I'm too verbose...that was sort of a stream of consciousness
  question.  Basically I'm asking if there is an easier way then my
  current
  method where I export an obj sequence with UV's, project3D on a single
  frame, render with scanline to unwrapped UV, then input that into the
  full
  obj sequence to get my paint to stick throughout.
 
  oy, sorry again.
 
  Ari
  Blue Sky
 
 
 
   ivanbusquets wrote:
 
  You can think of UVProject as a baked or sticky projection.
 
  The main difference is how they'll behave if you transform/deform
  your
  geometry AFTER your projection.
 
  UVProject bakes the UV values into each vertex, so if you transform
  those vertices later on, they'll still pull the textures from the
  same
  coordinate.
 
  The other difference between UVProject and Project3D is how they
  behave
  when the aspect ratio of the camera window is different than the
  aspect
  ratio of the projected image.
  With UVProject, projection is defined by both the horizontal and
  vertical aperture. Project3D only takes the horizontal aperture, and
  preserves the aspect ratio of whatever image you're projecting.
 
 
  Hope that makes sense.
 
  Cheers,
  Ivan
 
 
  On Tue, Mar 20, 2012 at 4:50 PM, coolchippernuke  wrote:
 
  hey Nukers, may be a very basic question, but i
  wanted
  to know
 
  what is the difference between the two, i do a lot of clean up
  work everyday and i am kind of confused when to use the uv project
  node an when to go for a project 3d node.i know that uv project
  project a mapping coordinates to a mesh, in one of frank videos he
  used the uv project node to clean up dolly tracks,that might have
  been done using the project 3d node too, so whats the difference
  in using uv project node for cleanup work? thanks ..
  [img][/img]
 
 
 
 
 
 
  Thanks Ivan.
 
 
 

Re: [Nuke-users] difference between uv project and project 3d

2012-03-20 Thread Ivan Busquets
You can think of UVProject as a "baked" or "sticky" projection.

The main difference is how they'll behave if you transform/deform your
geometry AFTER your projection.

UVProject bakes the UV values into each vertex, so if you transform those
vertices later on, they'll still pull the textures from the same coordinate.

The other difference between UVProject and Project3D is how they behave
when the aspect ratio of the camera window is different than the aspect
ratio of the projected image.
With UVProject, projection is defined by both the horizontal and vertical
aperture. Project3D only takes the horizontal aperture, and preserves the
aspect ratio of whatever image you're projecting.
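
A toy numeric sketch of that aspect-ratio difference (illustrative only, not the actual NDK code; the aperture units and 0.5 offset are assumptions):

```python
# Illustrative only: how the two nodes derive UVs from a point projected
# onto the camera's aperture plane (x, y in aperture units, 0 at centre).
def uvproject_uv(x, y, haperture, vaperture):
    # UVProject: both apertures define the window, so UVs always
    # span 0-1 across the camera frustum.
    return (x / haperture + 0.5, y / vaperture + 0.5)

def project3d_uv(x, y, haperture, image_aspect):
    # Project3D: only the horizontal aperture is used; the vertical
    # extent follows the projected image's own aspect ratio.
    return (x / haperture + 0.5, y / (haperture / image_aspect) + 0.5)
```

When the projected image's aspect matches the camera window, the two agree; a wider or narrower image only changes Project3D's result.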


Hope that makes sense.

Cheers,
Ivan


On Tue, Mar 20, 2012 at 4:50 PM, coolchipper 
nuke-users-re...@thefoundry.co.uk wrote:

 hey Nukers, maybe a very basic question, but I wanted to know: what is the
 difference between the two? I do a lot of clean-up work every day and I am
 kind of confused about when to use the UVProject node and when to go for a
 Project3D node. I know that UVProject projects mapping coordinates onto a
 mesh; in one of Frank's videos he used the UVProject node to clean up dolly
 tracks. That might have been done using the Project3D node too, so what's
 the difference in using the UVProject node for cleanup work? Thanks.

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: Luminance / Chroma B44 Compressed EXRs in Nuke

2012-03-14 Thread Ivan Busquets

 Right, well I wouldn't call that unstable.


As Nathan said before, nobody is questioning its actual stability. The name
distinction here was only between stable and feature releases. I'm
sorry if you interpreted it that way when I quoted 1.6.1 as being the
latest stable release, but you've already been given an explanation for
the naming scheme, so let's just drop that argument. :)

In my opinion, 1.7 is perfectly stable. If anyone can identify an issue
 with it, I'll gladly fix the bug myself. It is open source, after all.


Nothing stops you from doing the same thing for Nuke. The source for
exrReader and exrWriter is available in the NDK, so you can recompile
against 1.7 if you need long channel names in Nuke.

Whether Nuke should use 1.7 by default or not is debatable, in my opinion.
I understand your point, and was just giving a possible reason why they
would be reluctant to.


Cheers,
Ivan

On Wed, Mar 14, 2012 at 9:59 AM, fnordware 
nuke-users-re...@thefoundry.co.uk wrote:

 peter wrote:
 The issue was that in 1.70, long channel names were added in a way that
 broke compatibility with older versions of OpenEXR:


 Right, well I wouldn't call that unstable. That would be like saying a
 new version of Nuke is unstable because old versions didn't recognize a
 node that was only in the new version.

 If you're worried about writing out files that will be incompatible with
 older versions, then just crop the channel names before passing them to
 OpenEXR. But since these files are out there, you may as well use the 1.7
 library so you can read them. Ideally you'd let the user check a box to
 write out long channel names if they really wanted to.

 The other time the EXR format was expanded was to allow for tiled images.
 It will soon be expanded again for these deep image buffers. At least I'm
 pretty confident Nuke will upgrade to the latest library when that happens.


 Brendan

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: Luminance / Chroma B44 Compressed EXRs in Nuke

2012-03-12 Thread Ivan Busquets
For what it's worth, I don't think this is due to outdated libraries.

Nuke's exrReader uses OpenEXR 1.6.1. In fact, it's slightly above that.
IIRC from a post to this list a while ago, it's a checkout from somewhere
between 1.6.1 and 1.7 releases, after the addition of StringVector and
MultiView attributes, but before support for long channel names.

1.6.1 is the latest stable release, and there are other reasons not to move
to 1.7, like the fact that EXR files with long channel names are not
backwards compatible. I don't think there are many vendors using 1.7, if
any, and I suppose most are now waiting for OpenEXR 2.0 before updating
their EXR libs.

If Nuke is indeed not reading chroma subsampled EXRs correctly, I would
imagine this is not because of the libs used, but because it's simply not
handled in the exrReader (though I might be wrong).
As for long channel names, I think they were added in 1.7, not 1.6. So no,
Nuke's default exrReader won't read them.




On Sun, Mar 11, 2012 at 4:36 PM, fnordware 
nuke-users-re...@thefoundry.co.uk wrote:

 Jed Smith wrote:
 It seems like Nuke does not handle chroma subsampled EXR files properly?
 Is this a bug? Has anyone else experienced this problem?


 It looks like Nuke is actually using an outdated version of the OpenEXR
 library. Nuke, of all programs!

 The 4:2:0 Luminance/Chroma sampling was added to OpenEXR after the initial
 release. The library now provides a class that will handle the conversion
 to and from 4:4:4 RGB, but you have to be using a version of the library
 that supports it and be on the lookout for that situation.

 Something else I recently noticed: Nuke can't handle EXR files if they
 have channel names longer than 32 characters (at least the Mac version).
 This is another sign they're using the old library. OpenEXR originally had
 that limit, but it was expanded to 256 characters in 2007 with OpenEXR 1.6.
 Older versions of the library will reject these files.

 But the Luminance/Chroma stuff was added in 2004 with OpenEXR 1.2!


 Brendan

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: Gizmo placement

2012-01-23 Thread Ivan Busquets
How are you 'creating' the node?

When you add a menu command to create your gizmo-node, make sure you use:

nuke.createNode('nodeClass')

instead of

nuke.nodes.nodeClass()
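
For instance, a hypothetical menu.py entry (the menu path and the class name 'MyGizmo' are placeholders; this only runs inside Nuke's GUI):

```python
# menu.py -- runs inside Nuke only; 'MyGizmo' is a placeholder class name.
import nuke

toolbar = nuke.menu('Nodes')
# createNode() places the node at the DAG cursor and connects it to the
# current selection, which is the behaviour you want from a menu item;
# nuke.nodes.MyGizmo() would create it at a default position instead.
toolbar.addCommand('Custom/MyGizmo', "nuke.createNode('MyGizmo')")
```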



On Mon, Jan 23, 2012 at 12:30 PM, r...@hexagonstudios.com 
r...@hexagonstudios.com wrote:


  I did look into the X and Y pos and it is not in the add knobs section.
 The only xypos that is in there is for the nodes inside the group.


   #! X:/apps/Nuke6.1v5/Nuke6.1.exe -nx
 version 6.1 v5
 Gizmo {
 tile_color 0x99
 selected true

 inputs 2
 help LX_CA V1.1\nCreated by: Rob Bannister\n\nBased on the CA found in
 the Retro node by julian van mil. You can download his plugin on
 Nukepedia.\n\nOperation: \nYou can base your CA on 4 opereations. Full,
 Radial Falloff, Lumanance or alpha Mask, or a combination of Radial falloff
 and Lumanance.\n\nMask:\n- Source will be based on the input source and all
 the controls are handled with the gizmo.\n\n-Luma Mask, input souce into
 the mask input and use the gizmo controls to create your mask.\n\n- Alpah
 mask, create your mask externally and input the alpha into the mask
 input.\n\nUse the preview falloff to adjust your mattes.
 tile_color 0x99
 addUserKnob {20 User}
 addUserKnob {4 Operation M {Full Radial Luma or Mask Radial + Luma 
 }}
 addUserKnob {4 LumaMatte l Mask M {Source Luma Mask Alpha Mask}}
 addUserKnob {4 PreviewFalloff l Preview Falloff M {Result Falloff}}
 addUserKnob {41 multiplier l amount T Dot14.multiplier}
 addUserKnob {41 mixRay l mixRays T moxDot1.mixRay}
 addUserKnob {41 size l blur T Blur2.size}
 addUserKnob {41 which T Switch3.which}
 addUserKnob {26 }
 addUserKnob {26 Falloff l  +STARTLINE T Radial Falloff}
 addUserKnob {41 softness T Radial1.softness}
 addUserKnob {41 size_1 l Blur T Blur3.size}
 addUserKnob {41 scale T Transform_radialscale.scale}
 addUserKnob {26 }
 addUserKnob {26 Tolerance l  +STARTLINE T Luma Tolerance}
 addUserKnob {41 blackpoint T Grade_luma.blackpoint}
 addUserKnob {41 whitepoint T Grade_luma.whitepoint}
 addUserKnob {41 size_2 l Erode T FilterErode_luma.size}
 addUserKnob {41 size_3 l Blur T Blur_luma.size}
 }

 group nodes

 } end_group




 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Viewer colour sampler 8-bit values?

2012-01-08 Thread Ivan Busquets
Ugh, nevermind. It seems to be on 6.3v1 only, so I assume it was just a
hiccup in that one version alone.

Sorry for the noise. :)
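
For reference, the two candidate mappings discussed in this thread can be sketched as follows (a sketch using the standard sRGB encode, not Nuke's internals):

```python
def linear_to_8bit(v):
    # Plain linear mapping of 0-1 to 0-255 (what 6.3 appears to do).
    v = max(0.0, min(1.0, v))
    return round(v * 255)

def srgb_to_8bit(v):
    # Standard sRGB encode first, then quantise (what 6.2 appeared to
    # apply to rgb, but not alpha).
    v = max(0.0, min(1.0, v))
    s = v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055
    return round(s * 255)
```

A linear value of 0.5 samples as 128 through the linear mapping but 188 through the sRGB encode, so the 0-1 ramp test described below makes the difference easy to spot.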


On Sat, Jan 7, 2012 at 11:40 PM, Ivan Busquets ivanbusqu...@gmail.comwrote:

 Huh, weird. Just checked, and it seems to be just a linear mapping in 6.3,
 but indeed that's not the case for 6.2.

 In 6.2, it does seem to apply an sRGB curve to the rgb values (as you
 said), but not to the alpha.



 On Sat, Jan 7, 2012 at 11:34 PM, Ivan Busquets ivanbusqu...@gmail.comwrote:

 Hi Ciaran,

 I don't think that's the case. I believe it's just a linear mapping of
 0-1 to 0-255.

 But the easiest way to check would be to make a 0-1 ramp across a 256
 pixel-wide format, and sample that.



 On Fri, Jan 6, 2012 at 7:36 PM, Ciaran Wills cwi...@cinderbiter.comwrote:

 If I choose '8 bit' from the menu on the viewer's colour sampler how
 exactly is it getting those 8-bit values from the float pixels?

 My guess is it seems to be applying a linear-sRGB conversion,
 regardless of what the viewer lookup is set to - is that right?

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users




___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] importing boujou point cloud via FBX from maya

2012-01-05 Thread Ivan Busquets
I might be wrong, but I don't think ReadGeo can read Maya locators
exported to an FBX file.

I thought the point cloud option was just to read all vertices of geometry
as points.

For locators, you'd need to import the FBX into an Axis node (although then
you can only read them one at a time)

To bring in the full point cloud from a boujou file, you could use the
import_boujou tcl script that comes with Nuke. Or, this python-ported
version:

http://www.nukepedia.com/python-scripts/import-export/importboujou/
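
If you ever need to roll your own importer, the core is just pulling the x/y/z triples out of boujou's text export. The exact header layout varies between versions, so this sketch simply assumes whitespace-separated numeric triples and skips everything else:

```python
def parse_point_cloud(lines):
    # Collect every line that parses as exactly three floats; headers,
    # comments and anything malformed are skipped. (Format assumption:
    # one "x y z" triple per point line.)
    points = []
    for line in lines:
        parts = line.split()
        if len(parts) != 3:
            continue
        try:
            points.append(tuple(float(p) for p in parts))
        except ValueError:
            continue
    return points
```

The resulting list could then be fed to whatever point-cloud representation you use on the Nuke side.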



On Thu, Jan 5, 2012 at 4:13 PM, Deke Kincaid dekekinc...@gmail.com wrote:

 are you using FBX 2010 (Nuke doesn't support 2011 or 2012)?  You should be
 able to pick point cloud from the drop-down in the ReadGeo.  Also I suggest
 you export the point cloud to a separate FBX file from the camera.

 -deke


 On Thu, Jan 5, 2012 at 15:10, Bill Gilman billgil...@yahoo.com wrote:

 Hey all

 I'm trying to track a shot in Boujou and bring the camera, point cloud
 and ground plane into Nuke via a Maya FBX file.  What do I need to do to
 indicate that the cloud of locators in Maya are a point cloud that the
 ReadGeo node can understand?

 Also, the camera comes in fine but none of the geometry makes it over.
  Any help would be appreciated, thanks

 Bill
 323-428-0913___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] importing boujou point cloud via FBX from maya

2012-01-05 Thread Ivan Busquets
I'll eat my words, then. :)

That's good to know. Thanks Deke.

On Thu, Jan 5, 2012 at 5:50 PM, Deke Kincaid dekekinc...@gmail.com wrote:

 We fixed this in Nuke 6.1 so readGeo properly sees point clouds.  You only
 have to change the object type from Mesh to Point Cloud.  If this
 doesn't work then something may be broken and you should send the fbx into
 support(pending it is fbx 2010 or earlier).

 It also works the other way around.  A point cloud exported to FBX from
 Nuke will show up as a bunch of locators in Maya.

 -deke


 On Thu, Jan 5, 2012 at 17:40, Ivan Busquets ivanbusqu...@gmail.comwrote:

 I might be wrong, but I don't think ReadGeo can read Maya locators
 exported to an FBX file.

 I thought the point cloud option was just to read all vertices of
 geometry as points.

 For locators, you'd need to import the FBX into an Axis node (although
 then you can only read them one at a time)

 To bring in the full point cloud from a boujou file, you could use the
 import_boujou tcl script that comes with Nuke. Or, this python-ported
 version:

 http://www.nukepedia.com/python-scripts/import-export/importboujou/




 On Thu, Jan 5, 2012 at 4:13 PM, Deke Kincaid dekekinc...@gmail.comwrote:

 are you using fbx 2010(nuke doesn't support 2011 or 2012)?  You should
 be able to pick point cloud from the drop down in the readGeo.  Also I
 suggest you export the point cloud to a separate fbx file from the camera.

 -deke


 On Thu, Jan 5, 2012 at 15:10, Bill Gilman billgil...@yahoo.com wrote:

 Hey all

 I'm trying to track a shot in Boujou and bring the camera, point cloud
 and ground plane into Nuke via a Maya FBX file.  What do I need to do to
 indicate that the cloud of locators in Maya are a point cloud that the
 ReadGeo node can understand?

 Also, the camera comes in fine but none of the geometry makes it over.
  Any help would be appreciated, thanks

 Bill
 323-428-0913










___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Nuke Alembic

2012-01-04 Thread Ivan Busquets
Hi Gary,

Yes, the precompiled version should be all you need, but that's obviously
not the case :(

Looks like I compiled the MacOS version against the HDF5 libraries
dynamically, not statically.

Let me see if I can fix this and will post a new download link.

Thanks for the heads up.


On Wed, Jan 4, 2012 at 10:57 PM, Gary Jaeger g...@corestudio.com wrote:

 ok forgive my noobish question, but is the pre-compiled version all we
 need to give this a shot? I'm getting a message:

 Library not loaded: /opt/local/lib/libhdf5_hl.7.dylib

 thanks for any help

 On Dec 31, 2011, at 1:01 AM, Ivan Busquets wrote:

 If you want to try it out without the hassle of compiling it, here's a
 couple of links to pre-compiled versions for Mac and Linux, each with a
 version for Nuke 6.2 and 6.3. (again, only ABCReadGeo available so far).
 I'll try to upload them to Nukepedia as well, but the upload links were not
 working for me today.



 Gary Jaeger // Core Studio
 249 Princeton Avenue
 Half Moon Bay, CA 94019
 650 728 7060
 http://corestudio.com



___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Nuke Alembic

2012-01-04 Thread Ivan Busquets
Ok, I believe it was just the MacOS Nuke 6.3 version that had the problem.

I think it's fixed now. Let me know if that's not the case.

Download link:
http://dl.dropbox.com/u/17836731/ABCNuke_plugins_macos.zip

Cheers,
Ivan


On Wed, Jan 4, 2012 at 11:14 PM, Ivan Busquets ivanbusqu...@gmail.comwrote:

 Hi Gary,

 Yes, the precompiled version should be all you need, but that's obviously
 not the case :(

 Looks like I compiled the MacOS version against the HDF5 libraries
 dynamically, not statically.

 Let me see if I can fix this and will post a new download link.

 Thanks for the heads up.


 On Wed, Jan 4, 2012 at 10:57 PM, Gary Jaeger g...@corestudio.com wrote:

 ok forgive my noobish question, but is the pre-compiled version all we
 need to give this a shot? I'm getting a message:

 Library not loaded: /opt/local/lib/libhdf5_hl.7.dylib

 thanks for any help

 On Dec 31, 2011, at 1:01 AM, Ivan Busquets wrote:

 If you want to try it out without the hassle of compiling it, here's a
 couple of links to pre-compiled versions for Mac and Linux, each with a
 version for Nuke 6.2 and 6.3. (again, only ABCReadGeo available so far).
 I'll try to upload them to Nukepedia as well, but the upload links were not
 working for me today.



 Gary Jaeger // Core Studio
 249 Princeton Avenue
 Half Moon Bay, CA 94019
 650 728 7060
 http://corestudio.com





___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Nuke Alembic

2012-01-02 Thread Ivan Busquets
Thanks for the kind words, Paolo.

I do know that AtomKraft supports Alembic (which is great), and just to
clarify, I didn't mean to undermine AtomKraft in any way by posting this
here. Likewise, I imagine Nuke will support it natively soon too, and I do
look forward to that.

I just thought that, with Alembic being an open-source framework, it would
be a good idea to have an open-source implementation for Nuke as well,
hoping that this will help Alembic get more popular amongst the Nuke
community.

That said, I do have a couple of questions I'd like to bounce off you with
regards to the Alembic support in AtomKraft, but I'll do that in private to
avoid clobbering the list, if that's cool.

Cheers,
Ivan


On Mon, Jan 2, 2012 at 1:48 AM, Paolo Berto pbe...@jupiter-jazz.com wrote:

 Very nice work Ivan. Congrats.

 I like the idea of selectively loading a child object, we'll add that too.

 I'd like to point out again that our AtomReadGeo node, which reads ABC,
 Houdini (B)GEO and OBJ (we decided not to read FBX), does not check
 out a license, meaning you can just download AK, plug & play! Happy Nuke
 Year :)


 Paolo

 ps - docs (not updated to 1.0) here:
 http://www.jupiter-jazz.com/docs/atomkraft/AtomReadGeo/index.html




 On Mon, Jan 2, 2012 at 3:13 AM, Michael Garrett michaeld...@gmail.com
 wrote:
  Great!  Thanks Ivan!  Looking forward to checking this out, and
 comparing to
  the AtomKraft implementation.  Thanks for uploading compiled versions
 too.
 
  Michael
 
 
  On 31 December 2011 09:01, Ivan Busquets ivanbusqu...@gmail.com wrote:
 
  Happy holidays, Nukers!
 
  Sorry for the spam to both the users and dev lists, but I thought this
  might be of interest to people on both.
 
  Since Alembic seems to be gaining some traction amongst the Nuke
 community
  these days, I wanted to share the following:
 
  I've been working on a set of Alembic-related plugins in my spare time,
  and it's come to the point where I don't have the time (or the skills)
 to
  bring them much further, so I've decided to open source the project so
  anyone can use / modify / contribute as they please.
 
  The project is freely available here:
 
  http://github.com/ivanbusquets/ABCNuke/
 
  And includes the following plugins:
 
  - ABCReadGeo
  - ABCAxis
  - ABCCamera
 
  But only ABCReadGeo is released so far (need to clean up the rest, but
  hopefully they will follow soon).
 
  If you want to try it out without the hassle of compiling it, here's a
  couple of links to pre-compiled versions for Mac and Linux, each with a
  version for Nuke 6.2 and 6.3. (again, only ABCReadGeo available so far).
  I'll try to upload them to Nukepedia as well, but the upload links were
 not
  working for me today.
 
  http://dl.dropbox.com/u/17836731/ABCNuke_plugins_macos.zip
  http://dl.dropbox.com/u/17836731/ABCNuke_plugins_linux.zip
 
  Also, here's a link with a few example scripts, along with the Alembic
  files and media required, to show some of the features of ABCReadGeo
 
  http://dl.dropbox.com/u/17836731/examples.zip
 
  And a couple of screenshots to know what to expect from the interface,
  etc.
 
  http://dl.dropbox.com/u/17836731/ABCReadGeo_screenshot1.png
  http://dl.dropbox.com/u/17836731/ABCReadGeo_screenshot2.png
 
 
  Here are some key features of ABCReadGeo:
  - Selective reading of different objects in an Alembic archive. For
  example, you may read a single object from an archive that has multiple
  objects, without the speed penalty of reading through the whole archive.
  - Bbox mode for each object. (much faster if you don't need to load the
  full geometry)
  - Ability to interpolate between geometry samples
  - Retiming / offsetting geometry animation
 
 
  Disclaimers:
  - It's the first time I have a go at a project of this size, and the
 first
  time I use CMake, so I'd appreciate any comments / suggestions on
 improving
  both the code and the presentation.
  - Overall, I've tried to focus on performance, but I'm sure there will
 be
  cases where things break or are not handled properly. If you have an
 Alembic
  file that's not being interpreted correctly, I would very much like to
 know.
  :)
 
  And that's it. Hope people find it useful.
 
  Happy New Year everyone!
 
  Ivan
 
 
 
 



 --
 paolo berto durante
 ex-galactic president, etc.
 /*jupiter jazz*/ visual research — hong kong
 www.jupiter-jazz.com
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin

[Nuke-users] Nuke Alembic

2011-12-31 Thread Ivan Busquets
Happy holidays, Nukers!

Sorry for the spam to both the users and dev lists, but I thought this
might be of interest to people on both.

Since Alembic seems to be gaining some traction amongst the Nuke community
these days, I wanted to share the following:

I've been working on a set of Alembic-related plugins in my spare time, and
it's come to the point where I don't have the time (or the skills) to bring
them much further, so I've decided to open source the project so anyone can
use / modify / contribute as they please.

The project is freely available here:

http://github.com/ivanbusquets/ABCNuke/

And includes the following plugins:

- ABCReadGeo
- ABCAxis
- ABCCamera

But only ABCReadGeo is released so far (need to clean up the rest, but
hopefully they will follow soon).

If you want to try it out without the hassle of compiling it, here's a
couple of links to pre-compiled versions for Mac and Linux, each with a
version for Nuke 6.2 and 6.3. (again, only ABCReadGeo available so far).
I'll try to upload them to Nukepedia as well, but the upload links were not
working for me today.

http://dl.dropbox.com/u/17836731/ABCNuke_plugins_macos.zip
http://dl.dropbox.com/u/17836731/ABCNuke_plugins_linux.zip

Also, here's a link with a few example scripts, along with the Alembic
files and media required, to show some of the features of ABCReadGeo

http://dl.dropbox.com/u/17836731/examples.zip

And a couple of screenshots to know what to expect from the interface, etc.

http://dl.dropbox.com/u/17836731/ABCReadGeo_screenshot1.png
http://dl.dropbox.com/u/17836731/ABCReadGeo_screenshot2.png


Here are some key features of ABCReadGeo:
- Selective reading of different objects in an Alembic archive. For
example, you may read a single object from an archive that has multiple
objects, without the speed penalty of reading through the whole archive.
- Bbox mode for each object. (much faster if you don't need to load the
full geometry)
- Ability to interpolate between geometry samples
- Retiming / offsetting geometry animation


Disclaimers:
- It's the first time I have a go at a project of this size, and the first
time I use CMake, so I'd appreciate any comments / suggestions on
improving both the code and the presentation.
- Overall, I've tried to focus on performance, but I'm sure there will be
cases where things break or are not handled properly. If you have an
Alembic file that's not being interpreted correctly, I would very much like
to know. :)

And that's it. Hope people find it useful.

Happy New Year everyone!

Ivan
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] SplineWarp python question

2011-12-14 Thread Ivan Busquets
Each control point has both the source and destination attributes. The
confusing part is that the naming of those attributes makes more sense for
a standard roto shape.

So, your source curve is:
controlPoint.center

and the dest curve is:
controlPoint.featherCenter

Also, if you're getting the position data out of each attribute, keep in
mind that the dest points are stored as an offset relative to the src
point, instead of an absolute position.

Have a look at the output of this:

node = nuke.selectedNode()

curves = node['curves']

sourcecurve = curves.toElement('Bezier1')

for p in sourcecurve:
    print p.center.getPosition(nuke.frame()), p.featherCenter.getPosition(nuke.frame())



Hope that helps.
Cheers,
Ivan
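
The relative-offset behaviour described above can be sketched without Nuke at all. A minimal Python illustration with made-up point values (not taken from a real script):

```python
# Sketch of the src/dest relationship described above, using plain tuples
# instead of Nuke's curve API. The point values are hypothetical.

def dest_absolute(src_point, dest_offset):
    """The destination curve stores offsets relative to the source point,
    so the absolute destination position is src + offset."""
    return (src_point[0] + dest_offset[0], src_point[1] + dest_offset[1])

src = (600.0, 1195.0)     # e.g. controlPoint.center at the current frame
offset = (-40.0, -15.0)   # e.g. controlPoint.featherCenter (stored relative)

print(dest_absolute(src, offset))  # -> (560.0, 1180.0)
```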




On Wed, Dec 14, 2011 at 4:21 PM, Magno Borgo mag...@pop.com.br wrote:

 Hello!

  I'm trying to figure out how to access the *control points* of the
  *destination* curves of the SplineWarp node via python.

 The points of the source curves are easy:

 node = nuke.selectedNode()

 curves = node['curves']

 sourcecurve = curves.toElement('Ellipse1')


  With sourcecurve[0], sourcecurve[1], etc. I can access each control point.



 Any help?


 --
 Magno Borgo

 www.borgo.tv
 www.boundaryvfx.com


___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Edges

2011-11-24 Thread Ivan Busquets
Why not use a simple min between both?

On Thu, Nov 24, 2011 at 12:15 PM, Ron Ganbar ron...@gmail.com wrote:

 Hi all,
 I've been thinking about this for a while, and I'm consulting you guys in
 order to see how wrong I'm getting this.
 [example below]

 When using the Mask operation under Merge to hold one image inside of
 another image where both images have an edge that's exactly the same, the
 edge that's the same is getting degraded - as in, it gets darker because of
 the multiplication that occurs. This happens a lot when working with full
 CG shots rather than CG over plate bg work.
 To get around this what I normally do is unpremult the image, min both
 mattes, then premult the result of the min with the RGB again. This
 produces the correct results - at least as far as the part of the edge that
 shouldn't change. Feels to me like this should be made simpler, no?
 Am I wrong about this?

 In the example below you can see what I mean. The antialiased edge that
 both shapes share gets darker after the Merge.

 Thanks all.
 R


 Paste this into your DAG:

 set cut_paste_input [stack 0]
 version 6.3 v1
 RotoPaint {
  inputs 0
  curves {AnimTree:  {
  Version: 1.2
  Flag: 0
  RootNode: 1
  Node: {
   NodeName: Root {
Flag: 512
NodeType: 1
Transform: 0 0 S 0 0 S 0 0 S 0 0 S 0 1 S 0 1 S 0 0 S 0 1024 S 0 778
NumOfAttributes: 11
vis S 0 1 opc S 0 1 mbo S 0 1 mb S 0 1 mbs S 0 0.5 fo S 0 1
 fx S 0 0 fy S 0 0 ff S 0 1 ft S 0 0 pt S 0 0
   }
   NumOfChildren: 1
   Node: {
NodeName: Bezier1 {
 Flag: 576
 NodeType: 3
 CurveGroup:  {
  Transform: 0 0 S 1 1 0 S 1 1 0 S 1 1 0 S 1 1 1 S 1 1 1 S 1 1 0 S 1 1
 885 S 1 1 936
  Flag: 0
  NumOfCubicCurves: 2
  CubicCurve:  {
   Type: 0 Flag: 8192 Dim: 2
   NumOfPoints: 18
   0 S 1 1 40 S 1 1 15 0 0 S 1 1 600 S 1 1 1195 0 0 S 1 1 -40 S 1 1 -15
 0 0 S 1 1 -10 S 1 1 15 0 0 S 1 1 340 S 1 1 830 0 0 S 1 1 5 S 1 1 -7.5 0 0 S
 1 1 -176.25 S 1 1 69.375 0 0 S 1 1 520 S 1 1 350 0 0 S 1 1 176.25 S 1 1
 -69.375 0 0 S 1 1 -20 S 1 1 -20 0 0 S 1 1 1070 S 1 1 565 0 0 S 1 1 40 S 1 1
 40 0 0 S 1 1 15 S 1 1 -25 0 0 S 1 1 1390 S 1 1 1000 0 0 S 1 1 -15 S 1 1 25
 0 0 S 1 1 25 S 1 1 -10 0 0 S 1 1 795 S 1 1 800 0 0 S 1 1 -25 S 1 1 10 0
  }
  CubicCurve:  {
   Type: 0 Flag: 8192 Dim: 2
   NumOfPoints: 18
   0 S 1 1 40 S 1 1 15 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 -40 S 1 1 -15 0 0
 S 1 1 -10 S 1 1 15 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 5 S 1 1 -7.5 0 0 S 1 1
 -176.25 S 1 1 69.375 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 176.25 S 1 1 -69.375 0 0
 S 1 1 -20 S 1 1 -20 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 40 S 1 1 40 0 0 S 1 1 15
 S 1 1 -25 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 -15 S 1 1 25 0 0 S 1 1 25 S 1 1 -10
 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 -25 S 1 1 10 0
  }
  NumOfAttributes: 44
  vis S 0 1 r S 0 1 g S 0 1 b S 0 1 a S 0 1 ro S 0 0 go S
 0 0 bo S 0 0 ao S 0 0 opc S 0 1 bm S 0 0 inv S 0 0 mbo S 0 0
 mb S 0 1 mbs S 0 0.5 mbsot S 0 0 mbso S 0 0 fo S 0 1 fx S 0 0
 fy S 0 0 ff S 0 1 ft S 0 0 src S 0 0 stx S 0 0 sty S 0 0 str
 S 0 0 sr S 0 0 ssx S 0 1 ssy S 0 1 ss S 0 0 spx S 0 1024 spy S
 0 778 stot S 0 0 sto S 0 0 sv S 0 0 sf S 0 1 sb S 0 1 nv S 0 1
 view1 S 0 1 ltn S 0 1 ltm S 0 1 ltt S 0 0 tt S 0 4 pt S 0 0
 }
}
NumOfChildren: 0
   }
  }
 }
 }
  toolbox {selectAll {
   { selectAll ssx 1 ssy 1 sf 1 }
   { createBezier ssx 1 ssy 1 sf 1 sb 1 tt 4 }
   { createBSpline ssx 1 ssy 1 sf 1 sb 1 }
   { createEllipse ssx 1 ssy 1 sf 1 sb 1 }
   { createRectangle ssx 1 ssy 1 sf 1 sb 1 }
   { brush ssx 1 ssy 1 sf 1 sb 1 }
   { eraser src 2 ssx 1 ssy 1 sf 1 sb 1 }
   { clone src 1 ssx 1 ssy 1 sf 1 sb 1 }
   { reveal src 3 ssx 1 ssy 1 sf 1 sb 1 }
   { dodge src 1 ssx 1 ssy 1 sf 1 sb 1 }
   { burn src 1 ssx 1 ssy 1 sf 1 sb 1 }
   { blur src 1 ssx 1 ssy 1 sf 1 sb 1 }
   { sharpen src 1 ssx 1 ssy 1 sf 1 sb 1 }
   { smear src 1 ssx 1 ssy 1 sf 1 sb 1 }
 } }
  toolbar_brush_hardness 0.20003
  toolbar_lifetime_type all
  toolbar_source_transform_scale {1 1}
  toolbar_source_transform_center {320 240}
  colorOverlay 0
  lifetime_type all frames
  motionblur_shutter_offset_type centred
  source_black_outside true
  createNewTrack {{-1} -1\t(none)\t-1 1000\tNew Track Layer\t1000}
  name RotoPaint1
  selected true
  xpos -306
  ypos -156
 }
 set N221a3540 [stack 0]
 Unpremult {
  name Unpremult1
  selected true
  xpos -280
  ypos -82
 }
 set N2962c380 [stack 0]
 push $cut_paste_input
 RotoPaint {
  curves {AnimTree:  {
  Version: 1.2
  Flag: 0
  RootNode: 1
  Node: {
   NodeName: Root {
Flag: 512
NodeType: 1
Transform: 0 0 S 0 0 S 0 0 S 0 0 S 0 1 S 0 1 S 0 0 S 0 1024 S 0 778
NumOfAttributes: 11
vis S 0 1 opc S 0 1 mbo S 0 1 mb S 0 1 mbs S 0 0.5 fo S 0 1
 fx S 0 0 fy S 0 0 ff S 0 1 ft S 0 0 pt S 0 0
   }
   NumOfChildren: 1
   Node: {
NodeName: Bezier1 {
 Flag: 576
 NodeType: 3
 CurveGroup:  {
  Transform: 0 0 S 1 1 0 S 1 1 0 S 1 1 0 S 1 1 1 S 1 1 1 S 1 1 0 S 1 1
 885 S 1 1 936
  Flag: 0
  NumOfCubicCurves: 

Re: [Nuke-users] Edges

2011-11-24 Thread Ivan Busquets
Sorry for the overly simplified answer.
Didn't mean to say you can just min the two images together (unless both
are just a matte), but that you can unpremult, min only the alpha channel
of both, and then premult again, so you don't have to shuffle things back
and forth.


set cut_paste_input [stack 0]
version 6.3 v1
Dot {
 inputs 0
 name Dot2
 label premultiplied img with holdout matte
 selected true
 xpos -398
 ypos 30
}
push $cut_paste_input
Dot {
 name Dot1
 label your premultiplied img
 selected true
 xpos -588
 ypos -100
}
Unpremult {
 name Unpremult2
 selected true
 xpos -616
 ypos -9
}
Merge2 {
 inputs 2
 operation min
 Achannels alpha
 Bchannels alpha
 output alpha
 name Merge6
 selected true
 xpos -616
 ypos 28
}
Premult {
 name Premult4
 selected true
 xpos -616
 ypos 80
}
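
For what it's worth, the pixel math behind the problem and the fix can be sketched in plain Python, with made-up edge values:

```python
# Two premultiplied images share an identical antialiased edge pixel
# (hypothetical values). Masking by multiplication darkens that edge;
# unpremult -> min -> premult preserves it.

a_rgb, a_alpha = 0.4, 0.5   # image A edge pixel (premultiplied)
m_alpha = 0.5               # holdout matte, same antialiased edge

masked_alpha = a_alpha * m_alpha         # 0.25 -- the shared edge degrades

unpremult_rgb = a_rgb / a_alpha          # 0.8
held_alpha = min(a_alpha, m_alpha)       # 0.5 -- the edge is unchanged
result_rgb = unpremult_rgb * held_alpha  # back to 0.4 after the premult

print(masked_alpha, held_alpha, result_rgb)
```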




On Thu, Nov 24, 2011 at 12:22 PM, Ivan Busquets ivanbusqu...@gmail.comwrote:

 Why not use a simple min between both?

 On Thu, Nov 24, 2011 at 12:15 PM, Ron Ganbar ron...@gmail.com wrote:

 Hi all,
 I've been thinking about this for a while, and I'm consulting you guys in
 order to see how wrong I'm getting this.
 [example below]

 When using the Mask operation under Merge to hold one image inside of
 another image where both images have an edge that's exactly the same, the
 edge that's the same is getting degraded - as in, it gets darker because of
 the multiplication that occurs. This happens a lot when working with full
 CG shots rather than CG over plate bg work.
 To get around this what I normally do is unpremult the image, min both
 mattes, then premult the result of the min with the RGB again. This
 produces the correct results - at least as far as the part of the edge that
 shouldn't change. Feels to me like this should be made simpler, no?
 Am I wrong about this?

 In the example below you can see what I mean. The antialiased edge that
 both shapes share gets darker after the Merge.

 Thanks all.
 R


 Paste this into your DAG:

 set cut_paste_input [stack 0]
 version 6.3 v1
 RotoPaint {
  inputs 0
  curves {AnimTree:  {
  Version: 1.2
  Flag: 0
  RootNode: 1
  Node: {
   NodeName: Root {
Flag: 512
NodeType: 1
Transform: 0 0 S 0 0 S 0 0 S 0 0 S 0 1 S 0 1 S 0 0 S 0 1024 S 0 778
NumOfAttributes: 11
vis S 0 1 opc S 0 1 mbo S 0 1 mb S 0 1 mbs S 0 0.5 fo S 0
 1 fx S 0 0 fy S 0 0 ff S 0 1 ft S 0 0 pt S 0 0
   }
   NumOfChildren: 1
   Node: {
NodeName: Bezier1 {
 Flag: 576
 NodeType: 3
 CurveGroup:  {
  Transform: 0 0 S 1 1 0 S 1 1 0 S 1 1 0 S 1 1 1 S 1 1 1 S 1 1 0 S 1 1
 885 S 1 1 936
  Flag: 0
  NumOfCubicCurves: 2
  CubicCurve:  {
   Type: 0 Flag: 8192 Dim: 2
   NumOfPoints: 18
   0 S 1 1 40 S 1 1 15 0 0 S 1 1 600 S 1 1 1195 0 0 S 1 1 -40 S 1 1
 -15 0 0 S 1 1 -10 S 1 1 15 0 0 S 1 1 340 S 1 1 830 0 0 S 1 1 5 S 1 1 -7.5 0
 0 S 1 1 -176.25 S 1 1 69.375 0 0 S 1 1 520 S 1 1 350 0 0 S 1 1 176.25 S 1 1
 -69.375 0 0 S 1 1 -20 S 1 1 -20 0 0 S 1 1 1070 S 1 1 565 0 0 S 1 1 40 S 1 1
 40 0 0 S 1 1 15 S 1 1 -25 0 0 S 1 1 1390 S 1 1 1000 0 0 S 1 1 -15 S 1 1 25
 0 0 S 1 1 25 S 1 1 -10 0 0 S 1 1 795 S 1 1 800 0 0 S 1 1 -25 S 1 1 10 0
  }
  CubicCurve:  {
   Type: 0 Flag: 8192 Dim: 2
   NumOfPoints: 18
   0 S 1 1 40 S 1 1 15 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 -40 S 1 1 -15 0 0
 S 1 1 -10 S 1 1 15 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 5 S 1 1 -7.5 0 0 S 1 1
 -176.25 S 1 1 69.375 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 176.25 S 1 1 -69.375 0 0
 S 1 1 -20 S 1 1 -20 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 40 S 1 1 40 0 0 S 1 1 15
 S 1 1 -25 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 -15 S 1 1 25 0 0 S 1 1 25 S 1 1 -10
 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 -25 S 1 1 10 0
  }
  NumOfAttributes: 44
  vis S 0 1 r S 0 1 g S 0 1 b S 0 1 a S 0 1 ro S 0 0 go
 S 0 0 bo S 0 0 ao S 0 0 opc S 0 1 bm S 0 0 inv S 0 0 mbo S 0 0
 mb S 0 1 mbs S 0 0.5 mbsot S 0 0 mbso S 0 0 fo S 0 1 fx S 0 0
 fy S 0 0 ff S 0 1 ft S 0 0 src S 0 0 stx S 0 0 sty S 0 0 str
 S 0 0 sr S 0 0 ssx S 0 1 ssy S 0 1 ss S 0 0 spx S 0 1024 spy S
 0 778 stot S 0 0 sto S 0 0 sv S 0 0 sf S 0 1 sb S 0 1 nv S 0 1
 view1 S 0 1 ltn S 0 1 ltm S 0 1 ltt S 0 0 tt S 0 4 pt S 0 0
 }
}
NumOfChildren: 0
   }
  }
 }
 }
  toolbox {selectAll {
   { selectAll ssx 1 ssy 1 sf 1 }
   { createBezier ssx 1 ssy 1 sf 1 sb 1 tt 4 }
   { createBSpline ssx 1 ssy 1 sf 1 sb 1 }
   { createEllipse ssx 1 ssy 1 sf 1 sb 1 }
   { createRectangle ssx 1 ssy 1 sf 1 sb 1 }
   { brush ssx 1 ssy 1 sf 1 sb 1 }
   { eraser src 2 ssx 1 ssy 1 sf 1 sb 1 }
   { clone src 1 ssx 1 ssy 1 sf 1 sb 1 }
   { reveal src 3 ssx 1 ssy 1 sf 1 sb 1 }
   { dodge src 1 ssx 1 ssy 1 sf 1 sb 1 }
   { burn src 1 ssx 1 ssy 1 sf 1 sb 1 }
   { blur src 1 ssx 1 ssy 1 sf 1 sb 1 }
   { sharpen src 1 ssx 1 ssy 1 sf 1 sb 1 }
   { smear src 1 ssx 1 ssy 1 sf 1 sb 1 }
 } }
  toolbar_brush_hardness 0.20003
  toolbar_lifetime_type all
  toolbar_source_transform_scale {1 1}
  toolbar_source_transform_center {320 240}
  colorOverlay 0
  lifetime_type all frames
  motionblur_shutter_offset_type centred

Re: [Nuke-users] Edges

2011-11-24 Thread Ivan Busquets
I see. Well, I'm sure you know you could use a MergeExpression, or wrap it
all into a gizmo if this is something you need often, so I suppose you're
just looking for opinions on whether such a merge operation should exist by
default.

Personally, I prefer having to unpremult/premult explicitly, so there's a
visual clue of what's going on in the script, and because it gives me a bit
more control over what I want to premult/unpremult. Say you want to merge
all channels, but you only want to unpremult rgb, because all other layers
already come unpremultiplied. That would be hard/obscure to handle in a
single merge operation.

But again, that's just an opinion, and if you run into this repeatedly,
then it's fair to think there should be a simpler way to handle it :)


On Thu, Nov 24, 2011 at 1:54 PM, Ron Ganbar ron...@gmail.com wrote:

 True, Ivan,
 but I'm hoping to have an operation inside Merge that will do that for me.
 Am I the only one who runs into this kind of issue repeatedly?



 Ron Ganbar
 email: ron...@gmail.com
 tel: +44 (0)7968 007 309 [UK]
  +972 (0)54 255 9765 [Israel]
 url: http://ronganbar.wordpress.com/



 On 24 November 2011 23:04, Ivan Busquets ivanbusqu...@gmail.com wrote:

 Sorry for the overly simplified answer.
 Didn't mean to say you can just min the two images together (unless
 both are just a matte), but that you can unpremult, min only the alpha
 channel of both, and then premult again, so you don't have to shuffle
 things back and forth.


 set cut_paste_input [stack 0]
 version 6.3 v1
 Dot {
  inputs 0
  name Dot2
  label premultiplied img with holdout matte
  selected true
  xpos -398
  ypos 30
 }
 push $cut_paste_input
 Dot {
  name Dot1
  label your premultiplied img
  selected true
  xpos -588
  ypos -100
 }
 Unpremult {
  name Unpremult2
  selected true
  xpos -616
  ypos -9
 }
 Merge2 {
  inputs 2
  operation min
  Achannels alpha
  Bchannels alpha
  output alpha
  name Merge6
  selected true
  xpos -616
  ypos 28
 }
 Premult {
   name Premult4
  selected true
  xpos -616
  ypos 80
 }




 On Thu, Nov 24, 2011 at 12:22 PM, Ivan Busquets 
 ivanbusqu...@gmail.comwrote:

 Why not use a simple min between both?

 On Thu, Nov 24, 2011 at 12:15 PM, Ron Ganbar ron...@gmail.com wrote:

 Hi all,
 I've been thinking about this for a while, and I'm consulting you guys
 in order to see how wrong I'm getting this.
 [example below]

 When using the Mask operation under Merge to hold one image inside of
 another image where both images have an edge that's exactly the same, the
 edge that's the same is getting degraded - as in, it gets darker because of
 the multiplication that occurs. This happens a lot when working with full
 CG shots rather than CG over plate bg work.
 To get around this what I normally do is unpremult the image, min both
 mattes, then premult the result of the min with the RGB again. This
 produces the correct results - at least as far as the part of the edge that
 shouldn't change. Feels to me like this should be made simpler, no?
 Am I wrong about this?

 In the example below you can see what I mean. The antialiased edge that
 both shapes share gets darker after the Merge.

 Thanks all.
 R


 Paste this into your DAG:

 set cut_paste_input [stack 0]
 version 6.3 v1
 RotoPaint {
  inputs 0
  curves {AnimTree:  {
  Version: 1.2
  Flag: 0
  RootNode: 1
  Node: {
   NodeName: Root {
Flag: 512
NodeType: 1
Transform: 0 0 S 0 0 S 0 0 S 0 0 S 0 1 S 0 1 S 0 0 S 0 1024 S 0 778
NumOfAttributes: 11
vis S 0 1 opc S 0 1 mbo S 0 1 mb S 0 1 mbs S 0 0.5 fo S
 0 1 fx S 0 0 fy S 0 0 ff S 0 1 ft S 0 0 pt S 0 0
   }
   NumOfChildren: 1
   Node: {
NodeName: Bezier1 {
 Flag: 576
 NodeType: 3
 CurveGroup:  {
  Transform: 0 0 S 1 1 0 S 1 1 0 S 1 1 0 S 1 1 1 S 1 1 1 S 1 1 0 S 1
 1 885 S 1 1 936
  Flag: 0
  NumOfCubicCurves: 2
  CubicCurve:  {
   Type: 0 Flag: 8192 Dim: 2
   NumOfPoints: 18
   0 S 1 1 40 S 1 1 15 0 0 S 1 1 600 S 1 1 1195 0 0 S 1 1 -40 S 1 1
 -15 0 0 S 1 1 -10 S 1 1 15 0 0 S 1 1 340 S 1 1 830 0 0 S 1 1 5 S 1 1 -7.5 0
 0 S 1 1 -176.25 S 1 1 69.375 0 0 S 1 1 520 S 1 1 350 0 0 S 1 1 176.25 S 1 1
 -69.375 0 0 S 1 1 -20 S 1 1 -20 0 0 S 1 1 1070 S 1 1 565 0 0 S 1 1 40 S 1 1
 40 0 0 S 1 1 15 S 1 1 -25 0 0 S 1 1 1390 S 1 1 1000 0 0 S 1 1 -15 S 1 1 25
 0 0 S 1 1 25 S 1 1 -10 0 0 S 1 1 795 S 1 1 800 0 0 S 1 1 -25 S 1 1 10 0
  }
  CubicCurve:  {
   Type: 0 Flag: 8192 Dim: 2
   NumOfPoints: 18
   0 S 1 1 40 S 1 1 15 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 -40 S 1 1 -15 0
 0 S 1 1 -10 S 1 1 15 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 5 S 1 1 -7.5 0 0 S 1 1
 -176.25 S 1 1 69.375 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 176.25 S 1 1 -69.375 0 0
 S 1 1 -20 S 1 1 -20 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 40 S 1 1 40 0 0 S 1 1 15
 S 1 1 -25 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 -15 S 1 1 25 0 0 S 1 1 25 S 1 1 -10
 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 -25 S 1 1 10 0
  }
  NumOfAttributes: 44
  vis S 0 1 r S 0 1 g S 0 1 b S 0 1 a S 0 1 ro S 0 0
 go S

Re: [Nuke-users] ??? redguard1.glow ???

2011-11-21 Thread Ivan Busquets
Searching through the archives, it looks like there are also a few cases of
shared snippets and copy/pasted scripts sent to this list that were
infected.
They could have propagated just by copy-pasting them back.

These layer infections are so hard to track down and completely get rid
of...
Makes me think that maybe there should be a more restrictive policy
enforced by Nuke itself, where a layer/channel won't be created unless it
meets a certain criteria?



On Mon, Nov 21, 2011 at 1:46 PM, Howard Jones mrhowardjo...@yahoo.comwrote:

 Any tools that might be in the public domain? On nukepedia?

 I suspect that's where I have it from.

 Howard

 On 21 Nov 2011, at 21:38, Dan Walker walkerd...@gmail.com wrote:

 Well, I've found it in several of our tools, which will then be propagated
 into scene files.

 I betcha Nukepedia has some gizmos that are infected too and again, if
 someone is reusing a config/resource file, downloaded tools or tools
 they've borrowed from other facilities and they've been incorporated into a
 pipeline or for personal use in their shots, there is the possibility of
 contamination.

 If this is causing scene files to crash, then it's a bigger issue in which
 the Foundry should be involved.

 Dan




 On Mon, Nov 21, 2011 at 12:58 PM, Dennis Steinschulte 
 den...@rebelsofdesign.com
 den...@rebelsofdesign.com wrote:

 Err.. actually it's the MCP …d'oh
 well no job at MPC for me anymore

 cheers

 On 21.11.2011, at 21:49, Dennis Steinschulte wrote:

 hey ned,

 this is interesting information, but i haven't worked on TRON or somehow
 'near' the cinema world, lately
 So far I deleted everything in the scripts (haven't been many in the few
 days i figured out the add_layer part). But today, while showing several
 old comps (6.0.1 nearly over a year old), the RED GREEN BLUE (aka the red
 guard - virus ;) ) showed up all of a sudden.
 OMG, the MPC really taking over all scripts… the past .. the future???

 cheers, the anxiously Dennis


 On 21.11.2011, at 20:50, Ned Wilson wrote:

 This thread is great, I haven't seen this issue going around in a while!
 I was a compositor on Tron at DD, and that is exactly where this channel
 virus came from. The red guards were the armies of programs that CLU used
 as his muscle in the machine world of Tron. They can be seen in the
 background of many shots, and they wear helmets and carry spears which glow
 in some cases. The costume designers on that show did an amazing job. Those
 lines on the suits that glow were actually practical. However, there were
 some cases where they wanted the glow enhanced, or the electrical portions
 of the suits were malfunctioning, so we did this work digitally. Once a
 look was established, someone at DD made a gizmo for the glow enhancement,
 hence the redguard1.glow layer.

 This thing is insidious. It quickly spread to pretty much every comp on
 Tron. Whenever you cut and paste a node from a script which has this layer,
 it would embed the layer creation code in the cut and paste stack, as
 someone on the list demonstrated. Every time a script was reused, a gizmo
 exported, or an artist shared some nodes with another artist, the channel
 virus was propagated. The redguard1.glow layer started showing up elsewhere
 in DD, it surfaced on Real Steel and Transformers 3.

 As mentioned previously in the thread, the only way to get rid of this is
 with a text editor, or if you're handy with sed or awk you can probably
 figure that out too. Every Nuke script in the entire facility must be
 checked, plus every single gizmo and Nuke script that is found in the
 NUKE_PATH environment. Don't forget user's home directories either.
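Ned's sed/awk suggestion might look something like the following. This is a hedged sketch, not a vetted cleanup tool: the script content below is synthetic, the paths are examples, and you should always back up scripts before editing them in place.

```shell
# synthetic example of an infected script (stand-in for a real comp)
printf 'Blur {\n}\nadd_layer {rgba redguard1.glow}\nGrade {\n}\n' > comp.nk

# find every script carrying the rogue layer...
grep -rl 'redguard1.glow' . --include='*.nk'

# ...then delete the add_layer line in place (GNU sed; keeps a .bak backup)
sed -i.bak '/add_layer {.*redguard1\.glow/d' comp.nk

grep -c add_layer comp.nk || true   # no add_layer lines left after the edit
```

The same pattern would be repeated for any other layer names found (rgba.beta, etc.), and across every directory on the NUKE_PATH.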


 On Nov 19, 2011, at 12:07 PM, Dan Walker wrote:

 I'm gonna try a show wide search for this in all our Nuke comps on
 Monday.

 Will let ya know what I find too.



 On Sat, Nov 19, 2011 at 10:38 AM, Diogo Girondi diogogiro...@gmail.com wrote:

 I'll look for those in some scripts I have here. But I honestly don't
 remember seeing any of those layers showing up in 6.3v2 and earlier
 versions.


 On 19/11/2011, at 13:53, Ean Carr eanc...@gmail.com wrote:

 Well, what a coincidence. I just found a script at our facility with
 this:

 add_layer {rgba rgba.beta redguard1.glow}

 Fun times.

 -Ean

 On Sat, Nov 19, 2011 at 12:12 PM, Ean Carr eanc...@gmail.com wrote:

 Our little virus layer is rgba.beta. Can't seem to get rid of the
 little rascal. -Ean


 On Sat, Nov 19, 2011 at 11:56 AM, Howard Jones mrhowardjo...@yahoo.com wrote:

 I've sent this to support - but it could be a legacy thing, I'm on
 6.2v2 here so maybe 6.3 has the cure?

 Howard

   --
 *From:* Dennis Steinschulte den...@rebelsofdesign.com
 *To:* Howard Jones mrhowardjo...@yahoo.com;
 Nuke user discussion nuke-users@support.thefoundry.co.uk
 

Re: [Nuke-users] Gamma and Alpha

2011-11-15 Thread Ivan Busquets
Hi Gavin,

As you said yourself, the equation cannot be solved UNLESS you know both
variables on one of the sides.
In other words, you'd need to have the BG image in order to prep a FG image
so it can be comped in sRGB space and match the results of a linear comp.

So is there no way to output a PSD or PNG or TIFF which will look the same
 as my composite in Nuke over a white background?


If you need to get the same results on a white background, you could prep
your FG element such that:

X = ( ( (FG * alpha + (1 - alpha)) ^ 2.2  -  (1 - alpha) )  /  alpha ) ^
(1/2.2)

Where X is the FG image you'd want to export to be comped on a white BG.
But of course, this will only give you a match when comping the FG over a
WHITE BG. If the BG changes, then you'd need to prep a different FG to go
with it.
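As a sanity check, the matching condition itself can be verified numerically. The sketch below is hedged: it uses a plain 2.2 gamma as a stand-in for true sRGB, works on scalar values, and solves the same "prep the FG so a display-space comp over white matches the linear comp" condition by construction (the function names are mine, not Nuke's).

```python
GAMMA = 2.2

def to_display(v):
    # linear -> display encoding (pure 2.2 gamma stand-in for sRGB)
    return v ** (1.0 / GAMMA)

def linear_comp_over_white(fg_lin, a):
    # premultiplied "over" against BG = 1.0, done in linear light
    return fg_lin * a + (1.0 - a)

def prep_fg(fg_lin, a):
    # Solve  X*a + (1 - a) = to_display(fg_lin*a + (1 - a))  for X,
    # i.e. the display-space FG that reproduces the linear comp over white
    target = to_display(linear_comp_over_white(fg_lin, a))
    return (target - (1.0 - a)) / a

fg_lin, a = 0.18, 0.4
x = prep_fg(fg_lin, a)
display_comp = x * a + (1.0 - a)                       # comped in display space
reference = to_display(linear_comp_over_white(fg_lin, a))
print(abs(display_comp - reference) < 1e-12)           # True
```

As in the email, the prepped value only matches over this particular (white) background; change the BG and a different X is needed.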

Hope that helps.

Cheers,
Ivan



On Mon, Nov 14, 2011 at 12:13 PM, Gavin Greenwalt
im.thatone...@gmail.com wrote:

 How are Nuke users handling workflows in which they need to deliver images
 with alpha that will be composited in sRGB space not linear space?

 Essentially we have a situation where you would need to find equations for
 u and v such that (xy + z(1-y))^(1/2.2) = (uv + z^(1/2.2)(1-v)).

 My initial impression is that it's impossible since the simplified version
 of this conundrum would be (x+y)^2 = (u+v)  which I believe is
 mathematically impossible to solve... right?   So is there no way to output
 a PSD or PNG or TIFF which will look the same as my composite in Nuke over
 a white background?

 Thanks,
 Gavin

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Gamma and Alpha

2011-11-15 Thread Ivan Busquets
I hope there was a bet involved... :)

If you need further proof, you could use this script:

set cut_paste_input [stack 0]
version 6.2 v4
BackdropNode {
 inputs 0
 name BackdropNode2
 tile_color 0x7171c600
 label BG
 note_font_size 42
 selected true
 xpos 2434
 ypos 13246
 bdheight 156
}
BackdropNode {
 inputs 0
 name BackdropNode3
 tile_color 0x8e8e3800
 label Convert back to linear\n\n(assuming your destination\napp will
convert\nto sRGB when importing)
 note_font_size 22
 selected true
 xpos 2103
 ypos 14009
 bdwidth 286
 bdheight 218
}
BackdropNode {
 inputs 0
 name BackdropNode4
 tile_color 0x8e8e3800
 label Compare
 note_font_size 42
 selected true
 xpos 2613
 ypos 13944
 bdwidth 360
 bdheight 167
}
BackdropNode {
 inputs 0
 name BackdropNode1
 tile_color 0x8e8e3800
 label FG
 note_font_size 42
 selected true
 xpos 2191
 ypos 13222
 bdheight 190
}
add_layer {rgba redguard1.glow}
ColorWheel {
 inputs 0
 gamma 0.45
 name A
 selected true
 xpos 2201
 ypos 13300
}
Blur {
 size 100
 name Blur46
 selected true
 xpos 2201
 ypos 13374
}
Dot {
 name Dot30
 selected true
 xpos 2235
 ypos 13479
}
set N18ce1090 [stack 0]
CheckerBoard2 {
 inputs 0
 name B
 selected true
 xpos 2444
 ypos 13326
}
set Nd0e8d830 [stack 0]
Dot {
 name Dot31
 selected true
 xpos 2478
 ypos 13639
}
set N985ce2a0 [stack 0]
Colorspace {
 colorspace_out sRGB
 name Colorspace2
 selected true
 xpos 2444
 ypos 13704
}
set Ned256de0 [stack 0]
Merge2 {
 inputs 2
 operation stencil
 name Merge100
 label B*(1-a)
 selected true
 xpos 2444
 ypos 13852
}
push $N18ce1090
push $N985ce2a0
Merge2 {
 inputs 2
 name Merge101
 selected true
 xpos 2207
 ypos 13634
}
Colorspace {
 colorspace_out sRGB
 name Colorspace1
 selected true
 xpos 2207
 ypos 13698
}
Merge2 {
 inputs 2
 operation from
 name Merge102
 selected true
 xpos 2207
 ypos 13857
}
set Nd30cc020 [stack 0]
Unpremult {
 name Unpremult4
 selected true
 xpos 2207
 ypos 14144
}
Colorspace {
 colorspace_in sRGB
 name Colorspace3
 selected true
 xpos 2207
 ypos 14168
}
Premult {
 name Premult6
 selected true
 xpos 2207
 ypos 14196
}
Write {
 name Write2
 label write out here\n
 selected true
 xpos 2207
 ypos 14263
}
push $Nd30cc020
push $Ned256de0
Dot {
 name Dot32
 selected true
 xpos 2657
 ypos 13709
}
Merge2 {
 inputs 2
 name Merge103
 label Comped in sRGB space
 selected true
 xpos 2623
 ypos 14057
}
push $N18ce1090
Dot {
 name Dot33
 selected true
 xpos 2755
 ypos 13479
}
push $Nd0e8d830
Dot {
 name Dot34
 selected true
 xpos 2917
 ypos 13354
}
Merge2 {
 inputs 2
 name Merge104
 label Comped in linear
 selected true
 xpos 2883
 ypos 14024
}
Colorspace {
 colorspace_out sRGB
 name Colorspace4
 label Post sRGB conversion
 selected true
 xpos 2883
 ypos 14066
}


On Tue, Nov 15, 2011 at 12:30 AM, Ron Ganbar ron...@gmail.com wrote:

 I knew I was right. (You guys just proved an old argument I had with
 someone).
 Oh, the joys of self gratification.


 Ron Ganbar
 email: ron...@gmail.com
 tel: +44 (0)7968 007 309 [UK]
  +972 (0)54 255 9765 [Israel]
 url: http://ronganbar.wordpress.com/



 On 15 November 2011 10:28, Ivan Busquets ivanbusqu...@gmail.com wrote:

 Hi Gavin,

 As you said yourself, the equation cannot be solved UNLESS you know both
 variables on one of the sides.
 In other words, you'd need to have the BG image in order to prep a FG
 image so it can be comped in sRGB space and match the results of a linear
 comp.

 So is there no way to output a PSD or PNG or TIFF which will look the
 same as my composite in Nuke over a white background?


 If you need to get the same results on a white background, you could prep
 your FG element such that:

 X = ( ( (FG * alpha + (1 - alpha)) ^ 2.2  -  (1 - alpha) )  /  alpha ) ^
 (1/2.2)

 Where X is the FG image you'd want to export to be comped on a white BG.
 But of course, this will only give you a match when comping the FG over a
 WHITE BG. If the BG changes, then you'd need to prep a different FG to go
 with it.

 Hope that helps.

 Cheers,
 Ivan



 On Mon, Nov 14, 2011 at 12:13 PM, Gavin Greenwalt 
 im.thatone...@gmail.com wrote:

 How are Nuke users handling workflows in which they need to deliver
 images with alpha that will be composited in sRGB space not linear space?

 Essentially we have a situation where you would need to find equations
 for u and v such that (xy + z(1-y))^(1/2.2) = (uv + z^(1/2.2)(1-v)).

 My initial impression is that it's impossible since the simplified
 version of this conundrum would be (x+y)^2 = (u+v)  which I believe is
 mathematically impossible to solve... right?   So is there no way to output
 a PSD or PNG or TIFF which will look the same as my composite in Nuke over
 a white background?

 Thanks,
 Gavin

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users

Re: [Nuke-users] OFlow: Source Frame at the current Frame using Speed?

2011-10-25 Thread Ivan Busquets
If it's set to speed, something like this should do it:

(frame-OFlow.first_frame) * OFlow.timingSpeed + OFlow.first_frame
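If you need the mapping outside of an expression, for example to dump ASCII data of source frames over a range as asked above, the same formula can be evaluated in plain Python. A sketch, with first_frame and speed standing in for OFlow.first_frame and OFlow.timingSpeed:

```python
def oflow_source_frame(frame, first_frame, speed):
    # constant-speed retime: the source frame advances `speed` frames
    # per output frame, anchored at first_frame
    return (frame - first_frame) * speed + first_frame

# dump (output frame, source frame) rows for frames 1-10 at half speed
for f in range(1, 11):
    print(f, oflow_source_frame(f, 1, 0.5))
# frame 1 maps to source 1.0, frame 10 maps to source 5.5
```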


On Tue, Oct 25, 2011 at 11:43 AM, David Schnee dav...@tippett.com wrote:

 **
 Does anyone know how to derive the actual source frames on the current
 frame when using the 'Speed' timing method in OFlow?  I'm looking to get a
 curve to export ascii data of the source frames on the current frame for a
 range.

 Cheers,
 -Schnee

 --

 \/ davids / comp \/ 177
 /\ tippettstudio /\ sno


 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Do not display the error state of a node which isinside a Gizmo

2011-10-12 Thread Ivan Busquets
Both error and hasError are available in the expression parser, actually.

node.error returns true if using the Node would result in an error (even if
the error comes from somewhere else upstream)
node.hasError only returns true when an error is raised within the node
itself.

As J said, just using hasError as an expression in the disable knob of a
Read node should do the trick there.
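For reference, the wiring J and Ivan describe might look like this from the Script Editor. This is only a sketch: it runs inside a Nuke session, and 'Read1' is a hypothetical name for the offending Read inside the gizmo.

```python
# inside a Nuke session: disable the Read whenever it errors,
# so the error state never propagates up to the parent gizmo
nuke.toNode('Read1')['disable'].setExpression('hasError')
```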


On Wed, Oct 12, 2011 at 10:38 AM, Nathan Rusch nathan_ru...@hotmail.com wrote:

 I think J meant error instead of hasError. node.hasError() is a Python
 method, while error is the Nuke expression command.

 -Nathan

 -Original Message- From: Dorian Fevrier
 Sent: Wednesday, October 12, 2011 9:45 AM
 To: nuke-users@support.thefoundry.co.uk
 Subject: Re: [Nuke-users] Do not display the error state of a node which
 isinside a Gizmo


 Thanks for your answer!

 To be honest, I do not really understand. :(
 But it gave me an idea

 def returnFalse():
  return False
 node.hasError = returnFalse

 # Result: Traceback (most recent call last):
  File string, line 1, in module
 AttributeError: 'Node' object attribute 'hasError' is read-only

 Were you talking about overloading the hasError function?

 Thanks in advance. :)

 Regards,

 Dorian

 On 10/12/2011 06:22 PM, J Bills wrote:

 someone else might have a better answer, but off the top of my head,
 if you put hasError in the disable knob of the offending node, I
 believe that will fix it.

 On Wed, Oct 12, 2011 at 4:53 AM, Dorian Fevrier dor...@macguff.fr wrote:

 Hi Nuke users,

 I'm searching something that appear to be simple but I don't find any way
 to
 do this.

 I have a Gizmo node with some switch and read nodes inside.

 Depending on the case, the Read node can have a bad file value (generated
 by an expression) and be in ERROR, and ERROR is written on the Gizmo.

 Is there a simple way to prevent this node from reporting its ERROR state
 on the Gizmo?

 Actually, the error message is written on it but, because I use a switch,
 the gizmo works perfectly...

 I hope someone already encountered this before.

 Thanks in advance,

 Dorian
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


  ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



  ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] merge all: mask/stencil VS in/out

2011-10-12 Thread Ivan Busquets
Hey Jan!

Annoying one for sure. Especially since the older Merge node did have the
expected behaviour.

This has bitten me a few times, and while you could obviously write your own
workaround gizmo/plugin, I find the easiest approach is to set up a couple
of shortcuts for 'Stencil' and 'Mask' that use the Merge node class instead
of Merge2

your_menu_item.addCommand('Merge/Merges/Stencil',
"nuke.createNode('Merge', 'operation stencil')", icon='MergeOut.png')
your_menu_item.addCommand('Merge/Merges/Mask',
"nuke.createNode('Merge', 'operation mask')", icon='MergeIn.png')

Hope that helps. But it might be worth giving The Foundry a nudge too.


On Wed, Oct 12, 2011 at 2:12 PM, Jan Dubberke j...@dewback.de wrote:

 Hi all,

 merge all doesn't seem to work with additional channels when set to
 stencil/mask.

 It does work using in/out (which I'm avoiding for all the obvious
 reasons)
 IMO this is a bug (and I'm actually baffled that I didn't come across it earlier)

 please have a look at the attached script snippet where I'm trying to alter
 the mask channel. so set your viewer to mask and compare the 2 merge
 nodes

 does this make sense to anyone?
 cheers,
 Jan



 set cut_paste_input [stack 0]
 version 6.2 v2
 CheckerBoard2 {
  inputs 0
  name CheckerBoard1
  selected true
  xpos 255
  ypos -169
 }
 Roto {
  output mask
  curves {AnimTree:  {
  Version: 1.2
  Flag: 0
  RootNode: 1
  Node: {
  NodeName: Root {
   Flag: 512
   NodeType: 1
   Transform: 0 0 S 0 0 S 0 0 S 0 0 S 0 1 S 0 1 S 0 0 S 0 1024 S 0 778
   NumOfAttributes: 10
   vis S 0 1 opc S 0 1 mbo S 0 1 mb S 0 1 mbs S 0 0.5 fo S 0 1
 fx S 0 0 fy S 0 0 ff S 0 1 ft S 0 0
  }
  NumOfChildren: 1
  Node: {
   NodeName: Ellipse1 {
Flag: 576
NodeType: 3
CurveGroup:  {
 Transform: 0 0 S 0 0 S 0 0 S 0 0 S 0 1 S 0 1 S 0 0 S 0 1057.5 S 0 727.5
 Flag: 0
 NumOfCubicCurves: 2
 CubicCurve:  {
  Type: 0 Flag: 8192 Dim: 2
  NumOfPoints: 12
  0 S 0 -329.99 S 0 0 0 0 S 0 1022.5 S 0 285 0 0 S 0 329.99 S 0 0 0 0 S
 0 0 S 0 -280.284 0 0 S 0 1620 S 0 792.5 0 0 S 0 0 S 0 280.284 0 0 S 0 329.99
 S 0 0 0 0 S 0 1022.5 S 0 1300 0 0 S 0 -329.99 S 0 0 0 0 S 0 0 S 0 280.284 0
 0 S 0 425 S 0 792.5 0 0 S 0 0 S 0 -280.284 0
 }
 CubicCurve:  {
  Type: 0 Flag: 8192 Dim: 2
  NumOfPoints: 12
  0 S 0 -329.99 S 0 0 0 0 S 0 0 S 0 0 0 0 S 0 329.99 S 0 0 0 0 S 0 0 S 0
 -280.284 0 0 S 0 0 S 0 0 0 0 S 0 0 S 0 280.284 0 0 S 0 329.99 S 0 0 0 0 S 0
 0 S 0 0 0 0 S 0 -329.99 S 0 0 0 0 S 0 0 S 0 280.284 0 0 S 0 0 S 0 0 0 0 S 0
 0 S 0 -280.284 0
 }
 NumOfAttributes: 43
 vis S 0 1 r S 0 1 g S 0 1 b S 0 1 a S 0 1 ro S 0 0 go S 0
 0 bo S 0 0 ao S 0 0 opc S 0 1 bm S 0 0 inv S 0 0 mbo S 0 0 mb
 S 0 1 mbs S 0 0.5 mbsot S 0 0 mbso S 0 0 fo S 0 1 fx S 0 0 fy S
 0 0 ff S 0 1 ft S 0 0 src S 0 0 stx S 0 0 sty S 0 0 str S 0 0
 sr S 0 0 ssx S 0 1 ssy S 0 1 ss S 0 0 spx S 0 1024 spy S 0 778
 stot S 0 0 sto S 0 0 sv S 0 0 sf S 0 1 sb S 0 1 nv S 0 1 view1
 S 0 1 ltn S 0 690 ltm S 0 690 ltt S 0 0 tt S 0 6
}
   }
   NumOfChildren: 0
  }
  }
 }
 }
  toolbox {selectAll {
  { selectAll ssx 1 ssy 1 sf 1 }
  { createBezier ssx 1 ssy 1 sf 1 sb 1 tt 4 }
  { createBSpline ssx 1 ssy 1 sf 1 sb 1 }
  { createEllipse ssx 1 ssy 1 sf 1 sb 1 tt 6 }
  { createRectangle ssx 1 ssy 1 sf 1 sb 1 }
  { brush ssx 1 ssy 1 sf 1 sb 1 }
  { eraser src 2 ssx 1 ssy 1 sf 1 sb 1 }
  { clone src 1 ssx 1 ssy 1 sf 1 sb 1 }
  { reveal src 3 ssx 1 ssy 1 sf 1 sb 1 }
  { dodge src 1 ssx 1 ssy 1 sf 1 sb 1 }
  { burn src 1 ssx 1 ssy 1 sf 1 sb 1 }
  { blur src 1 ssx 1 ssy 1 sf 1 sb 1 }
  { sharpen src 1 ssx 1 ssy 1 sf 1 sb 1 }
  { smear src 1 ssx 1 ssy 1 sf 1 sb 1 }
 } }
  toolbar_brush_hardness 0.20003
  toolbar_lifetime_type all
  toolbar_source_transform_scale {1 1}
  toolbar_source_transform_center {320 240}
  colorOverlay 0
  lifetime_type all frames
  lifetime_start 690
  lifetime_end 690
  motionblur_shutter_offset_type centred
  source_black_outside true
  name Roto1
  selected true
  xpos 255
  ypos -56
 }
 set N2277ad20 [stack 0]
 push $cut_paste_input
 Roto {
  output alpha
  curves {AnimTree:  {
  Version: 1.2
  Flag: 0
  RootNode: 1
  Node: {
  NodeName: Root {
   Flag: 512
   NodeType: 1
   Transform: 0 0 S 0 0 S 0 0 S 0 0 S 0 1 S 0 1 S 0 0 S 0 1024 S 0 778
   NumOfAttributes: 10
   vis S 0 1 opc S 0 1 mbo S 0 1 mb S 0 1 mbs S 0 0.5 fo S 0 1
 fx S 0 0 fy S 0 0 ff S 0 1 ft S 0 0
  }
  NumOfChildren: 1
  Node: {
   NodeName: Ellipse1 {
Flag: 512
NodeType: 3
CurveGroup:  {
 Transform: 0 0 S 0 0 S 0 0 S 0 0 S 0 1 S 0 1 S 0 0 S 0 1057.5 S 0 727.5
 Flag: 0
 NumOfCubicCurves: 2
 CubicCurve:  {
  Type: 0 Flag: 8192 Dim: 2
  NumOfPoints: 12
  0 S 0 -329.99 S 0 0 0 0 S 0 1297.9 S 0 276.9 0 0 S 0 329.99 S 0 0 0 0
 S 0 0 S 0 -280.284 0 0 S 0 1895.4 S 0 784.4 0 0 S 0 0 S 0 280.284 0 0 S 0
 329.99 S 0 0 0 0 S 0 1297.9 S 0 1291.9 0 0 S 0 -329.99 S 0 0 0 0 S 0 0 S 0
 280.284 0 0 S 0 700.4 S 0 784.4 0 0 S 0 0 S 0 -280.284 0
 }
 CubicCurve:  

Re: [Nuke-users] Transparancy grid

2011-10-05 Thread Ivan Busquets
If your show is using viewerProcess, then you still have the old Input
Process for yourself, right?
You can set up Input Process to happen either before or after the
viewerProcess, depending on your needs, but you don't need to turn off
either of them to see the other.

Unless I'm misreading and your show's viewer options are actually set up as
an Input Process node. If that's the case, I'd definitely recommend moving
that into the viewerProcess dropdown, so the users still get the Input
Process slot free to use for anything they need (an overlay, turning on/off
an anaglyph view, a certain look, etc).

I agree that this seems like a standard option in every other comp package,
but having the ability to use Input Process for anything you need makes it a
lot more flexible, IMHO.
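Moving a show look into the viewerProcess dropdown, as suggested, is a one-line registration in menu.py. A rough sketch that requires a Nuke session; 'ShowLUT' is a hypothetical gizmo name, not a shipped one:

```python
# in menu.py: registers a hypothetical ShowLUT gizmo as a viewerProcess,
# leaving the Input Process slot free for the artist's own overlays
nuke.ViewerProcess.register("ShowLUT", nuke.createNode, ("ShowLUT", ""))
```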



On Wed, Oct 5, 2011 at 10:31 AM, Randy Little randyslit...@gmail.com wrote:

 Ron, what I am saying is that I wouldn't want to be messing around with a
 SHOW template viewer process that may have all kinds of hooks inside of it.
 It would be nice if Nuke could show alpha as transparency. I always feel
 like Nuke's viewer is antique even compared to what was possible in Shake 2
 (I know it's way faster, but the viewer options are so limited). To do what
 you are saying in an environment where it's safe to do so would also turn
 all your other view processes on/off. Then you have to have the group open
 somewhere and go hunt for it just to toggle alpha transparency on and off.
 Does it work? Sure. Does it seem like almost every other compositing
 program dating back to at least Combustion, and maybe even Composite, had
 or has this feature? Yes. Is it a killer? No. It sure would be nice to
 have more of those viewer overlays that Shake had, though.

 Randy S. Little
 http://www.rslittle.com http://reel.rslittle.com




 On Wed, Oct 5, 2011 at 11:09, Ron Ganbar ron...@gmail.com wrote:

 You simply make a bigger viewer process with more options in it that can
 be turned on and off.



 Ron Ganbar
 email: ron...@gmail.com
 tel: +44 (0)7968 007 309 [UK]
  +972 (0)54 255 9765 [Israel]
 url: http://ronganbar.wordpress.com/



 On 5 October 2011 19:06, Randy Little randyslit...@gmail.com wrote:

 Yeah how does that work if you already have a view process for a job.
 Randy S. Little
 http://www.rslittle.com http://reel.rslittle.com





  On Wed, Oct 5, 2011 at 10:39, Deke Kincaid dekekinc...@gmail.com wrote:

 You can make the viewer go through any gizmo/group.  Just take the
 example Ron gave and register it as a viewer process.

 -deke


 On Wed, Oct 5, 2011 at 06:14, Ron Ganbar ron...@gmail.com wrote:

 Make a checkerboard and put everything over it? Wrap it up in a group
 and use it as VIEWER_INPUT.
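Ron's suggestion, sketched in Python rather than by hand. This requires a Nuke session, and the node choices are illustrative only:

```python
# build a VIEWER_INPUT group: the viewed image merged over a checkerboard
g = nuke.nodes.Group(name='VIEWER_INPUT')
g.begin()
img = nuke.nodes.Input()
checker = nuke.nodes.CheckerBoard2()
comp = nuke.nodes.Merge2(inputs=[checker, img])   # image over the checker
nuke.nodes.Output(inputs=[comp])
g.end()
```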


 Ron Ganbar
 email: ron...@gmail.com
 tel: +44 (0)7968 007 309 [UK]
  +972 (0)54 255 9765 [Israel]
 url: http://ronganbar.wordpress.com/



  On 5 October 2011 15:03, blemma nuke-users-re...@thefoundry.co.uk wrote:

 **
 Hi.

  Is there a way to view the alpha as a transparency grid? Like AE or
  Fusion.

 Thanks

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Transparancy grid

2011-10-05 Thread Ivan Busquets
You can register multiple viewerProcesses, not IP's.

That's why I was recommending to use viewerProcesses for anything that needs
to be shared across a show (like a 3D lut, any additional looks, crop
guides, etc), and leave the IP free for the artists to use anything they
want in there.

It's just an opinion, but I find people make a lot more use of the IP if it
doesn't interfere with anything else (like, they won't lose any of the
show's predefined looks if they switch their IP on and off)

Cheers,
Ivan

On Wed, Oct 5, 2011 at 12:37 PM, Deke Kincaid dekekinc...@gmail.com wrote:

 You can define any number of gizmos as separate viewer processes, just like
 srgb/rec709, etc. So you can have more than one IP, essentially.

 -deke


 On Wed, Oct 5, 2011 at 11:52, Randy Little randyslit...@gmail.com wrote:

 Yeah, I mean it would be nice to have more than one IP. Like, you could
 have several IP groups. Does that make sense? Is there an easy way to
 have several IP groups? Never tried it.

 I think it's that I miss Shake's built-in overlays.

 Randy S. Little
 http://www.rslittle.com http://reel.rslittle.com





  On Wed, Oct 5, 2011 at 12:18, Ivan Busquets ivanbusqu...@gmail.com wrote:

 If your show is using viewerProcess, then you still have the old Input
 Process for yourself, right?
 You can set up Input Process to happen either before or after the
 viewerProcess, depending on your needs, but you don't need to turn off
 either of them to see the other.

 Unless I'm misreading and your show's viewer options are actually set up
 as an Input Process node. If that's the case, I'd definitely recommend
 moving that into the viewerProcess dropdown, so the users still get the
 Input Process slot free to use for anything they need (an overlay, turning
 on/off an anaglyph view, a certain look, etc).

 I agree that this seems like a standard option in every other comp
 package, but having the ability to use Input Process for anything you need
 makes it a lot more flexible, IMHO.




  On Wed, Oct 5, 2011 at 10:31 AM, Randy Little randyslit...@gmail.com wrote:

 Ron, what I am saying is that I wouldn't want to be messing around with a
 SHOW template viewer process that may have all kinds of hooks inside of it.
 It would be nice if Nuke could show alpha as transparency. I always feel
 like Nuke's viewer is antique even compared to what was possible in Shake 2
 (I know it's way faster, but the viewer options are so limited). To do what
 you are saying in an environment where it's safe to do so would also turn
 all your other view processes on/off. Then you have to have the group open
 somewhere and go hunt for it just to toggle alpha transparency on and off.
 Does it work? Sure. Does it seem like almost every other compositing
 program dating back to at least Combustion, and maybe even Composite, had
 or has this feature? Yes. Is it a killer? No. It sure would be nice to
 have more of those viewer overlays that Shake had, though.

 Randy S. Little
 http://www.rslittle.com http://reel.rslittle.com




 On Wed, Oct 5, 2011 at 11:09, Ron Ganbar ron...@gmail.com wrote:

 You simply make a bigger viewer process with more options in it that
 can be turned on and off.



 Ron Ganbar
 email: ron...@gmail.com
 tel: +44 (0)7968 007 309 [UK]
  +972 (0)54 255 9765 [Israel]
 url: http://ronganbar.wordpress.com/



 On 5 October 2011 19:06, Randy Little randyslit...@gmail.com wrote:

 Yeah how does that work if you already have a view process for a job.

 Randy S. Little
 http://www.rslittle.com http://reel.rslittle.com





  On Wed, Oct 5, 2011 at 10:39, Deke Kincaid dekekinc...@gmail.com wrote:

 You can make the viewer go through any gizmo/group.  Just take the
 example Ron gave and register it as a viewer process.

 -deke


 On Wed, Oct 5, 2011 at 06:14, Ron Ganbar ron...@gmail.com wrote:

 Make a checkerboard and put everything over it? Wrap it up in a
 group and use it as VIEWER_INPUT.


 Ron Ganbar
 email: ron...@gmail.com
 tel: +44 (0)7968 007 309 [UK]
  +972 (0)54 255 9765 [Israel]
 url: http://ronganbar.wordpress.com/



  On 5 October 2011 15:03, blemma nuke-users-re...@thefoundry.co.uk wrote:

 **
 Hi.

 Is there a way to view the alpha as a transparancy grid? Like AE or
 Fusion.

 Thanks

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk,
 http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk,
 http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Odd gamma behaviour

2011-10-05 Thread Ivan Busquets
Yep, I don't think __alpha is defined when building on more modern machines
(didn't know the history behind it, though. Thanks Jonathan)

Ben, thanks for the credit, but this is not what I meant in the original
post.

What I was trying to say was that the original issue Ron posted was due to
filters acting on a square region. If you're applying a large blur to
something like a circle, pixels will still be filtered based on a square
region. This is generally ok, since the pixels at the corners of your image
(further from the center of the circle) will be filtered using a lot more
black samples than, say, any other pixels along the sides of the image
(which would be closer to the circle). So, they'll get smaller values, as
you would expect.

The problem comes when you push those smaller values (with a gamma
operation, for example) far enough that they start getting close to any
other value in the image. Then you end up with a square, as Ron's example
was showing.

Hope that makes more sense. :)

Cheers,
Ivan

On Wed, Oct 5, 2011 at 2:29 PM, Jonathan Egstad jegs...@earthlink.net wrote:

  ---snip--
 
  float G = gamma[z];
  // patch for linux alphas because the pow function behaves badly
  // for very large or very small exponent values.
  #ifdef __alpha
   if (G < 0.008f)
     G = 0.0f;
   if (G > 125.0f)
     G = 125.0f;
  #endif
 
  ---snip---

 I might be wrong, but I think this clamp patch is only enabled when built
 on DEC Alpha machines.  You may want to double-check that the __alpha
 token gets enabled during a build on an Intel machine.

 For those interested in Nuke history - the DEC Alpha was one of the first
 Linux boxes used in production at Digital Domain back in the mid '90s and
 Nuke ran like blazes on them compared to the relatively pokey SGIs.

 -jonathan

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

[Nuke-users] All plugins- Update... slow in 6.3?

2011-10-05 Thread Ivan Busquets
Hi,

Has anyone noticed the "All plugins -> Update" command taking a lot longer
in Nuke 6.3 than it does in 6.2?

Not sure if it's specific to my/our setup, so I'm curious if anyone else has
noticed a difference between both versions.

Thanks,
Ivan
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] nuke python - ShuffleCopy

2011-09-30 Thread Ivan Busquets
This is a Python syntax conflict. In that context, 'in' is a reserved
keyword in Python, and that takes precedence over the named knob argument
you're trying to pass.

Try this instead, which should work:

s = nuke.nodes.ShuffleCopy()
s['in'].setValue('rgba')
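For what it's worth, the same clash can be reproduced (and worked around) outside Nuke: `in` can never appear literally as a keyword argument, but it can be passed by unpacking a dict. A minimal sketch, where make_node is just a stand-in for a node factory:

```python
def make_node(**knobs):
    # stand-in for a factory that accepts arbitrary knob names
    return knobs

# make_node(in='rgba')              # SyntaxError: 'in' is a reserved word
node = make_node(**{'in': 'rgba'})  # dict unpacking sidesteps the parser
print(node['in'])                   # rgba
```

The same **-unpacking trick should also work directly with the nuke.nodes factories.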


On Fri, Sep 30, 2011 at 10:12 AM, Matias Volonte volontemat...@yahoo.com wrote:

 when I try to create throught python a shuffleCopy node the following way I
 get an error:

 nuke.nodes.ShuffleCopy(in='rgba')

 If I create this node manually and I print it, that parameter appears there
 and it is fine.

 what is wrong with this? thanks.

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Python Constraint in One axis alone

2011-09-08 Thread Ivan Busquets
You can pass an index as an argument to setExpression (the index being the
field you want to set the expression to)

b['translate'].setExpression('%s.translate'%st.name(), 0)  // To set the
expression for translate.x only




On Thu, Sep 8, 2011 at 8:43 AM, Matias Volonte volontemat...@yahoo.comwrote:

 Hello, I need some help, this is the issue I have,

 instead of constraining all the axes like this:

 b['translate'].setExpression('%s.translate'%st.name())

 i would like to constraint only the X axis, how can I do this? thanks

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] nuke renders and server loads

2011-09-06 Thread Ivan Busquets
Hi,

Found the script I sent a while back as an example of picking layers in
merges using up more resources.
Just tried it in 6.3, and I still get similar results.

Script attached for reference. Try viewing/rendering each of the two groups
while keeping an eye on memory usage of your Nuke process.

Cheers,
Ivan


On Mon, Sep 5, 2011 at 11:42 AM, Ivan Busquets ivanbusqu...@gmail.comwrote:

 Another thing is it sounds like you are shuffling out the channels to the
 rgb before you merge them.  This also does quite a hit in speed.  It is far
 faster to merge and pick the channels you need rather then shuffling them
 out first.


 That's interesting. My experience has usually been quite the opposite. I
 find the same operations done in Merges after shuffling to rgb are faster,
 and definitely use less resources, than picking the relevant layers inside
 the Merge nodes.

 Back in v5, I sent a script to support as an example of this behavior, more
 specifically how using layers within the Merge nodes caused memory usage to
 go through the roof (and not respect the memory limit in the preferences).
 At the time, this was logged as a memory leak bug. I don't think this was
 ever resolved, but to be fair this is probably less of an issue nowadays
 with higher-specced workstations.

 Hearing that you find it faster to pick layers in a merge node than
 shuffling & merging makes me very curious, though. I wonder if, given enough
 memory (so it's not depleted by the mentioned leak/overhead), some scripts
 may indeed run faster that way. Do you have any examples?

 And going back to the original topic, my experience with multi-channel exr
 files is:

 - Separate exr sequences for each aov/layer is faster than a single
 multi-channel exr, yes. As you mentioned, exr stores additional
 channels/layers in an interleaved fashion, so the reader has to step through
 all of them before going to the next scanline, even if you're not using them
 all. Even if you read each layer separately and copy them all into layers in
 your script (so you get the equivalent of a multi-channel exr), this is
 still faster than using a multi-channel exr file.
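As a back-of-the-envelope illustration of the interleaving point — a toy
model in plain Python of my own, not the actual EXR on-disk format:

```python
# Toy model (a simplification, not real EXR chunking): each scanline block
# stores every channel together, so decoding one channel for a scanline
# still means reading the whole block.
n_channels, width = 40, 8
scanline_block = [(ch, x) for ch in range(n_channels) for x in range(width)]
wanted = [px for px in scanline_block if px[0] == 0]  # just one plane
print(len(scanline_block), len(wanted))  # samples read vs. samples needed
```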

 - When merging different layers coming from the same stream, I find
 performance to be better when shuffling layers to rgba and keeping merges to
 operate on rgba. (although this is the opposite of what Deke said, so your
 mileage may vary)

 Cheers,
 Ivan

 On Thu, Sep 1, 2011 at 1:55 PM, Deke Kincaid dekekinc...@gmail.comwrote:

 Exr files are interleaved.  So when you look at some scanlines, you need
 to read in every single channel in the EXR from those scanlines even if you
 only need one of them.  So if you have a multichannel file with 40 channels
 but you only use rgba and one or two matte channels, then you're going to
 incur a large hit.

 Another thing is it sounds like you are shuffling out the channels to the
 rgb before you merge them.  This also does quite a hit in speed.  It is far
 faster to merge and pick the channels you need rather then shuffling them
 out first.

 -deke

 On Thu, Sep 1, 2011 at 12:37, Ryan O'Phelan designer...@gmail.comwrote:

 Recently I've been trying to evaluate the load of nuke renders on our
 file server, and ran a few tests comparing multichannel vs. non-multichannel
 reads, and my initial test results were opposite of what I was expecting.
 My tests showed that multichannel comps rendered about 20-25% slower, and
 made about 25% more load on the server in terms of disk reads. I was
 expecting the opposite, since there are fewer files being called with
 multichannel reads.

 For what it's worth, all reads were zip1 compressed EXRs and I tested
 real comps, as well as extremely simplified comps where the multichannel
 files were branched and then fed into a contact sheet. I was monitoring
 performance using the performance monitor on the file server using only 20
 nodes and with almost nobody using the server.

 Can anyone explain this? Or am I wrong and need to redo these tests?

 Thanks,
 Ryan



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users





layers_vs_shuffles.nk
Description: Binary data
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] nuke renders and server loads

2011-09-06 Thread Ivan Busquets
Sure, I understand what you're saying.
The example is only bundled that way because I didn't want to send a huge
multi-channel exr file.
But if you were to write out each generator to a file, plus a multi-channel
exr at the end of all the Copy nodes, and then redo those trees with actual
inputs, the results are pretty much the same.

At least, that's what I used in my original test.

Sorry the example was half baked. Does that make sense?


On Tue, Sep 6, 2011 at 6:32 PM, Deke Kincaid dekekinc...@gmail.com wrote:

 Hi Ivan

 The thing is, in the slower one (in red) in your example, you're first
 copying/shuffling everything to another channel before merging them from
 their respective channels.  In the fast one there isn't any shuffling around
 of channels first.  You're going in the opposite direction (shuffling to
 other channels instead of to rgba).  The act of actually moving channels
 around is what causes the hit no matter which direction you're going.

 To make the test equal you would need to use generators that allow you to
 create in a specific channel.  The Checkerboard and Colorbars in your
 example doesn't have this ability.

 -deke

 On Mon, Sep 5, 2011 at 23:06, Ivan Busquets ivanbusqu...@gmail.comwrote:

 Hi,

 Found the script I sent a while back as an example of picking layers in
 merges using up more resources.
 Just tried it in 6.3, and I still get similar results.

 Script attached for reference. Try viewing/rendering each of the two
 groups while keeping an eye on memory usage of your Nuke process.

 Cheers,
 Ivan



 On Mon, Sep 5, 2011 at 11:42 AM, Ivan Busquets ivanbusqu...@gmail.comwrote:

 Another thing is it sounds like you are shuffling out the channels to the
 rgb before you merge them.  This also does quite a hit in speed.  It is far
 faster to merge and pick the channels you need rather then shuffling them
 out first.


 That's interesting. My experience has usually been quite the opposite. I
 find the same operations done in Merges after shuffling to rgb are faster,
 and definitely use less resources, than picking the relevant layers inside
 the Merge nodes.

 Back in v5, I sent a script to support as an example of this behavior,
 more specifically how using layers within the Merge nodes caused memory
 usage to go through the roof (and not respect the memory limit in the
 preferences). At the time, this was logged as a memory leak bug. I don't
 think this was ever resolved, but to be fair this is probably less of an
 issue nowadays with higher-specced workstations.

 Hearing that you find it faster to pick layers in a merge node than
 shuffling & merging makes me very curious, though. I wonder if, given enough
 memory (so it's not depleted by the mentioned leak/overhead), some scripts
 may indeed run faster that way. Do you have any examples?

 And going back to the original topic, my experience with multi-channel
 exr files is:

 - Separate exr sequences for each aov/layer is faster than a single
 multi-channel exr, yes. As you mentioned, exr stores additional
 channels/layers in an interleaved fashion, so the reader has to step through
 all of them before going to the next scanline, even if you're not using them
 all. Even if you read each layer separately and copy them all into layers in
 your script (so you get the equivalent of a multi-channel exr), this is
 still faster than using a multi-channel exr file.

 - When merging different layers coming from the same stream, I find
 performance to be better when shuffling layers to rgba and keeping merges to
 operate on rgba. (although this is the opposite of what Deke said, so your
 mileage may vary)

 Cheers,
 Ivan

 On Thu, Sep 1, 2011 at 1:55 PM, Deke Kincaid dekekinc...@gmail.comwrote:

 Exr files are interleaved.  So when you look at some scanlines, you need
 to read in every single channel in the EXR from those scanlines even if you
 only need one of them.  So if you have a multichannel file with 40 channels
 but you only use rgba and one or two matte channels, then your going to
 incur a large hit.

 Another thing is it sounds like you are shuffling out the channels to
 the rgb before you merge them.  This also does quite a hit in speed.  It is
 far faster to merge and pick the channels you need rather then shuffling
 them out first.

 -deke

 On Thu, Sep 1, 2011 at 12:37, Ryan O'Phelan designer...@gmail.comwrote:

 Recently I've been trying to evaluate the load of nuke renders on our
 file server, and ran a few tests comparing multichannel vs. 
 non-multichannel
 reads, and my initial test results were opposite of what I was expecting.
 My tests showed that multichannel comps rendered about 20-25% slower,
 and made about 25% more load on the server in terms of disk reads. I was
 expecting the opposite, since there are fewer files being called with
 multichannel reads.

 For what it's worth, all reads were zip1 compressed EXRs and I tested
 real comps, as well as extremely simplified comps where the multichannel
 files were

Re: [Nuke-users] nuke renders and server loads

2011-09-06 Thread Ivan Busquets
Or, to go even further and remove any differences between multi-channel vs
non multi-channel exrs, have a look at this script instead (attached)

Even when you're reading in the same multi-channel exr, my experience is
that shuffling out to rgba and doing merges in rgba only uses less resources
than picking the channels in the merges.



On Tue, Sep 6, 2011 at 6:37 PM, Ivan Busquets ivanbusqu...@gmail.comwrote:

 Sure, I understand what you're saying.
 The example is only bundled that way because I didn't want to send a huge
 multi-channel exr file.
 But if you were to write out each generator to a file, plus a multi-channel
 exr at the end of all the Copy nodes, and then redo those trees with actual
 inputs, the results are pretty much the same.

 At least, that's what I used in my original test.

 Sorry the example was half baked. Does that make sense?



 On Tue, Sep 6, 2011 at 6:32 PM, Deke Kincaid dekekinc...@gmail.comwrote:

 Hi Ivan

 The thing is, in the slower one (in red) in your example, you're first
 copying/shuffling everything to another channel before merging them from
 their respective channels.  In the fast one there isn't any shuffling around
 of channels first.  You're going in the opposite direction (shuffling to
 other channels instead of to rgba).  The act of actually moving channels
 around is what causes the hit no matter which direction you're going.

 To make the test equal you would need to use generators that allow you to
 create in a specific channel.  The Checkerboard and Colorbars in your
 example doesn't have this ability.

  -deke

 On Mon, Sep 5, 2011 at 23:06, Ivan Busquets ivanbusqu...@gmail.comwrote:

 Hi,

 Found the script I sent a while back as an example of picking layers in
 merges using up more resources.
 Just tried it in 6.3, and I still get similar results.

 Script attached for reference. Try viewing/rendering each of the two
 groups while keeping an eye on memory usage of your Nuke process.

 Cheers,
 Ivan



 On Mon, Sep 5, 2011 at 11:42 AM, Ivan Busquets 
 ivanbusqu...@gmail.comwrote:

 Another thing is it sounds like you are shuffling out the channels to
 the rgb before you merge them.  This also does quite a hit in speed.  It 
 is
 far faster to merge and pick the channels you need rather then shuffling
 them out first.


 That's interesting. My experience has usually been quite the opposite. I
 find the same operations done in Merges after shuffling to rgb are faster,
 and definitely use less resources, than picking the relevant layers inside
 the Merge nodes.

 Back in v5, I sent a script to support as an example of this behavior,
 more specifically how using layers within the Merge nodes caused memory
 usage to go through the roof (and not respect the memory limit in the
 preferences). At the time, this was logged as a memory leak bug. I don't
 think this was ever resolved, but to be fair this is probably less of an
 issue nowadays with higher-specced workstations.

 Hearing that you find it faster to pick layers in a merge node than
 shuffling & merging makes me very curious, though. I wonder if, given 
 enough
 memory (so it's not depleted by the mentioned leak/overhead), some scripts
 may indeed run faster that way. Do you have any examples?

 And going back to the original topic, my experience with multi-channel
 exr files is:

 - Separate exr sequences for each aov/layer is faster than a single
 multi-channel exr, yes. As you mentioned, exr stores additional
 channels/layers in an interleaved fashion, so the reader has to step 
 through
 all of them before going to the next scanline, even if you're not using 
 them
 all. Even if you read each layer separately and copy them all into layers 
 in
 your script (so you get the equivalent of a multi-channel exr), this is
 still faster than using a multi-channel exr file.

 - When merging different layers coming from the same stream, I find
 performance to be better when shuffling layers to rgba and keeping merges 
 to
 operate on rgba. (although this is the opposite of what Deke said, so your
 mileage may vary)

 Cheers,
 Ivan

 On Thu, Sep 1, 2011 at 1:55 PM, Deke Kincaid dekekinc...@gmail.comwrote:

 Exr files are interleaved.  So when you look at some scanlines, you
 need to read in every single channel in the EXR from those scanlines even 
 if
 you only need one of them.  So if you have a multichannel file with 40
 channels but you only use rgba and one or two matte channels, then your
 going to incur a large hit.

 Another thing is it sounds like you are shuffling out the channels to
 the rgb before you merge them.  This also does quite a hit in speed.  It 
 is
 far faster to merge and pick the channels you need rather then shuffling
 them out first.

 -deke

 On Thu, Sep 1, 2011 at 12:37, Ryan O'Phelan designer...@gmail.comwrote:

 Recently I've been trying to evaluate the load of nuke renders on our
 file server, and ran a few tests comparing multichannel vs. 
 non-multichannel
 reads, and my initial test

Re: [Nuke-users] Re: Passing an object track through multiple transforms

2011-09-06 Thread Ivan Busquets

 it looks like nuke gives a rounding error using that setup (far values are
 .99902 instead of 1.0).  probably negligible but I like 1.0 betta.


One small thing about both those UV-map generation methods. Keep in mind
that STMap samples pixels at the center, so you'll need to account for that
half-pixel difference in your expression. Otherwise the resulting map is
going to introduce a bit of unnecessary filtering when you feed it to an
STmap.

An expression like this should give you a 1-to-1 result when you feed it
into an STMap:

set cut_paste_input [stack 0]
version 6.3 v2
push $cut_paste_input
Expression {
 expr0 (x+0.5)/(width)
 expr1 (y+0.5)/(height)
 name Expression2
 selected true
 xpos -92
 ypos -143
}
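The half-pixel reasoning can be checked with a few lines of plain Python
(pure arithmetic, not Nuke code): an STMap value is an absolute position in
pixel units (uv * width), and source pixels are sampled at their centers
(x + 0.5), so the identity map must store (x + 0.5) / width.

```python
# For each pixel x in a width-4 image, (x + 0.5) / width stores the
# pixel-center position; STMap then samples at uv * width, which lands
# exactly back on the center of pixel x, so no filtering occurs.
width = 4
for x in range(width):
    uv = (x + 0.5) / width
    assert uv * width - 0.5 == x  # exact 1-to-1 mapping, no resampling
```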


With regards to the original question, though, it's a shame that one doesn't
have access to the concatenated 2d matrix from 2D transform nodes within
expressions. Otherwise you could just multiply your source point by the
concatenated matrix and get its final position. This information is indeed
passed down the tree, but it's not accessible for anything but plugins (that
I know).

You could probably take advantage of the fact that the bbox is transformed
the same way as your image, and you CAN ask for the bbox boundaries using
expressions. So, you could have something with a very small bbox centered
around your point of interest, transform that using the same transforms
you're using for your kites, and then get the center of the transformed
bbox, if that makes sense. It's a bit convoluted, but it might do the trick
for you.

Here's an example:

set cut_paste_input [stack 0]
version 6.3 v2
push $cut_paste_input
Group {
 name INPUT_POSITION
 selected true
 xpos -883
 ypos -588
 addUserKnob {20 User}
 addUserKnob {12 position}
 position {1053.5 592}
}
 Input {
  inputs 0
  name Input1
  xpos -469
  ypos -265
 }
 Rectangle {
  area {{parent.position.x i x1 962} {parent.position.y i x1 391} {area.x+1 i} {area.y+1 i}}
  name Rectangle1
  selected true
  xpos -469
  ypos -223
 }
 Output {
  name Output1
  xpos -469
  ypos -125
 }
end_group
Transform {
 translate {36 0}
 center {1052 592}
 shutteroffset centred
 name Transform1
 selected true
 xpos -883
 ypos -523
}
set C48d17580 [stack 0]
Transform {
 translate {0 -11}
 rotate -34
 center {1052 592}
 shutteroffset centred
 name Transform2
 selected true
 xpos -883
 ypos -497
}
set C4489ddc0 [stack 0]
Transform {
 scale 1.36
 center {1052 592}
 shutteroffset centred
 name Transform3
 selected true
 xpos -883
 ypos -471
}
set C4d2c2290 [stack 0]
Group {
 name OUT_POSITION
 selected true
 xpos -883
 ypos -409
 addUserKnob {20 User}
 addUserKnob {12 out_position}
 out_position {{(input.bbox.x + input.bbox.r) / 2} {(input.bbox.y + input.bbox.t) / 2}}
}
 Input {
  inputs 0
  name Input1
  selected true
  xpos -469
  ypos -265
 }
 Output {
  name Output1
  xpos -469
  ypos -125
 }
end_group
CheckerBoard2 {
 inputs 0
 name CheckerBoard2
 selected true
 xpos -563
 ypos -623
}
clone $C48d17580 {
 xpos -563
 ypos -521
 selected true
}
clone $C4489ddc0 {
 xpos -563
 ypos -495
 selected true
}
clone $C4d2c2290 {
 xpos -563
 ypos -469
 selected true
}


Cheers,
Ivan






On Tue, Sep 6, 2011 at 6:09 PM, J Bills jbillsn...@flickfx.com wrote:

 sure - looks even cleaner than the ramps crap done from memory - actually,
 now that I look at it for some reason it looks like nuke gives a rounding
 error using that setup (far values are .99902 instead of 1.0).  probably
 negligible but I like 1.0 betta.  nice one AK.

 so play around with this, joshua -


 set cut_paste_input [stack 0]
 version 6.2 v4

 Constant {
  inputs 0
  channels rgb
  name Constant2
  selected true
  xpos 184
  ypos -174

 }
 Expression {
  expr0 x/(width-1)
  expr1 y/(height-1)
  name Expression2
  selected true
  xpos 184
  ypos -71

 }
 NoOp {
  name WARP_GOES_HERE
  tile_color 0xff00ff
  selected true
  xpos 184
  ypos 11

 }
 Shuffle {
  out motion
  name Shuffle
  label choose motion\nor other output\nchannel
  selected true
  xpos 184
  ypos 83

 }
 push 0
 STMap {
  inputs 2
  channels motion
  name STMap1
  selected true
  xpos 307
  ypos 209

 }





 On Tue, Sep 6, 2011 at 5:23 PM, Anthony Kramer 
 anthony.kra...@gmail.comwrote:

 Heres a 1-node UVmap for you:

 set cut_paste_input [stack 0]
 version 6.3 v2
 push $cut_paste_input
 Expression {
  expr0 x/(width-1)
  expr1 y/(height-1)
  name Expression2
  selected true
  xpos -480
  ypos 2079
 }



 On Tue, Sep 6, 2011 at 4:46 PM, J Bills jbillsn...@flickfx.com wrote:

 sure - that's what he's saying.  think of the uv map as creating a
 blueprint of your transforms or distortions.

 after you have that blueprint, you can run whatever you want through the
 same distortion and repurpose it all day long for whatever you want that
 might 

Re: [Nuke-users] Re: Passing an object track through multiple transforms

2011-09-06 Thread Ivan Busquets
 Nuke's appears not to be an integer, but the values in your tree appear
 to either be 0.0 or 0.5, which is slightly odd


Bbox boundaries in Nuke are also integers (just like Shake's DOD). The
output value is always n.0 or n.5 because I'm averaging the center of the
bbox.

Of course this is by no means an accurate way of getting the transformation
of a given point, but more an idea of something he could do without
resorting to the NDK.

Ideally, you'd want to write a plugin for that, I agree. Either one that
exposes the concatenated matrix, or one where you could plug a stack of
transforms and directly apply the result to one or more points. A
Reconcile2D? :)


On Tue, Sep 6, 2011 at 8:47 PM, Ben Dickson ben.dick...@rsp.com.au wrote:

 Heh, I remember trying the exact same thing in Shake years ago, to
 transform a roto point instead of using a 4-point stablise - the problem is
 the dod was an integer

 Nuke's appears not to be an integer, but the values in your tree appear
 to either be 0.0 or 0.5, which is slightly odd

 Seems like it'd be fairly simple to make a plugin which exposes the 2D
 transform matrix,
 http://docs.thefoundry.co.uk/nuke/63/ndkdevguide/knobs-and-handles/output-knobs.html

 Ivan Busquets wrote:

it looks like nuke gives a rounding error using that setup (far
values are .99902 instead of 1.0).  probably negligible but I like
1.0 betta.


 One small thing about both those UV-map generation methods. Keep in mind
 that STMap samples pixels at the center, so you'll need to account for that
 half-pixel difference in your expression. Otherwise the resulting map is
 going to introduce a bit of unnecessary filtering when you feed it to an
 STmap.

 An expression like this should give you a 1-to-1 result when you feed it
 into an STMap:
 ----
 set cut_paste_input [stack 0]
 version 6.3 v2
 push $cut_paste_input
 Expression {
  expr0 (x+0.5)/(width)
  expr1 (y+0.5)/(height)
  name Expression2
  selected true
  xpos -92
  ypos -143
 }
 ----

 With regards to the original question, though, it's a shame that one
 doesn't have access to the concatenated 2d matrix from 2D transform nodes
 within expressions. Otherwise you could just multiply your source point by
 the concatenated matrix and get its final position. This information is
 indeed passed down the tree, but it's not accessible for anything but
 plugins (that I know).

 You could probably take advantage of the fact that the bbox is transformed
 the same way as your image, and you CAN ask for the bbox boundaries using
 expressions. So, you could have something with a very small bbox centered
 around your point of interest, transform that using the same transforms
 you're using for your kites, and then get the center of the transformed
 bbox, if that makes sense. It's a bit convoluted, but it might do the trick
 for you.

 Here's an example:
 ----
 set cut_paste_input [stack 0]
 version 6.3 v2
 push $cut_paste_input
 Group {
  name INPUT_POSITION
  selected true
  xpos -883
  ypos -588
  addUserKnob {20 User}
  addUserKnob {12 position}
  position {1053.5 592}
 }
  Input {
  inputs 0
  name Input1
  xpos -469
  ypos -265
  }
  Rectangle {
  area {{parent.position.x i x1 962} {parent.position.y i x1 391} {area.x+1 i} {area.y+1 i}}
  name Rectangle1
  selected true
  xpos -469
  ypos -223
  }
  Output {
  name Output1
  xpos -469
  ypos -125
  }
 end_group
 Transform {
  translate {36 0}
  center {1052 592}
  shutteroffset centred
  name Transform1
  selected true
  xpos -883
  ypos -523
 }
 set C48d17580 [stack 0]
 Transform {
  translate {0 -11}
  rotate -34
  center {1052 592}
  shutteroffset centred
  name Transform2
  selected true
  xpos -883
  ypos -497
 }
 set C4489ddc0 [stack 0]
 Transform {
  scale 1.36
  center {1052 592}
  shutteroffset centred
  name Transform3
  selected true
  xpos -883
  ypos -471
 }
 set C4d2c2290 [stack 0]
 Group {
  name OUT_POSITION
  selected true
  xpos -883
  ypos -409
  addUserKnob {20 User}
  addUserKnob {12 out_position}
  out_position {{(input.bbox.x + input.bbox.r) / 2} {(input.bbox.y + input.bbox.t) / 2}}
 }
  Input {
  inputs 0
  name Input1
  selected true
  xpos -469
  ypos -265
  }
  Output {
  name Output1
  xpos -469
  ypos -125
  }
 end_group
 CheckerBoard2 {
  inputs 0
  name CheckerBoard2
  selected true
  xpos -563
  ypos -623
 }
 clone $C48d17580 {
  xpos -563
  ypos -521
  selected true
 }
 clone $C4489ddc0 {
  xpos -563
  ypos -495
  selected true
 }
 clone $C4d2c2290 {
  xpos -563
  ypos -469
  selected true
 }
 ----

 Cheers,
 Ivan






 On Tue, Sep 6, 2011 at 6:09 PM, J Bills jbillsn...@flickfx.com mailto:
 jbillsn...@flickfx.com** wrote:

sure - looks even cleaner than

Re: [Nuke-users] Frame Range - no effect

2011-09-01 Thread Ivan Busquets
It just sets the underlying framerange info for that part of the tree, but
doesn't change any in/out points you may have in your read nodes. For
example, you could have a FrameRange node after a series of merges, or even
retimes, and then it wouldn't be obvious what it's actually doing with Read
nodes above.

I find it useful mostly to set a framerange at a certain section of the tree
so the Viewer will pick that up when the timeline range is set to "Input",
or for flipbooks to take the specified framerange by default.

From the node's help, it looks like AppendClip also reads it.

On Thu, Sep 1, 2011 at 11:55 AM, Craig Tozzi n...@2000strong.com wrote:

 I don't use this node that often,  but I'm trying to use it today, and it
 doesn't seem to be doing anything.

 I've tried a simple test outside of my comp, with just a read node to a
 Frame Range node to the viewer, and setting values seems to have no visible
 effect - frames are not cut from the beginning or end. I've not had any luck
 applying it to .movs or to image sequences.

 Am I missing something really simple or is this not doing what it's
 supposed to?

 Running NukeX 6.2v2 on OSX.

 Thanks!


 _
 Craig Tozzi
 twothousandstrong
 venice, california

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users]Invert vectorFiled 3d lut

2011-08-24 Thread Ivan Busquets
Nope, vectorfield / 3d luts / cube transformations cannot be accurately
reversed.
Depending on how extreme your 3D lookup's color shifts are, you may be able
to build another one that approximates the reverse, but an approximation is
all you can hope for.
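A toy illustration of why an exact inverse generally can't exist — a
deliberately simplified 1D stand-in in plain Python, not the Vectorfield
format: as soon as the lookup maps two different inputs to the same output,
no inverse transform can separate them again.

```python
# Toy "LUT" that clips highlights. Two distinct inputs collapse to one
# output, so no function applied afterwards can recover both originals.
def toy_lut(v):
    return min(v, 0.8)

assert toy_lut(0.9) == toy_lut(1.0) == 0.8  # the information is gone
```

Real 3D LUTs lose information the same way through clipping and through the
finite resolution of the lattice, which is why only approximate inverses are
possible.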



On Wed, Aug 24, 2011 at 5:10 PM, mathieu arce arcemath...@hotmail.comwrote:

  hello every one !
 Is there a way to invert a vectorfield 3D node in order to have the exact
 opposite color transformation ?
 Bye.
 Mathieu.

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] converting a cornerpin to a matrix

2011-08-22 Thread Ivan Busquets
Hi Pete,

It's very possible, but probably easier to do it in Python than using
expression-links. Have a look at the nuke.math.Matrix4 class, and its
mapUnitSquareToQuad() method.

I have a function somewhere to feed the transformation of a CornerPin
node into the matrix knob of a roto/rotopaint node that would
probably fit the bill for this too. I can try to dig that up if you're
interested, but it would roughly consist of the following steps:

1 - Get the coordinates of the 4 TO corners from your cornerpin node.
2 - Build a TO matrix using mapUnitSquareToQuad(to0.x, to0.y, to1.x,
to1.y, ...)
3 - Get the coordinates of the 4 FROM corners from your cornerpin node.
4 - Build a FROM matrix using mapUnitSquareToQuad(from0.x, from0.y,
from1.x, from1.y, ...)
5 - The final transformation matrix should be =  to_matrix *
from_matrix.inverse().

Then you can use that to fill in the matrix knob in a gridwarp, a roto
node, etc.
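For readers without Nuke at hand, the five steps above can be sketched in
plain Python with 3x3 homogeneous matrices. This mirrors the idea behind
nuke.math.Matrix4.mapUnitSquareToQuad, but all helper names below are my
own illustrative code, not Nuke API, and the corner values are made-up
test data.

```python
# Sketch of steps 1-5 with plain 3x3 homogeneous matrices (not Nuke API).

def map_unit_square_to_quad(quad):
    # Heckbert-style projective map: (0,0),(1,0),(1,1),(0,1) -> quad corners.
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    sx, sy = x0 - x1 + x2 - x3, y0 - y1 + y2 - y3
    if abs(sx) < 1e-12 and abs(sy) < 1e-12:  # affine case
        return [[x1 - x0, x3 - x0, x0], [y1 - y0, y3 - y0, y0], [0.0, 0.0, 1.0]]
    dx1, dy1, dx2, dy2 = x1 - x2, y1 - y2, x3 - x2, y3 - y2
    den = dx1 * dy2 - dx2 * dy1
    g = (sx * dy2 - dx2 * sy) / den
    h = (dx1 * sy - sx * dy1) / den
    return [[x1 - x0 + g * x1, x3 - x0 + h * x3, x0],
            [y1 - y0 + g * y1, y3 - y0 + h * y3, y0],
            [g, h, 1.0]]

def mat_inv(m):
    # 3x3 inverse via the adjugate.
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return [[(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
            [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
            [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det]]

def mat_mul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def apply_matrix(M, p):
    # Apply a homography to a 2D point (homogeneous divide by w).
    x, y = p
    w = M[2][0] * x + M[2][1] * y + M[2][2]
    return ((M[0][0] * x + M[0][1] * y + M[0][2]) / w,
            (M[1][0] * x + M[1][1] * y + M[1][2]) / w)

# Steps 1-5: TO matrix, FROM matrix, then final = to * from.inverse().
from_q = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]       # made-up corners
to_q = [(100, 50), (1800, 120), (1700, 900), (60, 1000)]    # made-up corners
H = mat_mul(map_unit_square_to_quad(to_q),
            mat_inv(map_unit_square_to_quad(from_q)))
for src, dst in zip(from_q, to_q):
    px, py = apply_matrix(H, src)
    assert abs(px - dst[0]) < 1e-6 and abs(py - dst[1]) < 1e-6
```

Inside Nuke, the two matrix-building steps would be calls to
projectionMatrixTo.mapUnitSquareToQuad(...) and
projectionMatrixFrom.mapUnitSquareToQuad(...) on nuke.math.Matrix4 objects.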

Not the best example, but if you want some more info on how to use
nuke.math.Matrix4.mapUnitSquareToQuad(), have a look here (towards the
end of the page).

http://www.nukepedia.com/written-tutorials/using-the-nukemath-python-module-to-do-vector-and-matrix-operations/page-4/

Hope that helps.

Cheers,
Ivan

On Sun, Aug 21, 2011 at 10:24 PM, Pete O'Connell
pedrooconn...@gmail.com wrote:
 Hi does anyone know if it is possible somehow to convert cornerpin values to
 a 4 by 4 matrix? Seem like it would be useful for the new Gridwarp and
 Splinewarp nodes.
 Thanks
 Pete


 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] converting a cornerpin to a matrix

2011-08-22 Thread Ivan Busquets
Glad to hear.
:)

In all fairness, I think it is very awkward that one would need to do that
in the first place. Especially if you can pass a matrix object as an argument
to setValue(), one would expect that the knob should be filled so it
performs the same transformation as the passed matrix object. It's not the
first time this has been flagged as problematic in this list, I think.

But anyway, glad you got it working.

Cheers,
Ivan


On Mon, Aug 22, 2011 at 7:12 PM, Pete O'Connell pedrooconn...@gmail.comwrote:

 Thanks very much Ivan. It works after transposing!

 Cheers
 Pete


 On Tue, Aug 23, 2011 at 11:34 AM, Ivan Busquets ivanbusqu...@gmail.comwrote:

 Hi Pete,

 Sorry I forgot to send that example as reference, but what you have so far
 is good.

 I think there's just a couple of things you should need to change to get
 this working.

 1- Don't normalize the from and to coordinates. The matrices you're
 building need to map from the to corners to a unit square, and then from a
 unit square to your from coordinates, using absolute values.
 So just do:

 to1x = theCornerpinNode['to1'].value()[0]
 etc...

 2- By the time you fill the theCornerpinAsMatrix object, you should have
 the correct transformation matrix already, but in column-major order.
 Unfortunately, values in matrix knobs seem to be returned (or set when using
 setValue) in row-major order, so you would either need to fill in the matrix
 knob in a loop mirroring the cell values, or, what I usually do is transpose
 the matrix before filling in the knob.

 So, in your script, after this line:

 theCornerpinAsMatrix = projectionMatrixTo * projectionMatrixFrom.inverse()

 You can just do:

 theCornerpinAsMatrix.transpose()

 And then continue as you were.
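 A quick plain-Python sanity check of why the transpose fixes the knob fill
 (a toy 2x2 of my own, independent of Nuke): flattening a matrix in
 row-major order is the same as flattening its transpose in column-major
 order.

```python
# Row-major flattening of M equals column-major flattening of M transposed,
# which is why transposing before filling a row-major knob gives the same
# transformation as the original column-major matrix.
M = [[1, 2],
     [3, 4]]
row_major = [v for row in M for v in row]            # [1, 2, 3, 4]
T = [[M[c][r] for c in range(2)] for r in range(2)]  # transpose of M
col_major_of_T = [T[r][c] for c in range(2) for r in range(2)]
assert row_major == col_major_of_T
```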

 See if that helps.

 Cheers,
 Ivan



 On Mon, Aug 22, 2011 at 6:37 PM, Pete O'Connell 
 pedrooconn...@gmail.comwrote:

 #OK I think I'm getting close. I tried Ivan's instructions but I think I
 missed something. It looks really close though. Anyone care to take a look
 at what I have so far?

 ##
 projectionMatrixTo = nuke.math.Matrix4()
 projectionMatrixFrom = nuke.math.Matrix4()

 #dir(projectionMatrix)
 theCornerpinNode = nuke.toNode('CornerPin2D1')
 imageWidth = float(theCornerpinNode.width())
 imageHeight = float(theCornerpinNode.height())

 #normalized to and from coordinates
 to1x = theCornerpinNode['to1'].value()[0]/imageWidth
 to1y = theCornerpinNode['to1'].value()[1]/imageHeight
 to2x = theCornerpinNode['to2'].value()[0]/imageWidth
 to2y = theCornerpinNode['to2'].value()[1]/imageHeight
 to3x = theCornerpinNode['to3'].value()[0]/imageWidth
 to3y = theCornerpinNode['to3'].value()[1]/imageHeight
 to4x = theCornerpinNode['to4'].value()[0]/imageWidth
 to4y = theCornerpinNode['to4'].value()[1]/imageHeight

 from1x = theCornerpinNode['from1'].value()[0]/imageWidth
 from1y = theCornerpinNode['from1'].value()[1]/imageHeight
 from2x = theCornerpinNode['from2'].value()[0]/imageWidth
 from2y = theCornerpinNode['from2'].value()[1]/imageHeight
 from3x = theCornerpinNode['from3'].value()[0]/imageWidth
 from3y = theCornerpinNode['from3'].value()[1]/imageHeight
 from4x = theCornerpinNode['from4'].value()[0]/imageWidth
 from4y = theCornerpinNode['from4'].value()[1]/imageHeight



 projectionMatrixTo.mapUnitSquareToQuad(to1x,to1y,to2x,to2y,to3x,to3y,to4x,to4y)

 projectionMatrixFrom.mapUnitSquareToQuad(from1x,from1y,from2x,from2y,from3x,from3y,from4x,from4y)

 theCornerpinAsMatrix = projectionMatrixTo*projectionMatrixFrom.inverse()

 theNewCornerpinNode = nuke.toNode('CornerPin2D2')
 theNewCornerpinNode['transform_matrix'].setValue(theCornerpinAsMatrix)


 
 #Thanks
 #Pete


 On Tue, Aug 23, 2011 at 10:14 AM, Pete O'Connell 
 pedrooconn...@gmail.com wrote:

 Hi Michael. I had a look at the planar tracker but as far as I can tell,
  that node creates a cornerpin based on the matrix that it calculates, but as
  far as I can see there is no way to feed the planar tracker
  cornerpin data and have it calculate the corresponding matrix. I think
 nuke.math.Matrix4 is the way.

 Pete


 On Tue, Aug 23, 2011 at 3:30 AM, Michael Garrett michaeld...@gmail.com
  wrote:

 I haven't got 6.3 in front of me but does the PlanarTracker add
  anything into the mix to make this easier?  I seem to remember it outputs a
  4*4 matrix for the planar track, which is essentially a corner pin, but I'll
 admit I need to look more closely at it.


 On 21 August 2011 23:40, Pete O'Connell pedrooconn...@gmail.comwrote:

 Thanks a lot Ivan. I'll have a play around with that tonight.

 Cheers
 Pete


 On Mon, Aug 22, 2011 at 3:57 PM, Ivan Busquets 
 ivanbusqu...@gmail.com wrote:

 Hi Pete,

 It's very possible, but probably easier to do it in Python than using
 expression-links

Re: [Nuke-users] zDepth maya vs nuke

2011-08-19 Thread Ivan Busquets
Nuke's ScanlineRender outputs depth as 1/z, so the data fades into black
as it gets further from camera.

This is so that anything outside the bounding box (generally black) will
already have the right value for something that's at infinity, so it will
blend nicely with parts of the scene that are far away.

It sounds like what you're getting from Maya are real distance (not
normalized) values. So if you want to bring the two in line, you can use a
ColorExpression and set the depth channel to be 1/depth.z, and attach that
to either the output of your scanline render, or to your Maya renders
(depending on which one you want to match).
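The conversion itself is a one-liner, but it's easy to get backwards, so here is a plain-Python sketch (function name is mine) of mapping metric distance-from-camera depth to the ScanlineRender 1/z convention:

```python
def maya_to_nuke_depth(z):
    """Convert a metric depth value (distance from camera, as Maya
    renders it) to Nuke's ScanlineRender convention of 1/z: near
    objects get large values, and the far distance fades towards 0."""
    if z == float('inf'):
        # empty background: 1/inf = 0, so it blends with far geometry
        return 0.0
    return 1.0 / z
```

With this convention, anything outside the bounding box (black, i.e. 0) already reads as "infinitely far away", which is why the two blend nicely.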

Hope that helps.

Cheers,
Ivan

On Fri, Aug 19, 2011 at 5:11 PM, Gary Jaeger g...@corestudio.com wrote:

 OK, this seems like it should be so easy but I've never tried to match up
 the nuke depth with a depth pass out of maya. I've done a super-simple scene
 with just a plane and a camera. In maya the camera is 10 units from the
 origin (center of the frame). i.e. a camera slightly up on the Y looking
 down. I've rendered this with a depth pass as well as brought in the
 geometry and camera into nuke.

 My usual habit is to use a copy node to copy my cameraZ channel into the
 default depth.Z nuke channel (mistake?). I'm using the zBlur node set to
 focal plane setup to visualize my depth.

 I first assumed I could use the same values in the zBlur node but that
 doesn't seem to be the case.  I can get the focus plane to be at the same
 point if I use that Copy node, set the maya zBlur math to be depth , and
 set the focus plane to be 10. This makes sense. Then for the nuke scene
 (geo, scanline render, etc)  I set the zBlur math to be far=0 and set the
 focus plane to 0.1 for some reason and the focus planes line up. But I need
 to adjust the focus plane values in opposite directions.

 it's probably something obvious, but shouldn't these line up? And why
 wouldn't the Nuke scanline use 10 as its value for the focus plane?

 Seems like I've gone through all the math options trying (and failing) to
 find something consistent. Any help here?


 . . . . . . . . . . . .
 Gary Jaeger // Core Studio
 86 Graham Street, Suite 120
 San Francisco, CA 94129
 415 543 8140
 http://corestudio.com

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] reference camera data with expressions

2011-08-11 Thread Ivan Busquets
Hi Paul,

If you can rely on your Camera being always at the top of the tree
that goes into your node, you could get there using tcl's topnode

[topnode input0].focal

But you may hit the case where you have something connected above your
camera too (like an Axis), in which case topnode won't work either.

If you're into python, I have this snippet to crawl up the inputs of a
certain node and find the first node that matches a certain class. You
could use that within an expression in your node to get the first
camera node upstream.

##
def findUpstream(node, nodeClass=[]):
    """Convenience function to find the first node upstream that
    matches a list of node Classes."""
    if node and node.Class() in nodeClass:
        return node
    else:
        for n in node.dependencies(nuke.INPUTS | nuke.HIDDEN_INPUTS):
            node = findUpstream(n, nodeClass)
            if node: return node
##
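Outside of Nuke, the same depth-first walk can be sketched with a minimal stand-in node class (purely illustrative, not the Nuke API; HIDDEN_INPUTS handling is omitted for brevity):

```python
class FakeNode:
    """Minimal stand-in for a Nuke node: a class name plus input nodes."""
    def __init__(self, klass, inputs=()):
        self.klass = klass
        self.inputs = list(inputs)

    def Class(self):
        return self.klass

    def dependencies(self):
        return self.inputs

def find_upstream(node, node_classes):
    """Return the first node upstream (depth-first) whose Class() is
    in node_classes, or None if nothing matches."""
    if node and node.Class() in node_classes:
        return node
    for n in node.dependencies():
        found = find_upstream(n, node_classes)
        if found:
            return found
    return None
```

So for a chain like NoOp -> Axis -> Camera, find_upstream(noop, ['Camera']) walks past the Axis and returns the Camera, which is exactly the case where [topnode] falls over.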

Cheers,
Ivan

On Wed, Aug 10, 2011 at 8:24 PM, Paul Raeburn
praeburn.li...@googlemail.com wrote:

 We have been using references to the input node and then values to get
 values from a camera node, i.e. NoOp1.input.focal
 This works fine but breaks if there is anything in between the NoOp and the
 camera, so you can't tidy up the script with Dots, or you end up with multiple
 cameras just to be tidy.
 Is there a way to get camera stream directly, or even the name of the camera
 node from the stream?
 thanks
 Paul Raeburn
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



Re: [Nuke-users] reference camera data with expressions

2011-08-11 Thread Ivan Busquets
Hi Michael,

You can use tcl expressions in the value field of a ModifyMetadata node.

ex:
[value Camera1.focal]

On Thu, Aug 11, 2011 at 2:40 PM, Michael Garrett michaeld...@gmail.com wrote:
 Do you mean expression link to the modifymetadata value field?  How
 exactly do you do this?

 On 11 August 2011 14:36, Frank Rueter fr...@beingfrank.info wrote:

 You can expression link a camera's parameter to generically create
 metadata. But of course this creates the same problem. It would be good to have
 multiple inputs for the ModifyMetadata node to allow any type of input
 to do this sort of thing (I guess input 0 would be the type being
 passed through, like a Copy node)


 On Aug 11, 2011, at 1:20 PM, Paul Raeburn praeburn.li...@googlemail.com
 wrote:

 I wondered about that, but metadata nodes don't work in camera pipes, so I
 ran into a dead end.  Getting the name of the camera node would be
 great (as per Ivan's recommendation).  Ideally it would be great to get the
 data from the camera pipe directly, so it can be modified by downstream Axis
 nodes etc., but I assume that's a Foundry question.
 Any suggestions on how to access the metadata of the camera pipe?

 On 12 August 2011 05:51, Frank Rueter fr...@beingfrank.info wrote:

 Try [topnode] which will give you the top most node of a stream.
 Obviously it won't work if the camera's parent pipe is connected. In that
 case you will need a script that walks upstream and returns the first camera
 node, then use that in your expression. Or use metadata to make the camera
 info flow in the stream, which is more elegant and probably faster.



 On Aug 10, 2011, at 8:24 PM, Paul Raeburn praeburn.li...@googlemail.com
 wrote:

 
  We have been using references tot he input node and then values to get
  values from a camera node.  ie. NoOp1.input.focal
 
  This works fine but breaks if there is anything in between the NoOp and
  the camera, so you can tidy up the script with Dots or end up with 
  multiple
  cameras just to be tidy.
 
  Is there a way to get camera stream directly, or even the name of the
  camera node from the stream?
 
  thanks
 
  Paul Raeburn
  ___
  Nuke-users mailing list
  Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
  http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



Re: [Nuke-users] reference camera data with expressions

2011-08-11 Thread Ivan Busquets
But by having a ModifyMetadata node inside, you're re-casting the
output of that group to be an Image-Op, and therefore you wouldn't be
able to connect its output to anything that requires a camera input?

Using metadata is a nice workaround, but you still need to know what
camera you're pulling from beforehand (or do a topnode or a recursive
loop on the inputs).

I wish there was a more straightforward way to do this for gizmos,
though. (like it is in plugin-land, where you can just test for a
certain input Class).


On Thu, Aug 11, 2011 at 6:50 PM, Erik Winquist quis...@wetafx.co.nz wrote:

 pretty slick workaround, michael.

 nice one.

 erik


 Michael Garrett wrote:

 I  got something working that seems to enable metadata in the camera pipe.
 So you could add this right after the camera, where you don't need a Dot to
 keep things clean, then access camera values via metadata.  Not ideal, but
 it could be helpful:

 set cut_paste_input [stack 0]
 version 6.2 v1
 push $cut_paste_input
 Group {
  name AddCamMetaData
  selected true
  xpos -152
  ypos -129
  addUserKnob {20 User}
  addUserKnob {41 shownmetadata l  -STARTLINE T
 ViewMetaData2.shownmetadata}
 }
  Input {
   inputs 0
   name Input1
   xpos 61
   ypos -238
  }
  ModifyMetaData {
   metadata {
    {set focal \[value parent.input0.focal]}
   }
   name ModifyMetaData1
   selected true
   xpos 61
   ypos -176
  }
  ViewMetaData {
   name ViewMetaData2
   xpos 61
   ypos -150
  }
  Output {
   name Output1
   xpos 61
   ypos -40
  }
 end_group
 Blur {
  size {{\[metadata focal]}}
  name Blur3
  selected true
  xpos -152
  ypos -103
 }




 On 11 August 2011 14:59, Michael Garrett michaeld...@gmail.com wrote:

 Hi Ivan, great, thanks!  I was just trying Camera1.focal



 On 11 August 2011 14:47, Ivan Busquets ivanbusqu...@gmail.com wrote:

 Hi Michael,

 You can use tcl expressions in the value field of a ModifyMetadata
 node.

 ex:
 [value Camera1.focal]

 On Thu, Aug 11, 2011 at 2:40 PM, Michael Garrett michaeld...@gmail.com
 wrote:
  Do you mean expression link to the modifymetadata value field?  How
  exactly do you do this?
 
  On 11 August 2011 14:36, Frank Rueter fr...@beingfrank.info wrote:
 
  You can expression link a camera's parameter to generically create
  meta
  data. But of course this creates the same problem. It would be good to
  have
  multiple inputs for the ModifyMetadata node that allows any type of
  input
  type to do this sort of thing (I guess input 0 would be the type being
  passes through like a copy node)
 
 
  On Aug 11, 2011, at 1:20 PM, Paul Raeburn
  praeburn.li...@googlemail.com
  wrote:
 
  I wondered about that, but metadata node dont work in camera pipes, so
  I
  ran into a dead end.  Getting the name of the camera node node world
  be
  great (as per Ivans recommendation).  Ideally it would be great to et
  the
  data from the camera pipe directly, so it can be modified by
  downstream axis
  etc, but I assume that a foundry question.
  Any suggestions on how to access the metadata of the camera pipe?
 
  On 12 August 2011 05:51, Frank Rueter fr...@beingfrank.info wrote:
 
  Try [topnode] which will give you the top most node of a stream.
  Obviously it won't work if the camera's parent pipe is connected. In
  that
  case you will need a script that walks upstream and returns the first
  camera
  node, then use that in your expression. Or use meta Data to make the
  camera
  info flow in the stream which is more elegant and probably faster.
 
 
 
  On Aug 10, 2011, at 8:24 PM, Paul Raeburn
  praeburn.li...@googlemail.com
  wrote:
 
  
   We have been using references tot he input node and then values to
   get
   values from a camera node.  ie. NoOp1.input.focal
  
   This works fine but breaks if there is anything in between the NoOp
   and
   the camera, so you can tidy up the script with Dots or end up with
   multiple
   cameras just to be tidy.
  
   Is there a way to get camera stream directly, or even the name of
   the
   camera node from the stream?
  
   thanks
  
   Paul Raeburn
   ___
   Nuke-users mailing list
   Nuke-users@support.thefoundry.co.uk,
   http://forums.thefoundry.co.uk/
   http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
  ___
  Nuke-users mailing list
  Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
  http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
 
  ___
  Nuke-users mailing list
  Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
  http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
 
  ___
  Nuke-users mailing list
  Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
  http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Shake's lnoise3d equivalent in Nuke?

2011-08-03 Thread Ivan Busquets
Hi David,

You could probably use lerp to interpolate between values at certain
points of your curve.

Here's a quick and dirty example, with an overly long expression, but
you get the point:

set cut_paste_input [stack 0]
version 6.2 v3
push $cut_paste_input
NoOp {
 name NoOp1
 selected true
 xpos 37
 ypos -81
 addUserKnob {20 User}
 addUserKnob {7 freq}
 freq 0.08
 addUserKnob {7 lnoise}
 lnoise {{frame%(1/freq)?lerp( frame-(frame%(1/freq)),
random((frame-(frame%(1/freq)))*freq), frame-(frame%(1/freq)) +
(1/freq), random((frame-(frame%(1/freq)) + (1/freq))*freq),
frame):random(frame*freq) i}}
}



You could use the same approach to sample an existing (non-linear) curve too:

set cut_paste_input [stack 0]
version 6.2 v3
push $cut_paste_input
NoOp {
 name NoOp2
 selected true
 xpos 176
 ypos -88
 addUserKnob {20 User}
 addUserKnob {7 randomCurve}
 randomCurve {{random(frame) i}}
 addUserKnob {7 linearFreq l sample every N frames}
 linearFreq 5
 addUserKnob {7 lnoise}
 lnoise {{frame%(linearFreq)?lerp( frame-(frame%linearFreq),
randomCurve(frame-(frame%linearFreq)), frame-(frame%linearFreq) +
linearFreq, randomCurve(frame-(frame%linearFreq) + linearFreq),
frame):randomCurve(frame)}}
}



Hope that helps.

Cheers,
Ivan


On Tue, Aug 2, 2011 at 3:45 PM, David Schnee dav...@tippett.com wrote:
 Hi Farhad,

 Simply looking for a way to interpolate an expression driven curve to
 linear, or use a linear expression driven curve.  In the attached image x =
 random(t) and y = is a key framed curve with linear interpolation.  Shake's
 l3dnoise(time) would generate a noise curve with linear interpolation, this
 is what I'm after in Nuke.  Ideally it would be something like lnoise(t),
 otherwise a way to bracket the expression to interpolate it as linear after
 you've input the expression in using say 'random' or 'noise'.

 Cheers,
 -Schnee

 On 08/02/2011 03:09 PM, Farhad Mohasseb wrote:

 I think people are just confused about what lnoise3D is supposed to do.
 Maybe if you explain what you're trying to achieve or what this node does,
 people could help you more.

 On Aug 2, 2011 9:26 AM, David Schnee dav...@tippett.com wrote:
 Thanks Nathan, this is cool.

 So nobody knows how to derive a linear noise curve in Nuke? Inconceivable!

 On 07/27/2011 03:49 PM, Nathan Hackett wrote:
 Brian Torres wrote a cool gizmo called BATCurve that might help
 http://www.vfxectropy.com/resources.php
 Has controls for random curves that most things call for...
 n


 Ivan Busquets wrote:
 I think he wants a linear noise generator, like Shake's lnoise.

 Don't know if there would be a way to get that using a noise curve
 plus its derivative, but otherwise you could sample your curve and get
 rid of any unwanted keyframes.
 Not great, though. :(


 On Wed, Jul 27, 2011 at 2:35 PM, Anthony Kramer
 anthony.kra...@gmail.com wrote:
 random(frame)
 you can make the noise faster or slower by adding a simple multiplier
 faster noise:
 random(frame*10)
 slower noise:
 random(frame*0.5)


 On Wed, Jul 27, 2011 at 2:28 PM, David Schnee dav...@tippett.com
 wrote:
 Or possibly any method to produce random/noise driven linear curves?

 On 07/26/2011 06:40 PM, David Schnee wrote:

 Does anyone know how to achieve the equivalent of Shake's linear noise
 expression for Nuke?

 looking for:

 lnoise3d(time)

 Cheers,
 -Schnee

 --

 \/ davids / comp \/ 177
 /\ tippettstudio /\ b d


 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

 --

 \/ davids / comp \/ 177
 /\ tippettstudio /\ b d

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


 --

 \/ davids / comp \/ 177
 /\ tippettstudio /\ b d


 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

 --

 \/ davids / comp \/ 177
 /\ tippettstudio /\ b d

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http

Re: [Nuke-users] Shake's lnoise3d equivalent in Nuke?

2011-07-27 Thread Ivan Busquets
I think he wants a linear noise generator, like Shake's lnoise.

Don't know if there would be a way to get that using a noise curve
plus its derivative, but otherwise you could sample your curve and get
rid of any unwanted keyframes.
Not great, though. :(


On Wed, Jul 27, 2011 at 2:35 PM, Anthony Kramer
anthony.kra...@gmail.com wrote:
 random(frame)
 you can make the noise faster or slower by adding a simple multiplier
 faster noise:
 random(frame*10)
 slower noise:
 random(frame*0.5)


 On Wed, Jul 27, 2011 at 2:28 PM, David Schnee dav...@tippett.com wrote:

 Or possibly any method to produce random/noise driven linear curves?

 On 07/26/2011 06:40 PM, David Schnee wrote:

 Does anyone know how to achieve the equivalent of Shake's linear noise
 expression for Nuke?

 looking for:

 lnoise3d(time)

 Cheers,
 -Schnee

 --

 \/ davids / comp \/ 177
 /\ tippettstudio /\ b d


 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

 --

 \/ davids / comp \/ 177
 /\ tippettstudio /\ b d

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



Re: [Nuke-users] Odd gamma behaviour

2011-07-20 Thread Ivan Busquets
Hi Ron,

I think you're hitting a float precision limit there, or rather a
limit introduced by the way most (all?) computer graphics software do
filtering operations by sampling a square area. But I'll put a big
question mark on the second assumption since I don't know the
specifics.

If you take a classic gauss curve and apply the same gamma operation
to it, you'll see the low end starting to flatten towards 1. Now, if
you look again after your blur operations and sample in the viewer,
you'll see that the x coordinate at which the pixels turn to full
black (0) is the same on all scanlines. The value right before that is
different on each scanline, so the radial gradient is still there, but
the point where each line reaches 0 is the same. If you push that
enough, though, you're flattening all the low end values towards one,
effectively losing the small differences between them.

The only way I can think of to get around that within your gizmo is to
set a threshold to push some of the very low end values to 0. You
could use a clamp node with the Clamp To turned on and set to 0, and
then set your threshold using the minimum knob.

Quick example:

set cut_paste_input [stack 0]
version 6.2 v4
push $cut_paste_input
Sparkles {
 size 30
 motion 200
 direction 60
 fadeTolerance 152
 broken_affected 6.6
 broken_start 0.3
 broken_holes 0.186
 sparks_angle 77
 name Sparkles2
 selected true
 xpos -893
 ypos -348
}
Crop {
 box {0 0 2048 1556}
 name Crop2
 selected true
 xpos -893
 ypos -299
}
Group {
 name ExpoBlur1
 selected true
 xpos -893
 ypos -224
 addUserKnob {20 ExpoBlur}
 addUserKnob {14 size R 0 5}
 size 2
 addUserKnob {41 strength T Grade160.white}
 addUserKnob {41 curve T Grade160.gamma}
 addUserKnob {41 black_clamp l black clamp T Grade160.black_clamp}
 addUserKnob {41 white_clamp l white clamp -STARTLINE T Grade160.white_clamp}
 addUserKnob {41 crop l crop to format T Blur11.crop}
 addUserKnob {41 minimum T Clamp1.minimum}
}
 Input {
  inputs 0
  name Input1
  xpos 389
  ypos -32
 }
 Dot {
  name Dot328
  xpos 423
  ypos -4
 }
set N4ba6c50 [stack 0]
add_layer {rgba rgba.beta}
 Blur {
  size {{parent.size**7} {parent.size**7}}
  crop {{parent.Blur11.crop}}
  name Blur17
  xpos 714
  ypos 100
 }
push $N4ba6c50
 Blur {
  size {{parent.size**6} {parent.size**6}}
  crop {{parent.Blur11.crop}}
  name Blur16
  xpos 589
  ypos 96
 }
push $N4ba6c50
 Blur {
  size {{parent.size**5} {parent.size**5}}
  crop {{parent.Blur11.crop}}
  name Blur15
  xpos 499
  ypos 104
 }
push $N4ba6c50
 Blur {
  size {{parent.size**4} {parent.size**4}}
  crop {{parent.Blur11.crop}}
  name Blur14
  xpos 389
  ypos 106
 }
push $N4ba6c50
 Blur {
  size {{parent.size**3} {parent.size**3}}
  crop {{parent.Blur11.crop}}
  name Blur13
  xpos 287
  ypos 108
 }
push 0
push $N4ba6c50
 Blur {
  size {{parent.size*2} {parent.size*2}}
  crop {{parent.Blur11.crop}}
  name Blur12
  xpos 174
  ypos 108
 }
push $N4ba6c50
 Blur {
  size {{parent.size} {parent.size}}
  name Blur11
  xpos 53
  ypos 115
 }
 Merge2 {
  inputs 7+1
  operation plus
  name Merge243
  xpos 389
  ypos 233
 }
 Clamp {
  minimum 4e-05
  maximum_enable false
  MinClampTo_enable true
  name Clamp1
  xpos 389
  ypos 278
 }
set N74959c0 [stack 0]
 Grade {
  channels rgba
  white 1.02
  gamma 11
  white_clamp true
  name Grade160
  xpos 389
  ypos 358
 }
 Output {
  name Output1
  xpos 389
  ypos 438
 }
push $N74959c0
 Viewer {
  input_process false
  name Viewer1
  xpos 570
  ypos 426
 }
end_group



On Wed, Jul 20, 2011 at 10:27 AM, Ron Ganbar ron...@gmail.com wrote:
 Is the gamma (curve) all the way up on 5 and size on 2 and you're not
 getting a square?


 Ron Ganbar
 email: ron...@gmail.com
 tel: +44 (0)7968 007 309 [UK]
      +972 (0)54 255 9765 [Israel]
 url: http://ronganbar.wordpress.com/


 On 20 July 2011 18:19, Randy Little randyslit...@gmail.com wrote:

 did you try hooking and unhooking or leaving hooked a a black constant.
 Its 2 am I don't know what buttons I pushed to make it work.   SORRY.  BUT
 Randy S. Little
 http://www.rslittle.com




 On Thu, Jul 21, 2011 at 01:16, Ron Ganbar ron...@gmail.com wrote:

 Hi Randy,
 Still wrong on my end... :-(
 Played with Blur11 - didn't seem to make any difference.
 Thanks for trying.

 Ron Ganbar
 email: ron...@gmail.com
 tel: +44 (0)7968 007 309 [UK]
      +972 (0)54 255 9765 [Israel]
 url: http://ronganbar.wordpress.com/


 On 20 July 2011 18:12, Randy Little randyslit...@gmail.com wrote:

 Merry Xmas, it's fixed.  The weird thing is that the fix was to turn off
 crop to format in Blur11, then turn it back on. So I don't know;
 strange. Maybe it was holding onto the format from pre-crop somehow, for
 whatever reason.
 set cut_paste_input [stack 0]
 version 6.2 v3
 push $cut_paste_input
 Sparkles {
  size 30
  motion 200
  direction 60
  fadeTolerance 152
  broken_affected 6.6
  broken_start 0.3
  broken_holes 0.186
  sparks_angle 77
  name Sparkles1
  selected true
  xpos -111
  ypos -205
 }
 Crop {
  box {0 0 2048 

Re: [Nuke-users] Odd gamma behaviour

2011-07-20 Thread Ivan Busquets
Actually, I think the reason it looks correct in 8 and 16 bit is
because values hit 0 before they reach the edge of the square sampled
area.
Which would be the same as clamping some of the very low end in a float image.

The problem is, in the case of a radial gradient like this one, that
the pixels at the corners are obviously further away from the center
than say a pixel in the middle of the right edge. And indeed they get
a smaller value because of it. BUT, the sampling region is square, so
the X pixel coordinate at which they reach 0 is the same. If you were
to use an Expression node instead of a gamma to show all pixels that
are not 0, you would see exactly the same problem.


On Wed, Jul 20, 2011 at 10:58 AM, chris ze.m...@gmx.net wrote:

 yeah, strange.. the effect happens actually on any simple
 blur... looks like the gaussian blur is not really spreading
 uniformly, but more boxy. boosting the quality level
 doesn't help either. ironically, if you switch the filter to
 box it gets more like a circle.

 also, if i replicate the same setup in shake, it looks like
 a circle in 8 and 16bit, but gets wonky in float. so maybe a
 gaussian blur is tricky to do in float?

 examples below
 ++ c.


 nuke:
 *

 set cut_paste_input [stack 0]
 add_layer {rgba rgba.beta}
 ColorWheel {
  inputs 0
  gamma 0.45
  name ColorWheel1
  selected true
  xpos -52
  ypos -353
 }
 Shuffle {
  red alpha
  green alpha
  blue alpha
  name Shuffle1
  selected true
  xpos -52
  ypos -268
 }
 Transform {
  scale 0.5
  center {960 540}
  name Transform1
  selected true
  xpos -52
  ypos -244
 }
 set N1e524f50 [stack 0]
 Blur {
  channels rgb
  size 50
  filter box
  quality 80
  name Blur1
  label size: \[value size]
  selected true
  xpos -52
  ypos -203
 }
 Gamma {
  channels rgb
  value 8
  name Gamma1
  selected true
  xpos -52
  ypos -152
 }
 push $N1e524f50
 Blur {
  channels rgb
  size 50
  quality 80
  name Blur2
  label size: \[value size]
  selected true
  xpos 65
  ypos -202
 }
 Gamma {
  channels rgb
  value 8
  name Gamma2
  selected true
  xpos 65
  ypos -148
 }



 shake:
 **


 ColorWheel1 = ColorWheel(1920, 1080, 4, 0, 1, 1, 1);
 Bytes1 = Bytes(ColorWheel1, 2);
 Reorder1 = Reorder(Bytes1, );
 Move2D1 = Move2D(Reorder1, 0, 0, 0, 1, 0.5, xScale, 0, 0, width/2,
    height/2, default, xFilter, trsx, 0, 0, 0.5, 0, 0, time);
 Blur1 = Blur(Move2D1, 100, xPixels/GetDefaultAspect(), 0, gauss,
    xFilter, rgba);
 Gamma1 = Gamma(Blur1, 8, rGamma, rGamma, 1);







 On 7/20/11 at 7:16 PM, ron...@gmail.com (Ron Ganbar) wrote:

 Hi Randy,
 Still wrong on my end... :-(
 Played with Blur11 - didn't seem to make any difference.
 Thanks for trying.

 Ron Ganbar
 email: ron...@gmail.com
 tel: +44 (0)7968 007 309 [UK]
 +972 (0)54 255 9765 [Israel]
 url: http://ronganbar.wordpress.com/



 On 20 July 2011 18:12, Randy Little randyslit...@gmail.com wrote:

  Merry Xmas, it's fixed.  The weird thing is that the fix was to turn off
  crop to format in Blur11, then turn it back on. So I don't know;
  strange. Maybe it was holding onto the format from pre-crop somehow, for
  whatever reason.

 set cut_paste_input [stack 0]
 version 6.2 v3
 push $cut_paste_input
 Sparkles {
 size 30
 motion 200
 direction 60
 fadeTolerance 152
 broken_affected 6.6
 broken_start 0.3
 broken_holes 0.186
 sparks_angle 77
 name Sparkles1
 selected true
 xpos -111
 ypos -205
 }
 Crop {
 box {0 0 2048 1556}
 name Crop4
 selected true
 xpos -106
 ypos -157
 }
 Group {
 name ExpoBlur2
 selected true
 xpos -106
 ypos -103
 addUserKnob {20 ExpoBlur}
 addUserKnob {14 size R 0 5}
 size 0.2
 addUserKnob {41 strength T Grade160.white}
 addUserKnob {41 curve T Grade160.gamma}
 addUserKnob {41 black_clamp l black clamp T

 Grade160.black_clamp}

 addUserKnob {41 white_clamp l white clamp -STARTLINE T
 Grade160.white_clamp}
 addUserKnob {41 crop l crop to format T Blur11.crop}
 }
 Input {
 inputs 0
 name Input1
 xpos 389
 ypos -32
 }
 Dot {
 name Dot328
 xpos 423
 ypos -4
 }
 set N1c441b60 [stack 0]
 add_layer {rgba rgba.beta}
 Blur {
 size {{parent.size**7} {parent.size**7}}
 crop {{parent.Blur11.crop}}
 name Blur17
 xpos 714
 ypos 100
 }
 push $N1c441b60
 Blur {
 size {{parent.size**6} {parent.size**6}}
 crop {{parent.Blur11.crop}}
 name Blur16
 xpos 589
 ypos 96
 }
 push $N1c441b60
 Blur {
 size {{parent.size**5} {parent.size**5}}
 crop {{parent.Blur11.crop}}
 name Blur15
 xpos 499
 ypos 104
 }
 push $N1c441b60
 Blur {
 size {{parent.size**4} {parent.size**4}}
 crop {{parent.Blur11.crop}}
 name Blur14
 xpos 389
 ypos 106
 }
 push $N1c441b60
 Blur {
 size {{parent.size**3} {parent.size**3}}
 crop {{parent.Blur11.crop}}
 name Blur13
 xpos 287
 ypos 108
 }
 push 0
 push $N1c441b60
 Blur {
 size {{parent.size*2} {parent.size*2}}
 crop {{parent.Blur11.crop}}
 name Blur12
 xpos 174
 ypos 108
 }
 push $N1c441b60
 Blur {
 size {{parent.size} {parent.size}}
 name Blur11
 xpos 53
 ypos 115
 }
 Merge2 {
 inputs 7+1
 operation plus
 name Merge243
 

Re: [Nuke-users] Changing tile_color with user knobs?

2011-07-11 Thread Ivan Busquets
Hi Ron,

You can bake knobChanged callbacks (or other callbacks like onCreate) into a
node by filling its hidden knobChanged knob with your desired code as a
string.
For example:

n = nuke.createNode('Blur')

n['knobChanged'].setValue('if nuke.thisKnob().name() == "size": print "size knob changed"')


If you need your callback code to have multiple lines, just triple-quote the
string:

n['knobChanged'].setValue('''
# do
# smart
# stuff
# here
''')

Cheers,
Ivan
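Since the packed tile_color value in the quoted script below is hard to read inline, here is the same packing pulled out into a standalone helper. This is a sketch for clarity, not part of the original script; the fixed 0x01 alpha byte mirrors what the quoted NoOp uses.

```python
def rgb_to_tile_color(r, g, b):
    """Pack normalized RGB floats into Nuke's 32-bit tile_color integer.

    tile_color packs its bytes as 0xRRGGBBAA; the script in this thread
    uses a fixed alpha byte of 0x01, reproduced here.
    """
    return (int(r * 255) << 24) | (int(g * 255) << 16) | (int(b * 255) << 8) | 0x01

print(hex(rgb_to_tile_color(1, 0, 0)))  # 0xff000001 (red checkbox only)
print(hex(rgb_to_tile_color(1, 1, 0)))  # 0xffff0001 (red + green -> yellow)
```

Inside a knobChanged callback you would assign the result with node['tile_color'].setValue(...).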

On Mon, Jul 11, 2011 at 9:26 AM, Ron Ganbar ron...@gmail.com wrote:

 Hi Ivan,
 I had to look at the actual .nk file in a text editor to see what you made
 there. How can I see and change the python script in the gui itself? How did
 you add it to the node?

 Thanks,
 Ron Ganbar
 email: ron...@gmail.com
 tel: +44 (0)7968 007 309 [UK]
  +972 (0)54 255 9765 [Israel]
 url: http://ronganbar.wordpress.com/



 On 11 July 2011 17:06, David Schnee dav...@tippett.com wrote:

 **
 Thank you Ivan, this is great!

 Cheers,
 -Schnee


 On 07/08/2011 11:49 PM, Ivan Busquets wrote:

 Ok, something's going definitely wrong when I copy-paste this. Script
 attached instead.

 Sorry about that.


 On Fri, Jul 8, 2011 at 11:40 PM, Ivan Busquets ivanbusqu...@gmail.com 
 ivanbusqu...@gmail.com wrote:


  Hmm, something went funny with the formatting after copy/pasting. Here
 it is again.


 set cut_paste_input [stack 0]
 version 6.2 v3
 push $cut_paste_input
 NoOp {
  name NoOp1
  knobChanged \nn = nuke.thisNode()\nk = nuke.thisKnob()\nif k.name()
 in \['red', 'green', 'blue']:\n
 nuke.thisNode()\['tile_color'].setValue(int('%02x%02x%02x%02x' %
 (n\['red'].value()*255,n\['green'].value()*255,n\['blue'].value()*255,1),16))\n\n\n
  tile_color 0xff0001
  selected true
  xpos -299
  ypos -47
  addUserKnob {20 User}
  addUserKnob {6 red +STARTLINE}
  addUserKnob {6 green +STARTLINE}
  green true
  addUserKnob {6 blue +STARTLINE}
 }



 On Fri, Jul 8, 2011 at 11:37 PM, Ivan Busquets ivanbusqu...@gmail.com 
 ivanbusqu...@gmail.com wrote:


  That's a fun idea :)
 Yes you can, using a knobChanged callback that fires when any of your red, 
 green or blue knobs are changed.
 The trickiest bit is probably to set the right value for the tile_color knob 
 based on your rgb values, since tile_color uses values packed in a rather 
 awkward way.
 But here, have a look and see if that does what you want.

 set cut_paste_input [stack 0] version 6.2 v3 push $cut_paste_input NoOp { 
 name NoOp1 knobChanged \nn = nuke.thisNode()\nk = nuke.thisKnob()\nif 
 k.name() in \['red', 'green', 'blue']:\n 
 nuke.thisNode()\['tile_color'].setValue(int('%02x%02x%02x%02x' % 
 (n\['red'].value()*255,n\['green'].value()*255,n\['blue'].value()*255,1),16))\n\n\n
  tile_color 0xff0001 selected true xpos -299 ypos -17 addUserKnob {20 User} 
 addUserKnob {6 red +STARTLINE} addUserKnob {6 green +STARTLINE} green true 
 addUserKnob {6 blue +STARTLINE} }

 Cheers,
 Ivan
 On Fri, Jul 8, 2011 at 10:46 AM, David Schnee dav...@tippett.com 
 dav...@tippett.com wrote:


  Does anyone know if there is a way to dynamically link/change tile_color 
 with user knobs?  Say I have a gizmo with a check box for red,green, and 
 blue.  If only the red is checked, I want the tile_color to be red, if red 
 and green are checked, yellow, just blue, blue, and so on.  Is this possible?

 Cheers,
 -Schnee

 --

 \/ davids / comp \/ 177
 /\ tippettstudio /\ b d

 ___
 Nuke-users mailing listnuke-us...@support.thefoundry.co.uk, 
 http://forums.thefoundry.co.uk/http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


 ___
 Nuke-users mailing listnuke-us...@support.thefoundry.co.uk, 
 http://forums.thefoundry.co.uk/http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 --

 \/ davids / comp \/ 177
 /\ tippettstudio /\ b d


 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Changing tile_color with user knobs?

2011-07-09 Thread Ivan Busquets
Hmm, something went funny with the formatting after copy/pasting. Here
it is again.


set cut_paste_input [stack 0]
version 6.2 v3
push $cut_paste_input
NoOp {
 name NoOp1
 knobChanged \nn = nuke.thisNode()\nk = nuke.thisKnob()\nif k.name()
in \['red', 'green', 'blue']:\n
nuke.thisNode()\['tile_color'].setValue(int('%02x%02x%02x%02x' %
(n\['red'].value()*255,n\['green'].value()*255,n\['blue'].value()*255,1),16))\n\n\n
 tile_color 0xff0001
 selected true
 xpos -299
 ypos -47
 addUserKnob {20 User}
 addUserKnob {6 red +STARTLINE}
 addUserKnob {6 green +STARTLINE}
 green true
 addUserKnob {6 blue +STARTLINE}
}



On Fri, Jul 8, 2011 at 11:37 PM, Ivan Busquets ivanbusqu...@gmail.com wrote:

 That's a fun idea :)
 Yes you can, using a knobChanged callback that fires when any of your red, 
 green or blue knobs are changed.
 The trickiest bit is probably to set the right value for the tile_color knob 
 based on your rgb values, since tile_color uses values packed in a rather 
 awkward way.
 But here, have a look and see if that does what you want.

 set cut_paste_input [stack 0] version 6.2 v3 push $cut_paste_input NoOp { 
 name NoOp1 knobChanged \nn = nuke.thisNode()\nk = nuke.thisKnob()\nif 
 k.name() in \['red', 'green', 'blue']:\n 
 nuke.thisNode()\['tile_color'].setValue(int('%02x%02x%02x%02x' % 
 (n\['red'].value()*255,n\['green'].value()*255,n\['blue'].value()*255,1),16))\n\n\n
  tile_color 0xff0001 selected true xpos -299 ypos -17 addUserKnob {20 User} 
 addUserKnob {6 red +STARTLINE} addUserKnob {6 green +STARTLINE} green true 
 addUserKnob {6 blue +STARTLINE} }

 Cheers,
 Ivan
 On Fri, Jul 8, 2011 at 10:46 AM, David Schnee dav...@tippett.com wrote:

 Does anyone know if there is a way to dynamically link/change tile_color 
 with user knobs?  Say I have a gizmo with a check box for red,green, and 
 blue.  If only the red is checked, I want the tile_color to be red, if red 
 and green are checked, yellow, just blue, blue, and so on.  Is this possible?

 Cheers,
 -Schnee

 --

 \/ davids / comp \/ 177
 /\ tippettstudio /\ b d

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] Changing tile_color with user knobs?

2011-07-09 Thread Ivan Busquets
Ok, something's going definitely wrong when I copy-paste this. Script
attached instead.

Sorry about that.


On Fri, Jul 8, 2011 at 11:40 PM, Ivan Busquets ivanbusqu...@gmail.com wrote:
 Hmm, something went funny with the formatting after copy/pasting. Here
 it is again.


 set cut_paste_input [stack 0]
 version 6.2 v3
 push $cut_paste_input
 NoOp {
  name NoOp1
  knobChanged \nn = nuke.thisNode()\nk = nuke.thisKnob()\nif k.name()
 in \['red', 'green', 'blue']:\n
 nuke.thisNode()\['tile_color'].setValue(int('%02x%02x%02x%02x' %
 (n\['red'].value()*255,n\['green'].value()*255,n\['blue'].value()*255,1),16))\n\n\n
  tile_color 0xff0001
  selected true
  xpos -299
  ypos -47
  addUserKnob {20 User}
  addUserKnob {6 red +STARTLINE}
  addUserKnob {6 green +STARTLINE}
  green true
  addUserKnob {6 blue +STARTLINE}
 }



 On Fri, Jul 8, 2011 at 11:37 PM, Ivan Busquets ivanbusqu...@gmail.com wrote:

 That's a fun idea :)
 Yes you can, using a knobChanged callback that fires when any of your red, 
 green or blue knobs are changed.
 The trickiest bit is probably to set the right value for the tile_color knob 
 based on your rgb values, since tile_color uses values packed in a rather 
 awkward way.
 But here, have a look and see if that does what you want.

 set cut_paste_input [stack 0] version 6.2 v3 push $cut_paste_input NoOp { 
 name NoOp1 knobChanged \nn = nuke.thisNode()\nk = nuke.thisKnob()\nif 
 k.name() in \['red', 'green', 'blue']:\n 
 nuke.thisNode()\['tile_color'].setValue(int('%02x%02x%02x%02x' % 
 (n\['red'].value()*255,n\['green'].value()*255,n\['blue'].value()*255,1),16))\n\n\n
  tile_color 0xff0001 selected true xpos -299 ypos -17 addUserKnob {20 User} 
 addUserKnob {6 red +STARTLINE} addUserKnob {6 green +STARTLINE} green true 
 addUserKnob {6 blue +STARTLINE} }

 Cheers,
 Ivan
 On Fri, Jul 8, 2011 at 10:46 AM, David Schnee dav...@tippett.com wrote:

 Does anyone know if there is a way to dynamically link/change tile_color 
 with user knobs?  Say I have a gizmo with a check box for red,green, and 
 blue.  If only the red is checked, I want the tile_color to be red, if red 
 and green are checked, yellow, just blue, blue, and so on.  Is this 
 possible?

 Cheers,
 -Schnee

 --

 \/ davids / comp \/ 177
 /\ tippettstudio /\ b d

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users




tile_color.nk
Description: Binary data
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: Re: [Nuke-users] Default Pixel aspect ratio for Read Nodes

2011-06-24 Thread Ivan Busquets
Yes, you're right Ean.

I mentioned the multiple formats case because the debate originated from
why would changing the order in formats.tcl make a difference.
But sure, if there's more than one format with the same res, it'll match the
first one; and if there's only one, it'll match that one, of course.

I suppose a better statement would be: yes, formats already defined in the
list of formats DO have an effect on the Read's pixel_aspect when
pixel_aspect can't be recovered from the metadata

Thanks for pointing it out, though.

Have a good weekend!
Ivan

2011/6/24 Ean Carr eanc...@gmail.com

 Hey Ivan,

 Doesn't have to be more than one format... just one will do. Try a test:

 1. Write out a file without pixel_aspect in the header, like SGI with the
 format: 20x20 5
 2. In a new Nuke instance, read that in. It's square; Nuke has no way of
 knowing it had a pixel_aspect of 5.0.
 3. In a new Nuke instance, create a format in the root: 20x20 5 (you don't
 need to set it as the root format, just define it).
 4. Read the same file back in and Nuke matches it to that format,
 pixel_aspect included.

 -E



 2011/6/23 Ivan Busquets ivanbusqu...@gmail.com

 Formats.tcl file has nothing to do with Reader's aspect ratio.


 It does when there's more than one format defined with the same
 resolution, and the file does not contain metadata of its pixel aspect.

 In that case, Nuke will set the format of the Read to the first format in
 the list that matches the file's resolution, with whatever pixel aspect is
 defined in that format.
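That fallback can be modelled in a few lines of plain Python. This is a toy sketch of the behaviour described above, not Nuke's actual implementation, and the format entries are illustrative:

```python
# Each entry: (name, width, height, pixel_aspect), in definition order.
FORMATS = [
    ("PAL",      720, 576, 1.09),
    ("PAL 16:9", 720, 576, 1.46),
]

def resolve_format(width, height, pixel_aspect=None):
    """Mimic the Read node: trust file metadata when present, otherwise
    fall back to the first defined format with a matching resolution."""
    if pixel_aspect is not None:
        return (None, width, height, pixel_aspect)
    for fmt in FORMATS:
        if fmt[1] == width and fmt[2] == height:
            return fmt  # first match wins, pixel aspect included
    return (None, width, height, 1.0)

print(resolve_format(720, 576))  # ('PAL', 720, 576, 1.09)
```

This is why reordering formats.tcl changes the result: with no pixel-aspect metadata in the file, whichever matching format is defined first supplies the pixel aspect.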


 On Thu, Jun 23, 2011 at 1:50 PM, Adrian Baltowski 
 adrian...@poczta.onet.pl wrote:

 Hey

 The formats.tcl file has nothing to do with the Reader's aspect ratio. It is
 only a list of format names in Nuke, created for users' convenience.
 The Reader sets up its format based on the resolution and aspect ratio of the
 file. Nuke then compares that format against formats.tcl, and if the format
 has a name, Nuke uses that name. That's all.
 If you remove the standard PAL (1.09) format from the list, Nuke will create
 a new unnamed format.

 I guess you have a problem with QuickTime files. The actual problem is inside
 the movReader in Nuke, which in some cases can't read the actual aspect ratio
 from a QuickTime file. It's worth noting that sometimes this information is
 not saved in .mov files.

 Best
 Adrian

 W dniu 2011-06-23 21:43:45 użytkownik Deke Kincaid 
 dekekinc...@gmail.com napisał:

 I suggest you don't edit the files in your application directory.  Copy
 it to your .nuke or wherever you NUKE_PATH is and edit it there.

 -deke

 On Thu, Jun 23, 2011 at 15:25, Ned Wilson ned.wil...@scanlinevfx.comwrote:


 As an alternative, edit formats.tcl.

 On windows machines, you will find this here:

 C:\Program Files\Nuke6.2v4\plugins\formats.tcl


 On Mac, here:

 /Applications/Nuke6.2v4/Nuke6.2v4.app/Contents/MacOS/plugins/formats.tcl
 ( This is from memory, I don't have a Mac in front of me )

 Inside this file you will notice that 16:9 video formats are listed
 after 4:3 video formats. Simply cut the 16:9 formats and paste them before
 the 4:3 ones. Then, Nuke will default to the 16:9 version for both NTSC and
 PAL.




 On 6/23/2011 8:42 AM, Ean Carr wrote:

 Hey Donat,

 Just edit the built-in PAL format to a different pixel aspect and Read
 nodes will match images that match the PAL resolution to that format.

 Edit  Project Settings...

 Change PAL from 1.09 to whatever you want.

 -E

 On Thu, Jun 23, 2011 at 2:35 PM, Donat Van Bellinghen 
 donat.l...@gmail.com wrote:


 Hi,

 Is there a way to change the default pixel aspect ratio for read nodes?
 We use PAL Anamorphic footage a lot and when I drop some footage in Nuke 
 the
 Read node is set to a standard PAL pixel aspect ratio.

 I hope there's a solution for this.

 Regards.

 Donat Van Bellinghen
 www.nozon.com
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing listnuke-us...@support.thefoundry.co.uk, 
 http://forums.thefoundry.co.uk/http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing

Re: [Nuke-users] linear to AlexaV3LogC math

2011-06-23 Thread Ivan Busquets
Based on that formula, the reverse should be:

x > 0.0106232 ? (log10((x + 0.00937677) / 0.18) * 0.2471896) + 0.385537 : (((x +
0.00937677) / 0.18) + 0.04378604) * 0.9661776

Where 0.0106232 comes from solving the second part of the equation using the
threshold value (0.1496582)

(0.1496582 / 0.9661776 - 0.04378604) * 0.18 - 0.00937677 = 0.0106232
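The pair of curves round-trips cleanly; here is a quick numeric check in plain Python using the constants from this thread (a sketch, not official ARRI code):

```python
import math

def logc_to_lin(x):
    """AlexaV3LogC -> linear, per the formula quoted in this thread."""
    if x > 0.1496582:
        t = 10.0 ** ((x - 0.385537) / 0.2471896)
    else:
        t = x / 0.9661776 - 0.04378604
    return t * 0.18 - 0.00937677

def lin_to_logc(y):
    """Linear -> AlexaV3LogC, the inverse derived above."""
    if y > 0.0106232:
        return math.log10((y + 0.00937677) / 0.18) * 0.2471896 + 0.385537
    return ((y + 0.00937677) / 0.18 + 0.04378604) * 0.9661776

# Round-trip a few code values to confirm the two branches line up
# at the 0.1496582 / 0.0106232 thresholds.
for x in (0.0, 0.1, 0.1496582, 0.5, 1.0):
    assert abs(lin_to_logc(logc_to_lin(x)) - x) < 1e-6
```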





On Wed, Jun 22, 2011 at 11:28 AM, Torax Unga tungau...@yahoo.com wrote:

 Keeping it going with the math questions.

 If this is the math for going from AlexaV3LogC to Linear:

 (x > 0.1496582 ? pow(10.0, (x - 0.385537) / 0.2471896) : x / 0.9661776 -
 0.04378604) * 0.18 - 0.00937677

 What's the math to reverse it? Linear to AlexaV3LogC.

 Gracias!



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: Re: [Nuke-users] Default Pixel aspect ratio for Read Nodes

2011-06-23 Thread Ivan Busquets

 Formats.tcl file has nothing to do with Reader's aspect ratio.


It does when there's more than one format defined with the same resolution,
and the file does not contain metadata of its pixel aspect.

In that case, Nuke will set the format of the Read to the first format in
the list that matches the file's resolution, with whatever pixel aspect is
defined in that format.

On Thu, Jun 23, 2011 at 1:50 PM, Adrian Baltowski
adrian...@poczta.onet.plwrote:

 Hey

 The formats.tcl file has nothing to do with the Reader's aspect ratio. It is
 only a list of format names in Nuke, created for users' convenience.
 The Reader sets up its format based on the resolution and aspect ratio of the
 file. Nuke then compares that format against formats.tcl, and if the format
 has a name, Nuke uses that name. That's all.
 If you remove the standard PAL (1.09) format from the list, Nuke will create
 a new unnamed format.

 I guess you have a problem with QuickTime files. The actual problem is inside
 the movReader in Nuke, which in some cases can't read the actual aspect ratio
 from a QuickTime file. It's worth noting that sometimes this information is
 not saved in .mov files.

 Best
 Adrian

 W dniu 2011-06-23 21:43:45 użytkownik Deke Kincaid dekekinc...@gmail.com
 napisał:

 I suggest you don't edit the files in your application directory.  Copy it
 to your .nuke or wherever you NUKE_PATH is and edit it there.

 -deke

 On Thu, Jun 23, 2011 at 15:25, Ned Wilson ned.wil...@scanlinevfx.comwrote:


 As an alternative, edit formats.tcl.

 On windows machines, you will find this here:

 C:\Program Files\Nuke6.2v4\plugins\formats.tcl


 On Mac, here:

 /Applications/Nuke6.2v4/Nuke6.2v4.app/Contents/MacOS/plugins/formats.tcl (
 This is from memory, I don't have a Mac in front of me )

 Inside this file you will notice that 16:9 video formats are listed after
 4:3 video formats. Simply cut the 16:9 formats and paste them before the 4:3
 ones. Then, Nuke will default to the 16:9 version for both NTSC and PAL.




 On 6/23/2011 8:42 AM, Ean Carr wrote:

 Hey Donat,

 Just edit the built-in PAL format to a different pixel aspect and Read
 nodes will match images that match the PAL resolution to that format.

 Edit  Project Settings...

 Change PAL from 1.09 to whatever you want.

 -E

 On Thu, Jun 23, 2011 at 2:35 PM, Donat Van Bellinghen 
 donat.l...@gmail.com wrote:


 Hi,

 Is there a way to change the default pixel aspect ratio for read nodes?
 We use PAL Anamorphic footage a lot and when I drop some footage in Nuke the
 Read node is set to a standard PAL pixel aspect ratio.

 I hope there's a solution for this.

 Regards.

 Donat Van Bellinghen
 www.nozon.com
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing listnuke-us...@support.thefoundry.co.uk, 
 http://forums.thefoundry.co.uk/http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Gizmo doesn't insert into pipe

2011-06-16 Thread Ivan Busquets
I suppose it's because Shift is already the modifier key for splitting off a
branch.

Try with alt or ctrl instead, or you could also try alt+shift+g, which I
guess would be fine.


On Thu, Jun 16, 2011 at 6:22 PM, Anthony Kramer anthony.kra...@gmail.comwrote:

 This is probably something really simple that I'm missing, but I've scoured
 this mailing list and can't seem to find the answer.

 I have a gizmo that I've put into a custom menu and assigned a hotkey. When
 I use the menu to insert the gizmo it inserts itself into the pipe as I'd
 expect, both input and output are connected. However, when I use the hotkey,
 the gizmo attaches the input to the selected node, but doesn't connect the
 output. Here's the line in my menu.py:

 m.addCommand("GradeAOV", "nuke.createNode('GradeAOV')", "shift+g")

 thoughts?

 -ak



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Line between two points like ramp

2011-05-16 Thread Ivan Busquets
Why don't you want to use the Ramp node? You could have it branched to the
side inside your gizmo (not affecting the image output) but still pick its
point 0 and point 1 knobs to get the same UI handles as the Ramp node.

Is that what you're looking for?

set cut_paste_input [stack 0]
version 6.2 v1
push $cut_paste_input
Group {
 name Group1
 selected true
 xpos 42
 ypos -83
 addUserKnob {20 User}
 addUserKnob {41 p0 l point 0 T Ramp1.p0}
 addUserKnob {41 p1 l point 1 T Ramp1.p1}
}
 Input {
  inputs 0
  name Input1
  xpos 0
 }
set N1bf545c0 [stack 0]
 Output {
  name Output1
  xpos 0
  ypos 300
 }
push $N1bf545c0
 Ramp {
  p0 {670 700}
  p1 {1550 315}
  name Ramp1
  selected true
  xpos 95
  ypos 62
 }
end_group


On Sat, May 14, 2011 at 11:49 AM, Brogan Ross broganr...@gmail.com wrote:

 Does anyone know if there's a way to create that line connecting two
 position points like the ramp, without using the ramp node?  I've got a
 gizmo with two points, and would like that visual representation.
 Thanks

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Improvements to the Nuke Mailing Lists

2011-05-06 Thread Ivan Busquets
Many thanks, Jack.
And sorry that ended up being so much trouble!

I certainly appreciate it, though. Paired with the search engine of the web
forum, this will be a huge source of info.

Thanks so much, and have a great weekend.

Ivan

On Fri, May 6, 2011 at 11:14 AM, Jack Binks j...@thefoundry.co.uk wrote:

 Well, that was more painful than I expected. Definitely need something more
 fortifying than tea and biscuits now!
 The web front end should now have been updated with the mailman archives we
 have here, sewn together with individual's mail archives for areas where the
 central ones were lacking. Anyone who's registered on the site today will
 need to re-register I'm afraid, and there may have been a post or two
 dropped in the changeover as I had to unlink to set up a bridge to an mbox
 importer and then feed posts into that. Not very pleasant.
 Have a nice weekend all.

 Cheers
 Jack


 On 06/05/2011 17:38, J Bills wrote:

 awesome to hear, thanks for the extra effort jack!

 On Fri, May 6, 2011 at 5:04 AM, Jack Binksj...@thefoundry.co.uk  wrote:

 If you have a few emails from the lists prior to that you find yourself
 often referring to, please feel free to forward these round the list
 again
 so they're archived for posterity.

 That's a nice idea, but I think having all the archives already dumped in
 there would be a lot more useful. Is there any technical (or legal?)
 reason
 that prevents you from doing that?

 They're unfortunately not consistently 100% of posts, but since you've
 asked
 so nicely I think I've found a (nasty) way of porting what archive data
 there, is with some testing on an offline site. I'm going to attempt to
 finagle this on the production site now, or rather once I've had a
 fortifying cup of tea and a hobnob. Note that the forum bridge is going
 to
 be down until I complete this, so anything posted there won't make it
 over.
 New posts to the list via email will stack up however and should make it
 (barring the potential for one or two).
 I will email against once this is complete, be it a success or failure.

 Cheers
 Jack

 Thanks,
 Ivan


 On Thu, May 5, 2011 at 9:16 AM, Ned Wilsonnedwil...@gmail.com  wrote:

 Nice one, Jack... much appreciated. :)

 On May 5, 2011, at 4:57 AM, Jack Binks wrote:

  Hi All,

 Due to popular demand we've made a few improvements to the mailing
 lists
 in the form of a search-able archive and web front end.

 These can be found at http://forums.thefoundry.co.uk/ and act as a
 bidirectional link up with the existing mailing lists. Anything posted
 to
 the list will appear in the website thread listings, and anything
 posted to
 the web will bounce out to the mailing lists (with a few minutes
 latency).
 Note that the archives are both viewable and searchable without logging
 in
 to the site, however if you wish to post via the web you'll need to log
 in.
 This has a separate membership to the mailing lists, so use the
 register
 link on the linked page to sign up.

 As with the mailing lists, these are not avenues for official support -
 such reports should continue to go through supp...@thefoundry.co.uk(along
 with any problems you notice with the forums!). The archives run back a
 couple of months, ie since the bridge element has been running, and
 will
 build with time. If you have a few emails from the lists prior to that
 you
 find yourself often referring to, please feel free to forward these
 round
 the list again so they're archived for posterity. The email and web
 front
 ends are both covered by the pre-existing code of conduct, and please
 note
 the additional TCs and privacy policies linked in the page footer.
 We'll be
 setting up links from the main site shortly.

 An important facet of making these genuinely usable for everyone is
 observing good thread etiquette, and that's down to all of us who use
 the
 lists. The threads on the web front end mimic the threads seen in your
 email
 client, so even if you don't personally use threads in your emails,
 please
 do follow standard rules when responding to ensure the archives are
 easy to
 search and follow, and indeed, for everyone else who does use a
 threaded
 email client view.

 For those of you who don't generally use threads, there are lots of
 resources on the web related to good etiquette, however the most
 important
 from this point of view are:

 -When creating a new topic hit 'new email' in your client and set to:
 mailing list address  (eg nuke-users@support.thefoundry.co.uk).
 Don't find
 a different email topic on the list, open a contained email, hit reply
 and
 change the subject as there's a secret squirrel identifier which is
 used to
 id the thread and which hasn't been changed. If you do this then your
 new
 topic will appear in the previous topic's thread.

 -When replying to an existing topic, open the email in the thread in
 particular you are replying to and hit 'reply' in your email client,
 ensuring the to: is set to the mailing list address. Do 

Re: [Nuke-users] Improvements to the Nuke Mailing Lists

2011-05-05 Thread Ivan Busquets
Very nice, thanks.

A question regarding the older archives:

Is there any chance that the whole list archives could be included in the
web front end? I find they are extremely useful source of information, and
having them in the forum would make for a great one-stop place to search for
info posted in the past.

If you have a few emails from the lists prior to that you find yourself
 often referring to, please feel free to forward these round the list again
 so they're archived for posterity.


That's a nice idea, but I think having all the archives already dumped in
there would be a lot more useful. Is there any technical (or legal?) reason
that prevents you from doing that?

Thanks,
Ivan


On Thu, May 5, 2011 at 9:16 AM, Ned Wilson nedwil...@gmail.com wrote:

 Nice one, Jack... much appreciated. :)

 On May 5, 2011, at 4:57 AM, Jack Binks wrote:

  Hi All,
 
  Due to popular demand we've made a few improvements to the mailing lists
 in the form of a search-able archive and web front end.
 
  These can be found at http://forums.thefoundry.co.uk/ and act as a
 bidirectional link up with the existing mailing lists. Anything posted to
 the list will appear in the website thread listings, and anything posted to
 the web will bounce out to the mailing lists (with a few minutes latency).
 Note that the archives are both viewable and searchable without logging in
 to the site, however if you wish to post via the web you'll need to log in.
 This has a separate membership to the mailing lists, so use the register
 link on the linked page to sign up.
 
  As with the mailing lists, these are not avenues for official support -
 such reports should continue to go through supp...@thefoundry.co.uk (along
 with any problems you notice with the forums!). The archives run back a
 couple of months, ie since the bridge element has been running, and will
 build with time. If you have a few emails from the lists prior to that you
 find yourself often referring to, please feel free to forward these round
 the list again so they're archived for posterity. The email and web front
 ends are both covered by the pre-existing code of conduct, and please note
 the additional TCs and privacy policies linked in the page footer. We'll be
 setting up links from the main site shortly.
 
  An important facet of making these genuinely usable for everyone is
 observing good thread etiquette, and that's down to all of us who use the
 lists. The threads on the web front end mimic the threads seen in your email
 client, so even if you don't personally use threads in your emails, please
 do follow standard rules when responding to ensure the archives are easy to
 search and follow, and indeed, for everyone else who does use a threaded
 email client view.
 
  For those of you who don't generally use threads, there are lots of
 resources on the web related to good etiquette, however the most important
 from this point of view are:
 
  -When creating a new topic hit 'new email' in your client and set to:
 mailing list address (eg nuke-users@support.thefoundry.co.uk). Don't
 find a different email topic on the list, open a contained email, hit reply
 and change the subject as there's a secret squirrel identifier which is used
 to id the thread and which hasn't been changed. If you do this then your new
 topic will appear in the previous topic's thread.
 
  -When replying to an existing topic, open the email in the thread in
 particular you are replying to and hit 'reply' in your email client,
 ensuring the to: is set to the mailing list address. Do not hit 'new' and
 copy and paste the subject over, as for similar reasons to the last post the
 email will appear as a new topic, rather than part of the previous topic's
 thread. An obvious difficulty that arises here is for those of you that use
 the mailing list's digest view. If possible, when you get a digest mail you
 wish to respond to, do so via the web front end against that particular
 thread, so as to preserve that thread's integrity.
 
  Hope you find the improvements useful, and do let us know at support@ if
 you run into any issues.
 
  Kind Regards
  Jack
 
  --
 
  Jack Binks, Product Manager
  The Foundry, 6th Floor, Communications Building
  48 Leicester Square, London, WC2H 7LT, UK
  Tel: +44 (0)20 7434 0449 - Fax: +44 (0)20 7434 1550
  Web: www.thefoundry.co.uk
 
  The Foundry Visionmongers Ltd.
  Registered in England and Wales No: 4642027
 
  ___
  Nuke-users mailing list
  Nuke-users@support.thefoundry.co.uk
  http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/

Re: [Nuke-users] Generate Stereo Camera in nuke?

2011-04-28 Thread Ivan Busquets
If you have both the interocular and the convergence distance, then you can
set up your second camera as just a position  rotation offset to your main
camera. Create a new Camera node, connect your main Camera to it, and then
just work out the offsets in your new camera.
You'll need an offset in translate.x and rotate.y. Translate.x should just
be your interocular distance. Rotate.y is the toe-in angle of the right
triangle defined by the two given sides (IO and convergence). You already
have the two sides of that triangle, so you can solve for the angle:
angle = degrees(atan(IO/convergence))
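As a quick sanity check (not part of the original mail), the toe-in geometry can be verified in plain Python, using the interOcular = 1 and convergence = 28 values from the example script further down:

```python
import math

def toe_in_angle(interocular, convergence):
    """Angle (in degrees) to rotate the second eye so both cameras
    converge at the given distance: degrees(atan(IO / convergence))."""
    return math.degrees(math.atan(interocular / convergence))

# Values matching the example script (interOcular 1, convergence 28)
angle = toe_in_angle(1.0, 28.0)
print(round(angle, 4))  # roughly 2.05 degrees of toe-in
```

This is the same expression the Camera's rotate.y knob evaluates; computing it by hand is a handy way to confirm a linked camera is doing what you expect.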

Make sure your new camera has the same projection parameters as your main
camera (link them if necessary). Also, a good way to check that your
expressions are working correctly is to link the focal_point of both cameras
to your convergence distance (meaning the crosshair handles of the two
cameras should then overlap in the 3D view).

Here's a quick example:

set cut_paste_input [stack 0]
version 6.1 v5
push $cut_paste_input
Camera2 {
 translate {20 9 -53}
 rotate {-17 -41 0}
 focal_point {{Camera4.convergence}}
 name Camera3
 label "MAIN (left eye)"
 selected true
 xpos 647
 ypos 468
}
set N74b5be0 [stack 0]
Camera2 {
 translate {{interOcular} 0 0}
 rotate {0 {degrees(atan(interOcular/convergence))} 0}
 focal {{parent.Camera3.focal}}
 haperture {{parent.Camera3.haperture}}
 vaperture {{parent.Camera3.vaperture}}
 near {{parent.Camera3.near}}
 far {{parent.Camera3.far}}
 win_translate {{parent.Camera3.win_translate} {parent.Camera3.win_translate}}
 win_scale {{parent.Camera3.win_scale} {parent.Camera3.win_scale}}
 winroll {{parent.Camera3.winroll}}
 focal_point {{convergence}}
 name Camera4
 label "RIGHT EYE"
 selected true
 xpos 812
 ypos 468
 addUserKnob {20 User}
 addUserKnob {7 interOcular}
 interOcular 1
 addUserKnob {7 convergence}
 convergence 28
}
push $N74b5be0
JoinViews {
 inputs 2
 name JoinViews2
 selected true
 xpos 722
 ypos 621
 viewassoc lt\nrt
}


Cheers,
Ivan


On Thu, Apr 28, 2011 at 8:53 AM, Oliver Armstrong oliverarmstr...@gmail.com wrote:

 Hi there.

 Does anyone know a way to generate the other eye camera in nuke. I have a
 shot camera and the interocular distance as well as the distance of the
 convergence plane.

 Thanks,




 --
 US: +1 (646) 229-5388
 UK: +44 7899 893 541
 Skype: oliverarmstrong



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

