Re: [Nuke-users] nuBridge launch event

2017-04-11 Thread Ivan Busquets
Congrats! And sorry I never replied to your last email.

Basically, I realized I couldn't even help testing through a proxy
connection, because our setup goes beyond that. We are literally air-gapped
in production machines (sigh), and the only way to the outside world is
using virtual machines that don't have access to our production
disks/network.

Anyway, I still wish you a successful launch. :)
Cheers!


On Tue, Apr 11, 2017 at 2:43 AM, Frank Rueter|OHUfx  wrote:

> Hi all,
>
> sorry for the spam, but after years of battling to find enough time, I am
> finally in the last throes and aiming to release the nuBridge in the next
> few weeks.
>
> As a launch event we have started another Most Valued Contributor
> competition with generous support by the Foundry.
>
> Vote for your most valuable contributor and be in the draw to win a
> NukeStudio, CaraVR & nuBridge license.
> Lots of prizes for the winners as well of course!
>
> http://www.nukepedia.com/vote-your-favourite-contributor
>
> Cheers,
> frank
>
>
> --
> over 1,000 free tools for Nuke
> full access from within... coming soon
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] deepholdout issue

2015-12-14 Thread Ivan Busquets
Results should be identical to the equivalent DeepMerge, provided every
element is properly held out by the rest.

Can't really tell what differences you may be seeing, but for overlapping
volumetric samples it's possible that you may get some artifacts from a
DeepMerge set to "holdout". You could try with a "DeepHoldout" instead,
which should already produce a flat held-out image.

For simple cases like the one you sent earlier, though, the results should
be identical. Is that not what you're seeing?


On Mon, Dec 14, 2015 at 11:23 AM, Patrick Heinen <
mailingli...@patrickheinen.com> wrote:

> Thanks Ivan! That seems to give pretty good results! Still not exactly the
> same as deepMerging them unfortunately, but pretty close.
> Should it in theory look exactly the same? Or is there no way to get it to
> be the same? Bothers me a little bit that it's not perfect ;)
> I don't want to deep merge it because I basically just want to hold my
> render out with the actors from the plate.
>
>
> Ivan Busquets wrote on 14.12.2015 11:10:
>
> > If you're combining 2 elements that are already pre-held out by each
> other, you'd probably want to use a disjoint-over instead of a regular over.
> >
> >
> > Either that, or DeepMerge them directly before you go DeepToImage.
> >
> >
> > On Mon, Dec 14, 2015 at 10:43 AM, Patrick Heinen <
> mailingli...@patrickheinen.com <mailto:mailingli...@patrickheinen.com> >
> wrote:
> >> Hey everyone,
> >>
> >> I thought I'd use deep again for a few shots on my current show, but am
> >> now running into some issues I had never noticed before.
> >> I'm using a roto on a card to hold out my CG renders, but doing that is
> >> creating a dark edge, as the alpha seems to get held out too much.
> >> Maybe it's just too early on Monday morning and I'm doing something
> >> wrong. But I can't for the life of me find my error.
> >> I attached a script that recreates the issue with two rotos. Hope
> >> someone can point me in the right direction.
> >>
> >> Thanks!
> >> Patrick

Re: [Nuke-users] deepholdout issue

2015-12-14 Thread Ivan Busquets
If you're combining 2 elements that are already pre-held out by each other,
you'd probably want to use a disjoint-over instead of a regular over.

Either that, or DeepMerge them directly before you go DeepToImage.
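For what it's worth, the difference between the two operations can be sketched in Python on the alpha channel alone, using the standard compositing formulas (this is an illustration of the math, not Nuke's actual implementation):

```python
def over(A, B, a):
    """Standard over: A + B*(1 - a), where a is A's alpha.
    Double-discounts B when A and B are already held out by each other."""
    return A + B * (1.0 - a)

def disjoint_over(A, B, a, b):
    """Disjoint-over: assumes A and B cover disjoint areas (pre-held-out).
    Plain sum while the alphas don't overfill; otherwise scale B down."""
    if a + b < 1.0:
        return A + B
    elif b > 0.0:
        return A + B * (1.0 - a) / b
    return A

# Two elements held out by each other: alphas 0.3 and 0.4 in the same
# pixel should sum to the true coverage of 0.7.
disjoint = disjoint_over(0.3, 0.4, 0.3, 0.4)  # ~0.7, coverage preserved
plain = over(0.3, 0.4, 0.3)                   # ~0.58, darker -> edge artifacts
```

The plain over attenuates B by A's alpha even though B was already cut back by the holdout, which is exactly the kind of dark edge described above.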


On Mon, Dec 14, 2015 at 10:43 AM, Patrick Heinen <
mailingli...@patrickheinen.com> wrote:

> Hey everyone,
>
> I thought I'd use deep again for a few shots on my current show, but am
> now running into some issues I had never noticed before.
> I'm using a roto on a card to hold out my CG renders, but doing that is
> creating a dark edge, as the alpha seems to get held out too much.
> Maybe it's just too early on Monday morning and I'm doing something wrong.
> But I can't for the life of me find my error.
> I attached a script that recreates the issue with two rotos. Hope someone
> can point me in the right direction.
>
> Thanks!
> Patrick

Re: [Nuke-users] Why does "Overlay" clamp the image?

2015-01-28 Thread Ivan Busquets
For cruder, unclamped versions of some merge operations (including
"overlay"), you can also use the old "Merge" node (as opposed to the
default "Merge2").

Merge {
 inputs 2
 operation overlay
 name Merge1
 selected true
 xpos -254
 ypos 199
}
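As an aside, the clamped-versus-unclamped behaviour being discussed can be sketched with the standard overlay formula (multiply-like below 0.5, screen-like above, with a factor of 2 keeping the two halves continuous). This is a sketch of the math only, not Nuke's actual code:

```python
def overlay(a, b, clamp=False):
    """Overlay: 'image A brightens image B'.
    Multiply if B < 0.5, screen otherwise (standard Photoshop-style formula)."""
    if clamp:
        # Clamped variant: float values outside 0-1 are lost before merging.
        a = min(max(a, 0.0), 1.0)
        b = min(max(b, 0.0), 1.0)
    if b < 0.5:
        return 2.0 * a * b
    return 1.0 - 2.0 * (1.0 - a) * (1.0 - b)

# A superwhite A value survives the unclamped version but not the clamped one:
print(overlay(2.0, 0.75))              # 1.5
print(overlay(2.0, 0.75, clamp=True))  # 1.0
```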


On Wed, Jan 28, 2015 at 8:34 AM, Mads Hagbarth Lund 
wrote:

> It's just bad with the overlay thing, as it is a pretty useful way to add
> grain and do dodge/burn effects. It works out of the box in Photoshop,
> Fusion and AfterEffects but you have to make your own overlay gizmo to make
> it work in Nuke.
>
>
> Den 28/01/2015 kl. 17.11 skrev Daniel Short :
>
> I wondered how the screen would need to be calculated differently since
> the invert function might get you some crazy values in float space.
> It makes sense that they changed it to give us a visually accurate result
> when a mathematical one would look like an error.
>
> On Wed, Jan 28, 2015 at 10:49 AM, John Mangia  wrote:
>
>> Looks like the nuke screen algorithm has a low pass to allow for values
>> over 1 to comp more naturally.  You can see the difference if you expose
>> down.
>>
>> On Wed, Jan 28, 2015 at 10:37 AM, Mads Lund  wrote:
>>
>>> Ahh ok, so the "Screen" operation in the merge node does not just screen.
>>>
>>> Screen is the same as the Merge
>>> 
>>>  node,
>>> only with operation set to screen by default. It layers images together
>>> using the screen compositing algorithm: A+B-AB if A or B ≤1, otherwise
>>> max(A,B). In other words, If A or B is less than or equal to 1, the
>>> screen algorithm is used, otherwise max is chosen. Screen resembles plus.
>>> It can be useful for combining mattes and adding laser beams.
>>>
>>> However I still don't see why they don't apply this to the overlay
>>> method as well. (Why would you want to clamp the image?)
>>> And it also doesn't explain why it clamps at 0 (since multiply does not
>>> clamp at 0).
>>>
>>>
>>>
>>> On Wed, Jan 28, 2015 at 2:46 PM, John Mangia 
>>> wrote:
>>>
 Screen is also a multiplicative operation, it's just inverted images
 multiplied together and re-inverted.


 On Wed, Jan 28, 2015 at 8:35 AM, Ron Ganbar  wrote:

> The math used in Nuke for Screen makes it inherently clamp (it
> doesn't really clamp, but cannot produce colors brighter than 1).
>
>
>
> Ron Ganbar
> email: ron...@gmail.com
> tel: +44 (0)7968 007 309 [UK]
>  +972 (0)54 255 9765 [Israel]
> url: http://ronganbar.wordpress.com/
>
> On Wed, Jan 28, 2015 at 2:15 PM, Mads Lund 
> wrote:
>
>> According to the documentation:
>>
>> • overlay - Image A brightens image B.
>> Algorithm: multiply if B<.5, screen if B>.5
>>
>> However Nuke also clamps the colors between 0 and 1.
>> It seems that all other compositing software does not clamp between 0
>> and 1, so I am wondering if this is a bug in Nuke?
>>
>>
>>>
>>
>>
>>
>> --
>> John Mangia
>>
>> 908.616.1796
>> j...@johnmangia.com
>>
>
>
>
> --
> Daniel Short
> www.danisnotshort.com 
> 215.859.3220
>

Re: [Nuke-users] Blink mini-project

2014-05-18 Thread Ivan Busquets
Hey Jed,

Coincidentally, I have a simple Voronoi-cell noise generator made in Blink.

This was done as a test ahead of a more complete set of noise-generation
tools, so it was never polished and was more of a proof of concept.
It's mostly a straight conversion of the Voronoi generator from the libNoise
library into BlinkScript.

It should work as an example, though.

I've uploaded it to the new Blink section in Nukepedia.
http://www.nukepedia.com/blink/image/voronoi

Cheers,
Ivan


On Sun, May 18, 2014 at 3:16 PM, Jed Smith  wrote:

>  One thing that I would love to see would be a more versatile noise
> generator.
>
> Maybe something with options for Voronoi noise, tiled shapes, hexagons,
> other types of useful noise that I'm not aware of?
> This Nuke plugin exists for voronoi noise, but I could never get it to
> compile.
>
> I think that would be super useful and perhaps not insanely difficult to
> make.
>
> What other types of noise are there that would be useful to have
> generators for?
>
> Another suggestion might be a simple 2d slice volumetric noise generator
> that functioned in the 3d system. This should be possible with blink right?
>
> On Friday, 2014-05-16 at 8:34a, Neil Rögnvaldr Scholes wrote:
>
>  Oooh Anamorphic Lens Flares...:)
>
>
> Neil Rögnvaldr Scholes
> www.neilscholes.com
>
> On 16/05/14 16:19, Nik Yotis wrote:
>
>  Hi,
>
>  any ideas/suggestions for a mini-project Blink project people 'd like to
> see live?
> Dev time is 2 weeks, I have a basic understanding of the Blink | NDK API
>
> cheers
>
>  --
>
> Nik Yotis | Software Engineer/3D Graphics R&D
>
> BlueBolt Ltd | 15-16 Margaret Street | London W1W 8RW | T: +44 (0)20 7637
> 5575 | F: +44 (0)20 7637 3296 | www.blue-bolt.com |
>
>

Re: [Nuke-users] day rates in the UK

2014-03-22 Thread Ivan Busquets
In my first job in the industry I had the chance to work with a great
editor. He taught me something I still remember almost on a daily basis.

He had made the transition from physical film-cutting to non-linear editing
systems, and had this opinion about the many benefits that non-linear
editing brought to the table.

"It's obviously great and makes my job so much easier, and I wouldn't want
to ever look back. However, it is now so easy to make a cut that a lot of
editors/directors never commit to one. They'll cut on a certain frame, then
try a couple of frames later, then a couple of frames earlier, then one
more, then leave it there temporarily to revisit later.
When you're physically cutting a reel of film, there's something permanent
about it that urges you to THINK why you want to cut on that frame and not
on any other, and then COMMIT to that decision."

I firmly believe that the analogy applies to many technological advances in
our industry.
There is a growing belief that some changes in post are fast/cheap enough
that the exercise of THINKING and COMMITTING just keeps getting delayed.
The process then becomes reactive, with clients/supervisors spending more
time reacting to what they're seeing than directing what they would like to
see. And with it comes the frustration when, iteration after iteration,
they're still not seeing something they "like".

We've all seen it:
- I don't know what kind of look I'm going to want for this, so I'll just
shoot it as neutral as possible and choose between different looks later.
- I want to keep the edit open as you guys work on these shots, so I can
make the decisions on what should be in or out LATER, because it's so much
easier to do once I see how these shots are coming together.
- I can't judge this animation until it has proper motion blur, lighting,
and I can see it integrated in the plate. (This one is particularly
infuriating, and makes me wonder how are these people able to judge
storyboards before they shoot the whole thing)

Studios have learnt to protect themselves a bit against this, managing
clients' expectations, planning staged deliveries, etc. But ultimately, our
line of work is very subjective, so it always takes someone with a strong
vision and the ability to convey that vision for things to go more or less
smoothly.

The most successful projects I've ever worked on have a few things in
common:

- A clear vision from a very early stage.
- Strong leadership.
- Very little or no micromanaging.

Every once in a blue moon, those 3 line up and you are reminded of how much
fun this job can be.




On Thu, Mar 20, 2014 at 5:29 PM, Frank Rueter|OHUfx  wrote:

>  Totally agree. The greater flexibility we have in post has created a
> culture of creative micromanagement that is equivalent to manhandling
> actors on set rather than letting them act.
>
>
>
>
> On 3/21/14, 12:25 PM, matt estela wrote:
>
>
> On 21 March 2014 10:09, Elias Ericsson Rydberg <
> elias.ericsson.rydb...@gmail.com> wrote:
>
>>  In all kinds of productions there seems to be a heavy reliance on the
>> director. That's the standard I guess. Should not we, the vfx-artists, be
>> the authority of our own domain?
>>
>>
>  I do wonder if non cg fx heavy films of the past were as reliant on
> director approval as they are today. Using Raiders as the example again,
> was Spielberg really approving every rock, every mine cart that was created
> for the mine chase sequence, sending shots back 10, 50, 100 times for
> revisions? Or as I suspect, was there the simple reality of 'we need to
> make these things, that takes time, you really can't change much once we
> start shooting miniatures.'? The ability for digital to change anything and
> everything is both the best and worst thing that happened to post
> production.
>
>
>
>

Re: [Nuke-users] oldest card for nuke 8

2013-12-31 Thread Ivan Busquets
Randy,

check this list:
https://developer.nvidia.com/cuda-gpus

For Nuke 8, you're going to want anything that has a compute capability of
2.0 or higher. Your 285 seems to be rated for 1.3


You told me my card wouldn't be supported before since it's too old.


From my Android phone on T-Mobile. The first nationwide 4G network.


 Original message 
From: Deke Kincaid
Date:12/31/2013 7:15 PM (GMT-05:00)
To: Nuke user discussion
Subject: Re: [Nuke-users] oldest card for nuke 8

Randy: this is an Nvidia issue.  They haven't released cuda drivers for
your card under Mavericks yet.  There are tons of threads about this if you
google for "cuda mavericks nvidia 285"

-deke

On Tuesday, December 31, 2013, Randy Little wrote:

> But Nuke 8 doesn't support older cards I have been told.   My 285 no
> longer shows up but this is my old box so  it could be Nuke and it
> could be 10.9 or both.  I was told the 285 isn't supported.   Since I do
> very little from home and try not to work from home I just keep the bare
> minimum system for when I have to bring work home.   Something about
> something 2.0 vs 1.0 precision in some cuda lib is all I can remember.
> Will ask Deke later.
>
>
> Randy S. Little
> http://www.rslittle.com/
> http://www.imdb.com/name/nm2325729/
>
>
>
>
> On Tue, Dec 31, 2013 at 3:30 PM, Nathan Rusch wrote:
>
>> There has been talk of supporting OpenCL as well as (or possibly in place
>> of) CUDA at some point, but for now, even if Blink is capable of generating
>> OpenCL code, Nuke still limits support to CUDA-enabled cards.
>>
>> -Nathan
>>
>>
>> -Original Message- From: Martin Winkler
>> Sent: Tuesday, December 31, 2013 12:18 PM
>> To: Nuke user discussion
>> Subject: Re: [Nuke-users] oldest card for nuke 8
>>
>>
>> On Tue, Dec 31, 2013 at 9:06 PM, Nathan Rusch 
>> wrote:
>>
>>> Nuke still only supports CUDA for GPU acceleration.
>>>
>>
>> Isn't Blink supposed to do OpenCL?
>>
>> Regards,
>>
>>
>> --
>> Martin Winkler, Geschäftsführer
>> Grey Matter Visual Effects GmbH
>> Georg-Friedrich-Str.1
>> 76530 Baden-Baden
>> Tel. 07221 972 95 31
>> HRB 700934 Amtsgericht Mannheim
>> Ust-ID Nr.DE249482509
>>
>
>

-- 
--
Deke Kincaid
Creative Specialist
The Foundry
Skype: dekekincaid
Tel: (310) 399 4555 - Mobile: (310) 883 4313
Web: www.thefoundry.co.uk
Email: d...@thefoundry.co.uk



Re: [Nuke-users] How does DeepRecolor distribute a target opacity across multiple samples?

2013-12-19 Thread Ivan Busquets
Colin, you're a rock star.

That is spot on.

I was so obsessed with keeping the proportion of opacities between samples
that I didn't even try to compare log(src_viz) / log(target_viz).

That is the constant I'd been looking for. Thanks so much!
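For the archive, Colin's constant-exponent retarget can be written out in Python. This is a sketch built from the numbers in his reply (the assumption being that DeepRecolor applies one shared exponent to every sample's transparency; the helper names are mine, not DeepRecolor's):

```python
import math

def retarget_exponent(src_alpha, target_alpha):
    """Exponent k such that 1 - (1 - src_alpha)**k == target_alpha."""
    return math.log(1.0 - target_alpha) / math.log(1.0 - src_alpha)

def retarget_sample(alpha, k):
    """Apply the shared exponent to an individual sample's alpha."""
    return 1.0 - (1.0 - alpha) ** k

# Accumulated alpha 0.52, retargeted to 0.9 (the example from this thread):
k = retarget_exponent(0.52, 0.9)  # ~3.137169
s1 = retarget_sample(0.4, k)      # ~0.798617
s2 = retarget_sample(0.2, k)      # ~0.503434
# By construction, s1 over s2 == 1 - (1-s1)*(1-s2) == 0.9 exactly.
```

Because the transparencies multiply through the over, `(1-s1)*(1-s2) == ((1-0.4)*(1-0.2))**k == 0.48**k == 0.1`, which is why a single exponent hits the target regardless of how many samples there are.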

Cheers,
Ivan


On Thu, Dec 19, 2013 at 12:21 AM, Colin Alway  wrote:

> Hi ivan
>
> 1-(1-0.52)^3.137169 = 0.9
> 1-(1-0.4)^3.137169 = 0.798617
> 1-(1-0.2)^3.137169 = 0.503434
>
> In other words given an accumulated alpha and a target alpha, you can
> calculate one exponent that can then be applied to every sample as above.
>
> Colin
>  On 18 Dec 2013 01:10, "Ivan Busquets"  wrote:
>
>> Hi,
>>
>> Sorry for the repost. I sent this to the development list yesterday, but
>> posting over here as well to cast a broader net.
>>
>> Has anyone dug around the "target input alpha" option in the DeepRecolor
>> node, and has some insight into how it retargets each sample internally?
>>
>> Long story short, I'm trying to implement a procedure to re-target
>> opacity of each sample in a deep pixel, akin to what's happening in a
>> DeepRecolor node when "target input alpha" is checked.
>>
>> I've got this to a point where it's working ok, but I think I might be
>> missing something, as my results differ from those you'd get in a
>> DeepRecolor.
>>
>> My re-targeting algorithm is based on the assumption that the relative
>> opacity between samples should be preserved, but DeepRecolor clearly uses a
>> different approach.
>>
>> Example:
>>
>> Say you have a deep pixel with 2 samples, and the following opacities:
>>
>> Samp1 :   0.4
>> Samp2 :   0.2
>>
>> The accumulated opacity is 0.52  (Samp1 over Samp2). Note that Samp1
>> deliberately has an opacity of 2 times Samp2.
>>
>> Now, let's say we want to re-target those samples to an accumulated
>> opacity of 0.9.
>>
>> What I am doing is trying to calculate new opacities for Samp1 and Samp2
>> in such a way that both of these conditions are met.
>>
>> a) Samp1 == 2 * Samp2
>> b) Samp1 over Samp2 == 0.9
>>
>> This gives me the following re-targeted values:
>>
>> Samp1 :   0.829284
>>
>> Samp2 :   0.414642
>>
>>
>> I'm happy with those, but it bugs me that DeepRecolor throws different
>> results:
>>
>>
>> Samp1 :   0.798617
>> Samp2 :   0.503434
>>
>> Which meets the second condition (Samp1 over Samp2 == 0.9), but does not
>> preserve the relative opacities of the original samples.
>>
>> It seems to me like DeepRecolor is applying some sort of non-linear
>> function to apply a different weight to each of the original samples, but I
>> haven't been able to figure out the logic of that weighting, or a reason
>> why it's done that way.
>>
>> Does anyone have any insight/ideas on what DeepRecolor might be doing
>> internally?
>> Or a reason why you might want to distribute the target alpha in a
>> non-linear way?
>>
>> Thanks in advance,
>> Ivan
>>

[Nuke-users] How does DeepRecolor distribute a target opacity across multiple samples?

2013-12-17 Thread Ivan Busquets
Hi,

Sorry for the repost. I sent this to the development list yesterday, but
posting over here as well to cast a broader net.

Has anyone dug around the "target input alpha" option in the DeepRecolor
node, and has some insight into how it retargets each sample internally?

Long story short, I'm trying to implement a procedure to re-target opacity
of each sample in a deep pixel, akin to what's happening in a DeepRecolor
node when "target input alpha" is checked.

I've got this to a point where it's working ok, but I think I might be
missing something, as my results differ from those you'd get in a
DeepRecolor.

My re-targeting algorithm is based on the assumption that the relative
opacity between samples should be preserved, but DeepRecolor clearly uses a
different approach.

Example:

Say you have a deep pixel with 2 samples, and the following opacities:

Samp1 :   0.4
Samp2 :   0.2

The accumulated opacity is 0.52  (Samp1 over Samp2). Note that Samp1
deliberately has an opacity of 2 times Samp2.

Now, let's say we want to re-target those samples to an accumulated opacity
of 0.9.

What I am doing is trying to calculate new opacities for Samp1 and Samp2 in
such a way that both of these conditions are met.

a) Samp1 == 2 * Samp2
b) Samp1 over Samp2 == 0.9

This gives me the following re-targeted values:

Samp1 :   0.829284

Samp2 :   0.414642


I'm happy with those, but it bugs me that DeepRecolor throws different
results:


Samp1 :   0.798617
Samp2 :   0.503434

Which meets the second condition (Samp1 over Samp2 == 0.9), but does not
preserve the relative opacities of the original samples.
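The ratio-preserving solve described above reduces to a quadratic, sketched here in Python (the helper is mine, hypothetical, and fixed to two samples for clarity):

```python
import math

def retarget_keep_ratio(target, ratio=2.0):
    """Find (s1, s2) with s1 == ratio * s2 and (s1 over s2) == target.
    'over' on alphas is s1 + s2 * (1 - s1); substituting s1 = ratio*s2
    gives the quadratic: ratio*s2**2 - (ratio + 1)*s2 + target == 0."""
    a, b, c = ratio, -(ratio + 1.0), target
    # Take the smaller root, which keeps both alphas inside 0-1.
    s2 = (-b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return ratio * s2, s2

s1, s2 = retarget_keep_ratio(0.9)
# s1 ~ 0.82918, s2 ~ 0.41459, matching the values quoted above to
# about four decimals (the email figures look lightly rounded).
```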

It seems to me like DeepRecolor is applying some sort of non-linear
function to apply a different weight to each of the original samples, but I
haven't been able to figure out the logic of that weighting, or a reason
why it's done that way.

Does anyone have any insight/ideas on what DeepRecolor might be doing
internally?
Or a reason why you might want to distribute the target alpha in a
non-linear way?

Thanks in advance,
Ivan

Re: [Nuke-users] DepthToPoints

2013-12-04 Thread Ivan Busquets
Elias,
1/z does not normalize depth to 0-1.
Any values with depth < 1 will have values > 1 when inverted.

IMO, the main benefit of treating depth as 1/z is that you don't have to
deal with "infinity" values for empty areas.



On Wed, Dec 4, 2013 at 5:02 PM, Elias Ericsson Rydberg <
elias.ericsson.rydb...@gmail.com> wrote:

> 1/z would make sense to me as it would fit an arbitrarily deep map in a
> 0-1 space. Any point in the pass would be easily visualised with Nuke's
> viewer tools, exposure mainly of course. Although I'm a fan of having absolute
> values, 1/z has its benefits.
>
> Cheers,
> Elias
>
> 5 dec 2013 kl. 00:49 skrev Deke Kincaid :
>
> yup, you're right, long day, brain not working.  I converted my normalized
> pass wrong, so it was giving me the inverse which appeared right. :)
>
> --
> Deke Kincaid
> Creative Specialist
> The Foundry
> Skype: dekekincaid
> Tel: (310) 399 4555 - Mobile: (310) 883 4313
> Web: www.thefoundry.co.uk
> Email: d...@thefoundry.co.uk
>
>
> On Wed, Dec 4, 2013 at 6:31 PM, Ivan Busquets wrote:
>
>> Not sure if I follow, or what combination you used in your test, but the
>> standard depth output of ScanlineRender (1/z) is what DepthToPoints wants
>> as an input by default.
>>
>>
>> set cut_paste_input [stack 0]
>> version 7.0 v8
>> push $cut_paste_input
>> Camera2 {
>>  name Camera1
>>  selected true
>>  xpos 1009
>>  ypos -63
>> }
>> set N73f6770 [stack 0]
>> push $N73f6770
>> CheckerBoard2 {
>>  inputs 0
>>  name CheckerBoard1
>>  selected true
>>  xpos 832
>>  ypos -236
>> }
>> Sphere {
>>  translate {0 0 -6.44809}
>>  name Sphere1
>>  selected true
>>  xpos 832
>>  ypos -131
>> }
>> push 0
>> ScanlineRender {
>>  inputs 3
>>  shutteroffset centred
>>  motion_vectors_type distance
>>  name ScanlineRender1
>>  selected true
>>  xpos 830
>>  ypos -43
>> }
>> DepthToPoints {
>>  inputs 2
>>  name DepthToPoints1
>>  selected true
>>  xpos 830
>>  ypos 91
>>  depth depth.Z
>>  N_channel none
>> }
>>
>>
>>
>>
>>
>> On Wed, Dec 4, 2013 at 3:11 PM, Deke Kincaid wrote:
>>
>>> Actually I think we are both wrong.  I was just playing with a camera
>>> from the 3d scene with depth and it needs to be distance to match.  1/z
>>> gives you the reversed coming out of a little window look.
>>>
>>> -deke
>>>
>>>
>>> On Wednesday, December 4, 2013, Ivan Busquets wrote:
>>>
>>>>  I don't think that's right, Deke.
>>>>
>>>> DepthToPoints expects 1/z by default, not a normalized input.
>>>> Same as the output from ScanlineRender.
>>>>
>>>> The tooltip of the "invert depth" knob states that as well:
>>>>
>>>> "Invert the depth before processing. Useful if the depth is z instead
>>>> of the expected 1/z"
>>>>
>>>>
>>>>
>>>> On Wed, Dec 4, 2013 at 2:45 PM, Deke Kincaid wrote:
>>>>
>>>> 1 is near though there is an invert depth option in depth to points.
>>>>
>>>> >>Am I wrong?
>>>>
>>>> Nuke is 32 bit floating point so it shouldn't matter that much as long
>>>> as the original image was a float.   Precision would only matter if you
>>>> were working in a 8/16 bit int box.
>>>>
>>>> -deke
>>>>
>>>> On Wednesday, December 4, 2013, Ron Ganbar wrote:
>>>>
>>>> 0 is near?
>>>>
>>>> Normalised values aren't precise, though. They're very dependent on
>>>> what was decided in the render. It won't create a very precise point cloud.
>>>> Am I wrong?
>>>>
>>>>
>>>>
>>>> Ron Ganbar
>>>> email: ron...@gmail.com
>>>> tel: +44 (0)7968 007 309 [UK]
>>>>  +972 (0)54 255 9765 [Israel]
>>>> url: http://ronganbar.wordpress.com/
>>>>
>>>>
>>>> On Wed, Dec 4, 2013 at 11:51 PM, Deke Kincaid wrote:
>>>>
>>>> It's just looking for 0-1.  You can do it with an expression node, or
>>>> Jack has a handy J_Maths node in J_Ops which converts depth maps between
>>>> types really easily.
>>>>
>>>> -deke
>>>>
>>>>
>>>>

Re: [Nuke-users] DepthToPoints

2013-12-04 Thread Ivan Busquets
Not sure if I follow, or what combination you used in your test, but the
standard depth output of ScanlineRender (1/z) is what DepthToPoints wants
as an input by default.


set cut_paste_input [stack 0]
version 7.0 v8
push $cut_paste_input
Camera2 {
 name Camera1
 selected true
 xpos 1009
 ypos -63
}
set N73f6770 [stack 0]
push $N73f6770
CheckerBoard2 {
 inputs 0
 name CheckerBoard1
 selected true
 xpos 832
 ypos -236
}
Sphere {
 translate {0 0 -6.44809}
 name Sphere1
 selected true
 xpos 832
 ypos -131
}
push 0
ScanlineRender {
 inputs 3
 shutteroffset centred
 motion_vectors_type distance
 name ScanlineRender1
 selected true
 xpos 830
 ypos -43
}
DepthToPoints {
 inputs 2
 name DepthToPoints1
 selected true
 xpos 830
 ypos 91
 depth depth.Z
 N_channel none
}





On Wed, Dec 4, 2013 at 3:11 PM, Deke Kincaid  wrote:

> Actually I think we are both wrong.  I was just playing with a camera from
> the 3d scene with depth and it needs to be distance to match.  1/z gives
> you the reversed coming out of a little window look.
>
> -deke
>
>
> On Wednesday, December 4, 2013, Ivan Busquets wrote:
>
>> I don't think that's right, Deke.
>>
>> DepthToPoints expects 1/z by default, not a normalized input.
>> Same as the output from ScanlineRender.
>>
>> The tooltip of the "invert depth" knob states that as well:
>>
>> "Invert the depth before processing. Useful if the depth is z instead of
>> the expected 1/z"
>>
>>
>>
>> On Wed, Dec 4, 2013 at 2:45 PM, Deke Kincaid wrote:
>>
>> 1 is near though there is an invert depth option in depth to points.
>>
>> >>Am I wrong?
>>
>> Nuke is 32 bit floating point so it shouldn't matter that much as long as
>> the original image was a float.   Precision would only matter if you were
>> working in a 8/16 bit int box.
>>
>> -deke
>>
>> On Wednesday, December 4, 2013, Ron Ganbar wrote:
>>
>> 0 is near?
>>
>> Normalised values aren't precise, though. They're very dependent on what
>> was decided in the render. It won't create a very precise point cloud.
>> Am I wrong?
>>
>>
>>
>> Ron Ganbar
>> email: ron...@gmail.com
>> tel: +44 (0)7968 007 309 [UK]
>>  +972 (0)54 255 9765 [Israel]
>> url: http://ronganbar.wordpress.com/
>>
>>
>> On Wed, Dec 4, 2013 at 11:51 PM, Deke Kincaid wrote:
>>
>> It's just looking for 0-1.  You can do it with an expression node or Jack
>> has a handy J_Maths node in J_Ops which converts depth maps between
>> types really easily.
>>
>> -deke
>>
>>
>> On Wednesday, December 4, 2013, Ron Ganbar wrote:
>>
>> Hi all,
>> for DepthToPoints to work, what kind of depth do I need to feed into it?
>> 1/distance? distance? normalised?
>> And how do I convert what comes out of Maya's built in Mental Ray so it
>> will work?
>>
>> Thanks!
>> Ron Ganbar
>> email: ron...@gmail.com
>> tel: +44 (0)7968 007 309 [UK]
>>  +972 (0)54 255 9765 [Israel]
>> url: http://ronganbar.wordpress.com/
>>
>>
>>
>> --
>> --
>> Deke Kincaid
>> Creative Specialist
>> The Foundry
>> Skype: dekekincaid
>> Tel: (310) 399 4555 - Mobile: (310) 883 4313
>> Web: www.thefoundry.co.uk
>> Email: d...@thefoundry.co.uk
>>
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>>
>>
>>
>> --
>> --
>> Deke Kincaid
>> Creative Specialist
>> The Foundry
>> Skype: dekekincaid
>> Tel:
>>
>>
>
> --
> --
> Deke Kincaid
> Creative Specialist
> The Foundry
> Skype: dekekincaid
> Tel: (310) 399 4555 - Mobile: (310) 883 4313
> Web: www.thefoundry.co.uk
> Email: d...@thefoundry.co.uk
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] DepthToPoints

2013-12-04 Thread Ivan Busquets
I don't think that's right, Deke.

DepthToPoints expects 1/z by default, not a normalized input.
Same as the output from ScanlineRender.

The tooltip of the "invert depth" knob states that as well:

"Invert the depth before processing. Useful if the depth is z instead of
the expected 1/z"
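In code terms, the two depth conventions discussed in this thread differ only by a reciprocal. Here is a small sketch in plain Python (the function names are illustrative, not Nuke API) of converting a renderer's direct-distance depth into the 1/z that DepthToPoints expects by default:

```python
import math

def distance_to_inverse(z):
    """Direct camera-space distance -> the 1/z convention DepthToPoints expects.
    Larger values mean closer to camera; zero distance maps to infinity."""
    return math.inf if z == 0.0 else 1.0 / z

def inverse_to_distance(inv_z):
    """1/z -> direct distance (effectively what the 'invert depth' knob recovers)."""
    return math.inf if inv_z == 0.0 else 1.0 / inv_z

# A point 6.44809 units from camera survives the round trip.
z = 6.44809
assert abs(inverse_to_distance(distance_to_inverse(z)) - z) < 1e-9
```

Inside Nuke the equivalent is just an Expression node computing `1/depth.Z` (guarding against zero), which appears to be what the "invert depth" checkbox toggles.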



On Wed, Dec 4, 2013 at 2:45 PM, Deke Kincaid  wrote:

> 1 is near though there is an invert depth option in depth to points.
>
> >>Am I wrong?
>
> Nuke is 32 bit floating point so it shouldn't matter that much as long as
> the original image was a float.   Precision would only matter if you were
> working in a 8/16 bit int box.
>
> -deke
>
> On Wednesday, December 4, 2013, Ron Ganbar wrote:
>
>> 0 is near?
>>
>> Normalised values aren't precise, though. They're very subjective to what
>> was decided in the render. It won't create a very precise point cloud.
>> Am I wrong?
>>
>>
>>
>> Ron Ganbar
>> email: ron...@gmail.com
>> tel: +44 (0)7968 007 309 [UK]
>>  +972 (0)54 255 9765 [Israel]
>> url: http://ronganbar.wordpress.com/
>>
>>
>> On Wed, Dec 4, 2013 at 11:51 PM, Deke Kincaid wrote:
>>
>>> It's just looking for 0-1.  You can do it with an expression node or Jack
>>> has a handy J_Maths node in J_Ops which converts depth maps between
>>> types really easily.
>>>
>>> -deke
>>>
>>>
>>> On Wednesday, December 4, 2013, Ron Ganbar wrote:
>>>
 Hi all,
 for DepthToPoints to work, what kind of depth do I need to feed into
 it? 1/distance? distance? normalised?
 And how do I convert what comes out of Maya's built in Mental Ray so it
 will work?

 Thanks!
 Ron Ganbar
 email: ron...@gmail.com
 tel: +44 (0)7968 007 309 [UK]
  +972 (0)54 255 9765 [Israel]
 url: http://ronganbar.wordpress.com/

>>>
>>>
>>> --
>>> --
>>> Deke Kincaid
>>> Creative Specialist
>>> The Foundry
>>> Skype: dekekincaid
>>> Tel: (310) 399 4555 - Mobile: (310) 883 4313
>>> Web: www.thefoundry.co.uk
>>> Email: d...@thefoundry.co.uk
>>>
>>>
>>> ___
>>> Nuke-users mailing list
>>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>>
>>
>>
>
> --
> --
> Deke Kincaid
> Creative Specialist
> The Foundry
> Skype: dekekincaid
> Tel: (310) 399 4555 - Mobile: (310) 883 4313
> Web: www.thefoundry.co.uk
> Email: d...@thefoundry.co.uk
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] DepthToPoints

2013-12-04 Thread Ivan Busquets
Hmm, I don't think it's expecting 0-1.

You have the choice between feeding it direct depth values or 1/depth, just
like in DepthToPosition.

There's an "invert_depth" checkbox to toggle between them, and the tooltip
will tell you which one is which.


On Wed, Dec 4, 2013 at 1:54 PM, Ron Ganbar  wrote:

> 0 is near?
>
> Normalised values aren't precise, though. They're very subjective to what
> was decided in the render. It won't create a very precise point cloud.
> Am I wrong?
>
>
>
> Ron Ganbar
> email: ron...@gmail.com
> tel: +44 (0)7968 007 309 [UK]
>  +972 (0)54 255 9765 [Israel]
> url: http://ronganbar.wordpress.com/
>
>
> On Wed, Dec 4, 2013 at 11:51 PM, Deke Kincaid wrote:
>
>> It's just looking for 0-1.  You can do it with an expression node or Jack
>> has a handy J_Maths node in J_Ops which converts depth maps between
>> types really easily.
>>
>> -deke
>>
>>
>> On Wednesday, December 4, 2013, Ron Ganbar wrote:
>>
>>> Hi all,
>>> for DepthToPoints to work, what kind of depth do I need to feed into it?
>>> 1/distance? distance? normalised?
>>> And how do I convert what comes out of Maya's built in Mental Ray so it
>>> will work?
>>>
>>> Thanks!
>>> Ron Ganbar
>>> email: ron...@gmail.com
>>> tel: +44 (0)7968 007 309 [UK]
>>>  +972 (0)54 255 9765 [Israel]
>>> url: http://ronganbar.wordpress.com/
>>>
>>
>>
>> --
>> --
>> Deke Kincaid
>> Creative Specialist
>> The Foundry
>> Skype: dekekincaid
>> Tel: (310) 399 4555 - Mobile: (310) 883 4313
>> Web: www.thefoundry.co.uk
>> Email: d...@thefoundry.co.uk
>>
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] Calculate Distorted Pixel Position from STMap Sampled Pixel Values

2013-09-02 Thread Ivan Busquets
No worries!
Glad it worked out for you.



On Mon, Sep 2, 2013 at 12:14 AM, Jed Smith  wrote:

>  Ivan, thank you very much for your helpful wisdom! :)
>
> I think I finally intricately understand how uv maps work. With your
> awesome technique for inverting them, my gizmo seems to be working as
> expected, and with much better accuracy.
>
> It is updated here if anyone wants to take a look:
> https://gist.github.com/jedypod/6302723
>
> On Sunday, 2013-09-01 at 5:47p, Ivan Busquets wrote:
>
> Hi Jed,
>
> I believe the problem in your approach is in the assumption of how STMap
> works.
> STMap does a lookup for each pixel in the output image to find the
> coordinates in the input it needs to sample from to produce the final pixel
> value. In other words, it does not "push" pixels from the input into
> discrete output locations, but "pulls" pixels from the input for each pixel
> in the output. It's the difference between forward and backward warping.
>
> To do what you're trying to do you would effectively need a UV map that's
> the inverse of that distortion. You can get an approximation of such an
> inverse UV map by displacing the vertices of a card, which would be a way
> of forward-warping. The only caveat is that you'll need a card that has as
> many subdivisions/vertices as possible, since the distortion values will be
> interpolated between vertices. That's why it's only an approximation at
> best. But given enough subdivisions, it should get you close enough.
>
> Once you have that inverse UV map, your distorted XY coordinate should
> just be the UV value at your undistorted coordinate, multiplied by width
> and height. (script attached as an example).
>
> P.S. The other "minor" thing you might want to look into is the way you're
> generating your UV map. The expression you're using "x/width" and
> "y/height" will result in a UV map that displaces the image by half a pixel
> from scratch when fed into an STMap. STMap samples pixels from the input at
> their centre (x+0.5, y+0.5), so for a more accurate UV map you should use U
> = (x+0.5)/width and V = (y+0.5)/height.
>
> Hope that helps.
>
> Cheers,
> Ivan
>
>
>
>
>
>
>
> On Sat, Aug 31, 2013 at 8:18 PM, Jed Smith  wrote:
>
>  Greetings!
>
> *The Problem*
> I am trying to write a tool to distort tracking data through a distortion
> map output by a LensDistortion node. I have everything working, except
> there seems to be inaccuracy in my method of calculating the distorted
> pixel position from the sampled values of the uv distortion map, when
> compared to a visual check.
>
> *My Method*
> Say there is a pixel value at 1792,476 in a 1080p frame. I have a standard
> UV Map, modified with a grade node through a mask, creating a localized
> distortion when this map is plugged into an STMap node. The distorted pixel
> value is 1821,484.
>
> The sampled uv map pixel values at the source pixel location is 0.916767,
> 0.432918 (for width, and height offset, respectively).
>
> I am going on the assumption that the uvmap pixel values represent the
> distorted location of that pixel, with the location being a floating point
> percentage of the frame width and height. So a value of 0.916767, 0.432918
> would basically be telling the STMap node to set the output pixel location
> for this pixel to a value of the difference between the 'unity' uvmap value
> that would result in no transformation and the sampled uv value, multiplied
> by the frame width.
>
> For horizontal distortion offset, this would be:
> (pixel_coordinate_x / frame_width - uvmap_red) * frame_width, or (1792 /
> 1920 - 0.916767) * 1920 = 31.807
> This would result in a distorted horizontal value of 1792+31.807 =
> 1823.807. This value is close, but almost 3 pixels off.
>
> *Help!*
> Can anyone here provide some insight into how exactly the math for the
> STMap works to determine the output location of a pixel from the incoming
> pixel values? I have attached a small nuke script demonstrating what I am
> talking about. See the "Test_STMAP_Distortion_Calculations" node to see
> the output results of the above algorithm.
>
> And if anyone is curious to check out the "DistortTracks" gizmo as it
> exists so far, it lives here <https://gist.github.com/jedypod/6302723>.
>
> Thanks very much!
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
>
> 

Re: [Nuke-users] Calculate Distorted Pixel Position from STMap Sampled Pixel Values

2013-09-01 Thread Ivan Busquets
Hi Jed,

I believe the problem in your approach is in the assumption of how STMap
works.
STMap does a lookup for each pixel in the output image to find the
coordinates in the input it needs to sample from to produce the final pixel
value. In other words, it does not "push" pixels from the input into
discrete output locations, but "pulls" pixels from the input for each pixel
in the output. It's the difference between forward and backward warping.

To do what you're trying to do you would effectively need a UV map that's
the inverse of that distortion. You can get an approximation of such an
inverse UV map by displacing the vertices of a card, which would be a way
of forward-warping. The only caveat is that you'll need a card that has as
many subdivisions/vertices as possible, since the distortion values will be
interpolated between vertices. That's why it's only an approximation at
best. But given enough subdivisions, it should get you close enough.

Once you have that inverse UV map, your distorted XY coordinate should just
be the UV value at your undistorted coordinate, multiplied by width and
height. (script attached as an example).

P.S. The other "minor" thing you might want to look into is the way you're
generating your UV map. The expression you're using "x/width" and
"y/height" will result in a UV map that displaces the image by half a pixel
from scratch when fed into an STMap. STMap samples pixels from the input at
their centre (x+0.5, y+0.5), so for a more accurate UV map you should use U
= (x+0.5)/width and V = (y+0.5)/height.

Hope that helps.

Cheers,
Ivan
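The pull-style lookup and the half-pixel centring described above can be sketched in a few lines of plain Python. This uses nearest-neighbour sampling on a tiny grid purely to illustrate the idea; it is not Nuke's actual filtering, and the function names are made up:

```python
def identity_uv(width, height):
    # Pixel centres: U = (x+0.5)/width, V = (y+0.5)/height, as described above.
    return [[((x + 0.5) / width, (y + 0.5) / height)
             for x in range(width)] for y in range(height)]

def stmap(src, uv, width, height):
    # For each OUTPUT pixel, pull the input pixel whose centre the UV points at
    # (backward warp), rather than pushing input pixels to output locations.
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            u, v = uv[y][x]
            sx = min(width - 1, max(0, int(u * width)))   # nearest sample in x
            sy = min(height - 1, max(0, int(v * height)))  # nearest sample in y
            row.append(src[sy][sx])
        out.append(row)
    return out

w, h = 4, 3
src = [[10 * y + x for x in range(w)] for y in range(h)]
# With half-pixel-centred UVs the lookup is the identity; x/width alone
# would land between pixel centres and shift the image by half a pixel.
assert stmap(src, identity_uv(w, h), w, h) == src
```

The identity check is the point: only the `(x+0.5)/width` form maps each output pixel back onto its own centre.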







On Sat, Aug 31, 2013 at 8:18 PM, Jed Smith  wrote:

>  Greetings!
>
> *The Problem*
> I am trying to write a tool to distort tracking data through a distortion
> map output by a LensDistortion node. I have everything working, except
> there seems to be inaccuracy in my method of calculating the distorted
> pixel position from the sampled values of the uv distortion map, when
> compared to a visual check.
>
> *My Method*
> Say there is a pixel value at 1792,476 in a 1080p frame. I have a standard
> UV Map, modified with a grade node through a mask, creating a localized
> distortion when this map is plugged into an STMap node. The distorted pixel
> value is 1821,484.
>
> The sampled uv map pixel values at the source pixel location is 0.916767,
> 0.432918 (for width, and height offset, respectively).
>
> I am going on the assumption that the uvmap pixel values represent the
> distorted location of that pixel, with the location being a floating point
> percentage of the frame width and height. So a value of 0.916767, 0.432918
> would basically be telling the STMap node to set the output pixel location
> for this pixel to a value of the difference between the 'unity' uvmap value
> that would result in no transformation and the sampled uv value, multiplied
> by the frame width.
>
> For horizontal distortion offset, this would be:
> (pixel_coordinate_x / frame_width - uvmap_red) * frame_width, or (1792 /
> 1920 - 0.916767) * 1920 = 31.807
> This would result in a distorted horizontal value of 1792+31.807 =
> 1823.807. This value is close, but almost 3 pixels off.
>
> *Help!*
> Can anyone here provide some insight into how exactly the math for the
> STMap works to determine the output location of a pixel from the incoming
> pixel values? I have attached a small nuke script demonstrating what I am
> talking about. See the "Test_STMAP_Distortion_Calculations" node to see
> the output results of the above algorithm.
>
> And if anyone is curious to check out the "DistortTracks" gizmo as it
> exists so far, it lives here .
>
> Thanks very much!
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>


inverse_UV.nk
Description: Binary data

Re: [Nuke-users] Roto Bezier motion to Vector blur

2013-08-15 Thread Ivan Busquets
David,

Not sure if this will give you exactly what you're after, but this is a
technique I've used in the past to replicate motionblur from the motion of
a shape.

One way to get motion vectors that represent the motion of your shape is to
do a splinewarp that warps between your shape and the same shape offset by
1 frame. Then apply this SplineWarp to a coordinates map (not normalized,
but actual pixel coordinates), substract the original map from the
SplineWarp result, and use that as your motion vector information inside
VectorBlur.

Attached is an example script. Hope it helps.
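The same idea in miniature, in plain Python: warp a map of raw pixel coordinates through the frame-to-frame motion, subtract the unwarped map, and the residual is a per-pixel motion vector of the kind VectorBlur consumes. Here `warp` is only a stand-in for the SplineWarp between the shape at the current and previous frame, and the shift amount is invented for illustration:

```python
def warp(x, y):
    # Stand-in for the SplineWarp between the shape at frame t and frame t-1:
    # a uniform 2-pixel shift in x, purely for illustration.
    return x + 2.0, y

width, height = 4, 2
# Coordinate map in actual pixel coordinates (not normalized), as in the technique.
coords = [[(float(x), float(y)) for x in range(width)] for y in range(height)]
# Motion vector = warped coordinate minus original coordinate, per pixel.
vectors = [[(warp(x, y)[0] - x, warp(x, y)[1] - y) for (x, y) in row]
           for row in coords]
# Every pixel reports the 2-pixel x motion introduced by the warp.
assert all(v == (2.0, 0.0) for row in vectors for v in row)
```

A real shape warp would of course produce spatially varying vectors, but the subtraction step is identical.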





On Thu, Aug 15, 2013 at 11:26 AM, Julik Tarkhanov
wrote:

> I don't think you can do that with roto, since the only place where it can
> interpolate motion is at the edges (but not inside the shapes). So in
> theory you could get a kind of a vector map per pixel but that map would be
> limited to where the roto edge is located, since a roto shape is
> post-filled. You could try to extract (guess) the vectors from roto motion
> but this is prone to artifacts. If you want to add moblur inside of an area
> I would use Kronos with different plates used as source and warp.
>
> On 15 aug. 2013, at 19:39, "David Yu"  wrote:
>
> > I turn on motion blur to see the effect which i want to use as the
> "vectors" that will drive the vector blur
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>


roto_to_vectorblur_test.nk
Description: Binary data

Re: [Nuke-users] Casting shadows

2013-08-01 Thread Ivan Busquets
Hey Mark,

Have you tried feeding a black RGBA constant into your casting geometry?




On Thu, Aug 1, 2013 at 8:54 AM, Mark Nettleton  wrote:

> I'm trying to render cast shadows in Nuke, from a creature (alembic), onto
> some ground geom (alembic). I can get the creature casting shadows, but the
> creature is visible, which I don't want. I only want the shadow visible.
>
> Is there any way to turn off the creature visibility to camera, while
> keeping the cast shadows?
>
> thanks
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk,
> http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] Nuke 7.0v3 or 7.0v1 ?

2013-04-22 Thread Ivan Busquets
>> Anyone have the bug number for that regression or the file browser issue?

Sounds very similar to Bug ID 30168, with the file browser not identifying
sequences of certain file types (like dtex).
That one got resolved during the beta, but it sounds like having no file
extension at all leads to the same behaviour.




On Mon, Apr 22, 2013 at 2:22 PM, Deke Kincaid  wrote:

> Assist us the Roto/paint version of nuke you get 2 copies of with NukeX.
>  Though as you said you may have so many lics it doesn't matter.
>
>
> http://www.thefoundry.co.uk/articles/2013/03/18/496/assist-tool-for-nukex-available-now/
>
> Easier to see what nodes are enabled in the release notes.
>
> http://thefoundry.s3.amazonaws.com/products/nuke/releases/7.0v6/Nuke_7.0v6_ReleaseNotes.pdf
>
> Anyone have the bug number for that regression or the file browser issue?
>
> -deke
>
> On Monday, April 22, 2013, wrote:
>
>> hey Deke,
>>
>> refresh my memory on the assist seats ?  might that help such a large
>> group when we already have such a large base of licenses ?
>>
>> and what say you regarding any prominent 7.0v6 issues ?
>>
>> Lastly, do you know if the file browser issue with displaying numbered
>> files as sequences is still, an issue ?  (recall we have a proprietary
>> image format that only has numbers as filenames, no prefix/suffix at
>> all)... as of Nuke7 the file browser changed.  This would cause all
>> artists to manually set the start/end frames explicitly in the read node
>> as a result.
>>
>> thx,
>> Ari
>> Blue Sky
>>
>>
>> > The big thing your missing out on 7.0v6 is the extra Assist seats for
>> any
>> > NukeX lics you have.  I'm not sure if that matters much to you guys
>> > though.
>> >
>> > -deke
>> >
>> >
>> >
>> > On Monday, April 22, 2013, wrote:
>> >
>> >> so we saying 7.0v5 is the latest 'dependable' version ?
>> >> anyone have a particular issue with 7.0v5 ?  last call for us.
>> >>
>> >> thx for the notes thus far... big help
>> >>
>> >> Ari
>> >> Blue Sky
>> >>
>> >>
>> >>
>> >> > No, you’re right. 7.0v6 also has at least one regression as well,
>> >> though
>> >> > it will only cause problems for people doing specific things with
>> >> Python.
>> >> > At this point, 7.0v5 seems to the best choice for a 7.0 release.
>> >> >
>> >> > -Nathan
>> >> >
>> >> >
>> >> >
>> >> > From: Richard Bobo
>> >> > Sent: Friday, April 19, 2013 1:06 PM
>> >> > To: Nuke user discussion
>> >> > Subject: Re: [Nuke-users] Nuke 7.0v3 or 7.0v1 ?
>> >> >
>> >> > Ari,
>> >> >
>> >> >
>> >> > I believe that 7.0v3 was pulled from distribution with a semi-serious
>> >> bug.
>> >> > You should go to 7.0v4 or higher. Someone please correct me if I'm
>> >> > wrong...
>> >> >
>> >> >
>> >> > Rich
>> >> >
>> >> >
>> >> >
>> >> > Rich Bobo
>> >> > Senior VFX Compositor
>> >> > Armstrong-White
>> >> > http://armstrong-white.com/
>> >> >
>> >> > Email:  richb...@mac.com 
>> >> > Mobile:  (248) 840-2665
>> >> > Web:  http://richbobo.com/
>> >> >
>> >> >
>> >> > "A man should never be ashamed to own that he has been in the wrong,
>> >> which
>> >> > is but saying that he is wiser today than he was yesterday."
>> >> > - Alexander Pope (1688-1744) English Poet
>> >> >
>> >> >
>> >> >
>> >> >
>> >> >
>> >> > On Apr 19, 2013, at 3:54 PM, a...@curvstudios.com 
>> wrote:
>> >> >
>> >> >
>> >> >   We're considering rolling out Nuke 7.0v3, but I'm curious if there
>> >> was
>> >> >   anything major which makes 7.0v1 a better choice for now ?
>> >> >
>> >> >   Also, has there been a fix to the file browser's change for
>> numbered
>> >> > files ?
>> >> >   ie. our proprietary file format has no prefix nor suffix, only
>> >> numbered
>> >> >   frames. As of Nuke7's release, the file browser won't display
>> >> numbered
>> >> >   files (sans prefix/suffix) as singular sequences.  This presents a
>> >> major
>> >> >   workflow inconvenience where the comper has to explicity set the
>> >> frame
>> >> >   range in and out in every read node.  Multiply that times over
>> 1,700
>> >> > shots
>> >> >   in a film... and oy.
>> >> >
>> >> >   thx,
>> >> >   Ari
>> >> >   Blue Sky
>> >> >
>> >> >   ___
>> >> >   Nuke-users mailing list
>> >> >   Nuke-users@support.thefoundry.co.uk ,
>> >> http://forums.thefoundry.co.uk/
>> >> >
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>> >> >
>> >> >
>> >> >
>> >> >
>> >> >
>> >> >
>> >>
>> 
>> >> > ___
>> >> > Nuke-users mailing list
>> >> > Nuke-users@support.thefoundry.co.uk ,
>> >> http://forums.thefoundry.co.uk/
>> >> >
>> >>
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users___
>> >> > Nuke-users mailing list
>> >> > Nuke-users@support.thefoundry.co.uk ,
>> >> http://forums.thefoundry.co.uk/
>> >> > http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: AW: AW: [Nuke-users] Alexa Artifacts

2013-03-29 Thread Ivan Busquets
I'm with Jonathan in that this looks like a resizing filter.

You said it's most obvious on even backgrounds. To me that's yet another
sign that it's a resizing artifact.

What resolution are your source files? I don't know the specific details,
but I believe you can only get 1920x1080 ProRes quicktimes from the Alexa
(or 2K in newer firmwares).
The Alexa sensor being 2880x1620, there has to be some kind of downsampling.




On Fri, Mar 29, 2013 at 9:43 AM, Howard Jones wrote:

> It was the same here - shot directly in ProRes 444.
> No idea what though
>
>
> Howard
>
>   --
> *From:* Igor Majdandzic 
> *To:* 'Nuke user discussion' 
> *Sent:* Friday, 29 March 2013, 16:29
> *Subject:* AW: AW: [Nuke-users] Alexa Artifacts
>
> Do you know what caused them?
>
> --
> igor majdandzic
> compositor |
> i...@badgerfx.com
> BadgerFX | www.badgerfx.com
>
> *Von:* nuke-users-boun...@support.thefoundry.co.uk [mailto:
> nuke-users-boun...@support.thefoundry.co.uk] *Im Auftrag von *Magno Borgo
> *Gesendet:* Freitag, 29. März 2013 14:06
> *An:* Nuke user discussion
> *Betreff:* Re: AW: [Nuke-users] Alexa Artifacts
>
> I've seen the exactly same artifacts when working on a film shot on Alexa.
> These are nasty, especially when keying... same issue, shot directly in
> ProRes.
>
> Magno.
>
>
>
>
> We've been having some problems with noise on some footages from Alexa,
> but nothing remotely near to that.
>
> diogo
>
> On Wed, Mar 27, 2013 at 9:50 PM, Jonathan Egstad 
> wrote:
> No idea, but it looks an awful lot like filtering from a slight resize
> operation.
>
> -jonathan
>
> On Mar 27, 2013, at 5:29 PM, "Igor Majdandzic" 
> wrote:
>
> do you mean in camera? because that was from the original qt footage
>
> --
> igor majdandzic
> compositor |
> i...@badgerfx.com
> BadgerFX | www.badgerfx.com
>
> *Von:* nuke-users-boun...@support.thefoundry.co.uk [mailto:
> nuke-users-boun...@support.thefoundry.co.uk] *Im Auftrag von *Jonathan
> Egstad
> *Gesendet:* Donnerstag, 28. März 2013 01:10
> *An:* Nuke user discussion
> *Cc:* Nuke user discussion
> *Betreff:* Re: [Nuke-users] Alexa Artifacts
>
> Looks like a very  slight resize was done.
> -jonathan
>
> On Mar 27, 2013, at 4:56 PM, "Igor Majdandzic" 
> wrote:
>
> Hey guys,
> we got footage from a shoot with Alexa being the camera. It was shot in
> ProRes 444. The problem is: the picture has some artifacts, which confuses
> me given the codec is 444. I attached some images which show some of the grain
> patterns. Is this normal?
>
> thx,
> Igor
>
>
>
> --
> igor majdandzic
> compositor |
> i...@badgerfx.com
> BadgerFX | www.badgerfx.com
>
> *Von:* nuke-users-boun...@support.thefoundry.co.uk [
> mailto:nuke-users-boun...@support.thefoundry.co.uk]
> *Im Auftrag von *Deke Kincaid
> *Gesendet:* Mittwoch, 27. März 2013 23:47
> *An:* Nuke user discussion
> *Betreff:* Re: [Nuke-users] FusionI/O and Nuke
>
> Hi Michael
> I'm actually testing this right now as Fusionio just gave us a bunch of
> them.  Early tests reveal that with dpx it's awesome but with openexr zip
> compressed file it it is spending more time with compression, not sure if
> it is cpu bound or what(needs more study but its slower).  Openexr
> uncompressed files though are considerably superfast but of course the
> issue is that it is 18 meg a frame.  These are single layer rgba exr files.
>
> -
> Deke Kincaid
> Creative Specialist
> The Foundry
> Mobile: (310) 883 4313
> Tel: (310) 399 4555 - Fax: (310) 450 4516
>
> The Foundry Visionmongers Ltd.
> Registered in England and Wales No: 4642027
>
> On Wed, Mar 27, 2013 at 3:26 PM, Michael Garrett 
> wrote:
> I'm evaluating one of these at the moment and am interested to know if
> others have got it working with Nuke nicely, meaning, have you been able to
> really utilise the insane bandwidth of this card to massively accelerate
> any part of your day to day compositing?
>
> So far, I've found it has no benefit when localising all Reads in a
> somewhat heavy comp, or even playing back a sequence of exr's or deep
> files, compared to localised sequences on a 10K Raptor drive also in my
> workstation - hopefully I'm missing something big though, this is day one
> after all.
>
> There may be real tangible benefits to putting the Nuke cache on it though
> - I'll see how it goes.
>
> I'm also guessing that as gpu processing becomes more prevalent in Nuke
> that we will see a real speed advantage handing data from a card like this
> straight to the gpu.
>
> Thanks,
> Michael
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
>
> 
>
> 
>
> 
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
> 

Re: [Nuke-users] 32 bit float -> 16 bit half float expression?

2013-02-17 Thread Ivan Busquets
I'm not sure that Posterize would give the desired result in this case,
since it can still produce values that are not representable as a
half-precision float.

You'd probably want to re-create a 16-bit half-float value by breaking the
32-bit float into its exponent and mantissa (significand) values, and then
truncating those.

The following expression might work (not thoroughly tested, though)

set cut_paste_input [stack 0]
version 7.0 v4
push $cut_paste_input
add_layer {alpha alpha.red alpha.beta}
Expression {
 expr0 "exponent(r) > 16 ? r*inf : ldexp(rint(mantissa(r)*(2**11))/(2**11), exponent(r))"
 expr1 "exponent(g) > 16 ? g*inf : ldexp(rint(mantissa(g)*(2**11))/(2**11), exponent(g))"
 expr2 "exponent(b) > 16 ? b*inf : ldexp(rint(mantissa(b)*(2**11))/(2**11), exponent(b))"
 channel3 alpha
 name Float_To_Half
 selected true
 xpos 292
 ypos 226
}

Hope that helps.

PS. Mark, hope you're doing well! :)
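For anyone wanting to sanity-check this outside Nuke: the same exponent/mantissa truncation can be written with Python's frexp/ldexp and compared against a true IEEE-754 half round-trip via struct's 'e' format. This is a sketch covering the normal range only (subnormals and the very top of the half range are not handled, same as the expression above), and the function names are invented for the example:

```python
import math
import struct

def float_to_half(value):
    """Truncate a float to half precision by rounding its significand,
    mirroring the Nuke expression above (normal range only)."""
    m, e = math.frexp(value)            # value = m * 2**e, with 0.5 <= |m| < 1
    if e > 16:                          # beyond half's exponent range -> +/-inf
        return math.copysign(math.inf, value)
    # Round the significand to 11 bits (10 stored + 1 implicit), like rint(...).
    # Python's round() is round-half-to-even, matching IEEE rounding.
    return math.ldexp(round(m * 2048) / 2048.0, e)

def half_roundtrip(value):
    """Reference: a true IEEE-754 binary16 round-trip via struct's 'e' format."""
    return struct.unpack('<e', struct.pack('<e', value))[0]

# The truncation agrees with the real half conversion for typical values.
for v in (0.1, 1.0, 3.14159, 123.456):
    assert float_to_half(v) == half_roundtrip(v), v
assert float_to_half(0.1) == 0.0999755859375   # the classic half-precision 0.1
```

The `0.0999755859375` result shows why a half-float ST map on disk won't exactly match a 32-bit map generated in Nuke unless both go through the same quantisation.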




On Sun, Feb 17, 2013 at 5:35 AM, Shailendra Pandey wrote:

> well actually 281474976710656 -1
> 281474976710655
> Hope that helps
>
>
>
> On Sun, Feb 17, 2013 at 9:25 PM, Shailendra Pandey wrote:
>
>> Hi Mark
>>
>> You can use a posterize node
>> with a value of 281474976710656
>> which is 2 to the power(16*3)
>>
>>
>>
>> Cheers
>> Shail
>>
>> On Sun, Feb 17, 2013 at 6:25 AM, Mark Nettleton wrote:
>>
>>> **
>>> I'm generating an ST map within Nuke, that needs to line up with 16 bit
>>> half float ST map images on disk.
>>>
>>> Is there a way I can generate 16bit half float values within nuke? Or
>>> convert 32 bit values to 16 bit half? (without writing to disk and reading
>>> back)
>>>
>>> Thanks
>>>
>>> ___
>>> Nuke-users mailing list
>>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>>
>>
>>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] light linking as maya

2013-02-11 Thread Ivan Busquets
Excluding an object from being shaded by lights in the scene can be done
using a standard shader (basicmat, etc) and setting both the diffuse and
specular rates to 0.

However, having different objects affected by different lights within the
same scene is not possible as far as I know. (there should be a feature
request for that)



On Mon, Feb 11, 2013 at 4:32 PM, Gustaf Nilsson wrote:

> yeah, no, it doesnt work like that. only solution i can think of right off
> my toes is to have two scanline renderers
>
>
> On Mon, Feb 11, 2013 at 10:05 PM, Marten Blumen  wrote:
>
>> I couldn't get that to work. what am I missing?
>> [image: Inline images 1]
>>
>>
>> On 12 February 2013 09:45, Randy Little  wrote:
>>
>>> just plug that card and that light into its own scene and the plug that
>>> scene into your next scene with your other card.
>>>
>>>
>>> Randy S. Little
>>> http://www.rslittle.com 
>>> http://www.imdb.com/name/nm2325729/
>>>
>>>
>>>
>>>
>>> On Mon, Feb 11, 2013 at 12:14 PM, Marten Blumen wrote:
>>>
 not that I know of- you can use FillMat to turn shading off for each
 card on an ad hoc basis.




 On 9 February 2013 18:41, nandkishor19 <
 nuke-users-re...@thefoundry.co.uk> wrote:

> **
> I have two card in my scene. I am adding one light. This light should
> effect only on one card not to the other and i am using only one scene. Is
> it possible to do light linking technique in nuke?
> Thanks
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>


 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

>>>
>>>
>>> ___
>>> Nuke-users mailing list
>>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>>
>>
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>
>
>
> --
> ■ ■ ■ ■ ■ ■ ■ ■ ■ ■
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] DeepToPoints bug?

2013-02-11 Thread Ivan Busquets
Hi Michael,

Hope all is going well. Can't verify this as I'm not in front of Nuke, but
that does sound like a bug to me (although I've never observed such
behaviour)

Patrick, even if the near and far clipping planes contribute to the
projection matrix, they would not alter it in such a way that projecting a
point on to a given depth would give different results.
The direction of the output vector should be the same (barring minor
rounding errors), so the point along that vector that lies at a certain
depth from camera will also remain the same.
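A quick numerical check of that claim, sketched in numpy with an OpenGL-style perspective matrix (an assumption for illustration; this is not Nuke's internal matrix layout): changing the clip planes changes the matrix, but un-projecting the same screen point through the inverse matrix yields the same ray direction, so the point reconstructed at a given depth is unchanged.

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    # OpenGL-style perspective projection matrix.
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0]])

def unproject_dir(P, ndc_xy):
    # Un-project an NDC point at an arbitrary depth, normalize to a ray.
    p = np.linalg.inv(P) @ np.array([ndc_xy[0], ndc_xy[1], 0.5, 1.0])
    v = p[:3] / p[3]
    return v / np.linalg.norm(v)

# Same lens, same screen point, wildly different clipping planes:
d1 = unproject_dir(perspective(40, 1.85, 0.1, 1000), (0.3, 0.2))
d2 = unproject_dir(perspective(40, 1.85, 5.0, 50), (0.3, 0.2))
# d1 and d2 agree to rounding error, so the reconstructed world point
# at any given depth is identical for both cameras.
```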




On Mon, Feb 11, 2013 at 4:46 PM, Patrick Heinen <
mailingli...@patrickheinen.com> wrote:

> Hey Michael,
>
> a bug I know of in combination with the DeepToPoints is that if the
> bounding box is not equal to the format, that can cause weird behaviour,
> similar to the stretching you mention. That the near clipping plane changes
> the position in 3d space seems to actually be no bug, but a normal
> behaviour. The clipping planes influence the camera projection matrix, and
> thus it is normal getting different result for the DeepToPoints, as it
> multiplies your Vector with the inverse of the projection matrix to get the
> position in world space.
>
> So export your camera from your 3d application or use the information
> rendered to the metadata to build your cam and don't change the settings of
> it.
> Hope that helps, if you need it more detailed I can explain it further
> tomorrow.
> I'm actually using vrst files aswell ;)
>
> cheers
> Patrick
>
> Am 11.02.2013 um 20:07 schrieb Michael Garrett:
>
> > Hey,
> >
> > I've found on Nuke 6.3v8 on Windows that the near clipping plane of a
> Camera plugged into DeepToPoints will affect how the point cloud is cast.
> The depth of samples gets thrown off, ie, they go non-linear and stretch
> out, when the near clipping plane is reduced, as if you've done a gamma
> curve on the deep data.
> >
> > I want to try this in 6.3v9 and 7.x as soon as I can, to see if it's a
> bug that's been fixed. We're using .vrst files in this case, not that it
> should make any difference.
> >
> > Typically I just re-checked the scene and it's working fine now...
> >
> > Has anyone else experienced this?
> >
> > Thanks,
> > Michael
> > ___
> > Nuke-users mailing list
> > Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> > http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] point cloud from deep data where two objects intersect

2012-11-25 Thread Ivan Busquets
I think I've seen this behaviour before, where (I assume due to a precision
error) two samples are not fully held out from each other. I imagine that,
even if the FG sample shows an opacity of 1, it might be something like
0.999. In the past, I've had some success by either forcing all FG
samples to have an alpha of 1 (in a deep expression), or multiplying their
alpha up a bit until they fully occlude the samples in the BG.

Not the cleanest, but I think that would help in your example setup.

Cheers,
Ivan
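A minimal numeric sketch of the artifact and the workaround (plain Python, with made-up numbers): a FG alpha fractionally short of 1 lets a sliver of the BG sample survive the holdout, and either clamping or multiplying the alpha up removes the leak.

```python
# BG contribution that survives being held out by a FG deep sample.
def held_out_bg(fg_alpha, bg_value):
    return bg_value * (1.0 - fg_alpha)

leak = held_out_bg(0.99997, 1.0)                     # tiny non-zero leak
clean = held_out_bg(min(1.0, 0.99997 * 1.001), 1.0)  # alpha multiplied up
# leak > 0 is what shows up as stray samples at the intersection;
# clean == 0 means the BG sample is fully occluded.
```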


On Sun, Nov 25, 2012 at 11:55 AM, Frank Rueter wrote:

> isn't that the same approach with less control?
>
>
> On 26/11/12 8:05 AM, Ari Rubenstein wrote:
>
>> How about forgetting the deep data approach , and instead doing a
>> Zintersect and feeding that into Nathan's old PixelGeo tool to generate
>> just the geo from the intersection... then use that as the particle emitter?
>>
>> Although the PixelGeo tool hasn't been updated for recent Nuke... but it
>> would be great if NFX plugins got an update ... any news of such I might
>> have missed ... anyone?
>>
>> Ari
>> Blue Sky
>>
>> Sent from my iPhone
>>
>> On Nov 24, 2012, at 9:29 PM, Frank Rueter  wrote:
>>
>>  Hi everybody,
>>>
>>> I'm messing around with deep data to see if I can produce a point cloud
>>> of an object where it intersects the ground plane.
>>> In my test setup, I offset the ground plane a little bit to be closer to
>>> camera, then hold out the cylinder with it (leaving only the bits above the
>>> offset ground). I then hold out the original cylinder with the previous
>>> result to only get the bits underneath the offset ground.
>>> Lastly I hold out the result with a second version of the ground plane
>>> which is offset in the opposite direction, effectively sandwiching the
>>> cylinder in a user defined thickness of the ground.
>>>
>>> The problem I'm currently seeing is that the last hold out still lets
>>> deep samples of the cylinder through even though it should be fully covered
>>> by the ground. I have sent a support mail for this (not sure if it's me or
>>> Nuke).
>>>
>>> Anyway, I'm wondering if people have done something similar and
>>> found a more elegant solution for this?
>>> I'd also be happy to get an intersection map for the ground plane.
>>>
>>> The general idea is to spawn particles from where one object intersects
>>> another, be it from a point cloud or a textured ground.
>>>
>>> Any thoughts?
>>>
>>> Cheers,
>>> frank
>>> 
>>> ___
>>> Nuke-users mailing list
>>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] PositionPass to camera

2012-11-25 Thread Ivan Busquets
+1

Renderman has had a standard and consistent way of writing those out for a
while, but throw other renderers into the mix and it's very hard to write
tools that will work for any given renderer.

It would be great to see this standardized for sure.
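For Howard's original question (quoted below), the retrofit can in principle be done numerically: each sampled pixel of a world-position pass pairs a 2D pixel with a 3D point, and a handful of such pairs pin down the unknown 3x4 camera matrix via a standard DLT least-squares solve. A sketch in numpy, with a made-up camera standing in for the unknown one:

```python
import numpy as np

# Toy camera, 4 units back from the origin (purely illustrative).
P_true = np.array([[800.0, 0.0, 512.0, 0.0],
                   [0.0, 800.0, 270.0, 0.0],
                   [0.0, 0.0, 1.0, 4.0]])

rng = np.random.default_rng(1)
world = rng.uniform(-1.0, 1.0, (8, 3))        # sampled ppass values
homog = np.hstack([world, np.ones((8, 1))])
proj = homog @ P_true.T
pixels = proj[:, :2] / proj[:, 2:3]           # pixel each sample came from

# Direct Linear Transform: two linear constraints per correspondence.
rows = []
for (X, Y, Z, W), (u, v) in zip(homog, pixels):
    rows.append([X, Y, Z, W, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u*W])
    rows.append([0, 0, 0, 0, X, Y, Z, W, -v*X, -v*Y, -v*Z, -v*W])
P_rec = np.linalg.svd(np.asarray(rows))[2][-1].reshape(3, 4)

# The recovered matrix matches the original up to an overall scale.
scale = (P_true.ravel() @ P_rec.ravel()) / (P_rec.ravel() @ P_rec.ravel())
```

In practice you would still need to factor the recovered matrix into a camera, and noisy or motionblurred ppass samples make the solve much less clean than this exact-data sketch.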



On Wed, Nov 21, 2012 at 5:36 PM, Johannes Saam wrote:

> The header would be such a perfect way to deal with this, if only we could
> have ONE standard to do it. Anyone up for a challange to standardize it?
> Come up with ONE way and persuade renderes to do it all over?
> I am on your side :)
> jo
>
>
> On Mon, Nov 19, 2012 at 1:14 PM, Deke Kincaid wrote:
>
>> Adding on top of what Michael mentioned. If you happen to use Vray then
>> your in luck as the camera matrix is embedded in the metadata.  I can’t
>> remember if it was in house script or not but I have also seen it built
>> into Arnold and Prman exr files.  MR though your probably SOL.
>>
>> -deke
>>
>>
>> On Sun, Nov 18, 2012 at 4:09 PM, Michael Ralla <
>> michaelisbackfromh...@gmail.com> wrote:
>>
>>> In case you are handed exr's, I'd have quick look first if there's
>>> possibly camera data in the header of your xyz/pworld pass. There's a good
>>> chance you might find a translation and projection matrix you might be able
>>> to use to generate a camera that should match the camera the sequence was
>>> rendered with - without having to resort to xyz-pass trickery...
>>>
>>> Cheers, m.
>>>
>>>
>>> On Sun, Nov 18, 2012 at 5:33 AM, Howard Jones 
>>> wrote:
>>>
 Hi

 Has anyone got a method of creating a camera from a world position pass?
 I'm thinking of scenarios where your 3D dept is struggling to create a
 usable camera, either through laziness, lack of knowledge or just
 reluctance.
 You have got a ppass (eventually), so it should be possible to retrofit
 a camera, as all the info is essentially there on a plate.

 Obviously not something a big facility has to deal with, but I have
 found an issue in the past, and possibly next week.

 Cheers
 Howard

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

>>>
>>>
>>> ___
>>> Nuke-users mailing list
>>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>>
>>
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] Re: How to prioritize cards "layering" in nuke?

2012-11-25 Thread Ivan Busquets
If your 2 cards are just a copy of each other, you might be better off
using a MergeMaterial set to over to layer both projections onto the same
card.




On Thu, Nov 22, 2012 at 12:41 AM, itaibachar <
nuke-users-re...@thefoundry.co.uk> wrote:

> **
> Thanks Deke for the great tips!
> though I still get the tearing artifact in my setup for some reason.
> I played with the clipping plane with numbers all over the range, on all
> cameras (the projection cameras and the scene camera) and it always shows
> this tearing.
> Changing the piping order into the scene node seems to work but it is
> difficult to see underneath the tearing.
> what can it be?
> thanks
> itai
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] UV un-wrapping -> re-wrapping issue

2012-11-12 Thread Ivan Busquets
Mmm, not really. Project3D won't change the projection mode of a camera,
whereas ScanlineRender can.

I was just trying to explain that the results stated in the original post
are not due to a caching issue, or Nuke ignoring the second ScanlineRender.
But Nathan is right, there's no need to connect a camera to a
ScanlineRender set to "uv", so that's probably the easiest and safest
approach :)



On Mon, Nov 12, 2012 at 11:05 AM, Marten Blumen  wrote:

> Ya, but it has to be connected to the project3d, which is piped into the
> Scanline set to UV, which has the same effect to the camera I imagine.
>
> >>There is no need to connect a camera to a ScanlineRender that is set to
> 'uv' projection mode. The only thing that >>will affect its output is the
> format of the 'bg' input.
>
>
> On 13 November 2012 08:00, Nathan Rusch  wrote:
>
>>   There is no need to connect a camera to a ScanlineRender that is set
>> to 'uv' projection mode. The only thing that will affect its output is the
>> format of the 'bg' input.
>>
>> -Nathan
>>
>>
>>  *From:* Justin Ball 
>> *Sent:* Sunday, November 11, 2012 2:33 PM
>> *To:* Nuke user discussion 
>> *Subject:* Re: [Nuke-users] UV un-wrapping -> re-wrapping issue
>>
>>
>> Well I should not need to manipulate cameras, just no point in having
>> multiple copies of them in my opinion.
>>
>> Breaking them out did work though.  ran it through the farm and came out
>> properly.  Now I can see all the problems in the matchmove.  :)
>>
>> I do not clone things on principle, or well... lack of trust when in
>> nuke 5 when it did not work and exploded scripts all the time.  I still
>> flinch when I think about that.
>>
>> Thanks for the tip.  It really seems to have solved the issue for now!
>>
>> (not sure why though... seems like too much info would be traveling
>> up-stream to the cameras)
>>
>> Thanks!
>>
>> Justin
>> On Sun, Nov 11, 2012 at 4:23 PM, Marten Blumen  wrote:
>>
>>> I think cloned cameras work also, which keeps everything live.
>>>
>>>
>>> On 12 November 2012 11:14, Justin Ball  wrote:
>>>
 I do have all the scanlines linked to the same camera, because, well,
 why wouldn't I.

 Breaking them up into separate cameras seems to have helped.
 I'm going to run it through the farm now and see if it sticks.  Could
 help with the other issue I was having where one scanline was rendering an
 output that wasn't even in its tree.

 A little annoying.

 Thanks!

 I'll let you know how it goes.

 Justin


 On Sun, Nov 11, 2012 at 4:08 PM, Marten Blumen wrote:

> not sure the exact problem but there is an issue when using the same
> camera for both Scanline Render nodes. Try duplicating the camera and use
> the individual ones for each Scanline.
>
>
>
>
> On 12 November 2012 11:04, Justin Ball  wrote:
>
>> Hey Guys,
>>
>> Having a funky issue here.
>>
>> I'm using the old "Mummy" technique of match moving blood to an
>> actors face.  Using the 3d model, I'm rendering the scanline to UV space,
>> using a grid-warp to touch up the fit and then re-wrapping that animated
>> image to the 3d and rendered through a render camera.
>>
>> I am doing this all in line and Nuke apparently does not like this as
>> when trying to view the re-wrap over the plate at the end, the scanline
>> will render the up-stream output of the UV scanline instead of the 
>> updated
>> information.
>>
>> I'm sure others have had this issue before, but what would be the fix?
>>
>> I'm using 6.3v8 x 64 on windows.
>>
>> I've tried throwing a crop or a grade node in between the 2 scanline
>> render node process to break concatenation, but it does not seem to work.
>> It seems like a caching issue.
>>
>>
>> Any thoughts?
>> --
>> Justin Ball VFX
>> VFX Supervisor, Comp, Effects, Pipeline and more...
>> jus...@justinballvfx.com
>> 818.384.0923
>>
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>



 --
 Justin Ball VFX
 VFX Supervisor, Comp, Effects, Pipeline and more...
 jus...@justinballvfx.com
 818.384.0923


 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

>>>
>>>
>>> ___
>>>

Re: [Nuke-users] UV un-wrapping -> re-wrapping issue

2012-11-12 Thread Ivan Busquets
Hi Justin,

What is your second ScanlineRender's "projection mode" set to?
If it's set to "render camera", and the same camera is also connected to a
ScanlineRender with "projection mode" set to "uv", then I believe that
result is to be expected... sort of :)

At render time, ScanlineRender effectively changes the projection mode of
the camera connected to its camera input.

So, in a flow like the one you described, you have a ScanlineRender set to
"uv", which is therefore changing the projection mode of the Camera to "uv".
Then, if your second ScanlineRender is set to "render camera", it just
looks up the current projection mode of the camera (which has previously
changed to "uv"), and you end up getting a second render in uv space.

To avoid that, as Marten said, you can use separate cameras, or you could
also set the projection mode of the second ScanlineRender to "perspective"
(or at least I believe that should work, although I usually just use a
duplicate camera)

Sounds like you already got it working, but hope it sheds some light on
why it was happening in the first place.

Cheers,
Ivan




On Sun, Nov 11, 2012 at 2:33 PM, Justin Ball  wrote:

>
> Well I should not need to manipulate cameras, just no point in having
> multiple copies of them in my opinion.
>
> Breaking them out did work though.  ran it through the farm and came out
> properly.  Now I can see all the problems in the matchmove.  :)
>
> I do not clone things on principle, or well... lack of trust when in
> nuke 5 when it did not work and exploded scripts all the time.  I still
> flinch when I think about that.
>
> Thanks for the tip.  It really seems to have solved the issue for now!
>
> (not sure why though... seems like too much info would be traveling
> up-stream to the cameras)
>
> Thanks!
>
> Justin
>
> On Sun, Nov 11, 2012 at 4:23 PM, Marten Blumen  wrote:
>
>> I think cloned cameras work also, which keeps everything live.
>>
>>
>> On 12 November 2012 11:14, Justin Ball  wrote:
>>
>>> I do have all the scanlines linked to the same camera, because, well,
>>> why wouldn't I.
>>>
>>> Breaking them up into separate cameras seems to have helped.
>>> I'm going to run it through the farm now and see if it sticks.  Could
>>> help with the other issue I was having where one scanline was rendering an
>>> output that wasn't even in its tree.
>>>
>>> A little annoying.
>>>
>>> Thanks!
>>>
>>> I'll let you know how it goes.
>>>
>>> Justin
>>>
>>>
>>> On Sun, Nov 11, 2012 at 4:08 PM, Marten Blumen  wrote:
>>>
 not sure the exact problem but there is an issue when using the same
 camera for both Scanline Render nodes. Try duplicating the camera and use
 the individual ones for each Scanline.




 On 12 November 2012 11:04, Justin Ball  wrote:

> Hey Guys,
>
>  Having a funky issue here.
>
>  I'm using the old "Mummy" technique of match moving blood to an
> actors face.  Using the 3d model, I'm rendering the scanline to UV space,
> using a grid-warp to touch up the fit and then re-wrapping that animated
> image to the 3d and rendered through a render camera.
>
>  I am doing this all in line and Nuke apparently does not like this as
> when trying to view the re-wrap over the plate at the end, the scanline
> will render the up-stream output of the UV scanline instead of the updated
> information.
>
>  I'm sure others have had this issue before, but what would be the fix?
>
> I'm using 6.3v8 x 64 on windows.
>
> I've tried throwing a crop or a grade node in between the 2 scanline
> render node process to break concatenation, but it does not seem to work.
> It seems like a caching issue.
>
>
> Any thoughts?
> --
> Justin Ball VFX
> VFX Supervisor, Comp, Effects, Pipeline and more...
> jus...@justinballvfx.com
> 818.384.0923
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>


 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

>>>
>>>
>>>
>>> --
>>> Justin Ball VFX
>>> VFX Supervisor, Comp, Effects, Pipeline and more...
>>> jus...@justinballvfx.com
>>> 818.384.0923
>>>
>>>
>>> ___
>>> Nuke-users mailing list
>>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>>
>>
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuk

Re: [Nuke-users] blur in camera

2012-11-06 Thread Ivan Busquets
You can change the "focus diameter" in the multisample tab of the scanline
render, and turn up the samples.
This will introduce a bit of jitter-rotation about the focal point of the
camera for each sample, effectively giving you an in-camera DOF effect.

set cut_paste_input [stack 0]
version 6.3 v8
Camera2 {
 inputs 0
 translate {0 0 0.625871}
 focal_point 1.2
 name Camera1
 selected true
 xpos -1375
 ypos -92
}
Text {
 inputs 0
 message C
 font "\[python nuke.defaultFontPathname()]"
 size 100
 xjustify center
 yjustify center
 box {536 301 1608 904}
 center {1072 603}
 name Text3
 selected true
 xpos -1090
 ypos -402
}
Card2 {
 translate {0.0939678 0 -0.547781}
 control_points {3 3 3 6

1 {-0.5 -0.5 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0.166865 0} 0 {0 0
0} 0 {0 0 0}
1 {0 -0.5 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0.166865
0} 0 {0 0 0} 0 {0.5 0 0}
1 {0.5 -0.5 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0.166865 0} 0 {0 0
0} 0 {1 0 0}
1 {-0.5 0 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0.166716 0} 0 {0
-0.166716 0} 0 {0 0.5 0}
1 {0 0 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0.166716 0} 0
{0 -0.166716 0} 0 {0.5 0.5 0}
1 {0.5 0 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0.166716 0} 0 {0
-0.166716 0} 0 {1 0.5 0}
1 {-0.5 0.5 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0 0} 0 {0 -0.166865
0} 0 {0 1 0}
1 {0 0.5 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0 0} 0 {0
-0.166865 0} 0 {0.5 1 0}
1 {0.5 0.5 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0 0} 0 {0 -0.166865
0} 0 {1 1 0} }
 name Card3
 selected true
 xpos -1090
 ypos -299
}
Text {
 inputs 0
 message B
 font "\[python nuke.defaultFontPathname()]"
 size 100
 xjustify center
 yjustify center
 box {536 301 1608 904}
 center {1072 603}
 name Text2
 selected true
 xpos -1211
 ypos -395
}
Card2 {
 translate {0.0379878 0 -0.280012}
 control_points {3 3 3 6

1 {-0.5 -0.5 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0.166865 0} 0 {0 0
0} 0 {0 0 0}
1 {0 -0.5 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0.166865
0} 0 {0 0 0} 0 {0.5 0 0}
1 {0.5 -0.5 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0.166865 0} 0 {0 0
0} 0 {1 0 0}
1 {-0.5 0 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0.166716 0} 0 {0
-0.166716 0} 0 {0 0.5 0}
1 {0 0 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0.166716 0} 0
{0 -0.166716 0} 0 {0.5 0.5 0}
1 {0.5 0 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0.166716 0} 0 {0
-0.166716 0} 0 {1 0.5 0}
1 {-0.5 0.5 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0 0} 0 {0 -0.166865
0} 0 {0 1 0}
1 {0 0.5 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0 0} 0 {0
-0.166865 0} 0 {0.5 1 0}
1 {0.5 0.5 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0 0} 0 {0 -0.166865
0} 0 {1 1 0} }
 name Card2
 selected true
 xpos -1211
 ypos -300
}
push $cut_paste_input
Text {
 message A
 font "\[python nuke.defaultFontPathname()]"
 size 100
 xjustify center
 yjustify center
 box {536 301 1608 904}
 center {1072 603}
 name Text1
 selected true
 xpos -1347
 ypos -392
}
Card2 {
 control_points {3 3 3 6

1 {-0.5 -0.5 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0.166865 0} 0 {0 0
0} 0 {0 0 0}
1 {0 -0.5 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0.166865
0} 0 {0 0 0} 0 {0.5 0 0}
1 {0.5 -0.5 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0.166865 0} 0 {0 0
0} 0 {1 0 0}
1 {-0.5 0 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0.166716 0} 0 {0
-0.166716 0} 0 {0 0.5 0}
1 {0 0 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0.166716 0} 0
{0 -0.166716 0} 0 {0.5 0.5 0}
1 {0.5 0 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0.166716 0} 0 {0
-0.166716 0} 0 {1 0.5 0}
1 {-0.5 0.5 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0 0} 0 {0 -0.166865
0} 0 {0 1 0}
1 {0 0.5 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0 0} 0 {0
-0.166865 0} 0 {0.5 1 0}
1 {0.5 0.5 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0 0} 0 {0 -0.166865
0} 0 {1 1 0} }
 name Card1
 selected true
 xpos -1347
 ypos -298
}
Scene {
 inputs 3
 name Scene1
 selected true
 xpos -1201
 ypos -212
}
push 0
ScanlineRender {
 inputs 3
 overscan 50
 samples 10
 shutter 0.47916667
 shutteroffset centred
 focal_jitter 0.2
 output_motion_vectors_type off
 MB_channel none
 name ScanlineRender1
 selected true
 xpos -1211
 ypos -72
}
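For reference, a small numpy sketch of why the jitter-rotation above reads as depth of field (a simplified pinhole model, not ScanlineRender's actual sampling): a point at the focal distance is invariant under every jittered pose, while a point at another depth lands somewhere slightly different per sample and averages into blur.

```python
import numpy as np

def project(p_cam, focal=1.0):
    # Simple pinhole projection; camera looks down -z.
    return focal * p_cam[0] / -p_cam[2]

def jittered_cam_coords(p, fp, theta):
    # Camera rotated by theta (around y) about the focal point fp:
    # coordinates of world point p in the jittered camera frame.
    c, s = np.cos(-theta), np.sin(-theta)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return R @ (p - fp) + fp

fp = np.array([0.0, 0.0, -5.0])        # focal point, 5 units out
in_focus = fp                          # sits exactly at the focus distance
off_focus = np.array([0.0, 0.0, -9.0])

thetas = np.linspace(-0.02, 0.02, 9)   # the jitter samples
xs_focus = [project(jittered_cam_coords(in_focus, fp, t)) for t in thetas]
xs_off = [project(jittered_cam_coords(off_focus, fp, t)) for t in thetas]
# xs_focus is constant (no blur at the focus distance);
# xs_off spreads out, and averaging the samples gives the defocus blur.
```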


On Tue, Nov 6, 2012 at 8:12 AM, Gabriel Dinis wrote:

>
> Hi there!
>
> Does anybody know the best way to get defocus blur when we change the
> focal distance in camera?
>
> Thanks in advance!
> Gab
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] Normalize Viewer

2012-10-13 Thread Ivan Busquets
I'd also cast my vote for having this built into the viewer, maybe as a
dropdown under the cliptest/zebra pattern option, for the sake of
convenience.

However, in terms of a more efficient way to do a custom one, there are
ways around having to sample the image (with tcl or python), or having to
pre-analyze, avoiding the notable overhead that goes with it.

Taking Diogo's Dilate Min/Max approach, for example, there's no need to
sample the image afterwards, since you can do all the scaling
and offsetting using regular merges.

Ex:
set cut_paste_input [stack 0]
version 6.3 v8
push $cut_paste_input
Ramp {
 p0 {0 0}
 p1 {2048 0}
 color 1000
 name Ramp2
 label "0 to 1000"
 selected true
 xpos 1112
 ypos -322
}
Group {
 name Normalize
 tile_color 0x7aa9
 selected true
 xpos 1112
 ypos -216
}
 Input {
  inputs 0
  name Input
  xpos -450
  ypos -312
 }
set N18046380 [stack 0]
push $N18046380
 Dilate {
  size {{"-max(input.format.w, input.format.h)"}}
  name Dilate2
  label Min
  xpos -376
  ypos -200
 }
 CopyBBox {
  inputs 2
  name CopyBBox2
  xpos -376
  ypos -76
 }
set N1a498300 [stack 0]
push $N18046380
 Merge2 {
  inputs 2
  operation from
  name Merge4
  xpos -450
  ypos 59
 }
push $N1a498300
push $N18046380
push $N18046380
 Dilate {
  size {{"max(input.format.w, input.format.h)"}}
  name Dilate1
  label Max
  xpos -281
  ypos -323
 }
 CopyBBox {
  inputs 2
  name CopyBBox1
  xpos -281
  ypos -173
 }
 Merge2 {
  inputs 2
  operation from
  name Merge1
  xpos -281
  ypos -76
 }
 Merge2 {
  inputs 2
  operation divide
  name Merge3
  xpos -281
  ypos 59
 }
 Output {
  name Output1
  xpos -281
  ypos 137
 }
end_group
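The group above boils down to a per-channel min/max remap: the huge-radius Dilates find the image min and max, and the merges do the offset and scale. The same operation in numpy, for comparison:

```python
import numpy as np

def normalize(img):
    # Offset and scale so the result spans exactly 0..1.
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)      # beware a flat image (hi == lo)

ramp = np.linspace(0.0, 1000.0, 2048)  # like the "0 to 1000" Ramp above
out = normalize(ramp)                  # now spans 0..1
```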





On Sat, Oct 13, 2012 at 6:22 PM, Frank Rueter  wrote:

>  None of those solutions actually produce what we're after though (some of
> your solutions seem to invert the input).
>
> We need something that can compress the input to a 0-1 range by
> offsetting and scaling based on the image's min and max values (so the
> resulting range is 0-1). You can totally do this with a Grade or Expression
> node and a bit of tcl or python (or the CurveTool if you want to
> pre-compute), but that's not efficient.
>
> I reckon this should be a feature built into the viewer for ease-of-use
> and speed.
>
>
>
>
>
>
> On 14/10/12 1:04 PM, Marten Blumen wrote:
>
> and this group does all channels rgba,depth,motion using the expressions.
> should be quite fast as an input process
>
> set cut_paste_input [stack 0]
> version 7.0 v1b74
> push $cut_paste_input
> Group {
>  name Normalised_channels
>  selected true
>  xpos -526
>  ypos 270
> }
>  Input {
>   inputs 0
>   name Input1
>   xpos -458
>   ypos 189
>  }
>  Expression {
>   expr0 "mantissa (abs(r))"
>   expr1 "mantissa (abs(g))"
>   expr2 "mantissa (abs(b))"
>   channel3 depth
>   expr3 "mantissa (abs(z))"
>   name Normalized_Technical1
>   tile_color 0xb200
>   label rgbz
>   note_font Helvetica
>   xpos -458
>   ypos 229
>  }
>  Expression {
>   channel0 alpha
>   expr0 "mantissa (abs(a))"
>   channel1 {forward.u -forward.v -backward.u forward.u}
>   expr1 "mantissa (abs(u))"
>   channel2 {-forward.u forward.v -backward.u forward.v}
>   expr2 "mantissa (abs(v))"
>   channel3 depth
>   name Normalized_Motion1
>   tile_color 0xb200
>   label "a, motion u & v"
>   note_font Helvetica
>   xpos -458
>   ypos 270
>  }
>  Output {
>   name Output1
>   xpos -458
>   ypos 370
>  }
> end_group
>
>
> On 14 October 2012 11:29, Marten Blumen  wrote:
>
>> And one that looks technical or techni-color!
>>
>>
>> set cut_paste_input [stack 0]
>> version 7.0 v1b74
>> push $cut_paste_input
>> Expression {
>>   expr0 "mantissa (abs(r))"
>>  expr1 "mantissa (abs(g))"
>>  expr2 "mantissa (abs(b))"
>>  channel3 depth
>>  expr3 "mantissa (abs(z))"
>>  name Normalized_Technical
>>  tile_color 0xb200
>>
>>  label "Normalized\n"
>>  note_font Helvetica
>>  selected true
>>   xpos -286
>>  ypos -49
>>
>> }
>>
>>
>> On 14 October 2012 10:46, Marten Blumen  wrote:
>>
>>> This works for rgb & depth. Pop it into the ViewerProcess for normalized
>>> viewing. It seems to work with all values, free polygon cube to anyone who
>>> breaks it ;)
>>>
>>> Who knows the expression node; can we just apply the formula to all the
>>> present channels?
>>>
>>>
>>> set cut_paste_input [stack 0]
>>> version 7.0 v1b74
>>> push $cut_paste_input
>>>  Expression {
>>>  expr0 1/(r+1)/10
>>>  expr1 1/(g+1)/10
>>>  expr2 1/(b+1)/10
>>>   channel3 depth
>>>  expr3 1/(z+1)/10
>>>  name RGBDEPTH
>>>  label "Normalized\n"
>>>  note_font Helvetica
>>>  selected true
>>>  xpos -220
>>>  ypos 50
>>>
>>> }
>>>
>>>
>>> On 14 October 2012 10:24, Marten Blumen  wrote:
>>>
 A normalised expression node:


 set cut_paste_input [stack 0]
 version 7.0 v1b74
 push $cut_paste_input
  Expression {
  expr0 1/(r+1)/10
  expr1 1/(g+1)/10
  expr2 1/(b+1)/10
  name Expression6
  label "Normalize Me\n"
  note_font Helvetica
  selected true
  xpos -306
  

Re: [Nuke-users] Re: Baking camera and axis animation together

2012-10-01 Thread Ivan Busquets
Hi Johannes,

That's a different monster, in my opinion, which is to try and match the
camera (and motionblur) from an existing set of renders. In this case,
matching your renders is probably more important than a) keeping the camera
animatable, and b) having accurate motionblur.

If I had to guess, I'd say there's 2 possible reasons why you're getting a
better match by setting the local_matrix directly to the one in the exr's
metadata, instead of converting that back to Euler rotations:

1. No way to know the original rotation order from a transformation matrix
alone. So, if you're converting to Euler, you'd have to choose an arbitrary
rotation order, which may or may not match the one of the original camera.
Of course, you could have known the correct rotation order beforehand, in
which case this shouldn't be an issue.

2. How is your renderer (the one that produced the exrs) handling
motionblur? Assuming you're using Renderman, is subframe MotionBlur turned
on? Otherwise the renderer might just be doing a linear interpolation
between the camera position/rotation at each integer frame, which is the
same you'll get in Nuke when explicitly setting a "local_matrix".
I'm not an expert in Renderman, though, so someone with more insight might
be able to confirm or deny this.

Having said that, I've used both approaches to re-create a camera from
Renderman metadata, and I've rarely had motionblur issues with one or the
other. The few occasions where I have found differences has always been due
a different rotation order.

Hope this helps.

Cheers,
Ivan
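The motionblur caveat discussed in this thread can also be shown numerically (numpy sketch): linearly interpolating two baked rotation matrices is not the same as rotating through the midpoint angle, and the lerped matrix is not even a rotation any more.

```python
import numpy as np

def ry(a):
    # Rotation around the y axis.
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

m0, m1 = ry(0.0), ry(np.radians(90.0))
lerped = 0.5 * (m0 + m1)            # what interpolating matrix keys gives
true_mid = ry(np.radians(45.0))     # what interpolating rotation keys gives
# The lerped matrix disagrees with the true mid-shutter pose, and its
# determinant is 0.5 rather than 1 -- it squashes the scene, which is
# why baked-matrix cameras can blur differently from the original.
```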




On Mon, Oct 1, 2012 at 12:14 AM, Johannes Hezer wrote:

>  Hi Ivan,
>
> that is interesting with the motionblur.
> In my experience sofar, when getting cameras into nuke via exrs it was
> always best to use the matrix on the camera instead of converting all back
> to euler values in the rotation knobs ?!
> It was more accurate and motionblur issues were gone ?
> I know that is not exactly what you stated but I would be interested if
> you expierenced the same thing with the cam data from exrs?
>
> cheers
>
>
>
> Am 10/1/12 2:22 AM, schrieb Ivan Busquets:
>
> Might be splitting hairs, but since this comes up every now and then, I
> think it's worth noting that there are some important caveats to using the
> local_matrix knob to do that for animated cameras:
>
>  - You lose the ability to tweak the animation afterwards.
>
>  - Inaccurate motionblur. If you bake animated transform knobs into a
> single animated matrix, you're effectively losing the ability to
> interpolate "curved" paths correctly. The matrix values will interpolate
> between frames, but there's no guarantee that the result of that
> interpolation will match the transformation you'd get by interpolating the
> original rotation/translation/scale values.
>
>  Getting back to the use case of the original post, I would recommend
> keeping the two separate transforms when exporting out to Maya.
> For one, if you use the "forced local matrix" approach you'll have no easy
> way to transfer that to Maya (as in, it won't export correctly when writing
> an FBX file, for example).
> But also, if you're planning to refine animation later on, it might be
> easier to do so on the original transformations.
>
>
>
>
> On Sun, Sep 30, 2012 at 2:18 PM, Marten Blumen  wrote:
>
>> Stoked, solved it.  Very easy thanks to the exposed World and Local
>> Matrix's.
>>
>> Attached is a verbose tutorial nuke script; all instructions included in
>> stickies. Hit me up if it needs more work. thanks!
>>
>>
>>
>>
>>
>> On 29 September 2012 16:08, Marten Blumen  wrote:
>>
>>> I just retested my test and it only worked on simple setups. Probably
>>> need expert equations to make it work properly!
>>>
>>>  On 29 September 2012 15:39, C_Sander >> > wrote:
>>>
>>>>  I guess now would be a good time to learn expressions. I'll check
>>>> that out, thanks!
>>>>
>>>>  ___
>>>> Nuke-users mailing list
>>>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>>>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>>>
>>>
>>>
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>
>
>
> ___
> Nuke-u

Re: [Nuke-users] Re: Baking camera and axis animation together

2012-09-30 Thread Ivan Busquets
Sure, you could enter a sub-frame increment when generating/baking the
curve.
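
That baking loop amounts to sampling the curve at fractional frames (a pure-Python sketch with a hypothetical curve, standing in for sampling a knob inside Nuke):

```python
# Bake sub-frame samples of an animated value. The quadratic curve below is
# made up; in Nuke you'd be sampling a knob at each fractional frame instead.
def bake(curve, first, last, step=0.25):
    samples = []
    f = first
    while f <= last + 1e-9:
        samples.append((round(f, 6), curve(f)))
        f += step
    return samples

keys = bake(lambda t: t * t, 1.0, 2.0, 0.25)
# 5 samples at frames 1.0, 1.25, 1.5, 1.75, 2.0
```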

But still, the point is that, if you need to bring that camera to a
different app, that's not going to help either.
And if you're keeping it in Nuke, you'd just get a camera that weighs more
than the original Axis+Camera stack, and is a lot harder to do any
animation on.

That's just my opinion, though, and the techniques mentioned before might
still be useful from an experimental point of view, or for very specific
scenarios.

Cheers,
Ivan

On Sun, Sep 30, 2012 at 7:25 PM, Marten Blumen  wrote:

> Is it possible to bake sub-frame samples to have more accurate motion-blur?
>
>
> On 1 October 2012 13:22, Ivan Busquets  wrote:
>
>> Might be splitting hairs, but since this comes up every now and then, I
>> think it's worth noting that there are some important caveats to using the
>> local_matrix knob to do that for animated cameras:
>>
>> - You lose the ability to tweak the animation afterwards.
>>
>> - Inaccurate motionblur. If you bake animated transform knobs into a
>> single animated matrix, you're effectively losing the ability to
>> interpolate "curved" paths correctly. The matrix values will interpolate
>> between frames, but there's no guarantee that the result of that
>> interpolation will match the transformation you'd get by interpolating the
>> original rotation/translation/scale values.
>>
>> Getting back to the use case of the original post, I would recommend
>> keeping the two separate transforms when exporting out to Maya.
>> For one, if you use the "forced local matrix" approach you'll have no
>> easy way to transfer that to Maya (as in, it won't export correctly when
>> writing an FBX file, for example).
>> But also, if you're planning to refine animation later on, it might be
>> easier to do so on the original transformations.
>>
>>
>>
>>
>> On Sun, Sep 30, 2012 at 2:18 PM, Marten Blumen  wrote:
>>
>>> Stoked, solved it.  Very easy thanks to the exposed World and Local
>>> Matrix's.
>>>
>>> Attached is a verbose tutorial nuke script; all instructions included in
>>> stickies. Hit me up if it needs more work. thanks!
>>>
>>>
>>>
>>>
>>>
>>> On 29 September 2012 16:08, Marten Blumen  wrote:
>>>
>>>> I just retested my test and it only worked on simple setups. Probably
>>>> need expert equations to make it work properly!
>>>>
>>>> On 29 September 2012 15:39, C_Sander >>> > wrote:
>>>>
>>>>> **
>>>>> I guess now would be a good time to learn expressions. I'll check that
>>>>> out, thanks!
>>>>>
>>>>> ___
>>>>> Nuke-users mailing list
>>>>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>>>>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>>>>
>>>>
>>>>
>>>
>>> ___
>>> Nuke-users mailing list
>>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>>
>>
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: Baking camera and axis animation together

2012-09-30 Thread Ivan Busquets
Might be splitting hairs, but since this comes up every now and then, I
think it's worth noting that there are some important caveats to using the
local_matrix knob to do that for animated cameras:

- You lose the ability to tweak the animation afterwards.

- Inaccurate motionblur. If you bake animated transform knobs into a single
animated matrix, you're effectively losing the ability to interpolate
"curved" paths correctly. The matrix values will interpolate between
frames, but there's no guarantee that the result of that interpolation will
match the transformation you'd get by interpolating the original
rotation/translation/scale values.

Getting back to the use case of the original post, I would recommend
keeping the two separate transforms when exporting out to Maya.
For one, if you use the "forced local matrix" approach you'll have no easy
way to transfer that to Maya (as in, it won't export correctly when writing
an FBX file, for example).
But also, if you're planning to refine animation later on, it might be
easier to do so on the original transformations.
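
The motionblur caveat is easy to see with a toy example (pure Python, made-up frames, not Nuke's actual sampling): linearly blending the baked matrix values of a 90-degree Y rotation squashes the camera's forward axis, whereas interpolating the original rotation knob keeps a proper rotation.

```python
import math

def ry(deg):  # rotation about Y, in degrees
    r = math.radians(deg)
    c, s = math.cos(r), math.sin(r)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def lerp(A, B, t):  # what interpolating baked matrix *values* does
    return [[(1 - t) * A[i][j] + t * B[i][j] for j in range(3)]
            for i in range(3)]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

frame1, frame2 = ry(0.0), ry(90.0)

baked  = lerp(frame1, frame2, 0.5)   # shutter midpoint from baked matrix
proper = ry(45.0)                    # midpoint from the original rotation knob

fwd_baked  = apply(baked,  [0.0, 0.0, 1.0])
fwd_proper = apply(proper, [0.0, 0.0, 1.0])

len_baked  = math.sqrt(sum(x * x for x in fwd_baked))   # ~0.707: squashed
len_proper = math.sqrt(sum(x * x for x in fwd_proper))  # 1.0: still a rotation
```

The blended matrix is no longer a pure rotation, which is exactly the kind of error that shows up in motionblur samples taken between keyframes.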




On Sun, Sep 30, 2012 at 2:18 PM, Marten Blumen  wrote:

> Stoked, solved it.  Very easy thanks to the exposed World and Local
> Matrix's.
>
> Attached is a verbose tutorial nuke script; all instructions included in
> stickies. Hit me up if it needs more work. thanks!
>
>
>
>
>
> On 29 September 2012 16:08, Marten Blumen  wrote:
>
>> I just retested my test and it only worked on simple setups. Probably
>> need expert equations to make it work properly!
>>
>> On 29 September 2012 15:39, C_Sander 
>> wrote:
>>
>>> **
>>> I guess now would be a good time to learn expressions. I'll check that
>>> out, thanks!
>>>
>>> ___
>>> Nuke-users mailing list
>>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>>
>>
>>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: UVProject not sticking?

2012-09-20 Thread Ivan Busquets
You can try reading that FBX into an Axis node instead. Usually you
should be able to get the local transforms for any given object. If
there's parented transforms, you can chain up different Axis nodes to
replicate the same hierarchy you had in Maya.

That failing... try StickyProject ;-)
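
Chaining Axis nodes is just matrix multiplication of the local transforms (parent times child). A pure-Python sketch with made-up values:

```python
import math

# Replicating a parent/child Axis hierarchy: world = parent_local * child_local.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rot_z(deg):
    r = math.radians(deg)
    c, s = math.cos(r), math.sin(r)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

parent = rot_z(90.0)               # parent Axis: 90 degrees about Z
child  = translate(1.0, 0.0, 0.0)  # child Axis: local offset in x

world = mul(parent, child)
origin = [world[i][3] for i in range(3)]  # child's world-space position
# approximately (0, 1, 0): the child's x-offset swung around Z by the parent
```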


On Thu, Sep 20, 2012 at 3:06 PM, thoma
 wrote:
> ahhh yes. I guess i was remembering UVproject as having a bit more
> functionality than it does. In that case...not having used stickyProject,
> does anyone know how to get transformational data out of an fbx that doesn't
> include it in the dropdowns? I have some geo with an animated parent
> transform in maya but the parent node/transformation matrix doesn't show in
> nuke. It only is accessible by enabling 'all objects' and 'read transform
> from file' on the fbx node. Is there any way to pull it out? (in this case
> to make my own little sticky projection)
>
> Thanks
> Thomas
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] Re: UVProject not sticking?

2012-09-20 Thread Ivan Busquets
Hi Thoma,

My problem is that I'm using an FBX with animated geo
>

If I understand correctly your situation, I'm not sure UVProject has ever
worked the way you expect it to.

I think UVProject does do its job correctly. The reason you don't get your
textures to "stick" is that your UV-baking is happening for each frame of
the animated geo. As in, on every frame, UV project is doing its job
correctly, but on the next frame the baked UVs will be replaced again (with
those corresponding to the projection onto the new position of the geo).
The wording is a bit confusing, but hope it makes sense.

If you want to bake UVs based on the projection at a certain reference
frame (and therefore have the textures stick to the animated geo), you can
try StickyProject from Nukepedia.

http://www.nukepedia.com/plugins/3d/stickyproject/
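
The "bake the projection at a reference frame" idea boils down to something like this (a toy pinhole projection with made-up numbers, not StickyProject's actual implementation):

```python
# Minimal pinhole projection, camera at the origin looking down -Z (assumption).
def project_uv(p, focal=1.0):
    x, y, z = p
    return (focal * x / -z * 0.5 + 0.5,
            focal * y / -z * 0.5 + 0.5)

# Vertex position at the reference frame, and at a later frame after animation.
ref_pos  = (0.2, 0.1, -2.0)
anim_pos = (0.6, 0.3, -1.5)

uv_baked     = project_uv(ref_pos)   # baked once: the texture sticks
uv_per_frame = project_uv(anim_pos)  # re-projected every frame: it swims
```

Baking once from `ref_pos` gives a constant UV per vertex; re-projecting the animated position every frame is what UVProject does, and why the texture doesn't stick.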

Hope that helps.

Ivan


On Thu, Sep 20, 2012 at 11:15 AM, thoma
wrote:

> **
> Hi Deke,
>
> I can't post exactly what I'm working on but I'll provide a general
> illustration of what I'm talking about below. My problem is that I'm using
> an FBX with animated geo and the transforms for that geo aren't accessible
> seperately within nuke. Plus it's the principle that this node doesn't seem
> to work anymore! So before anyone says it - my real world scenario doesn't
> allow for the parented projector camera example below
>
> *Code:*
>
> set cut_paste_input [stack 0]
> version 6.3 v4
> BackdropNode {
>  inputs 0
>  name BackdropNode3
>  tile_color 0x99ff
>  note_font_size 25
>  selected true
>  xpos -3153
>  ypos 2802
>  bdwidth 1339
>  bdheight 760
> }
> Camera2 {
>  inputs 0
>  name Camera12
>  selected true
>  xpos -2278
>  ypos 3327
> }
> push $cut_paste_input
> Axis2 {
>  translate {{curve i x1 0 x20 0.2} {curve i x1 0} {curve i x1 0 x20 0}}
>  name Axis3
>  selected true
>  xpos -2278
>  ypos 3046
> }
> set N2f2977d0 [stack 0]
> push $N2f2977d0
> Camera2 {
>  name Camera13
>  selected true
>  xpos -2311
>  ypos 3147
> }
> CheckerBoard2 {
>  inputs 0
>  name CheckerBoard4
>  selected true
>  xpos -2152
>  ypos 3030
> }
> RotoPaint {
>  curves {AnimTree: "" {
>  Version: 1.2
>  Flag: 0
>  RootNode: 1
>  Node: {
>   NodeName: "Root" {
>Flag: 512
>NodeType: 1
>Transform: 0 0 S 0 0 S 0 0 S 0 0 S 0 1 S 0 1 S 0 0 S 0 1024 S 0 578
>NumOfAttributes: 11
>"vis" S 0 1 "opc" S 0 1 "mbo" S 0 1 "mb" S 0 1 "mbs" S 0 0.5 "fo" S 0 1
> "fx" S 0 0 "fy" S 0 0 "ff" S 0 1 "ft" S 0 0 "pt" S 0 0
>   }
>   NumOfChildren: 1
>   Node: {
>NodeName: "Bezier1" {
> Flag: 576
> NodeType: 3
> CurveGroup: "" {
>  Transform: 0 0 S 1 120 0 S 1 120 0 S 1 120 0 S 1 120 1 S 1 120 1 S 1
> 120 0 S 1 120 1274.83 S 1 120 730.333
>  Flag: 0
>  NumOfCubicCurves: 2
>  CubicCurve: "" {
>   Type: 0 Flag: 8192 Dim: 2
>   NumOfPoints: 36
>   0 S 1 120 0 S 1 120 2 0 0 S 1 120 1524 S 1 120 880 0 0 S 1 120 0 S 1
> 120 -2 0 0 S 1 120 4 S 1 120 -2 0 0 S 1 120 1434 S 1 120 944 0 0 S 1 120 -4
> S 1 120 2 0 0 S 1 120 34 S 1 120 32 0 0 S 1 120 1206 S 1 120 870 0 0 S 1
> 120 -34 S 1 120 -32 0 0 S 1 120 14 S 1 120 10 0 0 S 1 120 1128 S 1 120 800
> 0 0 S 1 120 -14 S 1 120 -10 0 0 S 1 120 32 S 1 120 20 0 0 S 1 120 1062 S 1
> 120 762 0 0 S 1 120 -32 S 1 120 -20 0 0 S 1 120 -8 S 1 120 8 0 0 S 1 120
> 1016 S 1 120 632 0 0 S 1 120 8 S 1 120 -8 0 0 S 1 120 -8 S 1 120 4 0 0 S 1
> 120 1042 S 1 120 606 0 0 S 1 120 8 S 1 120 -4 0 0 S 1 120 -14 S 1 120 -8 0
> 0 S 1 120 1178 S 1 120 582 0 0 S 1 120 14 S 1 120 8 0 0 S 1 120 -14 S 1 120
> -4 0 0 S 1 120 1206 S 1 120 596 0 0 S 1 120 14 S 1 120 4 0 0 S 1 120 -212 S
> 1 120 30 0 0 S 1 120 1352 S 1 120 644 0 0 S 1 120 212 S 1 120 -30 0 0 S 1
> 120 -4 S 1 120 -28 0 0 S 1 120 1594 S 1 120 676 0 0 S 1 120 4 S 1 120 28 0
> 0 S 1 120 6 S 1 120 -6 0 0 S 1 120 1556 S 1 120 772 0 0 S 1 120 -6 S 1 120
> 6 0
>  }
>  CubicCurve: "" {
>   Type: 0 Flag: 8192 Dim: 2
>   NumOfPoints: 36
>   0 S 1 120 0 S 1 120 2 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 0 S 1 120
> -2 0 0 S 1 120 4 S 1 120 -2 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 -4 S 1 120
> 2 0 0 S 1 120 34 S 1 120 32 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 -34 S 1 120
> -32 0 0 S 1 120 14 S 1 120 10 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 -14 S 1
> 120 -10 0 0 S 1 120 32 S 1 120 20 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 -32 S
> 1 120 -20 0 0 S 1 120 -8 S 1 120 8 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 8 S
> 1 120 -8 0 0 S 1 120 -8 S 1 120 4 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 8 S 1
> 120 -4 0 0 S 1 120 -14 S 1 120 -8 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 14 S
> 1 120 8 0 0 S 1 120 -14 S 1 120 -4 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 14 S
> 1 120 4 0 0 S 1 120 -212 S 1 120 30 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120 212
> S 1 120 -30 0 0 S 1 120 -4 S 1 120 -28 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120
> 4 S 1 120 28 0 0 S 1 120 6 S 1 120 -6 0 0 S 1 120 0 S 1 120 0 0 0 S 1 120
> -6 S 1 120 6 0
>  }
>  NumOfAttributes: 44
>  "vis" S 0 1 "r" S 0 1 "g" S 0 1 "b" S 0 1 "a" S 0 1 "ro" S 0 0 "go" S
> 0 0 "bo" S 0 0 "ao

Re: [Nuke-users] Light affects only specific object

2012-09-18 Thread Ivan Busquets
You could also use a BasicMaterial with zero diffuse and specular, and
100% emission for the objects that you don't want to be affected by
the light(s) in your scene.

Like so:

set cut_paste_input [stack 0]
version 6.3 v8
Camera2 {
 inputs 0
 translate {3.7807 1 22.9754}
 rotate {-1 0 0}
 name Camera1
 selected true
 xpos -318
 ypos 199
}
push $cut_paste_input
Light2 {
 intensity 10
 translate {-2.56933 2.70048 4.61134}
 name Light1
 selected true
 xpos 83
 ypos 27
}
CheckerBoard2 {
 inputs 0
 name CheckerBoard1
 selected true
 xpos -153
 ypos -299
}
set N1b0df100 [stack 0]
push 0
BasicMaterial {
 inputs 2
 name BasicMaterial3
 label "DIFF + SPEC"
 selected true
 xpos -281
 ypos -138
}
Sphere {
 name Sphere1
 selected true
 xpos -281
 ypos -65
}
push $N1b0df100
push 0
push 0
BasicMaterial {
 inputs 3
 emission 1
 diffuse 0
 specular 0
 name BasicMaterial2
 label "EMISSION ONLY"
 selected true
 xpos -153
 ypos -141
}
Sphere {
 translate {4 0 0}
 name Sphere2
 selected true
 xpos -153
 ypos -68
}
push $N1b0df100
push 0
Diffuse {
 inputs 2
 name Diffuse1
 selected true
 xpos -33
 ypos -133
}
Sphere {
 translate {8 0 0}
 name Sphere3
 selected true
 xpos -33
 ypos -67
}
Scene {
 inputs 4
 name Scene1
 selected true
 xpos -169
 ypos 27
}
push 0
ScanlineRender {
 inputs 3
 overscan 50
 shutter 0.47916667
 shutteroffset centred
 output_motion_vectors_type off
 MB_channel none
 name ScanlineRender1
 selected true
 xpos -179
 ypos 219
}



On Tue, Sep 18, 2012 at 5:05 PM, Deke Kincaid  wrote:
> You would need to make separate scanline render nodes.
>
> -deke
>
> On Tue, Sep 18, 2012 at 9:23 AM, kafkaz 
> wrote:
>>
>> Is there a simple way to affect only specific object by light, or do I
>> need to do separate scanline render?
>>
>> Thanks!
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] J_Ops 2.0 available - adding a rigid body physics engine for Nuke's 3D system

2012-08-19 Thread Ivan Busquets
A little late to the party, but just wanted to add my thanks to Jack for
sharing this.
This is a really awesome addition to J_Ops, and it has a great performance
too!

As an idea, and seeing how some of the above problems came from the
auto-calculated center of mass, maybe you could add a visual representation
(like a non-interactive viewer handle) of where the CoM is when it's not
overridden by the user?
That way it would at least be easier to detect the cases where it's off.

Cheers,
Ivan

On Sun, Aug 19, 2012 at 3:44 PM, Frank Rueter  wrote:

> Hi Jack,
>
> thanks, but that was still giving odd results. I have adjusted the CoM a
> bit more (linked to an axis for better control and that seems to give the
> expected result):
>
> set cut_paste_input [stack 0]
> version 6.3 v8
>
> push $cut_paste_input
> Cube {
>  cube {-0.2 -0.2 -0.2 0.2 0.5 0.2}
>  translate {0 -0.5 0}
>  rotate {35.26261719 0 0}
>  pivot {0 0.5 0}
>  name torso1
>  selected true
>  xpos 21
>  ypos -130
> }
> J_MulletBody {
>  bodydamping {0.09 0.09}
>  bodycenterofmass {{parent.Axis1.translate x1 0} {parent.Axis1.translate
> x1 -0.167977} {parent.Axis1.translate x1 -0.109994}}
>
>  bodycenterofmassoverride true
>  labelset true
>  name J_MulletBody6
>  label "\[value this.bodytype]-\[value this.coltype]"
>  selected true
>  xpos 21
>  ypos -72
>
> }
> J_MulletConstraint {
>  conbodycount One
>  conbodypreview true
>  labelset true
>  name J_MulletConstraint1
>  label "\[value this.contype]"
>  selected true
>  xpos 21
>  ypos -22
>
> }
> J_MulletSolver {
>  name J_MulletSolver1
>  selected true
>  xpos 21
>  ypos 45
> }
> Axis2 {
>  inputs 0
>  translate {0 -0.4 -0.29}
>  name Axis1
>  selected true
>  xpos 197
>  ypos -99
>
> }
>
>
>
>
> On 17/08/12 7:34 PM, Jack Binks wrote:
>
>> Hey Gents,
>>
>> Will have to investigate further, but I think what you're seeing is
>> related to the auto calculated center of mass. Does the below
>> amendment make it more what you expect (body has CoM overriden)?
>>
>> set cut_paste_input [stack 0]
>> version 6.3 v1
>> push $cut_paste_input
>> Cube {
>>   cube {-0.2 -0.2 -0.2 0.2 0.5 0.2}
>>   translate {0 -0.5 0}
>>   rotate {35.26261719 0 0}
>>   pivot {0 0.5 0}
>>   name torso1
>>   selected true
>>   xpos -224
>>   ypos -283
>> }
>> J_MulletBody {
>>   bodydamping {0.09 0.09}
>>   bodycenterofmass {0.15 -0.5 -0.4}
>>   bodycenterofmassoverride true
>>   labelset true
>>   name J_MulletBody6
>>   label "\[value this.bodytype]-\[value this.coltype]"
>>   selected true
>>   xpos -224
>>   ypos -225
>> }
>> J_MulletConstraint {
>>   conbodycount One
>>   conbodypreview true
>>   labelset true
>>   name J_MulletConstraint1
>>   label "\[value this.contype]"
>>   selected true
>>   xpos -224
>>   ypos -175
>> }
>> J_MulletSolver {
>>   name J_MulletSolver1
>>   selected true
>>   xpos -224
>>   ypos -108
>> }
>>
>> Cheers
>> Jack
>>
>> On 16 August 2012 23:41, Marten Blumen  wrote:
>>
>>> that's what I got- I couldn't solve it properly before the deadline. It
>>> appeared to be some combination of the initial object position and the
>>> constraint axis.
>>>
>>> luckily this fit my shot. karabiners can shift within the bolt hanger
>>> when
>>> attached to the rock wall- it added to the realism!
>>>
>>>
>>> On 17 August 2012 10:18, Frank Rueter  wrote:
>>>
 I just had a play with this sort of simple constraint as well and am not
 getting the exected result (the box is not swinging around the
 constraint
 point.
 Am I doing something wrong?


 Cube {
   cube {-0.2 -0.2 -0.2 0.2 0.5 0.2}
   translate {0 -0.5 0}
   rotate {35.26261719 0 0}
   pivot {0 0.5 0}
   name torso1
   selected true
   xpos -464
   ypos -197
 }
 J_MulletBody {
   bodydamping {0.09 0.09}
   labelset true
   name J_MulletBody6
   label "\[value this.bodytype]-\[value this.coltype]"
   selected true
   xpos -464
   ypos -139
 }
 J_MulletConstraint {
   conbodycount One
   conbodypreview true
   labelset true
   name J_MulletConstraint1
   label "\[value this.contype]"
   selected true
   xpos -464
   ypos -89
 }
 J_MulletSolver {
   name J_MulletSolver1
   selected true
   xpos -464
   ypos -22

 }




 On 17/08/12 9:03 AM, Marten Blumen wrote:

 Cool - I had about 12-16 of them swinging on a wall. modeled and
 painted,
 6 hero ones and the rest in the distance.

 I had to bodgy the whole thing, didn't have time to learn it and the
 looming  shot deadline.

 Would really like to have a RBD rope, split into segments, pullling at
 them to make them move.



 On 17 August 2012 08:52, Jack Binks  wrote:

> Cracking, thanks Marten, will have a play!
>
>
> On 16 Aug 2012, at 19:48, Marten Blumen  wrote:
>
> Yeah - its an awesome bit of kit to have in the Nuke toolbox. 

Re: [Nuke-users] Re: Separate particle channel from another object channel

2012-08-19 Thread Ivan Busquets
You could use a FillMat on either the card or the particles to make it a
holdout of the rest of the scene. Or you could even have two
ScanlineRenders, one with the card held out, and one with the particles
held out, to have full control over both before merging them together.

Or, in the same line Frank suggested, you could use additional channels to
create an "id pass" for each part of your scene and use that as a matte to
your color corrections.


Attached is an example of both setups. Hope it helps.


On Fri, Aug 17, 2012 at 3:55 PM, Marten Blumen  wrote:

> I'm not sure how you can have a bad depth channel. can you post an image?
>
> On 18 August 2012 10:33, poiboy  wrote:
>
>> **
>> Marten,
>>
>> I actually thought of that, but the depth channel for the partciles are
>> pretty fubar as well as the card for the projected image.
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>


particles_and_card.nk
Description: Binary data
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: Projecting Alpha channel => making holes in card?

2012-08-19 Thread Ivan Busquets
Ah, I see.

If you want to see other parts of the same scene through the cutout hole,
you'll need to add a BlendMaterial (set to over), to tell the shader how it
needs to interact with stuff in the BG.


On Sun, Aug 19, 2012 at 3:59 PM, Marten Blumen  wrote:

> I'm sure I'm doing it wrong. Attached is my test.
>
>
> On 20 August 2012 10:44, Ivan Busquets  wrote:
>
>> Sorry, didn't see your previous reply, Marten.
>> What is it that didn't work for you using a MergeMat?
>>
>> As long as one of the projected textures has an alpha channel, a MergeMat
>> set to "stencil" (with the cutout texture plugged to the A input) should do
>> the job.
>> Is your setup any different?
>>
>>
>>
>>
>> On Sun, Aug 19, 2012 at 3:22 PM, Marten Blumen  wrote:
>>
>>> I couldn't make it work combining 2 x Project3D with a MergeMat. I'm
>>> sure there is a way somehow though.
>>>
>>> On 20 August 2012 10:15, kafkaz wrote:
>>>
>>>> **
>>>> I am not sure if I made myself clear.
>>>>
>>>> I want to project two textures on single card. First texture is RGB
>>>> component, the second is alpha channel which would make holes in that card.
>>>>
>>>> Is it possible?
>>>>
>>>> ___
>>>> Nuke-users mailing list
>>>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>>>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>>>
>>>
>>>
>>> ___
>>> Nuke-users mailing list
>>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>>
>>
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: Projecting Alpha channel => making holes in card?

2012-08-19 Thread Ivan Busquets
Sorry, didn't see your previous reply, Marten.
What is it that didn't work for you using a MergeMat?

As long as one of the projected textures has an alpha channel, a MergeMat
set to "stencil" (with the cutout texture plugged to the A input) should do
the job.
Is your setup any different?



On Sun, Aug 19, 2012 at 3:22 PM, Marten Blumen  wrote:

> I couldn't make it work combining 2 x Project3D with a MergeMat. I'm sure
> there is a way somehow though.
>
> On 20 August 2012 10:15, kafkaz  wrote:
>
>> **
>> I am not sure if I made myself clear.
>>
>> I want to project two textures on single card. First texture is RGB
>> component, the second is alpha channel which would make holes in that card.
>>
>> Is it possible?
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: Projecting Alpha channel => making holes in card?

2012-08-19 Thread Ivan Busquets
Unless I'm misreading your question, a MergeMaterial set to "stencil"
should be all you need to combine them.



On Sun, Aug 19, 2012 at 3:15 PM, kafkaz
wrote:

> **
> I am not sure if I made myself clear.
>
> I want to project two textures on single card. First texture is RGB
> component, the second is alpha channel which would make holes in that card.
>
> Is it possible?
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] unwanted label on gizmo/group?

2012-07-24 Thread Ivan Busquets
Are any of the knobs in your gizmo called "output", "channels",
"maskChannelInput" or "unpremult"?

autolabel.py explicitly looks for those as part of the automatic labeling,
so if you have any knobs named like that they will be picked up.

If that's the case, you can either rename them to something else, or
override autolabel() if you'd rather not have that behaviour.
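
As a rough mimic of the lookup described above (hypothetical logic; Nuke's real autolabel.py differs in detail):

```python
# Hypothetical mimic of the knob-name lookup: knobs with these reserved
# names get their values appended to the node's label.
SPECIAL_KNOBS = ("output", "channels", "maskChannelInput", "unpremult")

def autolabel_sketch(node_name, knobs):
    extras = [str(knobs[k]) for k in SPECIAL_KNOBS if k in knobs]
    if extras:
        return node_name + "\n(" + " / ".join(extras) + ")"
    return node_name

# A group with user knobs named like the reserved ones picks up a label:
print(autolabel_sketch("Group1", {"maskChannelInput": "-", "unpremult": "false"}))
# -> Group1
#    (- / false)
```

In Nuke itself, overriding is typically done by registering your own label callback (e.g. via `nuke.addAutolabel()`), rather than editing autolabel.py in place.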


Hope that helps.

Ivan



On Tue, Jul 24, 2012 at 7:51 PM, Jordan Olson  wrote:

> hey guys!
> I was making a group node today, adding expressions, linking knobs,
> etc. Then when I checked it out in the main node graph, I noticed it
> had a label underneath the name : which read "(- / false)".
> Where is this one coming from, and how can I get rid of this label?
> can't seem to figure this one out.
> All I have internally is three nodes, two of which have expressions on
> their "disable" knobs.
>
> cheers,
> Jordan
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Reconcile3d "output live" still bust

2012-07-19 Thread Ivan Busquets
Agree.
Most of the snap3d functions could benefit from taking a format argument
(and default to the root format if not specified).
They could also use a context argument, so they can be evaluated for
different frames/view.

And, while on the subject, if any changes are to be done to the snap3d
module, please fix the return of snap3d.cameraProjectionMatrix()
There should be a bug ID for it, but I can't find it anymore. I believe the
problem is the order in which snap3d.cameraProjectionMatrix() multiplies
all the matrices that produce the final camera matrix.
As it is, the return is:

s * t * p * m * camTransform  (NDC-to-pixels * NDC-to-unit-square offset *
projection * win_scale & offset * camera transform)

Whereas it should be:

s * t * m * p * camTransform  (that is, win_scale & offset needs to be
multiplied BEFORE the projection matrix instead of AFTER)

You can check this by using either Frank's or Jose's examples with a camera
that has window_translate values other than 0.
Attached is a script that shows the problem, along with a modified version
of that function that produces the desired output.
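
For illustration only, here's a stripped-down 2D homogeneous sketch (toy matrices, not the actual snap3d code) of why the multiplication order of the window offset and the projection matters once the offset is non-zero:

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(M, p):
    x, z, w = [sum(M[i][j] * p[j] for j in range(3)) for i in range(3)]
    return x / w  # homogeneous divide -> screen x

# P maps (x, z, 1) to (x, z, z): after the divide, x becomes x/z.
P = [[1, 0, 0], [0, 1, 0], [0, 1, 0]]

def T(tx):  # window-style offset in x
    return [[1, 0, tx], [0, 1, 0], [0, 0, 1]]

point = [1.0, 2.0, 1.0]

after  = apply(mul(T(0.5), P), point)  # offset applied after projection
before = apply(mul(P, T(0.5)), point)  # offset applied before projection
# after = 1.0, before = 0.75 -> the two orders disagree;
# with a zero offset both collapse to x/z = 0.5
```

With the offset at zero the two orders agree, which matches why the bug only shows up for cameras with non-zero window_translate values.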

Thanks,
Ivan


On Thu, Jul 19, 2012 at 10:53 AM, Jose Fernandez de Castro <
pixelcowbo...@gmail.com> wrote:

> As a side note the problem with both alternate approaches that we showed
> (which do the same thing) is that the snap 3d function does not take a
> format argument, and only returns the values for the root format, so the
> result is only correct for that format. Maybe we should put in a request
> for the function to take in a format/resolution argument?
>
>
> On Wed, Jul 18, 2012 at 10:12 PM, Jan Dubberke  wrote:
>
>> yeh thanks for that again: it works just fine. it does choke a wee bit in
>> the gui every now and then but it totally does what I was aiming for.
>>
>> I also liked Jose's approach so thanks for that too - very creative!
>>
>> I guess I'll just wait for new releases then and hope it gets
>> incorporated as a one stop solution? question mark
>>
>> cheers everyone
>>
>>
>>
>>
>>  that's what I did to work around this. Fairly rough and untested though:
>>>
>>> CheckerBoard2 {
>>>  inputs 0
>>>  name CheckerBoard1
>>>  selected true
>>>  xpos -148
>>>  ypos -140
>>> }
>>> Transform {
>>>  translate {{"\[python -execlocal cam\\ =\\
>>> nuke.toNode('Camera1')\\naxis\**\ =\\ nuke.toNode('Axis1')\\nwm\\ =\\
>>> axis\\\['world_matrix'\\].**valueAt(nuke.frame())\\nxform\**\ =\\
>>> nuke.math.Vector3(wm\\\[3\\],\**\ wm\\\[7\\],\\ wm\\\[11\\])\\nret\\ =\\
>>> nukescripts.snap3d.**projectPoint(cam,\\ xform).x]"} {"\[python
>>> -execlocal cam\\ =\\ nuke.toNode('Camera1')\\naxis\**\ =\\
>>> nuke.toNode('Axis1')\\nwm\\ =\\
>>> axis\\\['world_matrix'\\].**valueAt(nuke.frame())\\nxform\**\ =\\
>>> nuke.math.Vector3(wm\\\[3\\],\**\ wm\\\[7\\],\\ wm\\\[11\\])\\nret\\ =\\
>>> nukescripts.snap3d.**projectPoint(cam,\\ xform).y]"}}
>>>  center {1024 778}
>>>  name Transform1
>>>  selected true
>>>  xpos -148
>>>  ypos -68
>>> }
>>> Camera2 {
>>>  inputs 0
>>>  name Camera1
>>>  selected true
>>>  xpos -420
>>>  ypos -70
>>> }
>>> push $cut_paste_input
>>> Axis2 {
>>>  rotate {0 -4 0}
>>>  name parentAxis
>>>  selected true
>>>  xpos -366
>>>  ypos -323
>>> }
>>> Axis2 {
>>>  translate {-0.392 -0.0439976 -2.14105}
>>>  name Axis1
>>>  selected true
>>>  xpos -293
>>>  ypos -238
>>> }
>>> Scene {
>>>  name Scene1
>>>  selected true
>>>  xpos -293
>>>  ypos -129
>>> }
>>> push 0
>>> ScanlineRender {
>>>  inputs 3
>>>  output_motion_vectors_type accurate
>>>  name ScanlineRender1
>>>  selected true
>>>  xpos -303
>>>  ypos -16
>>> }
>>> Merge2 {
>>>  inputs 2
>>>  name Merge1
>>>  selected true
>>>  xpos -148
>>>  ypos -16
>>> }
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On 19/07/12 2:37 PM, Jose Fernandez de Castro wrote:
>>>
 I'm wondering if anyone has used the snap3d.projectPoint
  successfully to achieve this (I mean, if it's actually stable and
  usable). For starters it seems like it only takes the root format  of the
 script, but it might be possible to cheat it through the  win_scale u v.
 Anyway, just curious, an example setup:

 set cut_paste_input [stack 0]
 version 6.3 v8
 Axis2 {
 inputs 0
 translate {{curve i x1048 -1.39628 x1088 -0.984272 x1108
  -0.1986213923 x1119 0.374129} {curve i x1048 0.555943 x1088
  0.375871 x1102 0.41049599 x1108 0.419869 x1119  -0.533795}
 {curve i x1048 0 x1088 0 x1108 0 x1119 0}}
 name Axis3
 selected true
 xpos 236
 ypos 21
 }
 push $cut_paste_input
 Camera2 {
 translate {{curve x1104 0} {curve x1080 0.3 x1104 0} {curve x1104
  7.05191}}
 rotate {0 5 0}
 focal 13.5
 name Camera3
 selected true
 xpos -25
 ypos 68
 addUserKnob {20 ProjectFrame l "Project Frame"}
 addUserKnob {3 frameproj l "Projection Frame"}
 frameproj 1000
 addUserKnob {6 currFrame l "Set to current frame" +STARTLINE}
>

Re: [Nuke-users] DeepFromImage: what depth.Z range is it expecting?

2012-05-08 Thread Ivan Busquets
When using the depth input, it expects the depth channel to conform to
Nuke's standard (1/distance).

If your Arnold depth shader is outputting real depth values, you should be
able to just add an expression node to turn depth.Z into "1/depth.Z".
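Outside of Nuke, the conversion is just a per-sample reciprocal. A minimal pure-Python sketch (the `arnold_to_nuke_depth` helper is hypothetical, and the zero-guard for unsampled pixels is an assumption about how background should map):

```python
def arnold_to_nuke_depth(z):
    """Convert a real-distance depth sample to Nuke's 1/distance convention.

    Guards against non-positive samples so empty/background pixels map to
    0.0, which in the 1/distance convention means infinitely far away.
    (This guard is an assumption for the example, not part of Nuke's API.)
    """
    return 1.0 / z if z > 0.0 else 0.0

# A point 100 units from camera becomes a small 1/z value,
# and an unsampled pixel (z == 0) stays at 0.0 (infinity).
samples = [100.0, 1.0, 0.0]
converted = [arnold_to_nuke_depth(z) for z in samples]
print(converted)  # [0.01, 1.0, 0.0]
```

Inside Nuke, the equivalent is simply an Expression node with `1/depth.Z` written into depth.Z, as described above.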


On Tue, May 8, 2012 at 7:01 PM, Paul Hudson  wrote:

> Hi all,
>
> I am attempting to use the DeepFromImage node with a render from
> Arnold.  My objects are about 100 units from camera.  I cannot get
> anything that looks close to correct until I rescale my depth pass to
> be between 1 and 0 (with 1 being 0 units from camera and 0 being
> infinity).  Is the DeepFromImage node setup to be convenient with
> Noise, Ramp, etc nodes but not actual renders?
>
>
> -Paul
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] python [topnode] equivalent

2012-04-23 Thread Ivan Busquets
What Hugo said.
You can find more info here:

http://docs.python.org/library/stdtypes.html#string-formatting

As for the "16" in the int() command (or "0" in the example I sent), that
is the base for the string-to-integer conversion. If the argument is not
given, it uses a base of 10 by default. You can pass it a value of 16 so it
will interpret hex characters correctly, or 0 to let python guess the best
base to use for the given string. In this case, this works because the
string will always start with '0x', which Python will interpret as
hexadecimal.

More info:
http://docs.python.org/library/functions.html#int
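A quick standalone illustration of the base argument (plain Python, no Nuke required; the tile_color value is arbitrary):

```python
# With an explicit base of 16, or base 0 (auto-detect from the prefix),
# a '0x'-prefixed string parses as hexadecimal:
assert int('0xff000000', 16) == 4278190080
assert int('0xff000000', 0) == 4278190080

# The default base of 10 rejects the '0x' prefix, which is why the
# tile_color cast needs the extra argument:
try:
    int('0xff000000')
except ValueError:
    pass  # invalid literal for int() with base 10

# Base 16 also accepts bare hex digits without the prefix:
assert int('ff', 16) == 255
```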

Hope that clarifies it a bit.

Cheers,
Ivan

On Mon, Apr 23, 2012 at 12:12 PM, Hugo Léveillé  wrote:

>   Its called string formatting
>
>  ex:
>  age = '16'
> print "Hi, I am " + age + " years old"
>
>  is the same as:
>
>  print "Hi, I am %s years old" % age
>
>  It has the advantage of making clearer string writing as well as
> converting int and floats to string as well
>
>  ex:
>
>  "I am %s years old and I have %s dollars" % (10 , 3.5)
>
>
>
>
>On Mon, Apr 23, 2012, at 12:00, Adam Hazard wrote:
>
>  ok, cool, I think I understand it better now. Thanks, guys. Also, if you
> don't mind, another question, what exactly is the '%' doing in this code.
> And I have used 'int' before, and seen the '16' posted around, what exactly
> are those doing, I am guessing that is what converts the value from string
> to integer?
>
> -Adam
>
> On 04/23/2012 10:48 AM, Nathan Rusch wrote:
>
>   The problem isn’t hex vs. int; the value you’re getting back from the
> Python knob is identical to the hex value returned by the nuke.tcl call.
> The issue you’re running into is that the nuke.tcl call is returning the
> hex value as a string, so you need to cast it to a numeric type before you
> can actually use it.
>
>  n = nuke.selectedNode()
>  tile_color = int(nuke.tcl('value [topnode %s].tile_color' % n.name()),
> 16)
>
>
>  -Nathan
>
>
>  From: Adam Hazard 
>  Sent: Monday, April 23, 2012 10:12 AM
>  To: Nuke user discussion 
>  Subject: Re: [Nuke-users] python [topnode] equivalent
>
>  Thanks Ivan.
> This was pretty much exactly what I was looking for. However I had to
> change it a little bit because this was returning the tile color hex value,
> if I understand all this correctly, and my function needs just the integer
> value. As I can't assign or set a tile_color using hex, or I haven't been
> able to figure it out.
>
> Anyways, for whatever reason this does the trick, kinda mixing your code
> with what I had before. I am still not very sure why the tile_color has 2
> different value formats.
>
> n = nuke.selectedNode()
> topnode_name = nuke.tcl("full_name [topnode %s]" % n.name())
> topnode = nuke.toNode(topnode_name)
> tile_col = topnode['tile_color'].value()
>
> Thanks again and much appreciated.
> Adam
>
>
> On 04/20/2012 06:47 PM, Ivan Busquets wrote: Or if you just want the
> tile_color of the top node, you could of course do:
>
> n = nuke.selectedNode()
>
> tile_color = nuke.tcl("value [topnode %s].tile_color" % n.name())
>
> Hope that helps
>
>
>  On Fri, Apr 20, 2012 at 6:41 PM, Ivan Busquets wrote:
> You can use nuke.tcl() within python to execute a tcl command.
>
> So, in your case, something like this should work:
>
>n = nuke.selectedNode()
>
> topnode_name = nuke.tcl("full_name [topnode %s]" % n.name())
>
> topnode = nuke.toNode(topnode_name)
>
>
>
>  On Fri, Apr 20, 2012 at 6:30 PM, Adam Hazard  wrote:
> Hopefully a quick question,
>
> If I currently have a node selected somewhere in a tree, and I want to
> access the topnodes tile color using python, how would I do so? Using
> [[topnode].tile_color] doesn't seem to work as it is tcl? Looking around it
> seems you need to check dependencies of all the nodes or something, but I
> haven't been able to get anything to work.  Is there no way to convert the
> tcl function to work in python?
>
> Thanks in advance for any help,
> Adam
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
>
>
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
>
>
> --

Re: [Nuke-users] python [topnode] equivalent

2012-04-20 Thread Ivan Busquets
Or if you just want the tile_color of the top node, you could of course do:

n = nuke.selectedNode()

tile_color = nuke.tcl("value [topnode %s].tile_color" % n.name())

Hope that helps


On Fri, Apr 20, 2012 at 6:41 PM, Ivan Busquets wrote:

> You can use nuke.tcl() within python to execute a tcl command.
>
> So, in your case, something like this should work:
>
>   n = nuke.selectedNode()
>
> topnode_name = nuke.tcl("full_name [topnode %s]" % n.name())
>
> topnode = nuke.toNode(topnode_name)
>
>
>
> On Fri, Apr 20, 2012 at 6:30 PM, Adam Hazard  wrote:
>
>> Hopefully a quick question,
>>
>> If I currently have a node selected somewhere in a tree, and I want to
>> access the topnodes tile color using python, how would I do so? Using
>> [[topnode].tile_color] doesn't seem to work as it is tcl? Looking around it
>> seems you need to check dependencies of all the nodes or something, but I
>> haven't been able to get anything to work.  Is there no way to convert the
>> tcl function to work in python?
>>
>> Thanks in advance for any help,
>> Adam
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>
>
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] python [topnode] equivalent

2012-04-20 Thread Ivan Busquets
You can use nuke.tcl() within python to execute a tcl command.

So, in your case, something like this should work:

  n = nuke.selectedNode()

topnode_name = nuke.tcl("full_name [topnode %s]" % n.name())

topnode = nuke.toNode(topnode_name)
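The traversal tcl's [topnode] performs is just a walk up the 0th input. As a rough pure-Python sketch, with a stub `Node` class standing in for real Nuke nodes (the stub is hypothetical; actual Nuke nodes expose the same `input()`/`name()` calls, so the `topnode` function below should work on them too):

```python
class Node:
    """Minimal stand-in for a Nuke node: just a name and a 0th input."""
    def __init__(self, name, inp=None):
        self._name, self._input = name, inp

    def name(self):
        return self._name

    def input(self, i):
        # Only input 0 is modelled here; others read as unconnected.
        return self._input if i == 0 else None


def topnode(node):
    """Follow input 0 upward until the top of the branch, like [topnode]."""
    while node.input(0) is not None:
        node = node.input(0)
    return node


# Read1 -> Grade1 -> Blur1; the top of Blur1's branch is Read1.
read = Node('Read1')
blur = Node('Blur1', inp=Node('Grade1', inp=read))
assert topnode(blur) is read
```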



On Fri, Apr 20, 2012 at 6:30 PM, Adam Hazard  wrote:

> Hopefully a quick question,
>
> If I currently have a node selected somewhere in a tree, and I want to
> access the topnodes tile color using python, how would I do so? Using
> [[topnode].tile_color] doesn't seem to work as it is tcl? Looking around it
> seems you need to check dependencies of all the nodes or something, but I
> haven't been able to get anything to work.  Is there no way to convert the
> tcl function to work in python?
>
> Thanks in advance for any help,
> Adam
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: ??? redguard1.glow ???

2012-04-18 Thread Ivan Busquets
>
> The main downside I'm curious about with searching for specific channels
> as you're doing is having to maintain a list of 'baddies' to search for.
>

To me, that's a key part of it. There's so many possible permutations that
it's almost impossible to keep track of all of them unless you use a
procedural approach, like the one you described.

The other procedural approach I have seen used with some success is to look
for any add_layer lines that contain one of the default layersets (rgb,
rgba, alpha, depth, etc), and remove those altogether, since they will
exist no matter what, and the problem originates when something non-standard
is added to one of those layersets.
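A rough text-level sketch of that idea (plain Python; the pattern, the layer list, and the sample script text are assumptions for illustration, not anyone's production regex):

```python
import re

# Strip add_layer lines that (re)declare one of Nuke's default layersets,
# e.g. 'add_layer {rgb rgb.red rgb.green rgb.blue rgb.glow}'. The default
# layers always exist, so these lines only ever carry the bad channels.
DEFAULT_LAYERS = ('rgba', 'rgb', 'alpha', 'depth')
pattern = re.compile(
    r'^add_layer \{(?:%s) [^}]*\}\n' % '|'.join(DEFAULT_LAYERS),
    flags=re.MULTILINE,
)

script = (
    'add_layer {rgb rgb.red rgb.green rgb.blue rgb.glow}\n'
    'add_layer {myAOV myAOV.red myAOV.green myAOV.blue}\n'
)
cleaned = pattern.sub('', script)
assert 'rgb.glow' not in cleaned  # bad redeclaration stripped
assert 'myAOV.red' in cleaned     # legitimate custom layer kept
```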

I think the best strategy right now is a combo of:
1 - Parsing the scripts onScriptLoad to diagnose them
2 - Making sure all gizmos are clean
3 - Intercepting copy-pasted data through a dropData callback.

But I still think that the real fix for all of this should come from
completely disallowing the addition of new channels to the default layers.
For starters, if you try to do it through the interface, it doesn't let
you. The only way these channels could have been created in the first place
is either through tcl or Python commands, which don't enforce the same sort
of restriction.
I believe one of the reasons to not disallow such channels right now is so
it won't break old scripts that have them. But at this point, and seeing
how much effort everyone seems to be putting into this, it sounds to me
like most people would be happy to trade that off.

That's just my opinion, of course. But the truth is, with this issue having
travelled the whole world, I see people that are being discouraged from
creating new channel names for fear of creating havoc. And that's a real
shame, since there's nothing wrong with creating new channels.



Thanks for sharing that approach to compare used channels vs channels in
the script, Erik!
:)

Cheers,
Ivan


On Wed, Apr 18, 2012 at 3:03 PM, Erik Winquist  wrote:

>
> Yeah, sorry..   i wasn't very clear.
>
> We're still cleaning the text of the .nk script via regex, but like you
> describe, that's happening from a python function in nuke once the script
> is open.  An addOnScriptLoad() callback checks the script's channels as I'd
> described previously and if any suspect channels are discovered, at that
> point the user is alerted and given the choice whether they want to clean
> the script or not.  If they choose to, a backup copy of the script is made
> and then the .nk text is cleaned and nuke immediately exits.
>
> I'm interested in how you're launching a new nuke instance from another
> process.  Re-opening the cleaned script automatically would definitely be
> preferable to just having nuke quit like we're doing now.
>
>
> The main downside I'm curious about with searching for specific channels
> as you're doing is having to maintain a list of 'baddies' to search for.
>
>
> erik
>
>
> John RA Benson wrote:
>
> The big headache is, however, that pesky "add_layers" with bad channels
> will still be stuck in the script. At least with 6.2, the only way to
> remove the layer was with a text editor, which is why I favored the regex
> approach outside of nuke.  In practice, we run a version of this from nuke
> to clean up a lot of issues. Hitting the button does a few things with nuke
> to fix internal stuff, but then saves the script and runs this solely as a
> text operation on the file. The open (and infected) script is then closed
> and the cleaned up script is relaunched (but as a separate process - just
> using nuke.scriptOpen(...) ends up just re-introducing the bad layers into
> the already open session. Despite 'closing' it, the bad layers and channels
> are still in memory).
>
>
> http://www.nukepedia.com/python/regular-expressions-re-module-to-fix-broken-scripts/#findChannels
> covers using the blacklisted layers and finding the channels inside them.
> Since we use '-?' (find a match 0 or 1 times) as a prefix to the regex
> expression for the channel we're looking for (based on the bad layer's
> channels), we find both 'rgb.red' and '-rgb.red'.
>
> A whitelist of channels to keep is a good idea, but so far hasn't been
> necessary. I guess in our case, when a bad channel has been introduced, it
> hasn't been carrying a good channel with it in the add_layers function.
>
> Cheers
> JRAB
>
>
> Erik Winquist wrote:
>
>
> We've been wrestling with this like many others.
>
> Instead of searching for specific layers/channels in specific
> configurations, I've instead opted to compare what channels Nuke says a
> script contains:
>
> nuke.channels()
>
> vs. what channels all of the script's nodes report they're using:
>
> allnodes = nuke.allNodes(recurseGroups=True)
> allchans = []
> for n in allnodes:
>     for c in n.channels():
>         allchans.append(c)
>
> scriptchans = set(nuke.channels())
> usedchans = set(allchans)
> notused = scriptchans - usedchans
>
> In the above example, 'notused' is a set of any channels declared in the
> script that no node actually uses.

Re: [Nuke-users] Re: Creating infinite zoom and image nesting

2012-04-09 Thread Ivan Busquets
Ok, so here's a few pointers that should let you make it work with either
of the examples.

1. Common to all 3 examples. Keep your transform nodes stacked together.
Adding the Grade node between transforms is breaking concatenation between
them.

2. In the first example, use a "cloned" transform on each one of the
branches, instead of doing it after everything is merged.

3. The Card3D example, as Randy suggested, should work similar to the
cloned transform example, with the convenience that you don't have to clone
anything. You would plug one camera to all the Card3D nodes, and drive your
zoom with that camera

4. For the full 3D setup to work, you need to use MergeMat nodes, not
regular Merges. Have a look at the example I sent before.


Here's your own script with those fixes in place, in case something isn't
clear. Hope that helps.



set cut_paste_input [stack 0]
version 6.3 v1
BackdropNode {
 inputs 0
 name BackdropNode1
 tile_color 0x8e8e3800
 label "Regular Transform"
 note_font_size 42
 selected true
 xpos -6788
 ypos -569
 bdwidth 724
 bdheight 867
}
BackdropNode {
 inputs 0
 name BackdropNode2
 tile_color 0x7171c600
 label "Card 3D"
 note_font_size 42
 selected true
 xpos -5908
 ypos -569
 bdwidth 680
 bdheight 869
}
BackdropNode {
 inputs 0
 name BackdropNode3
 tile_color 0x8e8e3800
 label "3D Scanline render"
 note_font_size 42
 selected true
 xpos -5124
 ypos -569
 bdwidth 691
 bdheight 871
}
CheckerBoard2 {
 inputs 0
 boxsize 300
 centerlinewidth 10
 name CheckerBoard4
 selected true
 xpos -5458
 ypos -419
}
Grade {
 white {3 1 1 1}
 name Grade4
 selected true
 xpos -5458
 ypos -347
 postage_stamp true
}
Transform {
 rotate 45
 center {1024 778}
 name Transform9
 selected true
 xpos -5458
 ypos -273
}
set N34d05e80 [stack 0]
Dot {
 name Dot7
 selected true
 xpos -5284
 ypos -150
}
Dot {
 name Dot3
 label "Original close up size"
 selected true
 xpos -5284
 ypos 234
}
CheckerBoard2 {
 inputs 0
 boxsize 300
 centerlinewidth 10
 name CheckerBoard2
 selected true
 xpos -6338
 ypos -337
}
Grade {
 white {3 1 1 1}
 name Grade1
 selected true
 xpos -6338
 ypos -265
 postage_stamp true
}
Transform {
 rotate 45
 center {1024 778}
 name Transform18
 selected true
 xpos -6338
 ypos -147
}
set N68125270 [stack 0]
Dot {
 name Dot9
 selected true
 xpos -6149
 ypos -150
}
Dot {
 name Dot1
 label "Original close up size"
 selected true
 xpos -6149
 ypos 234
}
push $N68125270
Transform {
 scale 0.333
 center {1024 778}
 name Transform19
 selected true
 xpos -6338
 ypos -89
}
clone node12feb1e10|Transform|77695 Transform {
 scale 3
 center {1024 778}
 name Transform23
 label "Use this as your master zoom"
 selected true
 xpos -6335
 ypos -21
}
set C2feb1e10 [stack 0]
CheckerBoard2 {
 inputs 0
 boxsize 200
 centerlinewidth 10
 name CheckerBoard3
 selected true
 xpos -6558
 ypos -425
}
Grade {
 white {1 3 1 1}
 name Grade2
 selected true
 xpos -6558
 ypos -353
 postage_stamp true
}
Transform {
 rotate 45
 center {1024 778}
 name Transform20
 selected true
 xpos -6558
 ypos -260
}
Transform {
 scale 0.5
 center {1024 778}
 name Transform21
 selected true
 xpos -6558
 ypos -185
}
clone $C2feb1e10 {
 xpos -6556
 ypos -100
 selected true
}
CheckerBoard2 {
 inputs 0
 boxsize 100
 centerlinewidth 10
 name CheckerBoard10
 selected true
 xpos -6778
 ypos -488
}
Grade {
 white {1 1 3 1}
 name Grade3
 selected true
 xpos -6778
 ypos -416
 postage_stamp true
}
Transform {
 rotate 45
 center {1024 778}
 name Transform22
 selected true
 xpos -6778
 ypos -269
}
clone $C2feb1e10 {
 xpos -6778
 ypos -139
 selected true
}
Merge2 {
 inputs 2
 name Merge1
 selected true
 xpos -6558
 ypos 7
}
Merge2 {
 inputs 2
 name Merge2
 selected true
 xpos -6338
 ypos 103
}
Dot {
 name Dot2
 label "Re-sized close up"
 selected true
 xpos -6310
 ypos 253
}
push $cut_paste_input
Camera2 {
 translate {0 0 0.677}
 name Camera2
 selected true
 xpos -4960
 ypos 149
}
CheckerBoard2 {
 inputs 0
 boxsize 300
 centerlinewidth 10
 name CheckerBoard8
 selected true
 xpos -4674
 ypos -473
}
Grade {
 white {3 1 1 1}
 name Grade8
 selected true
 xpos -4674
 ypos -401
 postage_stamp true
}
Transform {
 rotate 45
 center {1024 778}
 name Transform13
 selected true
 xpos -4674
 ypos -305
}
set N61e7bfd0 [stack 0]
Transform {
 scale 0.333
 center {1024 778}
 name Transform14
 selected true
 xpos -4674
 ypos -137
}
CheckerBoard2 {
 inputs 0
 boxsize 200
 centerlinewidth 10
 name CheckerBoard9
 selected true
 xpos -4894
 ypos -473
}
Grade {
 white {1 3 1 1}
 name Grade9
 selected true
 xpos -4894
 ypos -401
 postage_stamp true
}
Transform {
 rotate 45
 center {1024 778}
 name Transform15
 selected true
 xpos -4894
 ypos -311
}
Transform {
 scale 0.5
 center {1024 778}
 name Transform16
 selected true
 xpos -4894
 ypos -233
}
CheckerBoard2 {
 inputs 0
 boxsize 100
 centerlinewidth 10
 name CheckerBoard7
 selected true
 xpos -5114
 ypos -489
}
Grade {
 white {1 1 3 1}
 name Grade7
 selected true
 xpos -5114
 ypos -417
 postage_stamp true
}
Transform {
 rotate 45
 c

Re: [Nuke-users] Creating infinite zoom and image nesting

2012-04-09 Thread Ivan Busquets
Hi,

In Nuke, I think there's 2 ways you could go about this:

1 - Keep all your transforms together for each image, before merging them
together. That means you'll probably want to have one master transform that
drives the camera move, and have it cloned (or expression linked) as the
last transform of each one of your images. Then, above that transform, just
position, scale each image to line them up. And if you need to do any
masking to blend them together, make sure you do that before the transforms.

2 - The workflow you describe from AE & Flame can be achieved by moving to
a 3D setup instead. One of the really cool things about Nuke that doesn't
get enough rep is the fact that geometry honors concatenation when looking
up its textures. So, say you have an image, scale it way down, and put it
on a card. If you render that through a camera that gets very close to the
card (and therefore scales it up again), you'll see that it's still
concatenating with the transformations before the card. And, most
importantly, this stays true even when you use MergeMaterial nodes.
So, for the case of the "Earth Zoom", you could use a setup like this  (and
if you're going to be adding cloud layers, etc, they might need to have
some parallax, so I would definitely recommend the 3D setup in this case):

set cut_paste_input [stack 0]
version 6.3 v4
StickyNote {
 inputs 0
 name StickyNote1
 label "because of the way geometry textures itself honoring\nconcatenation
of transforms, you can get close to your scaled\ndown images without
losing detail down here"
 selected true
 xpos -1564
 ypos 342
}
push $cut_paste_input
Camera2 {
 translate {{curve x1 0} {curve x1 0} {curve x1 -0.528 x20 1.528}}
 name Camera1
 selected true
 xpos -1586
 ypos 250
}
CheckerBoard2 {
 inputs 0
 name CheckerBoard1
 selected true
 xpos -1221
 ypos -27
}
Transform {
 scale 0.1
 center {1024 778}
 name Transform8
 label "scaled way down"
 selected true
 xpos -1228
 ypos 61
}
CheckerBoard2 {
 inputs 0
 name CheckerBoard7
 selected true
 xpos -1318
 ypos -144
}
Transform {
 scale 0.3
 center {1024 778}
 name Transform9
 label "scaled way down"
 selected true
 xpos -1318
 ypos -55
}
ColorWheel {
 inputs 0
 gamma 0.45
 name ColorWheel1
 label "BG = widest image"
 selected true
 xpos -1428
 ypos -230
}
MergeMat {
 inputs 2
 name MergeMat2
 selected true
 xpos -1428
 ypos -49
}
MergeMat {
 inputs 2
 name MergeMat1
 selected true
 xpos -1428
 ypos 67
}
Card2 {
 translate {0 0 -0.713866}
 control_points {3 3 3 6

1 {-0.5 -0.5 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0.166865 0} 0 {0 0
0} 0 {0 0 0}
1 {0 -0.5 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0.166865
0} 0 {0 0 0} 0 {0.5 0 0}
1 {0.5 -0.5 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0.166865 0} 0 {0 0
0} 0 {1 0 0}
1 {-0.5 0 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0.166716 0} 0 {0
-0.166716 0} 0 {0 0.5 0}
1 {0 0 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0.166716 0} 0
{0 -0.166716 0} 0 {0.5 0.5 0}
1 {0.5 0 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0.166716 0} 0 {0
-0.166716 0} 0 {1 0.5 0}
1 {-0.5 0.5 0} 0 {0.166865 0 0} 0 {0 0 0} 0 {0 0 0} 0 {0 -0.166865
0} 0 {0 1 0}
1 {0 0.5 0} 0 {0.166716 0 0} 0 {-0.166716 0 0} 0 {0 0 0} 0 {0
-0.166865 0} 0 {0.5 1 0}
1 {0.5 0.5 0} 0 {0 0 0} 0 {-0.166865 0 0} 0 {0 0 0} 0 {0 -0.166865
0} 0 {1 1 0} }
 name Card1
 selected true
 xpos -1428
 ypos 158
}
push 0
ScanlineRender {
 inputs 3
 output_motion_vectors_type accurate
 name ScanlineRender1
 selected true
 xpos -1428
 ypos 270
}


Hope that helps.

Cheers,
Ivan



On Mon, Apr 9, 2012 at 6:09 PM, mesropa
wrote:

> I have been trying for the last few days in creating an Earth Zoom also
> known as a Cosmic Zoom or as I like to think of it simply as "image
> nesting". I found a tutorial for it in AE and it seams straight forward. It
> can also be done in Flame with the same logical steps, however I have been
> unable to do the same thing using Nuke. The problem is that once a node
> passes through a merge the pixels are baked down. You can have transform
> nodes one after the other doing inverse things and because of CONCATENATING
> they will cancel each other out without effect. but if you scale something
> down using a transform and merge it with another plate the output can not
> be inversely scaled back up with out degradation. Short of creating giant
> 30K and larger images (using a reformat to nest them ) I can't make
> something work as efficiently as possible. Below is the tutorial of the
> After Effects setup. If any one can give some pointers that would be an
> amazing help
>
> http://www.videocopilot.net/tutorial/earth_zoom/
>
> Thanks
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: Ocula Disparity from Depth Channels

2012-04-03 Thread Ivan Busquets
A little late to the party, but just wanted to add my 2 cents in case
it helps someone make a full dollar :)

@Thomas: the process Michael described above is exactly what you need
if your starting point is already a world position pass. You should
only need the pass rendered for one eye and the opposite camera to get
disparity data.

Unfortunately, this is a lot more tedious to do with standard nodes
than it would be by writing a dedicated plugin, especially if you want
to account for any possible variation with the Cameras.
For example, you can get the transformation matrix of your cameras
from their world_matrix knob, but you can't get the projection matrix
(unless you're writing a plugin, that is). So, you need to manually
figure out the camera-to-screen transformation using the knobs from
the camera's projection tab. For simple cases, you can use just the
focal and horizontal aperture values, but if you need to account for
window_translate, window_scale and roll changes, then it gets messy
very easily.

That said, I've put together a little Nuke script (attached) to go
from world position to disparity. It could be more compact, but it's
split out to show the different transforms and conversions between
coordinate systems one by one, so hopefully it'll be easier to
understand. Keep in mind that, as stated previously, this one doesn't
account for changes to the win_translate or win_scale knobs, though.
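To illustrate the camera-to-screen step with the simplest possible rig, here is a pure-Python sketch for parallel (non-converged) cameras with no rotation and no window translate/scale. The `project_x` helper and the focal/aperture/format numbers (Nuke camera defaults, 2048-wide format) are assumptions for the example, not taken from the attached script:

```python
def project_x(px, pz, cam_x, focal=50.0, haperture=24.576, width=2048):
    """Horizontal pixel coordinate of a world-space point seen from a
    camera at (cam_x, 0, 0) looking down -Z. Assumes pz < 0 (point in
    front of the camera) and no filmback offsets.
    """
    x_cam = px - cam_x                                  # world -> camera space
    ndc_x = (2.0 * focal / haperture) * (x_cam / -pz)   # perspective divide
    return (ndc_x * 0.5 + 0.5) * width                  # NDC (-1..1) -> pixels

# A point 10 units in front of a rig with a 0.06-unit interaxial:
left_x = project_x(0.0, -10.0, cam_x=-0.03)
right_x = project_x(0.0, -10.0, cam_x=0.03)
disparity_x = right_x - left_x  # what a disparity channel would store in x
```

Doing this per pixel of a world position pass, for the opposite eye's camera, gives the screen-space offset that a disparity image encodes.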

Hope that helps.

Cheers,
Ivan



On Sun, Apr 1, 2012 at 12:52 PM, Michael Habenicht  wrote:
> Hi Thomas,
>
> you are right the pworld pass is already the first part. We have the screen
> space and the coresponding world position. But to be able to calculate the
> disparity you need the screen space position for this particular point
> viewed through the second camera. It is possible to calculate this based on
> the camera for sure as this is what the scanline render node and reconcile3d
> node do. But don't ask me about the math.
>
> Best regards,
> Michael
>
> Am 01.04.2012 18:08, schrieb thoma:
>>
>> Hi Michael,
>>
>> We're using Arnold. If i have my stereo Pworld passes and stereo cameras
>> in nuke couldn't i make this work? When you say world position projected
>> to screen space isn't that essentially what a Ppass is or are you
>> talking about something more? I tried doing some logic operations on the
>> left and right eyes of the Ppass to see if it would yield anything
>> meaningful but no luck
>>
>> Thanks
>> Thomas
>>
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


world_to_disparity.nk
Description: Binary data
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: difference between uv project and project 3d

2012-03-24 Thread Ivan Busquets
Normalized Device Coordinates

For prman, that's the viewing device (camera) frustum, normalized to a -1
to 1 box.

As Michael said, the NDC matrix is what you would use to project a point
into camera space, and then you'd use a camera-to-world matrix to get a
coordinate in world space.
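As a rough illustration of that -1 to 1 normalization (a sketch only: prman's actual NDC convention and matrices are more involved, and the aperture numbers here are Nuke camera defaults used as assumptions):

```python
def camera_to_ndc(x, y, z, focal=50.0, haperture=24.576, vaperture=18.672):
    """Map a camera-space point (camera at origin, looking down -Z) into
    a -1..1 normalized box in x and y via a pinhole perspective divide."""
    sx = (2.0 * focal / haperture) * (x / -z)
    sy = (2.0 * focal / vaperture) * (y / -z)
    return sx, sy

# A point on the optical axis lands at the NDC origin:
assert camera_to_ndc(0.0, 0.0, -5.0) == (0.0, 0.0)
# A point at the right edge of the horizontal aperture maps to +1:
sx, _ = camera_to_ndc(24.576 / 2.0, 0.0, -50.0)
```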
 On Mar 24, 2012 5:02 PM, "Randy Little"  wrote:

> what does ndc stand for (normal depth coord?)
> Randy S. Little
> http://www.rslittle.com
>
>
>
>
> On Sat, Mar 24, 2012 at 15:01, Michael Garrett 
> wrote:
> > Right, from memory it's doing this:
> > - construct ndc coordinates for x and y
> > - use depth for position.z in camera space
> > - use the camera projection matrix to convert ndc x and y to camera space
> > position x and y
> > - use camera transformation matrix to translate and rotate to position in
> > world space.
> >
> > Michael
> >
> >
> >
> > On Mar 24, 2012, at 5:11 AM, ari Rubenstein  wrote:
> >
> > Ivan,
> > If one doesn't alter camera clip planes and one understands inversion of
> > varying depth between maya, nuke and such... otherwise it's pretty
> > straightforward like Nathan's ?
> >
> > Ari
> >
> > Sent from my iPhone
> >
> > On Mar 23, 2012, at 8:44 PM, Michael Garrett 
> wrote:
> >
> > Totally agree it's made all the difference since the Shake days.  Thanks
> > Ivan for contributing this (and all the other stuff!).
> >
> > Ari, I do have a gizmo version of a depth to Pworld conversion but it
> > assumes raw planar depth from camera.  Though once you start factoring in
> > near and far clipping planes, and different depth formats, it gets a bit
> > more complicated.  Ivan may have something to say on this.
> >
> > Michael
> >
> >
> >
> > On 23 March 2012 03:16, ari Rubenstein  wrote:
> >>
> >> Wow, much appreciated.
> >>
> >>
> >> Thinking back to how artists and studios in the film industry used to
> hold
> >> tight their techniques for leverage and advantage, it's great to see how
> >> much "this" comp community encourages and props up one another for
> creative
> >> advancement for all.
> >>
> >> Thanks again Ivan
> >>
> >> Ari
> >>
> >>
> >> Sent from my iPhone
> >>
> >> On Mar 23, 2012, at 3:16 AM, Ivan Busquets 
> wrote:
> >>
> >> Hey Ari,
> >>
> >> Here's the plugin I mentioned before.
> >>
> >> http://www.nukepedia.com/gizmos/plugins/3d/stickyproject/
> >>
> >> There's only compiled versions for Nuke 6.3 (MacOS and Linux64), but
> I've
> >> uploaded the source code as well, so someone else can compile it for
> Windows
> >> if needed
> >>
> >> Hope it proves useful.
> >> Cheers,
> >> Ivan
> >>
> >> On Wed, Mar 21, 2012 at 2:55 PM,  wrote:
> >>>
> >>> thanks Frank for the clarification.
> >>>
> >>> thanks Ivan for digging that plugin up if ya can.  i have a solution I
> >>> wrapped into a tool as well, but I'd love to see your approach as well.
> >>>
> >>>
> >>>
> >>> >> 2)  if you've imported an obj sequence with UV's already on (for an
> >>> >> animated, deformable piece of geo)... and you're using a static camera
> >>> >> (say
> >>> >> a single frame of your shot camera)... is there a way to do
> something
> >>> >> akin
> >>> >> to Maya's "texture reference object" whereby the UV's are changed
> >>> >> based
> >>> >> on
> >>> >> this static camera, for all the subsequent frames of the obj
> sequence
> >>> >> ?
> >>> >
> >>> >
> >>> > I've got a plugin that does exactly that. I'll see if I can share on
> >>> > Nukepedia soon.
> >>> >
> >>> > Cheers,
> >>> > Ivan
> >>> >
> >>> > On Wed, Mar 21, 2012 at 2:31 PM, Frank Rueter  >
> >>> > wrote:
> >>> >
> >>> >> 1 - UVProject creates UVs from scratch, with 0,0 in the lower left
> of
> >>> >> the
> >>> >> camera frustum and1,1 in the upper right.
> >>> >>
> >>> >> 2 - been waiting for that feature a l

Re: [Nuke-users] Re: difference between uv project and project 3d

2012-03-24 Thread Ivan Busquets
Thanks guys, I also think we have a great community, and it's a pleasure to
share when possible, as much as it is to learn from everyone who
participates.

@Thorsten: thanks for the Windows compile. I'll upload it to Nukepedia as
well.

Cheers,
Ivan

On Fri, Mar 23, 2012 at 5:44 PM, Michael Garrett wrote:

> Totally agree it's made all the difference since the Shake days.  Thanks
> Ivan for contributing this (and all the other stuff!).
>
> Ari, I do have a gizmo version of a depth to Pworld conversion but it
> assumes raw planar depth from camera.  Though once you start factoring in
> near and far clipping planes, and different depth formats, it gets a bit
> more complicated.  Ivan may have something to say on this.
>
> Michael
>
>
>
>
> On 23 March 2012 03:16, ari Rubenstein  wrote:
>
>> Wow, much appreciated.
>>
>>
>> Thinking back to how artists and studios in the film industry used to
>> hold tight their techniques for leverage and advantage, it's great to see
>> how much "this" comp community encourages and props up one another for
>> creative advancement for all.
>>
>> Thanks again Ivan
>>
>> Ari
>>
>>
>> Sent from my iPhone
>>
>> On Mar 23, 2012, at 3:16 AM, Ivan Busquets 
>> wrote:
>>
>> Hey Ari,
>>
>> Here's the plugin I mentioned before.
>>
>> http://www.nukepedia.com/gizmos/plugins/3d/stickyproject/
>>
>> There's only compiled versions for Nuke 6.3 (MacOS and Linux64), but I've
>> uploaded the source code as well, so someone else can compile it for
>> Windows if needed
>>
>> Hope it proves useful.
>> Cheers,
>> Ivan
>>
>> On Wed, Mar 21, 2012 at 2:55 PM,  wrote:
>>
>>> thanks Frank for the clarification.
>>>
>>> thanks Ivan for digging that plugin up if ya can.  i have a solution I
>>> wrapped into a tool as well, but I'd love to see your approach as well.
>>>
>>>
>>>
>>> , >>
>>> >> 2)  if you've imported an obj sequence with UV's already on (for an
> >>> >> animated, deformable piece of geo)... and you're using a static camera
>>> >> (say
>>> >> a single frame of your shot camera)... is there a way to do something
>>> >> akin
>>> >> to Maya's "texture reference object" whereby the UV's are changed
>>> based
>>> >> on
>>> >> this static camera, for all the subsequent frames of the obj sequence
>>> ?
>>> >
>>> >
>>> > I've got a plugin that does exactly that. I'll see if I can share on
>>> > Nukepedia soon.
>>> >
>>> > Cheers,
>>> > Ivan
>>> >
>>> > On Wed, Mar 21, 2012 at 2:31 PM, Frank Rueter 
>>> > wrote:
>>> >
>>> >> 1 - UVProject creates UVs from scratch, with 0,0 in the lower left of
>>> >> the
>>> >> camera frustum and1,1 in the upper right.
>>> >>
>>> >> 2 - been waiting for that feature a long time ;).It should be logged
>>> as
>>> >> a
>>> >> feature request but would certainly be good to report again to make
>>> sure
>>> >> (and to push it in priority)
>>> >>
>>> >>
>>> >>
>>> >>
>>> >> On 3/22/12 8:28 AM, a...@curvstudios.com wrote:
>>> >>
>>> >>> couple more questions:
>>> >>>
>>> >>> 1)  if imported geo does not already have UV's, will UVproject
>>> create a
>>> >>> new set or does it require them to...replace them ?
>>> >>>
>>> >>> 2)  if you've imported an obj sequence with UV's already on (for an
> >>> animated, deformable piece of geo)... and you're using a static camera
>>> >>> (say
>>> >>> a single frame of your shot camera)... is there a way to do something
>>> >>> akin
>>> >>> to Maya's "texture reference object" whereby the UV's are changed
>>> based
>>> >>> on
>>> >>> this static camera, for all the subsequent frames of the obj
>>> sequence ?
>>> >>>
>>> >>> ..sorry if I'm too verbose...that was sort of a stream of
>>> consciousness
>>> >>> question.  Basically I'm asking if there is an easie

Re: [Nuke-users] Re: difference between uv project and project 3d

2012-03-23 Thread Ivan Busquets
Hey Ari,

Here's the plugin I mentioned before.

http://www.nukepedia.com/gizmos/plugins/3d/stickyproject/

There's only compiled versions for Nuke 6.3 (MacOS and Linux64), but I've
uploaded the source code as well, so someone else can compile it for
Windows if needed

Hope it proves useful.
Cheers,
Ivan

On Wed, Mar 21, 2012 at 2:55 PM,  wrote:

> thanks Frank for the clarification.
>
> thanks Ivan for digging that plugin up if ya can.  i have a solution I
> wrapped into a tool as well, but I'd love to see your approach as well.
>
>
>
> >> 2)  if you've imported an obj sequence with UV's already on (for an
> >> animated, deformable piece of geo)... and you're using a static camera
> >> (say
> >> a single frame of your shot camera)... is there a way to do something
> >> akin
> >> to Maya's "texture reference object" whereby the UV's are changed based
> >> on
> >> this static camera, for all the subsequent frames of the obj sequence ?
> >
> >
> > I've got a plugin that does exactly that. I'll see if I can share on
> > Nukepedia soon.
> >
> > Cheers,
> > Ivan
> >
> > On Wed, Mar 21, 2012 at 2:31 PM, Frank Rueter 
> > wrote:
> >
> >> 1 - UVProject creates UVs from scratch, with 0,0 in the lower left of
> >> the
> >> camera frustum and 1,1 in the upper right.
> >>
> >> 2 - been waiting for that feature a long time ;). It should be logged as
> >> a
> >> feature request but would certainly be good to report again to make sure
> >> (and to push it in priority)
> >>
> >>
> >>
> >>
> >> On 3/22/12 8:28 AM, a...@curvstudios.com wrote:
> >>
> >>> couple more questions:
> >>>
> >>> 1)  if imported geo does not already have UV's, will UVproject create a
> >>> new set or does it require them to...replace them ?
> >>>
> >>> 2)  if you've imported an obj sequence with UV's already on (for an
> >>> animated, deformable piece of geo)... and you're using a static camera
> >>> (say
> >>> a single frame of your shot camera)... is there a way to do something
> >>> akin
> >>> to Maya's "texture reference object" whereby the UV's are changed based
> >>> on
> >>> this static camera, for all the subsequent frames of the obj sequence ?
> >>>
> >>> ..sorry if I'm too verbose...that was sort of a stream of consciousness
> >>> question.  Basically I'm asking if there is an easier way than my
> >>> current
> >>> method where I export an obj sequence with UV's, project3D on a single
> >>> frame, render with scanline to unwrapped UV, then input that into the
> >>> full
> >>> obj sequence to get my "paint" to stick throughout.
> >>>
> >>> oy, sorry again.
> >>>
> >>> Ari
> >>> Blue Sky
> >>>
> >>>
> >>>
> >>>  ivanbusquets wrote:
> 
> > You can think of UVProject as a "baked" or "sticky" projection.
> >
> > The main difference is how they'll behave if you transform/deform
> > your
> > geometry AFTER your projection.
> >
> > UVProject "bakes" the UV values into each vertex, so if you transform
> > those vertices later on, they'll still pull the textures from the
> > same
> > coordinate.
> >
> > The other difference between UVProject and Project3D is how they
> > behave
> > when the aspect ratio of the camera window is different than the
> > aspect
> > ratio of the projected image.
> > With UVProject, projection is defined by both the horizontal and
> > vertical aperture. Project3D only takes the horizontal aperture, and
> > preserves the aspect ratio of whatever image you're projecting.
> >
> >
> > Hope that makes sense.
> >
> > Cheers,
> > Ivan
> >
> >
> > On Tue, Mar 20, 2012 at 4:50 PM, coolchipper  wrote:
> >
> > hey Nukers, may be a very basic question, but i
> > wanted
> >> to know
> >>
> > what is the difference between the two, i do a lot of clean up
> > work everyday and i am kind of confused when to use the uv project
> > node an when to go for a project 3d node.i know that uv project
> > project a mapping coordinates to a mesh, in one of frank videos he
> > used the uv project node to clean up dolly tracks,that might have
> > been done using the project 3d node too, so whats the difference
> > in using uv project node for cleanup work? thanks ..
> >
> >>
> >>
> >>
> >>
> 
>  Thanks Ivan.
> 
> 
> 

Re: [Nuke-users] Re: difference between uv project and project 3d

2012-03-21 Thread Ivan Busquets
It's just a modified version of UVProject that freezes the context of the
projection, so each vertex carries the UV as it was on the frame where it
was frozen.
So, it's essentially the same thing you would get with rendering in
UV-space and re-plugging that to animated geo, but it all happens in a
single 3D scene instead, and you avoid the double filtering.
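
[Editor's illustration] The freeze described above can be sketched in pure Python (this is illustrative math only, not Nuke code or the StickyProject source; the pinhole model, function names, and aperture numbers are my own simplifications): bake the UV by evaluating the projection once at a reference frame, then keep returning the stored value even after the vertex moves.

```python
def project_uv(p, focal=50.0, haperture=24.576, vaperture=18.672):
    """Project a camera-space point (x, y, z < 0) to UV in [0, 1].
    Simplified pinhole model; the aperture defaults are illustrative."""
    x, y, z = p
    u = 0.5 + (x * focal) / (-z * haperture)
    v = 0.5 + (y * focal) / (-z * vaperture)
    return u, v

class FrozenUV:
    """Mimic of the 'sticky' idea: bake the UV once, then reuse it."""
    def __init__(self):
        self._baked = None

    def uv(self, p):
        if self._baked is None:        # first evaluation = reference frame
            self._baked = project_uv(p)
        return self._baked             # later frames ignore the new position

sticky = FrozenUV()
uv_ref = sticky.uv((0.0, 0.0, -10.0))  # reference frame: vertex on axis
uv_now = sticky.uv((2.0, 1.0, -8.0))   # vertex has moved; UV stays baked
assert uv_ref == uv_now == (0.5, 0.5)
```

In Nuke terms, the baked lookup is what lets the texture "stick" to deforming geometry without the render-to-UV-space round trip and its second filtering pass.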




On Wed, Mar 21, 2012 at 2:55 PM,  wrote:

> thanks Frank for the clarification.
>
> thanks Ivan for digging that plugin up if ya can.  i have a solution I
> wrapped into a tool as well, but I'd love to see your approach as well.
>
>
>
> >> 2)  if you've imported an obj sequence with UV's already on (for an
> >> animated, deformable piece of geo)... and you're using a static camera
> >> (say
> >> a single frame of your shot camera)... is there a way to do something
> >> akin
> >> to Maya's "texture reference object" whereby the UV's are changed based
> >> on
> >> this static camera, for all the subsequent frames of the obj sequence ?
> >
> >
> > I've got a plugin that does exactly that. I'll see if I can share on
> > Nukepedia soon.
> >
> > Cheers,
> > Ivan
> >
> > On Wed, Mar 21, 2012 at 2:31 PM, Frank Rueter 
> > wrote:
> >
> >> 1 - UVProject creates UVs from scratch, with 0,0 in the lower left of
> >> the
> >> camera frustum and 1,1 in the upper right.
> >>
> >> 2 - been waiting for that feature a long time ;). It should be logged as
> >> a
> >> feature request but would certainly be good to report again to make sure
> >> (and to push it in priority)
> >>
> >>
> >>
> >>
> >> On 3/22/12 8:28 AM, a...@curvstudios.com wrote:
> >>
> >>> couple more questions:
> >>>
> >>> 1)  if imported geo does not already have UV's, will UVproject create a
> >>> new set or does it require them to...replace them ?
> >>>
> >>> 2)  if you've imported an obj sequence with UV's already on (for an
> >>> animated, deformable piece of geo)... and you're using a static camera
> >>> (say
> >>> a single frame of your shot camera)... is there a way to do something
> >>> akin
> >>> to Maya's "texture reference object" whereby the UV's are changed based
> >>> on
> >>> this static camera, for all the subsequent frames of the obj sequence ?
> >>>
> >>> ..sorry if I'm too verbose...that was sort of a stream of consciousness
> >>> question.  Basically I'm asking if there is an easier way than my
> >>> current
> >>> method where I export an obj sequence with UV's, project3D on a single
> >>> frame, render with scanline to unwrapped UV, then input that into the
> >>> full
> >>> obj sequence to get my "paint" to stick throughout.
> >>>
> >>> oy, sorry again.
> >>>
> >>> Ari
> >>> Blue Sky
> >>>
> >>>
> >>>
> >>>  ivanbusquets wrote:
> 
> > You can think of UVProject as a "baked" or "sticky" projection.
> >
> > The main difference is how they'll behave if you transform/deform
> > your
> > geometry AFTER your projection.
> >
> > UVProject "bakes" the UV values into each vertex, so if you transform
> > those vertices later on, they'll still pull the textures from the
> > same
> > coordinate.
> >
> > The other difference between UVProject and Project3D is how they
> > behave
> > when the aspect ratio of the camera window is different than the
> > aspect
> > ratio of the projected image.
> > With UVProject, projection is defined by both the horizontal and
> > vertical aperture. Project3D only takes the horizontal aperture, and
> > preserves the aspect ratio of whatever image you're projecting.
> >
> >
> > Hope that makes sense.
> >
> > Cheers,
> > Ivan
> >
> >
> > On Tue, Mar 20, 2012 at 4:50 PM, coolchipper  wrote:
> >
> > hey Nukers, may be a very basic question, but i
> > wanted
> >> to know
> >>
> > what is the difference between the two, i do a lot of clean up
> > work everyday and i am kind of confused when to use the uv project
> > node an when to go for a project 3d node.i know that uv project
> > project a mapping coordinates to a mesh, in one of frank videos he
> > used the uv project node to clean up dolly tracks,that might have
> > been done using the project 3d node too, so whats the difference
> > in using uv project node for cleanup work? thanks ..
> >
> >>
> >>
> >>
> >>
> 
>  Thanks Ivan.
> 
> 
> 

Re: [Nuke-users] Re: difference between uv project and project 3d

2012-03-21 Thread Ivan Busquets
>
> 2)  if you've imported an obj sequence with UV's already on (for an
> animated, deformable piece of geo)... and you're using a static camera (say
> a single frame of your shot camera)... is there a way to do something akin
> to Maya's "texture reference object" whereby the UV's are changed based on
> this static camera, for all the subsequent frames of the obj sequence ?


I've got a plugin that does exactly that. I'll see if I can share on
Nukepedia soon.

Cheers,
Ivan

On Wed, Mar 21, 2012 at 2:31 PM, Frank Rueter  wrote:

> 1 - UVProject creates UVs from scratch, with 0,0 in the lower left of the
> camera frustum and 1,1 in the upper right.
>
> 2 - been waiting for that feature a long time ;). It should be logged as a
> feature request but would certainly be good to report again to make sure
> (and to push it in priority)
>
>
>
>
> On 3/22/12 8:28 AM, a...@curvstudios.com wrote:
>
>> couple more questions:
>>
>> 1)  if imported geo does not already have UV's, will UVproject create a
>> new set or does it require them to...replace them ?
>>
>> 2)  if you've imported an obj sequence with UV's already on (for an
>> animated, deformable piece of geo)... and you're using a static camera (say
>> a single frame of your shot camera)... is there a way to do something akin
>> to Maya's "texture reference object" whereby the UV's are changed based on
>> this static camera, for all the subsequent frames of the obj sequence ?
>>
>> ..sorry if I'm too verbose...that was sort of a stream of consciousness
>> question.  Basically I'm asking if there is an easier way than my current
>> method where I export an obj sequence with UV's, project3D on a single
>> frame, render with scanline to unwrapped UV, then input that into the full
>> obj sequence to get my "paint" to stick throughout.
>>
>> oy, sorry again.
>>
>> Ari
>> Blue Sky
>>
>>
>>
>>  ivanbusquets wrote:
>>>
 You can think of UVProject as a "baked" or "sticky" projection.

 The main difference is how they'll behave if you transform/deform your
 geometry AFTER your projection.

 UVProject "bakes" the UV values into each vertex, so if you transform
 those vertices later on, they'll still pull the textures from the same
 coordinate.

 The other difference between UVProject and Project3D is how they behave
 when the aspect ratio of the camera window is different than the aspect
 ratio of the projected image.
 With UVProject, projection is defined by both the horizontal and
 vertical aperture. Project3D only takes the horizontal aperture, and
 preserves the aspect ratio of whatever image you're projecting.


 Hope that makes sense.

 Cheers,
 Ivan


 On Tue, Mar 20, 2012 at 4:50 PM, coolchipper  wrote:

 hey Nukers, may be a very basic question, but i wanted
> to know
>
 what is the difference between the two, i do a lot of clean up
 work everyday and i am kind of confused when to use the uv project
 node an when to go for a project 3d node.i know that uv project
 project a mapping coordinates to a mesh, in one of frank videos he
 used the uv project node to clean up dolly tracks,that might have
 been done using the project 3d node too, so whats the difference
 in using uv project node for cleanup work? thanks ..

>
>
>
>
>>>
>>> Thanks Ivan.
>>>
>>>
>>>
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] difference between uv project and project 3d

2012-03-20 Thread Ivan Busquets
You can think of UVProject as a "baked" or "sticky" projection.

The main difference is how they'll behave if you transform/deform your
geometry AFTER your projection.

UVProject "bakes" the UV values into each vertex, so if you transform those
vertices later on, they'll still pull the textures from the same coordinate.

The other difference between UVProject and Project3D is how they behave
when the aspect ratio of the camera window is different than the aspect
ratio of the projected image.
With UVProject, projection is defined by both the horizontal and vertical
aperture. Project3D only takes the horizontal aperture, and preserves the
aspect ratio of whatever image you're projecting.


Hope that makes sense.

Cheers,
Ivan
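
[Editor's illustration] The aperture difference described above can be sketched numerically (this is illustrative math only, not Nuke's actual implementation; the function names and values are assumptions): with a camera whose vertical aperture matches the image aspect the two behave the same, and with a mismatched vertical aperture only the UVProject-style result stretches.

```python
def uvproject_style(x, y, z, focal, hap, vap):
    """UV from both apertures: the camera's vertical aperture sets the
    vertical scale, so a mismatched vap stretches the texture."""
    return (0.5 + x * focal / (-z * hap),
            0.5 + y * focal / (-z * vap))

def project3d_style(x, y, z, focal, hap, image_aspect):
    """UV from the horizontal aperture only: the vertical scale comes
    from the projected image's own aspect ratio."""
    return (0.5 + x * focal / (-z * hap),
            0.5 + y * focal / (-z * hap / image_aspect))

p = (1.0, 1.0, -10.0)
matched = uvproject_style(*p, focal=20.0, hap=24.0, vap=13.5)
proj3d = project3d_style(*p, focal=20.0, hap=24.0, image_aspect=24.0 / 13.5)
assert all(abs(a - b) < 1e-9 for a, b in zip(matched, proj3d))

# vertical aperture that doesn't match the image aspect -> V is squashed
mismatched = uvproject_style(*p, focal=20.0, hap=24.0, vap=24.0)
assert abs(mismatched[1] - proj3d[1]) > 1e-3
```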


On Tue, Mar 20, 2012 at 4:50 PM, coolchipper <
nuke-users-re...@thefoundry.co.uk> wrote:

> Hey Nukers, maybe a very basic question, but I wanted to know what the
> difference is between the two. I do a lot of clean-up work every day, and I
> am kind of confused about when to use the UVProject node and when to go for
> a Project3D node. I know that UVProject projects mapping coordinates onto a
> mesh; in one of Frank's videos he used the UVProject node to clean up dolly
> tracks. That might have been done using the Project3D node too, so what's
> the difference in using the UVProject node for cleanup work? Thanks.
>

Re: [Nuke-users] Re: Luminance / Chroma B44 Compressed EXRs in Nuke

2012-03-14 Thread Ivan Busquets
>
> Right, well I wouldn't call that "unstable".
>

As Nathan said before, nobody is questioning its actual stability. The name
distinction here was only between "stable" and "feature" releases. I'm
sorry if you interpreted it that way when I quoted 1.6.1 as being the
"latest stable release", but you've already been given an explanation for
the naming scheme, so let's just drop that argument. :)

In my opinion, 1.7 is perfectly stable. If anyone can identify an issue
> with it, I'll gladly fix the bug myself. It is open source, after all.
>

Nothing stops you from doing the same thing for Nuke. The source for
exrReader and exrWriter is available in the NDK, so you can recompile
against 1.7 if you need long channel names in Nuke.

Whether Nuke should use 1.7 by default or not is debatable, in my opinion.
I understand your point, and was just giving a possible reason why they
would be reluctant to.


Cheers,
Ivan

On Wed, Mar 14, 2012 at 9:59 AM, fnordware <
nuke-users-re...@thefoundry.co.uk> wrote:

> *peter wrote:*
> The issue was that in 1.70, long channel names were added in a way that
> broke compatibility with older versions of OpenEXR:
>
>
> Right, well I wouldn't call that "unstable". That would be like saying a
> new version of Nuke is unstable because old versions didn't recognize a
> node that was only in the new version.
>
> If you're worried about writing out files that will be incompatible with
> older versions, then just crop the channel names before passing them to
> OpenEXR. But since these files are out there, you may as well use the 1.7
> library so you can read them. Ideally you'd let the user check a box to
> write out long channel names if they really wanted to.
>
> The other time the EXR format was expanded was to allow for tiled images.
> It will soon be expanded again for these deep image buffers. At least I'm
> pretty confident Nuke will upgrade to the latest library when that happens.
>
>
> Brendan
>

Re: [Nuke-users] Re: Luminance / Chroma B44 Compressed EXRs in Nuke

2012-03-13 Thread Ivan Busquets
Hi,

I've never heard 1.7 called unstable. It's been out for nearly 2 years
> without the need for an update.
>

Yes, I wasn't implying that 1.7 is unstable. I was just saying that 1.6.1
is the latest stable release "by definition", as stated here:

http://www.openexr.com/downloads.html


Other than that, I don't claim to know the design reasons for staying on a
particular version. My point was that there are compatibility concerns that
could play a role in that decision, specially considering that most vendors
(afaik), have not moved to 1.7 either.

With regards to 4:2:0 support, you should definitely get in touch with
support. What I meant was that this is most likely not due to outdated exr
libraries, but to an oversight (or intentional decision to leave it out) in
the exrReader code.

Hope that clarifies my previous answer a bit. :)

Cheers,
Ivan

On Tue, Mar 13, 2012 at 2:58 PM, fnordware <
nuke-users-re...@thefoundry.co.uk> wrote:

> *ivanbusquets wrote:*
> 1.6.1 is the latest stable release, and there are other reasons not to
> move to 1.7, like the fact that exr files with long channel names are not
> backwards compatible.
>
>
> I've never heard 1.7 called unstable. It's been out for nearly 2 years
> without the need for an update.
>
> I looked and you are right, 1.7 was the first actual release to support
> the long file names, although the feature was added to the OpenEXR
> repository in 2008. Since Nuke supports channel names longer than 32
> characters, I'd think they'd be eager to support it in EXR instead of
> clipping the name or whatever they do. But even if you choose not to write
> long-channel EXRs, there's no excuse for not being able to read them.
>
> Between this and the 4:2:0 thing, I think it's Nuke that's the bottleneck
> for compatibility now.
>
>
> Brendan
>

Re: [Nuke-users] Re: Luminance / Chroma B44 Compressed EXRs in Nuke

2012-03-12 Thread Ivan Busquets
For what it's worth, I don't think this is due to outdated libraries.

Nuke's exrReader uses OpenEXR 1.6.1. In fact, it's slightly above that.
IIRC from a post to this list a while ago, it's a checkout from somewhere
between 1.6.1 and 1.7 releases, after the addition of StringVector and
MultiView attributes, but before support for long channel names.

1.6.1 is the latest stable release, and there are other reasons not to move
to 1.7, like the fact that exr files with long channel names are not
backwards compatible. I don't think there's many vendors using 1.7, if any,
and I suppose most are now waiting for OpenEXR 2.0 before updating their
EXR libs.

If Nuke is indeed not reading chroma subsampled EXRs correctly, I would
imagine this is not because of the libs used, but because it's simply not
handled in the exrReader (though I might be wrong).
As for long channel names, I think they were added in 1.7, not 1.6. So no,
Nuke's default exrReader won't read them.
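
[Editor's illustration] For context on what "handling" 4:2:0 luminance/chroma data involves, here is a toy sketch (my own illustration, not the exrReader's or OpenEXR's actual RgbaYca code): the chroma planes are stored at half resolution in each dimension and must be upsampled before RGB can be reconstructed.

```python
def upsample_420(chroma, width, height):
    """Nearest-neighbour upsample of a half-resolution chroma plane
    (stored row-major) back to full resolution. OpenEXR's own
    reconstruction filter is more sophisticated; this only shows the
    half-res -> full-res bookkeeping."""
    half_w = (width + 1) // 2
    out = []
    for y in range(height):
        for x in range(width):
            out.append(chroma[(y // 2) * half_w + (x // 2)])
    return out

# 2x2 half-res plane -> 4x4 full-res plane
full = upsample_420([1, 2, 3, 4], 4, 4)
assert full == [1, 1, 2, 2,
                1, 1, 2, 2,
                3, 3, 4, 4,
                3, 3, 4, 4]
```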




On Sun, Mar 11, 2012 at 4:36 PM, fnordware <
nuke-users-re...@thefoundry.co.uk> wrote:

> *Jed Smith wrote:*
> It seems like Nuke does not handle chroma subsampled EXR files properly?
> Is this a bug? Has anyone else experienced this problem?
>
>
> It looks like Nuke is actually using an outdated version of the OpenEXR
> library. Nuke, of all programs!
>
> The 4:2:0 Luminance/Chroma sampling was added to OpenEXR after the initial
> release. The library now provides a class that will handle the conversion
> to and from 4:4:4 RGB, but you have to be using a version of the library
> that supports it and be on the lookout for that situation.
>
> Something else I recently noticed: Nuke can't handle EXR files if they
> have channel names longer than 32 characters (at least the Mac version).
> This is another sign they're using the old library. OpenEXR originally had
> that limit, but it was expanded to 256 characters in 2007 with OpenEXR 1.6.
> Older versions of the library will reject these files.
>
> But the Luminance/Chroma stuff was added in 2004 with OpenEXR 1.2!
>
>
> Brendan
>

Re: [Nuke-users] UVProject precison/V scaling

2012-03-01 Thread Ivan Busquets
Does your camera horizontal and vertical aperture match the aspect of your
image?

Unlike Project3D, UVProject uses both horizontal and vertical aperture to
define the projection frustum, so if they don't match the aspect of your
image (or card), you'll get some stretching.

Is that what you're seeing?
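
[Editor's illustration] A quick way to sanity-check this is to compare the camera's aperture aspect against the image aspect (pure-Python sketch under simplified assumptions; the function name and tolerance are mine, and the check ignores any gate-fit differences):

```python
def aperture_matches_format(haperture, vaperture, width, height,
                            pixel_aspect=1.0, tol=1e-3):
    """UVProject-style stretching appears when the camera's aperture
    aspect differs from the image/card aspect, so compare the ratios."""
    cam_aspect = haperture / vaperture
    img_aspect = (width * pixel_aspect) / height
    return abs(cam_aspect - img_aspect) < tol

# 24.576 x 18.672 aperture against a 2048x1556 image: same ~1.316 aspect
assert aperture_matches_format(24.576, 18.672, 2048, 1556, tol=0.01)
# same camera against a square 2K format: mismatch -> expect stretching
assert not aperture_matches_format(24.576, 18.672, 2048, 2048)
```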



On Thu, Mar 1, 2012 at 2:25 PM, Julik Tarkhanov wrote:

> So I am experimenting with UVProject and I've noticed something strange.
>
> In this script https://gist.github.com/676a31ebfb24a3d689b9
> I am doing a ProjectUV and then comparing its result with the original
> checkerboard. The discrepancy is actually huge, which is something I didn't
> expect - and the discrepancy changes with the format of the image (as if
> ProjectUV applies incorrect fitting to the camera gate).
>
> Am I doing something wrong here? All this happens while the standard
> Project3D works fine.
>
> --
> Julik Tarkhanov | HecticElectric | Keizersgracht 736 1017 EX
> Amsterdam | The Netherlands | tel. +31 20 330 8250
> cel. +31 61 145 06 36 | http://hecticelectric.nl
>
>
>
>
>
>

Re: [Nuke-users] spline warper

2012-02-06 Thread Ivan Busquets
Hey Randy,

Not sure if that fixes the expected behavior for you, but I think you
should check visibility ON in your boundary shape for it to have any effect.


On Mon, Feb 6, 2012 at 9:44 AM, Randy Little  wrote:

>
> Randy S. Little
> http://www.rslittle.com 
>
>
>
>
> On Mon, Feb 6, 2012 at 02:12, Wouter Klouwen wrote:
>
>> On 06/02/2012 06:29, Randy Little wrote:
>>
>>> Has anyone found that making a border shape in the spline warper in
>>> 6.3v4 doesn't do anything?  BBox boundary works ok but as soon as you
>>> turn off BBOX boundary curve all the boundary curves fail.
>>>
>>
>> Hi Randy,
>>
>> Could you send a repro script?
>>
>> Thanks,
>>Wouter
>>
>> --
>> Wouter Klouwen, Software Engineer
>> The Foundry, 6th Floor, Comms Building, 48 Leicester Sq, London WC2H LT
>> Tel: +442079686828 • Fax: +4420 79308906 • thefoundry.co.uk
>> The Foundry Visionmongers Ltd • Reg.d in England and Wales No: 4642027

Re: [Nuke-users] tracking from animated 3d object

2012-01-23 Thread Ivan Busquets
Hi Mark,

Try either of these:

http://www.nukepedia.com/plugins/3d/geopoints/

http://www.nukepedia.com/python-scripts/3d/animatedsnap3d/


On Mon, Jan 23, 2012 at 10:47 AM, markJustison <
nuke-users-re...@thefoundry.co.uk> wrote:

> I'm wondering if it's possible to do a 2d track from a vertex on an
> animated 3d object.
> I have a provided camera track but I'd like to track a background element
> that has an animated 3d object representing it. Autoreconcile won't work in
> this case because the motion of the vertex isn't based on the camera.
> Is there a way to parent an axis to a vertex so I can get tracking from it?
>
> Thanks,
> -- Mark
>

Re: [Nuke-users] Re: Gizmo placement

2012-01-23 Thread Ivan Busquets
How are you 'creating' the node?

When you add a menu command to create your gizmo-node, make sure you use:

nuke.createNode('nodeClass')

instead of

nuke.nodes.nodeClass()
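
[Editor's illustration] For completeness, a minimal menu.py sketch of the createNode approach (a config fragment that only runs inside Nuke; the menu path "Filter/MyGizmo" and the gizmo name are hypothetical examples, not from this thread):

```python
# menu.py sketch -- assumes a gizmo "MyGizmo" is on the plugin path.
import nuke

# nuke.createNode() drops the new node at the DAG cursor and wires it to
# the current selection; nuke.nodes.MyGizmo() would create it at a
# default position, disconnected, which is the misplacement seen here.
nuke.menu('Nodes').addCommand('Filter/MyGizmo', "nuke.createNode('MyGizmo')")
```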



On Mon, Jan 23, 2012 at 12:30 PM, r...@hexagonstudios.com <
r...@hexagonstudios.com> wrote:

>
>  I did look into the X and Y pos and it is not in the add knobs section.
> The only xypos that is in there is for the nodes inside the group.
>
>
>   #! X:/apps/Nuke6.1v5/Nuke6.1.exe -nx
> version 6.1 v5
> Gizmo {
> tile_color 0x99
> selected true
>
> inputs 2
> help "LX_CA V1.1\nCreated by: Rob Bannister\n\nBased on the CA found in
> the Retro node by julian van mil. You can download his plugin on
> Nukepedia.\n\nOperation: \nYou can base your CA on 4 opereations. Full,
> Radial Falloff, Lumanance or alpha Mask, or a combination of Radial falloff
> and Lumanance.\n\nMask:\n- Source will be based on the input source and all
> the controls are handled with the gizmo.\n\n-Luma Mask, input souce into
> the mask input and use the gizmo controls to create your mask.\n\n- Alpah
> mask, create your mask externally and input the alpha into the mask
> input.\n\nUse the preview falloff to adjust your mattes."
> tile_color 0x99
> addUserKnob {20 User}
> addUserKnob {4 Operation M {Full Radial "Luma or Mask" "Radial + Luma" ""
> ""}}
> addUserKnob {4 LumaMatte l Mask M {Source "Luma Mask" "Alpha Mask"}}
> addUserKnob {4 PreviewFalloff l "Preview Falloff" M {Result Falloff}}
> addUserKnob {41 multiplier l amount T Dot14.multiplier}
> addUserKnob {41 mixRay l mixRays T moxDot1.mixRay}
> addUserKnob {41 size l blur T Blur2.size}
> addUserKnob {41 which T Switch3.which}
> addUserKnob {26 ""}
> addUserKnob {26 Falloff l "" +STARTLINE T "Radial Falloff"}
> addUserKnob {41 softness T Radial1.softness}
> addUserKnob {41 size_1 l Blur T Blur3.size}
> addUserKnob {41 scale T Transform_radialscale.scale}
> addUserKnob {26 ""}
> addUserKnob {26 Tolerance l "" +STARTLINE T "Luma Tolerance"}
> addUserKnob {41 blackpoint T Grade_luma.blackpoint}
> addUserKnob {41 whitepoint T Grade_luma.whitepoint}
> addUserKnob {41 size_2 l Erode T FilterErode_luma.size}
> addUserKnob {41 size_3 l Blur T Blur_luma.size}
> }
>
> "group nodes"
>
> } end_group
>
>
>
> On January 23, 2012 at 7:00 AM 
> nuke-users-requ...@support.thefoundry.co.ukwrote:
>
> > Send Nuke-users mailing list submissions to
> > nuke-users@support.thefoundry.co.uk
> >
> > To subscribe or unsubscribe via the World Wide Web, visit
> >
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
> > or, via email, send a message with subject or body 'help' to
> > nuke-users-requ...@support.thefoundry.co.uk
> >
> > You can reach the person managing the list at
> > nuke-users-ow...@support.thefoundry.co.uk
> >
> > When replying, please edit your Subject line so it is more specific
> > than "Re: Contents of Nuke-users digest..."
> >
> >
> > Today's Topics:
> >
> >1. Re: Unwrap Latlong(Equirectangular) image (ruchitinfushion)
> >
> >
> > --
> >
> > Message: 1
> > Date: Mon, 23 Jan 2012 04:11:23 +
> > From: "ruchitinfushion" 
> > Subject: [Nuke-users] Re: Unwrap Latlong(Equirectangular) image
> > To: nuke-users@support.thefoundry.co.uk
> > Message-ID: <1327291883.m2f.29...@forums.thefoundry.co.uk>
> > Content-Type: text/plain; charset="utf-8"
> >
> > Ok, will do that..and thanx for your help.Enjoy
> >
> >
> >
> > -- next part --
> > An HTML attachment was scrubbed...
> > URL:
> http://support.thefoundry.co.uk/cgi-bin/mailman/private/nuke-users/attachments/20120123/37dcd842/attachment.html
> >
> > --
> >
> > ___
> > Nuke-users mailing list
> > Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> > http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
> >
> >
> > End of Nuke-users Digest, Vol 47, Issue 19
> > **
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] Read Geo - all objects

2012-01-19 Thread Ivan Busquets
As Ron said, it depends on the FBX file, and whether objects have all
transforms baked in, or reference a "parent" transformation.

Basically, constraints and parent transforms are stored as "separate
objects" in the FBX file, and Nuke does not resolve them when you're
reading a single object (the individual object is read, but not the other
transform objects it might depend on).



On Thu, Jan 19, 2012 at 11:45 AM, Ron Ganbar  wrote:

> Hi Thomas,
> I had similar issues today, but reverse.
> Someone exported a whole scene in FBX for me. When I look at "all object"
> the scene is perfect. But when I select individual objects they sometimes
> appear near world centre instead of where they should be.
> I asked the 3D guy to make sure all animations and locations are baked.
> That there are no constrains or groups or anything but objects and their
> translations. Seemed to help.
> Also, splitting up the scene into separate fbxs for every object makes for
> a much quicker render and handeling.
>
>
> Ron Ganbar
> email: ron...@gmail.com
> tel: +44 (0)7968 007 309 [UK]
>  +972 (0)54 255 9765 [Israel]
> url: http://ronganbar.wordpress.com/
>
>
>
> On 19 January 2012 21:13, thoma  wrote:
>
>> **
>> Hi all,
>>
>> I've been having an issue with the readGeo node where checking the 'all
>> objects' tickbox results in the geo scaling, translating, and sometimes
>> skewing/squashing arbitrarily. The fbx contains multiple objects and when
>> using the readGeo for the individual pieces the scene comes together in a
>> completely different place in world space. Has anyone else run into this
>> issue? As far as i know Nuke operates in decimeters - so my guess is that
>> this relates to scene scale discrepancies between maya and nuke but that
>> doesn't account for the difference between 'all objects' and individual
>> objects...any help is appreciated
>>
>> Thomas
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] tcl label expression

2012-01-19 Thread Ivan Busquets
Hi Ron,

That's because the fbx_node_name knob has the SAVE_MENU flag, which means
all menu items get saved with the script (making that the actual value of
the knob)

You can see the difference if you copy-paste your node into a text editor,
vs another node with an enumeration knob, like a shuffle.
Or try running this in the Script editor, which should show the difference
between using the nuke.SAVE_MENU flag or not:

n = nuke.createNode('NoOp')
k1 = nuke.Enumeration_Knob('test1', 'test1', ['a', 'b', 'c'])
k2 = nuke.Enumeration_Knob('test2', 'test2', ['a', 'b', 'c'])
k2.setFlag(nuke.SAVE_MENU)
n.addKnob(k1)
n.addKnob(k2)


If you want to get the actual selected value from an fbx_node_name knob,
you could use something like this instead:

[python {nuke.thisNode()['fbx_node_name'].values()[int(nuke.thisNode()['fbx_node_name'].getValue())]}]
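The expression is just indexing the knob's option list with its numeric
value. Outside of Nuke, with stand-in data, the same logic looks like this:

```python
# Stand-in for an enumeration knob whose getValue() returns the selected
# index (as a float) rather than the label itself. The entries below are
# hypothetical fbx_node_name values, purely for illustration.
values = ['Camera1', 'Mesh1', 'Locator1']
raw_value = 1.0  # what getValue() would return for the second entry

# Index the option list with the integer value to recover the label.
selected = values[int(raw_value)]
print(selected)  # -> Mesh1
```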

Hope that helps.

Cheers,
Ivan


On Thu, Jan 19, 2012 at 4:42 AM, Ron Ganbar  wrote:

> Hi guys,
> I want to use a tcl expression in the label that will show me from an FBX
> read_geo the selected object (called node in the fbx import dialog).
>
> If I use this in a ReadGeo's label:
> [value fbx_node_name]
> I get the whole list from the dropdown menu instead of just the selected
> one. Feels like a bug to me, cause when I use the same tcl code on a
> Shuffle node I just get the selected value from the dropdown menu, not the
> whole list.
>
> Thanks!
> Ron Ganbar
> email: ron...@gmail.com
> tel: +44 (0)7968 007 309 [UK]
>  +972 (0)54 255 9765 [Israel]
> url: http://ronganbar.wordpress.com/
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] Viewer colour sampler 8-bit values?

2012-01-08 Thread Ivan Busquets
Ugh, nevermind. It seems to be on 6.3v1 only, so I assume it was just a
hiccup in that one version alone.

Sorry for the noise. :)


On Sat, Jan 7, 2012 at 11:40 PM, Ivan Busquets wrote:

> Huh, weird. Just checked, and it seems to be just a linear mapping in 6.3,
> but indeed that's not the case for 6.2.
>
> In 6.2, it does seem to apply an sRGB curve to the rgb values (as you
> said), but not to the alpha.
>
>
>
> On Sat, Jan 7, 2012 at 11:34 PM, Ivan Busquets wrote:
>
>> Hi Ciaran,
>>
>> I don't think that's the case. I believe it's just a linear mapping of
>> 0-1 to 0-255.
>>
>> But the easiest way to check would be to make a 0-1 ramp across a 256
>> pixel-wide format, and sample that.
>>
>>
>>
>> On Fri, Jan 6, 2012 at 7:36 PM, Ciaran Wills wrote:
>>
>>> If I choose '8 bit' from the menu on the viewer's colour sampler how
>>> exactly is it getting those 8-bit values from the float pixels?
>>>
>>> My guess is it seems to be applying a linear->sRGB conversion,
>>> regardless of what the viewer lookup is set to - is that right?
>>>
>>> The content of this e-mail (including any attachments) is strictly
>>> confidential and may be commercially sensitive. If you are not, or believe
>>> you may not be, the intended recipient, please advise the sender
>>> immediately by return e-mail, delete this e-mail and destroy any copies.
>>> ___
>>> Nuke-users mailing list
>>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>>
>>
>>
>

Re: [Nuke-users] Viewer colour sampler 8-bit values?

2012-01-07 Thread Ivan Busquets
Huh, weird. Just checked, and it seems to be just a linear mapping in 6.3,
but indeed that's not the case for 6.2.

In 6.2, it does seem to apply an sRGB curve to the rgb values (as you
said), but not to the alpha.



On Sat, Jan 7, 2012 at 11:34 PM, Ivan Busquets wrote:

> Hi Ciaran,
>
> I don't think that's the case. I believe it's just a linear mapping of 0-1
> to 0-255.
>
> But the easiest way to check would be to make a 0-1 ramp across a 256
> pixel-wide format, and sample that.
>
>
>
> On Fri, Jan 6, 2012 at 7:36 PM, Ciaran Wills wrote:
>
>> If I choose '8 bit' from the menu on the viewer's colour sampler how
>> exactly is it getting those 8-bit values from the float pixels?
>>
>> My guess is it seems to be applying a linear->sRGB conversion, regardless
>> of what the viewer lookup is set to - is that right?
>>
>> The content of this e-mail (including any attachments) is strictly
>> confidential and may be commercially sensitive. If you are not, or believe
>> you may not be, the intended recipient, please advise the sender
>> immediately by return e-mail, delete this e-mail and destroy any copies.
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>
>

Re: [Nuke-users] Viewer colour sampler 8-bit values?

2012-01-07 Thread Ivan Busquets
Hi Ciaran,

I don't think that's the case. I believe it's just a linear mapping of 0-1
to 0-255.

But the easiest way to check would be to make a 0-1 ramp across a 256
pixel-wide format, and sample that.
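For reference, the two candidate mappings differ most in the mid-tones, so
even a single mid-grey sample tells them apart. A quick stand-alone sketch
of both (plain Python, no Nuke needed), assuming the standard sRGB transfer
function:

```python
def to_8bit_linear(v):
    """Plain linear mapping of a 0-1 float to 0-255."""
    v = max(0.0, min(1.0, v))
    return int(round(v * 255.0))

def to_8bit_srgb(v):
    """Apply the standard sRGB transfer curve, then scale to 0-255."""
    v = max(0.0, min(1.0, v))
    s = 12.92 * v if v <= 0.0031308 else 1.055 * (v ** (1.0 / 2.4)) - 0.055
    return int(round(s * 255.0))

# Mid-grey shows the difference clearly:
print(to_8bit_linear(0.5))  # -> 128
print(to_8bit_srgb(0.5))    # -> 188
```

If the viewer sampler reports ~128 for a 0.5 float pixel, it's a linear
mapping; ~188 means an sRGB curve is being applied.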



On Fri, Jan 6, 2012 at 7:36 PM, Ciaran Wills  wrote:

> If I choose '8 bit' from the menu on the viewer's colour sampler how
> exactly is it getting those 8-bit values from the float pixels?
>
> My guess is it seems to be applying a linear->sRGB conversion, regardless
> of what the viewer lookup is set to - is that right?
>
> The content of this e-mail (including any attachments) is strictly
> confidential and may be commercially sensitive. If you are not, or believe
> you may not be, the intended recipient, please advise the sender
> immediately by return e-mail, delete this e-mail and destroy any copies.
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] importing boujou point cloud via FBX from maya

2012-01-05 Thread Ivan Busquets
I'll eat my words, then. :)

That's good to know. Thanks Deke.

On Thu, Jan 5, 2012 at 5:50 PM, Deke Kincaid  wrote:

> We fixed this in Nuke 6.1 so readGeo properly sees point clouds.  You only
> have to change the "object type" from "Mesh" to "Point Cloud".  If this
> doesn't work then something may be broken and you should send the fbx into
> support(pending it is fbx 2010 or earlier).
>
> It also works the other way around.  A point cloud exported to FBX from
> Nuke will show up as a bunch of locators in Maya.
>
> -deke
>
>
> On Thu, Jan 5, 2012 at 17:40, Ivan Busquets wrote:
>
>> I might be wrong, but I don't think ReadGeo can read "Maya locators"
>> exported to an FBX file.
>>
>> I thought the point cloud option was just to read all vertices of
>> geometry as points.
>>
>> For locators, you'd need to import the FBX into an Axis node (although
>> then you can only read them one at a time)
>>
>> To bring in the full point cloud from a boujou file, you could use the
>> "import_boujou" tcl script that comes with Nuke. Or, this python-ported
>> version:
>>
>> http://www.nukepedia.com/python-scripts/import-export/importboujou/
>>
>>
>>
>>
>> On Thu, Jan 5, 2012 at 4:13 PM, Deke Kincaid wrote:
>>
>>> are you using fbx 2010(nuke doesn't support 2011 or 2012)?  You should
>>> be able to pick point cloud from the drop down in the readGeo.  Also I
>>> suggest you export the point cloud to a separate fbx file from the camera.
>>>
>>> -deke
>>>
>>>
>>> On Thu, Jan 5, 2012 at 15:10, Bill Gilman  wrote:
>>>
>>>> Hey all
>>>>
>>>> I'm trying to track a shot in Boujou and bring the camera, point cloud
>>>> and ground plane into Nuke via a Maya FBX file.  What do I need to do to
>>>> indicate that the cloud of locators in Maya are a point cloud that the
>>>> ReadGeo node can understand?
>>>>
>>>> Also, the camera comes in fine but none of the geometry makes it over.
>>>>  Any help would be appreciated, thanks
>>>>
>>>> Bill
>>>> 323-428-0913___
>>>> Nuke-users mailing list
>>>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>>>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>>>
>>>
>>>
>>> ___
>>> Nuke-users mailing list
>>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>>
>>
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] importing boujou point cloud via FBX from maya

2012-01-05 Thread Ivan Busquets
I might be wrong, but I don't think ReadGeo can read "Maya locators"
exported to an FBX file.

I thought the point cloud option was just to read all vertices of geometry
as points.

For locators, you'd need to import the FBX into an Axis node (although then
you can only read them one at a time)

To bring in the full point cloud from a boujou file, you could use the
"import_boujou" tcl script that comes with Nuke. Or, this python-ported
version:

http://www.nukepedia.com/python-scripts/import-export/importboujou/



On Thu, Jan 5, 2012 at 4:13 PM, Deke Kincaid  wrote:

> are you using fbx 2010(nuke doesn't support 2011 or 2012)?  You should be
> able to pick point cloud from the drop down in the readGeo.  Also I suggest
> you export the point cloud to a separate fbx file from the camera.
>
> -deke
>
>
> On Thu, Jan 5, 2012 at 15:10, Bill Gilman  wrote:
>
>> Hey all
>>
>> I'm trying to track a shot in Boujou and bring the camera, point cloud
>> and ground plane into Nuke via a Maya FBX file.  What do I need to do to
>> indicate that the cloud of locators in Maya are a point cloud that the
>> ReadGeo node can understand?
>>
>> Also, the camera comes in fine but none of the geometry makes it over.
>>  Any help would be appreciated, thanks
>>
>> Bill
>> 323-428-0913___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] Nuke & Alembic

2012-01-04 Thread Ivan Busquets
Ok, I believe it was just the MacOS Nuke 6.3 version that had the problem.

I think it's fixed now. Let me know if that's not the case.

Download link:
http://dl.dropbox.com/u/17836731/ABCNuke_plugins_macos.zip

Cheers,
Ivan


On Wed, Jan 4, 2012 at 11:14 PM, Ivan Busquets wrote:

> Hi Gary,
>
> Yes, the precompiled version should be all you need, but that's obviously
> not the case :(
>
> Looks like I compiled the MacOS version against the HDF5 libraries
> dynamically, not statically.
>
> Let me see if I can fix this and will post a new download link.
>
> Thanks for the heads up.
>
>
> On Wed, Jan 4, 2012 at 10:57 PM, Gary Jaeger  wrote:
>
>> ok forgive my noobish question, but is the pre-compiled version all we
>> need to give this a shot? I'm getting a message:
>>
>> Library not loaded: /opt/local/lib/libhdf5_hl.7.dylib
>>
>> thanks for any help
>>
>> On Dec 31, 2011, at 1:01 AM, Ivan Busquets wrote:
>>
>> If you want to try it out without the hassle of compiling it, here's a
>> couple of links to pre-compiled versions for Mac and Linux, each with a
>> version for Nuke 6.2 and 6.3. (again, only ABCReadGeo available so far).
>> I'll try to upload them to Nukepedia as well, but the upload links were not
>> working for me today.
>>
>>
>>
>> Gary Jaeger // Core Studio
>> 249 Princeton Avenue
>> Half Moon Bay, CA 94019
>> 650 728 7060
>> http://corestudio.com
>>
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>
>

Re: [Nuke-users] Nuke & Alembic

2012-01-04 Thread Ivan Busquets
Hi Gary,

Yes, the precompiled version should be all you need, but that's obviously
not the case :(

Looks like I compiled the MacOS version against the HDF5 libraries
dynamically, not statically.

Let me see if I can fix this and will post a new download link.

Thanks for the heads up.


On Wed, Jan 4, 2012 at 10:57 PM, Gary Jaeger  wrote:

> ok forgive my noobish question, but is the pre-compiled version all we
> need to give this a shot? I'm getting a message:
>
> Library not loaded: /opt/local/lib/libhdf5_hl.7.dylib
>
> thanks for any help
>
> On Dec 31, 2011, at 1:01 AM, Ivan Busquets wrote:
>
> If you want to try it out without the hassle of compiling it, here's a
> couple of links to pre-compiled versions for Mac and Linux, each with a
> version for Nuke 6.2 and 6.3. (again, only ABCReadGeo available so far).
> I'll try to upload them to Nukepedia as well, but the upload links were not
> working for me today.
>
>
>
> Gary Jaeger // Core Studio
> 249 Princeton Avenue
> Half Moon Bay, CA 94019
> 650 728 7060
> http://corestudio.com
>
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] Nuke & Alembic

2012-01-02 Thread Ivan Busquets
The more the merrier! :)

I certainly wasn't aware of that, so thanks for the info, Nathan.

On Mon, Jan 2, 2012 at 9:55 PM, Nathan Rusch wrote:

>   For what it’s worth, I think the Alembic team is also eventually
> planning to release a reference implementation of a Nuke Alembic reader
> (possibly several) the same way they have done for Maya.
>
> -Nathan
>
>
>  *From:* Ivan Busquets 
> *Sent:* Monday, January 02, 2012 9:41 PM
> *To:* Nuke user discussion 
> *Subject:* Re: [Nuke-users] Nuke & Alembic
>
> Thanks for the kind words, Paolo.
>
> I do know that AtomKraft supports Alembic (which is great), and just to
> clarify, I didn't meant to undermine AtomKraft in any way by posting this
> here. Likewise, I imagine Nuke will support it natively soon too, and I do
> look forward to that.
>
> I just thought that, with Alembic being an open-source framework, it would
> be a good idea to have an open-source implementation for Nuke as well,
> hoping that this will help Alembic get more popular amongst the Nuke
> community.
>
> That said, I do have a couple of questions I'd like to bounce off you with
> regards to the Alembic support in AtomKraft, but I'll do that in private to
> avoid clobbering the list, if that's cool.
>
> Cheers,
> Ivan
>
>
> On Mon, Jan 2, 2012 at 1:48 AM, Paolo Berto wrote:
>
>> Very nice work Ivan. Congrats.
>>
>> I like the idea of selectively load a child object, we'll add that too.
>>
>> I'd like to point out again that our AtomReadGeo node which reads ABC,
>> Houdini (B)GEO) and OBJ (we decided not to read FBX) does not check
>> out a license, meaning you can just dload AK, plug & play! Happy Nuke
>> Year :)
>>
>>
>> Paolo
>>
>> ps - docs (not updated to 1.0) here:
>> http://www.jupiter-jazz.com/docs/atomkraft/AtomReadGeo/index.html
>>
>>
>>
>>
>> On Mon, Jan 2, 2012 at 3:13 AM, Michael Garrett 
>> wrote:
>> > Great!  Thanks Ivan!  Looking forward to checking this out, and
>> comparing to
>> > the AtomKraft implementation.  Thanks for uploading compiled versions
>> too.
>> >
>> > Michael
>> >
>> >
>> > On 31 December 2011 09:01, Ivan Busquets 
>> wrote:
>> >>
>> >> Happy holidays, Nukers!
>> >>
>> >> Sorry for the spam to both the users and dev lists, but I thought this
>> >> might be of interest to people on both.
>> >>
>> >> Since Alembic seems to be gaining some traction amongst the Nuke
>> community
>> >> these days, I wanted to share the following:
>> >>
>> >> I've been working on a set of Alembic-related plugins in my spare time,
>> >> and it's come to the point where I don't have the time (or the skills)
>> to
>> >> bring them much further, so I've decided to open source the project so
>> >> anyone can use / modify / contribute as they please.
>> >>
>> >> The project is freely available here:
>> >>
>> >> http://github.com/ivanbusquets/ABCNuke/
>> >>
>> >> And includes the following plugins:
>> >>
>> >> - ABCReadGeo
>> >> - ABCAxis
>> >> - ABCCamera
>> >>
>> >> But only ABCReadGeo is released so far (need to clean up the rest, but
>> >> hopefully they will follow soon).
>> >>
>> >> If you want to try it out without the hassle of compiling it, here's a
>> >> couple of links to pre-compiled versions for Mac and Linux, each with a
>> >> version for Nuke 6.2 and 6.3. (again, only ABCReadGeo available so
>> far).
>> >> I'll try to upload them to Nukepedia as well, but the upload links
>> were not
>> >> working for me today.
>> >>
>> >> http://dl.dropbox.com/u/17836731/ABCNuke_plugins_macos.zip
>> >> http://dl.dropbox.com/u/17836731/ABCNuke_plugins_linux.zip
>> >>
>> >> Also, here's a link with a few example scripts, along with the Alembic
>> >> files and media required, to show some of the features of ABCReadGeo
>> >>
>> >> http://dl.dropbox.com/u/17836731/examples.zip
>> >>
>> >> And a couple of screenshots to know what to expect from the interface,
>> >> etc.
>> >>
>> >> http://dl.dropbox.com/u/17836731/ABCReadGeo_screenshot1.png
>> >> http://dl.dropbox.com/u/17836731/ABCReadGeo_screenshot2.png
>> >>
>> >>

Re: [Nuke-users] Nuke & Alembic

2012-01-02 Thread Ivan Busquets
Thanks for the kind words, Paolo.

I do know that AtomKraft supports Alembic (which is great), and just to
clarify, I didn't mean to undermine AtomKraft in any way by posting this
here. Likewise, I imagine Nuke will support it natively soon too, and I do
look forward to that.

I just thought that, with Alembic being an open-source framework, it would
be a good idea to have an open-source implementation for Nuke as well,
hoping that this will help Alembic get more popular amongst the Nuke
community.

That said, I do have a couple of questions I'd like to bounce off you with
regards to the Alembic support in AtomKraft, but I'll do that in private to
avoid clobbering the list, if that's cool.

Cheers,
Ivan


On Mon, Jan 2, 2012 at 1:48 AM, Paolo Berto  wrote:

> Very nice work Ivan. Congrats.
>
> I like the idea of selectively load a child object, we'll add that too.
>
> I'd like to point out again that our AtomReadGeo node which reads ABC,
> Houdini (B)GEO) and OBJ (we decided not to read FBX) does not check
> out a license, meaning you can just dload AK, plug & play! Happy Nuke
> Year :)
>
>
> Paolo
>
> ps - docs (not updated to 1.0) here:
> http://www.jupiter-jazz.com/docs/atomkraft/AtomReadGeo/index.html
>
>
>
>
> On Mon, Jan 2, 2012 at 3:13 AM, Michael Garrett 
> wrote:
> > Great!  Thanks Ivan!  Looking forward to checking this out, and
> comparing to
> > the AtomKraft implementation.  Thanks for uploading compiled versions
> too.
> >
> > Michael
> >
> >
> > On 31 December 2011 09:01, Ivan Busquets  wrote:
> >>
> >> Happy holidays, Nukers!
> >>
> >> Sorry for the spam to both the users and dev lists, but I thought this
> >> might be of interest to people on both.
> >>
> >> Since Alembic seems to be gaining some traction amongst the Nuke
> community
> >> these days, I wanted to share the following:
> >>
> >> I've been working on a set of Alembic-related plugins in my spare time,
> >> and it's come to the point where I don't have the time (or the skills)
> to
> >> bring them much further, so I've decided to open source the project so
> >> anyone can use / modify / contribute as they please.
> >>
> >> The project is freely available here:
> >>
> >> http://github.com/ivanbusquets/ABCNuke/
> >>
> >> And includes the following plugins:
> >>
> >> - ABCReadGeo
> >> - ABCAxis
> >> - ABCCamera
> >>
> >> But only ABCReadGeo is released so far (need to clean up the rest, but
> >> hopefully they will follow soon).
> >>
> >> If you want to try it out without the hassle of compiling it, here's a
> >> couple of links to pre-compiled versions for Mac and Linux, each with a
> >> version for Nuke 6.2 and 6.3. (again, only ABCReadGeo available so far).
> >> I'll try to upload them to Nukepedia as well, but the upload links were
> not
> >> working for me today.
> >>
> >> http://dl.dropbox.com/u/17836731/ABCNuke_plugins_macos.zip
> >> http://dl.dropbox.com/u/17836731/ABCNuke_plugins_linux.zip
> >>
> >> Also, here's a link with a few example scripts, along with the Alembic
> >> files and media required, to show some of the features of ABCReadGeo
> >>
> >> http://dl.dropbox.com/u/17836731/examples.zip
> >>
> >> And a couple of screenshots to know what to expect from the interface,
> >> etc.
> >>
> >> http://dl.dropbox.com/u/17836731/ABCReadGeo_screenshot1.png
> >> http://dl.dropbox.com/u/17836731/ABCReadGeo_screenshot2.png
> >>
> >>
> >> Here's some Key features of ABCReadGeo:
> >> - Selective reading of different objects in an Alembic archive. For
> >> example, you may read a single object from an archive that has multiple
> >> objects, without the speed penalty of reading through the whole archive.
> >> - Bbox mode for each object. (much faster if you don't need to load the
> >> full geometry)
> >> - Ability to interpolate between geometry samples
> >> - Retiming / offseting geometry animation
> >>
> >>
> >> Disclaimers:
> >> - It's the first time I have a go at a project of this size, and the
> first
> >> time I use Cmake, so I'll appreciate any comments / suggestions on
> improving
> >> both the code and the presentation.
> >> - Overall, I've tried to focus on performance, but I'm sure there will
> be
> >> cases where things brea

[Nuke-users] Nuke & Alembic

2011-12-31 Thread Ivan Busquets
Happy holidays, Nukers!

Sorry for the spam to both the users and dev lists, but I thought this
might be of interest to people on both.

Since Alembic seems to be gaining some traction amongst the Nuke community
these days, I wanted to share the following:

I've been working on a set of Alembic-related plugins in my spare time, and
it's come to the point where I don't have the time (or the skills) to bring
them much further, so I've decided to open source the project so anyone can
use / modify / contribute as they please.

The project is freely available here:

http://github.com/ivanbusquets/ABCNuke/

And includes the following plugins:

- ABCReadGeo
- ABCAxis
- ABCCamera

But only ABCReadGeo is released so far (need to clean up the rest, but
hopefully they will follow soon).

If you want to try it out without the hassle of compiling it, here's a
couple of links to pre-compiled versions for Mac and Linux, each with a
version for Nuke 6.2 and 6.3. (again, only ABCReadGeo available so far).
I'll try to upload them to Nukepedia as well, but the upload links were not
working for me today.

http://dl.dropbox.com/u/17836731/ABCNuke_plugins_macos.zip
http://dl.dropbox.com/u/17836731/ABCNuke_plugins_linux.zip

Also, here's a link with a few example scripts, along with the Alembic
files and media required, to show some of the features of ABCReadGeo

http://dl.dropbox.com/u/17836731/examples.zip

And a couple of screenshots to know what to expect from the interface, etc.

http://dl.dropbox.com/u/17836731/ABCReadGeo_screenshot1.png
http://dl.dropbox.com/u/17836731/ABCReadGeo_screenshot2.png


Here are some key features of ABCReadGeo:
- Selective reading of different objects in an Alembic archive. For
example, you may read a single object from an archive that has multiple
objects, without the speed penalty of reading through the whole archive.
- Bbox mode for each object. (much faster if you don't need to load the
full geometry)
- Ability to interpolate between geometry samples
- Retiming / offsetting geometry animation
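As a rough illustration of what the interpolation/retime bookkeeping
involves (this is not ABCReadGeo's actual code, just the general idea):
given geometry samples stored at integer frames, an offset/retimed output
frame maps to two bracketing samples plus a blend weight:

```python
import math

def sample_lerp_params(frame, offset=0.0, speed=1.0):
    """Map an output frame to bracketing sample indices and a blend weight.

    Purely illustrative of retime/interpolation bookkeeping, assuming
    samples are stored at integer frames starting at 0.
    """
    t = (frame - offset) * speed
    lo = math.floor(t)   # sample just before (or at) the requested time
    hi = lo + 1          # sample just after
    w = t - lo           # blend weight between the two samples
    return lo, hi, w

print(sample_lerp_params(10.5))          # -> (10, 11, 0.5)
print(sample_lerp_params(10, offset=2))  # -> (8, 9, 0.0)
```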


Disclaimers:
- It's the first time I have a go at a project of this size, and the first
time I use Cmake, so I'll appreciate any comments / suggestions on
improving both the code and the presentation.
- Overall, I've tried to focus on performance, but I'm sure there will be
cases where things break or are not handled properly. If you have an
Alembic file that's not being interpreted correctly, I would very much like
to know. :)

And that's it. Hope people find it useful.

Happy New Year everyone!

Ivan

Re: [Nuke-users] Re: AtomKraft 1.0

2011-12-30 Thread Ivan Busquets
Hi Paolo,

Just wanted to chime in and join everyone else in saying congrats on your
1.0 release!

Had a chance to play a little bit more with AtomKraft, and it's an amazing
set of tools. It has the potential to make Nuke a viable tool for look
development, or at least integrate it into most look development workflows.

So congrats again, and Happy New Year to you, Moritz, and all the
jupiterians (jupiterites?).

Cheers,
Ivan

On Thu, Dec 29, 2011 at 7:29 PM, Paolo Berto wrote:

> It's coming, no worries. We are just waiting to push out the new
> website with it.
>
> And the good news is that there will be *no need* to get a license
> from us, you just download and it works straight away.
> And yes, it allows commercial work.
>
> Paolo
>
>
> On Fri, Dec 30, 2011 at 6:21 AM, mattdleonard
>  wrote:
> > Hiya,
> >
> > Just wondering of the free license has been released yet.
> >
> > I know it's the holidays but I just wanted to check I'm not missing
> > anything.
> >
> > Been having a play over Christmas and the plugin is fantastic, I can see
> > this being added into our Nuke training classes pretty quick.
> >
> > Many thanks for a great suite of tools,
> >
> > Matt
> >
> >
> > 
> >
> > Sphere VFX Ltd
> > 3D . VFX . Training
> > www.spherevfx.com
> >
> > ___
> > Nuke-users mailing list
> > Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> > http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>
>
>
> --
> paolo berto durante
> ex-galactic president, etc.
> /*jupiter jazz*/ visual research — hong kong
> www.jupiter-jazz.com
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] SplineWarp python question

2011-12-14 Thread Ivan Busquets
Each control point has both the source and destination attributes. The
confusing part is that the naming of those attributes makes more sense for
a standard roto shape.

So, your source curve is:
controlPoint.center

and the dest curve is:
controlPoint.featherCenter

Also, if you're getting the position data out of each attribute, keep in
mind that the dest points are stored as an offset relative to the src
point, instead of an absolute position.

Have a look at the output of this:

node = nuke.selectedNode()

curves = node['curves']

sourcecurve = curves.toElement('Bezier1')

for p in sourcecurve:

print p.center.getPosition(nuke.frame()),
p.featherCenter.getPosition(nuke.frame())
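
Since the destination values come back as offsets, a small helper can turn
them into absolute positions. This is a minimal pure-Python sketch of that
conversion (inside Nuke you would feed it the results of
p.center.getPosition(f) and p.featherCenter.getPosition(f)):

```python
def absolute_dest(src_points, dest_offsets):
    """Pair each source point with its absolute destination position,
    given dest offsets stored relative to their source points."""
    return [(sx + dx, sy + dy)
            for (sx, sy), (dx, dy) in zip(src_points, dest_offsets)]

# Example: two control points, the second warped 10px right and 5px up.
src = [(100.0, 200.0), (300.0, 400.0)]
offsets = [(0.0, 0.0), (10.0, 5.0)]
print(absolute_dest(src, offsets))  # [(100.0, 200.0), (310.0, 405.0)]
```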



Hope that helps.
Cheers,
Ivan




On Wed, Dec 14, 2011 at 4:21 PM, Magno Borgo  wrote:

> Hello!
>
> I'm trying to figure out how to access the *control points* of the
> *destination* curves of the SplineWarp node via python.
>
> The points of the source curves are easy:
>
> node = nuke.selectedNode()
>
> curves = node['curves']
>
> sourcecurve = curves.toElement('Ellipse1')
>
>
> With sourcecurve[0], sourcecurve[1], etc i can access each control point.
>
>
>
> Any help?
>
>
> --
> Magno Borgo
>
> www.borgo.tv
> www.boundaryvfx.com
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] Edges

2011-11-24 Thread Ivan Busquets
I see. Well, I'm sure you know you could use a MergeExpression, or wrap it
all into a gizmo if this is something you need often, so I suppose you're
just looking for opinions on whether such a merge operation should exist by
default.

Personally, I prefer having to unpremult/premult explicitly, so there's a
visual clue of what's going on in the script, and because it gives me a bit
more control over what I want to premult/unpremult. Say you want to merge
all channels, but you only want to unpremult rgb, because all other layers
already come unpremultiplied. That would be hard/obscure to handle in a
single merge operation.

But again, that's just an opinion, and if you run into this repeatedly,
then it's fair to think there should be a simpler way to handle it :)


On Thu, Nov 24, 2011 at 1:54 PM, Ron Ganbar  wrote:

> True, Ivan,
> but I'm hoping to have an operation inside Merge that will do that for me.
> Am I the only one who runs into this kind of issue repeatedly?
>
>
>
> Ron Ganbar
> email: ron...@gmail.com
> tel: +44 (0)7968 007 309 [UK]
>  +972 (0)54 255 9765 [Israel]
> url: http://ronganbar.wordpress.com/
>
>
>
> On 24 November 2011 23:04, Ivan Busquets  wrote:
>
>> Sorry for the overly simplified answer.
>> Didn't mean to say you can just "min" the two images together (unless
>> both are just a matte), but that you can unpremult, "min" only the alpha
>> channel of both, and then premult again, so you don't have to shuffle
>> things back and forth.
>>
>>
>> set cut_paste_input [stack 0]
>> version 6.3 v1
>> Dot {
>>  inputs 0
>>  name Dot2
>>  label "premultiplied img with holdout matte"
>>  selected true
>>  xpos -398
>>  ypos 30
>> }
>> push $cut_paste_input
>> Dot {
>>  name Dot1
>>  label "your premultiplied img"
>>  selected true
>>  xpos -588
>>  ypos -100
>> }
>> Unpremult {
>>  name Unpremult2
>>  selected true
>>  xpos -616
>>  ypos -9
>> }
>> Merge2 {
>>  inputs 2
>>  operation min
>>  Achannels alpha
>>  Bchannels alpha
>>  output alpha
>>  name Merge6
>>  selected true
>>  xpos -616
>>  ypos 28
>> }
>> Premult {
>>   name Premult4
>>  selected true
>>  xpos -616
>>  ypos 80
>> }
>>
>>
>>
>>
>> On Thu, Nov 24, 2011 at 12:22 PM, Ivan Busquets 
>> wrote:
>>
>>> Why not use a simple min between both?
>>>
>>> On Thu, Nov 24, 2011 at 12:15 PM, Ron Ganbar  wrote:
>>>
>>>> Hi all,
>>>> I've been thinking about this for a while, and I'm consulting you guys
>>>> in order to see how wrong I'm getting this.
>>>> [example below]
>>>>
>>>> When using the Mask operation under Merge to hold one image inside of
>>>> another image where both images have an edge that's exactly the same, the
>>>> edge that's the same is getting degraded - as in, it gets darker because of
>>>> the multiplication that occurs. This happens a lot when working with full
>>>> CG shots rather than CG over plate bg work.
>>>> To get around this what I normally do is unpremult the image, min both
>>>> mattes, then premult the result of the min with the RGB again. This
>>>> produces the correct results - at least as far as the part of the edge that
>>>> shouldn't change. Feels to me like this should be made simpler, no?
>>>> Am I wrong about this?
>>>>
>>>> In the example below you can see what I mean. The antialiased edge that
>>>> both shapes share gets darker after the Merge.
>>>>
>>>> Thanks all.
>>>> R
>>>>
>>>>
>>>> Paste this into your DAG:
>>>>
>>>> set cut_paste_input [stack 0]
>>>> version 6.3 v1
>>>> RotoPaint {
>>>>  inputs 0
>>>>  curves {AnimTree: "" {
>>>>  Version: 1.2
>>>>  Flag: 0
>>>>  RootNode: 1
>>>>  Node: {
>>>>   NodeName: "Root" {
>>>>Flag: 512
>>>>NodeType: 1
>>>>Transform: 0 0 S 0 0 S 0 0 S 0 0 S 0 1 S 0 1 S 0 0 S 0 1024 S 0 778
>>>>NumOfAttributes: 11
>>>>"vis" S 0 1 "opc" S 0 1 "mbo" S 0 1 "mb" S 0 1 "mbs" S 0 0.5 "fo" S
>>>> 0 1 "fx" S 0 0 "fy" S 0 0 "ff" S 0 1 "

Re: [Nuke-users] Edges

2011-11-24 Thread Ivan Busquets
Sorry for the overly simplified answer.
Didn't mean to say you can just "min" the two images together (unless both
are just a matte), but that you can unpremult, "min" only the alpha channel
of both, and then premult again, so you don't have to shuffle things back
and forth.


set cut_paste_input [stack 0]
version 6.3 v1
Dot {
 inputs 0
 name Dot2
 label "premultiplied img with holdout matte"
 selected true
 xpos -398
 ypos 30
}
push $cut_paste_input
Dot {
 name Dot1
 label "your premultiplied img"
 selected true
 xpos -588
 ypos -100
}
Unpremult {
 name Unpremult2
 selected true
 xpos -616
 ypos -9
}
Merge2 {
 inputs 2
 operation min
 Achannels alpha
 Bchannels alpha
 output alpha
 name Merge6
 selected true
 xpos -616
 ypos 28
}
Premult {
 name Premult4
 selected true
 xpos -616
 ypos 80
}
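
For a single pixel, the difference between the Merge "mask" operation and
the "min" trick above is easy to see in numbers (a toy sketch, with scalar
alpha values standing in for full mattes):

```python
def holdout_mask(a_img, a_matte):
    """Merge 'mask': alphas multiply, so a shared 0.5 edge drops to 0.25."""
    return a_img * a_matte

def holdout_min(a_img, a_matte):
    """'min' of the alphas: a shared antialiased edge keeps its coverage."""
    return min(a_img, a_matte)

edge = 0.5  # identical antialiased coverage in both mattes
print(holdout_mask(edge, edge))  # 0.25 -> the edge darkens
print(holdout_min(edge, edge))   # 0.5  -> the edge is preserved
```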




On Thu, Nov 24, 2011 at 12:22 PM, Ivan Busquets wrote:

> Why not use a simple min between both?
>
> On Thu, Nov 24, 2011 at 12:15 PM, Ron Ganbar  wrote:
>
>> Hi all,
>> I've been thinking about this for a while, and I'm consulting you guys in
>> order to see how wrong I'm getting this.
>> [example below]
>>
>> When using the Mask operation under Merge to hold one image inside of
>> another image where both images have an edge that's exactly the same, the
>> edge that's the same is getting degraded - as in, it gets darker because of
>> the multiplication that occurs. This happens a lot when working with full
>> CG shots rather than CG over plate bg work.
>> To get around this what I normally do is unpremult the image, min both
>> mattes, then premult the result of the min with the RGB again. This
>> produces the correct results - at least as far as the part of the edge that
>> shouldn't change. Feels to me like this should be made simpler, no?
>> Am I wrong about this?
>>
>> In the example below you can see what I mean. The antialiased edge that
>> both shapes share gets darker after the Merge.
>>
>> Thanks all.
>> R
>>
>>
>> Paste this into your DAG:
>>
>> set cut_paste_input [stack 0]
>> version 6.3 v1
>> RotoPaint {
>>  inputs 0
>>  curves {AnimTree: "" {
>>  Version: 1.2
>>  Flag: 0
>>  RootNode: 1
>>  Node: {
>>   NodeName: "Root" {
>>Flag: 512
>>NodeType: 1
>>Transform: 0 0 S 0 0 S 0 0 S 0 0 S 0 1 S 0 1 S 0 0 S 0 1024 S 0 778
>>NumOfAttributes: 11
>>"vis" S 0 1 "opc" S 0 1 "mbo" S 0 1 "mb" S 0 1 "mbs" S 0 0.5 "fo" S 0
>> 1 "fx" S 0 0 "fy" S 0 0 "ff" S 0 1 "ft" S 0 0 "pt" S 0 0
>>   }
>>   NumOfChildren: 1
>>   Node: {
>>NodeName: "Bezier1" {
>> Flag: 576
>> NodeType: 3
>> CurveGroup: "" {
>>  Transform: 0 0 S 1 1 0 S 1 1 0 S 1 1 0 S 1 1 1 S 1 1 1 S 1 1 0 S 1 1
>> 885 S 1 1 936
>>  Flag: 0
>>  NumOfCubicCurves: 2
>>  CubicCurve: "" {
>>   Type: 0 Flag: 8192 Dim: 2
>>   NumOfPoints: 18
>>   0 S 1 1 40 S 1 1 15 0 0 S 1 1 600 S 1 1 1195 0 0 S 1 1 -40 S 1 1
>> -15 0 0 S 1 1 -10 S 1 1 15 0 0 S 1 1 340 S 1 1 830 0 0 S 1 1 5 S 1 1 -7.5 0
>> 0 S 1 1 -176.25 S 1 1 69.375 0 0 S 1 1 520 S 1 1 350 0 0 S 1 1 176.25 S 1 1
>> -69.375 0 0 S 1 1 -20 S 1 1 -20 0 0 S 1 1 1070 S 1 1 565 0 0 S 1 1 40 S 1 1
>> 40 0 0 S 1 1 15 S 1 1 -25 0 0 S 1 1 1390 S 1 1 1000 0 0 S 1 1 -15 S 1 1 25
>> 0 0 S 1 1 25 S 1 1 -10 0 0 S 1 1 795 S 1 1 800 0 0 S 1 1 -25 S 1 1 10 0
>>  }
>>  CubicCurve: "" {
>>   Type: 0 Flag: 8192 Dim: 2
>>   NumOfPoints: 18
>>   0 S 1 1 40 S 1 1 15 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 -40 S 1 1 -15 0 0
>> S 1 1 -10 S 1 1 15 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 5 S 1 1 -7.5 0 0 S 1 1
>> -176.25 S 1 1 69.375 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 176.25 S 1 1 -69.375 0 0
>> S 1 1 -20 S 1 1 -20 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 40 S 1 1 40 0 0 S 1 1 15
>> S 1 1 -25 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 -15 S 1 1 25 0 0 S 1 1 25 S 1 1 -10
>> 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 -25 S 1 1 10 0
>>  }
>>  NumOfAttributes: 44
>>  "vis" S 0 1 "r" S 0 1 "g" S 0 1 "b" S 0 1 "a" S 0 1 "ro" S 0 0 "go"
>> S 0 0 "bo" S 0 0 "ao" S 0 0 "opc" S 0 1 "bm" S 0 0 "inv" S 0 0 "mbo" S 0 0
>> "mb" S 0 1 "mbs" S 0 0.5 "mbsot" S 0 0 "mbso" S 0 0 "fo" S 0 1 "fx" S 0 0
>> "fy" S 0 0 "

Re: [Nuke-users] Edges

2011-11-24 Thread Ivan Busquets
Why not use a simple min between both?

On Thu, Nov 24, 2011 at 12:15 PM, Ron Ganbar  wrote:

> Hi all,
> I've been thinking about this for a while, and I'm consulting you guys in
> order to see how wrong I'm getting this.
> [example below]
>
> When using the Mask operation under Merge to hold one image inside of
> another image where both images have an edge that's exactly the same, the
> edge that's the same is getting degraded - as in, it gets darker because of
> the multiplication that occurs. This happens a lot when working with full
> CG shots rather than CG over plate bg work.
> To get around this what I normally do is unpremult the image, min both
> mattes, then premult the result of the min with the RGB again. This
> produces the correct results - at least as far as the part of the edge that
> shouldn't change. Feels to me like this should be made simpler, no?
> Am I wrong about this?
>
> In the example below you can see what I mean. The antialiased edge that
> both shapes share gets darker after the Merge.
>
> Thanks all.
> R
>
>
> Paste this into your DAG:
>
> set cut_paste_input [stack 0]
> version 6.3 v1
> RotoPaint {
>  inputs 0
>  curves {AnimTree: "" {
>  Version: 1.2
>  Flag: 0
>  RootNode: 1
>  Node: {
>   NodeName: "Root" {
>Flag: 512
>NodeType: 1
>Transform: 0 0 S 0 0 S 0 0 S 0 0 S 0 1 S 0 1 S 0 0 S 0 1024 S 0 778
>NumOfAttributes: 11
>"vis" S 0 1 "opc" S 0 1 "mbo" S 0 1 "mb" S 0 1 "mbs" S 0 0.5 "fo" S 0 1
> "fx" S 0 0 "fy" S 0 0 "ff" S 0 1 "ft" S 0 0 "pt" S 0 0
>   }
>   NumOfChildren: 1
>   Node: {
>NodeName: "Bezier1" {
> Flag: 576
> NodeType: 3
> CurveGroup: "" {
>  Transform: 0 0 S 1 1 0 S 1 1 0 S 1 1 0 S 1 1 1 S 1 1 1 S 1 1 0 S 1 1
> 885 S 1 1 936
>  Flag: 0
>  NumOfCubicCurves: 2
>  CubicCurve: "" {
>   Type: 0 Flag: 8192 Dim: 2
>   NumOfPoints: 18
>   0 S 1 1 40 S 1 1 15 0 0 S 1 1 600 S 1 1 1195 0 0 S 1 1 -40 S 1 1 -15
> 0 0 S 1 1 -10 S 1 1 15 0 0 S 1 1 340 S 1 1 830 0 0 S 1 1 5 S 1 1 -7.5 0 0 S
> 1 1 -176.25 S 1 1 69.375 0 0 S 1 1 520 S 1 1 350 0 0 S 1 1 176.25 S 1 1
> -69.375 0 0 S 1 1 -20 S 1 1 -20 0 0 S 1 1 1070 S 1 1 565 0 0 S 1 1 40 S 1 1
> 40 0 0 S 1 1 15 S 1 1 -25 0 0 S 1 1 1390 S 1 1 1000 0 0 S 1 1 -15 S 1 1 25
> 0 0 S 1 1 25 S 1 1 -10 0 0 S 1 1 795 S 1 1 800 0 0 S 1 1 -25 S 1 1 10 0
>  }
>  CubicCurve: "" {
>   Type: 0 Flag: 8192 Dim: 2
>   NumOfPoints: 18
>   0 S 1 1 40 S 1 1 15 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 -40 S 1 1 -15 0 0
> S 1 1 -10 S 1 1 15 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 5 S 1 1 -7.5 0 0 S 1 1
> -176.25 S 1 1 69.375 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 176.25 S 1 1 -69.375 0 0
> S 1 1 -20 S 1 1 -20 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 40 S 1 1 40 0 0 S 1 1 15
> S 1 1 -25 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 -15 S 1 1 25 0 0 S 1 1 25 S 1 1 -10
> 0 0 S 1 1 0 S 1 1 0 0 0 S 1 1 -25 S 1 1 10 0
>  }
>  NumOfAttributes: 44
>  "vis" S 0 1 "r" S 0 1 "g" S 0 1 "b" S 0 1 "a" S 0 1 "ro" S 0 0 "go" S
> 0 0 "bo" S 0 0 "ao" S 0 0 "opc" S 0 1 "bm" S 0 0 "inv" S 0 0 "mbo" S 0 0
> "mb" S 0 1 "mbs" S 0 0.5 "mbsot" S 0 0 "mbso" S 0 0 "fo" S 0 1 "fx" S 0 0
> "fy" S 0 0 "ff" S 0 1 "ft" S 0 0 "src" S 0 0 "stx" S 0 0 "sty" S 0 0 "str"
> S 0 0 "sr" S 0 0 "ssx" S 0 1 "ssy" S 0 1 "ss" S 0 0 "spx" S 0 1024 "spy" S
> 0 778 "stot" S 0 0 "sto" S 0 0 "sv" S 0 0 "sf" S 0 1 "sb" S 0 1 "nv" S 0 1
> "view1" S 0 1 "ltn" S 0 1 "ltm" S 0 1 "ltt" S 0 0 "tt" S 0 4 "pt" S 0 0
> }
>}
>NumOfChildren: 0
>   }
>  }
> }
> }
>  toolbox {selectAll {
>   { selectAll ssx 1 ssy 1 sf 1 }
>   { createBezier ssx 1 ssy 1 sf 1 sb 1 tt 4 }
>   { createBSpline ssx 1 ssy 1 sf 1 sb 1 }
>   { createEllipse ssx 1 ssy 1 sf 1 sb 1 }
>   { createRectangle ssx 1 ssy 1 sf 1 sb 1 }
>   { brush ssx 1 ssy 1 sf 1 sb 1 }
>   { eraser src 2 ssx 1 ssy 1 sf 1 sb 1 }
>   { clone src 1 ssx 1 ssy 1 sf 1 sb 1 }
>   { reveal src 3 ssx 1 ssy 1 sf 1 sb 1 }
>   { dodge src 1 ssx 1 ssy 1 sf 1 sb 1 }
>   { burn src 1 ssx 1 ssy 1 sf 1 sb 1 }
>   { blur src 1 ssx 1 ssy 1 sf 1 sb 1 }
>   { sharpen src 1 ssx 1 ssy 1 sf 1 sb 1 }
>   { smear src 1 ssx 1 ssy 1 sf 1 sb 1 }
> } }
>  toolbar_brush_hardness 0.20003
>  toolbar_lifetime_type all
>  toolbar_source_transform_scale {1 1}
>  toolbar_source_transform_center {320 240}
>  colorOverlay 0
>  lifetime_type "all frames"
>  motionblur_shutter_offset_type centred
>  source_black_outside true
>  createNewTrack {{-1} "-1\t(none)\t-1" "1000\tNew Track Layer\t1000"}
>  name RotoPaint1
>  selected true
>  xpos -306
>  ypos -156
> }
> set N221a3540 [stack 0]
> Unpremult {
>  name Unpremult1
>  selected true
>  xpos -280
>  ypos -82
> }
> set N2962c380 [stack 0]
> push $cut_paste_input
> RotoPaint {
>  curves {AnimTree: "" {
>  Version: 1.2
>  Flag: 0
>  RootNode: 1
>  Node: {
>   NodeName: "Root" {
>Flag: 512
>NodeType: 1
>Transform: 0 0 S 0 0 S 0 0 S 0 0 S 0 1 S 0 1 S 0 0 S 0 1024 S 0 778
>NumOfAttributes: 11
>"vis" S 0 1 "opc" S 0 1 "mbo" S 0 1 "mb" S 0 1 "mbs" S 0 0.5 "fo" S 0 1
> "fx" S 0 0 "fy" S 0

Re: [Nuke-users] ??? redguard1.glow ???

2011-11-21 Thread Ivan Busquets
Searching through the archives, it looks like there's also a few cases of
shared snippets and copy/pasted scripts sent to this list that were
infected.
They could have propagated just by copy-pasting them back.

These "layer infections" are so hard to track down and completely get rid
of...
Makes me think that maybe there should be a more restrictive policy
enforced by Nuke itself, where a layer/channel won't be created unless it
meets certain criteria?
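
As a stopgap until something like that exists, a pipeline can at least scan
script text for add_layer statements that declare unexpected channels. A
rough sketch (the whitelist below is illustrative, not a real Nuke API):

```python
import re

# Channels considered legitimate; extend this for your pipeline.
KNOWN = {'rgba.red', 'rgba.green', 'rgba.blue', 'rgba.alpha'}

def rogue_channels(script_text):
    """Return channel entries from add_layer lines that are not in the
    approved set (e.g. the infamous 'redguard1.glow')."""
    rogue = []
    for match in re.finditer(r'add_layer\s*\{([^}]*)\}', script_text):
        entries = match.group(1).split()
        # The first entry is the layer name; the rest are channels.
        for chan in entries[1:]:
            if chan not in KNOWN:
                rogue.append(chan)
    return rogue

print(rogue_channels('add_layer {rgba rgba.beta redguard1.glow}'))
# ['rgba.beta', 'redguard1.glow']
```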



On Mon, Nov 21, 2011 at 1:46 PM, Howard Jones wrote:

> Any tools that might be in the public domain? On nukepedia?
>
> I suspect that's where I have it from.
>
> Howard
>
> On 21 Nov 2011, at 21:38, Dan Walker  wrote:
>
> Well, I've found it in several of our tools, which will then be propagated
> into scene files.
>
> I betcha Nukepedia has some gizmos that are infected too and again, if
> someone is reusing a config/resource file, downloaded tools or tools
> they've borrowed from other facilities and they've been incorporated into a
> pipeline or for personal use in their shots, there is the possibility of
> contamination.
>
> If this is causing scene files to crash, then it's a bigger issue in which
> the Foundry should be involved.
>
> Dan
>
>
>
>
> On Mon, Nov 21, 2011 at 12:58 PM, Dennis Steinschulte 
> <
> den...@rebelsofdesign.com> wrote:
>
>> Err.. actually it's the MCP …d'oh
>> well no job at MPC for me anymore
>>
>> cheers
>>
>> On 21.11.2011, at 21:49, Dennis Steinschulte wrote:
>>
>> hey ned,
>>
>> this is interesting information, but i haven't worked on TRON or somehow
>> 'near' the cinema world, lately
>> So far I deleted everything in the scripts (haven't been many in the few
>> days i figured out the add_layer part). But today, while showing several
>> old comps (6.0.1 nearly over a year old), the RED GREEN BLUE (aka the red
>> guard - virus ;) ) showed up all of a sudden.
>> OMG, the MPC really taking over all scripts… the past .. the future???
>>
>> cheers, the anxiously Dennis
>>
>>
>> On 21.11.2011, at 20:50, Ned Wilson wrote:
>>
>> This thread is great, I haven't seen this issue going around in a while!
>> I was a compositor on Tron at DD, and that is exactly where this "channel
>> virus" came from. The red guards were the armies of programs that CLU used
>> as his muscle in the machine world of Tron. They can be seen in the
>> background of many shots, and they wear helmets and carry spears which glow
>> in some cases. The costume designers on that show did an amazing job. Those
>> lines on the suits that glow were actually practical. However, there were
>> some cases where they wanted the glow enhanced, or the electrical portions
>> of the suits were malfunctioning, so we did this work digitally. Once a
>> look was established, someone at DD made a gizmo for the glow enhancement,
>> hence the redguard1.glow layer.
>>
>> This thing is insidious. It quickly spread to pretty much every comp on
>> Tron. Whenever you cut and paste a node from a script which has this layer,
>> it would embed the layer creation code in the cut and paste stack, as
>> someone on the list demonstrated. Every time a script was reused, a gizmo
>> exported, or an artist shared some nodes with another artist, the channel
>> virus was propagated. The redguard1.glow layer started showing up elsewhere
>> in DD, it surfaced on Real Steel and Transformers 3.
>>
>> As mentioned previously in the thread, the only way to get rid of this is
>> with a text editor, or if you're handy with sed or awk you can probably
>> figure that out too. Every Nuke script in the entire facility must be
>> checked, plus every single gizmo and Nuke script that is found in the
>> NUKE_PATH environment. Don't forget user's home directories either.
>>
>>
>> On Nov 19, 2011, at 12:07 PM, Dan Walker wrote:
>>
>> I'm gonna try a show wide search for this in all our Nuke comps on
>> Monday.
>>
>> Will let ya know what I find too.
>>
>>
>>
>> On Sat, Nov 19, 2011 at 10:38 AM, Diogo Girondi <
>> diogogiro...@gmail.com> wrote:
>>
>>> I'll look for those in some scripts I have here. But I honestly don't
>>> remember seeing any of those layers showing up in 6.3v2 and earlier
>>> versions.
>>>
>>>
>>> On 19/11/2011, at 13:53, Ean Carr < eanc...@gmail.com>
>>> wrote:
>>>
>>> Well, what a coincidence. I just found a script at our facility with
>>> this:
>>>
>>> add_layer {rgba rgba.beta redguard1.glow}
>>>
>>> Fun times.
>>>
>>> -Ean
>>>
>>> On Sat, Nov 19, 2011 at 12:12 PM, Ean Carr < 
>>> eanc...@gmail.com> wrote:
>>>
 Our little "virus" layer is rgba.beta. Can't seem to get rid of the
 little rascal. -Ean


 On Sat, Nov 19, 2011 at 11:56 AM, Howard Jones <
 mrhowardjo...@yahoo.com> wrote:

> I've sent this to support - but it could be a legacy thing, I'm on
> 6.2v2 here so maybe 6.3 has the cure?
>
> Howard
>
>   --
> *From:* Dennis Steinschulte < 
> den...@rebelsofdesign.com>
> *To:* Howard Jones < mrhowar

Re: [Nuke-users] Gamma and Alpha

2011-11-15 Thread Ivan Busquets
I hope there was a bet involved... :)

If you need further proof, you could use this script:

set cut_paste_input [stack 0]
version 6.2 v4
BackdropNode {
 inputs 0
 name BackdropNode2
 tile_color 0x7171c600
 label BG
 note_font_size 42
 selected true
 xpos 2434
 ypos 13246
 bdheight 156
}
BackdropNode {
 inputs 0
 name BackdropNode3
 tile_color 0x8e8e3800
 label "Convert back to linear\n\n(assuming your destination\napp will
convert\nto sRGB when importing)"
 note_font_size 22
 selected true
 xpos 2103
 ypos 14009
 bdwidth 286
 bdheight 218
}
BackdropNode {
 inputs 0
 name BackdropNode4
 tile_color 0x8e8e3800
 label Compare
 note_font_size 42
 selected true
 xpos 2613
 ypos 13944
 bdwidth 360
 bdheight 167
}
BackdropNode {
 inputs 0
 name BackdropNode1
 tile_color 0x8e8e3800
 label FG
 note_font_size 42
 selected true
 xpos 2191
 ypos 13222
 bdheight 190
}
add_layer {rgba redguard1.glow}
ColorWheel {
 inputs 0
 gamma 0.45
 name A
 selected true
 xpos 2201
 ypos 13300
}
Blur {
 size 100
 name Blur46
 selected true
 xpos 2201
 ypos 13374
}
Dot {
 name Dot30
 selected true
 xpos 2235
 ypos 13479
}
set N18ce1090 [stack 0]
CheckerBoard2 {
 inputs 0
 name B
 selected true
 xpos 2444
 ypos 13326
}
set Nd0e8d830 [stack 0]
Dot {
 name Dot31
 selected true
 xpos 2478
 ypos 13639
}
set N985ce2a0 [stack 0]
Colorspace {
 colorspace_out sRGB
 name Colorspace2
 selected true
 xpos 2444
 ypos 13704
}
set Ned256de0 [stack 0]
Merge2 {
 inputs 2
 operation stencil
 name Merge100
 label B*(1-a)
 selected true
 xpos 2444
 ypos 13852
}
push $N18ce1090
push $N985ce2a0
Merge2 {
 inputs 2
 name Merge101
 selected true
 xpos 2207
 ypos 13634
}
Colorspace {
 colorspace_out sRGB
 name Colorspace1
 selected true
 xpos 2207
 ypos 13698
}
Merge2 {
 inputs 2
 operation from
 name Merge102
 selected true
 xpos 2207
 ypos 13857
}
set Nd30cc020 [stack 0]
Unpremult {
 name Unpremult4
 selected true
 xpos 2207
 ypos 14144
}
Colorspace {
 colorspace_in sRGB
 name Colorspace3
 selected true
 xpos 2207
 ypos 14168
}
Premult {
 name Premult6
 selected true
 xpos 2207
 ypos 14196
}
Write {
 name Write2
 label "write out here\n"
 selected true
 xpos 2207
 ypos 14263
}
push $Nd30cc020
push $Ned256de0
Dot {
 name Dot32
 selected true
 xpos 2657
 ypos 13709
}
Merge2 {
 inputs 2
 name Merge103
 label "Comped in sRGB space"
 selected true
 xpos 2623
 ypos 14057
}
push $N18ce1090
Dot {
 name Dot33
 selected true
 xpos 2755
 ypos 13479
}
push $Nd0e8d830
Dot {
 name Dot34
 selected true
 xpos 2917
 ypos 13354
}
Merge2 {
 inputs 2
 name Merge104
 label "Comped in linear"
 selected true
 xpos 2883
 ypos 14024
}
Colorspace {
 colorspace_out sRGB
 name Colorspace4
 label "Post sRGB conversion"
 selected true
 xpos 2883
 ypos 14066
}


On Tue, Nov 15, 2011 at 12:30 AM, Ron Ganbar  wrote:

> I knew I was right. (You guys just proved an old argument I had with
> someone).
> Oh, the joys of self gratification.
>
>
> Ron Ganbar
> email: ron...@gmail.com
> tel: +44 (0)7968 007 309 [UK]
>  +972 (0)54 255 9765 [Israel]
> url: http://ronganbar.wordpress.com/
>
>
>
> On 15 November 2011 10:28, Ivan Busquets  wrote:
>
>> Hi Gavin,
>>
>> As you said yourself, the equation cannot be solved UNLESS you know both
>> variables on one of the sides.
>> In other words, you'd need to have the BG image in order to prep a FG
>> image so it can be comped in sRGB space and match the results of a linear
>> comp.
>>
>> So is there no way to output a PSD or PNG or TIFF which will look the
>>> same as my composite in Nuke over a white background?
>>
>>
>> If you need to get the same results on a white background, you could prep
>> your FG element such that:
>>
>> X = ( ( (FG * alpha + (1 - alpha)) ^ 2.2 - (1 - alpha) ) / alpha ) ^
>> (1/2.2)
>>
>> Where X is the FG image you'd want to export to be comped on a white BG.
>> But of course, this will only give you a match when comping the FG over a
>> WHITE BG. If the BG changes, then you'd need to prep a different FG to go
>> with it.
>>
>> Hope that helps.
>>
>> Cheers,
>> Ivan
>>
>>
>>
>> On Mon, Nov 14, 2011 at 12:13 PM, Gavin Greenwalt <
>> im.thatone...@gmail.com> wrote:
>>
>>> How are Nuke users handling workflows in which they need to deliver
>>> images with alpha that will be composited in sRGB space not linear space?
>>>
>>> Essentially we have a situation where you would need to find equations
>>> for u and v such that (xy + z(1-y))^(1-2.2) = (uv + z^(1-2.2)(1-v)).
>>>
>>> My initial impression is that it's impossible since the simplified
>>> version of this conundrum would be (x+y)^2 = (u+v)  which I believe is
>>> mathematic

Re: [Nuke-users] Gamma and Alpha

2011-11-15 Thread Ivan Busquets
Hi Gavin,

As you said yourself, the equation cannot be solved UNLESS you know both
variables on one of the sides.
In other words, you'd need to have the BG image in order to prep a FG image
so it can be comped in sRGB space and match the results of a linear comp.

> So is there no way to output a PSD or PNG or TIFF which will look the same
> as my composite in Nuke over a white background?


If you need to get the same results on a white background, you could prep
your FG element such that:

X = ( ( (FG * alpha + (1 - alpha)) ^ 2.2 - (1 - alpha) ) / alpha ) ^
(1/2.2)

Where X is the FG image you'd want to export to be comped on a white BG.
But of course, this will only give you a match when comping the FG over a
WHITE BG. If the BG changes, then you'd need to prep a different FG to go
with it.
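
As a quick numeric check of the idea (a sketch using the common 1/2.2 power
approximation for the sRGB transfer; the exact sRGB curve is piecewise, and
which exponent sits inside vs. outside depends on the direction your
pipeline defines the conversion, but the algebra is the same for any
monotone transfer):

```python
def encode(v):   # linear -> display (approx. sRGB)
    return v ** (1.0 / 2.2)

def decode(v):   # display -> linear
    return v ** 2.2

def prep_fg(fg, a):
    """Linear FG value to deliver so that comping it over WHITE in
    display (sRGB) space matches the linear comp viewed in sRGB."""
    comp_lin = fg * a + (1.0 - a)           # linear over white
    return decode((encode(comp_lin) - (1.0 - a)) / a)

fg, a = 0.18, 0.4
target = encode(fg * a + (1.0 - a))                  # linear comp, displayed
srgb_comp = encode(prep_fg(fg, a)) * a + (1.0 - a)   # over white, in sRGB
print(abs(target - srgb_comp) < 1e-9)                # True: they match
```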

Hope that helps.

Cheers,
Ivan



On Mon, Nov 14, 2011 at 12:13 PM, Gavin Greenwalt
wrote:

> How are Nuke users handling workflows in which they need to deliver images
> with alpha that will be composited in sRGB space not linear space?
>
> Essentially we have a situation where you would need to find equations for
> u and v such that (xy + z(1-y))^(1-2.2) = (uv + z^(1-2.2)(1-v)).
>
> My initial impression is that it's impossible since the simplified version
> of this conundrum would be (x+y)^2 = (u+v)  which I believe is
> mathematically impossible to solve... right?   So is there no way to output
> a PSD or PNG or TIFF which will look the same as my composite in Nuke over
> a white background?
>
> Thanks,
> Gavin
>
> ___
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>

Re: [Nuke-users] OFlow: Source Frame at the current Frame using Speed?

2011-10-25 Thread Ivan Busquets
Here, this is what I would do if "speed" is not constant:

(please note the expression could use some further work to avoid divisions
by 0, etc, but you get the idea)

set cut_paste_input [stack 0]
version 6.3 v4
push $cut_paste_input
Text {
 font /Library/Fonts/Arial.ttf
 size 240
 yjustify center
 box {457 389 1371 1167}
 center {914 778}
 name Text2
 selected true
 xpos -353
 ypos -32
}
Crop {
 box {0 0 1828 1556}
 name Crop1
 selected true
 xpos -353
 ypos -6
}
set Na4151610 [stack 0]
OFXuk.co.thefoundry.time.oflow_v100 {
 method Blend
 timing Speed
 timingFrame 1
 timingSpeed {{curve x1 1 x38 3}}
 filtering Normal
 warpMode Simple
 correctLuminance false
 automaticShutterTime false
 shutterTime 0
 shutterSamples 1
 vectorDetail 0.2
 smoothness 0.5
 blockSize 6
 Tolerances 0
 weightRed 0.3
 weightGreen 0.6
 weightBlue 0.1
 showVectors false
 cacheBreaker false
 name OFlow
 selected true
 xpos -240
 ypos 71
}
push $Na4151610
NoOp {
 name NoOp1
 selected true
 xpos -353
 ypos 156
 addUserKnob {20 User}
 addUserKnob {7 avg_speed}
 avg_speed {{"OFlow.timingSpeed.integrate(OFlow.first_frame,frame) /
(frame-OFlow.first_frame)"}}
 addUserKnob {7 test l frame R 0 1000}
 test {{"(frame-OFlow.first_frame) * avg_speed + OFlow.first_frame" i}}
}


On Tue, Oct 25, 2011 at 3:31 PM, Ivan Busquets wrote:

> Yes, you're right Howard.
> That expression only works assuming the speed is constant.
>
> If the speed parameter is animated, you would have to find the average
> speed up until that point (the integral of the speed curve from start to
> current, divided by current - start), and use that in place of "timingSpeed"
>
>
>
> On Tue, Oct 25, 2011 at 2:52 PM, Howard Jones wrote:
>
>> Hi
>>
>> Just thought I'd try this but no luck, could be me.
>>
>> Howard
>>
>> set cut_paste_input [stack 0]
>> version 6.3 v5
>> push $cut_paste_input
>> add_layer {rgba rgba.beta}
>> Text {
>>  font /Library/Fonts/Arial.ttf
>>  yjustify center
>>  box {480 270 1440 810}
>>  center {960 540}
>>  name Text1
>>  selected true
>>  xpos 1073
>>  ypos 139
>> }
>> set N2114dbf0 [stack 0]
>> OFXuk.co.thefoundry.time.oflow_v100 {
>>  method Motion
>>  timing Speed
>>  timingFrame 1
>>  timingSpeed {{curve x16 0.5 x38 3}}
>>  filtering Normal
>>  warpMode Simple
>>  correctLuminance false
>>  automaticShutterTime false
>>  shutterTime 0
>>  shutterSamples 1
>>  vectorDetail 0.2
>>  smoothness 0.5
>>  blockSize 6
>>  Tolerances 0
>>  weightRed 0.3
>>  weightGreen 0.6
>>  weightBlue 0.1
>>  showVectors false
>>  cacheBreaker false
>>  name OFlow
>>  selected true
>>  xpos 1073
>>  ypos 242
>> }
>> push $N2114dbf0
>> NoOp {
>>  name NoOp1
>>  selected true
>>  xpos 976
>>  ypos 238
>>  addUserKnob {20 User}
>>  addUserKnob {7 test l frame R 0 1000}
>>  test {{"(frame-OFlow.first_frame) * OFlow.timingSpeed +
>> OFlow.first_frame"}}
>> }
>>
>>
>> --
>> *From:* Ivan Busquets 
>> *To:* Nuke user discussion 
>> *Sent:* Tuesday, 25 October 2011, 19:56
>> *Subject:* Re: [Nuke-users] OFlow: Source Frame at the current Frame
>> using Speed?
>>
>> If it's set to speed, something like this should do it:
>>
>> (frame-OFlow.first_frame) * OFlow.timingSpeed + OFlow.first_frame
>>
>>
>> On Tue, Oct 25, 2011 at 11:43 AM, David Schnee wrote:
>>
>> Does anyone know how to derive the actual source frames on the current
>> frame when using the 'Speed' timing method in OFlow?  I'm looking to get a
>> curve to export ascii data of the source frames on the current frame for a
>> range.
>>
>> Cheers,
>> -Schnee
>>
>> --
>>
>> \/ davids / comp \/ 177
>> /\ tippettstudio /\ sno
>>
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>>
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>>
>> ___
>> Nuke-users mailing list
>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>
>
>

Re: [Nuke-users] OFlow: Source Frame at the current Frame using Speed?

2011-10-25 Thread Ivan Busquets
Yes, you're right Howard.
That expression only works assuming the speed is constant.

If the speed parameter is animated, you would have to find the average speed
up until that point (the integral of the speed curve from start to the
current frame, divided by current - start), and use that in place of
"timingSpeed"
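
Note that (frame - first) * avg_speed then collapses to just the integral of
the speed curve, so the source frame is first + integrate(speed, first,
frame). A rough numeric sketch of that, with trapezoidal integration
standing in for Nuke's curve integrate():

```python
def source_frame(first, frame, speed, steps=1000):
    """Source frame for an OFlow-style 'Speed' retime, where speed(t)
    gives the instantaneous speed at output frame t."""
    h = (frame - first) / float(steps)
    integral = 0.0
    for i in range(steps):
        t = first + i * h
        # trapezoidal rule over the sampled speed curve
        integral += 0.5 * (speed(t) + speed(t + h)) * h
    return first + integral

# Constant speed 2.0 from frame 1: frame 11 should read source frame 21.
print(source_frame(1, 11, lambda t: 2.0))  # 21.0
```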


On Tue, Oct 25, 2011 at 2:52 PM, Howard Jones wrote:

> Hi
>
> Just thought I'd try this but no luck, could be me.
>
> Howard
>
> set cut_paste_input [stack 0]
> version 6.3 v5
> push $cut_paste_input
> add_layer {rgba rgba.beta}
> Text {
>  font /Library/Fonts/Arial.ttf
>  yjustify center
>  box {480 270 1440 810}
>  center {960 540}
>  name Text1
>  selected true
>  xpos 1073
>  ypos 139
> }
> set N2114dbf0 [stack 0]
> OFXuk.co.thefoundry.time.oflow_v100 {
>  method Motion
>  timing Speed
>  timingFrame 1
>  timingSpeed {{curve x16 0.5 x38 3}}
>  filtering Normal
>  warpMode Simple
>  correctLuminance false
>  automaticShutterTime false
>  shutterTime 0
>  shutterSamples 1
>  vectorDetail 0.2
>  smoothness 0.5
>  blockSize 6
>  Tolerances 0
>  weightRed 0.3
>  weightGreen 0.6
>  weightBlue 0.1
>  showVectors false
>  cacheBreaker false
>  name OFlow
>  selected true
>  xpos 1073
>  ypos 242
> }
> push $N2114dbf0
> NoOp {
>  name NoOp1
>  selected true
>  xpos 976
>  ypos 238
>  addUserKnob {20 User}
>  addUserKnob {7 test l frame R 0 1000}
>  test {{"(frame-OFlow.first_frame) * OFlow.timingSpeed +
> OFlow.first_frame"}}
> }
>
>
> --
> *From:* Ivan Busquets 
> *To:* Nuke user discussion 
> *Sent:* Tuesday, 25 October 2011, 19:56
> *Subject:* Re: [Nuke-users] OFlow: Source Frame at the current Frame using
> Speed?
>
> If it's set to speed, something like this should do it:
>
> (frame-OFlow.first_frame) * OFlow.timingSpeed + OFlow.first_frame
>
>
> On Tue, Oct 25, 2011 at 11:43 AM, David Schnee  wrote:
>
> **
> Does anyone know how to derive the actual source frames on the current
> frame when using the 'Speed' timing method in OFlow?  I'm looking to get a
> curve to export ascii data of the source frames on the current frame for a
> range.
>
> Cheers,
> -Schnee
>
> --
>
> \/ davids / comp \/ 177
> /\ tippettstudio /\ sno
>
>
>
>
>
>
>
>

Re: [Nuke-users] OFlow: Source Frame at the current Frame using Speed?

2011-10-25 Thread Ivan Busquets
If it's set to speed, something like this should do it:

(frame-OFlow.first_frame) * OFlow.timingSpeed + OFlow.first_frame


On Tue, Oct 25, 2011 at 11:43 AM, David Schnee  wrote:

> **
> Does anyone know how to derive the actual source frames on the current
> frame when using the 'Speed' timing method in OFlow?  I'm looking to get a
> curve to export ascii data of the source frames on the current frame for a
> range.
>
> Cheers,
> -Schnee
>
> --
>
> \/ davids / comp \/ 177
> /\ tippettstudio /\ sno
>
>
>

Re: [Nuke-users] merge all: mask/stencil VS in/out

2011-10-12 Thread Ivan Busquets
Hey Jan!

Annoying one, for sure. Especially since the older Merge node did have the
expected behaviour.

This has bitten me a few times, and while you could obviously write your own
workaround gizmo/plugin, I find the easiest approach is to set up a couple
of shortcuts for 'Stencil' and 'Mask' that use the Merge node class instead
of Merge2:

your_menu_item.addCommand("Merge/Merges/Stencil", "nuke.createNode('Merge',
'operation stencil')", icon = "MergeOut.png")
your_menu_item.addCommand("Merge/Merges/Mask", "nuke.createNode('Merge',
'operation mask')", icon = "MergeIn.png")

Hope that helps. But it might be worth giving The Foundry a nudge too.


On Wed, Oct 12, 2011 at 2:12 PM, Jan Dubberke  wrote:

> Hi all,
>
> "merge all" doesn't seem to work with additional channels when set to
> stencil/mask.
>
> It does work using in/out (which I'm avoiding to use for all the obvious
> reasons)
> IMO this a bug (and am actually baffled that I didn't come across earlier)
>
> please have a look at the attached script snippet where I'm trying to alter
> the "mask" channel. so set your viewer to "mask" and compare the 2 merge
> nodes
>
> does this make sense to anyone?
> cheers,
> Jan
>
>
>
> set cut_paste_input [stack 0]
> version 6.2 v2
> CheckerBoard2 {
>  inputs 0
>  name CheckerBoard1
>  selected true
>  xpos 255
>  ypos -169
> }
> Roto {
>  output mask
>  curves {AnimTree: "" {
>  Version: 1.2
>  Flag: 0
>  RootNode: 1
>  Node: {
>  NodeName: "Root" {
>   Flag: 512
>   NodeType: 1
>   Transform: 0 0 S 0 0 S 0 0 S 0 0 S 0 1 S 0 1 S 0 0 S 0 1024 S 0 778
>   NumOfAttributes: 10
>   "vis" S 0 1 "opc" S 0 1 "mbo" S 0 1 "mb" S 0 1 "mbs" S 0 0.5 "fo" S 0 1
> "fx" S 0 0 "fy" S 0 0 "ff" S 0 1 "ft" S 0 0
>  }
>  NumOfChildren: 1
>  Node: {
>   NodeName: "Ellipse1" {
>Flag: 576
>NodeType: 3
>CurveGroup: "" {
> Transform: 0 0 S 0 0 S 0 0 S 0 0 S 0 1 S 0 1 S 0 0 S 0 1057.5 S 0 727.5
> Flag: 0
> NumOfCubicCurves: 2
> CubicCurve: "" {
>  Type: 0 Flag: 8192 Dim: 2
>  NumOfPoints: 12
>  0 S 0 -329.99 S 0 0 0 0 S 0 1022.5 S 0 285 0 0 S 0 329.99 S 0 0 0 0 S
> 0 0 S 0 -280.284 0 0 S 0 1620 S 0 792.5 0 0 S 0 0 S 0 280.284 0 0 S 0 329.99
> S 0 0 0 0 S 0 1022.5 S 0 1300 0 0 S 0 -329.99 S 0 0 0 0 S 0 0 S 0 280.284 0
> 0 S 0 425 S 0 792.5 0 0 S 0 0 S 0 -280.284 0
> }
> CubicCurve: "" {
>  Type: 0 Flag: 8192 Dim: 2
>  NumOfPoints: 12
>  0 S 0 -329.99 S 0 0 0 0 S 0 0 S 0 0 0 0 S 0 329.99 S 0 0 0 0 S 0 0 S 0
> -280.284 0 0 S 0 0 S 0 0 0 0 S 0 0 S 0 280.284 0 0 S 0 329.99 S 0 0 0 0 S 0
> 0 S 0 0 0 0 S 0 -329.99 S 0 0 0 0 S 0 0 S 0 280.284 0 0 S 0 0 S 0 0 0 0 S 0
> 0 S 0 -280.284 0
> }
> NumOfAttributes: 43
> "vis" S 0 1 "r" S 0 1 "g" S 0 1 "b" S 0 1 "a" S 0 1 "ro" S 0 0 "go" S 0
> 0 "bo" S 0 0 "ao" S 0 0 "opc" S 0 1 "bm" S 0 0 "inv" S 0 0 "mbo" S 0 0 "mb"
> S 0 1 "mbs" S 0 0.5 "mbsot" S 0 0 "mbso" S 0 0 "fo" S 0 1 "fx" S 0 0 "fy" S
> 0 0 "ff" S 0 1 "ft" S 0 0 "src" S 0 0 "stx" S 0 0 "sty" S 0 0 "str" S 0 0
> "sr" S 0 0 "ssx" S 0 1 "ssy" S 0 1 "ss" S 0 0 "spx" S 0 1024 "spy" S 0 778
> "stot" S 0 0 "sto" S 0 0 "sv" S 0 0 "sf" S 0 1 "sb" S 0 1 "nv" S 0 1 "view1"
> S 0 1 "ltn" S 0 690 "ltm" S 0 690 "ltt" S 0 0 "tt" S 0 6
>}
>   }
>   NumOfChildren: 0
>  }
>  }
> }
> }
>  toolbox {selectAll {
>  { selectAll ssx 1 ssy 1 sf 1 }
>  { createBezier ssx 1 ssy 1 sf 1 sb 1 tt 4 }
>  { createBSpline ssx 1 ssy 1 sf 1 sb 1 }
>  { createEllipse ssx 1 ssy 1 sf 1 sb 1 tt 6 }
>  { createRectangle ssx 1 ssy 1 sf 1 sb 1 }
>  { brush ssx 1 ssy 1 sf 1 sb 1 }
>  { eraser src 2 ssx 1 ssy 1 sf 1 sb 1 }
>  { clone src 1 ssx 1 ssy 1 sf 1 sb 1 }
>  { reveal src 3 ssx 1 ssy 1 sf 1 sb 1 }
>  { dodge src 1 ssx 1 ssy 1 sf 1 sb 1 }
>  { burn src 1 ssx 1 ssy 1 sf 1 sb 1 }
>  { blur src 1 ssx 1 ssy 1 sf 1 sb 1 }
>  { sharpen src 1 ssx 1 ssy 1 sf 1 sb 1 }
>  { smear src 1 ssx 1 ssy 1 sf 1 sb 1 }
> } }
>  toolbar_brush_hardness 0.20003
>  toolbar_lifetime_type all
>  toolbar_source_transform_scale {1 1}
>  toolbar_source_transform_center {320 240}
>  colorOverlay 0
>  lifetime_type "all frames"
>  lifetime_start 690
>  lifetime_end 690
>  motionblur_shutter_offset_type centred
>  source_black_outside true
>  name Roto1
>  selected true
>  xpos 255
>  ypos -56
> }
> set N2277ad20 [stack 0]
> push $cut_paste_input
> Roto {
>  output alpha
>  curves {AnimTree: "" {
>  Version: 1.2
>  Flag: 0
>  RootNode: 1
>  Node: {
>  NodeName: "Root" {
>   Flag: 512
>   NodeType: 1
>   Transform: 0 0 S 0 0 S 0 0 S 0 0 S 0 1 S 0 1 S 0 0 S 0 1024 S 0 778
>   NumOfAttributes: 10
>   "vis" S 0 1 "opc" S 0 1 "mbo" S 0 1 "mb" S 0 1 "mbs" S 0 0.5 "fo" S 0 1
> "fx" S 0 0 "fy" S 0 0 "ff" S 0 1 "ft" S 0 0
>  }
>  NumOfChildren: 1
>  Node: {
>   NodeName: "Ellipse1" {
>Flag: 512
>NodeType: 3
>CurveGroup: "" {
> Transform: 0 0 S 0 0 S 0 0 S 0 0 S 0 1 S 0 1 S 0 0 S 0 1057.5 S 0 727.5
> Flag: 0
> NumOfCubicCurves: 2
> CubicCurve: "" {
>  Type: 0 Flag: 8192 Dim: 2
>  NumOfPoints: 12
>  0 S 0 -

Re: [Nuke-users] Do not display the error state of a node which is inside a Gizmo

2011-10-12 Thread Ivan Busquets
Both error and hasError are available in the expression parser, actually.

node.error returns true if using the Node would result in an error (even if
the error comes from somewhere else upstream)
node.hasError only returns true when an error is raised within the node
itself.

As J said, just using "hasError" as an expression in the disable knob of a
Read node should do the trick there.
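A toy model of that distinction in plain Python (not the actual Nuke API; the Node class here is purely illustrative):

```python
# hasError is true only for the node that actually failed, while
# error is true for that node and for everything downstream of it.

class Node:
    def __init__(self, name, failed=False, inputs=()):
        self.name, self.failed, self.inputs = name, failed, list(inputs)

    def hasError(self):
        # An error raised within this node itself.
        return self.failed

    def error(self):
        # Using this node would result in an error (self or anything upstream).
        return self.failed or any(n.error() for n in self.inputs)

read = Node("Read1", failed=True)      # e.g. a bad file path
grade = Node("Grade1", inputs=[read])

assert read.hasError() and read.error()
assert not grade.hasError()            # Grade1 itself is fine...
assert grade.error()                   # ...but using it would still error
```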


On Wed, Oct 12, 2011 at 10:38 AM, Nathan Rusch wrote:

> I think J meant error instead of hasError. hasError() is a Python
> method, while error is the Nuke expression command.
>
> -Nathan
>
> -Original Message- From: Dorian Fevrier
> Sent: Wednesday, October 12, 2011 9:45 AM
> To: nuke-users@support.thefoundry.co.uk
> Subject: Re: [Nuke-users] Do not display the error state of a node which
> is inside a Gizmo
>
>
> Thanks for your answer!
>
> To be honest, I do not really understand. :(
> But it gave me an idea
>
> def returnFalse():
>  return False
> node.hasError = returnFalse
>
> # Result: Traceback (most recent call last):
>  File "", line 1, in 
> AttributeError: 'Node' object attribute 'hasError' is read-only
>
> Were you talking about overloading the hasError function?
>
> Thanks in advance. :)
>
> Regards,
>
> Dorian
>
> On 10/12/2011 06:22 PM, J Bills wrote:
>
>> someone else might have a better answer, but off the top of my head,
>> if you put "hasError" in the disable knob of the offending node, I
>> believe that will fix it.
>>
>> On Wed, Oct 12, 2011 at 4:53 AM, Dorian Fevrier
>>  wrote:
>>
>>> Hi Nuke users,
>>>
>>> I'm searching something that appear to be simple but I don't find any way
>>> to
>>> do this.
>>>
>>> I have a Gizmo node with some switch and read nodes inside.
>>>
>>> Depending on the case, the read node can have a bad file value (generated
>>> by an expression) and be in "ERROR", and "ERROR" is written on the Gizmo.
>>>
>>> Is there a simple way to keep this node from propagating its "ERROR" state
>>> to the Gizmo?
>>>
>>> Actually, the error message is written on it, but because I use a switch,
>>> the gizmo works perfectly...
>>>
>>> I hope someone already encountered this before.
>>>
>>> Thanks in advance,
>>>
>>> Dorian
>>>
>>>
>>
>>
>>
>

Re: [Nuke-users] Grid oddness

2011-10-11 Thread Ivan Busquets
Like Jerry and Colin said, I think it's just a sampling problem (or lack of
antialiasing) on a very fine pattern, most likely due to the Viewer doing
non-filtered transforms.

You probably know about moire patterns already, but here's some info just in
case:
http://en.wikipedia.org/wiki/Moir%C3%A9_pattern

You can set the viewer to 1:1 and replicate the issue by scaling down the
image with different filters. Impulse (no filtering) shows the same issue.

set cut_paste_input [stack 0]
version 6.2 v4
push $cut_paste_input
Grid {
 number {128 97.25}
 name Grid2
 selected true
 xpos 27769
 ypos -3783
}
set N15c997f0 [stack 0]
Reformat {
 type scale
 scale 0.5
 filter Impulse
 name Reformat4
 label Impulse
 selected true
 xpos 27724
 ypos -3693
}
push $N15c997f0
Reformat {
 type scale
 scale 0.5
 name Reformat3
 label Cubic
 selected true
 xpos 27823
 ypos -3692
}
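The decimation effect is easy to reproduce outside of Nuke as well. A sketch in plain Python, where a 1-D stripe pattern stands in for the Grid render (all names are illustrative):

```python
# Decimating a high-frequency pattern with no filtering ("Impulse") keeps
# whichever samples happen to line up with the new grid, while an averaging
# filter preserves the mean brightness. This is the moire-style darkening
# or brightening seen in the viewer.

fine = [1.0 if i % 2 == 0 else 0.0 for i in range(100)]  # 1-pixel stripes

impulse = fine[::2]                      # take every other pixel, no filtering
box = [(fine[i] + fine[i + 1]) / 2.0     # average adjacent pixel pairs
       for i in range(0, len(fine), 2)]

print(sum(impulse) / len(impulse))  # 1.0 -> stripes collapse to solid white
print(sum(box) / len(box))          # 0.5 -> correct average brightness kept
```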



On Tue, Oct 11, 2011 at 8:02 AM, Colin Alway  wrote:

> I think it's to do with the way nuke scales images down to display them in
> the viewer.
> If you apply it to a small image format, and view the image at 1:1 do you
> still see the darkening?
>
>
> On 11 October 2011 14:20, Ron Ganbar  wrote:
>
>> Hi all,
>> paste this grid node into Nuke and have a look at it. On
>> two separate computers, one with Nuke6.3v1 and the other 6.3v2 the image
>> gets darker towards the top right. How odd. Can anybody else confirm or
>> offer a reason for this?
>>
>> set cut_paste_input [stack 0]
>> version 6.3 v1
>> push $cut_paste_input
>> Grid {
>>  number {128 97.25}
>>  name Grid1
>>  selected true
>>  xpos -40
>>  ypos -77
>> }
>>
>>
>> Ron Ganbar
>> email: ron...@gmail.com
>> tel: +44 (0)7968 007 309 [UK]
>>  +972 (0)54 255 9765 [Israel]
>> url: http://ronganbar.wordpress.com/
>>
>>
>>
>
>
>
> --
> colin alway
>
>

[Nuke-users] "All plugins -> Update..." slow in 6.3?

2011-10-05 Thread Ivan Busquets
Hi,

Has anyone noticed the "All plugins -> Update" command taking a lot longer
in Nuke 6.3 than it does in 6.2?

Not sure if it's specific to my/our setup, so I'm curious if anyone else has
noticed a difference between both versions.

Thanks,
Ivan

Re: [Nuke-users] Odd gamma behaviour

2011-10-05 Thread Ivan Busquets
Yep, I don't think __alpha is defined when building on more modern machines
(didn't know the history behind it, though. Thanks Jonathan)

Ben, thanks for the credit, but this is not what I meant in the original
post.

What I was trying to say was that the original issue Ron posted was due to
filters acting on a square region. If you're applying a large blur to
something like a circle, pixels will still be filtered based on a square
region. This is generally ok, since the pixels at the corners of your image
(further from the center of the circle) will be filtered using a lot more
black samples than, say, any other pixels along the sides of the image
(which would be closer to the circle). So, they'll get smaller values, as
you would expect.

The problem comes when you push those smaller values (with a gamma
operation, for example) far enough that they start getting close to any
other value in the image. Then you end up with a "square", as Ron's example
was showing.

Hope that makes more sense. :)

Cheers,
Ivan

On Wed, Oct 5, 2011 at 2:29 PM, Jonathan Egstad wrote:

> > ---snip--
> >
> > float G = gamma[z];
> > // patch for linux alphas because the pow function behaves badly
> > // for very large or very small exponent values.
> > #ifdef __alpha
> > if (G < 0.008f)
> >   G = 0.0f;
> > if (G > 125.0f)
> >   G = 125.0f;
> > #endif
> >
> > ---snip---
>
> I might be wrong, but I think this clamp patch is only enabled when built
> on on DEC Alpha machines.  You may want to double-check that the __alpha
> token gets enabled during a build on an Intel machine.
>
> For those interested in Nuke history - the DEC Alpha was one of the first
> Linux boxes used in production at Digital Domain back in the mid '90s and
> Nuke ran like blazes on them compared to the relatively pokey SGIs.
>
> -jonathan
>
>

Re: [Nuke-users] Transparancy grid

2011-10-05 Thread Ivan Busquets
You can register multiple viewerProcesses, not IP's.

That's why I was recommending to use viewerProcesses for anything that needs
to be shared across a show (like a 3D lut, any additional "looks", crop
guides, etc), and leave the IP free for the artists to use anything they
want in there.

It's just an opinion, but I find people make a lot more use of the IP if it
doesn't interfere with anything else (e.g., they won't lose any of the
show's predefined looks if they switch their IP on and off).

Cheers,
Ivan

On Wed, Oct 5, 2011 at 12:37 PM, Deke Kincaid  wrote:

> You can define any number of gizmos as separate viewer processes just like
> srgb/rec709, etc. So you can have more than one IP, essentially.
>
> -deke
>
>
> On Wed, Oct 5, 2011 at 11:52, Randy Little  wrote:
>
>> Yeah, I mean it would be nice to have more than one IP.   Like, you could
>> have several IP groups.   Does that make sense?   Is there an easy way to
>> have several IP groups?   Never tried it.
>>
>> I think its that I miss shake built in overlays.
>>
>> Randy S. Little
>> http://www.rslittle.com <http://reel.rslittle.com>
>>
>>
>>
>>
>>
>> On Wed, Oct 5, 2011 at 12:18, Ivan Busquets wrote:
>>
>>> If your show is using viewerProcess, then you still have the old Input
>>> Process for yourself, right?
>>> You can set up Input Process to happen either before or after the
>>> viewerProcess, depending on your needs, but you don't need to turn off
>>> either of them to see the other.
>>>
>>> Unless I'm misreading and your show's viewer options are actually set up
>>> as an Input Process node. If that's the case, I'd definitely recommend
>>> moving that into the viewerProcess dropdown, so the users still get the
>>> Input Process slot free to use for anything they need (an overlay, turning
>>> on/off an anaglyph view, a certain look, etc).
>>>
>>> I agree that this seems like a standard option in every other comp
>>> package, but having the ability to use Input Process for anything you need
>>> makes it a lot more flexible, IMHO.
>>>
>>>
>>>
>>>
>>> On Wed, Oct 5, 2011 at 10:31 AM, Randy Little wrote:
>>>
>>>> Ron what I am saying is that I wouldn't want to be messing around with a
>>>> SHOW template viewer process that may have all kinds of hooks inside of it.
>>>>   It would be nice if Nuke could show "Alpha" as transparent.   I always
>>>> feel like Nuke's viewer is just antique even compared to what was capable in
>>>> Shake 2.  (I know it's way faster, but the viewer options are so limited.)
>>>> Like, to do what you are saying in an environment where it's safe to do so
>>>> would also turn on/off all your other view processes.  Then you have to 
>>>> have
>>>> the group open somewhere and go hunt for it just to toggle alpha
>>>> transparency on and off.Does it work?  Sure.  Does it seem like almost
>>>> every other compositing program dating back to at least combustion and 
>>>> maybe
>>>> even composite had or has this feature? Yes.is it a killer?  No.
>>>> It sure would be nice though to have more of those viewer overlays that
>>>> shake had though.
>>>>
>>>> Randy S. Little
>>>> http://www.rslittle.com <http://reel.rslittle.com>
>>>>
>>>>
>>>>
>>>>
>>>> On Wed, Oct 5, 2011 at 11:09, Ron Ganbar  wrote:
>>>>
>>>>> You simply make a bigger viewer process with more options in it that
>>>>> can be turned on and off.
>>>>>
>>>>>
>>>>>
>>>>> Ron Ganbar
>>>>> email: ron...@gmail.com
>>>>> tel: +44 (0)7968 007 309 [UK]
>>>>>  +972 (0)54 255 9765 [Israel]
>>>>> url: http://ronganbar.wordpress.com/
>>>>>
>>>>>
>>>>>
>>>>> On 5 October 2011 19:06, Randy Little  wrote:
>>>>>
>>>>>> Yeah how does that work if you already have a view process for a job.
>>>>>>
>>>>>> Randy S. Little
>>>>>> http://www.rslittle.com <http://reel.rslittle.com>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Oct 5, 2011 at 10:39, Deke Kinca

Re: [Nuke-users] Transparancy grid

2011-10-05 Thread Ivan Busquets
If your show is using viewerProcess, then you still have the old Input
Process for yourself, right?
You can set up Input Process to happen either before or after the
viewerProcess, depending on your needs, but you don't need to turn off
either of them to see the other.

Unless I'm misreading and your show's viewer options are actually set up as
an Input Process node. If that's the case, I'd definitely recommend moving
that into the viewerProcess dropdown, so the users still get the Input
Process slot free to use for anything they need (an overlay, turning on/off
an anaglyph view, a certain look, etc).

I agree that this seems like a standard option in every other comp package,
but having the ability to use Input Process for anything you need makes it a
lot more flexible, IMHO.



On Wed, Oct 5, 2011 at 10:31 AM, Randy Little wrote:

> Ron what I am saying is that I wouldn't want to be messing around with a
> SHOW template viewer process that may have all kinds of hooks inside of it.
>   It would be nice if Nuke could show "Alpha" as transparent.   I always
> feel like Nuke's viewer is just antique even compared to what was capable in
> Shake 2.  (I know it's way faster, but the viewer options are so limited.)
> Like, to do what you are saying in an environment where it's safe to do so
> would also turn on/off all your other view processes.  Then you have to have
> the group open somewhere and go hunt for it just to toggle alpha
> transparency on and off.   Does it work?  Sure.  Does it seem like almost
> every other compositing program dating back to at least Combustion and maybe
> even Composite had or has this feature?  Yes.  Is it a killer?  No.
> It sure would be nice though to have more of those viewer overlays that
> shake had though.
>
> Randy S. Little
> http://www.rslittle.com 
>
>
>
>
> On Wed, Oct 5, 2011 at 11:09, Ron Ganbar  wrote:
>
>> You simply make a bigger viewer process with more options in it that can
>> be turned on and off.
>>
>>
>>
>> Ron Ganbar
>> email: ron...@gmail.com
>> tel: +44 (0)7968 007 309 [UK]
>>  +972 (0)54 255 9765 [Israel]
>> url: http://ronganbar.wordpress.com/
>>
>>
>>
>> On 5 October 2011 19:06, Randy Little  wrote:
>>
>>> Yeah how does that work if you already have a view process for a job.
>>> Randy S. Little
>>> http://www.rslittle.com 
>>>
>>>
>>>
>>>
>>>
>>> On Wed, Oct 5, 2011 at 10:39, Deke Kincaid wrote:
>>>
 You can make the viewer go through any gizmo/group.  Just take the
 example Ron gave and register it as a viewer process.

 -deke


 On Wed, Oct 5, 2011 at 06:14, Ron Ganbar  wrote:

> Make a checkerboard and put everything over it? Wrap it up in a group
> and use it as VIEWER_INPUT.
>
>
> Ron Ganbar
> email: ron...@gmail.com
> tel: +44 (0)7968 007 309 [UK]
>  +972 (0)54 255 9765 [Israel]
> url: http://ronganbar.wordpress.com/
>
>
>
> On 5 October 2011 15:03, blemma wrote:
>
>> **
>> Hi.
>>
>> Is there a way to view the alpha as a transparancy grid? Like AE or
>> Fusion.
>>
>> Thanks
>>
>>
>
>
>



>>>
>>>
>>>
>>
>>
>>
>
>
>

Re: [Nuke-users] nuke python - ShuffleCopy

2011-09-30 Thread Ivan Busquets
This is a Python syntax conflict: "in" is a reserved keyword, so the parser
won't accept it as a keyword-argument name, regardless of the knob you're
trying to set.

Try this instead, which should work:

s = nuke.nodes.ShuffleCopy()
s['in'].setValue('rgba')
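For what it's worth, the restriction can be reproduced (and bypassed) in plain Python, since "in" is only illegal as a keyword-argument name, not as a dictionary key. So nuke.nodes.ShuffleCopy(**{'in': 'rgba'}) should also work. A sketch with a hypothetical stand-in function (make_node is not a real API):

```python
# "in" cannot be written as a keyword argument, but dict unpacking with **
# bypasses the parser restriction entirely.

def make_node(**knobs):
    # Stand-in for a node factory that takes knob names as keyword arguments.
    return knobs

# make_node(in='rgba')              # SyntaxError: invalid syntax
node = make_node(**{'in': 'rgba'})  # fine: 'in' never appears as a keyword
assert node['in'] == 'rgba'
```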


On Fri, Sep 30, 2011 at 10:12 AM, Matias Volonte wrote:

> when I try to create throught python a shuffleCopy node the following way I
> get an error:
>
> nuke.nodes.ShuffleCopy(in='rgba')
>
> If I create this node manually and I print it, that parameter appears there
> and it is fine.
>
> what is wrong with this? thanks.
>
>

Re: [Nuke-users] Python Constraint in One axis alone

2011-09-08 Thread Ivan Busquets
You can pass an index as an argument to setExpression (the index being the
field you want to set the expression to)

b['translate'].setExpression('%s.translate' % st.name(), 0)  # sets the
# expression for translate.x only




On Thu, Sep 8, 2011 at 8:43 AM, Matias Volonte wrote:

> Hello, I need some help, this is the issue I have,
>
> instead of constraining all the axis like this:
>
> b['translate'].setExpression('%s.translate'%st.name())
>
> i would like to constraint only the X axis, how can I do this? thanks
>
>

Re: [Nuke-users] Re: Passing an object track through multiple transforms

2011-09-06 Thread Ivan Busquets
> Nuke's appears to be not be an integer, but the values in your tree appear
> to either be 0.0 or 0.5, which is slightly odd
>

Bbox boundaries in Nuke are also integers (just like Shake's DOD). The
output value is always n.0 or n.5 because I'm averaging two integer bbox
boundaries to get its center.

Of course this is by no means an accurate way of getting the transformation
of a given point, but more an idea of something he could do without
resorting to the NDK.

Ideally, you'd want to write a plugin for that, I agree. Either one that
exposes the concatenated matrix, or one where you could plug a stack of
transforms and directly apply the result to one or more points. A
"Reconcile2D" ? :)


On Tue, Sep 6, 2011 at 8:47 PM, Ben Dickson  wrote:

> Heh, I remember trying the exact same thing in Shake years ago, to
> transform a roto point instead of using a 4-point stablise - the problem is
> the dod was an integer
>
> Nuke's appears to be not be an integer, but the values in your tree appear
> to either be 0.0 or 0.5, which is slightly odd
>
> Seems like it'd be fairly simple to make a plugin which exposes the 2D
> transform matrix,
> http://docs.thefoundry.co.uk/nuke/63/ndkdevguide/knobs-and-handles/output-knobs.html
>
> Ivan Busquets wrote:
>
>>it looks like nuke gives a rounding error using that setup (far
>>values are .99902 instead of 1.0).  probably negligible but I like
>>1.0 betta.
>>
>>
>> One small thing about both those UV-map generation methods. Keep in mind
>> that STMap samples pixels at the center, so you'll need to account for that
>> half-pixel difference in your expression. Otherwise the resulting map is
>> going to introduce a bit of unnecessary filtering when you feed it to an
>> STmap.
>>
>> An expression like this should give you a 1-to-1 result when you feed it
>> into an STMap:
>> ----------------
>> set cut_paste_input [stack 0]
>> version 6.3 v2
>> push $cut_paste_input
>> Expression {
>>  expr0 (x+0.5)/(width)
>>  expr1 (y+0.5)/(height)
>>  name Expression2
>>  selected true
>>  xpos -92
>>  ypos -143
>> }
>> ----------------
>>
>> With regards to the original question, though, it's a shame that one
>> doesn't have access to the concatenated 2d matrix from 2D transform nodes
>> within expressions. Otherwise you could just multiply your source point by
>> the concatenated matrix and get its final position. This information is
>> indeed passed down the tree, but it's not accessible for anything but
>> plugins (that I know).
>>
>> You could probably take advantage of the fact that the bbox is transformed
>> the same way as your image, and you CAN ask for the bbox boundaries using
>> expressions. So, you could have something with a very small bbox centered
>> around your point of interest, transform that using the same transforms
>> you're using for your kites, and then get the center of the transformed
>> bbox, if that makes sense. It's a bit convoluted, but it might do the trick
>> for you.
>>
>> Here's an example:
>> ----------------
>> set cut_paste_input [stack 0]
>> version 6.3 v2
>> push $cut_paste_input
>> Group {
>>  name INPUT_POSITION
>>  selected true
>>  xpos -883
>>  ypos -588
>>  addUserKnob {20 User}
>>  addUserKnob {12 position}
>>  position {1053.5 592}
>> }
>>  Input {
>>  inputs 0
>>  name Input1
>>  xpos -469
>>  ypos -265
>>  }
>>  Rectangle {
>>  area {{parent.position.x i x1 962} {parent.position.y i x1 391} {area.x+1
>> i} {area.y+1 i}}
>>  name Rectangle1
>>  selected true
>>  xpos -469
>>  ypos -223
>>  }
>>  Output {
>>  name Output1
>>  xpos -469
>>  ypos -125
>>  }
>> end_group
>> Transform {
>>  translate {36 0}
>>  center {1052 592}
>>  shutteroffset centred
>>  name Transform1
>>  selected true
>>  xpos -883
>>  ypos -523
>> }
>> set C48d17580 [stack 0]
>> Transform {
>>  translate {0 -11}
>>  rotate -34
>>  center {1052 592}
>>  shutteroffset centred
>>  name Transform2
>>  selected true
>>  xpos -883
>>  ypos -497
>> }
>> set C4489ddc0 [stack 0]
>> Transform {
>>  scale 1.36
>>  center {1052 592}
>>  shutteroffset centred

Re: [Nuke-users] Re: Passing an object track through multiple transforms

2011-09-06 Thread Ivan Busquets
>
> it looks like nuke gives a rounding error using that setup (far values are
> .99902 instead of 1.0).  probably negligible but I like 1.0 betta.
>

One small thing about both those UV-map generation methods. Keep in mind
that STMap samples pixels at the center, so you'll need to account for that
half-pixel difference in your expression. Otherwise the resulting map is
going to introduce a bit of unnecessary filtering when you feed it to an
STmap.

An expression like this should give you a 1-to-1 result when you feed it
into an STMap:

set cut_paste_input [stack 0]
version 6.3 v2
push $cut_paste_input
Expression {
 expr0 (x+0.5)/(width)
 expr1 (y+0.5)/(height)
 name Expression2
 selected true
 xpos -92
 ypos -143
}
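The half-pixel reasoning is quick to verify in plain Python: sampling at pixel centers means uv = (x + 0.5) / width, and mapping back with x = uv * width - 0.5 lands exactly on the original pixel, so an STMap built this way introduces no extra filtering.

```python
# Round-trip check of the pixel-center UV expression above.
width = 1920
for x in range(width):
    uv = (x + 0.5) / width                     # as in the Expression node
    assert abs(uv * width - 0.5 - x) < 1e-9    # maps straight back to pixel x
```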


With regards to the original question, though, it's a shame that one doesn't
have access to the concatenated 2d matrix from 2D transform nodes within
expressions. Otherwise you could just multiply your source point by the
concatenated matrix and get its final position. This information is indeed
passed down the tree, but it's not accessible to anything but plugins (that
I know of).

You could probably take advantage of the fact that the bbox is transformed
the same way as your image, and you CAN ask for the bbox boundaries using
expressions. So, you could have something with a very small bbox centered
around your point of interest, transform that using the same transforms
you're using for your kites, and then get the center of the transformed
bbox, if that makes sense. It's a bit convoluted, but it might do the trick
for you.
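
The bbox trick can be sketched in plain Python (the transform values mirror the Transform nodes in the example script, and the `affine` helper is a simplified stand-in for Nuke's transform math): push the corners of a 1-pixel box through each transform, take the axis-aligned bounds, and the bbox center recovers the transformed point, because an affine map sends the center of a box to the center of its bounding box.

```python
import math

def affine(tx=0.0, ty=0.0, rot_deg=0.0, scale=1.0, cx=0.0, cy=0.0):
    # 2x3 matrix for scale/rotate about a center, then translate
    # (a simplified stand-in for a Nuke Transform node).
    r = math.radians(rot_deg)
    a, b = scale * math.cos(r), -scale * math.sin(r)
    c, d = scale * math.sin(r), scale * math.cos(r)
    return (a, b, cx + tx - a * cx - b * cy,
            c, d, cy + ty - c * cx - d * cy)

def apply(m, x, y):
    a, b, e, c, d, f = m
    return a * x + b * y + e, c * x + d * y + f

point = (1053.5, 592.0)  # the 'position' knob in the example
transforms = [affine(tx=36, cx=1052, cy=592),                # Transform1
              affine(ty=-11, rot_deg=-34, cx=1052, cy=592),  # Transform2
              affine(scale=1.36, cx=1052, cy=592)]           # Transform3

# A 1-pixel box around the point, like the Rectangle inside INPUT_POSITION.
corners = [(point[0] + dx, point[1] + dy) for dx in (0, 1) for dy in (0, 1)]
for m in transforms:
    corners = [apply(m, x, y) for x, y in corners]

# Same idea as the out_position expression: center of the transformed bbox.
xs, ys = zip(*corners)
out = ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)

# Cross-check: transforming the box center directly gives the same answer.
px, py = point[0] + 0.5, point[1] + 0.5
for m in transforms:
    px, py = apply(m, px, py)
assert abs(out[0] - px) < 1e-6 and abs(out[1] - py) < 1e-6
```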

Here's an example:

set cut_paste_input [stack 0]
version 6.3 v2
push $cut_paste_input
Group {
 name INPUT_POSITION
 selected true
 xpos -883
 ypos -588
 addUserKnob {20 User}
 addUserKnob {12 position}
 position {1053.5 592}
}
 Input {
  inputs 0
  name Input1
  xpos -469
  ypos -265
 }
 Rectangle {
  area {{parent.position.x i x1 962} {parent.position.y i x1 391} {area.x+1
i} {area.y+1 i}}
  name Rectangle1
  selected true
  xpos -469
  ypos -223
 }
 Output {
  name Output1
  xpos -469
  ypos -125
 }
end_group
Transform {
 translate {36 0}
 center {1052 592}
 shutteroffset centred
 name Transform1
 selected true
 xpos -883
 ypos -523
}
set C48d17580 [stack 0]
Transform {
 translate {0 -11}
 rotate -34
 center {1052 592}
 shutteroffset centred
 name Transform2
 selected true
 xpos -883
 ypos -497
}
set C4489ddc0 [stack 0]
Transform {
 scale 1.36
 center {1052 592}
 shutteroffset centred
 name Transform3
 selected true
 xpos -883
 ypos -471
}
set C4d2c2290 [stack 0]
Group {
 name OUT_POSITION
 selected true
 xpos -883
 ypos -409
 addUserKnob {20 User}
 addUserKnob {12 out_position}
 out_position {{"(input.bbox.x + input.bbox.r) / 2"} {"(input.bbox.y +
input.bbox.t) / 2"}}
}
 Input {
  inputs 0
  name Input1
  selected true
  xpos -469
  ypos -265
 }
 Output {
  name Output1
  xpos -469
  ypos -125
 }
end_group
CheckerBoard2 {
 inputs 0
 name CheckerBoard2
 selected true
 xpos -563
 ypos -623
}
clone $C48d17580 {
 xpos -563
 ypos -521
 selected true
}
clone $C4489ddc0 {
 xpos -563
 ypos -495
 selected true
}
clone $C4d2c2290 {
 xpos -563
 ypos -469
 selected true
}


Cheers,
Ivan






On Tue, Sep 6, 2011 at 6:09 PM, J Bills  wrote:

> sure - looks even cleaner than the ramps crap done from memory - actually,
> now that I look at it for some reason it looks like nuke gives a rounding
> error using that setup (far values are .99902 instead of 1.0).  probably
> negligible but I like 1.0 betta.  nice one AK.
>
> so play around with this, joshua -
>
>
> set cut_paste_input [stack 0]
> version 6.2 v4
>
> Constant {
>  inputs 0
>  channels rgb
>  name Constant2
>  selected true
>  xpos 184
>  ypos -174
>
> }
> Expression {
>  expr0 x/(width-1)
>  expr1 y/(height-1)
>  name Expression2
>  selected true
>  xpos 184
>  ypos -71
>
> }
> NoOp {
>  name WARP_GOES_HERE
>  tile_color 0xff00ff
>  selected true
>  xpos 184
>  ypos 11
>
> }
> Shuffle {
>  out motion
>  name Shuffle
>  label "choose motion\nor other output\nchannel"
>  selected true
>  xpos 184
>  ypos 83
>
> }
> push 0
> STMap {
>  inputs 2
>  channels motion
>  name STMap1
>  selected true
>  xpos 307
>  ypos 209
>
> }
>
>
>
>
>
> On Tue, Sep 6, 2011 at 5:23 PM, Anthony Kramer 
> wrote:
>
>> Heres a 1-node UVmap for you:
>>
>> set cut_paste_input [stack 0]
>> version 6.3 v2
>> push $cut_paste_input
>> Expression {
>>  expr0 x/(width-1)
>>  expr1 y/(height-1)
>>  name Expression2
>>  selected true
>>  xpos -480
>>  ypos 2079
>> }
>>
>>
>>
>> On Tue, Sep 6, 2011 at 4:46 PM, J Bills  wrote:
>>
>>> sure - that's what he's saying.  think of the uv map as creating a
>>> "blueprint" of your transforms or distortions.
>>>
>>> after you have that blueprint, you can run whatever you want through the
>>> same distortion and repu

Re: [Nuke-users] nuke renders and server loads

2011-09-06 Thread Ivan Busquets
Or, to go even further and remove any differences between multi-channel vs
non multi-channel exrs, have a look at this script instead (attached)

Even when you're reading in the same multi-channel exr, my experience is
that shuffling out to rgba and doing merges in rgba only uses less resources
than picking the channels in the merges.
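
A quick back-of-the-envelope sketch of why multi-channel exrs cost more to read (plain Python; the resolution and channel counts are made-up assumptions, and zip compression is ignored): since exr interleaves channels within each scanline, decoding any channel of a scanline means stepping through all of them.

```python
# Rough read-cost arithmetic (illustrative numbers; ignores compression).
width, height = 2048, 1556   # assumed 2K resolution
bytes_per_sample = 2         # half float
total_channels = 40          # channels in the multi-channel exr
used_channels = 6            # e.g. rgba plus two mattes

per_channel_scanline = width * bytes_per_sample

# Multi-channel file: channels are interleaved per scanline, so reading
# any of them touches the whole scanline block.
multi_read = height * per_channel_scanline * total_channels

# One file per layer: only the files you actually use are touched at all.
separate_read = height * per_channel_scanline * used_channels

ratio = multi_read / separate_read
assert abs(ratio - total_channels / used_channels) < 1e-9  # proportional to channel count
```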



On Tue, Sep 6, 2011 at 6:37 PM, Ivan Busquets wrote:

> Sure, I understand what you're saying.
> The example is only bundled that way because I didn't want to send a huge
> multi-channel exr file.
> But if you were to write out each generator to a file, plus a multi-channel
> exr at the end of all the Copy nodes, and then redo those trees with actual
> inputs, the results are pretty much the same.
>
> At least, that's what I used in my original test.
>
> Sorry the example was half-baked. Does that make sense?
>
>
>
> On Tue, Sep 6, 2011 at 6:32 PM, Deke Kincaid wrote:
>
>> Hi Ivan
>>
>> The thing is, in the slower one (in red) in your example, you're first
>> copying/shuffling everything to another channel before merging them from
>> their respective channels.  In the fast one there isn't any shuffling around
>> of channels first.  You're going in the opposite direction (shuffling to
>> other channels instead of to rgba).  The act of actually moving channels
>> around is what causes the hit, no matter which direction you're going.
>>
>> To make the test equal you would need to use generators that allow you to
>> create in a specific channel.  The Checkerboard and Colorbars in your
>> example don't have this ability.
>>
>>  -deke
>>
>> On Mon, Sep 5, 2011 at 23:06, Ivan Busquets wrote:
>>
>>> Hi,
>>>
>>> Found the script I sent a while back as an example of picking layers in
>>> merges using up more resources.
>>> Just tried it in 6.3, and I still get similar results.
>>>
>>> Script attached for reference. Try viewing/rendering each of the two
>>> groups while keeping an eye on memory usage of your Nuke process.
>>>
>>> Cheers,
>>> Ivan
>>>
>>>
>>>
>>> On Mon, Sep 5, 2011 at 11:42 AM, Ivan Busquets 
>>> wrote:
>>>
>>>>> Another thing is it sounds like you are shuffling out the channels to
>>>>> the rgb before you merge them.  This also takes quite a hit in speed.  It
>>>>> is far faster to merge and pick the channels you need rather than
>>>>> shuffling them out first.
>>>>>
>>>>
>>>> That's interesting. My experience has usually been quite the opposite. I
>>>> find the same operations done in Merges after shuffling to rgb are faster,
>>>> and definitely use less resources, than picking the relevant layers inside
>>>> the Merge nodes.
>>>>
>>>> Back in v5, I sent a script to support as an example of this behavior,
>>>> more specifically how using layers within the Merge nodes caused memory
>>>> usage to go through the roof (and not respect the memory limit in the
>>>> preferences). At the time, this was logged as a memory leak bug. I don't
>>>> think this was ever resolved, but to be fair this is probably less of an
>>>> issue nowadays with higher-specced workstations.
>>>>
>>>> Hearing that you find it faster to pick layers in a merge node than
>>>> shuffling & merging makes me very curious, though. I wonder if, given 
>>>> enough
>>>> memory (so it's not depleted by the mentioned leak/overhead), some scripts
>>>> may indeed run faster that way. Do you have any examples?
>>>>
>>>> And going back to the original topic, my experience with multi-channel
>>>> exr files is:
>>>>
>>>> - Separate exr sequences for each aov/layer is faster than a single
>>>> multi-channel exr, yes. As you mentioned, exr stores additional
>>>> channels/layers in an interleaved fashion, so the reader has to step 
>>>> through
>>>> all of them before going to the next scanline, even if you're not using 
>>>> them
>>>> all. Even if you read each layer separately and copy them all into layers 
>>>> in
>>>> your script (so you get the equivalent of a multi-channel exr), this is
>>>> still faster than using a multi-channel exr file.
>>>>
>>>> - When merging different layers coming from the same stream, I find
>>>> performance to be better when shuffling lay

Re: [Nuke-users] nuke renders and server loads

2011-09-06 Thread Ivan Busquets
Sure, I understand what you're saying.
The example is only bundled that way because I didn't want to send a huge
multi-channel exr file.
But if you were to write out each generator to a file, plus a multi-channel
exr at the end of all the Copy nodes, and then redo those trees with actual
inputs, the results are pretty much the same.

At least, that's what I used in my original test.

Sorry the example was half-baked. Does that make sense?


On Tue, Sep 6, 2011 at 6:32 PM, Deke Kincaid  wrote:

> Hi Ivan
>
> The thing is, in the slower one (in red) in your example, you're first
> copying/shuffling everything to another channel before merging them from
> their respective channels.  In the fast one there isn't any shuffling around
> of channels first.  You're going in the opposite direction (shuffling to
> other channels instead of to rgba).  The act of actually moving channels
> around is what causes the hit, no matter which direction you're going.
>
> To make the test equal you would need to use generators that allow you to
> create in a specific channel.  The Checkerboard and Colorbars in your
> example don't have this ability.
>
> -deke
>
> On Mon, Sep 5, 2011 at 23:06, Ivan Busquets wrote:
>
>> Hi,
>>
>> Found the script I sent a while back as an example of picking layers in
>> merges using up more resources.
>> Just tried it in 6.3, and I still get similar results.
>>
>> Script attached for reference. Try viewing/rendering each of the two
>> groups while keeping an eye on memory usage of your Nuke process.
>>
>> Cheers,
>> Ivan
>>
>>
>>
>> On Mon, Sep 5, 2011 at 11:42 AM, Ivan Busquets wrote:
>>
>>>> Another thing is it sounds like you are shuffling out the channels to the
>>>> rgb before you merge them.  This also takes quite a hit in speed.  It is
>>>> far faster to merge and pick the channels you need rather than shuffling
>>>> them out first.
>>>>
>>>
>>> That's interesting. My experience has usually been quite the opposite. I
>>> find the same operations done in Merges after shuffling to rgb are faster,
>>> and definitely use less resources, than picking the relevant layers inside
>>> the Merge nodes.
>>>
>>> Back in v5, I sent a script to support as an example of this behavior,
>>> more specifically how using layers within the Merge nodes caused memory
>>> usage to go through the roof (and not respect the memory limit in the
>>> preferences). At the time, this was logged as a memory leak bug. I don't
>>> think this was ever resolved, but to be fair this is probably less of an
>>> issue nowadays with higher-specced workstations.
>>>
>>> Hearing that you find it faster to pick layers in a merge node than
>>> shuffling & merging makes me very curious, though. I wonder if, given enough
>>> memory (so it's not depleted by the mentioned leak/overhead), some scripts
>>> may indeed run faster that way. Do you have any examples?
>>>
>>> And going back to the original topic, my experience with multi-channel
>>> exr files is:
>>>
>>> - Separate exr sequences for each aov/layer is faster than a single
>>> multi-channel exr, yes. As you mentioned, exr stores additional
>>> channels/layers in an interleaved fashion, so the reader has to step through
>>> all of them before going to the next scanline, even if you're not using them
>>> all. Even if you read each layer separately and copy them all into layers in
>>> your script (so you get the equivalent of a multi-channel exr), this is
>>> still faster than using a multi-channel exr file.
>>>
>>> - When merging different layers coming from the same stream, I find
>>> performance to be better when shuffling layers to rgba and keeping merges to
>>> operate on rgba. (although this is the opposite of what Deke said, so your
>>> mileage may vary)
>>>
>>> Cheers,
>>> Ivan
>>>
>>> On Thu, Sep 1, 2011 at 1:55 PM, Deke Kincaid wrote:
>>>
>>>> Exr files are interleaved.  So when you look at some scanlines, you need
>>>> to read in every single channel in the EXR from those scanlines even if you
>>>> only need one of them.  So if you have a multichannel file with 40 channels
>>>> but you only use rgba and one or two matte channels, then you're going to
>>>> incur a large hit.
>>>>
>>>> Another thing is it sounds like you are shuffling out the channels to
>>>> the rgb before you merge them.  T

Re: [Nuke-users] nuke renders and server loads

2011-09-05 Thread Ivan Busquets
Hi,

Found the script I sent a while back as an example of picking layers in
merges using up more resources.
Just tried it in 6.3, and I still get similar results.

Script attached for reference. Try viewing/rendering each of the two groups
while keeping an eye on memory usage of your Nuke process.

Cheers,
Ivan


On Mon, Sep 5, 2011 at 11:42 AM, Ivan Busquets wrote:

>> Another thing is it sounds like you are shuffling out the channels to the
>> rgb before you merge them.  This also takes quite a hit in speed.  It is far
>> faster to merge and pick the channels you need rather than shuffling them
>> out first.
>>
>
> That's interesting. My experience has usually been quite the opposite. I
> find the same operations done in Merges after shuffling to rgb are faster,
> and definitely use less resources, than picking the relevant layers inside
> the Merge nodes.
>
> Back in v5, I sent a script to support as an example of this behavior, more
> specifically how using layers within the Merge nodes caused memory usage to
> go through the roof (and not respect the memory limit in the preferences).
> At the time, this was logged as a memory leak bug. I don't think this was
> ever resolved, but to be fair this is probably less of an issue nowadays
> with higher-specced workstations.
>
> Hearing that you find it faster to pick layers in a merge node than
> shuffling & merging makes me very curious, though. I wonder if, given enough
> memory (so it's not depleted by the mentioned leak/overhead), some scripts
> may indeed run faster that way. Do you have any examples?
>
> And going back to the original topic, my experience with multi-channel exr
> files is:
>
> - Separate exr sequences for each aov/layer is faster than a single
> multi-channel exr, yes. As you mentioned, exr stores additional
> channels/layers in an interleaved fashion, so the reader has to step through
> all of them before going to the next scanline, even if you're not using them
> all. Even if you read each layer separately and copy them all into layers in
> your script (so you get the equivalent of a multi-channel exr), this is
> still faster than using a multi-channel exr file.
>
> - When merging different layers coming from the same stream, I find
> performance to be better when shuffling layers to rgba and keeping merges to
> operate on rgba. (although this is the opposite of what Deke said, so your
> mileage may vary)
>
> Cheers,
> Ivan
>
> On Thu, Sep 1, 2011 at 1:55 PM, Deke Kincaid wrote:
>
>> Exr files are interleaved.  So when you look at some scanlines, you need
>> to read in every single channel in the EXR from those scanlines even if you
>> only need one of them.  So if you have a multichannel file with 40 channels
>> but you only use rgba and one or two matte channels, then you're going to
>> incur a large hit.
>>
>> Another thing is it sounds like you are shuffling out the channels to the
>> rgb before you merge them.  This also takes quite a hit in speed.  It is far
>> faster to merge and pick the channels you need rather than shuffling them
>> out first.
>>
>> -deke
>>
>> On Thu, Sep 1, 2011 at 12:37, Ryan O'Phelan wrote:
>>
>>> Recently I've been trying to evaluate the load of nuke renders on our
>>> file server, and ran a few tests comparing multichannel vs. non-multichannel
>>> reads, and my initial test results were opposite of what I was expecting.
>>> My tests showed that multichannel comps rendered about 20-25% slower, and
>>> made about 25% more load on the server in terms of disk reads. I was
>>> expecting the opposite, since there are fewer files being called with
>>> multichannel reads.
>>>
>>> For what it's worth, all reads were zip1 compressed EXRs and I tested
>>> real comps, as well as extremely simplified comps where the multichannel
>>> files were branched and then fed into a contact sheet. I was monitoring
>>> performance with the performance monitor on the file server, using only 20
>>> nodes and with almost nobody else on the server.
>>>
>>> Can anyone explain this? Or am I wrong and need to redo these tests?
>>>
>>> Thanks,
>>> Ryan
>>>
>>>
>>>
>>> ___
>>> Nuke-users mailing list
>>> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
>>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>>>
>>
>>
>>
>
>


layers_vs_shuffles.nk
Description: Binary data
