Re: Open source json camera I/O platform

2013-07-10 Thread Sandy Sutherland
Gene - this would be of huge interest I think - at every studio I have
been at we have needed to do this, and have always fudged it - never had the
time to even start looking at creating a tool!


So - a big yes from me!

S.

On 2013/07/10 5:09 AM, Gene Crucean wrote:

Hey folks,

Who's in the mood for some open-source camera I/O code? I'm kind of 
getting bummed out on having to write the same camera tools at every 
studio just to get a simple, lightweight and, most importantly, reliable 
camera pushed around from app to app. FBX does *not* cut it, not to 
mention it's not available for all apps that could make use of it. So 
I thought I would whip up a spec based on JSON and offer it up open 
source, so anyone willing to donate some time could create some simple 
tools to import/export for their favorite app. The spec is VERY 
lightweight and doesn't include some things that I'm sure someone will 
want... but please let me know your thoughts.


I already have a Softimage plugin working (consider it alpha). At this 
point it only has minor sanity checking and logic tests and I'm sure 
there are a zillion ways to improve it. But it's a start. The goal is 
to at least have plugins that all read and write the same spec from 
Houdini, Softimage, Maya, Max, Blender, Nuke... and more. I've built 
the Soft one using PyQt and it would be nice to maintain some 
consistency between apps, so I'm hopeful that the other versions could 
be based off of the same .ui file.
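
As a purely hypothetical illustration of what such a payload could look like (every field name below is invented for this sketch; the actual SPEC.txt in the repo is authoritative), the whole thing fits in Python's standard json module, which every DCC with a Python interpreter already ships:

```python
import json

# Hypothetical json-cam payload. Field names here are illustrative
# guesses only; the real spec is defined in SPEC.txt in the repo.
camera = {
    "name": "shotCam",
    "units": "cm",
    "focal_length_mm": 35.0,
    "film_aperture_mm": [36.0, 24.0],   # horizontal, vertical
    "frames": {
        "1001": {"translate": [0.0, 150.0, 400.0],
                 "rotate": [0.0, 0.0, 0.0]},
    },
}

def save_camera(path, cam):
    """Write a camera dict to disk as pretty-printed JSON."""
    with open(path, "w") as f:
        json.dump(cam, f, indent=2, sort_keys=True)

def load_camera(path):
    """Read a camera dict back from disk."""
    with open(path) as f:
        return json.load(f)
```

Since it's plain JSON, no per-app serialization code is needed; each plugin only has to map these values onto its own camera object.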


What do you guys think? Any interest in this? I know it's a simple 
thing but I'm sure a lot of you also write these tools at studios 
quite a bit too and could possibly be into something like this.


Check out the spec and source, and if you have time, play with the 
Soft plugin here: https://bitbucket.org/crewshin/json-cam



If you have completely zero interest in this... no worries. Thanks for 
looking.



--
-Gene
www.genecrucean.com http://www.genecrucean.com




Re: 2014 weirdness

2013-07-10 Thread Matt Morris
I've had the scene explorer issues in 2012, mainly with larger scenes: get
over a certain size and it starts to misbehave. Not the ref models issue
though.



On 9 July 2013 18:34, Matt Lind ml...@carbinestudios.com wrote:

 I have a suspicion this is a 2012 bug that you’ve imported into 2014
 because I have not had any of these issues in 2013 or 2014 testing, but
 have seen them in 2010-2012.  I do remember weird hidden gremlins living in
 2012 files which is partly why we skipped that version.

 Try making 2014 content from scratch and see if the problem persists.

 Matt

 *From:* softimage-boun...@listproc.autodesk.com [mailto:
 softimage-boun...@listproc.autodesk.com] *On Behalf Of *Tim Crowson
 *Sent:* Tuesday, July 09, 2013 8:46 AM
 *To:* softimage@listproc.autodesk.com
 *Subject:* 2014 weirdness

 I don't know what's going on with Soft, but both 2014 and 2014 SP1 are
 giving me some funky behavior...

 - Removing referenced models from the scene, either by deletion or
 resolution change, takes ages. By ages I mean 1-10 minutes depending on the
 model. They used to offload in a snap.
 - On several of our scenes, the PlayControl.GlobalIn and
 PlayControl.GlobalOut values just don't stick. In fact, the 'Out' value
 will increase every time I open a scene.
 - Sometimes render regions just don't do anything. It's like I draw a
 marquee on the screen and nothing happens. XSI doesn't start processing
 anything at all. Just sits there. I have to restart XSI to get my render
 region to do anything. This problem is renderer-agnostic.
 - My Scene Explorer frequently refuses to refresh. I'll have
 to hit F5 to manually refresh it. This happens a lot when simply selecting
 items in the explorer and renaming them. I can select them, but I don't see
 them as being selected. But they are selected in the viewport. I can then
 hit F2 to rename them, but it's not until I actually hit F5 that my
 explorer shows the item as selected and renamed.

 None of this happened in previous versions (although we did skip 2013).
 Anyone else have similar experiences?

 -- 

  

 *Tim Crowson
 **Lead CG Artist*

 *Magnetic Dreams, Inc.
 *2525 Lebanon Pike, Building C. Nashville, TN 37214
 *Ph*  615.885.6801 | *Fax*  615.889.4768 | www.magneticdreams.com
 tim.crow...@magneticdreams.com





-- 
www.matinai.com


clara.io

2013-07-10 Thread Stefan Kubicek

I'm a bit surprised this hasn't been posted here yet, I hope I'm not spoiling 
anything, but post date is 8th of July, so...
http://exocortex.com/blog/introducing_claraio

--
-
  Stefan Kubicek   ste...@keyvis.at
-
   keyvis digital imagery
  Alfred Feierfeilstraße 3
   A-2380 Perchtoldsdorf bei Wien
Phone:  +43 (0) 699 12614231
 www.keyvis.at
--   This email and its attachments are--
-- confidential and for the recipient only --



Re: ICE subdivide geo

2013-07-10 Thread Simon Reeves
Thanks Andreas that's a good idea!


Simon Reeves
London, UK
*si...@simonreeves.com*
*www.simonreeves.com*
*
*


On 10 July 2013 10:07, Andreas Böinghoff boeingh...@s-farm.de wrote:

  I'm sure there are more elegant ways, but you could do it quick and
 dirty: disconnect all polys, subdivide them and merge them again (see the
 picture). If your meshes get bigger this is not an option, because it
 gets slow.

 Andreas




 On 7/8/2013 3:38 PM, Simon Reeves wrote:

 Does anyone know if it's possible to subdivide geo in ICE in the
 non-smoothing form, i.e. tessellating...

 like when you right-click on selected polys and choose 'Subdivide Polygons'
 rather than 'Local Subdivision refinement'



 Simon Reeves
 London, UK
 *si...@simonreeves.com*
 *www.simonreeves.com*
 *
 *






Re: clara.io

2013-07-10 Thread Tim Leydecker

This Clara.io example pushes me to point out that it would still
be nice to be sure that every workfile or piece of content stays inside the
private IP address space at any given time. At all times.

Personally, I simply don't like the idea of effectively handing over
absolute control over my intellectual property to any kind of remote
or even unknown entity/authority that may have completely divergent
interests to my own, without even needing to state so in advance.

Clouds are happily described as giving you that air of freedom, but
handing over data to a cloud effectively just means handing over your
data to someone else.

I don't see why I would want to do that.

It would be nice if Clara.io could be made to run self-contained in a private
network, without any strings attached.

Cheers,

tim







On 10.07.2013 11:32, Rob Wuijster wrote:

I think most of us got the email ;-)

But all these new (internet-based) tools make it very interesting for what 
lies ahead.


Rob

\/-\/\/

On 10-7-2013 11:26, Stefan Kubicek wrote:

I'm a bit surprised this hasn't been posted here yet, I hope I'm not spoiling 
anything, but post date is 8th of July, so...
http://exocortex.com/blog/introducing_claraio







Re: clara.io

2013-07-10 Thread Eugen Sares

+1
No offence, but I also don't get too excited about the idea of 
my/customers' data sitting in some unknown place with unknown people 
having potential control over it.
Besides, it's risky to rely on a working internet connection all the 
time for work. There are too many things that can go wrong, like in all 
complex systems.
How about troubleshooting/workarounds if something hangs? And there can 
hang a lot in any complex 3d application.
What if you forget to pay your bills? What about being forced into 
whatever upgrades? Will you be cut off from the chance to continue working?


This does not mean the Exocortex guys are not idealistic, but you cannot 
be sure what will happen in the more distant future, once you have 
settled comfortably into that system and become dependent on it.


Cloud is evil... it means total control. It might sound like a clever 
business idea to managers, but I for my part dislike it.


My kind of old-schoolish opinion...











Re: clara.io

2013-07-10 Thread Angus Davidson
While I am also very careful about the cloud, I am excited about the
potential. My main worry is ease of use. For example, while I am very
impressed with what Lagoa can do, it's totally unusable on our internet
speeds. 

There needs to be some middle ground to make them far more practical.
Whether it's having an option to download the asset libraries locally or
something else, you cannot have an application check back with the server
nearly every time you perform an action.

If you are going to host something on the cloud, it can, say, be checked at
start-up etc. There are opportunities, for example, for much better handling
of licences. I am sorry, there is no reason why we should have to screw
around with licence managers (which all tend to hate any other licence
manager) when it is totally possible to either have it check based on MAC
address, IP, or even a login.

There are so many things that the cloud could be useful for without having
to do everything there.

Also I find it very doubtful that companies that deal a lot with
brands etc. will be happy with their stuff being held in a place that is
not under their total control.

That being said, one area where having the whole thing in the cloud can
be very useful is education. For a student to be able to work at
home and at school, not worrying about whether they are on the right OS
or using the right version (looking at you, ADSK ;), would be a massive
win. For instructors to be able to collaborate in a meaningful way: again,
pure awesomeness.

I am somewhat caught between geekish optimism and old-timer practicality.

Angus













Re: clara.io

2013-07-10 Thread Stefan Kubicek

+1 on pretty much all arguments about privacy - especially some advertising 
agencies can be totally anal about security; they are almost bound to disallow 
cloud-based storage of data.
The same probably goes for film work. Allowing installation on a dedicated 
server for total user control would be a big plus here.

As for problems with low internet bandwidth in certain locations, I think this 
is where time is working for them.

What I really like is the collaborative potential - working on the same scene 
with others simultaneously can be an interesting design tool.













Re: ICE subdivide geo

2013-07-10 Thread Matthew Graves
Good solution. I think you need to get VertexIndex for the disconnect and
PolygonIndex for the subdivision.









Re: ICE subdivide geo

2013-07-10 Thread Andreas Böinghoff
Or you change the Component Type in the Disconnect Component 
Compound to Polygon ;-)












Re: ICE subdivide geo

2013-07-10 Thread Matthew Graves
My bad, I didn't even look in the Disconnect Component PPG when I set up my
test scene. Thanks, this is something that might help me soon.












RE: clara.io

2013-07-10 Thread Marc-Andre Carbonneau
What I really like is the collaborative potential - working on the same scene 
with others simultaneously can be an interesting design tool.

I understand the power, but I wonder how they solve the problem. How does it 
work? I guess it's some sort of referencing pipeline where you can put a model 
locally inside the scene, modify it and save, while the other person only sees 
the reference up until you decide to publish a new version?







Re: Open source json camera I/O platform

2013-07-10 Thread Michael Heberlein
Sounds like a good idea. There are a few things I would like to add:

I'd prefer one common class/module for file IO and all the necessary
conversions (millimeters <-> inches, focal length <-> horizontal view
angle, picture ratio <-> film aperture, etc.) so each application module
can derive from it and stay as lightweight as possible.
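
The conversions listed above are standard lens geometry; a shared module could centralize them along these lines (an illustrative sketch, not code from the repo):

```python
import math

MM_PER_INCH = 25.4

def mm_to_inches(mm):
    """Millimeters to inches."""
    return mm / MM_PER_INCH

def horizontal_fov_deg(focal_length_mm, h_aperture_mm):
    """Horizontal view angle in degrees, from focal length and
    horizontal film aperture (both in millimeters):
    fov = 2 * atan(aperture / (2 * focal))."""
    return math.degrees(2.0 * math.atan(h_aperture_mm / (2.0 * focal_length_mm)))

def picture_ratio(h_aperture_mm, v_aperture_mm):
    """Picture (aspect) ratio from the film back dimensions."""
    return h_aperture_mm / v_aperture_mm
```

Per-app modules would then only translate their native camera parameters into this one canonical set.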

Another module could handle the dialogs but if they're optional, it's
easier to integrate the importer/exporter into a scripted pipeline or
toolbars. Plugins could just use the correct application module but be
separate files again so people don't have to use them.

Also, to make it not just another almost-useful tool, don't forget less
common properties like optical center shift etc. required by stereo setups,
for example. As you already wrote in SPEC.txt, all expected units have to
be defined.

Michael








Re: Open source json camera I/O platform

2013-07-10 Thread Cesar Saez
Sounds great!

Doesn't the WebGL world already have specs for JSON files? Perhaps we can
work with that and keep it consistent.


Re: clara.io

2013-07-10 Thread Alan Fregtman
If it's anything like in Lagoa (http://home.lagoa.com/), then the scene is
up-to-date at all times in the browser, but more importantly, on the server.
The server sends the latest state on first open, then streams little changes
asynchronously as they happen.

Think of it like a chatroom where when you enter the room you're given the
chatlog of all conversation up to that point, and then you watch things
happen. It's pretty similar.
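
The chatroom analogy above can be sketched as an append-only change log that late joiners replay (a toy model for illustration, not Clara.io's or Lagoa's actual protocol):

```python
class SharedScene:
    """Toy model of log-based collaboration: the server keeps an
    append-only log of changes; a client catches up by replaying
    the log (the 'chatlog'), then applies new changes as they
    stream in."""

    def __init__(self):
        self.log = []  # ordered list of (key, value) changes

    def apply(self, key, value):
        """Record one change, e.g. a parameter edit."""
        self.log.append((key, value))

    def snapshot(self, upto=None):
        """Replay the log to rebuild current state; `upto` lets you
        view the scene as it was after the first N changes."""
        state = {}
        for key, value in (self.log if upto is None else self.log[:upto]):
            state[key] = value
        return state

# One artist moves a camera, another tweaks a light; a client
# joining now replays the whole log and sees both edits.
scene = SharedScene()
scene.apply("camera.tx", 10.0)
scene.apply("light.intensity", 0.8)
scene.apply("camera.tx", 12.5)  # later change wins on replay
```

Keeping every change also gives you history for free: replaying a prefix of the log reconstructs any older version.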








Re: clara.io

2013-07-10 Thread Ben Houston
Tim wrote:

 It would be nice if Claraio is made to run self-contained in a private network
 without any strings attached.

It can work this way. :-)  I have a Clara.io server running on my PC
right now.  Exocortex is open to source licensing of Clara.io for
custom deployments inside of studios, just like we do source licensing
of Exocortex Crate.

But really you get a lot of benefits by allowing others to maintain
the infrastructure for you and also to make use of cloud computing on
demand for things like rendering.

-- 
Best regards,
Ben Houston
Voice: 613-762-4113 Skype: ben.exocortex Twitter: @exocortexcom
http://Exocortex.com - Passionate CG Software Professionals.


Re: clara.io

2013-07-10 Thread Ben Houston
Marc-Andre Carbonneau wrote:
 I understand the power but I wonder how they solve the problem, how does it 
 work? I guess it's some sort of referencing pipeline where you can put a 
 model local inside the scene, modify it and save. While the other person only 
 sees the reference up to when you decide to publish a new version?

Have you used Google Docs, where two people can type at the same time
and you see incremental changes nearly immediately?  That is how
Clara.io works.  But you will also be able to use referenced models to
control who works on what, to avoid the chaos that would come from 20
people working on the same scene.  We will allow for referencing via
tags as well, but I can't really go into the details of future planned
features too much.

-- 
Best regards,
Ben Houston
Voice: 613-762-4113 Skype: ben.exocortex Twitter: @exocortexcom
http://Exocortex.com - Passionate CG Software Professionals.



Re: clara.io

2013-07-10 Thread Ben Houston
Eugen Sares wrote:
 I also don't get too excited about the idea of my/customers
 data pending in some unknown place with unknown people having potential
 control over it.

That is understandable.  I do think that cloud-services such as Gmail,
Google Docs and Dropbox have gained a lot of popularity in a lot of
diverse sectors.  I think that if we can prove that we are reliable,
secure and useful, we can get converts.

 How about troubleshooting/workarounds if something hangs?

You will be able to go back to older versions, as we save every single
change and allow you to go back to whatever point in history you want.

 What if you forget to pay your bills? What about being forced for whatever
 upgrades?

Clara.io will be based on a subscription model like GitHub and Dropbox,
and it will be priced pretty aggressively; our current idea is to start
at $10 a month, although if you do a lot of rendering we will have to
charge more, as it costs us a lot more.

 Will you get cut off the chance to continue work?

We can handle intermittent internet connections as long as the
connection comes back before you close your browser window.  We show
you when the connection is down.

 Cloud is evil... it means total control. It might sound like a clever
 business idea to managers, but I for my part dislike it.

We are not looking for control, seriously we aren't.  We are looking
to make things simple, easy and cost effective.  Really I want to make
life easier for artists and software developers, that is all.

-- 
Best regards,
Ben Houston
Voice: 613-762-4113 Skype: ben.exocortex Twitter: @exocortexcom
http://Exocortex.com - Passionate CG Software Professionals.


image clip

2013-07-10 Thread Alastair Hearsum

Here's a funny one:

I have a bunch of particles representing people in a crowd.
I set an integer attribute called clip_frame on these particles.
I have a hundred image clips and I set the time source of the clip to 
be the attribute that I set in the ICE tree rather than scene_time.


When all these numbers are the same, MR renders as you'd expect straight 
off the bat pretty quickly. When these are all different numbers I get a 
helluva slowdown pre-render. It takes an absolute age to get its act 
together before it starts rendering.


Any ideas?

Thanks


Alastair


--
Alastair Hearsum
Head of 3d
GLASSWORKS
33/34 Great Pulteney Street
London
W1F 9NP
+44 (0)20 7434 1182
glassworks.co.uk http://www.glassworks.co.uk/
Glassworks Terms and Conditions of Sale can be found at glassworks.co.uk
(Company registered in England with number 04759979. Registered office 
25 Harley Street, London, W1G 9BR. VAT registration number: 86729)

Please consider the environment before you print this email.
DISCLAIMER: This e-mail and attachments are strictly privileged, private 
and confidential and are intended solely for the stated recipient(s). 
Any views or opinions presented are solely those of the author and do 
not necessarily represent those of the Company. If you are not the 
intended recipient, be advised that you have received this e-mail in 
error and that any use, dissemination, forwarding, printing, or copying 
of this e-mail is strictly prohibited. If this transmission is received 
in error please kindly return it to the sender and delete this message 
from your system.


Re: image clip

2013-07-10 Thread Jens Lindgren
If you're using Arnold you're better off asking on the SItoA list.

/Jens


On Wed, Jul 10, 2013 at 4:39 PM, Alastair Hearsum
hear...@glassworks.co.ukwrote:

  Here's a funny one:

 I have a bunch of particles representing people in a crowd.
 I set an integer attribute called *clip_frame* on these particles.
 I have a hundred image clips and I set the *time source* of the clip to be
 the attribute that I set in the ice tree rather than *scene_time*.

 When all these numbers are the same, MR renders as you'd expect straight
 off the bat pretty quickly. When these are all different numbers I get a
 helluva slowdown pre-render. It takes an absolute age to get its act
 together before it starts rendering.

 Any ideas?

 Thanks


 Alastair


 --
  Alastair Hearsum
  Head of 3d




-- 
Jens Lindgren
--
Lead Technical Director
Magoo 3D Studios http://www.magoo3dstudios.com/


Re: image clip

2013-07-10 Thread Jens Lindgren
I see now that you're talking about MR *facepalm*


On Wed, Jul 10, 2013 at 4:47 PM, Jens Lindgren
jens.lindgren@gmail.comwrote:

 If you're using Arnold you're better off asking on the SItoA list.

 /Jens


 On Wed, Jul 10, 2013 at 4:39 PM, Alastair Hearsum 
 hear...@glassworks.co.uk wrote:

  Here's a funny one:

 I have a bunch of particles representing people in a crowd.
 I set an integer attribute called *clip_frame* on these particles.
 I have a hundred image clips and I set the *time source* of the clip to be
 the attribute that I set in the ice tree rather than *scene_time*.

 When all these numbers are the same, MR renders as you'd expect straight
 off the bat pretty quickly. When these are all different numbers I get a
 helluva slowdown pre-render. It takes an absolute age to get its act
 together before it starts rendering.

 Any ideas?

 Thanks


 Alastair


 --
  Alastair Hearsum
  Head of 3d








-- 
Jens Lindgren
--
Lead Technical Director
Magoo 3D Studios http://www.magoo3dstudios.com/


Re: image clip

2013-07-10 Thread Michael Heberlein
If you have random time offsets for each particle, it will take more time
to load all the different files but maybe you can use time offset groups to
reduce the IO overhead. Just use a limited range of (non-animated) random
integers, scale the result and add it to the current time.
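
A rough Python sketch of that grouping idea (illustrative only; these are not ICE or mental ray API calls): quantizing offsets into a few groups bounds the number of distinct frames, and therefore distinct image files, the renderer has to touch.

```python
import random

def grouped_time_offsets(num_particles, num_groups=8, scale=5, seed=42):
    """Quantize per-particle time offsets into a few groups.

    A unique random offset per particle forces the renderer to load a
    different image file for nearly every particle; limiting the offsets
    to num_groups values bounds the number of distinct files. All names
    and parameters here are illustrative.
    """
    rng = random.Random(seed)  # fixed seed = non-animated, stable offsets
    return [rng.randrange(num_groups) * scale for _ in range(num_particles)]

def clip_frames(current_frame, offsets):
    # Integer frame fed to each particle's clip time source.
    return [current_frame + o for o in offsets]

offsets = grouped_time_offsets(1000)
frames = clip_frames(100, offsets)
# len(set(frames)) is at most 8, so at most 8 distinct files per clip.
```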


On Wed, Jul 10, 2013 at 4:49 PM, Jens Lindgren
jens.lindgren@gmail.comwrote:

 I see now that you're talking about MR *facepalm*


 On Wed, Jul 10, 2013 at 4:47 PM, Jens Lindgren 
 jens.lindgren@gmail.com wrote:

 If you're using Arnold you're better off asking on the SItoA list.

 /Jens


 On Wed, Jul 10, 2013 at 4:39 PM, Alastair Hearsum 
 hear...@glassworks.co.uk wrote:

  Here's a funny one:

 I have a bunch of particles representing people in a crowd.
 I set an integer attribute called *clip_frame* on these particles.
 I have a hundred image clips and I set the *time source* of the clip to
 be the attribute that I set in the ice tree rather than *scene_time*.

 When all these numbers are the same, MR renders as you'd expect straight
 off the bat pretty quickly. When these are all different numbers I get a
 helluva slowdown pre-render. It takes an absolute age to get its act
 together before it starts rendering.

 Any ideas?

 Thanks


 Alastair


 --
  Alastair Hearsum
  Head of 3d




 --
 Jens Lindgren
 --
 Lead Technical Director
 Magoo 3D Studios http://www.magoo3dstudios.com/







Re: Open source json camera I/O platform

2013-07-10 Thread Gene Crucean
Some good ideas Michael. I'll look into them today or tomorrow.

Thanks


On Wed, Jul 10, 2013 at 6:54 AM, Michael Heberlein
micheberl...@gmail.comwrote:

 Sounds like a good idea. There are a few things I would like to add:

 I'd prefer one common class/module for file IO and all the necessary
 conversions like millimeters -> inches, focal length -> horizontal view
 angle, picture ratio -> film aperture, etc. so each application module
 can derive from this and stay as lightweight as possible.

 Another module could handle the dialogs but if they're optional, it's
 easier to integrate the importer/exporter into a scripted pipeline or
 toolbars. Plugins could just use the correct application module but be
 separate files again so people don't have to use them.

 Also, to make it not just another almost-useful tool, don't forget less
 common properties like optical center shift etc. required by stereo setups,
 for example. As you already wrote in SPEC.txt, all expected units have to
 be defined.
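
 The conversions listed above could live in one shared module; a hedged sketch, where the function names are mine and the math is the standard pinhole/film-back relation, not anything taken from the posted spec:

```python
import math

MM_PER_INCH = 25.4

def mm_to_inches(mm):
    return mm / MM_PER_INCH

def focal_to_hfov(focal_mm, horiz_aperture_mm):
    """Horizontal view angle in degrees from focal length and film-back
    width (standard pinhole relation: hfov = 2*atan(aperture / (2*focal)))."""
    return math.degrees(2.0 * math.atan(horiz_aperture_mm / (2.0 * focal_mm)))

def hfov_to_focal(hfov_deg, horiz_aperture_mm):
    # Exact inverse of focal_to_hfov, so round-trips are lossless.
    return horiz_aperture_mm / (2.0 * math.tan(math.radians(hfov_deg) / 2.0))

# A 35 mm lens on a 36 mm-wide full-frame back is roughly a 54.4 degree hfov.
hfov = focal_to_hfov(35.0, 36.0)
```

 Each application module would then only translate between its own camera parameters and these canonical units.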

 Michael



 On Wed, Jul 10, 2013 at 8:53 AM, Sandy Sutherland 
 sandy.mailli...@gmail.com wrote:

  Gene - this would be of huge interest I think - every studio I have
 been at we have needed to do this, and have always fudged it - never had
 the time to even start looking at creating a tool!

 So - a big yes from me!

 S.


 On 2013/07/10 5:09 AM, Gene Crucean wrote:

  Hey folks,

  Who's in the mood for some open-source camera I/O code? I'm kind of
 getting bummed out on having to write the same camera tools at every studio
 just to get a simple, lightweight and most importantly reliable camera
 pushed around from app to app. FBX does *not* cut it, not to mention it's
 not available for all apps that could make use of it. So I thought I would
 whip up a spec based on json and offer it up open source so anyone willing
 to donate some time, could create some simple tools to import/export for
 their favorite app. The spec is VERY lightweight and doesn't include some
 things that I'm sure someone will want... but please let me know your
 thoughts.

  I already have a Softimage plugin working (consider it alpha). At this
 point it only has minor sanity checking and logic tests and I'm sure there
 are a zillion ways to improve it. But it's a start. The goal is to at least
 have plugins that all read and write the same spec from Houdini, Softimage,
 Maya, Max, Blender, Nuke... and more. I've built the Soft one using PyQt
 and it would be nice to maintain some consistency between apps, so I'm
 hopeful that the other versions could be based off of the same .ui file.

  What do you guys think? Any interest in this? I know it's a simple
 thing but I'm sure a lot of you also write these tools at studios quite a
 bit too and could possibly be into something like this.

  Check out the spec and source, and if you have time, play with the Soft
 plugin here: https://bitbucket.org/crewshin/json-cam


  If you have completely zero interest in this... no worries. Thanks for
 looking.


  --
 -Gene
 www.genecrucean.com
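
For illustration, a minimal camera file in the spirit of the description might look like the following. Every field name here is hypothetical; the authoritative schema is SPEC.txt in the json-cam repository.

```python
import json

# Hypothetical field names for illustration only; the real schema is
# defined in SPEC.txt in the json-cam repository.
camera = {
    "name": "shotCam",
    "units": {"length": "mm", "rotation": "degrees"},
    "focal_length": 35.0,
    "aperture": {"horizontal": 36.0, "vertical": 24.0},
    "frames": [
        {"frame": 1001, "translate": [0.0, 1.6, 5.0], "rotate": [0.0, 0.0, 0.0]},
        {"frame": 1002, "translate": [0.0, 1.6, 4.9], "rotate": [0.0, -0.5, 0.0]},
    ],
}

text = json.dumps(camera, indent=2)   # what an exporter would write to disk
parsed = json.loads(text)             # what any app's importer would read back
```

Since every DCC in the list ships a JSON parser (or a Python one), the import/export plugins reduce to mapping these keys onto each app's camera object.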






-- 
-Gene
www.genecrucean.com


Re: image clip

2013-07-10 Thread Alastair Hearsum
Thanks for that but I have a very specific sync number for each clip. I 
feed that into the ice tree as a string to array node, the string coming 
from a spreadsheet where I've analysed all the clips for sync points. 
For each different action that I want to sync I have a different list of 
numbers.



A

Alastair Hearsum
Head of 3d

On 10/07/2013 16:02, Michael Heberlein wrote:
If you have random time offsets for each particle, it will take more 
time to load all the different files but maybe you can use time offset 
groups to reduce the IO overhead. Just use a limited range of 
(non-animated) random integers, scale the result and add it to the 
current time.



On Wed, Jul 10, 2013 at 4:49 PM, Jens Lindgren 
jens.lindgren@gmail.com mailto:jens.lindgren@gmail.com wrote:


I see now that you're talking about MR *facepalm*


On Wed, Jul 10, 2013 at 4:47 PM, Jens Lindgren
jens.lindgren@gmail.com mailto:jens.lindgren@gmail.com
wrote:

If you're using Arnold you're better off asking on the SItoA list.
/Jens


On Wed, Jul 10, 2013 at 4:39 PM, Alastair Hearsum
hear...@glassworks.co.uk mailto:hear...@glassworks.co.uk
wrote:

Here's a funny one:

I have a bunch of particles representing people in a crowd.
I set an integer attribute called clip_frame on these
particles.
I have a hundred image clips and I set the time
source of the clip to be the attribute that I set
in the ICE tree rather than scene_time.

When all these numbers are the same, MR renders as you'd
expect straight off the bat pretty quickly. When these are
all different numbers I get a helluva slowdown pre-render.
It takes an absolute age to get its act together before it
starts rendering.

Any ideas?

Thanks


Alastair


-- 
Alastair Hearsum

Head of 3d




-- 
Jens Lindgren

--
Lead Technical Director
Magoo 3D Studios http://www.magoo3dstudios.com/










Re: image clip

2013-07-10 Thread Cristobal Infante
Most of the time it's a scalar value getting fed to the offset when
your instances want only integers...

If you're sure it's not that, maybe it's just the amount; have you tried
using stand-ins instead? This is where Arnold is king though ;).
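
A tiny sketch of that integer-vs-scalar point (hypothetical helper, not SItoA/ICE API): round scalar offsets to whole frames before they drive the clip's time source.

```python
def to_clip_frames(scalar_offsets):
    """Round scalar offsets to whole frames before they drive an image
    clip's time source (hypothetical helper, not an ICE/SItoA call)."""
    return [int(round(v)) for v in scalar_offsets]

frames = to_clip_frames([12.0, 7.3, 99.9])
# frames == [12, 7, 100]
```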





On 10 July 2013 16:40, Alastair Hearsum hear...@glassworks.co.uk wrote:

  Thanks for that but I have a very specific sync number for each clip. I
 feed that into the ice tree as a string to array node, the string coming
 from a spreadsheet where I've analysed all the clips for sync points. For
 each different action that I want to sync I have a different list of
 numbers.


 A


  Alastair Hearsum
  Head of 3d
  On 10/07/2013 16:02, Michael Heberlein wrote:

 If you have random time offsets for each particle, it will take more time
 to load all the different files but maybe you can use time offset groups to
 reduce the IO overhead. Just use a limited range of (non-animated) random
 integers, scale the result and add it to the current time.


 On Wed, Jul 10, 2013 at 4:49 PM, Jens Lindgren 
 jens.lindgren@gmail.com wrote:

  I see now that you're talking about MR *facepalm*


 On Wed, Jul 10, 2013 at 4:47 PM, Jens Lindgren 
 jens.lindgren@gmail.com wrote:

  If you're using Arnold you're better off asking on the SItoA list.

 /Jens


 On Wed, Jul 10, 2013 at 4:39 PM, Alastair Hearsum 
 hear...@glassworks.co.uk wrote:

  Here's a funny one:

 I have a bunch of particles representing people in a crowd.
 I set an integer attribute called *clip_frame* on these particles.
 I have a hundred image clips and I set the *time source* of the clip to
 be the attribute that I set in the ice tree rather than *scene_time*.

 When all these numbers are the same, MR renders as you'd expect
 straight off the bat pretty quickly. When these are all different numbers I
 get a helluva slowdown pre-render. It takes an absolute age to get its act
 together before it starts rendering.

 Any ideas?

 Thanks


 Alastair


 --
  Alastair Hearsum
  Head of 3d




  --
 Jens Lindgren
 --
 Lead Technical Director
 Magoo 3D Studios http://www.magoo3dstudios.com/










Re: Open source json camera I/O platform

2013-07-10 Thread Jordi Bares
I really love the idea; if we can help, surely that would be good… Houdini 
import/export.


Jordi Bares
jordiba...@gmail.com

On 10 Jul 2013, at 14:54, Michael Heberlein micheberl...@gmail.com wrote:

 Sounds like a good idea. There are a few things I would like to add:
 
 I'd prefer one common class/module for file IO and all the necessary 
 conversions like millimeters -> inches, focal length -> horizontal view 
 angle, picture ratio -> film aperture, etc. so each application module can 
 derive from this and stay as lightweight as possible.
 
 Another module could handle the dialogs but if they're optional, it's easier 
 to integrate the importer/exporter into a scripted pipeline or toolbars. 
 Plugins could just use the correct application module but be separate files 
 again so people don't have to use them.
 
 Also, to make it not just another almost-useful tool, don't forget less 
 common properties like optical center shift etc. required by stereo setups, 
 for example. As you already wrote in SPEC.txt, all expected units have to be 
 defined.
 
 Michael
 
 
 
 On Wed, Jul 10, 2013 at 8:53 AM, Sandy Sutherland sandy.mailli...@gmail.com 
 wrote:
 Gene - this would be of huge interest I think - every studio I have been at 
 we have needed to do this, and have always fudged it - never had the time to 
 even start looking at creating a tool!
 
 So - a big yes from me!
 
 S.
 
 
 On 2013/07/10 5:09 AM, Gene Crucean wrote:
 Hey folks,
 
 Who's in the mood for some open-source camera I/O code? I'm kind of getting 
 bummed out on having to write the same camera tools at every studio just to 
 get a simple, lightweight and most importantly reliable camera pushed around 
 from app to app. FBX does *not* cut it, not to mention it's not available 
 for all apps that could make use of it. So I thought I would whip up a spec 
 based on json and offer it up open source so anyone willing to donate some 
 time, could create some simple tools to import/export for their favorite 
 app. The spec is VERY lightweight and doesn't include some things that I'm 
 sure someone will want... but please let me know your thoughts.
 
 I already have a Softimage plugin working (consider it alpha). At this point 
 it only has minor sanity checking and logic tests and I'm sure there are a 
 zillion ways to improve it. But it's a start. The goal is to at least have 
 plugins that all read and write the same spec from Houdini, Softimage, Maya, 
 Max, Blender, Nuke... and more. I've built the Soft one using PyQt and it 
 would be nice to maintain some consistency between apps, so I'm hopeful that 
 the other versions could be based off of the same .ui file.
 
 What do you guys think? Any interest in this? I know it's a simple thing but 
 I'm sure a lot of you also write these tools at studios quite a bit too and 
 could possibly be into something like this.
 
 Check out the spec and source, and if you have time, play with the Soft 
 plugin here: https://bitbucket.org/crewshin/json-cam
 
 
 If you have completely zero interest in this... no worries. Thanks for 
 looking.
 
 
 -- 
 -Gene
 www.genecrucean.com
 
 



OT: Bruce Lee + Jhonnie Walker + The Mill = Change The Game

2013-07-10 Thread David Rivera
Hi. I thought maybe some of you had seen this:
http://www.youtube.com/watch?v=8_rsJvmGHpU

I´m just glad Softimage gets involved in the process of creating this legendary 
commercial.
Bests.

David R.


Re: image clip

2013-07-10 Thread Alastair Hearsum

Thanks, but I don't have instances, just an image clip per particle.




Alastair Hearsum
Head of 3d

On 10/07/2013 17:07, Cristobal Infante wrote:
Most of the time it's a scalar value getting fed to the offset 
when your instances want only integers...


If you're sure it's not that, maybe it's just the amount; have you 
tried using stand-ins instead? This is where Arnold is king though ;).





On 10 July 2013 16:40, Alastair Hearsum hear...@glassworks.co.uk 
mailto:hear...@glassworks.co.uk wrote:


Thanks for that but I have a very specific sync number for each
clip. I feed that into the ice tree as a string to array node, the
string coming from a spreadsheet where I've analysed all the clips
for sync points. For each different action that I want to sync I
have a different list of numbers.


A


Alastair Hearsum
Head of 3d
On 10/07/2013 16:02, Michael Heberlein wrote:

If you have random time offsets for each particle, it will take
more time to load all the different files but maybe you can use
time offset groups to reduce the IO overhead. Just use a limited
range of (non-animated) random integers, scale the result and add
it to the current time.


On Wed, Jul 10, 2013 at 4:49 PM, Jens Lindgren
jens.lindgren@gmail.com
mailto:jens.lindgren@gmail.com wrote:

I see now that you're talking about MR *facepalm*


On Wed, Jul 10, 2013 at 4:47 PM, Jens Lindgren
jens.lindgren@gmail.com
mailto:jens.lindgren@gmail.com wrote:

If you're using Arnold you're better off asking on the
SItoA list.
/Jens


On Wed, Jul 10, 2013 at 4:39 PM, Alastair Hearsum
hear...@glassworks.co.uk
mailto:hear...@glassworks.co.uk wrote:

Here's a funny one:

I have a bunch of particles representing people in a
crowd.
I set an integer attribute called clip_frame on
these particles.
I have a hundred image clips and I set the time
source of the clip to be the attribute that I set
in the ICE tree rather than scene_time.

When all these numbers are the same, MR renders as
you'd expect straight off the bat pretty quickly.
When these are all different numbers I get a helluva
slowdown pre-render. It takes an absolute age to get
its act together before it starts rendering.

Any ideas?

Thanks


Alastair


-- 
Alastair Hearsum

Head of 3d

Re: softimage and 3Dcoat Applink

2013-07-10 Thread David Rivera
Ive been using softimage with 3dcoat quite some time:
https://vimeo.com/15391838

No fuss. All stars.
David R.





 From: Tim Leydecker bauero...@gmx.de
To: softimage@listproc.autodesk.com 
Sent: Friday, June 28, 2013 3:26 AM
Subject: softimage and 3Dcoat Applink
 

Hi guys,


anyone using 3dcoat with softimage here?

How well do the voxel modeling tools work, would you
consider it comfortable to use 3dcoat to clean up raw
scan data (with holes) into solid volumes?

Does 3dcoat play nicely with softimage in general?

Anything you´d find important to point out?


Cheers,


tim

Re: OT: Bruce Lee + Jhonnie Walker + The Mill = Change The Game

2013-07-10 Thread Emilio Hernandez
Cool!

I hope that, with this amazing work, more studios will start taking
Softimage into consideration.


2013/7/10 David Rivera activemotionpictu...@yahoo.com

 Hi. I thought maybe some of you had seen this:
 http://www.youtube.com/watch?v=8_rsJvmGHpU

 I´m just glad Softimage gets involved in the process of creating this
 legendary commercial.
 Bests.

 David R.




--


Re: OT: Bruce Lee + Jhonnie Walker + The Mill = Change The Game

2013-07-10 Thread John Richard Sanchez
That's pretty cool. Where can I see the actual commercial?!


On Wed, Jul 10, 2013 at 1:19 PM, Emilio Hernandez emi...@e-roja.com wrote:

 Cool!

 I hope that, with this amazing work, more studios will start taking
 Softimage into consideration.


 2013/7/10 David Rivera activemotionpictu...@yahoo.com

 Hi. I thought maybe some of you had seen this:
 http://www.youtube.com/watch?v=8_rsJvmGHpU

 I´m just glad Softimage gets involved in the process of creating this
 legendary commercial.
 Bests.

 David R.




 --




-- 
www.johnrichardsanchez.com


Re: OT: Bruce Lee + Jhonnie Walker + The Mill = Change The Game

2013-07-10 Thread Andres Stephens
Ditto... and very cool facial rig setup! =) 



-Draise



From: John Richard Sanchez
Sent: ‎Wednesday‎, ‎July‎ ‎10‎, ‎2013 ‎1‎:‎08‎ ‎PM
To: XSI List to post


That's pretty cool. Where can I see the actual commercial?!





On Wed, Jul 10, 2013 at 1:19 PM, Emilio Hernandez emi...@e-roja.com wrote:



Cool!


I hope that, with this amazing work, more studios will start taking 
Softimage into consideration.






2013/7/10 David Rivera activemotionpictu...@yahoo.com




Hi. I thought maybe some of you had seen this:

http://www.youtube.com/watch?v=8_rsJvmGHpU




I´m just glad Softimage gets involved in the process of creating this legendary 
commercial.

Bests.




David R.




-- 




-- 

www.johnrichardsanchez.com

Re: clara.io

2013-07-10 Thread Rares Halmagean
Very cool.  From an artist's perspective, the ability to create content 
from anywhere without being encumbered by technical issues is 
attractive. I'll +1 the concern for IP security, although I think any 
provider will be compelled by the pressures of competing in the 
marketplace to maintain strict security measures, because anything less 
would mean a disastrous loss of business.



On 7/10/2013 4:26 AM, Stefan Kubicek wrote:
I'm a bit surprised this hasn't been posted here yet, I hope I'm not 
spoiling anything, but post date is 8th of July, so...

http://exocortex.com/blog/introducing_claraio





--
*Rares Halmagean
___
*visual development and 3d character & content creation.
*rarebrush.com* http://rarebrush.com/


Re: OT: Bruce Lee + Johnnie Walker + The Mill = Change The Game

2013-07-10 Thread John Richard Sanchez
I got it.
http://www.youtube.com/watch?v=3v1eiKAYOo8


On Wed, Jul 10, 2013 at 2:08 PM, John Richard Sanchez 
youngupstar...@gmail.com wrote:

 That's pretty cool. Where can I see the actual commercial?!


 On Wed, Jul 10, 2013 at 1:19 PM, Emilio Hernandez emi...@e-roja.com wrote:

 Cool!

 I hope that with amazing work like this, more studios will start taking
 Softimage into consideration.


 2013/7/10 David Rivera activemotionpictu...@yahoo.com

 Hi. I thought maybe some of you had seen this:
 http://www.youtube.com/watch?v=8_rsJvmGHpU

 I'm just glad Softimage was involved in the process of creating this
 legendary commercial.
 Bests.

 David R.




 --




 --
 www.johnrichardsanchez.com




-- 
www.johnrichardsanchez.com


Re: clara.io

2013-07-10 Thread Andres Stephens
Nice! Just saw this today. 

Pardon my jumping in! Reminds me a bit of the virtual spaces trueSpace 7.61 beta 8 
had back in 2009, with 3D collaborative servers for collaborative modeling, 
shading, scripting, animation, etc. It kinda worked on intranets or the 
internet and was a fun concept at the time, though it didn't see much use (or maybe 
because Microsoft just shut them down...). Unlike tS, this one is in the browser, 
nice. 
I can see this as very useful for tutoring, training, sharing techniques and 
collaborative projects. The trick is organizing the social or communal aspects 
for it, I think.



-Draise



Re: Open source json camera I/O platform

2013-07-10 Thread jo benayoun
Hey Gene,
that's a very good idea and if well developed could de facto become useful
to many studios out there (most come with their own because of the lack
of a standard that answers every need).

Some rough notes though if I may:
- unit: as soon as you store transforms in a file, you want to keep track
of what units have been used (houdini, maya, etc are not using the same
scene units as units are usually set per show)
- camera rigs: cameras can be quite complex hierarchies, would be great to
have a way to describe what our camera rig is and have it correctly
exported.
- scale: required (even if 1 is always assumed)
- double precision: do we really need to store such high-precision doubles
for transforms? (having an option would be nice)
- file format: your backend should not be that tied to the exporting code
(use abstraction)... for many studios, json is not suitable and they might
prefer for legacy reasons to use others or use binary formats for
compression.
- spec version: backward compatibility reasons
- channels: allow the export of channels/extra attributes (would be
parameters in softimage)
- frame ranges: use frame ranges.  if your range is from 0 to 99, you don't
want to write an array of numbers, considering that:
10x1 + 90x2 digit characters + 99 commas + '[' + ']' = 291 bytes (not
counting whitespace + newline chars)
  while
'0-99x1' = 6 bytes...
- namespaces: believe me this will very quickly become a nightmare!
- ids: you always want ids... (for very quick comparison, and integrity
checks)... we can't rely on names.
- metadata: what department produced it? for what role? has it been
published as an asset? how many cameras? etc. You usually can't guess right
what they would be, so allow your users to have a storage place where they
can store them (available in your API as a dict or whatever datastructure
for clients to manipulate them).
- ...
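The frame-range size argument checks out; here is a quick sketch verifying it (the '0-99x1' notation is just the compact form suggested above, not part of any current spec):

```python
import json

# Frames 0..99 as an explicit JSON array vs. a compact range string.
frames = list(range(100))
array_form = json.dumps(frames, separators=(",", ":"))  # no extra whitespace
range_form = "0-99x1"  # start-end x step, per the suggestion above

# 10 one-digit + 90 two-digit numbers + 99 commas + 2 brackets = 291 bytes
print(len(array_form), "bytes vs", len(range_form), "bytes")
```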

hope this contributes!
:)
--jonathan



2013/7/10 Jordi Bares jordiba...@gmail.com

 I really love the idea, if we can help surely that would be good… Houdini
 import/export


  Jordi Bares
 jordiba...@gmail.com

 On 10 Jul 2013, at 14:54, Michael Heberlein micheberl...@gmail.com
 wrote:

 Sounds like a good idea. There are a few things I would like to add:

 I'd prefer one common class/module for file IO and all the necessary
 conversions like millimeters <-> inches, focal length <-> horizontal view
 angle, picture ratio <-> film aperture, etc. so each application module
 can derive from this and stay as lightweight as possible.
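One such conversion pair, sketched from the usual pinhole-camera relation (the 36mm default horizontal aperture here is an assumption for illustration, not something from the spec):

```python
import math

def focal_to_hfov(focal_mm, h_aperture_mm=36.0):
    # Horizontal view angle in degrees from focal length and horizontal
    # film aperture (film back), both in millimeters.
    return math.degrees(2.0 * math.atan(h_aperture_mm / (2.0 * focal_mm)))

def hfov_to_focal(hfov_deg, h_aperture_mm=36.0):
    # Inverse: view angle in degrees back to focal length in millimeters.
    return h_aperture_mm / (2.0 * math.tan(math.radians(hfov_deg) / 2.0))
```

An 18mm lens on a 36mm back gives a 90-degree horizontal view angle, and the pair round-trips cleanly.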

 Another module could handle the dialogs but if they're optional, it's
 easier to integrate the importer/exporter into a scripted pipeline or
 toolbars. Plugins could just use the correct application module but be
 separate files again so people don't have to use them.

 Also, to make it not just another almost-useful tool, don't forget less
 common properties like optical center shift etc. required by stereo setups,
 for example. As you already wrote in SPEC.txt, all expected units have to
 be defined.

 Michael



 On Wed, Jul 10, 2013 at 8:53 AM, Sandy Sutherland 
 sandy.mailli...@gmail.com wrote:

  Gene - this would be of huge interest I think - every studio I have
 been at we have needed to do this, and have always fudged it - never had
 the time to even start looking at creating a tool!

 So - a big yes from me!

 S.


 On 2013/07/10 5:09 AM, Gene Crucean wrote:

  Hey folks,

  Who's in the mood for some open-source camera I/O code? I'm kind of
 getting bummed out on having to write the same camera tools at every studio
 just to get a simple, lightweight and most importantly reliable camera
 pushed around from app to app. FBX does *not* cut it, not to mention it's
 not available for all apps that could make use of it. So I thought I would
 whip up a spec based on json and offer it up open source so anyone willing
 to donate some time, could create some simple tools to import/export for
 their favorite app. The spec is VERY lightweight and doesn't include some
 things that I'm sure someone will want... but please let me know your
 thoughts.

  I already have a Softimage plugin working (consider it alpha). At this
 point it only has minor sanity checking and logic tests and I'm sure there
 are a zillion ways to improve it. But it's a start. The goal is to at least
 have plugins that all read and write the same spec from Houdini, Softimage,
 Maya, Max, Blender, Nuke... and more. I've built the Soft one using PyQt
 and it would be nice to maintain some consistency between apps, so I'm
 hopeful that the other versions could be based off of the same .ui file.

  What do you guys think? Any interest in this? I know it's a simple
 thing but I'm sure a lot of you also write these tools at studios quite a
 bit too and could possibly be into something like this.

  Check out the spec and source, and if you have time, play with the Soft
 plugin here: https://bitbucket.org/crewshin/json-cam


  If you have completely zero 

Re: Softimage 2014 SP2 released

2013-07-10 Thread John Richard Sanchez
Still don't understand why you can't find this in the subscription center.
Just getting around to installing, and I stupidly tried to look in my
subscription center for upgrades.


On Mon, Jul 8, 2013 at 1:40 AM, Hsiao Ming Chia 
hsiao.ming.c...@autodesk.com wrote:

 Oops... I meant 'Thanks Stephen!'

 From: Hsiao Ming Chia
 Sent: Monday, 8 July, 2013 1:39 PM
 To: softimage@listproc.autodesk.com
 Subject: RE: Softimage 2014 SP2 released

 Thanks Stefan!

 The direct link to the SP2 is
 http://usa.autodesk.com/adsk/servlet/ps/dl/item?siteID=123112&id=21964996&linkID=12544121

 As mentioned in the Readme, some plugins may need to be recompiled in
 order for them to work with SP2.
 This is especially so if the plugin happens to be using ICE SDK functions.

 From: softimage-boun...@listproc.autodesk.com [mailto:
 softimage-boun...@listproc.autodesk.com] On Behalf Of Gene Crucean
 Sent: Thursday, 4 July, 2013 2:27 AM
 To: softimage@listproc.autodesk.com
 Subject: Re: Softimage 2014 SP2 released

 Seriously man.

 On a separate topic... holy mother of fast updates. There must have been
 some big bugs in 2014.

 On Wed, Jul 3, 2013 at 11:24 AM, Sandy Sutherland 
 sandy.mailli...@gmail.com wrote:
 Stephen - I hope you are sending Autodesk a bill every month for your
 ongoing Softimage support - :-P

 S.


 On 2013/07/03 8:21 PM, Stephen Blair wrote:
 Autodesk Softimage 2014 Service Pack 2 has been released
 http://goo.gl/6LWRd


 Bugs Fixed in this Release

 SOFT-9094
 Caching a Crowd FX simulation is broken

 SOFT-9089
 Very slow access to the ICE attribute arrays

 *Note:
 You may need to recompile your plugins, if they were compiled using
 Softimage versions prior to 2014 SP2.




 --
 -Gene
 www.genecrucean.com




-- 
www.johnrichardsanchez.com


Re: Open source json camera I/O platform

2013-07-10 Thread Steven Caron
not that there isn't room for a lightweight and free plugin for camera IO
with minimal dependencies but alembic's camera support is pretty good. is
that not working for you?

now that alembic has its own python API you don't need to use exocortex
plugins. by using alembic you don't have to re-implement support for maya,
nuke, houdini, etc. yes, i know building from source is a pain, but we
should push them to make binaries available for various platforms.

steven

On Tue, Jul 9, 2013 at 9:09 PM, Gene Crucean
emailgeneonthel...@gmail.com wrote:


 What do you guys think? Any interest in this? I know it's a simple thing
 but I'm sure a lot of you also write these tools at studios quite a bit too
 and could possibly be into something like this.




Re: Open source json camera I/O platform

2013-07-10 Thread Alok Gandhi
The only downside I might see with Alembic Camera is the inability to
customize and extend it according to one's needs. To suit the varying needs
of productions, which might change with each show/project, having full
development control over simple python classes makes development much more
rapid. Also, the reader and writer plugins for the various DCC apps can be
very easily implemented and incorporated into an existing pipeline using
python. And then there is the question of cross-platform development, which
is a breeze using python.

Not that I have anything against the Alembic Camera, of course it is
awesome but might involve a lot of headache with having to compile the
whole libraries especially on windows just for the cause of a camera IO and
this might be a daunting task for TDs/Devs who do not have much experience
with compiling C++ code on various platforms.

I would recommend (as suggested before in this thread) to have base class
as the data container (basically a python dict) and then to derive from
that class a class for each DCC, then use this derived class in the
reader/writer plugins.

In fact as a bonus you could use the same classes for export/import of
light data between various DCC.

The only optimization to look for is detecting the same values at different
frames and implementing some sort of compression to make the file size
smaller. RLE is one compression that can be implemented, but there are other
awesome ones that are easy to implement in python (in fact alembic does this
brilliantly, and it is one of its strongest USPs).

Finally, if the writer plugins are not going to use fcurve (bezier)
plotting, then you need to do more work in the plugins for the interpolation
(which, in my view, is also fairly easily implemented using various
universally accepted methods like bezier or hermite for position and scaling,
and slerp/nlerp for rotations).
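For the rotation case, a minimal nlerp along those lines (a sketch only; the (w, x, y, z) quaternion layout is an assumption):

```python
import math

def nlerp(q0, q1, t):
    # Normalized linear interpolation between two unit quaternions (w, x, y, z).
    # A cheap stand-in for slerp that works well for small rotation steps.
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:  # flip one quaternion to take the shortest arc
        q1 = tuple(-c for c in q1)
    blended = tuple((1.0 - t) * a + t * b for a, b in zip(q0, q1))
    norm = math.sqrt(sum(c * c for c in blended))
    return tuple(c / norm for c in blended)
```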


On Wed, Jul 10, 2013 at 3:09 PM, Steven Caron car...@gmail.com wrote:

 not that there isn't room for a lightweight and free plugin for camera IO
 with minimal dependencies but alembic's camera support is pretty good. is
 that not working for you?

 now that alembic has it's own python API you don't need to use exocortex
 plugin's. by using alembic you don't have to re-implement support for maya,
 nuke, houdini, etc. yes, i know building from source is a pain, but we
 should push them to make binaries available for various platforms.

 steven

 On Tue, Jul 9, 2013 at 9:09 PM, Gene Crucean emailgeneonthel...@gmail.com
  wrote:


 What do you guys think? Any interest in this? I know it's a simple thing
 but I'm sure a lot of you also write these tools at studios quite a bit too
 and could possibly be into something like this.




--


Re: Open source json camera I/O platform

2013-07-10 Thread Ludovick Michaud
+1 to Steve.

I was wondering why not use Alembic. After using it from Maya to Houdini,
to Nuke, back to Softimage, then from Softimage to Maya and so forth, it's
been the best one I've used so far. There are a few glitches that come
with the exocortex plug-ins, but like Steve mentioned, I'd be up for writing
an in-house custom one for Softimage. The ones that come with Maya, Houdini
and Nuke haven't failed me yet (knocking on wood).

Also, reading camera data from EXR is a very nice tool when you go from
RenderMan or Vray to Nuke (I haven't had to figure this out for Arnold or
Mental Ray). No need to carry an extra camera file. All the info is embedded
into the file per frame, and Nuke reads it like it's camera data. I don't
have much experience with this one, but frankly, the few times I got to use
that exr camera in Nuke I was very pleased with the fact that I didn't need
a camera file.

Now I'm sure there are other reasons for needing more specific tools. But
if you're looking to just transfer the camera data, I've personally given
up on fbx a long time ago and learned to rely on Alembic as being the most
cross platform solution available at the moment.

Ludo

Ludovick William Michaud
mobile: *214.632.6756*
*www.linkedin.com/in/ludovickwmichaud*
+Shading / Lighting / Compositing
+CG Supervisor / Sr. Technical Director / Creative Director



On Wed, Jul 10, 2013 at 2:09 PM, Steven Caron car...@gmail.com wrote:

 not that there isn't room for a lightweight and free plugin for camera IO
 with minimal dependencies but alembic's camera support is pretty good. is
 that not working for you?

 now that alembic has it's own python API you don't need to use exocortex
 plugin's. by using alembic you don't have to re-implement support for maya,
 nuke, houdini, etc. yes, i know building from source is a pain, but we
 should push them to make binaries available for various platforms.

 steven

 On Tue, Jul 9, 2013 at 9:09 PM, Gene Crucean emailgeneonthel...@gmail.com
  wrote:


 What do you guys think? Any interest in this? I know it's a simple thing
 but I'm sure a lot of you also write these tools at studios quite a bit too
 and could possibly be into something like this.




Re: Open source json camera I/O platform

2013-07-10 Thread Gene Crucean
Hey Jo,

Quick thoughts...


 - unit: as soon as you store transforms in a file, you want to keep track
of what units have been used (houdini, maya, etc are not using the same
scene units as units are usually set per show)

This (imo) should always be the software's default unit, and changed
conceptually for each project. For example, we use Houdini, Softimage and
Maya (among others). Our current project is using meters because of scene
scale. We didn't change a single unit type in any of the apps and just
assume that 1 unit = 1 meter. Super easy, and the way it should be done imo.
So I guess to answer your question... I feel it's a bad idea to store units
in the file itself.


 - camera rigs: cameras can be quite complex hierarchies, would be great
to have a way to describe what our camera rig is and it to be correctly
exported.

This goes along with why we are even creating these tools in the first
place: a nice baked-down, simple, lightweight and reliable camera transfer
format. I don't want to pass around a full rig. I just want a flat baked
cam in global space.


 - scale: required (even if 1 is always assumed)

Always assume it's 1 and handle it accordingly in each app's plugins. No
need to include this in the file.


 - double precision: do we really need to store so high precision doubles
for transforms? (having an option would be nice)

What is the downside? A tiny bit bigger file size?


 - file format: your backend should not be that tied to the exporting code
(use abstraction)... for many studios, json is not suitable and they might
prefer for legacy reasons to use others or use binary formats for
compression.

No tool will ever work for *all* studios. JSON is a huge standard and
widely supported.

 - spec version: backward compatibility reasons

This is v1.0.


 - channels: allow the export of channels/extra attributes (would be
parameters in softimage)

Which channels are you referring to?


 - frame ranges: use frame ranges.  if your range is from 0 to 99, you
don't want to write an array of numbers, considering that:
10x1 + 90x2 digit characters + 99 commas + '[' + ']' = 291 bytes (not
counting whitespace + newline chars)
  while
'0-99x1' = 6 bytes...

Again, is this more about file size? Because huge frame ranges still produce
tiny files. I left it so that the channels posx, posy, posz, rotx, roty and
rotz all have independent frame ranges, just in case we ever had a need to
support offset ranges on each axis. I understand that I'm contradicting
myself in my own exporter, but I thought it might be nice to at least leave
the spec open to things in the future. As it is now, just use len(posx) to
get the frame count and loop accordingly.
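As a sketch of that read pattern (key names posx/posy/posz follow the draft spec's naming as described; the sample values are made up):

```python
import json

# Minimal sample in the draft layout: one array of values per channel.
sample = ('{"Camera1": {"posx": [0.0, 0.1, 0.2],'
          ' "posy": [1.0, 1.0, 1.0],'
          ' "posz": [5.0, 4.9, 4.8]}}')
cam = json.loads(sample)["Camera1"]

frame_count = len(cam["posx"])  # len(posx) gives the frame count
positions = [(cam["posx"][f], cam["posy"][f], cam["posz"][f])
             for f in range(frame_count)]
```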


 - namespaces: believe me this will very quickly become a nightmare!

Arnold doesn't use namespaces. I'm just saying. Nice, but not necessary.
Having said that, I'm completely open... what is your main argument for
them?


 - ids: you always want ids... (for very quick comparison, and integrity
checks)... we can't rely on names.

Hmm... ??


 - metadatas: what department produced it? for what role? has it been
published as an asset? how many cameras? etc. You usually cant guess right
what they would be, so allow your users to have a storage place where they
can store them (available in your API as a dict or whatever datastructure
for clients to manipulate them).

{"Camera1": {"meta": {"software": "Softimage 2014",
"anythingYouWant": "data"}}} ... coming right up.

Do you think it's important to define exactly what key:values are expected
and limit it to just those? Or just leave the metadata dictionary
completely open?





Please don't take my email as bashing. I really appreciate your comments
and I hope more come from it. I just want people to understand the point of
this tool. It's not an all-encompassing format like FBX. That's *exactly*
what I don't want. I envision this as a VERY simple tool that only has what
is needed to define a camera and its transforms, in a format that 99.9% of
all apps will be able to support and read. JSON is a very well supported
format and there are libraries to work with it for all languages used in
this industry. Python has native support. For example...

import json

jsonData = open("/Path/To/camera.cam")
data = json.load(jsonData)
jsonData.close()


Boom... data is a dictionary. Use as needed.
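The write side is just as small; a sketch (the key layout here is illustrative, not the finalized spec):

```python
import json
import os
import tempfile

camera = {"Camera1": {"meta": {"software": "Softimage 2014"},
                      "posx": [0.0, 0.1], "posy": [1.0, 1.0],
                      "posz": [5.0, 4.9]}}

# Dump the camera dictionary to a .cam file (a temp path for illustration).
path = os.path.join(tempfile.gettempdir(), "camera.cam")
with open(path, "w") as f:
    json.dump(camera, f, indent=2)

# Reading it back yields the same dictionary.
with open(path) as f:
    roundtrip = json.load(f)
```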



On Wed, Jul 10, 2013 at 11:22 AM, jo benayoun jobenay...@gmail.com wrote:

 Hey Gene,
 that's a very good idea and if well developed could de facto become useful
 to many studio outta here (most comes with their own because of this lack
 of standard that would answer any needs).

 Some rough notes though if I may:
 - unit: as soon as you store transforms in a file, you want to keep track
 of what units have been used (houdini, maya, etc are not using the same
 scene units as units are usually set per show)
 - camera rigs: cameras can be quite complex hierarchies, would be great to
 have a way to describe what our camera rig is and it to be correctly
 exported.
 - scale: 

Re: Open source json camera I/O platform

2013-07-10 Thread Steven Caron
alembic has a python api, so that means you can rapidly develop and
reimplement if you choose. and it has a pretty extensive camera class.
http://docs.alembic.io/python/
http://docs.alembic.io/python/alembic/abcg.html#alembic.AbcGeom.CameraSample

are you sure the camera isn't extensible? all alembic objects can be
extended with arbitrary user data...
http://docs.alembic.io/python/examples.html#write-non-standard-data-for-a-polymesh

agreed, the only headache is the compilation. but again... we should push
the maintainers to provide binaries to make this easier. i am sure if pyqt
didn't come with pre-packaged binaries the pyqtforsoftimage plugin would be
used a lot less. so let's encourage them to do so.

lastly, i think the file size isn't a big issue... at least not for a
camera only. yes, alembic does compression but the other features which
make it great (data deduplication) are probably not going to apply to a
camera.

just so you guys know, i am not against the idea. by all means make the
tool and make it awesome! i personally like to stop and ask, 'do i really
need to reinvent the wheel?' it is very easy to jump to this conclusion,
but you need to ask yourself that question or avoid repeating the mistakes.
and when it comes to interop formats this has been going on forever and is
usually the reason a project like alembic started in the first place.

On Wed, Jul 10, 2013 at 12:43 PM, Alok Gandhi alok.gandhi2...@gmail.com wrote:

 The only downside I might see with Alembic Camera is the inability to
 customize and extend according to one's need. To suite the various needs of
 the productions which might change with each show/project having full
 development control over simple python classes is way more rapidly
 developed. Also the reader and writer plugins for various DCC apps can be
 very easily implemented and incorporated in the existing pipeline using
 python. And then there is the question of cross-platform development which
 is a breeze using python.


 Not that I have anything against the Alembic Camera, of course it is
 awesome but might involve a lot of headache with having to compile the
 whole libraries especially on windows just for the cause of a camera IO and
 this might be a daunting task for TDs/Devs who do not have much experience
 with compiling C++ code on various platforms.

 I would recommend (as suggested before in this thread) to have base class
 as the data container (basically a python dict) and then to derive from
 that class a class for each DCC, then use this derived class in the
 reader/writer plugins.

 In fact as a bonus you could use the same classes for export/import of
 light data between various DCC.

 The only optimization to look for is for the same values at different
 frames and to implement some sort of compression to make the file size
 smaller. RLE is one compression that can be implemented but there are other
 awesome easy to implement in python (In fact alembic does this brilliantly
 and it is one of the strongest USP for it).

 Finally, if the writer plugins are not going to use fcurve (bezier)
 plotting then you need to do more work in the plugins for the interpolation
 (which according to my view is also fairly easily implemented using various
 universally accepted methods like bezier, hermite for position and scaling
 and slerp/nlerp for rotations).





RE: clara.io

2013-07-10 Thread Marc-Andre Carbonneau
Ok, I see. Can't wait to try it out.
On a cloud-related note, Amazon cut cloud prices by up to 80%!

http://venturebeat.com/2013/07/10/amazon-cuts-cloud-prices-up-to-80-your-move-google-microsoft/#LKTbuudOcBA6RFD7.99



-Original Message-
From: softimage-boun...@listproc.autodesk.com 
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Ben Houston
Sent: 10 juillet 2013 10:17
To: softimage@listproc.autodesk.com
Subject: Re: clara.io

Marc-Andre Carbonneau wrote:
 I understand the power but I wonder how they solve the problem, how does it 
 work? I guess it's some sort of referencing pipeline where you can put a 
 model local inside the scene, modify it and save. While the other person only 
 sees the reference up to when you decide to publish a new version?

Have you used Google docs, where two people can type at the same time and you 
see incremental changes nearly immediately?  That is how Clara.io works.  But 
you will also be able to reference models to control who works on what, to 
avoid the chaos that would come from 20 people working on the same scene.  We 
will allow for referencing via tags as well, but I can't really go into the 
details of future planned features too much.

--
Best regards,
Ben Houston
Voice: 613-762-4113 Skype: ben.exocortex Twitter: @exocortexcom 
http://Exocortex.com - Passionate CG Software Professionals.




Re: Open source json camera I/O platform

2013-07-10 Thread Gene Crucean
So Alembic has three deal-breaking aspects from my pov.

1: It applies operators to everryyythinggg. I hate this about Alembic

2: It's not free. Yes the spec is free to make plugins and whatnot, but
this would firmly put this tool out of reach for a *lot* of studios. Not to
mention it's much more complicated to whip up an exporter for some random
app using Alembic.

3: My main reason is that using python makes it really simple to add into
the pipeline in a nice flexible way.


On Wed, Jul 10, 2013 at 12:46 PM, Ludovick Michaud 
ludovickwmich...@gmail.com wrote:

 +1 to Steve.

 I was wondering why not use Alembic. After using it from Maya to Houdini,
 to Nuke, back to Softimage then from Softimage to Maya and so forth. It's
 been the best one I've got to use so far. There's a few glitches that comes
 with the exocortex plug-ins but like Steve mention, I'd be one for writing
 a in-house custom one for Softimage. The ones that comes with Maya, Houdini
 and Nuke haven't failed me yet (knocking on wood)

 Also reading from EXR is very nice tool as well when you go from RenderMan
 or Vray to Nuke (I haven't had to figure this out from Arnold or Mental
 Ray). No need to carry an extra camera file. All the info is embedded into
 the file per frames and nuke reads it like it's camera data. Not much
 experience on this one. But frankly the few times I got to use that exr
 camera in Nuke I was very pleased with the fact that I didn't need a camera
 file.

 Now I'm sure there are other reasons for needing a more specific tools.
 But if you're looking to just transfer the camera data, I've personally
 given up on fbx a long time ago and learned to rely on Alembic as being the
 most cross platform solution available at the moment.

 Ludo

 Ludovick William Michaud
 mobile: *214.632.6756*
 *www.linkedin.com/in/ludovickwmichaud*
 +Shading / Lighting / Compositing
 +CG Supervisor / Sr. Technical Director / Creative Director



 On Wed, Jul 10, 2013 at 2:09 PM, Steven Caron car...@gmail.com wrote:

 not that there isn't room for a lightweight and free plugin for camera IO
 with minimal dependencies but alembic's camera support is pretty good. is
 that not working for you?

 now that alembic has it's own python API you don't need to use exocortex
 plugin's. by using alembic you don't have to re-implement support for maya,
 nuke, houdini, etc. yes, i know building from source is a pain, but we
 should push them to make binaries available for various platforms.

 steven

 On Tue, Jul 9, 2013 at 9:09 PM, Gene Crucean 
 emailgeneonthel...@gmail.com wrote:


 What do you guys think? Any interest in this? I know it's a simple thing
 but I'm sure a lot of you also write these tools at studios quite a bit too
 and could possibly be into something like this.





-- 
-Gene
www.genecrucean.com


Re: Open source json camera I/O platform

2013-07-10 Thread Steven Caron
1. that's exocortex's default plugin behavior, you can customize the
behavior with their python api. we do this at whiskytree and only get the
ops we want. we customize import and export based on asset type, i.e. cameras.

2. and 3. use the free python api alembic comes with... reimplement the
camera only export in softimage or program x in python. use the existing
spec/camera class. reuse existing free maya, houdini, and nuke plugins.

again, it is not my intention to discourage you... just want you to think
about the time you are going to spend on foundation stuff which is pretty
much done.

On Wed, Jul 10, 2013 at 1:30 PM, Gene Crucean
emailgeneonthel...@gmail.com wrote:

 So Alembic has three deal-breaking aspects from my pov.

 1: It applies operators to everryyythinggg. I hate this about Alembic

 2: It's not free. Yes the spec is free to make plugins and whatnot, but
 this would firmly put this tool out of reach for a *lot *of studios. Not
 to mention it's much more complicated to whip up an exporter for some
 random app using Alembic.

 3: My main reason is that using python makes it really simple to add into
 the pipeline in nice flexible way.




Re: Open source json camera I/O platform

2013-07-10 Thread Alan Fregtman
Gene can speak for himself, but I get the vibe he's probably going for
simplicity, much like how OBJ is the simplest static mesh format and,
because it's so incredibly easy to write out, is so widely used (even if
there are a few extras that not all exporters/importers handle, the core
stuff works.)

Using Alembic would be smart, no doubt, but the complexity it brings in
terms of managing compilation on multiple platforms outweighs the
portability of a potentially simpler spec that's easy to write and
relatively easy to read. Filesize-wise compression can be used regardless.
I don't see that as a concern.

Anyway, that's my opinion. What are your thoughts on this, Gene?



On Wed, Jul 10, 2013 at 4:42 PM, Steven Caron car...@gmail.com wrote:

 1. that's exocortex's default plugin behavior, you can customize the
 behavior with their python api. we do this at whiskytree and only get the
 ops we want. we customize import and export based on asset type ie. cameras.

 2. and 3. use the free python api alembic comes with... reimplement the
 camera only export in softimage or program x in python. use the existing
 spec/camera class. reuse existing free maya, houdini, and nuke plugins.

 again, it is not my intention to discourage you... just want you to think
 about the time you are going to spend on foundation stuff which is pretty
 much done.


 On Wed, Jul 10, 2013 at 1:30 PM, Gene Crucean 
 emailgeneonthel...@gmail.com wrote:

 So Alembic has three deal-breaking aspects from my pov.

 1: It applies operators to everryyythinggg. I hate this about Alembic

 2: It's not free. Yes the spec is free to make plugins and whatnot, but
 this would firmly put this tool out of reach for a *lot *of studios. Not
 to mention it's much more complicated to whip up an exporter for some
 random app using Alembic.

 3: My main reason is that using python makes it really simple to add into
 the pipeline in nice flexible way.




Re: Open source json camera I/O platform

2013-07-10 Thread Gene Crucean
Btw, I hope anyone wanting to use this is waiting until we finalize the
spec. We're internally switching up the way the keys/animation are stored,
going with a dictionary for each frame, which helps with supporting more
features in the future.

How's this look guys?
https://bitbucket.org/crewshin/json-cam/src/d3f1fbda7dcfd3422a785be9c09fe88732aa18d3/test.cam?at=modularizing



-- 
-Gene
www.genecrucean.com


Re: Open source json camera I/O platform

2013-07-10 Thread Gene Crucean
Yeah I think Alan summed up my thoughts. I just want something that's
stupidly simple to work with, incredibly flexible for pipelines of any
type... and zero additional bs.


On Wed, Jul 10, 2013 at 2:34 PM, Steven Caron car...@gmail.com wrote:

 yep, i get that... i think there is room for a lightweight and free plugin
 with minimal dependencies. alembic is surely a beast to compile.


 On Wed, Jul 10, 2013 at 2:27 PM, Alan Fregtman alan.fregt...@gmail.com wrote:

 Gene can speak for himself but I get the vibe he's probably going for
 simplicity, much like how OBJ is the simplest static mesh format and
 because it's so incredibly easy to write out, it's so widely used (even if
 there's a few extras that not all exporters/importers handle, but the core
 stuff works.)

 Using Alembic would be smart, no doubt, but the complexity it brings in
 terms of managing compilation on multiple platforms outweighs the
 portability of a potentially simpler spec that's easy to write and
 relatively easy to read. Filesize-wise compression can be used regardless.
 I don't see that as a concern.

 Anyway, that's my opinion. What are your thoughts on this, Gene?




-- 
-Gene
www.genecrucean.com


Re: Open source json camera I/O platform

2013-07-10 Thread Steven Caron
i hear ya... here is a plugin michelle sandroni wrote for this task...
might help you work through the code faster for maya and max.

http://www.threesixty3d.com/software/freeware/ms_CameraConverter_v2-2.zip


On Wed, Jul 10, 2013 at 2:42 PM, Gene Crucean
emailgeneonthel...@gmail.comwrote:

 Yeah I think Alan summed up my thoughts. I just want something that's
 stupidly simple to work with, incredibly flexible for pipelines of any
 type... and zero additional bs.




Extrude Polygon Islands with different transforms

2013-07-10 Thread Matthew Graves
Hi all,
I am trying to extrude selected polygons using Extrude Polygon Islands (OK
so far), but set an individual transform for each polygon, so polygon 2
might have a translation of 1 in Y and polygon 5 might be 7.
I have done this with a loop where each iteration deals with one polygon
extrusion, however I would like a solution that does not use loops.
When I give Extrude Polygon Islands a set for polygon Index it picks
the correct polygons, but when I give it a per-polygon set for transform
it either does not transform or picks the first transform in the set only
and applies that to all polygons (depending how I set it up). Ideally I
would have liked to pass it arrays of polygon Index and transform, but
transform won't take arrays.

I will be very grateful if anyone can help
Thanks
Matt


Re: Open source json camera I/O platform

2013-07-10 Thread jo benayoun
Hey Gene,
no worries, I don't take it wrong.  It is tough to compile a set of
arguments in so short an amount of time, but I'll give it a try.
So forgive me if I repeat myself or am not detailed enough.
In the end there is always a good reason, mostly driven by our own
experience, the production, or the studio toolset.

I do believe, like Alan said, that you want to keep this simple and
lightweight, and it is with this in mind that I suggested those
additions.  I am not arguing for your tool to become THE standard, but
for it to be generic and flexible enough for studios considering its
integration.

I will try to elaborate on some of my notes.

- units are usually set per show in production, whatever the software's
  defaults are; in other words, whatever goes through the pipeline must be
  expressed in that unit.  In real life, nothing is that perfect, and often
  a third-party user (individual, dept, software) tweaks the unit, which
  leads to data corruption and headaches for the pipeline TDs if the camera
  gets exported without considering this.  I would say most of the time this
  data would be there for pure debug purposes, though as your format is
  intended to be cross-application and might even be cross-show, you cannot
  force/restrict/deduce the metrics used.  Your file might also get lost
  somewhere in the middle of the jungle; without this critical piece of
  information, the data is good for garbage.
  It happened more than once at work that the anim dept exported data with
  a different unit than the pipeline was supposed to support, fucking up
  the lighters' work for example.

- spec version: this information should be stored in the file itself, even
  if the version you're working with is 1.0.  Once the first cam file is
  exported and published as an asset, the asset and file will be sealed
  because of possible dependencies.  With that piece of information, you
  will be able to make additions/extensions to the file format without
  breaking existing files.

- I mentioned the scale for pure flexibility: an artist/dept might be
  working at a smaller scale than another, and this information is required
  to scale your camera transforms accordingly.  Actually this should always
  be the case: when transforms are saved (not in matrix form), the scale or
  globalScale should follow.  To transpose this to a similar issue,
  transforms are usually baked out from global space, which annoys shot
  modelers.  The shot modeler does his job fixing deformations, the
  geometry is baked out, but then the rig is modified and so all the shapes
  must be done again.  Again, pure convenience and flexibility.

- concerning camera rigs, the idea is not to track down an entire
  hierarchy but to be aware of that hierarchy.  Cameras are more than a
  null in a scene, and even if you're willing to simplify this, you cannot
  assume everybody wants to.  A side effect of being able to describe a
  camera rig to your library, and of the library understanding it, is that
  you don't even have to code specific procedures to extract information,
  as the library will do it for you according to the description you've
  made of the rig.

- high-precision doubles are not necessarily something you want to output,
  because they waste space, are expensive to compute (double semantics),
  and conversions are slow (bytes to doubles).  FYI, UltraJSON offers a
  similar option.

- JSON is a huge standard for the web and maybe for some small studios,
  but I am not aware of a lot of studios using it in their pipelines (they
  usually prefer standards such as YAML for config files and XML for
  whatever else, and when speed is needed, Google protobuf).  Actually some
  of them ended up creating their own, as DreamWorks or R+H did.  Anyway,
  here I am talking about your architecture: you want to abstract the
  backend so that clients can add support for the format they have chosen
  in a highly convenient way.  My first thought is support for a binary
  file format; they don't want to wait for you to add it.

- channels are extra properties attached to a scene object (Softimage
  object parameters, Maya node attributes, ...).  A lot of information is
  stored in those, and other applications may require them to properly
  rebuild a camera from the data.  Again, you don't know what their
  specifics will be (naming conventions, which ones must be ignored, etc.
  -- it depends a lot on the artist/dept/studio).

- what you call a tiny file size may not be an acceptable size for others.
  Exporting a camera with a frame range spanning 1000 frames will result
  in 2 MB on disk just to store a frame range which can be specified as
  simply as 1-1001x1.  Time is also spent reading that data.  Size is just
  an argument, as convenience could be; this makes the data dense, and
  editing might be tedious and error-prone.
  We had a need once to export very dense geometries with their internal
  structure.  Because our pipeline was not designed to handle so much
  density, the first try was a disaster, resulting in easily an hour
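
The 1-1001x1 notation jo mentions is compact to store and cheap to expand. A minimal parser might look like this — note the grammar (start-endxstep, a common pipeline convention) is assumed from jo's single example, not defined anywhere in the thread:

```python
def expand_range(spec):
    """Expand a 'start-endxstep' frame-range string, e.g. '1-1001x1',
    into a list of frame numbers. The exact grammar is an assumption;
    the email only shows the one example."""
    body, _, step = spec.partition("x")
    start, _, end = body.partition("-")
    step = int(step) if step else 1     # step is optional
    start = int(start)
    end = int(end) if end else start    # bare '42' means a single frame
    return list(range(start, end + 1, step))

frames = expand_range("1-1001x1")
print(len(frames))  # 1001 frames recovered from a 9-character string
```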

Re: Open source json camera I/O platform

2013-07-10 Thread Raffaele Fragapane
Units and scale, and how they correlate, are extremely important even in a
light weight, portable format. Actually, probably more so in one than in a
more complex scenario.

You can't assume people will not care about those because their workflow
will be independent of it like yours was for a few example productions,
because very frequently they won't have the choice.

As an interop format that deals with external contributions (rendering
engines and so on being heavily dependent on it) you WILL bump into scale
factors whether you like it or not, and the format will be rendered useless
by not having control over those when it'll happen.

There are things one omits or includes for the sake of forcefully
encouraging good practices; those are practical-philosophical choices and
are fine, and best made by a benevolent dictator for the project (any
half-decent programming language out there has craptons of those, and they
are important to give languages and formats identity and coherence).
Scaling and units are not one of those, they are a fundamental requirement
implicit to what you set off to describe with these files.


Re: Open source json camera I/O platform

2013-07-10 Thread Alok Gandhi
Hey Gene, looking at your schema I do not see animated values for
parameters like focal length and the near and far planes. Near and far are
not usually keyed, but you never know; I worked on a stereoscopic project
and we did need to plot the clipping planes. Anyway, focal length fairly
often gets animated. In the interest of generality I would make room for
values for nearly all animatable parameters. As an optimization, the
writer plugin can store only one value in the list if the parameter is not
animated, otherwise it takes all the keyframe values. Also, I would not
bother with the whole keyframe and tangent data in the list, but would
simply sample the values of all parameters at each frame and plot the same
when reading them back. But what you are doing with keyframe value storage
also works; in fact I think it reduces the file size considerably when you
do not have many keyframes in the source scene.
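
Alok's writer-side optimization — store one value when a parameter is static, the full per-frame list otherwise — is only a few lines. A sketch (the sample values are invented; in a real plugin they would come from the host app's API):

```python
def bake_parameter(samples, tol=1e-9):
    """Collapse per-frame samples to a single value when the parameter
    never changes; otherwise keep the full per-frame list.
    'samples' is one value per frame, already read from the host app."""
    first = samples[0]
    if all(abs(v - first) <= tol for v in samples):
        return first          # static: store one value
    return list(samples)      # animated: store every frame

# Invented sample data for illustration:
focal = bake_parameter([35.0, 35.0, 35.0])   # not keyed -> scalar
near = bake_parameter([0.1, 0.1, 0.1])       # not keyed -> scalar
tx = bake_parameter([0.0, 0.5, 1.0])         # animated -> full list
```

A reader can then treat "is it a list?" as the animated/static test, which keeps files small exactly in the common case Alok describes (few or no keys).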


On Wed, Jul 10, 2013 at 7:41 PM, Raffaele Fragapane 
raffsxsil...@googlemail.com wrote:

 Units and scale, and how they correlate, are extremely important even in a
 light weight, portable format. Actually, probably more so in one than in a
 more complex scenario.

 You can't assume people will not care about those because their workflow
 will be independent of it like yours was for a few example productions,
 because very frequently they won't have the choice.

 As an interop format that deals with external contributions (rendering
 engines and so on being heavily dependent on it) you WILL bump into scale
 factors whether you like it or not, and the format will be rendered useless
 by not having control over those when it'll happen.

 There are things one omits or includes for the sake of forecfully
 encouraging good practices, those are practical-philosophical choices and
 are fine, and best made by a benevolent dictator for the project (any half
 decent programming language out there has craptons of those, and they are
 important to give languages and format identity and coherence).
 Scaling and units are not one of those, they are a fundamental requirement
 implicit to what you set off to describe with these files.




--


Re: OT: Bruce Lee + Jhonnie Walker + The Mill = Change The Game

2013-07-10 Thread Sebastien Sterling
Does anyone know what was done in Soft ? was it modelled in soft ?


On 10 July 2013 20:11, John Richard Sanchez youngupstar...@gmail.comwrote:

 I got it.
 http://www.youtube.com/watch?v=3v1eiKAYOo8


 On Wed, Jul 10, 2013 at 2:08 PM, John Richard Sanchez 
 youngupstar...@gmail.com wrote:

 Thats pretty cool. Where can I see the actual commercial?!


 On Wed, Jul 10, 2013 at 1:19 PM, Emilio Hernandez emi...@e-roja.comwrote:

 Cool!

 I hope that with this amazing work, more studios will start taking
 Softimage more into consideration.


 2013/7/10 David Rivera activemotionpictu...@yahoo.com

 Hi. I thought maybe some of you had seen this:
 http://www.youtube.com/watch?v=8_rsJvmGHpU

 I'm just glad Softimage gets involved in the process of creating this
 legendary commercial.
 Bests.

 David R.




 --




 --
 www.johnrichardsanchez.com




 --
 www.johnrichardsanchez.com



Re: OT: Bruce Lee + Jhonnie Walker + The Mill = Change The Game

2013-07-10 Thread Steven Caron
looks like it was lit and rendered in softimage and arnold i would guess.


On Wed, Jul 10, 2013 at 5:01 PM, Sebastien Sterling 
sebastien.sterl...@gmail.com wrote:

 Does anyone know what was done in Soft ? was it modelled in soft ?



Re: Open source json camera I/O platform

2013-07-10 Thread Raffaele Fragapane
When you'll have at most a few dozen curves, even on a thousand frame long
sequence, I honestly don't think cheapening the data matters one iota.
You can always introduce a zip compression stage to the I/O.

Optimizing early and ending up data-poor is always a mistake. Purging is
easy, both on I/O and in dev terms; adding data you don't have is usually
between painful and downright impossible.

If footprint was a concern here, sure, it'd make sense, on something that
on a bad day will have a hundred parameters at the most (and for a mono cam
I'd struggle to think of a hundred parameters I'd want animated) saving 16
floats per frame instead of 64 makes little difference in practical terms.
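
The zip stage Raffaele mentions is one line on each side with the Python stdlib. A sketch — the `.cam.gz` extension is my invention, not part of the spec:

```python
import gzip
import json
import os
import tempfile

# A deliberately repetitive payload: 1001 frames of per-frame dictionaries.
camera = {"frames": {str(f): {"tx": f * 0.1} for f in range(1, 1002)}}
raw = json.dumps(camera).encode("utf-8")

path = os.path.join(tempfile.gettempdir(), "shot.cam.gz")

# Writer side: wrap the JSON text in a gzip stream.
with gzip.open(path, "wb") as fh:
    fh.write(raw)

# Reader side: decompress transparently, then parse as usual.
with gzip.open(path, "rb") as fh:
    restored = json.loads(fh.read().decode("utf-8"))

# Per-frame key names repeat on every frame, so they compress very well.
print(len(raw), "->", len(gzip.compress(raw)))
```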


On Thu, Jul 11, 2013 at 10:01 AM, Alok Gandhi alok.gandhi2...@gmail.comwrote:

 Hey Gene looking at your schema I do not see animated value for parameters
 like focal length, near and far planes. Though near and far are not usually
 keyed but you never know. I have worked on a stereoscopic project and we
 did need to plot the clipping planes. Anyways, focal length does fairly get
 animated at times. In the interest of generality and being generic I would
 make room for values for nearly all animatable parameters. For the case of
 optimization the writer plugin can use only one value in the list if the
 parameter is not animated else it will take all the key frame values. Also
 I would not care for the whole keyframe and tangent data in the list but
 would simply read the values of all parameters at each frame and plot the
 same when I am reading the values. But what you are doing with keyframes
 value storage also works, in fact I think it reduces the file size
 considerably in case you do not have much keyframes in the source scene.


 On Wed, Jul 10, 2013 at 7:41 PM, Raffaele Fragapane 
 raffsxsil...@googlemail.com wrote:

 Units and scale, and how they correlate, are extremely important even in
 a light weight, portable format. Actually, probably more so in one than in
 a more complex scenario.

 You can't assume people will not care about those because their workflow
 will be independent of it like yours was for a few example productions,
 because very frequently they won't have the choice.

 As an interop format that deals with external contributions (rendering
 engines and so on being heavily dependent on it) you WILL bump into scale
 factors whether you like it or not, and the format will be rendered useless
 by not having control over those when it'll happen.

 There are things one omits or includes for the sake of forecfully
 encouraging good practices, those are practical-philosophical choices and
 are fine, and best made by a benevolent dictator for the project (any half
 decent programming language out there has craptons of those, and they are
 important to give languages and format identity and coherence).
 Scaling and units are not one of those, they are a fundamental
 requirement implicit to what you set off to describe with these files.




 --




-- 
Our users will know fear and cower before our software! Ship it! Ship it
and let them flee like the dogs they are!


Re: OT: Bruce Lee + Jhonnie Walker + The Mill = Change The Game

2013-07-10 Thread Sebastien Sterling
Could they not do that in Maya? It seems strange to switch just for
rendering. Then again, there is a feature being made in Belgium that is
animated in Maya and rendered in Modo, so... I guess they must enjoy
working like that.

At least now i have a good excuse to drink whiskey :P


On 11 July 2013 02:03, Steven Caron car...@gmail.com wrote:

 looks like it was lit and rendered in softimage and arnold i would guess.


 On Wed, Jul 10, 2013 at 5:01 PM, Sebastien Sterling 
 sebastien.sterl...@gmail.com wrote:

 Does anyone know what was done in Soft ? was it modelled in soft ?




Siggraph 2013

2013-07-10 Thread Eric Thivierge
So haven't heard much about Siggraph aside from Matt's message about the
dinner on the list. What's the status of that Matt?

Would like to catch up with anyone from the list as usual.


Eric Thivierge
http://www.ethivierge.com


unsubscribe

2013-07-10 Thread margot



Margot Edström, DETTA

+46 73 975 39 83
Stora Varvsgatan 14
211 19 Malmö

www.thisisdetta.se



Re: Siggraph 2013

2013-07-10 Thread Ben Houston
I'm up for the dinner as well if it is happening. :-)
-ben

On Wed, Jul 10, 2013 at 8:40 PM, Eric Thivierge ethivie...@gmail.com wrote:
 So haven't heard much about Siggraph aside from Matt's message about the
 dinner on the list. What's the status of that Matt?

 Would like to catch up with anyone from the list as usual.

 
 Eric Thivierge
 http://www.ethivierge.com



-- 
Best regards,
Ben Houston
Voice: 613-762-4113 Skype: ben.exocortex Twitter: @exocortexcom
http://Exocortex.com - Passionate CG Software Professionals.


RE: Siggraph 2013

2013-07-10 Thread Matt Lind
I have received exactly 2 replies; most years I have nearly 100 by now.  I've 
just been too busy at work to respond to the 2, but I saw them, if it matters.

I was hoping to hold the dinner at 'Don the BeachComber's Restaurant' in Sunset 
Beach for the following reasons:

1) SIGGRAPH is in the middle of Disneyland.  That means lots of vacationers at 
the height of the season and jockeying for reservations in restaurants.  Up to 
8 people is not a problem, beyond that will require great organization.  Most 
years the dinner's attendance is in the 20-25 range.

2) Anything near Disneyland is overpriced and a chain restaurant.

3) Anything near Disneyland tends to be very focused on enforcing rules - 
because that's the Disney way.  This is probably the stickler for me as 
traditionally I've had very unreliable RSVPs from people wanting to attend this 
dinner.  ~50% of people who say they'll come never show up, and don't inform 
me.  Likewise I often have unannounced guests pop in at the last minute forcing 
me to find extra chairs.

In the early years of this dinner, I flew by the seat of my pants calling 
around town and picking the restaurant only minutes before we showed up - 
mostly because I was not from that town and unfamiliar with the territory.  
Although I was often lucky, I learned many restaurants do not accommodate large 
parties on short notice, and others insist on the entire party being present 
before being seated.  For this reason I now choose the restaurant knowing it 
will not be crowded so last minute adjustments are not a big deal.  I cannot 
make that assurance in the middle of Disneyland and would hate to turn someone 
away.

I realize getting to Don's is a bit out of the way, but this is the first 
Siggraph that's been held in my backyard and so I have more familiarity with 
the area to judge the situation.  If anybody needs a ride to/from the 
restaurant, let me know and I'll see if I can enlist a few local friends to 
make it easier.

If Don's is a problem, say so now so I can look for a local venue.

Send responses to: matt(dot)lind(at)mantom(dot)net


Matt







From: softimage-boun...@listproc.autodesk.com 
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Eric Thivierge
Sent: Wednesday, July 10, 2013 5:40 PM
To: softimage@listproc.autodesk.com
Subject: Siggraph 2013

So haven't heard much about Siggraph aside from Matt's message about the dinner 
on the list. What's the status of that Matt?

Would like to catch up with anyone from the list as usual.


Eric Thivierge
http://www.ethivierge.com


RE: Open source json camera I/O platform

2013-07-10 Thread Matt Lind
I started a toolset a few years ago based on XML as well.  It works and I can 
store robust data, but the downside is that the file sizes are huge and slow to 
read/write.  Memory becomes an issue at some point.  If you only want to 
transfer cameras or simple stuff, it works fine, but large scenes with lots of 
animation data are not advised with XML; other formats may be better suited.

Matt
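
Matt's size observation is easy to reproduce with the stdlib alone. A toy comparison — the tag and key names are made up, and real-world ratios will vary with the schema:

```python
import json
import xml.etree.ElementTree as ET

# 1000 frames of invented per-frame camera data.
frames = {str(f): {"tx": f * 0.1, "ty": 1.5, "focal": 35.0}
          for f in range(1, 1001)}

# JSON: each field name appears once per frame.
json_text = json.dumps(frames)

# XML: every field costs an opening AND a closing tag per frame.
root = ET.Element("camera")
for f, params in frames.items():
    node = ET.SubElement(root, "frame", number=f)
    for key, value in params.items():
        ET.SubElement(node, key).text = repr(value)
xml_text = ET.tostring(root, encoding="unicode")

print(len(json_text), "bytes of JSON vs", len(xml_text), "bytes of XML")
```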





From: softimage-boun...@listproc.autodesk.com 
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Tim Crowson
Sent: Wednesday, July 10, 2013 5:34 PM
To: softimage@listproc.autodesk.com
Subject: Re: Open source json camera I/O platform

This is great to see! I have something similar in the wings that's only 
partially implemented, but in XML instead. It stores most of the stuff Jo was 
talking about. I wrote it as a way to export cameras and nulls to Nuke and 
Fusion. My goal is to have a series of tools for various apps all writing and 
reading a common XML file. Cameras and nulls can be exported, then updated if 
something changes. With custom UI for setting options. Anyway, I haven't 
finished writing all the different plugins, but I've got a couple of apps 
covered already. I debated between JSON and XML and finally just went with XML.

Glad to see this in the works, Gene! Can't wait to see more!

-Tim C

On 7/10/2013 7:10 PM, Raffaele Fragapane wrote:
When you'll have at most a few dozen curves, even on a thousand frame long 
sequence, I honestly don't think cheapening the data matters one iota.
You can always introduce a zip compression stage to the I/O.
Optimizing early and ending data poor is always a mistake. Purging is easy, 
both on I/O and in dev terms, adding data you don't have is usually betwene 
painful and downright impossible.
If footprint was a concern here, sure, it'd make sense, on something that on a 
bad day will have a hundred parameters at the most (and for a mono cam I'd 
struggle to think of a hundred parameters I'd want animated) saving 16 floats 
per frame instead of 64 makes little difference in practical terms.

On Thu, Jul 11, 2013 at 10:01 AM, Alok Gandhi 
alok.gandhi2...@gmail.commailto:alok.gandhi2...@gmail.com wrote:
Hey Gene looking at your schema I do not see animated value for parameters like 
focal length, near and far planes. Though near and far are not usually keyed 
but you never know. I have worked on a stereoscopic project and we did need to 
plot the clipping planes. Anyways, focal length does fairly get animated at 
times. In the interest of generality and being generic I would make room for 
values for nearly all animatable parameters. For the case of optimization the 
writer plugin can use only one value in the list if the parameter is not 
animated else it will take all the key frame values. Also I would not care for 
the whole keyframe and tangent data in the list but would simply read the 
values of all parameters at each frame and plot the same when I am reading the 
values. But what you are doing with keyframes value storage also works, in fact 
I think it reduces the file size considerably in case you do not have much 
keyframes in the source scene.

On Wed, Jul 10, 2013 at 7:41 PM, Raffaele Fragapane 
raffsxsil...@googlemail.commailto:raffsxsil...@googlemail.com wrote:
Units and scale, and how they correlate, are extremely important even in a 
light weight, portable format. Actually, probably more so in one than in a more 
complex scenario.
You can't assume people will not care about those because their workflow will 
be independent of it like yours was for a few example productions, because very 
frequently they won't have the choice.
As an interop format that deals with external contributions (rendering engines 
and so on being heavily dependent on it) you WILL bump into scale factors 
whether you like it or not, and the format will be rendered useless by not 
having control over those when it'll happen.
There are things one omits or includes for the sake of forecfully encouraging 
good practices, those are practical-philosophical choices and are fine, and 
best made by a benevolent dictator for the project (any half decent programming 
language out there has craptons of those, and they are important to give 
languages and format identity and coherence).
Scaling and units are not one of those, they are a fundamental requirement 
implicit to what you set off to describe with these files.



--



--
Our users will know fear and cower before our software! Ship it! Ship it and 
let them flee like the dogs they are!

--




Re: softimage and 3Dcoat Applink

2013-07-10 Thread Raffaele Fragapane
I can't speak for the interaction with Soft, but for retopo everywhere I
look 3DC is getting hailed as really damn good.
We used it on a project here to clean some of the biggest, most irregularly
sampled, largest surface ever LIDARs.

Talking LIDARs big enough that they would take minutes of surveying while
driving through natural reserves in a Jeep, with enough detail to be able
to tell what jacket the people who got picked up in it were wearing. The
laser-shadowing holes, since you can't exactly drive into protected areas,
were frequent and big.

If it can deal with that, it can deal with pretty much anything.


On Fri, Jun 28, 2013 at 6:26 PM, Tim Leydecker bauero...@gmx.de wrote:

 Hi guys,


 anyone using 3dcoat with softimage here?

 How well do the voxel modeling tools work, would you
 consider it comfortable to use 3dcoat to clean up raw
 scan data (with holes) into solid volumes?

 Does 3dcoat play nicely with softimage in general?

 Anything you'd find important to point out?


 Cheers,


 tim




-- 
Our users will know fear and cower before our software! Ship it! Ship it
and let them flee like the dogs they are!


Re: Open source json camera I/O platform

2013-07-10 Thread Gene Crucean
Thanks for the input guys! I'm ingesting all of it :)

I'm quite against adding units into the main camera section of the file...
but what about adding them to the metadata section? I really don't
understand why anyone would want this in the file though. Units should only
be conceptual imo. Autodesk says that 1 Softimage unit = 1 decimeter, but
Soft has no concept of units... at all. Our current project is in meters,
so conceptually we just know that 1 SI unit = 1 meter. Did we change
anything? Nope. Same thing in Maya... 1 unit = 1 meter. Didn't change a
thing on the Maya side either. I would love for someone to give me an
example of why this should be different.

Either way, I'll have an update tomorrow at some point, along with I/O for
Houdini and updated Soft scripts. Maya is next, and then hopefully I can
talk one of our Nuke devs into banging out an importer. Unless someone on
here knows its API and wants to donate some skills (once the 1.0 spec is
finished). Same with any other apps :)

Cheers


On Wed, Jul 10, 2013 at 6:59 PM, Matt Lind ml...@carbinestudios.com wrote:

 I started a toolset a few years ago based on XML as well.  It works and I
 can store robust data, but the downside is the file sizes are huge and slow
 to read/write.  Memory becomes an issue at some point.  If you only want to
 transfer cameras or simple stuff, it works fine, but large scenes with lots
 of animation data is not advised with XML as other formats may be better
 suited.

 Matt

 From: softimage-boun...@listproc.autodesk.com [mailto:
 softimage-boun...@listproc.autodesk.com] On Behalf Of Tim Crowson
 Sent: Wednesday, July 10, 2013 5:34 PM
 To: softimage@listproc.autodesk.com
 Subject: Re: Open source json camera I/O platform


 This is great to see! I have something similar in the wings that's only
 partially implemented, but in XML instead. It stores most of the stuff Jo
 was talking about. I wrote it as a way to export cameras and nulls to Nuke
 and Fusion. My goal is to have a series of tools for various apps all
 writing and reading a common XML file. Cameras and nulls can be exported,
 then updated if something changes. With custom UI for setting options.
 Anyway, I haven't finished writing all the different plugins, but I've got
 a couple of apps covered already. I debated between JSON and XML and
 finally just went with XML.

 Glad to see this in the works, Gene! Can't wait to see more!

 -Tim C

 

 On 7/10/2013 7:10 PM, Raffaele Fragapane wrote:

 When you'll have at most a few dozen curves, even on a thousand frame long
 sequence, I honestly don't think cheapening the data matters one iota.

 You can always introduce a zip compression stage to the I/O.

 Optimizing early and ending data poor is always a mistake. Purging is
 easy, both on I/O and in dev terms, adding data you don't have is usually
 betwene painful and downright impossible.

 If footprint was a concern here, sure, it'd make sense, on something that
 on a bad day will have a hundred parameters at the most (and for a mono cam
 I'd struggle to think of a hundred parameters I'd want animated) saving 16
 floats per frame instead of 64 makes little difference in practical terms.
 


 On Thu, Jul 11, 2013 at 10:01 AM, Alok Gandhi alok.gandhi2...@gmail.com
 wrote:

 Hey Gene looking at your schema I do not see animated value for parameters
 like focal length, near and far planes. Though near and far are not usually
 keyed but you never know. I have worked on a stereoscopic project and we
 did need to plot the clipping planes. Anyways, focal length does fairly get
 animated at times. In the interest of generality and being generic I would
 make room for values for nearly all animatable parameters. For the case of
 optimization the writer plugin can use only one value in the list if the
 parameter is not animated else it will take all the key frame values. Also
 I would not care for the whole keyframe and tangent data in the list but
 would simply read the values of all parameters at each frame and plot the
 same when I am reading the values. But what you are doing with keyframes
 value storage also works, in fact I think it reduces the file size
 considerably in case you do not have much keyframes in the source scene.


 On Wed, Jul 10, 2013 at 7:41 PM, Raffaele Fragapane 
 raffsxsil...@googlemail.com wrote:

 Units and scale, and how they correlate, are extremely important even in a
 light weight, portable format. Actually, probably more so in one than in a
 more complex scenario.

 You can't assume people will not care about those because their workflow
 will be independent of it like yours was for a few example productions,
 because very frequently they won't have the choice.

 As an interop format that deals with external contributions (rendering
 engines and so on being heavily dependent on it) you WILL bump into scale
 

Re: Open source json camera I/O platform

2013-07-10 Thread Raffaele Fragapane
The unit only needs to be an arbitrary value in the header.
Autodesk can say whatever it wants, but the truth is that if you change
Maya from imperial to metric at the beginning of a project (and you might
have that happen on the client's side) there will be repercussions, and if
your cameras were intended as 1 cm but get imported as 1 inch, things will
be out of whack. Majorly.

Several parameters, especially so if this will get a further level of
abstraction later on, are actually world-scale dependent.
The film back can change with a unit change (in some apps it does, in some
it doesn't), several rendering and grooming parameters change, and so on.

As for scale, I've had plenty of instances where the camera was scaled for
various reasons, frequently enough to be relevant: entire chunks of a pipe
would rely on a stupid-renderman-trick style scaled camera.
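
Raffaele's point reduces to a tiny amount of reader-side code once a unit tag exists in the header. A sketch — the "units" key, the factor table, and the choice of which fields count as lengths are all assumptions for illustration, not the actual spec:

```python
# Scale factors to centimeters. Which keys are length channels (and the
# unit names themselves) are made-up conventions for this sketch.
TO_CM = {"millimeter": 0.1, "centimeter": 1.0, "decimeter": 10.0,
         "meter": 100.0, "inch": 2.54}

def convert_frames(frames, src_units, dst_units):
    """Rescale translation channels when the writing and reading apps
    disagree on what one unit means."""
    factor = TO_CM[src_units] / TO_CM[dst_units]
    out = {}
    for frame, params in frames.items():
        out[frame] = {k: v * factor if k in ("tx", "ty", "tz") else v
                      for k, v in params.items()}
    return out

frames = {"1": {"tx": 1.0, "ty": 2.0, "tz": 3.0, "focal_length": 35.0}}
as_inches = convert_frames(frames, "centimeter", "inch")
# Without the header tag, a 1 cm offset read as 1 inch is off by 2.54x --
# Raffaele's "out of whack. Majorly."
```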


On Thu, Jul 11, 2013 at 2:10 PM, Gene Crucean
emailgeneonthel...@gmail.comwrote:

 Thanks for the input guys! I'm ingesting all of it :)

 I'm quite against adding units into the main camera section of the file...
 but what about adding them to the metadata section? I really don't
 understand why anyone would want this in the file though. Units should only
 be conceptual imo. Autodesk says that 1 SI unit = 1 decimeter, but it has
 no concept of units... at all. Our current project is in meters, so
 conceptually we just know that 1 SI unit = 1 meter. Did we change anything?
 Nope. Same thing in Maya... 1 unit = 1 meter. Didn't change a thing on the
 Maya side either. I would love for someone to give me an example of why
 this should be different.

 Either way, I'll have an update tomorrow at some point, along with I/O for
 Houdini and updated Soft scripts. Maya is next, and then hopefully I can
 talk one of our Nuke devs into banging out an importer. Unless someone on
 here knows its API and wants to donate some skills (once the 1.0 spec is
 finished). Same with any other apps :)

 Cheers


 On Wed, Jul 10, 2013 at 6:59 PM, Matt Lind ml...@carbinestudios.com wrote:

 I started a toolset a few years ago based on XML as well.  It works and I
 can store robust data, but the downside is that the file sizes are huge and
 slow to read/write.  Memory becomes an issue at some point.  If you only
 want to transfer cameras or simple stuff, it works fine, but large scenes
 with lots of animation data are not advised with XML; other formats may be
 better suited.


 Matt


 From: softimage-boun...@listproc.autodesk.com [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Tim Crowson
 Sent: Wednesday, July 10, 2013 5:34 PM
 To: softimage@listproc.autodesk.com
 Subject: Re: Open source json camera I/O platform


 This is great to see! I have something similar in the wings that's only
 partially implemented, but in XML instead. It stores most of the stuff Jo
 was talking about. I wrote it as a way to export cameras and nulls to Nuke
 and Fusion. My goal is to have a series of tools for various apps all
 writing and reading a common XML file. Cameras and nulls can be exported,
 then updated if something changes, with a custom UI for setting options.
 Anyway, I haven't finished writing all the different plugins, but I've got
 a couple of apps covered already. I debated between JSON and XML and
 finally just went with XML.

 Glad to see this in the works, Gene! Can't wait to see more!

 -Tim C

 

 On 7/10/2013 7:10 PM, Raffaele Fragapane wrote:

 When you have at most a few dozen curves, even on a thousand-frame-long
 sequence, I honestly don't think cheapening the data matters one iota.
 

 You can always introduce a zip compression stage to the I/O.
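Such a compression stage could be a minimal wrapper around the standard-library gzip and json modules; a sketch, with hypothetical helper names (`save_cam`, `load_cam`) not taken from Gene's plugin:

```python
import gzip
import json

def save_cam(path, data):
    # Write the camera dict as gzip-compressed JSON text.
    with gzip.open(path, "wt", encoding="utf-8") as f:
        json.dump(data, f)

def load_cam(path):
    # Read it back; callers see a plain dict, compression is invisible.
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return json.load(f)
```

This keeps the on-disk spec human-readable when uncompressed, while shrinking per-frame float lists considerably for long sequences.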

 Optimizing early and ending up data-poor is always a mistake. Purging is
 easy, both on I/O and in dev terms; adding data you don't have is usually
 between painful and downright impossible.

 If footprint were a concern here, sure, it'd make sense. But on something
 that on a bad day will have a hundred parameters at the most (and for a
 mono cam I'd struggle to think of a hundred parameters I'd want animated),
 saving 16 floats per frame instead of 64 makes little difference in
 practical terms.
 


 On Thu, Jul 11, 2013 at 10:01 AM, Alok Gandhi alok.gandhi2...@gmail.com
 wrote:

 Hey Gene, looking at your schema I do not see animated values for
 parameters like focal length and the near and far planes. Though near and
 far are not usually keyed, you never know. I have worked on a stereoscopic
 project and we did need to plot the clipping planes. Anyway, focal length
 does get animated fairly often. In the interest of generality I would make
 room for values for nearly all animatable parameters. As an optimization,
 the writer plugin can store only one value in the list if the parameter is
 not animated; otherwise it takes all the keyframe values. Also I would not
 care for the whole keyframe and tangent data
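The single-value-vs-full-list optimization described above could look roughly like this in a writer plugin; a hypothetical sketch (the helper name `pack_values` and the tolerance are assumptions, not from the spec):

```python
def pack_values(samples, tol=1e-9):
    """Collapse per-frame samples to one value when the parameter is static.

    Readers can then treat a length-1 list as a constant and a longer list
    as one value per frame.
    """
    first = samples[0]
    if all(abs(v - first) <= tol for v in samples):
        return [first]          # static: store a single value
    return list(samples)        # animated: store every frame's value

pack_values([35.0, 35.0, 35.0])  # -> [35.0] (static focal length)
pack_values([35.0, 40.0, 50.0])  # -> [35.0, 40.0, 50.0] (animated)
```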
 

Re: Deforming instances on strands

2013-07-10 Thread Edy Susanto Lim
Hi, I think StrandDeform is a rendertime effect, meaning you'll only see it
when you render, and only if your render engine supports it.

No other trick as far as I know if you're restricted to particle instances
and strands.

Cheers,
edy


On Sat, Jun 15, 2013 at 3:51 AM, Antonin Messier antoni...@gmail.com wrote:

 Sorry for interrupting your Friday beer, folks, but is there any other
 trick to deforming instances on strands than using instance shape and
 setting the StrandDeform property to True? All the docs seem to say it's
 that simple, but no matter what I try, my instances stay attached to my
 particles rather than deforming on my strands.

 (Example with a sample scene attached.)




-- 
Edy Susanto Lim
TD
http://sawamura.neorack.com