Re: OT - extruding in Maya?

2020-07-02 Thread Matt Lind
It has been 6 years since retirement. People move on.

Matt


> Message: 1
> Date: Thu, 2 Jul 2020 11:46:39 +0100
> From: Chris Marshall 
> Subject: Re: OT - extruding in Maya?
> To: 
>
>
> It's getting very quiet in here
--
Softimage Mailing List.
To unsubscribe, send a mail to softimage-requ...@listproc.autodesk.com with 
"unsubscribe" in the subject, and reply to confirm.


GLSL example needed

2020-05-14 Thread Matt Lind
Does anybody have a simple example of using the GLSL_shader in the rendertree 
to sample textures on objects?

I'm using the GLSL_Shader with inputs from the OGL_Texture shader for the 
texture information.  I have specified the uniform names of the textures and 
call them in code by the same names, but apparently there are additional 
steps that must be taken for the textures to be seen by the GLSL vertex and 
fragment shaders.

The details:
- one polygon mesh
- two texture projections, each with unique texture coordinates and image 
clip.
- float parameter in [0, 1] to blend between texture 1 and texture 2.

In WebGL it would look something like this:

Vertex Shader:

attribute vec3 aPosition;   // hypothetical position attribute for the placeholder transform
attribute vec2 aTexCoord1;
attribute vec2 aTexCoord2;
uniform mat4 uMVP;          // hypothetical model-view-projection matrix
varying vec2 vTexCoord1;
varying vec2 vTexCoord2;

void main()
{
    // Just pass both sets of texture coordinates through; the sampling
    // happens per-fragment, so the samplers are not needed (and vertex
    // texture fetch is not guaranteed in WebGL 1 anyway).
    vTexCoord1 = aTexCoord1;
    vTexCoord2 = aTexCoord2;

    gl_Position = uMVP * vec4( aPosition, 1.0 );  // vertex transformation
}


Fragment Shader:

precision mediump float;  // WebGL fragment shaders require a default float precision

varying vec2 vTexCoord1;
varying vec2 vTexCoord2;
uniform sampler2D uSampler1;
uniform sampler2D uSampler2;
uniform float t;

void main()
{
    vec4 texColor1 = texture2D( uSampler1, vTexCoord1 );
    vec4 texColor2 = texture2D( uSampler2, vTexCoord2 );

    // Linear blend: t = 1.0 shows texture 1, t = 0.0 shows texture 2.
    // Equivalent to mix( texColor2, texColor1, t ).
    gl_FragColor = ( texColor1 * t ) + ( texColor2 * ( 1.0 - t ) );
}
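I can't say what extra wiring the GLSL_Shader / OGL_Texture combination needs inside the rendertree, but in the WebGL version above the step most often missed is on the CPU side: each sampler uniform must be set to a texture-unit index, and a texture bound to that unit. A sketch of those calls follows; `program`, `texture1`, and `texture2` are hypothetical handles, and the stub context only records calls so the snippet is self-contained outside a browser (a real WebGLRenderingContext would be used in practice):

```javascript
// The binding step the shaders alone cannot express: a sampler uniform is an
// integer texture-unit index set with uniform1i, and a texture must be bound
// to that unit with activeTexture + bindTexture.
function bindTwoTextures(gl, program, texture1, texture2) {
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, texture1);
  gl.uniform1i(gl.getUniformLocation(program, "uSampler1"), 0);

  gl.activeTexture(gl.TEXTURE1);
  gl.bindTexture(gl.TEXTURE_2D, texture2);
  gl.uniform1i(gl.getUniformLocation(program, "uSampler2"), 1);
}

// Minimal recording stub standing in for a WebGLRenderingContext, so the
// sketch can be exercised without a browser.  Constant values match GL.
function makeStubGL() {
  const calls = [];
  return {
    calls,
    TEXTURE0: 0x84C0, TEXTURE1: 0x84C1, TEXTURE_2D: 0x0DE1,
    activeTexture(unit) { calls.push(["activeTexture", unit]); },
    bindTexture(target, tex) { calls.push(["bindTexture", tex]); },
    getUniformLocation(prog, name) { return name; },
    uniform1i(loc, v) { calls.push(["uniform1i", loc, v]); },
  };
}
```

If either `uniform1i` call is skipped, both samplers default to unit 0 and the second texture never appears, which looks exactly like the "shader can't see the texture" symptom described here.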



Thanks,

Matt




Re: Friday Flashback #409

2020-05-03 Thread Matt Lind
I still have that T-shirt.

It's not a DS shirt, it's a SIGGRAPH '98 T-shirt with logos from all the 
Softimage products on the front (DS, 3D, Sumatra, Eddie, Toonz, ...), and a 
huge ugly banner on the back advertising Softimage at Siggraph.

Very fruitful year for Softimage swag.

Matt



Date: Fri, 1 May 2020 16:41:34 -0400
From: Stephen Blair stephenrbl...@gmail.com
Subject: Friday Flashback #409
To: softimage@listproc.autodesk.com


DS insignia tshirt






Re: anyone know how ICE "get closest location" actually works

2020-03-30 Thread Matt Lind
'Get Closest Locations' is patent protected, so you won't be able to do 
anything commercial with the algorithm as implemented (but you may be able 
to find its description at the patent office).

I spoke with the author of that tool at great length a few years ago as I 
had questions about certain behaviors (bugs).  He said the patented part of 
importance is the acceleration structure: a tree that stores the results 
of searches.  The benefits come from making certain assumptions about 
when the tree needs to be updated in order to avoid unnecessary computation.

Location finding on meshes is basic linear algebra.  You can derive it 
yourself using the following:

- distance equation (Pythagorean)
- the plane equation
- point inside a triangle
- projections
- vector dot product (aka inner scalar product)
- vector cross product
- determinant
- linear transformations
- fundamental understanding of barycentric coordinates.

For example, finding the closest vertex is no more complicated than iterating 
through all the vertices of the mesh and computing the Pythagorean distance 
(d = sqrt( x^2 + y^2 + z^2 )), then choosing the vertex with the smallest 
value of d.  If you want the closest location, the computation is a bit more 
involved, but it's the same general principle.  In either case, you need to 
make sure the origin of your search is in the same coordinate space as your 
mesh.  That's where linear transformations and projections come into play.
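The brute-force closest-vertex search described above fits in a few lines (JavaScript here to match the WebGL example earlier in the list; the flat array-of-triples vertex layout is an assumption for illustration, and the query point is assumed to already be in the mesh's coordinate space):

```javascript
// Brute-force closest vertex: compute the Pythagorean distance to every
// vertex of the mesh and keep the smallest.
function closestVertex(queryPoint, vertices) {
  let bestIndex = -1;
  let bestDistSq = Infinity;
  for (let i = 0; i < vertices.length; i++) {
    const dx = vertices[i][0] - queryPoint[0];
    const dy = vertices[i][1] - queryPoint[1];
    const dz = vertices[i][2] - queryPoint[2];
    // Compare squared distances; sqrt is monotonic, so it can wait until
    // the winner is known.
    const distSq = dx * dx + dy * dy + dz * dz;
    if (distSq < bestDistSq) {
      bestDistSq = distSq;
      bestIndex = i;
    }
  }
  return { index: bestIndex, distance: Math.sqrt(bestDistSq) };
}
```

Every query walks every vertex once, so this is O(n) per query; the patented acceleration structure exists precisely to avoid that full walk on repeated queries.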

Brute force techniques are accurate and easy to implement, but perform 
slowly.  That may be adequate for situations where performance is not 
critical, such as a pick session.  However, if you want speed, you'll need a 
more intelligent method that searches only the parts of the mesh which are 
likely to produce the desired result.  That can get complex really fast 
depending on your situation.
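Whichever search strategy is used, the per-triangle kernel for "closest location" is the classic closest-point-on-triangle test, built from exactly the dot products and barycentric coordinates listed above. A sketch of the standard Voronoi-region formulation (this is the textbook derivation, not Softimage's patented implementation):

```javascript
// Closest point on triangle (a, b, c) to point p.  Each branch classifies p
// against a vertex or edge Voronoi region using dot products; the final case
// projects p into the interior via barycentric coordinates.
function closestPointOnTriangle(p, a, b, c) {
  const sub = (u, v) => [u[0] - v[0], u[1] - v[1], u[2] - v[2]];
  const dot = (u, v) => u[0] * v[0] + u[1] * v[1] + u[2] * v[2];

  const ab = sub(b, a), ac = sub(c, a), ap = sub(p, a);
  const d1 = dot(ab, ap), d2 = dot(ac, ap);
  if (d1 <= 0 && d2 <= 0) return a.slice();            // vertex region A

  const bp = sub(p, b);
  const d3 = dot(ab, bp), d4 = dot(ac, bp);
  if (d3 >= 0 && d4 <= d3) return b.slice();           // vertex region B

  const vc = d1 * d4 - d3 * d2;
  if (vc <= 0 && d1 >= 0 && d3 <= 0) {                 // edge region AB
    const v = d1 / (d1 - d3);
    return [a[0] + v * ab[0], a[1] + v * ab[1], a[2] + v * ab[2]];
  }

  const cp = sub(p, c);
  const d5 = dot(ab, cp), d6 = dot(ac, cp);
  if (d6 >= 0 && d5 <= d6) return c.slice();           // vertex region C

  const vb = d5 * d2 - d1 * d6;
  if (vb <= 0 && d2 >= 0 && d6 <= 0) {                 // edge region AC
    const w = d2 / (d2 - d6);
    return [a[0] + w * ac[0], a[1] + w * ac[1], a[2] + w * ac[2]];
  }

  const va = d3 * d6 - d5 * d4;
  if (va <= 0 && d4 - d3 >= 0 && d5 - d6 >= 0) {       // edge region BC
    const w = (d4 - d3) / ((d4 - d3) + (d5 - d6));
    return [b[0] + w * (c[0] - b[0]),
            b[1] + w * (c[1] - b[1]),
            b[2] + w * (c[2] - b[2])];
  }

  // Interior: barycentric coordinates (u, v, w), u + v + w = 1.
  const denom = 1 / (va + vb + vc);
  const v = vb * denom, w = vc * denom;
  return [a[0] + ab[0] * v + ac[0] * w,
          a[1] + ab[1] * v + ac[1] * w,
          a[2] + ab[2] * v + ac[2] * w];
}
```

Run this kernel per triangle (brute force, or only for triangles the acceleration structure returns) and keep the result with the smallest distance to p.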

I can't speak too much about implementation on a GPU other than to say you 
have to organize your data and algorithm to consider race conditions and 
other usual issues of multi-threaded environments.

Matt



Date: Sun, 29 Mar 2020 21:41:31 -0400
From: Ed Manning 
Subject: anyone know how ICE "get closest location" actually works
under the hood?
To: "Softimage Users Mailing List." 

need to reverse-engineer it in c++

thanks!





Re: Midterm Exam

2020-03-14 Thread Matt Lind
Choosing Softimage|3D mode doesn't disable manipulators.  It changes the 
selection model to mimic Softimage|3D's selection model.

XSI behavior defaults to automatically overwriting the selection list with 
each new selection.  Each selection is transient.

Softimage|3D's selection model forces the user to explicitly select / 
deselect.  Each selection is cumulative.  This mode is often preferred for 
larger or more complex selections such as moving points for deformations on 
higher resolution geometry, or modifying texture UVs.

Matt


Date: Fri, 13 Mar 2020 13:37:55 -0400
From: Luc-Eric Rousseau 
Subject: Re: Midterm Exam
To: "Official Softimage Users Mailing List.

I don't think re-installing or runonce would fix a tool like this. It is
necessarily a user preference.
It can also be the interaction mode in the file menu. If you choose
Softimage3D, the manipulators will be disabled.  If I'm not mistaken, when
you first start the software after a clean install, it asks you to pick the
interaction mode.



Re: Bifrost graph - Really trying but it's not happening

2020-02-24 Thread Matt Lind
Meanwhile, with Houdini the files get saved in a proprietary file format 
which is not compatible with other Houdini licenses.  You only get a single 
opportunity (lifetime) to convert your files, and that opportunity requires 
sending your data to Side FX to have it done manually by their staff.  If 
you're in that position, then it's likely you'll still be generating new 
content while Side FX works on the transition, making for a headache in file 
management.

Not exactly practical, Jordi.

Matt



Date: Sat, 22 Feb 2020 19:22:33 +
From: Jordi Bares 
Subject: Re: Bifrost graph - Really trying but it's not happening
To: "Official Softimage Users Mailing List.


Considering the limitations of the indie license in Maya I would not put 
that as an example... for example, with Maya you can only have 1 license per 
studio, with Houdini you have 3 AND these are the full toolset.

Not only that, the Maya indie program seems to be only available for 
Australia, Canada, New Zealand, the UK or the US... which is the reason I 
didn't even mention it in my post... it is plain ridiculous.

jb



Re: Friday Flashback #399

2020-02-08 Thread Matt Lind
I don't ever recall this particular wallpaper.

I'm also amused how the flashback number is arbitrarily assigned to these 
posts.

Matt



Date: Fri, 7 Feb 2020 13:55:46 -0500
From: Stephen Blair 
Subject: Friday Flashback #399
To: "Official Softimage Users Mailing List.

Rage -  SOFTIMAGE|XSI 2.0 wallpaper




Re: Softimage mailing list: 2019 by the numbers

2020-01-13 Thread Matt Lind
rnal issue.  Optionally, you could try a 
virtual machine, but the caveat is you might not be able to access the 
licensing in hardware if it uses an older technology (That's what bit me in 
attempts to get SI3D in a virtual machine as the dongle uses a parallel port 
and modern computers don't have them.  I tried a parallel to serial port 
adapter, which works on a physical machine, but not a virtual machine). 
Then, of course, keep all your content in a safe place so it can be accessed 
when needed.  Finally, document all the quirks that need to be known about 
anything that can be a problem in accessing the content.  For example, does 
a particular scene require a special handshake to render correctly?  If so, 
what is that handshake?  Document it, and make sure you've taken care of any 
dependencies associated with it.

I think XSI will continue to function for a long time, but the most likely 
problems will be the licensing.  Another potential issue is OpenGL support. 
SI3D still runs on modern graphics cards because they still support OpenGL 
1.x and OpenGL 2.x.  That might not continue for much longer as 
manufacturers look for ways to thin the OpenGL support requirements to get 
driver sizes smaller and more manageable.  XSI in particular is heavily 
coupled with Microsoft's COM/OLE.  While I don't think it'll stop working 
tomorrow, the writing is on the wall.  Netview is already disabled in many 
ways from internet security protocols implemented to inhibit Internet 
Explorer's ActiveX scripting environment.  With Windows 7 support expiring 
this week - you may want to download any support needed on that front as 
newer / current versions of Windows might not allow such features to run 
even if offline.

In short, create an environment that is a snapshot of the application's best 
working form, then preserve it.

Matt





Date: Mon, 13 Jan 2020 12:17:40 +0100
From: ron...@toonafish.nl
Subject: Re: Softimage mailing list: 2019 by the numbers
To: "Official Softimage Users Mailing List.

I still use Softimage every day.

I'm slowly trying to switch to Houdini (for the third time) but it's not 
much fun when you know there's a Ferrari in the driveway, but have to use 
the bicycle to get from A to B :-)

Maybe for complex special FX Houdini is great, but for modeling and 
character stuff I keep running into strangely convoluted workflows, or 
basic stuff that's just not possible or takes so much time to create a 
procedural workaround for that you forget what you wanted to do in the first 
place by the time you're done.

Besides that I still have to work on some old projects in Soft every now and 
then.

Btw, how will we be able to revisit old Softimage projects when Autodesk 
stops activating old licenses next year and you have to install it on a new 
computer?  I just did a fresh install on a new workstation, so I hope I'm 
covered for a few years.  But what if I need to reinstall it in a few years 
on a new workstation?

Ronald van Vemden

Toonafish | Owner and Creative
--- 
3D Graphics & Animation Toonafish
Cyberfish Laboratories
tel. +31(0)6 46715175
email: ron...@toonafish.nl



On 11 Jan 2020, at 01:39, Matt Lind  wrote:

Most telling statistic is that Stephen Blair is the top poster of the year, 
and he typically only posts the Friday Flashback thread. I almost tried to 
avoid the list this year and I still ended up in the top ten.

Once support left the product, so did the users.

I'm curious to know how many people are still actively using XSI. I don't 
mean tinker with it, but actually using it regularly for meaningful work.

Matt

Date: Thu, 9 Jan 2020 02:33:28 + From: Matt Morris  
Subject: Re: Softimage mailing list: 2019 by the numbers To: "Official 
Softimage Users Mailing List.

Feeling a little maudlin at those figures. Had a quick look back at list 
activity: 2011 - 2013: holding pretty steady at around 12k posts a year. 
2014: 14k posts (EOL announcement) 2015: 4055 (crazy drop off) 2016: 3282 
2017: 2311 2018: 1058 2019: 657





--


Re: Softimage mailing list: 2019 by the numbers

2020-01-13 Thread Matt Lind
The announcement was 5 years ago, going on 6.  Not 8 years.

Matt



Date: Mon, 13 Jan 2020 11:19:14 +
From: Jordi Bares jordiba...@gmail.com
Subject: Re: Softimage mailing list: 2019 by the numbers
To: "Official Softimage Users Mailing List.

Regarding your question, I am also interested, it has been more than 8 years 
since the announcement so it is quite telling there are still artists using 
it. XSI was so freaking good!

jb








Re: Softimage mailing list: 2019 by the numbers

2020-01-11 Thread Matt Lind
No, don't think so, Jordi.

Houdini is learnable, even by non-VFX artists, but it requires a lot of hand 
holding to get comfortable.  If you're learning by yourself, it's an uphill 
climb, especially because of all the recent massive revamping of the 
application: documentation is out of date and many workflows require 
knowledge of how earlier versions of Houdini behaved.

The biggest weakness of Houdini tutorials is the lack of structure in the 
teaching.  Every tutorial is its own little world, and there are plenty of 
people like Rohan showing how to do different things, but nobody ties it 
into the bigger story arc of how Houdini works in a structured approach so 
you can progress from novice to expert and tackle your own ideas with aplomb. 
SideFX provides tutorials to get you out of the box covering basics such as 
the user interface and windows, but then they just kind of leave you hanging 
there.  There isn't an agreed-upon best-practices arc in any of the learning 
content, just a lot of 'that's just how we've always done it'. 
Okay... but why?

For example, in many videos objects are created and then the instructor dips 
into the subnetwork of the object to create more objects.  See Rohan's 
videos on making a procedural table.  It is never explained why he must 
create all the objects in a single subnetwork, or why you cannot connect 
different objects through the network to create the same result.  He also 
terminates many networks with a 'null' or transform node, but doesn't go 
into detail explaining why that should be done, other than to say that's how 
he does it and not to worry about the details.  Tutorials by others do more 
or less the same thing, leaving many important details on the table without 
explanation.  This leads to other problems.

A tutorial closer to what I'd expect is one by Jeff Lait of SideFX where he 
covers the single topic of 64-bit computing support through attributes.  He 
focuses on a single narrow concept, but goes to town on all the nuances, 
including explaining how many historical elements of Houdini do not support 
64-bit computing unless you know which levers and switches to throw.  Then he 
points out how to do it, in detail.  The acquired knowledge and technique is 
usable in almost any project.  I cannot say the same for many other 
tutorials, which are built in very specific ways that limit their benefit.

Another element of frustration is how many tools are not single nodes, but 
are actually networks of nodes bundled as a preset.  You try to work with 
them only to discover there is a parameter value deep in that network that 
has to be toggled before it behaves to expectation, but it requires decent 
knowledge of the application to know that, and to know where to look.  Ugh.

Example: recreating the XSI direction constraint with lag.

In XSI you can make the camera follow an object around the scene very easily 
by applying a direction constraint to the camera, then reducing the blend 
parameter so it can be allowed to travel within the frame a bit so it's not 
dead center all the time.  Easy peasy.  Try this in Houdini and you'll pull 
your hair out to the point you're scalped.  While it's straightforward to 
apply a direction constraint in Houdini, it's not so easy to get the lag to
behave like in XSI where you want the target to be able to travel ahead a 
bit within the frame as desired.  You have to have knowledge of CHOPS 
networks and meddle with FCurves to finesse it further.  Even then there's 
no guarantee you can achieve the same result.  If you botch the process of 
applying the constraint, or need to apply other constraints to the same 
object, then you get into a mess of CHOPS networks in who knows what state. 
The display in the network view doesn't make this point obvious as there are 
no links connecting the CHOPS to the constraint.  There's a lot of room for 
improvement on conveying this relationship.

Houdini is a very powerful application, but the training material leaves a 
lot to be desired and is not nearly as easy as you claim.


Matt




Date: Sat, 11 Jan 2020 14:30:26 +
From: Jordi Bares 
Subject: Re: Softimage mailing list: 2019 by the numbers
To: "Official Softimage Users Mailing List.

What kind of work do you do? Or aim to?

Houdini is not hard at all unless you want to get into abstract graphics and 
serious VFX.

jb


Date: Sat, 11 Jan 2020 18:09:13 +
From: Jordi Bares jordiba...@gmail.com
Subject: Re: Softimage mailing list: 2019 by the numbers
To: "Official Softimage Users Mailing List.

Check Rohan Dalvi tutorials, some free!


Re: Softimage mailing list: 2019 by the numbers

2020-01-10 Thread Matt Lind
Most telling statistic is that Stephen Blair is the top poster of the year, and 
he typically only posts the Friday Flashback thread.  I almost tried to avoid 
the list this year and I still ended up in the top ten.

Once support left the product, so did the users.

I'm curious to know how many people are still actively using XSI. I don't mean 
tinker with it, but actually using it regularly for meaningful work.


Matt






Date: Thu, 9 Jan 2020 02:33:28 +

From: Matt Morris 

Subject: Re: Softimage mailing list: 2019 by the numbers

To: "Official Softimage Users Mailing List.


Feeling a little maudlin at those figures.

Had a quick look back at list activity:

2011 – 2013: holding pretty steady at around 12k posts a year. 2014: 14k posts 
(eol announcement) 2015: 4055 (crazy drop off) 2016: 3282 2017: 2311 2018: 1058 
2019: 657





RE: Last Install of XSI ( I think) Should I use 2014 sp2 or 2015

2020-01-05 Thread Matt Lind
As for the choice of 2015 vs. 2014: it depends on your needs.

If stability is important or you work with real time content heavily, you’ll 
want 2014 SP2.
2015 has more features, but also more bugs including regressions from 2014 in 
the area of authoring content for games and similar workflow.  The known 
limitations list provided with the release is not comprehensive.

I have the XSI v1.0 installer right here at my desk.  It’s 29 Mb, not 460 Mb.  
That’s for the software alone, not content databases or documentation.  For 
comparison, Softimage|3D v3.9.2 was released with XSI v1.0, and its installer 
was only 86 Mb inclusive of all features.  I wouldn’t call XSI bare metal 
code, as anything with a Microsoft COM/OLE core is anything but bare to the 
metal.  I would say it was rather bare on features, however ;-) (e.g. no 
polygon modeling).

Here’s a breakdown of installer sizes through the years.  I don’t have every 
release, but I have enough to illustrate the trend.  Assume 32 bit unless 
stated otherwise:

XSI 1.0: 29 Mb
XSI 1.53: 31 Mb
XSI 2.01: 83 Mb
XSI 2.03: 82 Mb
XSI 3.01: 103 Mb
XSI 3.5: 118 Mb
XSI 3.5.1.1: 118 Mb
XSI 4.0: 183 Mb
XSI 4.2: 185 Mb
XSI 5.0: 349 Mb
XSI 5.11: 372 Mb
XSI 6.01: 397 Mb
XSI 6.5: 406 Mb
XSI 7.0: 479 Mb
XSI 7.01: 477 Mb
XSI 7.5: 551 Mb
Softimage 2011 SP1: 1.08 Gb
Softimage 2013 SP1: 1.44 Gb
Softimage 2014 SP2: 1.42 Gb (64 bit)
Softimage 2015: 808 Mb (64 bit)
Softimage 2015 SP1: 809 Mb (64 bit)
Softimage 2015 SP2: 809 Mb (64 bit)

Notice some service packs were smaller than the main release.

Matt





Date: Sun, 5 Jan 2020 00:07:22 +0100
From: “Sven Constable” sixsi_l...@imagefront.de
Subject: RE: Last Install of XSI ( I think) Should I use 2014 sp2 or 2015
To: "'Official Softimage Users Mailing List.

20 gigs of texture data?

As a sidenote, the XSI 1.0 installer had about 460 MB, so it roughly doubled 
in size over the years.  Yes, very compact compared to 3ds Max, for example.  
It was never crammed with third party stuff, loads of textures and stuff.  
Just some bare metal, very fine code. ;)


Re: Midterm Exam

2019-12-21 Thread Matt Lind
Wait until you see the final exam ;-)

If you use the animation mixer with any regularity, you'll recognize the 
questions primarily test your knowledge of animation mixing rules directly 
from the manuals.  The exam should only be hard if you don't use the mixer.

Matt



Date: Fri, 20 Dec 2019 22:35:06 -0500
From: Francois Lord flordli...@gmail.com
Subject: Re: Midterm Exam
To: softimage@listproc.autodesk.com

That is one hard test!




Midterm Exam

2019-12-20 Thread Matt Lind
I used to give this to my students as the midterm exam for an Advanced XSI 
course which focused on the animation mixer and mental ray rendering.  It is a 
closed book exam which means no notes, no books, no computers, no reference 
materials.  You are only allowed a pencil, eraser, and one blank 8.5 x 11 inch 
sheet of scratch paper.

Each question must be answered with complete sentences to receive credit.  No 
points for the bonus question until all other questions have been answered to 
completion.

Exam duration: 60 minutes.

The best score by my students was a 96.  The average score was 84.  Let’s see 
how you do.

NOTE: This exam was created when XSI v3.5 was the current release (2003).  Some 
behaviors / workflows may be different compared to what you use today.



-
Name:
Date:


PART I: Animation

1) What requirements must be met for two action clips to be able to blend their 
values within the Animation mixer?

2) Name three advantages XSI constraints have over key frame techniques for 
animating objects:

3) Name three advantages for using the animation mixer to animate scene 
elements (as opposed to setting key frames or applying constraints directly on 
the objects):

4) How does a 'linked parameter' affect an object's animation differently from 
a constraint or FCurve?  What are the advantages of a linked parameter?

5) A sphere is animated via an FCurve from the global origin to roll along the 
X-axis until it reaches X=10 at frame 50.  The key frames are applied to the 
sphere's global position and local orientation parameters.  From frames 51 
through 100, the sphere is animated via a pose constraint from X=10 to another 
arbitrary point in 3D space designated by another object.  The sphere moves 
from X=10 to the designated target via an adjustment of the constraint's 
blending value from 0 to 1 (full off to full on).  If the constraint is then 
stored as a source and instanced onto the animation mixer as an action clip on 
frame 1 (thru 50), what happens to the sphere when the play button is pressed?

6) Name two methods/tools for adjusting influence from one action clip to 
another on the animation mixer (i.e.: transfer full influence from 1st clip to 
the 2nd clip):

7) What is the difference between an action 'source' and an action 'clip' in 
the animation mixer?  How can editing data within each affect the rest of the 
scene?

8) How do you make an exact copy of an action clip in the animation mixer 
(including clip effects and other adjustments)?

9) What is an offset map?

10) What are the restrictions of an offset map?  (i.e. pros and cons)

11) Name 3 possible uses of an offset map:

12) What is the difference between a pose clip and an action clip?  Name 3 
reasons why you may need to use a pose clip:


PART II: General tools


13) How does the explorer view differ from the schematic view?

14) Name 3 things that can be performed in the explorer view that cannot be 
performed in the schematic view:

15) What's the difference between the Match translation/rotation/scale tools 
and the position/orientation/scale constraints?

16) What's the difference between a selected parameter and a marked parameter?  
What's the purpose of each?

17) What requirements must be met before a parameter can be marked?

18) Name three possible methods for marking parameter(s) of scene elements:

19) What is a proxy parameter?  What are possible uses of a proxy parameter?

20) What is the Object viewer and what advantage(s) does it have over other 
viewers?

21) What is a model and why should it be used?


BONUS:

Name as many methods as you can for editing parameter values: (1 point each)


---




Matt Lind
Animator / Technical Director
Softimage Certified Instructor:
   Softimage|3D
   Softimage|XSI
matt(dot)lind(at)mantom(dot)net





Friday Flashback - 20 years ago today

2019-12-06 Thread Matt Lind
Found this while exhuming old email for another purpose.   A digest email from 
the Softimage|3D mail list from exactly 20 years ago today.  I removed the 
attached emails.



For reference - IRIX was still a prevalent platform (but rapidly losing 
ground), XSI was still known as “Sumatra” and only rumored to exist, VHS tapes 
were the rage in video technology, and many companies around the world were in 
panic mode preparing for Y2K which was only a few weeks away.





SOFTIMAGE 3D Digest Monday, December 6 1999 Volume 01 : Number 1152



In this issue:



  Re: VHS Tapes Stefan Tchakarov 

  Re: VHS Tapes Stefan Tchakarov 

  Re: FOV effect"Michael Arias" 

  mocap "laurence hall" 

  VRML- Animation   "Dirk Wolfram" 

  Re: Irix problem  Adam Seeley 

  Re: VRML- Animation   Hajo ten Thije 

  Keyshape problem ...  "DC Comeau" 

  Re: Keyshape problem ...  "benny" 

  Re: Camera FOV rig (thanks)   Diego Gutierrez Perez 


  Unrecoverable scene?  Diego Gutierrez Perez 


  RE: sumatra delays"Tapu, Dragos " 

  RE: Exporting a model.Bruce Priebe 

  Re: 3D texture NO deformation animatica 

  Re: mocap "Alvin Sebastian Hoo" 

  Re: sumatra delays"Jeffrey Twiss" 

  RE: sumatra delays"Nick Michaleski" 


  RE: mocap Gordon Cameron 

  RE: mocap Gordon Cameron 

  Re: sumatra delaysStephan Haidacher 

  water surface and wave... "Calvin Cheung" 

  Re: mocap "Lawrence Nimrichter" 


  Re: Irix problem  keith 

  Re: mocap Ken Cope 

  RE: mocap Gordon Cameron 



See the end of the digest for information on subscribing to the 3D or 3D-Digest 
mailing lists and on how to retrieve back issues.



--

End of SOFTIMAGE 3D Digest V1 #1152
***

To subscribe to 3D-Digest, send the command:

subscribe 3d-digest

in the body of a message to "majord...@softimage.com".  If you want
to subscribe something other than the account the mail is coming from,
such as a local redistribution list, then append that address to the
"subscribe" command; for example, to subscribe "local-3d-digest":

subscribe 3d-digest local-3d-dig...@your.domain.net

A non-digest (direct mail) version of this list is also available; to
subscribe to that instead, replace all instances of "3d-digest"
in the commands above with "3d".


Re: Friday flashback #392

2019-11-29 Thread Matt Lind
Remember?  I'm still using a serial dongle.

Matt



Re: The Softimage mailing list

2019-11-18 Thread Matt Lind
I would suggest sooner.  Say December 15, 2019.

For the past few years, as Luc-Eric alluded to, there have been hiccups with 
the list server near the holidays.  Better to make the transition before the 
next hiccup while everybody is still in their daily routines to notice what is 
happening.  Not that this list gets much traffic anymore, but making the 
transition sooner would reduce the noise and panic from people asking what’s 
going on.

Matt





From: softimage-requ...@listproc.autodesk.com
Sent: Monday, November 18, 2019 5:48 PM
To: softimage@listproc.autodesk.com
Subject: Re: The Softimage mailing list


Well, that day has come.

I am not watching this list as often as I used to. My Softimage days are behind 
me now. I used to open it up to present some example scenes I had for my Arnold 
class at NAD that I did not have the time to convert to Maya. But now all my 
scenes work in Maya so I don't need to open it anymore.

I still miss Softimage. Maya simply doesn't have the elegant workflow to match. 
Houdini is better but not as simple and intuitive as Soft. I have to admit 
though that I've had a lot of fun learning Solaris in the past few months. It 
reminds me of when I first played with ICE. It reminds me of the sheer power 
that is accessible at your fingertips but you don't know how to harness it yet.

You guys keep talking about the end of a community. As Luc-Eric said, we just 
need to move over to the google group, which currently serves as an archive. It 
has been archiving the posts for many years now. It has 1328 registered members. 
I am the owner of that group and I can switch it from an archive to a normal 
group at any time. We just need to decide on a date.

It could be on January 1st.

Francois


RE: Friday Flashback #387

2019-10-15 Thread Matt Lind
Yes, the snow storm.  If I have it at all, it's on some old CDs buried in a 
cardboard box somewhere.

The FXTree was featured in XSI 2.0.  I have Mark's videos showing it off.

Manta was also used a little for XSI 3.0 to show off the revamped animation 
mixer and features such as cycle detection / looping.


Matt



Date: Tue, 15 Oct 2019 15:39:24 +0200
From: “Sven Constable” 
Subject: RE: Friday Flashback #387
To: "'Official Softimage Users Mailing List.

That scene with the snow storm? I remember it vaguely from a presentation of 
XSI 2.0. Or perhaps it was version 3 because the FX Tree was also featured. 
But Manta was in it, walking against the snow (old particle system) :)

Sven



Re: Friday Flashback #387

2019-10-14 Thread Matt Lind
There’s an outside chance I still have the source data as I had to demo Manta 
for the local reseller back in the day as part of the XSI 2.0 release.  I 
definitely have a work-in-progress animatic.


Matt




Date: Mon, 14 Oct 2019 11:49:06 +0200
From: Stefan Kubicek s...@tidbit-images.com
Subject: Re: Friday Flashback #387
To: Official Softimage Users Mailing “List.”

I remember this one, thanks for posting :-)



Re: render tree mixer/texture limit

2019-09-06 Thread Matt Lind
To get around the node limit, did you ever consider the FXTree?

Each texture in the scene is represented by a clip in the FXTree's Clips 
menu with an input and output.  This allows you to apply image FX (paint, 
composite, layer, transform, crop, filter, ...) to each texture without 
increasing the size of the render tree.


Matt


Date: Fri, 6 Sep 2019 11:25:44 -0400
From: Kris Rivel 
Subject: Re: render tree mixer/texture limit
To: "Official Softimage Users Mailing List.

Yeah, it's an odd project and honestly we tried Maya, C4D and Houdini and Soft 
was the best option! Basically have a bunch of growing textures UV mapped 
onto tracked geometry to simulate a spreading ink/paint looking thing. 
Simulations were too difficult to control, too much R&D, and didn't allow us 
to tweak little parts of the effect or create seams, etc. Made a crap load 
of offset animated textures, brought them into the render tree, moved a 
bunch of texture supports around and done. Looks/works great…just hit a 
ceiling with the node limit. Oh well.



RE: Friday Flashback #384

2019-09-02 Thread Matt Lind
The logic is that the menu shows the shading mode that will become active when 
you middle click the menu.  If no mode had been chosen yet, then it can't be 
anything other than NONE.  Therefore NONE would make perfect sense, 
especially if you read the manuals. :)

If you start up the application then immediately middle click the shade mode 
menu, most computers will switch to SHADE mode, but not all computers did 
that.  Some stayed in wireframe.  Therefore it would be wrong to display 
WIRE or SHADE in the viewport title bar as the initial value.  The only 
acceptable time WIRE or SHADE could be the initial value is if the 
application were programmed to ensure one of those modes would be active 
upon middle click.

Displaying the next mode makes more sense when the menu has only two choices 
such as a boolean value (check box), but even then I would prefer to see the 
current setting.  The shade mode menu had many choices depending on what 
kind of view it was.  So to show what was next was rather confusing when 
everything else in the UI was displaying the current setting.

In my opinion it should've displayed the current shade mode.  Believe it or 
not, there are instances where two different shade modes can produce the 
same result which only makes the displayed value all the more confusing. 
When you're in a collaborative environment working with other people and 
discussing things, you need to keep track of various settings for 
comparisons / critiques of work so you can give constructive criticism and 
instruction.  Can you imagine if every menu in the application showed you 
what would happen under a middle click instead of the actual value?  You 
wouldn't be able to figure anything out.  It could say SCHEMATIC, but you'd 
be clearly looking at the top view as evidenced by the objects in the 
display.  Instead of showing the current frame number, the timeline shows 
the next frame you'll skip to when you set a keyframe (or the previous key 
you already set).  Huh???

The only time I'd find it handy is if it could display which click will 
crash the application.  There were plenty of those opportunities.

Matt




Date: Mon, 2 Sep 2019 00:44:52 +0200
From: “Sven Constable” 
Subject: RE: Friday Flashback #384
To: "'Official Softimage Users Mailing List.

So a consistent, logical solution would have been “NONE” as the default 
label, I agree. But wouldn't it be more confusing, especially since it's 
only displayed once at start and then never again? Every viewport had to 
display something at start (wireframe back then), so it's not that 
disorienting because it showed the correct shading mode at first launch.

As soon as the user started to work and switching back and forth between 
shading modes (most likely in the camera view only), it switched gears and 
changed to show which mode the user will switch to. Kind of a work mode that 
kicked in as the user changed the view for the very first time.

It might be a quirk somehow, but it was useful to me (after I got it) and I 
think it was intended. ;)

Sven



RE: Friday Flashback #384

2019-09-01 Thread Matt Lind
Yes, it was only the case before the first switch, but that is exactly the 
problem as not all viewports will endure a switch.

It's very common to change shade mode in viewport B as that's the official 
camera view, but less so in the other viewports as they often remain as 
front, right, or top projections with no need to be anything but wireframe. 
From time to time one or more of the other viewports will be converted to a 
perspective view and in those cases shade mode changes are common.

If the desire was to have the previous mode displayed, then the solution is 
to show SHADE not WIRE as the initial value (or show an empty value such as 
'-').  By showing WIRE as the initial value, and have the behavior change 
upon use, it puts the system into a mixed viewing mode where some viewports 
(front, right, top) will show WIRE instead of SHADE because they haven't 
been manipulated yet, but the perspective view will likely show SHADE when 
it's in wireframe mode.

If you've started a fresh session of Softimage|3D then this is likely not 
too big a deal as you have all your edits at the front of your mind. 
However, if this is a scene loaded from disk that you worked on some time 
ago, you're not going to remember which viewports have had shade mode 
changes and which haven't, in which case seeing WIRE vs. SHADE when two 
adjacent viewports use the same shade mode will be disorienting.

Matt


Date: Sun, 1 Sep 2019 14:44:33 +0200
From: “Sven Constable” 
Subject: RE: Friday Flashback #384
To: "'Official Softimage Users Mailing List.

Alright, maybe the word quirk is appropriate, however I'd rather call it 
“unusual but useful”. :) I think the initial value after launch (WIRE) was 
just because there was no preassigned state. I agree it might confuse a user 
at first, having two viewports using different shading modes effectively 
while both having “WIRE” in the header. But that was only the case before the 
first switch. After that, all viewports used the “before rule”, no?

Sven



RE: Friday Flashback #384

2019-09-01 Thread Matt Lind
It's definitely a quirk.

When you launch the application the title bar shows "WIRE" as the shade 
mode.  Which it is.  But when you toggle to SHADE mode, it still shows WIRE 
because that's what'll happen if you click it.  In other words, the rules 
changed.  A big no-no because it'll deceive the user.  It isn't until the 
menu is changed again that SHADE or another mode will be displayed.

Example:

If you adjust the shade mode of viewport B to be Shaded mode but don't touch 
viewport A, then both will display WIRE as the mode, but for different 
reasons.  That's the problem.  The bug is the initial value is WIRE when it 
should be something else, OR, change the logic to show the current mode 
instead of the toggled mode.

Matt




Date: Sat, 31 Aug 2019 00:46:28 +0200
From: “Sven Constable” 
Subject: RE: Friday Flashback #384
To: "'Official Softimage Users Mailing List.

I don't think it was a quirk but intended. :) Even if that feature likely felt 
illogical at first, it's intriguing how much thought went into the tiniest 
bits of the GUI. Btw it was the only feature where a “before-state” was 
displayed, I think.

Workwise it made sense because switching to a viewmode like Shaded or 
Textured would take some time. Knowing to which viewmode you'll switch was 
surely an advantage.

Besides this, it's kinda obvious which viewmode is currently used because the 
viewport is displaying it already. Therefore, having a label that indicates 
to which viewmode you will switch is more useful. Otherwise it would be 
redundant information.

I work that way even today (middle click to switch modes) and to be honest I 
would prefer the old way, telling me which viewmode I'll switch to rather 
than which viewmode I am currently in.

Sven



Re: Thoughts on USD in comparison to Softimage 'Referenced Models'

2019-08-30 Thread Matt Lind
I worked at Omation too.  While I didn't write O-Net or metascene, I sat in 
the same office as those who did and was part of discussions.

As Steven said, metascene wasn't that similar to USD, but there were also 
two versions.  Helge's version was the first version, but it was replaced by 
Adam and Mike when he left, and their version was much more developed.  It 
was mostly designed to work around XSI's limitations with access to scene 
contents through the .scntoc as all the file formats were black boxes.  The 
main problem we were bumping into was that 32 bit computers couldn't hold 
all the data we needed to load and render.  We needed a system to offload 
contents selectively when it got to be too much.  We also had significant 
problems with data corruption and needed a means to work with data reliably 
by having some form of control of it so it could be inspected and repaired 
if needed.

Barnyard had a very deep revolving door of talent coming and going, and with 
that, a lot of inside knowledge of how things were done often disappeared 
without record because it was an ad-hoc production and very little was 
documented.  In my particular case, after I fixed the animation pipeline I 
assumed shader writing responsibilities, but my predecessor did not leave 
source code behind for many shaders, or documentation of which ones were 
deployed in production for situations when multiple versions of the code 
existed.  So for me, metascene was a way to track down which scene files had 
gone through the pipeline using these rogue undocumented shaders that needed 
to be addressed when the shots got kicked back for re-rendering.  Without a 
shader .dll, the artist cannot open the scene file.  Without the source code 
to the shaders, I couldn't produce a shader .dll.  That meant the entire 
shot would have to be rebuilt from scratch making management very unhappy. 
I spent many long nights reverse engineering these shots to figure out how 
to rewrite the shader code to match the results of shots that had already 
gone through the pipeline with the rogue shaders - and across two different 
platforms.  Ugh.

When artists created work at their workstation, they'd iterate all day long 
until they had something worthy of submission for approval.  At that point 
they'd publish to O-Net (the system), and plugins on the back end would 
intercept the scene data and re-encode it into a list of files in a 
proprietary file format.  Each hierarchy in the scene would become its own 
file, pretty much like a model, but described in XML (or another format) 
instead.  The relations of these files would be recorded in a SQL database 
and other places so that when they went onto the render farm they could be 
rendered and tracked.  Once the system was fully fleshed out, front ends were made to 
automate tasks so scenes could be re-rendered if necessary, or fabricated 
entirely from the command line and submitted for rendering (mostly for 
rudimentary tasks like mattes).  I cannot remember for sure, but I think 
later in the production all scenes were loaded from O-Net and not the scene 
file created from the previous save operation.  This meant the scene was 
assembled from the re-encoded and tracked data, not the scene file.  This 
helped limit corruptions and other issues.

O-Net / Metascene was more a scene manipulation and analytics tool.  But as 
stated above, Softimage's file formats were black boxes, so there were 
limits to what could be done.  Not all assets could be successfully 
re-encoded and produce the same results.  A lot of guess work was involved.

USD is a specification of how to handle data and resolve disputes when 
collisions arise.  It's not an analytical tool, but analytic tools can be 
written to work with USD files.

Matt





Date: Thu, 22 Aug 2019 08:41:32 -0700
From: Steven Caron 
Subject: Re: Thoughts on USD in comparison to Softimage 'Referenced Models'
To: softimage@listproc.autodesk.com

It would be purely speculative to comment about Animal Logic's pipeline. So 
I won't.

I can see what you're saying about people who used Softimage probably 
appreciating something like USD more than users of other packages, but the 
desire to work this way has been around for a while. We had a slice of this 
with Softimage a lot longer than most; we also had better render passes than 
everyone else for probably a decade. Maybe links are being made but just not 
as deep as you think. People have wanted something like USD for many years, 
I think you see people embracing it because they needed it so bad.

I did work at Omation, I honestly am a little fuzzy about the details but I 
used (and tested) Helge, Adam, and Mike's work on 'metascene'. But it wasn't 
that similar to USD and it kinda solved some issues which were unique to 
Softimage at the time. Helge then went to work at Avid and helped address 
those issues with the 6.0 referencing rewrite.



Re: Thoughts on USD in comparison to Softimage 'Referenced Models'

2019-08-30 Thread Matt Lind
Models are not really comparable to USD.  Models are more similar to 
Houdini's concept of a Digital Asset than to a scene.

The purpose of a model was to be a container for an asset so it could be 
more easily included in large scenes and independently manipulated with 
regards to versioning and level of detail control.  The main message of 
'Sumatra' and DS was that artists were to be removed from the burden of low 
level manipulation, such as setting key frames, and able to focus more on 
higher levels of control such as choreography via animation mixing.

XSI Models are derived from the "hierarchy" (.hrc) from predecessor 
Softimage|3D.  One could argue the Model is a superset of the .hrc as the 
main difference is the addition of the animation mixer (independent 
timeline).  The .hrc had level of detail control and other mechanisms, but 
they were rarely used due to lack of tools to do it in a practical manner by 
the end user.  The Softimage|3D implementation required a lot of digging 
through menus and strong self-discipline in organization of files.  For some 
operations, one had to jump out to the command line and use DbTools - a 
proprietary environment for running shell scripts developed by Softimage.

Softimage|3D had an ASCII text scene file format which related multiple 
files, this was closer to the concept of USD.  The main file was the Scene 
Description (.dsc) which contained all the high level information of what 
elements were in the scene and how they were related.  This included 
relations such as parent/child, master/slave relation of constraints, light 
and model associations, groupings, custom effects and their parameter 
settings, asset LOD, and the list goes on.  It even includes position of 
nodes as represented in the schematic view.  The Scene Description was 
accompanied by the Scene Setup file (.sts) which contained description of 
the user's environment such as how the viewports were arranged, current 
settings, render parameters, etc.  All the other files pointed by the Scene 
Description and Scene Setup were binary files and contained the actual 
content (cameras, lights, models (hierarchies), materials, animation, ...). 
This meant a scene was a conglomerate of many files - handy for versioning 
and substitutions, but took forever to load/save when the database started 
to fill up.  A single scene could consist of thousands of files, and each 
file would have to be checked individually for version collision, and sync 
with the other assets in the database.  If scene save operation was 
interrupted (e.g. crash), you had many orphaned files floating around in 
your database creating all sorts of havoc.  Since the default number of 
versions kept was 4, and all hard drives of the day were disc drives with 
small caches and slow spin rates and 10 Mbps ethernet (if lucky), it took 
forever to get work done.  That was a strong motivation behind XSI using a 
single file for its scene file format, and the .scntoc to mimic (to a much 
lesser degree) the Scene Description.  Ironically, Alias|Wavefront did the 
exact opposite as Power Animator used a single file scene format, but 
migrated to the Softimage|3D style with Maya.

USD is more robust in that it's a rule based language to do the work of 
assembly and has some amount of intelligence to handle collisions and other 
problems.  XSI models had limited ability to do that in the delta, but it 
was self contained.  XSI Scenes had some back door exceptions to the rule to 
accommodate models in certain situations, but otherwise were passive and not 
available to the user to manipulate.  The Softimage|3D scene description 
language was more primitive than USD in that it was more like a passive list 
of commands to execute than a language with decision making intelligence. 
Although error checking existed, it was largely assumed the contents of the 
file were ordered and compliant.  If a model was listed twice, for example, 
the parser would not catch it and made every effort to import the model and 
establish all the relations.  It would be left to the other operations 
downstream to catch the problem and report it.

USD isn't rocket science, but it serves a very important role that's been 
long needed.

Matt



Date: Tue, 27 Aug 2019 10:13:05 +0100
From: Jordi Bares 
Subject: Re: Thoughts on USD in comparison to Softimage 'Referenced Models'
To: "Official Softimage Users Mailing List.

I guess the concepts are rooted in our reality and therefore it is only 
natural the approaches are not that dissimilar, but it is fun to realise 
Softimage Models were so advanced at the time as to be compared and all to 
USD.

IMHO USD is a major leap forward and models and references are a minor 
attribute as the real meat is the whole versioning in combination with 
layering which somehow makes me think of XML document conversions for the 
web and data mining but I digress..

USD is a framework and the scope is enormous, to the point that you don't 

Re: render tree mixer/texture limit

2019-08-30 Thread Matt Lind
I believe mental ray has a limit of 64 textures per triangle, not per 
object.  However, since XSI only allows one material per triangle (not to be 
confused with material shader), the limit is effectively 64 textures per 
material.  That is described in the mental ray manual in the early sections, 
but I'm too lazy to look it up right now.  I wouldn't be surprised if other 
renderers have the same limit as it's a common choice.

A good test would be to apply a default material to the object (without 
textures).  Next, create two polygon clusters and apply unique materials to 
each cluster, each with 64 textures.  None of the textures used by the first 
cluster material should be used in the second cluster material, and vice 
versa.  If it renders successfully, it means the texture limit is per 
triangle (per material).  If it fails, it means there's likely a bug in the 
XSI to mental ray translator and you should file a bug report to 
supp...@softimage.com.

Matt


Date: Thu, 29 Aug 2019 23:17:40 -0400
From: Kris Rivel 
Subject: render tree mixer/texture limit
To: softimage@listproc.autodesk.com

Hey all…been ages but still here! Weird question but anyone know if the 
render tree has a max limit on the number of nodes, mixers, or textures it 
can load beyond any memory issue…just a physical “number” limit? I have about 
65+ texture nodes plugged into a stack of mixers and all was working 
fantastic but I noticed some are missing and only show up in render if I 
unplug a few. So I'm thinking that I've hit some sort of ceiling. I have no 
memory issues at all, they're just not there and mysteriously show up if I 
cut some out. If there is a limit, anyway to bypass it?

Kris



RE: Friday Flashback #384

2019-08-30 Thread Matt Lind
I just fired up my copy of SI3D and can confirm.  The viewport title bar 
shows the shading mode that will be activated when clicked, not the current 
shade mode.

SI3D had a lot of those quirks.

Matt




Date: Fri, 30 Aug 2019 14:32:11 +0200
From: “Sven Constable” 
Subject: RE: Friday Flashback #384
To: "'Official Softimage Users Mailing List.

IIRC it was always that way in SI3D and in a few versions of XSI too. 
Indicating which view mode was used before the current mode, so when you 
middle click to switch it will revert to that view mode. Actually quite 
handy.

Sven



Re: Friday Flashback #383

2019-08-23 Thread Matt Lind
This screenshot takes me back.  I like how the first tool listed in the 
"Tools" section is "Delete All".  What kind of message were you guys trying 
to send? ;-)  Also interesting "Revert" is further down the list.  Shouldn't 
it be immediately next to Delete All?

The screenshot is from the Twister era, but not Twister specific.

A "phenomena" is mental ray terminology for any number of shaders connected 
to create an effect.  Most often it's what we'd refer to as a render tree 
graph, but often it would be an encapsulated shader preset such as the 
material shaders which were exposed to users as individual shaders but were 
actually connections of many shaders under the hood.

The Phong material shader, for example, was actually a phenomena because 
phong.dll only computed the direct illumination (specular highlight, 
diffuse, ambient terms).  The reflection, refraction, incandescence, 
translucence, and global illumination were outsourced to different shaders 
behind the curtain in a shared library.  Phong, Lambert, Constant, Ward, 
Cook-Torrance, Anisotropic, and other illumination shaders were constructed 
in this way. While the material library has since been overhauled and 
updated, the toon shaders and many others still use the phenomena 
declaration and can be verified by inspecting the shader .spdl in a text 
editor.

Many of the mental images supplied effects were phenomena too such as the 
car paint shader.


Matt


Date: Fri, 23 Aug 2019 13:47:33 -0400
From: Stephen Blair 
Subject: Friday Flashback #383
To: "Official Softimage Users Mailing List. 



An old Softimage Sumatra screenshot. Still a bit of DS stuff showing. And 
Get > Phenomenon? That's Twister stuff, isn't it?

https://u9432639.ct.sendgrid.net/wf/click?upn=lIXdN6W56FnEjHCwrBXqOq0HQNpV0huvAGw1zu6Xp8eVQuk2cNZiNFjx2k-2FfTNchE18B7lYLnvRU16qf6Hf-2B5l0okETS4xN3UfKCvMXrkPWwhyjGUy-2FUyWUtCRdZkikdOpu6H3p-2BPpukiavd970ESMVtwgTLFyWgHE-2B6dIOd-2BYYne0SOtZj7q2SXSx2xaZxXG9rXk0D-2FK8X4MdU8-2FLqO9Zi5aUvr-2Bs2Th50IHG183NxXvx8j-2FUg4KOYlIdOhTKSR2vl5UnZkv4DjE0ZqAf4flasXQSDdY19kPnPo4fe3zB-2FUgewTOt7MAUDFmrzGrKJJEs290V66JrOJltWkVEc1XOfqIUVaXVfr55XHTe0i-2B0W9La0R5H-2BAjiQ0vnSJnL6s_a6oQc7tnfcb0GKvoO27fPkrQ0ATQyF1SDBXJOg7-2FbuQZlPCKSaXuwsQkZez2lLdGgu9cCAqgnBI-2FvxTbgh5RFtNc8sNeVQRWEX6UGdH8dTg-2BEuhRs6iVHonjN9dqV4qVVxbhnZ75aZq7MnqDtnZcpUOEGUjq4HSDCBg-2B9-2BIqpXWmIQrBkZc-2FgWasN0Uwk9okPzpCxD7si1plCcf5VZ0Z-2BBrvQf0ui2cc98ta-2F9FNi14-3D




RE: Friday Flashback #382

2019-08-16 Thread Matt Lind
Me too.

I imported old work from Softimage|3D only to discover that despite the 
abundance of new features by comparison, XSI's toon shading couldn't produce 
the same look.  Now I have to re-light my scene.  :(

Specific differences between SI3D and XSI toon shading:

1. SI3D toon shading followed a phong shading model to place highlights and 
shadows (specular, diffuse, ambient).  XSI draws the lines between these 
boundaries differently.  To match the look you must manually specify the 
thresholds per material.  Even when you match the look from a particular 
camera angle, everything is ruined when the subject, camera, or light moves.

2.  SI3D toon shading had the option to make the ink lines the same color as 
the material's ambient color.  In XSI you must use the override feature and 
specify the ink color manually per shader.  This creates additional nodes in 
the rendertree in the form of color share nodes to plug into the ambient 
(surface) and ink ports to ensure they stay in sync.

3. SI3D ink lines taper very nicely to a sharp point when the line is drawn 
because geometry folds back on itself (as opposed to a material or object 
boundary).  XSI toon shaders don't do that quite as well.  The distance 
threshold doesn't seem to respect the values well either.
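The threshold behavior in point 1 can be sketched in GLSL, in the same old-school style as the shader code that circulates on this list (this is generic banding code, not Softimage's actual toon implementation; all uniform names are made up):

```glsl
uniform vec3  uLightDir;       // normalized light direction, view space
uniform vec3  uAmbientColor;
uniform vec3  uDiffuseColor;
uniform vec3  uSpecularColor;
uniform float uDiffThreshold;  // e.g. 0.4 - ambient/diffuse boundary
uniform float uSpecThreshold;  // e.g. 0.9 - diffuse/specular boundary
varying vec3  vNormal;

void main()
{
    float d = max( dot( normalize( vNormal ), normalize( uLightDir ) ), 0.0 );

    // Quantize the phong-style lighting term into three flat bands.
    // Moving a threshold moves the painted boundary on the surface,
    // which is why hand-matched thresholds break as soon as the
    // subject, camera, or light moves.
    vec3 color = uAmbientColor;
    if ( d > uDiffThreshold ) color = uDiffuseColor;
    if ( d > uSpecThreshold ) color = uSpecularColor;

    gl_FragColor = vec4( color, 1.0 );
}
```

The per-material threshold matching Matt describes amounts to hand-tuning `uDiffThreshold` / `uSpecThreshold` for every material until the bands line up with the old renders.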

That's not to say I don't like the XSI toon shading, as I was one of the 
primary beta testers of this tool back in the day.  I just don't recall 
having these problems back then.  Something changed over the years.

The other irony is that the original scene was created overnight in SI3D, 
but it's taking me much longer than that to do it in XSI despite more tools, 
features, and much faster hardware at my disposal.   H.


Matt


Date: Fri, 16 Aug 2019 13:08:28 –0400
From: Sven Constable
Subject: RE: Friday Flashback #382
To: "Official Softimage Users Mailing List.

used the MR toon shading just yesterday. What a coincidence :) 



RE: Friday Flashback #345

2018-07-27 Thread Matt Lind
The tessellation control is the ability to see the wireframe of the geometry 
after displacement calculations.  That feature does exist in mental ray, 
however, Softimage chose not to implement it as it required the use of 
contour shaders which Softimage also did not implement until XSI 7 or later.

Refresh optimization was implemented as "pixel tagging" which worked in 
early versions of Sumatra/XSI.  It was very buggy and removed around XSI 2 
or 3.

In order for the feature to work, shaders had to implement specific routines 
to communicate with the renderer core.  There were also problems for certain 
types of shaders, such as rayswitch or sprite where the shader would either 
modify or bypass the ray counters throwing off mental ray's internal state 
of what was happening.  This led to situations where pixels would either be 
falsely tagged or not tagged at all.  In the end, it was more work to figure 
out which pixels to tag than to just render the images brute force.  The 
idea to tag the pixels was more for cases where you'd render the entire 
scene in a single pass.  With the introduction of passes, pixel tagging 
wasn't as high of a priority.

Matt



Date: Sat, 28 Jul 2018 00:30:15 +0200
From: “Sven Constable” 
Subject: RE: Friday Flashback #345
To: "'Official Softimage Users Mailing List.

Nice read. The features were amazing at the time (many of them still are 
today). However, I think two features were not accomplished :)

User Interface “Tesselation Control: Display of tesselated geometry used in 
rendering (e.g. for displacement mapping)”

Maybe they referred to subdees. If so, it's kinda correct but didn't include 
displacement mapping by mental ray.

Interactive Rendering

“Refresh Optimization: Re-renders only the pixels affected by tuning”

I don't think so. Mental ray introduced a feature called “incremental echo”, 
to speed up scene translation. Was demoed by someone from Softimage and I 
still have the video. But it was several years after Twister was announced. 
Also that feature wasn't per pixel based, to my knowledge.

Other than that it still amazes me how ahead of its time Softimage was.

Sven



Re: Friday Flashback #345

2018-07-27 Thread Matt Lind
Here’s the accompanying Twister sales sheet available at the Softimage booth 
the day the announcement was made:


https://u7507473.ct.sendgrid.net/wf/click?upn=5SmYwFIJXHmC5X9wAP0G6mg4oLGBuQENbeDkYXezg3m6vjHxJcC6rUMd8QE2MtqzI-2F9k-2BWBXKW27XRR1WxPwQbX9jjJbw57aMI7fvZB3rSESDquECTsqSY46C9bBIiK61wa6FetSii6vEKfaGKAG5nS9GO7NluaqwyXJExbWhCiROv9vj93KO7Ns0nVHVPsFZMtpdPwyMRrTyqRKIN4d0jnU4X1hYLXAtv3ugr3DEBFjHHvA8pouuv7Z6mmZLT42cwLmTcCXVoOL6YB2qz-2BWodOZCeveJO9s5smpsAzGKQtV3sPXDD4IonWTMtBogXBUAlf6i8gbGlj-2F4ej98-2By3UJcD2Dx7QN-2FjiSws7kuKz8ndbynli67TMdiPr73RNSu0oumjtx1URGDht6yVf1Acdw-3D-3D_a6oQc7tnfcb0GKvoO27fPkrQ0ATQyF1SDBXJOg7-2FbuTyczslZWG-2Fj5ghTLHLJzNhUQv-2FwAX5NGN6k9r-2FUYPxGWapu4vuZWAtTquJFgyQA9dmIgMcHPYz20Eq-2BerSI93uOzQVUdX9msY2nl3PAcFUpP7S8vXvZDie5mHMHt-2Fm4VMAytylLWTfAn3Xj8zARz3QzZYG3X-2F2OozeDRt2wTCvVPDGaH8iiLEvs2KR73yPElQ-3D


Matt





Message: 1 Date: Fri, 27 Jul 2018 14:20:07 -0400
From: Stephen Blair 
Subject: Friday Flashback #345
To: "Official Softimage Users Mailing List.

https://u7507473.ct.sendgrid.net/wf/click?upn=5SmYwFIJXHmC5X9wAP0G6mg4oLGBuQENbeDkYXezg3m6vjHxJcC6rUMd8QE2MtqzowgyFFK4aAsDEzrdrVTV4Q6qbgbc-2FgnnpGob6G467zR75G56-2BuWz1AMtPXsoVdDXV-2BcQeKP7tI8SfI-2Feh9je47atZhc-2BurJ60RvbttIed4dAEMQ7ZLXe4P8CjxI3MuVA-2BC1YI4W9cuRTblUkTV0ThqzAfVQBEMZKrNTAdc7ACQuSit3iaW3y0fVt0pWCCM1YEkq9w4SeiTQoxADGO666O0lt-2BEHcLIgs9FogOixvSKF3THu6MkRH5yQ3bXya3AlojKNy0i682JdfkQnCinDPcNF911SqV7938FS3Ofl-2FuW0NeDPrVF1wy3buYDqgeSeFYoE-2FPzqi7DcIr40Dr7wY4g-3D-3D_a6oQc7tnfcb0GKvoO27fPkrQ0ATQyF1SDBXJOg7-2FbuTyczslZWG-2Fj5ghTLHLJzNhUQv-2FwAX5NGN6k9r-2FUYPxGYvSs04SNRXPo-2FerFOx0dy1wNN9qPNLamNF2NCk87BDe8sondOUaMRHBngPMQl5LX3OFu7Xln1bKwm7xhgOSKJ-2BOnHkT0YnW6reS8-2FmJrKog7m6st2PJuUuoUrcrzQIXeAh24grbyUB-2FYFCdryoTF1A-3D;
 



Microsoft Press Release: Softimage “Twister” Enters Beta at SIGGRAPH 1998 
“We fully expect Twister to set a new benchmark in the industry as the first 
truly interactive renderer.”

https://u7507473.ct.sendgrid.net/wf/click?upn=5SmYwFIJXHmC5X9wAP0G6mg4oLGBuQENbeDkYXezg3m6vjHxJcC6rUMd8QE2Mtqz-2FXo36ChPR9BZMic-2BoJT8MbfWxXKnjONyU2ErQkmGoFzzyh5Z7S-2BUHEC0gizR7KJvltBjKzd6-2F1sXCtmYG2AAm43lLzzWyfIuD0v5j8Zid2JrWBpy7VvRi3IiT5Wn5xiQyaNyrpSieJhCDTlHeBCl5VkfD56qKiSYg4GPXAc7XG8mlehxe37yFnF-2BnWq-2FOyw70Q7-2F6d-2FbssWlv-2BSAELb9q6qSqYFfBjsrYAHlc4cwW6FxLbjWqfuL3eeLKqhsxUrot3kPzbfWyyJm8c2nOY2I0OpLVOwzHbmUHw041w0LJTa9D-2Bnn395fbFXCnnnVRjX9_a6oQc7tnfcb0GKvoO27fPkrQ0ATQyF1SDBXJOg7-2FbuTyczslZWG-2Fj5ghTLHLJzNhUQv-2FwAX5NGN6k9r-2FUYPxGUW7wmWSyIyZXUC6Rl8uJQUn79LXZlT9sO-2FhQIiYkjoKEkknvdbXYK8Rk2O-2F7nzE9Gl28XqURq7rAoeNu-2FYXZJRCWUecj0GMWTVqfxy9KT1EWwqlEHJVqug752cgDm8v8wkZARv8v7kCUzrzSqKnojo-3D
 



Re: Vertex Colors (Weight Editor) in 2015

2018-07-17 Thread Matt Lind
I wouldn't bother with a grid data object.  Make a non-modal dialog with a 
color selector with an "Apply" button.  When clicked, the selected polygon 
nodes are modified.  Quick, easy to code, and scales well.

Vertex colors are coded according to the data type parameter specified in 
the vertex color property.  In the old days the default range would be 
integers in range [0...255], but somewhere around Softimage 2011 the default 
was changed to floating point [0...1].
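For reference, converting between the two encodings is a simple pack/unpack. 
A minimal sketch in plain Python (the helper names are illustrative, not part 
of the Softimage API):

```python
def _to_byte(c):
    """Clamp a float channel to [0, 1] and scale it to a byte."""
    return max(0, min(255, int(round(c * 255.0))))

def float_to_rgb888(r, g, b):
    """Pack float [0..1] channels into a single 0xRRGGBB integer."""
    return (_to_byte(r) << 16) | (_to_byte(g) << 8) | _to_byte(b)

def rgb888_to_float(value):
    """Unpack a 0xRRGGBB integer back into float [0..1] channels."""
    return (((value >> 16) & 0xFF) / 255.0,
            ((value >> 8) & 0xFF) / 255.0,
            (value & 0xFF) / 255.0)

print(hex(float_to_rgb888(1.0, 0.0, 0.0)))   # 0xff0000
```

Whether a grid cell actually expects a packed integer like this is a guess; 
it may want a tuple or a hex string instead.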

Matt




Date: Tue, 17 Jul 2018 20:16:42 +0900
From: Martin Yara 
Subject: Re: Vertex Colors (Weight Editor) in 2015
To: "Official Softimage Users Mailing List.

Thanks Matt.

Actually the problem with the Weights Editor is present in 2014 too.

We can't add more 2013 licenses, so we are considering moving to 2015, but 
we have this problem with the vertex colors. And it seems that it will be 
impossible to solve.

So I'm starting to create a cheap Weight Editor version with just scripting 
to test if I can get better performance before trying anything in C++.

So I made a grid data control to emulate the Weight Editor grid and made the 
last cell a “color” cell. Now I'm not sure how to change this color. I think 
it wants an RGB888 code, and I don't know how to convert the RGB values that 
the Vertex Color gives into RGB888. A little help here please?

Martin

-
About the grid color: no, it wasn't RGB888, and it's not RGB565, or hex, or 
10-bit either. I'm lost.

Martin

On Tue, Jul 17, 2018 at 8:17 PM, Martin Yara  wrote:



Re: Vertex Colors (Weight Editor) in 2015

2018-07-15 Thread Matt Lind
2015 has regressions in a number of areas, that's why I suggest sticking 
with 2014 SP2.  2014 SP2 is much more stable than 2013.

Any time you edit the mesh with the weight editor open, Softimage has to 
evaluate the entire object construction history to see if any changes were 
made to envelope weights, vertex colors, texture UVs, user normals, point 
positions, etc...  That's why it's so slow.

To use an analogy, it's similar in nature to when you're working in the 
windows explorer in a folder that has thousands of files and you want to 
rename one or more of them.  Every little edit you make takes forever 
because the entire folder has to be evaluated to update the display of all 
the icons and selection status.  If you perform the same task from the 
command line, you'll get instant response and none of that extra baggage.

It's the approach you take to the problem, not the speed of the tools.

Matt


Date: Sun, 15 Jul 2018 16:55:08 +0900
From: Martin Yara 
Subject: Vertex Colors (Weight Editor) in 2015
To: "Official Softimage Users Mailing List.

Hi list

I'm still working here in Softimage 2013! This is because Softimage 2015's 
new Weight Editor is really slow with Vertex Colors. And it's an old game 
that started before Autodesk killed Softimage.

Doing some tests and playing with all the options I can see, I've noticed 
that deactivating the “Filter” (Select Zero Cells Filter off) improves 
performance quite a lot, but it is still way slower than 2013.

These are my results with a polygon mesh with 2400 points. The test consists 
of selecting all 2400 points from scratch and measuring how long it takes for 
the Weight Editor to refresh, from the moment I end my click to the moment 
Softimage comes back to life:

2013: 3 seconds
2015: 1 minute
2015 w/o Filter: 14 seconds

(measured by hand with a phone chronometer)

So, my new “discovery” improves performance quite a lot, as you can see, 
but it is still super slow.

Is there any way to get back the 2013 performance?

That is, without going full C++ and creating a Weight Editor clone only for 
Vertex Colors in the hope that it performs better? I'm considering that 
option, although I'm not a C++ expert and have no guarantee that it will 
work better.

Thanks

Martin 



Re: Friday Flashback #342

2018-06-15 Thread Matt Lind
I’ve got one of those pins.  Acquired it 2nd hand, but I have it 
nonetheless.

https://u7507473.ct.sendgrid.net/wf/click?upn=5SmYwFIJXHmC5X9wAP0G6mg4oLGBuQENbeDkYXezg3m6vjHxJcC6rUMd8QE2Mtqzzv1mWEUkVJoyJm6R8MtWg5Q-2F6vUs-2F9f2TCzwgaPdxj43KLiP3c6eBTEiC0GFkRypP6dL-2BY7lxWEhDvBHm-2F2ExEJDsGEXc4GSqR8z5odqFeCc6NpNfO41-2BGaZKBpmqedFbWy3aZFZugsv7JMCeuXlglSmFewL0mCaLj4LYjdgGiYnXJXtQUtwVc-2FrstxOMpqXSq4iUMWqzPI0lT2Ut0O4nUgG2RbY0Ssk6SFEKXZ88C77ObLgnLX7IckuzvR9keqFmfKqPZ-2FzROvulTJEmuaaJlr0vD9cHZ9f74SnG2S8DPGsIrJnhP-2FycDeM0N5dH13k5OySvQ-2BNmt23nn3SXIMvKDc0PJzw24HCDAmU0nG-2Bp6BoqbTanrhQd0aSRGY-2ByJRhb7cQ1wNqCbkgQJGcJRFNS9zCeUk9HHlsbaRGZ9XiFgY-3D_a6oQc7tnfcb0GKvoO27fPkrQ0ATQyF1SDBXJOg7-2FbuTKKTXg8yRt6OhO-2F-2FToEILQeZQXWzP6dRIiYc8HT8-2BRfbbVEVVEBwyD5Jt-2BzdAtZ30Wlg3vMKnOiNS1mXsbr2yhNlmyGeuLnWJVffoUBj38M34oyYgD3JNwbmtFc9-2BODbVyLK0zcnDpPYqN5ZU5h9ecY9dKMiMe-2FseK2Vkx4O2no8vouUl04XHzi6N98Dyo1qk-3D

Matt





Date: Fri, 15 Jun 2018 08:22:55 -0400
From: Stephen Blair 
Subject: Friday Flashback #342
To: "Official Softimage Users Mailing List.

https://u7507473.ct.sendgrid.net/wf/click?upn=5SmYwFIJXHmC5X9wAP0G6mg4oLGBuQENbeDkYXezg3m6vjHxJcC6rUMd8QE2MtqzowgyFFK4aAsDEzrdrVTV4Q6qbgbc-2FgnnpGob6G467zR75G56-2BuWz1AMtPXsoVdDXV-2BcQeKP7tI8SfI-2Feh9je47atZhc-2BurJ60RvbttIed4dAEMQ7ZLXe4P8CjxI3MuVA-2BC1YI4W9cuRTblUkTV0ThqzAfVQBEMZKrNTAdc7ACQuSit3iaW3y0fVt0pWCCM1YEkq9w4SeiTQoxADGO666OwF-2Bjt5myhTfEK7G-2BlDL9ejqvcF5dEjHKDiM4mpvTYjeTLTB0mfDLqtY-2B9xr-2B4aagXNU6KKBimzMNHlbYR-2BxQ68mUKFZ8i3SmCC-2F9oD-2B266evQcJf-2BiHDV6g-2F4Fw-2F92tjg-3D-3D_a6oQc7tnfcb0GKvoO27fPkrQ0ATQyF1SDBXJOg7-2FbuTKKTXg8yRt6OhO-2F-2FToEILQeZQXWzP6dRIiYc8HT8-2BRfc7LxmYkXe7aPe-2BA-2BgDrPpy6daBOPhS1d8m2bWBf1d1f-2BMH5lWuCXJ-2BgIbUGNeeHmqcm5ScWacAiqh8SctJjzgBYXMYwFIFfntiEpqVK7jQn20YHefuMPlhHBe9eqw5hSrKjr7qto50qwlqcWFAu6uA-3D;
 



Something for the Jurassic Park 25th anniversary 
https://u7507473.ct.sendgrid.net/wf/click?upn=5SmYwFIJXHmC5X9wAP0G6mg4oLGBuQENbeDkYXezg3m6vjHxJcC6rUMd8QE2Mtqz-2FXo36ChPR9BZMic-2BoJT8MenD6og0v2IhN61kQYPcLfmLgh5XG79DXgO9X-2BdMitdItDywr1XZsfh44N5fEcyFitrveaDto72rigVjZ5TaTbIFoxN9W5g-2Bt0chafYWasEBuMlzCq7cHJv0eApiOknHFR20svF4lISV1i08bz7coW4R6-2BliggNf5G66LMDUFGD7SBUg14Tg2rcQtBBxydGkQguY91UW1DyP5bJ8VJS3W2S2g4uAlNanF7SEScj1F-2BeLXaD8VSjYNUUwBv8u-2FtHiqFH11l-2B-2F8r50mGWtKXxaTPJEW0B2gtFOCIG9aPDv5-2FXh_a6oQc7tnfcb0GKvoO27fPkrQ0ATQyF1SDBXJOg7-2FbuTKKTXg8yRt6OhO-2F-2FToEILQeZQXWzP6dRIiYc8HT8-2BRfekjkd74R43IiCfnW-2B-2F6925-2BGPxjOKnibpiw-2B-2BIGrOUjXJdwOzVRI4phsYU3oGrkynUGoVIxtlg0AwuUTU0Usk-2BDzV12dmddqG89tFZ3bIldvskaxgcvL2FV3EztkaYMJdi7KVTouMWAbIAok9hml1E-3D
 



Re: A pain in the Arnold…

2018-06-01 Thread Matt Lind
I distinctly remember looking at cityscape scenes in XSI a number of years 
ago when another user had similar problems.

If memory serves, the geometry is horribly corrupt and each material was 
duplicated per polygon or polygon cluster.  There were winged edges with 
more than 3 polygons sharing the same edge, normals facing both ways on the 
same polygon, etc.  Not at all surprised you're having problems.

Although editing the geometry in XSI induced a lot of crashes, it was 
necessary to fix the problem.  Also removing the userNormals property helped 
with shading issues.  Finally, performing a delete unused materials combined 
with a simple script to merge/consolidate redundant materials cleaned up the 
rest.

Since you're doing it all in Maya, I suspect you have the same problems. 
First check the geometry for user normals, or whatever Maya's equivalent is. 
If they exist, remove them.  That should remove most of the problems.  Then 
go into the geometry and unshare those winged edges and leave them as 
discrete polygons.  That shouldn't have any negative effect on the rendered 
result.  Finally, write a script to scan all the objects to see which shader 
nodes they're using and do a 'diff' between them as the shaders are likely 
duplicate copies.  Once you find a duplicate, unshare it and replace it with 
the original copy.

A lot of elbow grease, but should fix the problems in the end.
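As a sketch of the winged-edge check described above, independent of any DCC: 
given faces as tuples of vertex indices, count how many faces share each edge 
and flag edges used more than twice (the helper name and data layout are 
illustrative, not Maya or XSI API):

```python
from collections import defaultdict

def find_overshared_edges(faces):
    """Return edges shared by more than two faces ("winged" edges),
    given each face as a tuple of vertex indices."""
    edge_count = defaultdict(int)
    for face in faces:
        for i in range(len(face)):
            # Sort the endpoints so (a, b) and (b, a) count as one edge.
            a, b = face[i], face[(i + 1) % len(face)]
            edge_count[(min(a, b), max(a, b))] += 1
    return [edge for edge, n in edge_count.items() if n > 2]

# Three quads all sharing the edge (0, 1):
faces = [(0, 1, 2, 3), (0, 1, 4, 5), (1, 0, 6, 7)]
print(find_overshared_edges(faces))   # [(0, 1)]
```

The flagged edges are the candidates for unsharing into discrete polygons.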

Matt





Date: Thu, 31 May 2018 21:14:26 +
From: “Ponthieux, Joseph G. (LARC-E1A)[LITES II]”
Subject: A pain in the Arnold…
To: “softimage@listproc.autodesk.com”

Howdy yall,

I thought I would post here before I escalate this but… Arnold on Maya 2018 
is producing an error in render…

I've got an ESRI City Engine scene imported into Maya 2018. It was working 
fine under Maya 2017 and rendering fine in mental ray. But mental ray is 
gone now.



It renders in the Maya Software render fine.

In Arnold it produces what looks like triangulation (Tessellation) errors 
rendering some triangles darker than others.

It only produces the problem on materials with texture maps.

Disconnecting the textures from the diffuse color removes the problem, but 
it removes the texture also. This seems to indicate it's not a lighting, 
normals, or shading error.

I've turned literally everything in Arnold settings off or neutral and no 
change.

It's not shadows, nor anti-aliasing, nor duplicate polygons, nor a bad mesh.

Forcing a triangulate on the mesh can make different triangles darker but 
the problem does not go away.

All reflections, shadows, motion blur, etc have been turned off in Maya and 
Arnold for Render Settings and Object. No change.



Any thoughts?

Thanks Joey



Re: Houdini : non VFX jobs?

2018-05-14 Thread Matt Lind


> On this I believe you are way too close to Softimage because it is not 
> trivial either to follow a complex scene,
> or a character… not saying it is not easier (it is) but it is not trivial 
> either.

I disagree.  A graph can be traversed and relevant nodes displayed in a 
view.  Softimage, Maya, Max, are all node graphs under the hood, but the 
data views such as the schematic merely display subsets of the nodes which 
contain certain characteristics (parent/child relationship).  Houdini has 
parent/child relationships too, but there isn't a convenient place where 
they are displayed in isolation of other properties of the scene.  Tools 
could be written to traverse and display only the nodes which define a 
parent/child relationship.  This is, in my opinion, a low hanging fruit that 
could be addressed.
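Such a tool boils down to filtering the graph to one relationship type and 
printing it as a tree. A minimal sketch, assuming the parent/child edges have 
already been extracted into a flat child-to-parent mapping (names are 
illustrative, not any particular SDK):

```python
def hierarchy_lines(parents, root, depth=0):
    """Render a flat {child: parent} mapping as indented tree lines,
    ignoring every other relationship type in the graph."""
    lines = ["  " * depth + root]
    for child in sorted(c for c, p in parents.items() if p == root):
        lines.extend(hierarchy_lines(parents, child, depth + 1))
    return lines

# Only the parent/child edges, everything else filtered out beforehand.
parents = {"hip": "root", "spine": "hip", "head": "spine", "l_leg": "hip"}
print("\n".join(hierarchy_lines(parents, "root")))
```

The output is the schematic-style overview: root, then hip, then its 
children indented below it.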


>> As for networks and subnetworks. Great, you have a system. Most people do 
>> not,
>> or if they do, it will not be the same system as yours. THAT is the 
>> point.
>
>
> Same as with Passes, Partitions, Groups, Overrides and Layers in 
> Softimage?
> we build a consensus on how to use it (everything on the BG partition 
> hidden
> for example) and even tools to move things to the right partitions based 
> on
> one acting as template, etc..

Not quite the same thing.

With passes, partitions, groups, etc.. Softimage defines the structure and 
users merely label the parts in some way that is intuitive to them. 
Partitions can only appear inside of passes, overrides always appear 
immediately below the partition or object which it overrides, and so forth.

In Houdini, the networks are much more arbitrary.  The user does more than 
label things.  They also define structure of the assets.  The user can 
impose self restraint and stick to a naming scheme, template for arranging 
elements in the network view, etc.., but there is no consistent structure 
which all users will see uniformly imposed by Houdini in the manner you see 
with Softimage.  This can be disorienting to the non-technical user as the 
data and presentation can be radically different.



> It is strange because it is precisely the very sophisticated HDAs system 
> that allows
> Houdini to scale teams massively while keeping complexity under control.
>
> A good example;

You're comparing apples to oranges here.  The point is to get an intuitive 
understanding of the data you're working with.  You're talking about 
something completely different.


Matt








Message: 1 Date: Mon, 14 May 2018 10:16:35 +0100
From: Jordi Bares <jordiba...@gmail.com>
Subject: Re: Houdini : non VFX jobs?
To: "Official Softimage Users Mailing List.


On 14 May 2018, at 00:01, Matt Lind <speye...@hotmail.com> wrote: you're 
dissecting things at a more granular level than is intended, and as a result 
you're losing sight of the overall discussion.

a new user coming into Houdini doesn't have that historical background, nor 
does he/she care. He only sees a lot of special case tools that require 
inside knowledge to understand and use. That is the immediate point of 
frustration that isn't resolved well with documentation, and in many cases, 
not even discussed at all. This is one deterrent from adopting Houdini from 
the generalist's perspective.



You are right, this could bring a lot of entry-level comfort and an easier 
transition. I may comment on it with the guys at SideFX.



Houdini doesn't have good tools for dealing with the macro view of a scene 
for the generalist. When you open a scene you're not familiar with, or one 
you haven't opened in a very long time, you want to get a general overview 
of its structure in a few seconds. That is the purpose of mentioning the 
schematic view as it provides that overview at a glance. Does it tell you 
everything? No, of course not, but it doesn't have to either. It does tell 
you the links between nodes such as who is constrained to whom, where the 
envelopes reside, which nodes have shapes/lattices/etc. and very importantly 
- hierarchical relationships to understand how rigs are put together. Again, 
we're talking about the big picture. Explorer??? that's for micro-level work 
when you want the dirty details on an object.



On this I believe you are way too close to Softimage because it is not 
trivial either to follow a complex scene, or a character… not saying it is 
not easier (it is) but it is not trivial either.



It's not good for the broader picture as you have to spend a lot of time 
clicking on nested node after node until you find what you're looking for, 
and even then there's often a lot more information displayed than you need 
leading to excessive noise. That's exactly the same problem with ICE 
compounds as digging into nested compound after nested compound you begin to 
lose sight over the bigger picture you're trying to grasp. This isn't a 
discussion about which is more powerful, it's about 

Re: Houdini : non VFX jobs?

2018-05-13 Thread Matt Lind
you're dissecting things at a more granular level than is intended, and as a 
result you're losing sight of the overall discussion.

a new user coming into Houdini doesn't have that historical background, nor 
does he/she care.  He only sees a lot of special case tools that require 
inside knowledge to understand and use.  That is the immediate point of 
frustration that isn't resolved well with documentation, and in many cases, 
not even discussed at all.  This is one deterrent from adopting Houdini from 
the generalist's perspective.

Houdini doesn't have good tools for dealing with the macro view of a scene 
for the generalist.  When you open a scene you're not familiar with, or one 
you haven't opened in a very long time, you want to get a general overview 
of its structure in a few seconds.  That is the purpose of mentioning the 
schematic view as it provides that overview at a glance.  Does it tell you 
everything?  No, of course not, but it doesn't have to either.  It does tell 
you the links between nodes such as who is constrained to whom, where the 
envelopes reside, which nodes have shapes/lattices/etc. and very 
importantly - hierarchical relationships to understand how rigs are put 
together.  Again, we're talking about the big picture.  Explorer???  that’s 
for micro-level work when you want the dirty details on an object.  It's not 
good for the broader picture as you have to spend a lot of time clicking 
on nested node after node until you find what you're looking for, and even 
then there's often a lot more information displayed than you need leading to 
excessive noise.  That's exactly the same problem with ICE compounds as 
digging into nested compound after nested compound you begin to lose sight 
over the bigger picture you're trying to grasp.  This isn't a discussion 
about which is more powerful, it's about presenting information that is 
better suited for high level working for the non-technical user.

As for networks and subnetworks.  Great, you have a system.  Most people do 
not, or if they do, it will not be the same system as yours.  THAT is the 
point.  There is no consistent or uniform way of having information 
presented to you to get the high level picture of what's going on in the 
scene.  There needs to be some base level of communicating to the user where 
things are placed, how they relate to each other, and so on, and not require 
the user to dig, dig, dig, dig, to get oriented to find 'basic' information. 
Someone can easily build a forest and hide 50,000 trees and other 
geographical features inside of a single network or subnetwork which appears 
as a single node in the network view, and even build it recursively.  That 
is not informative.  This is where Houdini needs to improve.  In contrast, 
although it can be done, it's pretty difficult to hide those details in 
Softimage's Schematic view.  You open the scene, BAM! you see the complexity 
right away.

I'm not suggesting Houdini be rebuilt from the ground up.  I'm highlighting 
sticking points between its current state and why more generalists don't 
adopt it.  When you get into a larger production pipeline, as much as you 
need the low level power Houdini provides with assets and such, there is 
just as much need at the opposite end of the spectrum with getting users 
into the pipeline to do work.  Many of whom are not thoroughly trained and 
need to learn on the fly, and probably won't have a great deal of interest 
learning all the ins and outs beyond the bare necessities to get their job 
done to satisfaction.  As production scales up, the quality of your users 
tends to drop because you have the matter of filling seats to crank out work 
by a specific deadline, and each seat has a salary cap.  Therefore, whatever 
pipeline you have, it must accommodate these less than ideal users.  Many 
generalists struggle with learning and/or forming good habits even when 
given good instruction as you're forcing non-technical people into a 
technical environment.  It's alien to them in a migraine headache creating 
type of way because an artist is generally right-brained while technical 
users are generally left-brained.  A schematic view is right-brained 
approach.  Explorer/networks is a left-brained approach.  While Houdini has 
a functional equivalent of a schematic view in the network view, it doesn't 
provide the same information the generalist seeks because it requires 
additional attention to detail to dissect the graphs in a more left-brained 
approach.  Houdini needs more right-brained tools and interfaces to 
accommodate the generalist.


Matt




Message: 2 Date: Sun, 13 May 2018 22:48:16 +0100
From: Jordi Bares <jordiba...@gmail.com>
Subject: Re: Houdini : non VFX jobs?
To: "Official Softimage Users Mailing List.

This thread is getting really, really useful, thanks Matt…

More comments below.



On 13 May 2018, at 21:00, Matt Lind <speye...@hotmail.com> wrote: Another 
example is 

Re: Houdini : non VFX jobs?

2018-05-13 Thread Matt Lind

An example of the boiler plate burden is exactly what was already 
discussed - modeling and tweaking as that's a good bulk of the early work. 
Bad first impressions can be a major deterrent.

Another example is the need to learn the various categories of operators 
(SOPS, CHOPS, VOPS, ...).  Sometimes nodes from different categories do the 
same thing.  That adds confusion.  If nodes from one category cannot work 
with a node of a different category, then that's a problem too.  This is 
where documentation is sorely needed.  It's not strictly a case of a SOP 
does this and a VOP does that, but rather a discussion about strategy.  When 
is it appropriate to use the various OPs?  When should a SOP be used in 
place of wrangled nodes, or vice versa?  That is a huge void in the 
documentation and a place where users easily get lost and frustrated to the 
point they throw in the towel.

In short, Houdini has a lot of spring cleaning to do to tidy things up for 
the generalist.  Right now it's an idiosyncratic development environment. 
It can be very powerful, but it requires a lot of inside knowledge to use 
it.  The generalist doesn't want to (or need to) deal with the inside 
knowledge.  They need something they can hit the ground running without 
fuss.


As for the show dependencies thingy, that's just it.  I don't want to see 
more wires inside of a graph which is already very crowded, messy, and 
lacking structure.  There needs to be a way to illustrate the structured 
connectivity at a high level so users aren't forced into the weeds to get 
basic information.  With ICE or the rendertree in Softimage, the nodes were 
text-based so you could follow the logic while hiding unconnected ports. 
However, even ICE trees could get very complex very quickly, so the use of 
compounds were introduced, and while that helped, it wasn't the same as a 
schematic view as compounds could be recursively nested to very deep levels 
hiding the very information you sought.  Houdini's nodes are very iconic, 
but not very descriptive as to what they do.  You can see various node icon 
shapes, but that still doesn't tell you the logic in the same way as 
following an ICE tree or rendertree.  The design/layout of the network view 
leads to lots of bloat very fast making it difficult to keep track of your 
work when you get beyond simple models.  While networks make a lot of sense 
for VFX work, they are often less than ideal for character driven work. 
Character work benefits more from straightforward relationships which are 
easy to identify and follow as characters are often a hub for other work 
such as VFX, simulations, attachments, constraint interactions, and other 
details which come later in the pipeline.  People working in those later 
steps need to be able to quickly jump into the asset and immediately know 
what to do and where to do it.  They can't be burdened with a messy network 
graph which they must study to the N'th degree before they understand where 
to start.


Matt




Message: 2 Date: Sun, 13 May 2018 17:28:12 +0100
From: Jordi Bares <jordiba...@gmail.com>
Subject: Re: Houdini : non VFX jobs?
To: "Official Softimage Users Mailing List.


below



On 12 May 2018, at 23:26, Matt Lind <speye...@hotmail.com> wrote:

I wouldn't steer towards uber nodes. The larger a node gets, the more 
maintenance it requires and more taxing it becomes as a bottleneck. If a 
node gets too big, you may end up with a situation where it becomes really 
popular from having a larger feature set and everybody and his cousin uses 
the node in every project. At that point the node can become an albatross 
around the developer's neck because any tweaks to the node could cause 
negative ripple effect throughout the community should something go wrong. 
The whole point of having a node system is to guard against that scenario by 
distributing the workload and only use the features you need. Uber nodes 
would automatically add bloat to your workflow from the many features you 
often wouldn't use but have to come along for the ride.



I was referring to the kind of “uber node” you find in Softimage where you 
don't have to do all the heavy lifting… certainly I agree with you, a 
monolithic albatross is not the idea of uber-node I had in mind. :-)



I think what's needed are more dedicated nodes for modeling, texturing, and 
animation tasks to fill in the current voids. There also needs to be some 
more UI polish to work with modeling and character animation workflows. Both 
are merely the base level adequate. They need to improve into good or great.



My take is that in order to compete in the modelling market the edit SOPs 
and the Retopo SOP will have to be extended to bring a lot more 
functionality and this is where I see the non-procedural approach 
acceptable. Right now these are very limited compared with Softimage.



Houdini needs a few modules to account for workflows where a node base 
system simply doesn

Re: Houdini : non VFX jobs?

2018-05-12 Thread Matt Lind
I wouldn't steer towards uber nodes.  The larger a node gets, the more 
maintenance it requires and more taxing it becomes as a bottleneck.  If a 
node gets too big, you may end up with a situation where it becomes really 
popular from having a larger feature set and everybody and his cousin uses 
the node in every project.   At that point the node can become an albatross 
around the developer's neck because any tweaks to the node could cause 
negative ripple effect throughout the community should something go wrong. 
The whole point of having a node system is to guard against that scenario by 
distributing the workload and only use the features you need.  Uber nodes 
would automatically add bloat to your workflow from the many features you 
often wouldn't use but have to come along for the ride.

I think what's needed are more dedicated nodes for modeling, texturing, and 
animation tasks to fill in the current voids.  There also needs to be some 
more UI polish to work with modeling and character animation workflows. 
Both are merely the base level adequate.  They need to improve into good or 
great.

Houdini needs a few modules to account for workflows where a node-based 
system simply doesn't make any sense or provide advantage.  Think pushing 
and pulling points on geometry to sculpt a character, or tweaking texture 
UVs for game assets.  Building a network with hundreds of nodes containing 
all the tweaks is counter productive beyond a handful.  It would be better 
to make a dedicated user interface to work on that task in long session 
form, then merely bake out the stack of tweaks as a single node in the tree 
when all is said and done - or something to that effect.  Perhaps the user 
would apply markers to decide how many tweaks can be bundled together as a 
single node upon completion in the same fashion a user can define an 
arbitrary point as a restore point when updating Windows.
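The baking idea can be sketched in a few lines.  This is purely 
illustrative Python, not Houdini's API - all names are invented.  Hundreds 
of interactive point tweaks collapse into one net delta per point, which is 
all a single baked node would need to store:

```python
from collections import defaultdict

def bake_tweaks(tweaks):
    """Collapse a sequence of (point_id, (dx, dy, dz)) edits into one
    net offset per point - the single 'baked' node left in the tree."""
    baked = defaultdict(lambda: (0.0, 0.0, 0.0))
    for pid, (dx, dy, dz) in tweaks:
        x, y, z = baked[pid]
        baked[pid] = (x + dx, y + dy, z + dz)
    return dict(baked)

# A long interactive session of nudges reduces to two stored offsets:
session = [(7, (0.25, 0.0, 0.0)), (7, (0.25, 0.0, 0.0)), (3, (0.0, -0.5, 0.0))]
print(bake_tweaks(session))   # {7: (0.5, 0.0, 0.0), 3: (0.0, -0.5, 0.0)}
```

The restore-point markers mentioned above would just decide where one 
baked dictionary ends and the next begins.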

The FCurve editor is mostly OK, but the layout of tools on all sides of the 
windows needs a rethink.  While they're making good use of screen space, it 
puts more burden on the mind of the user to keep track of all the tools and 
be more conscious of pointing and clicking with the mouse when tweaking 
FCurve Key values so as to avoid inadvertently clicking a tool placed on the 
perimeter of the FCurve editing workspace.  Sometimes it's better to have 
emptiness on one or more sides of the workspace.

What needs most attention is management of large networks of ops as when 
dealing with character rigging as you need some degree of assessment of how 
the character's parts are hooked up to function.  A schematic view makes 
that fairly straightforward and the parts that are overdriven by expressions 
or other tools are easy enough to locate with arrows and wires connecting 
them.   Doing the same in Houdini on a complex character is quite a chore as 
the trees of nodes don't necessarily illustrate the patterns of parent/child 
relationship or trickle down behavior one would expect to be able to follow. 
This makes the process of rigging a bit counter-productive from an 
organizational standpoint and puts extra burden on new users or users who 
haven't seen the asset before and need to become familiar with it before 
they begin work.  It requires a great deal more study to get up to speed.

What most non-technical artists complain about is the lack of attention to 
detail in getting boilerplate tasks done.  Not because the application 
isn't capable, but because it requires a lot more time and energy than 
should be necessary.  It's kind of like having to rebuild your car from 
scratch every time you want to go grocery shopping.  Even if all you have to 
buy is a carton of milk, the effort to get there is just not worth it. 
Furthermore, the Houdini manuals aren't particularly good at describing how 
to make use of the system for these types of tasks.  There's documentation 
on individual nodes and interfaces, but there really isn't anything to tie 
it all together in a harmony that makes sense to the end user.  One hand 
isn't talking to the other.  I am a technical user and found this to be the 
most frustrating part of learning Houdini.  While there are videos, the last 
thing I want to do is spend hours and hours scrubbing through videos to find 
the one nugget I need to get to the next step of the task.

I would like to use Houdini, but am choosing to not pursue it until I see 
more adoption for character and modeling work.

Matt



Message: 1 Date: Sat, 12 May 2018 09:34:28 +0100
From: Jordi Bares 
Subject: Re: Houdini : non VFX jobs?
To: "Official Softimage Users Mailing List.


OT: Amiga

2018-05-11 Thread Matt Lind
with all this talk about dinosaurs, I was reminded of a little project I 
need to tackle.

By any chance would anybody still have an Amiga computer, or know someone 
who does?  I have an old project from years ago that I would like to exhume 
that is stored on a series of floppy disks.  While my PC has a floppy drive, 
PCs cannot read Amiga-formatted disks, as the Amiga used a proprietary data 
format that requires reading the disk in a fashion PCs are unable to 
accommodate without low-level drivers to modify the controller... and even 
then it's a roll of the dice.

The floppies are in good shape and the stored files are written directly to 
the disk.  No weird compression, multi-volume archives, or stuff like that. 
I just need the data off the disks, preferably as files, not as disk images.

Please contact me offline if you can help.

Thanks,

Matt


--
Softimage Mailing List.
To unsubscribe, send a mail to softimage-requ...@listproc.autodesk.com with 
"unsubscribe" in the subject, and reply to confirm.


Re: Houdini : non VFX jobs?

2018-05-11 Thread Matt Lind
Given Houdini is a node-based system, there is a simple paradox at play: to 
get the level of cohesiveness Softimage employed, tools need to share 
information and work together, but a node-based system, by design, requires 
each node to act independently.  To get the Softimage workflow in Houdini 
requires either monolithic nodes with enough intelligence to cover all the 
bases of a particular task, or the UI needs to take control and hide the 
nodes behind the scenes, slapping users' wrists if they attempt to fiddle 
with the nodes involved.  In either case, it works against a node-based 
system's mantra.

In short, I don't think it's possible for Houdini to ever become another 
Softimage.  You'll have to settle for something that has great power but 
some degree of cumbersome workflow.


Matt




Message: 2 Date: Fri, 11 May 2018 18:44:10 +0100
From: Alastair Hearsum 
Subject: Re: Houdini : non VFX jobs?
To: softimage@listproc.autodesk.com

I think there is real danger in pinning all this grumbling on lack of 
familiarity and not acknowledging that there are some fundamental design 
issues.  The first step to recovery is to admit that there is a problem.  As 
everyone knows there is some fantastic technology in there, but it's strung 
together in an awful way.  It's like putting the organs of a 20 year old in 
an octogenarian; each organ very capable in its own right, but not in the 
ideal host to get the best out of it.




Re: Any Dinosaurs Still Lurking?

2018-05-11 Thread Matt Lind
I wouldn't call myself a dinosaur, but I'm still here.

I remember that SIGGRAPH.  Nothing like having 10 mosquitos fly in your 
mouth when trying to drink your beer.  I drove all the way from Chicago to 
attend it.  Did the trip in 18 hours flat, nonstop, but for me it was the 
show where if it could go wrong, it did go wrong.

For example, Kim Aldis and I were invited by Maggie to show examples of 
using the XSI SDK in production for the Softimage SDK summit.  Upon checking 
into my hotel the night before and plugging in my computer, I discovered all 
my addons had been corrupted and my original source code to those addons was 
on a CD back home leaving me nothing to show.  I think Kim experienced 
something similar.  The next day at the SDK summit after the Softimage SDK 
developers finished their lectures, MC Maggie told everybody in the room to 
gather around the table where Kim and I were sitting (this was all 
unscripted), then spent a few minutes hyping us up as the best XSI users 
worldwide to set the stage.   Maggie then gave us the floor, but Kim and I 
both kind of shrugged our shoulders because neither of us had anything 
tangible to show.  So we tried to turn it into an impromptu Q+A session, but 
it was a long 15 minutes of crickets.  The misery didn't end there...

I was also invited by Dave Lajoie to give a presentation how to write 
shaders at the Softimage mental ray summit.  I was really intent on making a 
good showing as I had developed a suite of light shaders for 3rd party 
distribution I wanted to show off.  But again, my addons containing all my 
shaders were corrupt and the source code was at home on a CD.  I tried to 
explain to Dave, but he just cut me off and reassured me everything would be 
alright.  He was thinking I was merely a little nervous from butterflies or 
whatever.  Anyway, I lugged my desktop computer to the summit, hooked it up 
to the projector and had to figure out how to fill 20 minutes with nothing 
to show.  As I look around the room I see Thomas Driemeyer, the head of 
development for mental images watching intently.  Standing next to him is 
Marc Stevens and other important people from Softimage.  So I can't do any 
fibbing to pass the time; I needed to be accurate.  Frantically 
searching my hard drive I found some old code for a light shader, but it was 
a really early version that I knew had many bugs.  So on the spot I 
improvised by introducing the mental ray manuals, where to find information 
to write shaders, and specifically, how to understand the manuals as that 
was a complaint I often saw on the list (most people only read the softimage 
documentation which was often misleading).  After a few minutes I saw 
disappointed faces in the crowd, so I took a deep breath, loaded my buggy 
code into visual studio, and started the demonstration.  Before I could get 
too far, Dave crawls up on his hands and knees and informs me with a hand 
gesture I have 2 minutes remaining.  So I quickly rushed through what I 
could of my light shader code and showed a few pre-rendered images of what 
it could do, then wrapped up.  Ugh...

There were many other mis-capades at that show, but I digress.

Upon returning home from the show, I discovered XSI had a bug in the addon 
system.  In early versions of XSI, all installed addons were stored in a 
single file, not separate files like they are today.  Adding or removing an 
addon meant the application would add/remove the relevant data from the 
file.  But in the specific case of deleting an addon, there was a bug where 
it introduced a byte offset error by deleting too much or too little 
information.  All addons before the location of the error were fine, but all 
addons appearing after the error were corrupted as data would be offset or 
missing.  If you ever deleted the first addon in the file, then you 
effectively corrupted all of them.  The only remedy was to reinstall XSI.
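For illustration, here is a hedged sketch of how that class of bug works. 
The file format below is invented (length-prefixed records), not XSI's 
actual addon format, but it shows how a delete with a byte-offset error 
mis-aligns every record that follows it:

```python
import struct

def pack(addons):
    """Pack addon payloads into one blob: [4-byte length][payload]..."""
    blob = b""
    for payload in addons:
        blob += struct.pack("<I", len(payload)) + payload
    return blob

def unpack(blob):
    """Walk the blob record by record using the length prefixes."""
    out, i = [], 0
    while i < len(blob):
        (n,) = struct.unpack_from("<I", blob, i)
        out.append(blob[i + 4 : i + 4 + n])
        i += 4 + n
    return out

def buggy_delete_first(blob):
    """Simulated bug: drops the first payload but forgets its 4-byte
    length prefix, so every later record is read mis-aligned."""
    (n,) = struct.unpack_from("<I", blob, 0)
    return blob[n:]          # correct version would be blob[4 + n:]

good = pack([b"addonA", b"addonB"])
assert unpack(good) == [b"addonA", b"addonB"]
bad = buggy_delete_first(good)
assert unpack(bad) != [b"addonB"]   # the surviving record reads as garbage
```

Everything before the error parses fine; everything after it is offset by 
a few bytes and decodes as junk, which matches the behavior described above.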

Matt




Message: 2 Date: Fri, 11 May 2018 11:23:14 -0500
From: Bradley Gabe witha...@gmail.com
Subject: Any Dinosaurs Still Lurking?
To: softimage@listproc.autodesk.com

Just curious?

Now that I'm a resident in San Antonio, I was reminiscing about old 
SIGGRAPHs on the Riverwalk, and came to the realization that the Softimage 
mailing lists, for me at least, were my Facebook before there was official 
social media.

San Antonio still owes me a camera!



Re: test

2018-04-30 Thread Matt Lind
Depends on your perspective.

From a reading perspective, the new formatting is an upgrade as I can now 
read the message in the sea of text.

For replying/editing, it’s a downgrade as the message is a picture/object 
that cannot be trimmed/edited in a message.

You'd think by 2018 somebody would've figured this out by now.

Matt



Message: 1 Date: Mon, 30 Apr 2018 18:59:36 +
From: Luc-Eric Rousseau 
Subject: test
To: “softimage@listproc.autodesk.com” 

Message-ID: <462f0e8a-2cdc-4d20-90b5-be5aaa0ee...@autodesk.com> 
Content-Type: text/plain; charset="utf-8"

Is there a formatting problem today?




RE: OT not so OT: Blender 2.8 (new release) will have Softimage

2018-03-05 Thread Matt Lind
Nice gesture, but that ship has likely sailed.  It's been 4 years (today) 
since Autodesk announced the termination of Softimage.  It's not like the 
developers haven't found other work yet.

Matt




Date: Mon, 5 Mar 2018 13:18:18 -0500
From: Pierre Schiller 
Subject: OT not so OT: Blender 2.8 (new release) will have Softimage
coders on board?
To: "Official Softimage Users Mailing List.

"The best User interface design has been designed by you SOFTIMAGE
CODERS". This is Ton Roosendaal inviting Softimage CODERS to JOIN Blender
2.8 project. THIS IS SERIOUS BUSINESS! #WhyBlender3D

 
#b3d

 
#LongLiveSoftimage

https://youtu.be/Tqo-MVKnbFg?t=44m49s
Click play and celebrate!

Let's thread this to infinity or until they complete blender 2.8! :D
I've been advancing on my own with Blender inside the core code and also
over the UI https://wp.me/p4qGvb-5j . I tell you it's very much possible to
make the Blender Foundation take notice on most of the things that can be
improved on Blender. I'm also collecting feedback here:
https://wp.me/p4qGvb-75

-David R. 




Re: old XSI versions (1-3)

2018-02-26 Thread Matt Lind
The first windows version of Softimage|3D was 3.01 for Windows NT 3.5.  It 
was the only version to have the IRIX interface.  Softimage|3D v3.51 was the 
2nd release on Windows NT 4.0 and the first to have the ugly Windows 
interface which lasted until its retirement.

XSI was always a native windows application.  Early versions of XSI came 
bundled with Softimage|3D, not the other way around.  This was mostly to 
fill the holes of XSI's incomplete feature set with respect to polygon 
modeling and particles.  Once those holes were filled, XSI shipped on its 
own and the price was raised.

Matt




Date: Mon, 26 Feb 2018 11:12:26 +0100
From: Rob Wuijster 
Subject: Re: old XSI versions (1-3)
To: softimage@listproc.autodesk.com

Wasn't v3 the first real Windows version?
I can recall that v1 and 2 were a 'complement' to the Irix SI|3D installers.
My archive goes back to v.3.5.1 for the first Windows version.


Rob
\/-\/\/




RE: old XSI versions (1-3)

2018-02-26 Thread Matt Lind
You sure about that?  I have v3.x discs, there is no IRIX version, but there 
is a Linux version.

SPM modified the dongle every time you launched the application.  Therefore, 
even if you roll back the clock on your system, SPM will detect the more 
recent / future date on the dongle and abort the application.

Matt


Date: Mon, 26 Feb 2018 10:58:35 +0100
From: "Sven Constable" 
Subject: RE: old XSI versions (1-3)
To: "'Official Softimage Users Mailing List. https://groups.google.com/forum/#!forum/xsi_list'"

Message-ID: <001901d3aee8$5e451df0$1acf59d0$@imagefront.de>
Content-Type: text/plain; charset="us-ascii"

There were IRIX versions at least up to version 3. Wouldn't have thought that
too and only know it because I have the installer here for that version. But
I think it was definitely the last for IRIX. I think the timebombed old
versions using SPM can be installed and will most likely work by setting the
date back.  I would love to test it.

Sven 




Re: old XSI versions (1-3)

2018-02-25 Thread Matt Lind
An installer without a license key won't do you any good.

I have some of the original installation CDs.  However, only versions using 
FlexLM licensing will function.  SPM versions (XSI v2.0 through v7.01) with 
permanent licenses were actually time bombed for 12 years from date of 
issue - which have since expired.  So even though you paid for lifetime use, 
you only got 12 years...which didn't make me happy.  Only Softimage|3D and 
XSI v1.x had an IRIX version.

Matt


Date: Sun, 25 Feb 2018 00:35:50 +0100
From: "Sven Constable" 
Subject: old XSI versions (1-3)
To: "softimage@listproc.autodesk.com"

Message-ID: <01d3adc8$344a8600$9cdf9200$@imagefront.de>
Content-Type: text/plain; charset="us-ascii"

Hey list,



can anyone point me to downloads of XSI versions 1,2 and 3? It's really just
of personal interest. Not talking about pirated software and no cracks. Just
the straight versions of the XSI installer from Softimage at that time.  For
Windows and IRIX. Any versions between these are also welcome.



Sven




RE: Friday Flashback #330

2018-02-11 Thread Matt Lind
It was a time when there was great inspiration in the air to do things in 3D 
as there were no limits to what you could imagine.  There was a whole 
universe of things you could apply 3D to and you'd be champing at the bit to 
do it first.  It was also a weird time as the internet hadn't been fully 
leveraged yet, people were still accustomed to sending letters and bills 
through the post office instead of using email, and we were producing tons of 
digital content for analog mediums like film or dying mediums like video 
tape.  Games had to be ultra simple, so the lack of detail almost acted as a 
space for you to imagine what it was supposed to be to fill in the blanks. 
It was like being a 21st century entrepreneur serving a 20th century 
consumer.  It made you feel like you could see the future others could not.

I do not miss the times of expensive equipment and having to beg for favors 
at a post house to get a demo reel produced, or having to pick and choose 
which studios to apply for work because of a limited supply of demo reels on 
hand to mail out.  I also do not miss the feeling of the employers owning 
you because only through their equipment could you practice your craft.  But 
I get what you're poking at, you like the exclusionary aspect where you were 
one of a select handful to make it into an emerging field.  There was some 
charm to that.

I think what I miss the most is the ingenuity on display competing with 
different ideas of where the future of the technology should go.  A lot of 
really good ideas (many of which are significantly better than what we use 
today) are lying on the floor somewhere instead of being actively used.

Matt






Date: Sun, 11 Feb 2018 04:29:43 +0100
From: "Sven Constable" 
Subject: RE: Friday Flashback #330
To: "'Official Softimage Users Mailing List.

That time was more interesting, wasn't it? We had to fight against technical
limitations and prepare a ground for anything. 3D was so exciting and new,
we had everything under control. Then it became standard and we lose
ground. I'm kidding. Not losing ground :) But 3D is not the same as it
was back then.
Sometimes I miss the old days, when 3D was expensive and rare.




RE: Friday Flashback #330

2018-02-10 Thread Matt Lind
I meant working in the 'Dot Com' era nearly killed a lot of us too as we 
were putting in so many hours with (by today's standards) very primitive 
tools in what was the wild west of 3D's uprising as a medium.

Working with stop-motion was a long and grueling process too, but there was 
a structure and process to it and your lives had a rhythm which could be 
managed.

Working in digital was the wild west where everything was an experiment 
because few standards had been established yet.  That required lots of trial 
and error to figure it all out, and then lots of lobbying to get your 
methods accepted and adopted as the way to do it.  Apply all that on top of 
back breaking production schedules to get content produced was very hard on 
animators.  In the early part of my career, it wasn't unusual for me to 
spend 100-120 hours per week at the office.  There was a 14-month stretch 
where I almost never saw the sun other than when in transit to get lunch.  A 
lot of that was from working for heavily mismanaged studios with large 
ambitions and big budgets.  Gave me access to technologies and top tier 
programmers I wouldn't have had otherwise, but came at the cost of personal 
well being as deadlines were extremely unrealistic, and failure to deliver 
meant closure of the studio and loss of job (which eventually happened 
anyway).  Back in those days hardware and software were too expensive to 
purchase for home use, so if you needed a demo reel, you were likely using 
company equipment in the off hours.  So, you either did the work, or you 
didn't work at all.

Matt





Date: Sun, 11 Feb 2018 02:39:58 +0100
From: "Sven Constable" 
Subject: RE: Friday Flashback #330
To: "'Official Softimage Users Mailing List. https://groups.google.com/forum/#!forum/xsi_list'"

Oh I think I misunderstood you when you said It killed us. You meant killing
in a positive way, right? Sorry, that was lost in translation.
Sven




RE: Friday Flashback #330

2018-02-10 Thread Matt Lind
Back then modeling with NURBS was the norm, not the exception.

SI3D didn't have any texture unfolding projection methods (only planar, 
cylindrical or spherical).  Creating custom UV layouts was possible, but a 
real pain as the UV layout editor was designed for low resolution polygon 
meshes used in early 3D games, not movie quality high resolution geometry, 
and you could only manipulate one UV at a time.  Graphics hardware capable 
of displaying textures on geometry still cost a premium, so workflow often 
included jumping out to the renderer to check in on your progress from time 
to time.  To create custom UV layouts required a huge amount of manual labor 
moving points individually, or you'd resort to clever repurposing of other 
tools such as RenderMap.  But any respectable studio with a decent budget 
would've exported the geometry to a 3D paint program like Amazon Paint or 
DNA's Flesh for that work.  Working with polygons pre-2000(ish) was a real 
chore due to lack of decent hardware acceleration and tools.  Finally, don’t 
forget SI3D was limited to 60,000 triangles per scene.  You could increase 
that limit by modifying an environment variable, but doing so ran the risk 
of corrupting memory and other issues with the graphics hardware.  When you 
have to animate scenes containing many bugs like in Starship Troopers, 
working with NURBS allowed a much higher number of bugs to appear onscreen 
before you hit those limits as you could reduce the bugs' geometry to 1x1 
interpolation in U and V.  That's another prime reason why NURBS were used 
so much back in the day.

NURBS had significantly better system performance during animation playback 
compared to polygons.  Still does today. A lot of it has to do with the 
geometry description as NURBS requires only a handful of control point 
positions as input even for large and complex shapes.  The rest is derived 
from interpolation which can be performed in hardware and highly optimized 
without much fuss as the geometry has a very well defined organization which 
is highly scalable and predictable.  Graphics libraries, like OpenGL, can 
use triangle strips and other optimization methods to draw large amounts of 
geometry quickly with minimal overhead.  Polygons (and subDs) are arbitrary 
and often cannot take advantage of those optimizations.  XSI has many core 
features optimized for working with NURBS such as auto-LOD control when 
manipulating the camera, manipulating the geometry, or performing animation 
playback.  While many people, especially today's generation of artists, are 
deeply against using NURBS at all, I think that mentality is a big mistake. 
A lot of that thinking has to do with not properly learning what NURBS are 
or having a decent environment to work with them.  NURBS aren't meant for 
every modeling or animation task, but for some they provide elegant 
solutions which do not exist (or do not compute very well) for other types 
of geometry.
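To make the "few control points, rest derived by interpolation" point 
concrete, here is a small Python sketch using a Bezier curve - the simplest 
member of the spline family that NURBS generalize.  Four control points 
fully define the curve; the renderer derives as many on-curve samples as 
the level of detail calls for:

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear
    interpolation of its control points (de Casteljau's algorithm)."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Only 4 stored positions, yet any number of samples can be derived:
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
samples = [de_casteljau(ctrl, i / 10) for i in range(11)]
assert samples[0] == ctrl[0] and samples[-1] == ctrl[-1]
assert de_casteljau(ctrl, 0.5) == (2.0, 1.5)
```

That regular, predictable evaluation is exactly what lends itself to 
hardware tessellation and the auto-LOD behavior described above, whereas an 
arbitrary polygon mesh has to store every vertex explicitly.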

Matt




Date: Sat, 10 Feb 2018 20:05:38 +0100
From: "Sven Constable" 
Subject: RE: Friday Flashback #330
To: "'Official Softimage Users Mailing List.

I wonder if they designed the bugs *a bit* with NURBS modeling in mind. I 
once modelled and rigged that bug in XSI as kind of a training session when 
I switched from SI3D to XSI. Except for the lower part of the torso maybe, 
there are no parts with singularities or other patch modeling difficulties. 
Pretty much all parts are rigid with easy UV topology. Ball joints and no 
enveloping for most parts, if not all. Maybe it was just simple parent- 
child hierarchies when they rigged it.

I don't know it of course, but I would guess it was relatively fast to 
animate in terms of performance.



Sven 



Re: OTish - todays Maya rant and Q - selecting items

2018-02-01 Thread Matt Lind
Flaky behavior can be the result of using a gaming graphics card like a 
GeForce instead of a workstation graphics card like a Quadro.

Gaming cards cut many corners to get their speed up and cost down; they do 
one thing very well and are mediocre at everything else.  Games tax a few 
specific resources pretty hard, but the number of different resources they 
tax is fairly thin.

Workstation cards, on the other hand, are designed for a wider array of 
uses providing more depth with extra buffers, overlay planes, and so forth 
because their purpose is to be reliable in most any situation.  Many 
applications lean on these extra buffers to work properly.  If you are using 
a gaming card, you may run into situations where the application either 
takes a performance hit, or makes bad decisions because the buffer does not 
exist.

When I was at Carbine, we had a mix of graphics cards.  My workstation had a 
midrange Quadro while the artists had top end GeForces.  There were many 
issues where they would run into flakiness that my workstation could not 
reproduce.  For example, if multiple windows occupied the same screen space, 
applications would get confused which one had focus.  If you were using 
photoshop and clicked on a particular part of the screen to paint, the 
application would think you were using the explorer window in XSI because 
the explorer window was in the same 2D screen space, but on a different 
layer.  Likewise, if you were in XSI and did something in a viewport, the 
application would think you were working in your email client if it was open 
in the same screen location.  Quadros did not have this problem.

In another example, an artist needed to do video capture to demonstrate a 
modeling technique, but each time he did his capture session, the 
application would only capture the active viewport in XSI, not the entire 
screen.  If the schematic view had focus, then only the schematic view would 
be captured while everything else was black because XSI's interface was 
comprised of multiple windows embedded into a framework, and each window was 
on a different overlay plane.  Gaming cards typically only have support for 
a few overlay planes (if that).  Again, the Quadro did not have this issue - 
it supported multiple overlay planes to capture all the windows in the XSI 
interface.

Matt




Date: Thu, 1 Feb 2018 11:19:06 +0100 (CET)
From: Morten Bartholdy 
Subject: OTish - todays Maya rant and Q - selecting items
To: "Userlist, Softimage" 

Am I the only one being seriously annoyed by the seeming inaccuracy of 
selection of items in Maya viewports?

I click directly on geometry with plenty of screenspace around, and Maya 
selects an adjacent object. Orbit a bit and dolly in, try again - Maya 
selects another irrelevant adjacent object. Click elsewhere on object I want 
to select, and finally I get to select it. WTF were they thinking?

I find myself selecting adjacent objects and rig elements, making quick 
selection sets (so I can quickly find and unhide them again) and hiding 
them, just to select one particular object. Needless to say it is a massive 
waste of time.

Is there a reason behind this madness, or is it something like depth 
sorting inaccuracy, or what??


Morten




Re: friday flashback #328

2018-01-31 Thread Matt Lind
I used many older weaker computers prior to entering the industry, but my 
first professional workstation was an SGI Indigo R4000 with 32 MB RAM and a 
4mm DAT tape drive.  The hardware shading choices were constant, diffuse, or 
wireframe.  No textures.  Rendering a 640 x 480 image with anti-aliasing 
took 6 minutes per frame (average), and that was without reflection, 
refraction or transparency.

I feel working under such tight conditions helps one be more efficient and 
keep everything in perspective.  Certainly helped with games development. 
It always floors me how artists today add floods of unnecessary clutter to 
their workflow by always showing scene stats, multiple floating windows, 
full textured/shaded viewports, etc... then complain all the time about 
their machine being slow.

Matt



Date: Sat, 27 Jan 2018 17:53:25 -0500
From: Stephen Blair 
Subject: friday flashback #328
To: "Official Softimage Users Mailing List.

In 1997, we were working with 128MB of RAM... seems inconceivable now
https://wp.me/powV4-3ut




Re: Render tokens with hardware renderer

2018-01-31 Thread Matt Lind
I don't think there is anything ready to go with a push button.  You'll 
likely have to render in two passes (image pass, slate pass) and composite 
them.
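As a sketch of that two-pass approach, the per-pixel math of the composite 
step is just a Porter-Duff "over".  This assumes a premultiplied RGBA slate 
pass; it is generic compositing math, not an XSI feature:

```python
def over(fg, bg):
    """Porter-Duff 'over': composite a premultiplied RGBA foreground
    pixel (the slate pass) onto a background pixel (the image pass)."""
    r, g, b, a = fg
    R, G, B, A = bg
    inv = 1.0 - a
    return (r + R * inv, g + G * inv, b + B * inv, a + A * inv)

slate = (0.5, 0.5, 0.5, 0.5)      # half-transparent grey slate text pixel
beauty = (0.25, 0.25, 0.25, 1.0)  # rendered frame underneath
print(over(slate, beauty))         # (0.625, 0.625, 0.625, 1.0)
```

Any compositor that supports an "over" node does the same thing across the 
whole frame, so the slate pass only needs alpha where the text is.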

The only other way to do it is write your own hardware shader so it can be 
rendered with the rest of the scene.  Such a shader will likely have to be 
written with code, not the existing shader nodes in the render tree library.

Matt


Date: Mon, 29 Jan 2018 23:51:13 +0100
From: "Sven Constable" 
Subject: Render tokens with hardware renderer
To: "softimage@listproc.autodesk.com"

Hi list,



sometimes I have passes already set up, still being in the animatic stage.
Render tokens seem to not work with hardware rendering. Only with software
rendering or when capturing the viewport directly. Is there any way to use
render tokens with hardware rendering (shaded, hidden line removal, OGL)?



Sven 




Re: I am more thinking of the 4k monitor for as an animator's

2018-01-25 Thread Matt Lind
Yes, there are advantages to any new technology, but the question is whether 
those advantages are "needed" at the premium price of being an early 
adopter?  If you're trying to control costs and be practical, the answer is 
usually no.  Waiting a little bit and buying when the price comes down is 
often a better use of money.

I would love to have a new fangled computer with 10 processors each with 100 
cores and multiple graphics cards with SLI linkage and VR everything, but do 
I need it?  no.  My 10 year old computer still functions just fine for my 
purposes.  In some ways, it's better as it exposes bottlenecks in my code 
more readily than a faster computer which is so much more efficient it 
effectively masks the problems I need to find and eradicate.

Not saying don't buy a 4K monitor.  Just saying it isn't needed for the 
purpose given.

Matt



Date: Thu, 25 Jan 2018 12:47:40 +0100
From: Mirko Jankovic <mirkoj.anima...@gmail.com>
Subject: Re: I am more thinking of the 4k monitor for as an animator's
workstation.

Well actually a 4k monitor does have its advantages.
Nice big viewport where you can see all those details better is already a
plus, but having extra space to fire up graph editor and maybe some GUI
controls if they are available.. it helps.
Honestly I wouldn't go back to smaller...
As a matter of fact I'm even thinking about waiting for cintiq 32 inch 4k
to move to that. But need to figure out if it is worth all the cache :)


On Thu, Jan 25, 2018 at 12:00 PM, Matt Lind <speye...@hotmail.com> wrote:

> Animators generally don't need 4K monitors as they mostly work in 
> wireframe
> or shaded mode and only care about broad details such as feet penetrating the
> floor.  Details like that are clearly visible even on 1080p monitors. 
> Even
> if they aren't, primitive rig devices can be set up to make it obvious,
> such
> as putting a colored plane where penetration may occur, or applying an
> expression to raise a flag.  Just examples.
>
> Artists working with color and lighting, on the other hand, may need 4K
> monitors.
>
> Matt
>
>
>
>
>
> Date: Wed, 24 Jan 2018 14:54:40 -0500
> From: "Leoung O'Young" <digim...@digimata.com>
> Subject: Re: Softimage 2015 R1 Redshift and 4K?
>
> I am more thinking of the 4k monitor for as an animator's workstation.
>
>
>



-- 
Mirko Jankovic
http://www.cgfolio.com/mirko-jankovic

Need to find freelancers fast?
www.cgfolio.com

Need some help with rendering a Redshift project?
http://www.gpuoven.com/

--

Message: 3
Date: Thu, 25 Jan 2018 14:37:15 +0100
From: Alex Doss <alexd...@gmail.com>
Subject: Re: Softimage 2015 R1 Redshift and 4K?
To: "Official Softimage Users Mailing List.

U have to be real quick now.
The "funny" thing is that Nvidia is at least trying to regulate the
situation.
Funny, in quotes, coz it's just unrealistic for it to try and regulate that:
https://www.polygon.com/2018/1/23/16921356/nvidia-graphics-cards-sold-out-cryptocurrency-miners

On 25 January 2018 at 11:41, Ognjen Vukovic <ognj...@g

Re: I am more thinking of the 4k monitor for as an animator's workstation.

2018-01-25 Thread Matt Lind
Animators generally don't need 4K monitors as they mostly work in wireframe 
or shaded mode and only care about broad details such as feet penetrating the 
floor.  Details like that are clearly visible even on 1080p monitors.  Even 
if they aren't, primitive rig devices can be set up to make it obvious, such 
as putting a colored plane where penetration may occur, or applying an 
expression to raise a flag.  Just examples.

Artists working with color and lighting, on the other hand, may need 4K 
monitors.

Matt





Date: Wed, 24 Jan 2018 14:54:40 -0500
From: "Leoung O'Young" 
Subject: Re: Softimage 2015 R1 Redshift and 4K?

I am more thinking of the 4k monitor for as an animator's workstation.




Re: Friday Flashback #326

2018-01-14 Thread Matt Lind
Not a dream.  In fact it's true today too.

The important detail to make it work is you had to export the scene to 
mental ray's .mi2 format.  Once in the .mi2 format, a Softimage scene is no 
different from a scene from any other DCC, including Alias|Wavefront.

Matt



Date: Sat, 13 Jan 2018 15:40:05 -0500
From: Ed Manning 
Subject: Re: Friday Flashback #326
To: "Official Softimage Users Mailing List.

I seem to remember getting a closed-door demo of Twister that included the
ability to render Alias/Wavefront scenes. Could that have been a dream?




Re: retrieving renderer/version from a scene?

2018-01-11 Thread Matt Lind
Each version of Softimage comes with a very specific version of mental ray. 
Therefore, if you know the version of Softimage used to render the scene 
(which is available in the .scntoc), then you should know the version of 
mental ray used as well.

if you rendered with a 3rd party renderer, such as Arnold, then you would be 
responsible for keeping track of versioning as it's essentially a plugin and 
outside the scope of Softimage.  Most 3rd party tools would leave some kind 
of metadata in the scene, such as a custom property, containing the version 
information as they need it to ensure you are using a version compatible 
with the shaders you've applied on your objects/partitions.

Matt



Date: Wed, 10 Jan 2018 11:33:32 +0100
From: Rob Wuijster 
Subject: retrieving renderer/version from a scene?
To: softimage@listproc.autodesk.com

Hi all,

Just a quick question, but is there a way to figure out what
renderer/version was used for a scene?
I have some older scenes that need to be re-rendered, but I don't want
to run into issues due to more recent versions and deprecated options.
It looks like the .scntoc doesn't have any info on this.

-- 

cheers!

Rob 




RE: Safe to delete the default pass?

2017-12-15 Thread Matt Lind
It's especially bad if you rename the scene root to be the same name as the 
scene's file name.

While it's technically allowed to rename the scene root, delete the default 
pass, etc., it's really not advised.

Matt




Date: Fri, 15 Dec 2017 19:26:01 +0100
From: "Sven Constable" 
Subject: RE: Safe to delete the default pass?
To: "'Official Softimage Users Mailing List.

Renaming the scene root? Wow, I never thought of that!
Sven 




Re: ICE set Color by samples

2017-12-11 Thread Matt Lind
A more effective solution would be to implement a debug mode into the 
shaders to render only the selected channel(s).  It's significantly easier 
and more reliable than what you're proposing.
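As a rough sketch of what such a debug mode boils down to, in plain Python (the per-sample RGBA tuple layout is an assumption for illustration, not the actual shader or ICE data structure):

```python
def isolate_channel(samples, channel):
    """Return a grayscale debug view of one channel of per-sample colors.

    samples: list of (r, g, b, a) tuples
    channel: 0 = R, 1 = G, 2 = B, 3 = A
    """
    out = []
    for rgba in samples:
        v = rgba[channel]
        out.append((v, v, v, 1.0))  # show the channel as gray, opaque
    return out
```

A realtime shader's debug mode would do the same selection per fragment instead of rewriting the vertex color property, which is why it is the more reliable route.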

Matt


Date: Mon, 11 Dec 2017 21:55:54 +0900
From: Martin Yara 
Subject: ICE set Color by samples
To: softimage@listproc.autodesk.com

Hi,

We are using vertex colors as data to manipulate the real time shaders.
Each channel has a different effect.

The problem is that when I'm using too many things in the same spot it gets
really colorful and I can't tell right away how much of each channel I'm
using.

For that I wrote a simple tool to separate the RGBA channels (display only
the selected channel in the viewport) using ICE and scripting. For this I
simply create an additional vertex color property and copy the values of
the desired channel. It gets the job done, but I thought it would be cool if
I could use the Color property (self.color) so it won't appear in my Weight
Editor. I just couldn't because it seems to work only with points.

Is it possible to do this while keeping my colors per sample (different
colors per sample in the same vertex)?

Thanks

Martin 




Re: end of another era

2017-11-21 Thread Matt Lind
e examples, but the point is (in my experience) 
most things people have griped about with mental ray are due to their own 
lack of understanding, not the technology.

Matt



Date: Tue, 21 Nov 2017 17:12:20 +0100
From: Mirko Jankovic <mirkoj.anima...@gmail.com>
Subject: Re: end of another era
To: "Official Softimage Users Mailing List.

So in a nutshell, same as with Maya: you need a team of TDs, programmers and
tech support to actually use it to its full potential.
And someone wonders why it was pushed back so fast by other solutions?
Why would anyone have to go into knowing the complex math under the hood
instead of actually focusing on what he should be doing: making pretty
images?

On Tue, Nov 21, 2017 at 5:06 PM, Matt Lind <speye...@hotmail.com> wrote:

> A lot of people don't know this, but mental ray received a HUGE update 
> last
> year with acceleration as much as 20x for global illumination and related.
> The version that ships with Softimage didn't receive this update, of
> course.
>
> After acquiring mental ray, Nvidia let it rot for a few years before
> deciding they should put effort into it after all, then put significant
> work
> into it, but not until after many people went to other options.
>
> As much flak as people put towards mental ray, it's actually a very good
> renderer, but it requires you have knowledge of raytracing algorithms to
> make best use of it.  The problem is most users only used the interactive
> version which was hamstrung by the XSI interface which created most of the
> problems related to crashing due to memory constraints and other issues.
> If
> you ever used mental ray from the command line on its own, you'd know it
> was actually very fast and stable.  If it had continued to be included as 
> a
> standalone renderer after XSI v2.x, it would've been more popular as those
> serious about rendering would've gone to the command line and/or written
> their own scripted UI for it such as with Qt.  Classic case of marketing
> ruining a product's profitability.
>
> My only complaint as a shader writer was that mental ray was overly
> compartmentalized which sometimes made it difficult to access parts of the
> scene to create comprehensive shading effects.  What you could do with a
> single uber shader inside another renderer, required teamwork between
> multiple smaller shaders in mental ray.  It was also not documented very
> well in the advanced areas.  You had to really figure it out on your own.
> While the documentation was accurate, a lot of it didn't make sense until
> you already know how the renderer worked.  But once you did, you had a lot
> of power at your finger tips.
>
> Matt
>
>
>
> Date: Tue, 21 Nov 2017 09:01:52 +0100
> From: Olivier Jeannel <facialdel...@gmail.com>
> Subject: Re: end of another era
> To: Anto Matkovic <a...@matkovic.com>, "Official Softimage Users
>
> MR wasn't heaven, but it's again another widely used abandoned software.
> Of course the provider will never tell the user that it has stopped
> development; instead the user finds out after waiting months, then years,
> that nothing happens or changes on a platform he thinks is fine.
>
>
>



-- 
Mirko Jankovic
http://www.cgfolio.com/mirko-jankovic

Need to find freelancers fast?
www.cgfolio.com

Need some help with rendering a Redshift project?
http://www.gpuoven.com/

Re: end of another era

2017-11-21 Thread Matt Lind
Surprisingly, writing a toon shader isn't as difficult as you'd think.  Like 
any good tool, the trick is understanding the user's needs and workflow. 
That's what the toon shaders got right compared to other solutions, such as 
ability to hide ink line seams of intersecting surfaces.  There are also 
many undocumented features in the toon shaders inherited from mental ray for 
free, such as texturing ink lines.


Matt




Date: Tue, 21 Nov 2017 08:17:16 +
From: Matt Morris 
Subject: Re: end of another era

Still find myself using Mr when a toon shading job comes along. Hopefully
Arnold's new shaders will be up to scratch!




Re: end of another era

2017-11-21 Thread Matt Lind
A lot of people don't know this, but mental ray received a HUGE update last 
year with acceleration as much as 20x for global illumination and related. 
The version that ships with Softimage didn't receive this update, of course.

After acquiring mental ray, Nvidia let it rot for a few years before 
deciding they should put effort into it after all, then put significant work 
into it, but not until after many people went to other options.

As much flak as people put towards mental ray, it's actually a very good 
renderer, but it requires you have knowledge of raytracing algorithms to 
make best use of it.  The problem is most users only used the interactive 
version which was hamstrung by the XSI interface which created most of the 
problems related to crashing due to memory constraints and other issues.  If 
you ever used mental ray from the command line on its own, you'd know it 
was actually very fast and stable.  If it had continued to be included as a 
standalone renderer after XSI v2.x, it would've been more popular as those 
serious about rendering would've gone to the command line and/or written 
their own scripted UI for it such as with Qt.  Classic case of marketing 
ruining a product's profitability.

My only complaint as a shader writer was that mental ray was overly 
compartmentalized which sometimes made it difficult to access parts of the 
scene to create comprehensive shading effects.  What you could do with a 
single uber shader inside another renderer, required teamwork between 
multiple smaller shaders in mental ray.  It was also not documented very 
well in the advanced areas.  You had to really figure it out on your own. 
While the documentation was accurate, a lot of it didn't make sense until 
you already know how the renderer worked.  But once you did, you had a lot 
of power at your finger tips.

Matt



Date: Tue, 21 Nov 2017 09:01:52 +0100
From: Olivier Jeannel 
Subject: Re: end of another era
To: Anto Matkovic , "Official Softimage Users

MR wasn't heaven, but it's again another widely used abandoned software.
Of course the provider will never tell the user that it has stopped
development; instead the user finds out after waiting months, then years,
that nothing happens or changes on a platform he thinks is fine.




Re: Q: texture sequence 'script'?

2017-11-06 Thread Matt Lind
There used to be instructions in the manuals, but I don't see them any more. 
There are bits and blurbs hinting it's possible in the section discussing 
loading (creating) sources and clips, but no tutorial.

From memory:

Create a text file with .scr file extension.  Must be '.scr'.
Assign using usual technique for images, but pick the script file instead of 
a bitmap file.
each line in file represents one frame of playback sequence.
first line = first frame of active playback range (not first frame of 
scene).
Each image should be specified on its own line.
Each image must be specified using full path and file name.
Images may be repeated/specified in any order.
Each image can be of different file format than the previous/next.

TIP:
you may be able to manipulate timing in the animation mixer if you know 
where to dig.

Possible caveats:

- frame step != 1 (may pick wrong/undesired line from script on a given 
frame)
- playback == realtime (may pick wrong/undesired line from script on a given 
frame).
- relative file paths inside script may not be supported.
- TIFF LZW compression --> images load into memory uncompressed when 
rendering.
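A small sketch of generating such a .scr file in Python (hedged: the one-absolute-path-per-line layout follows the notes above, which are themselves from memory, so verify against your Softimage version):

```python
import os

def write_scr(scr_path, image_paths):
    """Write a .scr playback script: one absolute image path per line.

    Line 1 maps to the first frame of the active playback range; paths
    may repeat or appear in any order, and formats may be mixed.
    """
    with open(scr_path, "w") as f:
        for p in image_paths:
            f.write(os.path.abspath(p) + "\n")

# Example: frame 1 and 2 both show img.0001, frame 3 shows img.0002:
# write_scr("seq.scr", ["img.0001.tif", "img.0001.tif", "img.0002.tif"])
```

You would then pick the .scr file instead of a bitmap when assigning the image source, as described above.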


Matt





Date: Mon, 6 Nov 2017 10:37:03 +0100
From: Rob Wuijster 
Subject: Q: texture sequence 'script'?


Hi all,

Having a bit of a brain freeze atm.
I'm pretty sure there was a way to texture an object with a text file
defining a sequence of images.

Anyone can point me in the right direction, cannot seem to find it atm...

-- 

vr.gr.,

Rob Wuijster
E r...@casema.nl




RE: Softimage - not going away...

2017-10-27 Thread Matt Lind
It likely died for the reason you just stated - you 'eventually' wanted to 
learn.  Problem is most people had the same sentiments.

They made the right move initially of targeting the space between the other 
DCCs, but I think staying there long term was a mistake.


Matt


Date: Fri, 27 Oct 2017 17:34:02 +
From: Andres Stephens 
Subject: RE: Softimage - not going away...
To: "Official Softimage Users Mailing List.

This is devastating news!!! WTF!? I was betting on learning this and 
eventually investing in it. I didn't want to be locked into one software the 
more I got into proceduralism.  Why!?

-Draise




Re: UV Island Border

2017-10-25 Thread Matt Lind
This task should be scripted.  If coded efficiently, it will run in 
milliseconds.

If ICE didn't exist, your choice would be scripted command vs. scripted 
operator (or compiled operator).  Would you write this tool as a scripted 
operator?  Probably not because the tool only needs to run once, not several 
times.  Scripting vs. ICE should be decided in the same manner as anything 
produced in ICE will be an operator.

One of the few exceptions is if the task is very large and parallel 
processing is possible making a significant difference.  Scripting is 
single-threaded whereas ICE is multi-threaded, but this is only relevant for 
tasks which can be parallelized as there will be no gain for serial tasks.

Matt




Date: Thu, 26 Oct 2017 02:36:42 +0900
From: Martin 
Subject: Re: UV Island Border
To: "Official Softimage Users Mailing List.

Thanks. I've got no idea how to do that in ICE. I'll probably go with 
scripting even if it will be much slower.

Martin
Sent from my iPhone




Re: XSI Size difference issues. Pointlocators etc.

2017-10-16 Thread Matt Lind
Geometry.GetClosestLocations() and related operate in the local coordinate 
space of the searched object.  If that object has scaling at values other 
than (1,1,1), then the radius will be affected by the scale values.  It's 
often recommended to provide a transformation to the GetClosestLocations() 
method to indicate the coordinate space you wish to operate within. 
Generally, it's the identity coordinate space (meaning, no conversions).

To make life simple, convert your starting position to the local coordinate 
space of the object you plan to search.  Set up an acceleration cache, then 
conduct your search.  The returned PointLocator objects will be described in 
the local coordinate space of the searched object.  When the search has 
completed, you can call the necessary Geometry methods to evaluate the 
PointLocator objects.  After those results are obtained, convert to whatever 
coordinate space you need.  If you get strange results, try freezing the 
scaling of your geometry before searching.
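To illustrate the scaling issue with plain math (this is not the XSI API; it assumes an object with scaling only, no rotation or translation):

```python
def world_to_local(point, scale):
    """Map a world-space point into the local space of an object
    scaled by 'scale' (scaling only, for illustration)."""
    return tuple(p / s for p, s in zip(point, scale))

def local_radius(world_radius, scale):
    """A world-space search radius only maps to a single local-space
    radius when the scale is uniform; under nonuniform scale the search
    sphere becomes an ellipsoid, which is why freezing scaling before
    searching is recommended."""
    sx, sy, sz = scale
    assert sx == sy == sz, "nonuniform scale: no single local radius"
    return world_radius / sx
```

So a radius of 1.0 around a point on an object scaled by 2 effectively becomes 0.5 in that object's local space, which is the distortion described above.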


Matt



Date: Mon, 16 Oct 2017 09:17:42 +0300
From: Andrew Prostrelov 
Subject: XSI Size difference issues. Pointlocators etc.
To: "Official Softimage Users Mailing List.

Today's question is about size difference.
When we get point locators via GetClosestLocationsWithinRadius() or other
similar methods we always get size problems.
When our object has dense geometry and vertices lie close to each other,
GetClosestLocationsWithinRadius() picks vertices too soon. Even if we
provide it with a radius of 0.01 on a dense mesh, it may pick verts from the
center of a polygon.
In this case the user would try zooming in to get a more precise result, but
zoom values do not affect GetClosestLocationsWithinRadius(). Only if he
scales up the object does he get the result he wants.

A good example of how it should work is the AddEdgeTool. Even if you have a
dense mesh, this tool works according to your current zoom value. If you
zoom closer to the geo it becomes more precise.

So to mimic this accuracy change I tried different approaches:
01. GetClosestLocationsWithinRadius() , filter by radius value.
02. GetRaycastIntersections() , get first and second closest verts ,
reconstruct CLine representation of closest edge, get vert by parameter
value filtering.
03. GetRaycastIntersections() , get first and second closest verts ,
convert to screen space , reconstruct CLine representation of closest edge,
get vert by parameter value filtering.
04. GetRaycastIntersections() , get first closest vert, get vector length
from current  intersection with geo and closest vert, filter by vector
length value.
05. GetRaycastIntersections() , get first closest vert, convert to screen
space , get vector length from current  intersection with geo and closest
vert, filter by vector length value.

All these approaches have the same size problem.
I assume that for all of them the filter value should be calculated via
screen-space coordinates. This way we get the current component size as it
was drawn to screen; in other words we preserve the zoom value.
But in this case all filtering values also change, and as a result we still
don't calculate zoomed components right.

Here is the code:
https://www.dropbox.com/s/udfor0ts81yg9l6/PickVertes.cpp?dl=0
 




Re: Porting to Maya

2017-10-10 Thread Matt Lind
thanks, Martin.

A few follow up questions:

1. What kind of animation are you baking?  (e.g. besides transforms, what 
parameters are supported?)

2.  Vertex colors - I think Maya expects RGBA values.  If you send only RGB,
then that would explain the 'rotation' of values as the Red value of the 2nd
polygon node will map to the alpha channel of the first polygon node (face
vertex) Maya is expecting.  And then ripples to all the latter nodes as
well.
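A hypothetical illustration of that channel "rotation" in plain Python (the flat float-stream layout is an assumption for illustration, not Maya's or FBX's actual internal format):

```python
def read_rgba(flat, count):
    """Read 'count' RGBA samples from a flat float stream."""
    return [tuple(flat[i * 4:i * 4 + 4]) for i in range(count)]

# Two samples written as RGB only (no alpha channel):
flat_rgb = [0.1, 0.2, 0.3,   0.4, 0.5, 0.6]

# A reader expecting RGBA misparses the stream: sample 0 "steals"
# sample 1's red value as its alpha, and every later sample's channels
# are shifted (rotated) by one more slot.
```

Here `read_rgba(flat_rgb + [0.0, 0.0], 2)` yields `(0.1, 0.2, 0.3, 0.4)` for the first sample: the `0.4` that was meant to be the second sample's red has become the first sample's alpha, matching the ripple effect described above.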

3.  UVs - what do you mean by 'merge' UVs?  You mean like 'heal' as it's
known in Softimage?  or do you mean put all the UVs into a single texture
projection?  Also, how do you handle the case of multiple objects sharing
the same material, but with different UVs on their respective geometries?
Example: 3 objects share the same material and phong shader, but the image
node driving the ambient and diffuse ports of the phong shader references a
different set of UV coordinates (texture projection) on each of the 3
objectsbecause that's the only way to do it.  How do you replicate that
setup in Maya?  Same question for materials applied to clusters which are
shared across multiple objects.

4. Neutral Pose - what is the technical equivalent of a neutral pose in
Maya?  Are the neutral pose issues specific to .fbx?

5.  Constraints.  What constraints do you need most (besides position,
scale, orientation, direction, and pose)?  After rebuilding your constraints
in Maya, how do you handle cases where multiple constraints applied to the
same object affect the same attributes?  Example:  in Softimage, applying 2
position constraints on the same object results in 2 position constraint
operators being applied to the object.  In Maya, applying 2 point
constraints results in one point constraint operator with 2 inputs being
blended.  When more constraints are applied to the same object, additional
nodes, such as a 'pairBlend' node, may be inserted to resolve the conflicts.
How do you control/organize the logic of how the constraints are applied
(e.g. what is Maya's logic for determining whether to insert a pairBlend
node vs. plugging another object into the input of the constraint operator?)

6. Expressions.  How important are expressions to your needs?  What features
of Softimage expressions do you need most?


7.  How much time does it take you to finish your work after it is imported
into Maya from Softimage?  Minutes? Hours? Days?

Matt




Date: Mon, 9 Oct 2017 13:30:34 +0900
From: Martin Yara 
Subject: Re: Porting to Maya
To: "Official Softimage Users Mailing List.

I've been using that workflow for a few years. Softimage to Maya, Maya to
Softimage. Mainly for character modeling and animation, shape animation.

I scripted most of it. Softimage Script -> FBX -> Maya batch with a Maya
Script. Works pretty fine, but it has to be a little customized depending
on the project. It can be done in one click, and the part that takes the
most time is the FBX conversion.

1. Modeling (including bones, weights, all except rig), Animation (baked
and using 2 compatibles rigs in Softimage and Maya).

2 and 3.
Lots of things, and I'm sure a lot of them you already know, but just in
case:

- Vertex Color. Usually if you match the FBX version to the Maya version it
will export fine. The problem is Softimage only has FBX 2015, so exporting
to 2016 didn't work very well sometimes.
We were doing a Maya 2016 project and exporting to FBX caused the Vertex
Color to be "rotated" like the old FBX UV problem. Weirdly enough, if I clean
the mesh by export / importing to OBJ, copy weights from my old mesh and
other things before exporting to FBX, it usually works fine. But even more
weird, importing this bugged FBX into Maya, and setting the Alpha Channel
to 1.0 fixed it. Yeah, I don't know why.

- UVs. You have to rename at least your main Tex.Projection to map1 before
exporting or it will get messy inside Maya. And merge all UVs in Maya once
imported, because all your UVs will be separated. Selecting All UVs and
merge them with a very low threshold value works fine.
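A rough sketch of what a low-threshold UV merge does (plain Python; real UV merging in Maya welds shared UV indices on the mesh, whereas this only collapses duplicate coordinates):

```python
def merge_uvs(uvs, threshold=1e-5):
    """Collapse UV points lying within 'threshold' of an already-kept
    point, mimicking a low-threshold merge in a UV editor."""
    merged = []
    for u, v in uvs:
        for mu, mv in merged:
            if abs(u - mu) <= threshold and abs(v - mv) <= threshold:
                break  # close enough to an existing point: merge into it
        else:
            merged.append((u, v))
    return merged
```

The very low threshold matters: it welds the coincident border UVs the FBX import split apart without collapsing legitimately distinct points.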

- Materials. Depending on the Maya version and how complicated your
Materials are you will have to rebuild them. And obviously fix the texture
paths. Delete Scene Material.

- Delete Neutral Pose in Softimage before exporting or you will have an
extra locator or bone.

- Unlock Normals. When you import into Maya, the normals will be locked,
and if you don't unlock them before doing anything in your mesh, your
normals will get messed up pretty quickly.

- Remove Namespaces in Maya.

- Just in case, check that the weights are normalized. I don't know if that
is normal, but I had a few problems with this so I normalize every time I
import into Maya.

- Vertex numbers are the same. So if you are using shape animation with
different objects, then it will be easily exportable with a custom script;
just write the point positions and load them in Maya without having to use
FBX every time. I did it with JSON and 

Porting to Maya

2017-10-08 Thread Matt Lind
I'm curious to know how many other people are still using Softimage and 
porting their work to Maya via .fbx or another route?

1) What kind of work are you doing in this workflow?  (character rigging? 
environment modeling?  motion graphics?, ...)

2) Which features can you not get across (or not get across easily)?

3) What do you still have to do after conversion of data (repair, polish, 
cleanup, ...), and how much time does it take to do it?


Matt

PS - Please trim your responses to only carry the immediate post you are 
responding to.  For those of us on the digest form of the list, it's 
difficult to find the message in the sea of reply-included text on threads 
with long histories.




Re: Clusters

2017-09-21 Thread Matt Lind
When you get a reference to the geometry object, you're getting a snapshot 
at a specific moment in time independent of the source object.  If you make 
modifications to the geometry after you obtain your reference, results may 
be invalid.  Therefore, make sure you get the Geometry after you apply the 
last extrusion.  Alternatively, try the CGeometryAccessor class.  While 
intended for use in exporters, it may be of use in your case.

You may want to pass the construction history mode as an argument to 
GetGeometry().  This ensures you're getting the correct snapshot of the 
geometry.  Also doesn't hurt to specify the frame on the timeline which to 
get that snapshot.  Not required, but the more inputs you define, the more 
narrow the expected results should be making troubleshooting a little 
quicker and easier.

For comparison, try the following JScript code to see if you get the same 
behavior:

var oObject   = Selection(0);
var oGeometry = oObject.ActivePrimitive.Geometry;
var oPolygons = oGeometry.Polygons;
LogMessage( "Polygons[" + oPolygons.Count + "]: " + ( 
oPolygons.IndexArray ).toArray(), siComment );

If the script code works but C++ fails, then consider using COM/OLE in C++ 
to dig into the same methods as used by the scripting engine.  Remember, the 
pure C++ API you are currently using was not part of the original XSI design 
spec.  It was bolted on later.  There are areas where the implementation is 
incomplete and/or different than what you experience going through the 
scripting engine.  The COM/OLE interface is the original C++ interface of 
XSI, but when XSI 1.0 was released, there was a lot of complaints from 
customers that it was not compatible with Linux and other operating systems. 
So Softimage put effort into developing a pure C++ API and pretty much 
played catch-up the rest of the product's history.  COM/OLE is more of a 
burden to code with, but will give you the exact same results as returned 
through the scripting engine - because it is the scripting engine.

One area you are forced to use COM/OLE is dealing with SubComponents as the 
pure C++ API doesn't implement the SubComponent class.  So start there for 
your tutorial.  I believe the C++ reference has such a tutorial in the 
beginning of the SDK docs.

Matt





Date: Thu, 21 Sep 2017 21:34:50 +0300
From: Andrew Prostrelov 
Subject: Re: Clusters
To: "Official Softimage Users Mailing List.


Clusters again, I guess.
I have an object. It has 2 edge clusters and some operators applied.
When I try to get all the object's polygons like so:
PolygonMesh curMesh(save_obj.GetActivePrimitive().GetGeometry());
CPolygonFaceRefArray all_polys = curMesh.GetPolygons();
I don't get all of them - the last extruded polygons are missing.
But if I delete the edge clusters, the same code above brings back all
polygons of this object.
Is there any workaround for this problem other than freezing it all?




Re: maya constrain from SI perspective

2017-09-10 Thread Matt Lind
The irony here is the features you gripe about were likely ripped off from 
Softimage|3D almost verbatim.

Back in the 1990's when Softimage was busy rewriting Softimage|3D from the 
ground up to what we know today as XSI, Alias|Wavefront was busy making 
themselves relevant again as they desperately needed market share after 
Softimage|3D nearly pummeled them into the ground.  One technique was to rip 
features off from other applications.  If you look at Maya's SDK you'll 
discover a lot of familiar looking features.  Skinning in Maya, for example, 
looks almost identical to how it was handled in Softimage|3D with 5 
different types of envelopes to choose from.  Softimage realized the mistake 
in that approach and corrected it by having a single uber-envelope operator 
inside of XSI, but Maya stuck with the past.

If you have past experience with Softimage|3D, it may prove useful for 
navigating the Maya waters.


Matt



Date: Sun, 10 Sep 2017 19:26:19 +0200
From: Mirko Jankovic 
Subject: Re: maya constrain from SI perspective
To: "Official Softimage Users Mailing List.


It is not such a huge problem when you create from scratch or something like
that. But when you get a referenced rig that is locked and you can't make any
changes, you have to work with what you have.
If there was an option to make changes on the rig it would be manageable
somehow, but rigs are referenced and locked, no changes possible, so...

I simply can't understand that something so basic and important can be so
complicated and problematic :)




Re: Friday Flashback #315

2017-09-08 Thread Matt Lind
Softimage's watershed moment.

While in the limelight of Bunny, Blue Sky publicly announced they were 
switching from Softimage to Maya.


Matt




Date: Fri, 8 Sep 2017 11:05:42 -0400
From: Stephen Blair 
Subject: Friday Flashback #315
To: "Official Softimage Users Mailing List.
https://groups.google.com/forum/#!forum/xsi_list


1998 Bunny hops to fame
http://wp.me/powV4-3sC




Re: Prevent Softimage from translating (English to Japanese)

2017-09-06 Thread Matt Lind
Double check your PPG Logic code to make sure you aren't inadvertently 
setting the font or button size.

the buttons in the 2nd and 3rd rows seem to be better behaved.  The 2nd row 
decreases in size slightly after the first iteration of presses, but 
stabilizes thereafter.  The 3rd row seems to be fine.  Are you using the 
same function to define the labels of the buttons, or is the code unique per 
button?   What happens if the last value is something other than 256?  Do 
the button fonts still decrease recursively?  What happens if the 256 is 
defined as 256.0?

Matt



Date: Wed, 6 Sep 2017 20:14:58 +0900
From: Martin Yara 
Subject: Prevent Softimage from translating (English to Japanese)
To: "softimage@listproc.autodesk.com"

Is there a way to prevent Softimage translating words?

If someone is using Softimage in Japanese, it will translate my PPG options
and destroy the whole layout.

I managed to keep some words in my group titles by adding a few spaces, but
I can't prevent Softimage from trying to "translate" my numbers. If that
makes any sense.

I think it is a font problem, but I don't see any option for font size or
font type.

I have a small button with numbers in it like 2.0.
This number changes to 256 if I check a checkbox.
If I uncheck the checkbox, it will be back to 2.0
Pretty simple, and it works fine in English.

But in Japanese, the fonts are automatically shrunk once they change.
If I uncheck the box it will be back to 2.0 with the shrunken font.
And if I check the box again, it will change to 256 and shrink even more!
And so on until you can't see anything.

I recorded it :
https://www.youtube.com/watch?v=9E-gTSIDULE

Does anyone have an idea to prevent this from happening ?

Martin 




Re: add command to hotkey

2017-09-05 Thread Matt Lind
Directly - no.  You'll have to hack into the keyboard preferences file to 
add arguments, but doing so would be risking corruption as those files are 
really finicky, and even if you don't corrupt the file, chances of getting 
the functionality you want are pretty low.

Your primary option is to write a self installing command with several 
wrapper functions, exposed as registered commands, containing fixed argument 
lists matching what you want.  Users map their hotkeys to call the wrappers. 
When the wrappers are called, they call the main plugin with their fixed 
argument lists.  The main plugin is the brains that does the usual 
validation and error checking.  If the main plugin is a native Softimage 
command, then you only write the wrappers in your self installing plugin.
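The wrapper pattern can be sketched in plain JavaScript (names are hypothetical; in a real plugin each wrapper would be registered as a command via PluginRegistrar.RegisterCommand and mapped to a hotkey):

```javascript
// The main command does the validation and real work once.
function mainCommand(mode, amount) {
    if (typeof amount !== "number" || amount < 0) {
        throw new Error("invalid amount: " + amount);
    }
    return mode + ":" + amount;
}

// Thin wrappers with fixed argument lists -- these are what the
// hotkeys actually call.
function mainCommandSmall() { return mainCommand("extrude", 1); }
function mainCommandLarge() { return mainCommand("extrude", 10); }

console.log(mainCommandSmall()); // extrude:1
```

Each wrapper stays trivial; all the error checking lives in one place in the main command.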

Matt


Date: Wed, 6 Sep 2017 13:32:06 +0900
From: Martin Yara 
Subject: add command to hotkey
To: "softimage@listproc.autodesk.com"

Hi, this may be a pretty basic question but is it possible to add a Command
with arguments to a hotkey ?

Like:
command ( "arg1", "arg2")

Or do I have to create a custom command for every combination if I want to
make them "hotkeyable" ?

Thanks

Martin




Re: Clusters

2017-09-02 Thread Matt Lind
C:\Program Files (x86)\Autodesk\Help\softimage2014\en_us\sdkguide\index.html

you want CClusterElementArray.FindIndex()

Clusters are metadata of the geometry.  The elements in a cluster are the 
indices in a lookup table that maps the cluster to the geometry.  You must 
get the indices from the cluster, then look them up in the lookup table 
using CClusterElementArray.FindIndex() (or another equivalent method 
available elsewhere in the SDK).  The returned value is the index of the 
subcomponent as seen on the geometry.

If the cluster is complete, then the indices in the cluster usually map 1:1 
to the geometry indices, but that is not a guarantee.  Why?  Because the 
geometry's construction history may have live topology operators 
adding/removing components.  As long as those operators are live, they'll 
reserve the subcomponent indices they point to even if they are deleted. 
Once the operators are frozen or deleted, the topology is updated and the 
geometry indices are re-ordered to account for the additions/removals caused 
by the operators.  The cluster index lookup table is then updated so the 
indices in the cluster map to the proper geometry subcomponent indices. 
Think of a cluster as an associative array.  The cluster element index is 
the key, the mapped subcomponent index returned is the value.  e.g.

GeometrySubComponentIndex = ClusterElementArray.FindIndex( 
clusterElementIndex );

Example:

if a complete polygon cluster is created on a polygon mesh cube, the cluster 
element list will have 6 entries numbered 0-5.  If you request index 3, 
you'll get a returned value of 3.  Now delete polygon 3 and you'll notice 
polygons 4 and 5 hold their respective indices, and will continue to do so 
as long as the delete component operator remains in the construction 
history.  The moment you freeze construction history, polygons 4 and 5 will 
slide down to fill the void created by the deletion of polygon 3, and will 
then be renumbered as polygon indices 3 and 4 to keep the list of 
subcomponent indices contiguous.  polygon index 5 will no longer exist.  The 
ClusterElementArray indices, however, will always be numbered 0...N-1, where 
N is the number of elements in the cluster.

The cluster element list lookup table is really important if the cluster is 
not complete (usually the case for point, edge, and polygon clusters).  If 
you simply grab the indices contained in the cluster, you'll get incorrect 
results as those indices are the indices of the cluster lookup table, not 
the geometry subcomponent indices they represent.  That is why you use 
ClusterElementArray.FindIndex().
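As a plain JavaScript model of the associative-array behaviour (illustrative only - this is not the XSI C++ API; the class and method names simply mirror the usage above):

```javascript
// A cluster's element list: cluster element indices run 0..N-1,
// and each maps through the lookup table to a geometry
// subcomponent index.
function ClusterElementArray(geometryIndices) {
    this.table = geometryIndices.slice(); // element index -> geometry index
}

// Key = cluster element index, value = geometry subcomponent index.
ClusterElementArray.prototype.findIndex = function (clusterElementIndex) {
    return this.table[clusterElementIndex];
};

// A partial polygon cluster on a 6-polygon cube holding polygons 1, 3 and 5:
var cluster = new ClusterElementArray([1, 3, 5]);
console.log(cluster.findIndex(2)); // 5 -- not 2, which is why grabbing the
                                   // raw cluster indices gives wrong results
```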


Matt


Date: Sat, 2 Sep 2017 09:40:19 +0300
From: Andrew Prostrelov 
To: "Official Softimage Users Mailing List.
https://groups.google.com/forum/#!forum/xsi_list

OK. Today's question is about clusters in C++.
I'm trying to get the actual geometry elements from a cluster,
and I thought that I could get them this way:

for (LONG i = 0; i < curClusters.GetCount(); i++)
{
    CRef curref; curref.Set(curClusters[i]);
    Application().LogMessage(L"CollectClusterComps::curClusters[i]: " + curClusters[i]);

    Cluster cur_cls(curref);
    if (cur_cls.IsValid())
    {
        CLongArray prev_comps = cur_cls.GetElements().GetArray();
        Application().LogMessage(L"CollectClusterComps::prev_comps: " + prev_comps.GetAsText());
    }
}

But this way I get the wrong component ids.
Since I can literally see the right component indices

[image attachment]

there should be a way to get them without changing the current selection
(Cluster.SelectMembers() commands, etc.).




RE: What were they thinking....

2017-08-31 Thread Matt Lind
What you described, Joey, is nothing more than point of reference.  What is 
local in one perspective and global in another, can be modeled as 
parent/child relationships in many cases.  It can be done, it's just a 
matter of studying the ripple effect of changing a core fundamental feature. 
It may not be a practical investment of time, but it can be done.

I am sure any one of us could make a very long laundry list of what we'd 
like to see carried over from Softimage.  I think it would be better to 
describe them from a function point of view rather than specific named 
feature.

For example, ability to use global transforms to manipulate an object 
instead of just local as Maya currently supports.

The ability to define your keyboard commands so the stuff you use 80% of the 
time doesn't get stepped on by esoteric stuff.  For example, if "A" frames 
all objects in a viewport, then that key should not be used for something 
less important (or completely unimportant) such as changing your layout to 
be in animation mode.  Softimage put all the important stuff in the center 
of the keyboard where your hand naturally rests, and put the lesser 
important stuff at the perimeter.  the less important it was, the more hoops 
you had to jump through to access it.  That's how it should be.  Maya has no 
system.  Example: must use ALT-MMB to pan the camera.  WTF?  Not only is 
that out of the way, but requires uncommon use of key and mouse to perform. 
It's certainly hard on my arthritic wrists.


Probably the most important change I would like to see is rewriting of the 
documentation.  Whoever wrote the current docs has poor organizational 
skills and doesn't have a mastery of the English language.  Topics 
frequently point to other pages only for those pages to point back to where 
you started without answering your question.  Many pages have so little 
information it's not worth having pages for them.  The SDK documents aren't 
much better as they fail to mention some very important pieces too as 
everything is written from the point of view of hindsight (i.e. written as 
if you already know the SDK.  Not designed for newcomers).  Heck, the C++ 
SDK docs don't even alphabetize the methods available in a class. 
Seriously?  Ever look at a large class like MFnMesh and try to find the one 
method you need to get UV space info?  It's a chore.  As a direct 
comparison, take a look at the Maya Python API docs, which describe many of 
the same methods.  Notice they are alphabetized, and while that doesn't 
solve the problem, it certainly makes it less of a chore.

comparison - C++ vs. Python documentation of MFnMesh class:

C++: 
http://help.autodesk.com/view/MAYAUL/2017/ENU/?guid=__cpp_ref_class_m_fn_mesh_html
Python: 
http://help.autodesk.com/view/MAYAUL/2017/ENU/?guid=__py_ref_class_open_maya_1_1_m_fn_mesh_html


Ideally, I'd like to see the SDK docs written like the Softimage scripting 
object model documentation where the methods were listed above, and the 
properties listed below in grid fashion.  That was a powerful arrangement of 
information to make learning easy.

Matt









Date: Thu, 31 Aug 2017 16:11:08 +
From: "Ponthieux, Joseph G. (LARC-E1A)[LITES II]"

Subject: RE: What were they thinking

I know this is not going to be popular, but I'm going to suggest that no one 
should get their hopes up about ever seeing that changed.

Folks need to understand that transforms, matrices, centers (pivots) and 
their breakout and order are deeply embedded in Maya's internal structure. 
Further, when they were established PA and TAV were used as precedence for 
their design. For example some of it is considered from the vantage point of 
a model centric zero world position, because prior to Maya, everything in 
TAV's modeler (Model) was modeled from a world zero relationship to the 
model in Model. The model was then imported into its animation editor 
(PreView), or other tools like Dynamation, and what was world zero for the 
model in Model became the Transform center for the object in Preview.

If you are old enough to be familiar with TAV's behavior, and to have used 
it, you would understand why Maya was designed the way it was. You can't 
take XSI or SI3D's way of doing these things and compare them 1:1 to Maya. 
They are inherently different and for specific reasons. XSI and SI3D gave us 
an abstraction layer for centre/pivot control which, in my own opinion, was 
not only unique but radically out of step with the rest of the CGI world. If 
one wants to argue that it was forward thinking I suspect argument could be 
made, but it sure made it easy, maybe even too easy, to alter pivots 
mid-stream in SI.

Once you get used to pivots and understand how to edit pivots (or rather 
when not to edit pivots) in Maya, they are really not that difficult to deal 
with. But you literally have to ignore the way you were doing it in 
Softimage and take it from the Maya way. If you try to 

Re: What were they thinking....

2017-08-30 Thread Matt Lind
I've bumped into a few myself:

G - repeats last command.  Especially annoying if you accidentally press it 
after importing a file as you cannot undo an import.

A - resets the UI to animation layout, including rearranging all your 
windows.  Considering "A" is used heavily for 'frame all' in the viewports, 
whoever thought this was a good idea should be shot.  If you don't pay 
attention to your mouse cursor location, your user experience will be 
upended... not that you could ever measure the difference in productivity.

I still have yet to be presented a viable case where deleting all the code 
in the script editor upon execute is a beneficial feature.



Date: Tue, 29 Aug 2017 09:22:16 + (UTC)
From: Anto Matkovic 
Subject: Re: What were they thinking
To: "Official Softimage Users Mailing List.

I've also experienced 'ghost' hotkeys in some other cases. Here it happens 
with certain modeling operators: let's say when M or R is pressed after 
extruding, Maya tries to switch the workspace to modeling or rigging, 
something like that - for now I'm not completely sure about the "procedure", 
which by the way leads to a dismantled interface, not to a certain workspace 
suddenly requested by Maya (not me...). While I'm pretty sure I do not have 
any hotkey related to workspaces. Thanks for posting this, at least I know 
I'm not alone :).

Regarding constraints, IK and the 'famous' pairBlend node, I somehow got 
everything smoother by avoiding Maya constraints, IK and especially the 
pairBlend thing as much as possible, trying to replace them with simple 
nodal setups - something like a bunch of 'remap value' nodes driven by the 
position of a locator, or even a part of a 2-bone IK chain driven only by a 
node setup (still using an orient constraint for the global orientation of 
the IK chain) - but, yeah, this really is not a solution for quick setups. 
The way pairBlend is implemented, deleting the node once there is no 
animated blend, makes it completely and definitively unusable for quick 
setups. Also, the only non-scripted way to safely copy-paste a part of a rig 
seems to be saving a copy of the scene, deleting everything else, checking 
that all necessary nodes are still in place (as deleting can take unwanted 
parts of networks) - and finally, copying back into the original scene. So, 
yeah, yet another one, unusable for quick work.




Re: What were they thinking....

2017-08-25 Thread Matt Lind
The Maya SDK is no better.

Excruciating teeth-pulling experience to do really basic things as concepts 
are not explained, or explained well.  Every node is purpose-built and has 
its own secret handshakes, making it difficult to write generalized 
and reusable code to perform common tasks.  Using the SDK basically involves 
studying the graph as seen in the node editor, dissecting how it was built, 
then repeating it in your code...only to find out even if you replicate the 
exact same setup it doesn't behave the same.  There are additional hidden 
tricks you must know to get those last pieces to drop into place.  You can 
very easily fall into the trap of attempting to write your own abstraction 
layer just to make the pieces less cumbersome to use, but just when you 
think you've wrapped everything nicely, Maya throws you one of its endless 
supply of idiosyncratic surprises.

Example:  constraints

In softimage, each constraint is a separate operator that lives in an 
object's construction history.  Every time you add a constraint, it is added 
to the construction history in the order which it was applied.  A lot more 
may be going on under the hood, but to the end user it's very straight 
forward.

In Maya, if you attempt to add more than one of the same type of constraint 
to an object (e.g. two point constraints), instead of making two distinct 
constraint operator nodes like in Softimage, Maya consolidates them into a 
single constraint node with multiple inputs blended internally - but you 
have to supply your own blendweight slider to do that (they don't mention 
that in the SDK docs).  Since each constraint type has slightly different 
inputs and outputs, you write your own abstraction layer to handle the 
differences, only to discover that if two different types of constraints 
affecting the same attribute of an object are applied (e.g. point and parent 
constraint competing for the 'position' attribute), Maya throws the curve 
ball of inserting a 'pairBlend' node, which is like a mix2colors node, but for 
transforms instead of colors.  Great.  Now you must revise your logic in 
your abstraction layer to account for that.  Then you start testing again 
applying a point constraint, then a parent constraint, then another type of 
constraint which also competes for the position attribute... only to 
discover Maya now removes the pairBlend node and rearranges the constraints 
into an entirely different arrangement you cannot predict.  This is why Maya 
will always suck.  Probably also explains why a lot of the C++ sample code I 
see wraps MEL commands instead of digging into the dependency graph.
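For what it's worth, the core of what a pairBlend-style node computes for translation is just a weighted mix (a sketch in plain JavaScript; Maya's actual node also interpolates rotations and has more attributes):

```javascript
// Blend two translate inputs by a weight in [0, 1]:
// weight 0 -> first input, weight 1 -> second input.
function pairBlendTranslate(inTranslate1, inTranslate2, weight) {
    var out = [];
    for (var i = 0; i < 3; i++) {
        out.push(inTranslate1[i] * (1 - weight) + inTranslate2[i] * weight);
    }
    return out;
}

// Halfway between a point-constraint output and a parent-constraint output:
console.log(pairBlendTranslate([0, 0, 0], [2, 4, 6], 0.5)); // [ 1, 2, 3 ]
```

The math is trivial; the pain described above comes from Maya inserting and removing the node behind your back, not from what it computes.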

I haven't followed Maya development, but from a distance it appears they're 
focusing on revamping the underlying core right now and will worry about the 
UI later.  However, given the idiosyncratic framework, I honestly don't see 
a slick and user friendly UI (a la Softimage forthcoming) at any point in 
time.  The way Maya is (currently) built won't allow it.

In short, they weren't thinking.

Matt



Date: Fri, 25 Aug 2017 09:41:26 -0700
From: Meng-Yang Lu 
Subject: Re: What were they thinking
To: "Official Softimage Users Mailing List.
https://groups.google.com/forum/#!forum/xsi_list

The copy and paste is pretty bad.  Haven't done it in years because of the
PTSD, but I remembered it would put "__pasted__" on the names of the
objects that were pasted over, assuming you wanted to do that in the first
place.

It's not a finely-tuned generalist tool like Softimage is out of the box.
And before, you could forgive it's shortcomings because in the Motif UI
days, it was ugly, but stupid fast.  Hotbox, plus marking menus, plus
hotkeys made you fast.  Now the UI lag pretty much sapped the joy from
those UI features.

Maya has had a tough time adapting to the times.  I see other developers
more in-tuned with the day to day tasks of production and developing tools
that help artists get through their day.  Not sure why ADSK can't move the
needle in a meaningful way when it comes to Maya releases.  I feel like
they should go and buy new computers, install maya out of the box, and try
to put together a 3 min short film.  The pitfalls would be pretty obvious
imo.

peace,

-Lu 




Re: Otish - is there a Maya equivalent of XSI's Deform by Curve?

2017-08-22 Thread Matt Lind
The flipping problem is due to a property of curves.

A parametric curve's normal always points in the direction of concavity at a 
given location.  The normal will flip where the curve transitions from 
concave to convex (and vice versa), such as at the mid-section of an "S" 
curve.  Tools will often apply a "Frenet" frame which is an added layer of 
math that walks along the curve and unifies the normal direction by applying 
local rotations to buffer the flipping, but Maya doesn't appear to be using 
one.

Specifying a linear quantity such as an up vector will not resolve the issue 
as you need a correction along the flow of the curve, not a fixed point or 
direction in a linear coordinate space.
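The flip is easy to demonstrate numerically. For a plane curve y = f(x), the signed curvature is k = f''(x) / (1 + f'(x)^2)^(3/2); its sign tells you which side the concavity (and thus the curve normal) points to, and it changes at an inflection point. A sketch in plain JavaScript for the mid-section of an "S" curve, y = sin(x):

```javascript
// Signed curvature of y = sin(x): k = -sin(x) / (1 + cos(x)^2)^1.5
// The sign flips at the inflection point x = PI, and the
// concavity-pointing normal flips with it.
function signedCurvature(x) {
    var d1 = Math.cos(x);   // f'(x)
    var d2 = -Math.sin(x);  // f''(x)
    return d2 / Math.pow(1 + d1 * d1, 1.5);
}

console.log(signedCurvature(Math.PI / 2) < 0);     // true  (concave down)
console.log(signedCurvature(3 * Math.PI / 2) > 0); // true  (concave up)
```

A Frenet-style correction walks along the curve and applies local rotations so the frame stays continuous across that sign change.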

Matt


Date: Tue, 22 Aug 2017 17:25:02 +0200 (CEST)
From: Morten Bartholdy 
Subject: Re: Otish - is there a Maya equivalent of XSI's Deform by
Curve?
To: "Official Softimage Users Mailing List.

I have just tried all the options for World Up Type - they just produce 
different types of flipping.

I will try lattice on curve and see how it goes.

MB




Re: XSI SelectTool context menus.

2017-08-18 Thread Matt Lind
As stated earlier, look up PluginRegistrar.RegisterMenu().

You cannot arbitrarily insert your custom tool into the factory menus of 
Softimage.  You can, however, insert your tool into a menu at a reserved 
anchor point for custom tools.  Most menus have reserved anchor points 
located immediately before and/or immediately after the native tools in that 
menu - including the context menu accessed via RMB.  Those anchor points are 
described in the SDK docs via PluginRegistrar.RegisterMenu().

You do not have access to all of the application's menus, but there are a 
LOT of anchor points.  If you want to see all their locations, then make a 
self-installing plugin and register a menu at every location defined by 
iterating through the enums using a 'for' loop in your XSILoadPlugin() 
callback.  When you start accessing menus, you'll see exactly what you can 
access.  Custom tools will have a "W" in the left margin of the menu (or U). 
A plugin can register more than one menu if you want to make it accessible 
from multiple places within the application.

Your plugin does not have any control over the context.  It can only respond 
to it.  You can do trivial things like add/remove submenus, or 
enable/disable them as defined in the SDK docs, but if you want to do more 
advanced things like make animated emojis in full color which cast pixie 
dust all over your screen when you click them, then you'll have to explore 
an external toolkit to do that.

Matt




Date: Sat, 19 Aug 2017 08:15:19 +0300
From: Andrew Prostrelov 
Subject: Re: XSI SelectTool context menus.
To: "Official Softimage Users Mailing List.
https://groups.google.com/forum/#!forum/xsi_list

Ok. Lets try to approach this problem from a different angle.
CustomTools have callback Deactivate, right ?! So i suppose that we can
temporarily deactivate
our CustomTool and jump to SelectTool (process RMB and invoke menu), and
back to our CustomTool again.




Re: XSI SelectTool context menus.

2017-08-16 Thread Matt Lind
Look up "PluginRegistrar.RegisterMenu" in the SDK docs.


Matt


Date: Wed, 16 Aug 2017 20:45:15 +0300
From: Andrew Prostrelov 
Subject: XSI SelectTool context menus.
To: "Official Softimage Users Mailing List.

Hello guys.
I'm looking for a way to add SelectTool context menus to my CustomTool.
This menus: http://imgur.com/a/oUyLa
Is there any way to reach Select Tool RMB menu ?
I guess this menus are dynamically generated, so maybe they can be reached
outside of SelectTool.




Re: softimage/windows10/dualscreen

2017-07-13 Thread Matt Lind
I should add:

I get better stability with the Quadro over the GeForce.  The GeForce has a 
faster clock speed, but the Quadro is better at supporting multiple overlay 
planes and other buffers, which translates to less flakiness.

For example, when working in a paint program like Krita, if multiple 
windows were cascaded on top of each other (e.g. Krita in front of a web 
browser, email client, explorer, etc. in the same screen space), selection 
would become flaky.  I'd be working in Krita to access the color swatch 
palette, but every click would register as if I was trying to read the email 
located on that line in my email client in the window behind Krita. 
Likewise, using a screen capture utility like Fraps would capture the wrong 
window when using a GeForce.

The Quadro doesn't have issues like that.  Everything works as it should.


Matt








Re: softimage/windows10/dualscreen

2017-07-13 Thread Matt Lind
When dealing with systems, only install what you need.

I only install the graphics drivers.  I don't install Nview, PhysX, game 
controllers, or the other stuff.  Currently using driver 382.05 because that 
version is appropriate for my hardware.  You'll have to do the legwork to 
determine the appropriate version for yours.  The Nvidia driver selector can 
usually figure it out.  Just make sure to query for the graphics driver and 
not beta drivers or other variants.

Probably what makes the biggest difference is always performing a 'clean' 
install.  It'll induce an extra reboot in the process, but it's only a 
minute or two.  When you try to upgrade drivers in place (or applications in 
general for that matter) the installers try to get too smart and end up 
leaving legacy stuff 'just to be safe'.  In the end that stuff only gums up 
the works.


Matt





Date: Thu, 13 Jul 2017 11:11:57 +
From: Andi Farhall 
Subject: Re: softimage/windows10/dualscreen
To: "Official Softimage Users Mailing List.

My default position was to install the latest drivers unless there was 
information to the contrary. When you say other crap do you mean everything 
except the actual driver itself? For the less informed (like me) how can I 
tell if they're incorrect?


--
Softimage Mailing List.
To unsubscribe, send a mail to softimage-requ...@listproc.autodesk.com with 
"unsubscribe" in the subject, and reply to confirm.


RE: softimage/windows10/dualscreen

2017-07-13 Thread Matt Lind
I have a nearly 10 year old Dell Precision Workstation which had an Nvidia 
GeForce GTX 260, and is now equipped with a Nvidia Quadro K4200.  Neither 
had any issues.

Check your drivers, they might be old or incorrect.  Don't install the other 
Nvidia crap they try to bundle into the driver.

Matt


Date: Thu, 13 Jul 2017 11:35:18 +0100
From: "adrian wyer" 
Subject: RE: softimage/windows10/dualscreen
To: "'Official Softimage Users Mailing

my money is on the old quadro... time for a new 1080ti?

take old yella out back and do the honourable thing mate

a

Adrian Wyer
Fluid Pictures
75-77 Margaret St.
London
W1W 8SY
++44(0) 207 580 0829


adrian.w...@fluid-pictures.com

www.fluid-pictures.com



Fluid Pictures Limited is registered in England and Wales.
Company number:5657815
VAT number: 872 6893 71 




Re: softimage/windows10/dualscreen

2017-07-13 Thread Matt Lind
Check your drivers, I've been running Softimage on Windows 10 for the past 
year without issue.

Matt



Date: Thu, 13 Jul 2017 10:18:30 +
From: Andi Farhall 
Subject: Re: softimage/windows10/dualscreen
To: "Official Softimage Users Mailing List.




First day of using Soft and Win 10 and it's not going well: scene save - 
everything goes black; render region - everything goes black. Looks like 
it's back to 7 at the weekend.




Extreme slowdown on windows 10

2017-07-07 Thread Matt Lind
Anybody else experience extremely slow copy/move/delete operations on their 
C drive as of the latest windows update last week?

file operations on my other drives are normal, but on the C drive they take 
forever and a day.  Even something as simple as creating a text file in a 
new empty folder takes up to 5 minutes.  I have tried changing permissions 
and ownership of files to full access recursively, but to no avail.

I imagine this has something to do with UAC or some other windows specific 
security policy, but which?

thanks,

Matt





Re: no frame numbers in output filename?

2017-07-05 Thread Matt Lind
Turn off sequence rendering and frame numbers will be omitted.

Matt




Date: Wed, 5 Jul 2017 10:22:09 -0500
From: Orlando Esponda 
Subject: Re: no frame numbers in output filename?
To: softimage 

I don't think there's a way to do that,  but you could use regex to find
the pattern easily.




RE: Friday Flashback #305

2017-06-17 Thread Matt Lind
That was largely a marketing decision.  You have to remember the 
circumstances of the time:

Due to being late to market with Sumatra and trailing Maya by 2 years, 
Softimage had to make some sales concessions to stop the bleeding of 
customers leaving.  Therefore, anybody who stayed on maintenance for 
Softimage|3D v3.8x and v3.9x would be guaranteed a copy of XSI when it 
shipped - even if it shipped after the maintenance contract completed.  New 
customers who purchased XSI 1.x automatically received a copy of 
Softimage|3D as well.

By the time Softimage|3D v3.9.3 was ready to ship, XSI was no longer 
including copies of Softimage|3D, and only a handful of customers (game 
developers) still used Softimage|3D for serious work.  There had already 
been many service packs in the form of v3.9.1, v3.9.1.1, v3.9.2, v3.9.2.2, 
etc...  Releasing a v3.9.3 would be interpreted as another bug-fix service 
pack and not generate any sales interest.  So they upped it to 4.0.

Matt




Date: Sat, 17 Jun 2017 17:24:50 +0200
From: "Sven Constable" 
Subject: RE: Friday Flashback #305
To: "'Official Softimage Users Mailing List.

So version 3.9.3 was officially released as version 4.0. Not entirely 
correct procedure, I would say ;)




RE: uvs transfer from nurbs to polys

2017-06-17 Thread Matt Lind
Softimage|3D supported Texture UV coordinates in the NURBS to polygon mesh 
conversion, but XSI never did.  It was the only geometry type available in 
XSI 1.0, and some enhancements were implemented in XSI v3.x, but NURBS 
development was more or less abandoned after that.

GATOR wasn't introduced until XSI v5.0

Scripting your own NURBS to polygon mesh converter with texture coordinates 
isn't difficult.  You can think of a NURBS surface as a deformed grid 
comprised of 4-sided polygons, and the Texture UVW coordinates are 
guaranteed to be contiguous.  If you build the geometry correctly, 
transferring texture UVW coordinates is practically copy/paste.

Things only get complex if trim curves are involved.
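For the untrimmed case, a rough sketch of the idea (hypothetical helper, not SDK code; assumes the surface is sampled on a regular nu x nv grid):

```javascript
// Build quad-grid topology and per-vertex UVs for an untrimmed NURBS
// surface sampled at nu x nv points. Vertex (i,j) maps to index j*nu + i,
// and its UV is simply the normalized grid position - hence "copy/paste".
function gridToPolygons(nu, nv) {
    var quads = [];   // each entry: 4 vertex indices, counter-clockwise
    var uvs = [];     // one [u,v] pair per vertex
    for (var j = 0; j < nv; j++) {
        for (var i = 0; i < nu; i++) {
            uvs.push([i / (nu - 1), j / (nv - 1)]);
            if (i < nu - 1 && j < nv - 1) {
                var a = j * nu + i;
                quads.push([a, a + 1, a + 1 + nu, a + nu]);
            }
        }
    }
    return { quads: quads, uvs: uvs };
}
```

Each quad's corner indices and UVs come straight from the grid position, which is why the transfer is practically copy/paste.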

Matt




Date: Sat, 17 Jun 2017 14:22:52 +0200
From: "Sven Constable" 
Subject: RE: uvs transfer from nurbs to polys
To: "'Official Softimage Users Mailing List.

My first thought was it was replaced by gator and therefore deprecated.  Out
of curiosity I checked from version 2011, 7.01, 6.02 down to 4.0. It's not
in there, and even 4.0 didn't have gator. It must have been removed in an
earlier version. I wonder why.

Sven 




SDK: creating a dynamic non-modal PPG in C++

2017-06-07 Thread Matt Lind
Hopefully someone is still around to answer this question:

I have written tons of scripts over the years.  One technique I use for 
creating non-modal dialogs is to create a view and populate it with a 
dynamically generated custom property so nothing needs to hang around at the 
scene root.  This allows the tool to be non-modal so the user can use it 
again and again without having to re-launch the tool or be locked into only 
using that tool.

example (JScript):

// Create a dynamic custom property not attached to anything in the scene.
var oCustomProperty = XSIFactory.CreateObject( "CustomProperty" );
oCustomProperty.Name = "MyPPG";

// Add a scalar parameter called "somevalue" with default value 0.5 and 
range [0...1]
oCustomProperty.AddParameter( "somevalue", siFloat, siClassifUnknown, 
siSilent, "", "", "", 0.5, 0, 1, 0, 1 );

// Create the layout, add the parameter, then rename it 'ratio'.
var oPPGLayout = oCustomProperty.PPGLayout;
oPPGLayout.Clear();

var oPPGItem = oPPGLayout.AddItem( "somevalue", "ratio", siControlNumber );
oPPGItem.LabelMinPixels = 90;

// define callbacks
oPPGLayout.Language = "JScript";
oPPGLayout.Logic = OnInit.toString() + somevalue_OnChanged.toString();

// embed the custom property into a 'view'
var oView = Application.Desktop.ActiveLayout.CreateView( "Property Panel" );
oView.SetAttribute( "targetcontent", oCustomProperty );

// set dimensions, then display the view as a non-modal dialog box
oView.Resize( 500, 300 );
oView.Visibility = true;

function OnInit()
{
PPG.Refresh();
}

function somevalue_OnChanged()
{
LogMessage( "Hey, put it back!", siComment );
}


I have some old C++ code I need to update and noticed it creates a custom 
property at the scene root, then uses InspectObj() to display the custom 
property as a modal dialog.  I want to convert it to use the above technique 
to make it non-modal and not litter the scene root.  I have already 
converted all the code and am successfully displaying the view, but clicking 
buttons, adjusting sliders, etc.. is not triggering the PPGEvent callback 
(equivalent to _OnInit() and _OnChanged() in scripting) to allow the tool to 
respond to user interaction.  The difficulty is with PPGLayout.PutLogic() 
(PPGLayout.Logic in scripting) which only accepts scripted code.

Question:  How do I define/register the callbacks for a non-modal custom 
property in C++ (other than defining a self-installing custom property 
plugin)?

Matt





RE: point cloud render visibility

2017-06-02 Thread Matt Lind
If the only thing being rendered was the point cloud, you could deactivate 
secondary rays and shadows in your mental ray rendering preferences for the 
render pass in question.


Matt




Date: Fri, 2 Jun 2017 15:57:55 +
From: "Ponthieux, Joseph G. (LARC-E1A)[LITES II]"

Subject: RE: point cloud render visibility
To: "softimage@listproc.autodesk.com"

Never mind. You have to select the box to the left of each point cloud 
render visibility item to make the boolean on the right switchable.

Learn something new every day


Joey




RE: ICE instance and travel

2017-05-23 Thread Matt Lind
Scaling an object to zero is a no-no.  Doing so often induces crashes in the 
renderer because it produces triangles with zero area (many algorithms 
depend on non-zero area), or sends zeroes into parts of equations where zero 
is illegal (divide by zero, for example).  mental ray has implemented 
safeguards to catch most cases, but it doesn't catch all cases.  I'm 
somewhat sure the same is true for other renderers.
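A minimal illustration of the failure mode: a zero-area triangle yields a zero-length cross product for its edge vectors, and normalizing that divides by zero:

```javascript
function cross(a, b) {
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]];
}
function normalize(v) {
    var len = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    return [v[0]/len, v[1]/len, v[2]/len];  // len == 0 -> NaN components
}

// Healthy triangle edges: well-defined unit normal.
var n = normalize(cross([1, 0, 0], [0, 1, 0]));    // [0, 0, 1]

// Same triangle scaled by zero: every edge collapses to [0, 0, 0],
// the cross product has zero length, and normalizing produces NaNs.
var bad = normalize(cross([0, 0, 0], [0, 0, 0]));  // [NaN, NaN, NaN]
```

Renderers that don't guard against the zero-length case propagate those NaNs into shading and sampling, which is where the crashes come from.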

Matt




Date: Tue, 23 May 2017 21:39:27 +0200
From: "Sven Constable" 
Subject: RE: ICE instance and travel
To: "'Official Softimage Users Mailing List.


Well, that would possibly create problems with camera fly-throughs and 
reflections :) BTW, I wonder what it means for the renderer if you would scale 
them to zero. I never did that with masters in a scene. 




Re: Another rip question

2017-05-23 Thread Matt Lind
I just did a quick mockup in JScript using the raycast method and can 
confirm it works.  With a little study of the algorithm, I found steps that 
could be eliminated to improve performance.

2D line intersection testing is still faster for finding edge intersections, 
though.
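For reference, that 2D test boils down to solving for the two segment parameters (a generic sketch, not code from the thread):

```javascript
// Intersect segments p->p2 and q->q2 in 2D. Returns the parameter t along
// p->p2 (0..1), or null if the segments are parallel or the intersection
// falls outside either segment.
function segSeg2D(p, p2, q, q2) {
    var r = [p2[0] - p[0], p2[1] - p[1]];
    var s = [q2[0] - q[0], q2[1] - q[1]];
    var denom = r[0]*s[1] - r[1]*s[0];        // 2D cross product
    if (denom === 0) return null;             // parallel or degenerate
    var qp = [q[0] - p[0], q[1] - p[1]];
    var t = (qp[0]*s[1] - qp[1]*s[0]) / denom;
    var u = (qp[0]*r[1] - qp[1]*r[0]) / denom;
    return (t >= 0 && t <= 1 && u >= 0 && u <= 1) ? t : null;
}
```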

Matt



Date: Mon, 22 May 2017 22:35:37 +
From: Matt Lind <speye...@hotmail.com>
Subject: Re: Another rip question
To: "softimage@listproc.autodesk.com"

I can't say one way or another as your code which does that work is not
visible in the example.  However, computing intersection from the 2D
projection should work.  Check your math and also make sure your values are
all described in the same coordinate system.

Although it would be more expensive, you could do a raycast along the edge
from its first vertex towards its second vertex and see if it hits the plane
defined by the slicing plane.  Basically a line-plane intersection test.
You would only do this for edges which have already been verified as
intersecting the slice plane from the 2D projection test.

Keep working on the 2D intersection test.  It should work.

Matt




Re: Another rip question

2017-05-22 Thread Matt Lind
I can't say one way or another as your code which does that work is not 
visible in the example.  However, computing intersection from the 2D 
projection should work.  Check your math and also make sure your values are 
all described in the same coordinate system.

Although it would be more expensive, you could do a raycast along the edge 
from its first vertex towards its second vertex and see if it hits the plane 
defined by the slicing plane.  Basically a line-plane intersection test. 
You would only do this for edges which have already been verified as 
intersecting the slice plane from the 2D projection test.
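The line-plane test can be sketched like this (generic math, not SDK code; the plane is given as a point plus normal):

```javascript
function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// Intersect the edge v1->v2 with the plane through 'point' with normal 'n'.
// Returns the 3D hit position, or null when the edge is parallel to the
// plane or the intersection lies beyond the edge's endpoints.
function segPlane(v1, v2, point, n) {
    var dir = [v2[0]-v1[0], v2[1]-v1[1], v2[2]-v1[2]];
    var denom = dot(n, dir);
    if (denom === 0) return null;                  // edge parallel to plane
    var t = dot(n, [point[0]-v1[0], point[1]-v1[1], point[2]-v1[2]]) / denom;
    if (t < 0 || t > 1) return null;               // beyond the edge's ends
    return [v1[0] + dir[0]*t, v1[1] + dir[1]*t, v1[2] + dir[2]*t];
}
```

Note the parameter t here is computed in 3D. Reusing a parameter computed in the 2D screen projection distorts the result, because a perspective projection does not preserve ratios along an edge.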

Keep working on the 2D intersection test.  It should work.

Matt



Date: Sat, 20 May 2017 22:11:53 +0300
From: Andrew Prostrelov 
Subject: Re: Another rip question
To: "Official Softimage Users Mailing List.
https://groups.google.com/forum/#!forum/xsi_list;

Ok. I tried the method with a projection on a viewplane.
It calculates cut edges right and fast.
But I'm stuck on the cut edge intersection coordinates.
Here is a code: http://c2n.me/3KAWXF5
In short words:
Project edge to viewplane (2D space). Construct CLine representation of
projected edge.
Construct CLine representation of drag line (or cut line) and project it
also on a viewplane.
Get intersection between this 2D Cline representations.
Get Parameter value of this 2D intersection coord.
Get via this Parameter value of 2D intersection a 3D intersection (build
Cline representation of edge in world space coords and get coord from 2D
Parameter value).
But unfortunately this method gives cut line distortions. I guess the
Parameter value calculated in 2D space shouldn't be used for 3D space.
How should I get the edge intersection coordinates? 




Re: Another rip question

2017-05-01 Thread Matt Lind
If you want something a little faster, you could project the line defining 
your slicing plane and all the mesh's points onto a virtual plane 
perpendicular to the camera's point of view, then compute which side of the 
slicing line each vertex resides and flag it as -1, 0, or 1 for below, on, 
or above the line respectively.  As you traverse the edges of the mesh, 
compare the flags of the two vertices comprising the edge to see if they are 
on opposite sides of the slicing line.  If yes, then select the edge.

Since you know the slicing line passes through the edge, then by definition it 
must intersect both triangles sharing the edge (unless the edge is parallel 
to the slicing line).  That implies exactly one other edge of each triangle 
intersects with the slicing line.  So test the next available edge of each. 
Regardless of the outcome of the test, you know which 2 edges of the 
triangle intersect the slicing line, so you can skip testing the 3rd.

If you have enough information flagged, you may be able to extend the 
implication to neighboring triangles to eliminate those which share the edge 
which does not intersect the slicing line, and likewise, more rapidly walk 
the mesh to find the other edges that do.  It should be obvious you'll need 
a data structure which keeps track of which edges/triangles you've visited.

If you extend the logic to a 2nd slicing line perpendicular to the first and 
record left/on/right side of the 2nd slicing line, you have an algorithm for 
finding intersection of a point inside of a triangle.  A little 
unconventional, but it works.

With some analysis and deduction, you can speed up the algorithm a lot more.
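The flagging and edge test above can be sketched as follows (assuming the 2D projection has already been done):

```javascript
// Side of vertex v relative to the slicing line a->b in the 2D projection:
// -1 below, 0 on the line, +1 above (sign of the 2D cross product).
function side(a, b, v) {
    var s = (b[0]-a[0]) * (v[1]-a[1]) - (b[1]-a[1]) * (v[0]-a[0]);
    return s < 0 ? -1 : (s > 0 ? 1 : 0);
}

// An edge [i, j] crosses the line when its two vertices carry opposite flags.
function edgeCrosses(flags, edge) {
    return flags[edge[0]] * flags[edge[1]] < 0;
}
```

A vertex flagged 0 (exactly on the line) is the special case that needs explicit handling, as discussed elsewhere in this thread.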


Matt




Re: Another rip question

2017-05-01 Thread Matt Lind
Yes, that is the basic idea.  You also have the normals available to 
determine facing angle, and can do occlusion testing.  The SDK Triangle 
object provides the polygon index which can then be used to look up neighbor 
edges, vertices and polygons (if needed).

If you test the intersection point and triangle using barycentric 
coordinates, they’ll tell you which direction to travel next if the 
intersection is outside the triangle.  That information will help you 
reduce/eliminate further ray-plane tests for other triangles on the mesh and 
other points on your slicing line/plane.  The first few triangles will be 
the most expensive, but should get faster and faster as more triangles are 
eliminated from testing.  Even on large meshes, this shouldn't take too long 
on modern hardware.  If you want to get smart, you can treat it like a 
render and build a tree for faster lookups for more targeted ray casting, 
but that will likely only pay off for meshes with significant polygon 
counts.
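The barycentric test can be sketched as follows (2D case for brevity; a (u,v) outside the valid range tells you which edge to cross next):

```javascript
// Barycentric coordinates of point p in triangle (a, b, c), 2D.
// p = a + u*(b-a) + v*(c-a); inside iff u >= 0, v >= 0 and u + v <= 1.
function barycentric(a, b, c, p) {
    var e0 = [b[0]-a[0], b[1]-a[1]];
    var e1 = [c[0]-a[0], c[1]-a[1]];
    var e2 = [p[0]-a[0], p[1]-a[1]];
    var denom = e0[0]*e1[1] - e0[1]*e1[0];     // signed double area
    var u = (e2[0]*e1[1] - e2[1]*e1[0]) / denom;
    var v = (e0[0]*e2[1] - e0[1]*e2[0]) / denom;
    return { u: u, v: v, inside: u >= 0 && v >= 0 && u + v <= 1 };
}
```

When the point is outside, the sign of u, v, or (1 - u - v) indicates which edge of the triangle was crossed, i.e. which neighboring triangle to walk to next.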

The priority of the picking aspect of your tool is precision and 
reliability, not speed.  While what I suggest is quite a bit slower than 
supercover, it also provides better results and gives you more information 
to work with to make your tool more robust.  Just my opinion, but I don't 
think a user will complain much if picking edges takes a half second vs. 
milliseconds, but they will definitely complain if picking misses edges or 
picks the wrong edges or doesn't slice the mesh exactly right.  A reliable 
tool that gets it right the first time is always more valuable than a tool 
that runs in 1000x faster but produces wrong results.  Just ask any artist 
who has to work to 3:00 am every night.  Proof of that is Autodesk products. 
How many people are happy using Maya or 3DSMax and all their bugs?  Modelers 
are usually extremely picky about placement and orientation of edges.  If 
they have to reapply the tool multiple times to get them where they want 
them, they probably won't use the tool for long.

Matt


Date: Mon, 1 May 2017 09:09:18 +0300
From: Andrew Prostrelov 
Subject: Re: Another rip question
To: "Official Softimage Users Mailing List.
https://groups.google.com/forum/#!forum/xsi_list;



Oh. I see, so the idea is to get both intersections. The fact that we have
more than one intersection tells us that one of these intersections is on the
front side and the others are on the back side. So we can operate with a Z or a
simple distance value.
Ok. Also, the triangle gives us a plane and we can operate with a plane-to-plane
intersection. If the intersection Parameter is out of the u=1, v=1 range we are
out of the triangle's range. Well, those are all the variations I see so far.



Re: Another rip question

2017-04-30 Thread Matt Lind
I think the core of your issue is you chose an algorithm that's not well 
suited to solving the problem.  The algorithm is fast, but it has an 
achilles heel with respect to errors where the line of intersection passes 
exactly through a vertex of the mesh you're cutting.  The algorithm is also 
dependent on screen resolution.  If you have a low resolution screen 
(viewport) in the area being tested, then expect inaccuracies.  Given the 
oblique angle of the camera relative to the side of the mesh exhibiting the 
issue, and the fact your mesh is quite dense relative to those pixels, it 
looks to me that is the problem.  While it's always preferred to do things 
in the speediest fashion possible, your immediate task doesn't require that 
kind of speed.

I had to do a similar task last week.  I prototyped my solution in JScript 
for convenience, and using a classic ray-plane intersection test, converting 
all mesh vertex positions to global space, testing every triangle on the 
mesh, and so on.  Despite all that extra overhead, I still got performance 
good enough for your problem.  Convert that to C++ and do some optimization 
by reducing which triangles to test, etc.. you should have no problem with a 
classic ray-plane intersection algorithm.  Alternatively, you could do line 
intersection tests on the polygon edges instead.  That may be a bit faster, 
but a little more hassle as once you find the edges, you have to do a little 
work to find which polygons they belong to.

To determine which polygons are facing the camera... if you know how the 
supercover algorithm works, you shouldn't be asking that question. ;-)


Matt




Date: Sun, 30 Apr 2017 10:24:11 +0300
From: Andrew Prostrelov 
Subject: Re: Another rip question
To: "Official Softimage Users Mailing List.
https://groups.google.com/forum/#!forum/xsi_list;


Oh. I know this thread ;). It gave me the idea of cutting by a parametric plane
and not by vector projections.
But unfortunately there is no answer to my question in this thread and I
don't have any ideas left.

By the way, there is another interesting question:
how should I differentiate the visible side of mesh components from the
non-visible side for the current Camera position?

http://stackoverflow.com/questions/9709970/algorithm-or-software-for-slicing-a-mesh
So ... CGAL maybe.
Does someone have experience using the CGAL lib?




Re: Up vector on particles flowing on a path

2017-04-27 Thread Matt Lind
A few cross products:

direction vector x global Y axis = binormal
binormal x direction = your desired up vector

order is important.  Also normalize the resulting vector to be a unit vector 
to make other downstream operations predictable.
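As a sketch (dir is the particle's unit velocity; the construction degenerates when dir is parallel to Y, so guard for that case in practice):

```javascript
function cross(a, b) {
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]];
}
function normalize(v) {
    var len = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    return [v[0]/len, v[1]/len, v[2]/len];
}

// direction x global Y = binormal; binormal x direction = up.
// The result is perpendicular to dir but leans as far toward +Y as possible.
function upVector(dir) {
    var binormal = normalize(cross(dir, [0, 1, 0]));
    return normalize(cross(binormal, dir));
}
```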


Matt





Date: Wed, 26 Apr 2017 18:37:03 -0400
From: Kris Rivel 
Subject: Up vector on particles flowing on a path
To: softimage@listproc.autodesk.com

I'm sure this has come up but how can I get a proper up vector on particles 
flowing along a path? The align on velocity in the flow on path node 
overrides anything else. I want them to align on velocity/direction but also 
stand straight up pointing at Y.

Kris




Re: envelope weights from mesh to lattice

2017-04-21 Thread Matt Lind
In scripting, accessing points on a lattice is no different than accessing 
points on any other scene object, but you'll need to read the lattice 
operator's parameters to know the dimensions, point distribution, and 
deformer profile (curved vs. linear) - which may be important as cage 
deformers do not support different deformation profiles.

Rather than create a cage deformer, consider nulls as deformers instead. 
Iterating through the lattice points you can create a null in it's place, 
add to the envelope, then copy the weights across.  At a technical level, 
there's nothing a cage can do that nulls can't, but a null can more easily 
be manipulated/constrained to other scene elements than trying to do the 
same with cage vertices as it degrades to shape animation.  A Null also has 
rotation and scaling capabilities whereas the cage can only provide position 
(points).

Matt




Date: Fri, 21 Apr 2017 11:40:26 +0100
From: Matt Morris <matt...@gmail.com>
Subject: Re: envelope weights from mesh to lattice
To: "Official Softimage Users Mailing List.

I did wonder whether you could access that point order to build a standin
mesh, before finding out the cage deformer was a much simpler way to go.

It's an old rig built by someone else which uses a lattice enveloped to
nulls to deform a pack, and I've been trying to find a way to paint the
weights on the lattice and smooth them out (pack logo was deforming too
much in places) - using a mesh I could smooth weights and cage deform the
lattice.

On 21 April 2017 at 01:06, Matt Lind <speye...@hotmail.com> wrote:

> A lattice doesn't have geometry as it's just a collection of ordered
> points.
> That's why you cannot get a topology description.
>
> Many tools, such as GATOR, expect polygon mesh, NURBS Curve, or NURBS
> Surface as input because some of the math operations require access to
> triangles or other geometry data.  Since a lattice lacks some of that
> information, GATOR won't accept it.
>
> It will be easier to solve this problem via scripting than trying to 
> figure
> it out in ICE.
>
> I'm really curious why you have set a lattice as an envelope, i.e. the use
> case.
>
>
>
> Matt 




Re: envelope weights from mesh to lattice

2017-04-20 Thread Matt Lind
A lattice doesn't have geometry as it's just a collection of ordered points. 
That's why you cannot get a topology description.

Many tools, such as GATOR, expect polygon mesh, NURBS Curve, or NURBS 
Surface as input because some of the math operations require access to 
triangles or other geometry data.  Since a lattice lacks some of that 
information, GATOR won't accept it.

It will be easier to solve this problem via scripting than trying to figure 
it out in ICE.

I'm really curious why you have set a lattice as an envelope, i.e. the use 
case.



Matt




Date: Thu, 20 Apr 2017 11:39:39 +0100
From: Matt Morris 
Subject: envelope weights from mesh to lattice
To: "softimage@listproc.autodesk.com"


Hi chaps, that is, if anyone is still out there!

I'm trying to control envelope weights on a lattice, but struggling. I've
created a mesh made of a series of grids that match the point positions of
the lattice, and painted the mesh with the weights I'd like to transfer, but
can't seem to get them across. GATOR doesn't work with lattices. In ice I'm
trying getting closest location -> get envelopeweightsperdeformer, and
setting the same on the lattice, but it seems to end up with altered but
unrelated weights.

I thought maybe I could reorder the mesh to match the lattice points
somehow, but can't get a topology description from the lattice to rebuild
the mesh.

Any suggestions welcome!
Cheers,
Matt


-- 
www.matinai.com

--

Date: Thu, 20 Apr 2017 13:31:07 +0200 (CEST)
From: Morten Bartholdy 
Subject: Re: envelope weights from mesh to lattice
To: Matt Morris , "Official Softimage Users Mailing List."

I would think your best bet is using your meshes as cage deformers. I know, 
it will not be as interactive as a lattice, but it will work.

MB



> On 20 April 2017 at 12:39, Matt Morris wrote:
>
>
> Hi chaps, that is, if anyone is still out there!
>
> I'm trying to control envelope weights on a lattice, but struggling. I've
> created a mesh made of a series of grids that match the point positions of
> the lattice, and painted the mesh with the weights I'd like to transfer, 
> but
> can't seem to get them across. GATOR doesn't work with lattices. In ice 
> I'm
> trying getting closest location -> get envelopeweightsperdeformer, and
> setting the same on the lattice, but it seems to end up with altered but
> unrelated weights.
>
> I thought maybe I could reorder the mesh to match the lattice points
> somehow, but can't get a topology description from the lattice to rebuild
> the mesh.
>
> Any suggestions welcome!
> Cheers,
> Matt
>
>
> -- 
> www.matinai.com






Re: Random Thoughts about H.

2017-03-28 Thread Matt Lind
I haven't used Houdini in this context, but I do know a little math.

The point reference frame in Softimage is not much more than an orthogonal 
basis comprised of the U tangent, V tangent, and surface Normal (for 
surfaces).  For polygon meshes it follows some rules such as the winding of 
the polygon to determine the first edge which becomes a pseudo X axis.  That 
crossed with the normal produces the Z axis.  The combination of the three 
produces the matrix which represents the frame.

In other words, if you know the rules of the geometry construction, creating 
the reference frame shouldn't be more than a few cross products and 
normalizing.  Actually, you don't need to know the rules, you just need 'a' 
rule which is consistently followed.
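A sketch of that construction (illustrative, not the exact rule set Softimage follows internally):

```javascript
function cross(a, b) {
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]];
}
function normalize(v) {
    var len = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    return [v[0]/len, v[1]/len, v[2]/len];
}

// Build an orthonormal frame from a polygon's first-edge direction (the
// pseudo X axis) and its normal. The edge crossed with the normal produces
// Z; X is then re-orthogonalized so the rows form a proper basis even when
// the edge is not exactly perpendicular to the normal.
function referenceFrame(edgeDir, normal) {
    var y = normalize(normal);
    var z = normalize(cross(edgeDir, y));
    var x = cross(y, z);          // already unit length
    return [x, y, z];             // rows of the frame matrix
}
```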


Matt




Date: Tue, 28 Mar 2017 10:04:47 +0100
From: 
Subject: Re: Random Thoughts about H.
To: "Official Softimage Users Mailing


They fixed it quicker than it took me to make a video showing how annoying 
it could be!

I have a question regarding attributes. In ICE I use 'pointreferenceframe' 
all the time for finding the orientation at a surface for making deformers.
In Houdini, there are not all these useful attributes that ICE has by 
default. I can make a thing that calculates a homemade orientation a-la 
PointReferenceFrame, and it works but it takes up a huge amount of nodes. 
Also, you can't just 'get' it. It needs loads of inputs on the tree that's 
finding it.
Firstly, I don't see that you can store a matrix per-point attribute. Is it 
possible?
How would an experienced Houdini person deal with getting/calculating 
Pointreferenceframe on a surface?




  1   2   3   4   5   6   7   8   9   10   >