Most of your information is incorrect.

First, I never said nor implied a "you're not smart enough to use software X" 
mentality.  I said most users do not take the time to learn the software so 
they can use it properly.  That has nothing to do with being smart or dumb.  
It's about investing the time to learn so you apply the principles correctly 
in context.  You can be the smartest person ever to walk the face of the 
earth, but if you don't take the time to learn how your tools work, you 
should expect many failures: 
http://www.dailyreckoning.com.au/images/dr20120709.jpg

The majority of artists I've dealt with in production have never once opened 
the manuals to learn how to use the renderer.  They rely purely on intuition, 
flipping switches that sound cool, then shove the scene off to be rendered. 
When it fails to render, they complain and blame the renderer.  Let me give 
an actual example:

Many years ago I joined a feature film when it was already well beyond 50% 
through production.  They had a lot of problems with rendering trees.  Scenes 
would get pushed off to the renderer, but any scene with more than 3 trees 
would crash, taking the computer down.  For several meetings there were 
grumblings about how mental ray should be ditched, so I was assigned the task 
of researching the issue.  Inside of Softimage the scenes looked fairly 
simple and were very responsive to the mouse and timeline.  A street with 
trees lining the sides of the road.  No more than 6 or 7 trees in view of the 
camera at any one time.  The tree trunks only had a resolution of roughly 
20K - 180K triangles depending on the tree, and each tree trunk had one or 
two bitmap textures at a resolution of 2048 x 2048 pixels in 16-bit color. 
The scene as a whole had 30 bitmap textures, if that.  I inspected the 
shaders and didn't see anything other than standard phong, lambert, and 
color correction nodes in the various rendertrees.  No exotic features 
activated either.  Hmm, I thought.

So I dumped the scene to .mi2 files and rendered using the mental ray 
standalone in heavy verbose mode to get more information about what mental 
ray was actually doing.  Before it crashed, the log indicated there were more 
than 20 billion triangles in the scene, exhausting available RAM.  That's 
right, 20 billion - and the scene was still in the process of loading. 
Naturally I was scratching my head as to the source.  So I reopened the 
scene, took another look, and that's when I discovered the leaves of the 
trees were created with particle instancing.  That is, the artist created a 
single clump of leaves in very high detail (~15,000 triangles for a handful 
of leaves), and instanced it onto every branch of every tree.  When mental 
ray loaded the scene, it had to dynamically allocate the memory to hold each 
instanced set of leaves.  Since each tree had hundreds/thousands of mounting 
points for the clump of instanced leaves, the instancing inflated the 
triangle count to 20 billion+, exhausting available memory and causing the 
renderer to crash.
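
To put rough numbers on it (these are illustrative figures - the only one 
taken from the scene is the triangle count of the leaf clump; the per-tree 
and per-scene counts below are assumptions, since I no longer have the exact 
numbers from that production), the multiplication looks something like this:

    # Back-of-envelope sketch of how instancing inflates a triangle count.
    # Only tris_per_clump comes from the scene; the other two are assumed.
    tris_per_clump  = 15000   # high-detail leaf clump
    clumps_per_tree = 2000    # "hundreds/thousands" of mounting points (assumed)
    trees_in_scene  = 700     # trees lining the whole street (assumed)

    total = tris_per_clump * clumps_per_tree * trees_in_scene
    print("{:,} triangles".format(total))   # 21,000,000,000 triangles

The exact figures don't matter; the point is that three modest-looking 
numbers multiply into something no amount of RAM at the time could hold.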

If the artist had used a different approach to setting up the scene, such as 
using a delay-load geometry shader or modeling the leaves directly onto the 
trees, mental ray would in turn have used a different strategy for allocating 
memory to render the scene and perhaps not crashed.  This is my point about 
people needing to take the time to learn the renderer and the rendering 
process, and to stop blaming the renderer.  99% of artists don't do that.  So 
when they complain about mental ray in these contexts, what they're really 
doing is shouting out their own lack of preparation.  Again, it has nothing 
to do with being smart or dumb, but it has everything to do with being 
prepared and taking responsibility for learning the tools of your craft.
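
If it helps to picture why the approach matters, here's a deliberately toy 
sketch of the two memory strategies.  This is not mental ray code or its API, 
just the concept of expanding every instance up front versus generating each 
tree's geometry on demand and releasing it once the renderer is done with it. 
The numbers are the same assumed ones as above:

    # Toy comparison of peak resident geometry under two loading strategies.
    TRIS_PER_TREE = 15000 * 2000   # assumed: tris per clump * clumps per tree

    def eager_peak(num_trees):
        # Expand everything up front: peak memory covers the entire scene.
        return num_trees * TRIS_PER_TREE

    def on_demand_peak(num_trees, resident_at_once=2):
        # Delay-load style: only a few trees' geometry resident at any moment.
        return min(num_trees, resident_at_once) * TRIS_PER_TREE

    print(eager_peak(700))      # 21,000,000,000 triangles resident at peak
    print(on_demand_peak(700))  #     60,000,000 triangles resident at peak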


I'm not sure of their present release schedule as I haven't kept up in the 
past couple of years, but when I was more involved, mental images was in the 
habit of releasing patches and updates quite regularly, often every few 
weeks.

What you have to understand is the business relationship in how you access 
mental ray rendering.  Mental Images licenses their technology to other 
businesses, like Autodesk.  Autodesk is effectively the customer and the one 
receiving the regular maintenance, patches, and support from mental images. 
Autodesk in turn integrates the rendering technology into their DCC products 
and extends their own technical support to their customers.  It is Autodesk 
who decides to only update their integration of mental ray once or twice per 
year, not mental images.  So direct your complaint to Autodesk on that front. 
This arrangement is also why you get integrated mental ray and not standalone 
mental ray.  That was likely a cost-driven decision, as standalone licenses 
would cost more (read that as: having standalone licenses in addition to 
integrated rendering would cost more).  Softimage|3D was the last application 
to offer both.  In this scenario, Autodesk is like your local reseller - 
they're intended to solve your problems, but actually just get in the way.

If you want better support per your complaints, you can purchase directly 
from mental images and get mental ray standalone and more frequent updates. 
It's the same rendering technology, but unhindered by the overhead of the 
DCC, which translates to faster performance and more stability - especially 
at load time, where integrations suffer the most.  Your DCC also only exposes 
options for the rendering features the DCC supports, but mental ray has many 
additional features beyond that - some of which would solve problems many 
complain about.  All those check boxes, sliders, and menus you access in your 
DCC correspond to command line flags you can set with the mental ray 
standalone, and the standalone has additional flags beyond them.  I have used 
both integrated and standalone rendering with mental ray quite a bit over the 
years, and have also written a considerable number of shaders.  Standalone is 
by far more stable, faster, easier to debug/troubleshoot, and scales very 
well.  No, it's not perfect, but for pure rendering it beats integrated 
rendering hands down.  A lot of that is due to the integration, not the 
renderer itself.
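
For what it's worth, the diagnostic run I described earlier with the exported 
.mi2 files boils down to invoking the standalone on the exported scene with a 
high verbosity level.  A rough sketch follows; the executable name and flag 
are from memory of older releases, so check the help output of your install, 
and the scene file name here is made up:

    import subprocess

    # Hypothetical invocation of the mental ray standalone on an exported
    # scene.  "ray" and "-verbose" are from memory of older releases; verify
    # them against your installation before relying on this.
    subprocess.run(["ray", "-verbose", "5", "street_trees.mi2"])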

So, about your whole 'user base is struggling' comment... that comes back to 
my earlier point about not taking the time to learn.  When I started using 
mental ray back in the 1990s, I was just an artist/animator.  I didn't know 
how to code.  I was working in games and had to make cinematic sequences for 
a game called "SnowCrash" that never made it to market.  I had to do a lot of 
futuristic stuff on a budget and thought mental ray would be a good medium 
for generating special effects with the OZ shaders and rendermap. 
Unfortunately, I didn't know how to use mental ray, and the Softimage|3D 
documentation was less than useful, as Softimage's concepts of materials, 
lighting, and other techniques were often backwards.  That's when I cracked 
open mental images' written documentation for mental ray and began reading. 
While some of the documentation was terse or organized in a way I didn't 
exactly consider user friendly, the programming documentation was very 
logical, straightforward, and to the point, which actually made it all make 
sense.  The coded examples were very well written for the purpose of being 
informative about how the renderer actually works.  That in turn gave me 
insight into how to use mental ray inside of Softimage|3D, and inspired me to 
learn to code so I could write shaders to enhance my artistic experiences. 
That effort to learn how the renderer worked paid dividends later in my 
career and prompted me to pursue a computer science degree.

RedShift is in a different league of renderer, being GPU based, and is 
justifiable in the context of this discussion as it offers a very significant 
benefit over mental ray.  My main argument is refuting the idea of using a 
competing renderer in the same class, at additional cost, when it has 
negligible advantages in the holistic context - such as Arnold or 3Delight. 
Sure, each renderer has its pros and cons, but to spend money on them and 
claim they're easier to learn when you haven't taken the time to learn the 
renderer you already have and paid for - that's not a solid argument.  I'm 
not saying there aren't situations where a move to another renderer is 
warranted.

Matt




Date: Sun, 29 May 2016 06:35:48 -0700
From: Derek Jenson <derekjen...@hotmail.com>
Subject: RE: Anybody still using mental ray?

I think the biggest problem with the stability of MR was with the concept of 
only releasing a single update once a year, which was tied to the 3D 
program. That was unrealistic idealism. There was also pressure to give 
customers the latest and least tested version of MR with each yearly DCC 
update. 3D is too bleeding edge for that release model to be stable. Being 
XSI's only renderer option for a long time, stability certainly became an 
issue.

If MR updates had been released with the frequency (and flexibility of 
rollbacks) of the 3rd party engines, everyone would have fonder memories 
of the software.

The developers of MR also worked in complete isolation with regard to 
communication with their customer base. The RS guys have bent over backward 
to educate and update their clients, and I really appreciate the support. 
IMO, you can only partially point the finger at users for not using a piece 
of software as intended. With information/training being so easily accessible 
now, the "you're not smart enough to use software X" mentality of the early 
years is void. If a whole user base is struggling with a technology... then 
something with that tech is flawed; not the other way around.

The flexibility of MR and 3delight is unmatched (in XSI), but the speed 
demands forced on this biz make Redshift indispensable for keeping pace.

