This article helped me eliminate all of my issues with normal mapping. It's a 
well-written, clear text, so go ahead and read it.

The main feature of a tangent space normal map is that it's independent of the 
world space transformation. If Ultimapper changes the normals, it must be wrong. 
However, as far as I know, the tangent space itself might change, because 
tangent space is calculated using the averaged normals within a UV shell, so if 
your normals change (e.g. when the object is rotated), the tangent space changes 
as well. I don't know exactly; I'll ask our CTO...
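
For reference, a rough sketch of how a per-triangle tangent is commonly derived 
from positions and UVs (the standard approach, not necessarily what Ultimapper 
or our tools do internally; assumes numpy arrays for the positions and UVs):

    # Sketch: per-triangle tangent from positions and UVs. Because it is built
    # from the (transformed) positions, rotating the object rotates the tangent
    # frame with it, so the map itself stays valid under rigid transforms.
    import numpy as np

    def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
        e1, e2 = p1 - p0, p2 - p0              # position edges
        du1, dv1 = uv1 - uv0                   # UV edges
        du2, dv2 = uv2 - uv0
        r = 1.0 / (du1 * dv2 - du2 * dv1)      # inverse UV-edge determinant
        tangent = (e1 * dv2 - e2 * dv1) * r    # direction of increasing U
        return tangent / np.linalg.norm(tangent)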

Cheers


Szabolcs

-----Original Message-----
From: softimage-boun...@listproc.autodesk.com 
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Tim Leydecker
Sent: Tuesday, January 07, 2014 6:13 AM
To: softimage@listproc.autodesk.com
Subject: Re: ultimapper issues - tangent space normal maps

The relation between the lowrez and highrez mesh is the responsibility of the 
modeler, imho.

Retopo has become a lot easier, but it shouldn't be mistaken for being automagic 
just because there are now several options in various programs to autogenerate 
some sort of cage around a mesh.

Of course, it's good to talk and find a general approach on how to handle 
bevels or rims, how to sculpt highrez stuff to make sure it transfers nicely, 
and maybe even to look into temporarily subdividing a UV'd lowrez mesh to 
improve bake results. Fat edges don't look nice in a highrez but may just stand 
out better from a distance later...

The final cleanup and adjustment (involving some trial and error) of the lowrez 
mesh should really be seen as the responsibility of the person who created 
the meshes and not passed on to the guy just painting the texture. Ideally, you 
provide options to iterate quickly and brute-force it if really necessary. 
Fiddling looks like struggling to the uneducated observer.

I've worked in two or three scenarios that were shotgun-style, task driven, and 
found that some types of personalities may be tempted to forward the problem 
for the sake of finishing their task.

This results in an unfair workload that may slip past the attention of the 
production team.

I've been on the receiving end a few times, including a feature film involving 
trees without treetops in an establishing total shot.

That leads to a kind of frustration that's avoidable by actively sharing and 
distributing responsibility.
But for that you need a sup and a production team actually willing to involve 
people in the decision-making process.

Which, in my experience, seems rare. Quite a few productions I worked on 
suffered more from people's egos than from tight deadlines, limited budgets or 
even artistic limitations. Maybe that's a German phenomenon.

Back to normals: the reason I brought up the edgespill and the 2D>3D 
interpretation of images is that it's likely you'll want to mix your baked 
highrez>lowrez normal map with high-frequency surface details derived from a 
2D process or even various sources.

In terms of creating a good bake in the first place, I try to model a clean, 
subdividable basemesh and bake from a high subdivision level to the lowrez of 
the same mesh, to avoid the scenario you face.

But that is overkill and limiting, both in terms of the modelling effort needed 
and the time it takes.

People sticking stuff together, be it dynameshes, voxels or cubes, will iterate 
more, producing more and maybe even better results.

After a lot of trial and error, a highrez can be a pile of goo as long as it 
looks awesome - which is what counts.

That said, the lowrez mesh should be clean and well made, no cheap shortcuts.

Still, I prefer a clean highrez mesh, but that's not always justifiable in a 
production environment.


Cheers,


tim







On 07.01.2014 05:35, Matt Lind wrote:
> The bigger problem that needs eyes on it is determining how the low res mesh 
> details correlate to the high res mesh details.  Maya uses a cage concept to 
> limit the search distance, but that doesn't address the issue of finding an 
> appropriate match for a specific detail common to the two meshes.  One 
> possible solution is to duplicate the low res mesh and ask the user to push 
> and pull points around, with the understanding that rays will be cast from the 
> low res mesh to the duplicate mesh along the line that matches the details.  
> If the high res mesh is encountered along that path, then the normal will be 
> transferred to the low res mesh.  That works, but is probably more labor 
> intensive to set up than any user would want to deal with.
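
A minimal sketch of that idea: cast a ray from each lowrez sample toward the 
matching point on the user-adjusted duplicate and take the highrez normal at 
the first hit. The raycast helper here is hypothetical, standing in for 
whatever intersection routine the tool actually provides:

    # Sketch: user-guided transfer via a pushed/pulled duplicate of the lowrez
    # mesh. raycast(origin, direction, mesh) is a hypothetical helper returning
    # (hit_position, hit_normal) or None.
    import numpy as np

    def transfer_normal(sample_pos, cage_pos, highrez_mesh, raycast, max_dist=None):
        direction = cage_pos - sample_pos          # user-defined search direction
        length = np.linalg.norm(direction)
        if length == 0.0:
            return None
        direction /= length
        hit = raycast(sample_pos, direction, highrez_mesh)
        if hit is None:
            return None                            # no highrez detail on that line
        hit_pos, hit_normal = hit
        if max_dist is not None and np.linalg.norm(hit_pos - sample_pos) > max_dist:
            return None                            # hit lies beyond the search range
        return hit_normal                          # normal to encode into the map
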
>
> I am only trying to solve a very specific problem of being able to transfer a 
> tangent space normal map from one object to another using our proprietary 
> tangent space algorithm.  I'm still at the prototyping stage and testing with 
> standard tangent space algorithms to validate my math before proceeding to 
> our proprietary algorithm which has a few added wrinkles.
>
>
> Edge spill in the context of an ultimapper-like transfer process is really 
> about oversampling.  As long as the entire texel is tested against a triangle 
> and not just the centroid of the texel, there shouldn't be any issues.  What 
> can be a problem is if a texel is used by multiple triangles on different 
> parts of the mesh (i.e. the UVs are not unique).  That's when you run into 
> garbage data contaminating your normal map.  If your UVs are unique and 
> there's at least one pixel of safe zone around each UV Island, and you adjust 
> oversampling to do some sort of stochastic sampling to ensure all parts of a 
> texel are considered, then you shouldn't have any problems with edge spill 
> looking like crap or allowing undesired values to bleed in.
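
Roughly what that stochastic texel sampling could look like, as a sketch; 
bake_sample(u, v) is a hypothetical per-UV lookup (triangle search plus 
raycast into the highrez mesh) returning a normal or None:

    # Sketch: jittered supersampling of one texel so the whole texel area is
    # considered, not just its centroid.
    import random
    import numpy as np

    def sample_texel(x, y, width, height, bake_sample, samples=16):
        accum, hits = np.zeros(3), 0
        for _ in range(samples):
            u = (x + random.random()) / width      # jittered point inside the texel
            v = (y + random.random()) / height
            n = bake_sample(u, v)
            if n is not None:
                accum += n
                hits += 1
        if hits == 0:
            return None                            # texel not covered by any UV island
        n = accum / hits
        return n / np.linalg.norm(n)               # average, renormalized
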
>
> As for the normal map from image or heightfield techniques - that's an 
> entirely different ballgame, as the tool is making assumptions about a 
> 2D space to fabricate a 3rd dimension.  While it will produce valid results, 
> they may not always be the desired results.  Higher resolution data will produce 
> better results, but it'll never be as accurate as having 3D data as a source.
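
For comparison, the usual trick those 2D tools build on is central differences 
on the heightfield, with a strength factor standing in for the fabricated 
third dimension. A sketch, assuming a greyscale numpy array:

    # Sketch: tangent-space normals from a heightfield by central differences.
    # The sign of the green channel (the Y flip mentioned elsewhere in the
    # thread) depends on the target engine's convention.
    import numpy as np

    def normals_from_height(height, strength=1.0):
        dx = (np.roll(height, -1, axis=1) - np.roll(height, 1, axis=1)) * 0.5
        dy = (np.roll(height, -1, axis=0) - np.roll(height, 1, axis=0)) * 0.5
        n = np.dstack((-dx * strength, -dy * strength, np.ones_like(height)))
        n /= np.linalg.norm(n, axis=2, keepdims=True)
        return n * 0.5 + 0.5                       # encode into the 0..1 RGB range
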
>
>
> Matt
>
>
>
> -----Original Message-----
> From: softimage-boun...@listproc.autodesk.com
> [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Tim
> Leydecker
> Sent: Monday, January 06, 2014 7:19 PM
> To: softimage@listproc.autodesk.com
> Subject: Re: ultimapper issues - tangent space normal maps
>
> It is great to have flexibility in the search methods.
>
> I'm familiarizing myself with xNormal at the moment and just went through the 
> normal map sampling options, e.g. 3x3, 5x5 etc. I can't say I'm sure I have a 
> favourite search method for a specific task, or know why, yet.
>
>   From my artistic standpoint, I have a good idea what I want a specific 
> normal map to look like; it's just a bumpmap with additional info about its 
> orientation to light sources. Easy enough to read in 2D and translate into a 
> guesstimate of what it's going to give me for details (in the specs) in an 
> otherwise boringly flat surface.
>
> I would likely favour a clean version over the one with artifacts from 
> scaling.
>
> This includes avoiding edgespill, harsh contrast and overly pushed intensity, 
> to start with.
>
> To a developer implementing a normal map feature it's probably blasphemy, 
> but if you look into ndo/ndo2 and the options it gives an artist to 
> influence/suggest surface detail, it's just cool.
>
> ndo/ndo2 or CrazyBump or xNormal start to hurt when you do "normals from 
> heightmap/photos" or from a painted diffuse map and look at what consequently 
> happens to the edges of your UV shells.
>
> It's difficult to judge how much clean edgespill is going to be needed; I try 
> 16x at 4K, but that already takes away a lot of map space just to make sure 
> downscaling to 1K may work.
>
> Why am I saying this?
>
> It would be nice if you make sure the edgespill around your UV shells is first 
> of all there, and ideally not maxed out into rainbow colors as in, let's say, 
> Mudbox. Otherwise, adding layers to such an area afterwards is really difficult 
> and may give you artifacts creeping in on your map area from the seams.
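
A sketch of the kind of edge padding meant here: push the border colors of 
each UV shell outward into empty texels instead of leaving background or 
maxed-out values there, so downscaling doesn't bleed garbage across the seams. 
coverage is a boolean mask of texels that lie inside a shell:

    # Sketch: simple edge-spill / padding pass. Covered texels keep their value;
    # each pass, empty texels copy the value of a covered neighbour.
    import numpy as np

    def pad_edges(image, coverage, passes=16):
        img, cov = image.copy(), coverage.copy()
        offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        for _ in range(passes):
            new_img, new_cov = img.copy(), cov.copy()
            for dy, dx in offsets:
                shifted_img = np.roll(img, (dy, dx), axis=(0, 1))
                shifted_cov = np.roll(cov, (dy, dx), axis=(0, 1))
                fill = shifted_cov & ~new_cov          # empty texel, covered neighbour
                new_img[fill] = shifted_img[fill]
                new_cov |= shifted_cov
            img, cov = new_img, new_cov
        return img
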
>
> Cheers,
>
> tim
>
>
>
>
> On 06.01.2014 21:06, Matt Lind wrote:
>> OK, so what I'm hearing is we both agree ultimapper is wrong.  That's what I 
>> needed to know.
>>
>> I'll file a bug on ultimapper and proceed under the assumption my code is 
>> correct.
>>
>> Thanks.
>>
>> As for looking up a normal on a high res mesh from a low res mesh, 
>> ultimapper is using raycast along the low res mesh's normal to find the 
>> appropriate location on the high res mesh.  If the ray shoots off into outer 
>> space without hitting anything, a 2nd ray is cast in the opposite direction. 
>>  If that ray hits nothing, the normal is recorded as (0.5, 0.5, 1) 
>> indicating the tangent normal map stores the geometry normal as is.
>>
>> If you do a closest location search as you suggest, the results are often 
>> quite different.  Using the example scene I provided in a previous message, 
>> the raycast method as described above results in a circle being drawn on 
>> each face of the cube.  If you do a closest location search, the entire cube 
>> will be filled with normals and that map will have heavy amounts of 
>> distortion.  In some cases that may be desirable or more appropriate than 
>> raycasting.  In either case, I don't think there's a blanket solution to 
>> that problem.  The search method has to be tailored to the specific case.
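
Pseudocode version of that raycast logic, as described above. The raycast 
helper is hypothetical again, and the hit normal is assumed to already be 
expressed in the lowrez sample's tangent frame:

    # Sketch of the described lookup: ray along the lowrez normal, then the
    # opposite direction, else fall back to (0.5, 0.5, 1), i.e. a flat tangent
    # normal that leaves the geometry normal as-is.
    import numpy as np

    def bake_texel(position, normal, highrez_mesh, raycast):
        hit = raycast(position, normal, highrez_mesh)
        if hit is None:
            hit = raycast(position, -normal, highrez_mesh)   # try the other way
        if hit is None:
            return np.array([0.5, 0.5, 1.0])                 # nothing hit: store flat
        _, tangent_space_normal = hit
        return tangent_space_normal * 0.5 + 0.5              # encode [-1,1] -> [0,1]
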
>>
>>
>> Matt
>>
>>
>>
>>
>> -----Original Message-----
>> From: softimage-boun...@listproc.autodesk.com
>> [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Tim
>> Leydecker
>> Sent: Monday, January 06, 2014 11:20 AM
>> To: softimage@listproc.autodesk.com
>> Subject: Re: ultimapper issues - tangent space normal maps
>>
>> What does xnormal do for two meshes with non-zero transforms?
>>
>> Out of a gut feeling, I would say that a tangent space normal map should be 
>> independent of an object's world space transformation, because if it were 
>> dependent on that world space position, it would degrade the tangent space 
>> map into an incorrectly created object space normal map.
>>
>> It doesn't make sense to take the world orientation of an object into account 
>> for a tangent space map. Here the mother of them all is the normal, and she is 
>> perpendicular to the face.
>>
>> Nobody else has binormals anyway, sort of.
>>
>> In terms of using empathy, I would guess that the code for Ultimapper was 
>> tested against two objects at the origin, and this resulted in the vertex 
>> positions being used as in (my pseudo-logic) worldspace = objectspace.
>>
>> I would opt to have the tangent space map created solely based on the distance 
>> between the two closest points (e.g. the closest distance between the highrez 
>> and the lowrez).
>>
>> This way, the map will work regardless of where it is or at what orientation 
>> to the origin it was created.
>>
>> tim
>>
>>
>>
>>
>> On 06.01.2014 19:34, Matt Lind wrote:
>>> It's a simple question of what is the expected result.
>>>
>>> Should the tangents and bitangents stay oriented relative to the mesh, or 
>>> should they stay put in world space and acknowledge the transformation of 
>>> the object?  My code is working under the assumption of the former, 
>>> ultimapper is giving me the latter.
>>>
>>> See example scene I provided in my previous message.
>>>
>>>
>>> Matt
>>>
>>>
>>>
>>> -----Original Message-----
>>> From: softimage-boun...@listproc.autodesk.com
>>> [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of
>>> Szabolcs Matefy
>>> Sent: Monday, January 06, 2014 12:22 AM
>>> To: softimage@listproc.autodesk.com
>>> Subject: RE: ultimapper issues - tangent space normal maps
>>>
>>> Have you tried other solutions? Try it with xNormal to check your results. 
>>> In my opinion Ultimapper is quite useless without a cage. Since we left 
>>> Ultimapper out of the formula, we have no issues at all.
>>>
>>> Back to your problem. As far as I know, there are three normal
>>> mapping types: world, object and tangent space normal maps. World
>>> space is best for static objects that have no transformation at
>>> all. Object space normal maps allow object transformation, while
>>> tangent space normal maps allow deformation as well. If a tangent
>>> space normal map changes when you transform the object, it might be a bug.
>>> I'm not into the math of tangent space normal mapping, but as I
>>> mentioned, without a cage Ultimapper is quite useless, so we dropped
>>> it. Consider moving to xNormal; it's quite a reliable tool.
>>>
>>> Cheers
>>>
>>> Szabolcs
>>> -----Original Message-----
>>> From: softimage-boun...@listproc.autodesk.com
>>> [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Matt
>>> Lind
>>> Sent: Saturday, January 04, 2014 2:13 AM
>>> To: softimage@listproc.autodesk.com
>>> Subject: RE: ultimapper issues - tangent space normal maps
>>>
>>> It's not a normalization issue as the normal vectors are normalized in 
>>> Euler space before being converted to RGB color space.  If it were a post 
>>> process problem, there would be differences in all cases.  So far I only 
>>> see the difference when one or both meshes are transformed indicating it's 
>>> a coordinate space computation issue.
>>>
>>> There is no issue with a cage either.  See my previous reply to this 
>>> thread with the example scene.  The cage is only relevant when there are many 
>>> layers of overlapping surfaces.  In my example it's a simple cube and 
>>> sphere, so no need for a cage.
>>>
>>>
>>>
>>> Matt
>>>
>>>
>>>
>>>
>>>
>>> -----Original Message-----
>>> From: softimage-boun...@listproc.autodesk.com
>>> [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Tim
>>> Leydecker
>>> Sent: Friday, January 03, 2014 3:11 AM
>>> To: softimage@listproc.autodesk.com
>>> Subject: Re: ultimapper issues - tangent space normal maps
>>>
>>> Hi Matt,
>>>
>>> A shift in the final intensity could come from a per channel normalisation.
>>>
>>> You'd get different results if you don't have such a normalisation/levels 
>>> operation as a postprocess before saving your calculations to file.
>>>
>>> But it should be easy enough to test if such a normalisation would give you 
>>> similar results to XSI. The dirtiest & cheapest way: in Photoshop, Auto 
>>> Levels.
>>>
>>> Since Szabolcs already pointed out that there is no cage option in 
>>> Ultimapper, e.g. no manual control of a min and max search distance for 
>>> the calculations, I'd guess the min and max are fixed, determined by the 
>>> maximum distance between the highrez and lowrez mesh, and the results are 
>>> "smoothed out" by remapping to 0-1 per channel for best use of the file's 
>>> available intensity steps.
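
The kind of per-channel remap meant here, as a sketch (equivalent in spirit to 
Photoshop's Auto Levels, stretching each channel to the full 0..1 range):

    # Sketch: per-channel 0..1 remap ("auto levels"), the kind of post-process
    # that would shift intensities compared to a map written out untouched.
    import numpy as np

    def auto_levels(image):
        out = image.astype(np.float64)
        for c in range(out.shape[2]):
            lo, hi = out[..., c].min(), out[..., c].max()
            if hi > lo:
                out[..., c] = (out[..., c] - lo) / (hi - lo)
        return out
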
>>>
>>> I could be completely wrong, though.
>>>
>>> In general, I will most likely use ZBrush and CrazyBump to create and 
>>> modify normals in a, let's say, artsy-partsy mashed potato kind of way that 
>>> gives me the look I want without knowing much more than green > light from 
>>> ground, red > light from right, to work in CryEngine/UDK/3ds Max.
>>>
>>> Cheers,
>>>
>>> tim
>>>
>>>
>>>
>>> On 03.01.2014 07:51, Szabolcs Matefy wrote:
>>>> Hey Matt,
>>>>
>>>> Your result might be different because of the tangent space
>>>> calculation. I suppose that the normal map calculation might be done in 
>>>> object space, and then Ultimapper converts it into tangent space. Ultimapper 
>>>> could be quite good, but it lacks a very important feature, the cage. So 
>>>> finally we dropped it in favor of xNormal.
>>>>
>>>> You might check a few things (I'm not a programmer, so I may be wrong).
>>>> Check the transforms. In my experience, transforms have an effect on how vertex 
>>>> normals are calculated. A certain distance from the origin might result in 
>>>> imprecision, and the farther the object is from the origin, the bigger this 
>>>> imprecision is.
>>>>
>>>> There are discrepancies, for sure, because these tools have
>>>> different approaches to deriving tangent space. For example, Softimage
>>>> uses the vertex color to store the tangents, and the binormal is
>>>> calculated from this. But if your smoothing on the geo and on the
>>>> tangent space property differs, you won't get any usable normal map. For 
>>>> example, the smoothing on tangents made Ultimapper quite useless for us, so 
>>>> I wrote an exporter for xNormal, and since then we have had no issues at all. 
>>>> As our technical chief explained, a normal map is correct only if the normal 
>>>> baking and the displaying engine use the same tangent calculation. He wrote a 
>>>> tangent space calculator for xNormal that uses the same algorithm CryEngine 
>>>> uses. So, if your game engine approaches tangent space differently than 
>>>> Softimage, you won't get good results.
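
The derivation referred to, roughly, as a sketch that decodes the tangent from 
a 0..1 vertex color and rebuilds the binormal; the handedness sign is exactly 
where engines tend to disagree:

    # Sketch: binormal reconstructed from the vertex normal and a tangent stored
    # as a 0..1 vertex color. The handedness convention differs between engines.
    import numpy as np

    def binormal(normal, tangent_color, handedness=1.0):
        tangent = np.asarray(tangent_color) * 2.0 - 1.0      # decode from color
        tangent -= normal * np.dot(normal, tangent)          # re-orthogonalize
        tangent /= np.linalg.norm(tangent)
        return np.cross(normal, tangent) * handedness
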
>>>>
>>>> I think the whole game pipeline should be redesigned in Softimage.
>>>>
>>>> *From:*softimage-boun...@listproc.autodesk.com
>>>> [mailto:softimage-boun...@listproc.autodesk.com] *On Behalf Of
>>>> *Matt Lind
>>>> *Sent:* Friday, January 03, 2014 5:17 AM
>>>> *To:* softimage@listproc.autodesk.com
>>>> *Subject:* ultimapper issues - tangent space normal maps
>>>>
>>>> I am writing a modified ultimapper to convert tangent space normal
>>>> maps from one mesh to another.  The tool is needed because our
>>>> tangent space normal maps are not encoded in the standard way and 
>>>> softimage's tools cannot be modified to support our proprietary tangent 
>>>> space.  For prototyping I'm using the softimage tangent space and tangents 
>>>> property to do the transfer so I can check my math against ultimapper.  
>>>> Once I get a 1:1 match, I'll modify the logistics to support our 
>>>> proprietary stuff.
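
In spirit, the transfer boils down to something like this sketch: decode each 
map value with the source mesh's tangent frame into a common space, then 
re-encode it with the target mesh's frame. The frames here are plain 3x3 TBN 
matrices, which may well differ from the proprietary encoding being discussed:

    # Sketch: re-encode a tangent-space normal from a source TBN basis to a
    # target TBN basis. Each basis is a 3x3 matrix with rows (T, B, N),
    # assumed orthonormal.
    import numpy as np

    def reencode(rgb, source_tbn, target_tbn):
        n_src = np.asarray(rgb) * 2.0 - 1.0        # decode color -> vector
        n_common = source_tbn.T @ n_src            # source tangent space -> object space
        n_tgt = target_tbn @ n_common              # object space -> target tangent space
        n_tgt /= np.linalg.norm(n_tgt)
        return n_tgt * 0.5 + 0.5                   # encode back to 0..1 color
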
>>>>
>>>> So far when the hi and low res meshes are untransformed I get a 1:1
>>>> match with ultimapper, but when I transform one or both meshes a
>>>> wide discrepancy appears between my result and the softimage
>>>> ultimapper result.  The softimage result tends to be significantly 
>>>> brighter on the red and green channels, mostly on the green.  In some 
>>>> cases, the colors are not even close to the same.  The odd part is when I 
>>>> trace through the process step by step to debug, my numbers look correct 
>>>> both visually and mathematically.  I'm in a weird situation in that I do 
>>>> not know whose result is more correct, mine or Softimage's.
>>>>
>>>> Some of our artists have mentioned there have been some
>>>> discrepancies compared to other commercial normal mapping tools (beyond 
>>>> flipping the Y axis).  Has anybody had issues getting correct results from 
>>>> ultimapper when transferring tangent space normal maps between meshes?
>>>>
>>>> Matt
>>>>
>>>
>>>
>>>
>>>
>>
>>
>
>
