----- Original Message -----
> >
> > On the contrary, I think that putting norm and scaled/int in the
> > same sack is comparing apples and oranges...  Normalization, like
> > fixed-point integers, affects the interpretation of the 32-bit
> > integer in memory, namely the scale factor it should be
> > multiplied by.  Whereas the only difference between _SSCALED and
> > _SINT is the final data type (int vs float) -- the value is
> > exactly the same (modulo rounding errors).
> >
> > The pure vs non-pure integer distinction is really a "policy" flag,
> > which means: please do not implicitly convert this to a float at any
> > point in the vertex pipeline. And I believe that policy flags should
> > be kept outside enum pipe_format.
> 
> While I'm tending to agree with you, Jose, the other thing that we
> haven't discussed yet is that we need to add new pack/unpack
> interfaces to u_format for all these types anyway, to get an
> integer-clean path (no float conversions in the pipeline), which
> increases the size of the u_format_table either way. With separate
> types we could most likely just overload the float pack/unpack ones.
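
To make that distinction concrete, here is how the same 32-bit channel
value would be decoded for each class of format (a minimal sketch; the
helper names are hypothetical):

   #include <stdint.h>

   /* R32_UNORM: a scale factor of 1/(2^32 - 1) is part of the
    * interpretation of the bits. */
   static float
   decode_r32_unorm(uint32_t i)
   {
      return (float)(i / (double)UINT32_MAX);
   }

   /* R32_SSCALED: same value as _SINT, merely converted to float
    * (and possibly rounded for |i| > 1 << 24). */
   static float
   decode_r32_sscaled(int32_t i)
   {
      return (float)i;
   }

   /* R32_SINT: the value itself, untouched. */
   static int32_t
   decode_r32_sint(int32_t i)
   {
      return i;
   }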

I'm not sure where u_format's code is used in hardware pipe drivers nowadays, 
but for software rendering in general, and the state tracker's software 
fallbacks for blits in particular, we could never just overload the float 
pack/unpack entry-points, as they would corrupt 32-bit integers outside the 
+/- (1 << 24) range.  I think we'd need pack/unpack functions that 
accept/return colors as 32-bit integers instead of floats, or another type 
big enough.
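
To illustrate the corruption (a straight int -> float -> int round-trip):

   #include <assert.h>
   #include <stdint.h>

   int main(void)
   {
      int32_t i = (1 << 24) + 1; /* 16777217 has no exact float form */
      float f = (float)i;        /* rounds to 16777216.0f */
      assert((int32_t)f != i);   /* the round-trip corrupts the value */
      return 0;
   }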

So we either add (un)pack_rgba_uint32 and (un)pack_rgba_sint32 entry-points 
for those integer formats (they only need to be defined for the int formats), 
or simply add a new xxx_double entry-point, if we think it will never be used 
in a critical path.
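
Sketching what those entry-points could look like (signatures modeled on
the existing unpack_rgba_float/pack_rgba_float ones; the exact names are
just a proposal):

   /* Possible additions to struct util_format_description, defined
    * only for the pure integer formats. */
   void (*unpack_rgba_uint32)(uint32_t *dst, unsigned dst_stride,
                              const uint8_t *src, unsigned src_stride,
                              unsigned width, unsigned height);

   void (*pack_rgba_uint32)(uint8_t *dst, unsigned dst_stride,
                            const uint32_t *src, unsigned src_stride,
                            unsigned width, unsigned height);

   /* ... plus the (un)pack_rgba_sint32 equivalents. */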

It's similar to what's currently done for (un)pack_z_32unorm and 
(un)pack_s_8uscaled -- 32unorm can lose precision when converted to float (no 
biggie for colors but inadmissible for depth values), and 8uscaled is much 
smaller/faster to use than float, so u_format has these special entry-points, 
but they are defined only for depth-stencil formats.
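
The depth precision loss is easy to demonstrate: adjacent 32-bit depth
values collapse to the same float.

   #include <assert.h>
   #include <stdint.h>

   int main(void)
   {
      uint32_t z0 = 0xFFFFFFFEu, z1 = 0xFFFFFFFFu;
      float f0 = z0 / (float)0xFFFFFFFFu; /* rounds to 1.0f */
      float f1 = z1 / (float)0xFFFFFFFFu; /* rounds to 1.0f as well */
      assert(f0 == f1); /* distinct depth values become indistinguishable */
      return 0;
   }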


I think that the software renderers' texture sampling code will also need to 
be rewritten substantially, to allow sampling of integer formats using 
integer intermediates.  The draw module's interpolation code will likewise 
need to use double intermediates to interpolate integer attributes when 
clipping, etc.  All this needs to happen _regardless_ of whether you choose 
to pass the "pure integer" policy in the pipe_format enum or elsewhere.  The 
fact is that integer values must not be converted to float intermediates in 
such cases -- while our current code often assumes that float is general 
enough for everything.
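
For the draw module, I mean something along these lines (a hypothetical
helper, not actual code):

   #include <stdint.h>

   /* Interpolate a pure-integer attribute at a clip point: widen to
    * double so 32-bit values survive the lerp intact. */
   static int32_t
   interp_sint_attrib(int32_t a, int32_t b, double t)
   {
      return (int32_t)((double)a + ((double)b - (double)a) * t);
   }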


Honestly, with LLVM code generation things are substantially easier, because 
once basic code generation is in place we can generate integer expressions 
just as easily as float ones.  So I think that for each module that needs 
updating we should weigh our options: is this code performance sensitive, 
and if so, would it be better served by LLVM code generation?  If it's not 
performance sensitive, we should simply bump the C code from floats to 
doubles.  Ditto if LLVM is a better fit, so that the C version can still 
serve as a reference / debugging-friendlier alternative to the LLVM JIT 
code.  Otherwise, we'll have to resort to CPP templates to generate the 
float/uint/sint C versions.
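
By CPP templates I mean the usual macro trick, e.g.:

   #include <stdint.h>

   /* Instantiate the same body once per intermediate type, instead of
    * hand-writing three copies. */
   #define DEFINE_LERP(NAME, TYPE, WIDE)                        \
      static TYPE                                               \
      NAME(TYPE a, TYPE b, double t)                            \
      {                                                         \
         return (TYPE)((WIDE)a + ((WIDE)b - (WIDE)a) * t);      \
      }

   DEFINE_LERP(lerp_float, float,    double)
   DEFINE_LERP(lerp_uint,  uint32_t, double)
   DEFINE_LERP(lerp_sint,  int32_t,  double)
   #undef DEFINE_LERP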


At any rate, I believe that a lot of this does not usually apply to hardware 
pipe drivers. I think it might be wiser to get the hardware paths done first, 
given that the hardware should do the right thing in most if not all cases, 
and then start getting the software paths in shape.


Jose