Re: [Mesa-dev] Low interpolation precision for 8 bit textures using llvmpipe

2019-04-15 Thread Dominik Drees
On 4/12/19 5:32 PM, Roland Scheidegger wrote:
> On 12.04.19 at 14:34, Dominik Drees wrote:
>> Hi Roland!
>>
>> On 4/11/19 8:18 PM, Roland Scheidegger wrote:
>>> What version of mesa are you using?
>> The original results were generated using version 19.0.2 (from the arch
>> linux repositories), but I got the same results using the current git
>> version (98934e6aa19795072a353dae6020dafadc76a1e3).
> Alright, both of these would use the GALLIVM_PERF var.
> 
>>> The debug flags were changed a while ago (so that those perf tweaks can
>>> be disabled on release builds too), it needs to be either:
>>> GALLIVM_PERF=no_rho_approx,no_brilinear,no_quad_lod
>>> or easier
>>> GALLIVM_PERF=no_filter_hacks (which disables these 3 things above
>>> together)
>>>
>>> Although all of that only really affects filtering with mipmaps (not
>>> sure if you do?).
>> Using GALLIVM_PERF does not make a difference, either, but that should
>> be expected because I'm not using mipmaps, just "regular" linear
>> filtering (GL_LINEAR).
>>>
>>>
>>> (more below)
>> See my responses below as well.
>>>
>>>
>>> On 11.04.19 at 18:00, Dominik Drees wrote:
>>>> Running with the suggested flags in the environment does not change the
>>>> result for the test case I described below. The results with and without
>>>> the environment variables set are pixel-wise equal.
>>>>
>>>> By the way, and if this is of interest: For GL_NEAREST sampling the results
>>>> from hardware and llvmpipe are equal as well.
>>>>
>>>> Best,
>>>> Dominik
>>>>
>>>> On 4/11/19 4:36 PM, Ilia Mirkin wrote:
>>>>> llvmpipe takes a number of shortcuts in the interest of speed which
>>>>> cause inaccurate texturing. Try running with
>>>>>
>>>>> GALLIVM_DEBUG=no_rho_approx,no_brilinear,no_quad_lod
>>>>>
>>>>> and see if the issue still occurs.
>>>>>
>>>>> Cheers,
>>>>>
>>>>>      -ilia
>>>>>
>>>>>
>>>>>
>>>>> On Thu, Apr 11, 2019 at 8:30 AM Dominik Drees 
>>>>> wrote:
>>>>>>
>>>>>> Hello, everyone!
>>>>>>
>>>>>> I have a question regarding the interpolation precision of llvmpipe.
>>>>>> Feel free to redirect me to somewhere else if this is not the right
>>>>>> place to ask. Consider the following scenario: In a fragment shader we
>>>>>> are sampling from a 16x16, 8 bit texture with values between 0 and 3
>>>>>> using linear interpolation. Then we write white to the screen if the
>>>>>> sampled value is > 1/255 and black otherwise. The output looks very
>>>>>> different when rendered with llvmpipe compared to the result
>>>>>> produced by
>>>>>> rendering hardware (for both intel (mesa i965) and nvidia (proprietary
>>>>>> driver)).
>>>>>>
>>>>>> I've uploaded exemplary output images here
>>>>>> (https://imgur.com/a/D1udpez)
>>>>>>
>>>>>> and the corresponding fragment shader here
>>>>>> (https://pastebin.com/pa808Req).
>>>>>>
>>> The shader looks iffy to me; how do you use that vec4 in the if clause?
>>>
>>>
>>>>>>
>>>>>> My hypothesis is that llvmpipe (in contrast to hardware) only uses
>>>>>> 8 bit
>>>>>> for the interpolation computation when reading from 8 bit textures and
>>>>>> thus loses precision in the lower bits. Is that correct? If so, does
>>>>>> anyone know of a workaround?
>>>
>>> So, in theory it is indeed possible the results are less accurate with
>>> llvmpipe (I believe all recent hw does rgba8 filtering with more than 8
>>> bit precision).

Re: [Mesa-dev] Low interpolation precision for 8 bit textures using llvmpipe

2019-04-12 Thread Dominik Drees

Hi Roland!

On 4/11/19 8:18 PM, Roland Scheidegger wrote:

> What version of mesa are you using?
The original results were generated using version 19.0.2 (from the arch 
linux repositories), but I got the same results using the current git 
version (98934e6aa19795072a353dae6020dafadc76a1e3).

> The debug flags were changed a while ago (so that those perf tweaks can
> be disabled on release builds too), it needs to be either:
> GALLIVM_PERF=no_rho_approx,no_brilinear,no_quad_lod
> or easier
> GALLIVM_PERF=no_filter_hacks (which disables these 3 things above together)
>
> Although all of that only really affects filtering with mipmaps (not
> sure if you do?).
Using GALLIVM_PERF does not make a difference, either, but that should 
be expected because I'm not using mipmaps, just "regular" linear 
filtering (GL_LINEAR).



> (more below)

See my responses below as well.



> On 11.04.19 at 18:00, Dominik Drees wrote:

>> Running with the suggested flags in the environment does not change the
>> result for the test case I described below. The results with and without
>> the environment variables set are pixel-wise equal.
>>
>> By the way, and if this is of interest: For GL_NEAREST sampling the results
>> from hardware and llvmpipe are equal as well.
>>
>> Best,
>> Dominik
>>
>> On 4/11/19 4:36 PM, Ilia Mirkin wrote:
>>
>>> llvmpipe takes a number of shortcuts in the interest of speed which
>>> cause inaccurate texturing. Try running with
>>>
>>> GALLIVM_DEBUG=no_rho_approx,no_brilinear,no_quad_lod
>>>
>>> and see if the issue still occurs.
>>>
>>> Cheers,
>>>
>>>     -ilia
>>>
>>> On Thu, Apr 11, 2019 at 8:30 AM Dominik Drees 
>>> wrote:


>>>> Hello, everyone!
>>>>
>>>> I have a question regarding the interpolation precision of llvmpipe.
>>>> Feel free to redirect me to somewhere else if this is not the right
>>>> place to ask. Consider the following scenario: In a fragment shader we
>>>> are sampling from a 16x16, 8 bit texture with values between 0 and 3
>>>> using linear interpolation. Then we write white to the screen if the
>>>> sampled value is > 1/255 and black otherwise. The output looks very
>>>> different when rendered with llvmpipe compared to the result produced by
>>>> rendering hardware (for both intel (mesa i965) and nvidia (proprietary
>>>> driver)).
>>>>
>>>> I've uploaded exemplary output images here
>>>> (https://imgur.com/a/D1udpez)
>>>> and the corresponding fragment shader here
>>>> (https://pastebin.com/pa808Req).

> The shader looks iffy to me; how do you use that vec4 in the if clause?




>>>> My hypothesis is that llvmpipe (in contrast to hardware) only uses 8 bit
>>>> for the interpolation computation when reading from 8 bit textures and
>>>> thus loses precision in the lower bits. Is that correct? If so, does
>>>> anyone know of a workaround?


> So, in theory it is indeed possible the results are less accurate with
> llvmpipe (I believe all recent hw does rgba8 filtering with more than 8
> bit precision).
> For formats fitting into rgba8, we have a fast path in llvmpipe
> (gallivm) for the lerp, which unpacks the 8bit values into 16bit values,
> does the lerp with that and packs back to 8 bit. The result is
> accurately rounded there (to 8 bit) but only for 1 lerp step - for a 2d
> texture there are 3 of those (one per direction, and a final one
> combining the result). And yes this means the filtered result only has 8
> bits.
Do I understand you correctly: for the 2D case, the results of the first 
two lerps (done in 16 bit) are converted to 8 bit, then converted back 
to 16 bit for the final (second-stage) lerp?


If so and if I'm understanding this correctly, for 2D (i.e., a 2-stage 
linear interpolation) we potentially have an error on the order of one 
bit in the final 8 bit value due to the intermediate 16->8->16 
conversion. For sampling from a 3D texture (i.e., a 3-stage linear 
interpolation) the effect would be amplified: the extra stage could 
cause an error with a magnitude of two bits in the final 8 bit result 
(if I'm doing the math in my head correctly).
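
To make that concrete, here is a small standalone C sketch. It is not 
the gallivm code; the exact rounding rule and the sample position are 
assumptions picked for illustration. It shows a case where rounding the 
two first-stage lerps to 8 bits shifts the final value by one 8-bit 
step, which is exactly the kind of difference that flips the shader's 
"> 1/255" test:

/* Standalone sketch, not the actual gallivm code: the rounding rule and
 * the sample position below are assumptions chosen for illustration. */
#include <stdint.h>
#include <stdio.h>

/* One lerp in 16-bit intermediates, result rounded back to 8 bits,
 * in the spirit of the rgba8 fast path described above. */
static uint8_t lerp_u8(uint8_t a, uint8_t b, unsigned frac) /* frac: 0..255 */
{
    unsigned v = a * (255u - frac) + b * frac; /* fits in 16 bits */
    return (uint8_t)((v + 127u) / 255u);       /* round to nearest 8-bit value */
}

/* Reference lerp in double precision, quantized nowhere. */
static double lerp_f(double a, double b, double frac)
{
    return a + (b - a) * frac;
}

int main(void)
{
    /* 2x2 texel footprint with values 0..3 (as in the 16x16 test texture),
     * sampled at fractional offsets fx = 125/255, fy = 153/255. */
    const uint8_t t00 = 0, t10 = 1, t01 = 2, t11 = 3;
    const unsigned fx = 125, fy = 153;

    /* 8-bit path: both horizontal lerps are rounded to 8 bits before the
     * vertical lerp. */
    uint8_t bot = lerp_u8(t00, t10, fx); /* exact 0.490 -> rounds to 0 */
    uint8_t top = lerp_u8(t01, t11, fx); /* exact 2.490 -> rounds to 2 */
    uint8_t q   = lerp_u8(bot, top, fy); /* 1.200       -> rounds to 1 */

    /* Full-precision reference, quantized nowhere: 1.690 */
    double r = lerp_f(lerp_f(t00, t10, fx / 255.0),
                      lerp_f(t01, t11, fx / 255.0), fy / 255.0);

    printf("8-bit path: %u/255, reference: %.3f/255\n", q, r);
    printf("shader test (> 1/255): %s vs. %s\n",
           q > 1 ? "white" : "black", r > 1.0 ? "white" : "black");
    return 0;
}

Run as written, the 8-bit path yields 1/255 (black) while the 
full-precision reference is about 1.69/255 (white), i.e. a one-step 
difference for the 2D case. A third lerp stage (3D textures) would allow 
roughly twice that worst-case error, matching the estimate above.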


Is there any (conceptual) reason why the result of a one dimensional 
interpolation step is reduced back to 8 bits before the second stage 
interpolation? Would avoiding these conversions not actually be faster 
(in addition to the improved accuracy)?


> I do believe you should not rely on implementations having more accuracy
> - as far as I know the filtering we do is conformant there (it is tricky
> to do better using the fast path).
In principle you are correct. In our regression tests we actually have 
(per test) configurable thresholds for maximum pixel distance/maximum 
number 

Re: [Mesa-dev] Low interpolation precision for 8 bit textures using llvmpipe

2019-04-11 Thread Dominik Drees
Running with the suggested flags in the environment does not change the 
result for the test case I described below. The results with and without 
the environment variables set are pixel-wise equal.


By the way, and if this is of interest: For GL_NEAREST sampling the results 
from hardware and llvmpipe are equal as well.


Best,
Dominik

On 4/11/19 4:36 PM, Ilia Mirkin wrote:

> llvmpipe takes a number of shortcuts in the interest of speed which
> cause inaccurate texturing. Try running with
>
> GALLIVM_DEBUG=no_rho_approx,no_brilinear,no_quad_lod
>
> and see if the issue still occurs.
>
> Cheers,
>
>    -ilia
>
> On Thu, Apr 11, 2019 at 8:30 AM Dominik Drees  wrote:


>> Hello, everyone!
>>
>> I have a question regarding the interpolation precision of llvmpipe.
>> Feel free to redirect me to somewhere else if this is not the right
>> place to ask. Consider the following scenario: In a fragment shader we
>> are sampling from a 16x16, 8 bit texture with values between 0 and 3
>> using linear interpolation. Then we write white to the screen if the
>> sampled value is > 1/255 and black otherwise. The output looks very
>> different when rendered with llvmpipe compared to the result produced by
>> rendering hardware (for both intel (mesa i965) and nvidia (proprietary
>> driver)).
>>
>> I've uploaded exemplary output images here (https://imgur.com/a/D1udpez)
>> and the corresponding fragment shader here (https://pastebin.com/pa808Req).
>>
>> My hypothesis is that llvmpipe (in contrast to hardware) only uses 8 bit
>> for the interpolation computation when reading from 8 bit textures and
>> thus loses precision in the lower bits. Is that correct? If so, does
>> anyone know of a workaround?
>>
>> A little bit of background about the use case: We are trying to move the
>> CI of Voreen (https://www.uni-muenster.de/Voreen/) to the Gitlab-CI
>> running in docker without any hardware dependencies. Using llvmpipe for
>> our regression tests works in principle, but shows significant
>> differences in the raycasting rendering of an 8-bit-per-voxel dataset.
>> (The effect is of course less visible than the constructed example case
>> linked above, but still quite noticeable for a human.)
>>
>> Any help or pointers would be appreciated!
>>
>> Best,
>> Dominik
>>
>> --
>> Dominik Drees
>>
>> Department of Computer Science
>> Westfaelische Wilhelms-Universitaet Muenster
>>
>> email: dominik.dr...@wwu.de
>> web: https://www.wwu.de/PRIA/personen/drees.shtml
>> phone: +49 251 83 - 38448


--
Dominik Drees

Department of Computer Science
Westfaelische Wilhelms-Universitaet Muenster

email: dominik.dr...@wwu.de
web: https://www.wwu.de/PRIA/personen/drees.shtml
phone: +49 251 83 - 38448




[Mesa-dev] Low interpolation precision for 8 bit textures using llvmpipe

2019-04-11 Thread Dominik Drees

Hello, everyone!

I have a question regarding the interpolation precision of llvmpipe. 
Feel free to redirect me to somewhere else if this is not the right 
place to ask. Consider the following scenario: In a fragment shader we 
are sampling from a 16x16, 8 bit texture with values between 0 and 3 
using linear interpolation. Then we write white to the screen if the 
sampled value is > 1/255 and black otherwise. The output looks very 
different when rendered with llvmpipe compared to the result produced by 
rendering hardware (for both intel (mesa i965) and nvidia (proprietary 
driver)).


I've uploaded exemplary output images here (https://imgur.com/a/D1udpez) 
and the corresponding fragment shader here (https://pastebin.com/pa808Req).


My hypothesis is that llvmpipe (in contrast to hardware) only uses 8 bit 
for the interpolation computation when reading from 8 bit textures and 
thus loses precision in the lower bits. Is that correct? If so, does 
anyone know of a workaround?


A little bit of background about the use case: We are trying to move the 
CI of Voreen (https://www.uni-muenster.de/Voreen/) to the Gitlab-CI 
running in docker without any hardware dependencies. Using llvmpipe for 
our regression tests works in principle, but shows significant 
differences in the raycasting rendering of an 8-bit-per-voxel dataset. 
(The effect is of course less visible than the constructed example case 
linked above, but still quite noticeable for a human.)
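
One mitigation, mentioned again in the 2019-04-12 reply above, is to 
compare rendered images against references with small per-test 
tolerances rather than requiring exact equality. A minimal C sketch of 
such a comparison (names are made up; this is not Voreen code):

/* Minimal sketch with made-up names (not Voreen code): a tolerance-based
 * image comparison that can absorb small filtering differences between
 * llvmpipe and hardware renderers in regression tests. */
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Compare two 8-bit RGBA images of the same size. The test passes if no
 * channel differs by more than max_channel_diff and at most max_bad_pixels
 * pixels differ at all; both limits would be configured per test. */
bool images_match(const unsigned char *a, const unsigned char *b,
                  size_t width, size_t height,
                  int max_channel_diff, size_t max_bad_pixels)
{
    size_t bad_pixels = 0;

    for (size_t i = 0; i < width * height; i++) {
        bool differs = false;

        for (int c = 0; c < 4; c++) {
            int d = abs((int)a[i * 4 + c] - (int)b[i * 4 + c]);
            if (d > max_channel_diff)
                return false; /* single-channel difference too large */
            if (d != 0)
                differs = true;
        }
        if (differs && ++bad_pixels > max_bad_pixels)
            return false;     /* too many differing pixels overall */
    }
    return true;
}

The two limits roughly correspond to the per-test "maximum pixel 
distance" and "maximum number" thresholds mentioned in the 2019-04-12 
message above.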


Any help or pointers would be appreciated!

Best,
Dominik

--
Dominik Drees

Department of Computer Science
Westfaelische Wilhelms-Universitaet Muenster

email: dominik.dr...@wwu.de
web: https://www.wwu.de/PRIA/personen/drees.shtml
phone: +49 251 83 - 38448



___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev