On Wed, Sep 12, 2018 at 7:29 PM, Ian Romanick <i...@freedesktop.org> wrote:
> From: Ian Romanick <ian.d.roman...@intel.com>
>
> The optimizer recently added the ability to replace a compare with a
> subtraction under certain circumstances.  This can fail for integers.
> For inputs a = 0x80000000, b = 4, int(0x80000000) < 4, but
> int(0x80000000) - 4 overflows and results in 0x7ffffffc.  That's not
> less than zero, so the flags get set differently than for (a < b).
>
> This really only affected the signed comparisons because the subtract
> would always have signed source types, so it wouldn't be seen as a
> match for the compare with unsigned source types.
>
> Signed-off-by: Ian Romanick <ian.d.roman...@intel.com>
> Cc: Matt Turner <matts...@gmail.com>
> ---
>  tests/spec/CMakeLists.txt                       |  1 +
>  .../fs-absoluteDifference-int.shader_test       | 85 +++++++++++++++++++
>  .../fs-absoluteDifference-uint.shader_test      | 85 +++++++++++++++++++
>  .../vs-absoluteDifference-int.shader_test       | 96 ++++++++++++++++++++++
>  .../vs-absoluteDifference-uint.shader_test      | 96 ++++++++++++++++++++++
>  5 files changed, 363 insertions(+)
>  create mode 100644 tests/spec/glsl-1.30/execution/fs-absoluteDifference-int.shader_test
>  create mode 100644 tests/spec/glsl-1.30/execution/fs-absoluteDifference-uint.shader_test
>  create mode 100644 tests/spec/glsl-1.30/execution/vs-absoluteDifference-int.shader_test
>  create mode 100644 tests/spec/glsl-1.30/execution/vs-absoluteDifference-uint.shader_test
>
> diff --git a/tests/spec/CMakeLists.txt b/tests/spec/CMakeLists.txt
> index 4df9d331d..28abf3634 100644
> --- a/tests/spec/CMakeLists.txt
> +++ b/tests/spec/CMakeLists.txt
> @@ -2,6 +2,7 @@ add_subdirectory (amd_framebuffer_multisample_advanced)
>  add_subdirectory (amd_depth_clamp_separate)
>  add_subdirectory (amd_performance_monitor)
>  add_subdirectory (amd_pinned_memory)
> +add_subdirectory (amd_transform_feedback3_lines_triangles)
>  add_subdirectory (arb_arrays_of_arrays)
>  add_subdirectory (arb_base_instance)
>  add_subdirectory (arb_bindless_texture)
> diff --git a/tests/spec/glsl-1.30/execution/fs-absoluteDifference-int.shader_test b/tests/spec/glsl-1.30/execution/fs-absoluteDifference-int.shader_test
> new file mode 100644
> index 000000000..cdac53cdf
> --- /dev/null
> +++ b/tests/spec/glsl-1.30/execution/fs-absoluteDifference-int.shader_test
> @@ -0,0 +1,85 @@
> +[require]
> +GL >= 3.0
> +GLSL >= 1.30
> +
> +[vertex shader passthrough]
> +
> +[fragment shader]
> +#extension GL_EXT_shader_integer_mix: enable
> +
> +// { A, B, absoluteDifference(A, B) }
> +uniform ivec3 data[40];
> +
> +out vec4 color;
> +
> +uint abs_diff(int a, int b)
> +{
> +    /* This can fail if the compiler replaces the (a < b) with the result of
> +     * one of the subtractions.  For inputs a = 0x80000000, b = 4,
> +     * int(0x80000000) < 4, but int(0x80000000)-4 overflows and results in
> +     * 0x7ffffffc.  That's not less than zero, so the flags get set
> +     * differently than for (a < b).
> +     */
> +#ifndef GL_EXT_shader_integer_mix
> +    return (a < b) ? uint(b - a) : uint(a - b);
> +#else
> +    return mix(uint(a - b), uint(b - a), a < b);
> +#endif
Why bother with this?  It'll end up testing different code depending on
the extensions that the driver supports, which seems like a really bad
idea.

  -ilia
_______________________________________________
Piglit mailing list
Piglit@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/piglit