[Piglit] [PATCH 3/3] egl: eglQueryDeviceStringEXT() takes an EGLDevice

2016-09-07 Thread James Jones
The first parameter of eglQueryDeviceStringEXT()
is an EGLDevice, not an EGLDisplay.

Signed-off-by: James Jones 
---
 tests/egl/spec/egl_ext_device_query/egl_ext_device_query.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/egl/spec/egl_ext_device_query/egl_ext_device_query.c b/tests/egl/spec/egl_ext_device_query/egl_ext_device_query.c
index 950abbb..d6f171d 100644
--- a/tests/egl/spec/egl_ext_device_query/egl_ext_device_query.c
+++ b/tests/egl/spec/egl_ext_device_query/egl_ext_device_query.c
@@ -95,11 +95,11 @@ main(void)
if (!piglit_check_egl_error(EGL_BAD_ATTRIBUTE))
piglit_report_result(PIGLIT_FAIL);
 
-   devstring = queryDeviceString(dpy, 0xbad1dea);
+   devstring = queryDeviceString(device, 0xbad1dea);
if (!piglit_check_egl_error(EGL_BAD_PARAMETER))
piglit_report_result(PIGLIT_FAIL);
 
-   devstring = queryDeviceString(dpy, EGL_EXTENSIONS);
+   devstring = queryDeviceString(device, EGL_EXTENSIONS);
if (devstring == NULL) {
printf("Empty device extension string\n");
piglit_report_result(PIGLIT_WARN);
-- 
1.9.1
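
For reference, here is a minimal standalone sketch of the call flow the test
exercises: obtain the EGLDeviceEXT backing a display and query its extension
string.  It assumes EGL headers that provide the EGL_EXT_device_query
typedefs, and it omits the error checks and client-extension checks the real
test performs.

#include <stdio.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

int main(void)
{
    /* Resolve the EXT_device_query entry points. */
    PFNEGLQUERYDISPLAYATTRIBEXTPROC queryDisplayAttrib =
        (PFNEGLQUERYDISPLAYATTRIBEXTPROC)
        eglGetProcAddress("eglQueryDisplayAttribEXT");
    PFNEGLQUERYDEVICESTRINGEXTPROC queryDeviceString =
        (PFNEGLQUERYDEVICESTRINGEXTPROC)
        eglGetProcAddress("eglQueryDeviceStringEXT");

    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    /* The device is an attribute of the display. */
    EGLAttrib device = 0;
    queryDisplayAttrib(dpy, EGL_DEVICE_EXT, &device);

    /* The first argument is the EGLDeviceEXT, not the EGLDisplay. */
    const char *devstring =
        queryDeviceString((EGLDeviceEXT)device, EGL_EXTENSIONS);
    printf("Device extensions: %s\n", devstring ? devstring : "(none)");

    eglTerminate(dpy);
    return 0;
}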



[Piglit] [PATCH 1/3] egl: Check for EGL_EXT_device_base

2016-09-07 Thread James Jones
EGL_EXT_device_base is equivalent to
EGL_EXT_device_query && EGL_EXT_device_enumeration

Signed-off-by: James Jones 
---
 .../spec/egl_ext_device_enumeration/egl_ext_device_enumeration.c  | 8 +++++---
 tests/egl/spec/egl_ext_device_query/egl_ext_device_query.c        | 6 ++++--
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/tests/egl/spec/egl_ext_device_enumeration/egl_ext_device_enumeration.c b/tests/egl/spec/egl_ext_device_enumeration/egl_ext_device_enumeration.c
index ce717bf..0d29a6b 100644
--- a/tests/egl/spec/egl_ext_device_enumeration/egl_ext_device_enumeration.c
+++ b/tests/egl/spec/egl_ext_device_enumeration/egl_ext_device_enumeration.c
@@ -38,10 +38,12 @@ main(void)
const char *client_exts = eglQueryString(EGL_NO_DISPLAY, EGL_EXTENSIONS);
bool has_client_ext =
client_exts &&
-   piglit_is_extension_in_string(client_exts,
+   ((piglit_is_extension_in_string(client_exts,
"EGL_EXT_device_query") &&
-   piglit_is_extension_in_string(client_exts,
-   "EGL_EXT_device_enumeration");
+ piglit_is_extension_in_string(client_exts,
+   "EGL_EXT_device_enumeration")) ||
+piglit_is_extension_in_string(client_exts,
+   "EGL_EXT_device_base"));
 
if (!has_client_ext) {
printf("EGL_EXT_device_enumeration not supported\n");
diff --git a/tests/egl/spec/egl_ext_device_query/egl_ext_device_query.c b/tests/egl/spec/egl_ext_device_query/egl_ext_device_query.c
index 721483f..950abbb 100644
--- a/tests/egl/spec/egl_ext_device_query/egl_ext_device_query.c
+++ b/tests/egl/spec/egl_ext_device_query/egl_ext_device_query.c
@@ -40,8 +40,10 @@ main(void)
const char *client_exts = eglQueryString(EGL_NO_DISPLAY, EGL_EXTENSIONS);
bool has_client_ext =
client_exts &&
-   piglit_is_extension_in_string(client_exts,
-   "EGL_EXT_device_query");
+   (piglit_is_extension_in_string(client_exts,
+   "EGL_EXT_device_query") ||
+piglit_is_extension_in_string(client_exts,
+   "EGL_EXT_device_base"));
 
if (!has_client_ext) {
printf("EGL_EXT_device_query not supported\n");
-- 
1.9.1
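
The check the patch introduces can be summarized by a small standalone helper
(the helper name is hypothetical, and strstr() is a looser, substring-based
match than piglit_is_extension_in_string(); this is a sketch, not the test
code):

#include <stdbool.h>
#include <string.h>
#include <EGL/egl.h>

/* True if the client extension string advertises either EGL_EXT_device_base
 * or both of the extensions it was later split into. */
static bool
has_device_query_and_enumeration(void)
{
    const char *exts = eglQueryString(EGL_NO_DISPLAY, EGL_EXTENSIONS);

    if (!exts)
        return false;

    return strstr(exts, "EGL_EXT_device_base") ||
           (strstr(exts, "EGL_EXT_device_query") &&
            strstr(exts, "EGL_EXT_device_enumeration"));
}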



[Piglit] [PATCH 0/3] Misc. EGLDevice test fixes

2016-09-07 Thread James Jones
I found a few test issues and errors when running the new
egl_ext_device_query and egl_ext_device_enumeration tests.  With
these patches, both tests pass on the binary NVIDIA drivers.

James Jones (3):
  egl: Check for EGL_EXT_device_base
  egl: eglQueryDevicesEXT returns EGL_TRUE on success
  egl: eglQueryDeviceStringEXT() takes an EGLDevice

 tests/egl/spec/egl_ext_device_enumeration/egl_ext_device_enumeration.c | 10 ++++++----
 tests/egl/spec/egl_ext_device_query/egl_ext_device_query.c             | 10 ++++++----


[Piglit] [PATCH 2/3] egl: eglQueryDevicesEXT returns EGL_TRUE on success

2016-09-07 Thread James Jones
This is the case even when <devices> is NULL or <max_devices> is
smaller than the number of devices available.  The only error
conditions mentioned in the spec are for <num_devices> being NULL,
or <max_devices> <= 0 when <devices> != NULL.

Signed-off-by: James Jones 
---
 tests/egl/spec/egl_ext_device_enumeration/egl_ext_device_enumeration.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/egl/spec/egl_ext_device_enumeration/egl_ext_device_enumeration.c b/tests/egl/spec/egl_ext_device_enumeration/egl_ext_device_enumeration.c
index 0d29a6b..0b524d5 100644
--- a/tests/egl/spec/egl_ext_device_enumeration/egl_ext_device_enumeration.c
+++ b/tests/egl/spec/egl_ext_device_enumeration/egl_ext_device_enumeration.c
@@ -57,7 +57,7 @@ main(void)
piglit_report_result(PIGLIT_FAIL);
}
 
-   if (queryDevices(0, NULL, &numdevs)) {
+   if (!queryDevices(0, NULL, &numdevs)) {
printf("Failed to get device count\n");
piglit_report_result(PIGLIT_FAIL);
}
-- 
1.9.1
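
For reference, a sketch of the two-call pattern the spec allows; both calls
are expected to return EGL_TRUE, even though the first passes NULL for
<devices>.  The 16-entry array is an arbitrary size chosen for illustration,
and error handling beyond the return value is omitted.

#include <stdio.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

int main(void)
{
    PFNEGLQUERYDEVICESEXTPROC queryDevices =
        (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
    EGLDeviceEXT devs[16];
    EGLint numdevs = 0;

    /* Counting query: <devices> == NULL is not an error condition. */
    if (!queryDevices(0, NULL, &numdevs))
        return 1;
    printf("%d device(s) available\n", numdevs);

    /* Requesting fewer handles than exist is not an error either. */
    if (!queryDevices(16, devs, &numdevs))
        return 1;

    return 0;
}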



Re: [Piglit] [PATCH] winsys-framework Default to showing window

2015-08-17 Thread James Jones
For those who don't want to click-through, I'll just paste the very 
brief portion of the spec on Pixel Ownership that Alex linked to here:


[1] "The first test is to determine if the pixel at location in the 
framebuffer is currently owned by the GL (more precisely, by this GL 
context). If it is not, the window system decides the fate the incoming 
fragment. Possible results are that the fragment is discarded or that 
some subset of the subsequent per-fragment operations are applied to the 
fragment. This test allows the window system to control the GL's 
behavior, for instance, when a GL window is obscured."


The important part is the last sentence: the GL does not guarantee that 
rendering to an obscured pixel will do anything, and the pixels of an 
unmapped window are definitely obscured.  I've never heard anyone argue 
against that interpretation, though I've heard some grumbling from people 
who wish this spec language didn't exist.  As long as it exists, though, we'll 
take advantage of it in our driver, so I don't think it makes any sense 
for tests to ignore it, at least by default.  Any other driver is free 
to implement the same optimization, but I don't know if any other X11 GL 
drivers do at the moment.


Thanks,
-James

[1] 
https://www.opengl.org/documentation/specs/version1.1/glspec1.1/node94.html


(Also present in the OpenGL 4.5 core spec as section 14.9.1, page 461)
https://www.opengl.org/registry/doc/glspec45.core.pdf

On 08/14/2015 04:33 PM, Ilia Mirkin wrote:

Most tests do indeed use an FBO, either directly or via the fbo
winsys, and all will be well. However some don't (glean, probably
others) because they were written in the stone ages. Those are the
ones that have problems.

Perhaps we should just suck up the slowness and use that as an
incentive to finally convert those tests over to The New Way (tm).

Unfortunately I'm ill-equipped to either agree or disagree with your
claim that "using the default framebuffer without mapping it is
undefined behavior". Hopefully someone "in the know" can comment?

   -ilia


On Fri, Aug 14, 2015 at 7:27 PM, Alexander Goins  wrote:

 From NVIDIA's side, I don't think it would be that big of a problem to just 
keep running our tests with PIGLIT_FORCE_WINDOW=1 or -fbo. However, my 
reasoning is that Piglit should default to the 'correct' behavior according to 
the OpenGL spec. While the NVIDIA driver is the only one I know of that 
actually enforces pixel ownership, any driver theoretically could do so while 
still adhering to the spec.

I agree that having a window pop up for every test is annoying, but using the 
default framebuffer without mapping it is undefined behavior. The 'correct' way 
to do offscreen rendering would be using FBOs, which Piglit supports with the 
-fbo argument. Is there a reason why this can't/shouldn't be used for running 
Piglit offscreen? I originally removed the ability to render offscreen with the 
default framebuffer altogether, but I figured some users might still desire 
this behavior, so I instead decided to leave it in as a non-default option.

Thanks,
Alex
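
For completeness, the FBO route mentioned above amounts to rendering into an
application-created framebuffer object rather than the window's default
framebuffer; a rough fragment follows (it assumes a current GL context,
picks an arbitrary 256x256 renderbuffer size, and skips cleanup):

GLuint fbo, rb;

glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &rb);

glBindRenderbuffer(GL_RENDERBUFFER, rb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 256, 256);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, rb);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    /* fall back or abort */;

/* Rendering now targets pixels this context owns, so pixel ownership is
 * never in question, mapped window or not. */
glClearColor(0.0f, 1.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);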

-Original Message-
From: ibmir...@gmail.com [mailto:ibmir...@gmail.com] On Behalf Of Ilia Mirkin
Sent: Friday, August 14, 2015 8:07 AM
To: Alexander Goins
Cc: piglit@lists.freedesktop.org
Subject: Re: [Piglit] [PATCH] winsys-framework Default to showing window

Interesting. PIGLIT_FORCE_WINDOW was introduced in this change:

commit b797406fb850c5e3c9fdaaebae36127b5e5f8954
Author: Paul Berry 
Date:   Tue Jun 11 16:34:32 2013 -0700

 Add PIGLIT_FORCE_WINDOW environment variable.

 Setting PIGLIT_FORCE_WINDOW=1 causes all piglit windowsystem tests
 (those that weren't invoked with the "-fbo" flag) to show the window
 prior to calling piglit_display().  Setting PIGLIT_FORCE_WINDOW=0 (the
 default) preserves piglit's existing behaviour, which is to only show
 the window when the test is run in "manual" mode (with no "-auto"
 argument supplied).

 Setting PIGLIT_FORCE_WINDOW=1 slows down the piglit run, and produces
 more annoying windows on the screen, but it has been observed to
 produce more repeatable results when testing with the nVidia
 proprietary driver for Linux.

 Reviewed-by: Chad Versace 

Which explicitly mentions the nvidia blob driver thing. Having more windows 
popping up during piglit runs is going to be annoying... and slowing things 
down will also be unwelcome. Is there anything else that can be done here?

   -ilia

On Thu, Aug 13, 2015 at 1:49 PM, agoins  wrote:

winsys-framework incorrectly assumed that it would always have
ownership of the pixels in its own buffers. If using the default
framebuffer (i.e., when Piglit is not running in FBO mode), ownership
will not be acquired unless the window is mapped.

While this is not typically a problem because buffers are separate,
NVIDIA's Unified Back Buffer feature makes all windows share a back
buffer while still conforming to the OpenGL spec. If Piglit does


Re: [Piglit] Issue with Piglit pixel ownership assumptions

2015-08-12 Thread James Jones
The PIGLIT_NO_WINDOW patch seems like the way to go to me, since 
defaulting to unreliable behavior seems bad, but allowing users to fall 
back to it to test on drivers that don't enforce pixel ownership seems 
fine.  I'd prefer if someone outside NVIDIA could review as well, but for:


0001-winsys-framework-Default-to-showing-window.patch

Reviewed-by: James Jones 

I'll commit this soon if no one objects.

Thanks,
-James

On 07/20/2015 02:44 PM, Alexander Goins wrote:

Hello all,

During some recent debugging, I seem to have discovered an issue with
Piglit that could cause problems with some drivers. With -auto, and
without -fbo or PIGLIT_FORCE_WINDOW=1, Piglit defaults to rendering to
the default framebuffer without mapping a window. This means that it
doesn't take pixel ownership, which could cause a problem with some
drivers, particularly NVIDIA's with Unified Back Buffer enabled. The
OpenGL spec requires that pixel ownership be established in order to
guarantee that the window system doesn't clobber the output. There are
more details in the patch descriptions.

It would probably be best to deprecate the functionality of not showing
a window altogether, making users use FBOs if they want offscreen
rendering. I made a patch that simply removes that functionality. As a
softer solution, however, I also made a patch that makes displaying the
window the default and replaces PIGLIT_FORCE_WINDOW with
PIGLIT_NO_WINDOW, a flag that allows tests to run without mapping a
window. This way, users that have been relying on the existing
functionality can continue using it.

Both patches are included.

Thanks,

Alex




[Piglit] [PATCH] s3tc-errors: Use -4, not -3 as an invalid x offset

2015-03-09 Thread James Jones
The code in question passes "-3" for the X offset
of a TexSubImage2D operation.  TexSubImage2D is
defined to generate INVALID_VALUE for offsets that
are less than the negative border width, and
that is what the test expects.  However,
TexSubImage2D is also defined to generate
INVALID_OPERATION for offsets that are not
multiples of 4 when using S3TC textures, as this
test does.  Therefore, implementations can
legitimately generate either error here.  To avoid
ambiguity, use -4 instead.

Passes on GeForce GTX 680 (binary driver 346.47)

Signed-off-by: James Jones 
---
 tests/texturing/s3tc-errors.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/texturing/s3tc-errors.c b/tests/texturing/s3tc-errors.c
index 0ed1279..659cfe1 100644
--- a/tests/texturing/s3tc-errors.c
+++ b/tests/texturing/s3tc-errors.c
@@ -267,7 +267,7 @@ test_format(int width, int height, GLfloat *image, GLenum requested_format)
pass = check_gl_error2(GL_INVALID_OPERATION, GL_INVALID_VALUE) && pass;
 
/* Try compressed subimage with invalid offset - should not work */
-   x = -3;
+   x = -4;
y = 8;
w = 4;
h = 4;
-- 
1.9.1
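
To spell out the ambiguity, a small sketch of the two cases; it assumes a
current GL context, EXT_texture_compression_s3tc, a bound DXT5 texture at
least 16x16 texels, and `block` holding one 16-byte compressed block (the
format and coordinates are just examples matching the test's layout; stdio
and GL headers/loader setup are omitted):

static void
show_offset_errors(const GLubyte block[16])
{
    /* Drain any stale errors first. */
    while (glGetError() != GL_NO_ERROR)
        ;

    /* xoffset = -3: negative *and* not a multiple of 4, so either
     * GL_INVALID_VALUE or GL_INVALID_OPERATION may legitimately result. */
    glCompressedTexSubImage2D(GL_TEXTURE_2D, 0, -3, 8, 4, 4,
                              GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, 16, block);
    printf("xoffset -3 -> error 0x%x\n", glGetError());

    /* xoffset = -4: still invalid (negative), but block-aligned, so only
     * GL_INVALID_VALUE is expected -- the unambiguous case used above. */
    glCompressedTexSubImage2D(GL_TEXTURE_2D, 0, -4, 8, 4, 4,
                              GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, 16, block);
    printf("xoffset -4 -> error 0x%x\n", glGetError());
}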



Re: [Piglit] [PATCH 5/5] glx_ext_create_context_es2_profile: Verify that the invalid ES versions are rejected

2012-06-12 Thread James Jones
On 6/12/12 3:13 PM, Ian Romanick wrote:
> On 06/12/2012 02:40 PM, James Jones wrote:
>> On 6/12/12 2:25 PM, Ian Romanick wrote:
>>> From: Ian Romanick
>>>
>>> NVIDIA's closed-source driver fails this test.  GLXBadProfileARB is
>>> generated instead of BadMatch.  The specification isn't specific about
>>> what error should be generated, but I don't think GLXBadProfileARB is
>>> correct.  GLXBadProfileARB seems to only be generated in cases where
>>> the bit mask specified for GLX_CONTEXT_PROFILE_MASK_ARB is invalid.
>>> Cases where the GLX_CONTEXT_PROFILE_MASK_ARB is incongruous with the
>>> specified version either result in GLX_CONTEXT_PROFILE_MASK_ARB being
>>> ignored or BadMatch.
>>
>> GLXBadProfileARB fell out more naturally from our implementation. I
>> see no good argument for either side. I think I could construct an
>> equally convincing case for generating GLXBadProfileARB, but I don't
>> have a strong opinion. If you really want it to be BadMatch, I can
>> update our implementation and the spec to conform to that as well, so we
>> can be consistent at least.
> 
> Since it appears you are the only shipping implementation, I'm content
> with having the first one be the winner.  I'll submit a bug against the
> spec with some suggested language change.
> 
> Something like...
> 
> Replace:
> 
>  "... If the GLX_CONTEXT_ES2_PROFILE_BIT_EXT bit is set, and the
>   requested version is 2.0, then a context implementing OpenGL ES 2.0
>   is returned; otherwise, if the requested OpenGL version is less than
>   3.2, GLX_CONTEXT_PROFILE_MASK_ARB is ignored and the functionality
>   of the context is determined solely by the requested version."
> 
> with
> 
>  "... If the GLX_CONTEXT_ES2_PROFILE_BIT_EXT bit is set, and the
>   requested version is 2.0, then a context implementing OpenGL ES 2.0
>   is returned.  If the GLX_CONTEXT_ES2_PROFILE_BIT_EXT bit is set,
>   and the requested version is not 2.0, the error GLXBadProfileARB is
>   generated.  Otherwise, if the requested OpenGL version is less than
>   3.2, GLX_CONTEXT_PROFILE_MASK_ARB is ignored and the functionality
>   of the context is determined solely by the requested version."
> 
> The original language is, as worded, broken anyway.  It implies that if
> you specify GLX_CONTEXT_ES2_PROFILE_BIT_EXT and 2.1, that a desktop
> OpenGL 2.1 context should be created.
> 
> Generating GLXBadProfileARB is also consistent with
> 
>"* If attribute GLX_CONTEXT_PROFILE_MASK_ARB has no bits set; has
>   any bits set other than GLX_CONTEXT_CORE_PROFILE_BIT_ARB and
>   GLX_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB; has more than one of
>   these bits set; or *if the implementation does not support the
>   requested profile*, then GLXBadProfileARB is generated."
>   (emphasis mine)

Right.  There are no other errors specified in the general sections of the 
GLX_ARB_create_context spec.  Would it be better to just add a clause to the 
GLXBadProfileARB bullet point above:

"...; or if the version and profile requested are incompatible for any 
reason, ..."

Thanks,
-James
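
For readers following along, the case under discussion corresponds to an
attribute list like the one below; dpy and fbconfig are assumed to exist,
GL/glx.h and glxext.h are assumed included, and the X error handler needed
to actually observe the protocol error is omitted.

/* Request the ES2 profile with a version other than 2.0.  Whether the
 * server responds with BadMatch or GLXBadProfileARB is exactly the
 * ambiguity discussed above. */
PFNGLXCREATECONTEXTATTRIBSARBPROC createContextAttribs =
    (PFNGLXCREATECONTEXTATTRIBSARBPROC)
    glXGetProcAddressARB((const GLubyte *)"glXCreateContextAttribsARB");

const int attribs[] = {
    GLX_CONTEXT_MAJOR_VERSION_ARB, 2,
    GLX_CONTEXT_MINOR_VERSION_ARB, 1,      /* not 2.0 */
    GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_ES2_PROFILE_BIT_EXT,
    None
};

GLXContext ctx = createContextAttribs(dpy, fbconfig, NULL, True, attribs);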



Re: [Piglit] [PATCH 4/5] glx_ext_create_context_es2_profile: Verify that indirect-rendering is impossible

2012-06-12 Thread James Jones
On 6/12/12 3:12 PM, Ian Romanick wrote:
> On 06/12/2012 02:56 PM, James Jones wrote:
>> On 6/12/12 2:47 PM, Ian Romanick wrote:
>>> On 06/12/2012 02:35 PM, James Jones wrote:
>>>> On 6/12/12 2:25 PM, Ian Romanick wrote:
>>>>> From: Ian Romanick
>>>>>
>>>>> The spec doesn't forbid indirect rendering with OpenGL ES 2.0.
>>>>> There's no protocol defined, so it seems impossible that this could
>>>>> ever work.
>>>>>
>>>>> NVIDIA's closed-source driver fails this test.  An indirect-rendering
>>>>> ES2 context is created.  I have not verified whether this context is
>>>>> actually usable, or it is garbage.
>>>>
>>>> Yeah, I didn't test, but in theory it "just works" with the existing
>>>> GLX protocol, so I saw no reason to disable it.  Since GLX owns the
>>>> GL command protocol specification, not GL or GLES, and this extension
>>>> is creating contexts using GLX, I see no reason this should be
>>>> considered a required error.
>>>
>>> So... how do commands like glReleaseShaderCompiler, glShaderBinary, and
>>> glGetShaderPrecisionFormat, which have no GLX protocol, work?
>>
>> They work just as well as the subset of GL commands that don't have
>> protocol defined today, which many implementations (including ours) allow
>> contexts to be created for. I'm not saying it's correct that our
> 
> That seems dodgy. :)

We were more worried about regressing existing working apps by downgrading our 
GL version when we found the bug internally, since no one ever actually 
complained about the inflated version numbers until very recently.

>> implementation currently allows creation of GLES contexts that only
>> partially work (or maybe not at all for all I know), but I don't think
>> it's necessarily correct to require ALL implementations to fail either.
>> Presumably they could have their own in-house protocol in place.
>>
>> For example, we implement "Beta" protocol for many GL operations
>> that don't have ARB-defined protocol, because we have customers that want
>> indirect rendering support now and the ARB approval process often drags
>> on forever. It only works when using both our client and server, both
>> with beta protocol support enabled, but it is a usable solution and
>> doesn't violate the spec in any way that I know of.
> 
> Right.  I knew you guys did that, and I can't think of anything wrong
> with that.  That's why I didn't make a similar test for desktop GL.
> (see also http://lists.x.org/archives/xorg-devel/2012-June/031602.html).
> 
> I think I'll rework the test to use the context that was created.  If
> that works, the test will pass.  Does that seem like a fair compromise?

Yeah, sounds good.  Depending on what you actually mean by "use", we'll pass or 
fail, but that's fair.

Thanks,
-James
