On 01/27/2012 12:57 AM, Jose Fonseca wrote:
This doesn't make sense from a mathematical POV.

If the allowable size is

   2^floor(log2(MAX_XX_TEXTURE_SIZE)) + 2*bt

then it doesn't matter whether MAX_XX_TEXTURE_SIZE includes the
border width or not, as the allowable size is still the same for all
common values of MAX_XX_TEXTURE_SIZE.

It would be very odd to define MAX_XX_TEXTURE_SIZE in terms of
itself, as there would be a singularity for MAX_XX_TEXTURE_SIZE < 4:
no value of MAX_XX_TEXTURE_SIZE under four will satisfy the
equation:

   MAX_XX_TEXTURE_SIZE = 2^floor(log2(MAX_XX_TEXTURE_SIZE)) + 2
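
Spelling out the small cases:

   MAX = 1: 2^floor(log2(1)) + 2 = 1 + 2 = 3 != 1
   MAX = 2: 2^floor(log2(2)) + 2 = 2 + 2 = 4 != 2
   MAX = 3: 2^floor(log2(3)) + 2 = 2 + 2 = 4 != 3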

If hypothetically one day the spec allowed higher maximum border sizes, it 
would be even worse.

Which it won't: borders are removed in OpenGL ES and in OpenGL 3.1+ Core profile.

My take is that the border does not need to be implicitly added to
MAX_XX_TEXTURE_SIZE query, and the test/driver is wrong.

Also it doesn't match what NVIDIA produces:

Thanks for checking this. I was going to check it, but I kept getting distracted.

I think the proxy check needs to change. If the maximum texture size is 2048, then a texture with border should be limited to 2046+2. That is, when doing the size check, add the border size in first. I just tried the test on AMD and NVIDIA, and this appears to be what they both do.

My recollection is also that the test just produces a warning in this case.
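
Roughly the kind of check I mean, just as a sketch with made-up names
(this is not the actual Mesa proxy-texture code):

#include <GL/gl.h>

/* Purely illustrative.  The idea is to keep the GL_MAX_*_TEXTURE_SIZE
 * query at the power-of-two value (e.g. 2048) and charge the border
 * against that limit in the size check, so a border=1 image gets a
 * 2046-texel interior plus 2 border texels (2046 + 2 = 2048 total).
 */
static GLboolean
bordered_size_fits(GLint interior, GLint border, GLint maxSize)
{
   /* add the border in first, then compare against the max */
   return interior + 2 * border <= maxSize;
}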

$ glxinfo -l | grep SIZE
     GL_MAX_TEXTURE_SIZE = 16384
     GL_MAX_3D_TEXTURE_SIZE = 2048
     GL_ALIASED_POINT_SIZE_RANGE = 1, 63
     GL_SMOOTH_POINT_SIZE_RANGE = 1, 63
     GL_MAX_CUBE_MAP_TEXTURE_SIZE_ARB = 16384
     GL_MAX_RECTANGLE_TEXTURE_SIZE_NV = 16384

Jose


----- Original Message -----
As per the OpenGL 3.0 specification, section 3.9 (page 187 of the PDF),
the maximum allowable width, height, or depth of a texel array must be
at least 2^(k-lod) + 2*bt for image arrays of level-of-detail (lod) 0
through k, where k is the log base 2 of MAX_3D_TEXTURE_SIZE and bt is
the maximum border width.

Currently, different values for the maximum allowable texture size are
returned by glGetIntegerv() and proxy textures: glGetIntegerv() returns
2048 and the proxy texture returns (2048 + 2).

This patch fixes Intel oglconform test case: max_values
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=44970

Note: This is a candidate for mesa 8.0 branch.

Signed-off-by: Anuj Phogat <anuj.pho...@gmail.com>
---
  src/mesa/main/get.c |    3 ++-
  1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/src/mesa/main/get.c b/src/mesa/main/get.c
index 5ad6012..4a70109 100644
--- a/src/mesa/main/get.c
+++ b/src/mesa/main/get.c
@@ -1507,7 +1507,8 @@ find_custom_value(struct gl_context *ctx, const struct value_desc *d, union valu
     case GL_MAX_3D_TEXTURE_SIZE:
     case GL_MAX_CUBE_MAP_TEXTURE_SIZE_ARB:
        p = (GLuint *) ((char *) ctx + d->offset);
-      v->value_int = 1 << (*p - 1);
+      /* GL 3.0: Add 2 pixels to accommodate the border */
+      v->value_int = (1 << (*p - 1)) + 2;
        break;

     case GL_SCISSOR_BOX:
--
1.7.7.4
_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev
