Re: [osg-users] GL_R32F (and others) float textures are being normalized
Hi,

On 21/09/10 15:41, Juan Hernando wrote:
> Hi J.P.,
>> we're using unclamped textures just fine. Also have a look at the
>> difference between glFragColor and glFragData.
> I think you misunderstood the actual problem. The texture is not written
> by a fragment shader. I'm filling the texture from the client side, and
> it is inside the vertex shader where clamped values are returned by the
> texture sampler. And this happens for GL_R32F but not for GL_RGBA32F or
> GL_LUMINANCE32F.

ah, OK. Sorry for the noise.

jp

> Anyway, as I stated in my previous mail, I'll just use GL_LUMINANCE32F
> instead of GL_R32F.
>
> Thanks and cheers,
> Juan

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
Re: [osg-users] GL_R32F (and others) float textures are being normalized
Hi J.P.,

> we're using unclamped textures just fine. Also have a look at the
> difference between glFragColor and glFragData.

I think you misunderstood the actual problem. The texture is not written by a fragment shader. I'm filling the texture from the client side, and it is inside the vertex shader where clamped values are returned by the texture sampler. And this happens for GL_R32F but not for GL_RGBA32F or GL_LUMINANCE32F.

Anyway, as I stated in my previous mail, I'll just use GL_LUMINANCE32F instead of GL_R32F.

Thanks and cheers,
Juan
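The symptom described above can be mimicked in a few lines of standalone C++. This is purely an illustration of the difference between a normalized fixed-point format and a true float format; the names `StorageKind` and `storeTexel` are invented for the sketch and are not driver or OSG code.

```cpp
#include <algorithm>

// Hypothetical mimic of texel storage: a normalized fixed-point format
// clamps each component into [0, 1] at upload time, while a true float
// format stores the value verbatim. Illustration only.
enum class StorageKind { NormalizedFixedPoint, Float32 };

float storeTexel(float value, StorageKind kind)
{
    if (kind == StorageKind::NormalizedFixedPoint)
        return std::min(std::max(value, 0.0f), 1.0f); // clamp to [0, 1]
    return value; // float formats keep the full range
}
```

This reproduces the symptom from the thread: a value of 2.0 written into a texture that is (mis)treated as normalized reads back as 1.0, while a correctly handled float format returns it unchanged.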
Re: [osg-users] GL_R32F (and others) float textures are being normalized
Hi,

we're using unclamped textures just fine. Also have a look at the difference between glFragColor and glFragData.

cheers
jp

On 21/09/10 11:01, Juan Hernando wrote:
> Hi Werner and Robert,
> Thanks for the answers.
>
>> Clamping to the 0.0 to 1.0 range is standard for OpenGL texturing. I
>> believe there is now a GL extension for a float format that isn't
>> clamped to the 0.0 to 1.0 range, so have a look on opengl.org and
>> other places online for further info.
>
> In ARB_texture_float
> (http://www.opengl.org/registry/specs/ARB/texture_float.txt) the
> overview says: "Floating-point components are clamped to the limits of
> the range representable by their format." And issue 7: "Are
> floating-point values clamped for the fixed-function GL? [...] For the
> programmable pipelines, no clamping occurs."
> As for later changes, I understand from the revision log that
> user-controlled clamping was added in 2004, but only for fragment
> operations (which I don't care about, because I'm reading the texture,
> not writing to it). The only transformation should be for fixed-point
> formats (like the typical GL_RGBA, not integer texture formats), which
> are normalized to [0..1].
>
> As I understand it, the current osg::Texture class is already aware of
> the differences between fixed-point, integer and float formats.
> However, I have found neither code inside Texture.cpp nor specific
> wording in the spec for user control of color component
> clamping/normalization during texturing. It seems that there is no such
> thing, and the GL implementation just chooses the correct behaviour
> depending on the texture format.
>
> GL_R32F comes from ARB_texture_rg for GL 2.1 and is in GL 3.0 core.
> I've used this and the GL_RG32F format before as target formats for
> FBOs, and by default the results are written and read by the shaders
> without clamping, as expected. My best guess is that the driver is
> clamping GL_R32F when glTexImage1D is called, hence I'd dare to say
> it's a driver bug. That osg::Texture doesn't compute the correct
> InternalFormatType for these relatively new formats is inconsistent,
> but should be harmless.
>
> By the way, I'm using the NVIDIA Linux driver version 256.44 with a 2.0
> context, in case someone is curious enough to try other setups.
> Nevertheless, I've realized my problem is easily solved by using
> GL_LUMINANCE32F_ARB instead of the more bizarre GL_R32F.
>
> Regards,
> Juan
Re: [osg-users] GL_R32F (and others) float textures are being normalized
Hi Werner and Robert,

Thanks for the answers.

> Clamping to the 0.0 to 1.0 range is standard for OpenGL texturing. I
> believe there is now a GL extension for a float format that isn't
> clamped to the 0.0 to 1.0 range, so have a look on opengl.org and other
> places online for further info.

In ARB_texture_float (http://www.opengl.org/registry/specs/ARB/texture_float.txt) the overview says: "Floating-point components are clamped to the limits of the range representable by their format." And issue 7: "Are floating-point values clamped for the fixed-function GL? [...] For the programmable pipelines, no clamping occurs."

As for later changes, I understand from the revision log that user-controlled clamping was added in 2004, but only for fragment operations (which I don't care about, because I'm reading the texture, not writing to it). The only transformation should be for fixed-point formats (like the typical GL_RGBA, not integer texture formats), which are normalized to [0..1].

As I understand it, the current osg::Texture class is already aware of the differences between fixed-point, integer and float formats. However, I have found neither code inside Texture.cpp nor specific wording in the spec for user control of color component clamping/normalization during texturing. It seems that there is no such thing, and the GL implementation just chooses the correct behaviour depending on the texture format.

GL_R32F comes from ARB_texture_rg for GL 2.1 and is in GL 3.0 core. I've used this and the GL_RG32F format before as target formats for FBOs, and by default the results are written and read by the shaders without clamping, as expected. My best guess is that the driver is clamping GL_R32F when glTexImage1D is called, hence I'd dare to say it's a driver bug.

That osg::Texture doesn't compute the correct InternalFormatType for these relatively new formats is inconsistent, but should be harmless.

By the way, I'm using the NVIDIA Linux driver version 256.44 with a 2.0 context, in case someone is curious enough to try other setups. Nevertheless, I've realized my problem is easily solved by using GL_LUMINANCE32F_ARB instead of the more bizarre GL_R32F.

Regards,
Juan
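For what it's worth, the switch-statement issue mentioned above can be sketched in isolation. This is an illustrative mock-up, not the actual Texture.cpp source: the GL_* constants below carry the real enum values from glext.h, but they are defined locally so the sketch is self-contained, and the `classify` function and enum are invented for the illustration.

```cpp
// Sketch of a format classifier in the style described for
// Texture::computeInternalFormatType(). Real code would use the GL
// headers; the values here are copied from glext.h for self-containment.
constexpr unsigned GL_RGBA32F_ARB      = 0x8814;
constexpr unsigned GL_LUMINANCE32F_ARB = 0x8818;
constexpr unsigned GL_R32F             = 0x822E;
constexpr unsigned GL_RG32F            = 0x8230;
constexpr unsigned GL_RGBA             = 0x1908;

enum InternalFormatType { NORMALIZED, FLOAT_TYPE, SIGNED_INTEGER, UNSIGNED_INTEGER };

InternalFormatType classify(unsigned internalFormat)
{
    switch (internalFormat)
    {
        case GL_RGBA32F_ARB:
        case GL_LUMINANCE32F_ARB:
        case GL_R32F:   // without these two cases, the ARB_texture_rg
        case GL_RG32F:  // formats fall through to the NORMALIZED default
            return FLOAT_TYPE;
        default:
            return NORMALIZED; // fixed-point formats such as GL_RGBA
    }
}
```

Dropping the `GL_R32F`/`GL_RG32F` cases reproduces the misclassification: those formats would then land in the default clause and be reported as NORMALIZED.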
Re: [osg-users] GL_R32F (and others) float textures are being normalized
Hi Juan,

Clamping to the 0.0 to 1.0 range is standard for OpenGL texturing. I believe there is now a GL extension for a float format that isn't clamped to the 0.0 to 1.0 range, so have a look on opengl.org and other places online for further info.

Robert.

On Mon, Sep 20, 2010 at 6:14 PM, Juan Hernando wrote:
> Dear all,
> I'm writing some GLSL code that needs to access a 1D floating-point
> texture as input in a vertex shader. My problem is that I'm getting
> clamped/normalized (not sure which one) values inside GLSL instead of
> the full-range values.
>
> For debug purposes I've set up a dummy texture like this:
>
>   osg::Image *image = new osg::Image;
>   float *tmp = new float;
>   image->setImage(1, 1, 1, GL_R32F, GL_RED, GL_FLOAT,
>                   (unsigned char*)tmp, osg::Image::USE_NEW_DELETE);
>   tmp[0] = 2.0f;
>   osg::Texture1D *texture = new osg::Texture1D();
>   texture->setFilter(osg::Texture::MIN_FILTER, osg::Texture::NEAREST);
>   texture->setFilter(osg::Texture::MAG_FILTER, osg::Texture::NEAREST);
>   texture->setImage(image);
>
> In this case, the following GLSL expression:
>
>   texture1D(the_texture, 0.0).r
>
> returns 1.
>
> But if I change the image setup to:
>
>   float *tmp = new float[4];
>   image->setImage(1, 1, 1, GL_RGBA32F, GL_RGBA, GL_FLOAT,
>                   (unsigned char*)tmp, osg::Image::USE_NEW_DELETE);
>
> it works fine.
>
> Looking at the source code of Texture.cpp, I've found that
> Texture::computeInternalFormatType() does not deal with GL_R32F,
> GL_RG32F, GL_R32UI, ...; they all fall into the default clause of the
> switch statement, which assigns _internalFormatType to NORMALIZED. At
> the same time, I've found no member function to change that attribute
> manually.
> Is that an omission, or am I doing something wrong in the
> initialization? If that's an omission, is there an easy workaround that
> doesn't require recompiling the library?
>
> Thanks and best regards,
> Juan
Re: [osg-users] GL_R32F (and others) float textures are being normalized
Hi Juan,

I struggled with the same thing. But I found out that a regular RGBA texture (32 bit) comes out in the shaders with a range of 0.0 to 1.0 on each channel. Keeping this in mind, it's very straightforward to use.

- Werner -

On Monday 20 September 2010 19:14:37 Juan Hernando wrote:
> Dear all,
> I'm writing some GLSL code that needs to access a 1D floating-point
> texture as input in a vertex shader. My problem is that I'm getting
> clamped/normalized (not sure which one) values inside GLSL instead of
> the full-range values.
>
> For debug purposes I've set up a dummy texture like this:
>
>   osg::Image *image = new osg::Image;
>   float *tmp = new float;
>   image->setImage(1, 1, 1, GL_R32F, GL_RED, GL_FLOAT,
>                   (unsigned char*)tmp, osg::Image::USE_NEW_DELETE);
>   tmp[0] = 2.0f;
>   osg::Texture1D *texture = new osg::Texture1D();
>   texture->setFilter(osg::Texture::MIN_FILTER, osg::Texture::NEAREST);
>   texture->setFilter(osg::Texture::MAG_FILTER, osg::Texture::NEAREST);
>   texture->setImage(image);
>
> In this case, the following GLSL expression:
>
>   texture1D(the_texture, 0.0).r
>
> returns 1.
>
> But if I change the image setup to:
>
>   float *tmp = new float[4];
>   image->setImage(1, 1, 1, GL_RGBA32F, GL_RGBA, GL_FLOAT,
>                   (unsigned char*)tmp, osg::Image::USE_NEW_DELETE);
>
> it works fine.
>
> Looking at the source code of Texture.cpp, I've found that
> Texture::computeInternalFormatType() does not deal with GL_R32F,
> GL_RG32F, GL_R32UI, ...; they all fall into the default clause of the
> switch statement, which assigns _internalFormatType to NORMALIZED. At
> the same time, I've found no member function to change that attribute
> manually.
> Is that an omission, or am I doing something wrong in the
> initialization? If that's an omission, is there an easy workaround that
> doesn't require recompiling the library?
>
> Thanks and best regards,
> Juan

--
TEXION Software Solutions
TEXION GmbH - Rotter Bruch 26a - D 52068 Aachen - HRB 14999 Aachen
Fon: +49 241 475757-0, Fax: +49 241 475757-29, web: http://www.texion.eu
Geschäftsführer/Managing Director: Werner Modenbach
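Werner's point, that normalized RGBA channels always arrive in the shader as [0,1], leads to a well-known workaround when float formats are not an option: spread a value in [0,1) across the four 8-bit channels, base 256. The helper names below are hypothetical, unrelated to the thread's code, and the sketch is standalone C++ rather than shader code.

```cpp
#include <array>
#include <cmath>
#include <cstdint>

// Pack a float in [0, 1) into four 8-bit "channels", base 256, the way
// one would encode it into an RGBA8 texel. Illustration only.
std::array<uint8_t, 4> packFloatToRGBA8(float v)
{
    std::array<uint8_t, 4> rgba{};
    for (int i = 0; i < 4; ++i)
    {
        v *= 256.0f;
        float digit = std::floor(v);
        if (digit > 255.0f) digit = 255.0f; // guard against rounding
        rgba[i] = static_cast<uint8_t>(digit);
        v -= digit;
    }
    return rgba;
}

// Inverse: reassemble the float from the four normalized channels.
float unpackRGBA8ToFloat(const std::array<uint8_t, 4>& rgba)
{
    float v = 0.0f;
    float scale = 1.0f / 256.0f;
    for (int i = 0; i < 4; ++i)
    {
        v += rgba[i] * scale;
        scale /= 256.0f;
    }
    return v;
}
```

The round trip is exact up to roughly 2^-32 plus float rounding, which is far more precision than a single 8-bit channel would give.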
[osg-users] GL_R32F (and others) float textures are being normalized
Dear all,

I'm writing some GLSL code that needs to access a 1D floating-point texture as input in a vertex shader. My problem is that I'm getting clamped/normalized (not sure which one) values inside GLSL instead of the full-range values.

For debug purposes I've set up a dummy texture like this:

  osg::Image *image = new osg::Image;
  float *tmp = new float;
  image->setImage(1, 1, 1, GL_R32F, GL_RED, GL_FLOAT,
                  (unsigned char*)tmp, osg::Image::USE_NEW_DELETE);
  tmp[0] = 2.0f;
  osg::Texture1D *texture = new osg::Texture1D();
  texture->setFilter(osg::Texture::MIN_FILTER, osg::Texture::NEAREST);
  texture->setFilter(osg::Texture::MAG_FILTER, osg::Texture::NEAREST);
  texture->setImage(image);

In this case, the following GLSL expression:

  texture1D(the_texture, 0.0).r

returns 1.

But if I change the image setup to:

  float *tmp = new float[4];
  image->setImage(1, 1, 1, GL_RGBA32F, GL_RGBA, GL_FLOAT,
                  (unsigned char*)tmp, osg::Image::USE_NEW_DELETE);

it works fine.

Looking at the source code of Texture.cpp, I've found that Texture::computeInternalFormatType() does not deal with GL_R32F, GL_RG32F, GL_R32UI, ...; they all fall into the default clause of the switch statement, which assigns _internalFormatType to NORMALIZED. At the same time, I've found no member function to change that attribute manually.

Is that an omission, or am I doing something wrong in the initialization? If that's an omission, is there an easy workaround that doesn't require recompiling the library?

Thanks and best regards,
Juan
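Since the GL_RGBA32F path works while GL_R32F does not, one possible client-side workaround is to expand the single-channel data to RGBA texels before handing the buffer to osg::Image::setImage() with GL_RGBA32F / GL_RGBA / GL_FLOAT. The helper below is hypothetical, shown only to illustrate the data layout, and wastes three channels per texel in exchange for avoiding the clamping.

```cpp
#include <vector>

// Expand single-channel float data to RGBA texels: the value goes in
// .r (full float range preserved), .g/.b are zeroed, .a set to 1.
// Hypothetical helper, not from the thread's code or from OSG.
std::vector<float> expandRedToRGBA(const std::vector<float>& red)
{
    std::vector<float> rgba;
    rgba.reserve(red.size() * 4);
    for (float v : red)
    {
        rgba.push_back(v);    // R: the actual value, unclamped
        rgba.push_back(0.0f); // G
        rgba.push_back(0.0f); // B
        rgba.push_back(1.0f); // A
    }
    return rgba;
}
```

In the shader, the value is then read back with `.r` exactly as before.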