[Mesa-dev] [Bug 35200] Mesa 7.6 implementation error: bad datatype in interpolate_int_colors
https://bugs.freedesktop.org/show_bug.cgi?id=35200

--- Comment #6 from Charles Obler 2011-03-18 19:04:05 PDT ---

Hello Paul --

I will need detailed step-by-step instructions here. I'd like to be introduced to Linux development utilities -- make, gdb, etc. -- and to Linux internals, but that hasn't happened yet. I haven't done C++ in fifteen years (I'm a Python / Bash / JavaScript programmer). I'm eager to cooperate and learn, as long as the test doesn't destabilize my system, but you will have to guide me.

--- On Fri, 3/18/11, bugzilla-dae...@freedesktop.org wrote:

From: bugzilla-dae...@freedesktop.org
Subject: [Bug 35200] Mesa 7.6 implementation error: bad datatype in interpolate_int_colors
To: readbetweenli...@yahoo.com
Received: Friday, March 18, 2011, 6:47 PM

https://bugs.freedesktop.org/show_bug.cgi?id=35200

--- Comment #5 from Brian Paul 2011-03-18 11:47:38 PDT ---

The large number of warnings comes from the fact that this issue is hit whenever a row of pixels is drawn. That happens a lot. If you could build Mesa with gdb and set a breakpoint on _mesa_problem() and print the offending value, that would help. Or grab the latest code from git - the updated warning will emit more info.

--
Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email
You are receiving this mail because: You are the assignee for the bug.

___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev
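[Editorial note in the style of a how-to aside: Brian's suggestion can be scripted as a gdb command file. The following is a hypothetical sketch, not from the thread -- it assumes Mesa was built with debugging symbols (e.g. configured with --enable-debug) and that the application triggering the warning is the `voronoi` demo mentioned later in the bug:]

```gdb
# Hypothetical gdb command file; assumes a debug build of Mesa.
# _mesa_problem() is the function Brian names; the rest is illustrative.
break _mesa_problem
commands
  # Show who called _mesa_problem() and, with debug info available,
  # the arguments of each frame -- including the offending datatype
  # passed down from interpolate_int_colors.
  bt 5
  continue
end
run
```

Run with something like `gdb -x break-mesa.gdb ./voronoi`; with debug info present, the backtrace should include the caller's arguments.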
Re: [Mesa-dev] [RFC] GL fixed function fragment shaders
On 03/18/2011 02:31 PM, Jakob Bornecrantz wrote:
> On Mon, Jan 17, 2011 at 10:40 PM, Eric Anholt wrote:
>> On Thu, 13 Jan 2011 17:40:39 +0100, Roland Scheidegger wrote:
>>> Am 12.01.2011 23:04, schrieb Eric Anholt:
>>>> This is a work-in-progress patch series to switch texenvprogram.c
>>>> from generating ARB_fp style Mesa IR to generating GLSL IR as its
>>>> product. For drivers without native GLSL codegen, that is then turned
>>>> into the Mesa IR that can be consumed. However, for 965 we don't use
>>>> the Mesa IR product and just use the GLSL output, producing much
>>>> better code thanks to the new backend. This is part of a long term
>>>> goal to get Mesa drivers off of Mesa IR and producing their
>>>> instruction stream directly from the GLSL IR.
>>>>
>>>> I'm not planning on committing this series immediately, as I've still
>>>> got a regression in the 965 driver with texrect-many on the last
>>>> commit.
>>>>
>>>> As a comparison, here's one of the shaders from openarena before:
>>>
>>> So what's the code looking like after conversion to mesa IR? As long as
>> [SNIP]
>>
>> So, there's one extra Mesa IR move added where we could compute into
>> the destination reg but don't. This is a general problem with
>> ir_to_mesa.cpp that affects GLSL pretty badly.
> I found pretty much the same thing when looking into tunnel:
>
>  # Fragment Program/Shader 0
>   0: TXP TEMP[0], INPUT[4].xyyw, texture[0], 2D;
>   1: MUL TEMP[1].xyz, TEMP[0], INPUT[1];
>   2: MOV TEMP[0].xyz, TEMP[1].xyzx;
>   3: MOV TEMP[0].w, INPUT[1].;
>   4: MOV TEMP[2], TEMP[0];
>   5: MUL TEMP[0].x, INPUT[3]., STATE[1].;
>   6: MUL TEMP[3].x, TEMP[0]., TEMP[0].;
>   7: EX2 TEMP[0].x, TEMP[3].-x-x-x-x;
>   8: MOV_SAT TEMP[3].x, TEMP[0].;
>   9: ADD TEMP[0].x, CONST[4]., TEMP[3].-x-x-x-x;
>  10: MUL TEMP[4].xyz, STATE[2].xyzz, TEMP[0].;
>  11: MAD TEMP[2].xyz, TEMP[1].xyzx, TEMP[3]., TEMP[4].xyzx;
>  12: MOV OUTPUT[2], TEMP[2];
>  13: END
>
>  # Fragment Program/Shader 0
>   0: TXP TEMP[0], INPUT[4], texture[0], 2D;
>   1: MUL_SAT TEMP[1].xyz, TEMP[0], INPUT[1];
>   2: MOV_SAT TEMP[1].w, INPUT[1];
>   3: MUL TEMP[2].x, STATE[0]., INPUT[3].;
>   4: MUL TEMP[2].x, TEMP[2]., TEMP[2].;
>   5: EX2_SAT TEMP[2].x, TEMP[2].-x-x-x-x;
>   6: LRP OUTPUT[2].xyz, TEMP[2]., TEMP[1], STATE[1];
>   7: MOV OUTPUT[2].w, TEMP[1];
>   8: END
>
> I got similar results, though the effects are more visible here. Also
> note that the new shader uses 5 temps compared to 3. The FF setup here I
> think only uses fog (or one texenv modulate), so it is not just
> hard-to-program texenv setups that get affected by this change.
>
> Now looking at how this is generated, the new code seems to generate it
> quite similarly to the old. After that, though, things get interesting:
> after the generation step the old code is done and is already in the
> optimized form you see above. The new code, however, is far from done.
> It first goes through various common GLSL IR optimization passes (in the
> attached text file, the second and third shaders are the same, just with
> and without the inlining of GLSL IR). Finally it calls
> _mesa_optimize_program, which gets it to its current form.
> As for the code itself, it doesn't look as bad as I thought it would.
> There are a lot of allocations and a fair bit of extra typing, though
> the line count in the commit stays about the same or even goes down; the
> reason is that texenv has its own implementation of ureg. Not counting
> that, a conversion to GLSL IR would instead add extra lines.
>
>> Of course, talking about optimality of Mesa IR is kind of a joke, as
>> for the drivers that directly consume it (i915, 965 VS, r200, and I'm
>> discounting r300+ as they have their own IR that Mesa IR gets
>> translated to and actually optimized), we miss huge opportunities to
>> reduce instruction count due to swizzle sources including -1, 0, 1 as
>> options but Mesa IR not taking advantage of it. If we were doing that
>> right, then the other MOV-reduction pass would hit and that extra move
>> just added here would go away, resulting in a net win.
>
> This could be done with any of the IRs (provided numeric swizzling is
> added) and is something that I have been thinking about adding to TGSI,
> as pretty much all hw supports it natively (the exception being svga).
>
>> Similarly, we add an extra indirection phase according to 915's
>> accounting of those on the second shader, but the fact that we don't
>> schedule those in our GLSL output anyway is a big issue for GLSL on
>> hardware with indirection limits.
>
>>> it's not worse than the original I guess this should be ok, though for
>>> those drivers consuming mesa IR I guess it's just more cpu time
>>> without any real benefit?
>>
>> Assuming that the setup the app did was already optimal for a
>> programmable GPU, yes. But I suspect that isn't generally the case --
>> while OA has reasonable-looking fixed function setup (other than the
>> Mesa IR we produce not using the swizzles), given how painful it is to
>> program using texenv I suspect there are a lot of "suboptimal" shader
>> setups out there that we could actually improve.

You posted some GLSL IR cpu optimizations
[Mesa-dev] [Bug 35200] Mesa 7.6 implementation error: bad datatype in interpolate_int_colors
https://bugs.freedesktop.org/show_bug.cgi?id=35200

--- Comment #4 from Charles Obler 2011-03-18 11:24:19 PDT ---

Hello Paul --

Is there anything I can do to trace or further document the problem? Have you looked at voronoi? It may be calling Mesa repeatedly, with parameters that Mesa is unable to handle. That might explain the large number of error messages.

--- On Fri, 3/18/11, bugzilla-dae...@freedesktop.org wrote:

From: bugzilla-dae...@freedesktop.org
Subject: [Bug 35200] Mesa 7.6 implementation error: bad datatype in interpolate_int_colors
To: readbetweenli...@yahoo.com
Received: Friday, March 18, 2011, 2:33 AM

https://bugs.freedesktop.org/show_bug.cgi?id=35200

--- Comment #3 from Brian Paul 2011-03-17 19:33:05 PDT ---

I still can't reproduce this bug here. But with commit 582570a04c73bc304e16af63621b594e0fc39aea at most 50 of these errors will be emitted by Mesa.
[Mesa-dev] [Bug 35025] [Patch] Serious compiler warnings
https://bugs.freedesktop.org/show_bug.cgi?id=35025

Brian Paul changed:

           What        |Removed |Added
           -----------------------------------
           Status      |NEW     |RESOLVED
           Resolution  |        |FIXED

--- Comment #4 from Brian Paul 2011-03-18 11:15:55 PDT ---

I've committed your patch. Thanks.

BTW, different people are responsible for different parts of Mesa and the various drivers. Generally, those people take care of the patches/bugs that affect their components. Just because the nouveau developers may not have been active lately isn't a reason to condemn the project or other developers. I guess I missed this bug before.
[Mesa-dev] [Bug 35025] [Patch] Serious compiler warnings
https://bugs.freedesktop.org/show_bug.cgi?id=35025

Johannes Obermayr changed:

           What     |Removed                   |Added
           ----------------------------------------------------------------
           Summary  |Serious compiler warnings |[Patch] Serious compiler
                    |                          |warnings
[Mesa-dev] [Bug 35025] Serious compiler warnings
https://bugs.freedesktop.org/show_bug.cgi?id=35025

--- Comment #3 from Johannes Obermayr 2011-03-18 10:32:20 PDT ---

Created an attachment (id=44588)
View: https://bugs.freedesktop.org/attachment.cgi?id=44588
Review: https://bugs.freedesktop.org/review?bug=35025&attachment=44588

Fix serious compiler warnings.

It is a damning indictment of the main (mesa) developers that they cannot add two '#include's within two weeks ... (A non-developer who (still) does not understand any C/C++ source code [not to mention Mesa's source code] must spend days doing it.)
Re: [Mesa-dev] Naked DXTn support via ARB_texture_compression?
On 18 March 2011 14:19, Petr Sebor wrote:
> I know that at least our games would benefit from this feature
> immediately, but I guess Wine people might welcome this as well, where
> 'benefit' means - do not have to painfully install the external DXT
> library, which is very likely not needed at all.

As far as Wine is concerned, not without a proper extension. At this stage, having the external library or the driconf option is good enough for Wine. In the end this is a legal problem rather than a technical problem, though.
[Mesa-dev] Naked DXTn support via ARB_texture_compression?
Hi,

I have not been watching the situation around the DXTn compressed texture formats and Mesa closely for quite some time, but after trying to run our games on Mesa (that is, the 7.11-dev Gallium drivers) instead of the proprietary binary drivers, I was struck by the fact that there is still no native support for handling compressed texture formats. Well, this is an old story, and I understand that there is a common (and probably valid) fear of being legally attacked for using patented algorithms, but maybe there is a simple and hopefully legal way that might alleviate the problem and make the life of many people a lot easier.

Motivation: we already have the compressed texture data, created either by a closed-source library (nvdxt) or some other tool that already had to tackle the legal issues. Moreover, I am not interested in using the on-the-fly texture compression/decompression features of Mesa itself at all. I just want the texture data, represented by a binary blob, to end up somewhere in the hardware, and I assume that copying such data around is pretty legal.

Quite some time ago, while reading the ARB_texture_compression spec, I noticed that it is written in a way that explicitly allows the implementation to know about (and advertise) compressed texture formats without actually providing compression/decompression itself - of course, with some limitations. Since then our codebase has been equipped with the following code (and as far as I remember, at least on Windows there actually were drivers that didn't advertise the S3TC extensions yet listed the S3TC formats via ARB_texture_compression, so this idea is nothing new):

< .. snip .. >

// Check if the driver features on-the-fly compression to S3TC;
// we can be sure it will HW accelerate these formats as well.
self.cap.texture_compression_dxt.set(
    is_gl_supported("GL_EXT_texture_compression_s3tc"));

if (!self.cap.texture_compression_dxt) {
    // If such extension does not exist, try the last resort service.
    // Even though the driver does not support runtime compression, it can
    // accept (and probably HW accelerate) rendering in the provided
    // compressed texture formats we're enumerating below.
    GLint num_compressed_formats;
    self.glGetIntegerv(GL_NUM_COMPRESSED_TEXTURE_FORMATS,
                       &num_compressed_formats);
    GLint *const compressed_formats(static_cast<GLint *>(
        alloca(sizeof(GLint) * num_compressed_formats)));
    self.glGetIntegerv(GL_COMPRESSED_TEXTURE_FORMATS, compressed_formats);

    bool texture_DXT1_support(false);
    bool texture_DXT5_support(false);

    // Check for DXT1 and DXT5 formats only; we don't use DXT3.
    for (GLint idx = 0; idx < num_compressed_formats; ++idx) {
        if (compressed_formats[idx] == GL_COMPRESSED_RGB_S3TC_DXT1_EXT) {
            texture_DXT1_support = true;
            message(GL_MESSAGE "Enumerated DXT1 compressed texture format.");
        } else if (compressed_formats[idx] == GL_COMPRESSED_RGBA_S3TC_DXT5_EXT) {
            texture_DXT5_support = true;
            message(GL_MESSAGE "Enumerated DXT5 compressed texture format.");
        }
    }

    self.cap.texture_compression_dxt.set(texture_DXT1_support ||
                                         texture_DXT5_support);
}

< .. snip .. >

Sure, this imposes some limitations - for example, not being able to use glCompressedTexSubImage except with the full texture extents - but that is typically not a problem for many games/applications. So, having Mesa provide only the way to copy the compressed data to the hardware, with native compressed-format support, would really save the day for anyone who just wants to use a feature of the hardware he/she owns without actually using the patented algorithms.

I have been looking sparsely over the Mesa code, thinking at first I might just hack around the idea and present it with a patch, but it would probably end up just like this - a hack that would be better architected in by someone fluent with the Mesa source.
I know that at least our games would benefit from this feature immediately, but I guess Wine people might welcome this as well, where 'benefit' means - do not have to painfully install the external DXT library, which is very likely not needed at all.

What are your opinions? Is it something that might be possible to do within Mesa?

Kind regards,
Petr Sebor

--
Petr Sebor / SCS Software [ http://www.scssoft.com ]
[Mesa-dev] decoupling XCB from Mesa
Hi all,

I am trying to port Mesa to a windowing platform called CDI, one like XCB/X11. All EGL calls succeed except eglSwapBuffers, so my GLES application causes a segmentation fault when delegating the final rendered scene to the native platform.

I managed to port the entire flow of EGL calls by using a GLES application running on an X11 window as a reference. But when it comes to eglSwapBuffers, the whole thing is handled by the xcb_dri2_copy_region_unchecked call of the XCB library, for which I am not finding source code to look at. Could anyone please advise me where, in the Mesa context in the egl_dri2.c file, the final GLES rendered buffer has to be taken from - meaning before it is given to XCB?

Could anyone also tell me where I can get the source code of the xcb-dri extensions? I understand from the build system of libxcb-1.5 that the source code for these extensions is generated by some Python script (if I understand correctly).

I am seriously blocked on this; could anyone please help? Thanks in advance.

Regards,
Srini.