Re: [Mesa-dev] Please bring back __GL_FSAA_MODE
On Fri, 11 Jan 2019 at 13:05, Tom Butler wrote: > In particular I'm looking to get FSAA working in older games via wine, mostly > they don't support antialiasing which is a shame because there's plenty of > GPU power going to waste which could be fixing the jaggies. On nvidia > __GL_FSAA_MODE works as intended provided you set wine's > OffScreenRenderingMode to "backbuffer" which slightly impacts performance but > doesn't matter for older games. I'd love to see the functionality return for > AMD GPUs. > I don't have a particularly strong opinion on the Mesa environment variable, but on the Wine side, the "SampleCount" registry setting [1] is probably what you're looking for. Henri [1] https://wiki.winehq.org/Useful_Registry_Keys ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
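For readers who want to try the suggestion, a sketch of setting that key from a shell (assuming the default WINEPREFIX; the key path and value name are from the wiki page referenced above, the sample count of 4 and the REG_DWORD type are assumptions to verify against the wiki):

```shell
# Force 4x multisampling for Direct3D applications in the current Wine
# prefix; "SampleCount" lives under the per-user Direct3D key.
wine reg add 'HKCU\Software\Wine\Direct3D' /v SampleCount /t REG_DWORD /d 4
```

Deleting the value restores the application-controlled default.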
Re: [Mesa-dev] last call for autotools
On Fri, 14 Dec 2018 at 15:42, Gert Wollny wrote: > On Friday, 14.12.2018, 01:19 -0500, Ilia Mirkin wrote: > > meson is not at a point where it Just Works. It ... sometimes works. > > The fact that everyone has scripts which wrap meson is a symptom of > > that. I don't feel good about dumping the system that everyone (and I > > don't just mean people on this list -- I mean the wider open source > > community as well) knows how to use and has worked reliably for years > > (decades, really) to be replaced by a system that everyone is having > > problems with (it's not just me -- others are running into trouble > > too -- just look at this thread). It's just not ready yet. > > I second that, I voiced my concerns in a former thread, especially that > so far this upcoming change has not been officially announced in the > release notes or on mesa-user, and that I don't understand why it is so > urgent to drop autotools when there is still someone who offers to > maintain it and some who prefer to use it. > Just to add my +1, as a user, I prefer the more mainstream build system.
Re: [Mesa-dev] [PATCH] mesa: Inherit texture view multi-sample information from the original texture images.
On 27 March 2018 at 18:57, Brian Paul wrote: > LGTM. I guess we probably don't have much piglit coverage for texture_view > + MSAA. > > Reviewed-by: Brian Paul I've pushed this, thanks for the review. A piglit test should be straightforward, but I'll likely be fairly busy for at least the next few months. (WineConf preparations being one of the reasons.) Henri
[Mesa-dev] [PATCH] mesa: Inherit texture view multi-sample information from the original texture images.
Found running "The Witness" in Wine. Without this patch, texture views created on multi-sample textures would have a GL_TEXTURE_SAMPLES of 0. All things considered such views actually work surprisingly well, but when combined with (plain) multi-sample textures in a framebuffer object, the resulting FBO is incomplete because the sample counts don't match. Signed-off-by: Henri Verbeet <hverb...@gmail.com> --- src/mesa/main/teximage.c| 20 ++-- src/mesa/main/teximage.h| 8 src/mesa/main/textureview.c | 12 3 files changed, 26 insertions(+), 14 deletions(-) diff --git a/src/mesa/main/teximage.c b/src/mesa/main/teximage.c index 9e139d746f..8f5351085c 100644 --- a/src/mesa/main/teximage.c +++ b/src/mesa/main/teximage.c @@ -837,8 +837,8 @@ clear_teximage_fields(struct gl_texture_image *img) * Fills in the fields of \p img with the given information. * Note: width, height and depth include the border. */ -static void -init_teximage_fields_ms(struct gl_context *ctx, +void +_mesa_init_teximage_fields_ms(struct gl_context *ctx, struct gl_texture_image *img, GLsizei width, GLsizei height, GLsizei depth, GLint border, GLenum internalFormat, @@ -950,8 +950,8 @@ _mesa_init_teximage_fields(struct gl_context *ctx, GLint border, GLenum internalFormat, mesa_format format) { - init_teximage_fields_ms(ctx, img, width, height, depth, border, - internalFormat, format, 0, GL_TRUE); + _mesa_init_teximage_fields_ms(ctx, img, width, height, depth, border, + internalFormat, format, 0, GL_TRUE); } @@ -5891,9 +5891,9 @@ texture_image_multisample(struct gl_context *ctx, GLuint dims, if (_mesa_is_proxy_texture(target)) { if (samplesOK && dimensionsOK && sizeOK) { - init_teximage_fields_ms(ctx, texImage, width, height, depth, 0, - internalformat, texFormat, - samples, fixedsamplelocations); + _mesa_init_teximage_fields_ms(ctx, texImage, width, height, depth, 0, + internalformat, texFormat, + samples, fixedsamplelocations); } else { /* clear all image fields */ @@ -5920,9 +5920,9 @@ 
texture_image_multisample(struct gl_context *ctx, GLuint dims, ctx->Driver.FreeTextureImageBuffer(ctx, texImage); - init_teximage_fields_ms(ctx, texImage, width, height, depth, 0, - internalformat, texFormat, - samples, fixedsamplelocations); + _mesa_init_teximage_fields_ms(ctx, texImage, width, height, depth, 0, +internalformat, texFormat, +samples, fixedsamplelocations); if (width > 0 && height > 0 && depth > 0) { if (memObj) { diff --git a/src/mesa/main/teximage.h b/src/mesa/main/teximage.h index 2e950bf42b..bf790af276 100644 --- a/src/mesa/main/teximage.h +++ b/src/mesa/main/teximage.h @@ -130,6 +130,14 @@ _mesa_init_teximage_fields(struct gl_context *ctx, GLsizei width, GLsizei height, GLsizei depth, GLint border, GLenum internalFormat, mesa_format format); +extern void +_mesa_init_teximage_fields_ms(struct gl_context *ctx, + struct gl_texture_image *img, + GLsizei width, GLsizei height, GLsizei depth, + GLint border, GLenum internalFormat, + mesa_format format, + GLuint numSamples, + GLboolean fixedSampleLocations); extern mesa_format diff --git a/src/mesa/main/textureview.c b/src/mesa/main/textureview.c index 89af068fae..9a064ffd71 100644 --- a/src/mesa/main/textureview.c +++ b/src/mesa/main/textureview.c @@ -304,7 +304,8 @@ initialize_texture_fields(struct gl_context *ctx, struct gl_texture_object *texObj, GLint levels, GLsizei width, GLsizei height, GLsizei depth, - GLenum internalFormat, mesa_format texFormat) + GLenum internalFormat, mesa_format texFormat, + GLuint numSamples, GLboolean fixedSampleLocations) { const GLuint numFaces = _mesa_num_tex_faces(target); GLint level, levelWidth = width, levelHeight = height, levelDepth = depth; @@ -326,9 +327,10 @@ initialize_texture_fields(struct gl_context *ctx, return GL_FALSE; } - _mesa_init_teximage_fields(ctx, texImage, + _mesa_init_teximage_fields_ms(ctx, texImage,
[Mesa-dev] WineConf 2018
I hope this won't be considered spam, in which case I'd like to apologise in advance. The annual Wine conference will this year take place from Friday June 29 until Sunday July 1, in The Hague, The Netherlands. In the interest of outreach to other projects, and since I believe Wine would be considered a significant user of Mesa, perhaps there are some Mesa developers that would like to meet up with some Wine developers, or perhaps there are Mesa developers that simply happen to be in the area around that time and would like to drop by. For more information, please see https://wiki.winehq.org/WineConf2018. Henri
Re: [Mesa-dev] [PATCH] radeonsi: always initialize max_forced_staging_uploads
On 28 November 2017 at 20:20, Marek Olšák wrote: > Most of the corruption goes away if I flush IBs after every command. > This suggests that the staging upload is done in one context while > drawing is done in another context. The solution is to flush all > contexts before BufferData/BufferSubData in Wine. Enabling the "csmt" feature in Wine should avoid issues like that. (And that's in fact its primary purpose, despite alleged performance advantages.) If it doesn't, that would be a bug in Wine.
Re: [Mesa-dev] [PATCH] vulkan/wsi: Avoid waiting indefinitely for present completion in x11_manage_fifo_queues().
On 24 October 2017 at 20:13, Fredrik Höglund <fred...@kde.org> wrote: > On Tuesday 24 October 2017, Henri Verbeet wrote: >> On 24 October 2017 at 16:11, Fredrik Höglund <fred...@kde.org> wrote: >> >> @@ -934,9 +938,18 @@ x11_manage_fifo_queues(void *state) >> >> >> >>while (chain->last_present_msc < target_msc) { >> >> xcb_generic_event_t *event = >> >> -xcb_wait_for_special_event(chain->conn, >> >> chain->special_event); >> >> - if (!event) >> >> -goto fail; >> >> +xcb_poll_for_special_event(chain->conn, >> >> chain->special_event); >> >> + if (!event) { >> >> +int ret = poll(&pfds, 1, 100); >> > >> > There is a race condition here where another thread can read the event >> > from the file descriptor in the time between the calls to >> > xcb_poll_for_special_event() and poll(). >> > >> Is that a scenario we care about? Unless I'm misunderstanding >> something, the same kind of thing could happen between >> x11_present_to_x11() and xcb_wait_for_special_event(). > > It cannot, because if another thread reads a special event, xcb will > insert it into the corresponding special event queue and wake the > waiting thread. That's the point of having special event queues. > > But the reason I know this to be a problem is that I have tried to fix > this bug in the same way, and I noticed that it resulted in frequent > random stutters in some apps because poll() was timing out. > Oh, I think I see what you mean. The event wouldn't get lost, but we'd have to wait for the poll to time out to get to it because it's already read from the fd into the queue.
Re: [Mesa-dev] [PATCH] vulkan/wsi: Avoid waiting indefinitely for present completion in x11_manage_fifo_queues().
On 24 October 2017 at 20:31, Emil Velikov <emil.l.veli...@gmail.com> wrote: > On 17 October 2017 at 15:18, Henri Verbeet <hverb...@gmail.com> wrote: >> Note that the usage of xcb_poll_for_special_event() requires a version >> of libxcb that includes commit fad81b63422105f9345215ab2716c4b804ec7986 >> to work properly. >> > What should we expect if we're using xcb w/o said commit? It's worth > mentioning in the commit message. For reference, https://cgit.freedesktop.org/xcb/libxcb/commit/?id=fad81b63422105f9345215ab2716c4b804ec7986 What happens without that commit is that xcb_poll_for_special_event() will fail to read the event, effectively preventing any subsequent presents. That's obviously bad. As mentioned in the libxcb commit, a similar issue exists for x11_acquire_next_image_poll_x11() with a timeout, although I suppose that scenario is less common. > it seems like there's no release with the commit, should we bribe Uli > to roll one ;-) > Probably.
Re: [Mesa-dev] [PATCH] vulkan/wsi: Avoid waiting indefinitely for present completion in x11_manage_fifo_queues().
On 24 October 2017 at 16:11, Fredrik Höglund wrote: >> @@ -934,9 +938,18 @@ x11_manage_fifo_queues(void *state) >> >>while (chain->last_present_msc < target_msc) { >> xcb_generic_event_t *event = >> -xcb_wait_for_special_event(chain->conn, chain->special_event); >> - if (!event) >> -goto fail; >> +xcb_poll_for_special_event(chain->conn, chain->special_event); >> + if (!event) { >> +int ret = poll(&pfds, 1, 100); > > There is a race condition here where another thread can read the event > from the file descriptor in the time between the calls to > xcb_poll_for_special_event() and poll(). > Is that a scenario we care about? Unless I'm misunderstanding something, the same kind of thing could happen between x11_present_to_x11() and xcb_wait_for_special_event().
[Mesa-dev] [PATCH] vulkan/wsi: Avoid waiting indefinitely for present completion in x11_manage_fifo_queues().
In particular, if the window was destroyed before the present request completed, xcb_wait_for_special_event() may never return. Note that the usage of xcb_poll_for_special_event() requires a version of libxcb that includes commit fad81b63422105f9345215ab2716c4b804ec7986 to work properly. Signed-off-by: Henri Verbeet <hverb...@gmail.com> --- This applies on top of "vulkan/wsi: Free the event in x11_manage_fifo_queues()." --- src/vulkan/wsi/wsi_common_x11.c | 19 --- 1 file changed, 16 insertions(+), 3 deletions(-) diff --git a/src/vulkan/wsi/wsi_common_x11.c b/src/vulkan/wsi/wsi_common_x11.c index 22b067b..ceb0d66 100644 --- a/src/vulkan/wsi/wsi_common_x11.c +++ b/src/vulkan/wsi/wsi_common_x11.c @@ -908,10 +908,14 @@ static void * x11_manage_fifo_queues(void *state) { struct x11_swapchain *chain = state; + struct pollfd pfds; VkResult result; assert(chain->base.present_mode == VK_PRESENT_MODE_FIFO_KHR); + pfds.fd = xcb_get_file_descriptor(chain->conn); + pfds.events = POLLIN; + while (chain->status == VK_SUCCESS) { /* It should be safe to unconditionally block here. Later in the loop * we block until the previous present has landed on-screen. At that @@ -934,9 +938,18 @@ x11_manage_fifo_queues(void *state) while (chain->last_present_msc < target_msc) { xcb_generic_event_t *event = -xcb_wait_for_special_event(chain->conn, chain->special_event); - if (!event) -goto fail; +xcb_poll_for_special_event(chain->conn, chain->special_event); + if (!event) { +int ret = poll(&pfds, 1, 100); +if (ret < 0) { + result = VK_ERROR_OUT_OF_DATE_KHR; + goto fail; +} else if (chain->status != VK_SUCCESS) { + return NULL; +} + +continue; + } result = x11_handle_dri3_present_event(chain, (void *)event); free(event); -- 2.1.4
Re: [Mesa-dev] [PATCH] blob: Use intptr_t instead of ssize_t
On 13 October 2017 at 19:44, Jason Ekstrand wrote: > ssize_t is a GNU extension and is not available on Windows or MacOS. Not to argue against the patch in any way, but ssize_t is POSIX.
Re: [Mesa-dev] [Mesa-stable] [PATCH] vulkan/wsi: Free the event in x11_manage_fifo_queues().
On 13 October 2017 at 19:23, Emil Velikov wrote: > Please give it time for Vulkan devs to take a look. > Sure, I'm in no particular hurry. Henri
[Mesa-dev] [PATCH] vulkan/wsi: Free the event in x11_manage_fifo_queues().
Cc: mesa-sta...@lists.freedesktop.org Signed-off-by: Henri Verbeet <hverb...@gmail.com> --- I should still have commit access. --- src/vulkan/wsi/wsi_common_x11.c | 1 + 1 file changed, 1 insertion(+) diff --git a/src/vulkan/wsi/wsi_common_x11.c b/src/vulkan/wsi/wsi_common_x11.c index ecdaf91..22b067b 100644 --- a/src/vulkan/wsi/wsi_common_x11.c +++ b/src/vulkan/wsi/wsi_common_x11.c @@ -939,6 +939,7 @@ x11_manage_fifo_queues(void *state) goto fail; result = x11_handle_dri3_present_event(chain, (void *)event); + free(event); if (result != VK_SUCCESS) goto fail; } -- 2.1.4
Re: [Mesa-dev] [PATCH v2 1/1] radeonsi: Use libdrm to get chipset name
On 11 June 2017 at 21:56, Marek Olšák <mar...@gmail.com> wrote: > On Sun, Jun 11, 2017 at 8:25 PM, Henri Verbeet <hverb...@gmail.com> wrote: >> As someone downstream of this, I have to say I find the "family" names >> much more informative than whatever marketing came up with. More >> importantly however, this commit changes the GL_RENDERER string >> reported to applications, like Wine, for existing GPUs in an >> incompatible way. Since I suspect displaying the "marketing" name is >> important to at least some people at AMD, could I request please >> including the family name as well, as is done by for example lspci? > > Yes, if you write the patch with the codename in the existing parentheses. :) > How about the attached patch? From f8fabe4ed6efd7983fc266d10a758a36ddb71d55 Mon Sep 17 00:00:00 2001 From: Henri Verbeet <hverb...@gmail.com> Date: Tue, 13 Jun 2017 01:39:02 +0200 Subject: [PATCH] gallium/radeon: Include the family name in the renderer string if it's not equal to the marketing name. 
Signed-off-by: Henri Verbeet <hverb...@gmail.com> --- src/gallium/drivers/radeon/r600_pipe_common.c | 32 +++ 1 file changed, 18 insertions(+), 14 deletions(-) diff --git a/src/gallium/drivers/radeon/r600_pipe_common.c b/src/gallium/drivers/radeon/r600_pipe_common.c index 48d136a..1cec6d5 100644 --- a/src/gallium/drivers/radeon/r600_pipe_common.c +++ b/src/gallium/drivers/radeon/r600_pipe_common.c @@ -788,17 +788,15 @@ static const char* r600_get_device_vendor(struct pipe_screen* pscreen) return "AMD"; } -static const char* r600_get_chip_name(struct r600_common_screen *rscreen) +static const char *r600_get_marketing_name(struct radeon_winsys *ws) { - const char *mname; - - if (rscreen->ws->get_chip_name) { - mname = rscreen->ws->get_chip_name(rscreen->ws); - if (mname != NULL) - return mname; - } + if (!ws->get_chip_name) + return NULL; + return ws->get_chip_name(ws); +} - /* fall back to family names*/ +static const char *r600_get_family_name(const struct r600_common_screen *rscreen) +{ switch (rscreen->info.family) { case CHIP_R600: return "AMD R600"; case CHIP_RV610: return "AMD RV610"; @@ -876,7 +874,7 @@ static void r600_disk_cache_create(struct r600_common_screen *rscreen) #endif if (res != -1) { rscreen->disk_shader_cache = -disk_cache_create(r600_get_chip_name(rscreen), +disk_cache_create(r600_get_family_name(rscreen), timestamp_str, rscreen->debug_flags); free(timestamp_str); @@ -1326,12 +1324,18 @@ struct pipe_resource *r600_resource_create_common(struct pipe_screen *screen, bool r600_common_screen_init(struct r600_common_screen *rscreen, struct radeon_winsys *ws) { - char llvm_string[32] = {}, kernel_version[128] = {}; + char family_name[32] = {}, llvm_string[32] = {}, kernel_version[128] = {}; struct utsname uname_data; + const char *chip_name; ws->query_info(ws, &rscreen->info); rscreen->ws = ws; + if ((chip_name = r600_get_marketing_name(ws))) + snprintf(family_name, sizeof(family_name), "%s / ", r600_get_family_name(rscreen)); + else + chip_name = r600_get_family_name(rscreen); + if (uname(&uname_data) == 0) snprintf(kernel_version, sizeof(kernel_version), " / %s", uname_data.release); @@ -1343,8 +1347,8 @@ bool r600_common_screen_init(struct r600_common_screen *rscreen, } snprintf(rscreen->renderer_string, sizeof(rscreen->renderer_string), - "%s (DRM %i.%i.%i%s%s)", - r600_get_chip_name(rscreen), rscreen->info.drm_major, + "%s (%sDRM %i.%i.%i%s%s)", + chip_name, family_name, rscreen->info.drm_major, rscreen->info.drm_minor, rscreen->info.drm_patchlevel, kernel_version, llvm_string); @@ -1396,7 +1400,7 @@ bool r600_common_screen_init(struct r600_common_screen *rscreen, if (rscreen->debug_flags & DBG_INFO) { printf("pci_id = 0x%x\n", rscreen->info.pci_id); printf("family = %i (%s)\n", rscreen->info.family, - r600_get_chip_name(rscreen)); + r600_get_family_name(rscreen)); printf("chip_class = %i\n", rscreen->info.chip_class); printf("gart_size = %i MB\n", (int)DIV_ROUND_UP(rscreen->info.gart_size, 1024*1024)); printf("vram_size = %i MB\n", (int)DIV_ROUND_UP(rscreen->info.vram_size, 1024*1024)); -- 2.1.4
Re: [Mesa-dev] [PATCH v2 1/1] radeonsi: Use libdrm to get chipset name
On 7 June 2017 at 21:54, Marek Olšák wrote: > On Wed, Jun 7, 2017 at 2:07 AM, Marek Olšák wrote: >> On Wed, Jun 7, 2017 at 12:21 AM, Samuel Li wrote: >>> @@ -790,6 +790,15 @@ static const char* r600_get_device_vendor(struct >>> pipe_screen* pscreen) >>> >>> static const char* r600_get_chip_name(struct r600_common_screen *rscreen) >>> { >>> + const char *mname; >>> + >>> + if (rscreen->ws->get_chip_name) { >>> + mname = rscreen->ws->get_chip_name(rscreen->ws); >>> + if (mname != NULL) >>> + return mname; >>> + } >>> + >>> + /* fall back to family names*/ >>> switch (rscreen->info.family) { >>> case CHIP_R600: return "AMD R600"; >>> case CHIP_RV610: return "AMD RV610"; As someone downstream of this, I have to say I find the "family" names much more informative than whatever marketing came up with. More importantly however, this commit changes the GL_RENDERER string reported to applications, like Wine, for existing GPUs in an incompatible way. Since I suspect displaying the "marketing" name is important to at least some people at AMD, could I request please including the family name as well, as is done by for example lspci?
Re: [Mesa-dev] [PATCH] Allow setting GL_TEXTURE_COMPARE_MODE on a sampler object without ARB_shadow support.
On 26 June 2015 at 17:38, Brian Paul <bri...@vmware.com> wrote: > Digging up the patch from March for reference: >> This fixes a GL error warning on r200 in Wine. The GL_ARB_sampler_objects >> extension does not specify a dependency on GL_ARB_shadow or >> GL_ARB_depth_texture for this value. Just set the value and don't do >> anything else. It won't matter without a depth texture being assigned >> anyway. > So I take it that Wine calls glSamplerParameteri(s, GL_TEXTURE_COMPARE_MODE, mode) even when GL_ARB_shadow is not supported and you get a bunch of GL errors? Has this been reported upstream to Wine so they can fix it? Stefan is part of upstream, so yeah, we're aware of this. > I see this sort of thing all the time in Windows OpenGL apps and it's > tempting to just silence some obscure GL errors, but it's a slippery slope. > When I'm debugging a new GL app, seeing GL errors for all the corner cases > can actually be very helpful and I'd hate to lose that. Well, the argument is that it's actually a bug in Mesa (if perhaps an obscure one) to generate a GL error in this case. We could add a workaround in Wine if we really have to, but the way we're reading the ARB_sampler_objects spec that shouldn't be needed.
Re: [Mesa-dev] [v4 PATCH 05/10] mesa: helper function for scissor box of gl_framebuffer
On 27 May 2015 at 19:05, Kenneth Graunke <kenn...@whitecape.org> wrote: > If you're using vim, this will give you the correct settings for Mesa: > if has("autocmd") > au BufNewFile,BufRead */mesa/* set expandtab tabstop=8 softtabstop=3 shiftwidth=3 > endif setlocal is more appropriate than set for this kind of thing (because it only applies to the current buffer), although I suppose it doesn't matter that much in practice.
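For completeness, the buffer-local variant being suggested (an untested sketch; identical settings, only `set` swapped for `setlocal` so buffers outside the Mesa tree keep their own options):

```vim
if has("autocmd")
  au BufNewFile,BufRead */mesa/* setlocal expandtab tabstop=8 softtabstop=3 shiftwidth=3
endif
```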
Re: [Mesa-dev] [PATCH 08/16] st/nine: Change x86 FPU Control word on device creation as on wined3d and windows
On 25 April 2015 at 09:58, Axel Davy <axel.d...@ens.fr> wrote: > static void nine_setup() > { > fpu_control_t c; > _FPU_GETCW(c); > /* clear the control word */ > c &= _FPU_RESERVED; > /* enable interrupts (d3d9 doc, wine tests) */ > c |= _FPU_MASK_IM | _FPU_MASK_DM | _FPU_MASK_ZM | _FPU_MASK_OM | _FPU_MASK_UM | _FPU_MASK_PM; > _FPU_SETCW(c); > } The comment is misleading, because the code does more than that. (Hint: What happens to rounding and precision control?) But really, please either explicitly tell people they can't look at Wine (D3D related) source if they want to contribute to st/nine, or just license st/nine under LGPL as well.
Re: [Mesa-dev] [PATCH 08/16] st/nine: Change x86 FPU Control word on device creation as on wined3d and windows
On 24 April 2015 at 22:09, Axel Davy <axel.d...@ens.fr> wrote: > +static void nine_setup_fpu(void) > +{ > +#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)) > +WORD cw; > +__asm__ volatile ("fnstcw %0" : "=m" (cw)); > +cw = (cw & ~0xf3f) | 0x3f; > +__asm__ volatile ("fldcw %0" : : "m" (cw)); > +#else > +WARN_ONCE("FPU setup not supported on non-x86 platforms\n"); > +#endif > +} > + This is once again similar enough to the corresponding Wine source that I feel the need to remind you, this time more strongly, that Wine is licensed under LGPL 2.1+. (For the curious, (warning, LGPL) https://source.winehq.org/git/wine.git/blob/25f0606e84bef7d60ea5c681d19b368660cab8e3:/dlls/d3d9/device.c#l3604) Besides, proper Gallium style would have been to use PIPE_CC_GCC and PIPE_ARCH_X86/PIPE_ARCH_X86_64.
Re: [Mesa-dev] [Piglit] DSA for core profile only? (was Re: [PATCH 2/2] arb_direct_state_access: New test for GetCompressedTextureImage.)
On 18 February 2015 at 00:46, Ilia Mirkin <imir...@alum.mit.edu> wrote: > Wine maybe? (They're compat-only for now, although some work is being done > to support core, but that might only be for their D3D10+ layer.) The current plan for Wine is just to add support for core profiles. There may be a case for hardware that can't do core profiles, but I somewhat doubt that any performance difference from DSA will be large enough to justify the effort.
Re: [Mesa-dev] [PATCH 07/43] st/nine: Fix use of D3DSP_NOSWIZZLE
On 30 January 2015 at 21:34, Axel Davy <axel.d...@ens.fr> wrote: > @@ -2778,7 +2778,7 @@ sm1_parse_get_param(struct shader_translator *tx, DWORD *reg, DWORD *rel) > *rel = (1 << 31) | > ((D3DSPR_ADDR << D3DSP_REGTYPE_SHIFT2) & D3DSP_REGTYPE_MASK2) | > ((D3DSPR_ADDR << D3DSP_REGTYPE_SHIFT) & D3DSP_REGTYPE_MASK) | > -(D3DSP_NOSWIZZLE << D3DSP_SWIZZLE_SHIFT); > +D3DSP_NOSWIZZLE; > else > *rel = TOKEN_NEXT(tx); > } I can't help but notice a certain amount of similarity between the naming and structuring of the SM1 parser here and the one in Wine. I assume that's because it's the one and only logical way to write such a parser. Just in case though, I'd like to explicitly state that we'd welcome anyone reusing Wine's code, provided the terms of the LGPL are respected.
Re: [Mesa-dev] [PATCH v3 0/9] Gallium Nine
On 19 November 2014 00:26, Emil Velikov <emil.l.veli...@gmail.com> wrote: > From a quick look at MSDN it seems to me that going the DDI (like) route > would require substantial rework on the wine side. How much contribution > from wine can we expect? Would you have the chance to help with > design/coding, or would you be (no disrespect here) limited to answering > questions? Yeah, it would require a significant amount of work on the wined3d side as well, which is part of the reason we'd like to make sure that work can't be avoided. > I'm sure that not everything in mesa is perfect yet I've not seen (m)any > bug reports from you guys. If/when you guys spot something broken/extremely > slow please bugzilla it or send an email to the ML. I think for a part that's because I prefer sending patches when time allows. E.g. around 2010-2011 I sent a couple of patches, mostly for making the Wine tests pass on r600c and later r600g. These days the amount of time I can spend on Mesa is more limited, but at least r600g generally works pretty well for me. I know Stefan regularly runs tests on r300g and r600g and sends bug reports when something breaks. > Speaking of feedback, please consider using GLX_MESA_query_renderer. It > should help you (at least a bit) with the massive vendor/device/video_memory > tables that you currently have. It's already on our (unfortunately fairly large) todo-list.
Re: [Mesa-dev] [PATCH v3 0/9] Gallium Nine
On 19 November 2014 01:55, Marek Olšák <mar...@gmail.com> wrote: >>> Before we start discussing what we can do about the OpenGL API overhead, >>> we must get rid of the on-demand shader compilation. It's unacceptable to >>> compile shaders when we should be rendering. This is one of the things >>> that Nine fixes. >> I assume Wine does that because there can be several slightly-different >> variants of the same shader for various reasons. > Well then we'll have to figure out how to reduce that number to 1. Pretty much. Although as Stefan mentioned there are likely going to be cases where it can't be avoided.
Re: [Mesa-dev] [PATCH v3 0/9] Gallium Nine
On 17 November 2014 21:05, Emil Velikov <emil.l.veli...@gmail.com> wrote: > - GL extensions > I feel that it's a bit too much to shoot the project down, because it does > not introduce GL extensions that will be useful. To clarify, that's not what I said. It's mostly just that I'd like to see some actual evidence for the (implicit) claim that the performance difference is largely due to inherent OpenGL API overhead. > Considering the interface note able, would you say that any new > implementation towards handling D3D9 in wine is acceptable? If anything, it would have to be an interface approximately on the level of the DDI, like Jose mentioned. > Can we work together so that both projects benefit from this effort? I like to think we've always had good relations with Mesa, even if we don't always agree on everything. In this specific case, I'm afraid we just have a pretty fundamental difference of opinion with the st/nine developers on what the right approach is. Feel free to send me an e-mail if you have Wine related questions / requests in any case though.
Re: [Mesa-dev] [PATCH v3 0/9] Gallium Nine
On 14 November 2014 16:21, Ilia Mirkin <imir...@alum.mit.edu> wrote: > To the best of my knowledge, wine has no intent on merging anything related > to nine/st. That's a bit broader than I'd put it, but yes, in its current form this is not something we'd merge.
Re: [Mesa-dev] [PATCH v3 0/9] Gallium Nine
On 14 November 2014 16:36, Ilia Mirkin <imir...@alum.mit.edu> wrote: > Is there a different form that you believe would be more likely to be merged? The main issue is probably that we'd really like to avoid having two parallel implementations of the high-level d3d9 stuff. I.e., anything dealing with (d3d9) devices, stateblocks, swapchains, etc. We'd potentially be open to using something closer to the Gallium interface instead of OpenGL on the backend in wined3d. In that scenario wined3d would essentially be the statetracker. The main issue with that approach has always been that the Gallium statetracker/driver interface isn't meant to be stable, and neither is the internal interface between wined3d and e.g. d3d9. (So it wouldn't help to e.g. move wined3d into the Mesa tree either.) Another consideration is that while the Gallium interface is a better match than OpenGL for Direct3D in some places, I'm not necessarily convinced that that's something that couldn't be fixed with appropriate GL extensions. To give an example, it's possible that translating D3D bytecode to TGSI instead of GLSL ends up with better shader code for the hardware. Unfortunately that kind of analysis is completely missing as far as I'm aware, but if that were the case, it would probably be fixable by making some improvements to the GLSL compiler. If that's not possible for some reason we could consider adding an extension for authoring shaders in TGSI instead of GLSL, and so on. I guess the basic point is that replacing OpenGL is a pretty big hammer, that would need corresponding amounts of analysis and justification.
Re: [Mesa-dev] [PATCH v3 0/9] Gallium Nine
On 14 November 2014 17:37, Ilia Mirkin imir...@alum.mit.edu wrote:
> Dave Airlie's virgl work is creating a gallium driver which actually uses OpenGL for hardware. I'm not sure how far he is, but I believe he has enough for GL3. This could be a way forward towards *only* using gallium (since otherwise you'd still have to have an OpenGL-based backend for the hw/platforms that don't have gallium drivers). However gallium will never support fixed-function hardware, so that may still not work for you.

Fixed-function hardware is becoming less and less relevant, but on the other hand we try to avoid breaking things that currently work. But yes, that's certainly something where it will be interesting to see how it turns out.

>> Another consideration is that while the Gallium interface is a better match than OpenGL for Direct3D in some places, I'm not necessarily convinced that that's something that couldn't be fixed with appropriate GL extensions. To give an example, it's possible that translating D3D bytecode to TGSI instead of GLSL ends up with better shader code for the hardware. Unfortunately that kind of analysis is completely missing as far as I'm aware, but if that were the case, it would probably be fixable by making some improvements to the GLSL compiler. If that's not possible for some reason we could consider adding an extension for authoring shaders in TGSI instead of GLSL, and so on. I guess the basic point is that replacing OpenGL is a pretty big hammer, that would need corresponding amounts of analysis and justification.
> While I don't have this justification, I always just assumed this was due to mismatches between how d3d wanted to do things and how OpenGL let you do things, so you ended up having to do some fairly heavy-handed things in OpenGL solely due to the silliness of the API.

Well yes, but the issues tend to be things like those solved by ARB_clip_control, ARB_vertex_array_bgra, ARB_provoking_vertex, etc.
> Let's say that all such things could be identified and extensions created for them; you'd still end up effectively managing 2 backends -- one that assumes that the various d3d-helper extensions are there, and one that doesn't.

Yes, but that's much more limited in scope than replacing all of OpenGL.

> I strongly doubt that the performance increases are due to better d3d9 bytecode -> TGSI conversion than d3d9 -> glsl -> tgsi conversion -- most serious backends (r600, radeonsi, nouveau) have optimizing compilers that should take care of such issues.

It was just an example, but at least in the past I've seen for example the translation for D3D cnd and cmp result in pretty sub-optimal code in r600g. In GLSL 1.30 and up mix() with a bool argument could perhaps make it easier for the driver to end up with something reasonable. But not knowing where the actual differences/advantages are is a large part of what makes it hard to discuss st/nine in concrete terms from a Wine perspective.

> Anyways, from your comments it sounds like the only way forward, given the current capabilities of nine/st, would be to create some sort of out-of-tree solution that plugs into wine, providing native d3d9.dll or whatever it's called. That way you guys aren't stuck maintaining 2 backends, and people can get improved performance on d3d9 games on linux. Henri, if you take the fact that people want to use nine/st in its ~current form on linux as a given, is there a different, simpler approach that I'm overlooking?

Probably not. For what it's worth, while I think the approach of doing the analysis mentioned above will ultimately have better results both for Wine and other GL applications, I realise very well that that's real work and not necessarily a lot of fun.
Re: [Mesa-dev] [PATCH v3 0/9] Gallium Nine
On 14 November 2014 17:42, Jose Fonseca jfons...@vmware.com wrote:
> [...]
>> We'd potentially be open to using something closer to the Gallium interface instead of OpenGL on the backend in wined3d. In that scenario wined3d would essentially be the statetracker. The main issue with that approach has always been that the Gallium statetracker/driver interface isn't meant to be stable, and neither is the internal interface between wined3d and e.g. d3d9.
> Yes, I don't recommend gallium for that. It sounds like you want to design a WINE D3D9 DDI pretty much along the lines of the WDDM D3D9 DDI: http://msdn.microsoft.com/en-us/library/windows/hardware/ff552927(v=vs.85).aspx
> Basically, the runtime is one, but there would be two implementations of that DDI. The runtime would do validation, keep a copy of the current state for application state queries, etc.

Yeah, essentially. In a way we already have that kind of interface, but it's wined3d internal instead of a proper API. And of course we'll want d3d10/11 as well at some point.
Re: [Mesa-dev] [PATCH v3 0/9] Gallium Nine
On 14 November 2014 17:52, Axel Davy axel.d...@ens.fr wrote:
> Second, d3d9 as a gallium state tracker seems much easier than d3d9 on OpenGL. As for me, I started contributing only a few months ago, and was able to implement a lot of things quite easily, for example:
> . Respect the number of backbuffers asked for by the app (as far as I know wine doesn't support >= 2 and behaves like 1)
> . Support the render-ahead d3d9 behaviour (d3d9 doesn't have triple buffering like OpenGL can have)
> . wine seems to have a lot of issues with stuttering, etc. We have control of throttling and vsync, and thus don't have any particular issue there

Most of the stuttering I'm aware of is GLSL compiler related.

> . We have very good DRI_PRIME support (better than what GLX has currently).
> The fact that nine was developed so fast by few devs shows well that it was easier.

I don't want to sound overly negative, but I'm afraid that what you're seeing is mostly just the first 80% of any project being a lot easier than the last 1% or so.
Re: [Mesa-dev] Suboptimal code generation
On 14 November 2014 18:50, Ilia Mirkin imir...@alum.mit.edu wrote:
> I can't speak for the radeon guys, but I know I sure would love to see any reports of poor code being generated by nouveau in response to legitimate-seeming TGSI (or GLSL). In some cases, a simple optimization can be added to take care of it, and I'd definitely appreciate the extra pair of eyeballs on driver-generated code :) The report can be as simple as "here is the TGSI snippet, take a look at how crappy the code it generates is". At least for nouveau, I can feed that directly into a compiler that can target any of the relevant backends. [Note, r600g didn't have an optimizer enabled until ~1y ago; not sure if your analysis was with or without sb.]

It was with sb, but probably before TGSI got FSLT/FSGE/etc. For reference, what currently happens for r600g is something like this:

D3D:  cnd r[0], r[0].w, c[1], c[2]
GLSL: R0.xyzw = (R0.w > 0.5 ? ps_c[1].xyzw : ps_c[2].xyzw);
TGSI: FSLT TEMP[0].x, IMM[0], TEMP[0]
      UIF TEMP[0] :0
         MOV TEMP[0], CONST[1]
      ELSE :0
         MOV TEMP[0], CONST[2]
      ENDIF
R600: SETGE_DX10 T0.x, 0.5, T0.x
      CNDE_INT R0.x, T0.x, KC0[1].x, KC0[2].x
      CNDE_INT R0.y, T0.x, KC0[1].y, KC0[2].y
      CNDE_INT R0.z, T0.x, KC0[1].z, KC0[2].z
      CNDE_INT R0.w, T0.x, KC0[1].w, KC0[2].w

While ideally that would just be 4 CNDGE's, that's better than what I remember. IIRC there used to be a bunch of int/float conversions as well.
Re: [Mesa-dev] Suboptimal code generation
On 14 November 2014 20:41, Roland Scheidegger srol...@vmware.com wrote:
> That looks quite ok to me. Slightly suboptimal maybe, but quite reasonable - you can't really expect optimal code always. (With the proposal to nuke cnd from tgsi though you'd just generate the same in any case, probably.) I suspect the bunch of int/float conversions went away when the glsl-to-tgsi translation switched the comparison operations to integer ones on hardware which supports integers (a year ago or so). I know this made some tgsi look a bit nicer, but I don't know the effects this had on backends (which also probably learned some new tricks since then).

Yeah, pretty much. I haven't seen anything particularly bad recently, but that could just be because I haven't been actively looking.
Re: [Mesa-dev] [RFC PATCH 00/16] A new IR for Mesa
I'd like to say up front that while I could imagine that perhaps some of my comments on radeonsi could be perceived as harsh, it's not my intention to pick on radeonsi or anyone working on radeonsi in particular. I just think radeonsi / r600g is probably the best comparison inside Mesa for a driver done with and without a hard LLVM dependency.

I think it's perhaps also useful to point out that my comments in this thread are at least in part advice from a Wine developer, where I think we've had our fair share of experience with external dependencies, rather than in my much more limited role as a Mesa developer. We also briefly evaluated using LLVM for vbscript, jscript and a HLSL compiler in Wine a couple of years ago, but decided it wasn't worth it.

On 28 August 2014 05:21, Michel Dänzer mic...@daenzer.net wrote:
>> Sure, it's not impossible, but is that really the kind of process you want users to go through when bisecting a regression?
> I appreciate your theoretical concern, but in practice, people don't seem to have trouble bisecting radeonsi regressions in general.

I suspect you may be getting some selection bias there. As far as Wine users are concerned, we certainly seem to have more r600g users than radeonsi ones. For Wine developers that comparison is even worse; as far as I'm aware none of the regular developers regularly develop on radeonsi. I've seen a couple of more casual developers try, but I suspect they essentially gave up once they realized how much work would be required to make the Wine tests pass on radeonsi.

>>> Without LLVM, I'm not sure there would be a driver you could avoid. :)
>> R600g didn't really exist either, and that one seems to have worked out fine. I think in a large part because of work done by Jerome and Dave in the early days, but regardless.
>> From what I've seen from SI, I don't think radeonsi needed to be a separate driver to start with, and while its ISA is certainly different from R600-Cayman, it doesn't particularly strike me as much harder to work with.
> That's getting off-topic, but most of the code that can be shared between radeonsi and r600g is shared now.

I've seen Marek in particular put a lot of effort into that, yeah. I just think that effort could have been avoided. But my point was mostly that while I'd estimate most of the work in radeonsi to be in supporting the new ISA, I don't think that work would have been considerably harder than the work to support the R600-Cayman ISA was. And by implication, that I seriously doubt using LLVM there really saved any effort at all. Perhaps more concretely, I think the r600-sb backend works at least as well as the r600-llvm one, and not for lack of effort put into the latter.

>> Back to the more immediate topic though, I think that on occasion the discussion is framed as "Is there any reason using LLVM IR wouldn't work?", while it would perhaps be more appropriate to think of it as "Would using LLVM IR provide enough advantages to justify adding a LLVM dependency to core Mesa?".
> Unless you can show me anyone who would prefer swrast or softpipe over llvmpipe for software rendering tests, I'd argue that there effectively

As a user, I currently have no reason to build any software renderer at all. But yes, as a developer I generally test against softpipe because it's easier to work with. The bottom line is that today I can build r600g without worrying about LLVM, and have a driver that I can use for Wine development. So yes, using LLVM in core Mesa would add an extra dependency.
Re: [Mesa-dev] [RFC PATCH 00/16] A new IR for Mesa
On 21 August 2014 04:56, Michel Dänzer mic...@daenzer.net wrote:
> On 21.08.2014 04:29, Henri Verbeet wrote:
>> For whatever it's worth, I have been avoiding radeonsi in part because of the LLVM dependency. Some of the other issues already mentioned aside, I also think it makes it just painful to do bisects over moderate/longer periods of time.
> More painful, sure, but not too bad IME. In particular, if you know the regression is in Mesa, you can always use a stable release of LLVM for the bisect. You only need to change the --with-llvm-prefix= parameter to Mesa's configure for that. Of course, it could still be mildly painful if you need to go so far back that the current stable LLVM release wasn't supported yet. But how often does that happen? Very rarely for me.

Sure, it's not impossible, but is that really the kind of process you want users to go through when bisecting a regression? Perhaps throw in building 32-bit versions of both Mesa and LLVM on 64-bit as well if they want to run 32-bit applications.

> Without LLVM, I'm not sure there would be a driver you could avoid. :)

R600g didn't really exist either, and that one seems to have worked out fine. I think in a large part because of work done by Jerome and Dave in the early days, but regardless. From what I've seen from SI, I don't think radeonsi needed to be a separate driver to start with, and while its ISA is certainly different from R600-Cayman, it doesn't particularly strike me as much harder to work with.

Back to the more immediate topic though, I think that on occasion the discussion is framed as "Is there any reason using LLVM IR wouldn't work?", while it would perhaps be more appropriate to think of it as "Would using LLVM IR provide enough advantages to justify adding a LLVM dependency to core Mesa?".
Re: [Mesa-dev] [RFC PATCH 00/16] A new IR for Mesa
On 20 August 2014 20:13, Kenneth Graunke kenn...@whitecape.org wrote:
> I've also heard stories from friends of mine who use radeonsi that they couldn't get new GL features or compiler fixes unless they upgrade both Mesa /and/ LLVM, and that LLVM was usually either not released or not available in their distribution for a few months.

For whatever it's worth, I have been avoiding radeonsi in part because of the LLVM dependency. Some of the other issues already mentioned aside, I also think it makes it just painful to do bisects over moderate/longer periods of time. I'm sure AMD carefully considered the tradeoff, and that it's worth it for them, but purely as a user/downstream I'd say using LLVM for the radeonsi compiler was a mistake.
Re: [Mesa-dev] [PATCH 1/2] i965/fs: Optimize conditional discards.
On 19 August 2014 02:50, Kenneth Graunke kenn...@whitecape.org wrote:
> +/* Returns a conditional modifier that negates the condition. */
> +enum brw_conditional_mod
> +brw_negate_cmod(uint32_t cmod)
> +{
> +   switch (cmod) {
> +   case BRW_CONDITIONAL_Z:
> +      return BRW_CONDITIONAL_NZ;
> +   case BRW_CONDITIONAL_NZ:
> +      return BRW_CONDITIONAL_Z;
> +   case BRW_CONDITIONAL_G:
> +      return BRW_CONDITIONAL_LE;
> +   case BRW_CONDITIONAL_GE:
> +      return BRW_CONDITIONAL_L;
> +   case BRW_CONDITIONAL_L:
> +      return BRW_CONDITIONAL_GE;
> +   case BRW_CONDITIONAL_LE:
> +      return BRW_CONDITIONAL_G;
> +   default:
> +      return ~0;
> +   }
> +}

I suspect you may not care because GLSL seems to leave INF and NaN behaviour mostly undefined, but note that strictly speaking e.g. x < x isn't equivalent to !(x >= x) when x is NaN.
Re: [Mesa-dev] [PATCH 1/3] gallium: Add PIPE_COMPUTE_CAP_MAX_CONSTANT_BUFFER_SIZE
On 24 July 2014 16:55, Marek Olšák mar...@gmail.com wrote:
> the hardware supports 16 constant buffers. I'm not sure what the constant registers are, but they cannot have anything to do with the

Probably the old CFILE constants, of which there actually were only 256, and which IIRC were removed since Evergreen.
[Mesa-dev] [PATCH 1/2] st/mesa: Handle disabled draw buffers in st_Clear().
This fixes piglit arb_draw_buffers-rt0_none on AMD CEDAR. No piglit regressions on the same hardware.

Signed-off-by: Henri Verbeet hverb...@gmail.com
---
 src/mesa/state_tracker/st_cb_clear.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/src/mesa/state_tracker/st_cb_clear.c b/src/mesa/state_tracker/st_cb_clear.c
index 887e58b..e7345a2 100644
--- a/src/mesa/state_tracker/st_cb_clear.c
+++ b/src/mesa/state_tracker/st_cb_clear.c
@@ -430,14 +430,20 @@ st_Clear(struct gl_context *ctx, GLbitfield mask)
    st_validate_state( st );

    if (mask & BUFFER_BITS_COLOR) {
+      unsigned int color_idx = ~0u;
+
       for (i = 0; i < ctx->DrawBuffer->_NumColorDrawBuffers; i++) {
-         GLuint b = ctx->DrawBuffer->_ColorDrawBufferIndexes[i];
+         gl_buffer_index b = ctx->DrawBuffer->_ColorDrawBufferIndexes[i];
+
+         if (b == -1)
+            continue;
+         ++color_idx;

          if (mask & (1 << b)) {
             struct gl_renderbuffer *rb = ctx->DrawBuffer->Attachment[b].Renderbuffer;
             struct st_renderbuffer *strb = st_renderbuffer(rb);
-            int colormask_index = ctx->Extensions.EXT_draw_buffers2 ? i : 0;
+            int colormask_index = ctx->Extensions.EXT_draw_buffers2 ? color_idx : 0;

             if (!strb || !strb->surface)
                continue;
@@ -447,9 +453,9 @@ st_Clear(struct gl_context *ctx, GLbitfield mask)

             if (is_scissor_enabled(ctx, rb) ||
                 is_color_masked(ctx, colormask_index))
-               quad_buffers |= PIPE_CLEAR_COLOR0 << i;
+               quad_buffers |= PIPE_CLEAR_COLOR0 << color_idx;
             else
-               clear_buffers |= PIPE_CLEAR_COLOR0 << i;
+               clear_buffers |= PIPE_CLEAR_COLOR0 << color_idx;
          }
       }
    }
--
1.7.10.4
[Mesa-dev] [PATCH 2/2] st/mesa: Only use idx after validating it in st_manager_add_color_renderbuffer().
In particular, we don't want it to be -1. In practice this is probably unlikely to be an issue, since Attachment[-1] should still be a valid memory location, and the code only reads it. No piglit regressions on AMD CEDAR.

Signed-off-by: Henri Verbeet hverb...@gmail.com
---
 src/mesa/state_tracker/st_manager.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/mesa/state_tracker/st_manager.c b/src/mesa/state_tracker/st_manager.c
index 8158450..26bf37e 100644
--- a/src/mesa/state_tracker/st_manager.c
+++ b/src/mesa/state_tracker/st_manager.c
@@ -834,9 +834,6 @@ st_manager_add_color_renderbuffer(struct st_context *st,
    if (!stfb)
       return FALSE;

-   if (stfb->Base.Attachment[idx].Renderbuffer)
-      return TRUE;
-
    switch (idx) {
    case BUFFER_FRONT_LEFT:
    case BUFFER_BACK_LEFT:
@@ -848,6 +845,9 @@ st_manager_add_color_renderbuffer(struct st_context *st,
       break;
    }

+   if (stfb->Base.Attachment[idx].Renderbuffer)
+      return TRUE;
+
    if (!st_framebuffer_add_renderbuffer(stfb, idx))
       return FALSE;
--
1.7.10.4
Re: [Mesa-dev] [PATCH 1/2] st/mesa: Handle disabled draw buffers in st_Clear().
On 25 December 2013 14:17, Marek Olšák mar...@gmail.com wrote:
> This looks good, but it's only papering over the real problem, which is that st/mesa doesn't bind NULL colorbuffers and skips them instead. For example, if DRAWBUFFER0 is NONE, st/mesa binds DRAWBUFFER1 to cb[0], but then all writes to gl_FragData[1] are broken, because the draw buffer has been moved to cb[0] and the shader doesn't know about it.

That's a good point, although I think at least the first part of this patch would be the same regardless. I.e., it would make the admittedly not very pretty color_idx handling go away, but b being -1 would still need to be explicitly handled.

> I think st/mesa should bind NULL colorbuffers and drivers should check for NULL colorbuffers and disable the writes accordingly. I think most drivers don't check for NULL colorbuffers, but at least fixing r600g and radeonsi should be very easy by just looking for NULL pointer dereferences and disabling colorbuffers by setting CB_COLORi_INFO.FORMAT=COLOR_INVALID.

Sounds good to me, although I can't realistically promise making it happen in the short term. I originally wrote these patches in October or so, and only got around to submitting them now.
Re: [Mesa-dev] glBlitFramebuffer and sRGB vs piglit
On 16 December 2013 01:37, Marek Olšák mar...@gmail.com wrote:
> Hi everybody,
>
> There is an inconsistency in the piglit glBlitFramebuffer tests. If both src and dst are sRGB, piglit expects this from glBlitFramebuffer:
>
>    if (dst.num_samples == 1 && src.num_samples > 1) {
>       enable the sRGB->linear conversion for src reads and the linear->sRGB conversion for dst writes;
>    } else {
>       disable the sRGB conversions;
>    }
>
> Is this the intended behavior? Regardless of the GL spec, what behavior do applications expect? (if somebody knows)

As far as Wine is concerned, the assumption is that FBO blits don't do colorspace conversion. The current piglit / Mesa behaviour is consistent with that. Doing filtering in sRGB color space (which is pretty much what the first case you mentioned amounts to) also makes sense, although for the case where you have an sRGB internal format with SKIP_DECODE (from EXT_texture_sRGB_decode) it's not necessarily correct. The above is pretty much the de facto standard pre-4.4, and even if the spec perhaps doesn't explicitly specify that behaviour, I think it strongly hints in that direction.

> Also most of the piglit BlitFramebuffer tests with sRGB formats expect the opposite of what GL 4.4 specifies, and if we implemented the GL 4.4 behavior, all those tests would fail. For reference, GL 4.4 requires this:
> - if src.format is sRGB, do the sRGB->linear conversion for reads. (I think it can only be disabled with texture views.)
> - if dst.format is sRGB and GL_FRAMEBUFFER_SRGB is enabled, do the linear->sRGB conversion for dst writes.
> st/mesa does this:
> - Always disable the sRGB conversions.
> I think the older GL specs specify a different behavior for sRGB blits, which roughly corresponds to how st/mesa does it.

Yeah, it's a pretty messy situation.

> So, do you have any answers to the 2 questions above?
If I were to guess at intention, it seems that in 4.4 you're supposed to use texture views instead of sRGB decode for enabling / disabling sRGB read conversion, and in that case you would have full control over enabling both read and write sRGB conversion for blits, and implicitly over in what color space filtering happens. I do think 4.4 breaks compatibility with earlier versions here, if not officially then at least de facto. I also think that without texture views the 4.4 behaviour doesn't make nearly as much sense.

My vote would be to keep the current behaviour for anything pre-4.4. Where that gets messy is probably compatibility contexts, but my understanding is that Mesa has no intention of (ever?) implementing those.
[Mesa-dev] [PATCH 1/1] i915: Add support for gl_FragData[0] reads.
Similar to 556a47a2621073185be83a0a721a8ba93392bedb, without this reading from gl_FragData[0] would cause a software fallback.

Bugzilla: https://bugs.winehq.org/show_bug.cgi?id=33964
Signed-off-by: Henri Verbeet hverb...@gmail.com
Cc: 10.0 9.2 9.1 mesa-sta...@lists.freedesktop.org
---
 src/mesa/drivers/dri/i915/i915_fragprog.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/mesa/drivers/dri/i915/i915_fragprog.c b/src/mesa/drivers/dri/i915/i915_fragprog.c
index dff4b9f..34df6fc 100644
--- a/src/mesa/drivers/dri/i915/i915_fragprog.c
+++ b/src/mesa/drivers/dri/i915/i915_fragprog.c
@@ -146,6 +146,7 @@ src_vector(struct i915_fragment_program *p,
    case PROGRAM_OUTPUT:
       switch (source->Index) {
       case FRAG_RESULT_COLOR:
+      case FRAG_RESULT_DATA0:
          src = UREG(REG_TYPE_OC, 0);
          break;
       case FRAG_RESULT_DEPTH:
--
1.7.10.4
Re: [Mesa-dev] context sharing of framebuffer objects
On 30 September 2013 02:18, Dave Airlie airl...@gmail.com wrote:
> So this led me to look at the spec and the mesa code, and I noticed it appears at some point, maybe around 3.1, that FBOs are no longer considered shared objects, at least in core profile, but mesa always seems to share them. Just wondering if someone can confirm I'm reading things correctly, and if so I might try and do a piglit test and a patch.

AFAIK the only FBOs that can be shared are ones created through EXT_fbo. (Specifically, see issue 10 in the ARB_fbo spec, and Appendix D in the GL 3.0 spec.)
Re: [Mesa-dev] [PATCH] glx: Initialize OpenGL version to 1.0
On 3 September 2013 13:19, Rico Schüller kgbric...@web.de wrote:
> So yes, we agree here, the version number needs to be fixed. The simplest one is to just change the number. I'm fine with it. I have no strong opinion about it. Though I think it should be consistent across all initialization occurrences (in dri_common.c/dri2_glx.c/drisw_glx.c).

FWIW, I think we'd like this patch marked for stable as well.
Re: [Mesa-dev] [PATCH] r600g: fix color exports when we have no CBs
On 28 August 2013 12:17, Marek Olšák mar...@gmail.com wrote:
> Yeah, st/mesa also compiles shaders on the first use, so we've got 3 places to fix: Wine, st/mesa, the driver.

For what it's worth, while Wine definitely has some room for improvement in this regard, in some cases we don't get the shaders any earlier from the application either.
[Mesa-dev] [PATCH 1/1] r600g: Implement the new float comparison instructions for Cayman as well.
I assume this should have been part of commit 7727fbb7c5d64348994bce6682e681d6181a91e9. This (obviously) fixes a lot of tests.

Signed-off-by: Henri Verbeet hverb...@gmail.com
---
 src/gallium/drivers/r600/r600_shader.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/gallium/drivers/r600/r600_shader.c b/src/gallium/drivers/r600/r600_shader.c
index fb766c4..300b5c4 100644
--- a/src/gallium/drivers/r600/r600_shader.c
+++ b/src/gallium/drivers/r600/r600_shader.c
@@ -6128,10 +6128,10 @@ static struct r600_shader_tgsi_instruction cm_shader_tgsi_instruction[] = {
 	{106,			0, ALU_OP0_NOP, tgsi_unsupported},
 	{TGSI_OPCODE_NOP,	0, ALU_OP0_NOP, tgsi_unsupported},
 	/* gap */
-	{108,			0, ALU_OP0_NOP, tgsi_unsupported},
-	{109,			0, ALU_OP0_NOP, tgsi_unsupported},
-	{110,			0, ALU_OP0_NOP, tgsi_unsupported},
-	{111,			0, ALU_OP0_NOP, tgsi_unsupported},
+	{TGSI_OPCODE_FSEQ,	0, ALU_OP2_SETE_DX10, tgsi_op2},
+	{TGSI_OPCODE_FSGE,	0, ALU_OP2_SETGE_DX10, tgsi_op2},
+	{TGSI_OPCODE_FSLT,	0, ALU_OP2_SETGT_DX10, tgsi_op2_swap},
+	{TGSI_OPCODE_FSNE,	0, ALU_OP2_SETNE_DX10, tgsi_op2_swap},
 	{TGSI_OPCODE_NRM4,	0, ALU_OP0_NOP, tgsi_unsupported},
 	{TGSI_OPCODE_CALLNZ,	0, ALU_OP0_NOP, tgsi_unsupported},
 	/* gap */
--
1.7.10.4
Re: [Mesa-dev] [PATCH 1/1] mesa: Properly set the fog scale (gl_Fog.scale) to +INF when fog start and end are equal.
On 22 August 2013 00:31, Ian Romanick i...@freedesktop.org wrote:
> Section 2.1.1 (Floating-point computation) says:
>
>     "The result of providing a value that is not a floating-point number to such a command is unspecified, but must not lead to GL interruption or termination. In IEEE arithmetic, for example, providing a negative zero or a denormalized number to a GL command yields predictable results, while providing a NaN or an infinity yields unspecified results."
>
> I /think/ this qualifies for the "unspecified results" clause. An argument could probably be made the other way, however.

Well, the application doesn't directly provide the +INF here, so in that sense I don't think this text really applies. The bit right below that about implied divisions by zero probably does though. Still, I'd argue that +INF or perhaps FLT_MAX would be a more reasonable value than 1.0f, and more importantly that using gl_Fog.scale should give (approximately) the same result as calculating 1.0 / (gl_Fog.end - gl_Fog.start) in the shader.

> Have you tried it on older GPUs? r300? i915?

The patch fixes the relevant Wine D3D tests on AMD RS480 (on top of 9.1.6 anyway; last time I tried, r300g just died very early with master. I hope to look into that, eventually.), and NVIDIA NV43. IIRC neither of those has proper IEEE, and just flushes things to +/-FLT_MAX. I don't have full piglit runs for those, but could get them if needed. I'm afraid I don't have much else in terms of older hardware that I can run tests on. I have an i915 and a nv18 or so, but can't run tests on either of those at the moment.

> Could we get a simple piglit test case that reproduces the issue?

Sure. It might take a couple of days before I get to it though.
Re: [Mesa-dev] [PATCH] glsl: don't eliminate texcoords that can be set by GL_COORD_REPLACE
On 18 August 2013 05:23, Ian Romanick i...@freedesktop.org wrote:
> Since this also fixes an application, do you have any idea what could be done to make a piglit test to reproduce the failure? We have some folks writing piglit tests for us this summer, and this sounds like a good one for them. :)

The basic setup is to render point sprites with GL_COORD_REPLACE on, and then read gl_TexCoord[] in the fragment shader, but don't write it in the vertex shader.
Re: [Mesa-dev] [PATCH] glsl: don't eliminate texcoords that can be set by GL_COORD_REPLACE
On 9 August 2013 22:40, Marek Olšák mar...@gmail.com wrote:
> Tested by examining generated TGSI shaders from piglit/glsl-routing.

This fixes the relevant Wine d3d9 test, thanks. No piglit changes on Cayman.

Reviewed-by: Henri Verbeet hverb...@gmail.com
Tested-by: Henri Verbeet hverb...@gmail.com
[Mesa-dev] [PATCH 1/1] mesa: Properly set the fog scale (gl_Fog.scale) to +INF when fog start and end are equal.
This was originally introduced by commit ba47aabc9868b410cdfe3bc8b6d25a44a598cba2, but unfortunately the commit message doesn't go into much detail about why +INF would be a problem here. I don't see anything in the spec that would allow 1.0f here.

A similar issue exists for STATE_FOG_PARAMS_OPTIMIZED, but allowing infinity there would potentially introduce NaNs where they shouldn't exist, depending on the values of fog end and the fog coord. Since STATE_FOG_PARAMS_OPTIMIZED is only used for fixed function (including ARB_fragment_program with fog option), and the calculation there probably isn't very stable to begin with when fog start and end are close together, it seems best to just leave it alone.

This fixes a couple of Wine D3D tests. No piglit changes on Cayman.

Signed-off-by: Henri Verbeet hverb...@gmail.com
---
 src/mesa/program/prog_statevars.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/src/mesa/program/prog_statevars.c b/src/mesa/program/prog_statevars.c
index f6073be..657c6e6 100644
--- a/src/mesa/program/prog_statevars.c
+++ b/src/mesa/program/prog_statevars.c
@@ -256,8 +256,7 @@ _mesa_fetch_state(struct gl_context *ctx, const gl_state_index state[],
       value[0] = ctx->Fog.Density;
       value[1] = ctx->Fog.Start;
       value[2] = ctx->Fog.End;
-      value[3] = (ctx->Fog.End == ctx->Fog.Start)
-         ? 1.0f : (GLfloat)(1.0 / (ctx->Fog.End - ctx->Fog.Start));
+      value[3] = (GLfloat)(1.0 / (ctx->Fog.End - ctx->Fog.Start));
       return;
    case STATE_CLIPPLANE:
       {
--
1.7.10.4
Re: [Mesa-dev] Direct3D 9 state tracker
On 22 July 2013 18:48, Stefan Dösinger stefandoesin...@gmail.com wrote: On 2013-07-22 15:39, Jose Fonseca wrote: It seems to me that this would be more useful if the state tracker targeted not the D3D9 API, but the WDDM D3D9 DDI [2]. Targeting the DDI would allow, e.g., to share more code with rest of WINE (the API-DDI runtime layer); Fwiw, Wine does not use Microsoft's DDI in any way. We use our own interface to abstract between d3d versions, which is a mix of d3d9 and d3d10, with some ddraw-specific extras. Yes, although the DDI would probably be something we could work with on the wined3d level. The current interface would indeed be a bit awkward for us, at least from the point of view of integrating it into Wine itself. If we were to use an API that's not OpenGL in Wine, and we got to choose, we'd probably prefer using Gallium, or something similar to it, directly from wined3d. Any credible long term solution would either need to work with everything from ddraw to d3d11, or at least be capable of being made to work for those. Note also that there are applications that mix e.g. ddraw and d3d9, or ddraw and OpenGL. Those all need to work as well. That's not to say I don't think this is a useful project, at the very least it helps people with Gallium drivers and the applications that work with this state tracker. It also probably gives a good indication of what's possible in terms of performance. However, at this point I don't see how it can turn into a broader, more long term solution for Wine. From our point of view though, and I'm pretty sure I've mentioned this before to Christoph, the more interesting question is where most of the performance difference comes from. I.e., if that's mostly something specific to Mesa's GL implementation, something in how we're using GL, something inherent in the GL API, or something else entirely. 
I'm sure the additional abstraction layer doesn't help performance much, but I'm not all that convinced that that's enough to explain the difference. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] Do we support front buffer rendering with EGL? Do we want to?
On 3 June 2013 22:16, Kenneth Graunke kenn...@whitecape.org wrote: I don't think we should implement front buffer rendering for EGL unless someone presents a compelling use case. In my mind, front buffer rendering is only something used historically...it has all kinds of caveats about synchronization, doesn't fit well into a world with compositing, and virtually everyone wants double buffering anyway so they can present perfect frames. I don't know about compelling, but for what it's worth, some applications running in Wine will use front buffer rendering. Not supporting front buffer rendering would make it more painful for us to support EGL. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] r600g: status of the r600-sb branch
On 19 April 2013 18:01, Vadim Girlin vadimgir...@gmail.com wrote: The choice of C++ (unlike in my previous branch that used C) was mostly driven by the fact that optimization algorithms usually deal with a lot of different complex data structures, containers, etc, and C++ allows to isolate implementation of all such things in separate and easily replaceable classes and concentrate on the logic, making the code more clean and readable. I'm sure it would be good fun to have a discussion about the relative merits of C and C++, though I think I've seen enough actual C++ that you're not going to convince me it's the better language. However, I don't think that should be the main consideration. It's probably more important to consider what current and potential new contributors prefer, and on Linux, particularly for the more low-level stuff, I suspect that pretty much means C. I haven't tried to keep it as a series of independent patches because during the development most changes were pretty intrusive and introduced new features, some parts were seriously reworked/rewritten more than one time, requiring changes in other parts, especially when intermediate representation of the code was changed. It was usually easier for me to simply fix the new regressions in the new code than to revert any changes and lose new features, so bisection wouldn't be very helpful anyway. That's why I didn't even try to keep the history. Anyway most of the code in the branch is new, so I don't think that the history of the patches that rewrite the same code few times during a development would make it more readable than simply reading the final code. I think I'm just going to disagree there. (But of course that's all just my personal opinion, which probably doesn't carry a lot of weight at the moment.) ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] r600g: status of the r600-sb branch
On 19 April 2013 16:48, Vadim Girlin vadimgir...@gmail.com wrote: In the previous status update I said that the r600-sb branch is not ready to be merged yet, but recently I've done some cleanups and reworks, and though I haven't finished everything that I planned initially, I think now it's in a better state and may be considered for merging. I'm interested to know if the people think that merging of the r600-sb branch makes sense at all. I'll try to explain here why it makes sense to me. Personally, I'd be in favour of merging this at some point. While I haven't exactly done extensive testing or benchmarking with the branch, the things I did try at least worked correctly, so I'd say that's a good start at least. I'm afraid I can't claim extensive review either, but I guess the most obvious things I don't like about it are that it's C++, and spread over a large number of 1000 line files. Similarly, I don't really see the point of having a header file for just about each .cpp file. One for private interfaces and one for the public interface should probably be plenty. I'm not quite sure how others feel about that, although I suspect I'm not alone in at least the preference of C over C++. I also suspect it would help if this was some kind of logical, bisectable series of patches instead of a single commit that adds 18k+ lines. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [RFC] GLX_MESA_query_renderer
On 12 March 2013 17:46, Ian Romanick i...@freedesktop.org wrote: Right... the extension also adds an attribute that can only be used with glXCreateContextAttribsARB. Yeah, all I was saying is that it probably wouldn't be too hard to word things along the lines of "If glXCreateContextAttribsARB() isn't available, GLX_RENDERER_ID_MESA goes away, and only one renderer is available / visible". Perhaps it's not worth it though. My thinking was that it will be very rare for multiple renderers to support the same GL versions and different extension strings... at least in a way that would cause apps to make different context creation decisions. I guess that makes sense in the very coarse "I need at least GL3" way. Part of the thinking is that it would force regularity in how the version is advertised. Otherwise everyone will have a different kind of string, and the currently annoying situation of parsing implementation dependent strings continues. Maybe GLX_RENDERER_VERSION_MESA should also be allowed with glXQueryRendererStringMESA? Yeah, I think that makes sense. I also based this on ISV feedback. Some just wanted to know what the hardware was, and others wanted to know that and who made the driver. I was really trying to get away from "just parse this random string" for as much of the API as possible. It seems like this should only make things easier for apps... should. In theory you could add a GL vendor ID similar to the PCI vendor ID, but then you'd have to allocate those globally, which would probably be annoying. So, yeah. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [RFC] GLX_MESA_query_renderer
On 2 March 2013 00:14, Ian Romanick i...@freedesktop.org wrote: I added some comments, but I think the extension is pretty much fine for at least Wine's purposes. GLX_ARB_create_context and GLX_ARB_create_context_profile are required. It's probably not a big deal since just about everyone implements these, but I think most of the extension could be implemented without these. There are also cases where more than one renderer may be available per display. For example, there is typically a hardware implementation and a software based implementation. There are cases where an application may want to pick one over the other. One such situation is when the software implementation supports more features than the hardware implementation. I think that makes sense in some cases (although the more common case may turn out to be setups like PRIME where you actually have two different hardware renderers and want to switch between them), but wouldn't you also want to look at the (GL) extension string before creating a context in such a case? I realize issue 9 resolves this as basically not worth the effort, but doesn't that then contradict the text above? (For Wine creating the GLX context is no big deal at this point since we already have that code anyway, but it seems like useful information for (new) applications that want to avoid that.) Additions to the OpenGL / WGL Specifications None. This specification is written for GLX. I think we'd like a WGL spec for wined3d, since it's written on top of Wine's WGL implementation instead of directly on top of GLX. If needed we could also solve that with a Wine internal extension, but we'd like to avoid those where possible. To obtain information about the available renderers for a particular display and screen, void glXQueryRendererIntegerMESA(Display *dpy, int screen, int renderer, int attribute, unsigned int *value); This returned a Bool above. 
I don't see the glXQueryCurrent*() functions specified at all, but I assume that will be added before the final version of the spec. GLX_RENDERER_VERSION_MESA 3 Major, minor, and patch level of the renderer implementation I guess the trade-off here is that it avoids having to parse version strings in the application, but on the other hand it leaves no room for things like the git sha1 or e.g. "beta" or "rc" that you sometimes see in version strings. That probably isn't a big deal for applications themselves, but it may be relevant when a version string is included in a bug report. The string returned for GLX_RENDERER_VENDOR_ID_MESA will have the same format as the string that would be returned by glGetString of GL_VENDOR. It may, however, have a different value. The string returned for GLX_RENDERER_DEVICE_ID_MESA will have the same format as the string that would be returned by glGetString of GL_RENDERER. It may, however, have a different value. But the GL_VENDOR and GL_RENDERER formats are implementation defined, so I'm not sure that wording it like this really adds much over just saying the format for these are implementation defined. 1) How should the difference between on-card and GART memory be exposed? UNRESOLVED. Somewhat related, dxgi / d3d10 distinguishes between DedicatedVideoMemory and SharedSystemMemory (and DedicatedSystemMemory). I'm not sure how much we really care, but I figured I'd at least mention it. 5) How can applications tell the difference between different hardware renderers for the same device? For example, whether the renderer is the open-source driver or the closed-source driver. RESOLVED. Assuming this extension is ever implemented outside Mesa, applications can query GLX_RENDERER_VENDOR_ID_MESA from glXQueryRendererStringMESA. This will almost certainly return different strings for open-source and closed-source drivers. For what it's worth, internally in wined3d we distinguish between the GL vendor and the hardware vendor. 
So you can have e.g. Mesa / AMD, fglrx / AMD or Apple / AMD for the same hardware. That's all derived from the VENDOR and RENDERER strings, so that approach is certainly possible, but on the other hand perhaps it also makes sense to explicitly make that distinction in the API itself. 6) What is the value of GLX_RENDERER_UNIFIED_MEMORY_ARCHITECTURE_MESA for software renderers? UNRESOLVED. Video (display) memory and texture memory is not unified for software implementations, so it seems reasonable for this to be False. Related to that, are e.g. GLX_RENDERER_VENDOR_ID_MESA, GLX_RENDERER_DEVICE_ID_MESA (integer versions for both) or GLX_RENDERER_VIDEO_MEMORY_MESA really meaningful for software renderers? ___ mesa-dev
Re: [Mesa-dev] r600g: status of my work on the shader optimization
All the Wine D3D tests now pass for me on Cayman (3931289ab5fc7c7e2ab46e6316e55adc19ec3cfc). I may be able to do some more testing later, and do e.g. a piglit run. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] r600g: status of my work on the shader optimization
Great, I'll do some testing again when I get the chance. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] r600g: status of my work on the shader optimization
On 24 February 2013 17:07, Vadim Girlin vadimgir...@gmail.com wrote: If you'd like to help me with debugging the issues on cayman, then please read the regression debugging section in the r600-sb branch notes [1] (or possibly more up-to-date source here - [2]), it explains how to find the exact shader that causes regression. After locating the guilty shader, you only need to prepend R600_DUMP_SHADERS=2 R600_SB_DUMP=3 to the command line to produce the full dump for that shader, then please send it to me, and I'll do my best to fix the issue. I briefly looked at it yesterday, but ran out of time. I did find out that all of the failures except one (in loop_index_test()) go away if I skip the if-conversion pass. I have the impression that the PRED_SET* instructions like PRED_SETNE_INT don't behave quite the way the ISA docs claim they do wrt. the value written to the destination register, but instead seem to behave more like the regular SET* instructions in that regard. I didn't properly test this, so it's more of a theory at this point, and may just be wrong. Of course we don't really want to use the PRED_SET* instructions to begin with if we're not going to actually use the predicate, so we'll probably just want to convert them to SET* instructions anyway. Somewhat related to that, wined3d generates GLSL of the form "dst = (src0 < src1) ? 1.0 : 0.0;" for the D3D slt instruction (and similar code for instructions like sge, etc.). The TGSI for that looks pretty awful, first doing a SLT, converting the result to an integer, and then branching on that to assign either 1.0 or 0.0 again. The if-conversion pass is fairly helpful there in the sense that it at least gets rid of the branches, but you still end up with a sequence like SETGE, FLT_TO_INT, PRED_SETNE_INT, CNDE_INT, while all that's really needed is the SETGE. That's probably best addressed in either the GLSL compiler or the GLSL -> TGSI stage though. 
Unfortunately I won't be able to test with that system again until at least Thursday, so it'll be a while before I can actually do anything about it. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] r600g: status of my work on the shader optimization
On 14 February 2013 11:42, Christian König deathsim...@vodafone.de wrote: nice work, I think you've made quite a progress here, but on the other hand it should be clear that the LLVM backend is the future and we should concentrate on that. I'm not sure that's really true. My impression is that LLVM has a number of problems that make it annoying to work with, and in the end it could very well turn out that the better approach would have been to just improve the existing code. Of course I'm not really a fan of using C++ (and operator overloading in particular) here either, but I think it would be interesting to see how far this branch can go, and how it compares to the LLVM backend in the end. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] r600g: status of my work on the shader optimization
On 14 February 2013 12:28, Christian König deathsim...@vodafone.de wrote: Well apart from a bit strange coding style and the use of SVN, I can't really see any problems that are related to using LLVM as it is. Well, for one, I don't think LLVM believes in stable APIs or shared libraries, and I think some of the build system issues for example are a result of that. More generally, I have the impression that using LLVM is much more complicated than it really needs to be for what we're trying to use it for in r600g. Of course I'm not a particularly active contributor to r600g these days, so this is really just more of a comment from the sidelines, but my basic point is that I think there's value in exploring other options besides LLVM. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] gallium/u_blitter: Remove the overlapped blit assert from util_blitter_blit_generic().
On 14 December 2012 09:27, Michel Dänzer mic...@daenzer.net wrote: On Fri, 2012-12-14 at 06:04 +0100, Henri Verbeet wrote: This is used by st_BlitFramebuffer() / r600_blit(), and ARB_fbo allows overlapped blits, even though the result is undefined. No piglit regressions on r600g / CYPRESS. Missing Signed-off-by? Yeah, actually. I'll add it when pushing. (Though I'm also under the impression Mesa isn't entirely consistent in Signed-off-by usage.) ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
[Mesa-dev] [PATCH] gallium/u_blitter: Remove the overlapped blit assert from util_blitter_blit_generic().
This is used by st_BlitFramebuffer() / r600_blit(), and ARB_fbo allows overlapped blits, even though the result is undefined. No piglit regressions on r600g / CYPRESS.
---
 src/gallium/auxiliary/util/u_blitter.c | 28
 1 files changed, 0 insertions(+), 28 deletions(-)

diff --git a/src/gallium/auxiliary/util/u_blitter.c b/src/gallium/auxiliary/util/u_blitter.c
index 49f01de..7c7e062 100644
--- a/src/gallium/auxiliary/util/u_blitter.c
+++ b/src/gallium/auxiliary/util/u_blitter.c
@@ -1053,29 +1053,6 @@ void util_blitter_custom_clear_depth(struct blitter_context *blitter,
                         0, PIPE_FORMAT_NONE, color, depth, 0, NULL, custom_dsa);
 }
 
-static
-boolean is_overlap(int dstx, int dsty, int dstz,
-                   const struct pipe_box *srcbox)
-{
-   struct pipe_box src = *srcbox;
-
-   if (src.width < 0) {
-      src.x += src.width;
-      src.width = -src.width;
-   }
-   if (src.height < 0) {
-      src.y += src.height;
-      src.height = -src.height;
-   }
-   if (src.depth < 0) {
-      src.z += src.depth;
-      src.depth = -src.depth;
-   }
-   return src.x < dstx+src.width && src.x+src.width > dstx &&
-          src.y < dsty+src.height && src.y+src.height > dsty &&
-          src.z < dstz+src.depth && src.z+src.depth > dstz;
-}
-
 void util_blitter_default_dst_texture(struct pipe_surface *dst_templ,
                                       struct pipe_resource *dst,
                                       unsigned dstlevel,
@@ -1261,11 +1238,6 @@ void util_blitter_blit_generic(struct blitter_context *blitter,
       return;
    }
 
-   /* Sanity checks. */
-   if (dst->texture == src->texture &&
-       dst->u.tex.level == src->u.tex.first_level) {
-      assert(!is_overlap(dstx, dsty, 0, srcbox));
-   }
 
    /* XXX should handle 3d regions */
    assert(srcbox->depth == 1);
-- 
1.7.2.5
___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] st/mesa: remove the prefix 'Gallium 0.4 on' from the renderer string
On 11 December 2012 13:57, Marek Olšák mar...@gmail.com wrote: We already have the Mesa version in the version string, isn't that enough to detect Mesa? In theory, although the vendor string would IMO be the expected place for that. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] Mesa (master): r600g: Use default mul/mad function for tgsi-to-llvm
On 6 December 2012 21:34, Tom Stellard t...@stellard.net wrote: I asked idr about this on IRC and he said that IEEE rules are required for GLSL >= 1.30 and they are compliant, but not required for GLSL < 1.30. stringfellow added that the d3d9 spec required 0*anything = 0, which is probably why the hardware has those instructions. That also means that this will break a couple of d3d9 applications in Wine. That's fine, if perhaps a bit unfortunate, since technically it's not something Wine can depend on anyway, and d3d10 is going to require IEEE conventions. At some point there was talk about a EXT_zero_mul_conventions extension to select one or the other behaviour at the context level. (The main consideration for doing it at the context level instead of e.g. per-shader was that apparently NVIDIA hardware doesn't have separate instructions for this, and instead only has a global switch.) I don't think that extension went anywhere, although I'm not all that clear on the reasons. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] Mesa (master): r600g: Use default mul/mad function for tgsi-to-llvm
On 8 December 2012 16:01, Alex Deucher alexdeuc...@gmail.com wrote: What about a mesa specific extension? Most people will be using wine on Linux anyway. Sure, that works for us. Assuming Mesa is interested in a such an extension, of course. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 2/2] mesa: use the EXT_texture_compression_s3tc enable flag for all S3TC extensions
On 2 November 2012 22:22, Marek Olšák mar...@gmail.com wrote: Yeah. However as far as I know, the desktop GL doesn't have a (good) S3TC extension which doesn't require on-line compression. With what you say, it looks like such an extension would be useful for us. Don't you think it would be nice if the OpenGL ARB went ahead and created such an extension? :) It shouldn't be terribly hard to do as a MESA extension either though, if the ARB isn't interested. I'd use it in Wine, if that's worth anything. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] glx/dri2: use uint64_t instead of double to represent time for FPS calculation
On 30 September 2012 21:51, Marek Olšák mar...@gmail.com wrote: Wine or a Windows app changes fpucw to 0x7f, causing doubles to be equivalent to floats, which broke the calculation of FPS. For reference, this is done by for example d3d9 when a D3D device is created without D3DCREATE_FPU_PRESERVE set. In the general case applications can do all kinds of terrible things to the FPU control word of course. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 1/3] meta: Implement sensible behavior for BlitFramebuffer with sRGB.
On 1 August 2012 03:04, Eric Anholt e...@anholt.net wrote: Prior to the GL 4.2 spec, there was no guidance on how to implement BlitFramebuffer for sRGB. Mesa chose to implement skipping decode and encode, providing a reasonable behavior for sRGB -> sRGB or RGB -> RGB, but providing something absurd for mixing and matching the two. In GL 4.2, some text appeared in the spec saying to skip decode (search for "no linearization"). The only non-absurd interpretation of that would be to have no encode (same as Mesa's current implementation), otherwise sRGB -> sRGB blits wouldn't work. However, it seems clear that you should be able to blit sRGB to RGB and RGB to sRGB and get appropriate conversion. The ARB has been discussing the issue, and appears to agree. So, instead implement the same behavior as gallium, and always do the decode if the texture is sRGB, and do the encode if the application asked for it. Breaks piglit fbo-srgb-blit, which was expecting our previous no-conversion behavior. --- Issue 12 in EXT_texture_sRGB requires no conversion on FBO blits, because they're mostly specified in terms of CopyPixels. EXT_framebuffer_sRGB has similar language in issue 8. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] TBOs: Mesa and i965 sampling support.
On 30 March 2012 03:29, Ian Romanick i...@freedesktop.org wrote: I'm not super excited about GL_EXT_gpu_shader4. Do we know of any applications that use that EXT and don't use either GLSL 1.30 or GLSL 1.40? Wine will use it for the texture sampling functions with explicit derivatives, but it can also use ARB_shader_texture_lod for that. It will also use it for GLSL round(), but realistically we should just start using GLSL 1.30. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] unpack functions showing up highly in profiles
On 18 March 2012 17:47, Matt Turner matts...@gmail.com wrote: SC2's call chain for unpack_uint_z_X8_Z24 is unpack_uint_z_X8_Z24 -> _mesa_unpack_uint_z_row -> _mesa_readpixels -> intelReadPixels -> copy_tex_sub_image.isra.3 -> intelCopyTexSubImage2D -> copyteximage -> shared_dispatch_stub_324 -> surface_load_ds_location -> drawPrimitive -> wined3d_device_draw_indexed_primitive -> IDirect3DDevice9Impl_DrawIndexedPrimitive -> ?? That's copying depth data, maybe stencil too. IIRC that used to hit a fallback on at least some Intel hardware, maybe it still does. The code in question on the Wine side will eventually go away, but for the moment you can try setting the HKCU/Software/Wine/Direct3D/AlwaysOffscreen registry key to "enabled" (you'll probably have to create it) and see if that makes it any better. For the Gallium based drivers a similar issue was fixed by d958202. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] Mesa as part of OpenGL-on-OpenGL ES 2.0 (/WebGL)?
2012/3/6 Benoit Jacob bja...@mozilla.com: The goal is to help port real-world applications such as games. Besides OpenGL [ES], the other API that is widely used in the real world is Direct3D (9 and 10), so that's what would be the most interesting. I've heard about a Direct3D implementation on top of Gallium3D but don't know its status. In case this is useful, Wine has a Direct3D implementation on top of OpenGL, that could be made to work with ES as well with some effort. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] mesa: Add unpack_uint_z_row support for floating-point depth buffers
On 1 February 2012 23:12, Brian Paul bri...@vmware.com wrote: +static void +unpack_uint_z_Z32_FLOAT(const void *src, GLuint *dst, GLuint n) +{ + const float *s = ((const float *)src); More parens than necessary there. The entire cast is unnecessary, IMO. But of course that would apply to the other functions in that file as well. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] mesa: Loosen glBlitFramebuffer restrictions on depthstencil buffers (v2)
On 20 January 2012 03:24, Eric Anholt e...@anholt.net wrote: So is it also allowed to blit from S8Z24 to Z24S8? Could we also allow blitting from RGBA8 to BGRA8 then, please? That's already allowed. Yeah, but not for multisampled framebuffers, unless RGBA8 and BGRA8 are considered identical (ARB_fbo): "If SAMPLE_BUFFERS for either the read framebuffer or draw framebuffer is greater than zero, no copy is performed and an INVALID_OPERATION error is generated if the dimensions of the source and destination rectangles provided to BlitFramebuffer are not identical, or if the formats of the read and draw framebuffers are not identical." ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
[Mesa-dev] [PATCH 1/1] st/mesa: Use util_blit_pixels_writemask() for depth blits as well in st_copy_texsubimage().
This has no piglit regressions on r600g and softpipe.

Signed-off-by: Henri Verbeet hverb...@gmail.com
---
 src/mesa/state_tracker/st_cb_texture.c | 174 +---
 1 files changed, 91 insertions(+), 83 deletions(-)

diff --git a/src/mesa/state_tracker/st_cb_texture.c b/src/mesa/state_tracker/st_cb_texture.c
index c58a9df..86d5935 100644
--- a/src/mesa/state_tracker/st_cb_texture.c
+++ b/src/mesa/state_tracker/st_cb_texture.c
@@ -1420,11 +1420,13 @@ st_copy_texsubimage(struct gl_context *ctx,
    struct pipe_context *pipe = st->pipe;
    struct pipe_screen *screen = pipe->screen;
    enum pipe_format dest_format, src_format;
-   GLboolean use_fallback = GL_TRUE;
    GLboolean matching_base_formats;
    GLuint format_writemask, sample_count;
    struct pipe_surface *dest_surface = NULL;
    GLboolean do_flip = (st_fb_orientation(ctx->ReadBuffer) == Y_0_TOP);
+   struct pipe_surface surf_tmpl;
+   unsigned int dst_usage;
+   GLint srcY0, srcY1;
 
    /* make sure finalize_textures has been called? */
@@ -1472,99 +1474,105 @@ st_copy_texsubimage(struct gl_context *ctx,
    matching_base_formats =
       (_mesa_get_format_base_format(strb->Base.Format) ==
        _mesa_get_format_base_format(texImage->TexFormat));
-   format_writemask = compatible_src_dst_formats(ctx, &strb->Base, texImage);
 
-   if (ctx->_ImageTransferState == 0x0) {
+   if (ctx->_ImageTransferState) {
+      goto fallback;
+   }
+
+   if (matching_base_formats &&
+       src_format == dest_format &&
+       !do_flip) {
+      /* use surface_copy() / blit */
+      struct pipe_box src_box;
+      u_box_2d_zslice(srcX, srcY, strb->surface->u.tex.first_layer,
+                      width, height, &src_box);
+
+      /* for resource_copy_region(), y=0=top, always */
+      pipe->resource_copy_region(pipe,
+                                 /* dest */
+                                 stImage->pt,
+                                 stImage->base.Level,
+                                 destX, destY, destZ + stImage->base.Face,
+                                 /* src */
+                                 strb->texture,
+                                 strb->surface->u.tex.level,
+                                 &src_box);
+      return;
+   }
 
-      if (matching_base_formats &&
-          src_format == dest_format &&
-          !do_flip)
-      {
-         /* use surface_copy() / blit */
-         struct pipe_box src_box;
-         u_box_2d_zslice(srcX, srcY, strb->surface->u.tex.first_layer,
-                         width, height, &src_box);
-
-         /* for resource_copy_region(), y=0=top, always */
-         pipe->resource_copy_region(pipe,
-                                    /* dest */
-                                    stImage->pt,
-                                    stImage->base.Level,
-                                    destX, destY, destZ + stImage->base.Face,
-                                    /* src */
-                                    strb->texture,
-                                    strb->surface->u.tex.level,
-                                    &src_box);
-         use_fallback = GL_FALSE;
-      }
-      else if (format_writemask &&
-               texBaseFormat != GL_DEPTH_COMPONENT &&
-               texBaseFormat != GL_DEPTH_STENCIL &&
-               screen->is_format_supported(screen, src_format,
-                                           PIPE_TEXTURE_2D, sample_count,
-                                           PIPE_BIND_SAMPLER_VIEW) &&
-               screen->is_format_supported(screen, dest_format,
-                                           PIPE_TEXTURE_2D, 0,
-                                           PIPE_BIND_RENDER_TARGET)) {
-         /* draw textured quad to do the copy */
-         GLint srcY0, srcY1;
-         struct pipe_surface surf_tmpl;
-         memset(&surf_tmpl, 0, sizeof(surf_tmpl));
-         surf_tmpl.format = util_format_linear(stImage->pt->format);
-         surf_tmpl.usage = PIPE_BIND_RENDER_TARGET;
-         surf_tmpl.u.tex.level = stImage->base.Level;
-         surf_tmpl.u.tex.first_layer = stImage->base.Face + destZ;
-         surf_tmpl.u.tex.last_layer = stImage->base.Face + destZ;
-
-         dest_surface = pipe->create_surface(pipe, stImage->pt,
-                                             &surf_tmpl);
-
-         if (do_flip) {
-            srcY1 = strb->Base.Height - srcY - height;
-            srcY0 = srcY1 + height;
-         }
-         else {
-            srcY0 = srcY;
-            srcY1 = srcY0 + height;
-         }
+   if (texBaseFormat == GL_DEPTH_STENCIL) {
+      goto fallback;
+   }
 
-         /* Disable conditional rendering. */
-         if (st->render_condition) {
-            pipe->render_condition(pipe, NULL, 0);
-         }
+   if (texBaseFormat == GL_DEPTH_COMPONENT) {
+      format_writemask = TGSI_WRITEMASK_XYZW;
+      dst_usage = PIPE_BIND_DEPTH_STENCIL;
+   }
+   else {
+      format_writemask = compatible_src_dst_formats(ctx, &strb->Base, texImage);
+      dst_usage
Re: [Mesa-dev] Allowing the reading of outputs for some drivers
On 15 November 2011 14:52, Jose Fonseca jfons...@vmware.com wrote:
> Developer time is important too. And having more code paths shared with
> other drivers (even at the expense of a few extra CPU cycles every time a
> shader is created) means that developers have more time to focus on
> features that can yield substantial improvements on true hotspots (e.g.,
> every time a pixel is rendered). This particular case may not be the best
> example. But there is a trade off: more specialization means more
> maintenance burden.
I certainly agree with the general principle, though I think that you should take the driver-specific IR into account in that consideration. I.e., I'm not sure that in terms of divergence of the generated code you really gain a lot with undoing elimination of output reads in the driver IR compared to not eliminating them in the first place for some drivers. On the other hand, I think it's certainly conceivable that if r600g had a proper hardware-specific optimizer it would end up eliminating the code in question anyway as a side effect. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] Allowing the reading of outputs for some drivers
On 14 November 2011 14:49, Vadim Girlin vadimgir...@gmail.com wrote:
> By the way, which drivers do not support reading outputs? I haven't done
> a full piglit run with llvmpipe, but IIRC the single test mentioned above
> was also fixed for llvmpipe without this output replacement.
IIRC both GLSL IR and Mesa IR took the approach that reading from outputs was up to the driver to handle at some point. At the very least classic r600 did reads from output GPRs. I seem to recall that some Intel hardware couldn't read from output registers; I'm not sure if there was any other hardware. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] Allowing the reading of outputs for some drivers
On 14 November 2011 17:29, Christoph Bumiller e0425...@student.tuwien.ac.at wrote:
> And r600, I think, just stores them all in TEMP space and exports them in
> the end, so it's rather a property of the shader backend than the device
> (I may be wrong though).
Instructions generally all work on GPRs (and a couple of special constants and registers); there are no special output or input registers as such. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
[Mesa-dev] [PATCH] mesa: Also set the remaining draw buffers to GL_NONE when updating just the first buffer in _mesa_drawbuffers().
Without this we'd miss the last update in a sequence like {COLOR0, COLOR1}, {COLOR0}, {COLOR0, COLOR1}. I originally had a patch for this that called updated_drawbuffers() when the buffer count changed, but later realized that was wrong. The ARB_draw_buffers spec explicitly says "The draw buffer for output colors beyond n is set to NONE.", and this is queryable state. This fixes piglit arb_draw_buffers-state_change.

NOTE: This is a candidate for the 7.10 and 7.11 branches.

Signed-off-by: Henri Verbeet hverb...@gmail.com
---
 src/mesa/main/buffers.c | 27 ++++++++++++---------------
 1 files changed, 12 insertions(+), 15 deletions(-)

diff --git a/src/mesa/main/buffers.c b/src/mesa/main/buffers.c
index a75c9c2..12103a7 100644
--- a/src/mesa/main/buffers.c
+++ b/src/mesa/main/buffers.c
@@ -381,6 +381,7 @@ _mesa_drawbuffers(struct gl_context *ctx, GLuint n, const GLenum *buffers,
 {
    struct gl_framebuffer *fb = ctx->DrawBuffer;
    GLbitfield mask[MAX_DRAW_BUFFERS];
+   GLuint buf;
 
    if (!destMask) {
       /* compute destMask values now */
@@ -410,13 +411,10 @@ _mesa_drawbuffers(struct gl_context *ctx, GLuint n, const GLenum *buffers,
         destMask0 &= ~(1 << bufIndex);
      }
      fb->ColorDrawBuffer[0] = buffers[0];
-     if (fb->_NumColorDrawBuffers != count) {
-        updated_drawbuffers(ctx);
-        fb->_NumColorDrawBuffers = count;
-     }
+     fb->_NumColorDrawBuffers = count;
   }
   else {
-     GLuint buf, count = 0;
+     GLuint count = 0;
      for (buf = 0; buf < n; buf++ ) {
         if (destMask[buf]) {
            GLint bufIndex = _mesa_ffs(destMask[buf]) - 1;
@@ -436,21 +434,20 @@ _mesa_drawbuffers(struct gl_context *ctx, GLuint n, const GLenum *buffers,
         }
         fb->ColorDrawBuffer[buf] = buffers[buf];
      }
-     /* set remaining outputs to -1 (GL_NONE) */
-     while (buf < ctx->Const.MaxDrawBuffers) {
-        if (fb->_ColorDrawBufferIndexes[buf] != -1) {
-           updated_drawbuffers(ctx);
-           fb->_ColorDrawBufferIndexes[buf] = -1;
-        }
-        fb->ColorDrawBuffer[buf] = GL_NONE;
-        buf++;
-     }
      fb->_NumColorDrawBuffers = count;
   }
 
+  /* set remaining outputs to -1 (GL_NONE) */
+  for (buf = n; buf < ctx->Const.MaxDrawBuffers; buf++) {
+     if (fb->_ColorDrawBufferIndexes[buf] != -1) {
+        updated_drawbuffers(ctx);
+        fb->_ColorDrawBufferIndexes[buf] = -1;
+     }
+     fb->ColorDrawBuffer[buf] = GL_NONE;
+  }
+
   if (fb->Name == 0) {
      /* also set context drawbuffer state */
-     GLuint buf;
      for (buf = 0; buf < ctx->Const.MaxDrawBuffers; buf++) {
         if (ctx->Color.DrawBuffer[buf] != fb->ColorDrawBuffer[buf]) {
            updated_drawbuffers(ctx);
-- 
1.7.2.5
___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
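The failure mode the commit message describes (missing the last update in {COLOR0, COLOR1}, {COLOR0}, {COLOR0, COLOR1}) boils down to one invariant: the trailing buffers must be reset to NONE on every call, no matter which path stored the first n. A minimal standalone sketch of that invariant, using hypothetical names rather than the actual Mesa state:

```c
#include <assert.h>

#define MAX_DRAW_BUFFERS 8
#define BUF_NONE (-1)

static int draw_buffers[MAX_DRAW_BUFFERS];

/* Mirrors the fixed behaviour: after storing the first n buffers, every
 * slot from n up to the maximum is reset to NONE on *every* call, so a
 * later call with more buffers always observes a state change. */
static void set_draw_buffers(const int *bufs, int n)
{
    int i;

    for (i = 0; i < n; i++)
        draw_buffers[i] = bufs[i];
    for (i = n; i < MAX_DRAW_BUFFERS; i++)
        draw_buffers[i] = BUF_NONE;
}
```

With the sequence from the commit message, the second {COLOR0, COLOR1} call is then guaranteed to see COLOR1 transition from NONE back to a real buffer.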
Re: [Mesa-dev] [PATCH] mesa: Also set the remaining draw buffers to GL_NONE when updating just the first buffer in _mesa_drawbuffers().
On 23 September 2011 01:48, Eric Anholt e...@anholt.net wrote:
> In the case of n == 1 with more than one bit set, doesn't this stomp the
> _ColorDrawBufferIndexes[] we just calculated between n and
> _NumColorDrawBuffers - 1? Looks like splitting that loop into two would
> work well.
You're right, how about something like the attached patch? Strictly speaking the start index for the first loop should be MAX2(n, fb->_NumColorDrawBuffers), though I'm not sure that's worth it.
0001-mesa-Also-set-the-remaining-draw-buffers-to-GL_NONE-.patch
Description: application/pgp-keys
___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] RFC: Remove GL_APPLE_client_storage
On 22 September 2011 01:10, Eric Anholt e...@anholt.net wrote:
> wined3d is the only potential consumer I've encountered. I don't think I
Yeah, although I'm not entirely convinced of the usefulness of wined3d using the extension either. Apple seems to have their fair share of APPLE_client_storage related bugs as well. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] Implement NV_fog_distance for Gallium hardware
On 19 September 2011 19:23, Ian Romanick i...@freedesktop.org wrote:
>> Direct3D 9 calls the eye radial fog distance mode "range-based fog" and
>> Wine's D3D9 implementation will use NV_fog_distance to implement it.
>> Several other open source game engines in Google Code Search use the eye
>> radial fog mode if it is available.
> I guess the big question is... why? With vertex shaders, this
In the case of wined3d, it's mostly a case of existing code using it. The code in question dates back to at least 2004, possibly earlier. If we were implementing this today we probably wouldn't bother with fixed-function GL at all. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] meta: fix broken sRGB mipmap generation
On 17 September 2011 19:33, Brian Paul brian.e.p...@gmail.com wrote:
> From: Brian Paul bri...@vmware.com
>
> If we're generating a mipmap for an sRGB texture we need to bypass
> sRGB->linear conversion. Otherwise the destination mipmap level (drawn
> with a textured quad) will have the wrong colors. If we can't turn off
> sRGB->linear conversion (GL_EXT_texture_sRGB_decode) we need to use the
> software fallback for mipmap generation.
Although not directly related to the issue this patch fixes, note that issue 24 in EXT_texture_sRGB (and issue 10 in EXT_texture_sRGB_decode is related to that) recommends doing filtering in linear space. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] S2TC - yet another attempt to solve the S3TC issue
On 9 August 2011 23:45, Marek Olšák mar...@gmail.com wrote:
> texture, so we'd be noncompliant. Noncompliant is probably better than
> not working at all. So what do you guys think?
In the general case, no. A missing extension is something applications can deal with if they care to; a broken extension isn't. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] r600g: Add support for ROUND
On 7 August 2011 19:03, Lauri Kasanen c...@gmx.com wrote: + /* floor(a + 0.5) */ Why not use RNDNE? ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] r600g: Add support for ROUND
On 8 August 2011 02:24, Jose Fonseca jfons...@vmware.com wrote:
> There's no wrong or right when there are two equidistant integers -- it's
> all a matter of convention. But note that rounding to nearest even is a
> slightly better convention in terms of rounding bias.
I.e., not using RNDNE is both likely to be slower and likely to produce worse results. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] r600g: Add support for ROUND
On 8 August 2011 03:58, Jose Fonseca jfons...@vmware.com wrote:
> It's subjective. It depends on the expected input distribution, which is
> effectively impossible to characterize in general. One can easily find
> datasets where one method gives biased results and the other not, and
> vice versa. And if one takes all possible numbers, they are equally good.
This is probably largely irrelevant to the patch in question, but just for argument's sake, I don't think that's true. The function floor(x + .5) will introduce positive bias regardless of input distribution, while for rndne this depends on the ratio of even and uneven inputs. Taking the real numbers as input, always rounding up will produce positive bias, while rndne will have 0 bias. Similarly, I don't think there's any set of inputs for which rounding to nearest even (or pretty much any other scheme) produces larger (absolute) bias than always rounding up.
> The only thing that could define what's wrong/right here is the OpenGL
> GLSL / Direct3D HLSL specs, but I don't think they go into this level of
> detail. At least I couldn't find any mention last time I checked.
AFAIK HLSL round() only specifies rounding to the nearest integer. GLSL (1.30) round() is similar, but GLSL also has an explicit roundEven() function. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 1/1] mesa: Call updated_drawbuffers() if the buffer count changes in _mesa_drawbuffers().
On 26 July 2011 02:21, Ian Romanick i...@freedesktop.org wrote:
> On 07/25/2011 01:23 PM, Henri Verbeet wrote:
>> Without this we'd miss an update when doing a sequence like {COLOR0,
>> COLOR1}, {COLOR0}, {COLOR0, COLOR1}.
> Is there a piglit test to reproduce this failure?
No, I found this using the Wine D3D tests. I can easily write one though.

On 26 July 2011 04:29, Eric Anholt e...@anholt.net wrote:
> Ah, I see. I like this better than setting remaining buffers to NONE.
> (and with this patch, we could avoid doing so, right?)
Yeah. As a follow-up we can probably also get rid of setting the remaining buffers to NONE in the n != 1 path, unless there's code that doesn't check _NumColorDrawBuffers. Such code would currently break on the n == 1 path as well though. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
[Mesa-dev] [PATCH 1/1] mesa: Call updated_drawbuffers() if the buffer count changes in _mesa_drawbuffers().
Without this we'd miss an update when doing a sequence like {COLOR0, COLOR1}, {COLOR0}, {COLOR0, COLOR1}.

NOTE: This is a candidate for the 7.10 and 7.11 branches.

Signed-off-by: Henri Verbeet hverb...@gmail.com
---
 src/mesa/main/buffers.c | 5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/src/mesa/main/buffers.c b/src/mesa/main/buffers.c
index a75c9c2..88fe0b1 100644
--- a/src/mesa/main/buffers.c
+++ b/src/mesa/main/buffers.c
@@ -445,7 +445,10 @@ _mesa_drawbuffers(struct gl_context *ctx, GLuint n, const GLenum *buffers,
          fb->ColorDrawBuffer[buf] = GL_NONE;
          buf++;
       }
-      fb->_NumColorDrawBuffers = count;
+      if (fb->_NumColorDrawBuffers != count) {
+         updated_drawbuffers(ctx);
+         fb->_NumColorDrawBuffers = count;
+      }
    }
 
    if (fb->Name == 0) {
-- 
1.7.2.5
___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 1/1] mesa: Call updated_drawbuffers() if the buffer count changes in _mesa_drawbuffers().
On 26 July 2011 01:02, Eric Anholt e...@anholt.net wrote:
> I don't see that, because the while (buf < MaxDrawBuffers) loop would
> notice the change from COLOR1 -> NONE.
That loop doesn't happen because n == 1 for {COLOR0} (as opposed to {COLOR0, NONE}). Perhaps we should always explicitly set any remaining buffers to NONE though. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 3/3] mesa: Make _mesa_get_compressed_formats match the texture compression specs
On 23 July 2011 10:58, Ian Romanick i...@freedesktop.org wrote:
> + * explose the 3dc formats through this mechanism.
Typo. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 3/3] mesa: Make _mesa_get_compressed_formats match the texture compression specs
On 23 July 2011 16:58, Brian Paul brian.e.p...@gmail.com wrote:
> Also, look for "comptaibility"
Looks like that is actually in the extension spec like that. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] rationale for GLubyte pointers for strings?
On 19 July 2011 21:39, tom fogal tfo...@sci.utah.edu wrote:
> I think you have misinterpreted my question. Why not just have
> glGetString's prototype be:
>
>    const char* glGetString(GLenum);
>
> ? Then (sans the missing const :), your code below would work on *all*
> platforms, MIPSpro or not, with or without a cast.
IIRC the idea was for the GL spec to be language-independent, or something along those lines. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
[Mesa-dev] [PATCH 2/2] st/mesa: Handle GL_MAP_INVALIDATE_BUFFER_BIT in st_bufferobj_map_range().
Alternatively, individual drivers could actually implement PIPE_TRANSFER_DISCARD_WHOLE_RESOURCE. As far as I can tell only svga currently implements that, and st_bufferobj_map_range() seems to be the main user. I wonder if in general PIPE_TRANSFER_DISCARD_WHOLE_RESOURCE is something that should just be handled by the state trackers.

As for the actual implementation, we could also try a map with PIPE_TRANSFER_DONTBLOCK first, and avoid invalidating _NEW_BUFFER_OBJECT in some cases. I'm not sure if that's worth it without doing more benchmarking though, since in the typical case GL_MAP_INVALIDATE_BUFFER_BIT will probably imply that the buffer is in use.

Signed-off-by: Henri Verbeet hverb...@gmail.com
---
 src/mesa/state_tracker/st_cb_bufferobjects.c | 15 +++++++++++++++
 1 files changed, 15 insertions(+), 0 deletions(-)

diff --git a/src/mesa/state_tracker/st_cb_bufferobjects.c b/src/mesa/state_tracker/st_cb_bufferobjects.c
index 7374bb0..7aa859e 100644
--- a/src/mesa/state_tracker/st_cb_bufferobjects.c
+++ b/src/mesa/state_tracker/st_cb_bufferobjects.c
@@ -332,6 +332,21 @@ st_bufferobj_map_range(struct gl_context *ctx, GLenum target,
       obj->Pointer = &st_bufferobj_zero_length;
    }
    else {
+      if (flags & PIPE_TRANSFER_DISCARD_WHOLE_RESOURCE) {
+         struct pipe_resource *buffer;
+
+         buffer = pipe_buffer_create(pipe->screen,
+                                     st_obj->buffer->bind,
+                                     st_obj->buffer->usage,
+                                     st_obj->buffer->width0);
+         if (buffer) {
+            st_invalidate_state(ctx, _NEW_BUFFER_OBJECT);
+            pipe_resource_reference(&st_obj->buffer, NULL);
+            st_obj->buffer = buffer;
+            flags &= ~PIPE_TRANSFER_DISCARD_WHOLE_RESOURCE;
+         }
+      }
+
       obj->Pointer = pipe_buffer_map_range(pipe,
                                            st_obj->buffer,
                                            offset, length,
-- 
1.7.2.5
___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
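The GL_MAP_INVALIDATE_BUFFER_BIT path in the patch above is a form of buffer "orphaning": instead of waiting for the GPU to release the old storage, the old resource is dropped and a fresh one of the same size takes its place. A simplified host-memory sketch of that idea, with hypothetical structs rather than the gallium API:

```c
#include <stdlib.h>

/* Toy stand-in for a buffer object whose storage can be orphaned. */
struct toy_buffer {
    void *storage;
    size_t size;
};

/* On an invalidate-whole-buffer map, allocate fresh storage and drop the
 * old block. A real driver would unreference the old pipe_resource instead
 * of freeing it, so the GPU can keep reading it until its work completes. */
static void *map_discard(struct toy_buffer *buf)
{
    void *fresh = malloc(buf->size);

    if (fresh) {
        free(buf->storage);
        buf->storage = fresh;
    }
    return buf->storage;
}
```

The caller gets a mapping it can fill immediately, without synchronizing against any pending reads of the previous contents.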
Re: [Mesa-dev] About merging pipe-video to master
2011/7/12 Christian König deathsim...@vodafone.de:
> it works with my available hardware (no piglit regressions). The changes
> to the winsys code are about making a bo optional, even when the reg
> information says it isn't. This is useful for registers where only a
> subset of the bits needs to be informed about a bo relocation, and seems
> to work pretty fine as long as you don't touch those bits.
Well, ok, but I'd expect to find that information in the commit log. As it is, 68cc6bc5d8b6986acc7f5780d705f4ae9be2a446 removes REG_FLAG_NEED_BO, then e602ecf9ef2f66289bcb159fdbdce2c76e3c07c1 adds it back without much of an explanation. Also, what subset is that? After this patch both places that touch the register pass NULL for the bo.

> +    // TODO get BLEND_CLAMP state from rasterizer state
Is this comment still accurate?

> +    color_info, ~S_0280A0_BLEND_CLAMP(1), NULL);
Did you mean to write ~C_0280A0_BLEND_CLAMP there?

> On top of that it implements clamp_fragment_color also for the blender
> state, this is necessary because the blender will otherwise clamp the
> colour to [0,1] for unsigned and [-1,1] for signed buffers. This is
> another piece needed to get arb_color_buffer_float working correctly
> (without the need to recompile the shaders each time).
You should probably remove the existing code that does that in r600_shader_from_tgsi() then, at least for r600. Either way, it sounds like this is a mostly independent change from the rest of pipe-video and should go to r600g through the regular way, probably through the mailing list first.

> +    switch (res->usage) {
> +    case PIPE_USAGE_STREAM:
> +    case PIPE_USAGE_STAGING:
> +    case PIPE_USAGE_STATIC:
> +    case PIPE_USAGE_IMMUTABLE:
> +        return FALSE;
> +
> +    default:
> +        return TRUE;
> +    }
At the very least this has whitespace errors. Why do we want this? Like the other change, the commit log for this change (77217af40d67612d1f1089ca188393d27a8a038f) isn't very descriptive. If it wasn't for the commit not being a merge commit, it would even be ambiguous whether "Merge fix" means merging a fix or fixing a merge. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] About merging pipe-video to master
2011/7/12 Christian König deathsim...@vodafone.de:
> On Tuesday, 12.07.2011 at 15:44 +0200, Henri Verbeet wrote:
>> 2011/7/12 Christian König deathsim...@vodafone.de:
>>> + // TODO get BLEND_CLAMP state from rasterizer state
>> Is this comment still accurate?
> Yes it is, the very first generation of R600 chipsets needs to know if
> blend clamping is enabled, to enable an additional optimisation for the
> color export (EXPORT_NORM). The problem is that I'm unsure how to get
> that state from the rasterizer structure into r600_cb; reprogramming
> color_info in r600_draw_vbo just like Vadim Girlin did for his patches
> seems to be a bit too much overhead to me.
I don't think you can in the current setup. You'd pretty much have to do something along the lines of r600_spi_update() or r600_update_alpha_ref().
> It took me a week to figure out what's going wrong here and why the
> pipeline didn't do what I wanted. The downside with my patches is that
> it disables the export optimisation on the early R600 generation
> chipsets, but my overall feeling is that it's better to render right and
> slow instead of fast and wrong.
I guess my point was mostly that there's not much of a point in doing the clamping both through BLEND_CLAMP and the fragment shader. Also, I guess we need this for EG+ as well. Thanks for clearing this up. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 1/2] r600g: introduce r600_bc_src_set_abs helper and fix LOG
On 7 July 2011 06:20, Vadim Girlin vadimgir...@gmail.com wrote:
> -static void r600_bc_src(struct r600_bc_alu_src *bc_src,
> +static inline void r600_bc_src(struct r600_bc_alu_src *bc_src,
This looks like an unrelated change. Personally I think inline is best left up to the compiler to decide in the majority of cases. Note that static inline will hide unused functions. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 2/2] r600g: introduce r600_bc_src_toggle_neg helper and fix SUB LRP
On 7 July 2011 06:20, Vadim Girlin vadimgir...@gmail.com wrote:
> +static inline void r600_bc_src_toggle_neg(struct r600_bc_alu_src *bc_src)
> +{
> +   bc_src->neg = 1 - bc_src->neg;
> +}
> +
Not necessarily wrong, but I think "bc_src->neg = !bc_src->neg;" would be the more common way to write this. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
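For a flag constrained to 0 or 1 the two spellings are equivalent; a tiny standalone check in plain C (hypothetical struct, not the r600g code itself):

```c
/* For a field holding only 0 or 1, "1 - neg" and "!neg" agree; the "!"
 * form also stays well-behaved if the field ever holds another value. */
struct toy_alu_src {
    unsigned int neg;
};

static void toggle_sub(struct toy_alu_src *s) { s->neg = 1 - s->neg; }
static void toggle_not(struct toy_alu_src *s) { s->neg = !s->neg; }
```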
Re: [Mesa-dev] [PATCH 4/5] mesa: Fix a couple of TexEnv unit limits.
On 6 July 2011 22:03, Ian Romanick i...@freedesktop.org wrote:
>> @@ -419,7 +419,7 @@ _mesa_TexEnvfv( GLenum target, GLenum pname, const GLfloat *param )
>>    ASSERT_OUTSIDE_BEGIN_END(ctx);
>>    maxUnit = (target == GL_POINT_SPRITE_NV && pname == GL_COORD_REPLACE_NV)
>> -      ? ctx->Const.MaxTextureCoordUnits : ctx->Const.MaxTextureImageUnits;
>> +      ? ctx->Const.MaxTextureCoordUnits : ctx->Const.MaxCombinedTextureImageUnits;
> I'm not 100% sure that this is correct. Is there some spec language to
> back this up? A test case?
Page 47 of the 2.1 spec (section 2.11.2, the bit about ActiveTexture()):

   "The active texture unit selector also selects the texture image unit
   accessed by commands involving texture image processing (section 3.8).
   Such commands include all variants of TexEnv (except for those
   controlling point sprite coordinate replacement), TexParameter, and
   TexImage commands, BindTexture, Enable/Disable for any texture target
   (e.g., TEXTURE_2D), and queries of all such state. If the texture image
   unit number corresponding to the current value of ACTIVE_TEXTURE is
   greater than or equal to the implementation-dependent constant
   MAX_COMBINED_TEXTURE_IMAGE_UNITS, the error INVALID_OPERATION is
   generated by any such command."

There's a corresponding section in the ARB_vertex_shader spec. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
[Mesa-dev] [PATCH 1/5] mesa: Check the texture against all units in unbind_texobj_from_texunits().
---
 src/mesa/main/texobj.c | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/src/mesa/main/texobj.c b/src/mesa/main/texobj.c
index 565a3a2..0e84b87 100644
--- a/src/mesa/main/texobj.c
+++ b/src/mesa/main/texobj.c
@@ -899,7 +899,7 @@ unbind_texobj_from_texunits(struct gl_context *ctx,
 {
    GLuint u, tex;
 
-   for (u = 0; u < MAX_TEXTURE_IMAGE_UNITS; u++) {
+   for (u = 0; u < Elements(ctx->Texture.Unit); u++) {
      struct gl_texture_unit *unit = &ctx->Texture.Unit[u];
      for (tex = 0; tex < NUM_TEXTURE_TARGETS; tex++) {
         if (texObj == unit->CurrentTex[tex]) {
-- 
1.7.2.5
___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
[Mesa-dev] [PATCH 2/5] mesa: Allow sampling from units >= MAX_TEXTURE_UNITS in shaders.
The total number of units used by a shader is limited to MAX_TEXTURE_UNITS, but the actual indices are only limited by MAX_COMBINED_TEXTURE_IMAGE_UNITS, since they're shared between vertex and fragment shaders.
---
 src/mesa/main/mtypes.h    | 2 +-
 src/mesa/main/shaderapi.c | 2 +-
 src/mesa/main/uniforms.c  | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/mesa/main/mtypes.h b/src/mesa/main/mtypes.h
index f018c75..b881183 100644
--- a/src/mesa/main/mtypes.h
+++ b/src/mesa/main/mtypes.h
@@ -1856,7 +1856,7 @@ struct gl_program
    GLbitfield SystemValuesRead; /**< Bitmask of SYSTEM_VALUE_x inputs used */
    GLbitfield InputFlags[MAX_PROGRAM_INPUTS];   /**< PROG_PARAM_BIT_x flags */
    GLbitfield OutputFlags[MAX_PROGRAM_OUTPUTS]; /**< PROG_PARAM_BIT_x flags */
-   GLbitfield TexturesUsed[MAX_TEXTURE_UNITS];  /**< TEXTURE_x_BIT bitmask */
+   GLbitfield TexturesUsed[MAX_COMBINED_TEXTURE_IMAGE_UNITS];  /**< TEXTURE_x_BIT bitmask */
    GLbitfield SamplersUsed;   /**< Bitfield of which samplers are used */
    GLbitfield ShadowSamplers; /**< Texture units used for shadow sampling. */

diff --git a/src/mesa/main/shaderapi.c b/src/mesa/main/shaderapi.c
index b58e30d..cb02e43 100644
--- a/src/mesa/main/shaderapi.c
+++ b/src/mesa/main/shaderapi.c
@@ -1032,7 +1032,7 @@ validate_samplers(const struct gl_program *prog, char *errMsg)
       TEXTURE_2D, TEXTURE_1D,
    };
-   GLint targetUsed[MAX_TEXTURE_IMAGE_UNITS];
+   GLint targetUsed[MAX_COMBINED_TEXTURE_IMAGE_UNITS];
    GLbitfield samplersUsed = prog->SamplersUsed;
    GLuint i;

diff --git a/src/mesa/main/uniforms.c b/src/mesa/main/uniforms.c
index 1c4fd82..dd069a3 100644
--- a/src/mesa/main/uniforms.c
+++ b/src/mesa/main/uniforms.c
@@ -580,7 +580,7 @@ _mesa_update_shader_textures_used(struct gl_program *prog)
       if (prog->SamplersUsed & (1 << s)) {
          GLuint unit = prog->SamplerUnits[s];
          GLuint tgt = prog->SamplerTargets[s];
-         assert(unit < MAX_TEXTURE_IMAGE_UNITS);
+         assert(unit < Elements(prog->TexturesUsed));
          assert(tgt < NUM_TEXTURE_TARGETS);
          prog->TexturesUsed[unit] |= (1 << tgt);
       }
@@ -674,7 +674,7 @@ set_program_uniform(struct gl_context *ctx, struct gl_program *program,
          GLuint texUnit = ((GLuint *) values)[i];
 
          /* check that the sampler (tex unit index) is legal */
-         if (texUnit >= ctx->Const.MaxTextureImageUnits) {
+         if (texUnit >= ctx->Const.MaxCombinedTextureImageUnits) {
             _mesa_error(ctx, GL_INVALID_VALUE,
                         "glUniform1(invalid sampler/tex unit index for '%s')",
                         param->Name);
-- 
1.7.2.5
___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev