[Mesa-dev] [PATCH] i965: Support allow_glsl_layout_qualifier_on_function_parameters option

2018-11-03 Thread Jeffrey Moerman
This adds support for Timothy's new driconf parameter, which fixes
shader compilation in No Man's Sky.
---
 src/mesa/drivers/dri/i965/brw_context.c  | 3 +++
 src/mesa/drivers/dri/i965/intel_screen.c | 1 +
 2 files changed, 4 insertions(+)

diff --git a/src/mesa/drivers/dri/i965/brw_context.c b/src/mesa/drivers/dri/i965/brw_context.c
index 6ba64e4e06..e33b75bb2f 100644
--- a/src/mesa/drivers/dri/i965/brw_context.c
+++ b/src/mesa/drivers/dri/i965/brw_context.c
@@ -890,6 +890,9 @@ brw_process_driconf_options(struct brw_context *brw)
    ctx->Const.AllowGLSLCrossStageInterpolationMismatch =
       driQueryOptionb(options, "allow_glsl_cross_stage_interpolation_mismatch");
 
+   ctx->Const.AllowLayoutQualifiersOnFunctionParameters =
+      driQueryOptionb(options, "allow_glsl_layout_qualifier_on_function_parameters");
+
    ctx->Const.dri_config_options_sha1 = ralloc_array(brw, unsigned char, 20);
    driComputeOptionsSha1(&brw->screen->optionCache,
                          ctx->Const.dri_config_options_sha1);
diff --git a/src/mesa/drivers/dri/i965/intel_screen.c b/src/mesa/drivers/dri/i965/intel_screen.c
index c3bd30f783..0a9667ce40 100644
--- a/src/mesa/drivers/dri/i965/intel_screen.c
+++ b/src/mesa/drivers/dri/i965/intel_screen.c
@@ -85,6 +85,7 @@ DRI_CONF_BEGIN
   DRI_CONF_ALLOW_GLSL_EXTENSION_DIRECTIVE_MIDSHADER("false")
   DRI_CONF_ALLOW_GLSL_BUILTIN_VARIABLE_REDECLARATION("false")
   DRI_CONF_ALLOW_GLSL_CROSS_STAGE_INTERPOLATION_MISMATCH("false")
+  DRI_CONF_ALLOW_GLSL_LAYOUT_QUALIFIER_ON_FUNCTION_PARAMETERS("false")
   DRI_CONF_ALLOW_HIGHER_COMPAT_VERSION("false")
   DRI_CONF_FORCE_GLSL_ABS_SQRT("false")
 
-- 
2.19.1
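For anyone wanting to test before a drirc default lands, the option can also be forced per application from a drirc file. A sketch (the application name and executable below are assumptions, not taken from this patch):

```xml
<!-- ~/.drirc sketch: force the new option for one application.
     "NMS.exe" is a guessed executable name; adjust to the real binary. -->
<driconf>
   <device>
      <application name="No Man's Sky" executable="NMS.exe">
         <option name="allow_glsl_layout_qualifier_on_function_parameters" value="true"/>
      </application>
   </device>
</driconf>
```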

___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev


Re: [Mesa-dev] [PATCH 00/31] nir: Use a 1-bit data type for booleans

2018-11-03 Thread Jason Ekstrand
Thanks!  I don't mean to be a pest.  However, not everyone is as good about
keeping track of their backlog as you are, so I thought it might be worth a
reminder.

--Jason

On Sat, Nov 3, 2018 at 7:59 PM Ian Romanick  wrote:

> I haven't forgotten... I'm planning to dig into this next week.

Re: [Mesa-dev] [PATCH] amd: remove support for LLVM 6.0

2018-11-03 Thread Marek Olšák
On Fri, Nov 2, 2018 at 10:58 AM Michel Dänzer  wrote:

> On 2018-11-02 10:23 a.m., Samuel Pitoiset wrote:
> > Users are encouraged to switch to LLVM 7.0, released in September 2018.
>
> At least two major releases of LLVM should always be supported,
> otherwise we force our downstreams and users to upgrade LLVM and Mesa in
> lockstep.
>

There are no concerns from distro vendors. We are good to go:
https://lists.freedesktop.org/archives/mesa-maintainers/2018-July/thread.html

Having users upgrade to LLVM 7.0 can be a good thing (for quality &
performance).

Marek


Re: [Mesa-dev] [PATCH 00/31] nir: Use a 1-bit data type for booleans

2018-11-03 Thread Ian Romanick
I haven't forgotten... I'm planning to dig into this next week.

On 11/02/2018 06:42 AM, Jason Ekstrand wrote:
> Bump
> 
> On Mon, Oct 22, 2018 at 5:14 PM Jason Ekstrand wrote:
> 
> This is something that Connor and I have talked about quite a bit
> over the
> last couple of months.  The core idea is to replace NIR's current 32-bit
> 0/-1 D3D10-style booleans with a 1-bit data type.  All in all, I
> think it
> worked out pretty well though I really don't like the proliferation of
> 32-bit comparison opcodes we now have kicking around for i965.
> 
> Why?  No hardware really has a 1-bit type, right?  Well, sort of...  AMD
> actually uses 64-bit scalars for booleans with one bit per invocation.
> However, most hardware such as Intel uses some other larger value for
> booleans.  The real benefit of 1-bit booleans and requiring a
> lowering pass
> is that you can do somewhat custom lowering (like AMD wants) and your
> lowering pass can always tell in an instant if a value is a boolean
> based
> on the bit size.  As can be seen in the last patch, this makes it really
> easy to implement a bool -> float lowering pass for hardware that
> doesn't
> have real integers where NIR's current booleans are actually rather
> painful.
> 
> On Intel, the situation is a bit more complicated.  It's tempting to say
> that we have 32-bit D3D10 booleans.  However, they're not really D3D10
> booleans on gen4-5 because the top 31 bits are undefined garbage
> and, while
> iand, ior, ixor, and inot operations work, you have to iand with 1
> at the
> last minute to clear off all that garbage.  Also, on all generations, a
> comparison of two N-bit values results in an N-bit boolean, not a 32-bit
> bool.  This has caused the Igalia folks no end of trouble as they've
> been
> working on native 8 and 16-bit support.  If, instead, we have a
> 1-bit bool
> with a lowering pass and we can lower to whatever we want, then we could
> lower to a set of comparison opcodes that return the same bit-size
> as they
> compare and it would match GEN hardware much better.
> 
> But what about performance?  Aren't there all sorts of neat tricks
> we can
> do with D3D10 booleans like b & 1.0f for b2f?  As it turns out, not
> really;
> that's about the only one.  There is some small advantage when
> optimizing
> shaders that come from D3D if your native representation of booleans
> matches that of D3D.  However, the penultimate patch in this series adds
> a few
> small optimizations that get us to actually better than we were before.
> With the entire series, shader-db on Kaby Lake looks like this:
> 
>     total instructions in shared programs: 15084098 -> 14988578 (-0.63%)
>     instructions in affected programs: 1321114 -> 1225594 (-7.23%)
>     helped: 2340
>     HURT: 23
> 
>     total cycles in shared programs: 369790134 -> 359798399 (-2.70%)
>     cycles in affected programs: 134085452 -> 124093717 (-7.45%)
>     helped: 2149
>     HURT: 720
> 
>     total loops in shared programs: 4393 -> 4393 (0.00%)
>     loops in affected programs: 0 -> 0
>     helped: 0
>     HURT: 0
> 
>     total spills in shared programs: 10158 -> 10051 (-1.05%)
>     spills in affected programs: 1429 -> 1322 (-7.49%)
>     helped: 8
>     HURT: 15
> 
>     total fills in shared programs: 22105 -> 21720 (-1.74%)
>     fills in affected programs: 2853 -> 2468 (-13.49%)
>     helped: 12
>     HURT: 15
> 
> How about ease of use?  Are they a pain to deal with?  Yes, adding
> support
> for 1-bit types was a bit awkward in a few places but most of it was
> dealing with all the places where we have 32-bit booleans baked into
> assumptions.  Getting rid of that baking in solves the problem and also
> makes the whole IR more future-proof.
> 
> All in all, I'd say I'm pretty happy with it.  However, I'd like other
> people (particularly the AMD folks) to play with it a bit and verify
> that
> it solves their problems as well.  Also, I added a lowering pass and
> tried
> to turn it on in everyone's driver but may not have put it in the right
> spot.  Please double-check my work.  For those wishing to take a
> look, you
> can also find the entire series on my gitlab here:
> 
> 
> https://gitlab.freedesktop.org/jekstrand/mesa/commits/review/nir-1-bit-bool
> 
> Please review!
> 
> --Jason
> 
> Cc: Connor Abbott <cwabbo...@gmail.com>
> Cc: Timothy Arceri
> Cc: Eric Anholt <e...@anholt.net>
> Cc: Rob Clark <robdcl...@gmail.com>
> Cc: Karol Herbst <karolher...@gmail.com>
> Cc: Bas Nieuwenhuizen
> Cc: 

[Mesa-dev] [Bug 108647] Kernel parameter GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amdgpu.ppfeaturemask=0xffffffff" artifacts

2018-11-03 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=108647

Bug ID: 108647
   Summary: Kernel parameter GRUB_CMDLINE_LINUX_DEFAULT="quiet
splash amdgpu.ppfeaturemask=0xffffffff" artifacts
   Product: Mesa
   Version: 18.3
  Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
  Severity: normal
  Priority: medium
 Component: Other
  Assignee: mesa-dev@lists.freedesktop.org
  Reporter: jaxey...@rupayamail.com
QA Contact: mesa-dev@lists.freedesktop.org

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amdgpu.ppfeaturemask=0xffffffff"

When enabled, this causes artifacts and glitching.

-- 
You are receiving this mail because:
You are the QA Contact for the bug.
You are the assignee for the bug.


Re: [Mesa-dev] [PATCH] android: radv: add libmesa_git_sha1 static dependency

2018-11-03 Thread Bas Nieuwenhuizen
Reviewed-by: Bas Nieuwenhuizen 
On Fri, Nov 2, 2018 at 3:54 PM Eric Engestrom  wrote:
>
> On Friday, 2018-11-02 13:38:53 +0100, Mauro Rossi wrote:
> > Hi all,
> > could somebody provide a Reviewed-by so this can be applied to mesa-dev
> > and this trivial build error avoided?
>
> Not an expert on Android.mk, but this looks reasonable, and adding that
> dep is definitely right, so:
> Reviewed-by: Eric Engestrom 
>
> > Thanks
> >
> > Mauro
> > On Tue, Oct 30, 2018 at 10:42 PM Mauro Rossi  wrote:
> > >
> > > The libmesa_git_sha1 whole-static dependency is added to get the git_sha1.h
> > > header and avoid the following build error:
> > >
> > > external/mesa/src/amd/vulkan/radv_device.c:46:10:
> > > fatal error: 'git_sha1.h' file not found
> > >  ^
> > > 1 error generated.
> > >
> > > Fixes: 9d40ec2cf6 ("radv: Add support for VK_KHR_driver_properties.")
> > > Signed-off-by: Mauro Rossi 
> > > ---
> > >  src/amd/vulkan/Android.mk | 3 ++-
> > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/src/amd/vulkan/Android.mk b/src/amd/vulkan/Android.mk
> > > index 51b03561fa..9574bf54e5 100644
> > > --- a/src/amd/vulkan/Android.mk
> > > +++ b/src/amd/vulkan/Android.mk
> > > @@ -74,7 +74,8 @@ LOCAL_C_INCLUDES := \
> > >    $(call generated-sources-dir-for,STATIC_LIBRARIES,libmesa_vulkan_util,,)/util
> > >
> > >  LOCAL_WHOLE_STATIC_LIBRARIES := \
> > > -   libmesa_vulkan_util
> > > +   libmesa_vulkan_util \
> > > +   libmesa_git_sha1
> > >
> > >  LOCAL_GENERATED_SOURCES += $(intermediates)/radv_entrypoints.c
> > >  LOCAL_GENERATED_SOURCES += $(intermediates)/radv_entrypoints.h
> > > --
> > > 2.19.1
> > >


Re: [Mesa-dev] [PATCH] radv: remove useless sync after copying query results with compute

2018-11-03 Thread Bas Nieuwenhuizen
Reviewed-by: Bas Nieuwenhuizen 
On Fri, Nov 2, 2018 at 1:44 PM Samuel Pitoiset
 wrote:
>
> The spec says:
>"vkCmdCopyQueryPoolResults is considered to be a transfer
> operation, and its writes to buffer memory must be synchronized
> using VK_PIPELINE_STAGE_TRANSFER_BIT and VK_ACCESS_TRANSFER_WRITE_BIT
> before using the results."
>
> VK_PIPELINE_STAGE_TRANSFER_BIT will wait for compute to be idle,
> while VK_ACCESS_TRANSFER_WRITE_BIT will invalidate both L1 vector
> caches and L2. So, it's useless to set those flags internally.
>
> Signed-off-by: Samuel Pitoiset 
> ---
>  src/amd/vulkan/radv_query.c | 4 
>  1 file changed, 4 deletions(-)
>
> diff --git a/src/amd/vulkan/radv_query.c b/src/amd/vulkan/radv_query.c
> index 57ea22fb84..4153dc2f67 100644
> --- a/src/amd/vulkan/radv_query.c
> +++ b/src/amd/vulkan/radv_query.c
> @@ -755,10 +755,6 @@ static void radv_query_shader(struct radv_cmd_buffer 
> *cmd_buffer,
>
> radv_unaligned_dispatch(cmd_buffer, count, 1, 1);
>
> -   cmd_buffer->state.flush_bits |= RADV_CMD_FLAG_INV_GLOBAL_L2 |
> -   RADV_CMD_FLAG_INV_VMEM_L1 |
> -   RADV_CMD_FLAG_CS_PARTIAL_FLUSH;
> -
> radv_meta_restore(&saved_state, cmd_buffer);
>  }
>
> --
> 2.19.1
>
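For context, the application-side synchronization the quoted spec text requires would look roughly like the fragment below. This is a sketch, not radv-internal code; `cmd_buf` and `dst_buffer` are assumed to exist, and the destination stage (compute shader read) is one possible consumer:

```c
/* Sketch: the barrier the Vulkan spec puts on the application after
 * vkCmdCopyQueryPoolResults, before reading the results in a shader. */
VkBufferMemoryBarrier barrier = {
    .sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER,
    .srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
    .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .buffer = dst_buffer,
    .offset = 0,
    .size = VK_WHOLE_SIZE,
};
vkCmdPipelineBarrier(cmd_buf,
                     VK_PIPELINE_STAGE_TRANSFER_BIT,       /* src: the copy   */
                     VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT, /* dst: the reader */
                     0, 0, NULL, 1, &barrier, 0, NULL);
```

Since the app must issue this barrier anyway, flushing internally in radv_query_shader() duplicates work — which is the point of the patch.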


[Mesa-dev] [Bug 102204] GL_ARB_buffer_storage crippled extension on r600, radeonsi and amdgpu Mesa drivers

2018-11-03 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=102204

Matias N. Goldberg  changed:

 What | Removed | Added
 -----|---------|-------------------------
 CC   |         | dark_syl...@yahoo.com.ar

--- Comment #10 from Matias N. Goldberg  ---
OK, this left me wondering.

I looked further into what happens when you call glBufferStorage, and it seems
that buffer_usage in st_cb_bufferobjects.c
https://github.com/anholt/mesa/blob/master/src/mesa/state_tracker/st_cb_bufferobjects.c#L226
stores all buffers in VRAM unless GL_CLIENT_STORAGE_BIT is used, because it
always returns PIPE_USAGE_DEFAULT.

The chosen pipe_resource_usage will end up in si_init_resource_fields
(https://github.com/anholt/mesa/blob/master/src/gallium/drivers/radeonsi/si_buffer.c#L103),
which ends up putting all buffers in VRAM with write-combining (unless
GL_CLIENT_STORAGE_BIT is set).

That is... an odd choice. It is especially bad for buffers requested with the
GL_MAP_READ_BIT flag, which clearly should not be stored in VRAM with WC bits
set.

This indeed looks like a bug to me. Unfortunately, these hint flags don't map
very well to hardware.
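Given the heuristic described above, an application can steer a read-back buffer away from write-combined VRAM today by adding GL_CLIENT_STORAGE_BIT itself. A sketch (assumes an existing GL 4.4 context; `size` is illustrative):

```c
/* Sketch: GL_CLIENT_STORAGE_BIT makes the current Mesa heuristic place
 * this read-back buffer in CPU-visible memory instead of WC VRAM. */
GLuint buf;
glGenBuffers(1, &buf);
glBindBuffer(GL_PIXEL_PACK_BUFFER, buf);
glBufferStorage(GL_PIXEL_PACK_BUFFER, size, NULL,
                GL_MAP_READ_BIT | GL_CLIENT_STORAGE_BIT);
```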

-- 
You are receiving this mail because:
You are the QA Contact for the bug.
You are the assignee for the bug.


[Mesa-dev] [Bug 102204] GL_ARB_buffer_storage crippled extension on r600, radeonsi and amdgpu Mesa drivers

2018-11-03 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=102204

mirh  changed:

 What | Removed | Added
 -----|---------|-----------------
 CC   |         | awe...@gmail.com

-- 
You are receiving this mail because:
You are the assignee for the bug.


[Mesa-dev] [Bug 102204] GL_ARB_buffer_storage crippled extension on r600, radeonsi and amdgpu Mesa drivers

2018-11-03 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=102204

--- Comment #9 from Matias N. Goldberg  ---
Disclaimer: I'm not a Mesa dev.

I saw this ticket by accident and since I'm a heavy user of
GL_ARB_buffer_storage on AMD+Mesa, I took a look in case there's something that
could affect me.

After glancing through the problems, it appears that the issue is "user
error", and the real bug here is the lack of performance warnings reported via
KHR_debug.

GL_ARB_buffer_storage can backfire if you do not use it well.
It would appear that you're performing several operations that are not
supported in HW (such as the use of GL_UNSIGNED_BYTE indices, which requires
converting all indices to GL_UNSIGNED_SHORT in SW) and must be processed to
work correctly.

Or you do things that are a minefield, like issuing a loop of
glMemoryBarrier+glDrawElementsBaseVertex PER TRIANGLE while reading from a
persistently mapped buffer. That is just not going to be fast.

The reason GL_CLIENT_STORAGE_BIT improves things is that the buffer is
stored in a CPU buffer (i.e. a good ol' malloc), and when it's time to render,
Mesa does the SW conversions for you and then uploads the data to the GPU.
When that flag is not present, Mesa must constantly download and upload data
from the GPU back and forth to accommodate whatever you're doing that is not
natively supported.

The rule of thumb is that persistently mapped memory should be treated as
write-only, read-once memory.
If I'm writing view matrices that are rendered once, I write directly to the
persistently mapped buffer and use it directly in the shader.
If I'm writing material/pass data that is reused across multiple draws, I
write it once to the persistently mapped buffer and then perform a
glCopyBufferSubData to another buffer that is not CPU-visible at all.

By the way, avoid using GL_MAP_COHERENT_BIT. Your calls to glReadPixels that
store directly into a PBO backed by coherent memory could be hurting you a lot.
glReadPixels must do deswizzling; it's not a raw memcpy on the GPU side, and
keeping both the CPU and GPU caches in sync is foolishness here.
If you insist on using GL_MAP_COHERENT_BIT for your glReadPixels, perform the
glReadPixels into a PBO that is not CPU-visible (to do the deswizzling), then
perform a glCopyBufferSubData from that PBO into another PBO that is
CPU-visible.
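The write-once-then-copy pattern described above can be sketched as follows (assumes a GL 4.4 context; `size` and `material_data` are illustrative, fencing omitted for brevity, and GL_MAP_COHERENT_BIT is used here only on the write-side staging buffer — the warning above concerns coherent read-back PBOs):

```c
GLuint staging, gpu_only;
glGenBuffers(1, &staging);
glGenBuffers(1, &gpu_only);

/* CPU-visible staging buffer, persistently mapped for writing. */
const GLbitfield map_flags =
    GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
glBindBuffer(GL_COPY_READ_BUFFER, staging);
glBufferStorage(GL_COPY_READ_BUFFER, size, NULL, map_flags);
void *ptr = glMapBufferRange(GL_COPY_READ_BUFFER, 0, size, map_flags);

/* Reused material/pass data lives in a buffer with no CPU-access bits,
 * which the driver is free to keep in VRAM. */
glBindBuffer(GL_COPY_WRITE_BUFFER, gpu_only);
glBufferStorage(GL_COPY_WRITE_BUFFER, size, NULL, 0);

memcpy(ptr, material_data, size);   /* write once...                 */
glCopyBufferSubData(GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER,
                    0, 0, size);    /* ...then let the GPU reuse it. */
```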

-- 
You are receiving this mail because:
You are the QA Contact for the bug.
You are the assignee for the bug.


Re: [Mesa-dev] [PATCH mesa] util: use *unsigned* ints for bit operations

2018-11-03 Thread Mathias Fröhlich
Hi,

> > Before filing a bug report at gcc I wanted to verify that we are not doing
> > anything wrong, like with aliasing for example. That is the reason the bug
> > is not filed yet.
> 
> FYI I filed a bug in Fedora, and Jakub tracked it down and is working on it
> upstream at:
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87859

Thanks a lot!!!

Mathias





[Mesa-dev] [Bug 102204] GL_ARB_buffer_storage crippled extension on r600, radeonsi and amdgpu Mesa drivers

2018-11-03 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=102204

H4nN1baL  changed:

 What    | Removed | Added
 --------|---------|------
 Version | 17.3    | 18.3

-- 
You are receiving this mail because:
You are the assignee for the bug.
You are the QA Contact for the bug.


[Mesa-dev] [Bug 102204] GL_ARB_buffer_storage crippled extension on r600, radeonsi and amdgpu Mesa drivers

2018-11-03 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=102204

H4nN1baL  changed:

   What|Removed |Added

 Status|REOPENED|NEEDINFO

--- Comment #8 from H4nN1baL  ---
The addition of that flag makes the difference in this part:
https://github.com/gonetz/GLideN64/blob/7aa360c9007d5b5f8c020d68341585e1f5b24b03/src/Graphics/OpenGLContext/opengl_ColorBufferReaderWithBufferStorage.cpp#L34-L35
With it, the problem disappears completely.

I can also add it here:
https://github.com/gonetz/GLideN64/blob/7aa360c9007d5b5f8c020d68341585e1f5b24b03/src/Graphics/OpenGLContext/opengl_BufferedDrawer.cpp#L15-L16
I don't see any difference... Is it only necessary to add it to the first part?

-- 
You are receiving this mail because:
You are the QA Contact for the bug.
You are the assignee for the bug.


Re: [Mesa-dev] [PATCH mesa] util: use *unsigned* ints for bit operations

2018-11-03 Thread Dave Airlie
Thanks Mathias,

>
> Before filing a bug report at gcc I wanted to verify that we are not doing
> anything wrong, like with aliasing for example. That is the reason the bug
> is not filed yet.

FYI I filed a bug in Fedora, and Jakub tracked it down and is working on it
upstream at:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87859

Dave.