Re: [Mesa-dev] Testing optimizer

2013-12-17 Thread Patrick Baggett
On Tue, Dec 17, 2013 at 10:59 AM, Paul Berry wrote:

> I believe every driver also supports MESA_GLSL=dump, which prints out the
> IR both before and after linking (you'll want to look at the version after
> linking to see what optimizations have been applied, since some
> optimizations happen at link time).  Looking at the IR rather than the
> machine code is more likely to give you the information you need, since
> Mesa performs the same IR-level optimizations on all architectures, whereas
> the optimizations that happen at the machine-code level are vastly different
> from one driver to the next.
>
>
I do want to see both, actually. For example, if one driver implements a
specific optimization at the machine-code level and another clearly does
not, I would consider that "interesting".



> Another thing which might be useful to you is Aras Pranckevičius's
> "glsl-optimizer" project (https://github.com/aras-p/glsl-optimizer),
> which performs Mesa's IR-level optimizations on a shader and then
> translates it from IR back to GLSL.
>
> Paul
>
>
Thanks to everyone for the great tips!


Re: [Mesa-dev] Testing optimizer

2013-12-17 Thread Paul Berry
On 17 December 2013 11:07, Patrick Baggett wrote:

> I do want to see both, actually. For example, if one driver implements a
> specific optimization at the machine-code level and another clearly does
> not, I would consider that "interesting".
>

Ok, in that case the environment variable you want for seeing the generated
assembly code for Intel is:

INTEL_DEBUG=vs,fs,gs
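
For example (the binary name below is just a placeholder for whatever
application compiles your shaders; the dump may land on stdout or stderr
depending on the build, so capturing both is safest):

INTEL_DEBUG=vs,fs,gs ./my_gl_app > intel_dump.txt 2>&1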


Re: [Mesa-dev] Testing optimizer

2013-12-17 Thread Marek Olšák
ST_DEBUG=tgsi ... is also useful. It dumps all shaders generated by
mesa_to_tgsi and glsl_to_tgsi, and the output is easier to read than the
GLSL IR.
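
For example (./my_gl_app stands in for any GL application that compiles
the shaders you care about):

ST_DEBUG=tgsi ./my_gl_app > tgsi_dump.txt 2>&1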

Marek


Re: [Mesa-dev] Testing optimizer

2013-12-17 Thread Paul Berry
On 17 December 2013 08:46, Tom Stellard wrote:

>
> Each driver has its own environment variable for dumping machine code.
>
> llvmpipe: GALLIVM_DEBUG=asm (I think you need to build mesa
>  with --enable-debug for this to work)
> r300g: RADEON_DEBUG=fp,vp
> r600g, radeonsi: R600_DEBUG=ps,vs
>
> I'm not sure what the other drivers use.
>
> -Tom
>

I believe every driver also supports MESA_GLSL=dump, which prints out the
IR both before and after linking (you'll want to look at the version after
linking to see what optimizations have been applied, since some
optimizations happen at link time).  Looking at the IR rather than the
machine code is more likely to give you the information you need, since
Mesa performs the same IR-level optimizations on all architectures, whereas
the optimizations that happen at the machine-code level are vastly different
from one driver to the next.
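
For example (./my_gl_app is a placeholder; the IR is printed as each
shader is compiled and linked, and may go to stdout or stderr depending
on the build, so capture both):

MESA_GLSL=dump ./my_gl_app > ir_dump.txt 2>&1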

Another thing which might be useful to you is Aras Pranckevičius's
"glsl-optimizer" project (https://github.com/aras-p/glsl-optimizer), which
performs Mesa's IR-level optimizations on a shader and then translates it
from IR back to GLSL.
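
If you would rather drive it programmatically, the project exposes a small
C-style interface. A minimal sketch follows; treat the names as approximate
(they come from the project's README, and the glslopt_initialize signature
has varied across versions), and check glsl_optimizer.h before relying on
them:

#include <stdio.h>
#include "glsl_optimizer.h"

int main(void)
{
    /* The shader from the original question, wrapped as a fragment shader. */
    const char *source =
        "uniform vec2 p1; uniform vec2 p2; uniform vec2 p3;\n"
        "void main() {\n"
        "  gl_FragColor = vec4(p1, 0.0, 0.0) + vec4(p2, 0.0, 0.0)\n"
        "               + vec4(p3, 0.0, 1.0);\n"
        "}\n";

    glslopt_ctx *ctx = glslopt_initialize(false); /* false = desktop GL */
    glslopt_shader *sh =
        glslopt_optimize(ctx, kGlslOptShaderFragment, source, 0);
    if (glslopt_get_status(sh))
        printf("%s\n", glslopt_get_output(sh));      /* optimized GLSL */
    else
        fprintf(stderr, "%s\n", glslopt_get_log(sh)); /* compile errors */
    glslopt_shader_delete(sh);
    glslopt_cleanup(ctx);
    return 0;
}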

Paul


Re: [Mesa-dev] Testing optimizer

2013-12-17 Thread Tom Stellard
On Tue, Dec 17, 2013 at 09:57:31AM -0600, Patrick Baggett wrote:
> Hi all,
> 
> Is there a way to see the machine code that is generated by the GLSL
> compiler for all GPU instruction sets? For example, I would like to know
> whether the optimizer optimizes certain (equivalent) constructs, so that I
> can avoid the ones it doesn't. I know there is a lot about optimization on
> GPUs that I don't know, but I'd still like to get some ballpark estimates.
> For example, I'm curious whether:

Each driver has its own environment variable for dumping machine code.

llvmpipe: GALLIVM_DEBUG=asm (I think you need to build mesa
 with --enable-debug for this to work)
r300g: RADEON_DEBUG=fp,vp
r600g, radeonsi: R600_DEBUG=ps,vs

I'm not sure what the other drivers use.
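
For example, a dump can be captured by setting the matching variable for
one run of any GL application that compiles the shader (./my_gl_app below
is just a placeholder; output may go to stdout or stderr depending on the
driver, so capturing both is safest):

GALLIVM_DEBUG=asm ./my_gl_app > llvmpipe_dump.txt 2>&1
R600_DEBUG=ps,vs ./my_gl_app > r600_dump.txt 2>&1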

-Tom

> 
> //let p1, p2, p3 be vec2 uniforms
> 
> vec4(p1, 0, 0) + vec4(p2, 0, 0) + vec4(p3, 0, 1)
> 
> produces machine code identical to:
> 
> vec4(p1+p2+p3, 0, 1);
> 
> for all architectures supported by Mesa.


[Mesa-dev] Testing optimizer

2013-12-17 Thread Patrick Baggett
Hi all,

Is there a way to see the machine code that is generated by the GLSL
compiler for all GPU instruction sets? For example, I would like to know
whether the optimizer optimizes certain (equivalent) constructs, so that I
can avoid the ones it doesn't. I know there is a lot about optimization on
GPUs that I don't know, but I'd still like to get some ballpark estimates.
For example, I'm curious whether:

//let p1, p2, p3 be vec2 uniforms

vec4(p1, 0, 0) + vec4(p2, 0, 0) + vec4(p3, 0, 1)

produces machine code identical to:

vec4(p1+p2+p3, 0, 1);

for all architectures supported by Mesa.
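
For reference, here is one way to wrap the two variants in a complete
shader for the dump options mentioned in the replies (a sketch; the
snippet doesn't name a shader stage, so a fragment shader is assumed):

uniform vec2 p1;
uniform vec2 p2;
uniform vec2 p3;

void main()
{
    // Variant A: three vec4 constructions summed.
    gl_FragColor = vec4(p1, 0.0, 0.0) + vec4(p2, 0.0, 0.0)
                 + vec4(p3, 0.0, 1.0);

    // Variant B: the algebraically equivalent form.
    //gl_FragColor = vec4(p1 + p2 + p3, 0.0, 1.0);
}

Compiling once with each variant enabled and diffing the dumps shows
whether the optimizer reduces both to the same code.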