Re: Possible Performance Regression with Mesa

2024-04-25 Thread Daniel Stone
On Thu, 25 Apr 2024 at 13:08, Lucas Stach  wrote:
> I can reproduce the issue, but sadly there is no simple fix for this,
> as it's a bad interaction between some of the new features.
> At the core of the issue is the dmabuf-feedback support with the chain
> of events being as follows:
>
> 1. weston switches to the scanout tranche, as it would like to put the
> surface on a plane
> 2. the client reallocates as linear but does so on the render node
> 3. weston still isn't able to put the buffer on the plane, as it's
> still scanout incompatible due to being non-contig, so needs to fall
> back to rendering
> 4. now we are stuck with a linear buffer being used for rendering,
> which is very suboptimal

Oh man, sorry about that, that shouldn't happen. As long as
drmModeAddFB2 is failing, we should be marking the buffer as
non-importable, and then hinting the client back towards tiled.
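
Roughly what I have in mind (only a rough sketch, not actual Weston
code; "struct buffer" and the scanout_ok flag are made-up names, the
libdrm call is real):

    #include <stdbool.h>
    #include <stdint.h>
    #include <xf86drmMode.h>

    struct buffer {
            uint32_t width, height, format;
            uint32_t handles[4], pitches[4], offsets[4];
            uint32_t fb_id;
            bool scanout_ok;  /* false -> never try this buffer on a plane again */
    };

    static bool try_import_for_scanout(int drm_fd, struct buffer *buf)
    {
            /* Ask KMS to wrap the client dmabuf in a framebuffer. */
            if (drmModeAddFB2(drm_fd, buf->width, buf->height, buf->format,
                              buf->handles, buf->pitches, buf->offsets,
                              &buf->fb_id, 0)) {
                    /* Import failed: mark the buffer as non-importable and
                     * adjust the dmabuf-feedback tranches so the client is
                     * hinted back towards tiled instead of staying linear. */
                    buf->scanout_ok = false;
                    return false;
            }
            buf->scanout_ok = true;
            return true;
    }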

That being said, yeah, having the client render to linear so we can
skip composition entirely is definitely going to be better!

Cheers,
Daniel


Re: Possible Performance Regression with Mesa

2024-04-25 Thread Lucas Stach
On Thursday, 2024-04-25 at 07:56 -0300, Joao Paulo Silva Goncalves wrote:
> 
> 
> On Thu, Apr 25, 2024 at 5:58 AM Lucas Stach  wrote:
> 
> > Etnaviv added some resource tracking to fix issues with a number of
> > use-cases, which did add some CPU overhead and might cost some
> > performance, but it should not be as dramatic as the numbers you are
> > seeing here.
> 
> Good to know. Thanks!
> 
> > Since the glmark2 cumulative score can be skewed quite heavily by
> >  single tests, it would be interesting to compare the results from
> >  individual benchmark tests. Do you see any outliers there or is the
> >  performance drop across the board?
> 
It seems to have a performance impact across the individual benchmarks
too, for example:

I can reproduce the issue, but sadly there is no simple fix for this,
as it's a bad interaction between some of the new features.
At the core of the issue is the dmabuf-feedback support with the chain
of events being as follows:

1. weston switches to the scanout tranche, as it would like to put the
surface on a plane
2. the client reallocates as linear but does so on the render node
3. weston still isn't able to put the buffer on the plane, as it's
still scanout incompatible due to being non-contig, so needs to fall
back to rendering
4. now we are stuck with a linear buffer being used for rendering,
which is very suboptimal
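
To make steps 2 and 3 a bit more concrete, here is a purely
illustrative sketch (real GBM API, made-up helper, error handling
omitted) of the two allocation paths a client could take:

    #include <fcntl.h>
    #include <gbm.h>

    /* Made-up helper: open a DRM node and allocate one linear XRGB8888 bo. */
    static struct gbm_bo *alloc_linear(const char *node, uint32_t w, uint32_t h,
                                       uint32_t use_flags)
    {
            int fd = open(node, O_RDWR | O_CLOEXEC);
            struct gbm_device *gbm = gbm_create_device(fd);

            return gbm_bo_create(gbm, w, h, GBM_FORMAT_XRGB8888,
                                 GBM_BO_USE_LINEAR | use_flags);
    }

    /* Step 2 above: linear, but allocated on the render node, so etnaviv has
     * no reason to make it contiguous and scanout can still reject it:
     *   alloc_linear("/dev/dri/renderD128", 1920, 1080, GBM_BO_USE_RENDERING);
     *
     * What the scanout tranche actually needs on this hardware: a linear and
     * contiguous buffer, which means allocating on the KMS device:
     *   alloc_linear("/dev/dri/card0", 1920, 1080, GBM_BO_USE_SCANOUT);
     */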

I'll look into improving this, but can make no commitments as to when
I'll be able to get around to this.

Regards,
Lucas


Re: Possible Performance Regression with Mesa

2024-04-25 Thread Lucas Stach
Hi Joao Paulo,

On Wednesday, 2024-04-24 at 19:31 -0300, Joao Paulo Silva Goncalves wrote:
> Hello all,
> 
> We might have encountered a performance regression after upgrading from
> Mesa 22.0.3 to 24.0.2. During our automated hardware tests using LAVA,
> we noticed a lower score on glmark2 when we upgraded the OpenEmbedded
> release from Kirkstone to Scarthgap. After conducting some internal
> tests, it doesn't seem to be an issue with the kernel or the glmark2
> tool version, so we suspect that the issue may be related to something
> within Mesa. We believe that there might be something we're overlooking.
> Do you have any ideas or insights about this problem?
> 
Etnaviv added some resource tracking to fix issues with a number of
use-cases, which did add some CPU overhead and might cost some
performance, but it should not be as dramatic as the numbers you are
seeing here.

> Here are some details about our hardware platform and some tests we
> have conducted:
> 
> Platform: Toradex Apalis iMX6 - NXP i.MX 6Q/6D Arm Cortex A9 with
> Vivante GC2000 rev 5108 using Etnaviv.
> 
> Tests:
> 
> Kernel Versions - v6.1.87 and v6.9-rc4
> Glmark2 Versions - 2021.12 and 2023.01
> 
> We combined different upstream kernel, Mesa, and glmark2 versions and
> ran glmark2 on each
> combination on a mostly idle system. The benchmark was run 20 times on
> each combination.
> 
> Some Results:
> 
> Kernel   | Mesa   | glmark2 | Min-Max Score
> v6.1.87  | 22.0.3 | 2021.12 | 449-495
> v6.9-rc4 | 22.0.3 | 2021.12 | 452-502
> v6.1.87  | 22.0.3 | 2023.01 | 453-504
> v6.9-rc4 | 22.0.3 | 2023.01 | 455-496
> v6.1.87  | 24.0.2 | 2021.12 | 301-313
> v6.9-rc4 | 24.0.2 | 2021.12 | 298-320
> v6.1.87  | 24.0.2 | 2023.01 | 301-313
> v6.9-rc4 | 24.0.2 | 2023.01 | 295-310

Since the glmark2 cumulative score can be skewed quite heavily by
single tests, it would be interesting to compare the results from
individual benchmark tests. Do you see any outliers there or is the
performance drop across the board?

Regards,
Lucas


Possible Performance Regression with Mesa

2024-04-25 Thread Joao Paulo Silva Goncalves
Hello all,

We might have encountered a performance regression after upgrading from
Mesa 22.0.3 to 24.0.2. During our automated hardware tests using LAVA, we
noticed a lower score on glmark2 when we upgraded the OpenEmbedded release
from Kirkstone to Scarthgap. After conducting some internal tests, it
doesn't seem to be an issue with the kernel or the glmark2 tool version,
so we suspect that the issue may be related to something within Mesa. We
believe that there might be something we're overlooking. Do you have any
ideas or insights about this problem?

Here are some details about our hardware platform and some tests we
have conducted:

Platform: Toradex Apalis iMX6 - NXP i.MX 6Q/6D Arm Cortex A9 with
Vivante GC2000 rev 5108 using Etnaviv.

Tests:

Kernel Versions - v6.1.87 and v6.9-rc4
Glmark2 Versions - 2021.12 and 2023.01

We combined different upstream kernel, Mesa, and glmark2 versions and
ran glmark2 on each
combination on a mostly idle system. The benchmark was run 20 times on
each combination.

Some Results:

Kernel   | Mesa   | glmark2 | Min-Max Score
v6.1.87  | 22.0.3 | 2021.12 | 449-495
v6.9-rc4 | 22.0.3 | 2021.12 | 452-502
v6.1.87  | 22.0.3 | 2023.01 | 453-504
v6.9-rc4 | 22.0.3 | 2023.01 | 455-496
v6.1.87  | 24.0.2 | 2021.12 | 301-313
v6.9-rc4 | 24.0.2 | 2021.12 | 298-320
v6.1.87  | 24.0.2 | 2023.01 | 301-313
v6.9-rc4 | 24.0.2 | 2023.01 | 295-310


Regards,
Joao Paulo Goncalves