On 10/20/2016 02:21 AM, Andrew Morton wrote:
> On Wed, 19 Oct 2016 06:38:14 +0200 Manfred Spraul wrote:
> Hi,
>
> as discussed before:
> The root cause for the performance regression is the smp_mb() that was
> added into the fast path.
>
> I see two options:
> 1) switch to full spin_lock()/spin_unlock() for the rare codepath,
>    then the fast path doesn't need the smp_mb() anymore.
> 2) confirm that no arch needs