Re: [Intel-gfx] [PATCH 4/5] drm/i915: Disable semaphore busywaits on saturated systems

2019-05-03 Thread Ville Syrjälä
On Fri, May 03, 2019 at 03:12:01PM +0100, Chris Wilson wrote:
> Quoting Ville Syrjälä (2019-05-03 15:04:57)
> > On Mon, Apr 29, 2019 at 07:00:19PM +0100, Chris Wilson wrote:
> > > Asking the GPU to busywait on a memory address, perhaps not unexpectedly
> > > in hindsight for a shared system, leads to bus contention that affects
> > > CPU programs trying to concurrently access memory. This can manifest as
> > > a drop in transcode throughput on highly over-saturated workloads.
> > 
> > We can't use the signalling semaphore variant?
> 
> That requires us to broadcast a signal to each engine (for which I can
> hear the cries of cross-powerwell wakes), and currently does not work
> with execlists + context-id==0. Or at least it failed in my testing.

Ah. If only we had MI_MWAIT.

-- 
Ville Syrjälä
Intel

Re: [Intel-gfx] [PATCH 4/5] drm/i915: Disable semaphore busywaits on saturated systems

2019-05-03 Thread Chris Wilson
Quoting Ville Syrjälä (2019-05-03 15:04:57)
> On Mon, Apr 29, 2019 at 07:00:19PM +0100, Chris Wilson wrote:
> > Asking the GPU to busywait on a memory address, perhaps not unexpectedly
> > in hindsight for a shared system, leads to bus contention that affects
> > CPU programs trying to concurrently access memory. This can manifest as
> > a drop in transcode throughput on highly over-saturated workloads.
> 
> We can't use the signalling semaphore variant?

That requires us to broadcast a signal to each engine (for which I can
hear the cries of cross-powerwell wakes), and currently does not work
with execlists + context-id==0. Or at least it failed in my testing.
-Chris

Re: [Intel-gfx] [PATCH 4/5] drm/i915: Disable semaphore busywaits on saturated systems

2019-05-03 Thread Ville Syrjälä
On Mon, Apr 29, 2019 at 07:00:19PM +0100, Chris Wilson wrote:
> Asking the GPU to busywait on a memory address, perhaps not unexpectedly
> in hindsight for a shared system, leads to bus contention that affects
> CPU programs trying to concurrently access memory. This can manifest as
> a drop in transcode throughput on highly over-saturated workloads.

We can't use the signalling semaphore variant?

-- 
Ville Syrjälä
Intel

Re: [Intel-gfx] [PATCH 4/5] drm/i915: Disable semaphore busywaits on saturated systems

2019-05-03 Thread Tvrtko Ursulin


On 29/04/2019 19:00, Chris Wilson wrote:

Asking the GPU to busywait on a memory address, perhaps not unexpectedly
in hindsight for a shared system, leads to bus contention that affects
CPU programs trying to concurrently access memory. This can manifest as
a drop in transcode throughput on highly over-saturated workloads.

The only clue offered by perf is that the bus-cycles (perf stat -e
bus-cycles) jumped by 50% when enabling semaphores. This corresponds
with extra CPU active cycles being attributed to intel_idle's mwait.

This patch introduces a heuristic to try and detect when more than one
client is submitting to the GPU, pushing it into an oversaturated state.
As we already keep track of when the semaphores are signaled, we can
inspect their state on submitting the busywait batch and, if we planned
to use a semaphore but were too late, conclude that the GPU is
overloaded and not try to use semaphores in future requests. In
practice, this means we optimistically try to use semaphores for the
first frame of a transcode job split over multiple engines, fail if
there are multiple clients active, and continue not to use semaphores for
the subsequent frames in the sequence. Periodically, we try to
optimistically switch semaphores back on whenever the client waits to
catch up with the transcode results.
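
As a rough sketch of that flow (a self-contained toy model, not the i915
implementation; every name in it is invented for illustration):

/*
 * Toy model of the saturation heuristic described above. Not i915 code;
 * "toy_context", "toy_request" and engine_mask_t are illustrative only.
 */
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t engine_mask_t;

struct toy_context {
	engine_mask_t saturated;   /* engines whose semaphores we met too late */
};

struct toy_request {
	struct toy_context *ctx;
	engine_mask_t semaphores;  /* engines this request busywaits on */
	bool deps_signaled;        /* awaited requests already completed? */
};

/* At await time: skip the busywait for engines already deemed saturated. */
static bool toy_may_use_semaphore(const struct toy_request *rq,
				  engine_mask_t wait_on)
{
	return !(rq->ctx->saturated & wait_on);
}

/*
 * At submit time: if we planned a busywait but the dependency had already
 * signaled, we were too late -- the GPU is oversubscribed, so remember it
 * and stop emitting semaphores for those engines in future requests.
 */
static void toy_note_saturation(struct toy_request *rq)
{
	if (rq->semaphores && rq->deps_signaled)
		rq->ctx->saturated |= rq->semaphores;
}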

With 1 client, on Broxton J3455, with the relative fps normalized by %cpu:

x no semaphores
+ drm-tip
* throttle
[ministat distribution plot of the three runs]
    N           Min           Max        Median           Avg        Stddev
x 120       2.60475       3.50941       3.31123     3.2143953    0.21117399
+ 120        2.3826       3.57077       3.25101     3.1414161    0.28146407
Difference at 95.0% confidence
	-0.0729792 +/- 0.0629585
	-2.27039% +/- 1.95864%
	(Student's t, pooled s = 0.248814)
* 120       2.35536       3.66713        3.2849     3.2059917    0.24618565
No difference proven at 95.0% confidence

With 10 clients over-saturating the pipeline:

x no semaphores
+ drm-tip
* patch
[ministat distribution plot of the three runs; remainder of message truncated]

Re: [Intel-gfx] [PATCH 4/5] drm/i915: Disable semaphore busywaits on saturated systems

2019-04-30 Thread Chris Wilson
Quoting Tvrtko Ursulin (2019-04-30 09:55:59)
> 
> On 29/04/2019 19:00, Chris Wilson wrote:
> So I am still leaning towards being cautious and just abandoning 
> semaphores for now.

Fwiw, we have another 4 weeks to pull the plug for 5.2.
-Chris

Re: [Intel-gfx] [PATCH 4/5] drm/i915: Disable semaphore busywaits on saturated systems

2019-04-30 Thread Chris Wilson
Quoting Tvrtko Ursulin (2019-04-30 09:55:59)
> 
> On 29/04/2019 19:00, Chris Wilson wrote:
> > Asking the GPU to busywait on a memory address, perhaps not unexpectedly
> > in hindsight for a shared system, leads to bus contention that affects
> > CPU programs trying to concurrently access memory. This can manifest as
> > a drop in transcode throughput on highly over-saturated workloads.
> > 
> > The only clue offered by perf is that the bus-cycles (perf stat -e
> > bus-cycles) jumped by 50% when enabling semaphores. This corresponds
> > with extra CPU active cycles being attributed to intel_idle's mwait.
> > 
> > This patch introduces a heuristic to try and detect when more than one
> > client is submitting to the GPU, pushing it into an oversaturated state.
> > As we already keep track of when the semaphores are signaled, we can
> > inspect their state on submitting the busywait batch and, if we planned
> > to use a semaphore but were too late, conclude that the GPU is
> > overloaded and not try to use semaphores in future requests. In
> > practice, this means we optimistically try to use semaphores for the
> > first frame of a transcode job split over multiple engines, fail if
> > there are multiple clients active, and continue not to use semaphores for
> > the subsequent frames in the sequence. Periodically, we try to
> > optimistically switch semaphores back on whenever the client waits to
> > catch up with the transcode results.
> > 
> 
> [snipped long benchmark results]
> 
> > Indicating that we've recovered the regression from enabling semaphores
> > on this saturated setup, with a hint towards an overall improvement.
> > 
> > Very similar, but of smaller magnitude, results are observed on both
> > Skylake (gt2) and Kabylake (gt4). This may be due to the reduced impact
> > of bus-cycles: where we see a 50% hit on Broxton, it is only 10% on the
> > big core, in this particular test.
> > 
> > One observation to make here is that for a greedy client trying to
> > maximise its own throughput, using semaphores is the right choice. It is
> > only in the holistic system-wide view, where the semaphores of one client
> > impact another and reduce the overall throughput, that we would choose
> > to disable semaphores.
> 
> Since we acknowledge the problem is the shared nature of the iGPU, my 
> concern is that we still cannot account for both partners here when 
> deciding to omit semaphore emission. In other words, we trade bus 
> throughput for submission latency.
> 
> Assuming a light GPU task (in the sense of not oversubscribing, but with 
> ping-pong inter-engine dependencies), simultaneous to a heavier CPU 
> task, our latency improvement still imposes a performance penalty on the 
> latter.

Maybe, maybe not. I think you have to be in a position where there is
no GPU latency to be gained for the increased bus traffic to be a net loss.
 
> For instance, a consumer-level single-stream transcoding session with a 
> CPU-heavy part of the pipeline, or a CPU-intensive game.
> 
> (Ideally we would need a bus saturation signal to feed into our logic, 
> not just engine saturation. Which I don't think is possible.)
> 
> So I am still leaning towards being cautious and just abandoning 
> semaphores for now.

Being greedy, the single consumer case is compelling. The same
benchmarks see 5-10% throughput improvement for the single client
(depending on machine).
-Chris

Re: [Intel-gfx] [PATCH 4/5] drm/i915: Disable semaphore busywaits on saturated systems

2019-04-30 Thread Tvrtko Ursulin


On 29/04/2019 19:00, Chris Wilson wrote:

Asking the GPU to busywait on a memory address, perhaps not unexpectedly
in hindsight for a shared system, leads to bus contention that affects
CPU programs trying to concurrently access memory. This can manifest as
a drop in transcode throughput on highly over-saturated workloads.

The only clue offered by perf is that the bus-cycles (perf stat -e
bus-cycles) jumped by 50% when enabling semaphores. This corresponds
with extra CPU active cycles being attributed to intel_idle's mwait.

This patch introduces a heuristic to try and detect when more than one
client is submitting to the GPU, pushing it into an oversaturated state.
As we already keep track of when the semaphores are signaled, we can
inspect their state on submitting the busywait batch and, if we planned
to use a semaphore but were too late, conclude that the GPU is
overloaded and not try to use semaphores in future requests. In
practice, this means we optimistically try to use semaphores for the
first frame of a transcode job split over multiple engines, fail if
there are multiple clients active, and continue not to use semaphores for
the subsequent frames in the sequence. Periodically, we try to
optimistically switch semaphores back on whenever the client waits to
catch up with the transcode results.



[snipped long benchmark results]


Indicating that we've recovered the regression from enabling semaphores
on this saturated setup, with a hint towards an overall improvement.

Very similar, but of smaller magnitude, results are observed on both
Skylake (gt2) and Kabylake (gt4). This may be due to the reduced impact of
bus-cycles: where we see a 50% hit on Broxton, it is only 10% on the big
core, in this particular test.

One observation to make here is that for a greedy client trying to
maximise its own throughput, using semaphores is the right choice. It is
only in the holistic system-wide view, where the semaphores of one client
impact another and reduce the overall throughput, that we would choose
to disable semaphores.


Since we acknowledge the problem is the shared nature of the iGPU, my 
concern is that we still cannot account for both partners here when 
deciding to omit semaphore emission. In other words, we trade bus 
throughput for submission latency.


Assuming a light GPU task (in the sense of not oversubscribing, but with 
ping-pong inter-engine dependencies), simultaneous to a heavier CPU 
task, our latency improvement still imposes a performance penalty on the 
latter.


For instance, a consumer-level single-stream transcoding session with a 
CPU-heavy part of the pipeline, or a CPU-intensive game.


(Ideally we would need a bus saturation signal to feed into our logic, 
not just engine saturation. Which I don't think is possible.)


So I am still leaning towards being cautious and just abandoning 
semaphores for now.


Regards,

Tvrtko


The most noticeable negative impact this has is on the no-op
microbenchmarks, which are also very notable for having no CPU bus load.
In particular, this increases the runtime and energy consumption of
gem_exec_whisper.

Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
Cc: Dmitry Rogozhkin 
Cc: Dmitry Ermilov 
Cc: Joonas Lahtinen 
---
  drivers/gpu/drm/i915/gt/intel_context.c   |  2 ++
  drivers/gpu/drm/i915/gt/intel_context_types.h |  3 ++
  drivers/gpu/drm/i915/i915_request.c   | 28 ++-
  3 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
index 1f1761fc6597..5b31e1e05ddd 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -116,6 +116,7 @@ intel_context_init(struct intel_context *ce,
 	ce->engine = engine;
 	ce->ops = engine->cops;
 	ce->sseu = engine->sseu;
+	ce->saturated = 0;
 
 	INIT_LIST_HEAD(&ce->signal_link);
 	INIT_LIST_HEAD(&ce->signals);
@@ -158,6 +159,7 @@ void intel_context_enter_engine(struct intel_context *ce)
 
 void intel_context_exit_engine(struct intel_context *ce)
 {
+	ce->saturated = 0;
 	intel_engine_pm_put(ce->engine);
 }
 
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
index d5a7dbd0daee..963a312430e6 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -13,6 +13,7 @@
 #include <linux/types.h>
 
 #include "i915_active_types.h"
+#include "intel_engine_types.h"
 #include "intel_sseu.h"
 
 struct i915_gem_context;
@@ -52,6 +53,8 @@ struct intel_context {
 	atomic_t pin_count;
 	struct mutex pin_mutex; /* guards pinning and associated on-gpuing */
 
+	intel_engine_mask_t saturated; /* submitting semaphores too late? */
+
 	/**
 	 * active_tracker: Active tracker for the external rq activity
 	 * on this intel_context object.
diff --git a/drivers/gpu/drm/i915/i915_request.c
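
The i915_request.c side of the diff is cut off by the archive here. Purely as
an illustration of the "periodically switch semaphores back on" behaviour, and
mirroring the ce->saturated = 0 added to intel_context_exit_engine() above, a
minimal standalone sketch (not the actual patch content; names are invented)
could look like:

/*
 * Standalone sketch, not kernel code: clearing the per-context saturation
 * mask whenever the context leaves the engine means that once the client
 * stalls to collect its results and the context idles, the next burst of
 * requests gets to try semaphores optimistically again.
 */
#include <stdint.h>

typedef uint32_t engine_mask_t;

struct toy_context {
	engine_mask_t saturated;   /* engines not to busywait on, for now */
};

/* Mirrors the ce->saturated = 0 in intel_context_exit_engine() above. */
static void toy_context_exit_engine(struct toy_context *ce)
{
	ce->saturated = 0;         /* forget past saturation; retry semaphores */
}

/* Record that busywaits on these engines were submitted too late. */
static void toy_mark_saturated(struct toy_context *ce, engine_mask_t engines)
{
	ce->saturated |= engines;
}

Because the mask is per-context, only the client that was too late loses its
semaphores; a client running alone keeps them.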