Re: [Intel-gfx] [PATCH v5 1/6] drm/damage_helper: Check if damage clips has valid values

2020-12-14 Thread Simon Ser
> Userspace can set a damage clip with a negative coordinate, negative
> width or height, or one that is larger than the plane.
> These invalid values could cause issues in some HW or, even worse, enable
> security flaws.
>
> v2:
> - add debug messages to let userspace know why the atomic commit failed
> due to invalid damage clips
>
> Cc: Simon Ser 
> Cc: Gwan-gyeong Mun 
> Cc: Sean Paul 
> Cc: Fabio Estevam 
> Cc: Deepak Rawat 
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: José Roberto de Souza 

After looking at the kernel code, it seems like the kernel already checks for
all of that in drm_atomic_plane_check. Are you aware of this?

> + w = drm_rect_width(&plane_state->src) >> 16;
> + h = drm_rect_height(&plane_state->src) >> 16;

The docs say this should be in FB coordinates, not in SRC_* coordinates. So we
shouldn't need to check any SRC_* prop here.
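
Something along these lines (an untested sketch on my end, using the FB
attached to the plane state) would match the docs:

	/* FB_DAMAGE_CLIPS rects are specified in framebuffer coordinates */
	w = plane_state->fb->width;
	h = plane_state->fb->height;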



[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: Try to guess PCH type even without ISA bridge

2020-12-14 Thread Patchwork
== Series Details ==

Series: drm/i915: Try to guess PCH type even without ISA bridge
URL   : https://patchwork.freedesktop.org/series/84886/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_9478_full -> Patchwork_19131_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Known issues
------------

  Here are the changes found in Patchwork_19131_full that come from known 
issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_exec_params@rsvd2-dirt:
- shard-iclb: NOTRUN -> [SKIP][1] ([fdo#109283])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19131/shard-iclb5/igt@gem_exec_par...@rsvd2-dirt.html

  * igt@gem_exec_reloc@basic-wide-active@rcs0:
- shard-iclb: NOTRUN -> [FAIL][2] ([i915#2389]) +3 similar issues
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19131/shard-iclb5/igt@gem_exec_reloc@basic-wide-act...@rcs0.html
- shard-kbl:  NOTRUN -> [FAIL][3] ([i915#2389]) +4 similar issues
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19131/shard-kbl2/igt@gem_exec_reloc@basic-wide-act...@rcs0.html

  * igt@gem_render_copy@y-tiled-to-vebox-linear:
- shard-iclb: NOTRUN -> [SKIP][4] ([i915#768])
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19131/shard-iclb5/igt@gem_render_c...@y-tiled-to-vebox-linear.html

  * igt@gen9_exec_parse@batch-without-end:
- shard-iclb: NOTRUN -> [SKIP][5] ([fdo#112306])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19131/shard-iclb5/igt@gen9_exec_pa...@batch-without-end.html

  * igt@kms_big_fb@y-tiled-64bpp-rotate-270:
- shard-iclb: NOTRUN -> [SKIP][6] ([fdo#110725] / [fdo#111614])
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19131/shard-iclb5/igt@kms_big...@y-tiled-64bpp-rotate-270.html

  * igt@kms_big_fb@yf-tiled-8bpp-rotate-0:
- shard-iclb: NOTRUN -> [SKIP][7] ([fdo#110723])
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19131/shard-iclb5/igt@kms_big...@yf-tiled-8bpp-rotate-0.html

  * igt@kms_color_chamelium@pipe-a-ctm-max:
- shard-iclb: NOTRUN -> [SKIP][8] ([fdo#109284] / [fdo#111827]) +3 
similar issues
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19131/shard-iclb5/igt@kms_color_chamel...@pipe-a-ctm-max.html

  * igt@kms_color_chamelium@pipe-b-ctm-limited-range:
- shard-skl:  NOTRUN -> [SKIP][9] ([fdo#109271] / [fdo#111827]) +3 
similar issues
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19131/shard-skl9/igt@kms_color_chamel...@pipe-b-ctm-limited-range.html

  * igt@kms_color_chamelium@pipe-d-ctm-max:
- shard-kbl:  NOTRUN -> [SKIP][10] ([fdo#109271] / [fdo#111827]) +6 
similar issues
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19131/shard-kbl2/igt@kms_color_chamel...@pipe-d-ctm-max.html
- shard-iclb: NOTRUN -> [SKIP][11] ([fdo#109278] / [fdo#109284] / 
[fdo#111827])
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19131/shard-iclb5/igt@kms_color_chamel...@pipe-d-ctm-max.html

  * igt@kms_content_protection@legacy:
- shard-iclb: NOTRUN -> [SKIP][12] ([fdo#109300] / [fdo#111066])
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19131/shard-iclb5/igt@kms_content_protect...@legacy.html

  * igt@kms_content_protection@uevent:
- shard-kbl:  NOTRUN -> [FAIL][13] ([i915#2105])
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19131/shard-kbl2/igt@kms_content_protect...@uevent.html

  * igt@kms_cursor_crc@pipe-b-cursor-512x512-onscreen:
- shard-iclb: NOTRUN -> [SKIP][14] ([fdo#109278] / [fdo#109279]) +1 
similar issue
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19131/shard-iclb5/igt@kms_cursor_...@pipe-b-cursor-512x512-onscreen.html

  * igt@kms_cursor_crc@pipe-c-cursor-256x256-sliding:
- shard-skl:  NOTRUN -> [FAIL][15] ([i915#54])
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19131/shard-skl5/igt@kms_cursor_...@pipe-c-cursor-256x256-sliding.html

  * igt@kms_cursor_crc@pipe-c-cursor-64x21-offscreen:
- shard-skl:  [PASS][16] -> [FAIL][17] ([i915#54]) +2 similar issues
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9478/shard-skl4/igt@kms_cursor_...@pipe-c-cursor-64x21-offscreen.html
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19131/shard-skl1/igt@kms_cursor_...@pipe-c-cursor-64x21-offscreen.html

  * igt@kms_cursor_crc@pipe-d-cursor-128x128-sliding:
- shard-iclb: NOTRUN -> [SKIP][18] ([fdo#109278]) +6 similar issues
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19131/shard-iclb5/igt@kms_cursor_...@pipe-d-cursor-128x128-sliding.html

  * igt@kms_cursor_legacy@cursor-vs-flip-atomic-transitions-varying-size:
- shard-hsw:  [PASS][19] -> [FAIL][20] ([i915#2370])
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9478/shard-hsw1/igt@kms_

Re: [Intel-gfx] [PATCH v5 1/6] drm/damage_helper: Check if damage clips has valid values

2020-12-14 Thread Mun, Gwan-gyeong
On Mon, 2020-12-14 at 08:55 +, Simon Ser wrote:
> > Userspace can set a damage clip with a negative coordinate, negative
> > width or height, or one that is larger than the plane.
> > These invalid values could cause issues in some HW or, even worse,
> > enable security flaws.
> > 
> > v2:
> > - add debug messages to let userspace know why the atomic commit failed
> > due to invalid damage clips
> > 
> > Cc: Simon Ser 
> > Cc: Gwan-gyeong Mun 
> > Cc: Sean Paul 
> > Cc: Fabio Estevam 
> > Cc: Deepak Rawat 
> > Cc: dri-de...@lists.freedesktop.org
> > Signed-off-by: José Roberto de Souza 
> 
> After looking at the kernel code, it seems like the kernel already
> checks for
> all of that in drm_atomic_plane_check. Are you aware of this?
> 
> > +   w = drm_rect_width(&plane_state->src) >> 16;
> > +   h = drm_rect_height(&plane_state->src) >> 16;
> 
> The docs say this should be in FB coordinates, not in SRC_*
> coordinates. So we
> shouldn't need to check any SRC_* prop here.
> 
I agree with Simon's opinion. The check should be between the plane's
framebuffer src geometry and the damage clips. (A plane's damage clip
might exist outside of the fb src geometry.)
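
For reference, a rough sketch of that check with the drm_rect helpers
(illustrative only; "clip" stands for a drm_rect copy of one damage clip,
inside a loop over the clips):

	struct drm_rect src;

	/* plane_state->src is 16.16 fixed point, damage clips are not */
	drm_rect_fp_to_int(&src, &plane_state->src);

	if (!drm_rect_intersect(&clip, &src))
		continue; /* clip lies entirely outside the fb src */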


Re: [Intel-gfx] [PATCH v5 2/6] drm/i915/display: Check plane damage clips

2020-12-14 Thread Mun, Gwan-gyeong
On Sun, 2020-12-13 at 10:39 -0800, José Roberto de Souza wrote:
> Call the function that validates every damage clip of each plane.
> As in commit 093a3a39 ("drm/i915: Add plane damage clips
> property") this property was only enabled for gen12+, so it is only
> checked for gen12 too.
> 
> v2:
> - add logs to let userspace understand why the commit was rejected
> 
> Cc: Gwan-gyeong Mun 
> Cc: Ville Syrjälä 
> Signed-off-by: José Roberto de Souza 
> ---
>  drivers/gpu/drm/i915/display/intel_sprite.c | 7 +++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_sprite.c
> b/drivers/gpu/drm/i915/display/intel_sprite.c
> index b7e208816074..cb862bb8d6fb 100644
> --- a/drivers/gpu/drm/i915/display/intel_sprite.c
> +++ b/drivers/gpu/drm/i915/display/intel_sprite.c
> @@ -2492,6 +2492,13 @@ static int skl_plane_check(struct
> intel_crtc_state *crtc_state,
>   if (ret)
>   return ret;
>  
> + if (INTEL_GEN(dev_priv) >= 12) {
> + ret = drm_atomic_helper_check_plane_damage(crtc_state-
> >uapi.state,
> +
we should only exclude damage clips which lie entirely outside the fb src.
therefore if a damage clip intersects the fb src rect at all, we should
handle that damage clip region (see the iterator sketch below).
>  &plane_state
> ->uapi);
> + if (ret)
> + return ret;
> + }
> +
>   /* HW only has 8 bits pixel precision, disable plane if
> invisible */
>   if (!(plane_state->hw.alpha >> 8))
>   plane_state->uapi.visible = false;
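
FWIW the damage iterator helper already does this clipping for the driver.
A minimal usage sketch (assuming the standard helper API from
drm_damage_helper.h):

	struct drm_atomic_helper_damage_iter iter;
	struct drm_rect clip;

	/*
	 * The iterator only yields damage rects already clipped to the
	 * plane src, so clips fully outside the fb src never show up.
	 */
	drm_atomic_helper_damage_iter_init(&iter, old_plane_state, new_plane_state);
	drm_atomic_for_each_plane_damage(&iter, &clip) {
		/* handle the clipped damage region */
	}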


[Intel-gfx] [CI 2/3] drm/i915/pmu: Use raw clock for rc6 estimation

2020-12-14 Thread Tvrtko Ursulin
From: Tvrtko Ursulin 

RC6 is a hardware counter, and as such estimating it using the raw clock
during runtime suspend is more appropriate.

Signed-off-by: Tvrtko Ursulin 
References: 34f439278cef ("perf: Add per event clockid support")
Cc: Chris Wilson 
Reviewed-by: Chris Wilson 
---
 drivers/gpu/drm/i915/i915_pmu.c | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
index 204253c2f2c0..ca11922e1102 100644
--- a/drivers/gpu/drm/i915/i915_pmu.c
+++ b/drivers/gpu/drm/i915/i915_pmu.c
@@ -163,9 +163,9 @@ static u64 __get_rc6(struct intel_gt *gt)
 
 #if IS_ENABLED(CONFIG_PM)
 
-static inline s64 ktime_since(const ktime_t kt)
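+/* ktime_get_raw() is not subject to NTP adjustment, like the HW counter */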
+static inline s64 ktime_since_raw(const ktime_t kt)
 {
-   return ktime_to_ns(ktime_sub(ktime_get(), kt));
+   return ktime_to_ns(ktime_sub(ktime_get_raw(), kt));
 }
 
 static u64 get_rc6(struct intel_gt *gt)
@@ -194,7 +194,7 @@ static u64 get_rc6(struct intel_gt *gt)
 * on top of the last known real value, as the approximated RC6
 * counter value.
 */
-   val = ktime_since(pmu->sleep_last);
+   val = ktime_since_raw(pmu->sleep_last);
val += pmu->sample[__I915_SAMPLE_RC6].cur;
}
 
@@ -217,7 +217,7 @@ static void init_rc6(struct i915_pmu *pmu)
pmu->sample[__I915_SAMPLE_RC6].cur = __get_rc6(&i915->gt);
pmu->sample[__I915_SAMPLE_RC6_LAST_REPORTED].cur =
pmu->sample[__I915_SAMPLE_RC6].cur;
-   pmu->sleep_last = ktime_get();
+   pmu->sleep_last = ktime_get_raw();
}
 }
 
@@ -226,7 +226,7 @@ static void park_rc6(struct drm_i915_private *i915)
struct i915_pmu *pmu = &i915->pmu;
 
pmu->sample[__I915_SAMPLE_RC6].cur = __get_rc6(&i915->gt);
-   pmu->sleep_last = ktime_get();
+   pmu->sleep_last = ktime_get_raw();
 }
 
 #else
-- 
2.25.1



[Intel-gfx] [CI 1/3] drm/i915/pmu: Don't grab wakeref when enabling events

2020-12-14 Thread Tvrtko Ursulin
From: Tvrtko Ursulin 

Chris found a CI report which points out that calling intel_runtime_pm_get
from inside the i915_pmu_enable hook is not allowed, since it can be invoked
from hard irq context. This is something we knew but forgot, so let's fix it
once again.

We do this by syncing the internal bookkeeping with the hardware rc6 counter
on driver load.

v2:
 * Always sync on parking and fully sync on init.

Signed-off-by: Tvrtko Ursulin 
Fixes: f4e9894b6952 ("drm/i915/pmu: Correct the rc6 offset upon enabling")
Cc: Chris Wilson 
Reviewed-by: Chris Wilson 
---
 drivers/gpu/drm/i915/i915_pmu.c | 39 ++---
 1 file changed, 16 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
index 97bb4aaa5236..204253c2f2c0 100644
--- a/drivers/gpu/drm/i915/i915_pmu.c
+++ b/drivers/gpu/drm/i915/i915_pmu.c
@@ -103,11 +103,6 @@ static unsigned int event_bit(struct perf_event *event)
return config_bit(event->attr.config);
 }
 
-static bool event_read_needs_wakeref(const struct perf_event *event)
-{
-   return event->attr.config == I915_PMU_RC6_RESIDENCY;
-}
-
 static bool pmu_needs_timer(struct i915_pmu *pmu, bool gpu_active)
 {
struct drm_i915_private *i915 = container_of(pmu, typeof(*i915), pmu);
@@ -213,13 +208,24 @@ static u64 get_rc6(struct intel_gt *gt)
return val;
 }
 
-static void park_rc6(struct drm_i915_private *i915)
+static void init_rc6(struct i915_pmu *pmu)
 {
-   struct i915_pmu *pmu = &i915->pmu;
+   struct drm_i915_private *i915 = container_of(pmu, typeof(*i915), pmu);
+   intel_wakeref_t wakeref;
 
-   if (pmu->enable & config_mask(I915_PMU_RC6_RESIDENCY))
+   with_intel_runtime_pm(i915->gt.uncore->rpm, wakeref) {
pmu->sample[__I915_SAMPLE_RC6].cur = __get_rc6(&i915->gt);
+   pmu->sample[__I915_SAMPLE_RC6_LAST_REPORTED].cur =
+   pmu->sample[__I915_SAMPLE_RC6].cur;
+   pmu->sleep_last = ktime_get();
+   }
+}
 
+static void park_rc6(struct drm_i915_private *i915)
+{
+   struct i915_pmu *pmu = &i915->pmu;
+
+   pmu->sample[__I915_SAMPLE_RC6].cur = __get_rc6(&i915->gt);
pmu->sleep_last = ktime_get();
 }
 
@@ -230,6 +236,7 @@ static u64 get_rc6(struct intel_gt *gt)
return __get_rc6(gt);
 }
 
+static void init_rc6(struct i915_pmu *pmu) { }
 static void park_rc6(struct drm_i915_private *i915) {}
 
 #endif
@@ -655,15 +662,10 @@ static void i915_pmu_enable(struct perf_event *event)
 {
struct drm_i915_private *i915 =
container_of(event->pmu, typeof(*i915), pmu.base);
-   bool need_wakeref = event_read_needs_wakeref(event);
struct i915_pmu *pmu = &i915->pmu;
-   intel_wakeref_t wakeref = 0;
unsigned long flags;
unsigned int bit;
 
-   if (need_wakeref)
-   wakeref = intel_runtime_pm_get(&i915->runtime_pm);
-
bit = event_bit(event);
if (bit == -1)
goto update;
@@ -678,13 +680,6 @@ static void i915_pmu_enable(struct perf_event *event)
GEM_BUG_ON(bit >= ARRAY_SIZE(pmu->enable_count));
GEM_BUG_ON(pmu->enable_count[bit] == ~0);
 
-   if (pmu->enable_count[bit] == 0 &&
-   config_mask(I915_PMU_RC6_RESIDENCY) & BIT_ULL(bit)) {
-   pmu->sample[__I915_SAMPLE_RC6_LAST_REPORTED].cur = 0;
-   pmu->sample[__I915_SAMPLE_RC6].cur = __get_rc6(&i915->gt);
-   pmu->sleep_last = ktime_get();
-   }
-
pmu->enable |= BIT_ULL(bit);
pmu->enable_count[bit]++;
 
@@ -726,9 +721,6 @@ static void i915_pmu_enable(struct perf_event *event)
 * an existing non-zero value.
 */
local64_set(&event->hw.prev_count, __i915_pmu_event_read(event));
-
-   if (wakeref)
-   intel_runtime_pm_put(&i915->runtime_pm, wakeref);
 }
 
 static void i915_pmu_disable(struct perf_event *event)
@@ -1187,6 +1179,7 @@ void i915_pmu_register(struct drm_i915_private *i915)
hrtimer_init(&pmu->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
pmu->timer.function = i915_sample;
pmu->cpuhp.cpu = -1;
+   init_rc6(pmu);
 
if (!is_igp(i915)) {
pmu->name = kasprintf(GFP_KERNEL,
-- 
2.25.1



[Intel-gfx] [CI 3/3] drm/i915/pmu: Remove !CONFIG_PM code

2020-12-14 Thread Tvrtko Ursulin
From: Tvrtko Ursulin 

Chris spotted that since 16ffe73c186b ("drm/i915/pmu: Use GT parked for
estimating RC6 while asleep") we don't rely on runtime pm internals when
estimating RC6 while asleep. We can remove the ifdef code to simplify, and
at the same time wake up the device less when querying RC6 if CONFIG_PM is
not compiled in.

Signed-off-by: Tvrtko Ursulin 
References: 16ffe73c186b ("drm/i915/pmu: Use GT parked for estimating RC6 while 
asleep")
Reported-by: Chris Wilson 
Reviewed-by: Chris Wilson 
---
 drivers/gpu/drm/i915/i915_pmu.c | 14 --
 1 file changed, 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
index ca11922e1102..37716a89c682 100644
--- a/drivers/gpu/drm/i915/i915_pmu.c
+++ b/drivers/gpu/drm/i915/i915_pmu.c
@@ -161,8 +161,6 @@ static u64 __get_rc6(struct intel_gt *gt)
return val;
 }
 
-#if IS_ENABLED(CONFIG_PM)
-
 static inline s64 ktime_since_raw(const ktime_t kt)
 {
return ktime_to_ns(ktime_sub(ktime_get_raw(), kt));
@@ -229,18 +227,6 @@ static void park_rc6(struct drm_i915_private *i915)
pmu->sleep_last = ktime_get_raw();
 }
 
-#else
-
-static u64 get_rc6(struct intel_gt *gt)
-{
-   return __get_rc6(gt);
-}
-
-static void init_rc6(struct i915_pmu *pmu) { }
-static void park_rc6(struct drm_i915_private *i915) {}
-
-#endif
-
 static void __i915_pmu_maybe_start_timer(struct i915_pmu *pmu)
 {
if (!pmu->timer_enabled && pmu_needs_timer(pmu, true)) {
-- 
2.25.1



Re: [Intel-gfx] [PATCH v5 1/6] drm/damage_helper: Check if damage clips has valid values

2020-12-14 Thread Daniel Vetter
On Mon, Dec 14, 2020 at 09:27:30AM +, Mun, Gwan-gyeong wrote:
> On Mon, 2020-12-14 at 08:55 +, Simon Ser wrote:
> > > Userspace can set a damage clip with a negative coordinate, negative
> > > width or height, or one that is larger than the plane.
> > > These invalid values could cause issues in some HW or, even worse,
> > > enable security flaws.
> > > 
> > > v2:
> > > - add debug messages to let userspace know why the atomic commit failed
> > > due to invalid damage clips
> > > 
> > > Cc: Simon Ser 
> > > Cc: Gwan-gyeong Mun 
> > > Cc: Sean Paul 
> > > Cc: Fabio Estevam 
> > > Cc: Deepak Rawat 
> > > Cc: dri-de...@lists.freedesktop.org
> > > Signed-off-by: José Roberto de Souza 
> > 
> > After looking at the kernel code, it seems like the kernel already
> > checks for
> > all of that in drm_atomic_plane_check. Are you aware of this?
> > 
> > > + w = drm_rect_width(&plane_state->src) >> 16;
> > > + h = drm_rect_height(&plane_state->src) >> 16;
> > 
> > The docs say this should be in FB coordinates, not in SRC_*
> > coordinates. So we
> > shouldn't need to check any SRC_* prop here.
> > 
> I agree with Simon's opinion. The check should be between the plane's
> framebuffer src geometry and the damage clips. (A plane's damage clip
> might exist outside of the fb src geometry.)

Since this is causing confusion, please make sure that the igt for damage
clips validates this correctly. I think some of the igts that the vmwgfx
people have created still have not landed, so we definitely want to land
these.

Note that basic damage clips tests should be fully generic, i.e. not tied
to anything driver specific like our psr testcases are.
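
Something like this hypothetical libdrm fragment (property/handle setup
elided; fd, req, plane_id, fb_width and damage_clips_prop are illustrative
names) is the kind of generic negative test I mean:

	/* a damage clip entirely outside the framebuffer must be rejected */
	struct drm_mode_rect damage = {
		.x1 = fb_width + 16, .y1 = 0,
		.x2 = fb_width + 32, .y2 = 16,
	};
	uint32_t blob_id;
	int ret;

	drmModeCreatePropertyBlob(fd, &damage, sizeof(damage), &blob_id);
	drmModeAtomicAddProperty(req, plane_id, damage_clips_prop, blob_id);
	ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);
	assert(ret < 0); /* the kernel should refuse the invalid clip */
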
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [CI,1/3] drm/i915/pmu: Don't grab wakeref when enabling events

2020-12-14 Thread Patchwork
== Series Details ==

Series: series starting with [CI,1/3] drm/i915/pmu: Don't grab wakeref when 
enabling events
URL   : https://patchwork.freedesktop.org/series/84898/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
5f62030b7d49 drm/i915/pmu: Don't grab wakeref when enabling events
e28201228e02 drm/i915/pmu: Use raw clock for rc6 estimation
d781b1a46f9c drm/i915/pmu: Remove !CONFIG_PM code
-:6: ERROR:GIT_COMMIT_ID: Please use git commit description style 'commit <12+ 
chars of sha1> ("<title line>")' - ie: 'commit 16ffe73c186b ("drm/i915/pmu: Use 
GT parked for estimating RC6 while asleep")'
#6: 
Chris spotted that since 16ffe73c186b ("drm/i915/pmu: Use GT parked for

total: 1 errors, 0 warnings, 0 checks, 26 lines checked




[Intel-gfx] ✗ Fi.CI.DOCS: warning for series starting with [CI,1/3] drm/i915/pmu: Don't grab wakeref when enabling events

2020-12-14 Thread Patchwork
== Series Details ==

Series: series starting with [CI,1/3] drm/i915/pmu: Don't grab wakeref when 
enabling events
URL   : https://patchwork.freedesktop.org/series/84898/
State : warning

== Summary ==

$ make htmldocs 2>&1 > /dev/null | grep i915
Error: Cannot open file ./drivers/gpu/drm/i915/gt/intel_lrc.c
WARNING: kernel-doc './scripts/kernel-doc -rst -enable-lineno -sphinx-version 
1.7.9 -function Logical Rings, Logical Ring Contexts and Execlists 
./drivers/gpu/drm/i915/gt/intel_lrc.c' failed with return code 1




[Intel-gfx] [PATCH 22/69] drm/i915/selftests: Confirm RING_TIMESTAMP / CTX_TIMESTAMP share a clock

2020-12-14 Thread Chris Wilson
We assume that both timestamps are driven off the same clock [reported
to userspace as I915_PARAM_CS_TIMESTAMP_FREQUENCY]. Verify that this is
so by reading the timestamp registers around a busywait (on an otherwise
idle engine so there should be no preemptions).

v2: Icelake (not ehl, nor tgl) seems to be using a fixed 80ns interval
for, and only for, CTX_TIMESTAMP. As far as I can tell, this behaviour
is undocumented.

Signed-off-by: Chris Wilson 
Cc: Mika Kuoppala 
---
 drivers/gpu/drm/i915/gt/selftest_engine_pm.c | 202 +++
 1 file changed, 202 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_pm.c 
b/drivers/gpu/drm/i915/gt/selftest_engine_pm.c
index b08fc5390e8a..5fcbadc8d4f1 100644
--- a/drivers/gpu/drm/i915/gt/selftest_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/selftest_engine_pm.c
@@ -4,13 +4,214 @@
  * Copyright © 2018 Intel Corporation
  */
 
+#include <linux/sort.h>
+
 #include "i915_selftest.h"
+#include "intel_gt_clock_utils.h"
 #include "selftest_engine.h"
 #include "selftest_engine_heartbeat.h"
 #include "selftests/igt_atomic.h"
 #include "selftests/igt_flush_test.h"
 #include "selftests/igt_spinner.h"
 
+#define COUNT 5
+
+static int cmp_u64(const void *A, const void *B)
+{
+   const u64 *a = A, *b = B;
+
+   /* 'return *a - *b' truncates to int and can report the wrong order */
+   if (*a < *b)
+   return -1;
+   else if (*a > *b)
+   return 1;
+   return 0;
+}
+
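+/* with COUNT == 5: drop min/max, average the middle three, median weighted */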
+static u64 trifilter(u64 *a)
+{
+   sort(a, COUNT, sizeof(*a), cmp_u64, NULL);
+   return (a[1] + 2 * a[2] + a[3]) >> 2;
+}
+
+static u32 *emit_wait(u32 *cs, u32 offset, int op, u32 value)
+{
+   *cs++ = MI_SEMAPHORE_WAIT |
+   MI_SEMAPHORE_GLOBAL_GTT |
+   MI_SEMAPHORE_POLL |
+   op;
+   *cs++ = value;
+   *cs++ = offset;
+   *cs++ = 0;
+
+   return cs;
+}
+
+static u32 *emit_store(u32 *cs, u32 offset, u32 value)
+{
+   *cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
+   *cs++ = offset;
+   *cs++ = 0;
+   *cs++ = value;
+
+   return cs;
+}
+
+static u32 *emit_srm(u32 *cs, i915_reg_t reg, u32 offset)
+{
+   *cs++ = MI_STORE_REGISTER_MEM_GEN8 | MI_USE_GGTT;
+   *cs++ = i915_mmio_reg_offset(reg);
+   *cs++ = offset;
+   *cs++ = 0;
+
+   return cs;
+}
+
+static void write_semaphore(u32 *x, u32 value)
+{
+   WRITE_ONCE(*x, value);
+   wmb();
+}
+
+static int __measure_timestamps(struct intel_context *ce,
+   u64 *dt, u64 *d_ring, u64 *d_ctx)
+{
+   struct intel_engine_cs *engine = ce->engine;
+   u32 *sema = memset32(engine->status_page.addr + 1000, 0, 5);
+   u32 offset = i915_ggtt_offset(engine->status_page.vma);
+   struct i915_request *rq;
+   u32 *cs;
+
+   rq = intel_context_create_request(ce);
+   if (IS_ERR(rq))
+   return PTR_ERR(rq);
+
+   cs = intel_ring_begin(rq, 28);
+   if (IS_ERR(cs)) {
+   i915_request_add(rq);
+   return PTR_ERR(cs);
+   }
+
+   /* Signal & wait for start */
+   cs = emit_store(cs, offset + 4008, 1);
+   cs = emit_wait(cs, offset + 4008, MI_SEMAPHORE_SAD_NEQ_SDD, 1);
+
+   cs = emit_srm(cs, RING_TIMESTAMP(engine->mmio_base), offset + 4000);
+   cs = emit_srm(cs, RING_CTX_TIMESTAMP(engine->mmio_base), offset + 4004);
+
+   /* Busy wait */
+   cs = emit_wait(cs, offset + 4008, MI_SEMAPHORE_SAD_EQ_SDD, 1);
+
+   cs = emit_srm(cs, RING_TIMESTAMP(engine->mmio_base), offset + 4016);
+   cs = emit_srm(cs, RING_CTX_TIMESTAMP(engine->mmio_base), offset + 4012);
+
+   intel_ring_advance(rq, cs);
+   i915_request_get(rq);
+   i915_request_add(rq);
+   intel_engine_flush_submission(engine);
+
+   /* Wait for the request to start executing, that then waits for us */
+   while (READ_ONCE(sema[2]) == 0)
+   cpu_relax();
+
+   /* Run the request for a 100us, sampling timestamps before/after */
+   preempt_disable();
+   *dt = ktime_get_raw_fast_ns();
+   write_semaphore(&sema[2], 0);
+   udelay(100);
+   write_semaphore(&sema[2], 1);
+   *dt = ktime_get_raw_fast_ns() - *dt;
+   preempt_enable();
+
+   if (i915_request_wait(rq, 0, HZ / 2) < 0) {
+   i915_request_put(rq);
+   return -ETIME;
+   }
+   i915_request_put(rq);
+
+   pr_debug("%s CTX_TIMESTAMP: [%x, %x], RING_TIMESTAMP: [%x, %x]\n",
+engine->name, sema[1], sema[3], sema[0], sema[4]);
+
+   *d_ctx = sema[3] - sema[1];
+   *d_ring = sema[4] - sema[0];
+   return 0;
+}
+
+static int __live_engine_timestamps(struct intel_engine_cs *engine)
+{
+   u64 s_ring[COUNT], s_ctx[COUNT], st[COUNT], d_ring, d_ctx, dt;
+   struct intel_context *ce;
+   int i, err = 0;
+
+   ce = intel_context_create(engine);
+   if (IS_ERR(ce))
+   return PTR_ERR(ce);
+
+   for (i = 0; i < COUNT; i++) {
+   err = __measure_timestamps(ce, &st[i], &s_ring[i], &s_ctx[i]);
+   if (err)
+   break;
+   }
+   intel_context_put(ce);
+

[Intel-gfx] [PATCH 04/69] drm/i915/gt: Replace direct submit with direct call to tasklet

2020-12-14 Thread Chris Wilson
Rather than having special case code for opportunistically calling
process_csb() and performing a direct submit while holding the engine
spinlock for submitting the request, simply call the tasklet directly.
This allows us to retain the direct submission path, including the CS
draining to allow fast/immediate submissions, without requiring any
duplicated code paths, and most importantly greatly simplifying the
control flow by removing reentrancy. This will enable us to close a few
races in the virtual engines in the next few patches.

The trickiest part here is to ensure that paired operations (such as
schedule_in/schedule_out) remain under consistent locking domains,
e.g. when pulled outside of the engine->active.lock.

v2: Use bh kicking, see commit 3c53776e29f8 ("Mark HI and TASKLET
softirq synchronous").
v3: Update engine-reset to be tasklet aware

Signed-off-by: Chris Wilson 
Reviewed-by: Mika Kuoppala 
---
 drivers/gpu/drm/i915/gt/intel_engine_cs.c |  35 +++--
 drivers/gpu/drm/i915/gt/intel_engine_pm.c |   2 +-
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |   3 +-
 .../drm/i915/gt/intel_execlists_submission.c  | 140 +++---
 drivers/gpu/drm/i915/gt/intel_reset.c |  60 +---
 drivers/gpu/drm/i915/gt/intel_reset.h |   2 +
 drivers/gpu/drm/i915/gt/selftest_context.c|   2 +-
 drivers/gpu/drm/i915/gt/selftest_execlists.c  |  27 ++--
 drivers/gpu/drm/i915/gt/selftest_hangcheck.c  |   7 +-
 drivers/gpu/drm/i915/gt/selftest_reset.c  |   8 +-
 drivers/gpu/drm/i915/i915_request.c   |  12 +-
 drivers/gpu/drm/i915/i915_request.h   |   1 +
 drivers/gpu/drm/i915/i915_scheduler.c |   4 -
 drivers/gpu/drm/i915/selftests/i915_request.c |   6 +-
 drivers/gpu/drm/i915/selftests/igt_spinner.c  |   3 +
 15 files changed, 162 insertions(+), 150 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c 
b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 97ceaf7116e8..71bd052628f4 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -1003,32 +1003,39 @@ static unsigned long stop_timeout(const struct 
intel_engine_cs *engine)
return READ_ONCE(engine->props.stop_timeout_ms);
 }
 
-int intel_engine_stop_cs(struct intel_engine_cs *engine)
+static int __intel_engine_stop_cs(struct intel_engine_cs *engine,
+ int fast_timeout_us,
+ int slow_timeout_ms)
 {
struct intel_uncore *uncore = engine->uncore;
-   const u32 base = engine->mmio_base;
-   const i915_reg_t mode = RING_MI_MODE(base);
+   const i915_reg_t mode = RING_MI_MODE(engine->mmio_base);
int err;
 
+   intel_uncore_write_fw(uncore, mode, _MASKED_BIT_ENABLE(STOP_RING));
+   err = __intel_wait_for_register_fw(engine->uncore, mode,
+  MODE_IDLE, MODE_IDLE,
+  fast_timeout_us,
+  slow_timeout_ms,
+  NULL);
+
+   /* A final mmio read to let GPU writes be hopefully flushed to memory */
+   intel_uncore_posting_read_fw(uncore, mode);
+   return err;
+}
+
+int intel_engine_stop_cs(struct intel_engine_cs *engine)
+{
+   int err = 0;
+
if (INTEL_GEN(engine->i915) < 3)
return -ENODEV;
 
ENGINE_TRACE(engine, "\n");
-
-   intel_uncore_write_fw(uncore, mode, _MASKED_BIT_ENABLE(STOP_RING));
-
-   err = 0;
-   if (__intel_wait_for_register_fw(uncore,
-mode, MODE_IDLE, MODE_IDLE,
-1000, stop_timeout(engine),
-NULL)) {
+   if (__intel_engine_stop_cs(engine, 1000, stop_timeout(engine))) {
ENGINE_TRACE(engine, "timed out on STOP_RING -> IDLE\n");
err = -ETIMEDOUT;
}
 
-   /* A final mmio read to let GPU writes be hopefully flushed to memory */
-   intel_uncore_posting_read_fw(uncore, mode);
-
return err;
 }
 
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c 
b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index 499b09cb4acf..99574378047f 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -136,7 +136,7 @@ __queue_and_release_pm(struct i915_request *rq,
list_add_tail(&tl->link, &timelines->active_list);
 
/* Hand the request over to HW and so engine_retire() */
-   __i915_request_queue(rq, NULL);
+   __i915_request_queue_bh(rq);
 
/* Let new submissions commence (and maybe retire this timeline) */
__intel_wakeref_defer_park(&engine->wakeref);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h 
b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index ee6312601c56..e71eef157231 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i91

[Intel-gfx] [PATCH 56/69] drm/i915: Move scheduler queue

2020-12-14 Thread Chris Wilson
Extract the scheduling queue from "execlists" into the per-engine
scheduling structs, for reuse by other backends.

Signed-off-by: Chris Wilson 
---
 .../gpu/drm/i915/gem/i915_gem_context_types.h |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_wait.c  |  1 +
 drivers/gpu/drm/i915/gt/intel_engine_cs.c |  6 +--
 drivers/gpu/drm/i915/gt/intel_engine_pm.c |  2 +-
 drivers/gpu/drm/i915/gt/intel_engine_types.h  | 14 ---
 .../drm/i915/gt/intel_execlists_submission.c  | 26 ++--
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 10 ++---
 drivers/gpu/drm/i915/i915_drv.h   |  1 -
 drivers/gpu/drm/i915/i915_request.h   |  2 +-
 drivers/gpu/drm/i915/i915_scheduler.c | 40 +++
 drivers/gpu/drm/i915/i915_scheduler.h | 14 +++
 drivers/gpu/drm/i915/i915_scheduler_types.h   | 15 +++
 .../gpu/drm/i915/selftests/i915_scheduler.c   |  2 +-
 13 files changed, 77 insertions(+), 58 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h 
b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
index 1449f54924e0..99bd7b4f4ffe 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
@@ -19,7 +19,7 @@
 
 #include "gt/intel_context_types.h"
 
-#include "i915_scheduler.h"
+#include "i915_scheduler_types.h"
 #include "i915_sw_fence.h"
 
 struct pid;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c 
b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
index b8b91a7564cf..d905d412 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
@@ -12,6 +12,7 @@
 #include "dma_resv_utils.h"
 #include "i915_gem_ioctls.h"
 #include "i915_gem_object.h"
+#include "i915_scheduler.h"
 
 static long
 i915_gem_object_wait_fence(struct dma_fence *fence,
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c 
b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 01499390aed7..8e9dfa3efe9f 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -591,8 +591,6 @@ void intel_engine_init_execlists(struct intel_engine_cs 
*engine)
memset(execlists->pending, 0, sizeof(execlists->pending));
execlists->active =
memset(execlists->inflight, 0, sizeof(execlists->inflight));
-
-   execlists->queue = RB_ROOT_CACHED;
 }
 
 static void cleanup_status_page(struct intel_engine_cs *engine)
@@ -899,7 +897,7 @@ int intel_engines_init(struct intel_gt *gt)
  */
 void intel_engine_cleanup_common(struct intel_engine_cs *engine)
 {
-   GEM_BUG_ON(!list_empty(&engine->active.requests));
+   i915_sched_fini_engine(&engine->active);
tasklet_kill(&engine->execlists.tasklet); /* flush the callback */
 
cleanup_status_page(engine);
@@ -1222,7 +1220,7 @@ bool intel_engine_is_idle(struct intel_engine_cs *engine)
}
 
/* ELSP is empty, but there are ready requests? E.g. after reset */
-   if (!RB_EMPTY_ROOT(&engine->execlists.queue.rb_root))
+   if (!i915_sched_is_idle(&engine->active))
return false;
 
/* Ring stopped? */
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c 
b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index c3bb96bf8b69..aea8b6eab5ee 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -273,7 +273,7 @@ static int __engine_park(struct intel_wakeref *wf)
if (engine->park)
engine->park(engine);
 
-   engine->execlists.no_priolist = false;
+   engine->active.no_priolist = false;
 
/* While gt calls i915_vma_parked(), we have to break the lock cycle */
intel_gt_pm_put_async(engine->gt);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h 
b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index 16e1c5299df4..ec719eac4dd2 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -154,11 +154,6 @@ struct intel_engine_execlists {
 */
struct timer_list preempt;
 
-   /**
-* @default_priolist: priority list for I915_PRIORITY_NORMAL
-*/
-   struct i915_priolist default_priolist;
-
/**
 * @ccid: identifier for contexts submitted to this engine
 */
@@ -193,11 +188,6 @@ struct intel_engine_execlists {
 */
u32 reset_ccid;
 
-   /**
-* @no_priolist: priority lists disabled
-*/
-   bool no_priolist;
-
/**
 * @submit_reg: gen-specific execlist submission register
 * set to the ExecList Submission Port (elsp) register pre-Gen11 and to
@@ -239,10 +229,6 @@ struct intel_engine_execlists {
 */
unsigned int port_mask;
 
-   /**
-* @queue: queue of requests, in priority lists
-*/
-   struct rb_root_cached queue;
struct rb_root_cached virtual;
 
/**
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 

[Intel-gfx] [PATCH 48/69] drm/i915: Extract request suspension from the execlists backend

2020-12-14 Thread Chris Wilson
Make the ability to suspend and resume a request and its dependents
generic.

Signed-off-by: Chris Wilson 
---
 .../drm/i915/gt/intel_execlists_submission.c  | 148 +-
 drivers/gpu/drm/i915/i915_scheduler.c | 120 ++
 drivers/gpu/drm/i915/i915_scheduler.h |   5 +
 3 files changed, 129 insertions(+), 144 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 86b15da995ea..2963486714b0 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -2703,166 +2703,26 @@ static void post_process_csb(struct i915_request 
**port,
execlists_schedule_out(*port++);
 }
 
-static void __execlists_hold(struct i915_request *rq)
-{
-   LIST_HEAD(list);
-
-   do {
-   struct i915_dependency *p;
-
-   if (i915_request_is_active(rq))
-   __i915_request_unsubmit(rq);
-
-   clear_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
-   list_move_tail(&rq->sched.link, &rq->engine->active.hold);
-   i915_request_set_hold(rq);
-   RQ_TRACE(rq, "on hold\n");
-
-   for_each_waiter(p, rq) {
-   struct i915_request *w =
-   container_of(p->waiter, typeof(*w), sched);
-
-   if (p->flags & I915_DEPENDENCY_WEAK)
-   continue;
-
-   /* Leave semaphores spinning on the other engines */
-   if (w->engine != rq->engine)
-   continue;
-
-   if (!i915_request_is_ready(w))
-   continue;
-
-   if (i915_request_completed(w))
-   continue;
-
-   if (i915_request_on_hold(w))
-   continue;
-
-   list_move_tail(&w->sched.link, &list);
-   }
-
-   rq = list_first_entry_or_null(&list, typeof(*rq), sched.link);
-   } while (rq);
-}
-
 static bool execlists_hold(struct intel_engine_cs *engine,
   struct i915_request *rq)
 {
+   bool result;
+
if (i915_request_on_hold(rq))
return false;
 
spin_lock_irq(&engine->active.lock);
-
-   if (i915_request_completed(rq)) { /* too late! */
-   rq = NULL;
-   goto unlock;
-   }
-
-   /*
-* Transfer this request onto the hold queue to prevent it
-* being resumbitted to HW (and potentially completed) before we have
-* released it. Since we may have already submitted following
-* requests, we need to remove those as well.
-*/
-   GEM_BUG_ON(i915_request_on_hold(rq));
-   GEM_BUG_ON(rq->engine != engine);
-   __execlists_hold(rq);
-   GEM_BUG_ON(list_empty(&engine->active.hold));
-
-unlock:
+   result = __intel_engine_hold_request(engine, rq);
spin_unlock_irq(&engine->active.lock);
-   return rq;
-}
-
-static bool hold_request(const struct i915_request *rq)
-{
-   struct i915_dependency *p;
-   bool result = false;
-
-   /*
-* If one of our ancestors is on hold, we must also be on hold,
-* otherwise we will bypass it and execute before it.
-*/
-   rcu_read_lock();
-   for_each_signaler(p, rq) {
-   const struct i915_request *s =
-   container_of(p->signaler, typeof(*s), sched);
-
-   if (s->engine != rq->engine)
-   continue;
-
-   result = i915_request_on_hold(s);
-   if (result)
-   break;
-   }
-   rcu_read_unlock();
 
return result;
 }
 
-static void __execlists_unhold(struct i915_request *rq)
-{
-   LIST_HEAD(list);
-
-   do {
-   struct i915_dependency *p;
-
-   RQ_TRACE(rq, "hold release\n");
-
-   GEM_BUG_ON(!i915_request_on_hold(rq));
-   GEM_BUG_ON(!i915_sw_fence_signaled(&rq->submit));
-
-   i915_request_clear_hold(rq);
-   list_move_tail(&rq->sched.link,
-  i915_sched_lookup_priolist(rq->engine,
- rq_prio(rq)));
-   set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
-
-   /* Also release any children on this engine that are ready */
-   for_each_waiter(p, rq) {
-   struct i915_request *w =
-   container_of(p->waiter, typeof(*w), sched);
-
-   if (p->flags & I915_DEPENDENCY_WEAK)
-   continue;
-
-   /* Propagate any change in error status */
-   if (rq->fence.error)
- 

[Intel-gfx] [PATCH 51/69] drm/i915: Wrap cmpxchg64 with try_cmpxchg64() helper

2020-12-14 Thread Chris Wilson
Wrap cmpxchg64 with a try_cmpxchg()-esque helper. Hiding the old-value
dance in the helper allows for cleaner code.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/i915_utils.h | 15 +++
 1 file changed, 15 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_utils.h 
b/drivers/gpu/drm/i915/i915_utils.h
index 54773371e6bd..0b5588e59740 100644
--- a/drivers/gpu/drm/i915/i915_utils.h
+++ b/drivers/gpu/drm/i915/i915_utils.h
@@ -456,4 +456,19 @@ static inline bool timer_expired(const struct timer_list 
*t)
  */
 #define IS_ACTIVE(config) ((config) != 0)
 
+#if IS_ENABLED(CONFIG_64BIT)
+#define try_cmpxchg64(_ptr, _pold, _new) try_cmpxchg(_ptr, _pold, _new)
+#else
+#define try_cmpxchg64(_ptr, _pold, _new)   \
+({ \
+   __typeof__(_ptr) _old = (__typeof__(_ptr))(_pold);  \
+   __typeof__(*(_ptr)) __old = *_old;  \
+   __typeof__(*(_ptr)) __cur = cmpxchg64(_ptr, __old, _new);   \
+   bool success = __cur == __old;  \
+   if (unlikely(!success)) \
+   *_old = __cur;  \
+   likely(success);\
+})
+#endif
+
 #endif /* !__I915_UTILS_H */
-- 
2.20.1



[Intel-gfx] [PATCH 55/69] drm/i915: Move common active lists from engine to i915_scheduler

2020-12-14 Thread Chris Wilson
Extract the scheduler lists into a related structure, to stop them
sprawling over struct intel_engine_cs.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_engine_cs.c | 26 +-
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |  8 +
 .../drm/i915/gt/intel_execlists_submission.c  |  2 +-
 drivers/gpu/drm/i915/gt/mock_engine.c |  2 +-
 drivers/gpu/drm/i915/i915_scheduler.c | 34 ---
 drivers/gpu/drm/i915/i915_scheduler.h |  3 +-
 drivers/gpu/drm/i915/i915_scheduler_types.h   |  8 +
 7 files changed, 43 insertions(+), 40 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c 
b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 78c8053ec2b0..01499390aed7 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -593,8 +593,6 @@ void intel_engine_init_execlists(struct intel_engine_cs 
*engine)
memset(execlists->inflight, 0, sizeof(execlists->inflight));
 
execlists->queue = RB_ROOT_CACHED;
-
-   i915_sched_init_ipi(&execlists->ipi);
 }
 
 static void cleanup_status_page(struct intel_engine_cs *engine)
@@ -710,7 +708,7 @@ static int engine_setup_common(struct intel_engine_cs 
*engine)
goto err_status;
}
 
-   intel_engine_init_active(engine, ENGINE_PHYSICAL);
+   i915_sched_init_engine(&engine->active, ENGINE_PHYSICAL);
intel_engine_init_execlists(engine);
intel_engine_init_cmd_parser(engine);
intel_engine_init__pm(engine);
@@ -775,28 +773,6 @@ static int measure_breadcrumb_dw(struct intel_context *ce)
return dw;
 }
 
-void
-intel_engine_init_active(struct intel_engine_cs *engine, unsigned int subclass)
-{
-   INIT_LIST_HEAD(&engine->active.requests);
-   INIT_LIST_HEAD(&engine->active.hold);
-
-   spin_lock_init(&engine->active.lock);
-   lockdep_set_subclass(&engine->active.lock, subclass);
-
-   /*
-* Due to an interesting quirk in lockdep's internal debug tracking,
-* after setting a subclass we must ensure the lock is used. Otherwise,
-* nr_unused_locks is incremented once too often.
-*/
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
-   local_irq_disable();
-   lock_map_acquire(&engine->active.lock.dep_map);
-   lock_map_release(&engine->active.lock.dep_map);
-   local_irq_enable();
-#endif
-}
-
 static struct intel_context *
 create_pinned_context(struct intel_engine_cs *engine,
  unsigned int hwsp,
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h 
b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index d8b4cc086fef..16e1c5299df4 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -245,8 +245,6 @@ struct intel_engine_execlists {
struct rb_root_cached queue;
struct rb_root_cached virtual;
 
-   struct i915_sched_ipi ipi;
-
/**
 * @csb_write: control register for Context Switch buffer
 *
@@ -316,11 +314,7 @@ struct intel_engine_cs {
 
struct intel_sseu sseu;
 
-   struct {
-   spinlock_t lock;
-   struct list_head requests;
-   struct list_head hold; /* ready requests, but on hold */
-   } active;
+   struct i915_sched_engine active;
 
/* keep a request in reserve for a [pm] barrier under oom */
struct i915_request *request_pool;
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 0fbc84d94173..996f12e3dba8 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -4940,7 +4940,7 @@ intel_execlists_create_virtual(struct intel_engine_cs 
**siblings,
 
snprintf(ve->base.name, sizeof(ve->base.name), "virtual");
 
-   intel_engine_init_active(&ve->base, ENGINE_VIRTUAL);
+   i915_sched_init_engine(&ve->base.active, ENGINE_VIRTUAL);
intel_engine_init_execlists(&ve->base);
 
ve->base.cops = &virtual_context_ops;
diff --git a/drivers/gpu/drm/i915/gt/mock_engine.c 
b/drivers/gpu/drm/i915/gt/mock_engine.c
index 2f830017c51d..c00bc0f4afec 100644
--- a/drivers/gpu/drm/i915/gt/mock_engine.c
+++ b/drivers/gpu/drm/i915/gt/mock_engine.c
@@ -355,7 +355,7 @@ int mock_engine_init(struct intel_engine_cs *engine)
 {
struct intel_context *ce;
 
-   intel_engine_init_active(engine, ENGINE_MOCK);
+   i915_sched_init_engine(&engine->active, ENGINE_MOCK);
intel_engine_init_execlists(engine);
intel_engine_init__pm(engine);
intel_engine_init_retire(engine);
diff --git a/drivers/gpu/drm/i915/i915_scheduler.c 
b/drivers/gpu/drm/i915/i915_scheduler.c
index 10b17a879176..88b2c0bf853c 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -115,12 +115,36 @@ static void ipi_schedule(struct work_struct *wrk)
} whi

[Intel-gfx] [PATCH 05/69] drm/i915/gt: Use virtual_engine during execlists_dequeue

2020-12-14 Thread Chris Wilson
Rather than going back and forth between the rb_node entry and the
virtual_engine type, store the ve locally and reuse it. As the
container_of conversion from rb_node to virtual_engine requires a
variable offset, performing that conversion just once shaves off a bit
of code.

v2: Keep a single virtual engine lookup, for typical use.

Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
---
 .../drm/i915/gt/intel_execlists_submission.c  | 239 --
 1 file changed, 105 insertions(+), 134 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 157a8f18d41e..8e83e60492af 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -425,9 +425,15 @@ static int queue_prio(const struct intel_engine_execlists 
*execlists)
return ((p->priority + 1) << I915_USER_PRIORITY_SHIFT) - ffs(p->used);
 }
 
+static int virtual_prio(const struct intel_engine_execlists *el)
+{
+   struct rb_node *rb = rb_first_cached(&el->virtual);
+
+   return rb ? rb_entry(rb, struct ve_node, rb)->prio : INT_MIN;
+}
+
 static inline bool need_preempt(const struct intel_engine_cs *engine,
-   const struct i915_request *rq,
-   struct rb_node *rb)
+   const struct i915_request *rq)
 {
int last_prio;
 
@@ -464,25 +470,6 @@ static inline bool need_preempt(const struct 
intel_engine_cs *engine,
rq_prio(list_next_entry(rq, sched.link)) > last_prio)
return true;
 
-   if (rb) {
-   struct virtual_engine *ve =
-   rb_entry(rb, typeof(*ve), nodes[engine->id].rb);
-   bool preempt = false;
-
-   if (engine == ve->siblings[0]) { /* only preempt one sibling */
-   struct i915_request *next;
-
-   rcu_read_lock();
-   next = READ_ONCE(ve->request);
-   if (next)
-   preempt = rq_prio(next) > last_prio;
-   rcu_read_unlock();
-   }
-
-   if (preempt)
-   return preempt;
-   }
-
/*
 * If the inflight context did not trigger the preemption, then maybe
 * it was the set of queued requests? Pick the highest priority in
@@ -493,7 +480,8 @@ static inline bool need_preempt(const struct 
intel_engine_cs *engine,
 * ELSP[0] or ELSP[1] as, thanks again to PI, if it was the same
 * context, it's priority would not exceed ELSP[0] aka last_prio.
 */
-   return queue_prio(&engine->execlists) > last_prio;
+   return max(virtual_prio(&engine->execlists),
+  queue_prio(&engine->execlists)) > last_prio;
 }
 
 __maybe_unused static inline bool
@@ -1781,6 +1769,35 @@ static bool virtual_matches(const struct virtual_engine 
*ve,
return true;
 }
 
+static struct virtual_engine *
+first_virtual_engine(struct intel_engine_cs *engine)
+{
+   struct intel_engine_execlists *el = &engine->execlists;
+   struct rb_node *rb = rb_first_cached(&el->virtual);
+
+   while (rb) {
+   struct virtual_engine *ve =
+   rb_entry(rb, typeof(*ve), nodes[engine->id].rb);
+   struct i915_request *rq = READ_ONCE(ve->request);
+
+   /* lazily cleanup after another engine handled rq */
+   if (!rq) {
+   rb_erase_cached(rb, &el->virtual);
+   RB_CLEAR_NODE(rb);
+   rb = rb_first_cached(&el->virtual);
+   continue;
+   }
+
+   if (!virtual_matches(ve, rq, engine)) {
+   rb = rb_next(rb);
+   continue;
+   }
+   return ve;
+   }
+
+   return NULL;
+}
+
 static void virtual_xfer_context(struct virtual_engine *ve,
 struct intel_engine_cs *engine)
 {
@@ -1869,32 +1886,15 @@ static void defer_active(struct intel_engine_cs *engine)
 
 static bool
 need_timeslice(const struct intel_engine_cs *engine,
-  const struct i915_request *rq,
-  const struct rb_node *rb)
+  const struct i915_request *rq)
 {
int hint;
 
if (!intel_engine_has_timeslices(engine))
return false;
 
-   hint = engine->execlists.queue_priority_hint;
-
-   if (rb) {
-   const struct virtual_engine *ve =
-   rb_entry(rb, typeof(*ve), nodes[engine->id].rb);
-   const struct intel_engine_cs *inflight =
-   intel_context_inflight(&ve->context);
-
-   if (!inflight || inflight == engine) {
-   struct i915_request *next;
-
-   rcu_read_lock();
-   next = REA

[Intel-gfx] [PATCH 08/69] drm/i915/gt: Remove virtual breadcrumb before transfer

2020-12-14 Thread Chris Wilson
The issue with stale virtual breadcrumbs remains. Now we have the problem
that if the irq-signaler is still referencing the stale breadcrumb as we
transfer it to a new sibling, the list becomes spaghetti. This is a very
small window, but that doesn't stop it from being hit infrequently. To
prevent the lists being tangled (the iterator starting on one engine's
b->signalers but walking onto another list), always decouple the virtual
breadcrumb on schedule-out and make sure that the walker has stepped out
of the lists.

Signed-off-by: Chris Wilson 
Reviewed-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/intel_breadcrumbs.c   |  5 +++--
 .../gpu/drm/i915/gt/intel_execlists_submission.c  | 15 +++
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c 
b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
index 00918300f53f..63900edbde88 100644
--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
@@ -454,15 +454,16 @@ void i915_request_cancel_breadcrumb(struct i915_request 
*rq)
 {
struct intel_breadcrumbs *b = READ_ONCE(rq->engine)->breadcrumbs;
struct intel_context *ce = rq->context;
+   unsigned long flags;
bool release;
 
if (!test_and_clear_bit(I915_FENCE_FLAG_SIGNAL, &rq->fence.flags))
return;
 
-   spin_lock(&ce->signal_lock);
+   spin_lock_irqsave(&ce->signal_lock, flags);
list_del_rcu(&rq->signal_link);
release = remove_signaling_context(b, ce);
-   spin_unlock(&ce->signal_lock);
+   spin_unlock_irqrestore(&ce->signal_lock, flags);
if (release)
intel_context_put(ce);
 
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index d278c4445496..192ec4041d7a 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -1359,6 +1359,21 @@ static inline void execlists_schedule_in(struct 
i915_request *rq, int idx)
 static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
 {
struct virtual_engine *ve = container_of(ce, typeof(*ve), context);
+   struct intel_engine_cs *engine = rq->engine;
+
+   /* Flush concurrent rcu iterators in signal_irq_work */
+   if (test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, &rq->fence.flags)) {
+   /*
+* After this point, the rq may be transferred to a new
+* sibling, so before we clear ce->inflight make sure that
+* the context has been removed from the b->signalers and
+* furthermore we need to make sure that the concurrent
+* iterator in signal_irq_work is no longer following
+* ce->signal_link.
+*/
+   i915_request_cancel_breadcrumb(rq);
+   irq_work_sync(&engine->breadcrumbs->irq_work);
+   }
 
if (READ_ONCE(ve->request))
tasklet_hi_schedule(&ve->base.execlists.tasklet);
-- 
2.20.1



[Intel-gfx] [PATCH 03/69] drm/i915: Encode fence specific waitqueue behaviour into the wait.flags

2020-12-14 Thread Chris Wilson
Use the wait_queue_entry.flags to denote the special fence behaviour
(flattening continuations along fence chains, and for propagating
errors) rather than trying to detect ordinary waiters by their
functions.

Signed-off-by: Chris Wilson 
Reviewed-by: Matthew Brost 
---
 drivers/gpu/drm/i915/i915_sw_fence.c | 25 +++--
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c 
b/drivers/gpu/drm/i915/i915_sw_fence.c
index 038d4c6884c5..2744558f3050 100644
--- a/drivers/gpu/drm/i915/i915_sw_fence.c
+++ b/drivers/gpu/drm/i915/i915_sw_fence.c
@@ -18,10 +18,15 @@
 #define I915_SW_FENCE_BUG_ON(expr) BUILD_BUG_ON_INVALID(expr)
 #endif
 
-#define I915_SW_FENCE_FLAG_ALLOC BIT(3) /* after WQ_FLAG_* for safety */
-
 static DEFINE_SPINLOCK(i915_sw_fence_lock);
 
+#define WQ_FLAG_BITS \
+   BITS_PER_TYPE(typeof_member(struct wait_queue_entry, flags))
+
+/* after WQ_FLAG_* for safety */
+#define I915_SW_FENCE_FLAG_FENCE BIT(WQ_FLAG_BITS - 1)
+#define I915_SW_FENCE_FLAG_ALLOC BIT(WQ_FLAG_BITS - 2)
+
 enum {
DEBUG_FENCE_IDLE = 0,
DEBUG_FENCE_NOTIFY,
@@ -154,10 +159,10 @@ static void __i915_sw_fence_wake_up_all(struct 
i915_sw_fence *fence,
spin_lock_irqsave_nested(&x->lock, flags, 1 + !!continuation);
if (continuation) {
list_for_each_entry_safe(pos, next, &x->head, entry) {
-   if (pos->func == autoremove_wake_function)
-   pos->func(pos, TASK_NORMAL, 0, continuation);
-   else
+   if (pos->flags & I915_SW_FENCE_FLAG_FENCE)
list_move_tail(&pos->entry, continuation);
+   else
+   pos->func(pos, TASK_NORMAL, 0, continuation);
}
} else {
LIST_HEAD(extra);
@@ -166,9 +171,9 @@ static void __i915_sw_fence_wake_up_all(struct 
i915_sw_fence *fence,
list_for_each_entry_safe(pos, next, &x->head, entry) {
int wake_flags;
 
-   wake_flags = fence->error;
-   if (pos->func == autoremove_wake_function)
-   wake_flags = 0;
+   wake_flags = 0;
+   if (pos->flags & I915_SW_FENCE_FLAG_FENCE)
+   wake_flags = fence->error;
 
pos->func(pos, TASK_NORMAL, wake_flags, &extra);
}
@@ -332,8 +337,8 @@ static int __i915_sw_fence_await_sw_fence(struct 
i915_sw_fence *fence,
  struct i915_sw_fence *signaler,
  wait_queue_entry_t *wq, gfp_t gfp)
 {
+   unsigned int pending;
unsigned long flags;
-   int pending;
 
debug_fence_assert(fence);
might_sleep_if(gfpflags_allow_blocking(gfp));
@@ -349,7 +354,7 @@ static int __i915_sw_fence_await_sw_fence(struct 
i915_sw_fence *fence,
if (unlikely(i915_sw_fence_check_if_after(fence, signaler)))
return -EINVAL;
 
-   pending = 0;
+   pending = I915_SW_FENCE_FLAG_FENCE;
if (!wq) {
wq = kmalloc(sizeof(*wq), gfp);
if (!wq) {
-- 
2.20.1



[Intel-gfx] [PATCH 57/69] drm/i915: Move tasklet from execlists to sched

2020-12-14 Thread Chris Wilson
Move the scheduling tasklet out of the execlists backend into the
per-engine scheduling bookkeeping.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_engine.h| 14 -
 drivers/gpu/drm/i915/gt/intel_engine_cs.c | 11 ++--
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |  5 --
 .../drm/i915/gt/intel_execlists_submission.c  | 62 +--
 drivers/gpu/drm/i915/gt/intel_gt_irq.c|  2 +-
 drivers/gpu/drm/i915/gt/selftest_execlists.c  | 20 +++---
 drivers/gpu/drm/i915/gt/selftest_hangcheck.c  |  2 +-
 drivers/gpu/drm/i915/gt/selftest_reset.c  |  6 +-
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 14 ++---
 drivers/gpu/drm/i915/i915_scheduler.c | 16 ++---
 drivers/gpu/drm/i915/i915_scheduler.h | 20 ++
 drivers/gpu/drm/i915/i915_scheduler_types.h   |  6 ++
 .../gpu/drm/i915/selftests/i915_scheduler.c   | 16 ++---
 13 files changed, 98 insertions(+), 96 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h 
b/drivers/gpu/drm/i915/gt/intel_engine.h
index 760fefdfe392..925343e646e3 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine.h
@@ -123,20 +123,6 @@ execlists_active(const struct intel_engine_execlists 
*execlists)
return active;
 }
 
-static inline void
-execlists_active_lock_bh(struct intel_engine_execlists *execlists)
-{
-   local_bh_disable(); /* prevent local softirq and lock recursion */
-   tasklet_lock(&execlists->tasklet);
-}
-
-static inline void
-execlists_active_unlock_bh(struct intel_engine_execlists *execlists)
-{
-   tasklet_unlock(&execlists->tasklet);
-   local_bh_enable(); /* restore softirq, and kick ksoftirqd! */
-}
-
 struct i915_request *
 execlists_unwind_incomplete_requests(struct intel_engine_execlists *execlists);
 
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c 
b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 8e9dfa3efe9f..f62550d95a60 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -898,7 +898,6 @@ int intel_engines_init(struct intel_gt *gt)
 void intel_engine_cleanup_common(struct intel_engine_cs *engine)
 {
i915_sched_fini_engine(&engine->active);
-   tasklet_kill(&engine->execlists.tasklet); /* flush the callback */
 
cleanup_status_page(engine);
intel_breadcrumbs_free(engine->breadcrumbs);
@@ -1174,7 +1173,7 @@ static bool ring_is_idle(struct intel_engine_cs *engine)
 
 void intel_engine_flush_submission(struct intel_engine_cs *engine)
 {
-   struct tasklet_struct *t = &engine->execlists.tasklet;
+   struct tasklet_struct *t = &engine->active.tasklet;
 
if (!t->func)
return;
@@ -1439,8 +1438,8 @@ static void intel_engine_print_registers(struct 
intel_engine_cs *engine,
 
drm_printf(m, "\tExeclist tasklet queued? %s (%s), preempt? %s, 
timeslice? %s\n",
   yesno(test_bit(TASKLET_STATE_SCHED,
- &engine->execlists.tasklet.state)),
-  
enableddisabled(!atomic_read(&engine->execlists.tasklet.count)),
+ &engine->active.tasklet.state)),
+  
enableddisabled(!atomic_read(&engine->active.tasklet.count)),
   repr_timer(&engine->execlists.preempt),
   repr_timer(&engine->execlists.timer));
 
@@ -1464,7 +1463,7 @@ static void intel_engine_print_registers(struct intel_engine_cs *engine,
   idx, hws[idx * 2], hws[idx * 2 + 1]);
}
 
-   execlists_active_lock_bh(execlists);
+   i915_sched_lock_bh(&engine->active);
rcu_read_lock();
for (port = execlists->active; (rq = *port); port++) {
char hdr[160];
@@ -1495,7 +1494,7 @@ static void intel_engine_print_registers(struct intel_engine_cs *engine,
i915_request_show(m, rq, hdr, 0);
}
rcu_read_unlock();
-   execlists_active_unlock_bh(execlists);
+   i915_sched_unlock_bh(&engine->active);
} else if (INTEL_GEN(dev_priv) > 6) {
drm_printf(m, "\tPP_DIR_BASE: 0x%08x\n",
   ENGINE_READ(engine, RING_PP_DIR_BASE));
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index ec719eac4dd2..824a187b2803 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -139,11 +139,6 @@ struct st_preempt_hang {
  * driver and the hardware state for execlist mode of submission.
  */
 struct intel_engine_execlists {
-   /**
-* @tasklet: softirq tasklet for bottom handler
-*/
-   struct tasklet_struct tasklet;
-
/**
 * @timer: kick the current context if its timeslice expires
 */
diff 

[Intel-gfx] [PATCH 53/69] drm/i915/gt: Specify a deadline for the heartbeat

2020-12-14 Thread Chris Wilson
As we know when we expect the heartbeat to be checked for completion,
pass this information along as its deadline. We still do not complain if
the deadline is missed, at least until we have tried a few times, but it
will allow for quicker hang detection on systems where deadlines are
adhered to.
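
The deadline below is computed as ktime_get() + (interval << 20): the shift
is a cheap milliseconds-to-nanoseconds conversion, multiplying by 1048576
instead of 1000000 and so leaving ~4.9% of slack. In isolation (illustrative
only; interval_ms is a stand-in name):

	u64 approx_ns = interval_ms << 20;           /* x 1048576 */
	u64 exact_ns  = interval_ms * NSEC_PER_MSEC; /* x 1000000 */
	/* approx_ns is ~4.9% larger, i.e. a slightly lazier deadline */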

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c | 13 +
 1 file changed, 13 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
index 495e8d5e2bf4..0eb4a07b29b0 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
@@ -66,6 +66,16 @@ static void heartbeat_commit(struct i915_request *rq,
__i915_request_queue(rq, attr);
 }
 
+static void set_heartbeat_deadline(struct intel_engine_cs *engine,
+  struct i915_request *rq)
+{
+   unsigned long interval;
+
+   interval = READ_ONCE(engine->props.heartbeat_interval_ms);
+   if (interval)
+   i915_request_set_deadline(rq, ktime_get() + (interval << 20));
+}
+
 static void show_heartbeat(const struct i915_request *rq,
   struct intel_engine_cs *engine)
 {
@@ -131,6 +141,8 @@ static void heartbeat(struct work_struct *wrk)
 
local_bh_disable();
i915_request_set_priority(rq, attr.priority);
+   if (attr.priority == I915_PRIORITY_BARRIER)
+   i915_request_set_deadline(rq, 0);
local_bh_enable();
} else {
if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
@@ -162,6 +174,7 @@ static void heartbeat(struct work_struct *wrk)
if (IS_ERR(rq))
goto unlock;
 
+   set_heartbeat_deadline(engine, rq);
heartbeat_commit(rq, &attr);
 
 unlock:
-- 
2.20.1



[Intel-gfx] [PATCH 16/69] drm/i915/gt: Wrap intel_timeline.has_initial_breadcrumb

2020-12-14 Thread Chris Wilson
In preparation for removing the has_initial_breadcrumb field, add a
helper function for the existing callers.

Signed-off-by: Chris Wilson 
Reviewed-by: Mika Kuoppala 
---
 drivers/gpu/drm/i915/gt/gen8_engine_cs.c| 2 +-
 drivers/gpu/drm/i915/gt/intel_ring_submission.c | 4 ++--
 drivers/gpu/drm/i915/gt/intel_timeline.c| 6 +++---
 drivers/gpu/drm/i915/gt/intel_timeline.h| 6 ++
 drivers/gpu/drm/i915/gt/selftest_timeline.c | 5 +++--
 5 files changed, 15 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
index 9c6f0ebfa3cf..ebf043692eef 100644
--- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
@@ -354,7 +354,7 @@ int gen8_emit_init_breadcrumb(struct i915_request *rq)
u32 *cs;
 
GEM_BUG_ON(i915_request_has_initial_breadcrumb(rq));
-   if (!i915_request_timeline(rq)->has_initial_breadcrumb)
+   if (!intel_timeline_has_initial_breadcrumb(i915_request_timeline(rq)))
return 0;
 
cs = intel_ring_begin(rq, 6);
diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
index 4ea741f488a8..4bc80c50dbe9 100644
--- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
@@ -979,7 +979,7 @@ static int ring_request_alloc(struct i915_request *request)
int ret;
 
GEM_BUG_ON(!intel_context_is_pinned(request->context));
-   GEM_BUG_ON(i915_request_timeline(request)->has_initial_breadcrumb);
+   GEM_BUG_ON(intel_timeline_has_initial_breadcrumb(i915_request_timeline(request)));
 
/*
 * Flush enough space to reduce the likelihood of waiting after
@@ -1304,7 +1304,7 @@ int intel_ring_submission_setup(struct intel_engine_cs *engine)
err = PTR_ERR(timeline);
goto err;
}
-   GEM_BUG_ON(timeline->has_initial_breadcrumb);
+   GEM_BUG_ON(intel_timeline_has_initial_breadcrumb(timeline));
 
err = intel_timeline_pin(timeline, NULL);
if (err)
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
index 512afacd2bdc..ddc8e1b4f3b8 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
@@ -428,14 +428,14 @@ void intel_timeline_exit(struct intel_timeline *tl)
 static u32 timeline_advance(struct intel_timeline *tl)
 {
GEM_BUG_ON(!atomic_read(&tl->pin_count));
-   GEM_BUG_ON(tl->seqno & tl->has_initial_breadcrumb);
+   GEM_BUG_ON(tl->seqno & intel_timeline_has_initial_breadcrumb(tl));
 
-   return tl->seqno += 1 + tl->has_initial_breadcrumb;
+   return tl->seqno += 1 + intel_timeline_has_initial_breadcrumb(tl);
 }
 
 static void timeline_rollback(struct intel_timeline *tl)
 {
-   tl->seqno -= 1 + tl->has_initial_breadcrumb;
+   tl->seqno -= 1 + intel_timeline_has_initial_breadcrumb(tl);
 }
 
 static noinline int
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.h b/drivers/gpu/drm/i915/gt/intel_timeline.h
index 1ee680d31801..deb71a8dbd43 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.h
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.h
@@ -73,6 +73,12 @@ static inline void intel_timeline_put(struct intel_timeline *timeline)
kref_put(&timeline->kref, __intel_timeline_free);
 }
 
+static inline bool
+intel_timeline_has_initial_breadcrumb(const struct intel_timeline *tl)
+{
+   return tl->has_initial_breadcrumb;
+}
+
 static inline int __intel_timeline_sync_set(struct intel_timeline *tl,
u64 context, u32 seqno)
 {
diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c b/drivers/gpu/drm/i915/gt/selftest_timeline.c
index e4285d5a0360..a6ff9d1605aa 100644
--- a/drivers/gpu/drm/i915/gt/selftest_timeline.c
+++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c
@@ -665,7 +665,7 @@ static int live_hwsp_wrap(void *arg)
if (IS_ERR(tl))
return PTR_ERR(tl);
 
-   if (!tl->has_initial_breadcrumb || !tl->hwsp_cacheline)
+   if (!intel_timeline_has_initial_breadcrumb(tl) || !tl->hwsp_cacheline)
goto out_free;
 
err = intel_timeline_pin(tl, NULL);
@@ -1234,7 +1234,8 @@ static int live_hwsp_rollover_user(void *arg)
goto out;
 
tl = ce->timeline;
-   if (!tl->has_initial_breadcrumb || !tl->hwsp_cacheline)
+   if (!intel_timeline_has_initial_breadcrumb(tl) ||
+   !tl->hwsp_cacheline)
goto out;
 
timeline_rollback(tl);
-- 
2.20.1



[Intel-gfx] [PATCH 49/69] drm/i915: Extract the ability to defer and rerun a request later

2020-12-14 Thread Chris Wilson
Lift the ability to defer a request until later from execlists into the
common layer.

Signed-off-by: Chris Wilson 
---
 .../drm/i915/gt/intel_execlists_submission.c  | 55 +++
 drivers/gpu/drm/i915/i915_scheduler.c | 52 ++
 drivers/gpu/drm/i915/i915_scheduler.h |  3 +
 3 files changed, 62 insertions(+), 48 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 2963486714b0..5206e335c456 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -1816,9 +1816,13 @@ static void virtual_xfer_context(struct virtual_engine *ve,
}
 }
 
-static void defer_request(struct i915_request *rq, struct list_head * const pl)
+static void defer_active(struct intel_engine_cs *engine)
 {
-   LIST_HEAD(list);
+   struct i915_request *rq;
+
+   rq = __unwind_incomplete_requests(engine);
+   if (!rq)
+   return;
 
/*
 * We want to move the interrupted request to the back of
@@ -1827,52 +1831,7 @@ static void defer_request(struct i915_request *rq, struct list_head * const pl)
 * flight and were waiting for the interrupted request to
 * be run after it again.
 */
-   do {
-   struct i915_dependency *p;
-
-   GEM_BUG_ON(i915_request_is_active(rq));
-   list_move_tail(&rq->sched.link, pl);
-
-   for_each_waiter(p, rq) {
-   struct i915_request *w =
-   container_of(p->waiter, typeof(*w), sched);
-
-   if (p->flags & I915_DEPENDENCY_WEAK)
-   continue;
-
-   /* Leave semaphores spinning on the other engines */
-   if (w->engine != rq->engine)
-   continue;
-
-   /* No waiter should start before its signaler */
-   GEM_BUG_ON(i915_request_has_initial_breadcrumb(w) &&
-  i915_request_started(w) &&
-  !i915_request_completed(rq));
-
-   GEM_BUG_ON(i915_request_is_active(w));
-   if (!i915_request_is_ready(w))
-   continue;
-
-   if (rq_prio(w) < rq_prio(rq))
-   continue;
-
-   GEM_BUG_ON(rq_prio(w) > rq_prio(rq));
-   list_move_tail(&w->sched.link, &list);
-   }
-
-   rq = list_first_entry_or_null(&list, typeof(*rq), sched.link);
-   } while (rq);
-}
-
-static void defer_active(struct intel_engine_cs *engine)
-{
-   struct i915_request *rq;
-
-   rq = __unwind_incomplete_requests(engine);
-   if (!rq)
-   return;
-
-   defer_request(rq, i915_sched_lookup_priolist(engine, rq_prio(rq)));
+   __intel_engine_defer_request(engine, rq);
 }
 
 static bool
diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
index d3f7c340873e..1e0d0784d8c2 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -481,6 +481,58 @@ void i915_request_set_priority(struct i915_request *rq, int prio)
spin_unlock_irqrestore(&engine->active.lock, flags);
 }
 
+void __intel_engine_defer_request(struct intel_engine_cs *engine,
+ struct i915_request *rq)
+{
+   struct list_head *pl;
+   LIST_HEAD(list);
+
+   lockdep_assert_held(&engine->active.lock);
+   GEM_BUG_ON(!test_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags));
+
+   /*
+* When we defer a request, we must maintain its order with respect
+* to those that are waiting upon it. So we traverse its chain of
+* waiters and move any that are earlier than the request to after it.
+*/
+   pl = i915_sched_lookup_priolist(engine, rq_prio(rq));
+   do {
+   struct i915_dependency *p;
+
+   GEM_BUG_ON(i915_request_is_active(rq));
+   list_move_tail(&rq->sched.link, pl);
+
+   for_each_waiter(p, rq) {
+   struct i915_request *w =
+   container_of(p->waiter, typeof(*w), sched);
+
+   if (p->flags & I915_DEPENDENCY_WEAK)
+   continue;
+
+   /* Leave semaphores spinning on the other engines */
+   if (w->engine != engine)
+   continue;
+
+   /* No waiter should start before its signaler */
+   GEM_BUG_ON(i915_request_has_initial_breadcrumb(w) &&
+  i915_request_started(w) &&
+  !i915_request_completed(rq));
+
+   GEM_BU

[Intel-gfx] [PATCH 59/69] Restore "drm/i915: drop engine_pin/unpin_breadcrumbs_irq"

2020-12-14 Thread Chris Wilson
This was removed in commit 478ffad6d690 ("drm/i915: drop
engine_pin/unpin_breadcrumbs_irq") as the last user had been removed,
but now there is a promise of a new user in the next patch.

Signed-off-by: Chris Wilson 
Reviewed-by: Mika Kuoppala 
---
 drivers/gpu/drm/i915/gt/intel_breadcrumbs.c | 18 ++
 drivers/gpu/drm/i915/gt/intel_breadcrumbs.h |  3 +++
 2 files changed, 21 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
index ac1e5f6c3c2c..97bcfb957f3d 100644
--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
@@ -334,6 +334,24 @@ void intel_breadcrumbs_reset(struct intel_breadcrumbs *b)
spin_unlock_irqrestore(&b->irq_lock, flags);
 }
 
+void intel_breadcrumbs_pin_irq(struct intel_breadcrumbs *b)
+{
+   spin_lock_irq(&b->irq_lock);
+   if (!b->irq_enabled++)
+   irq_enable(b->irq_engine);
+   GEM_BUG_ON(!b->irq_enabled); /* no overflow! */
+   spin_unlock_irq(&b->irq_lock);
+}
+
+void intel_breadcrumbs_unpin_irq(struct intel_breadcrumbs *b)
+{
+   spin_lock_irq(&b->irq_lock);
+   GEM_BUG_ON(!b->irq_enabled); /* no underflow! */
+   if (!--b->irq_enabled)
+   irq_disable(b->irq_engine);
+   spin_unlock_irq(&b->irq_lock);
+}
+
 void intel_breadcrumbs_park(struct intel_breadcrumbs *b)
 {
/* Kick the work once more to drain the signalers */
diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h
index ed3d1deabfbd..94400903e1d0 100644
--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h
+++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h
@@ -18,6 +18,9 @@ struct intel_breadcrumbs *
 intel_breadcrumbs_create(struct intel_engine_cs *irq_engine);
 void intel_breadcrumbs_free(struct intel_breadcrumbs *b);
 
+void intel_breadcrumbs_pin_irq(struct intel_breadcrumbs *b);
+void intel_breadcrumbs_unpin_irq(struct intel_breadcrumbs *b);
+
 void intel_breadcrumbs_reset(struct intel_breadcrumbs *b);
 void intel_breadcrumbs_park(struct intel_breadcrumbs *b);
 
-- 
2.20.1



[Intel-gfx] [PATCH 54/69] drm/i915: Extend the priority boosting for the display with a deadline

2020-12-14 Thread Chris Wilson
For a modeset/pageflip, there is a very precise deadline by which the
frame must be completed in order to hit the vblank and be shown. While
we don't pass along that exact information, we can at least inform the
scheduler that this request-chain needs to be completed asap.
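
Note that i915_sched_to_ticks(), used in the hunk below, is not part of
this patch's visible context; presumably it folds a ktime into the
scheduler's internal deadline units, something along the lines of (a
sketch; the shift constant is a placeholder, not a real name):

static inline unsigned long i915_sched_to_ticks(ktime_t kt)
{
	/* illustrative: quantise ns into coarse scheduler ticks */
	return ktime_to_ns(kt) >> DEADLINE_SHIFT;
}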

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/display/intel_display.c |  4 +++-
 drivers/gpu/drm/i915/gem/i915_gem_object.h   |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_wait.c | 21 +++-
 3 files changed, 16 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 04d73ad3e916..20419294f4e2 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -16735,7 +16735,9 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
if (ret)
return ret;
 
-   i915_gem_object_wait_priority(obj, 0, I915_PRIORITY_DISPLAY);
+   i915_gem_object_wait_priority(obj, 0,
+ I915_PRIORITY_DISPLAY,
+ ktime_get() /* next vblank? */);
i915_gem_object_flush_frontbuffer(obj, ORIGIN_DIRTYFB);
 
if (!new_plane_state->uapi.fence) { /* implicit fencing */
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index b106bc81c303..88b849c6f49d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -517,7 +517,7 @@ int i915_gem_object_wait(struct drm_i915_gem_object *obj,
 long timeout);
 int i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
  unsigned int flags,
- int prio);
+ int prio, ktime_t deadline);
 
 void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
 enum fb_op_origin origin);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
index a5d7efe67021..b8b91a7564cf 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
@@ -44,8 +44,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
unsigned int count, i;
int ret;
 
-   ret = dma_resv_get_fences_rcu(resv,
-   &excl, &count, &shared);
+   ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
if (ret)
return ret;
 
@@ -91,17 +90,20 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
return timeout;
 }
 
-static void __fence_set_priority(struct dma_fence *fence, int prio)
+static void
+__fence_set_prio(struct dma_fence *fence, int prio, ktime_t deadline)
 {
if (dma_fence_is_signaled(fence) || !dma_fence_is_i915(fence))
return;
 
local_bh_disable();
+   i915_request_set_deadline(to_request(fence),
+ i915_sched_to_ticks(deadline));
i915_request_set_priority(to_request(fence), prio);
local_bh_enable(); /* kick the tasklets if queues were reprioritised */
 }
 
-static void fence_set_priority(struct dma_fence *fence, int prio)
+static void fence_set_prio(struct dma_fence *fence, int prio, ktime_t deadline)
 {
/* Recurse once into a fence-array */
if (dma_fence_is_array(fence)) {
@@ -109,16 +111,17 @@ static void fence_set_priority(struct dma_fence *fence, int prio)
int i;
 
for (i = 0; i < array->num_fences; i++)
-   __fence_set_priority(array->fences[i], prio);
+   __fence_set_prio(array->fences[i], prio, deadline);
} else {
-   __fence_set_priority(fence, prio);
+   __fence_set_prio(fence, prio, deadline);
}
 }
 
 int
 i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
  unsigned int flags,
- int prio)
+ int prio,
+ ktime_t deadline)
 {
struct dma_fence *excl;
 
@@ -133,7 +136,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
return ret;
 
for (i = 0; i < count; i++) {
-   fence_set_priority(shared[i], prio);
+   fence_set_prio(shared[i], prio, deadline);
dma_fence_put(shared[i]);
}
 
@@ -143,7 +146,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
}
 
if (excl) {
-   fence_set_priority(excl, prio);
+   fence_set_prio(excl, prio, deadline);
dma_fence_put(excl);
}
return 0;
-- 
2.20.1


[Intel-gfx] [PATCH 58/69] drm/i915/gt: Another tweak for flushing the tasklets

2020-12-14 Thread Chris Wilson
tasklet_kill() ensures that we _yield_ the processor until a remote
tasklet is completed. However, this leads to a starvation condition as
being at the bottom of the scheduler's runqueue means that anything else
is able to run, including all hogs keeping the tasklet occupied.
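
In isolation, the resulting flush idiom is: run any pending work inline if
we can take the tasklet lock, then busy-wait for a remote run to finish
rather than yielding (a sketch of the pattern, not the full
intel_engine_flush_submission()):

	local_bh_disable();
	if (tasklet_trylock(t)) {
		t->func(t->data);	/* run the pending work inline */
		tasklet_unlock(t);
	}
	local_bh_enable();

	/* synchronise with a run on another CPU, without yielding */
	tasklet_unlock_wait(t);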

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_engine_cs.c | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index f62550d95a60..101f54467c1e 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -1178,10 +1178,6 @@ void intel_engine_flush_submission(struct intel_engine_cs *engine)
if (!t->func)
return;
 
-   /* Synchronise and wait for the tasklet on another CPU */
-   tasklet_kill(t);
-
-   /* Having cancelled the tasklet, ensure that is run */
local_bh_disable();
if (tasklet_trylock(t)) {
/* Must wait for any GPU reset in progress. */
@@ -1190,6 +1186,9 @@ void intel_engine_flush_submission(struct intel_engine_cs *engine)
tasklet_unlock(t);
}
local_bh_enable();
+
+   /* Synchronise and wait for the tasklet on another CPU */
+   tasklet_unlock_wait(t);
 }
 
 /**
-- 
2.20.1



[Intel-gfx] [PATCH 64/69] drm/i915/gt: Enable busy-stats for ring-scheduler

2020-12-14 Thread Chris Wilson
Couple up the context in/out accounting to record how long each engine
is busy handling requests. This is exposed to userspace for more accurate
measurements, and also enables our soft-rps timer.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_ring_scheduler.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_ring_scheduler.c b/drivers/gpu/drm/i915/gt/intel_ring_scheduler.c
index 775f21acd7a8..b9335196b0fa 100644
--- a/drivers/gpu/drm/i915/gt/intel_ring_scheduler.c
+++ b/drivers/gpu/drm/i915/gt/intel_ring_scheduler.c
@@ -11,6 +11,7 @@
 #include "intel_breadcrumbs.h"
 #include "intel_context.h"
 #include "intel_engine_pm.h"
+#include "intel_engine_stats.h"
 #include "intel_gt.h"
 #include "intel_gt_pm.h"
 #include "intel_gt_requests.h"
@@ -73,6 +74,8 @@ static struct intel_engine_cs *__schedule_in(struct i915_request *rq)
if (engine->fw_domain && !engine->fw_active++)
intel_uncore_forcewake_get(engine->uncore, engine->fw_domain);
 
+   intel_engine_context_in(engine);
+
CE_TRACE(ce, "schedule-in\n");
 
return engine;
@@ -106,6 +109,8 @@ static void __schedule_out(struct i915_request *rq)
else
i915_request_update_deadline(list_next_entry(rq, link));
 
+   intel_engine_context_out(engine);
+
if (engine->fw_domain && !--engine->fw_active)
intel_uncore_forcewake_put(engine->uncore, engine->fw_domain);
intel_gt_pm_put_async(engine->gt);
@@ -809,6 +814,7 @@ int intel_ring_scheduler_setup(struct intel_engine_cs *engine)
 
engine->flags |= I915_ENGINE_HAS_SCHEDULER;
engine->flags |= I915_ENGINE_NEEDS_BREADCRUMB_TASKLET;
+   engine->flags |= I915_ENGINE_SUPPORTS_STATS;
 
/* Finally, take ownership and responsibility for cleanup! */
engine->release = ring_release;
-- 
2.20.1



[Intel-gfx] [PATCH 65/69] drm/i915/gt: Implement ring scheduler for gen6/7

2020-12-14 Thread Chris Wilson
A key problem with legacy ring buffer submission is that it is an inherent
FIFO queue across all clients; if one blocks, they all block. A
scheduler allows us to avoid that limitation, and ensures that all
clients can submit in parallel, removing the resource contention of the
global ringbuffer.

Having built the ring scheduler infrastructure over top of the global
ringbuffer submission, we now need to provide the HW knowledge required
to build command packets and implement context switching.

Signed-off-by: Chris Wilson 
---
 .../gpu/drm/i915/gt/intel_ring_scheduler.c| 447 +-
 drivers/gpu/drm/i915/i915_reg.h   |  10 +
 2 files changed, 454 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_ring_scheduler.c b/drivers/gpu/drm/i915/gt/intel_ring_scheduler.c
index b9335196b0fa..6b7d5ed5c540 100644
--- a/drivers/gpu/drm/i915/gt/intel_ring_scheduler.c
+++ b/drivers/gpu/drm/i915/gt/intel_ring_scheduler.c
@@ -7,6 +7,10 @@
 
 #include 
 
+#include "gen2_engine_cs.h"
+#include "gen6_engine_cs.h"
+#include "gen6_ppgtt.h"
+#include "gen7_renderclear.h"
 #include "i915_drv.h"
 #include "intel_breadcrumbs.h"
 #include "intel_context.h"
@@ -174,8 +178,263 @@ static void ring_copy(struct intel_ring *dst,
memcpy(out, src->vaddr + start, end - start);
 }
 
+static void mi_set_context(struct intel_ring *ring,
+  struct intel_engine_cs *engine,
+  struct intel_context *ce,
+  u32 flags)
+{
+   struct drm_i915_private *i915 = engine->i915;
+   enum intel_engine_id id;
+   const int num_engines =
+   IS_HASWELL(i915) ? engine->gt->info.num_engines - 1 : 0;
+   int len;
+   u32 *cs;
+
+   len = 4;
+   if (IS_GEN(i915, 7))
+   len += 2 + (num_engines ? 4 * num_engines + 6 : 0);
+   else if (IS_GEN(i915, 5))
+   len += 2;
+
+   cs = ring_map_dw(ring, len);
+
+   /* WaProgramMiArbOnOffAroundMiSetContext:ivb,vlv,hsw,bdw,chv */
+   if (IS_GEN(i915, 7)) {
+   *cs++ = MI_ARB_ON_OFF | MI_ARB_DISABLE;
+   if (num_engines) {
+   struct intel_engine_cs *signaller;
+
+   *cs++ = MI_LOAD_REGISTER_IMM(num_engines);
+   for_each_engine(signaller, engine->gt, id) {
+   if (signaller == engine)
+   continue;
+
+   *cs++ = i915_mmio_reg_offset(
+  RING_PSMI_CTL(signaller->mmio_base));
+   *cs++ = _MASKED_BIT_ENABLE(
+   GEN6_PSMI_SLEEP_MSG_DISABLE);
+   }
+   }
+   } else if (IS_GEN(i915, 5)) {
+   /*
+* This w/a is only listed for pre-production ilk a/b steppings,
+* but is also mentioned for programming the powerctx. To be
+* safe, just apply the workaround; we do not use SyncFlush so
+* this should never take effect and so be a no-op!
+*/
+   *cs++ = MI_SUSPEND_FLUSH | MI_SUSPEND_FLUSH_EN;
+   }
+
+   *cs++ = MI_NOOP;
+   *cs++ = MI_SET_CONTEXT;
+   *cs++ = i915_ggtt_offset(ce->state) | flags;
+   /*
+* w/a: MI_SET_CONTEXT must always be followed by MI_NOOP
+* WaMiSetContext_Hang:snb,ivb,vlv
+*/
+   *cs++ = MI_NOOP;
+
+   if (IS_GEN(i915, 7)) {
+   if (num_engines) {
+   struct intel_engine_cs *signaller;
+   i915_reg_t last_reg = {}; /* keep gcc quiet */
+
+   *cs++ = MI_LOAD_REGISTER_IMM(num_engines);
+   for_each_engine(signaller, engine->gt, id) {
+   if (signaller == engine)
+   continue;
+
+   last_reg = RING_PSMI_CTL(signaller->mmio_base);
+   *cs++ = i915_mmio_reg_offset(last_reg);
+   *cs++ = _MASKED_BIT_DISABLE(
+   GEN6_PSMI_SLEEP_MSG_DISABLE);
+   }
+
+   /* Insert a delay before the next switch! */
+   *cs++ = MI_STORE_REGISTER_MEM | MI_SRM_LRM_GLOBAL_GTT;
+   *cs++ = i915_mmio_reg_offset(last_reg);
+   *cs++ = intel_gt_scratch_offset(engine->gt,
+   INTEL_GT_SCRATCH_FIELD_DEFAULT);
+   *cs++ = MI_NOOP;
+   }
+   *cs++ = MI_ARB_ON_OFF | MI_ARB_ENABLE;
+   } else if (IS_GEN(i915, 5)) {
+   *cs++ = MI_SUSPEND_FLUSH;
+   }
+}
+
+static struct i915_address_space *vm_alias(struct i915_address_space *vm)
+{
+   if (i915_is_ggtt(vm))
+   vm = &i915_

[Intel-gfx] [PATCH 26/69] drm/i915: Drop i915_request.lock serialisation around await_start

2020-12-14 Thread Chris Wilson
Originally, we used the signal->lock as a means of following the
previous link in its timeline and peeking at the previous fence.
However, we have replaced the explicit serialisation with a series of
very careful probes that anticipate the links being deleted and the
fences recycled before we are able to acquire a strong reference to it.
We do not need the signal->lock crutch anymore, nor want the contention.
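
The shape of the lockless probe relied upon here is the usual RCU
speculative-reference-then-revalidate idiom (a sketch with illustrative
names; the real loop below must also recognise when it has walked off the
end of the timeline):

	rcu_read_lock();
	prev = list_entry(READ_ONCE(signal->link.prev), typeof(*prev), link);
	if (!i915_request_get_rcu(prev)) {
		prev = NULL;			/* already being freed */
	} else if (READ_ONCE(signal->link.prev) != &prev->link) {
		i915_request_put(prev);		/* recycled under us */
		prev = NULL;
	}
	rcu_read_unlock();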

Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
---
 drivers/gpu/drm/i915/i915_request.c | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index e4dad3aa69ff..87f59931f2ba 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -962,9 +962,16 @@ i915_request_await_start(struct i915_request *rq, struct i915_request *signal)
if (i915_request_started(signal))
return 0;
 
+   /*
+* The caller holds a reference on @signal, but we do not serialise
+* against it being retired and removed from the lists.
+*
+* We do not hold a reference to the request before @signal, and
+* so must be very careful to ensure that it is not _recycled_ as
+* we follow the link backwards.
+*/
fence = NULL;
rcu_read_lock();
-   spin_lock_irq(&signal->lock);
do {
struct list_head *pos = READ_ONCE(signal->link.prev);
struct i915_request *prev;
@@ -995,7 +1002,6 @@ i915_request_await_start(struct i915_request *rq, struct i915_request *signal)
 
fence = &prev->fence;
} while (0);
-   spin_unlock_irq(&signal->lock);
rcu_read_unlock();
if (!fence)
return 0;
-- 
2.20.1



[Intel-gfx] [PATCH 63/69] drm/i915/gt: Infrastructure for ring scheduling

2020-12-14 Thread Chris Wilson
Build a bare-bones scheduler to sit on top of the global legacy ringbuffer
submission. This virtual execlists scheme should be applicable to all
older platforms.

A key problem we have with the legacy ring buffer submission is that it
only allows for FIFO queuing. All clients share the global request queue
and must contend for its lock when submitting. As any client may need to
wait for external events, all clients must then wait. However, if we
stage each client into their own virtual ringbuffer with their own
timelines, we can copy the client requests into the global ringbuffer
only when they are ready, reordering the submission around stalls.
Furthermore, the ability to reorder gives us rudimentary priority
sorting -- although without preemption support, once something is on the
GPU it stays on the GPU, and so it is still possible for a hog to delay
a high priority request (such as updating the display). However, it does
mean that by keeping a short submission queue, the high priority
request will be next. This design resembles the old guc submission
scheduler, reordering requests onto a global workqueue.

The implementation uses the MI_USER_INTERRUPT at the end of every
request to track completion, so is more interrupt happy than execlists
[which has an interrupt for each context event, albeit two]. Our
interrupts on these systems are relatively heavy, and in the past we have
been able to completely starve Sandybridge with the interrupt traffic. Our
interrupt handlers are now much better (in part offloading the work to
bottom halves leaving the interrupt itself only dealing with acking the
registers) but we can still see the impact of starvation in the uneven
submission latency on a saturated system.

Overall though, the short submission queues and extra interrupts do not
appear to affect throughput (+-10%; some tasks even improve thanks to the
reduced request overheads), and latency improves. [Which is a massive
improvement since the introduction of Sandybridge!]
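
In outline, the scheme each engine's submission tasklet implements is (a
conceptual sketch only; submit_ready() is an illustrative name, and the
ring_copy() argument order is assumed -- see the gen6/7 implementation
patch for the real code):

static void submit_ready(struct intel_engine_cs *engine,
			 struct i915_request *rq)
{
	struct intel_ring *hw = engine->legacy.ring;	/* GGTT bound */
	struct intel_ring *client = rq->ring;		/* CPU only, WB */

	/* Copy the now-ready payload into the single global ring... */
	ring_copy(hw, client, rq->head, rq->tail);
	/* ...then emit the breadcrumb and advance the HW tail. */
}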

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/Makefile |   1 +
 drivers/gpu/drm/i915/gt/intel_engine.h|   1 +
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |   1 +
 .../gpu/drm/i915/gt/intel_ring_scheduler.c| 822 ++
 .../gpu/drm/i915/gt/intel_ring_submission.c   |  17 +-
 .../gpu/drm/i915/gt/intel_ring_submission.h   |  17 +
 6 files changed, 851 insertions(+), 8 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/gt/intel_ring_scheduler.c
 create mode 100644 drivers/gpu/drm/i915/gt/intel_ring_submission.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index f319311be93e..5166347b002d 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -111,6 +111,7 @@ gt-y += \
gt/intel_renderstate.o \
gt/intel_reset.o \
gt/intel_ring.o \
+   gt/intel_ring_scheduler.o \
gt/intel_ring_submission.o \
gt/intel_rps.o \
gt/intel_sseu.o \
diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
index 925343e646e3..59fff33cad16 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine.h
@@ -195,6 +195,7 @@ void intel_engine_cleanup_common(struct intel_engine_cs *engine);
 int intel_engine_resume(struct intel_engine_cs *engine);
 
 int intel_ring_submission_setup(struct intel_engine_cs *engine);
+int intel_ring_scheduler_setup(struct intel_engine_cs *engine);
 
 int intel_engine_stop_cs(struct intel_engine_cs *engine);
 void intel_engine_cancel_stop_cs(struct intel_engine_cs *engine);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index 824a187b2803..0698c4ae572c 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -321,6 +321,7 @@ struct intel_engine_cs {
struct {
struct intel_ring *ring;
struct intel_timeline *timeline;
+   struct intel_context *context;
} legacy;
 
/*
diff --git a/drivers/gpu/drm/i915/gt/intel_ring_scheduler.c b/drivers/gpu/drm/i915/gt/intel_ring_scheduler.c
new file mode 100644
index ..775f21acd7a8
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/intel_ring_scheduler.c
@@ -0,0 +1,822 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2020 Intel Corporation
+ */
+
+#include 
+
+#include 
+
+#include "i915_drv.h"
+#include "intel_breadcrumbs.h"
+#include "intel_context.h"
+#include "intel_engine_pm.h"
+#include "intel_gt.h"
+#include "intel_gt_pm.h"
+#include "intel_gt_requests.h"
+#include "intel_reset.h"
+#include "intel_ring.h"
+#include "intel_ring_submission.h"
+#include "shmem_utils.h"
+
+/*
+ * Rough estimate of the typical request size, performing a flush,
+ * set-context and then emitting the batch.
+ */
+#define LEGACY_REQUEST_SIZE 200
+
+static inline int rq_prio(const struct i915_request *rq)
+{
+   

[Intel-gfx] [PATCH 09/69] drm/i915/gt: Shrink the critical section for irq signaling

2020-12-14 Thread Chris Wilson
Let's only wait for the list iterator when decoupling the virtual
breadcrumb, as the signaling of all the requests may take a long time,
during which we do not want to keep the tasklet spinning.
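
The irq_work_sync() this replaces (see kick_siblings() in the hunk below)
becomes a simple inc/dec bracket around the signaler walk plus a spin on
the other side; the pairing in isolation:

	/* signal_irq_work(): bracket the walk over b->signalers */
	atomic_inc(&b->signaler_active);
	/* ... signal completed requests under rcu_read_lock() ... */
	atomic_dec(&b->signaler_active);

	/* kick_siblings(): wait for any walk in progress to drain */
	while (atomic_read(&b->signaler_active))
		cpu_relax();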

Signed-off-by: Chris Wilson 
Reviewed-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/intel_breadcrumbs.c  | 2 ++
 drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h| 1 +
 drivers/gpu/drm/i915/gt/intel_execlists_submission.c | 3 ++-
 3 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
index 63900edbde88..ac1e5f6c3c2c 100644
--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
@@ -239,6 +239,7 @@ static void signal_irq_work(struct irq_work *work)
intel_breadcrumbs_disarm_irq(b);
 
rcu_read_lock();
+   atomic_inc(&b->signaler_active);
list_for_each_entry_rcu(ce, &b->signalers, signal_link) {
struct i915_request *rq;
 
@@ -274,6 +275,7 @@ static void signal_irq_work(struct irq_work *work)
}
}
}
+   atomic_dec(&b->signaler_active);
rcu_read_unlock();
 
llist_for_each_safe(signal, sn, signal) {
diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h b/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h
index a74bb3062bd8..f672053d694d 100644
--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h
@@ -35,6 +35,7 @@ struct intel_breadcrumbs {
spinlock_t signalers_lock; /* protects the list of signalers */
struct list_head signalers;
struct llist_head signaled_requests;
+   atomic_t signaler_active;
 
spinlock_t irq_lock; /* protects the interrupt from hardirq context */
struct irq_work irq_work; /* for use from inside irq_lock */
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 192ec4041d7a..9dcd650805fa 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -1372,7 +1372,8 @@ static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
 * ce->signal_link.
 */
i915_request_cancel_breadcrumb(rq);
-   irq_work_sync(&engine->breadcrumbs->irq_work);
+   while (atomic_read(&engine->breadcrumbs->signaler_active))
+   cpu_relax();
}
 
if (READ_ONCE(ve->request))
-- 
2.20.1



[Intel-gfx] [PATCH 44/69] drm/i915/selftests: Exercise priority inheritance around an engine loop

2020-12-14 Thread Chris Wilson
Exercise rescheduling priority inheritance around a sequence of requests
that wrap around all the engines.

Signed-off-by: Chris Wilson 
---
 .../gpu/drm/i915/selftests/i915_scheduler.c   | 219 ++
 1 file changed, 219 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_scheduler.c b/drivers/gpu/drm/i915/selftests/i915_scheduler.c
index 481549f0ddad..eb85f9731a78 100644
--- a/drivers/gpu/drm/i915/selftests/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/selftests/i915_scheduler.c
@@ -6,6 +6,7 @@
 #include "i915_selftest.h"
 
 #include "gt/intel_context.h"
+#include "gt/intel_ring.h"
 #include "gt/selftest_engine_heartbeat.h"
 #include "selftests/igt_spinner.h"
 #include "selftests/i915_random.h"
@@ -511,10 +512,228 @@ static int igt_priority_chains(void *arg)
return igt_schedule_chains(arg, igt_priority);
 }
 
+static struct i915_request *
+__write_timestamp(struct intel_engine_cs *engine,
+ struct drm_i915_gem_object *obj,
+ int slot,
+ struct i915_request *prev)
+{
+   struct i915_request *rq = ERR_PTR(-EINVAL);
+   struct intel_context *ce;
+   struct i915_vma *vma;
+   int err = 0;
+   u32 *cs;
+
+   ce = intel_context_create(engine);
+   if (IS_ERR(ce))
+   return ERR_CAST(ce);
+
+   vma = i915_vma_instance(obj, ce->vm, NULL);
+   if (IS_ERR(vma)) {
+   err = PTR_ERR(vma);
+   goto out_ce;
+   }
+
+   err = i915_vma_pin(vma, 0, 0, PIN_USER);
+   if (err)
+   goto out_ce;
+
+   rq = intel_context_create_request(ce);
+   if (IS_ERR(rq)) {
+   err = PTR_ERR(rq);
+   goto out_unpin;
+   }
+
+   i915_vma_lock(vma);
+   err = i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE);
+   i915_vma_unlock(vma);
+   if (err)
+   goto out_request;
+
+   if (prev) {
+   err = i915_request_await_dma_fence(rq, &prev->fence);
+   if (err)
+   goto out_request;
+   }
+
+   if (engine->emit_init_breadcrumb) {
+   err = engine->emit_init_breadcrumb(rq);
+   if (err)
+   goto out_request;
+   }
+
+   cs = intel_ring_begin(rq, 4);
+   if (IS_ERR(cs)) {
+   err = PTR_ERR(cs);
+   goto out_request;
+   }
+
+   *cs++ = MI_STORE_REGISTER_MEM_GEN8;
+   *cs++ = i915_mmio_reg_offset(RING_TIMESTAMP(engine->mmio_base));
+   *cs++ = lower_32_bits(vma->node.start) + sizeof(u32) * slot;
+   *cs++ = upper_32_bits(vma->node.start);
+   intel_ring_advance(rq, cs);
+
+   i915_request_get(rq);
+out_request:
+   i915_request_add(rq);
+out_unpin:
+   i915_vma_unpin(vma);
+out_ce:
+   intel_context_put(ce);
+   i915_request_put(prev);
+   return err ? ERR_PTR(err) : rq;
+}
+
+static struct i915_request *create_spinner(struct drm_i915_private *i915,
+  struct igt_spinner *spin)
+{
+   struct intel_engine_cs *engine;
+
+   for_each_uabi_engine(engine, i915) {
+   struct intel_context *ce;
+   struct i915_request *rq;
+
+   if (igt_spinner_init(spin, engine->gt))
+   return ERR_PTR(-ENOMEM);
+
+   ce = intel_context_create(engine);
+   if (IS_ERR(ce))
+   return ERR_CAST(ce);
+
+   rq = igt_spinner_create_request(spin, ce, MI_NOOP);
+   intel_context_put(ce);
+   if (rq == ERR_PTR(-ENODEV))
+   continue;
+   if (IS_ERR(rq))
+   return rq;
+
+   i915_request_get(rq);
+   i915_request_add(rq);
+   return rq;
+   }
+
+   return ERR_PTR(-ENODEV);
+}
+
+static int __igt_schedule_cycle(struct drm_i915_private *i915,
+   bool (*fn)(struct i915_request *rq,
+  unsigned long v, unsigned long e))
+{
+   struct intel_engine_cs *engine;
+   struct drm_i915_gem_object *obj;
+   struct igt_spinner spin;
+   struct i915_request *rq;
+   unsigned long count, n;
+   u32 *time, last;
+   int err;
+
+   /*
+* Queue a bunch of ordered requests (each waiting on the previous)
+* around the engines a couple of times. Each request will write
+* the timestamp it executes at into the scratch, with the expectation
+* that the timestamp will be in our desired execution order.
+*/
+
+   if (INTEL_GEN(i915) < 8)
+   return 0;
+
+   obj = i915_gem_object_create_internal(i915, SZ_64K);
+   if (IS_ERR(obj))
+   return PTR_ERR(obj);
+
+   time = i915_gem_object_pin_map(obj, I915_MAP_WC);
+   if (IS_ERR(time)) {
+   err = PTR_ERR(time);
+   goto out_obj;
+   }
+
+   rq = create_sp

[Intel-gfx] [PATCH 46/69] drm/i915/gt: Remove timeslice suppression

2020-12-14 Thread Chris Wilson
In the next patch, we remove the strict priority system and continuously
re-evaluate the relative priority of tasks. As such we need to enable
the timeslice whenever there is more than one context in the pipeline.
This simplifies the decision and removes some of the tweaks to suppress
timeslicing, allowing us to lift the timeslice enabling to a common spot
at the end of running the submission tasklet.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |  10 --
 .../drm/i915/gt/intel_execlists_submission.c  | 149 +++---
 2 files changed, 53 insertions(+), 106 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index cdc49f8e04ee..d19710191690 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -239,16 +239,6 @@ struct intel_engine_execlists {
 */
unsigned int port_mask;
 
-   /**
-* @switch_priority_hint: Second context priority.
-*
-* We submit multiple contexts to the HW simultaneously and would
-* like to occasionally switch between them to emulate timeslicing.
-* To know when timeslicing is suitable, we track the priority of
-* the context submitted second.
-*/
-   int switch_priority_hint;
-
/**
 * @queue_priority_hint: Highest pending priority.
 *
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 53e5db533adb..af8548725e1f 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -1875,25 +1875,6 @@ static void defer_active(struct intel_engine_cs *engine)
defer_request(rq, i915_sched_lookup_priolist(engine, rq_prio(rq)));
 }
 
-static bool
-need_timeslice(const struct intel_engine_cs *engine,
-  const struct i915_request *rq)
-{
-   int hint;
-
-   if (!intel_engine_has_timeslices(engine))
-   return false;
-
-   hint = max(engine->execlists.queue_priority_hint,
-  virtual_prio(&engine->execlists));
-
-   if (!list_is_last(&rq->sched.link, &engine->active.requests))
-   hint = max(hint, rq_prio(list_next_entry(rq, sched.link)));
-
-   GEM_BUG_ON(hint >= I915_PRIORITY_UNPREEMPTABLE);
-   return hint >= effective_prio(rq);
-}
-
 static bool
 timeslice_yield(const struct intel_engine_execlists *el,
const struct i915_request *rq)
@@ -1913,76 +1894,63 @@ timeslice_yield(const struct intel_engine_execlists *el,
return rq->context->lrc.ccid == READ_ONCE(el->yield);
 }
 
-static bool
-timeslice_expired(const struct intel_engine_execlists *el,
- const struct i915_request *rq)
+static bool needs_timeslice(const struct intel_engine_cs *engine,
+   const struct i915_request *rq)
 {
-   return timer_expired(&el->timer) || timeslice_yield(el, rq);
-}
+   /* If not currently active, or about to switch, wait for next event */
+   if (!rq || i915_request_completed(rq))
+   return false;
 
-static int
-switch_prio(struct intel_engine_cs *engine, const struct i915_request *rq)
-{
-   if (list_is_last(&rq->sched.link, &engine->active.requests))
-   return engine->execlists.queue_priority_hint;
+   /* We do not need to start the timeslice until after the ACK */
+   if (READ_ONCE(engine->execlists.pending[0]))
+   return false;
 
-   return rq_prio(list_next_entry(rq, sched.link));
-}
+   /* If ELSP[1] is occupied, always check to see if worth slicing */
+   if (!list_is_last_rcu(&rq->sched.link, &engine->active.requests))
+   return true;
 
-static inline unsigned long
-timeslice(const struct intel_engine_cs *engine)
-{
-   return READ_ONCE(engine->props.timeslice_duration_ms);
+   /* Otherwise, ELSP[0] is by itself, but may be waiting in the queue */
+   if (!RB_EMPTY_ROOT(&engine->execlists.queue.rb_root))
+   return true;
+
+   return !RB_EMPTY_ROOT(&engine->execlists.virtual.rb_root);
 }
 
-static unsigned long active_timeslice(const struct intel_engine_cs *engine)
+static bool
+timeslice_expired(struct intel_engine_cs *engine, const struct i915_request *rq)
 {
-   const struct intel_engine_execlists *execlists = &engine->execlists;
-   const struct i915_request *rq = *execlists->active;
+   const struct intel_engine_execlists *el = &engine->execlists;
 
-   if (!rq || i915_request_completed(rq))
-   return 0;
+   if (!intel_engine_has_timeslices(engine))
+   return false;
 
-   if (READ_ONCE(execlists->switch_priority_hint) < effective_prio(rq))
-   return 0;
+   if (i915_request_has_nopreempt(rq) && i915_request_started(rq))
+   return false;
+
+   if (!needs_timeslic

[Intel-gfx] [PATCH 29/69] drm/i915/gem: Reduce ctx->engines_mutex for get_engines()

2020-12-14 Thread Chris Wilson
Take a snapshot of the ctx->engines, so we can avoid taking the
ctx->engines_mutex for a mere read in get_engines().
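
__context_engines_await() used below is not shown in this hunk; in rough
outline it takes an RCU-protected snapshot of ctx->engines and keeps it
alive via the engines' sw_fence rather than the mutex (a sketch only, with
the user_engines out-parameter this patch introduces):

static struct i915_gem_engines *
__context_engines_await(struct i915_gem_context *ctx, bool *user_engines)
{
	struct i915_gem_engines *e;

	rcu_read_lock();
	do {
		e = rcu_dereference(ctx->engines);
		if (user_engines)
			*user_engines = i915_gem_context_user_engines(ctx);
		/* a successful await pins the snapshot against release */
	} while (e && !i915_sw_fence_await(&e->fence));
	rcu_read_unlock();

	return e;
}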

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c | 39 +
 1 file changed, 8 insertions(+), 31 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index e87da2525d0f..8c5514574e8b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -1839,27 +1839,6 @@ set_engines(struct i915_gem_context *ctx,
return 0;
 }
 
-static struct i915_gem_engines *
-__copy_engines(struct i915_gem_engines *e)
-{
-   struct i915_gem_engines *copy;
-   unsigned int n;
-
-   copy = alloc_engines(e->num_engines);
-   if (!copy)
-   return ERR_PTR(-ENOMEM);
-
-   for (n = 0; n < e->num_engines; n++) {
-   if (e->engines[n])
-   copy->engines[n] = intel_context_get(e->engines[n]);
-   else
-   copy->engines[n] = NULL;
-   }
-   copy->num_engines = n;
-
-   return copy;
-}
-
 static int
 get_engines(struct i915_gem_context *ctx,
struct drm_i915_gem_context_param *args)
@@ -1867,19 +1846,17 @@ get_engines(struct i915_gem_context *ctx,
struct i915_context_param_engines __user *user;
struct i915_gem_engines *e;
size_t n, count, size;
+   bool user_engines;
int err = 0;
 
-   err = mutex_lock_interruptible(&ctx->engines_mutex);
-   if (err)
-   return err;
+   e = __context_engines_await(ctx, &user_engines);
+   if (!e)
+   return -ENOENT;
 
-   e = NULL;
-   if (i915_gem_context_user_engines(ctx))
-   e = __copy_engines(i915_gem_context_engines(ctx));
-   mutex_unlock(&ctx->engines_mutex);
-   if (IS_ERR_OR_NULL(e)) {
+   if (!user_engines) {
+   i915_sw_fence_complete(&e->fence);
args->size = 0;
-   return PTR_ERR_OR_ZERO(e);
+   return 0;
}
 
count = e->num_engines;
@@ -1930,7 +1907,7 @@ get_engines(struct i915_gem_context *ctx,
args->size = size;
 
 err_free:
-   free_engines(e);
+   i915_sw_fence_complete(&e->fence);
return err;
 }
 
-- 
2.20.1



[Intel-gfx] [PATCH 66/69] drm/i915/gt: Enable ring scheduling for gen6/7

2020-12-14 Thread Chris Wilson
Switch over from FIFO global submission to the priority-sorted
topological scheduler. At the cost of more busy work on the CPU to
keep the GPU supplied with the next packet of requests, this allows us
to reorder requests around submission stalls.

This also enables the timer based RPS, with the exception of Valleyview
whose PCU doesn't take kindly to our interference.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c | 2 +-
 drivers/gpu/drm/i915/gt/intel_engine_cs.c | 2 ++
 drivers/gpu/drm/i915/gt/intel_rps.c   | 6 ++
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
index d3f87dc4eda3..2246b5c308dc 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
@@ -94,7 +94,7 @@ static int live_nop_switch(void *arg)
rq = i915_request_get(this);
i915_request_add(this);
}
-   if (i915_request_wait(rq, 0, HZ / 5) < 0) {
+   if (i915_request_wait(rq, 0, HZ) < 0) {
pr_err("Failed to populated %d contexts\n", nctx);
intel_gt_set_wedged(&i915->gt);
i915_request_put(rq);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index cc7983d14cc0..ff2f8ebb817b 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -866,6 +866,8 @@ int intel_engines_init(struct intel_gt *gt)
 
if (HAS_EXECLISTS(gt->i915))
setup = intel_execlists_submission_setup;
+   else if (INTEL_GEN(gt->i915) >= 6)
+   setup = intel_ring_scheduler_setup;
else
setup = intel_ring_submission_setup;
 
diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
index 2b443b735a98..2963ab5a86ff 100644
--- a/drivers/gpu/drm/i915/gt/intel_rps.c
+++ b/drivers/gpu/drm/i915/gt/intel_rps.c
@@ -1078,9 +1078,7 @@ static bool gen6_rps_enable(struct intel_rps *rps)
intel_uncore_write_fw(uncore, GEN6_RP_DOWN_TIMEOUT, 5);
intel_uncore_write_fw(uncore, GEN6_RP_IDLE_HYSTERSIS, 10);
 
-   rps->pm_events = (GEN6_PM_RP_UP_THRESHOLD |
- GEN6_PM_RP_DOWN_THRESHOLD |
- GEN6_PM_RP_DOWN_TIMEOUT);
+   rps->pm_events = GEN6_PM_RP_UP_THRESHOLD | GEN6_PM_RP_DOWN_THRESHOLD;
 
return rps_reset(rps);
 }
@@ -1388,7 +1386,7 @@ void intel_rps_enable(struct intel_rps *rps)
GEM_BUG_ON(rps->efficient_freq < rps->min_freq);
GEM_BUG_ON(rps->efficient_freq > rps->max_freq);
 
-   if (has_busy_stats(rps))
+   if (has_busy_stats(rps) && !IS_VALLEYVIEW(i915))
intel_rps_set_timer(rps);
else if (INTEL_GEN(i915) >= 6)
intel_rps_set_interrupts(rps);
-- 
2.20.1



[Intel-gfx] [PATCH 60/69] drm/i915/gt: Couple tasklet scheduling for all CS interrupts

2020-12-14 Thread Chris Wilson
If any engine asks for the tasklet to be kicked from the CS interrupt,
do so. Currently, this is used by the execlists scheduler backends to
feed in the next request to the HW, and similarly could be used by a
ring scheduler, as will be seen in the next patch.

Signed-off-by: Chris Wilson 
Reviewed-by: Mika Kuoppala 
---
 drivers/gpu/drm/i915/gt/intel_gt_irq.c | 17 -
 drivers/gpu/drm/i915/gt/intel_gt_irq.h |  3 +++
 drivers/gpu/drm/i915/gt/intel_rps.c|  2 +-
 drivers/gpu/drm/i915/i915_irq.c|  8 
 4 files changed, 20 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gt_irq.c b/drivers/gpu/drm/i915/gt/intel_gt_irq.c
index 2106fb403c3e..dfb2d66e1556 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_irq.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_irq.c
@@ -63,6 +63,13 @@ cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
i915_sched_kick(&engine->active);
 }
 
+void gen2_engine_cs_irq(struct intel_engine_cs *engine)
+{
+   intel_engine_signal_breadcrumbs(engine);
+   if (intel_engine_needs_breadcrumb_tasklet(engine))
+   i915_sched_kick(&engine->active);
+}
+
 static u32
 gen11_gt_engine_identity(struct intel_gt *gt,
 const unsigned int bank, const unsigned int bit)
@@ -276,9 +283,9 @@ void gen11_gt_irq_postinstall(struct intel_gt *gt)
 void gen5_gt_irq_handler(struct intel_gt *gt, u32 gt_iir)
 {
if (gt_iir & GT_RENDER_USER_INTERRUPT)
-   intel_engine_signal_breadcrumbs(gt->engine_class[RENDER_CLASS][0]);
+   gen2_engine_cs_irq(gt->engine_class[RENDER_CLASS][0]);
if (gt_iir & ILK_BSD_USER_INTERRUPT)
-   intel_engine_signal_breadcrumbs(gt->engine_class[VIDEO_DECODE_CLASS][0]);
+   gen2_engine_cs_irq(gt->engine_class[VIDEO_DECODE_CLASS][0]);
 }
 
 static void gen7_parity_error_irq_handler(struct intel_gt *gt, u32 iir)
@@ -302,11 +309,11 @@ static void gen7_parity_error_irq_handler(struct intel_gt *gt, u32 iir)
 void gen6_gt_irq_handler(struct intel_gt *gt, u32 gt_iir)
 {
if (gt_iir & GT_RENDER_USER_INTERRUPT)
-   intel_engine_signal_breadcrumbs(gt->engine_class[RENDER_CLASS][0]);
+   gen2_engine_cs_irq(gt->engine_class[RENDER_CLASS][0]);
if (gt_iir & GT_BSD_USER_INTERRUPT)
-   intel_engine_signal_breadcrumbs(gt->engine_class[VIDEO_DECODE_CLASS][0]);
+   gen2_engine_cs_irq(gt->engine_class[VIDEO_DECODE_CLASS][0]);
if (gt_iir & GT_BLT_USER_INTERRUPT)
-   intel_engine_signal_breadcrumbs(gt->engine_class[COPY_ENGINE_CLASS][0]);
+   gen2_engine_cs_irq(gt->engine_class[COPY_ENGINE_CLASS][0]);
 
if (gt_iir & (GT_BLT_CS_ERROR_INTERRUPT |
  GT_BSD_CS_ERROR_INTERRUPT |
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_irq.h b/drivers/gpu/drm/i915/gt/intel_gt_irq.h
index 886c5cf408a2..6c69cd563fe1 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_irq.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_irq.h
@@ -9,6 +9,7 @@
 
 #include 
 
+struct intel_engine_cs;
 struct intel_gt;
 
 #define GEN8_GT_IRQS (GEN8_GT_RCS_IRQ | \
@@ -19,6 +20,8 @@ struct intel_gt;
  GEN8_GT_PM_IRQ | \
  GEN8_GT_GUC_IRQ)
 
+void gen2_engine_cs_irq(struct intel_engine_cs *engine);
+
 void gen11_gt_irq_reset(struct intel_gt *gt);
 void gen11_gt_irq_postinstall(struct intel_gt *gt);
 void gen11_gt_irq_handler(struct intel_gt *gt, const u32 master_ctl);
diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
index e1397b8d3586..2b443b735a98 100644
--- a/drivers/gpu/drm/i915/gt/intel_rps.c
+++ b/drivers/gpu/drm/i915/gt/intel_rps.c
@@ -1771,7 +1771,7 @@ void gen6_rps_irq_handler(struct intel_rps *rps, u32 pm_iir)
return;
 
if (pm_iir & PM_VEBOX_USER_INTERRUPT)
-   intel_engine_signal_breadcrumbs(gt->engine[VECS0]);
+   gen2_engine_cs_irq(gt->engine[VECS0]);
 
if (pm_iir & PM_VEBOX_CS_ERROR_INTERRUPT)
DRM_DEBUG("Command parser error, pm_iir 0x%08x\n", pm_iir);
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index b245109f73e3..4e6a3a3a938c 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -3926,7 +3926,7 @@ static irqreturn_t i8xx_irq_handler(int irq, void *arg)
intel_uncore_write16(&dev_priv->uncore, GEN2_IIR, iir);
 
if (iir & I915_USER_INTERRUPT)
-   intel_engine_signal_breadcrumbs(dev_priv->gt.engine[RCS0]);
+   gen2_engine_cs_irq(dev_priv->gt.engine[RCS0]);
 
if (iir & I915_MASTER_ERROR_INTERRUPT)
i8xx_error_irq_handler(dev_priv, eir, eir_stuck);
@@ -4032,7 +4032,7 @@ static irqreturn_t i915_irq_handler(int irq, void *arg)
intel_uncore_write(&dev_priv->uncore, GEN2_IIR, iir);
 
if (iir & I915_USER_INTERRUPT)

[Intel-gfx] [PATCH 01/69] drm/i915: Use cmpxchg64 for 32b compatibility

2020-12-14 Thread Chris Wilson
By using the double wide cmpxchg64 on 32bit, we can use the same
algorithm on both 32/64b systems.
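
The claim idiom this enables on both word sizes (on x86-32, cmpxchg64()
compiles down to the double-word cmpxchg8b instruction):

	/* Claim an unowned cached node; only one racer can observe 0. */
	if (!cmpxchg64(&node->timeline, 0, idx))
		return node;	/* we won the race, the cache is ours */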

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/i915_active.c | 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index 10a865f3dc09..ab4382841c6b 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -159,8 +159,7 @@ __active_retire(struct i915_active *ref)
GEM_BUG_ON(ref->tree.rb_node != &ref->cache->node);
 
/* Make the cached node available for reuse with any timeline */
-   if (IS_ENABLED(CONFIG_64BIT))
-   ref->cache->timeline = 0; /* needs cmpxchg(u64) */
+   ref->cache->timeline = 0; /* needs cmpxchg(u64) */
}
 
spin_unlock_irqrestore(&ref->tree_lock, flags);
@@ -256,7 +255,6 @@ static struct active_node *__active_lookup(struct i915_active *ref, u64 idx)
if (cached == idx)
return it;
 
-#ifdef CONFIG_64BIT /* for cmpxchg(u64) */
/*
 * An unclaimed cache [.timeline=0] can only be claimed once.
 *
@@ -267,9 +265,8 @@ static struct active_node *__active_lookup(struct i915_active *ref, u64 idx)
 * only the winner of that race will cmpxchg return the old
 * value of 0).
 */
-   if (!cached && !cmpxchg(&it->timeline, 0, idx))
+   if (!cached && !cmpxchg64(&it->timeline, 0, idx))
return it;
-#endif
}
 
BUILD_BUG_ON(offsetof(typeof(*it), node));
-- 
2.20.1



[Intel-gfx] [PATCH 50/69] drm/i915: Fix the iterative dfs for deferring requests

2020-12-14 Thread Chris Wilson
The current implementation of walking the children of a deferred
request lacks the backtracking required to reduce the dfs to linear.
Having pulled it from execlists into the common layer, we can reuse the
dfs code for priority inheritance.
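
stack_push()/stack_pop() used below are not part of this hunk; the idea is
to reuse the otherwise idle sched.dfs list_head as an explicit stack frame,
so the walk backtracks in O(1) (a sketch of the shape, assuming the root's
sched.dfs.prev has been cleared before the walk):

static struct i915_request *
stack_push(struct i915_request *w, struct i915_request *parent,
	   struct list_head *pos)
{
	parent->sched.dfs.next = pos;		/* resume point in parent */
	w->sched.dfs.prev = &parent->sched.dfs;	/* remember the parent */
	return w;				/* descend into the waiter */
}

static struct i915_request *
stack_pop(struct i915_request *rq, struct list_head **pos)
{
	struct list_head *up = rq->sched.dfs.prev;

	if (!up)
		return NULL;			/* back at the root, done */

	rq = container_of(up, typeof(*rq), sched.dfs);
	*pos = rq->sched.dfs.next;		/* resume the parent's walk */
	return rq;
}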

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/i915_scheduler.c | 58 +++
 1 file changed, 42 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
index 1e0d0784d8c2..94fbb3bbcb8d 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -484,25 +484,26 @@ void i915_request_set_priority(struct i915_request *rq, int prio)
 void __intel_engine_defer_request(struct intel_engine_cs *engine,
  struct i915_request *rq)
 {
-   struct list_head *pl;
-   LIST_HEAD(list);
+   struct list_head *pos = &rq->sched.waiters_list;
+   struct i915_request *rn;
+   LIST_HEAD(dfs);
+   int prio;
 
lockdep_assert_held(&engine->active.lock);
GEM_BUG_ON(!test_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags));
 
+   prio = rq_prio(rq);
+
/*
 * When we defer a request, we must maintain its order with respect
 * to those that are waiting upon it. So we traverse its chain of
 * waiters and move any that are earlier than the request to after it.
 */
-   pl = i915_sched_lookup_priolist(engine, rq_prio(rq));
+   rq->sched.dfs.next = NULL;
do {
-   struct i915_dependency *p;
-
-   GEM_BUG_ON(i915_request_is_active(rq));
-   list_move_tail(&rq->sched.link, pl);
-
-   for_each_waiter(p, rq) {
+   list_for_each_continue(pos, &rq->sched.waiters_list) {
+   struct i915_dependency *p =
+   list_entry(pos, typeof(*p), wait_link);
struct i915_request *w =
container_of(p->waiter, typeof(*w), sched);
 
@@ -518,19 +519,44 @@ void __intel_engine_defer_request(struct intel_engine_cs 
*engine,
   i915_request_started(w) &&
   !i915_request_completed(rq));
 
-   GEM_BUG_ON(i915_request_is_active(w));
-   if (!i915_request_is_ready(w))
+   if (!i915_request_in_priority_queue(w))
continue;
 
-   if (rq_prio(w) < rq_prio(rq))
+   /*
+* We also need to reorder within the same priority.
+*
+* This is unlike priority-inheritance, where if the
+* signaler already has a higher priority [earlier
+* deadline] than us, we can ignore as it will be
+* scheduled first. If a waiter already has the
+* same priority, we still have to push it to the end
+* of the list. This unfortunately means we cannot
+* use the rq_deadline() itself as a 'visited' bit.
+*/
+   if (rq_prio(w) < prio)
continue;
 
-   GEM_BUG_ON(rq_prio(w) > rq_prio(rq));
-   list_move_tail(&w->sched.link, &list);
+   GEM_BUG_ON(rq_prio(w) != prio);
+
+   /* Remember our position along this branch */
+   rq = stack_push(w, rq, pos);
+   pos = &rq->sched.waiters_list;
}
 
-   rq = list_first_entry_or_null(&list, typeof(*rq), sched.link);
-   } while (rq);
+   /* Note list is reversed for waiters wrt signal hierarchy */
+   GEM_BUG_ON(rq->engine != engine);
+   GEM_BUG_ON(!i915_request_in_priority_queue(rq));
+   list_move(&rq->sched.link, &dfs);
+
+   /* Track our visit, and prevent duplicate processing */
+   clear_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
+   } while ((rq = stack_pop(rq, &pos)));
+
+   pos = i915_sched_lookup_priolist(engine, prio);
+   list_for_each_entry_safe(rq, rn, &dfs, sched.link) {
+   set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
+   list_add_tail(&rq->sched.link, pos);
+   }
 }
 
 static void queue_request(struct intel_engine_cs *engine,
-- 
2.20.1


[Intel-gfx] [PATCH 61/69] drm/i915/gt: Support creation of 'internal' rings

2020-12-14 Thread Chris Wilson
To support legacy ring buffer scheduling, we want a virtual ringbuffer
for each client. These rings are purely for holding the requests as they
are being constructed on the CPU and never accessed by the GPU, so they
should not be bound into the GGTT, and we can use plain old WB mapped
pages.

As they are not bound, we need to nerf a few assumptions that a rq->ring
is in the GGTT.
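
The crux of the change, reduced to a sketch (ggtt_pin_and_map() is a
hypothetical stand-in for the existing GGTT path in the diff below):

static void *ring_map(struct intel_ring *ring)
{
        struct i915_vma *vma = ring->vma;

        /* Internal rings are CPU-only: a plain WB kernel mapping
         * suffices and no GGTT binding is taken.
         */
        if (ring->flags & INTEL_RING_CREATE_INTERNAL)
                return i915_gem_object_pin_map(vma->obj, I915_MAP_WB);

        return ggtt_pin_and_map(vma); /* hypothetical: the GGTT path */
}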

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_context.c|  2 +-
 drivers/gpu/drm/i915/gt/intel_engine_cs.c  | 17 +-
 drivers/gpu/drm/i915/gt/intel_ring.c   | 66 ++
 drivers/gpu/drm/i915/gt/intel_ring.h   | 12 +++-
 drivers/gpu/drm/i915/gt/intel_ring_types.h |  2 +
 5 files changed, 59 insertions(+), 40 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c 
b/drivers/gpu/drm/i915/gt/intel_context.c
index 349e7fa1488d..f3a8c139624c 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -259,7 +259,7 @@ int __intel_context_do_pin_ww(struct intel_context *ce,
}
 
CE_TRACE(ce, "pin ring:{start:%08x, head:%04x, tail:%04x}\n",
-i915_ggtt_offset(ce->ring->vma),
+intel_ring_address(ce->ring),
 ce->ring->head, ce->ring->tail);
 
handoff = true;
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c 
b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 101f54467c1e..cc7983d14cc0 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -1309,7 +1309,7 @@ static int print_ring(char *buf, int sz, struct 
i915_request *rq)
 
len = scnprintf(buf, sz,
"ring:{start:%08x, hwsp:%08x, seqno:%08x, 
runtime:%llums}, ",
-   i915_ggtt_offset(rq->ring->vma),
+   intel_ring_address(rq->ring),
tl ? tl->ggtt_offset : 0,
hwsp_seqno(rq),

DIV_ROUND_CLOSEST_ULL(intel_context_get_total_runtime_ns(rq->context),
@@ -1637,7 +1637,7 @@ void intel_engine_dump(struct intel_engine_cs *engine,
i915_request_show(m, rq, "\t\tactive ", 0);
 
drm_printf(m, "\t\tring->start:  0x%08x\n",
-  i915_ggtt_offset(rq->ring->vma));
+  intel_ring_address(rq->ring));
drm_printf(m, "\t\tring->head:   0x%08x\n",
   rq->ring->head);
drm_printf(m, "\t\tring->tail:   0x%08x\n",
@@ -1718,13 +1718,6 @@ ktime_t intel_engine_get_busy_time(struct 
intel_engine_cs *engine, ktime_t *now)
return total;
 }
 
-static bool match_ring(struct i915_request *rq)
-{
-   u32 ring = ENGINE_READ(rq->engine, RING_START);
-
-   return ring == i915_ggtt_offset(rq->ring->vma);
-}
-
 struct i915_request *
 intel_engine_find_active_request(struct intel_engine_cs *engine)
 {
@@ -1764,11 +1757,7 @@ intel_engine_find_active_request(struct intel_engine_cs 
*engine)
continue;
 
if (!i915_request_started(request))
-   continue;
-
-   /* More than one preemptible request may match! */
-   if (!match_ring(request))
-   continue;
+   break;
 
active = request;
break;
diff --git a/drivers/gpu/drm/i915/gt/intel_ring.c 
b/drivers/gpu/drm/i915/gt/intel_ring.c
index 4034a4bac7f0..a45dc3fe89ca 100644
--- a/drivers/gpu/drm/i915/gt/intel_ring.c
+++ b/drivers/gpu/drm/i915/gt/intel_ring.c
@@ -30,33 +30,42 @@ void __intel_ring_pin(struct intel_ring *ring)
 int intel_ring_pin(struct intel_ring *ring, struct i915_gem_ww_ctx *ww)
 {
struct i915_vma *vma = ring->vma;
-   unsigned int flags;
void *addr;
int ret;
 
if (atomic_fetch_inc(&ring->pin_count))
return 0;
 
-   /* Ring wraparound at offset 0 sometimes hangs. No idea why. */
-   flags = PIN_OFFSET_BIAS | i915_ggtt_pin_bias(vma);
+   if (!(ring->flags & INTEL_RING_CREATE_INTERNAL)) {
+   int type = i915_coherent_map_type(vma->vm->i915);
+   unsigned int pin;
 
-   if (vma->obj->stolen)
-   flags |= PIN_MAPPABLE;
-   else
-   flags |= PIN_HIGH;
+   /* Ring wraparound at offset 0 sometimes hangs. No idea why. */
+   pin = PIN_OFFSET_BIAS | i915_ggtt_pin_bias(vma);
 
-   ret = i915_ggtt_pin(vma, ww, 0, flags);
-   if (unlikely(ret))
-   goto err_unpin;
+   if (vma->obj->stolen)
+   pin |= PIN_MAPPABLE;
+   else
+   pin |= PIN_HIGH;
 
-   if (i915_vma_is_map_and_fenceable(vma))
-   addr = (void __force *)i915_vma_pin_iomap(vma);
-   else
-   addr = i915_gem_ob

[Intel-gfx] [PATCH 15/69] drm/i915/gt: Track all timelines created using the HWSP

2020-12-14 Thread Chris Wilson
We assume that the contents of the HWSP are lost across suspend, and so
upon resume we must restore critical values such as the timeline seqno.
Keep track of every timeline allocated that uses the HWSP as its storage
so that we can then reset all seqno values by walking that list.
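
The registration side amounts to a single link per timeline (a sketch,
not the exact patch hunk; the resume-time walk is sanitize_hwsp() in
the diff below):

/* Link each HWSP-backed timeline to its engine's status page so
 * that sanitize_hwsp() can walk and reset them all on resume.
 */
list_add_tail(&tl->engine_link, &engine->status_page.timelines);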

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_engine_cs.c |  9 -
 drivers/gpu/drm/i915/gt/intel_engine_pm.c |  6 
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |  1 +
 .../drm/i915/gt/intel_execlists_submission.c  | 11 --
 .../gpu/drm/i915/gt/intel_ring_submission.c   | 35 +++
 drivers/gpu/drm/i915/gt/intel_timeline.h  | 13 +--
 .../gpu/drm/i915/gt/intel_timeline_types.h|  2 ++
 7 files changed, 71 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c 
b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 71bd052628f4..6c08e74edcae 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -648,6 +648,8 @@ static int init_status_page(struct intel_engine_cs *engine)
void *vaddr;
int ret;
 
+   INIT_LIST_HEAD(&engine->status_page.timelines);
+
/*
 * Though the HWS register does support 36bit addresses, historically
 * we have had hangs and corruption reported due to wild writes if
@@ -936,6 +938,7 @@ void intel_engine_cleanup_common(struct intel_engine_cs 
*engine)
fput(engine->default_state);
 
if (engine->kernel_context) {
+   list_del(&engine->kernel_context->timeline->engine_link);
intel_context_unpin(engine->kernel_context);
intel_context_put(engine->kernel_context);
}
@@ -1281,8 +1284,12 @@ void intel_engines_reset_default_submission(struct 
intel_gt *gt)
struct intel_engine_cs *engine;
enum intel_engine_id id;
 
-   for_each_engine(engine, gt, id)
+   for_each_engine(engine, gt, id) {
+   if (engine->sanitize)
+   engine->sanitize(engine);
+
engine->set_default_submission(engine);
+   }
 }
 
 bool intel_engine_can_store_dword(struct intel_engine_cs *engine)
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c 
b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index 99574378047f..1e5bad0b9a82 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -60,6 +60,12 @@ static int __engine_unpark(struct intel_wakeref *wf)
 
/* Scrub the context image after our loss of control */
ce->ops->reset(ce);
+
+   CE_TRACE(ce, "reset { seqno:%x, *hwsp:%x, ring:%x }\n",
+ce->timeline->seqno,
+READ_ONCE(*ce->timeline->hwsp_seqno),
+ce->ring->emit);
+   GEM_BUG_ON(ce->timeline->seqno != *ce->timeline->hwsp_seqno);
}
 
if (engine->unpark)
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h 
b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index e71eef157231..c28f4e190fe6 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -68,6 +68,7 @@ typedef u8 intel_engine_mask_t;
 #define ALL_ENGINES ((intel_engine_mask_t)~0ul)
 
 struct intel_hw_status_page {
+   struct list_head timelines;
struct i915_vma *vma;
u32 *addr;
 };
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 9f5efff08785..c5b013cc10b3 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -3508,7 +3508,6 @@ static int execlists_context_alloc(struct intel_context 
*ce)
 
 static void execlists_context_reset(struct intel_context *ce)
 {
-   CE_TRACE(ce, "reset\n");
GEM_BUG_ON(!intel_context_is_pinned(ce));
 
intel_ring_reset(ce->ring, ce->ring->emit);
@@ -3985,6 +3984,14 @@ static void reset_csb_pointers(struct intel_engine_cs 
*engine)
GEM_BUG_ON(READ_ONCE(*execlists->csb_write) != reset_value);
 }
 
+static void sanitize_hwsp(struct intel_engine_cs *engine)
+{
+   struct intel_timeline *tl;
+
+   list_for_each_entry(tl, &engine->status_page.timelines, engine_link)
+   intel_timeline_reset_seqno(tl);
+}
+
 static void execlists_sanitize(struct intel_engine_cs *engine)
 {
GEM_BUG_ON(execlists_active(&engine->execlists));
@@ -4008,7 +4015,7 @@ static void execlists_sanitize(struct intel_engine_cs 
*engine)
 * that may be lost on resume/initialisation, and so we need to
 * reset the value in the HWSP.
 */
-   intel_timeline_reset_seqno(engine->kernel_context->timeline);
+   sanitize_hwsp(engine);
 
/* And scrub the dirty cachelines for the HWSP */
clflush_cache_range(engine->status_page.addr, PAGE_SIZE);
diff --git a/drivers/gpu/drm/i915/gt/in

[Intel-gfx] [PATCH 62/69] drm/i915/gt: Use client timeline address for seqno writes

2020-12-14 Thread Chris Wilson
If we allow for per-client timelines, even with legacy ring submission,
we open the door to a world full of possibilities [scheduling and
semaphores].

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/gen6_engine_cs.c | 89 +---
 drivers/gpu/drm/i915/gt/gen8_engine_cs.c | 23 ++
 drivers/gpu/drm/i915/i915_request.h  | 13 
 3 files changed, 82 insertions(+), 43 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/gen6_engine_cs.c 
b/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
index 2f59dd3bdc18..14cab4c726ce 100644
--- a/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
@@ -141,6 +141,12 @@ int gen6_emit_flush_rcs(struct i915_request *rq, u32 mode)
 
 u32 *gen6_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
 {
+   struct intel_timeline *tl = rcu_dereference_protected(rq->timeline, 1);
+   u32 offset = __i915_request_hwsp_offset(rq);
+   unsigned int flags;
+
+   GEM_BUG_ON(tl->mode == INTEL_TIMELINE_RELATIVE_CONTEXT);
+
/* First we do the gen6_emit_post_sync_nonzero_flush w/a */
*cs++ = GFX_OP_PIPE_CONTROL(4);
*cs++ = PIPE_CONTROL_CS_STALL | PIPE_CONTROL_STALL_AT_SCOREBOARD;
@@ -154,15 +160,22 @@ u32 *gen6_emit_breadcrumb_rcs(struct i915_request *rq, 
u32 *cs)
PIPE_CONTROL_GLOBAL_GTT;
*cs++ = 0;
 
-   /* Finally we can flush and with it emit the breadcrumb */
-   *cs++ = GFX_OP_PIPE_CONTROL(4);
-   *cs++ = (PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH |
+   flags = (PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH |
 PIPE_CONTROL_DEPTH_CACHE_FLUSH |
 PIPE_CONTROL_DC_FLUSH_ENABLE |
 PIPE_CONTROL_QW_WRITE |
 PIPE_CONTROL_CS_STALL);
-   *cs++ = i915_request_active_timeline(rq)->ggtt_offset |
-   PIPE_CONTROL_GLOBAL_GTT;
+   if (intel_timeline_is_relative(tl)) {
+   offset = offset_in_page(offset);
+   flags |= PIPE_CONTROL_STORE_DATA_INDEX;
+   }
+   if (!intel_timeline_in_context(tl))
+   offset |= PIPE_CONTROL_GLOBAL_GTT;
+
+   /* Finally we can flush and with it emit the breadcrumb */
+   *cs++ = GFX_OP_PIPE_CONTROL(4);
+   *cs++ = flags;
+   *cs++ = offset;
*cs++ = rq->fence.seqno;
 
*cs++ = MI_USER_INTERRUPT;
@@ -351,15 +364,28 @@ int gen7_emit_flush_rcs(struct i915_request *rq, u32 mode)
 
 u32 *gen7_emit_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
 {
-   *cs++ = GFX_OP_PIPE_CONTROL(4);
-   *cs++ = (PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH |
+   struct intel_timeline *tl = rcu_dereference_protected(rq->timeline, 1);
+   u32 offset = __i915_request_hwsp_offset(rq);
+   unsigned int flags;
+
+   GEM_BUG_ON(tl->mode == INTEL_TIMELINE_RELATIVE_CONTEXT);
+
+   flags = (PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH |
 PIPE_CONTROL_DEPTH_CACHE_FLUSH |
 PIPE_CONTROL_DC_FLUSH_ENABLE |
 PIPE_CONTROL_FLUSH_ENABLE |
 PIPE_CONTROL_QW_WRITE |
-PIPE_CONTROL_GLOBAL_GTT_IVB |
 PIPE_CONTROL_CS_STALL);
-   *cs++ = i915_request_active_timeline(rq)->ggtt_offset;
+   if (intel_timeline_is_relative(tl)) {
+   offset = offset_in_page(offset);
+   flags |= PIPE_CONTROL_STORE_DATA_INDEX;
+   }
+   if (!intel_timeline_in_context(tl))
+   flags |= PIPE_CONTROL_GLOBAL_GTT_IVB;
+
+   *cs++ = GFX_OP_PIPE_CONTROL(4);
+   *cs++ = flags;
+   *cs++ = offset;
*cs++ = rq->fence.seqno;
 
*cs++ = MI_USER_INTERRUPT;
@@ -373,11 +399,21 @@ u32 *gen7_emit_breadcrumb_rcs(struct i915_request *rq, 
u32 *cs)
 
 u32 *gen6_emit_breadcrumb_xcs(struct i915_request *rq, u32 *cs)
 {
-   GEM_BUG_ON(i915_request_active_timeline(rq)->hwsp_ggtt != 
rq->engine->status_page.vma);
-   
GEM_BUG_ON(offset_in_page(i915_request_active_timeline(rq)->hwsp_offset) != 
I915_GEM_HWS_SEQNO_ADDR);
+   struct intel_timeline *tl = rcu_dereference_protected(rq->timeline, 1);
+   u32 offset = __i915_request_hwsp_offset(rq);
+   unsigned int flags = 0;
+
+   GEM_BUG_ON(tl->mode == INTEL_TIMELINE_RELATIVE_CONTEXT);
 
-   *cs++ = MI_FLUSH_DW | MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX;
-   *cs++ = I915_GEM_HWS_SEQNO_ADDR | MI_FLUSH_DW_USE_GTT;
+   if (intel_timeline_is_relative(tl)) {
+   offset = offset_in_page(offset);
+   flags |= MI_FLUSH_DW_STORE_INDEX;
+   }
+   if (!intel_timeline_in_context(tl))
+   offset |= MI_FLUSH_DW_USE_GTT;
+
+   *cs++ = MI_FLUSH_DW | MI_FLUSH_DW_OP_STOREDW | flags;
+   *cs++ = offset;
*cs++ = rq->fence.seqno;
 
*cs++ = MI_USER_INTERRUPT;
@@ -391,28 +427,31 @@ u32 *gen6_emit_breadcrumb_xcs(struct i915_request *rq, 
u32 *cs)
 #define GEN7_XCS_WA 32
 u32 *gen7_emit_breadcrumb_xcs(struct i915_request *rq, u32 *cs)
 {
+   struct intel_timeline *tl =

[Intel-gfx] [PATCH 17/69] drm/i915/gt: Track timeline GGTT offset separately from subpage offset

2020-12-14 Thread Chris Wilson
Currently we know that the timeline status page is at most a page in
size, and so we can preserve the lower 12 bits of the offset when
relocating the status page in the GGTT. If we want to use a larger
object, such as the context state, we may not necessarily use a position
within the first page and so need more than 12 bits.
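
In other words, the relation becomes (a sketch matching the pin path in
the diff below; the helper name is illustrative):

static u32 timeline_ggtt_offset(const struct intel_timeline *tl)
{
        /* hwsp_offset may now exceed the low 12 bits, so keep it
         * apart from the GGTT base and combine the two at pin time.
         */
        return i915_ggtt_offset(tl->hwsp_ggtt) + tl->hwsp_offset;
}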

Signed-off-by: Chris Wilson 
Reviewed-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/gen6_engine_cs.c   |  4 ++--
 drivers/gpu/drm/i915/gt/gen8_engine_cs.c   |  2 +-
 drivers/gpu/drm/i915/gt/intel_engine_cs.c  |  4 ++--
 drivers/gpu/drm/i915/gt/intel_timeline.c   | 17 +++--
 drivers/gpu/drm/i915/gt/intel_timeline_types.h |  1 +
 drivers/gpu/drm/i915/gt/selftest_engine_cs.c   |  2 +-
 drivers/gpu/drm/i915/gt/selftest_rc6.c |  2 +-
 drivers/gpu/drm/i915/gt/selftest_timeline.c| 16 
 8 files changed, 23 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/gen6_engine_cs.c 
b/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
index ce38d1bcaba3..2f59dd3bdc18 100644
--- a/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/gen6_engine_cs.c
@@ -161,7 +161,7 @@ u32 *gen6_emit_breadcrumb_rcs(struct i915_request *rq, u32 
*cs)
 PIPE_CONTROL_DC_FLUSH_ENABLE |
 PIPE_CONTROL_QW_WRITE |
 PIPE_CONTROL_CS_STALL);
-   *cs++ = i915_request_active_timeline(rq)->hwsp_offset |
+   *cs++ = i915_request_active_timeline(rq)->ggtt_offset |
PIPE_CONTROL_GLOBAL_GTT;
*cs++ = rq->fence.seqno;
 
@@ -359,7 +359,7 @@ u32 *gen7_emit_breadcrumb_rcs(struct i915_request *rq, u32 
*cs)
 PIPE_CONTROL_QW_WRITE |
 PIPE_CONTROL_GLOBAL_GTT_IVB |
 PIPE_CONTROL_CS_STALL);
-   *cs++ = i915_request_active_timeline(rq)->hwsp_offset;
+   *cs++ = i915_request_active_timeline(rq)->ggtt_offset;
*cs++ = rq->fence.seqno;
 
*cs++ = MI_USER_INTERRUPT;
diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c 
b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
index ebf043692eef..ed88dc4de72c 100644
--- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
@@ -346,7 +346,7 @@ static u32 hwsp_offset(const struct i915_request *rq)
if (cl)
return cl->ggtt_offset;
 
-   return rcu_dereference_protected(rq->timeline, 1)->hwsp_offset;
+   return rcu_dereference_protected(rq->timeline, 1)->ggtt_offset;
 }
 
 int gen8_emit_init_breadcrumb(struct i915_request *rq)
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c 
b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 6c08e74edcae..55856c230779 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -1340,7 +1340,7 @@ static int print_ring(char *buf, int sz, struct 
i915_request *rq)
len = scnprintf(buf, sz,
"ring:{start:%08x, hwsp:%08x, seqno:%08x, 
runtime:%llums}, ",
i915_ggtt_offset(rq->ring->vma),
-   tl ? tl->hwsp_offset : 0,
+   tl ? tl->ggtt_offset : 0,
hwsp_seqno(rq),

DIV_ROUND_CLOSEST_ULL(intel_context_get_total_runtime_ns(rq->context),
  1000 * 1000));
@@ -1679,7 +1679,7 @@ void intel_engine_dump(struct intel_engine_cs *engine,
 
if (tl) {
drm_printf(m, "\t\tring->hwsp:   0x%08x\n",
-  tl->hwsp_offset);
+  tl->ggtt_offset);
intel_timeline_put(tl);
}
 
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c 
b/drivers/gpu/drm/i915/gt/intel_timeline.c
index ddc8e1b4f3b8..cb20fcbb326b 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
@@ -338,13 +338,11 @@ int intel_timeline_pin(struct intel_timeline *tl, struct 
i915_gem_ww_ctx *ww)
if (err)
return err;
 
-   tl->hwsp_offset =
-   i915_ggtt_offset(tl->hwsp_ggtt) +
-   offset_in_page(tl->hwsp_offset);
+   tl->ggtt_offset = i915_ggtt_offset(tl->hwsp_ggtt) + tl->hwsp_offset;
GT_TRACE(tl->gt, "timeline:%llx using HWSP offset:%x\n",
-tl->fence_context, tl->hwsp_offset);
+tl->fence_context, tl->ggtt_offset);
 
-   cacheline_acquire(tl->hwsp_cacheline, tl->hwsp_offset);
+   cacheline_acquire(tl->hwsp_cacheline, tl->ggtt_offset);
if (atomic_fetch_inc(&tl->pin_count)) {
cacheline_release(tl->hwsp_cacheline);
__i915_vma_unpin(tl->hwsp_ggtt);
@@ -512,14 +510,13 @@ __intel_timeline_get_seqno(struct intel_timeline *tl,
 
vaddr = page_mask_bits(cl->vaddr);
tl->hwsp_offset = cacheline * CACHELINE_BYTES;
-   tl->hwsp_seqno =
-  

[Intel-gfx] [PATCH 14/69] drm/i915/gt: Track the overall awake/busy time

2020-12-14 Thread Chris Wilson
Since we wake the GT up before executing a request, and go to sleep as
soon as it is retired, the GT wake time not only represents how long the
device is powered up, but also provides a summary, albeit an overestimate,
of the device runtime (i.e. the rc0 time to compare against rc6 time).

v2: s/busy/awake/
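
As a usage illustration (a hypothetical in-kernel consumer, not part of
the patch), the awake time can be sampled twice to estimate rc0
residency over an interval:

/* Fraction (in permille) of wall time the GT spent awake (rc0)
 * over a sampling window of @ms milliseconds.
 */
static u64 gt_rc0_permille(struct intel_gt *gt, unsigned int ms)
{
        ktime_t awake = intel_gt_get_awake_time(gt);
        ktime_t wall = ktime_get();

        msleep(ms);

        awake = ktime_sub(intel_gt_get_awake_time(gt), awake);
        wall = ktime_sub(ktime_get(), wall);
        return div64_u64(1000 * ktime_to_ns(awake), ktime_to_ns(wall));
}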

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/debugfs_gt_pm.c  |  5 ++-
 drivers/gpu/drm/i915/gt/intel_gt_pm.c| 49 
 drivers/gpu/drm/i915/gt/intel_gt_pm.h|  2 +
 drivers/gpu/drm/i915/gt/intel_gt_types.h | 24 
 drivers/gpu/drm/i915/i915_debugfs.c  |  5 ++-
 drivers/gpu/drm/i915/i915_pmu.c  |  6 +++
 include/uapi/drm/i915_drm.h  |  1 +
 7 files changed, 89 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/debugfs_gt_pm.c 
b/drivers/gpu/drm/i915/gt/debugfs_gt_pm.c
index 174a24553322..8975717ace06 100644
--- a/drivers/gpu/drm/i915/gt/debugfs_gt_pm.c
+++ b/drivers/gpu/drm/i915/gt/debugfs_gt_pm.c
@@ -11,6 +11,7 @@
 #include "i915_drv.h"
 #include "intel_gt.h"
 #include "intel_gt_clock_utils.h"
+#include "intel_gt_pm.h"
 #include "intel_llc.h"
 #include "intel_rc6.h"
 #include "intel_rps.h"
@@ -558,7 +559,9 @@ static int rps_boost_show(struct seq_file *m, void *data)
 
seq_printf(m, "RPS enabled? %s\n", yesno(intel_rps_is_enabled(rps)));
seq_printf(m, "RPS active? %s\n", yesno(intel_rps_is_active(rps)));
-   seq_printf(m, "GPU busy? %s\n", yesno(gt->awake));
+   seq_printf(m, "GPU busy? %s, %llums\n",
+  yesno(gt->awake),
+  ktime_to_ms(intel_gt_get_awake_time(gt)));
seq_printf(m, "Boosts outstanding? %d\n",
   atomic_read(&rps->num_waiters));
seq_printf(m, "Interactive? %d\n", READ_ONCE(rps->power.interactive));
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c 
b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
index 274aa0dd7050..c94e8ac884eb 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
@@ -39,6 +39,28 @@ static void user_forcewake(struct intel_gt *gt, bool suspend)
intel_gt_pm_put(gt);
 }
 
+static void runtime_begin(struct intel_gt *gt)
+{
+   local_irq_disable();
+   write_seqcount_begin(>->stats.lock);
+   gt->stats.start = ktime_get();
+   gt->stats.active = true;
+   write_seqcount_end(>->stats.lock);
+   local_irq_enable();
+}
+
+static void runtime_end(struct intel_gt *gt)
+{
+   local_irq_disable();
+   write_seqcount_begin(>->stats.lock);
+   gt->stats.active = false;
+   gt->stats.total =
+   ktime_add(gt->stats.total,
+ ktime_sub(ktime_get(), gt->stats.start));
+   write_seqcount_end(>->stats.lock);
+   local_irq_enable();
+}
+
 static int __gt_unpark(struct intel_wakeref *wf)
 {
struct intel_gt *gt = container_of(wf, typeof(*gt), wakeref);
@@ -67,6 +89,7 @@ static int __gt_unpark(struct intel_wakeref *wf)
i915_pmu_gt_unparked(i915);
 
intel_gt_unpark_requests(gt);
+   runtime_begin(gt);
 
return 0;
 }
@@ -79,6 +102,7 @@ static int __gt_park(struct intel_wakeref *wf)
 
GT_TRACE(gt, "\n");
 
+   runtime_end(gt);
intel_gt_park_requests(gt);
 
i915_vma_parked(gt);
@@ -106,6 +130,7 @@ static const struct intel_wakeref_ops wf_ops = {
 void intel_gt_pm_init_early(struct intel_gt *gt)
 {
intel_wakeref_init(>->wakeref, gt->uncore->rpm, &wf_ops);
+   seqcount_mutex_init(>->stats.lock, >->wakeref.mutex);
 }
 
 void intel_gt_pm_init(struct intel_gt *gt)
@@ -339,6 +364,30 @@ int intel_gt_runtime_resume(struct intel_gt *gt)
return intel_uc_runtime_resume(>->uc);
 }
 
+static ktime_t __intel_gt_get_awake_time(const struct intel_gt *gt)
+{
+   ktime_t total = gt->stats.total;
+
+   if (gt->stats.active)
+   total = ktime_add(total,
+ ktime_sub(ktime_get(), gt->stats.start));
+
+   return total;
+}
+
+ktime_t intel_gt_get_awake_time(const struct intel_gt *gt)
+{
+   unsigned int seq;
+   ktime_t total;
+
+   do {
+   seq = read_seqcount_begin(>->stats.lock);
+   total = __intel_gt_get_awake_time(gt);
+   } while (read_seqcount_retry(>->stats.lock, seq));
+
+   return total;
+}
+
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftest_gt_pm.c"
 #endif
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.h 
b/drivers/gpu/drm/i915/gt/intel_gt_pm.h
index 60f0e2fbe55c..63846a856e7e 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.h
@@ -58,6 +58,8 @@ int intel_gt_resume(struct intel_gt *gt);
 void intel_gt_runtime_suspend(struct intel_gt *gt);
 int intel_gt_runtime_resume(struct intel_gt *gt);
 
+ktime_t intel_gt_get_awake_time(const struct intel_gt *gt);
+
 static inline bool is_mock_gt(const struct intel_gt *gt)
 {
return I915_SELFTEST_ONLY(gt->awake == -ENODEV);
diff 

[Intel-gfx] [PATCH 67/69] drm/i915: Move saturated workload detection back to the context

2020-12-14 Thread Chris Wilson
When we introduced the saturated workload detection to tell us to back
off from semaphore usage [semaphores have a noticeable impact on
contended bus cycles with the CPU for some heavy workloads], we first
introduced it as a per-context tracker. This allows individual contexts
to try and optimise their own usage, but we found that with the local
tracking and the no-semaphore boosting, the first context to disable
semaphores got a massive priority boost and so would starve the rest and
all new contexts (as they started with semaphores enabled and lower
priority). Hence we moved the saturated workload detection to the
engine, and as a consequence had to disable semaphores on virtual engines.

Now that we do not have semaphore priority boosting, and try to fairly
schedule irrespective of semaphore usage, we can move the tracking back
to the context and virtual engines can now utilise the faster inter-engine
synchronisation. If we see that any context fails to use the semaphore,
because the system is oversubscribed and was busy doing something else
instead of spinning on the semaphore, we disable further usage of
semaphores with that context until it idles again. This should restrict
the semaphores to lightly utilised systems where the latency between
requests is more noticeable, and curtail the bus-contention from checking
for signaled semaphores.
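
The gating test then reduces to a per-context mask check (a sketch; the
helper name is illustrative, ce->saturated is the field added below):

static bool can_use_semaphore(const struct intel_context *ce,
                              const struct intel_engine_cs *engine)
{
        /* Skip semaphores on engines where this context already saw
         * them arrive too late; the mask resets when the context
         * idles and is reactivated.
         */
        return !(READ_ONCE(ce->saturated) & engine->mask);
}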

References: 44d89409a12e ("drm/i915: Make the semaphore saturation mask global")
Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_context.c   |  3 +++
 drivers/gpu/drm/i915/gt/intel_context_types.h |  2 ++
 drivers/gpu/drm/i915/gt/intel_engine_pm.c |  2 --
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |  2 --
 .../gpu/drm/i915/gt/intel_execlists_submission.c  | 15 ---
 drivers/gpu/drm/i915/i915_request.c   |  4 ++--
 6 files changed, 7 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c 
b/drivers/gpu/drm/i915/gt/intel_context.c
index f3a8c139624c..d01678c26a91 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -345,6 +345,9 @@ static int __intel_context_active(struct i915_active 
*active)
 {
struct intel_context *ce = container_of(active, typeof(*ce), active);
 
+   CE_TRACE(ce, "active\n");
+   ce->saturated = 0;
+
intel_context_get(ce);
 
/* everything should already be activated by intel_context_pre_pin() */
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h 
b/drivers/gpu/drm/i915/gt/intel_context_types.h
index f7a0fb6f3a2e..1b972b1e0047 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -102,6 +102,8 @@ struct intel_context {
} lrc;
u32 tag; /* cookie passed to HW to track this context on submission */
 
+   intel_engine_mask_t saturated; /* submitting semaphores too late? */
+
/* Time on GPU as tracked by the hw. */
struct {
struct ewma_runtime avg;
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c 
b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index aea8b6eab5ee..d4fe2dea537b 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -251,8 +251,6 @@ static int __engine_park(struct intel_wakeref *wf)
struct intel_engine_cs *engine =
container_of(wf, typeof(*engine), wakeref);
 
-   engine->saturated = 0;
-
/*
 * If one and only one request is completed between pm events,
 * we know that we are inside the kernel context and it is
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h 
b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index 0698c4ae572c..a93bef46e455 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -304,8 +304,6 @@ struct intel_engine_cs {
 
struct intel_context *kernel_context; /* pinned */
 
-   intel_engine_mask_t saturated; /* submitting semaphores too late? */
-
struct {
struct delayed_work work;
struct i915_request *systole;
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 989f1a2a2e8b..ed3b574f4547 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -4921,21 +4921,6 @@ intel_execlists_create_virtual(struct intel_engine_cs 
**siblings,
ve->base.instance = I915_ENGINE_CLASS_INVALID_VIRTUAL;
ve->base.uabi_instance = I915_ENGINE_CLASS_INVALID_VIRTUAL;
 
-   /*
-* The decision on whether to submit a request using semaphores
-* depends on the saturated state of the engine. We only compute
-* this during HW submission of the request, and we need for this
-* state to be globally applied to all requests being submitted
-* to th

[Intel-gfx] [PATCH 45/69] drm/i915: Improve DFS for priority inheritance

2020-12-14 Thread Chris Wilson
The core of the scheduling algorithm is that we compute the topological
order of the fence DAG. Knowing that we have a DAG, we should be able to
use a DFS to compute the topological sort in linear time. However,
during the conversion of the recursive algorithm into an iterative one,
the memoization of how far we had progressed down a branch was
forgotten.
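
The fix records that memoization as an intrusive stack in the nodes
themselves; a toy version of the idea (simplified types, mirroring the
stack_push()/stack_pop() helpers in the diff below):

struct snode {
        struct snode *up;         /* stack link to the parent */
        struct list_head *resume; /* where to continue in the parent */
};

static struct snode *push(struct snode *child, struct snode *parent,
                          struct list_head *pos)
{
        parent->resume = pos;     /* memoize the parent's progress */
        child->up = parent;
        return child;             /* descend into the child */
}

static struct snode *pop(struct snode *n, struct list_head **pos)
{
        n = n->up;
        if (n)
                *pos = n->resume; /* backtrack, resume where we left off */
        return n;
}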

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/i915_scheduler.c | 57 ---
 1 file changed, 33 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_scheduler.c 
b/drivers/gpu/drm/i915/i915_scheduler.c
index 265c915a9b82..f774e19b9b1a 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -270,6 +270,25 @@ void i915_priolist_free_many(struct list_head *list)
}
 }
 
+static struct i915_request *
+stack_push(struct i915_request *rq,
+  struct i915_request *stack,
+  struct list_head *pos)
+{
+   stack->sched.dfs.prev = pos;
+   rq->sched.dfs.next = (struct list_head *)stack;
+   return rq;
+}
+
+static struct i915_request *stack_pop(struct i915_request *rq,
+ struct list_head **pos)
+{
+   rq = (struct i915_request *)rq->sched.dfs.next;
+   if (rq)
+   *pos = rq->sched.dfs.prev;
+   return rq;
+}
+
 static inline bool need_preempt(int prio, int active)
 {
/*
@@ -334,11 +353,10 @@ static void ipi_priority(struct i915_request *rq, int 
prio)
 static void __i915_request_set_priority(struct i915_request *rq, int prio)
 {
struct intel_engine_cs *engine = rq->engine;
-   struct i915_request *rn;
+   struct list_head *pos = &rq->sched.signalers_list;
struct list_head *plist;
-   LIST_HEAD(dfs);
 
-   list_add(&rq->sched.dfs, &dfs);
+   plist = i915_sched_lookup_priolist(engine, prio);
 
/*
 * Recursively bump all dependent priorities to match the new request.
@@ -358,40 +376,31 @@ static void __i915_request_set_priority(struct 
i915_request *rq, int prio)
 * end result is a topological list of requests in reverse order, the
 * last element in the list is the request we must execute first.
 */
-   list_for_each_entry(rq, &dfs, sched.dfs) {
-   struct i915_dependency *p;
-
-   /* Also release any children on this engine that are ready */
-   GEM_BUG_ON(rq->engine != engine);
-
-   for_each_signaler(p, rq) {
+   rq->sched.dfs.next = NULL;
+   do {
+   list_for_each_continue(pos, &rq->sched.signalers_list) {
+   struct i915_dependency *p =
+   list_entry(pos, typeof(*p), signal_link);
struct i915_request *s =
container_of(p->signaler, typeof(*s), sched);
 
-   GEM_BUG_ON(s == rq);
-
if (rq_prio(s) >= prio)
continue;
 
if (i915_request_completed(s))
continue;
 
-   if (s->engine != rq->engine) {
+   if (s->engine != engine) {
ipi_priority(s, prio);
continue;
}
 
-   list_move_tail(&s->sched.dfs, &dfs);
+   /* Remember our position along this branch */
+   rq = stack_push(s, rq, pos);
+   pos = &rq->sched.signalers_list;
}
-   }
-
-   plist = i915_sched_lookup_priolist(engine, prio);
-
-   /* Fifo and depth-first replacement ensure our deps execute first */
-   list_for_each_entry_safe_reverse(rq, rn, &dfs, sched.dfs) {
-   GEM_BUG_ON(rq->engine != engine);
 
-   INIT_LIST_HEAD(&rq->sched.dfs);
+   RQ_TRACE(rq, "set-priority:%d\n", prio);
WRITE_ONCE(rq->sched.attr.priority, prio);
 
/*
@@ -405,12 +414,13 @@ static void __i915_request_set_priority(struct 
i915_request *rq, int prio)
if (!i915_request_is_ready(rq))
continue;
 
+   GEM_BUG_ON(rq->engine != engine);
if (i915_request_in_priority_queue(rq))
list_move_tail(&rq->sched.link, plist);
 
/* Defer (tasklet) submission until after all updates. */
kick_submission(engine, rq, prio);
-   }
+   } while ((rq = stack_pop(rq, &pos)));
 }
 
 void i915_request_set_priority(struct i915_request *rq, int prio)
@@ -478,7 +488,6 @@ void i915_sched_node_init(struct i915_sched_node *node)
INIT_LIST_HEAD(&node->signalers_list);
INIT_LIST_HEAD(&node->waiters_list);
INIT_LIST_HEAD(&node->link);
-   INIT_LIST_HEAD(&node->dfs);
 
node->ipi_link = NULL;
 
-- 
2.20.1


[Intel-gfx] [PATCH 38/69] drm/i915: Prune empty priolists

2020-12-14 Thread Chris Wilson
A side-effect of our priority inheritance scheme is that we promote
requests from one priority to the next, moving them from one list to the
next. This can often leave the old priority list empty, but still
resident in the rbtree, which we then have to traverse during HW
submission. rb_next() is a relatively expensive operation, so if we can
push that to the update where we can do piecemeal pruning and reuse the
nodes, this reduces the latency for HW submission.
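
A toy model of the deferred-free chain used below (not the kernel code:
the real chain threads through the empty requests list itself):

struct pnode {
        struct pnode *next_free;
};

static struct pnode *defer_free(struct pnode *p, struct pnode *chain)
{
        p->next_free = chain;   /* push onto the chain */
        return p;
}

static struct pnode *reuse(struct pnode **chain)
{
        struct pnode *p = *chain; /* pop; NULL if the chain is empty */

        if (p)
                *chain = p->next_free;
        return p;
}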

Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
---
 drivers/gpu/drm/i915/i915_scheduler.c | 41 +--
 1 file changed, 32 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_scheduler.c 
b/drivers/gpu/drm/i915/i915_scheduler.c
index dad5318ca825..c65fa0b012de 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -64,9 +64,10 @@ struct list_head *
 i915_sched_lookup_priolist(struct intel_engine_cs *engine, int prio)
 {
struct intel_engine_execlists * const execlists = &engine->execlists;
-   struct i915_priolist *p;
+   struct list_head *free = NULL;
struct rb_node **parent, *rb;
-   bool first = true;
+   struct i915_priolist *p;
+   bool first;
 
lockdep_assert_held(&engine->active.lock);
assert_priolists(execlists);
@@ -77,22 +78,40 @@ i915_sched_lookup_priolist(struct intel_engine_cs *engine, 
int prio)
 find_priolist:
/* most positive priority is scheduled first, equal priorities fifo */
rb = NULL;
+   first = true;
parent = &execlists->queue.rb_root.rb_node;
while (*parent) {
rb = *parent;
p = to_priolist(rb);
-   if (prio > p->priority) {
-   parent = &rb->rb_left;
-   } else if (prio < p->priority) {
-   parent = &rb->rb_right;
-   first = false;
-   } else {
-   return &p->requests;
+
+   if (prio == p->priority)
+   goto out;
+
+   /*
+* Prune an empty priolist, we can reuse it if we need to
+* allocate. After removing this node and rotating the subtrees
+* beneath its parent, we need to restart our descent from the
+* parent.
+*/
+   if (list_empty(&p->requests)) {
+   rb = rb_parent(&p->node);
+   parent = rb ? &rb : &execlists->queue.rb_root.rb_node;
+   rb_erase_cached(&p->node, &execlists->queue);
+   free = i915_priolist_free_defer(p, free);
+   continue;
}
+
+   if (prio > p->priority)
+   parent = &rb->rb_left;
+   else
+   parent = &rb->rb_right, first = false;
}
 
if (prio == I915_PRIORITY_NORMAL) {
p = &execlists->default_priolist;
+   } else if (free) {
+   p = container_of(free, typeof(*p), requests);
+   free = p->requests.next;
} else {
p = kmem_cache_alloc(global.slab_priorities, GFP_ATOMIC);
/* Convert an allocation failure to a priority bump */
@@ -117,7 +136,11 @@ i915_sched_lookup_priolist(struct intel_engine_cs *engine, 
int prio)
 
rb_link_node(&p->node, rb, parent);
rb_insert_color_cached(&p->node, &execlists->queue, first);
+   GEM_BUG_ON(rb_first_cached(&execlists->queue) !=
+  rb_first(&execlists->queue.rb_root));
 
+out:
+   i915_priolist_free_many(free);
return &p->requests;
 }
 
-- 
2.20.1


[Intel-gfx] [PATCH 30/69] drm/i915: Reduce test_and_set_bit to set_bit in i915_request_submit()

2020-12-14 Thread Chris Wilson
Avoid the full-blown memory barrier of test_and_set_bit() by noting the
completed request and removing it from the lists.
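
For context, a sketch of the ordering argument (illustrative helper,
not the kernel code): under engine->active.lock the flag transitions
are single-writer, so the unordered bit ops suffice:

static void mark_active(struct intel_engine_cs *engine,
                        struct i915_request *rq)
{
        lockdep_assert_held(&engine->active.lock);

        /* Single writer under the lock: plain atomics replace the
         * full-barrier test_and_set_bit() without losing ordering.
         */
        clear_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
        set_bit(I915_FENCE_FLAG_ACTIVE, &rq->fence.flags);
}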

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/i915_request.c | 16 +---
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index 4d886b3c9cd7..2a2ec95fed5f 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -538,8 +538,10 @@ bool __i915_request_submit(struct i915_request *request)
 * dropped upon retiring. (Otherwise if resubmit a *retired*
 * request, this would be a horrible use-after-free.)
 */
-   if (i915_request_completed(request))
-   goto xfer;
+   if (i915_request_completed(request)) {
+   list_del_init(&request->sched.link);
+   goto active;
+   }
 
if (unlikely(intel_context_is_closed(request->context) &&
 !intel_engine_has_heartbeat(engine)))
@@ -578,11 +580,11 @@ bool __i915_request_submit(struct i915_request *request)
engine->serial++;
result = true;
 
-xfer:
-   if (!test_and_set_bit(I915_FENCE_FLAG_ACTIVE, &request->fence.flags)) {
-   list_move_tail(&request->sched.link, &engine->active.requests);
-   clear_bit(I915_FENCE_FLAG_PQUEUE, &request->fence.flags);
-   }
+   GEM_BUG_ON(test_bit(I915_FENCE_FLAG_ACTIVE, &request->fence.flags));
+   list_move_tail(&request->sched.link, &engine->active.requests);
+active:
+   clear_bit(I915_FENCE_FLAG_PQUEUE, &request->fence.flags);
+   set_bit(I915_FENCE_FLAG_ACTIVE, &request->fence.flags);
 
/*
 * XXX Rollback bonded-execution on __i915_request_unsubmit()?
-- 
2.20.1


[Intel-gfx] [PATCH 69/69] drm/i915/gt: Support virtual engine queues

2020-12-14 Thread Chris Wilson
Allow multiple requests to be queued onto a virtual engine, whereas
before we only allowed a single request to be queued at a time. The
advantage of keeping just one request in the queue was to ensure that we
always decided late which engine to use. However, with the introduction
of the virtual deadline we throttle submission and still only drip one
request into the sibling at a time (unless it is truly empty, but then a
second request will have an earlier deadline than the queued virtual
engine and force itself in front). This also takes advantage that a
virtual engine will remain bound while it is active, i.e. we can not
switch to a second engine until the context is completed -- such that we
cannot be as lazy as lazy can be.

By allowing a full queue, we avoid having to synchronize via the
breadcrumb interrupt every time, letting the virtual engine reach the
full throughput of the siblings.

Signed-off-by: Chris Wilson 
---
 .../drm/i915/gt/intel_execlists_submission.c  | 411 +-
 drivers/gpu/drm/i915/i915_request.c   |   3 +-
 drivers/gpu/drm/i915/i915_scheduler.c |  57 ++-
 drivers/gpu/drm/i915/i915_scheduler.h |   2 +
 4 files changed, 272 insertions(+), 201 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 442621efa2ff..22ac750a32ac 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -162,17 +162,6 @@ struct virtual_engine {
struct intel_context context;
struct rcu_work rcu;
 
-   /*
-* We allow only a single request through the virtual engine at a time
-* (each request in the timeline waits for the completion fence of
-* the previous before being submitted). By restricting ourselves to
-* only submitting a single request, each request is placed on to a
-* physical to maximise load spreading (by virtue of the late greedy
-* scheduling -- each real engine takes the next available request
-* upon idling).
-*/
-   struct i915_request *request;
-
/*
 * We keep a rbtree of available virtual engines inside each physical
 * engine, sorted by priority. Here we preallocate the nodes we need
@@ -417,18 +406,49 @@ first_queue_request(struct intel_engine_cs *engine)
} while (1);
 }
 
-static struct i915_request *
-first_virtual_request(const struct intel_engine_cs *engine)
+static struct virtual_engine *
+first_virtual_engine(struct intel_engine_cs *engine)
+{
+   return rb_entry_safe(rb_first_cached(&engine->execlists.virtual),
+struct virtual_engine,
+nodes[engine->id].rb);
+}
+
+static struct i915_request *__first_virtual_request(struct virtual_engine *ve)
 {
struct rb_node *rb;
 
-   rb = rb_first_cached(&engine->execlists.virtual);
-   if (!rb)
-   return NULL;
+   while ((rb = rb_first_cached(&ve->base.active.queue))) {
+   struct i915_priolist *p = to_priolist(rb);
 
-   return READ_ONCE(rb_entry(rb,
- struct virtual_engine,
- nodes[engine->id].rb)->request);
+   if (list_empty(&p->requests)) {
+   rb_erase_cached(&p->node, &ve->base.active.queue);
+   i915_priolist_free(p);
+   continue;
+   }
+
+   return list_first_entry(&p->requests,
+   struct i915_request,
+   sched.link);
+   }
+
+   return NULL;
+}
+
+static const struct i915_request *
+first_virtual_request(struct intel_engine_cs *engine)
+{
+   struct i915_request *rq = NULL;
+   struct virtual_engine *ve;
+
+   ve = first_virtual_engine(engine);
+   if (ve) {
+   spin_lock(&ve->base.active.lock);
+   rq = __first_virtual_request(ve);
+   spin_unlock(&ve->base.active.lock);
+   }
+
+   return rq;
 }
 
 static const struct i915_request *
@@ -525,7 +545,15 @@ assert_priority_queue(const struct i915_request *prev,
if (i915_request_is_active(prev))
return true;
 
-   return rq_deadline(prev) <= rq_deadline(next);
+   if (rq_deadline(prev) <= rq_deadline(next))
+   return true;
+
+   ENGINE_TRACE(prev->engine,
+"next %llx:%lld dl %lld is before prev %llx:%lld dl 
%lld\n",
+next->fence.context, next->fence.seqno, rq_deadline(next),
+prev->fence.context, prev->fence.seqno, rq_deadline(prev));
+
+   return false;
 }
 
 /*
@@ -1334,7 +1362,7 @@ static inline void execlists_schedule_in(struct 
i915_request *rq, int idx)
trace_i915_request_in(rq, idx);
 
old = ce->inflight;
-   if (!old)
+   if (!__int

[Intel-gfx] [PATCH 42/69] drm/i915: Restructure priority inheritance

2020-12-14 Thread Chris Wilson
In anticipation of wanting to be able to call pi from underneath an
engine's active.lock, rework the priority inheritance to primarily work
along an engine's priority queue, delegating any other engine that the
chain may traverse to a worker. This reduces the global spinlock from
governing the multi-engine priority inheritance depth-first search, to a
smaller lock on each engine around a single list on that engine.
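
The cross-engine handoff itself is lock-free; a simplified sketch of
the producer side (the worker drains with a matching xchg(), as in
ipi_schedule() below):

static void ipi_publish(struct i915_sched_ipi *ipi,
                        struct i915_request *rq)
{
        /* Publish onto the target's IPI list with an atomic exchange;
         * only the first producer after a drain kicks the worker.
         */
        rq->sched.ipi_link = xchg(&ipi->list, rq);
        if (!rq->sched.ipi_link)
                schedule_work(&ipi->work);
}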

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_engine_cs.c|   2 +
 drivers/gpu/drm/i915/gt/intel_engine_types.h |   3 +
 drivers/gpu/drm/i915/i915_scheduler.c| 338 ---
 drivers/gpu/drm/i915/i915_scheduler.h|   2 +
 drivers/gpu/drm/i915/i915_scheduler_types.h  |  19 +-
 5 files changed, 229 insertions(+), 135 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c 
b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index dd9d7a260e7a..397516df7484 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -594,6 +594,8 @@ void intel_engine_init_execlists(struct intel_engine_cs 
*engine)
 
execlists->queue_priority_hint = INT_MIN;
execlists->queue = RB_ROOT_CACHED;
+
+   i915_sched_init_ipi(&execlists->ipi);
 }
 
 static void cleanup_status_page(struct intel_engine_cs *engine)
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h 
b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index a1f156404f95..cdc49f8e04ee 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -21,6 +21,7 @@
 #include "i915_gem.h"
 #include "i915_pmu.h"
 #include "i915_priolist_types.h"
+#include "i915_scheduler_types.h"
 #include "i915_selftest.h"
 #include "intel_breadcrumbs_types.h"
 #include "intel_sseu.h"
@@ -268,6 +269,8 @@ struct intel_engine_execlists {
struct rb_root_cached queue;
struct rb_root_cached virtual;
 
+   struct i915_sched_ipi ipi;
+
/**
 * @csb_write: control register for Context Switch buffer
 *
diff --git a/drivers/gpu/drm/i915/i915_scheduler.c 
b/drivers/gpu/drm/i915/i915_scheduler.c
index 140c0578eef1..4d0c30da971e 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -17,8 +17,6 @@ static struct i915_global_scheduler {
struct kmem_cache *slab_priorities;
 } global;
 
-static DEFINE_SPINLOCK(schedule_lock);
-
 static struct i915_sched_node *node_get(struct i915_sched_node *node)
 {
i915_request_get(container_of(node, struct i915_request, sched));
@@ -30,15 +28,114 @@ static void node_put(struct i915_sched_node *node)
i915_request_put(container_of(node, struct i915_request, sched));
 }
 
-static const struct i915_request *
-node_to_request(const struct i915_sched_node *node)
+static inline int rq_prio(const struct i915_request *rq)
 {
-   return container_of(node, const struct i915_request, sched);
+   return READ_ONCE(rq->sched.attr.priority);
+}
+
+static int ipi_get_prio(struct i915_request *rq)
+{
+   if (READ_ONCE(rq->sched.ipi_priority) == I915_PRIORITY_INVALID)
+   return I915_PRIORITY_INVALID;
+
+   return xchg(&rq->sched.ipi_priority, I915_PRIORITY_INVALID);
+}
+
+static void ipi_schedule(struct work_struct *wrk)
+{
+   struct i915_sched_ipi *ipi = container_of(wrk, typeof(*ipi), work);
+   struct i915_request *rq = xchg(&ipi->list, NULL);
+
+   do {
+   struct i915_request *rn = xchg(&rq->sched.ipi_link, NULL);
+   int prio;
+
+   prio = ipi_get_prio(rq);
+
+   /*
+* For cross-engine scheduling to work we rely on one of two
+* things:
+*
+* a) The requests are using dma-fence fences and so will not
+* be scheduled until the previous engine is completed, and
+* so we cannot cross back onto the original engine and end up
+* queuing an earlier request after the first (due to the
+* interrupted DFS).
+*
+* b) The requests are using semaphores and so may be already
+* be in flight, in which case if we cross back onto the same
+* engine, we will already have put the interrupted DFS into
+* the priolist, and the continuation will now be queued
+* afterwards [out-of-order]. However, since we are using
+* semaphores in this case, we also perform yield on semaphore
+* waits and so will reorder the requests back into the correct
+* sequence. This occurrence (of promoting a request chain
+* that crosses the engines using semaphores back unto itself)
+* should be unlikely enough that it probably does not matter...
+*/
+   local_bh_disable();
+   i915_request_set_priority(rq, prio);
+ 

[Intel-gfx] [PATCH 68/69] drm/i915/gt: Skip over completed active execlists, again

2020-12-14 Thread Chris Wilson
From: Chris Wilson 

Now that we are careful to always force-restore contexts upon rewinding
(where necessary), we can restore our optimisation to skip over
completed active execlists when dequeuing.

References: 35f3fd8182ba ("drm/i915/execlists: Workaround switching back to a 
completed context")
References: 8ab3a3812aa9 ("drm/i915/gt: Incrementally check for rewinding")
Signed-off-by: Chris Wilson 
Cc: Mika Kuoppala 
---
 .../drm/i915/gt/intel_execlists_submission.c  | 31 +--
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index ed3b574f4547..442621efa2ff 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -1985,12 +1985,20 @@ static void set_preempt_timeout(struct intel_engine_cs 
*engine,
 active_preempt_timeout(engine, rq));
 }
 
+static bool completed(const struct i915_request *rq)
+{
+   if (i915_request_has_sentinel(rq))
+   return false;
+
+   return i915_request_completed(rq);
+}
+
 static void execlists_dequeue(struct intel_engine_cs *engine)
 {
struct intel_engine_execlists * const execlists = &engine->execlists;
struct i915_request **port = execlists->pending;
struct i915_request ** const last_port = port + execlists->port_mask;
-   struct i915_request *last = *execlists->active;
+   struct i915_request *last, * const *active;
struct list_head *free = NULL;
struct virtual_engine *ve;
struct rb_node *rb;
@@ -2028,21 +2036,13 @@ static void execlists_dequeue(struct intel_engine_cs 
*engine)
 * i.e. we will retrigger preemption following the ack in case
 * of trouble.
 *
-* In theory we can skip over completed contexts that have not
-* yet been processed by events (as those events are in flight):
-*
-* while ((last = *active) && i915_request_completed(last))
-*  active++;
-*
-* However, the GPU cannot handle this as it will ultimately
-* find itself trying to jump back into a context it has just
-* completed and barf.
 */
+   active = execlists->active;
+   while ((last = *active) && completed(last))
+   active++;
 
if (last) {
-   if (i915_request_completed(last)) {
-   goto check_secondary;
-   } else if (need_preempt(engine, last)) {
+   if (need_preempt(engine, last)) {
ENGINE_TRACE(engine,
 "preempting last=%llx:%llu, dl=%llu, 
prio=%d\n",
 last->fence.context,
@@ -2104,7 +2104,6 @@ static void execlists_dequeue(struct intel_engine_cs 
*engine)
 * we hopefully coalesce several updates into a single
 * submission.
 */
-check_secondary:
if (!list_is_last(&last->sched.link,
  &engine->active.requests)) {
/*
@@ -2293,7 +2292,7 @@ static void execlists_dequeue(struct intel_engine_cs 
*engine)
 * of ordered contexts.
 */
if (submit &&
-   memcmp(execlists->active,
+   memcmp(active,
   execlists->pending,
   (port - execlists->pending) * sizeof(*port))) {
*port = NULL;
@@ -2301,7 +2300,7 @@ static void execlists_dequeue(struct intel_engine_cs 
*engine)
execlists_schedule_in(*port, port - execlists->pending);
 
WRITE_ONCE(execlists->yield, -1);
-   set_preempt_timeout(engine, *execlists->active);
+   set_preempt_timeout(engine, *active);
execlists_submit_ports(engine);
} else {
ring_set_paused(engine, 0);
-- 
2.20.1


[Intel-gfx] [PATCH 32/69] drm/i915/gt: Extract busy-stats for ring-scheduler

2020-12-14 Thread Chris Wilson
Lift the busy-stats context-in/out implementation out of intel_lrc, so
that we can reuse it for other scheduler implementations.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_engine_stats.h  | 49 +++
 .../drm/i915/gt/intel_execlists_submission.c  | 34 +
 2 files changed, 50 insertions(+), 33 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/gt/intel_engine_stats.h

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_stats.h 
b/drivers/gpu/drm/i915/gt/intel_engine_stats.h
new file mode 100644
index ..58491eae3482
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/intel_engine_stats.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2020 Intel Corporation
+ */
+
+#ifndef __INTEL_ENGINE_STATS_H__
+#define __INTEL_ENGINE_STATS_H__
+
+#include 
+#include 
+#include 
+
+#include "i915_gem.h" /* GEM_BUG_ON */
+#include "intel_engine.h"
+
+static inline void intel_engine_context_in(struct intel_engine_cs *engine)
+{
+   unsigned long flags;
+
+   if (atomic_add_unless(&engine->stats.active, 1, 0))
+   return;
+
+   write_seqlock_irqsave(&engine->stats.lock, flags);
+   if (!atomic_add_unless(&engine->stats.active, 1, 0)) {
+   engine->stats.start = ktime_get();
+   atomic_inc(&engine->stats.active);
+   }
+   write_sequnlock_irqrestore(&engine->stats.lock, flags);
+}
+
+static inline void intel_engine_context_out(struct intel_engine_cs *engine)
+{
+   unsigned long flags;
+
+   GEM_BUG_ON(!atomic_read(&engine->stats.active));
+
+   if (atomic_add_unless(&engine->stats.active, -1, 1))
+   return;
+
+   write_seqlock_irqsave(&engine->stats.lock, flags);
+   if (atomic_dec_and_test(&engine->stats.active)) {
+   engine->stats.total =
+   ktime_add(engine->stats.total,
+ ktime_sub(ktime_get(), engine->stats.start));
+   }
+   write_sequnlock_irqrestore(&engine->stats.lock, flags);
+}
+
+#endif /* __INTEL_ENGINE_STATS_H__ */
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 541dad2948b0..5380ecd62cbe 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -116,6 +116,7 @@
 #include "intel_breadcrumbs.h"
 #include "intel_context.h"
 #include "intel_engine_pm.h"
+#include "intel_engine_stats.h"
 #include "intel_execlists_submission.h"
 #include "intel_gt.h"
 #include "intel_gt_pm.h"
@@ -1127,39 +1128,6 @@ execlists_context_status_change(struct i915_request *rq, 
unsigned long status)
   status, rq);
 }
 
-static void intel_engine_context_in(struct intel_engine_cs *engine)
-{
-   unsigned long flags;
-
-   if (atomic_add_unless(&engine->stats.active, 1, 0))
-   return;
-
-   write_seqlock_irqsave(&engine->stats.lock, flags);
-   if (!atomic_add_unless(&engine->stats.active, 1, 0)) {
-   engine->stats.start = ktime_get();
-   atomic_inc(&engine->stats.active);
-   }
-   write_sequnlock_irqrestore(&engine->stats.lock, flags);
-}
-
-static void intel_engine_context_out(struct intel_engine_cs *engine)
-{
-   unsigned long flags;
-
-   GEM_BUG_ON(!atomic_read(&engine->stats.active));
-
-   if (atomic_add_unless(&engine->stats.active, -1, 1))
-   return;
-
-   write_seqlock_irqsave(&engine->stats.lock, flags);
-   if (atomic_dec_and_test(&engine->stats.active)) {
-   engine->stats.total =
-   ktime_add(engine->stats.total,
- ktime_sub(ktime_get(), engine->stats.start));
-   }
-   write_sequnlock_irqrestore(&engine->stats.lock, flags);
-}
-
 static void
 execlists_check_context(const struct intel_context *ce,
const struct intel_engine_cs *engine,
-- 
2.20.1


[Intel-gfx] [PATCH 06/69] drm/i915/gt: Decouple inflight virtual engines

2020-12-14 Thread Chris Wilson
Once a virtual engine has been bound to a sibling, it will remain bound
until we finally schedule out the last active request. We cannot rebind
the context to a new sibling while it is inflight as the context save
will conflict, hence we wait. As we cannot then use any other sibling
while the context is inflight, only kick the bound sibling while it is
inflight, and upon scheduling out kick the rest (so that we can swap
engines on timeslicing if the previously bound engine becomes
oversubscribed).

Signed-off-by: Chris Wilson 
---
 .../drm/i915/gt/intel_execlists_submission.c  | 28 ---
 1 file changed, 12 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 8e83e60492af..174c3f5f2e81 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -1359,9 +1359,8 @@ static inline void execlists_schedule_in(struct 
i915_request *rq, int idx)
 static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
 {
struct virtual_engine *ve = container_of(ce, typeof(*ve), context);
-   struct i915_request *next = READ_ONCE(ve->request);
 
-   if (next == rq || (next && next->execution_mask & ~rq->execution_mask))
+   if (READ_ONCE(ve->request))
tasklet_hi_schedule(&ve->base.execlists.tasklet);
 }
 
@@ -1781,17 +1780,13 @@ first_virtual_engine(struct intel_engine_cs *engine)
struct i915_request *rq = READ_ONCE(ve->request);
 
/* lazily cleanup after another engine handled rq */
-   if (!rq) {
+   if (!rq || !virtual_matches(ve, rq, engine)) {
rb_erase_cached(rb, &el->virtual);
RB_CLEAR_NODE(rb);
rb = rb_first_cached(&el->virtual);
continue;
}
 
-   if (!virtual_matches(ve, rq, engine)) {
-   rb = rb_next(rb);
-   continue;
-   }
return ve;
}
 
@@ -4968,7 +4963,6 @@ static void virtual_submission_tasklet(unsigned long data)
if (unlikely(!mask))
return;
 
-   local_irq_disable();
for (n = 0; n < ve->num_siblings; n++) {
struct intel_engine_cs *sibling = READ_ONCE(ve->siblings[n]);
struct ve_node * const node = &ve->nodes[sibling->id];
@@ -4978,20 +4972,19 @@ static void virtual_submission_tasklet(unsigned long 
data)
if (!READ_ONCE(ve->request))
break; /* already handled by a sibling's tasklet */
 
+   spin_lock_irq(&sibling->active.lock);
+
if (unlikely(!(mask & sibling->mask))) {
if (!RB_EMPTY_NODE(&node->rb)) {
-   spin_lock(&sibling->active.lock);
rb_erase_cached(&node->rb,
&sibling->execlists.virtual);
RB_CLEAR_NODE(&node->rb);
-   spin_unlock(&sibling->active.lock);
}
-   continue;
-   }
 
-   spin_lock(&sibling->active.lock);
+   goto unlock_engine;
+   }
 
-   if (!RB_EMPTY_NODE(&node->rb)) {
+   if (unlikely(!RB_EMPTY_NODE(&node->rb))) {
/*
 * Cheat and avoid rebalancing the tree if we can
 * reuse this node in situ.
@@ -5031,9 +5024,12 @@ static void virtual_submission_tasklet(unsigned long 
data)
if (first && prio > sibling->execlists.queue_priority_hint)
tasklet_hi_schedule(&sibling->execlists.tasklet);
 
-   spin_unlock(&sibling->active.lock);
+unlock_engine:
+   spin_unlock_irq(&sibling->active.lock);
+
+   if (intel_context_inflight(&ve->context))
+   break;
}
-   local_irq_enable();
 }
 
 static void virtual_submit_request(struct i915_request *rq)
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 23/69] drm/i915/gt: Consolidate the CS timestamp clocks

2020-12-14 Thread Chris Wilson
Pull the GT clock information [used to derive CS timestamps and PM
interval] under the GT so that it is local to its users. In doing so, we
consolidate the two references to the same information, of which the
runtime-info copy took note of a potential clock source override and scaling
factors.
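
With the clock local to the GT, derived quantities become a simple
multiply. A minimal sketch (cs_ticks_to_ns is our illustrative name,
not from the patch; gt->clock_period_ns is the field the patch adds):

	static u64 cs_ticks_to_ns(const struct intel_gt *gt, u64 ticks)
	{
		/* clock_period_ns is derived once from the GT clock frequency */
		return ticks * gt->clock_period_ns;
	}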

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/debugfs_gt_pm.c   |  20 +-
 drivers/gpu/drm/i915/gt/intel_context.h   |   6 +-
 drivers/gpu/drm/i915/gt/intel_gt.c|   4 +-
 .../gpu/drm/i915/gt/intel_gt_clock_utils.c| 197 ++
 .../gpu/drm/i915/gt/intel_gt_clock_utils.h|   8 +-
 drivers/gpu/drm/i915/gt/intel_gt_types.h  |   1 +
 drivers/gpu/drm/i915/gt/selftest_engine_pm.c  |   6 +-
 drivers/gpu/drm/i915/gt/selftest_gt_pm.c  |   8 +-
 drivers/gpu/drm/i915/i915_debugfs.c   |  19 +-
 drivers/gpu/drm/i915/i915_drv.h   |  12 --
 drivers/gpu/drm/i915/i915_getparam.c  |   2 +-
 drivers/gpu/drm/i915/i915_gpu_error.c |   2 +-
 drivers/gpu/drm/i915/i915_perf.c  |  11 +-
 drivers/gpu/drm/i915/intel_device_info.c  | 157 --
 drivers/gpu/drm/i915/intel_device_info.h  |   3 -
 drivers/gpu/drm/i915/selftests/i915_perf.c|   2 +-
 drivers/gpu/drm/i915/selftests/i915_request.c |   3 +-
 17 files changed, 205 insertions(+), 256 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/debugfs_gt_pm.c 
b/drivers/gpu/drm/i915/gt/debugfs_gt_pm.c
index 8975717ace06..a0f10e8bbd21 100644
--- a/drivers/gpu/drm/i915/gt/debugfs_gt_pm.c
+++ b/drivers/gpu/drm/i915/gt/debugfs_gt_pm.c
@@ -404,34 +404,34 @@ static int frequency_show(struct seq_file *m, void 
*unused)
seq_printf(m, "RPDECLIMIT: 0x%08x\n", rpdeclimit);
seq_printf(m, "RPNSWREQ: %dMHz\n", reqf);
seq_printf(m, "CAGF: %dMHz\n", cagf);
-   seq_printf(m, "RP CUR UP EI: %d (%dns)\n",
+   seq_printf(m, "RP CUR UP EI: %d (%lldns)\n",
   rpcurupei,
   intel_gt_pm_interval_to_ns(gt, rpcurupei));
-   seq_printf(m, "RP CUR UP: %d (%dns)\n",
+   seq_printf(m, "RP CUR UP: %d (%lldns)\n",
   rpcurup, intel_gt_pm_interval_to_ns(gt, rpcurup));
-   seq_printf(m, "RP PREV UP: %d (%dns)\n",
+   seq_printf(m, "RP PREV UP: %d (%lldns)\n",
   rpprevup, intel_gt_pm_interval_to_ns(gt, rpprevup));
seq_printf(m, "Up threshold: %d%%\n",
   rps->power.up_threshold);
-   seq_printf(m, "RP UP EI: %d (%dns)\n",
+   seq_printf(m, "RP UP EI: %d (%lldns)\n",
   rpupei, intel_gt_pm_interval_to_ns(gt, rpupei));
-   seq_printf(m, "RP UP THRESHOLD: %d (%dns)\n",
+   seq_printf(m, "RP UP THRESHOLD: %d (%lldns)\n",
   rpupt, intel_gt_pm_interval_to_ns(gt, rpupt));
 
-   seq_printf(m, "RP CUR DOWN EI: %d (%dns)\n",
+   seq_printf(m, "RP CUR DOWN EI: %d (%lldns)\n",
   rpcurdownei,
   intel_gt_pm_interval_to_ns(gt, rpcurdownei));
-   seq_printf(m, "RP CUR DOWN: %d (%dns)\n",
+   seq_printf(m, "RP CUR DOWN: %d (%lldns)\n",
   rpcurdown,
   intel_gt_pm_interval_to_ns(gt, rpcurdown));
-   seq_printf(m, "RP PREV DOWN: %d (%dns)\n",
+   seq_printf(m, "RP PREV DOWN: %d (%lldns)\n",
   rpprevdown,
   intel_gt_pm_interval_to_ns(gt, rpprevdown));
seq_printf(m, "Down threshold: %d%%\n",
   rps->power.down_threshold);
-   seq_printf(m, "RP DOWN EI: %d (%dns)\n",
+   seq_printf(m, "RP DOWN EI: %d (%lldns)\n",
   rpdownei, intel_gt_pm_interval_to_ns(gt, rpdownei));
-   seq_printf(m, "RP DOWN THRESHOLD: %d (%dns)\n",
+   seq_printf(m, "RP DOWN THRESHOLD: %d (%lldns)\n",
   rpdownt, intel_gt_pm_interval_to_ns(gt, rpdownt));
 
max_freq = (IS_GEN9_LP(i915) ? rp_state_cap >> 0 :
diff --git a/drivers/gpu/drm/i915/gt/intel_context.h 
b/drivers/gpu/drm/i915/gt/intel_context.h
index fda2eba81e22..2ce2ec639ba2 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.h
+++ b/drivers/gpu/drm/i915/gt/intel_context.h
@@ -248,16 +248,14 @@ intel_context_clear_nopreempt(struct intel_context *ce)
 
 static inline u64 intel_context_get_total_runtime_ns(struct intel_context *ce)
 {
-   const u32 period =
-   RUNTIME_INFO(ce->engine->i915)->cs_timestamp_period_ns;
+   const u32 period = ce->engine->gt->clock_period_ns;
 
return READ_ONCE(ce->runtime.total) * period;
 }
 
 static inline u64 intel_context_get_avg_runtime_ns(struct intel_context *ce)
 {
-   const u32 period =
-   RUNTIME_INFO(ce->engin

[Intel-gfx] [PATCH 31/69] drm/i915/gt: Drop atomic for engine->fw_active tracking

2020-12-14 Thread Chris Wilson
Since schedule-in/out is now entirely serialised by the tasklet bitlock,
we do not need to worry about concurrent in/out operations and so reduce
the atomic operations to plain instructions.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_engine_cs.c| 2 +-
 drivers/gpu/drm/i915/gt/intel_engine_types.h | 2 +-
 drivers/gpu/drm/i915/gt/intel_execlists_submission.c | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c 
b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 55856c230779..bd6bb4ede48d 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -1644,7 +1644,7 @@ void intel_engine_dump(struct intel_engine_cs *engine,
   ktime_to_ms(intel_engine_get_busy_time(engine,
  &dummy)));
drm_printf(m, "\tForcewake: %x domains, %d active\n",
-  engine->fw_domain, atomic_read(&engine->fw_active));
+  engine->fw_domain, READ_ONCE(engine->fw_active));
 
rcu_read_lock();
rq = READ_ONCE(engine->heartbeat.systole);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h 
b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index c28f4e190fe6..1fbee35cb5ad 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -329,7 +329,7 @@ struct intel_engine_cs {
 * as possible.
 */
enum forcewake_domains fw_domain;
-   atomic_t fw_active;
+   unsigned int fw_active;
 
unsigned long context_tag;
 
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 974cca0cfe76..541dad2948b0 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -1314,7 +1314,7 @@ __execlists_schedule_in(struct i915_request *rq)
ce->lrc.ccid |= engine->execlists.ccid;
 
__intel_gt_pm_get(engine->gt);
-   if (engine->fw_domain && !atomic_fetch_inc(&engine->fw_active))
+   if (engine->fw_domain && !engine->fw_active++)
intel_uncore_forcewake_get(engine->uncore, engine->fw_domain);
execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_IN);
intel_engine_context_in(engine);
@@ -1425,7 +1425,7 @@ static inline void __execlists_schedule_out(struct 
i915_request *rq)
intel_context_update_runtime(ce);
intel_engine_context_out(engine);
execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT);
-   if (engine->fw_domain && !atomic_dec_return(&engine->fw_active))
+   if (engine->fw_domain && !--engine->fw_active)
intel_uncore_forcewake_put(engine->uncore, engine->fw_domain);
intel_gt_pm_put_async(engine->gt);
 
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 19/69] drm/i915/gt: Use indices for writing into relative timelines

2020-12-14 Thread Chris Wilson
Relative timelines are relative to either the global or per-process
HWSP, and so we can replace the absolute addressing with store-index
variants for position invariance.
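
The two predicates this relies on are added to intel_timeline.h; a
sketch of their expected shape (our reading, based on the modes
introduced in the previous patch, not the verbatim helpers):

	static inline bool
	intel_timeline_is_relative(const struct intel_timeline *tl)
	{
		return tl->mode != INTEL_TIMELINE_ABSOLUTE;
	}

	static inline bool
	intel_timeline_in_context(const struct intel_timeline *tl)
	{
		return tl->mode == INTEL_TIMELINE_RELATIVE_CONTEXT;
	}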

Signed-off-by: Chris Wilson 
Reviewed-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/gen8_engine_cs.c | 98 +---
 drivers/gpu/drm/i915/gt/intel_timeline.h | 12 +++
 2 files changed, 82 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c 
b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
index ed88dc4de72c..4f78004f0087 100644
--- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c
@@ -502,7 +502,19 @@ gen8_emit_fini_breadcrumb_tail(struct i915_request *rq, 
u32 *cs)
 
 static u32 *emit_xcs_breadcrumb(struct i915_request *rq, u32 *cs)
 {
-   return gen8_emit_ggtt_write(cs, rq->fence.seqno, hwsp_offset(rq), 0);
+   struct intel_timeline *tl = rcu_dereference_protected(rq->timeline, 1);
+   unsigned int flags = MI_FLUSH_DW_OP_STOREDW;
+   u32 offset = hwsp_offset(rq);
+
+   if (intel_timeline_is_relative(tl)) {
+   offset = offset_in_page(offset);
+   flags |= MI_FLUSH_DW_STORE_INDEX;
+   }
+   GEM_BUG_ON(offset & 7);
+   if (!intel_timeline_in_context(tl))
+   offset |= MI_FLUSH_DW_USE_GTT;
+
+   return __gen8_emit_flush_dw(cs, rq->fence.seqno, offset, flags);
 }
 
 u32 *gen8_emit_fini_breadcrumb_xcs(struct i915_request *rq, u32 *cs)
@@ -512,6 +524,18 @@ u32 *gen8_emit_fini_breadcrumb_xcs(struct i915_request 
*rq, u32 *cs)
 
 u32 *gen8_emit_fini_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
 {
+   struct intel_timeline *tl = rcu_dereference_protected(rq->timeline, 1);
+   unsigned int flags = PIPE_CONTROL_FLUSH_ENABLE | PIPE_CONTROL_CS_STALL;
+   u32 offset = hwsp_offset(rq);
+
+   if (intel_timeline_is_relative(tl)) {
+   offset = offset_in_page(offset);
+   flags |= PIPE_CONTROL_STORE_DATA_INDEX;
+   }
+   GEM_BUG_ON(offset & 7);
+   if (!intel_timeline_in_context(tl))
+   flags |= PIPE_CONTROL_GLOBAL_GTT_IVB;
+
cs = gen8_emit_pipe_control(cs,
PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH |
PIPE_CONTROL_DEPTH_CACHE_FLUSH |
@@ -519,26 +543,33 @@ u32 *gen8_emit_fini_breadcrumb_rcs(struct i915_request 
*rq, u32 *cs)
0);
 
/* XXX flush+write+CS_STALL all in one upsets gem_concurrent_blt:kbl */
-   cs = gen8_emit_ggtt_write_rcs(cs,
- rq->fence.seqno,
- hwsp_offset(rq),
- PIPE_CONTROL_FLUSH_ENABLE |
- PIPE_CONTROL_CS_STALL);
+   cs = __gen8_emit_write_rcs(cs, rq->fence.seqno, offset, 0, flags);
 
return gen8_emit_fini_breadcrumb_tail(rq, cs);
 }
 
 u32 *gen11_emit_fini_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
 {
-   cs = gen8_emit_ggtt_write_rcs(cs,
- rq->fence.seqno,
- hwsp_offset(rq),
- PIPE_CONTROL_CS_STALL |
- PIPE_CONTROL_TILE_CACHE_FLUSH |
- PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH |
- PIPE_CONTROL_DEPTH_CACHE_FLUSH |
- PIPE_CONTROL_DC_FLUSH_ENABLE |
- PIPE_CONTROL_FLUSH_ENABLE);
+   struct intel_timeline *tl = rcu_dereference_protected(rq->timeline, 1);
+   u32 offset = hwsp_offset(rq);
+   unsigned int flags;
+
+   flags = (PIPE_CONTROL_CS_STALL |
+PIPE_CONTROL_TILE_CACHE_FLUSH |
+PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH |
+PIPE_CONTROL_DEPTH_CACHE_FLUSH |
+PIPE_CONTROL_DC_FLUSH_ENABLE |
+PIPE_CONTROL_FLUSH_ENABLE);
+
+   if (intel_timeline_is_relative(tl)) {
+   offset = offset_in_page(offset);
+   flags |= PIPE_CONTROL_STORE_DATA_INDEX;
+   }
+   GEM_BUG_ON(offset & 7);
+   if (!intel_timeline_in_context(tl))
+   flags |= PIPE_CONTROL_GLOBAL_GTT_IVB;
+
+   cs = __gen8_emit_write_rcs(cs, rq->fence.seqno, offset, 0, flags);
 
return gen8_emit_fini_breadcrumb_tail(rq, cs);
 }
@@ -601,19 +632,30 @@ u32 *gen12_emit_fini_breadcrumb_xcs(struct i915_request 
*rq, u32 *cs)
 
 u32 *gen12_emit_fini_breadcrumb_rcs(struct i915_request *rq, u32 *cs)
 {
-   cs = gen12_emit_ggtt_write_rcs(cs,
-  rq->fence.seqno,
-  hwsp_offset(rq),
-  PIPE_CONTROL0_HDC_PIPELINE_FLUSH,
-  PIPE_CONTROL_CS_STALL |
-  PIPE_CONTROL_TILE_CACHE_FLUSH |
-

[Intel-gfx] [PATCH 34/69] drm/i915/gt: Refactor heartbeat request construction and submission

2020-12-14 Thread Chris Wilson
Pull the individual strands of creating a custom heartbeat request into
a pair of common functions. This will reduce the number of changes we
will need to make in future.
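
Every heartbeat-like caller then reduces to the same two calls (as in
the hunks below):

	rq = heartbeat_create(ce, gfp);
	if (!IS_ERR(rq))
		heartbeat_commit(rq, &attr);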

Signed-off-by: Chris Wilson 
---
 .../gpu/drm/i915/gt/intel_engine_heartbeat.c  | 59 +--
 1 file changed, 41 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c 
b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
index 9060385cd69e..d7be2b9339f9 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
@@ -37,6 +37,18 @@ static bool next_heartbeat(struct intel_engine_cs *engine)
return true;
 }
 
+static struct i915_request *
+heartbeat_create(struct intel_context *ce, gfp_t gfp)
+{
+   struct i915_request *rq;
+
+   intel_context_enter(ce);
+   rq = __i915_request_create(ce, gfp);
+   intel_context_exit(ce);
+
+   return rq;
+}
+
 static void idle_pulse(struct intel_engine_cs *engine, struct i915_request *rq)
 {
engine->wakeref_serial = READ_ONCE(engine->serial) + 1;
@@ -45,6 +57,15 @@ static void idle_pulse(struct intel_engine_cs *engine, 
struct i915_request *rq)
engine->heartbeat.systole = i915_request_get(rq);
 }
 
+static void heartbeat_commit(struct i915_request *rq,
+const struct i915_sched_attr *attr)
+{
+   idle_pulse(rq->engine, rq);
+
+   __i915_request_commit(rq);
+   __i915_request_queue(rq, attr);
+}
+
 static void show_heartbeat(const struct i915_request *rq,
   struct intel_engine_cs *engine)
 {
@@ -139,16 +160,11 @@ static void heartbeat(struct work_struct *wrk)
goto out;
}
 
-   intel_context_enter(ce);
-   rq = __i915_request_create(ce, GFP_NOWAIT | __GFP_NOWARN);
-   intel_context_exit(ce);
+   rq = heartbeat_create(ce, GFP_NOWAIT | __GFP_NOWARN);
if (IS_ERR(rq))
goto unlock;
 
-   idle_pulse(engine, rq);
-
-   __i915_request_commit(rq);
-   __i915_request_queue(rq, &attr);
+   heartbeat_commit(rq, &attr);
 
 unlock:
mutex_unlock(&ce->timeline->mutex);
@@ -187,17 +203,13 @@ static int __intel_engine_pulse(struct intel_engine_cs 
*engine)
GEM_BUG_ON(!intel_engine_has_preemption(engine));
GEM_BUG_ON(!intel_engine_pm_is_awake(engine));
 
-   intel_context_enter(ce);
-   rq = __i915_request_create(ce, GFP_NOWAIT | __GFP_NOWARN);
-   intel_context_exit(ce);
+   rq = heartbeat_create(ce, GFP_NOWAIT | __GFP_NOWARN);
if (IS_ERR(rq))
return PTR_ERR(rq);
 
__set_bit(I915_FENCE_FLAG_SENTINEL, &rq->fence.flags);
-   idle_pulse(engine, rq);
 
-   __i915_request_commit(rq);
-   __i915_request_queue(rq, &attr);
+   heartbeat_commit(rq, &attr);
GEM_BUG_ON(rq->sched.attr.priority < I915_PRIORITY_BARRIER);
 
return 0;
@@ -273,8 +285,12 @@ int intel_engine_pulse(struct intel_engine_cs *engine)
 
 int intel_engine_flush_barriers(struct intel_engine_cs *engine)
 {
+   struct i915_sched_attr attr = {
+   .priority = I915_USER_PRIORITY(I915_PRIORITY_MIN),
+   };
+   struct intel_context *ce = engine->kernel_context;
struct i915_request *rq;
-   int err = 0;
+   int err;
 
if (llist_empty(&engine->barrier_tasks))
return 0;
@@ -282,15 +298,22 @@ int intel_engine_flush_barriers(struct intel_engine_cs 
*engine)
if (!intel_engine_pm_get_if_awake(engine))
return 0;
 
-   rq = i915_request_create(engine->kernel_context);
+   if (mutex_lock_interruptible(&ce->timeline->mutex)) {
+   err = -EINTR;
+   goto out_rpm;
+   }
+
+   rq = heartbeat_create(ce, GFP_KERNEL);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
-   goto out_rpm;
+   goto out_unlock;
}
 
-   idle_pulse(engine, rq);
-   i915_request_add(rq);
+   heartbeat_commit(rq, &attr);
 
+   err = 0;
+out_unlock:
+   mutex_unlock(&ce->timeline->mutex);
 out_rpm:
intel_engine_pm_put(engine);
return err;
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 18/69] drm/i915/gt: Add timeline "mode"

2020-12-14 Thread Chris Wilson
Explicitly differentiate between the absolute and relative timelines,
and the global HWSP and ppHWSP relative offsets. When using a timeline
that is relative to a known status page, we can replace the absolute
addressing in the commands with indexed variants.

Signed-off-by: Chris Wilson 
Reviewed-by: Matthew Brost 
---
 drivers/gpu/drm/i915/gt/intel_timeline.c  | 21 ---
 drivers/gpu/drm/i915/gt/intel_timeline.h  |  2 +-
 .../gpu/drm/i915/gt/intel_timeline_types.h| 10 +++--
 3 files changed, 27 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c 
b/drivers/gpu/drm/i915/gt/intel_timeline.c
index cb20fcbb326b..3d0d7c937647 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
@@ -229,7 +229,6 @@ static int intel_timeline_init(struct intel_timeline 
*timeline,
 
timeline->gt = gt;
 
-   timeline->has_initial_breadcrumb = !hwsp;
timeline->hwsp_cacheline = NULL;
 
if (!hwsp) {
@@ -246,13 +245,29 @@ static int intel_timeline_init(struct intel_timeline 
*timeline,
return PTR_ERR(cl);
}
 
+   timeline->mode = INTEL_TIMELINE_ABSOLUTE;
timeline->hwsp_cacheline = cl;
timeline->hwsp_offset = cacheline * CACHELINE_BYTES;
 
vaddr = page_mask_bits(cl->vaddr);
} else {
-   timeline->hwsp_offset = offset;
-   vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WB);
+   int preferred;
+
+   if (offset & INTEL_TIMELINE_RELATIVE_CONTEXT) {
+   timeline->mode = INTEL_TIMELINE_RELATIVE_CONTEXT;
+   timeline->hwsp_offset =
+   offset & ~INTEL_TIMELINE_RELATIVE_CONTEXT;
+   preferred = i915_coherent_map_type(gt->i915);
+   } else {
+   timeline->mode = INTEL_TIMELINE_RELATIVE_ENGINE;
+   timeline->hwsp_offset = offset;
+   preferred = I915_MAP_WB;
+   }
+
+   vaddr = i915_gem_object_pin_map(hwsp->obj,
+   preferred | I915_MAP_OVERRIDE);
+   if (IS_ERR(vaddr))
+   vaddr = i915_gem_object_pin_map(hwsp->obj, I915_MAP_WC);
if (IS_ERR(vaddr))
return PTR_ERR(vaddr);
}
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.h 
b/drivers/gpu/drm/i915/gt/intel_timeline.h
index deb71a8dbd43..69250de3a814 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.h
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.h
@@ -76,7 +76,7 @@ static inline void intel_timeline_put(struct intel_timeline 
*timeline)
 static inline bool
 intel_timeline_has_initial_breadcrumb(const struct intel_timeline *tl)
 {
-   return tl->has_initial_breadcrumb;
+   return tl->mode == INTEL_TIMELINE_ABSOLUTE;
 }
 
 static inline int __intel_timeline_sync_set(struct intel_timeline *tl,
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline_types.h 
b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
index f187c5aac11c..3c1ab901b702 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
@@ -20,6 +20,12 @@ struct i915_syncmap;
 struct intel_gt;
 struct intel_timeline_hwsp;
 
+enum intel_timeline_mode {
+   INTEL_TIMELINE_ABSOLUTE = 0,
+   INTEL_TIMELINE_RELATIVE_CONTEXT = BIT(0),
+   INTEL_TIMELINE_RELATIVE_ENGINE  = BIT(1),
+};
+
 struct intel_timeline {
u64 fence_context;
u32 seqno;
@@ -45,6 +51,8 @@ struct intel_timeline {
atomic_t pin_count;
atomic_t active_count;
 
+   enum intel_timeline_mode mode;
+
const u32 *hwsp_seqno;
struct i915_vma *hwsp_ggtt;
u32 hwsp_offset;
@@ -52,8 +60,6 @@ struct intel_timeline {
 
struct intel_timeline_cacheline *hwsp_cacheline;
 
-   bool has_initial_breadcrumb;
-
/**
 * List of breadcrumbs associated with GPU requests currently
 * outstanding.
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 47/69] drm/i915: Extract request submission from execlists

2020-12-14 Thread Chris Wilson
In the process of preparing to reuse the request submission logic for
other backends, lift it out of the execlists backend. It already
operates on the common structs, so it is just a matter of moving and renaming.

Signed-off-by: Chris Wilson 
---
 .../drm/i915/gt/intel_execlists_submission.c  | 55 +
 drivers/gpu/drm/i915/i915_scheduler.c | 82 +++
 drivers/gpu/drm/i915/i915_scheduler.h |  2 +
 3 files changed, 85 insertions(+), 54 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index af8548725e1f..86b15da995ea 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -3136,59 +3136,6 @@ static void execlists_preempt(struct timer_list *timer)
execlists_kick(timer, preempt);
 }
 
-static void queue_request(struct intel_engine_cs *engine,
- struct i915_request *rq)
-{
-   GEM_BUG_ON(!list_empty(&rq->sched.link));
-   list_add_tail(&rq->sched.link,
- i915_sched_lookup_priolist(engine, rq_prio(rq)));
-   set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
-}
-
-static bool submit_queue(struct intel_engine_cs *engine,
-const struct i915_request *rq)
-{
-   struct intel_engine_execlists *execlists = &engine->execlists;
-
-   if (rq_prio(rq) <= execlists->queue_priority_hint)
-   return false;
-
-   execlists->queue_priority_hint = rq_prio(rq);
-   return true;
-}
-
-static bool ancestor_on_hold(const struct intel_engine_cs *engine,
-const struct i915_request *rq)
-{
-   GEM_BUG_ON(i915_request_on_hold(rq));
-   return !list_empty(&engine->active.hold) && hold_request(rq);
-}
-
-static void execlists_submit_request(struct i915_request *request)
-{
-   struct intel_engine_cs *engine = request->engine;
-   unsigned long flags;
-
-   /* Will be called from irq-context when using foreign fences. */
-   spin_lock_irqsave(&engine->active.lock, flags);
-
-   if (unlikely(ancestor_on_hold(engine, request))) {
-   RQ_TRACE(request, "ancestor on hold\n");
-   list_add_tail(&request->sched.link, &engine->active.hold);
-   i915_request_set_hold(request);
-   } else {
-   queue_request(engine, request);
-
-   GEM_BUG_ON(RB_EMPTY_ROOT(&engine->execlists.queue.rb_root));
-   GEM_BUG_ON(list_empty(&request->sched.link));
-
-   if (submit_queue(engine, request))
-   __execlists_kick(&engine->execlists);
-   }
-
-   spin_unlock_irqrestore(&engine->active.lock, flags);
-}
-
 static void __execlists_context_fini(struct intel_context *ce)
 {
intel_ring_put(ce->ring);
@@ -4360,7 +4307,7 @@ static void execlists_park(struct intel_engine_cs *engine)
 
 void intel_execlists_set_default_submission(struct intel_engine_cs *engine)
 {
-   engine->submit_request = execlists_submit_request;
+   engine->submit_request = i915_request_enqueue;
engine->execlists.tasklet.func = execlists_submission_tasklet;
 
engine->reset.prepare = execlists_reset_prepare;
diff --git a/drivers/gpu/drm/i915/i915_scheduler.c 
b/drivers/gpu/drm/i915/i915_scheduler.c
index f774e19b9b1a..6a162d39efc9 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -481,6 +481,88 @@ void i915_request_set_priority(struct i915_request *rq, 
int prio)
spin_unlock_irqrestore(&engine->active.lock, flags);
 }
 
+static void queue_request(struct intel_engine_cs *engine,
+ struct i915_request *rq)
+{
+   GEM_BUG_ON(!list_empty(&rq->sched.link));
+   list_add_tail(&rq->sched.link,
+ i915_sched_lookup_priolist(engine, rq_prio(rq)));
+   set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
+}
+
+static bool submit_queue(struct intel_engine_cs *engine,
+const struct i915_request *rq)
+{
+   struct intel_engine_execlists *execlists = &engine->execlists;
+
+   if (rq_prio(rq) <= execlists->queue_priority_hint)
+   return false;
+
+   execlists->queue_priority_hint = rq_prio(rq);
+   return true;
+}
+
+static bool hold_request(const struct i915_request *rq)
+{
+   struct i915_dependency *p;
+   bool result = false;
+
+   /*
+* If one of our ancestors is on hold, we must also be put on hold,
+* otherwise we will bypass it and execute before it.
+*/
+   rcu_read_lock();
+   for_each_signaler(p, rq) {
+   const struct i915_request *s =
+   container_of(p->signaler, typeof(*s), sched);
+
+   if (s->engine != rq->engine)
+   continue;
+
+   result = i915_request_on_hold(s);
+   if (result)
+ 

[Intel-gfx] [PATCH 12/69] drm/i915/gt: ce->inflight updates are now serialised

2020-12-14 Thread Chris Wilson
Since schedule-in and schedule-out are now both always under the tasklet
bitlock, we can reduce the individual atomic operations to simple
instructions and worry less.

This notably eliminates the race observed with intel_context_inflight in
__engine_unpark().

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2583
Signed-off-by: Chris Wilson 
Reviewed-by: Tvrtko Ursulin 
---
 .../drm/i915/gt/intel_execlists_submission.c  | 52 +--
 1 file changed, 25 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index f021e0f4b24b..9f5efff08785 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -1302,11 +1302,11 @@ __execlists_schedule_in(struct i915_request *rq)
ce->lrc.ccid = ce->tag;
} else {
/* We don't need a strict matching tag, just different values */
-   unsigned int tag = ffs(READ_ONCE(engine->context_tag));
+   unsigned int tag = __ffs(engine->context_tag);
 
-   GEM_BUG_ON(tag == 0 || tag >= BITS_PER_LONG);
-   clear_bit(tag - 1, &engine->context_tag);
-   ce->lrc.ccid = tag << (GEN11_SW_CTX_ID_SHIFT - 32);
+   GEM_BUG_ON(tag >= BITS_PER_LONG);
+   __clear_bit(tag, &engine->context_tag);
+   ce->lrc.ccid = (1 + tag) << (GEN11_SW_CTX_ID_SHIFT - 32);
 
BUILD_BUG_ON(BITS_PER_LONG > GEN12_MAX_CONTEXT_HW_ID);
}
@@ -1319,6 +1319,8 @@ __execlists_schedule_in(struct i915_request *rq)
execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_IN);
intel_engine_context_in(engine);
 
+   CE_TRACE(ce, "schedule-in, ccid:%x\n", ce->lrc.ccid);
+
return engine;
 }
 
@@ -1330,13 +1332,10 @@ static inline void execlists_schedule_in(struct 
i915_request *rq, int idx)
GEM_BUG_ON(!intel_engine_pm_is_awake(rq->engine));
trace_i915_request_in(rq, idx);
 
-   old = READ_ONCE(ce->inflight);
-   do {
-   if (!old) {
-   WRITE_ONCE(ce->inflight, __execlists_schedule_in(rq));
-   break;
-   }
-   } while (!try_cmpxchg(&ce->inflight, &old, ptr_inc(old)));
+   old = ce->inflight;
+   if (!old)
+   old = __execlists_schedule_in(rq);
+   WRITE_ONCE(ce->inflight, ptr_inc(old));
 
GEM_BUG_ON(intel_context_inflight(ce) != rq->engine);
 }
@@ -1389,12 +1388,11 @@ static void kick_siblings(struct i915_request *rq, 
struct intel_context *ce)
tasklet_hi_schedule(&ve->base.execlists.tasklet);
 }
 
-static inline void
-__execlists_schedule_out(struct i915_request *rq,
-struct intel_engine_cs * const engine,
-unsigned int ccid)
+static inline void __execlists_schedule_out(struct i915_request *rq)
 {
struct intel_context * const ce = rq->context;
+   struct intel_engine_cs * const engine = rq->engine;
+   unsigned int ccid;
 
/*
 * NB process_csb() is not under the engine->active.lock and hence
@@ -1402,6 +1400,8 @@ __execlists_schedule_out(struct i915_request *rq,
 * refrain from doing non-trivial work here.
 */
 
+   CE_TRACE(ce, "schedule-out, ccid:%x\n", ce->lrc.ccid);
+
if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
execlists_check_context(ce, engine, "after");
 
@@ -1413,12 +1413,13 @@ __execlists_schedule_out(struct i915_request *rq,
i915_request_completed(rq))
intel_engine_add_retire(engine, ce->timeline);
 
+   ccid = ce->lrc.ccid;
ccid >>= GEN11_SW_CTX_ID_SHIFT - 32;
ccid &= GEN12_MAX_CONTEXT_HW_ID;
if (ccid < BITS_PER_LONG) {
GEM_BUG_ON(ccid == 0);
GEM_BUG_ON(test_bit(ccid - 1, &engine->context_tag));
-   set_bit(ccid - 1, &engine->context_tag);
+   __set_bit(ccid - 1, &engine->context_tag);
}
 
intel_context_update_runtime(ce);
@@ -1439,26 +1440,23 @@ __execlists_schedule_out(struct i915_request *rq,
 */
if (ce->engine != engine)
kick_siblings(rq, ce);
-
-   intel_context_put(ce);
 }
 
 static inline void
 execlists_schedule_out(struct i915_request *rq)
 {
struct intel_context * const ce = rq->context;
-   struct intel_engine_cs *cur, *old;
-   u32 ccid;
 
trace_i915_request_out(rq);
 
-   ccid = rq->context->lrc.ccid;
-   old = READ_ONCE(ce->inflight);
-   do
-   cur = ptr_unmask_bits(old, 2) ? ptr_dec(old) : NULL;
-   while (!try_cmpxchg(&ce->inflight, &old, cur));
-   if (!cur)
-   __execlists_schedule_out(rq, old, ccid);
+   GEM_BUG_ON(!ce->inflight);
+   ce->inflight = ptr_dec(ce->inflight);
+   if (!__intel_context_inflight_count(ce->inflight)) {
+

[Intel-gfx] [PATCH 33/69] drm/i915/gt: Convert stats.active to plain unsigned int

2020-12-14 Thread Chris Wilson
As context-in/out is now always serialised, we do not have to worry
about concurrent enabling/disabling of the busy-stats and can reduce the
atomic_t active to a plain unsigned int, and the seqlock to a seqcount.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_engine_cs.c|  8 ++--
 drivers/gpu/drm/i915/gt/intel_engine_stats.h | 45 
 drivers/gpu/drm/i915/gt/intel_engine_types.h |  4 +-
 3 files changed, 34 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c 
b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index bd6bb4ede48d..95cf5a928d9b 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -341,7 +341,7 @@ static int intel_engine_setup(struct intel_gt *gt, enum 
intel_engine_id id)
engine->schedule = NULL;
 
ewma__engine_latency_init(&engine->latency);
-   seqlock_init(&engine->stats.lock);
+   seqcount_init(&engine->stats.lock);
 
ATOMIC_INIT_NOTIFIER_HEAD(&engine->context_status_notifier);
 
@@ -1722,7 +1722,7 @@ static ktime_t __intel_engine_get_busy_time(struct 
intel_engine_cs *engine,
 * add it to the total.
 */
*now = ktime_get();
-   if (atomic_read(&engine->stats.active))
+   if (READ_ONCE(engine->stats.active))
total = ktime_add(total, ktime_sub(*now, engine->stats.start));
 
return total;
@@ -1741,9 +1741,9 @@ ktime_t intel_engine_get_busy_time(struct intel_engine_cs 
*engine, ktime_t *now)
ktime_t total;
 
do {
-   seq = read_seqbegin(&engine->stats.lock);
+   seq = read_seqcount_begin(&engine->stats.lock);
total = __intel_engine_get_busy_time(engine, now);
-   } while (read_seqretry(&engine->stats.lock, seq));
+   } while (read_seqcount_retry(&engine->stats.lock, seq));
 
return total;
 }
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_stats.h 
b/drivers/gpu/drm/i915/gt/intel_engine_stats.h
index 58491eae3482..24fbdd94351a 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_stats.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_stats.h
@@ -17,33 +17,44 @@ static inline void intel_engine_context_in(struct 
intel_engine_cs *engine)
 {
unsigned long flags;
 
-   if (atomic_add_unless(&engine->stats.active, 1, 0))
+   if (engine->stats.active) {
+   engine->stats.active++;
return;
-
-   write_seqlock_irqsave(&engine->stats.lock, flags);
-   if (!atomic_add_unless(&engine->stats.active, 1, 0)) {
-   engine->stats.start = ktime_get();
-   atomic_inc(&engine->stats.active);
}
-   write_sequnlock_irqrestore(&engine->stats.lock, flags);
+
+   /* The writer is serialised; but the pmu reader may be from hardirq */
+   local_irq_save(flags);
+   write_seqcount_begin(&engine->stats.lock);
+
+   engine->stats.start = ktime_get();
+   engine->stats.active++;
+
+   write_seqcount_end(&engine->stats.lock);
+   local_irq_restore(flags);
+
+   GEM_BUG_ON(!engine->stats.active);
 }
 
 static inline void intel_engine_context_out(struct intel_engine_cs *engine)
 {
unsigned long flags;
 
-   GEM_BUG_ON(!atomic_read(&engine->stats.active));
-
-   if (atomic_add_unless(&engine->stats.active, -1, 1))
+   GEM_BUG_ON(!engine->stats.active);
+   if (engine->stats.active > 1) {
+   engine->stats.active--;
return;
-
-   write_seqlock_irqsave(&engine->stats.lock, flags);
-   if (atomic_dec_and_test(&engine->stats.active)) {
-   engine->stats.total =
-   ktime_add(engine->stats.total,
- ktime_sub(ktime_get(), engine->stats.start));
}
-   write_sequnlock_irqrestore(&engine->stats.lock, flags);
+
+   local_irq_save(flags);
+   write_seqcount_begin(&engine->stats.lock);
+
+   engine->stats.active--;
+   engine->stats.total =
+   ktime_add(engine->stats.total,
+ ktime_sub(ktime_get(), engine->stats.start));
+
+   write_seqcount_end(&engine->stats.lock);
+   local_irq_restore(flags);
 }
 
 #endif /* __INTEL_ENGINE_STATS_H__ */
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h 
b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index 1fbee35cb5ad..fdec129a6317 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -526,12 +526,12 @@ struct intel_engine_cs {
/**
 * @active: Number of contexts currently scheduled in.
 */
-   atomic_t active;
+   unsigned int active;
 
/**
 * @lock: Lock protecting the below fields.
 */
-   seqlock_t lock;
+   seqcount_t lock;
 
/**
 * @total: Total time this engine was busy.
-- 
2.20.1

[Intel-gfx] [PATCH 43/69] drm/i915/selftests: Measure set-priority duration

2020-12-14 Thread Chris Wilson
As a topological sort, we expect it to run in linear graph time,
O(V+E). In removing the recursion, it is no longer a DFS but rather a
BFS, and performs as O(VE). Let's demonstrate how bad this is with a few
examples, and build a few test cases to verify a potential fix.
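
As a back-of-the-envelope illustration (numbers ours): for a single
dependency chain of V requests there are E = V - 1 edges. A DFS visits
each node and edge once, so reprioritising the head costs roughly 2V
steps -- about 2,000 for V = 1,000. A BFS-style relaxation may instead
requeue a node once per incoming edge update, on the order of V * E, or
~10^6 steps for the same chain; the cost grows with the square of the
chain length.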

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/i915_scheduler.c |   4 +
 .../drm/i915/selftests/i915_live_selftests.h  |   1 +
 .../drm/i915/selftests/i915_perf_selftests.h  |   1 +
 .../gpu/drm/i915/selftests/i915_scheduler.c   | 663 ++
 4 files changed, 669 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/selftests/i915_scheduler.c

diff --git a/drivers/gpu/drm/i915/i915_scheduler.c 
b/drivers/gpu/drm/i915/i915_scheduler.c
index 4d0c30da971e..265c915a9b82 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -626,6 +626,10 @@ void i915_request_show_with_schedule(struct drm_printer *m,
rcu_read_unlock();
 }
 
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/i915_scheduler.c"
+#endif
+
 static void i915_global_scheduler_shrink(void)
 {
kmem_cache_shrink(global.slab_dependencies);
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h 
b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index a92c0e9b7e6b..2200a5baa68e 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -26,6 +26,7 @@ selftest(gt_mocs, intel_mocs_live_selftests)
 selftest(gt_pm, intel_gt_pm_live_selftests)
 selftest(gt_heartbeat, intel_heartbeat_live_selftests)
 selftest(requests, i915_request_live_selftests)
+selftest(scheduler, i915_scheduler_live_selftests)
 selftest(active, i915_active_live_selftests)
 selftest(objects, i915_gem_object_live_selftests)
 selftest(mman, i915_gem_mman_live_selftests)
diff --git a/drivers/gpu/drm/i915/selftests/i915_perf_selftests.h 
b/drivers/gpu/drm/i915/selftests/i915_perf_selftests.h
index c2389f8a257d..137e35283fee 100644
--- a/drivers/gpu/drm/i915/selftests/i915_perf_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_perf_selftests.h
@@ -17,5 +17,6 @@
  */
 selftest(engine_cs, intel_engine_cs_perf_selftests)
 selftest(request, i915_request_perf_selftests)
+selftest(scheduler, i915_scheduler_perf_selftests)
 selftest(blt, i915_gem_object_blt_perf_selftests)
 selftest(region, intel_memory_region_perf_selftests)
diff --git a/drivers/gpu/drm/i915/selftests/i915_scheduler.c 
b/drivers/gpu/drm/i915/selftests/i915_scheduler.c
new file mode 100644
index ..481549f0ddad
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/i915_scheduler.c
@@ -0,0 +1,663 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2020 Intel Corporation
+ */
+
+#include "i915_selftest.h"
+
+#include "gt/intel_context.h"
+#include "gt/selftest_engine_heartbeat.h"
+#include "selftests/igt_spinner.h"
+#include "selftests/i915_random.h"
+
+static void scheduling_disable(struct intel_engine_cs *engine)
+{
+   engine->props.preempt_timeout_ms = 0;
+   engine->props.timeslice_duration_ms = 0;
+
+   st_engine_heartbeat_disable(engine);
+}
+
+static void scheduling_enable(struct intel_engine_cs *engine)
+{
+   st_engine_heartbeat_enable(engine);
+
+   engine->props.preempt_timeout_ms =
+   engine->defaults.preempt_timeout_ms;
+   engine->props.timeslice_duration_ms =
+   engine->defaults.timeslice_duration_ms;
+}
+
+static int first_engine(struct drm_i915_private *i915,
+   int (*chain)(struct intel_engine_cs *engine,
+unsigned long param,
+bool (*fn)(struct i915_request *rq,
+   unsigned long v,
+   unsigned long e)),
+   unsigned long param,
+   bool (*fn)(struct i915_request *rq,
+  unsigned long v, unsigned long e))
+{
+   struct intel_engine_cs *engine;
+
+   for_each_uabi_engine(engine, i915) {
+   if (!intel_engine_has_scheduler(engine))
+   continue;
+
+   return chain(engine, param, fn);
+   }
+
+   return 0;
+}
+
+static int all_engines(struct drm_i915_private *i915,
+  int (*chain)(struct intel_engine_cs *engine,
+   unsigned long param,
+   bool (*fn)(struct i915_request *rq,
+  unsigned long v,
+  unsigned long e)),
+  unsigned long param,
+  bool (*fn)(struct i915_request *rq,
+ unsigned long v, unsigned long e))
+{
+   struct intel_engine_cs *engine;
+   int err;
+
+   for_each_uabi_engine(engine, i915) {
+   if (!intel_engine_has_sch

[Intel-gfx] [PATCH 37/69] drm/i915/gt: Defer the kmem_cache_free() until after the HW submit

2020-12-14 Thread Chris Wilson
Watching lock_stat, we noticed that the kmem_cache_free() would cause
the occasional multi-millisecond spike (directly affecting max-holdtime
and so the max-waittime). Delaying our submission of the next ELSP by a
millisecond will leave the GPU idle, so defer the kmem_cache_free()
until afterwards.
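
The deferral costs no allocation: once a priolist is empty, its
requests list_head is reused as a link in a singly chained free list
(see i915_priolist_free_defer() below), walked only after the ELSP
write:

	struct list_head *free = NULL;
	...
	free = i915_priolist_free_defer(p, free); /* chain p for later */
	...
	i915_priolist_free_many(free); /* after submitting the ELSP */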

Signed-off-by: Chris Wilson 
---
 .../gpu/drm/i915/gt/intel_execlists_submission.c| 10 +-
 drivers/gpu/drm/i915/i915_scheduler.c   | 13 +
 drivers/gpu/drm/i915/i915_scheduler.h   | 12 
 3 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 201700fe3483..16161bf4c849 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -2019,6 +2019,7 @@ static void execlists_dequeue(struct intel_engine_cs 
*engine)
struct i915_request **port = execlists->pending;
struct i915_request ** const last_port = port + execlists->port_mask;
struct i915_request *last = *execlists->active;
+   struct list_head *free = NULL;
struct virtual_engine *ve;
struct rb_node *rb;
bool submit = false;
@@ -2307,8 +2308,9 @@ static void execlists_dequeue(struct intel_engine_cs 
*engine)
}
}
 
+   /* Remove the node, but defer the free for later */
rb_erase_cached(&p->node, &execlists->queue);
-   i915_priolist_free(p);
+   free = i915_priolist_free_defer(p, free);
}
 done:
*port++ = i915_request_get(last);
@@ -2360,6 +2362,12 @@ static void execlists_dequeue(struct intel_engine_cs 
*engine)
i915_request_put(*port);
*execlists->pending = NULL;
}
+
+   /*
+* We noticed that kmem_cache_free() may cause 1ms+ latencies, so
+* we defer the frees until after we have submitted the ELSP.
+*/
+   i915_priolist_free_many(free);
 }
 
 static void execlists_dequeue_irq(struct intel_engine_cs *engine)
diff --git a/drivers/gpu/drm/i915/i915_scheduler.c 
b/drivers/gpu/drm/i915/i915_scheduler.c
index a57353191d12..dad5318ca825 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -126,6 +126,19 @@ void __i915_priolist_free(struct i915_priolist *p)
kmem_cache_free(global.slab_priorities, p);
 }
 
+void i915_priolist_free_many(struct list_head *list)
+{
+   while (list) {
+   struct i915_priolist *p;
+
+   p = container_of(list, typeof(*p), requests);
+   list = p->requests.next;
+
+   GEM_BUG_ON(p->priority == I915_PRIORITY_NORMAL);
+   kmem_cache_free(global.slab_priorities, p);
+   }
+}
+
 struct sched_cache {
struct list_head *priolist;
 };
diff --git a/drivers/gpu/drm/i915/i915_scheduler.h 
b/drivers/gpu/drm/i915/i915_scheduler.h
index 858a0938f47a..503630bd2c03 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.h
+++ b/drivers/gpu/drm/i915/i915_scheduler.h
@@ -48,6 +48,18 @@ static inline void i915_priolist_free(struct i915_priolist 
*p)
__i915_priolist_free(p);
 }
 
+void i915_priolist_free_many(struct list_head *list);
+
+static inline struct list_head *
+i915_priolist_free_defer(struct i915_priolist *p, struct list_head *free)
+{
+   if (p->priority != I915_PRIORITY_NORMAL) {
+   p->requests.next = free;
+   free = &p->requests;
+   }
+   return free;
+}
+
 void i915_request_show_with_schedule(struct drm_printer *m,
 const struct i915_request *rq,
 const char *prefix,
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 40/69] drm/i915/gt: Do not suspend bonded requests if one hangs

2020-12-14 Thread Chris Wilson
Treat the dependency between bonded requests as weak and leave the
remainder of the pair on the GPU if one hangs.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_execlists_submission.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index e5d0f6bf4777..53e5db533adb 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -2764,6 +2764,9 @@ static void __execlists_hold(struct i915_request *rq)
struct i915_request *w =
container_of(p->waiter, typeof(*w), sched);
 
+   if (p->flags & I915_DEPENDENCY_WEAK)
+   continue;
+
/* Leave semaphores spinning on the other engines */
if (w->engine != rq->engine)
continue;
@@ -2862,6 +2865,9 @@ static void __execlists_unhold(struct i915_request *rq)
struct i915_request *w =
container_of(p->waiter, typeof(*w), sched);
 
+   if (p->flags & I915_DEPENDENCY_WEAK)
+   continue;
+
/* Propagate any change in error status */
if (rq->fence.error)
i915_request_set_error_once(w, rq->fence.error);
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 41/69] drm/i915: Teach the i915_dependency to use a double-lock

2020-12-14 Thread Chris Wilson
Currently, we construct and teardown the i915_dependency chains using a
global spinlock. As the lists are entirely local, it should be possible
to use a double-lock with explicit nesting [signaler -> waiter,
always] and so avoid the costly convenience of a global spinlock.
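
As every i915_sched_node shares the same lock class, the inner lock
needs a lockdep nesting annotation; the construction side, distilled
from the hunk below:

	/* signaler -> waiter, always: the signaler's lock is the outer */
	spin_lock(&signal->lock);
	spin_lock_nested(&node->lock, SINGLE_DEPTH_NESTING);

	list_add_rcu(&dep->signal_link, &node->signalers_list);
	list_add_rcu(&dep->wait_link, &signal->waiters_list);

	spin_unlock(&node->lock);
	spin_unlock(&signal->lock);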

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/i915_request.c |  2 +-
 drivers/gpu/drm/i915/i915_scheduler.c   | 63 ++---
 drivers/gpu/drm/i915/i915_scheduler.h   |  2 +-
 drivers/gpu/drm/i915/i915_scheduler_types.h |  2 +
 4 files changed, 45 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index 914d271b7222..5e1617a3a75d 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -329,7 +329,7 @@ bool i915_request_retire(struct i915_request *rq)
intel_context_unpin(rq->context);
 
free_capture_list(rq);
-   i915_sched_node_fini(&rq->sched);
+   i915_sched_node_retire(&rq->sched);
i915_request_put(rq);
 
return true;
diff --git a/drivers/gpu/drm/i915/i915_scheduler.c 
b/drivers/gpu/drm/i915/i915_scheduler.c
index 7ce875bacdaa..140c0578eef1 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -19,6 +19,17 @@ static struct i915_global_scheduler {
 
 static DEFINE_SPINLOCK(schedule_lock);
 
+static struct i915_sched_node *node_get(struct i915_sched_node *node)
+{
+   i915_request_get(container_of(node, struct i915_request, sched));
+   return node;
+}
+
+static void node_put(struct i915_sched_node *node)
+{
+   i915_request_put(container_of(node, struct i915_request, sched));
+}
+
 static const struct i915_request *
 node_to_request(const struct i915_sched_node *node)
 {
@@ -389,6 +400,8 @@ void i915_request_set_priority(struct i915_request *rq, int 
prio)
 
 void i915_sched_node_init(struct i915_sched_node *node)
 {
+   spin_lock_init(&node->lock);
+
INIT_LIST_HEAD(&node->signalers_list);
INIT_LIST_HEAD(&node->waiters_list);
INIT_LIST_HEAD(&node->link);
@@ -413,10 +426,17 @@ i915_dependency_alloc(void)
return kmem_cache_alloc(global.slab_dependencies, GFP_KERNEL);
 }
 
+static void
+rcu_dependency_free(struct rcu_head *rcu)
+{
+   kmem_cache_free(global.slab_dependencies,
+   container_of(rcu, typeof(struct i915_dependency), rcu));
+}
+
 static void
 i915_dependency_free(struct i915_dependency *dep)
 {
-   kmem_cache_free(global.slab_dependencies, dep);
+   call_rcu(&dep->rcu, rcu_dependency_free);
 }
 
 bool __i915_sched_node_add_dependency(struct i915_sched_node *node,
@@ -426,24 +446,27 @@ bool __i915_sched_node_add_dependency(struct 
i915_sched_node *node,
 {
bool ret = false;
 
-   spin_lock_irq(&schedule_lock);
+   /* The signal->lock is always the outer lock in this double-lock. */
+   spin_lock(&signal->lock);
 
if (!node_signaled(signal)) {
INIT_LIST_HEAD(&dep->dfs_link);
dep->signaler = signal;
-   dep->waiter = node;
+   dep->waiter = node_get(node);
dep->flags = flags;
 
/* All set, now publish. Beware the lockless walkers. */
+   spin_lock_nested(&node->lock, SINGLE_DEPTH_NESTING);
list_add_rcu(&dep->signal_link, &node->signalers_list);
list_add_rcu(&dep->wait_link, &signal->waiters_list);
+   spin_unlock(&node->lock);
 
/* Propagate the chains */
node->flags |= signal->flags;
ret = true;
}
 
-   spin_unlock_irq(&schedule_lock);
+   spin_unlock(&signal->lock);
 
return ret;
 }
@@ -465,39 +488,36 @@ int i915_sched_node_add_dependency(struct i915_sched_node 
*node,
return 0;
 }
 
-void i915_sched_node_fini(struct i915_sched_node *node)
+void i915_sched_node_retire(struct i915_sched_node *node)
 {
struct i915_dependency *dep, *tmp;
 
-   spin_lock_irq(&schedule_lock);
-
/*
 * Everyone we depended upon (the fences we wait to be signaled)
 * should retire before us and remove themselves from our list.
 * However, retirement is run independently on each timeline and
-* so we may be called out-of-order.
+* so we may be called out-of-order. As we need to avoid taking
+* the signaler's lock, just mark up our completion and be wary
+* in traversing the signalers->waiters_list.
 */
-   list_for_each_entry_safe(dep, tmp, &node->signalers_list, signal_link) {
-   GEM_BUG_ON(!list_empty(&dep->dfs_link));
-
-   list_del_rcu(&dep->wait_link);
-   if (dep->flags & I915_DEPENDENCY_ALLOC)
-   i915_dependency_free(dep);
-   }
-   INIT_LIST_HEAD(&node->signalers_list);
 
/* Remove ourselves from everyone who depends upon us */
+

[Intel-gfx] [PATCH 28/69] drm/i915/gem: Reduce ctx->engine_mutex for reading the clone source

2020-12-14 Thread Chris Wilson
When cloning the engines from the source context, we need to ensure that
the engines are not freed as we copy them, and that the flags we clone
from the source correspond with the engines we copy across. To do this
we need only take a reference to the src->engines, rather than hold the
src->engine_mutex, so long as we verify that nothing changed under the
read.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c | 24 +
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c 
b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 738a07b3583c..e87da2525d0f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -713,7 +713,8 @@ __create_context(struct drm_i915_private *i915)
 }
 
 static inline struct i915_gem_engines *
-__context_engines_await(const struct i915_gem_context *ctx)
+__context_engines_await(const struct i915_gem_context *ctx,
+   bool *user_engines)
 {
struct i915_gem_engines *engines;
 
@@ -722,6 +723,10 @@ __context_engines_await(const struct i915_gem_context *ctx)
engines = rcu_dereference(ctx->engines);
GEM_BUG_ON(!engines);
 
+   if (user_engines)
+   *user_engines = i915_gem_context_user_engines(ctx);
+
+   /* successful await => strong mb */
if (unlikely(!i915_sw_fence_await(&engines->fence)))
continue;
 
@@ -745,7 +750,7 @@ context_apply_all(struct i915_gem_context *ctx,
struct intel_context *ce;
int err = 0;
 
-   e = __context_engines_await(ctx);
+   e = __context_engines_await(ctx, NULL);
for_each_gem_engine(ce, e, it) {
err = fn(ce, data);
if (err)
@@ -1071,7 +1076,7 @@ static int context_barrier_task(struct i915_gem_context 
*ctx,
return err;
}
 
-   e = __context_engines_await(ctx);
+   e = __context_engines_await(ctx, NULL);
if (!e) {
i915_active_release(&cb->base);
return -ENOENT;
@@ -2091,11 +2096,14 @@ static int copy_ring_size(struct intel_context *dst,
 static int clone_engines(struct i915_gem_context *dst,
 struct i915_gem_context *src)
 {
-   struct i915_gem_engines *e = i915_gem_context_lock_engines(src);
-   struct i915_gem_engines *clone;
+   struct i915_gem_engines *clone, *e;
bool user_engines;
unsigned long n;
 
+   e = __context_engines_await(src, &user_engines);
+   if (!e)
+   return -ENOENT;
+
clone = alloc_engines(e->num_engines);
if (!clone)
goto err_unlock;
@@ -2137,9 +2145,7 @@ static int clone_engines(struct i915_gem_context *dst,
}
}
clone->num_engines = n;
-
-   user_engines = i915_gem_context_user_engines(src);
-   i915_gem_context_unlock_engines(src);
+   i915_sw_fence_complete(&e->fence);
 
/* Serialised by constructor */
engines_idle_release(dst, rcu_replace_pointer(dst->engines, clone, 1));
@@ -2150,7 +2156,7 @@ static int clone_engines(struct i915_gem_context *dst,
return 0;
 
 err_unlock:
-   i915_gem_context_unlock_engines(src);
+   i915_sw_fence_complete(&e->fence);
return -ENOMEM;
 }
 
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 10/69] drm/i915/gt: Resubmit the virtual engine on schedule-out

2020-12-14 Thread Chris Wilson
Having recognised that we do not change the sibling until we schedule
out, we can then defer the decision to resubmit the virtual engine from
the unwind of the active queue to scheduling out of the virtual context.
This improves our resilience in virtual engine scheduling, and should
eliminate the rare cases of gem_exec_balance failing.

By keeping the unwind order intact on the local engine, we can preserve
data dependency ordering while doing a preempt-to-busy pass until we
have determined the new ELSP. This means that if we try to timeslice
between a virtual engine and a data-dependent ordinary request, the pair
will maintain their relative ordering and we will avoid the
resubmission, cancelling the timeslicing until further change.

The dilemma though is that we then may end up in a situation where the
'demotion' of the virtual request to an ordinary request in the engine
queue results in filling the ELSP[] with virtual requests instead of
spreading the load across the engines. To compensate for this, we mark
each virtual request and refuse to resubmit a virtual request in the
secondary ELSP slots, thus forcing subsequent virtual requests to be
scheduled out after timeslicing. By delaying the decision until we
schedule out, we will avoid unnecessary resubmission.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2079
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2098
Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
---
 .../drm/i915/gt/intel_execlists_submission.c  | 126 +++---
 drivers/gpu/drm/i915/gt/selftest_execlists.c  |   2 +-
 2 files changed, 79 insertions(+), 49 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 9dcd650805fa..26d704694c33 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -1082,38 +1082,23 @@ __unwind_incomplete_requests(struct intel_engine_cs 
*engine)
 
__i915_request_unsubmit(rq);
 
-   /*
-* Push the request back into the queue for later resubmission.
-* If this request is not native to this physical engine (i.e.
-* it came from a virtual source), push it back onto the virtual
-* engine so that it can be moved across onto another physical
-* engine as load dictates.
-*/
-   if (likely(rq->execution_mask == engine->mask)) {
-   GEM_BUG_ON(rq_prio(rq) == I915_PRIORITY_INVALID);
-   if (rq_prio(rq) != prio) {
-   prio = rq_prio(rq);
-   pl = i915_sched_lookup_priolist(engine, prio);
-   }
-   
GEM_BUG_ON(RB_EMPTY_ROOT(&engine->execlists.queue.rb_root));
+   GEM_BUG_ON(rq_prio(rq) == I915_PRIORITY_INVALID);
+   if (rq_prio(rq) != prio) {
+   prio = rq_prio(rq);
+   pl = i915_sched_lookup_priolist(engine, prio);
+   }
+   GEM_BUG_ON(RB_EMPTY_ROOT(&engine->execlists.queue.rb_root));
 
-   list_move(&rq->sched.link, pl);
-   set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
+   list_move(&rq->sched.link, pl);
+   set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
 
-   /* Check in case we rollback so far we wrap [size/2] */
-   if (intel_ring_direction(rq->ring,
-rq->tail,
-rq->ring->tail + 8) > 0)
-   rq->context->lrc.desc |= CTX_DESC_FORCE_RESTORE;
+   /* Check in case we rollback so far we wrap [size/2] */
+   if (intel_ring_direction(rq->ring,
+rq->tail,
+rq->ring->tail + 8) > 0)
+   rq->context->lrc.desc |= CTX_DESC_FORCE_RESTORE;
 
-   active = rq;
-   } else {
-   struct intel_engine_cs *owner = rq->context->engine;
-
-   WRITE_ONCE(rq->engine, owner);
-   owner->submit_request(rq);
-   active = NULL;
-   }
+   active = rq;
}
 
return active;
@@ -1356,9 +1341,9 @@ static inline void execlists_schedule_in(struct 
i915_request *rq, int idx)
GEM_BUG_ON(intel_context_inflight(ce) != rq->engine);
 }
 
-static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
+static void
+resubmit_virtual_request(struct i915_request *rq, struct virtual_engine *ve)
 {
-   struct virtual_engine *ve = container_of(ce, typeof(*ve), context);
struct intel_engine_cs *engine = rq->engine;
 
/* Flush concurrent

[Intel-gfx] [PATCH 27/69] drm/i915: Drop i915_request.lock requirement for intel_rps_boost()

2020-12-14 Thread Chris Wilson
Since we use a flag within i915_request.flags to indicate when we have
boosted the request (so that we only apply the boost once), this can be
used as the serialisation with i915_request_retire() to avoid having to
explicitly take the i915_request.lock, which is more heavily contended.
As retirement also sets the flag, a boost that races with retirement
loses the test_and_set_bit() and is safely dropped.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_rps.c | 15 ++-
 drivers/gpu/drm/i915/i915_request.c |  4 +---
 2 files changed, 7 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c 
b/drivers/gpu/drm/i915/gt/intel_rps.c
index f74d5e09e176..e1397b8d3586 100644
--- a/drivers/gpu/drm/i915/gt/intel_rps.c
+++ b/drivers/gpu/drm/i915/gt/intel_rps.c
@@ -917,17 +917,15 @@ void intel_rps_park(struct intel_rps *rps)
 
 void intel_rps_boost(struct i915_request *rq)
 {
-   struct intel_rps *rps = &READ_ONCE(rq->engine)->gt->rps;
-   unsigned long flags;
-
-   if (i915_request_signaled(rq) || !intel_rps_is_active(rps))
+   if (i915_request_signaled(rq) || i915_request_has_waitboost(rq))
return;
 
/* Serializes with i915_request_retire() */
-   spin_lock_irqsave(&rq->lock, flags);
-   if (!i915_request_has_waitboost(rq) &&
-   !dma_fence_is_signaled_locked(&rq->fence)) {
-   set_bit(I915_FENCE_FLAG_BOOST, &rq->fence.flags);
+   if (!test_and_set_bit(I915_FENCE_FLAG_BOOST, &rq->fence.flags)) {
+   struct intel_rps *rps = &READ_ONCE(rq->engine)->gt->rps;
+
+   if (!intel_rps_is_active(rps))
+   return;
 
GT_TRACE(rps_to_gt(rps), "boost fence:%llx:%llx\n",
 rq->fence.context, rq->fence.seqno);
@@ -938,7 +936,6 @@ void intel_rps_boost(struct i915_request *rq)
 
atomic_inc(&rps->boosts);
}
-   spin_unlock_irqrestore(&rq->lock, flags);
 }
 
 int intel_rps_set(struct intel_rps *rps, u8 val)
diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index 87f59931f2ba..4d886b3c9cd7 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -306,10 +306,8 @@ bool i915_request_retire(struct i915_request *rq)
spin_unlock_irq(&rq->lock);
}
 
-   if (i915_request_has_waitboost(rq)) {
-   GEM_BUG_ON(!atomic_read(&rq->engine->gt->rps.num_waiters));
+   if (test_and_set_bit(I915_FENCE_FLAG_BOOST, &rq->fence.flags))
atomic_dec(&rq->engine->gt->rps.num_waiters);
-   }
 
/*
 * We only loosely track inflight requests across preemption,
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 52/69] drm/i915: Fair low-latency scheduling

2020-12-14 Thread Chris Wilson
The first "scheduler" was a topological sorting of requests into
priority order. The execution order was deterministic, the earliest
submitted, highest priority request would be executed first. Priority
inheritance ensured that inversions were kept at bay, and allowed us to
dynamically boost priorities (e.g. for interactive pageflips).

The minimalistic timeslicing scheme was an attempt to introduce fairness
between long running requests, by evicting the active request at the end
of a timeslice and moving it to the back of its priority queue (while
ensuring that dependencies were kept in order). For short running
requests from many clients of equal priority, the scheme is still very
much FIFO submission ordering, and as unfair as before.

To impose fairness, we need an external metric that ensures that clients
are interspersed, so we don't execute one long chain from client A before
executing any of client B. This could be imposed by the clients
themselves by using fences based on an external clock; that is, they only
submit work for a "frame" at frame-intervals, instead of submitting as
much work as they are able to. The standard SwapBuffers approach is akin
to double buffering, where, as one frame is being executed, the next is
being submitted, such that there is always a maximum of two frames per
client in the pipeline and so ideally maintains consistent input-output
latency. Even this scheme exhibits unfairness under load as a single
client will execute two frames back to back before the next, and with
enough clients, deadlines will be missed.

The idea introduced by BFS/MuQSS is that fairness is introduced by
metering with an external clock. Every request, when it becomes ready to
execute is assigned a virtual deadline, and execution order is then
determined by earliest deadline. Priority is used as a hint, rather than
strict ordering, where high priority requests have earlier deadlines,
but not necessarily earlier than outstanding work. Thus work is executed
in order of 'readiness', with timeslicing to demote long running work.

The Achilles' heel of this scheduler is its strong preference for
low-latency and favouring of new queues. Whereas it was easy to dominate
the old scheduler by flooding it with many requests over a short period
of time, the new scheduler can be dominated by a 'synchronous' client
that waits for each of its requests to complete before submitting the
next. As such a client has no history, it is always considered
ready-to-run and receives an earlier deadline than the long running
requests. This is compensated for by refreshing the current execution's
deadline and by disallowing preemption for timeslice shuffling.
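
To make the mechanism concrete, the ordering reduces to a few lines of
arithmetic. The sketch below is illustrative only: prio_to_slice() and the
deadline field are invented stand-ins for this example, not the patch's
actual bookkeeping in i915_scheduler.c.

/* Sketch: earliest-virtual-deadline-first selection. */
static u64 virtual_deadline(int prio)
{
	/* A higher priority yields a shorter relative deadline, so the
	 * request sorts earlier; it does not automatically preempt work
	 * that already holds an earlier deadline. */
	return ktime_get_ns() + prio_to_slice(prio);
}

/* On becoming ready: rq->deadline = virtual_deadline(rq_prio(rq));
 * on dequeue: run the ready request with the smallest deadline;
 * on timeslice expiry: refresh the deadline and requeue, demoting
 * long running work behind newly ready clients. */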

To check the impact on throughput (often the downfall of latency
sensitive schedulers), we used gem_wsim to simulate various transcode
workloads with different load balancers, and varying the number of
competing [heterogenous] clients.

+delta%--+
|a   |
|a   |
|aa  |
|aa  |
|aa  |
|aa  |
|   aaa  |
|    |
|   a  a |
|   a aa |
|a  aa   a  aa aa   a   a|
|A_| |
++
       N           Min           Max        Median           Avg        Stddev
     108    -23.982194     28.421527  -0.077474828  -0.072650418    0.16179718

The impact was on average 0.1% under contention due to the change in
context execution order and number of context switches. The biggest
swings are due to the execution ordering favouring one client or another,
and suggest there may be room for improvement.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_engine_cs.c |   1 -
 .../gpu/drm/i915/gt/intel_engine_heartbeat.c  |   1 +
 drivers/gpu/drm/i915/gt/intel_engine_pm.c |   4 +-
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |  14 -
 .../drm/i915/gt/intel_execlists_submission.c  | 242 +-
 drivers/gpu/drm/i915/gt/selftest_execlists.c  |  41 +-
 drivers/gpu/drm/i915/gt/selftest_hangcheck.c  |   5 +-
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c |   6 +-
 drivers/gpu/drm/i915/i915_priolist_types.h|   7 +-
 drivers/gpu/drm/i915/i915_request.c   |  14 +-
 drivers/gpu/drm/i915/i915_sche

[Intel-gfx] [PATCH 24/69] drm/i915/gt: Prefer recycling an idle fence

2020-12-14 Thread Chris Wilson
If we want to reuse a fence that is in active use by the GPU, we have to
wait an uncertain amount of time, but if we reuse an inactive fence, we
can change it right away. Loop through the list of available fences
twice, ignoring any active fences on the first pass.
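
Written out naively, that policy is a two-pass scan; the diff below folds
both passes into a single loop by rotating busy fences to the tail and
using an ERR_PTR sentinel to notice when the scan has wrapped around to a
fence it has already seen. The equivalent two-pass form, purely for
illustration (the failure code is a placeholder):

/* Sketch: prefer an idle, unpinned fence; fall back to any unpinned one. */
static struct i915_fence_reg *fence_find_naive(struct i915_ggtt *ggtt)
{
	struct i915_fence_reg *fence;
	int pass;

	for (pass = 0; pass < 2; pass++) {
		list_for_each_entry(fence, &ggtt->fence_list, link) {
			if (atomic_read(&fence->pin_count))
				continue;

			/* First pass: skip fences still busy on the GPU. */
			if (pass == 0 && fence_is_active(fence))
				continue;

			return fence;
		}
	}

	return ERR_PTR(-EDEADLK); /* no fence available */
}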

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c | 22 ++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c 
b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
index 7fb36b12fe7a..a357bb431815 100644
--- a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
+++ b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
@@ -320,13 +320,31 @@ void i915_vma_revoke_fence(struct i915_vma *vma)
fence_write(fence);
 }
 
+static bool fence_is_active(const struct i915_fence_reg *fence)
+{
+   return fence->vma && i915_vma_is_active(fence->vma);
+}
+
 static struct i915_fence_reg *fence_find(struct i915_ggtt *ggtt)
 {
-   struct i915_fence_reg *fence;
+   struct i915_fence_reg *active = NULL;
+   struct i915_fence_reg *fence, *fn;
 
-   list_for_each_entry(fence, &ggtt->fence_list, link) {
+   list_for_each_entry_safe(fence, fn, &ggtt->fence_list, link) {
GEM_BUG_ON(fence->vma && fence->vma->fence != fence);
 
+   if (fence == active) /* now seen this fence twice */
+   active = ERR_PTR(-EAGAIN);
+
+   /* Prefer idle fences so we do not have to wait on the GPU */
+   if (active != ERR_PTR(-EAGAIN) && fence_is_active(fence)) {
+   if (!active)
+   active = fence;
+
+   list_move_tail(&fence->link, &ggtt->fence_list);
+   continue;
+   }
+
if (atomic_read(&fence->pin_count))
continue;
 
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 02/69] drm/i915/uc: Squelch load failure error message

2020-12-14 Thread Chris Wilson
The caller determines if the failure is an error or not, so avoid
warning when we will try again and succeed. For example,

<7> [111.319321] [drm:intel_guc_fw_upload [i915]] GuC status 0x20
<3> [111.319340] i915 :00:02.0: [drm] *ERROR* GuC load failed: status = 0x0020
<3> [111.319606] i915 :00:02.0: [drm] *ERROR* GuC load failed: status: Reset = 0, BootROM = 0x10, UKernel = 0x00, MIA = 0x00, Auth = 0x00
<7> [111.320045] [drm:__uc_init_hw [i915]] GuC fw load failed: -110; will reset and retry 2 more time(s)
<7> [111.322978] [drm:intel_guc_fw_upload [i915]] GuC status 0x8002f0ec

should not have been reported as a _test_ failure, as the GuC was
successfully loaded on the second attempt and the system remained
operational.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2797
Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_fw.c | 13 ++---
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_fw.c 
b/drivers/gpu/drm/i915/gt/uc/intel_guc_fw.c
index f9d0907ea1a5..2270d6b3b272 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_fw.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_fw.c
@@ -76,7 +76,6 @@ static inline bool guc_ready(struct intel_uncore *uncore, u32 *status)
 
 static int guc_wait_ucode(struct intel_uncore *uncore)
 {
-   struct drm_device *drm = &uncore->i915->drm;
u32 status;
int ret;
 
@@ -89,11 +88,11 @@ static int guc_wait_ucode(struct intel_uncore *uncore)
 * attempt the ucode load again if this happens.)
 */
ret = wait_for(guc_ready(uncore, &status), 100);
-   DRM_DEBUG_DRIVER("GuC status %#x\n", status);
-
if (ret) {
-   drm_err(drm, "GuC load failed: status = 0x%08X\n", status);
-   drm_err(drm, "GuC load failed: status: Reset = %d, "
+   struct drm_device *drm = &uncore->i915->drm;
+
+   drm_dbg(drm, "GuC load failed: status = 0x%08X\n", status);
+   drm_dbg(drm, "GuC load failed: status: Reset = %d, "
"BootROM = 0x%02X, UKernel = 0x%02X, "
"MIA = 0x%02X, Auth = 0x%02X\n",
REG_FIELD_GET(GS_MIA_IN_RESET, status),
@@ -103,12 +102,12 @@ static int guc_wait_ucode(struct intel_uncore *uncore)
REG_FIELD_GET(GS_AUTH_STATUS_MASK, status));
 
if ((status & GS_BOOTROM_MASK) == GS_BOOTROM_RSA_FAILED) {
-   drm_err(drm, "GuC firmware signature verification failed\n");
+   drm_dbg(drm, "GuC firmware signature verification failed\n");
ret = -ENOEXEC;
}
 
if ((status & GS_UKERNEL_MASK) == GS_UKERNEL_EXCEPTION) {
-   drm_err(drm, "GuC firmware exception. EIP: %#x\n",
+   drm_dbg(drm, "GuC firmware exception. EIP: %#x\n",
intel_uncore_read(uncore, SOFT_SCRATCH(13)));
ret = -ENXIO;
}
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 07/69] drm/i915/gt: Defer schedule_out until after the next dequeue

2020-12-14 Thread Chris Wilson
Inside schedule_out, we do extra work upon idling the context, such as
updating the runtime, kicking off retires, kicking virtual engines.
However, if we are processing a series of single requests per
context, we may find ourselves scheduling out the context, only to
immediately schedule it back in during dequeue. This is just extra work
that we can avoid if we keep the context marked as inflight across the
dequeue. This becomes more significant later on for minimising virtual
engine misses.
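
The inflight tracking underneath this relies on pointer tagging: engine
pointers are at least 8-byte aligned, so the low bits are free to carry a
small counter, and widening the mask from 2 to 3 bits (as the hunk below
does) merely makes room for a deeper count in the same word. A generic,
self-contained sketch of the trick, not the i915 macros themselves:

/* Sketch: pack a small counter into the low bits of an aligned pointer. */
#define TAG_BITS	3
#define TAG_MASK	((1UL << TAG_BITS) - 1)

static inline void *tag_ptr(void *ptr, unsigned long count)
{
	/* ptr must be at least (1 << TAG_BITS)-byte aligned. */
	return (void *)((unsigned long)ptr | (count & TAG_MASK));
}

static inline void *tag_to_ptr(void *tagged)
{
	return (void *)((unsigned long)tagged & ~TAG_MASK);
}

static inline unsigned long tag_to_count(void *tagged)
{
	return (unsigned long)tagged & TAG_MASK;
}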

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_context_types.h |   8 +-
 .../drm/i915/gt/intel_execlists_submission.c  | 121 +++---
 2 files changed, 84 insertions(+), 45 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h 
b/drivers/gpu/drm/i915/gt/intel_context_types.h
index 52fa9c132746..f7a0fb6f3a2e 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -58,8 +58,12 @@ struct intel_context {
 
struct intel_engine_cs *engine;
struct intel_engine_cs *inflight;
-#define intel_context_inflight(ce) ptr_mask_bits(READ_ONCE((ce)->inflight), 2)
-#define intel_context_inflight_count(ce) 
ptr_unmask_bits(READ_ONCE((ce)->inflight), 2)
+#define __intel_context_inflight(engine) ptr_mask_bits(engine, 3)
+#define __intel_context_inflight_count(engine) ptr_unmask_bits(engine, 3)
+#define intel_context_inflight(ce) \
+   __intel_context_inflight(READ_ONCE((ce)->inflight))
+#define intel_context_inflight_count(ce) \
+   __intel_context_inflight_count(READ_ONCE((ce)->inflight))
 
struct i915_address_space *vm;
struct i915_gem_context __rcu *gem_context;
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 174c3f5f2e81..d278c4445496 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -2017,19 +2017,6 @@ static void set_preempt_timeout(struct intel_engine_cs *engine,
 active_preempt_timeout(engine, rq));
 }
 
-static inline void clear_ports(struct i915_request **ports, int count)
-{
-   memset_p((void **)ports, NULL, count);
-}
-
-static inline void
-copy_ports(struct i915_request **dst, struct i915_request **src, int count)
-{
-   /* A memcpy_p() would be very useful here! */
-   while (count--)
-   WRITE_ONCE(*dst++, *src++); /* avoid write tearing */
-}
-
 static void execlists_dequeue(struct intel_engine_cs *engine)
 {
struct intel_engine_execlists * const execlists = &engine->execlists;
@@ -2378,18 +2365,32 @@ static void execlists_dequeue_irq(struct intel_engine_cs *engine)
local_irq_enable(); /* flush irq_work (e.g. breadcrumb enabling) */
 }
 
-static void
-cancel_port_requests(struct intel_engine_execlists * const execlists)
+static inline void clear_ports(struct i915_request **ports, int count)
+{
+   memset_p((void **)ports, NULL, count);
+}
+
+static inline void
+copy_ports(struct i915_request **dst, struct i915_request **src, int count)
+{
+   /* A memcpy_p() would be very useful here! */
+   while (count--)
+   WRITE_ONCE(*dst++, *src++); /* avoid write tearing */
+}
+
+static struct i915_request **
+cancel_port_requests(struct intel_engine_execlists * const execlists,
+struct i915_request **inactive)
 {
struct i915_request * const *port;
 
for (port = execlists->pending; *port; port++)
-   execlists_schedule_out(*port);
+   *inactive++ = *port;
clear_ports(execlists->pending, ARRAY_SIZE(execlists->pending));
 
/* Mark the end of active before we overwrite *active */
for (port = xchg(&execlists->active, execlists->pending); *port; port++)
-   execlists_schedule_out(*port);
+   *inactive++ = *port;
clear_ports(execlists->inflight, ARRAY_SIZE(execlists->inflight));
 
smp_wmb(); /* complete the seqlock for execlists_active() */
@@ -2399,6 +2400,8 @@ cancel_port_requests(struct intel_engine_execlists * const execlists)
GEM_BUG_ON(execlists->pending[0]);
cancel_timer(&execlists->timer);
cancel_timer(&execlists->preempt);
+
+   return inactive;
 }
 
 static inline void
@@ -2526,7 +2529,8 @@ csb_read(const struct intel_engine_cs *engine, u64 * const csb)
return entry;
 }
 
-static void process_csb(struct intel_engine_cs *engine)
+static struct i915_request **
+process_csb(struct intel_engine_cs *engine, struct i915_request **inactive)
 {
struct intel_engine_execlists * const execlists = &engine->execlists;
u64 * const buf = execlists->csb_status;
@@ -2555,7 +2559,7 @@ static void process_csb(struct intel_engine_cs *engine)
head = execlists->csb_head;
tail = READ_ONCE(*execlists->csb_write);
if (unlikely(head == tail))
-   return;
+

[Intel-gfx] [PATCH 21/69] drm/i915/gt: Use ppHWSP for unshared non-semaphore related timelines

2020-12-14 Thread Chris Wilson
When we are not using semaphores with a context/engine, we can simply
reuse the same seqno location across wraps, but we still require each
timeline to have its own address. For LRC submission, each context is
prefixed by a per-process HWSP, which provides us with a unique location
for each context-local timeline. A shared timeline that is common to
multiple contexts will continue to use a separate page.

This enables us to create position invariant contexts should we feel the
need to relocate them.
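
In effect, each context's seqno slot becomes an offset within its own
ppHWSP instead of a slot in a shared global page. A compressed sketch of
the choice, mirroring the hunks below; the helper name is invented, the
flags are the patch's own:

/* Sketch: deciding where a context's seqno lives. */
static struct intel_timeline *
timeline_for(struct intel_context *ce, struct i915_vma *state)
{
	if (intel_engine_has_semaphores(ce->engine))
		/* Semaphores need a globally addressable page. */
		return intel_timeline_create(ce->engine->gt);

	/* Otherwise reuse the per-process HWSP at the head of the
	 * context image; the CS resolves the address relative to the
	 * context, so the vma may move freely. */
	return __intel_timeline_create(ce->engine->gt, state,
				       I915_GEM_HWS_SEQNO_ADDR |
				       INTEL_TIMELINE_RELATIVE_CONTEXT);
}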

Signed-off-by: Chris Wilson 
Cc: Joonas Lahtinen 
Reviewed-by: Matthew Brost 
---
 .../drm/i915/gt/intel_execlists_submission.c  | 37 +++
 1 file changed, 22 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index c5b013cc10b3..974cca0cfe76 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -4749,6 +4749,14 @@ static struct intel_timeline *pinned_timeline(struct intel_context *ce)
 page_unmask_bits(tl));
 }
 
+static struct intel_timeline *pphwsp_timeline(struct intel_context *ce,
+ struct i915_vma *state)
+{
+   return __intel_timeline_create(ce->engine->gt, state,
+  I915_GEM_HWS_SEQNO_ADDR |
+  INTEL_TIMELINE_RELATIVE_CONTEXT);
+}
+
 static int __execlists_context_alloc(struct intel_context *ce,
 struct intel_engine_cs *engine)
 {
@@ -4779,6 +4787,16 @@ static int __execlists_context_alloc(struct intel_context *ce,
goto error_deref_obj;
}
 
+   ring = intel_engine_create_ring(engine, (unsigned long)ce->ring);
+   if (IS_ERR(ring)) {
+   ret = PTR_ERR(ring);
+   goto error_deref_obj;
+   }
+
+   ret = populate_lr_context(ce, ctx_obj, engine, ring);
+   if (ret)
+   goto error_ring_free;
+
if (!page_mask_bits(ce->timeline)) {
struct intel_timeline *tl;
 
@@ -4788,29 +4806,18 @@ static int __execlists_context_alloc(struct intel_context *ce,
 */
if (unlikely(ce->timeline))
tl = pinned_timeline(ce);
-   else
+   else if (intel_engine_has_semaphores(engine))
tl = intel_timeline_create(engine->gt);
+   else
+   tl = pphwsp_timeline(ce, vma);
if (IS_ERR(tl)) {
ret = PTR_ERR(tl);
-   goto error_deref_obj;
+   goto error_ring_free;
}
 
ce->timeline = tl;
}
 
-   ring = intel_engine_create_ring(engine, (unsigned long)ce->ring);
-   if (IS_ERR(ring)) {
-   ret = PTR_ERR(ring);
-   goto error_deref_obj;
-   }
-
-   ret = populate_lr_context(ce, ctx_obj, engine, ring);
-   if (ret) {
-   drm_dbg(&engine->i915->drm,
-   "Failed to populate LRC: %d\n", ret);
-   goto error_ring_free;
-   }
-
ce->ring = ring;
ce->state = vma;
 
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 11/69] drm/i915/gt: Simplify virtual engine handling for execlists_hold()

2020-12-14 Thread Chris Wilson
Now that the tasklet completely controls scheduling of the requests, and
we postpone scheduling out the old requests, we can keep a hanging
virtual request bound to the engine on which it hung, and remove it from
the queue. On release, it will be returned to the same engine and remain
in its queue until it is scheduled, after which point it will become
eligible for transfer to a sibling. Instead, we could opt to resubmit the
request along the virtual engine on unhold, making it eligible for load
balancing immediately -- but that seems like a pointless optimisation
for a hanging context.

Signed-off-by: Chris Wilson 
---
 .../drm/i915/gt/intel_execlists_submission.c  | 29 ---
 1 file changed, 29 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 26d704694c33..f021e0f4b24b 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -2833,35 +2833,6 @@ static bool execlists_hold(struct intel_engine_cs *engine,
goto unlock;
}
 
-   if (rq->engine != engine) { /* preempted virtual engine */
-   struct virtual_engine *ve = to_virtual_engine(rq->engine);
-
-   /*
-* intel_context_inflight() is only protected by virtue
-* of process_csb() being called only by the tasklet (or
-* directly from inside reset while the tasklet is suspended).
-* Assert that neither of those are allowed to run while we
-* poke at the request queues.
-*/
-   GEM_BUG_ON(!reset_in_progress(&engine->execlists));
-
-   /*
-* An unsubmitted request along a virtual engine will
-* remain on the active (this) engine until we are able
-* to process the context switch away (and so mark the
-* context as no longer in flight). That cannot have happened
-* yet, otherwise we would not be hanging!
-*/
-   spin_lock(&ve->base.active.lock);
-   GEM_BUG_ON(intel_context_inflight(rq->context) != engine);
-   GEM_BUG_ON(ve->request != rq);
-   ve->request = NULL;
-   spin_unlock(&ve->base.active.lock);
-   i915_request_put(rq);
-
-   rq->engine = engine;
-   }
-
/*
 * Transfer this request onto the hold queue to prevent it
 * being resumbitted to HW (and potentially completed) before we have
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 35/69] drm/i915: Strip out internal priorities

2020-12-14 Thread Chris Wilson
Since we are not using any internal priority levels, and in the next few
patches will introduce a new index for which the optimisation is not so
clear cut, discard the small table within the priolist.
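
For context, the table being discarded packed a handful of internal
sub-levels beneath each user priority: the user value occupied the high
bits of the key, and a per-node bitmask selected among several request
lists. A reconstruction of the old decode from the removed hunks below:

/* Sketch: the two-level priority encoding this series retires. */
static int old_queue_prio(const struct i915_priolist *p)
{
	/* Recover the effective priority from the inverted encoding:
	 * user bits in the upper part, minus the index of the lowest
	 * occupied internal level (ffs() is 1-based). */
	return ((p->priority + 1) << I915_USER_PRIORITY_SHIFT) - ffs(p->used);
}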

Signed-off-by: Chris Wilson 
---
 .../gpu/drm/i915/gt/intel_engine_heartbeat.c  |  2 +-
 .../drm/i915/gt/intel_execlists_submission.c  | 22 ++--
 drivers/gpu/drm/i915/gt/selftest_execlists.c  |  2 -
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  6 +--
 drivers/gpu/drm/i915/i915_priolist_types.h|  8 +--
 drivers/gpu/drm/i915/i915_scheduler.c | 51 +++
 drivers/gpu/drm/i915/i915_scheduler.h | 16 ++
 7 files changed, 20 insertions(+), 87 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c 
b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
index d7be2b9339f9..1732a42e9075 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
@@ -125,7 +125,7 @@ static void heartbeat(struct work_struct *wrk)
 * low latency and no jitter] the chance to naturally
 * complete before being preempted.
 */
-   attr.priority = I915_PRIORITY_MASK;
+   attr.priority = 0;
if (rq->sched.attr.priority >= attr.priority)
attr.priority |= I915_USER_PRIORITY(I915_PRIORITY_HEARTBEAT);
if (rq->sched.attr.priority >= attr.priority)
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 5380ecd62cbe..201700fe3483 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -408,22 +408,13 @@ static int effective_prio(const struct i915_request *rq)
 
 static int queue_prio(const struct intel_engine_execlists *execlists)
 {
-   struct i915_priolist *p;
struct rb_node *rb;
 
rb = rb_first_cached(&execlists->queue);
if (!rb)
return INT_MIN;
 
-   /*
-* As the priolist[] are inverted, with the highest priority in [0],
-* we have to flip the index value to become priority.
-*/
-   p = to_priolist(rb);
-   if (!I915_USER_PRIORITY_SHIFT)
-   return p->priority;
-
-   return ((p->priority + 1) << I915_USER_PRIORITY_SHIFT) - ffs(p->used);
+   return to_priolist(rb)->priority;
 }
 
 static int virtual_prio(const struct intel_engine_execlists *el)
@@ -2240,9 +2231,8 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
while ((rb = rb_first_cached(&execlists->queue))) {
struct i915_priolist *p = to_priolist(rb);
struct i915_request *rq, *rn;
-   int i;
 
-   priolist_for_each_request_consume(rq, rn, p, i) {
+   priolist_for_each_request_consume(rq, rn, p) {
bool merge = true;
 
/*
@@ -4309,9 +4299,8 @@ static void execlists_reset_cancel(struct intel_engine_cs *engine)
/* Flush the queued requests to the timeline list (for retiring). */
while ((rb = rb_first_cached(&execlists->queue))) {
struct i915_priolist *p = to_priolist(rb);
-   int i;
 
-   priolist_for_each_request_consume(rq, rn, p, i) {
+   priolist_for_each_request_consume(rq, rn, p) {
mark_eio(rq);
__i915_request_submit(rq);
}
@@ -4800,7 +4789,7 @@ static int __execlists_context_alloc(struct intel_context *ce,
 
 static struct list_head *virtual_queue(struct virtual_engine *ve)
 {
-   return &ve->base.execlists.default_priolist.requests[0];
+   return &ve->base.execlists.default_priolist.requests;
 }
 
 static void rcu_virtual_context_destroy(struct work_struct *wrk)
@@ -5389,9 +5378,8 @@ void intel_execlists_show_requests(struct intel_engine_cs *engine,
count = 0;
for (rb = rb_first_cached(&execlists->queue); rb; rb = rb_next(rb)) {
struct i915_priolist *p = rb_entry(rb, typeof(*p), node);
-   int i;
 
-   priolist_for_each_request(rq, p, i) {
+   priolist_for_each_request(rq, p) {
if (count++ < max - 1)
show_request(m, rq, "\t\t", 0);
else
diff --git a/drivers/gpu/drm/i915/gt/selftest_execlists.c 
b/drivers/gpu/drm/i915/gt/selftest_execlists.c
index fbbd8343d7f6..16921b82b96d 100644
--- a/drivers/gpu/drm/i915/gt/selftest_execlists.c
+++ b/drivers/gpu/drm/i915/gt/selftest_execlists.c
@@ -1102,7 +1102,6 @@ create_rewinder(struct intel_context *ce,
 
intel_ring_advance(rq, cs);
 
-   rq->sched.attr.priority = I915_PRIORITY_MASK;
err = 0;
 err:
i915_request_get(rq);
@@ -5371,7 +5370,6 @@ create_timesta

[Intel-gfx] [PATCH 36/69] drm/i915: Remove I915_USER_PRIORITY_SHIFT

2020-12-14 Thread Chris Wilson
As we do not have any internal priority levels, the priority can be set
directly from the user values.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/display/intel_display.c  |  4 +-
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  6 +--
 .../i915/gem/selftests/i915_gem_object_blt.c  |  4 +-
 .../gpu/drm/i915/gt/intel_engine_heartbeat.c  | 10 ++---
 drivers/gpu/drm/i915/gt/selftest_execlists.c  | 44 +++
 drivers/gpu/drm/i915/i915_priolist_types.h|  3 --
 drivers/gpu/drm/i915/i915_scheduler.c |  1 -
 7 files changed, 24 insertions(+), 48 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c 
b/drivers/gpu/drm/i915/display/intel_display.c
index 761be8deaa9b..515923b54ad7 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -16661,9 +16661,7 @@ static void intel_plane_unpin_fb(struct intel_plane_state *old_plane_state)
 
 static void fb_obj_bump_render_priority(struct drm_i915_gem_object *obj)
 {
-   struct i915_sched_attr attr = {
-   .priority = I915_USER_PRIORITY(I915_PRIORITY_DISPLAY),
-   };
+   struct i915_sched_attr attr = { .priority = I915_PRIORITY_DISPLAY };
 
i915_gem_object_wait_priority(obj, 0, &attr);
 }
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c 
b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 8c5514574e8b..b1a87c0c7daf 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -675,7 +675,7 @@ __create_context(struct drm_i915_private *i915)
 
kref_init(&ctx->ref);
ctx->i915 = i915;
-   ctx->sched.priority = I915_USER_PRIORITY(I915_PRIORITY_NORMAL);
+   ctx->sched.priority = I915_PRIORITY_NORMAL;
mutex_init(&ctx->mutex);
INIT_LIST_HEAD(&ctx->link);
 
@@ -1955,7 +1955,7 @@ static int set_priority(struct i915_gem_context *ctx,
!capable(CAP_SYS_NICE))
return -EPERM;
 
-   ctx->sched.priority = I915_USER_PRIORITY(priority);
+   ctx->sched.priority = priority;
context_apply_all(ctx, __apply_priority, ctx);
 
return 0;
@@ -2459,7 +2459,7 @@ int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
 
case I915_CONTEXT_PARAM_PRIORITY:
args->size = 0;
-   args->value = ctx->sched.priority >> I915_USER_PRIORITY_SHIFT;
+   args->value = ctx->sched.priority;
break;
 
case I915_CONTEXT_PARAM_SSEU:
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c 
b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c
index 23b6e11bbc3e..c4c04fb97d14 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object_blt.c
@@ -220,7 +220,7 @@ static int igt_fill_blt_thread(void *arg)
return PTR_ERR(ctx);
 
prio = i915_prandom_u32_max_state(I915_PRIORITY_MAX, prng);
-   ctx->sched.priority = I915_USER_PRIORITY(prio);
+   ctx->sched.priority = prio;
}
 
ce = i915_gem_context_get_engine(ctx, 0);
@@ -338,7 +338,7 @@ static int igt_copy_blt_thread(void *arg)
return PTR_ERR(ctx);
 
prio = i915_prandom_u32_max_state(I915_PRIORITY_MAX, prng);
-   ctx->sched.priority = I915_USER_PRIORITY(prio);
+   ctx->sched.priority = prio;
}
 
ce = i915_gem_context_get_engine(ctx, 0);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c 
b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
index 1732a42e9075..ed03c08737f5 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
@@ -81,9 +81,7 @@ static void show_heartbeat(const struct i915_request *rq,
 
 static void heartbeat(struct work_struct *wrk)
 {
-   struct i915_sched_attr attr = {
-   .priority = I915_USER_PRIORITY(I915_PRIORITY_MIN),
-   };
+   struct i915_sched_attr attr = { .priority = I915_PRIORITY_MIN };
struct intel_engine_cs *engine =
container_of(wrk, typeof(*engine), heartbeat.work.work);
struct intel_context *ce = engine->kernel_context;
@@ -127,7 +125,7 @@ static void heartbeat(struct work_struct *wrk)
 */
attr.priority = 0;
if (rq->sched.attr.priority >= attr.priority)
-   attr.priority |= I915_USER_PRIORITY(I915_PRIORITY_HEARTBEAT);
+   attr.priority = I915_PRIORITY_HEARTBEAT;
if (rq->sched.attr.priority >= attr.priority)
attr.priority = I915_PRIORITY_BARRIER;
 
@@ -285,9 +283,7 @@ int intel_engine_pulse(struct intel_engine_cs *engine)
 
 int intel_engine_flush_barriers(struct intel_engine_cs *engine)
 {
-   struct i915_sched_attr attr = {
- 

[Intel-gfx] [PATCH 25/69] drm/i915/gem: Optimistically prune dma-resv from the shrinker.

2020-12-14 Thread Chris Wilson
As we shrink an object, also see if we can prune the dma-resv of idle
fences it is maintaining a reference to.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/Makefile|  1 +
 drivers/gpu/drm/i915/dma_resv_utils.c| 17 +
 drivers/gpu/drm/i915/dma_resv_utils.h| 13 +
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c |  3 +++
 drivers/gpu/drm/i915/gem/i915_gem_wait.c |  8 +++-
 5 files changed, 37 insertions(+), 5 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/dma_resv_utils.c
 create mode 100644 drivers/gpu/drm/i915/dma_resv_utils.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index f9ef5199b124..f319311be93e 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -58,6 +58,7 @@ i915-y += i915_drv.o \
 
 # core library code
 i915-y += \
+   dma_resv_utils.o \
i915_memcpy.o \
i915_mm.o \
i915_sw_fence.o \
diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c 
b/drivers/gpu/drm/i915/dma_resv_utils.c
new file mode 100644
index ..9e508e7d4629
--- /dev/null
+++ b/drivers/gpu/drm/i915/dma_resv_utils.c
@@ -0,0 +1,17 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2020 Intel Corporation
+ */
+
+#include 
+
+#include "dma_resv_utils.h"
+
+void dma_resv_prune(struct dma_resv *resv)
+{
+   if (dma_resv_trylock(resv)) {
+   if (dma_resv_test_signaled_rcu(resv, true))
+   dma_resv_add_excl_fence(resv, NULL);
+   dma_resv_unlock(resv);
+   }
+}
diff --git a/drivers/gpu/drm/i915/dma_resv_utils.h 
b/drivers/gpu/drm/i915/dma_resv_utils.h
new file mode 100644
index ..b9d8fb5f8367
--- /dev/null
+++ b/drivers/gpu/drm/i915/dma_resv_utils.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2020 Intel Corporation
+ */
+
+#ifndef DMA_RESV_UTILS_H
+#define DMA_RESV_UTILS_H
+
+struct dma_resv;
+
+void dma_resv_prune(struct dma_resv *resv);
+
+#endif /* DMA_RESV_UTILS_H */
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c 
b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
index dc8f052a0ffe..c2dba1cd9532 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
@@ -15,6 +15,7 @@
 
 #include "gt/intel_gt_requests.h"
 
+#include "dma_resv_utils.h"
 #include "i915_trace.h"
 
 static bool swap_available(void)
@@ -209,6 +210,8 @@ i915_gem_shrink(struct drm_i915_private *i915,
mutex_unlock(&obj->mm.lock);
}
 
+   dma_resv_prune(obj->base.resv);
+
scanned += obj->base.size >> PAGE_SHIFT;
i915_gem_object_put(obj);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c 
b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
index 8af55cd3e690..c1b13ac50d0f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
@@ -9,6 +9,7 @@
 
 #include "gt/intel_engine.h"
 
+#include "dma_resv_utils.h"
 #include "i915_gem_ioctls.h"
 #include "i915_gem_object.h"
 
@@ -84,11 +85,8 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
 * Opportunistically prune the fences iff we know they have *all* been
 * signaled.
 */
-   if (prune_fences && dma_resv_trylock(resv)) {
-   if (dma_resv_test_signaled_rcu(resv, true))
-   dma_resv_add_excl_fence(resv, NULL);
-   dma_resv_unlock(resv);
-   }
+   if (prune_fences)
+   dma_resv_prune(resv);
 
return timeout;
 }
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 39/69] drm/i915: Replace engine->schedule() with a known request operation

2020-12-14 Thread Chris Wilson
Looking to the future, we want to set the scheduling attributes
explicitly and so replace the generic engine->schedule() with the more
direct i915_request_set_priority()

What it loses in removing the 'schedule' name from the function, it
gains in having an explicit entry point with a stated goal.
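
The new entry point makes the common pattern a one-liner at each call
site; for example, the shape used by the display path in the diff below:

	local_bh_disable();
	i915_request_set_priority(to_request(fence), I915_PRIORITY_DISPLAY);
	local_bh_enable(); /* kick the tasklets if queues were reprioritised */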

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/display/intel_display.c  |  9 +
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c|  1 -
 drivers/gpu/drm/i915/gem/i915_gem_object.h|  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_wait.c  | 27 +--
 drivers/gpu/drm/i915/gt/intel_engine_cs.c |  3 --
 .../gpu/drm/i915/gt/intel_engine_heartbeat.c  |  4 +--
 drivers/gpu/drm/i915/gt/intel_engine_types.h  | 29 
 drivers/gpu/drm/i915/gt/intel_engine_user.c   |  2 +-
 .../drm/i915/gt/intel_execlists_submission.c  |  3 +-
 drivers/gpu/drm/i915/gt/selftest_execlists.c  | 33 +--
 drivers/gpu/drm/i915/gt/selftest_hangcheck.c  | 11 +++
 drivers/gpu/drm/i915/gt/selftest_timeline.c   |  2 +-
 drivers/gpu/drm/i915/i915_request.c   | 10 +++---
 drivers/gpu/drm/i915/i915_scheduler.c | 15 +
 drivers/gpu/drm/i915/i915_scheduler.h |  3 +-
 15 files changed, 57 insertions(+), 97 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c 
b/drivers/gpu/drm/i915/display/intel_display.c
index 515923b54ad7..04d73ad3e916 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -16659,13 +16659,6 @@ static void intel_plane_unpin_fb(struct intel_plane_state *old_plane_state)
intel_unpin_fb_vma(vma, old_plane_state->flags);
 }
 
-static void fb_obj_bump_render_priority(struct drm_i915_gem_object *obj)
-{
-   struct i915_sched_attr attr = { .priority = I915_PRIORITY_DISPLAY };
-
-   i915_gem_object_wait_priority(obj, 0, &attr);
-}
-
 /**
  * intel_prepare_plane_fb - Prepare fb for usage on plane
  * @_plane: drm plane to prepare for
@@ -16742,7 +16735,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
if (ret)
return ret;
 
-   fb_obj_bump_render_priority(obj);
+   i915_gem_object_wait_priority(obj, 0, I915_PRIORITY_DISPLAY);
i915_gem_object_flush_frontbuffer(obj, ORIGIN_DIRTYFB);
 
if (!new_plane_state->uapi.fence) { /* implicit fencing */
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c 
b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 193996144c84..b21c04a3a3bf 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -2650,7 +2650,6 @@ static struct i915_request *eb_pin_engine(struct i915_execbuffer *eb, bool throt
int err;
 
GEM_BUG_ON(eb->args->flags & __EXEC_ENGINE_PINNED);
-
if (unlikely(intel_context_is_banned(ce)))
return ERR_PTR(-EIO);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h 
b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index be14486f63a7..b106bc81c303 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -517,7 +517,7 @@ int i915_gem_object_wait(struct drm_i915_gem_object *obj,
 long timeout);
 int i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
  unsigned int flags,
- const struct i915_sched_attr *attr);
+ int prio);
 
 void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
 enum fb_op_origin origin);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c 
b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
index c1b13ac50d0f..a5d7efe67021 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
@@ -91,28 +91,17 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
return timeout;
 }
 
-static void __fence_set_priority(struct dma_fence *fence,
-const struct i915_sched_attr *attr)
+static void __fence_set_priority(struct dma_fence *fence, int prio)
 {
-   struct i915_request *rq;
-   struct intel_engine_cs *engine;
-
if (dma_fence_is_signaled(fence) || !dma_fence_is_i915(fence))
return;
 
-   rq = to_request(fence);
-   engine = rq->engine;
-
local_bh_disable();
-   rcu_read_lock(); /* RCU serialisation for set-wedged protection */
-   if (engine->schedule)
-   engine->schedule(rq, attr);
-   rcu_read_unlock();
+   i915_request_set_priority(to_request(fence), prio);
local_bh_enable(); /* kick the tasklets if queues were reprioritised */
 }
 
-static void fence_set_priority(struct dma_fence *fence,
-  const struct i915_sched_attr *attr)
+static void fence_set_priority(struct dma_fence *fence, int prio)
 {
/* Recurse once into a fen

[Intel-gfx] [PATCH 20/69] drm/i915/selftests: Exercise relative timeline modes

2020-12-14 Thread Chris Wilson
A quick test to verify that the backend accepts each type of timeline
and can use them to track and control request emission.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/selftest_timeline.c | 96 +
 1 file changed, 96 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/selftest_timeline.c 
b/drivers/gpu/drm/i915/gt/selftest_timeline.c
index 6f355c8a4f81..aafefdfe912a 100644
--- a/drivers/gpu/drm/i915/gt/selftest_timeline.c
+++ b/drivers/gpu/drm/i915/gt/selftest_timeline.c
@@ -1364,9 +1364,105 @@ static int live_hwsp_recycle(void *arg)
return err;
 }
 
+static int live_hwsp_relative(void *arg)
+{
+   struct intel_gt *gt = arg;
+   struct intel_engine_cs *engine;
+   enum intel_engine_id id;
+
+   /*
+* Check backend support for different timeline modes.
+*/
+
+   for_each_engine(engine, gt, id) {
+   enum intel_timeline_mode mode;
+
+   if (!engine->schedule)
+   continue;
+
+   for (mode = INTEL_TIMELINE_ABSOLUTE;
+mode <= INTEL_TIMELINE_RELATIVE_ENGINE;
+mode++) {
+   struct intel_timeline *tl;
+   struct i915_request *rq;
+   struct intel_context *ce;
+   const char *msg;
+   int err;
+
+   if (mode == INTEL_TIMELINE_RELATIVE_CONTEXT &&
+   !HAS_EXECLISTS(gt->i915))
+   continue;
+
+   ce = intel_context_create(engine);
+   if (IS_ERR(ce))
+   return PTR_ERR(ce);
+
+   err = intel_context_alloc_state(ce);
+   if (err) {
+   intel_context_put(ce);
+   return err;
+   }
+
+   switch (mode) {
+   case INTEL_TIMELINE_ABSOLUTE:
+   tl = intel_timeline_create(gt);
+   msg = "local";
+   break;
+
+   case INTEL_TIMELINE_RELATIVE_CONTEXT:
+   tl = __intel_timeline_create(gt,
+ce->state,
+     INTEL_TIMELINE_RELATIVE_CONTEXT |
+0x400);
+   msg = "ppHWSP";
+   break;
+
+   case INTEL_TIMELINE_RELATIVE_ENGINE:
+   tl = __intel_timeline_create(gt,
+     engine->status_page.vma,
+0x400);
+   msg = "HWSP";
+   break;
+   default:
+   continue;
+   }
+   if (IS_ERR(tl)) {
+   intel_context_put(ce);
+   return PTR_ERR(tl);
+   }
+
+   pr_info("Testing %s timeline on %s\n",
+   msg, engine->name);
+
+   intel_timeline_put(ce->timeline);
+   ce->timeline = tl;
+
+   rq = intel_context_create_request(ce);
+   intel_context_put(ce);
+   if (IS_ERR(rq))
+   return PTR_ERR(rq);
+
+   GEM_BUG_ON(rcu_access_pointer(rq->timeline) != tl);
+
+   i915_request_get(rq);
+   i915_request_add(rq);
+
+   if (i915_request_wait(rq, 0, HZ / 5) < 0) {
+   i915_request_put(rq);
+   return -EIO;
+   }
+
+   i915_request_put(rq);
+   }
+   }
+
+   return 0;
+}
+
 int intel_timeline_live_selftests(struct drm_i915_private *i915)
 {
static const struct i915_subtest tests[] = {
+   SUBTEST(live_hwsp_relative),
SUBTEST(live_hwsp_recycle),
SUBTEST(live_hwsp_engine),
SUBTEST(live_hwsp_alternate),
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [RFC PATCH] drm/i915: intel_detect_pch_virt() can be static

2020-12-14 Thread kernel test robot


Reported-by: kernel test robot 
Signed-off-by: kernel test robot 
---
 intel_pch.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/intel_pch.c b/drivers/gpu/drm/i915/intel_pch.c
index ca5989700ecf23..a3f84327535688 100644
--- a/drivers/gpu/drm/i915/intel_pch.c
+++ b/drivers/gpu/drm/i915/intel_pch.c
@@ -184,7 +184,7 @@ intel_virt_detect_pch(const struct drm_i915_private 
*dev_priv)
return id;
 }
 
-void intel_detect_pch_virt(struct drm_i915_private *dev_priv)
+static void intel_detect_pch_virt(struct drm_i915_private *dev_priv)
 {
unsigned short id;
enum intel_pch pch_type;
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 13/69] drm/i915/gem: Drop free_work for GEM contexts

2020-12-14 Thread Chris Wilson
The free_list and worker were introduced in commit 5f09a9c8ab6b ("drm/i915:
Allow contexts to be unreferenced locklessly"), but subsequently made
redundant by the removal of the last sleeping lock in commit 2935ed5339c4
("drm/i915: Remove logical HW ID"). As we can now free the GEM context
immediately from any context, remove the deferral of the free_list.

v2: Lift removing the context from the global list into close().

Suggested-by: Mika Kuoppala 
Signed-off-by: Chris Wilson 
Cc: Mika Kuoppala 
Cc: Tvrtko Ursulin 
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c   | 59 +++
 drivers/gpu/drm/i915/gem/i915_gem_context.h   |  1 -
 .../gpu/drm/i915/gem/i915_gem_context_types.h |  1 -
 drivers/gpu/drm/i915/i915_drv.h   |  3 -
 drivers/gpu/drm/i915/i915_gem.c   |  2 -
 .../gpu/drm/i915/selftests/mock_gem_device.c  |  2 -
 6 files changed, 8 insertions(+), 60 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c 
b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index ad136d009d9b..738a07b3583c 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -334,13 +334,12 @@ static struct i915_gem_engines *default_engines(struct i915_gem_context *ctx)
return e;
 }
 
-static void i915_gem_context_free(struct i915_gem_context *ctx)
+void i915_gem_context_release(struct kref *ref)
 {
-   GEM_BUG_ON(!i915_gem_context_is_closed(ctx));
+   struct i915_gem_context *ctx = container_of(ref, typeof(*ctx), ref);
 
-   spin_lock(&ctx->i915->gem.contexts.lock);
-   list_del(&ctx->link);
-   spin_unlock(&ctx->i915->gem.contexts.lock);
+   trace_i915_context_free(ctx);
+   GEM_BUG_ON(!i915_gem_context_is_closed(ctx));
 
mutex_destroy(&ctx->engines_mutex);
mutex_destroy(&ctx->lut_mutex);
@@ -354,37 +353,6 @@ static void i915_gem_context_free(struct i915_gem_context *ctx)
kfree_rcu(ctx, rcu);
 }
 
-static void contexts_free_all(struct llist_node *list)
-{
-   struct i915_gem_context *ctx, *cn;
-
-   llist_for_each_entry_safe(ctx, cn, list, free_link)
-   i915_gem_context_free(ctx);
-}
-
-static void contexts_flush_free(struct i915_gem_contexts *gc)
-{
-   contexts_free_all(llist_del_all(&gc->free_list));
-}
-
-static void contexts_free_worker(struct work_struct *work)
-{
-   struct i915_gem_contexts *gc =
-   container_of(work, typeof(*gc), free_work);
-
-   contexts_flush_free(gc);
-}
-
-void i915_gem_context_release(struct kref *ref)
-{
-   struct i915_gem_context *ctx = container_of(ref, typeof(*ctx), ref);
-   struct i915_gem_contexts *gc = &ctx->i915->gem.contexts;
-
-   trace_i915_context_free(ctx);
-   if (llist_add(&ctx->free_link, &gc->free_list))
-   schedule_work(&gc->free_work);
-}
-
 static inline struct i915_gem_engines *
 __context_engines_static(const struct i915_gem_context *ctx)
 {
@@ -633,6 +601,10 @@ static void context_close(struct i915_gem_context *ctx)
 */
lut_close(ctx);
 
+   spin_lock(&ctx->i915->gem.contexts.lock);
+   list_del(&ctx->link);
+   spin_unlock(&ctx->i915->gem.contexts.lock);
+
mutex_unlock(&ctx->mutex);
 
/*
@@ -850,9 +822,6 @@ i915_gem_create_context(struct drm_i915_private *i915, unsigned int flags)
!HAS_EXECLISTS(i915))
return ERR_PTR(-EINVAL);
 
-   /* Reap the stale contexts */
-   contexts_flush_free(&i915->gem.contexts);
-
ctx = __create_context(i915);
if (IS_ERR(ctx))
return ctx;
@@ -897,9 +866,6 @@ static void init_contexts(struct i915_gem_contexts *gc)
 {
spin_lock_init(&gc->lock);
INIT_LIST_HEAD(&gc->list);
-
-   INIT_WORK(&gc->free_work, contexts_free_worker);
-   init_llist_head(&gc->free_list);
 }
 
 void i915_gem_init__contexts(struct drm_i915_private *i915)
@@ -907,12 +873,6 @@ void i915_gem_init__contexts(struct drm_i915_private *i915)
init_contexts(&i915->gem.contexts);
 }
 
-void i915_gem_driver_release__contexts(struct drm_i915_private *i915)
-{
-   flush_work(&i915->gem.contexts.free_work);
-   rcu_barrier(); /* and flush the left over RCU frees */
-}
-
 static int gem_context_register(struct i915_gem_context *ctx,
struct drm_i915_file_private *fpriv,
u32 *id)
@@ -986,7 +946,6 @@ int i915_gem_context_open(struct drm_i915_private *i915,
 void i915_gem_context_close(struct drm_file *file)
 {
struct drm_i915_file_private *file_priv = file->driver_priv;
-   struct drm_i915_private *i915 = file_priv->dev_priv;
struct i915_address_space *vm;
struct i915_gem_context *ctx;
unsigned long idx;
@@ -998,8 +957,6 @@ void i915_gem_context_close(struct drm_file *file)
xa_for_each(&file_priv->vm_xa, idx, vm)
i915_vm_put(vm);
xa_destroy(&file_priv->vm_xa);
-
-   contexts_fl

Re: [Intel-gfx] [PATCH] drm/i915: Try to guess PCH type even without ISA bridge

2020-12-14 Thread kernel test robot
Hi Xiong,

I love your patch! Perhaps something to improve:

[auto build test WARNING on drm-intel/for-linux-next]
[also build test WARNING on drm-tip/drm-tip v5.10 next-20201211]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:
https://github.com/0day-ci/linux/commits/Xiong-Zhang/drm-i915-Try-to-guess-PCH-type-even-without-ISA-bridge/20201214-150157
base:   git://anongit.freedesktop.org/drm-intel for-linux-next
config: i386-randconfig-s002-20201214 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce:
# apt-get install sparse
# sparse version: v0.6.3-184-g1b896707-dirty
# https://github.com/0day-ci/linux/commit/718272991fa5c06a48629bce020ecfbdea006f96
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Xiong-Zhang/drm-i915-Try-to-guess-PCH-type-even-without-ISA-bridge/20201214-150157
git checkout 718272991fa5c06a48629bce020ecfbdea006f96
# save the attached .config to linux build tree
make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=i386 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 


"sparse warnings: (new ones prefixed by >>)"
>> drivers/gpu/drm/i915/intel_pch.c:187:6: sparse: sparse: symbol 'intel_detect_pch_virt' was not declared. Should it be static?

Please review and possibly fold the followup patch.

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org


.config.gz
Description: application/gzip
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [CI,1/3] drm/i915/pmu: Don't grab wakeref when enabling events

2020-12-14 Thread Patchwork
== Series Details ==

Series: series starting with [CI,1/3] drm/i915/pmu: Don't grab wakeref when 
enabling events
URL   : https://patchwork.freedesktop.org/series/84898/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_9478 -> Patchwork_19132


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/index.html

Known issues


  Here are the changes found in Patchwork_19132 that come from known issues:

### IGT changes ###

 Issues hit 

  * igt@gem_exec_suspend@basic-s3:
- fi-tgl-y:   [PASS][1] -> [DMESG-WARN][2] ([i915#2411] / 
[i915#402])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9478/fi-tgl-y/igt@gem_exec_susp...@basic-s3.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/fi-tgl-y/igt@gem_exec_susp...@basic-s3.html

  * igt@gem_mmap_gtt@basic:
- fi-tgl-y:   [PASS][3] -> [DMESG-WARN][4] ([i915#402]) +2 similar 
issues
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9478/fi-tgl-y/igt@gem_mmap_...@basic.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/fi-tgl-y/igt@gem_mmap_...@basic.html

  * igt@runner@aborted:
- fi-bdw-5557u:   NOTRUN -> [FAIL][5] ([i915#2029] / [i915#2722])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/fi-bdw-5557u/igt@run...@aborted.html

  
 Possible fixes 

  * igt@fbdev@read:
- fi-tgl-y:   [DMESG-WARN][6] ([i915#402]) -> [PASS][7] +1 similar 
issue
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9478/fi-tgl-y/igt@fb...@read.html
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/fi-tgl-y/igt@fb...@read.html

  
  [i915#2029]: https://gitlab.freedesktop.org/drm/intel/issues/2029
  [i915#2411]: https://gitlab.freedesktop.org/drm/intel/issues/2411
  [i915#2722]: https://gitlab.freedesktop.org/drm/intel/issues/2722
  [i915#402]: https://gitlab.freedesktop.org/drm/intel/issues/402


Participating hosts (42 -> 38)
--

  Missing(4): fi-blb-e6850 fi-hsw-4200u fi-bdw-samus fi-bsw-n3050 


Build changes
-

  * Linux: CI_DRM_9478 -> Patchwork_19132

  CI-20190529: 20190529
  CI_DRM_9478: 94cf3a4cc350324f21728c70954c46e535405c87 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5890: 0e209dc3cd7561a57ec45be74b8b299eaf391950 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_19132: d781b1a46f9cd54e6dd1c7ee0c47beafd4918dda @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

d781b1a46f9c drm/i915/pmu: Remove !CONFIG_PM code
e28201228e02 drm/i915/pmu: Use raw clock for rc6 estimation
5f62030b7d49 drm/i915/pmu: Don't grab wakeref when enabling events

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH 1/4] dma-buf: Remove kmap kerneldoc vestiges

2020-12-14 Thread Christian König

On 11.12.20 at 16:58, Daniel Vetter wrote:

Also try to clarify a bit when dma_buf_begin/end_cpu_access should
be called.

Signed-off-by: Daniel Vetter 
Cc: Thomas Zimmermann 
Cc: Sumit Semwal 
Cc: "Christian König" 
Cc: linux-me...@vger.kernel.org
Cc: linaro-mm-...@lists.linaro.org
---
  drivers/dma-buf/dma-buf.c | 20 ++--
  include/linux/dma-buf.h   | 25 +
  2 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index e63684d4cd90..a12fdffa130f 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1001,15 +1001,15 @@ EXPORT_SYMBOL_GPL(dma_buf_move_notify);
   *   vmalloc space might be limited and result in vmap calls failing.
   *
   *   Interfaces::
+ *
   *  void \*dma_buf_vmap(struct dma_buf \*dmabuf)
   *  void dma_buf_vunmap(struct dma_buf \*dmabuf, void \*vaddr)
   *
   *   The vmap call can fail if there is no vmap support in the exporter, or if
- *   it runs out of vmalloc space. Fallback to kmap should be implemented. Note
- *   that the dma-buf layer keeps a reference count for all vmap access and
- *   calls down into the exporter's vmap function only when no vmapping exists,
- *   and only unmaps it once. Protection against concurrent vmap/vunmap calls is
- *   provided by taking the dma_buf->lock mutex.
+ *   it runs out of vmalloc space. Note that the dma-buf layer keeps a reference
+ *   count for all vmap access and calls down into the exporter's vmap function
+ *   only when no vmapping exists, and only unmaps it once. Protection against
+ *   concurrent vmap/vunmap calls is provided by taking the &dma_buf.lock mutex.


Who is taking the lock? The caller of the dma_buf_vmap/vunmap()
functions, the functions themselves, or the callback inside the exporter?


Christian.


   *
   * - For full compatibility on the importer side with existing userspace
   *   interfaces, which might already support mmap'ing buffers. This is needed 
in
@@ -1098,6 +1098,11 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
   * dma_buf_end_cpu_access(). Only when cpu access is braketed by both calls is
   * it guaranteed to be coherent with other DMA access.
   *
+ * This function will also wait for any DMA transactions tracked through
+ * implicit synchronization in &dma_buf.resv. For DMA transactions with explicit
+ * synchronization this function will only ensure cache coherency, callers must
+ * ensure synchronization with such DMA transactions on their own.
+ *
   * Can return negative error values, returns 0 on success.
   */
  int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
@@ -1199,7 +1204,10 @@ EXPORT_SYMBOL_GPL(dma_buf_mmap);
   * This call may fail due to lack of virtual mapping address space.
   * These calls are optional in drivers. The intended use for them
   * is for mapping objects linear in kernel space for high use objects.
- * Please attempt to use kmap/kunmap before thinking about these interfaces.
+ *
+ * To ensure coherency users must call dma_buf_begin_cpu_access() and
+ * dma_buf_end_cpu_access() around any cpu access performed through this
+ * mapping.
   *
   * Returns 0 on success, or a negative errno code otherwise.
   */
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index cf72699cb2bc..7eca37c8b10c 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -183,24 +183,19 @@ struct dma_buf_ops {
 * @begin_cpu_access:
 *
 * This is called from dma_buf_begin_cpu_access() and allows the
-* exporter to ensure that the memory is actually available for cpu
-* access - the exporter might need to allocate or swap-in and pin the
-* backing storage. The exporter also needs to ensure that cpu access is
-* coherent for the access direction. The direction can be used by the
-* exporter to optimize the cache flushing, i.e. access with a different
+* exporter to ensure that the memory is actually coherent for cpu
+* access. The exporter also needs to ensure that cpu access is coherent
+* for the access direction. The direction can be used by the exporter
+* to optimize the cache flushing, i.e. access with a different
 * direction (read instead of write) might return stale or even bogus
 * data (e.g. when the exporter needs to copy the data to temporary
 * storage).
 *
-* This callback is optional.
+* Note that this is both called through the DMA_BUF_IOCTL_SYNC IOCTL
+* command for userspace mappings established through @mmap, and also
+* for kernel mappings established with @vmap.
 *
-* FIXME: This is both called through the DMA_BUF_IOCTL_SYNC command
-* from userspace (where storage shouldn't be pinned to avoid handing
-* de-factor mlock rights to userspace) and for the kernel-internal
-* users of the various kmap in
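
The documented rule reduces to a simple bracketing pattern for importers.
A hedged sketch of CPU reads through a vmap'ed dma-buf, assuming the
struct dma_buf_map based vmap interface and trimming error handling:

/* Sketch: bracketing CPU access to an imported dma-buf. */
static int read_first_byte(struct dma_buf *dmabuf, u8 *out)
{
	struct dma_buf_map map;
	int ret;

	ret = dma_buf_vmap(dmabuf, &map);
	if (ret)
		return ret;

	ret = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
	if (ret == 0) {
		*out = *(u8 *)map.vaddr; /* assumes !map.is_iomem */
		dma_buf_end_cpu_access(dmabuf, DMA_FROM_DEVICE);
	}

	dma_buf_vunmap(dmabuf, &map);
	return ret;
}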

Re: [Intel-gfx] [PATCH 3/4] dma-buf: begin/end_cpu might lock the dma_resv lock

2020-12-14 Thread Christian König

On 11.12.20 at 16:58, Daniel Vetter wrote:

At least amdgpu and i915 do, so lets just document this as the rule.

Signed-off-by: Daniel Vetter 
Cc: Thomas Zimmermann 
Cc: Sumit Semwal 
Cc: "Christian König" 
Cc: linux-me...@vger.kernel.org
Cc: linaro-mm-...@lists.linaro.org


Reviewed-by: Christian König 


---
  drivers/dma-buf/dma-buf.c | 4 
  1 file changed, 4 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index e1fa6c6f02c4..00d5afe904cc 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1118,6 +1118,8 @@ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
if (WARN_ON(!dmabuf))
return -EINVAL;
  
+	might_lock(&dmabuf->resv->lock.base);
+
if (dmabuf->ops->begin_cpu_access)
ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
  
@@ -1151,6 +1153,8 @@ int dma_buf_end_cpu_access(struct dma_buf *dmabuf,
  
  	WARN_ON(!dmabuf);
  
+	might_lock(&dmabuf->resv->lock.base);
+
if (dmabuf->ops->end_cpu_access)
ret = dmabuf->ops->end_cpu_access(dmabuf, direction);
  


___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH i-g-t 2/4] i915/gem_exec_balancer: Measure timeslicing fairness

2020-12-14 Thread Chris Wilson
Oversaturate the virtual engines on the system and check that each
workload receives a fair share of the available GPU time.
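
The pass criterion distils to comparing per-context GPU runtimes sampled
from each context's CTX_TIMESTAMP register: after the spinners have run
concurrently for a fixed wall time, the spread between the shortest and
longest runtime must stay well inside the median. In outline, paraphrasing
the assertion in the test below with n = count + 1 contexts:

	qsort(ts, n, sizeof(*ts), cmp_u32);
	/* Unfair if the min-to-max spread reaches 1/6th of the median. */
	igt_assert((ts[n - 1] - ts[0]) * 6 < ts[n / 2]);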

Signed-off-by: Chris Wilson 
---
 tests/i915/gem_exec_balancer.c | 154 +
 1 file changed, 154 insertions(+)

diff --git a/tests/i915/gem_exec_balancer.c b/tests/i915/gem_exec_balancer.c
index 35a032ccb..5efd586ad 100644
--- a/tests/i915/gem_exec_balancer.c
+++ b/tests/i915/gem_exec_balancer.c
@@ -2763,6 +2763,157 @@ static void smoketest(int i915, int timeout)
gem_close(i915, batch[0].handle);
 }
 
+static uint32_t read_ctx_timestamp(int i915, uint32_t ctx)
+{
+   struct drm_i915_gem_relocation_entry reloc;
+   struct drm_i915_gem_exec_object2 obj = {
+   .handle = gem_create(i915, 4096),
+   .offset = 32 << 20,
+   .relocs_ptr = to_user_pointer(&reloc),
+   .relocation_count = 1,
+   };
+   struct drm_i915_gem_execbuffer2 execbuf = {
+   .buffers_ptr = to_user_pointer(&obj),
+   .buffer_count = 1,
+   .rsvd1 = ctx,
+   };
+   uint32_t *map, *cs;
+   uint32_t ts;
+
+   cs = map = gem_mmap__device_coherent(i915, obj.handle,
+0, 4096, PROT_WRITE);
+
+   *cs++ = 0x24 << 23 | 1 << 19 | 2; /* relative SRM */
+   *cs++ = 0x3a8; /* CTX_TIMESTAMP */
+   memset(&reloc, 0, sizeof(reloc));
+   reloc.target_handle = obj.handle;
+   reloc.presumed_offset = obj.offset;
+   reloc.offset = offset_in_page(cs);
+   reloc.delta = 4000;
+   *cs++ = obj.offset + 4000;
+   *cs++ = obj.offset >> 32;
+
+   *cs++ = MI_BATCH_BUFFER_END;
+
+   gem_execbuf(i915, &execbuf);
+   gem_sync(i915, obj.handle);
+   gem_close(i915, obj.handle);
+
+   ts = map[1000];
+   munmap(map, 4096);
+
+   return ts;
+}
+
+static int cmp_u32(const void *A, const void *B)
+{
+   const uint32_t *a = A, *b = B;
+
+   if (*a < *b)
+   return -1;
+   else if (*a > *b)
+   return 1;
+   else
+   return 0;
+}
+
+static int read_ctx_timestamp_frequency(int i915)
+{
+   int value = 1250; /* icl!!! are you feeling alright? CTX vs CS */
+   drm_i915_getparam_t gp = {
+   .value = &value,
+   .param = I915_PARAM_CS_TIMESTAMP_FREQUENCY,
+   };
+   if (intel_gen(intel_get_drm_devid(i915)) != 11)
+   ioctl(i915, DRM_IOCTL_I915_GETPARAM, &gp);
+   return value;
+}
+
+static uint64_t div64_u64_round_up(uint64_t x, uint64_t y)
+{
+   return (x + y - 1) / y;
+}
+
+static uint64_t ticks_to_ns(int i915, uint64_t ticks)
+{
+   return div64_u64_round_up(ticks * NSEC_PER_SEC,
+ read_ctx_timestamp_frequency(i915));
+}
+
+static void __fairslice(int i915,
+   const struct i915_engine_class_instance *ci,
+   unsigned int count)
+{
+   igt_spin_t *spin = NULL;
+   uint32_t ctx[count + 1];
+   uint32_t ts[count + 1];
+
+   igt_debug("Launching %zd spinners on %s\n",
+ ARRAY_SIZE(ctx), class_to_str(ci->engine_class));
+   igt_assert(ARRAY_SIZE(ctx) >= 3);
+
+   for (int i = 0; i < ARRAY_SIZE(ctx); i++) {
+   ctx[i] = load_balancer_create(i915, ci, count);
+   if (spin == NULL) {
+   spin = __igt_spin_new(i915, .ctx = ctx[i]);
+   } else {
+   struct drm_i915_gem_execbuffer2 eb = {
+   .buffer_count = 1,
+   .buffers_ptr = to_user_pointer(&spin->obj[IGT_SPIN_BATCH]),
+   .rsvd1 = ctx[i],
+   };
+   gem_execbuf(i915, &eb);
+   }
+   }
+
+   sleep(2); /* over the course of many timeslices */
+
+   igt_assert(gem_bo_busy(i915, spin->handle));
+   igt_spin_end(spin);
+   igt_debug("Cancelled spinners\n");
+
+   for (int i = 0; i < ARRAY_SIZE(ctx); i++)
+   ts[i] = read_ctx_timestamp(i915, ctx[i]);
+
+   for (int i = 0; i < ARRAY_SIZE(ctx); i++)
+   gem_context_destroy(i915, ctx[i]);
+   igt_spin_free(i915, spin);
+
+   qsort(ts, ARRAY_SIZE(ctx), sizeof(*ts), cmp_u32);
+   igt_info("%s: [%.1f, %.1f, %.1f] ms, expect %.1fms\n",
+class_to_str(ci->engine_class),
+1e-6 * ticks_to_ns(i915, ts[0]),
+1e-6 * ticks_to_ns(i915, ts[(count + 1) / 2]),
+1e-6 * ticks_to_ns(i915, ts[count]),
+2e3 * count / ARRAY_SIZE(ctx));
+
+   igt_assert_f(ts[count], "CTX_TIMESTAMP not reported!\n");
+   igt_assert_f((ts[count] - ts[0]) * 6 < ts[(count + 1) / 2],
+"Range of timeslices greater than tolerable: %.2fms > %.2fms; unfair!\n",
+1e-6 * ticks_to_ns(i915, ts[count] - ts[0]),
+1e-6 *

[Intel-gfx] [PATCH i-g-t 3/4] i915/gem_shrink: Refactor allocation sizing based on available memory

2020-12-14 Thread Chris Wilson
Refactor the allocation such that we utilise just enough memory pressure
to invoke the shrinker, and just enough processes to spread across the
CPUs and contend on the shrinker.

Signed-off-by: Chris Wilson 
---
 tests/i915/gem_shrink.c | 11 ++-
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/tests/i915/gem_shrink.c b/tests/i915/gem_shrink.c
index 023db8c56..e8a814fe6 100644
--- a/tests/i915/gem_shrink.c
+++ b/tests/i915/gem_shrink.c
@@ -426,6 +426,7 @@ igt_main
int num_processes = 0;
 
igt_fixture {
+   const int ncpus = sysconf(_SC_NPROCESSORS_ONLN);
uint64_t mem_size = intel_get_total_ram_mb();
int fd;
 
@@ -434,16 +435,16 @@ igt_main
 
/*
 * Spawn enough processes to use all memory, but each only
-* uses half the available mappable aperture ~128MiB.
+* uses half of the available per-cpu memory.
 * Individually the processes would be ok, but en masse
 * we expect the shrinker to start purging objects,
 * and possibly fail.
 */
-   alloc_size = gem_mappable_aperture_size(fd) / 2;
-   num_processes = 1 + (mem_size / (alloc_size >> 20));
+   alloc_size = (mem_size + ncpus - 1) / ncpus / 2;
+   num_processes = ncpus + (mem_size / alloc_size);
 
-   igt_info("Using %d processes and %'lluMiB per process\n",
-num_processes, (long long)(alloc_size >> 20));
+   igt_info("Using %d processes and %'"PRIu64"MiB per process\n",
+num_processes, alloc_size);
 
intel_require_memory(num_processes, alloc_size,
 CHECK_SWAP | CHECK_RAM);
-- 
2.29.2

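Concretely, assuming a 16 GiB machine with 8 CPUs: alloc_size =
(16384 + 7) / 8 / 2 = 1024 MiB per process and num_processes =
8 + 16384 / 1024 = 24, i.e. a combined working set of roughly 24 GiB
against 16 GiB of RAM; enough overcommit to force the shrinker into
action while keeping each individual allocation harmless on its own.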


[Intel-gfx] [PATCH i-g-t 1/4] i915/perf_pmu: Verify RC6 measurements before/after suspend

2020-12-14 Thread Chris Wilson
RC6 should work before suspend, and continue to increment while idle
after suspend. Should.

v2: Include a longer sleep after suspend; it appears we are reluctant to
idle so soon after waking up.

Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
---
 tests/i915/perf_pmu.c | 28 +---
 1 file changed, 25 insertions(+), 3 deletions(-)

diff --git a/tests/i915/perf_pmu.c b/tests/i915/perf_pmu.c
index cb7273142..0b470c1bc 100644
--- a/tests/i915/perf_pmu.c
+++ b/tests/i915/perf_pmu.c
@@ -170,6 +170,7 @@ static unsigned int measured_usleep(unsigned int usec)
 #define TEST_RUNTIME_PM (8)
 #define FLAG_LONG (16)
 #define FLAG_HANG (32)
+#define TEST_S3 (64)
 
 static igt_spin_t * __spin_poll(int fd, uint32_t ctx,
const struct intel_execution_engine2 *e)
@@ -1578,7 +1579,7 @@ test_frequency_idle(int gem_fd)
 "Actual frequency should be 0 while parked!\n");
 }
 
-static bool wait_for_rc6(int fd)
+static bool wait_for_rc6(int fd, int timeout)
 {
struct timespec tv = {};
uint64_t start, now;
@@ -1594,7 +1595,7 @@ static bool wait_for_rc6(int fd)
now = pmu_read_single(fd);
if (now - start > 1e6)
return true;
-   } while (!igt_seconds_elapsed(&tv));
+   } while (igt_seconds_elapsed(&tv) <= timeout);
 
return false;
 }
@@ -1636,14 +1637,32 @@ test_rc6(int gem_fd, unsigned int flags)
}
}
 
-   igt_require(wait_for_rc6(fd));
+   igt_require(wait_for_rc6(fd, 1));
 
/* While idle check full RC6. */
prev = __pmu_read_single(fd, &ts[0]);
slept = measured_usleep(duration_ns / 1000);
idle = __pmu_read_single(fd, &ts[1]);
+
igt_debug("slept=%lu perf=%"PRIu64"\n", slept, ts[1] - ts[0]);
+   assert_within_epsilon(idle - prev, ts[1] - ts[0], tolerance);
+
+   if (flags & TEST_S3) {
+   prev = __pmu_read_single(fd, &ts[0]);
+   igt_system_suspend_autoresume(SUSPEND_STATE_MEM,
+ SUSPEND_TEST_NONE);
+   idle = __pmu_read_single(fd, &ts[1]);
+   igt_debug("suspend=%"PRIu64"\n", ts[1] - ts[0]);
+   //assert_within_epsilon(idle - prev, ts[1] - ts[0], tolerance);
+   }
+
+   igt_assert(wait_for_rc6(fd, 5));
 
+   prev = __pmu_read_single(fd, &ts[0]);
+   slept = measured_usleep(duration_ns / 1000);
+   idle = __pmu_read_single(fd, &ts[1]);
+
+   igt_debug("slept=%lu perf=%"PRIu64"\n", slept, ts[1] - ts[0]);
assert_within_epsilon(idle - prev, ts[1] - ts[0], tolerance);
 
/* Wake up device and check no RC6. */
@@ -2245,6 +2264,9 @@ igt_main
igt_subtest("rc6-runtime-pm-long")
test_rc6(fd, TEST_RUNTIME_PM | FLAG_LONG);
 
+   igt_subtest("rc6-suspend")
+   test_rc6(fd, TEST_S3);
+
/**
 * Check render nodes are counted.
 */
-- 
2.29.2

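Outside IGT, the rc6 counter the test samples is exposed through the i915
PMU and can be read with a plain perf_event_open(); a minimal sketch, with
error handling trimmed (the PMU reports cumulative residency, so real code
samples it twice and subtracts):

#include <drm/i915_drm.h>
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Read the dynamic PMU type id registered by i915. */
static int i915_pmu_type(void)
{
	FILE *f = fopen("/sys/bus/event_source/devices/i915/type", "r");
	int type = -1;

	if (f) {
		fscanf(f, "%d", &type);
		fclose(f);
	}

	return type;
}

/* Sample cumulative RC6 residency, in nanoseconds. */
static uint64_t read_rc6_residency(void)
{
	struct perf_event_attr attr = {
		.type = i915_pmu_type(),
		.config = I915_PMU_RC6_RESIDENCY,
		.size = sizeof(attr),
	};
	uint64_t count = 0;
	int fd;

	/* Uncore PMU: pid == -1, any one online cpu. */
	fd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
	if (fd >= 0) {
		read(fd, &count, sizeof(count));
		close(fd);
	}

	return count;
}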


[Intel-gfx] [PATCH i-g-t 4/4] i915/gem_exec_schedule: Try to spot unfairness

2020-12-14 Thread Chris Wilson
An important property for multi-client systems is that each client gets
a 'fair' allotment of system time. (Where fairness is at the whim of the
context properties, such as priorities.) This test forks N independent
clients (albeit they happen to share a single vm), does an equal
amount of work in each client, and asserts that they take an equal
amount of time.

Though we have never claimed to have a completely fair scheduler, that
is what is expected.

v2: igt_assert_f and more commentary; exclude vip from client stats,
include range of frame intervals from each individual client
v3: Write down what the test actually does!

Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
Cc: Ramalingam C 
---
 tests/i915/gem_exec_schedule.c | 960 +
 1 file changed, 960 insertions(+)

diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
index f23d63ac3..3c950b06f 100644
--- a/tests/i915/gem_exec_schedule.c
+++ b/tests/i915/gem_exec_schedule.c
@@ -29,6 +29,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -2516,6 +2517,932 @@ static void measure_semaphore_power(int i915)
rapl_close(&pkg);
 }
 
+static int read_timestamp_frequency(int i915)
+{
+   int value = 0;
+   drm_i915_getparam_t gp = {
+   .value = &value,
+   .param = I915_PARAM_CS_TIMESTAMP_FREQUENCY,
+   };
+   ioctl(i915, DRM_IOCTL_I915_GETPARAM, &gp);
+   return value;
+}
+
+static uint64_t div64_u64_round_up(uint64_t x, uint64_t y)
+{
+   return (x + y - 1) / y;
+}
+
+static uint64_t ns_to_ctx_ticks(int i915, uint64_t ns)
+{
+   int f = read_timestamp_frequency(i915);
+   if (intel_gen(intel_get_drm_devid(i915)) == 11)
+   f = 1250; /* icl!!! are you feeling alright? CTX vs CS */
+   return div64_u64_round_up(ns * f, NSEC_PER_SEC);
+}
+
+static uint64_t ticks_to_ns(int i915, uint64_t ticks)
+{
+   return div64_u64_round_up(ticks * NSEC_PER_SEC,
+ read_timestamp_frequency(i915));
+}
+
+#define MI_INSTR(opcode, flags) (((opcode) << 23) | (flags))
+
+#define MI_MATH(x)                      MI_INSTR(0x1a, (x) - 1)
+#define MI_MATH_INSTR(opcode, op1, op2) ((opcode) << 20 | (op1) << 10 | (op2))
+/* Opcodes for MI_MATH_INSTR */
+#define   MI_MATH_NOOP                  MI_MATH_INSTR(0x000, 0x0, 0x0)
+#define   MI_MATH_LOAD(op1, op2)        MI_MATH_INSTR(0x080, op1, op2)
+#define   MI_MATH_LOADINV(op1, op2)     MI_MATH_INSTR(0x480, op1, op2)
+#define   MI_MATH_LOAD0(op1)            MI_MATH_INSTR(0x081, op1)
+#define   MI_MATH_LOAD1(op1)            MI_MATH_INSTR(0x481, op1)
+#define   MI_MATH_ADD                   MI_MATH_INSTR(0x100, 0x0, 0x0)
+#define   MI_MATH_SUB                   MI_MATH_INSTR(0x101, 0x0, 0x0)
+#define   MI_MATH_AND                   MI_MATH_INSTR(0x102, 0x0, 0x0)
+#define   MI_MATH_OR                    MI_MATH_INSTR(0x103, 0x0, 0x0)
+#define   MI_MATH_XOR                   MI_MATH_INSTR(0x104, 0x0, 0x0)
+#define   MI_MATH_STORE(op1, op2)       MI_MATH_INSTR(0x180, op1, op2)
+#define   MI_MATH_STOREINV(op1, op2)    MI_MATH_INSTR(0x580, op1, op2)
+/* Registers used as operands in MI_MATH_INSTR */
+#define   MI_MATH_REG(x)                (x)
+#define   MI_MATH_REG_SRCA              0x20
+#define   MI_MATH_REG_SRCB              0x21
+#define   MI_MATH_REG_ACCU              0x31
+#define   MI_MATH_REG_ZF                0x32
+#define   MI_MATH_REG_CF                0x33
+
+#define MI_LOAD_REGISTER_REG            MI_INSTR(0x2A, 1)
+
+static void delay(int i915,
+ const struct intel_execution_engine2 *e,
+ uint32_t handle,
+ uint64_t addr,
+ uint64_t ns)
+{
+   const int use_64b = intel_gen(intel_get_drm_devid(i915)) >= 8;
+   const uint32_t base = gem_engine_mmio_base(i915, e->name);
+#define CS_GPR(x) (base + 0x600 + 8 * (x))
+#define RUNTIME (base + 0x3a8)
+   enum { START_TS, NOW_TS };
+   uint32_t *map, *cs, *jmp;
+
+   igt_require(base);
+
+   /* Loop until CTX_TIMESTAMP - initial > @ns */
+
+   cs = map = gem_mmap__device_coherent(i915, handle, 0, 4096, PROT_WRITE);
+
+   *cs++ = MI_LOAD_REGISTER_IMM;
+   *cs++ = CS_GPR(START_TS) + 4;
+   *cs++ = 0;
+   *cs++ = MI_LOAD_REGISTER_REG;
+   *cs++ = RUNTIME;
+   *cs++ = CS_GPR(START_TS);
+
+   while (offset_in_page(cs) & 63)
+   *cs++ = 0;
+   jmp = cs;
+
+   *cs++ = 0x5 << 23; /* MI_ARB_CHECK */
+
+   *cs++ = MI_LOAD_REGISTER_IMM;
+   *cs++ = CS_GPR(NOW_TS) + 4;
+   *cs++ = 0;
+   *cs++ = MI_LOAD_REGISTER_REG;
+   *cs++ = RUNTIME;
+   *cs++ = CS_GPR(NOW_TS);
+
+   /* delta = now - start; inverted to match COND_BBE */
+   *cs++ = MI_MATH(4);
+   *cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCA, MI_MATH_REG(NOW_TS));
+   *cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCB, MI_MATH_REG(START_TS));
+   *cs++ = MI_MATH_SUB;
+   *cs++ = MI_MATH_STOREI

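To make the conversions concrete: assuming a 12.5 MHz command-streamer
timestamp clock, ns_to_ctx_ticks(i915, 100 * 1000) = ceil(100000 * 12500000
/ 1e9) = 1250 ticks for a 100 us delay. delay() then busy-waits entirely on
the GPU: it latches CTX_TIMESTAMP into a GPR once at the start, re-reads it
on every pass, and computes the elapsed delta with MI_MATH, stored inverted
so that an MI_COND_BATCH_BUFFER_END comparison against the target can
terminate the batch, otherwise jumping back to the MI_ARB_CHECK to stay
preemptible.
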
Re: [Intel-gfx] [PATCH v5 3/6] drm/i915/display/psr: Use plane damage clips to calculate damaged area

2020-12-14 Thread Mun, Gwan-gyeong
On Sun, 2020-12-13 at 10:39 -0800, José Roberto de Souza wrote:
> Now using the plane damage clips property to calculate the damaged area.
> Selective fetch only supports one region to be fetched so software
> needs to calculate a bounding box around all damage clips.
> 
> Now that we are not completely fetching each plane, another loop is
> needed, as all the plane areas that intersect with the pipe damaged
> area need to be fetched from memory so the complete blending of all
> planes can happen.
> 
> v2:
> - do not shift new_plane_state->uapi.dst, only src is in 16.16
> format
> 
> v4:
> - setting plane selective fetch area using the whole pipe damage area
> - mark the whole plane area damaged if plane visibility or alpha
> changed
> 
> v5:
> - taking src.y1 into consideration in the damage coordinates
> - adding to the pipe damaged area planes that were visible but are
> invisible in the new state
> 
> Cc: Ville Syrjälä 
> Cc: Gwan-gyeong Mun 
> Signed-off-by: José Roberto de Souza 
> ---
>  drivers/gpu/drm/i915/display/intel_psr.c | 93 
> 
>  1 file changed, 79 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_psr.c
> b/drivers/gpu/drm/i915/display/intel_psr.c
> index d9a395c486d3..b256184821da 100644
> --- a/drivers/gpu/drm/i915/display/intel_psr.c
> +++ b/drivers/gpu/drm/i915/display/intel_psr.c
> @@ -1269,11 +1269,11 @@ int intel_psr2_sel_fetch_update(struct
> intel_atomic_state *state,
>   struct intel_crtc *crtc)
>  {
>   struct intel_crtc_state *crtc_state =
> intel_atomic_get_new_crtc_state(state, crtc);
> + struct drm_rect pipe_clip = { .x1 = 0, .y1 = -1, .x2 = INT_MAX,
> .y2 = -1 };
>   struct intel_plane_state *new_plane_state, *old_plane_state;
> - struct drm_rect pipe_clip = { .y1 = -1 };
>   struct intel_plane *plane;
>   bool full_update = false;
> - int i, ret;
> + int i, src_y1, src_y2, ret;
>  
>   if (!crtc_state->enable_psr2_sel_fetch)
>   return 0;
> @@ -1282,9 +1282,21 @@ int intel_psr2_sel_fetch_update(struct
> intel_atomic_state *state,
>   if (ret)
>   return ret;
>  
> + src_y1 = new_plane_state->uapi.src.y1 >> 16;
> + src_y2 = new_plane_state->uapi.src.y2 >> 16;
> +
> + /*
> +  * Calculate minimal selective fetch area of each plane and
> calculate
> +  * the pipe damaged area.
> +  * In the next loop the plane selective fetch area will
> actually be set
> +  * using whole pipe damaged area.
> +  */
>   for_each_oldnew_intel_plane_in_state(state, plane,
> old_plane_state,
>new_plane_state, i) {
> - struct drm_rect *sel_fetch_area, temp;
> + struct drm_mode_rect *damaged_clips;
> + struct drm_rect sel_fetch_area = { .y1 = -1 };
> + u32 num_clips;
> + int j;
>  
>   if (new_plane_state->uapi.crtc != crtc_state-
> >uapi.crtc)
>   continue;
> @@ -1300,23 +1312,76 @@ int intel_psr2_sel_fetch_update(struct
> intel_atomic_state *state,
>   break;
>   }
>  
> - if (!new_plane_state->uapi.visible)
> - continue;
> + damaged_clips =
> drm_plane_get_damage_clips(&new_plane_state->uapi);
> + num_clips =
> drm_plane_get_damage_clips_count(&new_plane_state->uapi);
>  
>   /*
> -  * For now doing a selective fetch in the whole plane
> area,
> -  * optimizations will come in the future.
> +  * If visibility or alpha changed or plane moved, mark
> the whole
> +  * plane area as damaged as it needs to be complete
> redraw in
> +  * the new position.
>*/
> - sel_fetch_area = &new_plane_state->psr2_sel_fetch_area;
> - sel_fetch_area->y1 = new_plane_state->uapi.src.y1 >>
> 16;
> - sel_fetch_area->y2 = new_plane_state->uapi.src.y2 >>
> 16;
> + if (new_plane_state->uapi.visible != old_plane_state-
> >uapi.visible ||
> + new_plane_state->uapi.alpha != old_plane_state-
> >uapi.alpha ||
> + !drm_rect_equals(&new_plane_state->uapi.dst,
> +  &old_plane_state->uapi.dst)) {
> + num_clips = 0;
> + sel_fetch_area.y1 = src_y1;
> + sel_fetch_area.y2 = src_y2;
> + } else if (!num_clips && new_plane_state->uapi.fb !=
> +old_plane_state->uapi.fb) {
> + /*
> +  * If the plane don't have damage areas but the
> +  * framebuffer changed, mark the whole plane
> area as
> +  * damaged.
> +  */
> + sel_fetch_area.y1 = src_y1;
> + sel_fetch_area.y2 = src_y2;
> + }
> +
> +  

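Since PSR2 selective fetch programs a single rectangle, folding the damage
clips into one region is just a running min/max; schematically (not the
exact patch code), given the damaged_clips array and num_clips above:

	struct drm_rect bbox = { .x1 = INT_MAX, .y1 = INT_MAX,
				 .x2 = INT_MIN, .y2 = INT_MIN };
	int j;

	for (j = 0; j < num_clips; j++) {
		bbox.x1 = min(bbox.x1, (int)damaged_clips[j].x1);
		bbox.y1 = min(bbox.y1, (int)damaged_clips[j].y1);
		bbox.x2 = max(bbox.x2, (int)damaged_clips[j].x2);
		bbox.y2 = max(bbox.y2, (int)damaged_clips[j].y2);
	}
	/* bbox now bounds every damage clip; clamp it to the plane and
	 * accumulate it into the single pipe-wide damaged area. */
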
Re: [Intel-gfx] [PATCH v5 1/2] drm/i915/display: Support PSR Multiple Transcoders

2020-12-14 Thread Anshuman Gupta
On 2020-12-11 at 19:14:20 +0200, Gwan-gyeong Mun wrote:
> This is preliminary work for supporting multiple eDP PSR and
> DP PanelReplay instances. It refactors the singleton PSR code into PSR
> that can support multiple transcoders, moving and renaming the
> i915_psr structure of drm_i915_private to an intel_psr structure in
> intel_dp.
> It also changes the PSR interrupt handling routine to support multiple
> transcoders, but it does not change the scenario or timing of enabling
> and disabling PSR.
> 
> v2: Fix indentation and add comments
> v3: Remove Blank line
> v4: Rebased
> v5: Rebased and Addressed Anshuman's review comment.
> - Move calling of intel_psr_init() to intel_dp_init_connector()
> 
> Signed-off-by: Gwan-gyeong Mun 
> Cc: José Roberto de Souza 
> Cc: Juha-Pekka Heikkila 
> Cc: Anshuman Gupta 
> ---
>  drivers/gpu/drm/i915/display/intel_ddi.c  |   3 +
>  drivers/gpu/drm/i915/display/intel_display.c  |   4 -
>  .../drm/i915/display/intel_display_debugfs.c  | 111 ++--
>  .../drm/i915/display/intel_display_types.h|  38 ++
>  drivers/gpu/drm/i915/display/intel_dp.c   |  23 +-
>  drivers/gpu/drm/i915/display/intel_psr.c  | 585 ++
>  drivers/gpu/drm/i915/display/intel_psr.h  |  14 +-
>  drivers/gpu/drm/i915/display/intel_sprite.c   |   6 +-
>  drivers/gpu/drm/i915/i915_drv.h   |  38 --
>  drivers/gpu/drm/i915/i915_irq.c   |  49 +-
>  10 files changed, 491 insertions(+), 380 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c 
> b/drivers/gpu/drm/i915/display/intel_ddi.c
> index 6863236df1d0..4b87f72cb9c0 100644
> --- a/drivers/gpu/drm/i915/display/intel_ddi.c
> +++ b/drivers/gpu/drm/i915/display/intel_ddi.c
> @@ -4320,7 +4320,10 @@ static void intel_ddi_update_pipe_dp(struct 
> intel_atomic_state *state,
>  
>   intel_ddi_set_dp_msa(crtc_state, conn_state);
>  
> + //TODO: move PSR related functions into intel_psr_update()
> + intel_psr2_program_trans_man_trk_ctl(intel_dp, crtc_state);
>   intel_psr_update(intel_dp, crtc_state, conn_state);
> +
>   intel_dp_set_infoframes(encoder, true, crtc_state, conn_state);
>   intel_edp_drrs_update(intel_dp, crtc_state);
>  
> diff --git a/drivers/gpu/drm/i915/display/intel_display.c 
> b/drivers/gpu/drm/i915/display/intel_display.c
> index 761be8deaa9b..f26d9bcd722c 100644
> --- a/drivers/gpu/drm/i915/display/intel_display.c
> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> @@ -15869,8 +15869,6 @@ static void commit_pipe_config(struct 
> intel_atomic_state *state,
>  
>   if (new_crtc_state->update_pipe)
>   intel_pipe_fastset(old_crtc_state, new_crtc_state);
> -
> - intel_psr2_program_trans_man_trk_ctl(new_crtc_state);
>   }
>  
>   if (dev_priv->display.atomic_update_watermarks)
> @@ -17830,8 +17828,6 @@ static void intel_setup_outputs(struct 
> drm_i915_private *dev_priv)
>   intel_dvo_init(dev_priv);
>   }
>  
> - intel_psr_init(dev_priv);
> -
>   for_each_intel_encoder(&dev_priv->drm, encoder) {
>   encoder->base.possible_crtcs =
>   intel_encoder_possible_crtcs(encoder);
> diff --git a/drivers/gpu/drm/i915/display/intel_display_debugfs.c 
> b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> index cd7e5519ee7d..041053167d7f 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> +++ b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> @@ -249,18 +249,17 @@ static int i915_psr_sink_status_show(struct seq_file 
> *m, void *data)
>   "sink internal error",
>   };
>   struct drm_connector *connector = m->private;
> - struct drm_i915_private *dev_priv = to_i915(connector->dev);
>   struct intel_dp *intel_dp =
>   intel_attached_dp(to_intel_connector(connector));
>   int ret;
>  
> - if (!CAN_PSR(dev_priv)) {
> - seq_puts(m, "PSR Unsupported\n");
> + if (connector->status != connector_status_connected)
>   return -ENODEV;
> - }
>  
> - if (connector->status != connector_status_connected)
> + if (!CAN_PSR(intel_dp)) {
> + seq_puts(m, "PSR Unsupported\n");
>   return -ENODEV;
> + }
>  
>   ret = drm_dp_dpcd_readb(&intel_dp->aux, DP_PSR_STATUS, &val);
>  
> @@ -280,12 +279,13 @@ static int i915_psr_sink_status_show(struct seq_file 
> *m, void *data)
>  DEFINE_SHOW_ATTRIBUTE(i915_psr_sink_status);
>  
>  static void
> -psr_source_status(struct drm_i915_private *dev_priv, struct seq_file *m)
> +psr_source_status(struct intel_dp *intel_dp, struct seq_file *m)
>  {
>   u32 val, status_val;
>   const char *status = "unknown";
> + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
>  
> - if (dev_priv->psr.psr2_enabled) {
> + if (intel_dp->psr.psr2_enabled) {
>   static const char * const live_status[] = {
>   "IDLE",
>   "CAPTURE",
> @@ -3

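The shape of the refactor, schematically (field lists trimmed):

	struct drm_i915_private {
		/* ... */
		struct i915_psr psr;	/* before: one instance per device */
	};

	struct intel_dp {
		/* ... */
		struct intel_psr psr;	/* after: per encoder, so each
					 * transcoder driving a PSR panel
					 * has its own state */
	};
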
Re: [Intel-gfx] [PATCH v5 2/2] drm/i915/display: Support Multiple Transcoders' PSR status on debugfs

2020-12-14 Thread Anshuman Gupta
On 2020-12-11 at 19:14:21 +0200, Gwan-gyeong Mun wrote:
> In order to support the PSR state of each transcoder, it adds
> i915_psr_status to the sub-directory of each transcoder.
> 
> v2: Change using of Symbolic permissions 'S_IRUGO' to using of octal
> permissions '0444'
> v5: Addressed JJani Nikula's review comments
>  - Remove checking of Gen12 for i915_psr_status.
>  - Add check of HAS_PSR()
>  - Remove meaningless check routine.
> 
> Signed-off-by: Gwan-gyeong Mun 
> Cc: José Roberto de Souza 
> Cc: Jani Nikula 
> Cc: Anshuman Gupta 
> ---
>  .../gpu/drm/i915/display/intel_display_debugfs.c | 16 
>  1 file changed, 16 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_display_debugfs.c 
> b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> index 041053167d7f..d2dd61c4ee0b 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> +++ b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> @@ -2224,6 +2224,16 @@ static int i915_hdcp_sink_capability_show(struct 
> seq_file *m, void *data)
>  }
>  DEFINE_SHOW_ATTRIBUTE(i915_hdcp_sink_capability);
>  
> +static int i915_psr_status_show(struct seq_file *m, void *data)
> +{
> + struct drm_connector *connector = m->private;
> + struct intel_dp *intel_dp =
> + intel_attached_dp(to_intel_connector(connector));
> +
> + return intel_psr_status(m, intel_dp);
> +}
> +DEFINE_SHOW_ATTRIBUTE(i915_psr_status);
> +
>  #define LPSP_CAPABLE(COND) (COND ? seq_puts(m, "LPSP: capable\n") : \
>   seq_puts(m, "LPSP: incapable\n"))
>  
> @@ -2399,6 +2409,12 @@ int intel_connector_debugfs_add(struct drm_connector 
> *connector)
>   connector, &i915_psr_sink_status_fops);
>   }
>  
> + if (HAS_PSR(dev_priv) &&
> + connector->connector_type == DRM_MODE_CONNECTOR_eDP) {
> + debugfs_create_file("i915_psr_status", 0444, root,
Could we keep the file name as i915_edp_psr_status, as we have today?
with that addressed.
Reviewed-by: Anshuman Gupta 
> + connector, &i915_psr_status_fops);
> + }
> +
>   if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort ||
>   connector->connector_type == DRM_MODE_CONNECTOR_HDMIA ||
>   connector->connector_type == DRM_MODE_CONNECTOR_HDMIB) {
> -- 
> 2.25.0
> 
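
With the patch applied, the status is then read per connector, e.g.
"cat /sys/kernel/debug/dri/0/eDP-1/i915_psr_status" (the DRM minor and
connector directory names vary by system).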


Re: [Intel-gfx] [PATCH v5 1/2] drm/i915/display: Support PSR Multiple Transcoders

2020-12-14 Thread Jani Nikula
On Fri, 11 Dec 2020, Gwan-gyeong Mun  wrote:
> This is preliminary work for supporting multiple eDP PSR and
> DP PanelReplay instances. It refactors the singleton PSR code into PSR
> that can support multiple transcoders, moving and renaming the
> i915_psr structure of drm_i915_private to an intel_psr structure in
> intel_dp.
> It also changes the PSR interrupt handling routine to support multiple
> transcoders, but it does not change the scenario or timing of enabling
> and disabling PSR.

Can we plan to throw out the psr debugfs files not attached to a
connector, please? I.e. i915_edp_psr_debug and i915_edp_psr_status would
be removed from top level altogether, and only be available under
connector debugfs.

BR,
Jani.


>
> v2: Fix indentation and add comments
> v3: Remove Blank line
> v4: Rebased
> v5: Rebased and Addressed Anshuman's review comment.
> - Move calling of intel_psr_init() to intel_dp_init_connector()
>
> Signed-off-by: Gwan-gyeong Mun 
> Cc: José Roberto de Souza 
> Cc: Juha-Pekka Heikkila 
> Cc: Anshuman Gupta 
> ---
>  drivers/gpu/drm/i915/display/intel_ddi.c  |   3 +
>  drivers/gpu/drm/i915/display/intel_display.c  |   4 -
>  .../drm/i915/display/intel_display_debugfs.c  | 111 ++--
>  .../drm/i915/display/intel_display_types.h|  38 ++
>  drivers/gpu/drm/i915/display/intel_dp.c   |  23 +-
>  drivers/gpu/drm/i915/display/intel_psr.c  | 585 ++
>  drivers/gpu/drm/i915/display/intel_psr.h  |  14 +-
>  drivers/gpu/drm/i915/display/intel_sprite.c   |   6 +-
>  drivers/gpu/drm/i915/i915_drv.h   |  38 --
>  drivers/gpu/drm/i915/i915_irq.c   |  49 +-
>  10 files changed, 491 insertions(+), 380 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c 
> b/drivers/gpu/drm/i915/display/intel_ddi.c
> index 6863236df1d0..4b87f72cb9c0 100644
> --- a/drivers/gpu/drm/i915/display/intel_ddi.c
> +++ b/drivers/gpu/drm/i915/display/intel_ddi.c
> @@ -4320,7 +4320,10 @@ static void intel_ddi_update_pipe_dp(struct 
> intel_atomic_state *state,
>  
>   intel_ddi_set_dp_msa(crtc_state, conn_state);
>  
> + //TODO: move PSR related functions into intel_psr_update()
> + intel_psr2_program_trans_man_trk_ctl(intel_dp, crtc_state);
>   intel_psr_update(intel_dp, crtc_state, conn_state);
> +
>   intel_dp_set_infoframes(encoder, true, crtc_state, conn_state);
>   intel_edp_drrs_update(intel_dp, crtc_state);
>  
> diff --git a/drivers/gpu/drm/i915/display/intel_display.c 
> b/drivers/gpu/drm/i915/display/intel_display.c
> index 761be8deaa9b..f26d9bcd722c 100644
> --- a/drivers/gpu/drm/i915/display/intel_display.c
> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> @@ -15869,8 +15869,6 @@ static void commit_pipe_config(struct 
> intel_atomic_state *state,
>  
>   if (new_crtc_state->update_pipe)
>   intel_pipe_fastset(old_crtc_state, new_crtc_state);
> -
> - intel_psr2_program_trans_man_trk_ctl(new_crtc_state);
>   }
>  
>   if (dev_priv->display.atomic_update_watermarks)
> @@ -17830,8 +17828,6 @@ static void intel_setup_outputs(struct 
> drm_i915_private *dev_priv)
>   intel_dvo_init(dev_priv);
>   }
>  
> - intel_psr_init(dev_priv);
> -
>   for_each_intel_encoder(&dev_priv->drm, encoder) {
>   encoder->base.possible_crtcs =
>   intel_encoder_possible_crtcs(encoder);
> diff --git a/drivers/gpu/drm/i915/display/intel_display_debugfs.c 
> b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> index cd7e5519ee7d..041053167d7f 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> +++ b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> @@ -249,18 +249,17 @@ static int i915_psr_sink_status_show(struct seq_file 
> *m, void *data)
>   "sink internal error",
>   };
>   struct drm_connector *connector = m->private;
> - struct drm_i915_private *dev_priv = to_i915(connector->dev);
>   struct intel_dp *intel_dp =
>   intel_attached_dp(to_intel_connector(connector));
>   int ret;
>  
> - if (!CAN_PSR(dev_priv)) {
> - seq_puts(m, "PSR Unsupported\n");
> + if (connector->status != connector_status_connected)
>   return -ENODEV;
> - }
>  
> - if (connector->status != connector_status_connected)
> + if (!CAN_PSR(intel_dp)) {
> + seq_puts(m, "PSR Unsupported\n");
>   return -ENODEV;
> + }
>  
>   ret = drm_dp_dpcd_readb(&intel_dp->aux, DP_PSR_STATUS, &val);
>  
> @@ -280,12 +279,13 @@ static int i915_psr_sink_status_show(struct seq_file 
> *m, void *data)
>  DEFINE_SHOW_ATTRIBUTE(i915_psr_sink_status);
>  
>  static void
> -psr_source_status(struct drm_i915_private *dev_priv, struct seq_file *m)
> +psr_source_status(struct intel_dp *intel_dp, struct seq_file *m)
>  {
>   u32 val, status_val;
>   const char *status = "unknown";
> + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);

Re: [Intel-gfx] [PATCH v5 2/2] drm/i915/display: Support Multiple Transcoders' PSR status on debugfs

2020-12-14 Thread Jani Nikula
On Mon, 14 Dec 2020, Anshuman Gupta  wrote:
> On 2020-12-11 at 19:14:21 +0200, Gwan-gyeong Mun wrote:
>> In order to support the PSR state of each transcoder, it adds
>> i915_psr_status to the sub-directory of each transcoder.
>> 
>> v2: Change using of Symbolic permissions 'S_IRUGO' to using of octal
>> permissions '0444'
>> v5: Addressed Jani Nikula's review comments
>>  - Remove checking of Gen12 for i915_psr_status.
>>  - Add check of HAS_PSR()
>>  - Remove meaningless check routine.
>> 
>> Signed-off-by: Gwan-gyeong Mun 
>> Cc: José Roberto de Souza 
>> Cc: Jani Nikula 
>> Cc: Anshuman Gupta 
>> ---
>>  .../gpu/drm/i915/display/intel_display_debugfs.c | 16 
>>  1 file changed, 16 insertions(+)
>> 
>> diff --git a/drivers/gpu/drm/i915/display/intel_display_debugfs.c 
>> b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
>> index 041053167d7f..d2dd61c4ee0b 100644
>> --- a/drivers/gpu/drm/i915/display/intel_display_debugfs.c
>> +++ b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
>> @@ -2224,6 +2224,16 @@ static int i915_hdcp_sink_capability_show(struct 
>> seq_file *m, void *data)
>>  }
>>  DEFINE_SHOW_ATTRIBUTE(i915_hdcp_sink_capability);
>>  
>> +static int i915_psr_status_show(struct seq_file *m, void *data)
>> +{
>> +struct drm_connector *connector = m->private;
>> +struct intel_dp *intel_dp =
>> +intel_attached_dp(to_intel_connector(connector));
>> +
>> +return intel_psr_status(m, intel_dp);
>> +}
>> +DEFINE_SHOW_ATTRIBUTE(i915_psr_status);
>> +
>>  #define LPSP_CAPABLE(COND) (COND ? seq_puts(m, "LPSP: capable\n") : \
>>  seq_puts(m, "LPSP: incapable\n"))
>>  
>> @@ -2399,6 +2409,12 @@ int intel_connector_debugfs_add(struct drm_connector 
>> *connector)
>>  connector, &i915_psr_sink_status_fops);
>>  }
>>  
>> +if (HAS_PSR(dev_priv) &&
>> +connector->connector_type == DRM_MODE_CONNECTOR_eDP) {
>> +debugfs_create_file("i915_psr_status", 0444, root,
> Could we keep the file name as i915_edp_psr_status, as we have today?
> with that addressed.

Depends on whether the plan is to use the same file for regular DP panel
replay in the future. Then edp would be misleading.

BR,
Jani.

> Reviewed-by: Anshuman Gupta 
>> +connector, &i915_psr_status_fops);
>> +}
>> +
>>  if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort ||
>>  connector->connector_type == DRM_MODE_CONNECTOR_HDMIA ||
>>  connector->connector_type == DRM_MODE_CONNECTOR_HDMIB) {
>> -- 
>> 2.25.0
>> 

-- 
Jani Nikula, Intel Open Source Graphics Center


Re: [Intel-gfx] [PATCH v5 1/2] drm/i915/display: Support PSR Multiple Transcoders

2020-12-14 Thread Mun, Gwan-gyeong
On Mon, 2020-12-14 at 16:32 +0530, Anshuman Gupta wrote:
> On 2020-12-11 at 19:14:20 +0200, Gwan-gyeong Mun wrote:
> > This is preliminary work for supporting multiple eDP PSR and
> > DP PanelReplay instances. It refactors the singleton PSR code into
> > PSR that can support multiple transcoders, moving and renaming the
> > i915_psr structure of drm_i915_private to an intel_psr structure
> > in intel_dp.
> > It also changes the PSR interrupt handling routine to support
> > multiple transcoders, but it does not change the scenario or
> > timing of enabling and disabling PSR.
> > 
> > v2: Fix indentation and add comments
> > v3: Remove Blank line
> > v4: Rebased
> > v5: Rebased and Addressed Anshuman's review comment.
> > - Move calling of intel_psr_init() to intel_dp_init_connector()
> > 
> > Signed-off-by: Gwan-gyeong Mun 
> > Cc: José Roberto de Souza 
> > Cc: Juha-Pekka Heikkila 
> > Cc: Anshuman Gupta 
> > ---
> >  drivers/gpu/drm/i915/display/intel_ddi.c  |   3 +
> >  drivers/gpu/drm/i915/display/intel_display.c  |   4 -
> >  .../drm/i915/display/intel_display_debugfs.c  | 111 ++--
> >  .../drm/i915/display/intel_display_types.h|  38 ++
> >  drivers/gpu/drm/i915/display/intel_dp.c   |  23 +-
> >  drivers/gpu/drm/i915/display/intel_psr.c  | 585 ++--
> > --
> >  drivers/gpu/drm/i915/display/intel_psr.h  |  14 +-
> >  drivers/gpu/drm/i915/display/intel_sprite.c   |   6 +-
> >  drivers/gpu/drm/i915/i915_drv.h   |  38 --
> >  drivers/gpu/drm/i915/i915_irq.c   |  49 +-
> >  10 files changed, 491 insertions(+), 380 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c
> > b/drivers/gpu/drm/i915/display/intel_ddi.c
> > index 6863236df1d0..4b87f72cb9c0 100644
> > --- a/drivers/gpu/drm/i915/display/intel_ddi.c
> > +++ b/drivers/gpu/drm/i915/display/intel_ddi.c
> > @@ -4320,7 +4320,10 @@ static void intel_ddi_update_pipe_dp(struct
> > intel_atomic_state *state,
> >  
> > intel_ddi_set_dp_msa(crtc_state, conn_state);
> >  
> > +   //TODO: move PSR related functions into intel_psr_update()
> > +   intel_psr2_program_trans_man_trk_ctl(intel_dp, crtc_state);
> > intel_psr_update(intel_dp, crtc_state, conn_state);
> > +
> > intel_dp_set_infoframes(encoder, true, crtc_state, conn_state);
> > intel_edp_drrs_update(intel_dp, crtc_state);
> >  
> > diff --git a/drivers/gpu/drm/i915/display/intel_display.c
> > b/drivers/gpu/drm/i915/display/intel_display.c
> > index 761be8deaa9b..f26d9bcd722c 100644
> > --- a/drivers/gpu/drm/i915/display/intel_display.c
> > +++ b/drivers/gpu/drm/i915/display/intel_display.c
> > @@ -15869,8 +15869,6 @@ static void commit_pipe_config(struct
> > intel_atomic_state *state,
> >  
> > if (new_crtc_state->update_pipe)
> > intel_pipe_fastset(old_crtc_state,
> > new_crtc_state);
> > -
> > -   intel_psr2_program_trans_man_trk_ctl(new_crtc_state);
> > }
> >  
> > if (dev_priv->display.atomic_update_watermarks)
> > @@ -17830,8 +17828,6 @@ static void intel_setup_outputs(struct
> > drm_i915_private *dev_priv)
> > intel_dvo_init(dev_priv);
> > }
> >  
> > -   intel_psr_init(dev_priv);
> > -
> > for_each_intel_encoder(&dev_priv->drm, encoder) {
> > encoder->base.possible_crtcs =
> > intel_encoder_possible_crtcs(encoder);
> > diff --git a/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> > b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> > index cd7e5519ee7d..041053167d7f 100644
> > --- a/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> > +++ b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> > @@ -249,18 +249,17 @@ static int i915_psr_sink_status_show(struct
> > seq_file *m, void *data)
> > "sink internal error",
> > };
> > struct drm_connector *connector = m->private;
> > -   struct drm_i915_private *dev_priv = to_i915(connector->dev);
> > struct intel_dp *intel_dp =
> > intel_attached_dp(to_intel_connector(connector));
> > int ret;
> >  
> > -   if (!CAN_PSR(dev_priv)) {
> > -   seq_puts(m, "PSR Unsupported\n");
> > +   if (connector->status != connector_status_connected)
> > return -ENODEV;
> > -   }
> >  
> > -   if (connector->status != connector_status_connected)
> > +   if (!CAN_PSR(intel_dp)) {
> > +   seq_puts(m, "PSR Unsupported\n");
> > return -ENODEV;
> > +   }
> >  
> > ret = drm_dp_dpcd_readb(&intel_dp->aux, DP_PSR_STATUS, &val);
> >  
> > @@ -280,12 +279,13 @@ static int i915_psr_sink_status_show(struct
> > seq_file *m, void *data)
> >  DEFINE_SHOW_ATTRIBUTE(i915_psr_sink_status);
> >  
> >  static void
> > -psr_source_status(struct drm_i915_private *dev_priv, struct
> > seq_file *m)
> > +psr_source_status(struct intel_dp *intel_dp, struct seq_file *m)
> >  {
> > u32 val, status_val;
> > const char *status = "unknown";
> > +   struct drm_i915_private *dev_priv = dp_to_i915(intel

[Intel-gfx] ✓ Fi.CI.IGT: success for series starting with [CI,1/3] drm/i915/pmu: Don't grab wakeref when enabling events

2020-12-14 Thread Patchwork
== Series Details ==

Series: series starting with [CI,1/3] drm/i915/pmu: Don't grab wakeref when 
enabling events
URL   : https://patchwork.freedesktop.org/series/84898/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_9478_full -> Patchwork_19132_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Known issues


  Here are the changes found in Patchwork_19132_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@gem_exec_params@rsvd2-dirt:
- shard-iclb: NOTRUN -> [SKIP][1] ([fdo#109283])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/shard-iclb6/igt@gem_exec_par...@rsvd2-dirt.html

  * igt@gem_exec_reloc@basic-wide-active@rcs0:
- shard-iclb: NOTRUN -> [FAIL][2] ([i915#2389]) +3 similar issues
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/shard-iclb6/igt@gem_exec_reloc@basic-wide-act...@rcs0.html
- shard-kbl:  NOTRUN -> [FAIL][3] ([i915#2389]) +4 similar issues
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/shard-kbl1/igt@gem_exec_reloc@basic-wide-act...@rcs0.html

  * igt@gem_exec_whisper@basic-forked-all:
- shard-glk:  [PASS][4] -> [DMESG-WARN][5] ([i915#118] / [i915#95])
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9478/shard-glk8/igt@gem_exec_whis...@basic-forked-all.html
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/shard-glk5/igt@gem_exec_whis...@basic-forked-all.html

  * igt@gem_fenced_exec_thrash@no-spare-fences:
- shard-snb:  [PASS][6] -> [INCOMPLETE][7] ([i915#2055])
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9478/shard-snb7/igt@gem_fenced_exec_thr...@no-spare-fences.html
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/shard-snb6/igt@gem_fenced_exec_thr...@no-spare-fences.html

  * igt@gem_huc_copy@huc-copy:
- shard-tglb: [PASS][8] -> [SKIP][9] ([i915#2190])
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9478/shard-tglb5/igt@gem_huc_c...@huc-copy.html
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/shard-tglb6/igt@gem_huc_c...@huc-copy.html

  * igt@gem_render_copy@y-tiled-to-vebox-linear:
- shard-iclb: NOTRUN -> [SKIP][10] ([i915#768])
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/shard-iclb6/igt@gem_render_c...@y-tiled-to-vebox-linear.html

  * igt@gen9_exec_parse@batch-without-end:
- shard-iclb: NOTRUN -> [SKIP][11] ([fdo#112306])
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/shard-iclb6/igt@gen9_exec_pa...@batch-without-end.html

  * igt@kms_big_fb@y-tiled-64bpp-rotate-270:
- shard-iclb: NOTRUN -> [SKIP][12] ([fdo#110725] / [fdo#111614])
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/shard-iclb6/igt@kms_big...@y-tiled-64bpp-rotate-270.html

  * igt@kms_big_fb@yf-tiled-8bpp-rotate-0:
- shard-iclb: NOTRUN -> [SKIP][13] ([fdo#110723])
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/shard-iclb6/igt@kms_big...@yf-tiled-8bpp-rotate-0.html

  * igt@kms_chamelium@hdmi-frame-dump:
- shard-skl:  NOTRUN -> [SKIP][14] ([fdo#109271] / [fdo#111827]) +2 
similar issues
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/shard-skl3/igt@kms_chamel...@hdmi-frame-dump.html

  * igt@kms_color@pipe-b-ctm-0-75:
- shard-skl:  [PASS][15] -> [DMESG-WARN][16] ([i915#1982])
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9478/shard-skl4/igt@kms_co...@pipe-b-ctm-0-75.html
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/shard-skl5/igt@kms_co...@pipe-b-ctm-0-75.html

  * igt@kms_color_chamelium@pipe-a-ctm-max:
- shard-iclb: NOTRUN -> [SKIP][17] ([fdo#109284] / [fdo#111827]) +3 
similar issues
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/shard-iclb6/igt@kms_color_chamel...@pipe-a-ctm-max.html

  * igt@kms_color_chamelium@pipe-d-ctm-max:
- shard-kbl:  NOTRUN -> [SKIP][18] ([fdo#109271] / [fdo#111827]) +6 
similar issues
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/shard-kbl1/igt@kms_color_chamel...@pipe-d-ctm-max.html
- shard-iclb: NOTRUN -> [SKIP][19] ([fdo#109278] / [fdo#109284] / 
[fdo#111827])
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/shard-iclb6/igt@kms_color_chamel...@pipe-d-ctm-max.html

  * igt@kms_content_protection@legacy:
- shard-iclb: NOTRUN -> [SKIP][20] ([fdo#109300] / [fdo#111066])
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/shard-iclb6/igt@kms_content_protect...@legacy.html

  * igt@kms_content_protection@uevent:
- shard-kbl:  NOTRUN -> [FAIL][21] ([i915#2105])
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19132/shard-kbl2/igt@kms_content_protect...@uevent.html

  * igt@kms_cursor_crc@pipe-a-cursor-64x21-

[Intel-gfx] [PATCH 1/2] drm/i915: Individual request cancellation

2020-12-14 Thread Chris Wilson
Currently, we cancel outstanding requests within a context when the
context is closed. We may also want to cancel individual requests using
the same graceful preemption mechanism.

Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
---
 .../gpu/drm/i915/gt/intel_engine_heartbeat.c  |   1 +
 .../drm/i915/gt/intel_execlists_submission.c  |   9 +-
 drivers/gpu/drm/i915/i915_request.c   |  62 ++-
 drivers/gpu/drm/i915/i915_request.h   |   4 +-
 drivers/gpu/drm/i915/selftests/i915_request.c | 170 ++
 5 files changed, 240 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c 
b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
index 9060385cd69e..351c3431786d 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
@@ -267,6 +267,7 @@ int intel_engine_pulse(struct intel_engine_cs *engine)
mutex_unlock(&ce->timeline->mutex);
}
 
+   intel_engine_flush_submission(engine);
intel_engine_pm_put(engine);
return err;
 }
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index dcecc2887891..fd3b170ec4ff 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -1309,6 +1309,11 @@ static void intel_context_update_runtime(struct 
intel_context *ce)
ce->runtime.total += dt;
 }
 
+static bool bad_request(const struct i915_request *rq)
+{
+   return rq->fence.error && i915_request_started(rq);
+}
+
 static inline struct intel_engine_cs *
 __execlists_schedule_in(struct i915_request *rq)
 {
@@ -1317,7 +1322,7 @@ __execlists_schedule_in(struct i915_request *rq)
 
intel_context_get(ce);
 
-   if (unlikely(intel_context_is_banned(ce)))
+   if (unlikely(intel_context_is_banned(ce) || bad_request(rq)))
reset_active(rq, engine);
 
if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
@@ -2008,7 +2013,7 @@ static unsigned long active_preempt_timeout(struct 
intel_engine_cs *engine,
return 0;
 
/* Force a fast reset for terminated contexts (ignoring sysfs!) */
-   if (unlikely(intel_context_is_banned(rq->context)))
+   if (unlikely(intel_context_is_banned(rq->context) || bad_request(rq)))
return 1;
 
return READ_ONCE(engine->props.preempt_timeout_ms);
diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index a9db1376b996..45e53001e386 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -33,6 +33,9 @@
 #include "gem/i915_gem_context.h"
 #include "gt/intel_breadcrumbs.h"
 #include "gt/intel_context.h"
+#include "gt/intel_engine.h"
+#include "gt/intel_engine_heartbeat.h"
+#include "gt/intel_reset.h"
 #include "gt/intel_ring.h"
 #include "gt/intel_rps.h"
 
@@ -498,20 +501,22 @@ void __i915_request_skip(struct i915_request *rq)
rq->infix = rq->postfix;
 }
 
-void i915_request_set_error_once(struct i915_request *rq, int error)
+bool i915_request_set_error_once(struct i915_request *rq, int error)
 {
int old;
 
GEM_BUG_ON(!IS_ERR_VALUE((long)error));
 
if (i915_request_signaled(rq))
-   return;
+   return false;
 
old = READ_ONCE(rq->fence.error);
do {
if (fatal_error(old))
-   return;
+   return false;
} while (!try_cmpxchg(&rq->fence.error, &old, error));
+
+   return true;
 }
 
 bool __i915_request_submit(struct i915_request *request)
@@ -669,6 +674,57 @@ void i915_request_unsubmit(struct i915_request *request)
spin_unlock_irqrestore(&engine->active.lock, flags);
 }
 
+static struct intel_engine_cs *active_engine(struct i915_request *rq)
+{
+   struct intel_engine_cs *engine, *locked;
+
+   locked = READ_ONCE(rq->engine);
+   spin_lock_irq(&locked->active.lock);
+   while (unlikely(locked != (engine = READ_ONCE(rq->engine)))) {
+   spin_unlock(&locked->active.lock);
+   locked = engine;
+   spin_lock(&locked->active.lock);
+   }
+
+   engine = NULL;
+   if (i915_request_is_active(rq) && !i915_request_completed(rq))
+   engine = locked;
+
+   spin_unlock_irq(&locked->active.lock);
+
+   return engine;
+}
+
+static void __cancel_request(struct i915_request *rq)
+{
+   struct intel_engine_cs *engine = active_engine(rq);
+
+   if (engine && intel_engine_pulse(engine))
+   intel_gt_handle_error(engine->gt, engine->mask, 0,
+ "request cancellation by %s",
+ current->comm);
+}
+
+void i915_request_cancel(struct i915_request *rq, int error)
+{
+   if (!i915_request_set_error_once(rq, error))
+   return;
+
+   if (i915_sw_fence

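In short, the mechanism layers on the existing context-ban path:
i915_request_set_error_once() marks the fence with a fatal error, and if the
request is already executing, __cancel_request() sends an
intel_engine_pulse() so the high-priority heartbeat preempts it.
__execlists_schedule_in() then spots bad_request() and applies
reset_active(), active_preempt_timeout() forces a 1 ms preemption timeout
regardless of sysfs, and a failed pulse escalates to intel_gt_handle_error()
for a full engine reset.
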
[Intel-gfx] [PATCH 2/2] drm/i915/gem: Allow cancelling an individual fence

2020-12-14 Thread Chris Wilson
Primarily as a thought experiment, construct an ioctl that allows the
user to cancel the associated fence, causing immediate completion if
currently executing.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/Makefile  |  1 +
 drivers/gpu/drm/i915/gem/i915_gem_cancel.c | 57 ++
 drivers/gpu/drm/i915/gem/i915_gem_ioctls.h |  2 +
 drivers/gpu/drm/i915/i915_drv.c|  1 +
 include/uapi/drm/i915_drm.h|  8 +++
 5 files changed, 69 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/gem/i915_gem_cancel.c

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index f9ef5199b124..86ff6142d2fe 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -129,6 +129,7 @@ i915-y += $(gt-y)
 # GEM (Graphics Execution Management) code
 gem-y += \
gem/i915_gem_busy.o \
+   gem/i915_gem_cancel.o \
gem/i915_gem_clflush.o \
gem/i915_gem_client_blt.o \
gem/i915_gem_context.o \
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_cancel.c 
b/drivers/gpu/drm/i915/gem/i915_gem_cancel.c
new file mode 100644
index ..c85dc22fb96d
--- /dev/null
+++ b/drivers/gpu/drm/i915/gem/i915_gem_cancel.c
@@ -0,0 +1,57 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2020 Intel Corporation
+ */
+
+#include 
+
+#include 
+#include 
+
+#include "i915_drv.h"
+#include "i915_gem_ioctls.h"
+
+int
+i915_gem_cancel_ioctl(struct drm_device *dev, void *data, struct drm_file 
*file)
+{
+   struct drm_i915_gem_cancel *args = data;
+   struct dma_fence *fence;
+   int err;
+
+   /* Only supported if we can gracefully cancel a request */
+   if (!(to_i915(dev)->caps.scheduler & I915_SCHEDULER_CAP_PREEMPTION))
+   return -ENODEV;
+
+   if (args->flags & ~(I915_GEM_CANCEL_SYNCOBJ))
+   return -EINVAL;
+
+   if (args->flags & I915_GEM_CANCEL_SYNCOBJ) {
+   struct drm_syncobj *syncobj;
+
+   syncobj = drm_syncobj_find(file, args->handle);
+   if (!syncobj) {
+   DRM_DEBUG("Invalid syncobj handle:%d provided\n",
+ args->handle);
+   return -ENOENT;
+   }
+
+   fence = drm_syncobj_fence_get(syncobj);
+   drm_syncobj_put(syncobj);
+   } else {
+   fence = sync_file_get_fence(args->handle);
+   if (!fence) {
+   DRM_DEBUG("Invalid fence fd:%d provided\n",
+ args->handle);
+   return -ENOENT;
+   }
+   }
+
+   err = -EINVAL;
+   if (dma_fence_is_i915(fence)) {
+   i915_request_cancel(to_request(fence), -EINTR);
+   err = 0;
+   }
+
+   dma_fence_put(fence);
+   return err;
+}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ioctls.h 
b/drivers/gpu/drm/i915/gem/i915_gem_ioctls.h
index 87d8b27f426d..6487f9a652e6 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ioctls.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ioctls.h
@@ -12,6 +12,8 @@ struct drm_file;
 
 int i915_gem_busy_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
+int i915_gem_cancel_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file);
 int i915_gem_create_ioctl(struct drm_device *dev, void *data,
  struct drm_file *file);
 int i915_gem_execbuffer_ioctl(struct drm_device *dev, void *data,
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 5708e11d917b..de80fbf47b73 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1758,6 +1758,7 @@ static const struct drm_ioctl_desc i915_ioctls[] = {
DRM_IOCTL_DEF_DRV(I915_QUERY, i915_query_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_VM_CREATE, i915_gem_vm_create_ioctl, 
DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_VM_DESTROY, i915_gem_vm_destroy_ioctl, 
DRM_RENDER_ALLOW),
+   DRM_IOCTL_DEF_DRV(I915_GEM_CANCEL, i915_gem_cancel_ioctl, 
DRM_RENDER_ALLOW),
 };
 
 static const struct drm_driver driver = {
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 6edcb2b6c708..bc1d065cd1e0 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -359,6 +359,7 @@ typedef struct _drm_i915_sarea {
 #define DRM_I915_QUERY 0x39
 #define DRM_I915_GEM_VM_CREATE 0x3a
 #define DRM_I915_GEM_VM_DESTROY0x3b
+#define DRM_I915_GEM_CANCEL0x3c
 /* Must be kept compact -- no holes */
 
 #define DRM_IOCTL_I915_INITDRM_IOW( DRM_COMMAND_BASE + 
DRM_I915_INIT, drm_i915_init_t)
@@ -422,6 +423,7 @@ typedef struct _drm_i915_sarea {
 #define DRM_IOCTL_I915_QUERY   DRM_IOWR(DRM_COMMAND_BASE + 
DRM_I915_QUERY, struct drm_i915_query)
 #define DRM_IOCTL_I915_GEM_VM_CREATE   DRM_IOWR(D

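Hypothetical userspace usage, assuming the uapi struct carries the handle
and flags fields consumed by the ioctl above (drm_fd and out_fence_fd are
presumed to already exist):

	struct drm_i915_gem_cancel cancel = {
		.handle = out_fence_fd,	/* sync_file fd from execbuf... */
		.flags = 0,	/* ...or I915_GEM_CANCEL_SYNCOBJ for a syncobj handle */
	};

	if (ioctl(drm_fd, DRM_IOCTL_I915_GEM_CANCEL, &cancel))
		perror("cancel"); /* -ENODEV without preemption, -EINVAL for foreign fences */
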
Re: [Intel-gfx] [PATCH 69/69] drm/i915/gt: Support virtual engine queues

2020-12-14 Thread kernel test robot
Hi Chris,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on drm-tip/drm-tip]
[cannot apply to drm-intel/for-linux-next v5.10 next-20201214]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:
https://github.com/0day-ci/linux/commits/Chris-Wilson/drm-i915-Use-cmpxchg64-for-32b-compatilibity/20201214-181222
base:   git://anongit.freedesktop.org/drm/drm-tip drm-tip
config: i386-randconfig-s002-20201214 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce:
# apt-get install sparse
# sparse version: v0.6.3-184-g1b896707-dirty
# 
https://github.com/0day-ci/linux/commit/44f806e9c54f1723714820d49dda7beddc38aa1e
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review 
Chris-Wilson/drm-i915-Use-cmpxchg64-for-32b-compatilibity/20201214-181222
git checkout 44f806e9c54f1723714820d49dda7beddc38aa1e
# save the attached .config to linux build tree
make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=i386 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 


"sparse warnings: (new ones prefixed by >>)"
>> drivers/gpu/drm/i915/gt/intel_execlists_submission.c:1401:37: sparse: 
>> sparse: incorrect type in initializer (different address spaces) @@ 
>> expected struct intel_timeline *tl @@ got struct intel_timeline 
>> [noderef] __rcu * @@
   drivers/gpu/drm/i915/gt/intel_execlists_submission.c:1401:37: sparse: 
expected struct intel_timeline *tl
   drivers/gpu/drm/i915/gt/intel_execlists_submission.c:1401:37: sparse: 
got struct intel_timeline [noderef] __rcu *

vim +1401 drivers/gpu/drm/i915/gt/intel_execlists_submission.c

  1396  
  1397  static void
  1398  resubmit_virtual_request(struct i915_request *rq, struct virtual_engine 
*ve)
  1399  {
  1400  struct intel_engine_cs *engine = rq->engine;
> 1401  struct intel_timeline *tl = READ_ONCE(rq->timeline);
  1402  struct i915_request *pos = rq;
  1403  
  1404  spin_lock_irq(&engine->active.lock);
  1405  
  1406  /* Rewind back to the start of this virtual engine queue */
  1407  list_for_each_entry_continue_reverse(rq, &tl->requests, link) {
  1408  if (i915_request_completed(rq))
  1409  break;
  1410  
  1411  pos = rq;
  1412  }
  1413  
  1414  /* Resubmit the queue in execution order */
  1415  spin_lock(&ve->base.active.lock);
  1416  list_for_each_entry_from(pos, &tl->requests, link) {
  1417  if (pos->engine != engine)
  1418  break;
  1419  
  1420  __resubmit_virtual_request(pos, engine, ve);
  1421  }
  1422  spin_unlock(&ve->base.active.lock);
  1423  
  1424  spin_unlock_irq(&engine->active.lock);
  1425  }
  1426  

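The complaint is the usual one for loading an __rcu-annotated pointer with
READ_ONCE(): sparse wants an rcu accessor that strips the address space,
e.g. (assuming rq->timeline is __rcu here)

	struct intel_timeline *tl = rcu_access_pointer(rq->timeline);

rcu_access_pointer() suffices when the pointer is only used to find the
request list that is then walked under engine->active.lock; dereferencing
the timeline outside the lock would instead want rcu_dereference() inside
an RCU read-side critical section.
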
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org




[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/69] drm/i915: Use cmpxchg64 for 32b compatilibity

2020-12-14 Thread Patchwork
== Series Details ==

Series: series starting with [01/69] drm/i915: Use cmpxchg64 for 32b 
compatilibity
URL   : https://patchwork.freedesktop.org/series/84900/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
0dc75ecba54f drm/i915: Use cmpxchg64 for 32b compatilibity
4e8074241dc4 drm/i915/uc: Squelch load failure error message
-:10: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description 
(prefer a maximum 75 chars per line)
#10: 
<3> [111.319340] i915 :00:02.0: [drm] *ERROR* GuC load failed: status = 
0x0020

total: 0 errors, 1 warnings, 0 checks, 36 lines checked
b674cfb42071 drm/i915: Encode fence specific waitqueue behaviour into the 
wait.flags
4aa9ef6d49f8 drm/i915/gt: Replace direct submit with direct call to tasklet
efdd2df1bc73 drm/i915/gt: Use virtual_engine during execlists_dequeue
b930ba2a83d3 drm/i915/gt: Decouple inflight virtual engines
a83ac0a0f58e drm/i915/gt: Defer schedule_out until after the next dequeue
af9dc0dc069e drm/i915/gt: Remove virtual breadcrumb before transfer
b6c9a8fd2b70 drm/i915/gt: Shrink the critical section for irq signaling
9aefee3ded48 drm/i915/gt: Resubmit the virtual engine on schedule-out
a28f9e8810aa drm/i915/gt: Simplify virtual engine handling for execlists_hold()
8ddfebc69fcb drm/i915/gt: ce->inflight updates are now serialised
8ccab6df158e drm/i915/gem: Drop free_work for GEM contexts
e4f662259e10 drm/i915/gt: Track the overall awake/busy time
d77dfac7aad3 drm/i915/gt: Track all timelines created using the HWSP
5d9534289170 drm/i915/gt: Wrap intel_timeline.has_initial_breadcrumb
3c050f7bff37 drm/i915/gt: Track timeline GGTT offset separately from subpage 
offset
98e9db354de5 drm/i915/gt: Add timeline "mode"
a3ff02a6c8f0 drm/i915/gt: Use indices for writing into relative timelines
722362f35c0d drm/i915/selftests: Exercise relative timeline modes
99692fe0f319 drm/i915/gt: Use ppHWSP for unshared non-semaphore related 
timelines
937af88e9869 drm/i915/selftests: Confirm RING_TIMESTAMP / CTX_TIMESTAMP share a 
clock
-:88: WARNING:MEMORY_BARRIER: memory barrier without comment
#88: FILE: drivers/gpu/drm/i915/gt/selftest_engine_pm.c:68:
+   wmb();

-:136: CHECK:USLEEP_RANGE: usleep_range is preferred over udelay; see 
Documentation/timers/timers-howto.rst
#136: FILE: drivers/gpu/drm/i915/gt/selftest_engine_pm.c:116:
+   udelay(100);

total: 0 errors, 1 warnings, 1 checks, 221 lines checked
ccb57dc0aafd drm/i915/gt: Consolidate the CS timestamp clocks
801b88f03c21 drm/i915/gt: Prefer recycling an idle fence
f41ace3aacd9 drm/i915/gem: Optimistically prune dma-resv from the shrinker.
-:25: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does 
MAINTAINERS need updating?
#25: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 72 lines checked
e1ba077c4634 drm/i915: Drop i915_request.lock serialisation around await_start
acb070e009e6 drm/i915: Drop i915_request.lock requirement for intel_rps_boost()
0874a9ee121d drm/i915/gem: Reduce ctx->engine_mutex for reading the clone source
ca26a0835509 drm/i915/gem: Reduce ctx->engines_mutex for get_engines()
aba47aad12c0 drm/i915: Reduce test_and_set_bit to set_bit in 
i915_request_submit()
a1dec9510015 drm/i915/gt: Drop atomic for engine->fw_active tracking
04fb23cb64d2 drm/i915/gt: Extract busy-stats for ring-scheduler
-:12: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does 
MAINTAINERS need updating?
#12: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 95 lines checked
f5458bf52a9d drm/i915/gt: Convert stats.active to plain unsigned int
7bbaf73cc381 drm/i915/gt: Refactor heartbeat request construction and submission
19429f06a9d1 drm/i915: Strip out internal priorities
70dd8ff805da drm/i915: Remove I915_USER_PRIORITY_SHIFT
506d0c1b2d3a drm/i915/gt: Defer the kmem_cache_free() until after the HW submit
b0f4da50b0fc drm/i915: Prune empty priolists
b916ef805b71 drm/i915: Replace engine->schedule() with a known request operation
34b62d953f50 drm/i915/gt: Do not suspend bonded requests if one hangs
76805c1d568f drm/i915: Teach the i915_dependency to use a double-lock
c5834625d876 drm/i915: Restructure priority inheritance
622182a65e16 drm/i915/selftests: Measure set-priority duration
-:52: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does 
MAINTAINERS need updating?
#52: 
new file mode 100644

-:433: WARNING:LINE_SPACING: Missing a blank line after declarations
#433: FILE: drivers/gpu/drm/i915/selftests/i915_scheduler.c:377:
+   struct igt_spinner spin;
+   I915_RND_STATE(prng);
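
For reference, LINE_SPACING wants a blank line between the last declaration 
and the first statement. Here it looks like a false positive: as far as I 
can tell I915_RND_STATE() itself expands to a declaration (a seeded struct 
rnd_state), which checkpatch parses as a statement. The rule it is trying 
to enforce, on an invented example:

static int do_work(int *count) { return *count; }

static int run_once(void)
{
	int count = 0;
	int err;

	/* The blank line above is what LINE_SPACING asks for. */
	err = do_work(&count);
	return err;
}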

total: 0 errors, 2 warnings, 0 checks, 686 lines checked
fdf0bf3f6e3e drm/i915/selftests: Exercise priority inheritance around an engine 
loop
3835bb8e7018 drm/i915: Improve DFS for priority inheritance
6463731d3356 drm/i915/gt: Remove timeslice suppression
0c5bc38dac03 drm/i915: Extract request submission from execlists
f0634ef3a719 drm/i915: Extract request suspension from the execlists backend
a5ffb8e4d752 drm/i915: Extract the ability to 

[Intel-gfx] ✗ Fi.CI.SPARSE: warning for series starting with [01/69] drm/i915: Use cmpxchg64 for 32b compatilibity

2020-12-14 Thread Patchwork
== Series Details ==

Series: series starting with [01/69] drm/i915: Use cmpxchg64 for 32b 
compatilibity
URL   : https://patchwork.freedesktop.org/series/84900/
State : warning

== Summary ==

$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:27:9: warning: trying to copy 
expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:27:9: warning: trying to copy 
expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:27:9: warning: trying to copy 
expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:27:9: warning: trying to copy 
expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:27:9: warning: trying to copy 
expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:27:9: warning: trying to copy 
expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:32:9: warning: trying to copy 
expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:32:9: warning: trying to copy 
expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:32:9: warning: trying to copy 
expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:32:9: warning: trying to copy 
expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:49:9: warning: trying to copy 
expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:49:9: warning: trying to copy 
expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:49:9: warning: trying to copy 
expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:49:9: warning: trying to copy 
expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:49:9: warning: trying to copy 
expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:49:9: warning: trying to copy 
expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:56:9: warning: trying to copy 
expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:56:9: warning: trying to copy 
expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:56:9: warning: trying to copy 
expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:56:9: warning: trying to copy 
expression type 31
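
For context, the repeated "trying to copy expression type 31" lines appear 
to be sparse tripping over the _Generic-based seqprop machinery in 
<linux/seqlock.h> (note the matching seqlock.h hits further down) rather 
than a bug in intel_engine_stats.h. A minimal sketch of the usage pattern 
that sets it off, with an invented struct:

#include <linux/seqlock.h>
#include <linux/ktime.h>

struct busy_stats {
	seqcount_t lock;	/* seqcount_init() in real code */
	ktime_t total;
};

static void busy_stats_add(struct busy_stats *s, ktime_t dt)
{
	write_seqcount_begin(&s->lock);	/* expands via _Generic; sparse chokes */
	s->total = ktime_add(s->total, dt);
	write_seqcount_end(&s->lock);
}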
+drivers/gpu/drm/i915/gt/intel_execlists_submission.c:1401:37: warning: 
incorrect type in initializer (different address spaces)
+drivers/gpu/drm/i915/gt/intel_execlists_submission.c:1401:37:    expected 
struct intel_timeline *tl
+drivers/gpu/drm/i915/gt/intel_execlists_submission.c:1401:37:    got struct 
intel_timeline [noderef] __rcu *const volatile
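
The address-space complaint above is sparse's __rcu checking: initialising 
a plain pointer straight from a __rcu pointer silently drops the 
annotation. The usual fix is to go through an RCU accessor; a sketch under 
assumed types (not the actual execlists code):

#include <linux/rcupdate.h>

struct intel_timeline;			/* opaque for this sketch */

struct sketch_engine {
	struct intel_timeline __rcu *timeline;
};

static struct intel_timeline *sketch_peek_timeline(struct sketch_engine *e)
{
	struct intel_timeline *tl;

	/* "tl = e->timeline;" would reproduce the warning; the accessor
	 * strips the __rcu address space and adds the needed ordering.
	 */
	rcu_read_lock();
	tl = rcu_dereference(e->timeline);
	rcu_read_unlock();

	return tl;	/* only safe if the caller holds its own reference */
}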
-./include/linux/seqlock.h:838:24: warning: trying to copy expression type 31
-./include/linux/seqlock.h:838:24: warning: trying to copy expression type 31
-./include/linux/seqlock.h:864:16: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/intel_wakeref.c:137:19: warning: context imbalance in 
'wakeref_auto_timeout' - unexpected unlock
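
The wakeref report is sparse's lock-context tracking: a function that exits 
holding fewer locks than it entered with needs an annotation telling sparse 
the asymmetry is intentional. A generic sketch (names invented):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(sketch_lock);

/* Without __releases(), sparse reports "context imbalance ... unexpected
 * unlock", since the unlock has no matching lock in this function body.
 */
static void sketch_finish(void)
	__releases(sketch_lock)
{
	spin_unlock(&sketch_lock);
}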
+drivers/gpu/drm/i915/selftests/i915_syncmap.c:80:54: warning: dubious: x | !y
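
And "dubious: x | !y" flags a bitwise OR with a boolean operand, which is 
usually a typo for a logical OR (in a selftest it may well be intentional). 
Illustrative only:

#include <linux/types.h>

static bool sketch_should_stop(u32 flags, bool active)
{
	/* "return flags | !active;" draws the warning: !active is 0 or 1,
	 * so OR-ing it into a flags word rarely means anything.  A logical
	 * OR (or an explicit bit test) states the intent:
	 */
	return flags || !active;
}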


___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

