[Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [01/19] drm/i915/execlists: Always clear ring_pause if we do not submit

2019-06-23 Thread Patchwork
== Series Details ==

Series: series starting with [01/19] drm/i915/execlists: Always clear 
ring_pause if we do not submit
URL   : https://patchwork.freedesktop.org/series/62612/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_6331 -> Patchwork_13401


Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13401/

Known issues
------------

  Here are the changes found in Patchwork_13401 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_ctx_param@basic:
- fi-icl-u3:  [PASS][1] -> [DMESG-WARN][2] ([fdo#107724])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6331/fi-icl-u3/igt@gem_ctx_pa...@basic.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13401/fi-icl-u3/igt@gem_ctx_pa...@basic.html

  * igt@i915_pm_rpm@module-reload:
- fi-skl-6770hq:  [PASS][3] -> [FAIL][4] ([fdo#108511])
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6331/fi-skl-6770hq/igt@i915_pm_...@module-reload.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13401/fi-skl-6770hq/igt@i915_pm_...@module-reload.html

  * igt@kms_chamelium@hdmi-hpd-fast:
- fi-kbl-7500u:   [PASS][5] -> [FAIL][6] ([fdo#109485])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6331/fi-kbl-7500u/igt@kms_chamel...@hdmi-hpd-fast.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13401/fi-kbl-7500u/igt@kms_chamel...@hdmi-hpd-fast.html

  
#### Possible fixes ####

  * igt@gem_mmap_gtt@basic-small-bo:
- fi-icl-u3:  [DMESG-WARN][7] ([fdo#107724]) -> [PASS][8]
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6331/fi-icl-u3/igt@gem_mmap_...@basic-small-bo.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13401/fi-icl-u3/igt@gem_mmap_...@basic-small-bo.html

  
  [fdo#107724]: https://bugs.freedesktop.org/show_bug.cgi?id=107724
  [fdo#108511]: https://bugs.freedesktop.org/show_bug.cgi?id=108511
  [fdo#109485]: https://bugs.freedesktop.org/show_bug.cgi?id=109485


Participating hosts (48 -> 41)
------------------------------

  Missing(7): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan fi-icl-y 
fi-bdw-samus fi-skl-6600u 


Build changes
-------------

  * Linux: CI_DRM_6331 -> Patchwork_13401

  CI_DRM_6331: a20afe5511160e9c1ea22b80b3be0226dfb0917a @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5064: 22850c1906550fb97b405c019275dcfb34be8cf7 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_13401: 0b202e3b3bfb51ab4d06fb6e0837281a2dd5fec8 @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

0b202e3b3bfb drm/i915: Replace struct_mutex for batch pool serialisation
e32d77e3c6d1 drm/i915: Protect request retirement with timeline->mutex
7b4beefd667b drm/i915/gt: Drop stale commentary for timeline density
7e3b10829e7d drm/i915/gt: Always call kref_init for the timeline
0e23c4b14977 drm/i915/selftests: Hold ref on request across waits
b99ea82088cd drm/i915/gt: Guard timeline pinning with its own mutex
5cc733c6a4bf drm/i915/gt: Convert timeline tracking to spinlock
04234a447a4d drm/i915/gt: Track timeline activeness in enter/exit
c368b8ca78ee drm/i915: Teach execbuffer to take the engine wakeref not GT
98b27c39f6af drm/i915: Lift intel_engines_resume() to callers
a44002018103 drm/i915: Only recover active engines
e24b23a19ef7 drm/i915: Add a wakeref getter for iff the wakeref is already 
active
997c5179ea07 drm/i915: Rename intel_wakeref_[is]_active
967a4aec42b1 drm/i915/selftests: Fixup atomic reset checking
6adb57162c1b drm/i915/selftest: Drop manual request wakerefs around hangcheck
5a2731bd57c9 drm/i915/selftests: Serialise nop reset with retirement
b87561f35a6f drm/i915/gt: Pass intel_gt to pm routines
bee84a8f5dd3 drm/i915/execlists: Convert recursive defer_request() into an 
iterative
64d75313cafc drm/i915/execlists: Always clear ring_pause if we do not submit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13401/

[Intel-gfx] ✗ Fi.CI.SPARSE: warning for series starting with [01/19] drm/i915/execlists: Always clear ring_pause if we do not submit

2019-06-23 Thread Patchwork
== Series Details ==

Series: series starting with [01/19] drm/i915/execlists: Always clear 
ring_pause if we do not submit
URL   : https://patchwork.freedesktop.org/series/62612/
State : warning

== Summary ==

$ dim sparse origin/drm-tip
Sparse version: v0.5.2
Commit: drm/i915/execlists: Always clear ring_pause if we do not submit
Okay!

Commit: drm/i915/execlists: Convert recursive defer_request() into an iterative
Okay!

Commit: drm/i915/gt: Pass intel_gt to pm routines
Okay!

Commit: drm/i915/selftests: Serialise nop reset with retirement
Okay!

Commit: drm/i915/selftest: Drop manual request wakerefs around hangcheck
Okay!

Commit: drm/i915/selftests: Fixup atomic reset checking
Okay!

Commit: drm/i915: Rename intel_wakeref_[is]_active
+./include/uapi/linux/perf_event.h:147:56: warning: cast truncates bits from 
constant value (8000 becomes 0)

Commit: drm/i915: Add a wakeref getter for iff the wakeref is already active
Okay!

Commit: drm/i915: Only recover active engines
Okay!

Commit: drm/i915: Lift intel_engines_resume() to callers
Okay!

Commit: drm/i915: Teach execbuffer to take the engine wakeref not GT
Okay!

Commit: drm/i915/gt: Track timeline activeness in enter/exit
Okay!

Commit: drm/i915/gt: Convert timeline tracking to spinlock
Okay!

Commit: drm/i915/gt: Guard timeline pinning with its own mutex
Okay!

Commit: drm/i915/selftests: Hold ref on request across waits
Okay!

Commit: drm/i915/gt: Always call kref_init for the timeline
Okay!

Commit: drm/i915/gt: Drop stale commentary for timeline density
Okay!

Commit: drm/i915: Protect request retirement with timeline->mutex
Okay!

Commit: drm/i915: Replace struct_mutex for batch pool serialisation
+./include/uapi/linux/perf_event.h:147:56: warning: cast truncates bits from 
constant value (8000 becomes 0)


[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/19] drm/i915/execlists: Always clear ring_pause if we do not submit

2019-06-23 Thread Patchwork
== Series Details ==

Series: series starting with [01/19] drm/i915/execlists: Always clear 
ring_pause if we do not submit
URL   : https://patchwork.freedesktop.org/series/62612/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
64d75313cafc drm/i915/execlists: Always clear ring_pause if we do not submit
bee84a8f5dd3 drm/i915/execlists: Convert recursive defer_request() into an 
iterative
b87561f35a6f drm/i915/gt: Pass intel_gt to pm routines
5a2731bd57c9 drm/i915/selftests: Serialise nop reset with retirement
6adb57162c1b drm/i915/selftest: Drop manual request wakerefs around hangcheck
967a4aec42b1 drm/i915/selftests: Fixup atomic reset checking
997c5179ea07 drm/i915: Rename intel_wakeref_[is]_active
e24b23a19ef7 drm/i915: Add a wakeref getter for iff the wakeref is already 
active
a44002018103 drm/i915: Only recover active engines
98b27c39f6af drm/i915: Lift intel_engines_resume() to callers
-:215: WARNING:AVOID_BUG: Avoid crashing the kernel - try using WARN_ON & 
recovery code rather than BUG() or BUG_ON()
#215: FILE: drivers/gpu/drm/i915/i915_gem.c:1200:
+   BUG_ON(!i915->kernel_context);

total: 0 errors, 1 warnings, 0 checks, 432 lines checked
c368b8ca78ee drm/i915: Teach execbuffer to take the engine wakeref not GT
04234a447a4d drm/i915/gt: Track timeline activeness in enter/exit
5cc733c6a4bf drm/i915/gt: Convert timeline tracking to spinlock
b99ea82088cd drm/i915/gt: Guard timeline pinning with its own mutex
0e23c4b14977 drm/i915/selftests: Hold ref on request across waits
7e3b10829e7d drm/i915/gt: Always call kref_init for the timeline
7b4beefd667b drm/i915/gt: Drop stale commentary for timeline density
e32d77e3c6d1 drm/i915: Protect request retirement with timeline->mutex
0b202e3b3bfb drm/i915: Replace struct_mutex for batch pool serialisation
-:304: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does 
MAINTAINERS need updating?
#304: 
new file mode 100644

-:309: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier 
tag in line 1
#309: FILE: drivers/gpu/drm/i915/gt/intel_engine_pool.c:1:
+/*

-:310: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use 
line 1 instead
#310: FILE: drivers/gpu/drm/i915/gt/intel_engine_pool.c:2:
+ * SPDX-License-Identifier: MIT

-:479: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier 
tag in line 1
#479: FILE: drivers/gpu/drm/i915/gt/intel_engine_pool.h:1:
+/*

-:480: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use 
line 1 instead
#480: FILE: drivers/gpu/drm/i915/gt/intel_engine_pool.h:2:
+ * SPDX-License-Identifier: MIT

-:519: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier 
tag in line 1
#519: FILE: drivers/gpu/drm/i915/gt/intel_engine_pool_types.h:1:
+/*

-:520: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use 
line 1 instead
#520: FILE: drivers/gpu/drm/i915/gt/intel_engine_pool_types.h:2:
+ * SPDX-License-Identifier: MIT

-:536: CHECK:UNCOMMENTED_DEFINITION: spinlock_t definition without comment
#536: FILE: drivers/gpu/drm/i915/gt/intel_engine_pool_types.h:18:
+   spinlock_t lock;

total: 0 errors, 7 warnings, 1 checks, 593 lines checked


[Intel-gfx] [PATCH 17/19] drm/i915/gt: Drop stale commentary for timeline density

2019-06-23 Thread Chris Wilson
We no longer allocate a contiguous set of timeline ids for all engines
upon creation, so we should no longer assume that the timelines are
densely allocated within a context. Hopefully, they are still dense enough
for us to take advantage of the compressed radix tree.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_timeline.c | 14 ++
 1 file changed, 2 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c 
b/drivers/gpu/drm/i915/gt/intel_timeline.c
index 030bcd2249e9..995406bde27e 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
@@ -210,21 +210,11 @@ int intel_timeline_init(struct intel_timeline *timeline,
 {
void *vaddr;
 
-   /*
-* Ideally we want a set of engines on a single leaf as we expect
-* to mostly be tracking synchronisation between engines. It is not
-* a huge issue if this is not the case, but we may want to mitigate
-* any page crossing penalties if they become an issue.
-*
-* Called during early_init before we know how many engines there are.
-*/
-   BUILD_BUG_ON(KSYNCMAP < I915_NUM_ENGINES);
-
-   timeline->gt = gt;
-
kref_init(&timeline->kref);
atomic_set(&timeline->pin_count, 0);
 
+   timeline->gt = gt;
+
timeline->has_initial_breadcrumb = !hwsp;
timeline->hwsp_cacheline = NULL;
 
-- 
2.20.1


[Intel-gfx] [PATCH 04/19] drm/i915/selftests: Serialise nop reset with retirement

2019-06-23 Thread Chris Wilson
In order for the reset count to be accurate across our selftest, we need
to prevent the background retire worker from modifying our expected
state.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/selftest_hangcheck.c | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c 
b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
index 3ceb397c8645..0e0b6c572ae9 100644
--- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
+++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
@@ -398,6 +398,7 @@ static int igt_reset_nop(void *arg)
count = 0;
do {
mutex_lock(&i915->drm.struct_mutex);
+
for_each_engine(engine, i915, id) {
int i;
 
@@ -413,11 +414,12 @@ static int igt_reset_nop(void *arg)
i915_request_add(rq);
}
}
-   mutex_unlock(&i915->drm.struct_mutex);
 
igt_global_reset_lock(i915);
i915_reset(i915, ALL_ENGINES, NULL);
igt_global_reset_unlock(i915);
+
+   mutex_unlock(&i915->drm.struct_mutex);
if (i915_reset_failed(i915)) {
err = -EIO;
break;
@@ -511,9 +513,8 @@ static int igt_reset_nop_engine(void *arg)
 
i915_request_add(rq);
}
-   mutex_unlock(&i915->drm.struct_mutex);
-
err = i915_reset_engine(engine, NULL);
+   mutex_unlock(&i915->drm.struct_mutex);
if (err) {
pr_err("i915_reset_engine failed\n");
break;
-- 
2.20.1


[Intel-gfx] [PATCH 07/19] drm/i915: Rename intel_wakeref_[is]_active

2019-06-23 Thread Chris Wilson
Our general rule is to use is/has as the verb for boolean functions, so
rename intel_wakeref_active to intel_wakeref_is_active to make the
question being asked clear.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gem/i915_gem_pm.c| 3 ++-
 drivers/gpu/drm/i915/gt/intel_engine_cs.c | 2 +-
 drivers/gpu/drm/i915/gt/intel_engine_pm.h | 9 +
 drivers/gpu/drm/i915/gt/intel_lrc.c   | 2 +-
 drivers/gpu/drm/i915/gt/intel_reset.c | 2 +-
 drivers/gpu/drm/i915/intel_wakeref.h  | 4 ++--
 6 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pm.c 
b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
index ee1f66594a35..6b730bd4d72f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
@@ -54,7 +54,8 @@ static void idle_work_handler(struct work_struct *work)
mutex_lock(&i915->drm.struct_mutex);
 
intel_wakeref_lock(&i915->gt.wakeref);
-   park = !intel_wakeref_active(&i915->gt.wakeref) && !work_pending(work);
+   park = (!intel_wakeref_is_active(&i915->gt.wakeref) &&
+   !work_pending(work));
intel_wakeref_unlock(&i915->gt.wakeref);
if (park)
i915_gem_park(i915);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c 
b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 4961f74fd902..d1508f0b4c84 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -1155,7 +1155,7 @@ bool intel_engine_is_idle(struct intel_engine_cs *engine)
if (i915_reset_failed(engine->i915))
return true;
 
-   if (!intel_wakeref_active(&engine->wakeref))
+   if (!intel_engine_pm_is_awake(engine))
return true;
 
/* Waiting to drain ELSP? */
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.h 
b/drivers/gpu/drm/i915/gt/intel_engine_pm.h
index b326cd993d60..f3f5b031b4a1 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.h
@@ -7,12 +7,21 @@
 #ifndef INTEL_ENGINE_PM_H
 #define INTEL_ENGINE_PM_H
 
+#include "intel_engine_types.h"
+#include "intel_wakeref.h"
+
 struct drm_i915_private;
 struct intel_engine_cs;
 
 void intel_engine_pm_get(struct intel_engine_cs *engine);
 void intel_engine_pm_put(struct intel_engine_cs *engine);
 
+static inline bool
+intel_engine_pm_is_awake(const struct intel_engine_cs *engine)
+{
+   return intel_wakeref_is_active(&engine->wakeref);
+}
+
 void intel_engine_park(struct intel_engine_cs *engine);
 
 void intel_engine_init__pm(struct intel_engine_cs *engine);
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c 
b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 0361dfaacf9a..275b55187a36 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -688,7 +688,7 @@ static void execlists_submit_ports(struct intel_engine_cs 
*engine)
 * that all ELSP are drained i.e. we have processed the CSB,
 * before allowing ourselves to idle and calling intel_runtime_pm_put().
 */
-   GEM_BUG_ON(!intel_wakeref_active(&engine->wakeref));
+   GEM_BUG_ON(!intel_engine_pm_is_awake(engine));
 
/*
 * ELSQ note: the submit queue is not cleared after being submitted
diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c 
b/drivers/gpu/drm/i915/gt/intel_reset.c
index e92054e118cc..8ce92c51564e 100644
--- a/drivers/gpu/drm/i915/gt/intel_reset.c
+++ b/drivers/gpu/drm/i915/gt/intel_reset.c
@@ -1072,7 +1072,7 @@ int i915_reset_engine(struct intel_engine_cs *engine, 
const char *msg)
GEM_TRACE("%s flags=%lx\n", engine->name, error->flags);
GEM_BUG_ON(!test_bit(I915_RESET_ENGINE + engine->id, &error->flags));
 
-   if (!intel_wakeref_active(&engine->wakeref))
+   if (!intel_engine_pm_is_awake(engine))
return 0;
 
reset_prepare_engine(engine);
diff --git a/drivers/gpu/drm/i915/intel_wakeref.h 
b/drivers/gpu/drm/i915/intel_wakeref.h
index d45e78639dc4..f74272770a5c 100644
--- a/drivers/gpu/drm/i915/intel_wakeref.h
+++ b/drivers/gpu/drm/i915/intel_wakeref.h
@@ -128,13 +128,13 @@ intel_wakeref_unlock(struct intel_wakeref *wf)
 }
 
 /**
- * intel_wakeref_active: Query whether the wakeref is currently held
+ * intel_wakeref_is_active: Query whether the wakeref is currently held
  * @wf: the wakeref
  *
  * Returns: true if the wakeref is currently held.
  */
 static inline bool
-intel_wakeref_active(struct intel_wakeref *wf)
+intel_wakeref_is_active(const struct intel_wakeref *wf)
 {
return READ_ONCE(wf->wakeref);
 }
-- 
2.20.1


[Intel-gfx] [PATCH 13/19] drm/i915/gt: Convert timeline tracking to spinlock

2019-06-23 Thread Chris Wilson
Convert the manipulation of the active timeline list to use a spinlock so
that we can perform the updates from underneath a quick interrupt callback.

Signed-off-by: Chris Wilson 
---
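A note on the loop in __i915_gem_unset_wedged() above: a spinlock cannot
be held across the sleeping fence wait, so the walk takes a reference,
drops the lock, waits, and then restarts from the head of the list. A
minimal sketch of that idiom, using hypothetical types rather than the
driver's own structures:

#include <linux/kref.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct tracker {
        spinlock_t lock;                /* protects the active list */
        struct list_head active;
};

struct item {
        struct kref ref;
        struct list_head link;
};

static void item_release(struct kref *ref)
{
        kfree(container_of(ref, struct item, ref));
}

/* wait_on_item() stands in for a sleeping wait, e.g. a fence wait. */
static void drain_active(struct tracker *t,
                         void (*wait_on_item)(struct item *))
{
        struct item *it;

        spin_lock(&t->lock);
        list_for_each_entry(it, &t->active, link) {
                if (!kref_get_unless_zero(&it->ref))
                        continue;       /* being freed, skip it */

                spin_unlock(&t->lock);

                wait_on_item(it);       /* may sleep; lock not held */
                kref_put(&it->ref, item_release);

                /* Reset the cursor so the walk restarts from the head. */
                spin_lock(&t->lock);
                it = list_entry(&t->active, typeof(*it), link);
        }
        spin_unlock(&t->lock);
}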
 drivers/gpu/drm/i915/gt/intel_gt_types.h |  2 +-
 drivers/gpu/drm/i915/gt/intel_reset.c| 13 ++---
 drivers/gpu/drm/i915/gt/intel_timeline.c | 12 +---
 drivers/gpu/drm/i915/i915_gem.c  | 20 ++--
 4 files changed, 26 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gt_types.h 
b/drivers/gpu/drm/i915/gt/intel_gt_types.h
index c03e56628ee2..cfd41e6c54e1 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_types.h
@@ -26,7 +26,7 @@ struct intel_gt {
struct i915_ggtt *ggtt;
 
struct intel_gt_timelines {
-   struct mutex mutex; /* protects list */
+   spinlock_t lock; /* protects active_list */
struct list_head active_list;
 
/* Pack multiple timelines' seqnos into the same page */
diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c 
b/drivers/gpu/drm/i915/gt/intel_reset.c
index adfdb908587f..72002c0f9698 100644
--- a/drivers/gpu/drm/i915/gt/intel_reset.c
+++ b/drivers/gpu/drm/i915/gt/intel_reset.c
@@ -858,6 +858,7 @@ void i915_gem_set_wedged(struct drm_i915_private *i915)
 static bool __i915_gem_unset_wedged(struct drm_i915_private *i915)
 {
struct i915_gpu_error *error = &i915->gpu_error;
+   struct intel_gt_timelines *timelines = &i915->gt.timelines;
struct intel_timeline *tl;
 
if (!test_bit(I915_WEDGED, &error->flags))
@@ -878,14 +879,16 @@ static bool __i915_gem_unset_wedged(struct 
drm_i915_private *i915)
 *
 * No more can be submitted until we reset the wedged bit.
 */
-   mutex_lock(&i915->gt.timelines.mutex);
-   list_for_each_entry(tl, &i915->gt.timelines.active_list, link) {
+   spin_lock(&timelines->lock);
+   list_for_each_entry(tl, &timelines->active_list, link) {
struct i915_request *rq;
 
rq = i915_active_request_get_unlocked(&tl->last_request);
if (!rq)
continue;
 
+   spin_unlock(&timelines->lock);
+
/*
 * All internal dependencies (i915_requests) will have
 * been flushed by the set-wedge, but we may be stuck waiting
@@ -895,8 +898,12 @@ static bool __i915_gem_unset_wedged(struct 
drm_i915_private *i915)
 */
dma_fence_default_wait(&rq->fence, false, MAX_SCHEDULE_TIMEOUT);
i915_request_put(rq);
+
+   /* Restart iteration after droping lock */
+   spin_lock(&timelines->lock);
+   tl = list_entry(&timelines->active_list, typeof(*tl), link);
}
-   mutex_unlock(&i915->gt.timelines.mutex);
+   spin_unlock(&timelines->lock);
 
intel_gt_sanitize(&i915->gt, false);
 
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c 
b/drivers/gpu/drm/i915/gt/intel_timeline.c
index b6bfbdefaf7c..672bccbfd797 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
@@ -273,7 +273,7 @@ static void timelines_init(struct intel_gt *gt)
 {
struct intel_gt_timelines *timelines = >->timelines;
 
-   mutex_init(&timelines->mutex);
+   spin_lock_init(&timelines->lock);
INIT_LIST_HEAD(&timelines->active_list);
 
spin_lock_init(&timelines->hwsp_lock);
@@ -354,9 +354,9 @@ void intel_timeline_enter(struct intel_timeline *tl)
return;
GEM_BUG_ON(!tl->active_count); /* overflow? */
 
-   mutex_lock(&timelines->mutex);
+   spin_lock(&timelines->lock);
list_add(&tl->link, &timelines->active_list);
-   mutex_unlock(&timelines->mutex);
+   spin_unlock(&timelines->lock);
 }
 
 void intel_timeline_exit(struct intel_timeline *tl)
@@ -367,9 +367,9 @@ void intel_timeline_exit(struct intel_timeline *tl)
if (--tl->active_count)
return;
 
-   mutex_lock(&timelines->mutex);
+   spin_lock(&timelines->lock);
list_del(&tl->link);
-   mutex_unlock(&timelines->mutex);
+   spin_unlock(&timelines->lock);
 
/*
 * Since this timeline is idle, all bariers upon which we were waiting
@@ -557,8 +557,6 @@ static void timelines_fini(struct intel_gt *gt)
 
GEM_BUG_ON(!list_empty(&timelines->active_list));
GEM_BUG_ON(!list_empty(&timelines->hwsp_free_list));
-
-   mutex_destroy(&timelines->mutex);
 }
 
 void intel_timelines_fini(struct drm_i915_private *i915)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 5cc3a75d521a..7e390d46ad2e 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -905,20 +905,20 @@ static int wait_for_engines(struct drm_i915_private *i915)
 
 static long
 wait_for_timelines(struct drm_i915_private *i915,
-  

[Intel-gfx] [PATCH 11/19] drm/i915: Teach execbuffer to take the engine wakeref not GT

2019-06-23 Thread Chris Wilson
In the next patch, we would like to couple into the engine wakeref to
free the batch pool on idling. The caveat here is that we therefore want
to track the engine wakeref more precisely and to hold it instead of the
broader GT wakeref as we process the ioctl.

Signed-off-by: Chris Wilson 
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c| 36 ---
 drivers/gpu/drm/i915/gt/intel_context.h   |  7 
 2 files changed, 31 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c 
b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 1c5dfbfad71b..f43eaaa5db5f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -2143,13 +2143,35 @@ static int eb_pin_context(struct i915_execbuffer *eb, 
struct intel_context *ce)
if (err)
return err;
 
+   /*
+* Take a local wakeref for preparing to dispatch the execbuf as
+* we expect to access the hardware fairly frequently in the
+* process. Upon first dispatch, we acquire another prolonged
+* wakeref that we hold until the GPU has been idle for at least
+* 100ms.
+*/
+   err = intel_context_timeline_lock(ce);
+   if (err)
+   goto err_unpin;
+
+   intel_context_enter(ce);
+   intel_context_timeline_unlock(ce);
+
eb->engine = ce->engine;
eb->context = ce;
return 0;
+
+err_unpin:
+   intel_context_unpin(ce);
+   return err;
 }
 
 static void eb_unpin_context(struct i915_execbuffer *eb)
 {
+   __intel_context_timeline_lock(eb->context);
+   intel_context_exit(eb->context);
+   intel_context_timeline_unlock(eb->context);
+
intel_context_unpin(eb->context);
 }
 
@@ -2430,18 +2452,9 @@ i915_gem_do_execbuffer(struct drm_device *dev,
if (unlikely(err))
goto err_destroy;
 
-   /*
-* Take a local wakeref for preparing to dispatch the execbuf as
-* we expect to access the hardware fairly frequently in the
-* process. Upon first dispatch, we acquire another prolonged
-* wakeref that we hold until the GPU has been idle for at least
-* 100ms.
-*/
-   intel_gt_pm_get(&eb.i915->gt);
-
err = i915_mutex_lock_interruptible(dev);
if (err)
-   goto err_rpm;
+   goto err_context;
 
err = eb_select_engine(&eb, file, args);
if (unlikely(err))
@@ -2606,8 +2619,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
eb_unpin_context(&eb);
 err_unlock:
mutex_unlock(&dev->struct_mutex);
-err_rpm:
-   intel_gt_pm_put(&eb.i915->gt);
+err_context:
i915_gem_context_put(eb.gem_context);
 err_destroy:
eb_destroy(&eb);
diff --git a/drivers/gpu/drm/i915/gt/intel_context.h 
b/drivers/gpu/drm/i915/gt/intel_context.h
index 40cd8320fcc3..065ba4ac4e87 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.h
+++ b/drivers/gpu/drm/i915/gt/intel_context.h
@@ -126,6 +126,13 @@ static inline void intel_context_put(struct intel_context 
*ce)
kref_put(&ce->ref, ce->ops->destroy);
 }
 
+static inline void
+__intel_context_timeline_lock(struct intel_context *ce)
+   __acquires(&ce->ring->timeline->mutex)
+{
+   mutex_lock(&ce->ring->timeline->mutex);
+}
+
 static inline int __must_check
 intel_context_timeline_lock(struct intel_context *ce)
__acquires(&ce->ring->timeline->mutex)
-- 
2.20.1


[Intel-gfx] [PATCH 10/19] drm/i915: Lift intel_engines_resume() to callers

2019-06-23 Thread Chris Wilson
Since the reset path wants to recover the engines itself, it only wants
to reinitialise the hardware using i915_gem_init_hw(). Pull the call to
intel_engines_resume() to the module init/resume path so we can avoid it
during reset.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gem/i915_gem_pm.c|   7 +-
 drivers/gpu/drm/i915/gt/intel_engine_pm.c |  24 ---
 drivers/gpu/drm/i915/gt/intel_engine_pm.h |   2 -
 drivers/gpu/drm/i915/gt/intel_gt_pm.c |  21 ++-
 drivers/gpu/drm/i915/gt/intel_gt_pm.h |   2 +-
 drivers/gpu/drm/i915/gt/intel_reset.c |  21 ++-
 drivers/gpu/drm/i915/i915_gem.c   | 173 +-
 7 files changed, 116 insertions(+), 134 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pm.c 
b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
index 6b730bd4d72f..4d774376f5b8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
@@ -254,14 +254,15 @@ void i915_gem_resume(struct drm_i915_private *i915)
i915_gem_restore_gtt_mappings(i915);
i915_gem_restore_fences(i915);
 
+   if (i915_gem_init_hw(i915))
+   goto err_wedged;
+
/*
 * As we didn't flush the kernel context before suspend, we cannot
 * guarantee that the context image is complete. So let's just reset
 * it and start again.
 */
-   intel_gt_resume(&i915->gt);
-
-   if (i915_gem_init_hw(i915))
+   if (intel_gt_resume(&i915->gt))
goto err_wedged;
 
intel_uc_resume(i915);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c 
b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index 5253c382034d..84e432abe8e0 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -142,27 +142,3 @@ void intel_engine_init__pm(struct intel_engine_cs *engine)
 {
intel_wakeref_init(&engine->wakeref);
 }
-
-int intel_engines_resume(struct drm_i915_private *i915)
-{
-   struct intel_engine_cs *engine;
-   enum intel_engine_id id;
-   int err = 0;
-
-   intel_gt_pm_get(&i915->gt);
-   for_each_engine(engine, i915, id) {
-   intel_engine_pm_get(engine);
-   engine->serial++; /* kernel context lost */
-   err = engine->resume(engine);
-   intel_engine_pm_put(engine);
-   if (err) {
-   dev_err(i915->drm.dev,
-   "Failed to restart %s (%d)\n",
-   engine->name, err);
-   break;
-   }
-   }
-   intel_gt_pm_put(&i915->gt);
-
-   return err;
-}
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.h 
b/drivers/gpu/drm/i915/gt/intel_engine_pm.h
index 7d057cdcd919..015ac72d7ad0 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.h
@@ -31,6 +31,4 @@ void intel_engine_park(struct intel_engine_cs *engine);
 
 void intel_engine_init__pm(struct intel_engine_cs *engine);
 
-int intel_engines_resume(struct drm_i915_private *i915);
-
 #endif /* INTEL_ENGINE_PM_H */
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c 
b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
index ec6b69d014b6..36ba80e6a0b7 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
@@ -5,6 +5,7 @@
  */
 
 #include "i915_drv.h"
+#include "intel_engine_pm.h"
 #include "intel_gt_pm.h"
 #include "intel_pm.h"
 #include "intel_wakeref.h"
@@ -122,10 +123,11 @@ void intel_gt_sanitize(struct intel_gt *gt, bool force)
intel_engine_reset(engine, false);
 }
 
-void intel_gt_resume(struct intel_gt *gt)
+int intel_gt_resume(struct intel_gt *gt)
 {
struct intel_engine_cs *engine;
enum intel_engine_id id;
+   int err = 0;
 
/*
 * After resume, we may need to poke into the pinned kernel
@@ -133,9 +135,12 @@ void intel_gt_resume(struct intel_gt *gt)
 * Only the kernel contexts should remain pinned over suspend,
 * allowing us to fixup the user contexts on their first pin.
 */
+   intel_gt_pm_get(gt);
for_each_engine(engine, gt->i915, id) {
struct intel_context *ce;
 
+   intel_engine_pm_get(engine);
+
ce = engine->kernel_context;
if (ce)
ce->ops->reset(ce);
@@ -143,5 +148,19 @@ void intel_gt_resume(struct intel_gt *gt)
ce = engine->preempt_context;
if (ce)
ce->ops->reset(ce);
+
+   engine->serial++; /* kernel context lost */
+   err = engine->resume(engine);
+
+   intel_engine_pm_put(engine);
+   if (err) {
+   dev_err(gt->i915->drm.dev,
+   "Failed to restart %s (%d)\n",
+   engine->name, err);
+   break;
+   }
}
+   intel_gt_pm_put

[Intel-gfx] [PATCH 14/19] drm/i915/gt: Guard timeline pinning with its own mutex

2019-06-23 Thread Chris Wilson
In preparation for removing struct_mutex from around context retirement,
we need to make timeline pinning safe. Since multiple engines/contexts
can share a single timeline, it needs to be protected by a mutex.

Signed-off-by: Chris Wilson 
---
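The fast path above uses atomic_add_unless() to take an extra pin
without any lock, leaving the heavier setup to the 0 -> 1 transition
only. A simplified sketch of that shape, serialising the slow path with
a dedicated mutex (hypothetical helpers, not the exact lock-free slow
path used in the patch):

#include <linux/atomic.h>
#include <linux/mutex.h>

struct pinnable {
        struct mutex mutex;     /* guards first-pin/last-unpin transitions */
        atomic_t pin_count;
};

/* do_pin()/do_unpin() stand in for the expensive setup and teardown. */
static int pin(struct pinnable *p, int (*do_pin)(struct pinnable *))
{
        int err = 0;

        /* Fast path: already pinned, just take another reference. */
        if (atomic_add_unless(&p->pin_count, 1, 0))
                return 0;

        /* Slow path: serialise the 0 -> 1 transition. */
        mutex_lock(&p->mutex);
        if (!atomic_read(&p->pin_count)) {
                err = do_pin(p);
                if (!err)
                        atomic_set(&p->pin_count, 1);
        } else {
                atomic_inc(&p->pin_count);
        }
        mutex_unlock(&p->mutex);

        return err;
}

static void unpin(struct pinnable *p, void (*do_unpin)(struct pinnable *))
{
        /* The last unpin tears down under the same mutex as the first pin. */
        if (atomic_dec_and_mutex_lock(&p->pin_count, &p->mutex)) {
                do_unpin(p);
                mutex_unlock(&p->mutex);
        }
}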
 drivers/gpu/drm/i915/gt/intel_timeline.c  | 27 +--
 .../gpu/drm/i915/gt/intel_timeline_types.h|  2 +-
 drivers/gpu/drm/i915/gt/mock_engine.c |  6 ++---
 3 files changed, 16 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c 
b/drivers/gpu/drm/i915/gt/intel_timeline.c
index 672bccbfd797..d05875ff2071 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
@@ -221,7 +221,9 @@ int intel_timeline_init(struct intel_timeline *timeline,
BUILD_BUG_ON(KSYNCMAP < I915_NUM_ENGINES);
 
timeline->gt = gt;
-   timeline->pin_count = 0;
+
+   atomic_set(&timeline->pin_count, 0);
+
timeline->has_initial_breadcrumb = !hwsp;
timeline->hwsp_cacheline = NULL;
 
@@ -287,7 +289,7 @@ void intel_timelines_init(struct drm_i915_private *i915)
 
 void intel_timeline_fini(struct intel_timeline *timeline)
 {
-   GEM_BUG_ON(timeline->pin_count);
+   GEM_BUG_ON(atomic_read(&timeline->pin_count));
GEM_BUG_ON(!list_empty(&timeline->requests));
 
if (timeline->hwsp_cacheline)
@@ -323,33 +325,29 @@ int intel_timeline_pin(struct intel_timeline *tl)
 {
int err;
 
-   if (tl->pin_count++)
+   if (atomic_add_unless(&tl->pin_count, 1, 0))
return 0;
-   GEM_BUG_ON(!tl->pin_count);
-   GEM_BUG_ON(tl->active_count);
 
err = i915_vma_pin(tl->hwsp_ggtt, 0, 0, PIN_GLOBAL | PIN_HIGH);
if (err)
-   goto unpin;
+   return err;
 
tl->hwsp_offset =
i915_ggtt_offset(tl->hwsp_ggtt) +
offset_in_page(tl->hwsp_offset);
 
cacheline_acquire(tl->hwsp_cacheline);
+   if (atomic_fetch_inc(&tl->pin_count))
+   cacheline_release(tl->hwsp_cacheline);
 
return 0;
-
-unpin:
-   tl->pin_count = 0;
-   return err;
 }
 
 void intel_timeline_enter(struct intel_timeline *tl)
 {
struct intel_gt_timelines *timelines = &tl->gt->timelines;
 
-   GEM_BUG_ON(!tl->pin_count);
+   GEM_BUG_ON(!atomic_read(&tl->pin_count));
if (tl->active_count++)
return;
GEM_BUG_ON(!tl->active_count); /* overflow? */
@@ -381,7 +379,7 @@ void intel_timeline_exit(struct intel_timeline *tl)
 
 static u32 timeline_advance(struct intel_timeline *tl)
 {
-   GEM_BUG_ON(!tl->pin_count);
+   GEM_BUG_ON(!atomic_read(&tl->pin_count));
GEM_BUG_ON(tl->seqno & tl->has_initial_breadcrumb);
 
return tl->seqno += 1 + tl->has_initial_breadcrumb;
@@ -532,11 +530,10 @@ int intel_timeline_read_hwsp(struct i915_request *from,
 
 void intel_timeline_unpin(struct intel_timeline *tl)
 {
-   GEM_BUG_ON(!tl->pin_count);
-   if (--tl->pin_count)
+   GEM_BUG_ON(!atomic_read(&tl->pin_count));
+   if (!atomic_dec_and_test(&tl->pin_count))
return;
 
-   GEM_BUG_ON(tl->active_count);
cacheline_release(tl->hwsp_cacheline);
 
__i915_vma_unpin(tl->hwsp_ggtt);
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline_types.h 
b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
index b820ee76b7f5..8dd14a2b8781 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_timeline_types.h
@@ -25,7 +25,7 @@ struct intel_timeline {
 
struct mutex mutex; /* protects the flow of requests */
 
-   unsigned int pin_count;
+   atomic_t pin_count;
const u32 *hwsp_seqno;
struct i915_vma *hwsp_ggtt;
u32 hwsp_offset;
diff --git a/drivers/gpu/drm/i915/gt/mock_engine.c 
b/drivers/gpu/drm/i915/gt/mock_engine.c
index 490ebd121f4c..a48b36d31e65 100644
--- a/drivers/gpu/drm/i915/gt/mock_engine.c
+++ b/drivers/gpu/drm/i915/gt/mock_engine.c
@@ -38,13 +38,13 @@ struct mock_ring {
 
 static void mock_timeline_pin(struct intel_timeline *tl)
 {
-   tl->pin_count++;
+   atomic_inc(&tl->pin_count);
 }
 
 static void mock_timeline_unpin(struct intel_timeline *tl)
 {
-   GEM_BUG_ON(!tl->pin_count);
-   tl->pin_count--;
+   GEM_BUG_ON(!atomic_read(&tl->pin_count));
+   atomic_dec(&tl->pin_count);
 }
 
 static struct intel_ring *mock_ring(struct intel_engine_cs *engine)
-- 
2.20.1


[Intel-gfx] [PATCH 12/19] drm/i915/gt: Track timeline activeness in enter/exit

2019-06-23 Thread Chris Wilson
Lift moving the timeline to/from the active_list on enter/exit in order
to shorten the active tracking span in comparison to the existing
pin/unpin.

Signed-off-by: Chris Wilson 
---
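The tracking added here amounts to a per-timeline active counter: the
first enter puts the timeline on the GT's active list and the last exit
takes it back off, so the active span is bounded by request activity
rather than by pin/unpin. A minimal sketch with hypothetical names (a
later patch in the series moves the list under a spinlock):

#include <linux/list.h>
#include <linux/mutex.h>

struct gt_timelines {
        struct mutex mutex;             /* protects active_list */
        struct list_head active_list;
};

struct timeline {
        struct gt_timelines *gt;
        unsigned int active_count;      /* serialised by the caller */
        struct list_head link;
};

static void timeline_enter(struct timeline *tl)
{
        if (tl->active_count++)
                return;                 /* already tracked as active */

        mutex_lock(&tl->gt->mutex);
        list_add(&tl->link, &tl->gt->active_list);
        mutex_unlock(&tl->gt->mutex);
}

static void timeline_exit(struct timeline *tl)
{
        if (--tl->active_count)
                return;                 /* still in use elsewhere */

        mutex_lock(&tl->gt->mutex);
        list_del(&tl->link);
        mutex_unlock(&tl->gt->mutex);
}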
 drivers/gpu/drm/i915/gem/i915_gem_pm.c|  1 -
 drivers/gpu/drm/i915/gt/intel_context.c   |  2 +
 drivers/gpu/drm/i915/gt/intel_engine_pm.c |  1 +
 drivers/gpu/drm/i915/gt/intel_lrc.c   |  4 +
 drivers/gpu/drm/i915/gt/intel_timeline.c  | 98 +++
 drivers/gpu/drm/i915/gt/intel_timeline.h  |  3 +-
 .../gpu/drm/i915/gt/intel_timeline_types.h|  1 +
 drivers/gpu/drm/i915/gt/selftest_timeline.c   |  2 -
 8 files changed, 46 insertions(+), 66 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pm.c 
b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
index 4d774376f5b8..93d188526457 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
@@ -38,7 +38,6 @@ static void i915_gem_park(struct drm_i915_private *i915)
i915_gem_batch_pool_fini(&engine->batch_pool);
}
 
-   intel_timelines_park(i915);
i915_vma_parked(i915);
 
i915_globals_park();
diff --git a/drivers/gpu/drm/i915/gt/intel_context.c 
b/drivers/gpu/drm/i915/gt/intel_context.c
index 938dd032b820..bc59f57450a7 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -222,10 +222,12 @@ int __init i915_global_context_init(void)
 void intel_context_enter_engine(struct intel_context *ce)
 {
intel_engine_pm_get(ce->engine);
+   intel_timeline_enter(ce->ring->timeline);
 }
 
 void intel_context_exit_engine(struct intel_context *ce)
 {
+   intel_timeline_exit(ce->ring->timeline);
intel_engine_pm_put(ce->engine);
 }
 
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c 
b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index 84e432abe8e0..9751a02d86bc 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -88,6 +88,7 @@ static bool switch_to_kernel_context(struct intel_engine_cs 
*engine)
 
/* Check again on the next retirement. */
engine->wakeref_serial = engine->serial + 1;
+   intel_timeline_enter(rq->timeline);
 
i915_request_add_barriers(rq);
__i915_request_commit(rq);
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c 
b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 275b55187a36..67153b99eac5 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -3168,6 +3168,8 @@ static void virtual_context_enter(struct intel_context 
*ce)
 
for (n = 0; n < ve->num_siblings; n++)
intel_engine_pm_get(ve->siblings[n]);
+
+   intel_timeline_enter(ce->ring->timeline);
 }
 
 static void virtual_context_exit(struct intel_context *ce)
@@ -3175,6 +3177,8 @@ static void virtual_context_exit(struct intel_context *ce)
struct virtual_engine *ve = container_of(ce, typeof(*ve), context);
unsigned int n;
 
+   intel_timeline_exit(ce->ring->timeline);
+
for (n = 0; n < ve->num_siblings; n++)
intel_engine_pm_put(ve->siblings[n]);
 }
diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c 
b/drivers/gpu/drm/i915/gt/intel_timeline.c
index 478258274986..b6bfbdefaf7c 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
@@ -285,64 +285,11 @@ void intel_timelines_init(struct drm_i915_private *i915)
timelines_init(&i915->gt);
 }
 
-static void timeline_add_to_active(struct intel_timeline *tl)
-{
-   struct intel_gt_timelines *gt = &tl->gt->timelines;
-
-   mutex_lock(>->mutex);
-   list_add(&tl->link, >->active_list);
-   mutex_unlock(>->mutex);
-}
-
-static void timeline_remove_from_active(struct intel_timeline *tl)
-{
-   struct intel_gt_timelines *gt = &tl->gt->timelines;
-
-   mutex_lock(>->mutex);
-   list_del(&tl->link);
-   mutex_unlock(>->mutex);
-}
-
-static void timelines_park(struct intel_gt *gt)
-{
-   struct intel_gt_timelines *timelines = >->timelines;
-   struct intel_timeline *timeline;
-
-   mutex_lock(&timelines->mutex);
-   list_for_each_entry(timeline, &timelines->active_list, link) {
-   /*
-* All known fences are completed so we can scrap
-* the current sync point tracking and start afresh,
-* any attempt to wait upon a previous sync point
-* will be skipped as the fence was signaled.
-*/
-   i915_syncmap_free(&timeline->sync);
-   }
-   mutex_unlock(&timelines->mutex);
-}
-
-/**
- * intel_timelines_park - called when the driver idles
- * @i915: the drm_i915_private device
- *
- * When the driver is completely idle, we know that all of our sync points
- * have been signaled and our tracking is then entirely redundant. Any request
- * to wait upon an older sync point will be completed instantly as we know

[Intel-gfx] [PATCH 03/19] drm/i915/gt: Pass intel_gt to pm routines

2019-06-23 Thread Chris Wilson
Switch from passing the i915 container to the newly named struct intel_gt.

Signed-off-by: Chris Wilson 
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c|  4 ++--
 drivers/gpu/drm/i915/gem/i915_gem_pm.c|  2 +-
 .../drm/i915/gem/selftests/i915_gem_mman.c|  4 ++--
 drivers/gpu/drm/i915/gt/intel_engine_pm.c |  8 +++
 drivers/gpu/drm/i915/gt/intel_gt_pm.c | 24 +++
 drivers/gpu/drm/i915/gt/intel_gt_pm.h |  9 ---
 drivers/gpu/drm/i915/gt/intel_reset.c |  6 ++---
 drivers/gpu/drm/i915/i915_drv.c   |  2 +-
 drivers/gpu/drm/i915/i915_gem.c   |  2 +-
 drivers/gpu/drm/i915/selftests/i915_gem.c |  2 +-
 10 files changed, 33 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c 
b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index cf8edb6822ee..1c5dfbfad71b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -2437,7 +2437,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 * wakeref that we hold until the GPU has been idle for at least
 * 100ms.
 */
-   intel_gt_pm_get(eb.i915);
+   intel_gt_pm_get(&eb.i915->gt);
 
err = i915_mutex_lock_interruptible(dev);
if (err)
@@ -2607,7 +2607,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 err_unlock:
mutex_unlock(&dev->struct_mutex);
 err_rpm:
-   intel_gt_pm_put(eb.i915);
+   intel_gt_pm_put(&eb.i915->gt);
i915_gem_context_put(eb.gem_context);
 err_destroy:
eb_destroy(&eb);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pm.c 
b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
index 8f721cf0ab99..ee1f66594a35 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
@@ -258,7 +258,7 @@ void i915_gem_resume(struct drm_i915_private *i915)
 * guarantee that the context image is complete. So let's just reset
 * it and start again.
 */
-   intel_gt_resume(i915);
+   intel_gt_resume(&i915->gt);
 
if (i915_gem_init_hw(i915))
goto err_wedged;
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c 
b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 24a3c677ccd5..a1f0b235f56b 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -379,7 +379,7 @@ static void disable_retire_worker(struct drm_i915_private 
*i915)
 {
i915_gem_shrinker_unregister(i915);
 
-   intel_gt_pm_get(i915);
+   intel_gt_pm_get(&i915->gt);
 
cancel_delayed_work_sync(&i915->gem.retire_work);
flush_work(&i915->gem.idle_work);
@@ -387,7 +387,7 @@ static void disable_retire_worker(struct drm_i915_private 
*i915)
 
 static void restore_retire_worker(struct drm_i915_private *i915)
 {
-   intel_gt_pm_put(i915);
+   intel_gt_pm_put(&i915->gt);
 
mutex_lock(&i915->drm.struct_mutex);
igt_flush_test(i915, I915_WAIT_LOCKED);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c 
b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index 2ce00d3dc42a..5253c382034d 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -18,7 +18,7 @@ static int __engine_unpark(struct intel_wakeref *wf)
 
GEM_TRACE("%s\n", engine->name);
 
-   intel_gt_pm_get(engine->i915);
+   intel_gt_pm_get(engine->gt);
 
/* Pin the default state for fast resets from atomic context. */
map = NULL;
@@ -129,7 +129,7 @@ static int __engine_park(struct intel_wakeref *wf)
 
engine->execlists.no_priolist = false;
 
-   intel_gt_pm_put(engine->i915);
+   intel_gt_pm_put(engine->gt);
return 0;
 }
 
@@ -149,7 +149,7 @@ int intel_engines_resume(struct drm_i915_private *i915)
enum intel_engine_id id;
int err = 0;
 
-   intel_gt_pm_get(i915);
+   intel_gt_pm_get(&i915->gt);
for_each_engine(engine, i915, id) {
intel_engine_pm_get(engine);
engine->serial++; /* kernel context lost */
@@ -162,7 +162,7 @@ int intel_engines_resume(struct drm_i915_private *i915)
break;
}
}
-   intel_gt_pm_put(i915);
+   intel_gt_pm_put(&i915->gt);
 
return err;
 }
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c 
b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
index 6062840b5b46..ec6b69d014b6 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
@@ -50,9 +50,11 @@ static int intel_gt_unpark(struct intel_wakeref *wf)
return 0;
 }
 
-void intel_gt_pm_get(struct drm_i915_private *i915)
+void intel_gt_pm_get(struct intel_gt *gt)
 {
-   intel_wakeref_get(&i915->runtime_pm, &i915->gt.wakeref, 
intel_gt_unpark);
+   struct intel_runtime_pm *rpm = >->i915->runtime_pm;
+
+   intel_wakeref_get(rpm, >->wakeref, intel_gt_unpark);
 }
 
 

[Intel-gfx] [PATCH 19/19] drm/i915: Replace struct_mutex for batch pool serialisation

2019-06-23 Thread Chris Wilson
Switch to tracking activity via i915_active on individual nodes, only
keeping a list of retired objects in the cache, and reaping the cache
when the engine itself idles.

Signed-off-by: Chris Wilson 
---
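For orientation, the new intel_engine_pool keeps per-engine buffer
nodes whose activity is tracked with i915_active, returning nodes to a
free list once the requests using them have retired. The rough shape of
such a pool, reduced to a lock plus a first-fit free list (hypothetical
names, not the driver's API):

#include <linux/err.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct pool {
        spinlock_t lock;                /* protects free_list */
        struct list_head free_list;
};

struct pool_node {
        struct pool *pool;
        struct list_head link;
        size_t size;
};

static struct pool_node *pool_get(struct pool *p, size_t size)
{
        struct pool_node *node = NULL, *it;

        spin_lock(&p->lock);
        list_for_each_entry(it, &p->free_list, link) {
                if (it->size >= size) {         /* first fit */
                        list_del(&it->link);
                        node = it;
                        break;
                }
        }
        spin_unlock(&p->lock);

        if (!node) {
                node = kzalloc(sizeof(*node), GFP_KERNEL);
                if (!node)
                        return ERR_PTR(-ENOMEM);
                node->pool = p;
                node->size = size;
        }
        return node;
}

/* Called once the request that used the node has retired. */
static void pool_put(struct pool_node *node)
{
        struct pool *p = node->pool;

        spin_lock(&p->lock);
        list_add(&node->link, &p->free_list);
        spin_unlock(&p->lock);
}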
 drivers/gpu/drm/i915/Makefile |   2 +-
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c|  58 ---
 drivers/gpu/drm/i915/gem/i915_gem_object.c|   1 -
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |   1 -
 drivers/gpu/drm/i915/gem/i915_gem_pm.c|   4 +-
 drivers/gpu/drm/i915/gt/intel_engine.h|   1 -
 drivers/gpu/drm/i915/gt/intel_engine_cs.c |  11 +-
 drivers/gpu/drm/i915/gt/intel_engine_pm.c |   2 +
 drivers/gpu/drm/i915/gt/intel_engine_pool.c   | 164 ++
 drivers/gpu/drm/i915/gt/intel_engine_pool.h   |  34 
 .../gpu/drm/i915/gt/intel_engine_pool_types.h |  29 
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |   6 +-
 drivers/gpu/drm/i915/gt/mock_engine.c |   3 +
 drivers/gpu/drm/i915/i915_debugfs.c   |  68 
 drivers/gpu/drm/i915/i915_gem_batch_pool.h|  26 ---
 15 files changed, 277 insertions(+), 133 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/gt/intel_engine_pool.c
 create mode 100644 drivers/gpu/drm/i915/gt/intel_engine_pool.h
 create mode 100644 drivers/gpu/drm/i915/gt/intel_engine_pool_types.h
 delete mode 100644 drivers/gpu/drm/i915/i915_gem_batch_pool.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 84ac0fd1b8d0..ad8b9f1887a0 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -72,6 +72,7 @@ obj-y += gt/
 gt-y += \
gt/intel_breadcrumbs.o \
gt/intel_context.o \
+   gt/intel_engine_pool.o \
gt/intel_engine_cs.o \
gt/intel_engine_pm.o \
gt/intel_gt.o \
@@ -118,7 +119,6 @@ i915-y += \
  $(gem-y) \
  i915_active.o \
  i915_cmd_parser.o \
- i915_gem_batch_pool.o \
  i915_gem_evict.o \
  i915_gem_fence_reg.o \
  i915_gem_gtt.o \
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c 
b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 80c9c57a302f..0ea2d49bc8b9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -16,6 +16,7 @@
 
 #include "gem/i915_gem_ioctls.h"
 #include "gt/intel_context.h"
+#include "gt/intel_engine_pool.h"
 #include "gt/intel_gt.h"
 #include "gt/intel_gt_pm.h"
 
@@ -1145,25 +1146,26 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
 unsigned int len)
 {
struct reloc_cache *cache = &eb->reloc_cache;
-   struct drm_i915_gem_object *obj;
+   struct intel_engine_pool_node *pool;
struct i915_request *rq;
struct i915_vma *batch;
u32 *cmd;
int err;
 
-   obj = i915_gem_batch_pool_get(&eb->engine->batch_pool, PAGE_SIZE);
-   if (IS_ERR(obj))
-   return PTR_ERR(obj);
+   pool = intel_engine_pool_get(&eb->engine->pool, PAGE_SIZE);
+   if (IS_ERR(pool))
+   return PTR_ERR(pool);
 
-   cmd = i915_gem_object_pin_map(obj,
+   cmd = i915_gem_object_pin_map(pool->obj,
  cache->has_llc ?
  I915_MAP_FORCE_WB :
  I915_MAP_FORCE_WC);
-   i915_gem_object_unpin_pages(obj);
-   if (IS_ERR(cmd))
-   return PTR_ERR(cmd);
+   if (IS_ERR(cmd)) {
+   err = PTR_ERR(cmd);
+   goto out_pool;
+   }
 
-   batch = i915_vma_instance(obj, vma->vm, NULL);
+   batch = i915_vma_instance(pool->obj, vma->vm, NULL);
if (IS_ERR(batch)) {
err = PTR_ERR(batch);
goto err_unmap;
@@ -1179,6 +1181,10 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
goto err_unpin;
}
 
+   err = intel_engine_pool_mark_active(pool, rq);
+   if (err)
+   goto err_request;
+
err = reloc_move_to_gpu(rq, vma);
if (err)
goto err_request;
@@ -1204,7 +1210,7 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
cache->rq_size = 0;
 
/* Return with batch mapping (cmd) still pinned */
-   return 0;
+   goto out_pool;
 
 skip_request:
i915_request_skip(rq, err);
@@ -1213,7 +1219,9 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
 err_unpin:
i915_vma_unpin(batch);
 err_unmap:
-   i915_gem_object_unpin_map(obj);
+   i915_gem_object_unpin_map(pool->obj);
+out_pool:
+   intel_engine_pool_put(pool);
return err;
 }
 
@@ -1957,18 +1965,17 @@ static int i915_reset_gen7_sol_offsets(struct 
i915_request *rq)
 
 static struct i915_vma *eb_parse(struct i915_execbuffer *eb, bool is_master)
 {
-   struct drm_i915_gem_object *shadow_batch_obj;
+   struct intel_engine_pool_node *pool;
struct i915_vma *vma;
int err;
 
-   

[Intel-gfx] [PATCH 06/19] drm/i915/selftests: Fixup atomic reset checking

2019-06-23 Thread Chris Wilson
We require that intel_gpu_reset() be atomic, not the whole of
i915_reset(), which is guarded by a mutex. However, we do require that
i915_reset_engine() be atomic for use from within the submission tasklet.

Signed-off-by: Chris Wilson 
---
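The atomic phases mentioned above wrap the reset call in different
kinds of atomic sections so that any sleeping call underneath is caught
by the kernel's atomic-sleep debugging. A stripped-down sketch of that
approach (hypothetical phase table, loosely modelled on
igt_atomic_phases):

#include <linux/irqflags.h>
#include <linux/preempt.h>

struct atomic_phase {
        const char *name;
        void (*begin)(void);
        void (*end)(void);
};

static void irq_begin(void) { local_irq_disable(); }
static void irq_end(void) { local_irq_enable(); }
static void preempt_begin(void) { preempt_disable(); }
static void preempt_end(void) { preempt_enable(); }

static const struct atomic_phase phases[] = {
        { "irq", irq_begin, irq_end },
        { "preempt", preempt_begin, preempt_end },
        { }
};

/* op() must be callable from atomic context, i.e. it must not sleep. */
static int check_atomic(int (*op)(void *), void *arg)
{
        const struct atomic_phase *p;
        int err = 0;

        for (p = phases; p->name; p++) {
                p->begin();
                err = op(arg);
                p->end();
                if (err)
                        break;
        }
        return err;
}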
 drivers/gpu/drm/i915/gt/selftest_reset.c | 65 +++-
 1 file changed, 63 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_reset.c 
b/drivers/gpu/drm/i915/gt/selftest_reset.c
index 64c2c8ab64ec..641cf3aee8d5 100644
--- a/drivers/gpu/drm/i915/gt/selftest_reset.c
+++ b/drivers/gpu/drm/i915/gt/selftest_reset.c
@@ -73,11 +73,13 @@ static int igt_atomic_reset(void *arg)
for (p = igt_atomic_phases; p->name; p++) {
GEM_TRACE("intel_gpu_reset under %s\n", p->name);
 
-   p->critical_section_begin();
reset_prepare(i915);
+   p->critical_section_begin();
+
err = intel_gpu_reset(i915, ALL_ENGINES);
-   reset_finish(i915);
+
p->critical_section_end();
+   reset_finish(i915);
 
if (err) {
pr_err("intel_gpu_reset failed under %s\n", p->name);
@@ -95,12 +97,71 @@ static int igt_atomic_reset(void *arg)
return err;
 }
 
+static int igt_atomic_engine_reset(void *arg)
+{
+   struct drm_i915_private *i915 = arg;
+   const typeof(*igt_atomic_phases) *p;
+   struct intel_engine_cs *engine;
+   enum intel_engine_id id;
+   int err = 0;
+
+   /* Check that the resets are usable from atomic context */
+
+   if (!intel_has_reset_engine(i915))
+   return 0;
+
+   if (USES_GUC_SUBMISSION(i915))
+   return 0;
+
+   intel_gt_pm_get(&i915->gt);
+   igt_global_reset_lock(i915);
+
+   /* Flush any requests before we get started and check basics */
+   if (!igt_force_reset(i915))
+   goto out_unlock;
+
+   for_each_engine(engine, i915, id) {
+   tasklet_disable_nosync(&engine->execlists.tasklet);
+   intel_engine_pm_get(engine);
+
+   for (p = igt_atomic_phases; p->name; p++) {
+   GEM_TRACE("i915_reset_engine(%s) under %s\n",
+ engine->name, p->name);
+
+   p->critical_section_begin();
+   err = i915_reset_engine(engine, NULL);
+   p->critical_section_end();
+
+   if (err) {
+   pr_err("i915_reset_engine(%s) failed under 
%s\n",
+  engine->name, p->name);
+   break;
+   }
+   }
+
+   intel_engine_pm_put(engine);
+   tasklet_enable(&engine->execlists.tasklet);
+   if (err)
+   break;
+   }
+
+   /* As we poke around the guts, do a full reset before continuing. */
+   igt_force_reset(i915);
+
+out_unlock:
+   igt_global_reset_unlock(i915);
+   intel_gt_pm_put(&i915->gt);
+
+   return err;
+}
+
 int intel_reset_live_selftests(struct drm_i915_private *i915)
 {
static const struct i915_subtest tests[] = {
SUBTEST(igt_global_reset), /* attempt to recover GPU first */
SUBTEST(igt_wedged_reset),
SUBTEST(igt_atomic_reset),
+   SUBTEST(igt_atomic_engine_reset),
};
intel_wakeref_t wakeref;
int err = 0;
-- 
2.20.1


[Intel-gfx] [PATCH 02/19] drm/i915/execlists: Convert recursive defer_request() into an iterative

2019-06-23 Thread Chris Wilson
As this engine owns the lock around rq->sched.link (for those waiters
submitted to this engine), we can use that link as an element in a local
list. We can thus replace the recursive algorithm with an iterative walk
over the ordered list of waiters.

Signed-off-by: Chris Wilson 
---
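The trick is that rq->sched.link doubles as a worklist entry, turning
the depth-first recursion into an iterative walk driven by a local
list. A minimal sketch of that recursion-to-iteration pattern with a
simplified node type (hypothetical names):

#include <linux/list.h>

struct node {
        struct list_head link;          /* position in the priority list */
        struct list_head waiters;       /* nodes waiting on this one */
        struct list_head wait_link;     /* our entry in a signaler's waiters */
};

static void defer_node(struct node *n, struct list_head *pl)
{
        LIST_HEAD(worklist);

        do {
                struct node *w;

                /* Move the deferred node to the back of its priority list. */
                list_move_tail(&n->link, pl);

                /* Queue each submitted waiter instead of recursing into it. */
                list_for_each_entry(w, &n->waiters, wait_link) {
                        if (list_empty(&w->link))
                                continue;       /* not yet submitted */

                        list_move_tail(&w->link, &worklist);
                }

                /* Pop the next node; an empty worklist ends the walk. */
                n = list_first_entry_or_null(&worklist, struct node, link);
        } while (n);
}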
 drivers/gpu/drm/i915/gt/intel_lrc.c | 50 ++---
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c 
b/drivers/gpu/drm/i915/gt/intel_lrc.c
index efccc31887de..0361dfaacf9a 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -837,10 +837,9 @@ last_active(const struct intel_engine_execlists *execlists)
return *last;
 }
 
-static void
-defer_request(struct i915_request * const rq, struct list_head * const pl)
+static void defer_request(struct i915_request *rq, struct list_head * const pl)
 {
-   struct i915_dependency *p;
+   LIST_HEAD(list);
 
/*
 * We want to move the interrupted request to the back of
@@ -849,34 +848,35 @@ defer_request(struct i915_request * const rq, struct 
list_head * const pl)
 * flight and were waiting for the interrupted request to
 * be run after it again.
 */
-   list_move_tail(&rq->sched.link, pl);
+   do {
+   struct i915_dependency *p;
 
-   list_for_each_entry(p, &rq->sched.waiters_list, wait_link) {
-   struct i915_request *w =
-   container_of(p->waiter, typeof(*w), sched);
+   list_move_tail(&rq->sched.link, pl);
 
-   /* Leave semaphores spinning on the other engines */
-   if (w->engine != rq->engine)
-   continue;
+   list_for_each_entry(p, &rq->sched.waiters_list, wait_link) {
+   struct i915_request *w =
+   container_of(p->waiter, typeof(*w), sched);
 
-   /* No waiter should start before the active request completed */
-   GEM_BUG_ON(i915_request_started(w));
+   /* Leave semaphores spinning on the other engines */
+   if (w->engine != rq->engine)
+   continue;
 
-   GEM_BUG_ON(rq_prio(w) > rq_prio(rq));
-   if (rq_prio(w) < rq_prio(rq))
-   continue;
+   /* No waiter should start before its signaler */
+   GEM_BUG_ON(i915_request_started(w) &&
+  !i915_request_completed(rq));
 
-   if (list_empty(&w->sched.link))
-   continue; /* Not yet submitted; unready */
+   if (list_empty(&w->sched.link))
+   continue; /* Not yet submitted; unready */
 
-   /*
-* This should be very shallow as it is limited by the
-* number of requests that can fit in a ring (<64) and
-* the number of contexts that can be in flight on this
-* engine.
-*/
-   defer_request(w, pl);
-   }
+   if (rq_prio(w) < rq_prio(rq))
+   continue;
+
+   GEM_BUG_ON(rq_prio(w) > rq_prio(rq));
+   list_move_tail(&w->sched.link, &list);
+   }
+
+   rq = list_first_entry_or_null(&list, typeof(*rq), sched.link);
+   } while (rq);
 }
 
 static void defer_active(struct intel_engine_cs *engine)
-- 
2.20.1


[Intel-gfx] [PATCH 05/19] drm/i915/selftest: Drop manual request wakerefs around hangcheck

2019-06-23 Thread Chris Wilson
We no longer need to manually acquire a wakeref for request emission, so
drop the redundant wakerefs, letting us test our wakeref handling more
precisely.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/selftest_hangcheck.c | 7 ---
 drivers/gpu/drm/i915/gt/selftest_reset.c | 4 ++--
 2 files changed, 2 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c 
b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
index 0e0b6c572ae9..cf592a049a71 100644
--- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
+++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
@@ -373,7 +373,6 @@ static int igt_reset_nop(void *arg)
struct i915_gem_context *ctx;
unsigned int reset_count, count;
enum intel_engine_id id;
-   intel_wakeref_t wakeref;
struct drm_file *file;
IGT_TIMEOUT(end_time);
int err = 0;
@@ -393,7 +392,6 @@ static int igt_reset_nop(void *arg)
}
 
i915_gem_context_clear_bannable(ctx);
-   wakeref = intel_runtime_pm_get(&i915->runtime_pm);
reset_count = i915_reset_count(&i915->gpu_error);
count = 0;
do {
@@ -442,8 +440,6 @@ static int igt_reset_nop(void *arg)
err = igt_flush_test(i915, I915_WAIT_LOCKED);
mutex_unlock(&i915->drm.struct_mutex);
 
-   intel_runtime_pm_put(&i915->runtime_pm, wakeref);
-
 out:
mock_file_free(i915, file);
if (i915_reset_failed(i915))
@@ -457,7 +453,6 @@ static int igt_reset_nop_engine(void *arg)
struct intel_engine_cs *engine;
struct i915_gem_context *ctx;
enum intel_engine_id id;
-   intel_wakeref_t wakeref;
struct drm_file *file;
int err = 0;
 
@@ -479,7 +474,6 @@ static int igt_reset_nop_engine(void *arg)
}
 
i915_gem_context_clear_bannable(ctx);
-   wakeref = intel_runtime_pm_get(&i915->runtime_pm);
for_each_engine(engine, i915, id) {
unsigned int reset_count, reset_engine_count;
unsigned int count;
@@ -549,7 +543,6 @@ static int igt_reset_nop_engine(void *arg)
err = igt_flush_test(i915, I915_WAIT_LOCKED);
mutex_unlock(&i915->drm.struct_mutex);
 
-   intel_runtime_pm_put(&i915->runtime_pm, wakeref);
 out:
mock_file_free(i915, file);
if (i915_reset_failed(i915))
diff --git a/drivers/gpu/drm/i915/gt/selftest_reset.c 
b/drivers/gpu/drm/i915/gt/selftest_reset.c
index 89da9e7cc1ba..64c2c8ab64ec 100644
--- a/drivers/gpu/drm/i915/gt/selftest_reset.c
+++ b/drivers/gpu/drm/i915/gt/selftest_reset.c
@@ -63,8 +63,8 @@ static int igt_atomic_reset(void *arg)
 
/* Check that the resets are usable from atomic context */
 
+   intel_gt_pm_get(&i915->gt);
igt_global_reset_lock(i915);
-   mutex_lock(&i915->drm.struct_mutex);
 
/* Flush any requests before we get started and check basics */
if (!igt_force_reset(i915))
@@ -89,8 +89,8 @@ static int igt_atomic_reset(void *arg)
igt_force_reset(i915);
 
 unlock:
-   mutex_unlock(&i915->drm.struct_mutex);
igt_global_reset_unlock(i915);
+   intel_gt_pm_put(&i915->gt);
 
return err;
 }
-- 
2.20.1


[Intel-gfx] [PATCH 15/19] drm/i915/selftests: Hold ref on request across waits

2019-06-23 Thread Chris Wilson
As we wait upon the request, we should be sure to hold our own reference
for our checks.
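
A minimal sketch of the pattern adopted below, assuming the request came
from the selftest's usual mock path; the point is that our own reference
outlives retirement for the duration of the checks:

static void check_request(struct i915_request *rq, long timeout)
{
	i915_request_get(rq);		/* our own ref across the waits */
	i915_request_add(rq);		/* submit */

	if (i915_request_wait(rq, 0, timeout) == -ETIME)
		pr_err("request wait timed out!\n");
	else if (!i915_request_completed(rq))
		pr_err("request not complete after waiting!\n");

	i915_request_put(rq);		/* drop it only after the checks */
}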

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/selftests/i915_request.c | 21 +++
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c 
b/drivers/gpu/drm/i915/selftests/i915_request.c
index 0fdf948a93a0..1bbfc43d4a9e 100644
--- a/drivers/gpu/drm/i915/selftests/i915_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_request.c
@@ -75,55 +75,58 @@ static int igt_wait_request(void *arg)
err = -ENOMEM;
goto out_unlock;
}
+   i915_request_get(request);
 
if (i915_request_wait(request, 0, 0) != -ETIME) {
pr_err("request wait (busy query) succeeded (expected timeout 
before submit!)\n");
-   goto out_unlock;
+   goto out_request;
}
 
if (i915_request_wait(request, 0, T) != -ETIME) {
pr_err("request wait succeeded (expected timeout before 
submit!)\n");
-   goto out_unlock;
+   goto out_request;
}
 
if (i915_request_completed(request)) {
pr_err("request completed before submit!!\n");
-   goto out_unlock;
+   goto out_request;
}
 
i915_request_add(request);
 
if (i915_request_wait(request, 0, 0) != -ETIME) {
pr_err("request wait (busy query) succeeded (expected timeout 
after submit!)\n");
-   goto out_unlock;
+   goto out_request;
}
 
if (i915_request_completed(request)) {
pr_err("request completed immediately!\n");
-   goto out_unlock;
+   goto out_request;
}
 
if (i915_request_wait(request, 0, T / 2) != -ETIME) {
pr_err("request wait succeeded (expected timeout!)\n");
-   goto out_unlock;
+   goto out_request;
}
 
if (i915_request_wait(request, 0, T) == -ETIME) {
pr_err("request wait timed out!\n");
-   goto out_unlock;
+   goto out_request;
}
 
if (!i915_request_completed(request)) {
pr_err("request not complete after waiting!\n");
-   goto out_unlock;
+   goto out_request;
}
 
if (i915_request_wait(request, 0, T) == -ETIME) {
pr_err("request wait timed out when already complete!\n");
-   goto out_unlock;
+   goto out_request;
}
 
err = 0;
+out_request:
+   i915_request_put(request);
 out_unlock:
mock_device_flush(i915);
mutex_unlock(&i915->drm.struct_mutex);
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH 09/19] drm/i915: Only recover active engines

2019-06-23 Thread Chris Wilson
If we issue a reset to a currently idle engine, leave it idle
afterwards. This is useful to excise a linkage between reset and the
shrinker. When waking the engine, we need to pin the default context
image which we use for overwriting a guilty context -- if the engine is
idle we do not need this pinned image! However, this pinning means that
waking the engine acquires the FS_RECLAIM, and so may trigger the
shrinker. The shrinker itself may need to wait upon the GPU to unbind
and object and so may require services of reset; ergo we should avoid
the engine wake up path.

The danger in skipping the recovery for idle engines is that we leave the
engine with no context defined, which may interfere with the operation of
the power context on some older platforms. In practice, we should only
be resetting an active GPU, but it is something to look out for on Ironlake
(if memory serves).
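
Condensed sketch of the prepare/finish pairing this introduces (it mirrors
the hunks below): only engines that were already awake take an extra
wakeref, and only those drop one again afterwards.

static intel_engine_mask_t sketch_reset_prepare(struct drm_i915_private *i915)
{
	struct intel_engine_cs *engine;
	intel_engine_mask_t awake = 0;
	enum intel_engine_id id;

	for_each_engine(engine, i915, id) {
		if (intel_engine_pm_get_if_awake(engine))
			awake |= engine->mask;	/* remember whom we pinned */
		reset_prepare_engine(engine);
	}

	return awake;	/* passed back into reset_finish() to balance the puts */
}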

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_reset.c| 37 ++--
 drivers/gpu/drm/i915/gt/selftest_reset.c |  6 ++--
 2 files changed, 26 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c 
b/drivers/gpu/drm/i915/gt/intel_reset.c
index 8ce92c51564e..e7cbd9cf85c1 100644
--- a/drivers/gpu/drm/i915/gt/intel_reset.c
+++ b/drivers/gpu/drm/i915/gt/intel_reset.c
@@ -678,7 +678,6 @@ static void reset_prepare_engine(struct intel_engine_cs 
*engine)
 * written to the powercontext is undefined and so we may lose
 * GPU state upon resume, i.e. fail to restart after a reset.
 */
-   intel_engine_pm_get(engine);
intel_uncore_forcewake_get(engine->uncore, FORCEWAKE_ALL);
engine->reset.prepare(engine);
 }
@@ -709,16 +708,21 @@ static void revoke_mmaps(struct drm_i915_private *i915)
}
 }
 
-static void reset_prepare(struct drm_i915_private *i915)
+static intel_engine_mask_t reset_prepare(struct drm_i915_private *i915)
 {
struct intel_engine_cs *engine;
+   intel_engine_mask_t awake = 0;
enum intel_engine_id id;
 
-   intel_gt_pm_get(&i915->gt);
-   for_each_engine(engine, i915, id)
+   for_each_engine(engine, i915, id) {
+   if (intel_engine_pm_get_if_awake(engine))
+   awake |= engine->mask;
reset_prepare_engine(engine);
+   }
 
intel_uc_reset_prepare(i915);
+
+   return awake;
 }
 
 static void gt_revoke(struct drm_i915_private *i915)
@@ -752,20 +756,22 @@ static int gt_reset(struct drm_i915_private *i915,
 static void reset_finish_engine(struct intel_engine_cs *engine)
 {
engine->reset.finish(engine);
-   intel_engine_pm_put(engine);
intel_uncore_forcewake_put(engine->uncore, FORCEWAKE_ALL);
+
+   intel_engine_signal_breadcrumbs(engine);
 }
 
-static void reset_finish(struct drm_i915_private *i915)
+static void reset_finish(struct drm_i915_private *i915,
+intel_engine_mask_t awake)
 {
struct intel_engine_cs *engine;
enum intel_engine_id id;
 
for_each_engine(engine, i915, id) {
reset_finish_engine(engine);
-   intel_engine_signal_breadcrumbs(engine);
+   if (awake & engine->mask)
+   intel_engine_pm_put(engine);
}
-   intel_gt_pm_put(&i915->gt);
 }
 
 static void nop_submit_request(struct i915_request *request)
@@ -789,6 +795,7 @@ static void __i915_gem_set_wedged(struct drm_i915_private 
*i915)
 {
struct i915_gpu_error *error = &i915->gpu_error;
struct intel_engine_cs *engine;
+   intel_engine_mask_t awake;
enum intel_engine_id id;
 
if (test_bit(I915_WEDGED, &error->flags))
@@ -808,7 +815,7 @@ static void __i915_gem_set_wedged(struct drm_i915_private 
*i915)
 * rolling the global seqno forward (since this would complete requests
 * for which we haven't set the fence error to EIO yet).
 */
-   reset_prepare(i915);
+   awake = reset_prepare(i915);
 
/* Even if the GPU reset fails, it should still stop the engines */
if (!INTEL_INFO(i915)->gpu_reset_clobbers_display)
@@ -832,7 +839,7 @@ static void __i915_gem_set_wedged(struct drm_i915_private 
*i915)
for_each_engine(engine, i915, id)
engine->cancel_requests(engine);
 
-   reset_finish(i915);
+   reset_finish(i915, awake);
 
GEM_TRACE("end\n");
 }
@@ -964,6 +971,7 @@ void i915_reset(struct drm_i915_private *i915,
const char *reason)
 {
struct i915_gpu_error *error = &i915->gpu_error;
+   intel_engine_mask_t awake;
int ret;
 
GEM_TRACE("flags=%lx\n", error->flags);
@@ -980,7 +988,7 @@ void i915_reset(struct drm_i915_private *i915,
dev_notice(i915->drm.dev, "Resetting chip for %s\n", reason);
error->reset_count++;
 
-   reset_prepare(i915);
+   awake = reset_prepare(i915);
 
if (!intel_has_gpu_reset(i915)) {
if (i915_modparam

[Intel-gfx] [PATCH 01/19] drm/i915/execlists: Always clear ring_pause if we do not submit

2019-06-23 Thread Chris Wilson
In the unlikely case (thank you CI!), we may find ourselves wanting to
issue a preemption but having no runnable requests left. In this case,
we set the semaphore before computing the preemption and so must unset
it before forgetting (or else we leave the machine busywaiting until the
next request comes along, and so will likely hang).
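
Condensed from the hunk below, the invariant being restored is roughly:

static void sketch_dequeue_tail(struct intel_engine_cs *engine, bool submit)
{
	if (submit)
		execlists_submit_ports(engine);	/* preemption goes ahead */
	else
		ring_set_paused(engine, 0);	/* release the busywait semaphore */
}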

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c 
b/drivers/gpu/drm/i915/gt/intel_lrc.c
index c8a0c9b32764..efccc31887de 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -233,13 +233,18 @@ static inline u32 intel_hws_preempt_address(struct 
intel_engine_cs *engine)
 static inline void
 ring_set_paused(const struct intel_engine_cs *engine, int state)
 {
+   u32 *sema = &engine->status_page.addr[I915_GEM_HWS_PREEMPT];
+
+   if (*sema == state)
+   return;
+
/*
 * We inspect HWS_PREEMPT with a semaphore inside
 * engine->emit_fini_breadcrumb. If the dword is true,
 * the ring is paused as the semaphore will busywait
 * until the dword is false.
 */
-   engine->status_page.addr[I915_GEM_HWS_PREEMPT] = state;
+   *sema = state;
wmb();
 }
 
@@ -1243,6 +1248,8 @@ static void execlists_dequeue(struct intel_engine_cs 
*engine)
*port = execlists_schedule_in(last, port - execlists->pending);
memset(port + 1, 0, (last_port - port) * sizeof(*port));
execlists_submit_ports(engine);
+   } else {
+   ring_set_paused(engine, 0);
}
 }
 
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH 18/19] drm/i915: Protect request retirement with timeline->mutex

2019-06-23 Thread Chris Wilson
Forgo the struct_mutex requirement for request retirement as we have
been transitioning over to only using the timeline->mutex for
controlling the lifetime of a request on that timeline.
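
A rough sketch of the locking shape this moves towards (simplified; the
retirement call stands in for the driver's actual retire step, and the real
code also deals with the active-request handoff):

static void sketch_retire_timeline(struct intel_timeline *tl)
{
	struct i915_request *rq, *rn;

	mutex_lock(&tl->mutex);		/* no struct_mutex required */
	list_for_each_entry_safe(rq, rn, &tl->requests, link) {
		if (!i915_request_completed(rq))
			break;		/* requests retire in submission order */
		i915_request_retire(rq);
	}
	mutex_unlock(&tl->mutex);
}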

Signed-off-by: Chris Wilson 
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c| 192 ++
 drivers/gpu/drm/i915/gt/intel_context.h   |  25 +--
 drivers/gpu/drm/i915/gt/intel_engine_cs.c |   1 -
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |   2 -
 drivers/gpu/drm/i915/gt/intel_gt.c|   1 -
 drivers/gpu/drm/i915/gt/intel_gt_types.h  |   2 -
 drivers/gpu/drm/i915/gt/intel_ringbuffer.c|  11 +-
 drivers/gpu/drm/i915/gt/mock_engine.c |   1 -
 drivers/gpu/drm/i915/i915_request.c   | 149 +++---
 drivers/gpu/drm/i915/i915_request.h   |   3 -
 10 files changed, 199 insertions(+), 188 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c 
b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index f43eaaa5db5f..80c9c57a302f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -739,63 +739,6 @@ static int eb_select_context(struct i915_execbuffer *eb)
return 0;
 }
 
-static struct i915_request *__eb_wait_for_ring(struct intel_ring *ring)
-{
-   struct i915_request *rq;
-
-   /*
-* Completely unscientific finger-in-the-air estimates for suitable
-* maximum user request size (to avoid blocking) and then backoff.
-*/
-   if (intel_ring_update_space(ring) >= PAGE_SIZE)
-   return NULL;
-
-   /*
-* Find a request that after waiting upon, there will be at least half
-* the ring available. The hysteresis allows us to compete for the
-* shared ring and should mean that we sleep less often prior to
-* claiming our resources, but not so long that the ring completely
-* drains before we can submit our next request.
-*/
-   list_for_each_entry(rq, &ring->request_list, ring_link) {
-   if (__intel_ring_space(rq->postfix,
-  ring->emit, ring->size) > ring->size / 2)
-   break;
-   }
-   if (&rq->ring_link == &ring->request_list)
-   return NULL; /* weird, we will check again later for real */
-
-   return i915_request_get(rq);
-}
-
-static int eb_wait_for_ring(const struct i915_execbuffer *eb)
-{
-   struct i915_request *rq;
-   int ret = 0;
-
-   /*
-* Apply a light amount of backpressure to prevent excessive hogs
-* from blocking waiting for space whilst holding struct_mutex and
-* keeping all of their resources pinned.
-*/
-
-   rq = __eb_wait_for_ring(eb->context->ring);
-   if (rq) {
-   mutex_unlock(&eb->i915->drm.struct_mutex);
-
-   if (i915_request_wait(rq,
- I915_WAIT_INTERRUPTIBLE,
- MAX_SCHEDULE_TIMEOUT) < 0)
-   ret = -EINTR;
-
-   i915_request_put(rq);
-
-   mutex_lock(&eb->i915->drm.struct_mutex);
-   }
-
-   return ret;
-}
-
 static int eb_lookup_vmas(struct i915_execbuffer *eb)
 {
struct radix_tree_root *handles_vma = &eb->gem_context->handles_vma;
@@ -2122,10 +2065,75 @@ static const enum intel_engine_id user_ring_map[] = {
[I915_EXEC_VEBOX]   = VECS0
 };
 
-static int eb_pin_context(struct i915_execbuffer *eb, struct intel_context *ce)
+static struct i915_request *eb_throttle(struct intel_context *ce)
+{
+   struct intel_ring *ring = ce->ring;
+   struct intel_timeline *tl = ring->timeline;
+   struct i915_request *rq;
+
+   /*
+* Completely unscientific finger-in-the-air estimates for suitable
+* maximum user request size (to avoid blocking) and then backoff.
+*/
+   if (intel_ring_update_space(ring) >= PAGE_SIZE)
+   return NULL;
+
+   /*
+* Find a request that after waiting upon, there will be at least half
+* the ring available. The hysteresis allows us to compete for the
+* shared ring and should mean that we sleep less often prior to
+* claiming our resources, but not so long that the ring completely
+* drains before we can submit our next request.
+*/
+   list_for_each_entry(rq, &tl->requests, link) {
+   if (rq->ring != ring)
+   continue;
+
+   if (__intel_ring_space(rq->postfix,
+  ring->emit, ring->size) > ring->size / 2)
+   break;
+   }
+   if (&rq->link == &tl->requests)
+   return NULL; /* weird, we will check again later for real */
+
+   return i915_request_get(rq);
+}
+
+static int
+__eb_pin_context(struct i915_execbuffer *eb, struct intel_context *ce)
 {
int err;
 
+   if (likely(atomic_inc_not_zero(

[Intel-gfx] [PATCH 16/19] drm/i915/gt: Always call kref_init for the timeline

2019-06-23 Thread Chris Wilson
Always initialise the refcount, even for the embedded timelines inside
mock devices.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_timeline.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c 
b/drivers/gpu/drm/i915/gt/intel_timeline.c
index d05875ff2071..030bcd2249e9 100644
--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
+++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
@@ -222,6 +222,7 @@ int intel_timeline_init(struct intel_timeline *timeline,
 
timeline->gt = gt;
 
+   kref_init(&timeline->kref);
atomic_set(&timeline->pin_count, 0);
 
timeline->has_initial_breadcrumb = !hwsp;
@@ -316,8 +317,6 @@ intel_timeline_create(struct intel_gt *gt, struct i915_vma 
*global_hwsp)
return ERR_PTR(err);
}
 
-   kref_init(&timeline->kref);
-
return timeline;
 }
 
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH 08/19] drm/i915: Add a wakeref getter for iff the wakeref is already active

2019-06-23 Thread Chris Wilson
For use in the next patch, we want to acquire a wakeref without having
to wake the device up -- i.e. only acquire the engine wakeref if the
engine is already active.
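
A short usage sketch of the new conditional getter: do work only when the
engine is already awake, without ever forcing a wake-up.

static void sketch_poke_if_awake(struct intel_engine_cs *engine)
{
	if (!intel_engine_pm_get_if_awake(engine))
		return;			/* idle: nothing woken, nothing to do */

	/* ... engine is guaranteed to stay awake in here ... */

	intel_engine_pm_put(engine);	/* balance the conditional get */
}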

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_engine_pm.h |  7 ++-
 drivers/gpu/drm/i915/intel_wakeref.h  | 15 +++
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.h 
b/drivers/gpu/drm/i915/gt/intel_engine_pm.h
index f3f5b031b4a1..7d057cdcd919 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.h
@@ -11,7 +11,6 @@
 #include "intel_wakeref.h"
 
 struct drm_i915_private;
-struct intel_engine_cs;
 
 void intel_engine_pm_get(struct intel_engine_cs *engine);
 void intel_engine_pm_put(struct intel_engine_cs *engine);
@@ -22,6 +21,12 @@ intel_engine_pm_is_awake(const struct intel_engine_cs 
*engine)
return intel_wakeref_is_active(&engine->wakeref);
 }
 
+static inline bool
+intel_engine_pm_get_if_awake(struct intel_engine_cs *engine)
+{
+   return intel_wakeref_get_if_active(&engine->wakeref);
+}
+
 void intel_engine_park(struct intel_engine_cs *engine);
 
 void intel_engine_init__pm(struct intel_engine_cs *engine);
diff --git a/drivers/gpu/drm/i915/intel_wakeref.h 
b/drivers/gpu/drm/i915/intel_wakeref.h
index f74272770a5c..1d6f5986e4e5 100644
--- a/drivers/gpu/drm/i915/intel_wakeref.h
+++ b/drivers/gpu/drm/i915/intel_wakeref.h
@@ -71,6 +71,21 @@ intel_wakeref_get(struct intel_runtime_pm *rpm,
return 0;
 }
 
+/**
+ * intel_wakeref_get_if_active: Acquire the wakeref if already active
+ * @wf: the wakeref
+ *
+ * Acquire a hold on the wakeref, but only if the wakeref is already
+ * active.
+ *
+ * Returns: true if the wakeref was acquired, false otherwise.
+ */
+static inline bool
+intel_wakeref_get_if_active(struct intel_wakeref *wf)
+{
+   return atomic_inc_not_zero(&wf->count);
+}
+
 /**
  * intel_wakeref_put: Release the wakeref
  * @i915: the drm_i915_private device
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Re: [Intel-gfx] linux-next: manual merge of the drm-intel tree with the pci tree

2019-06-23 Thread Stephen Rothwell
Hi all,

On Mon, 17 Jun 2019 13:20:27 +1000 Stephen Rothwell  
wrote:
>
> Today's linux-next merge of the drm-intel tree got a conflict in:
> 
>   drivers/gpu/drm/i915/i915_drv.h
> 
> between commit:
> 
>   151f4e2bdc7a ("docs: power: convert docs to ReST and rename to *.rst")
> 
> from the pci tree and commit:
> 
>   1bf676cc2dba ("drm/i915: move and rename i915_runtime_pm")
> 
> from the drm-intel tree.
> 
> I fixed it up (I just removed the struct definition from this file as
> the latter did - its comment will need to be fixed up in its new file)
> and can carry the fix as necessary. This is now fixed as far as linux-next
> is concerned, but any non-trivial conflicts should be mentioned to your
> upstream maintainer when your tree is submitted for merging.  You may
> also want to consider cooperating with the maintainer of the conflicting
> tree to minimise any particularly complex conflicts.

This is now a conflict between the drm and pci trees.
-- 
Cheers,
Stephen Rothwell


___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Re: [Intel-gfx] [PATCH v3 3/4] drm/connector: Split out orientation quirk detection

2019-06-23 Thread Hans de Goede

Hi,

On 22-06-19 05:41, Derek Basehore wrote:

Not every platform needs quirk detection for panel orientation, so
split the drm_connector_init_panel_orientation_property into two
functions: one for platforms that do not need quirks, and another
for platforms that do.
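
A rough usage sketch of the resulting split (the wrapper and its arguments
are illustrative, not part of the patch):

static int sketch_init_orientation(struct drm_connector *connector,
				   const struct drm_display_mode *fixed_mode,
				   bool needs_quirks)
{
	if (!needs_quirks)	/* orientation already known, e.g. from firmware */
		return drm_connector_init_panel_orientation_property(connector);

	/* otherwise consult the DMI quirk table first, using the panel size */
	return drm_connector_init_panel_orientation_property_quirk(connector,
							fixed_mode->hdisplay,
							fixed_mode->vdisplay);
}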

Signed-off-by: Derek Basehore 
---
  drivers/gpu/drm/drm_connector.c | 45 -
  drivers/gpu/drm/i915/intel_dp.c |  4 +--
  drivers/gpu/drm/i915/vlv_dsi.c  |  5 ++--
  include/drm/drm_connector.h |  2 ++
  4 files changed, 39 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
index e17586aaa80f..c4b01adf927a 100644
--- a/drivers/gpu/drm/drm_connector.c
+++ b/drivers/gpu/drm/drm_connector.c
@@ -1894,31 +1894,23 @@ EXPORT_SYMBOL(drm_connector_set_vrr_capable_property);
   * drm_connector_init_panel_orientation_property -
   *initialize the connecters panel_orientation property
   * @connector: connector for which to init the panel-orientation property.
- * @width: width in pixels of the panel, used for panel quirk detection
- * @height: height in pixels of the panel, used for panel quirk detection
   *
   * This function should only be called for built-in panels, after setting
   * connector->display_info.panel_orientation first (if known).
   *
- * This function will check for platform specific (e.g. DMI based) quirks
- * overriding display_info.panel_orientation first, then if panel_orientation
- * is not DRM_MODE_PANEL_ORIENTATION_UNKNOWN it will attach the
- * "panel orientation" property to the connector.
+ * This function will check whether panel_orientation is known, i.e. not
+ * DRM_MODE_PANEL_ORIENTATION_UNKNOWN, and if so it will attach the "panel
+ * orientation" property to the connector.
   *
   * Returns:
   * Zero on success, negative errno on failure.
   */
  int drm_connector_init_panel_orientation_property(
-   struct drm_connector *connector, int width, int height)
+   struct drm_connector *connector)
  {
struct drm_device *dev = connector->dev;
struct drm_display_info *info = &connector->display_info;
struct drm_property *prop;
-   int orientation_quirk;
-
-   orientation_quirk = drm_get_panel_orientation_quirk(width, height);
-   if (orientation_quirk != DRM_MODE_PANEL_ORIENTATION_UNKNOWN)
-   info->panel_orientation = orientation_quirk;
  
  	if (info->panel_orientation == DRM_MODE_PANEL_ORIENTATION_UNKNOWN)

return 0;
@@ -1941,6 +1933,35 @@ int drm_connector_init_panel_orientation_property(
  }
  EXPORT_SYMBOL(drm_connector_init_panel_orientation_property);
  
+/**

+ * drm_connector_init_panel_orientation_property_quirk -
+ * initialize the connectors panel_orientation property with a quirk
+ * override
+ * @connector: connector for which to init the panel-orientation property.
+ * @width: width in pixels of the panel, used for panel quirk detection
+ * @height: height in pixels of the panel, used for panel quirk detection
+ *
+ * This function will check for platform specific (e.g. DMI based) quirks
+ * overriding display_info.panel_orientation first, then if panel_orientation
+ * is not DRM_MODE_PANEL_ORIENTATION_UNKNOWN it will attach the
+ * "panel orientation" property to the connector.
+ *
+ * Returns:
+ * Zero on success, negative errno on failure.
+ */
+int drm_connector_init_panel_orientation_property_quirk(
+   struct drm_connector *connector, int width, int height)
+{
+   int orientation_quirk;
+
+   orientation_quirk = drm_get_panel_orientation_quirk(width, height);
+   if (orientation_quirk != DRM_MODE_PANEL_ORIENTATION_UNKNOWN)
+   connector->display_info.panel_orientation = orientation_quirk;
+
+   return drm_connector_init_panel_orientation_property(connector);
+}
+EXPORT_SYMBOL(drm_connector_init_panel_orientation_property_quirk);
+
  int drm_connector_set_obj_prop(struct drm_mode_object *obj,
struct drm_property *property,
uint64_t value)
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index b099a9dc28fd..7d4e61cf5463 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -7282,8 +7282,8 @@ static bool intel_edp_init_connector(struct intel_dp 
*intel_dp,
intel_panel_setup_backlight(connector, pipe);
  
  	if (fixed_mode)

-   drm_connector_init_panel_orientation_property(
-   connector, fixed_mode->hdisplay, fixed_mode->vdisplay);
+   drm_connector_init_panel_orientation_property_quirk(connector,
+   fixed_mode->hdisplay, fixed_mode->vdisplay);
  
  	return true;
  
diff --git a/drivers/gpu/drm/i915/vlv_dsi.c b/drivers/gpu/drm/i915/vlv_dsi.c

index bfe2891eac37..fa9833dbe359 100644
--- a/drivers/gpu/drm/i915/vlv_dsi.c
+++ b/drivers/gpu/drm/i915/vlv_dsi.c
@@ -1650,6 +1650,7 @@ static void