[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: Propagate fence->error across semaphores

2020-05-05 Thread Patchwork
== Series Details ==

Series: drm/i915: Propagate fence->error across semaphores
URL   : https://patchwork.freedesktop.org/series/76968/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8430_full -> Patchwork_17585_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Known issues


  Here are the changes found in Patchwork_17585_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@gem_workarounds@suspend-resume-fd:
- shard-kbl:  [PASS][1] -> [DMESG-WARN][2] ([i915#180])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-kbl1/igt@gem_workarou...@suspend-resume-fd.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17585/shard-kbl1/igt@gem_workarou...@suspend-resume-fd.html

  * igt@gen9_exec_parse@allowed-all:
- shard-kbl:  [PASS][3] -> [DMESG-WARN][4] ([i915#1436] / 
[i915#716])
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-kbl1/igt@gen9_exec_pa...@allowed-all.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17585/shard-kbl2/igt@gen9_exec_pa...@allowed-all.html

  * igt@kms_cursor_crc@pipe-a-cursor-suspend:
- shard-skl:  [PASS][5] -> [INCOMPLETE][6] ([i915#300])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-skl10/igt@kms_cursor_...@pipe-a-cursor-suspend.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17585/shard-skl8/igt@kms_cursor_...@pipe-a-cursor-suspend.html

  * igt@kms_cursor_edge_walk@pipe-a-64x64-top-edge:
- shard-apl:  [PASS][7] -> [FAIL][8] ([i915#70] / [i915#95])
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-apl3/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17585/shard-apl8/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html
- shard-kbl:  [PASS][9] -> [FAIL][10] ([i915#70])
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-kbl6/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17585/shard-kbl7/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html

  * igt@kms_frontbuffer_tracking@fbcpsr-shrfb-scaledprimary:
- shard-tglb: [PASS][11] -> [SKIP][12] ([i915#668]) +5 similar 
issues
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-tglb6/igt@kms_frontbuffer_track...@fbcpsr-shrfb-scaledprimary.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17585/shard-tglb6/igt@kms_frontbuffer_track...@fbcpsr-shrfb-scaledprimary.html

  * igt@kms_hdr@bpc-switch-suspend:
- shard-apl:  [PASS][13] -> [DMESG-WARN][14] ([i915#180]) +4 
similar issues
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-apl7/igt@kms_...@bpc-switch-suspend.html
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17585/shard-apl4/igt@kms_...@bpc-switch-suspend.html
- shard-skl:  [PASS][15] -> [FAIL][16] ([i915#1188])
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-skl1/igt@kms_...@bpc-switch-suspend.html
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17585/shard-skl6/igt@kms_...@bpc-switch-suspend.html

  * igt@kms_psr@psr2_primary_mmap_gtt:
- shard-iclb: [PASS][17] -> [SKIP][18] ([fdo#109441]) +1 similar 
issue
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-iclb2/igt@kms_psr@psr2_primary_mmap_gtt.html
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17585/shard-iclb6/igt@kms_psr@psr2_primary_mmap_gtt.html

  * igt@testdisplay:
- shard-apl:  [PASS][19] -> [TIMEOUT][20] ([i915#1692])
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-apl1/i...@testdisplay.html
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17585/shard-apl7/i...@testdisplay.html
- shard-kbl:  [PASS][21] -> [TIMEOUT][22] ([i915#1692])
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-kbl7/i...@testdisplay.html
   [22]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17585/shard-kbl6/i...@testdisplay.html

  
 Possible fixes 

  * igt@kms_cursor_crc@pipe-a-cursor-suspend:
- shard-apl:  [DMESG-WARN][23] ([i915#180] / [i915#95]) -> 
[PASS][24]
   [23]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-apl8/igt@kms_cursor_...@pipe-a-cursor-suspend.html
   [24]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17585/shard-apl8/igt@kms_cursor_...@pipe-a-cursor-suspend.html

  * igt@kms_cursor_legacy@all-pipes-torture-move:
- shard-hsw:  [DMESG-WARN][25] ([i915#128]) -> [PASS][26]
   [25]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-hsw1/igt@kms_cursor_leg...@all-pipes-torture-move.html
   [26]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17585/shard-hsw6/igt@kms_cursor_leg...@all-pipes-torture-move.html

  * 

[Intel-gfx] ✓ Fi.CI.IGT: success for series starting with [1/6] drm/i915: Mark concurrent submissions with a weak-dependency (rev3)

2020-05-05 Thread Patchwork
== Series Details ==

Series: series starting with [1/6] drm/i915: Mark concurrent submissions with a 
weak-dependency (rev3)
URL   : https://patchwork.freedesktop.org/series/76912/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8430_full -> Patchwork_17584_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Possible new issues
---

  Here are the unknown changes that may have been introduced in 
Patchwork_17584_full:

### IGT changes ###

 Suppressed 

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * {igt@gem_exec_fence@parallel@bcs0}:
- shard-tglb: [PASS][1] -> [FAIL][2] +5 similar issues
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-tglb6/igt@gem_exec_fence@paral...@bcs0.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17584/shard-tglb5/igt@gem_exec_fence@paral...@bcs0.html

  * {igt@gem_exec_fence@syncobj-invalid-wait}:
- shard-snb:  [PASS][3] -> [FAIL][4]
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-snb5/igt@gem_exec_fe...@syncobj-invalid-wait.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17584/shard-snb2/igt@gem_exec_fe...@syncobj-invalid-wait.html
- shard-skl:  [PASS][5] -> [FAIL][6]
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-skl8/igt@gem_exec_fe...@syncobj-invalid-wait.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17584/shard-skl10/igt@gem_exec_fe...@syncobj-invalid-wait.html
- shard-glk:  [PASS][7] -> [FAIL][8]
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-glk8/igt@gem_exec_fe...@syncobj-invalid-wait.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17584/shard-glk1/igt@gem_exec_fe...@syncobj-invalid-wait.html
- shard-apl:  [PASS][9] -> [FAIL][10]
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-apl8/igt@gem_exec_fe...@syncobj-invalid-wait.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17584/shard-apl4/igt@gem_exec_fe...@syncobj-invalid-wait.html
- shard-kbl:  [PASS][11] -> [FAIL][12]
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-kbl2/igt@gem_exec_fe...@syncobj-invalid-wait.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17584/shard-kbl7/igt@gem_exec_fe...@syncobj-invalid-wait.html
- shard-iclb: [PASS][13] -> [FAIL][14]
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-iclb1/igt@gem_exec_fe...@syncobj-invalid-wait.html
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17584/shard-iclb1/igt@gem_exec_fe...@syncobj-invalid-wait.html
- shard-hsw:  [PASS][15] -> [FAIL][16]
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-hsw4/igt@gem_exec_fe...@syncobj-invalid-wait.html
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17584/shard-hsw2/igt@gem_exec_fe...@syncobj-invalid-wait.html

  
New tests
-

  New tests have been introduced between CI_DRM_8430_full and 
Patchwork_17584_full:

### New IGT tests (1) ###

  * igt@dmabuf@all@dma_fence_proxy:
- Statuses : 8 pass(s)
- Exec time: [0.03, 0.10] s

  

Known issues


  Here are the changes found in Patchwork_17584_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@gem_exec_whisper@basic-contexts-forked-all:
- shard-iclb: [PASS][17] -> [INCOMPLETE][18] ([CI#80])
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-iclb5/igt@gem_exec_whis...@basic-contexts-forked-all.html
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17584/shard-iclb7/igt@gem_exec_whis...@basic-contexts-forked-all.html

  * igt@kms_cursor_crc@pipe-c-cursor-suspend:
- shard-skl:  [PASS][19] -> [INCOMPLETE][20] ([i915#300])
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-skl5/igt@kms_cursor_...@pipe-c-cursor-suspend.html
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17584/shard-skl8/igt@kms_cursor_...@pipe-c-cursor-suspend.html

  * igt@kms_cursor_edge_walk@pipe-a-64x64-top-edge:
- shard-apl:  [PASS][21] -> [FAIL][22] ([i915#70])
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-apl3/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html
   [22]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17584/shard-apl7/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html
- shard-kbl:  [PASS][23] -> [FAIL][24] ([i915#70])
   [23]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-kbl6/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html
   [24]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17584/shard-kbl6/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html

  * igt@kms_cursor_legacy@2x-long-flip-vs-cursor-legacy:
- shard-g

[Intel-gfx] ✓ Fi.CI.IGT: success for Consider DBuf bandwidth when calculating CDCLK (rev9)

2020-05-05 Thread Patchwork
== Series Details ==

Series: Consider DBuf bandwidth when calculating CDCLK (rev9)
URL   : https://patchwork.freedesktop.org/series/74739/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8430_full -> Patchwork_17583_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Known issues


  Here are the changes found in Patchwork_17583_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@gem_ctx_persistence@legacy-engines-mixed-process@bsd:
- shard-skl:  [PASS][1] -> [FAIL][2] ([i915#1528])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-skl8/igt@gem_ctx_persistence@legacy-engines-mixed-proc...@bsd.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17583/shard-skl7/igt@gem_ctx_persistence@legacy-engines-mixed-proc...@bsd.html

  * igt@i915_suspend@debugfs-reader:
- shard-apl:  [PASS][3] -> [DMESG-WARN][4] ([i915#180])
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-apl6/igt@i915_susp...@debugfs-reader.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17583/shard-apl6/igt@i915_susp...@debugfs-reader.html

  * igt@kms_cursor_crc@pipe-a-cursor-suspend:
- shard-kbl:  [PASS][5] -> [DMESG-WARN][6] ([i915#180] / [i915#93] 
/ [i915#95])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-kbl3/igt@kms_cursor_...@pipe-a-cursor-suspend.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17583/shard-kbl1/igt@kms_cursor_...@pipe-a-cursor-suspend.html

  * igt@kms_cursor_edge_walk@pipe-a-64x64-top-edge:
- shard-apl:  [PASS][7] -> [FAIL][8] ([i915#70])
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-apl3/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17583/shard-apl4/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html
- shard-kbl:  [PASS][9] -> [FAIL][10] ([i915#70])
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-kbl6/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17583/shard-kbl6/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions:
- shard-skl:  [PASS][11] -> [FAIL][12] ([IGT#5])
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-skl4/igt@kms_cursor_leg...@flip-vs-cursor-atomic-transitions.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17583/shard-skl6/igt@kms_cursor_leg...@flip-vs-cursor-atomic-transitions.html

  * igt@kms_fbcon_fbt@psr-suspend:
- shard-skl:  [PASS][13] -> [INCOMPLETE][14] ([i915#69])
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-skl7/igt@kms_fbcon_...@psr-suspend.html
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17583/shard-skl3/igt@kms_fbcon_...@psr-suspend.html

  * igt@kms_hdr@bpc-switch-suspend:
- shard-skl:  [PASS][15] -> [FAIL][16] ([i915#1188])
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-skl1/igt@kms_...@bpc-switch-suspend.html
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17583/shard-skl7/igt@kms_...@bpc-switch-suspend.html

  * igt@kms_plane_lowres@pipe-a-tiling-x:
- shard-glk:  [PASS][17] -> [FAIL][18] ([i915#899])
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-glk1/igt@kms_plane_low...@pipe-a-tiling-x.html
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17583/shard-glk2/igt@kms_plane_low...@pipe-a-tiling-x.html

  * igt@perf@stress-open-close:
- shard-skl:  [PASS][19] -> [INCOMPLETE][20] ([i915#1356])
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-skl6/igt@p...@stress-open-close.html
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17583/shard-skl9/igt@p...@stress-open-close.html

  * igt@testdisplay:
- shard-kbl:  [PASS][21] -> [TIMEOUT][22] ([i915#1692])
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-kbl7/i...@testdisplay.html
   [22]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17583/shard-kbl4/i...@testdisplay.html

  
 Possible fixes 

  * igt@kms_cursor_legacy@all-pipes-torture-move:
- shard-hsw:  [DMESG-WARN][23] ([i915#128]) -> [PASS][24]
   [23]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-hsw1/igt@kms_cursor_leg...@all-pipes-torture-move.html
   [24]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17583/shard-hsw6/igt@kms_cursor_leg...@all-pipes-torture-move.html

  * {igt@kms_flip@flip-vs-expired-vblank-interruptible@c-dp1}:
- shard-apl:  [FAIL][25] ([i915#79]) -> [PASS][26]
   [25]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-apl7/igt@kms_flip@flip-vs-expired-vblank-interrupti...@c-dp1.html
   [26]: 
https://intel-gfx-ci.0

[Intel-gfx] ✓ Fi.CI.IGT: success for series starting with [1/2] drm/i915: Mark concurrent submissions with a weak-dependency

2020-05-05 Thread Patchwork
== Series Details ==

Series: series starting with [1/2] drm/i915: Mark concurrent submissions with a 
weak-dependency
URL   : https://patchwork.freedesktop.org/series/76953/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8430_full -> Patchwork_17582_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Known issues


  Here are the changes found in Patchwork_17582_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@gen9_exec_parse@allowed-all:
- shard-kbl:  [PASS][1] -> [DMESG-WARN][2] ([i915#1436] / 
[i915#716])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-kbl1/igt@gen9_exec_pa...@allowed-all.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17582/shard-kbl6/igt@gen9_exec_pa...@allowed-all.html

  * igt@kms_cursor_edge_walk@pipe-a-64x64-top-edge:
- shard-apl:  [PASS][3] -> [FAIL][4] ([i915#70])
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-apl3/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17582/shard-apl7/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html
- shard-kbl:  [PASS][5] -> [FAIL][6] ([i915#70])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-kbl6/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17582/shard-kbl4/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-indfb-pgflip-blt:
- shard-skl:  [PASS][7] -> [FAIL][8] ([i915#49])
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-skl2/igt@kms_frontbuffer_track...@psr-1p-primscrn-indfb-pgflip-blt.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17582/shard-skl10/igt@kms_frontbuffer_track...@psr-1p-primscrn-indfb-pgflip-blt.html

  * igt@kms_psr@psr2_cursor_render:
- shard-iclb: [PASS][9] -> [SKIP][10] ([fdo#109441]) +2 similar 
issues
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-iclb2/igt@kms_psr@psr2_cursor_render.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17582/shard-iclb7/igt@kms_psr@psr2_cursor_render.html

  * igt@kms_vblank@pipe-a-ts-continuation-dpms-suspend:
- shard-skl:  [PASS][11] -> [INCOMPLETE][12] ([i915#69])
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-skl6/igt@kms_vbl...@pipe-a-ts-continuation-dpms-suspend.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17582/shard-skl5/igt@kms_vbl...@pipe-a-ts-continuation-dpms-suspend.html

  * igt@testdisplay:
- shard-kbl:  [PASS][13] -> [TIMEOUT][14] ([i915#1692])
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-kbl7/i...@testdisplay.html
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17582/shard-kbl1/i...@testdisplay.html

  
 Possible fixes 

  * igt@gem_exec_params@invalid-bsd-ring:
- shard-iclb: [SKIP][15] ([fdo#109276]) -> [PASS][16]
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-iclb6/igt@gem_exec_par...@invalid-bsd-ring.html
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17582/shard-iclb4/igt@gem_exec_par...@invalid-bsd-ring.html

  * igt@kms_cursor_crc@pipe-a-cursor-suspend:
- shard-apl:  [DMESG-WARN][17] ([i915#180] / [i915#95]) -> 
[PASS][18]
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-apl8/igt@kms_cursor_...@pipe-a-cursor-suspend.html
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17582/shard-apl7/igt@kms_cursor_...@pipe-a-cursor-suspend.html

  * igt@kms_cursor_legacy@all-pipes-torture-move:
- shard-hsw:  [DMESG-WARN][19] ([i915#128]) -> [PASS][20]
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-hsw1/igt@kms_cursor_leg...@all-pipes-torture-move.html
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17582/shard-hsw6/igt@kms_cursor_leg...@all-pipes-torture-move.html

  * {igt@kms_flip@flip-vs-expired-vblank-interruptible@c-dp1}:
- shard-apl:  [FAIL][21] ([i915#79]) -> [PASS][22]
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-apl7/igt@kms_flip@flip-vs-expired-vblank-interrupti...@c-dp1.html
   [22]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17582/shard-apl2/igt@kms_flip@flip-vs-expired-vblank-interrupti...@c-dp1.html

  * {igt@kms_flip@flip-vs-expired-vblank@b-edp1}:
- shard-skl:  [FAIL][23] ([i915#79]) -> [PASS][24]
   [23]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-skl2/igt@kms_flip@flip-vs-expired-vbl...@b-edp1.html
   [24]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17582/shard-skl9/igt@kms_flip@flip-vs-expired-vbl...@b-edp1.html

  * {igt@kms_flip@plain-flip-ts-check-interruptible@a-edp1}:
- shard-skl:  [FAIL][25] ([i915#3

[Intel-gfx] ✓ Fi.CI.IGT: success for SAGV support for Gen12+ (rev35)

2020-05-05 Thread Patchwork
== Series Details ==

Series: SAGV support for Gen12+ (rev35)
URL   : https://patchwork.freedesktop.org/series/75129/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8430_full -> Patchwork_17581_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Known issues


  Here are the changes found in Patchwork_17581_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@gem_workarounds@suspend-resume:
- shard-apl:  [PASS][1] -> [DMESG-WARN][2] ([i915#180])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-apl4/igt@gem_workarou...@suspend-resume.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17581/shard-apl6/igt@gem_workarou...@suspend-resume.html

  * igt@kms_cursor_edge_walk@pipe-a-64x64-top-edge:
- shard-apl:  [PASS][3] -> [FAIL][4] ([i915#70] / [i915#95])
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-apl3/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17581/shard-apl3/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html
- shard-kbl:  [PASS][5] -> [FAIL][6] ([i915#70])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-kbl6/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17581/shard-kbl7/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html

  * igt@kms_cursor_legacy@all-pipes-single-move:
- shard-hsw:  [PASS][7] -> [INCOMPLETE][8] ([i915#61])
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-hsw7/igt@kms_cursor_leg...@all-pipes-single-move.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17581/shard-hsw8/igt@kms_cursor_leg...@all-pipes-single-move.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-indfb-pgflip-blt:
- shard-skl:  [PASS][9] -> [FAIL][10] ([i915#49])
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-skl2/igt@kms_frontbuffer_track...@psr-1p-primscrn-indfb-pgflip-blt.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17581/shard-skl2/igt@kms_frontbuffer_track...@psr-1p-primscrn-indfb-pgflip-blt.html

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-c:
- shard-skl:  [PASS][11] -> [INCOMPLETE][12] ([i915#69])
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-skl3/igt@kms_pipe_crc_ba...@suspend-read-crc-pipe-c.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17581/shard-skl10/igt@kms_pipe_crc_ba...@suspend-read-crc-pipe-c.html

  * igt@kms_psr@psr2_cursor_render:
- shard-iclb: [PASS][13] -> [SKIP][14] ([fdo#109441]) +2 similar 
issues
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-iclb2/igt@kms_psr@psr2_cursor_render.html
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17581/shard-iclb4/igt@kms_psr@psr2_cursor_render.html

  
 Possible fixes 

  * igt@kms_cursor_crc@pipe-a-cursor-suspend:
- shard-apl:  [DMESG-WARN][15] ([i915#180] / [i915#95]) -> 
[PASS][16]
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-apl8/igt@kms_cursor_...@pipe-a-cursor-suspend.html
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17581/shard-apl3/igt@kms_cursor_...@pipe-a-cursor-suspend.html

  * igt@kms_cursor_legacy@all-pipes-torture-move:
- shard-hsw:  [DMESG-WARN][17] ([i915#128]) -> [PASS][18]
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-hsw1/igt@kms_cursor_leg...@all-pipes-torture-move.html
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17581/shard-hsw4/igt@kms_cursor_leg...@all-pipes-torture-move.html

  * {igt@kms_flip@flip-vs-expired-vblank-interruptible@c-dp1}:
- shard-apl:  [FAIL][19] ([i915#79]) -> [PASS][20]
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-apl7/igt@kms_flip@flip-vs-expired-vblank-interrupti...@c-dp1.html
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17581/shard-apl8/igt@kms_flip@flip-vs-expired-vblank-interrupti...@c-dp1.html

  * {igt@kms_flip@flip-vs-expired-vblank@b-edp1}:
- shard-skl:  [FAIL][21] ([i915#79]) -> [PASS][22]
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-skl2/igt@kms_flip@flip-vs-expired-vbl...@b-edp1.html
   [22]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17581/shard-skl2/igt@kms_flip@flip-vs-expired-vbl...@b-edp1.html

  * {igt@kms_flip@plain-flip-ts-check-interruptible@a-edp1}:
- shard-skl:  [FAIL][23] ([i915#34]) -> [PASS][24]
   [23]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8430/shard-skl1/igt@kms_flip@plain-flip-ts-check-interrupti...@a-edp1.html
   [24]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17581/shard-skl5/igt@kms_flip@plain-flip-ts-check-interrupti...@a-edp1.html

  * igt@kms_hdr@bpc-switch-dpms:
-

[Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [01/14] drm/i915: Mark concurrent submissions with a weak-dependency

2020-05-05 Thread Patchwork
== Series Details ==

Series: series starting with [01/14] drm/i915: Mark concurrent submissions with 
a weak-dependency
URL   : https://patchwork.freedesktop.org/series/76973/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8433 -> Patchwork_17586


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17586/index.html

New tests
-

  New tests have been introduced between CI_DRM_8433 and Patchwork_17586:

### New IGT tests (1) ###

  * igt@dmabuf@all@dma_fence_proxy:
- Statuses : 40 pass(s)
- Exec time: [0.03, 0.10] s

  

Known issues


  Here are the changes found in Patchwork_17586 that come from known issues:

### IGT changes ###

 Warnings 

  * igt@i915_pm_rpm@module-reload:
- fi-kbl-x1275:   [SKIP][1] ([fdo#109271]) -> [FAIL][2] ([i915#62])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8433/fi-kbl-x1275/igt@i915_pm_...@module-reload.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17586/fi-kbl-x1275/igt@i915_pm_...@module-reload.html

  
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [i915#62]: https://gitlab.freedesktop.org/drm/intel/issues/62


Participating hosts (51 -> 44)
--

  Missing(7): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan 
fi-ctg-p8600 fi-byt-clapper fi-bdw-samus 


Build changes
-

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_8433 -> Patchwork_17586

  CI-20190529: 20190529
  CI_DRM_8433: db68fed086f2ddcdc30e0d9ca5faaba5e55d0d01 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5633: c8c2e5ed5cd8e4b7a69a903f3f1653612086abcc @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17586: 9ac5d5912623123a8c73ccb3c80385c0348b4897 @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

9ac5d5912623 drm/i915: Drop I915_IDLE_ENGINES_TIMEOUT
161372b2eff2 drm/i915: Drop I915_RESET_TIMEOUT and friends
e779556e8752 drm/i915: Replace the hardcoded I915_FENCE_TIMEOUT
a88bbc5651d9 drm/i915/gt: Declare when we enabled timeslicing
e2f962c64a8e drm/i915/gem: Allow combining submit-fences with syncobj
0654a7646abf drm/i915/gem: Teach execbuf how to wait on future syncobj
52b85f061e0e drm/syncobj: Allow use of dma-fence-proxy
53a2de478fd1 dma-buf: Proxy fence, an unsignaled fence placeholder
827865132bb2 drm/i915: Tidy awaiting on dma-fences
b147f27369cc drm/i915: Prevent using semaphores to chain up to external fences
d2d277b8470c drm/i915: Pull waiting on an external dma-fence into its routine
c098263cd3f9 drm/i915: Ignore submit-fences on the same timeline
46cafc823859 drm/i915: Propagate error from completed fences
c4828948d40e drm/i915: Mark concurrent submissions with a weak-dependency

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17586/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/14] drm/i915: Mark concurrent submissions with a weak-dependency

2020-05-05 Thread Patchwork
== Series Details ==

Series: series starting with [01/14] drm/i915: Mark concurrent submissions with 
a weak-dependency
URL   : https://patchwork.freedesktop.org/series/76973/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
c4828948d40e drm/i915: Mark concurrent submissions with a weak-dependency
46cafc823859 drm/i915: Propagate error from completed fences
c098263cd3f9 drm/i915: Ignore submit-fences on the same timeline
d2d277b8470c drm/i915: Pull waiting on an external dma-fence into its routine
b147f27369cc drm/i915: Prevent using semaphores to chain up to external fences
827865132bb2 drm/i915: Tidy awaiting on dma-fences
53a2de478fd1 dma-buf: Proxy fence, an unsignaled fence placeholder
-:45: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does 
MAINTAINERS need updating?
#45: 
new file mode 100644

-:380: CHECK:UNCOMMENTED_DEFINITION: spinlock_t definition without comment
#380: FILE: drivers/dma-buf/st-dma-fence-proxy.c:20:
+   spinlock_t lock;

-:540: WARNING:MEMORY_BARRIER: memory barrier without comment
#540: FILE: drivers/dma-buf/st-dma-fence-proxy.c:180:
+   smp_store_mb(container_of(cb, struct simple_cb, cb)->seen, true);

total: 0 errors, 2 warnings, 1 checks, 1043 lines checked
52b85f061e0e drm/syncobj: Allow use of dma-fence-proxy
0654a7646abf drm/i915/gem: Teach execbuf how to wait on future syncobj
e2f962c64a8e drm/i915/gem: Allow combining submit-fences with syncobj
a88bbc5651d9 drm/i915/gt: Declare when we enabled timeslicing
e779556e8752 drm/i915: Replace the hardcoded I915_FENCE_TIMEOUT
-:111: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does 
MAINTAINERS need updating?
#111: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 125 lines checked
161372b2eff2 drm/i915: Drop I915_RESET_TIMEOUT and friends
9ac5d5912623 drm/i915: Drop I915_IDLE_ENGINES_TIMEOUT

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH i-g-t 1/2] lib/i915: Report scheduler caps for timeslicing

2020-05-05 Thread Chris Wilson
Signed-off-by: Chris Wilson 
---
 include/drm-uapi/i915_drm.h |  8 +---
 lib/i915/gem_scheduler.c| 15 +++
 lib/i915/gem_scheduler.h|  1 +
 3 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/include/drm-uapi/i915_drm.h b/include/drm-uapi/i915_drm.h
index 2b55af13a..a222b6bfb 100644
--- a/include/drm-uapi/i915_drm.h
+++ b/include/drm-uapi/i915_drm.h
@@ -523,6 +523,7 @@ typedef struct drm_i915_irq_wait {
 #define   I915_SCHEDULER_CAP_PREEMPTION(1ul << 2)
 #define   I915_SCHEDULER_CAP_SEMAPHORES(1ul << 3)
 #define   I915_SCHEDULER_CAP_ENGINE_BUSY_STATS (1ul << 4)
+#define   I915_SCHEDULER_CAP_TIMESLICING   (1ul << 5)
 
 #define I915_PARAM_HUC_STATUS   42
 
@@ -1040,9 +1041,10 @@ struct drm_i915_gem_exec_fence {
 */
__u32 handle;
 
-#define I915_EXEC_FENCE_WAIT(1<<0)
-#define I915_EXEC_FENCE_SIGNAL  (1<<1)
-#define __I915_EXEC_FENCE_UNKNOWN_FLAGS (-(I915_EXEC_FENCE_SIGNAL << 1))
+#define I915_EXEC_FENCE_WAIT(1u << 0)
+#define I915_EXEC_FENCE_SIGNAL  (1u << 1)
+#define I915_EXEC_FENCE_WAIT_SUBMIT (1u << 2)
+#define __I915_EXEC_FENCE_UNKNOWN_FLAGS (-(I915_EXEC_FENCE_WAIT_SUBMIT << 1))
__u32 flags;
 };
 
diff --git a/lib/i915/gem_scheduler.c b/lib/i915/gem_scheduler.c
index 1beb85dec..a1dc694e5 100644
--- a/lib/i915/gem_scheduler.c
+++ b/lib/i915/gem_scheduler.c
@@ -131,6 +131,19 @@ bool gem_scheduler_has_engine_busy_stats(int fd)
I915_SCHEDULER_CAP_ENGINE_BUSY_STATS;
 }
 
+/**
+ * gem_scheduler_has_timeslicing:
+ * @fd: open i915 drm file descriptor
+ *
+ * Feature test macro to query whether the driver supports using HW preemption
+ * to implement timeslicing of userspace batches. This allows userspace to
+ * implement micro-level scheduling within their own batches.
+ */
+bool gem_scheduler_has_timeslicing(int fd)
+{
+   return gem_scheduler_capability(fd) & I915_SCHEDULER_CAP_TIMESLICING;
+}
+
 /**
  * gem_scheduler_print_capability:
  * @fd: open i915 drm file descriptor
@@ -151,6 +164,8 @@ void gem_scheduler_print_capability(int fd)
igt_info(" - With preemption enabled\n");
if (caps & I915_SCHEDULER_CAP_SEMAPHORES)
igt_info(" - With HW semaphores enabled\n");
+   if (caps & I915_SCHEDULER_CAP_TIMESLICING)
+   igt_info(" - With user timeslicing enabled\n");
if (caps & I915_SCHEDULER_CAP_ENGINE_BUSY_STATS)
igt_info(" - With engine busy statistics\n");
 }
diff --git a/lib/i915/gem_scheduler.h b/lib/i915/gem_scheduler.h
index 14bd4cac4..d43e84bd2 100644
--- a/lib/i915/gem_scheduler.h
+++ b/lib/i915/gem_scheduler.h
@@ -32,6 +32,7 @@ bool gem_scheduler_has_ctx_priority(int fd);
 bool gem_scheduler_has_preemption(int fd);
 bool gem_scheduler_has_semaphores(int fd);
 bool gem_scheduler_has_engine_busy_stats(int fd);
+bool gem_scheduler_has_timeslicing(int fd);
 void gem_scheduler_print_capability(int fd);
 
 #endif /* GEM_SCHEDULER_H */
-- 
2.26.2
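
A minimal sketch of how a test might gate on the new capability (the subtest
name and skeleton below are illustrative, not part of this patch):

#include "igt.h"
#include "i915/gem_scheduler.h"

igt_main
{
	int i915 = -1;

	igt_fixture
		i915 = drm_open_driver(DRIVER_INTEL);

	igt_subtest("user-timeslice") {
		/* Skip unless the kernel advertises user-visible timeslicing */
		igt_require(gem_scheduler_has_timeslicing(i915));

		/* ... drive WAIT_SUBMIT / semaphore based scheduling here ... */
	}

	igt_fixture
		close(i915);
}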

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH i-g-t 2/2] i915/gem_exec_fence: Teach invalid-wait about invalid future fences

2020-05-05 Thread Chris Wilson
When we allow a wait on a future fence, it must autoexpire if the
fence is never signaled by userspace. Also put future fences to work, as
the intention is to use them, along with WAIT_SUBMIT and semaphores, for
userspace to perform its own fine-grained scheduling. Or simply run
concurrent clients without having to flush batches between context
switches.

v2: Verify deadlock detection
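
As an illustration of the intended flow (assuming this series is applied, and
reusing the helper conventions of this test file -- syncobj_create(),
batch_create(), to_user_pointer() -- rather than quoting the test code below):

/* Queue a batch that waits on the *submission* of a not-yet-created fence */
uint32_t syncobj = syncobj_create(i915, 0);

struct drm_i915_gem_exec_object2 waiter = { .handle = batch_create(i915) };
struct drm_i915_gem_exec_fence wait_fence = {
	.handle = syncobj,
	.flags = I915_EXEC_FENCE_WAIT | I915_EXEC_FENCE_WAIT_SUBMIT,
};
struct drm_i915_gem_execbuffer2 wait_eb = {
	.buffers_ptr = to_user_pointer(&waiter),
	.buffer_count = 1,
	.cliprects_ptr = to_user_pointer(&wait_fence),
	.num_cliprects = 1,
	.flags = I915_EXEC_FENCE_ARRAY,
};
gem_execbuf(i915, &wait_eb); /* held back until the syncobj gains a fence */

/* A later submission signals the same syncobj, releasing the waiter */
struct drm_i915_gem_exec_object2 signaler = { .handle = batch_create(i915) };
struct drm_i915_gem_exec_fence signal_fence = {
	.handle = syncobj,
	.flags = I915_EXEC_FENCE_SIGNAL,
};
struct drm_i915_gem_execbuffer2 signal_eb = {
	.buffers_ptr = to_user_pointer(&signaler),
	.buffer_count = 1,
	.cliprects_ptr = to_user_pointer(&signal_fence),
	.num_cliprects = 1,
	.flags = I915_EXEC_FENCE_ARRAY,
};
gem_execbuf(i915, &signal_eb);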

Signed-off-by: Chris Wilson 
---
 tests/i915/gem_exec_fence.c | 558 +++-
 1 file changed, 555 insertions(+), 3 deletions(-)

diff --git a/tests/i915/gem_exec_fence.c b/tests/i915/gem_exec_fence.c
index 4b0d87e4d..e51b7452e 100644
--- a/tests/i915/gem_exec_fence.c
+++ b/tests/i915/gem_exec_fence.c
@@ -46,6 +46,15 @@ struct sync_merge_data {
 #define SYNC_IOC_MERGE _IOWR(SYNC_IOC_MAGIC, 3, struct sync_merge_data)
 #endif
 
+#define MI_SEMAPHORE_WAIT  (0x1c << 23)
+#define   MI_SEMAPHORE_POLL (1 << 15)
+#define   MI_SEMAPHORE_SAD_GT_SDD   (0 << 12)
+#define   MI_SEMAPHORE_SAD_GTE_SDD  (1 << 12)
+#define   MI_SEMAPHORE_SAD_LT_SDD   (2 << 12)
+#define   MI_SEMAPHORE_SAD_LTE_SDD  (3 << 12)
+#define   MI_SEMAPHORE_SAD_EQ_SDD   (4 << 12)
+#define   MI_SEMAPHORE_SAD_NEQ_SDD  (5 << 12)
+
 static void store(int fd, const struct intel_execution_engine2 *e,
  int fence, uint32_t target, unsigned offset_value)
 {
@@ -907,11 +916,12 @@ static void test_syncobj_invalid_wait(int fd)
struct drm_i915_gem_exec_fence fence = {
.handle = syncobj_create(fd, 0),
};
+   int out;
 
memset(&execbuf, 0, sizeof(execbuf));
execbuf.buffers_ptr = to_user_pointer(&obj);
execbuf.buffer_count = 1;
-   execbuf.flags = I915_EXEC_FENCE_ARRAY;
+   execbuf.flags = I915_EXEC_FENCE_ARRAY | I915_EXEC_FENCE_OUT;
execbuf.cliprects_ptr = to_user_pointer(&fence);
execbuf.num_cliprects = 1;
 
@@ -919,14 +929,59 @@ static void test_syncobj_invalid_wait(int fd)
obj.handle = gem_create(fd, 4096);
gem_write(fd, obj.handle, 0, &bbe, sizeof(bbe));
 
-   /* waiting before the fence is set is invalid */
+   /* waiting before the fence is set is^W may be invalid */
fence.flags = I915_EXEC_FENCE_WAIT;
-   igt_assert_eq(__gem_execbuf(fd, &execbuf), -EINVAL);
+   if (__gem_execbuf_wr(fd, &execbuf)) {
+   igt_assert_eq(__gem_execbuf(fd, &execbuf), -EINVAL);
+   return;
+   }
+
+   /* If we do allow the wait on a future fence, it should autoexpire */
+   gem_sync(fd, obj.handle);
+   out = execbuf.rsvd2 >> 32;
+   igt_assert_eq(sync_fence_status(out), -ETIMEDOUT);
+   close(out);
 
gem_close(fd, obj.handle);
syncobj_destroy(fd, fence.handle);
 }
 
+static void test_syncobj_incomplete_wait_submit(int i915)
+{
+   struct drm_i915_gem_exec_object2 obj = {
+   .handle = batch_create(i915),
+   };
+   struct drm_i915_gem_exec_fence fence = {
+   .handle = syncobj_create(i915, 0),
+   .flags = I915_EXEC_FENCE_WAIT | I915_EXEC_FENCE_WAIT_SUBMIT,
+   };
+   struct drm_i915_gem_execbuffer2 execbuf = {
+   .buffers_ptr = to_user_pointer(&obj),
+   .buffer_count = 1,
+
+   .cliprects_ptr = to_user_pointer(&fence),
+   .num_cliprects = 1,
+
+   .flags = I915_EXEC_FENCE_ARRAY | I915_EXEC_FENCE_OUT,
+   };
+   int out;
+
+   /* waiting before the fence is set is^W may be invalid */
+   if (__gem_execbuf_wr(i915, &execbuf)) {
+   igt_assert_eq(__gem_execbuf(i915, &execbuf), -EINVAL);
+   return;
+   }
+
+   /* If we do allow the wait on a future fence, it should autoexpire */
+   gem_sync(i915, obj.handle);
+   out = execbuf.rsvd2 >> 32;
+   igt_assert_eq(sync_fence_status(out), -ETIMEDOUT);
+   close(out);
+
+   gem_close(i915, obj.handle);
+   syncobj_destroy(i915, fence.handle);
+}
+
 static void test_syncobj_invalid_flags(int fd)
 {
const uint32_t bbe = MI_BATCH_BUFFER_END;
@@ -1073,6 +1128,398 @@ static void test_syncobj_wait(int fd)
}
 }
 
+static uint32_t future_batch(int i915, uint32_t offset)
+{
+   uint32_t handle = gem_create(i915, 4096);
+   const int gen = intel_gen(intel_get_drm_devid(i915));
+   uint32_t cs[16];
+   int i = 0;
+
+   cs[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
+   if (gen >= 8) {
+   cs[++i] = offset + 4000;
+   cs[++i] = 0;
+   } else if (gen >= 4) {
+   cs[++i] = 0;
+   cs[++i] = offset + 4000;
+   } else {
+   cs[i]--;
+   cs[++i] = offset + 4000;
+   }
+   cs[++i] = 1;
+   cs[i + 1] = MI_BATCH_BUFFER_END;
+   gem_write(i915, handle, 0, cs, sizeof(cs));
+
+   cs[i] = 2;
+   gem_write(i915, handle, 64, cs, sizeof(cs));
+
+   return handle;
+}

[Intel-gfx] [PATCH 02/14] drm/i915: Propagate error from completed fences

2020-05-05 Thread Chris Wilson
We need to preserve fatal errors from fences that are being terminated
as we hook them up.

Fixes: ef4688497512 ("drm/i915: Propagate fence errors")
Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
Cc: Matthew Auld 
---
 drivers/gpu/drm/i915/i915_request.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index 95edc5523a01..b4cc17fa9e8f 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1034,8 +1034,10 @@ i915_request_await_request(struct i915_request *to, 
struct i915_request *from)
GEM_BUG_ON(to == from);
GEM_BUG_ON(to->timeline == from->timeline);
 
-   if (i915_request_completed(from))
+   if (i915_request_completed(from)) {
+   i915_sw_fence_set_error_once(&to->submit, from->fence.error);
return 0;
+   }
 
if (to->engine->schedule) {
ret = i915_sched_node_add_dependency(&to->sched,
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 12/14] drm/i915: Replace the hardcoded I915_FENCE_TIMEOUT

2020-05-05 Thread Chris Wilson
Expose the hardcoded timeout for unsignaled foreign fences as a Kconfig
option, primarily to allow brave systems to disable the timeout and
solely rely on correct signaling.
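
The i915_fence_timeout() helper used at the call sites below is added in a
part of the diff not shown here; a plausible shape, assuming it merely
resolves the Kconfig value (milliseconds) into jiffies with 0 meaning "no
supplementary timer", would be:

static inline unsigned long
i915_fence_timeout(const struct drm_i915_private *i915)
{
	/* Sketch only: 0 disables the supplementary timer entirely */
	if (!CONFIG_DRM_I915_FENCE_TIMEOUT)
		return 0;

	return msecs_to_jiffies_timeout(CONFIG_DRM_I915_FENCE_TIMEOUT);
}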

Signed-off-by: Chris Wilson 
Cc: Joonas Lahtinen 
---
 drivers/gpu/drm/i915/Kconfig.profile   | 12 
 drivers/gpu/drm/i915/Makefile  |  1 +
 drivers/gpu/drm/i915/display/intel_display.c   |  5 +++--
 drivers/gpu/drm/i915/gem/i915_gem_clflush.c|  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_client_blt.c |  3 +--
 drivers/gpu/drm/i915/gem/i915_gem_fence.c  |  4 ++--
 drivers/gpu/drm/i915/i915_config.c | 15 +++
 drivers/gpu/drm/i915/i915_drv.h| 10 +-
 drivers/gpu/drm/i915/i915_request.c|  9 ++---
 9 files changed, 50 insertions(+), 11 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_config.c

diff --git a/drivers/gpu/drm/i915/Kconfig.profile 
b/drivers/gpu/drm/i915/Kconfig.profile
index 0bfd276c19fe..3925be65d314 100644
--- a/drivers/gpu/drm/i915/Kconfig.profile
+++ b/drivers/gpu/drm/i915/Kconfig.profile
@@ -1,3 +1,15 @@
+config DRM_I915_FENCE_TIMEOUT
+   int "Timeout for unsignaled foreign fences"
+   default 1 # milliseconds
+   help
+ When listening to a foreign fence, we install a supplementary timer
+ to ensure that we are always signaled and our userspace is able to
+ make forward progress. This value specifies the timeout used for an
+ unsignaled foreign fence.
+
+ May be 0 to disable the timeout, and rely on the foreign fence being
+ eventually signaled.
+
 config DRM_I915_USERFAULT_AUTOSUSPEND
int "Runtime autosuspend delay for userspace GGTT mmaps (ms)"
default 250 # milliseconds
diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 5359c736c789..b0da6ea6e3f1 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -35,6 +35,7 @@ subdir-ccflags-y += -I$(srctree)/$(src)
 
 # core driver code
 i915-y += i915_drv.o \
+ i915_config.o \
  i915_irq.o \
  i915_getparam.o \
  i915_params.o \
diff --git a/drivers/gpu/drm/i915/display/intel_display.c 
b/drivers/gpu/drm/i915/display/intel_display.c
index fd6d63b03489..432b4eeaf9f6 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -15814,7 +15814,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
if (new_plane_state->uapi.fence) { /* explicit fencing */
ret = i915_sw_fence_await_dma_fence(&state->commit_ready,
new_plane_state->uapi.fence,
-   I915_FENCE_TIMEOUT,
+   
i915_fence_timeout(dev_priv),
GFP_KERNEL);
if (ret < 0)
return ret;
@@ -15841,7 +15841,8 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
 
ret = i915_sw_fence_await_reservation(&state->commit_ready,
  obj->base.resv, NULL,
- false, I915_FENCE_TIMEOUT,
+ false,
+ 
i915_fence_timeout(dev_priv),
  GFP_KERNEL);
if (ret < 0)
goto unpin_fb;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c 
b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c
index 34be4c0ee7c5..bc0223716906 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c
@@ -108,7 +108,7 @@ bool i915_gem_clflush_object(struct drm_i915_gem_object 
*obj,
if (clflush) {
i915_sw_fence_await_reservation(&clflush->base.chain,
obj->base.resv, NULL, true,
-   I915_FENCE_TIMEOUT,
+   
i915_fence_timeout(to_i915(obj->base.dev)),
I915_FENCE_GFP);
dma_resv_add_excl_fence(obj->base.resv, &clflush->base.dma);
dma_fence_work_commit(&clflush->base);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c 
b/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c
index 3a146aa2593b..d3a86a4d5c04 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c
@@ -288,8 +288,7 @@ int i915_gem_schedule_fill_pages_blt(struct 
drm_i915_gem_object *obj,
 
i915_gem_object_lock(obj);
err = i915_sw_fence_await_reservation(&work->wait,
- obj->base.resv, NULL,
- t

[Intel-gfx] [PATCH 01/14] drm/i915: Mark concurrent submissions with a weak-dependency

2020-05-05 Thread Chris Wilson
We recorded the dependencies for WAIT_FOR_SUBMIT in order that we could
correctly perform priority inheritance from the parallel branches to the
common trunk. However, for the purpose of timeslicing and reset
handling, the dependency is weak -- as the pair of requests are
allowed to run in parallel and not in strict succession. So, for example,
we do not need to suspend one if the other hangs.

The real significance though is that this allows us to rearrange
groups of WAIT_FOR_SUBMIT linked requests along the single engine, and
so can resolve user level inter-batch scheduling dependencies from user
semaphores.

Fixes: c81471f5e95c ("drm/i915: Copy across scheduler behaviour flags across 
submit fences")
Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
Cc:  # v5.6+
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 9 +
 drivers/gpu/drm/i915/i915_request.c | 8 ++--
 drivers/gpu/drm/i915/i915_scheduler.c   | 4 +++-
 drivers/gpu/drm/i915/i915_scheduler.h   | 3 ++-
 drivers/gpu/drm/i915/i915_scheduler_types.h | 1 +
 5 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c 
b/drivers/gpu/drm/i915/gt/intel_lrc.c
index dc3f2ee7136d..10109f661bcb 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1880,6 +1880,9 @@ static void defer_request(struct i915_request *rq, struct 
list_head * const pl)
struct i915_request *w =
container_of(p->waiter, typeof(*w), sched);
 
+   if (p->flags & I915_DEPENDENCY_WEAK)
+   continue;
+
/* Leave semaphores spinning on the other engines */
if (w->engine != rq->engine)
continue;
@@ -2726,6 +2729,9 @@ static void __execlists_hold(struct i915_request *rq)
struct i915_request *w =
container_of(p->waiter, typeof(*w), sched);
 
+   if (p->flags & I915_DEPENDENCY_WEAK)
+   continue;
+
/* Leave semaphores spinning on the other engines */
if (w->engine != rq->engine)
continue;
@@ -2850,6 +2856,9 @@ static void __execlists_unhold(struct i915_request *rq)
struct i915_request *w =
container_of(p->waiter, typeof(*w), sched);
 
+   if (p->flags & I915_DEPENDENCY_WEAK)
+   continue;
+
/* Propagate any change in error status */
if (rq->fence.error)
i915_request_set_error_once(w, rq->fence.error);
diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index 22635bbabf06..95edc5523a01 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1038,7 +1038,9 @@ i915_request_await_request(struct i915_request *to, 
struct i915_request *from)
return 0;
 
if (to->engine->schedule) {
-   ret = i915_sched_node_add_dependency(&to->sched, &from->sched);
+   ret = i915_sched_node_add_dependency(&to->sched,
+&from->sched,
+0);
if (ret < 0)
return ret;
}
@@ -1200,7 +1202,9 @@ __i915_request_await_execution(struct i915_request *to,
 
/* Couple the dependency tree for PI on this exposed to->fence */
if (to->engine->schedule) {
-   err = i915_sched_node_add_dependency(&to->sched, &from->sched);
+   err = i915_sched_node_add_dependency(&to->sched,
+&from->sched,
+I915_DEPENDENCY_WEAK);
if (err < 0)
return err;
}
diff --git a/drivers/gpu/drm/i915/i915_scheduler.c 
b/drivers/gpu/drm/i915/i915_scheduler.c
index 37cfcf5b321b..5f4c1e49e974 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -462,7 +462,8 @@ bool __i915_sched_node_add_dependency(struct 
i915_sched_node *node,
 }
 
 int i915_sched_node_add_dependency(struct i915_sched_node *node,
-  struct i915_sched_node *signal)
+  struct i915_sched_node *signal,
+  unsigned long flags)
 {
struct i915_dependency *dep;
 
@@ -473,6 +474,7 @@ int i915_sched_node_add_dependency(struct i915_sched_node 
*node,
local_bh_disable();
 
if (!__i915_sched_node_add_dependency(node, signal, dep,
+ flags |
  I915_DEPENDENCY_EXTERNAL |

[Intel-gfx] [PATCH 07/14] dma-buf: Proxy fence, an unsignaled fence placeholder

2020-05-05 Thread Chris Wilson
Often we need to create a fence for a future event that has not yet been
associated with a fence. We can store a proxy fence, a placeholder, in
the timeline and replace it later when the real fence is known. Any
listeners that attach to the proxy fence will automatically be signaled
when the real fence completes, and any future listeners will instead be
attached directly to the real fence, avoiding any indirection overhead.
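
An illustrative usage flow (the exact entry points live in the new
include/linux/dma-fence-proxy.h, which is not quoted below;
dma_fence_create_proxy() and dma_fence_replace_proxy() are assumed names
for this sketch):

struct dma_fence *proxy, *real;

proxy = dma_fence_create_proxy();	/* placeholder for a future event */
if (!proxy)
	return -ENOMEM;

/* Publish the placeholder now, e.g. install it into a timeline/syncobj */
timeline_install_fence(tl, proxy);	/* hypothetical consumer */

/* ... later, once the real fence exists ... */
real = lookup_real_fence();		/* hypothetical producer */
dma_fence_replace_proxy(proxy, real);	/* existing waiters move to 'real' */

/* reference handling elided for brevity */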

Signed-off-by: Chris Wilson 
Cc: Lionel Landwerlin 
---
 drivers/dma-buf/Makefile |  13 +-
 drivers/dma-buf/dma-fence-private.h  |  20 +
 drivers/dma-buf/dma-fence-proxy.c| 248 ++
 drivers/dma-buf/dma-fence.c  |   4 +-
 drivers/dma-buf/selftests.h  |   1 +
 drivers/dma-buf/st-dma-fence-proxy.c | 699 +++
 include/linux/dma-fence-proxy.h  |  34 ++
 7 files changed, 1015 insertions(+), 4 deletions(-)
 create mode 100644 drivers/dma-buf/dma-fence-private.h
 create mode 100644 drivers/dma-buf/dma-fence-proxy.c
 create mode 100644 drivers/dma-buf/st-dma-fence-proxy.c
 create mode 100644 include/linux/dma-fence-proxy.h

diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
index 995e05f609ff..afaf6dadd9a3 100644
--- a/drivers/dma-buf/Makefile
+++ b/drivers/dma-buf/Makefile
@@ -1,6 +1,12 @@
 # SPDX-License-Identifier: GPL-2.0-only
-obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
-dma-resv.o seqno-fence.o
+obj-y := \
+   dma-buf.o \
+   dma-fence.o \
+   dma-fence-array.o \
+   dma-fence-chain.o \
+   dma-fence-proxy.o \
+   dma-resv.o \
+   seqno-fence.o
 obj-$(CONFIG_DMABUF_HEAPS) += dma-heap.o
 obj-$(CONFIG_DMABUF_HEAPS) += heaps/
 obj-$(CONFIG_SYNC_FILE)+= sync_file.o
@@ -10,6 +16,7 @@ obj-$(CONFIG_UDMABUF) += udmabuf.o
 dmabuf_selftests-y := \
selftest.o \
st-dma-fence.o \
-   st-dma-fence-chain.o
+   st-dma-fence-chain.o \
+   st-dma-fence-proxy.o
 
 obj-$(CONFIG_DMABUF_SELFTESTS) += dmabuf_selftests.o
diff --git a/drivers/dma-buf/dma-fence-private.h 
b/drivers/dma-buf/dma-fence-private.h
new file mode 100644
index ..6924d28af0fa
--- /dev/null
+++ b/drivers/dma-buf/dma-fence-private.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Fence mechanism for dma-buf and to allow for asynchronous dma access
+ *
+ * Copyright (C) 2012 Canonical Ltd
+ * Copyright (C) 2012 Texas Instruments
+ *
+ * Authors:
+ * Rob Clark 
+ * Maarten Lankhorst 
+ */
+
+#ifndef DMA_FENCE_PRIVATE_H
+#define DMA_FENCE_PRIVATE_H
+
+struct dma_fence;
+
+bool __dma_fence_enable_signaling(struct dma_fence *fence);
+
+#endif /* DMA_FENCE_PRIVATE_H */
diff --git a/drivers/dma-buf/dma-fence-proxy.c 
b/drivers/dma-buf/dma-fence-proxy.c
new file mode 100644
index ..f0cd89b966e0
--- /dev/null
+++ b/drivers/dma-buf/dma-fence-proxy.c
@@ -0,0 +1,248 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * dma-fence-proxy: placeholder unsignaled fence
+ *
+ * Copyright (C) 2017-2019 Intel Corporation
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "dma-fence-private.h"
+
+struct dma_fence_proxy {
+   struct dma_fence base;
+
+   struct dma_fence *real;
+   struct dma_fence_cb cb;
+   struct irq_work work;
+
+   wait_queue_head_t wq;
+};
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#define same_lockclass(A, B) (A)->dep_map.key == (B)->dep_map.key
+#else
+#define same_lockclass(A, B) 0
+#endif
+
+static const char *proxy_get_driver_name(struct dma_fence *fence)
+{
+   struct dma_fence_proxy *p = container_of(fence, typeof(*p), base);
+   struct dma_fence *real = READ_ONCE(p->real);
+
+   return real ? real->ops->get_driver_name(real) : "proxy";
+}
+
+static const char *proxy_get_timeline_name(struct dma_fence *fence)
+{
+   struct dma_fence_proxy *p = container_of(fence, typeof(*p), base);
+   struct dma_fence *real = READ_ONCE(p->real);
+
+   return real ? real->ops->get_timeline_name(real) : "unset";
+}
+
+static void proxy_irq_work(struct irq_work *work)
+{
+   struct dma_fence_proxy *p = container_of(work, typeof(*p), work);
+
+   dma_fence_signal(&p->base);
+   dma_fence_put(&p->base);
+}
+
+static void proxy_callback(struct dma_fence *real, struct dma_fence_cb *cb)
+{
+   struct dma_fence_proxy *p = container_of(cb, typeof(*p), cb);
+
+   if (real->error)
+   dma_fence_set_error(&p->base, real->error);
+
+   /* Lower the height of the proxy chain -> single stack frame */
+   irq_work_queue(&p->work);
+}
+
+static bool proxy_enable_signaling(struct dma_fence *fence)
+{
+   struct dma_fence_proxy *p = container_of(fence, typeof(*p), base);
+   struct dma_fence *real = READ_ONCE(p->real);
+   bool ret = true;
+
+   if (real) {
+   spin_lock_nested(real->lock,
+same_lockclass(&p->wq.lock, real->lock));
+   ret = __dma_fence_enable_si

[Intel-gfx] [PATCH 14/14] drm/i915: Drop I915_IDLE_ENGINES_TIMEOUT

2020-05-05 Thread Chris Wilson
This timeout is only used in one place, to provide a tiny bit of grace
for slow igt tests to clean up after themselves. If we are a bit stricter and
opt to kill outstanding requests rather than wait, we can speed up igt by
not waiting for 200ms after a hang.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/i915_debugfs.c | 11 ++-
 drivers/gpu/drm/i915/i915_drv.h |  2 --
 2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
b/drivers/gpu/drm/i915/i915_debugfs.c
index 8e98df6a3045..649acf1fc33d 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -1463,12 +1463,13 @@ gt_drop_caches(struct intel_gt *gt, u64 val)
 {
int ret;
 
-   if (val & DROP_RESET_ACTIVE &&
-   wait_for(intel_engines_are_idle(gt), I915_IDLE_ENGINES_TIMEOUT))
-   intel_gt_set_wedged(gt);
+   if (val & (DROP_RETIRE | DROP_RESET_ACTIVE))
+   intel_gt_wait_for_idle(gt, 1);
 
-   if (val & DROP_RETIRE)
-   intel_gt_retire_requests(gt);
+   if (val & DROP_RESET_ACTIVE && intel_gt_pm_get_if_awake(gt)) {
+   intel_gt_set_wedged(gt);
+   intel_gt_pm_put(gt);
+   }
 
if (val & (DROP_IDLE | DROP_ACTIVE)) {
ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index ad287e5d6ded..97687ea53c3d 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -612,8 +612,6 @@ struct i915_gem_mm {
u32 shrink_count;
 };
 
-#define I915_IDLE_ENGINES_TIMEOUT (200) /* in ms */
-
 unsigned long i915_fence_context_timeout(const struct drm_i915_private *i915,
 u64 context);
 
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 03/14] drm/i915: Ignore submit-fences on the same timeline

2020-05-05 Thread Chris Wilson
While we ordinarily do not skip submit-fences, due to the accompanying
hook that we want to call back upon execution, a submit-fence on the same
timeline is meaningless.

Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
---
 drivers/gpu/drm/i915/i915_request.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index b4cc17fa9e8f..d4cbdee5a89a 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1242,6 +1242,9 @@ i915_request_await_execution(struct i915_request *rq,
continue;
}
 
+   if (fence->context == rq->fence.context)
+   continue;
+
/*
 * We don't squash repeated fence dependencies here as we
 * want to run our callback in all cases.
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 11/14] drm/i915/gt: Declare when we enabled timeslicing

2020-05-05 Thread Chris Wilson
Let userspace know whether it can trust timeslicing by including it as part
of the I915_PARAM_HAS_SCHEDULER::I915_SCHEDULER_CAP_TIMESLICING capability.

v2: Only declare timeslicing if we can safely preempt userspace.
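
A hypothetical userspace check for the new bit (standard GETPARAM uAPI, not
part of this patch):

#include <stdbool.h>
#include <xf86drm.h>
#include <i915_drm.h>

static bool has_timeslicing(int fd)
{
	int caps = 0;
	struct drm_i915_getparam gp = {
		.param = I915_PARAM_HAS_SCHEDULER,
		.value = &caps,
	};

	if (drmIoctl(fd, DRM_IOCTL_I915_GETPARAM, &gp))
		return false;

	return caps & I915_SCHEDULER_CAP_TIMESLICING;
}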

Fixes: 8ee36e048c98 ("drm/i915/execlists: Minimalistic timeslicing")
Link: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/3802
Link: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4854
Signed-off-by: Chris Wilson 
Cc: Kenneth Graunke 
Cc: Tvrtko Ursulin 
---
 drivers/gpu/drm/i915/gt/intel_engine_user.c | 1 +
 include/uapi/drm/i915_drm.h | 1 +
 2 files changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_user.c 
b/drivers/gpu/drm/i915/gt/intel_engine_user.c
index 848decee9066..8415511f1465 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_user.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_user.c
@@ -98,6 +98,7 @@ static void set_scheduler_caps(struct drm_i915_private *i915)
MAP(HAS_PREEMPTION, PREEMPTION),
MAP(HAS_SEMAPHORES, SEMAPHORES),
MAP(SUPPORTS_STATS, ENGINE_BUSY_STATS),
+   MAP(HAS_TIMESLICES, TIMESLICING),
 #undef MAP
};
struct intel_engine_cs *engine;
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 704dd0e3bc1d..1ee227b5131a 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -523,6 +523,7 @@ typedef struct drm_i915_irq_wait {
 #define   I915_SCHEDULER_CAP_PREEMPTION(1ul << 2)
 #define   I915_SCHEDULER_CAP_SEMAPHORES(1ul << 3)
 #define   I915_SCHEDULER_CAP_ENGINE_BUSY_STATS (1ul << 4)
+#define   I915_SCHEDULER_CAP_TIMESLICING   (1ul << 5)
 
 #define I915_PARAM_HUC_STATUS   42
 
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 06/14] drm/i915: Tidy awaiting on dma-fences

2020-05-05 Thread Chris Wilson
Just tidy up the return handling for completed dma-fences.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/i915_sw_fence.c | 10 --
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c 
b/drivers/gpu/drm/i915/i915_sw_fence.c
index 7daf81f55c90..295b9829e2da 100644
--- a/drivers/gpu/drm/i915/i915_sw_fence.c
+++ b/drivers/gpu/drm/i915/i915_sw_fence.c
@@ -546,13 +546,11 @@ int __i915_sw_fence_await_dma_fence(struct i915_sw_fence 
*fence,
cb->fence = fence;
i915_sw_fence_await(fence);
 
-   ret = dma_fence_add_callback(dma, &cb->base, __dma_i915_sw_fence_wake);
-   if (ret == 0) {
-   ret = 1;
-   } else {
+   ret = 1;
+   if (dma_fence_add_callback(dma, &cb->base, __dma_i915_sw_fence_wake)) {
+   /* fence already signaled */
__dma_i915_sw_fence_wake(dma, &cb->base);
-   if (ret == -ENOENT) /* fence already signaled */
-   ret = 0;
+   ret = 0;
}
 
return ret;
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 05/14] drm/i915: Prevent using semaphores to chain up to external fences

2020-05-05 Thread Chris Wilson
The downside of using semaphores is that we lose metadata passing
along the signaling chain. This is particularly nasty when we
need to pass along a fatal error such as EFAULT or EDEADLK. For
fatal errors we want to scrub the request before it is executed,
which means that we cannot preload the request onto HW and have
it wait upon a semaphore.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/i915_request.c | 26 +
 drivers/gpu/drm/i915/i915_scheduler_types.h |  1 +
 2 files changed, 27 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index dfb1e86ffc7f..e3c691bf082f 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1002,6 +1002,15 @@ emit_semaphore_wait(struct i915_request *to,
if (!rcu_access_pointer(from->hwsp_cacheline))
goto await_fence;
 
+   /*
+* If this or its dependents are waiting on an external fence
+* that may fail catastrophically, then we want to avoid using
+* semaphores as they bypass the fence signaling metadata, and we
+* lose the fence->error propagation.
+*/
+   if (from->sched.flags & I915_SCHED_HAS_EXTERNAL_CHAIN)
+   goto await_fence;
+
/* Just emit the first semaphore we see as request space is limited. */
if (already_busywaiting(to) & mask)
goto await_fence;
@@ -1064,12 +1073,29 @@ i915_request_await_request(struct i915_request *to, 
struct i915_request *from)
return ret;
}
 
+   if (from->sched.flags & I915_SCHED_HAS_EXTERNAL_CHAIN)
+   to->sched.flags |= I915_SCHED_HAS_EXTERNAL_CHAIN;
+
return 0;
 }
 
+static void mark_external(struct i915_request *rq)
+{
+   /*
+* The downside of using semaphores is that we lose metadata passing
+* along the signaling chain. This is particularly nasty when we
+* need to pass along a fatal error such as EFAULT or EDEADLK. For
+* fatal errors we want to scrub the request before it is executed,
+* which means that we cannot preload the request onto HW and have
+* it wait upon a semaphore.
+*/
+   rq->sched.flags |= I915_SCHED_HAS_EXTERNAL_CHAIN;
+}
+
 static int
 i915_request_await_external(struct i915_request *rq, struct dma_fence *fence)
 {
+   mark_external(rq);
return i915_sw_fence_await_dma_fence(&rq->submit, fence,
 fence->context ? 
I915_FENCE_TIMEOUT : 0,
 I915_FENCE_GFP);
diff --git a/drivers/gpu/drm/i915/i915_scheduler_types.h 
b/drivers/gpu/drm/i915/i915_scheduler_types.h
index 7186875088a0..6ab2c5289bed 100644
--- a/drivers/gpu/drm/i915/i915_scheduler_types.h
+++ b/drivers/gpu/drm/i915/i915_scheduler_types.h
@@ -66,6 +66,7 @@ struct i915_sched_node {
struct i915_sched_attr attr;
unsigned int flags;
 #define I915_SCHED_HAS_SEMAPHORE_CHAIN BIT(0)
+#define I915_SCHED_HAS_EXTERNAL_CHAIN  BIT(1)
intel_engine_mask_t semaphores;
 };
 
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 13/14] drm/i915: Drop I915_RESET_TIMEOUT and friends

2020-05-05 Thread Chris Wilson
These were used to set various timeouts for the reset procedure
(deciding when the engine was dead, and whether the reset itself was
making forward progress). They are no longer used.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/i915_drv.h | 7 ---
 1 file changed, 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 2e3b5c4d0759..ad287e5d6ded 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -623,13 +623,6 @@ i915_fence_timeout(const struct drm_i915_private *i915)
return i915_fence_context_timeout(i915, U64_MAX);
 }
 
-#define I915_RESET_TIMEOUT (10 * HZ) /* 10s */
-
-#define I915_ENGINE_DEAD_TIMEOUT  (4 * HZ)  /* Seqno, head and subunits dead */
-#define I915_SEQNO_DEAD_TIMEOUT   (12 * HZ) /* Seqno dead with active head */
-
-#define I915_ENGINE_WEDGED_TIMEOUT  (60 * HZ)  /* Reset but no recovery? */
-
 /* Amount of SAGV/QGV points, BSpec precisely defines this */
 #define I915_NUM_QGV_POINTS 8
 
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 10/14] drm/i915/gem: Allow combining submit-fences with syncobj

2020-05-05 Thread Chris Wilson
We allow exported sync_file fences to be used as submit fences, but they
are not the only source of user fences. We also accept an array of
syncobjs, and as with sync_file these are dma_fences underneath and so
feature the same set of controls. The submit-fence allows a request to be
scheduled at the same time as its signaler, rather than after it as
normal. Userspace can combine a submit-fence with its own semaphores for
intra-batch scheduling.

Not exposing submit-fences to syncobj was at the time just a matter of
pragmatic expediency.
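
As an illustration (not part of the patch), an execbuf could carry a
submit-fence wait on a syncobj roughly as below; the DRM fd, the syncobj
handle and the rest of the execbuf setup (buffers, batch) are assumed to
come from elsewhere:

  #include <stdint.h>
  #include <xf86drm.h>
  #include <drm/i915_drm.h>

  static int queue_after_submit(int drm_fd, uint32_t syncobj_handle,
                                struct drm_i915_gem_execbuffer2 *execbuf)
  {
          /* Wait only until the signaler is submitted, not until it completes. */
          struct drm_i915_gem_exec_fence fence = {
                  .handle = syncobj_handle,
                  .flags  = I915_EXEC_FENCE_WAIT | I915_EXEC_FENCE_WAIT_SUBMIT,
          };

          /* With I915_EXEC_FENCE_ARRAY the cliprects fields carry the array. */
          execbuf->flags |= I915_EXEC_FENCE_ARRAY;
          execbuf->cliprects_ptr = (uint64_t)(uintptr_t)&fence;
          execbuf->num_cliprects = 1;

          return drmIoctl(drm_fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, execbuf);
  }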

Fixes: a88b6e4cbafd ("drm/i915: Allow specification of parallel execbuf")
Link: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4854
Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
Cc: Lionel Landwerlin 
Reviewed-by: Tvrtko Ursulin 
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c| 14 +++
 drivers/gpu/drm/i915/i915_request.c   | 24 +++
 include/uapi/drm/i915_drm.h   |  7 +++---
 3 files changed, 37 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c 
b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 7abb96505a31..ec16ace50acf 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -2432,7 +2432,7 @@ static void
 __free_fence_array(struct drm_syncobj **fences, unsigned int n)
 {
while (n--)
-   drm_syncobj_put(ptr_mask_bits(fences[n], 2));
+   drm_syncobj_put(ptr_mask_bits(fences[n], 3));
kvfree(fences);
 }
 
@@ -2489,7 +2489,7 @@ get_fence_array(struct drm_i915_gem_execbuffer2 *args,
BUILD_BUG_ON(~(ARCH_KMALLOC_MINALIGN - 1) &
 ~__I915_EXEC_FENCE_UNKNOWN_FLAGS);
 
-   fences[n] = ptr_pack_bits(syncobj, fence.flags, 2);
+   fences[n] = ptr_pack_bits(syncobj, fence.flags, 3);
}
 
return fences;
@@ -2520,7 +2520,7 @@ await_fence_array(struct i915_execbuffer *eb,
struct dma_fence *fence;
unsigned int flags;
 
-   syncobj = ptr_unpack_bits(fences[n], &flags, 2);
+   syncobj = ptr_unpack_bits(fences[n], &flags, 3);
if (!(flags & I915_EXEC_FENCE_WAIT))
continue;
 
@@ -2544,7 +2544,11 @@ await_fence_array(struct i915_execbuffer *eb,
spin_unlock(&syncobj->lock);
}
 
-   err = i915_request_await_dma_fence(eb->request, fence);
+   if (flags & I915_EXEC_FENCE_WAIT_SUBMIT)
+   err = i915_request_await_execution(eb->request, fence,
+  
eb->engine->bond_execute);
+   else
+   err = i915_request_await_dma_fence(eb->request, fence);
dma_fence_put(fence);
if (err < 0)
return err;
@@ -2565,7 +2569,7 @@ signal_fence_array(struct i915_execbuffer *eb,
struct drm_syncobj *syncobj;
unsigned int flags;
 
-   syncobj = ptr_unpack_bits(fences[n], &flags, 2);
+   syncobj = ptr_unpack_bits(fences[n], &flags, 3);
if (!(flags & I915_EXEC_FENCE_SIGNAL))
continue;
 
diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index f506e3914dd8..b1cbc6d0babf 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1381,6 +1381,26 @@ __i915_request_await_execution(struct i915_request *to,
 &from->fence);
 }
 
+static int execution_proxy(struct await_proxy *ap)
+{
+   return i915_request_await_execution(ap->request, ap->fence, ap->data);
+}
+
+static int
+i915_request_await_proxy_execution(struct i915_request *rq,
+  struct dma_fence *fence,
+  void (*hook)(struct i915_request *rq,
+   struct dma_fence *signal))
+{
+   /*
+* We have to wait until the real request is known in order to
+* be able to hook into its execution, as opposed to waiting for
+* its completion.
+*/
+   return __i915_request_await_proxy(rq, fence, I915_FENCE_TIMEOUT,
+ execution_proxy, hook);
+}
+
 int
 i915_request_await_execution(struct i915_request *rq,
 struct dma_fence *fence,
@@ -1420,6 +1440,10 @@ i915_request_await_execution(struct i915_request *rq,
ret = __i915_request_await_execution(rq,
 to_request(fence),
 hook);
+   else if (dma_fence_is_proxy(fence))
+   ret = i915_request_await_proxy_execution(rq,
+ 

[Intel-gfx] [PATCH 04/14] drm/i915: Pull waiting on an external dma-fence into its routine

2020-05-05 Thread Chris Wilson
As a means for a small code consolidation, but primarily to start
thinking more carefully about internal-vs-external linkage, pull the
pair of i915_sw_fence_await_dma_fence() calls into a common routine.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/i915_request.c | 16 ++--
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index d4cbdee5a89a..dfb1e86ffc7f 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1067,6 +1067,14 @@ i915_request_await_request(struct i915_request *to, 
struct i915_request *from)
return 0;
 }
 
+static int
+i915_request_await_external(struct i915_request *rq, struct dma_fence *fence)
+{
+   return i915_sw_fence_await_dma_fence(&rq->submit, fence,
+fence->context ? 
I915_FENCE_TIMEOUT : 0,
+I915_FENCE_GFP);
+}
+
 int
 i915_request_await_dma_fence(struct i915_request *rq, struct dma_fence *fence)
 {
@@ -1114,9 +1122,7 @@ i915_request_await_dma_fence(struct i915_request *rq, 
struct dma_fence *fence)
if (dma_fence_is_i915(fence))
ret = i915_request_await_request(rq, to_request(fence));
else
-   ret = i915_sw_fence_await_dma_fence(&rq->submit, fence,
-   fence->context ? 
I915_FENCE_TIMEOUT : 0,
-   I915_FENCE_GFP);
+   ret = i915_request_await_external(rq, fence);
if (ret < 0)
return ret;
 
@@ -1255,9 +1261,7 @@ i915_request_await_execution(struct i915_request *rq,
 to_request(fence),
 hook);
else
-   ret = i915_sw_fence_await_dma_fence(&rq->submit, fence,
-   I915_FENCE_TIMEOUT,
-   GFP_KERNEL);
+   ret = i915_request_await_external(rq, fence);
if (ret < 0)
return ret;
} while (--nchild);
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 08/14] drm/syncobj: Allow use of dma-fence-proxy

2020-05-05 Thread Chris Wilson
Allow the callers to supply a dma-fence-proxy for asynchronous waiting on
future fences.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/drm_syncobj.c | 8 ++--
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index 42d46414f767..e141db0e1eb6 100644
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -184,6 +184,7 @@
  */
 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -324,14 +325,9 @@ void drm_syncobj_replace_fence(struct drm_syncobj *syncobj,
struct dma_fence *old_fence;
struct syncobj_wait_entry *cur, *tmp;
 
-   if (fence)
-   dma_fence_get(fence);
-
spin_lock(&syncobj->lock);
 
-   old_fence = rcu_dereference_protected(syncobj->fence,
- lockdep_is_held(&syncobj->lock));
-   rcu_assign_pointer(syncobj->fence, fence);
+   old_fence = dma_fence_replace_proxy(&syncobj->fence, fence);
 
if (fence != old_fence) {
list_for_each_entry_safe(cur, tmp, &syncobj->cb_list, node)
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 09/14] drm/i915/gem: Teach execbuf how to wait on future syncobj

2020-05-05 Thread Chris Wilson
If a syncobj has not yet been assigned a fence, treat it as a future fence
and install and wait upon a dma-fence-proxy. The proxy will be replaced by
the real fence later, and that fence will then be responsible for
signaling our waiter.
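
As an illustration (not part of the patch), the submission order this
enables looks roughly as below. queue_execbuf() is an assumed helper
wrapping DRM_IOCTL_I915_GEM_EXECBUFFER2 with a single I915_EXEC_FENCE_ARRAY
entry, much like the execbuf sketch shown earlier in this series:

  #include <stdint.h>
  #include <drm/i915_drm.h>

  /* Assumed helper: submits one batch with a single exec_fence entry. */
  int queue_execbuf(int drm_fd, const struct drm_i915_gem_exec_fence *fence);

  static int waiter_before_signaler(int drm_fd, uint32_t syncobj_handle)
  {
          struct drm_i915_gem_exec_fence wait = {
                  .handle = syncobj_handle,
                  .flags  = I915_EXEC_FENCE_WAIT,
          };
          struct drm_i915_gem_exec_fence signal = {
                  .handle = syncobj_handle,
                  .flags  = I915_EXEC_FENCE_SIGNAL,
          };
          int err;

          /*
           * The syncobj carries no fence yet: previously this execbuf failed
           * with -EINVAL; now a dma-fence-proxy is installed and waited upon.
           */
          err = queue_execbuf(drm_fd, &wait);
          if (err)
                  return err;

          /* The signaler arrives later and replaces the proxy with its fence. */
          return queue_execbuf(drm_fd, &signal);
  }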

Link: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4854
Signed-off-by: Chris Wilson 
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c|  21 ++-
 drivers/gpu/drm/i915/gt/intel_lrc.c   |   3 +
 drivers/gpu/drm/i915/i915_request.c   | 135 ++
 drivers/gpu/drm/i915/i915_scheduler.c |  41 ++
 drivers/gpu/drm/i915/i915_scheduler.h |   3 +
 5 files changed, 201 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c 
b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 966523a8503f..7abb96505a31 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -5,6 +5,7 @@
  */
 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -2524,8 +2525,24 @@ await_fence_array(struct i915_execbuffer *eb,
continue;
 
fence = drm_syncobj_fence_get(syncobj);
-   if (!fence)
-   return -EINVAL;
+   if (!fence) {
+   struct dma_fence *old;
+
+   fence = dma_fence_create_proxy();
+   if (!fence)
+   return -ENOMEM;
+
+   spin_lock(&syncobj->lock);
+   old = rcu_dereference_protected(syncobj->fence, true);
+   if (unlikely(old)) {
+   dma_fence_put(fence);
+   fence = dma_fence_get(old);
+   } else {
+   rcu_assign_pointer(syncobj->fence,
+  dma_fence_get(fence));
+   }
+   spin_unlock(&syncobj->lock);
+   }
 
err = i915_request_await_dma_fence(eb->request, fence);
dma_fence_put(fence);
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c 
b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 10109f661bcb..8d05eb0a7ef9 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -3500,6 +3500,9 @@ static int gen8_emit_init_breadcrumb(struct i915_request 
*rq)
 {
u32 *cs;
 
+   /* Seal the semaphore section -- we are ready to begin */
+   rq->sched.semaphores |= ALL_ENGINES;
+
if (!i915_request_timeline(rq)->has_initial_breadcrumb)
return 0;
 
diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index e3c691bf082f..f506e3914dd8 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -23,6 +23,7 @@
  */
 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -1101,6 +1102,137 @@ i915_request_await_external(struct i915_request *rq, 
struct dma_fence *fence)
 I915_FENCE_GFP);
 }
 
+struct await_proxy {
+   struct wait_queue_entry base;
+   struct i915_request *request;
+   struct dma_fence *fence;
+   struct timer_list timer;
+   struct work_struct work;
+   int (*attach)(struct await_proxy *ap);
+   void *data;
+};
+
+static void await_proxy_work(struct work_struct *work)
+{
+   struct await_proxy *ap = container_of(work, typeof(*ap), work);
+   struct i915_request *rq = ap->request;
+
+   del_timer_sync(&ap->timer);
+
+   if (ap->fence) {
+   int err = 0;
+
+   /*
+* If the fence is external, we impose a 10s timeout.
+* However, if the fence is internal, we skip a timeout in
+* the belief that all fences are in-order (DAG, no cycles)
+* and we can enforce forward progress by resetting the GPU if
+* necessary. A future fence, provided by userspace, can trivially
+* generate a cycle in the dependency graph, causing that entire
+* cycle to deadlock so that no forward progress is made and the
+* driver is kept eternally awake.
+*/
+   if (dma_fence_is_i915(ap->fence) &&
+   !i915_sched_node_verify_dag(&rq->sched,
+   &to_request(ap->fence)->sched))
+   err = -EDEADLK;
+
+   if (!err) {
+   mutex_lock(&rq->context->timeline->mutex);
+   err = ap->attach(ap);
+   mutex_unlock(&rq->context->timeline->mutex);
+   }
+
+   if (err < 0)
+   i915_sw_fence_set_error_once(&rq->submit, err);
+   }
+
+   i915_sw_fence_complete(&rq->submit);
+
+   dma_fence_put(ap->fence);

Re: [Intel-gfx] ✗ Fi.CI.IGT: failure for drm/i915/icp: Add Wa_14010685332

2020-05-05 Thread Matt Roper
On Fri, May 01, 2020 at 11:32:17PM +, Patchwork wrote:
> == Series Details ==
> 
> Series: drm/i915/icp: Add Wa_14010685332
> URL   : https://patchwork.freedesktop.org/series/76841/
> State : failure
> 
> == Summary ==
> 
> CI Bug Log - changes from CI_DRM_8407_full -> Patchwork_17547_full
> 
> 
> Summary
> ---
> 
>   **FAILURE**
> 
>   Serious unknown changes coming with Patchwork_17547_full absolutely need to 
> be
>   verified manually.
>   
>   If you think the reported changes have nothing to do with the changes
>   introduced in Patchwork_17547_full, please notify your bug team to allow 
> them
>   to document this new failure mode, which will reduce false positives in CI.
> 
>   
> 
> Possible new issues
> ---
> 
>   Here are the unknown changes that may have been introduced in 
> Patchwork_17547_full:
> 
> ### IGT changes ###
> 
>  Possible regressions 
> 
>   * igt@gem_eio@unwedge-stress:
> - shard-hsw:  [PASS][1] -> [FAIL][2]
>[1]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8407/shard-hsw6/igt@gem_...@unwedge-stress.html
>[2]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17547/shard-hsw4/igt@gem_...@unwedge-stress.html

Unrelated; this patch adds gen11 IRQ code that only runs when an
ICP PCH is present; it wouldn't have any impact on a Haswell system.

Patch applied to dinq.  Thanks Bob for the review.


Matt

> 
>   
> Known issues
> 
> 
>   Here are the changes found in Patchwork_17547_full that come from known 
> issues:
> 
> ### IGT changes ###
> 
>  Issues hit 
> 
>   * igt@gem_workarounds@suspend-resume-fd:
> - shard-skl:  [PASS][3] -> [INCOMPLETE][4] ([i915#69])
>[3]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8407/shard-skl4/igt@gem_workarou...@suspend-resume-fd.html
>[4]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17547/shard-skl6/igt@gem_workarou...@suspend-resume-fd.html
> 
>   * igt@i915_suspend@forcewake:
> - shard-kbl:  [PASS][5] -> [DMESG-WARN][6] ([i915#180]) +2 
> similar issues
>[5]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8407/shard-kbl2/igt@i915_susp...@forcewake.html
>[6]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17547/shard-kbl7/igt@i915_susp...@forcewake.html
> 
>   * igt@kms_cursor_crc@pipe-b-cursor-suspend:
> - shard-apl:  [PASS][7] -> [DMESG-WARN][8] ([i915#180]) +1 
> similar issue
>[7]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8407/shard-apl1/igt@kms_cursor_...@pipe-b-cursor-suspend.html
>[8]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17547/shard-apl2/igt@kms_cursor_...@pipe-b-cursor-suspend.html
> 
>   * igt@kms_hdr@bpc-switch:
> - shard-skl:  [PASS][9] -> [FAIL][10] ([i915#1188])
>[9]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8407/shard-skl10/igt@kms_...@bpc-switch.html
>[10]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17547/shard-skl8/igt@kms_...@bpc-switch.html
> 
>   * igt@kms_lease@lease_again:
> - shard-snb:  [PASS][11] -> [SKIP][12] ([fdo#109271]) +1 similar 
> issue
>[11]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8407/shard-snb1/igt@kms_lease@lease_again.html
>[12]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17547/shard-snb2/igt@kms_lease@lease_again.html
> 
>   * igt@kms_pipe_crc_basic@nonblocking-crc-pipe-b-frame-sequence:
> - shard-skl:  [PASS][13] -> [FAIL][14] ([i915#53])
>[13]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8407/shard-skl7/igt@kms_pipe_crc_ba...@nonblocking-crc-pipe-b-frame-sequence.html
>[14]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17547/shard-skl2/igt@kms_pipe_crc_ba...@nonblocking-crc-pipe-b-frame-sequence.html
> 
>   * igt@kms_plane_alpha_blend@pipe-b-coverage-7efc:
> - shard-skl:  [PASS][15] -> [FAIL][16] ([fdo#108145] / [i915#265])
>[15]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8407/shard-skl7/igt@kms_plane_alpha_bl...@pipe-b-coverage-7efc.html
>[16]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17547/shard-skl2/igt@kms_plane_alpha_bl...@pipe-b-coverage-7efc.html
> 
>   * igt@kms_psr@psr2_sprite_plane_onoff:
> - shard-iclb: [PASS][17] -> [SKIP][18] ([fdo#109441])
>[17]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8407/shard-iclb2/igt@kms_psr@psr2_sprite_plane_onoff.html
>[18]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17547/shard-iclb7/igt@kms_psr@psr2_sprite_plane_onoff.html
> 
>   
>  Possible fixes 
> 
>   * {igt@gem_exec_reloc@basic-many-active@rcs0}:
> - shard-apl:  [FAIL][19] ([i915#1815]) -> [PASS][20]
>[19]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8407/shard-apl3/igt@gem_exec_reloc@basic-many-act...@rcs0.html
>[20]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17547/shard-apl4/igt@gem_exec_reloc@basic-many-act...@r

Re: [Intel-gfx] [PATCH 2/9] drm/i915/gen12: Fix HDC pipeline flush

2020-05-05 Thread D Scott Phillips
Mika Kuoppala  writes:

> HDC pipeline flush is bit on the first dword of
> the PIPE_CONTROL, not the second. Make it so.
>
> Signed-off-by: Mika Kuoppala 

Fixes: 4aa0b5d457f5 ("drm/i915/tgl: Add HDC Pipeline Flush")
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH 6/9] drm/i915/gen12: Invalidate indirect state pointers

2020-05-05 Thread D Scott Phillips
Mika Kuoppala  writes:

> Aim for completeness for invalidating everything
> and mark state pointers stale.
>
> Signed-off-by: Mika Kuoppala 

nak, this breaks iris. indirect state disable removes push constant
state from the render context, rather than just invalidating it
ephemerally. iris depends on that state persisting.
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH v27 2/6] drm/i915: Separate icl and skl SAGV checking

2020-05-05 Thread Lisovskiy, Stanislav
On Tue, May 05, 2020 at 02:01:16PM +0300, Ville Syrjälä wrote:
> On Tue, May 05, 2020 at 01:42:46PM +0300, Ville Syrjälä wrote:
> > On Tue, May 05, 2020 at 01:22:43PM +0300, Stanislav Lisovskiy wrote:
> > > Introduce platform dependent SAGV checking in
> > > combination with bandwidth state pipe SAGV mask.
> > > 
> > > v2, v3, v4, v5, v6: Fix rebase conflict
> > > 
> > > Signed-off-by: Stanislav Lisovskiy 
> > > ---
> > >  drivers/gpu/drm/i915/intel_pm.c | 30 --
> > >  1 file changed, 28 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/intel_pm.c 
> > > b/drivers/gpu/drm/i915/intel_pm.c
> > > index da567fac7c93..c7d726a656b2 100644
> > > --- a/drivers/gpu/drm/i915/intel_pm.c
> > > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > > @@ -3853,6 +3853,24 @@ static bool intel_crtc_can_enable_sagv(const 
> > > struct intel_crtc_state *crtc_state
> > >   return true;
> > >  }
> > >  
> > > +static bool skl_crtc_can_enable_sagv(const struct intel_crtc_state 
> > > *crtc_state)
> > > +{
> > > + struct intel_atomic_state *state = 
> > > to_intel_atomic_state(crtc_state->uapi.state);
> > > + /*
> > > +  * SKL+ workaround: bspec recommends we disable SAGV when we have
> > > +  * more then one pipe enabled
> > > +  */
> > > + if (hweight8(state->active_pipes) > 1)
> > > + return false;
> > 
> > That stuff should no longer be here since we now have it done properly
> > in intel_can_eanble_sagv().
> > 
> > > +
> > > + return intel_crtc_can_enable_sagv(crtc_state);
> > > +}
> > > +
> > > +static bool icl_crtc_can_enable_sagv(const struct intel_crtc_state 
> > > *crtc_state)
> > > +{
> > > + return intel_crtc_can_enable_sagv(crtc_state);
> > > +}
> > 
> > This looks the wrong way around. IMO intel_crtc_can_enable_sagv()
> > should rather call the skl vs. icl variants as needed. Although we
> > don't yet have the icl variant so the ordering of the patches is
> > a bit weird.
> 
> Do we even need an icl variant actually? Does it use the skl or tgl
> way of checking for sagv yes vs. no?

As I understand it, the icl implementation should be pretty much the same
as skl, except that icl doesn't have this one-active-pipe limitation.


Stan
> 
> > 
> > > +
> > >  bool intel_can_enable_sagv(const struct intel_bw_state *bw_state)
> > >  {
> > >   if (bw_state->active_pipes && !is_power_of_2(bw_state->active_pipes))
> > > @@ -3863,22 +3881,30 @@ bool intel_can_enable_sagv(const struct 
> > > intel_bw_state *bw_state)
> > >  
> > >  static int intel_compute_sagv_mask(struct intel_atomic_state *state)
> > >  {
> > > + struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> > >   int ret;
> > >   struct intel_crtc *crtc;
> > > - struct intel_crtc_state *new_crtc_state;
> > > + const struct intel_crtc_state *new_crtc_state;
> > >   struct intel_bw_state *new_bw_state = NULL;
> > >   const struct intel_bw_state *old_bw_state = NULL;
> > >   int i;
> > >  
> > >   for_each_new_intel_crtc_in_state(state, crtc,
> > >new_crtc_state, i) {
> > > + bool can_sagv;
> > > +
> > >   new_bw_state = intel_atomic_get_bw_state(state);
> > >   if (IS_ERR(new_bw_state))
> > >   return PTR_ERR(new_bw_state);
> > >  
> > >   old_bw_state = intel_atomic_get_old_bw_state(state);
> > >  
> > > - if (intel_crtc_can_enable_sagv(new_crtc_state))
> > > + if (INTEL_GEN(dev_priv) >= 11)
> > > + can_sagv = icl_crtc_can_enable_sagv(new_crtc_state);
> > > + else
> > > + can_sagv = skl_crtc_can_enable_sagv(new_crtc_state);
> > > +
> > > + if (can_sagv)
> > >   new_bw_state->pipe_sagv_reject &= ~BIT(crtc->pipe);
> > >   else
> > >   new_bw_state->pipe_sagv_reject |= BIT(crtc->pipe);
> > > -- 
> > > 2.24.1.485.gad05a3d8e5
> > 
> > -- 
> > Ville Syrjälä
> > Intel
> > ___
> > Intel-gfx mailing list
> > Intel-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/intel-gfx
> 
> -- 
> Ville Syrjälä
> Intel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH 07/22] drm/i915/gt: Stop holding onto the pinned_default_state

2020-05-05 Thread Chris Wilson
Quoting Andi Shyti (2020-05-05 21:08:03)
> Hi Chris,
> 
> On Mon, May 04, 2020 at 05:48:48AM +0100, Chris Wilson wrote:
> > As we only restore the default context state upon banning a context, we
> > only need enough of the state to run the ring and nothing more. That is
> > we only need our bare protocontext.
> > 
> > Signed-off-by: Chris Wilson 
> > Cc: Tvrtko Ursulin 
> > Cc: Mika Kuoppala 
> > Cc: Andi Shyti 
> 
> I don't see any issue, looks correct to me:
> 
> Reviewed-by: Andi Shyti 

Ta. Only time will tell if this makes recovery more stable; I hope it
does.
-Chris
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915/execlists: Record the active CCID from before reset

2020-05-05 Thread Patchwork
== Series Details ==

Series: drm/i915/execlists: Record the active CCID from before reset
URL   : https://patchwork.freedesktop.org/series/76946/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8427_full -> Patchwork_17580_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Known issues


  Here are the changes found in Patchwork_17580_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@gem_eio@in-flight-suspend:
- shard-apl:  [PASS][1] -> [DMESG-WARN][2] ([i915#180])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8427/shard-apl8/igt@gem_...@in-flight-suspend.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17580/shard-apl4/igt@gem_...@in-flight-suspend.html

  * igt@gem_workarounds@suspend-resume-context:
- shard-kbl:  [PASS][3] -> [DMESG-WARN][4] ([i915#180])
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8427/shard-kbl7/igt@gem_workarou...@suspend-resume-context.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17580/shard-kbl1/igt@gem_workarou...@suspend-resume-context.html

  * igt@i915_pm_backlight@fade_with_suspend:
- shard-skl:  [PASS][5] -> [INCOMPLETE][6] ([i915#69])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8427/shard-skl1/igt@i915_pm_backlight@fade_with_suspend.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17580/shard-skl4/igt@i915_pm_backlight@fade_with_suspend.html

  * igt@kms_cursor_crc@pipe-c-cursor-suspend:
- shard-skl:  [PASS][7] -> [INCOMPLETE][8] ([i915#300])
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8427/shard-skl3/igt@kms_cursor_...@pipe-c-cursor-suspend.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17580/shard-skl10/igt@kms_cursor_...@pipe-c-cursor-suspend.html

  * igt@kms_cursor_edge_walk@pipe-a-64x64-top-edge:
- shard-apl:  [PASS][9] -> [FAIL][10] ([i915#70])
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8427/shard-apl1/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17580/shard-apl1/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html
- shard-kbl:  [PASS][11] -> [FAIL][12] ([i915#70])
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8427/shard-kbl4/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17580/shard-kbl7/igt@kms_cursor_edge_w...@pipe-a-64x64-top-edge.html

  * igt@kms_cursor_legacy@2x-long-cursor-vs-flip-atomic:
- shard-hsw:  [PASS][13] -> [FAIL][14] ([i915#96])
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8427/shard-hsw7/igt@kms_cursor_leg...@2x-long-cursor-vs-flip-atomic.html
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17580/shard-hsw8/igt@kms_cursor_leg...@2x-long-cursor-vs-flip-atomic.html

  * igt@kms_frontbuffer_tracking@fbc-suspend:
- shard-apl:  [PASS][15] -> [DMESG-WARN][16] ([i915#180] / 
[i915#95]) +1 similar issue
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8427/shard-apl1/igt@kms_frontbuffer_track...@fbc-suspend.html
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17580/shard-apl6/igt@kms_frontbuffer_track...@fbc-suspend.html

  * igt@kms_frontbuffer_tracking@psr-rgb101010-draw-mmap-cpu:
- shard-skl:  [PASS][17] -> [FAIL][18] ([i915#49])
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8427/shard-skl7/igt@kms_frontbuffer_track...@psr-rgb101010-draw-mmap-cpu.html
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17580/shard-skl4/igt@kms_frontbuffer_track...@psr-rgb101010-draw-mmap-cpu.html

  * igt@kms_hdr@bpc-switch:
- shard-skl:  [PASS][19] -> [FAIL][20] ([i915#1188])
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8427/shard-skl5/igt@kms_...@bpc-switch.html
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17580/shard-skl9/igt@kms_...@bpc-switch.html

  * igt@kms_plane_alpha_blend@pipe-a-coverage-7efc:
- shard-skl:  [PASS][21] -> [FAIL][22] ([fdo#108145] / [i915#265])
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8427/shard-skl7/igt@kms_plane_alpha_bl...@pipe-a-coverage-7efc.html
   [22]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17580/shard-skl4/igt@kms_plane_alpha_bl...@pipe-a-coverage-7efc.html

  * igt@kms_psr@psr2_primary_mmap_cpu:
- shard-iclb: [PASS][23] -> [SKIP][24] ([fdo#109441]) +3 similar 
issues
   [23]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8427/shard-iclb2/igt@kms_psr@psr2_primary_mmap_cpu.html
   [24]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17580/shard-iclb1/igt@kms_psr@psr2_primary_mmap_cpu.html

  
 Possible fixes 

  * igt@gem_ctx_persistence@engines-mixed-process@vecs0:
- shard-skl:  [FAIL][25] ([i915#1528]) -> [PASS][26]
   [25]: 
https://i

Re: [Intel-gfx] [PATCH 07/22] drm/i915/gt: Stop holding onto the pinned_default_state

2020-05-05 Thread Andi Shyti
Hi Chris,

On Mon, May 04, 2020 at 05:48:48AM +0100, Chris Wilson wrote:
> As we only restore the default context state upon banning a context, we
> only need enough of the state to run the ring and nothing more. That is
> we only need our bare protocontext.
> 
> Signed-off-by: Chris Wilson 
> Cc: Tvrtko Ursulin 
> Cc: Mika Kuoppala 
> Cc: Andi Shyti 

I don't see any issue, looks correct to me:

Reviewed-by: Andi Shyti 

Andi
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH v2] drm: Fix HDCP failures when SRM fw is missing

2020-05-05 Thread Sean Paul
On Wed, Apr 29, 2020 at 12:20 PM Ramalingam C  wrote:
>
> On 2020-04-29 at 10:46:29 -0400, Sean Paul wrote:
> > On Wed, Apr 29, 2020 at 10:22 AM Ramalingam C  
> > wrote:
> > >
> > > On 2020-04-29 at 09:58:16 -0400, Sean Paul wrote:
> > > > On Wed, Apr 29, 2020 at 9:50 AM Ramalingam C  
> > > > wrote:
> > > > >
> > > > > On 2020-04-14 at 15:02:55 -0400, Sean Paul wrote:
> > > > > > From: Sean Paul 
> > > > > >
> > > > > > The SRM cleanup in 79643fddd6eb2 ("drm/hdcp: optimizing the srm
> > > > > > handling") inadvertently altered the behavior of HDCP auth when
> > > > > > the SRM firmware is missing. Before that patch, missing SRM was
> > > > > > interpreted as the device having no revoked keys. With that patch,
> > > > > > if the SRM fw file is missing we reject _all_ keys.
> > > > > >
> > > > > > This patch fixes that regression by returning success if the file
> > > > > > cannot be found. It also checks the return value from request_srm 
> > > > > > such
> > > > > > that we won't end up trying to parse the ksv list if there is an 
> > > > > > error
> > > > > > fetching it.
> > > > > >
> > > > > > Fixes: 79643fddd6eb ("drm/hdcp: optimizing the srm handling")
> > > > > > Cc: sta...@vger.kernel.org
> > > > > > Cc: Ramalingam C 
> > > > > > Cc: Sean Paul 
> > > > > > Cc: Maarten Lankhorst 
> > > > > > Cc: Maxime Ripard 
> > > > > > Cc: Thomas Zimmermann 
> > > > > > Cc: David Airlie 
> > > > > > Cc: Daniel Vetter 
> > > > > > Cc: dri-de...@lists.freedesktop.org
> > > > > > Signed-off-by: Sean Paul 
> > > > > >
> > > > > > Changes in v2:
> > > > > > -Noticed a couple other things to clean up
> > > > > > ---
> > > > > >
> > > > > > Sorry for the quick rev, noticed a couple other loose ends that 
> > > > > > should
> > > > > > be cleaned up.
> > > > > >
> > > > > >  drivers/gpu/drm/drm_hdcp.c | 8 +++-
> > > > > >  1 file changed, 7 insertions(+), 1 deletion(-)
> > > > > >
> > > > > > diff --git a/drivers/gpu/drm/drm_hdcp.c b/drivers/gpu/drm/drm_hdcp.c
> > > > > > index 7f386adcf872..910108ccaae1 100644
> > > > > > --- a/drivers/gpu/drm/drm_hdcp.c
> > > > > > +++ b/drivers/gpu/drm/drm_hdcp.c
> > > > > > @@ -241,8 +241,12 @@ static int drm_hdcp_request_srm(struct 
> > > > > > drm_device *drm_dev,
> > > > > >
> > > > > >   ret = request_firmware_direct(&fw, (const char *)fw_name,
> > > > > > drm_dev->dev);
> > > > > > - if (ret < 0)
> > > > > > + if (ret < 0) {
> > > > > > + *revoked_ksv_cnt = 0;
> > > > > > + *revoked_ksv_list = NULL;
> > > > > These two variables are already initialized by the caller.
> > > >
> > > > Right now it is, but that's not guaranteed. In the ret == 0 case, it's
> > > > pretty common for a caller to assume the called function has
> > > > validated/assigned all the function output.
> > > Ok.
> > > >
> > > > > > + ret = 0;
> > > > > Missing of this should have been caught by CI. May be CI system always
> > > > > having the SRM file from previous execution. Never been removed. IGT
> > > > > need a fix to clean the prior SRM files before execution.
> > > > >
> > > > > CI fix shouldn't block this fix.
> > > > > >   goto exit;
> > > > > > + }
> > > > > >
> > > > > >   if (fw->size && fw->data)
> > > > > >   ret = drm_hdcp_srm_update(fw->data, fw->size, 
> > > > > > revoked_ksv_list,
> > > > > > @@ -287,6 +291,8 @@ int drm_hdcp_check_ksvs_revoked(struct 
> > > > > > drm_device *drm_dev, u8 *ksvs,
> > > > > >
> > > > > >   ret = drm_hdcp_request_srm(drm_dev, &revoked_ksv_list,
> > > > > >  &revoked_ksv_cnt);
> > > > > > + if (ret)
> > > > > > + return ret;
> > > > > This error code also shouldn't effect the caller(i915)
> > > >
> > > > Why not? I'd assume an invalid SRM revocation list should probably be
> > > > treated as failure?
> > > IMHO invalid SRM revocation need not be treated as HDCP authentication
> > > failure.
> > >
> > > First of all SRM need not supplied by all players. and incase, supplied
> > > SRM is not as per the spec, then we dont have any list of revoked ID.
> > > with this I dont think we need to fail the HDCP authentication. Until we
> > > have valid list of revoked IDs from SRM, and the receiver ID is matching
> > > to one of the revoked IDs, I wouldn't want to fail the HDCP
> > > authentication.
> > >
> >
> > Ok, thanks for the explanation. This all seems reasonable to me.
> >
> > Looks like this can be applied as-is, right?
> Yes.
>

Applied to drm-misc-fixes

Sean

> Thanks,
> Ram
>
> > I'll review the patch you
> > posted so we can ignore the -ve return values.
> >
> > Thanks for the review!
> >
> > Sean
> >
> > > -Ram
> > > >
> > > >
> > > > > hence pushed a
> > > > > change https://patchwork.freedesktop.org/series/76730/
> > > > >
> > > > > With these addresed.
> > > > >
> > > > > LGTM.
> > > > >
> > > > > Reviewed-by: Ramalingam C 
> > > > > >
> > > > > >   /* revoked_ksv_cnt will be zero whe

Re: [Intel-gfx] [PATCH] drm/i915: Propagate fence->error across semaphores

2020-05-05 Thread Chris Wilson
Quoting Chris Wilson (2020-05-05 17:13:02)
> Replacing an inter-engine fence with a semaphore reduced the HW
> execution latency, but that comes at a cost. For normal fences, we are
> able to propagate the metadata such as errors along with the signaling.
> For semaphores, we are missing this error propagation so add it in the
> back channel we use to monitor the semaphore overload.
> 
> This raises a valid point on whether error propagation is sufficient in
> the semaphore case if it is coupled to a fatal error, such as EFAULT. It
> is not, and we should teach ourselves not to use a semaphore if we would
> chain up to an external fence whose error we must not ignore.
> 
> Fixes: ef4688497512 ("drm/i915: Propagate fence errors")
> Signed-off-by: Chris Wilson 
> Cc: Tvrtko Ursulin 
> Cc: Matthew Auld 
> ---
>  drivers/gpu/drm/i915/i915_request.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/i915_request.c 
> b/drivers/gpu/drm/i915/i915_request.c
> index 9c5de07db47d..96a8c7a1be73 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -614,6 +614,9 @@ semaphore_notify(struct i915_sw_fence *fence, enum 
> i915_sw_fence_notify state)
>  
> switch (state) {
> case FENCE_COMPLETE:
> +   if (unlikely(fence->error))
> +   i915_request_set_error_once(rq, fence->error);

This is just horrible. I don't like it even as a hack.
-Chris
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915: Propagate fence->error across semaphores

2020-05-05 Thread Patchwork
== Series Details ==

Series: drm/i915: Propagate fence->error across semaphores
URL   : https://patchwork.freedesktop.org/series/76968/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8430 -> Patchwork_17585


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17585/index.html


Changes
---

  No changes found


Participating hosts (50 -> 43)
--

  Missing(7): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan 
fi-ctg-p8600 fi-byt-clapper fi-bdw-samus 


Build changes
-

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_8430 -> Patchwork_17585

  CI-20190529: 20190529
  CI_DRM_8430: 2daa6f8cad645f49a898158190a20a893b4aabe3 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5632: e630cb8cd2ec01d6d5358eb2a3f6ea70498b8183 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17585: 71d8a4f50f28cbd6c43c8877add6f2105fce76e7 @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

71d8a4f50f28 drm/i915: Propagate fence->error across semaphores

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17585/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915/tgl: Put HDC flush pipe_control bit in the right dword

2020-05-05 Thread D Scott Phillips
Lionel Landwerlin  writes:

> On 05/05/2020 03:09, D Scott Phillips wrote:
>> D Scott Phillips  writes:
>>
>>> Previously we set HDC_PIPELINE_FLUSH in dword 1 of gen12
>>> pipe_control commands. HDC Pipeline flush actually resides in
>>> dword 0, and the bit we were setting in dword 1 was Indirect State
>>> Pointers Disable, which invalidates indirect state in the render
>>> context. This causes failures for userspace, as things like push
>>> constant state gets invalidated.
>>>
>>> Cc: Mika Kuoppala 
>>> Cc: Chris Wilson 
>>> Signed-off-by: D Scott Phillips 
>> also,
>>
>> Fixes: 4aa0b5d457f5 ("drm/i915/tgl: Add HDC Pipeline Flush")
>> ___
>> Intel-gfx mailing list
>> Intel-gfx@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
>
> I think Mika sent the same patch in "drm/i915/gen12: Fix HDC pipeline 
> flush".
>
> -Lionel

Ah, quite right, I missed it. Ignore this.
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915: HDCP: retry link integrity check on failure

2020-05-05 Thread Oliver Barta
On Tue, May 5, 2020 at 9:38 AM Ramalingam C  wrote:
>
> On 2020-05-04 at 14:35:24 +0200, Oliver Barta wrote:
> > From: Oliver Barta 
> >
> > A single Ri mismatch doesn't automatically mean that the link integrity
> > is broken. Update and check of Ri and Ri' are done asynchronously. In
> > case an update happens just between the read of Ri' and the check against
> > Ri there will be a mismatch even if the link integrity is fine otherwise.
>
> Thanks for working on this. Btw, did you face this sporadic link check
> failure or theoretically you are fixing it?
>
> IMO this change will rule out possible sporadic link check failures as
> mentioned in the commit msg. Though I haven't faced this issue at my
> testings.
>
> Reviewed-by: Ramalingam C 
>

I found it by code inspection; the probability of this happening is
very low. In order to test the patch I'm decreasing the value of
DRM_HDCP_CHECK_PERIOD_MS to just a few ms. Once you do that, it happens
every few seconds.

Thanks,
Oliver

> >
> > Signed-off-by: Oliver Barta 
> > ---
> >  drivers/gpu/drm/i915/display/intel_hdmi.c | 19 ---
> >  1 file changed, 16 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/display/intel_hdmi.c 
> > b/drivers/gpu/drm/i915/display/intel_hdmi.c
> > index 010f37240710..3156fde392f2 100644
> > --- a/drivers/gpu/drm/i915/display/intel_hdmi.c
> > +++ b/drivers/gpu/drm/i915/display/intel_hdmi.c
> > @@ -1540,7 +1540,7 @@ int intel_hdmi_hdcp_toggle_signalling(struct 
> > intel_digital_port *intel_dig_port,
> >  }
> >
> >  static
> > -bool intel_hdmi_hdcp_check_link(struct intel_digital_port *intel_dig_port)
> > +bool intel_hdmi_hdcp_check_link_once(struct intel_digital_port 
> > *intel_dig_port)
> >  {
> >   struct drm_i915_private *i915 = 
> > to_i915(intel_dig_port->base.base.dev);
> >   struct intel_connector *connector =
> > @@ -1563,8 +1563,7 @@ bool intel_hdmi_hdcp_check_link(struct 
> > intel_digital_port *intel_dig_port)
> >   if (wait_for((intel_de_read(i915, HDCP_STATUS(i915, cpu_transcoder, 
> > port)) &
> > (HDCP_STATUS_RI_MATCH | HDCP_STATUS_ENC)) ==
> >(HDCP_STATUS_RI_MATCH | HDCP_STATUS_ENC), 1)) {
> > - drm_err(&i915->drm,
> > - "Ri' mismatch detected, link check failed (%x)\n",
> > + drm_dbg_kms(&i915->drm, "Ri' mismatch detected (%x)\n",
> >   intel_de_read(i915, HDCP_STATUS(i915, cpu_transcoder,
> >   port)));
> >   return false;
> > @@ -1572,6 +1571,20 @@ bool intel_hdmi_hdcp_check_link(struct 
> > intel_digital_port *intel_dig_port)
> >   return true;
> >  }
> >
> > +static
> > +bool intel_hdmi_hdcp_check_link(struct intel_digital_port *intel_dig_port)
> > +{
> > + struct drm_i915_private *i915 = 
> > to_i915(intel_dig_port->base.base.dev);
> > + int retry;
> > +
> > + for (retry = 0; retry < 3; retry++)
> > + if (intel_hdmi_hdcp_check_link_once(intel_dig_port))
> > + return true;
> > +
> > + drm_err(&i915->drm, "Link check failed\n");
> > + return false;
> > +}
> > +
> >  struct hdcp2_hdmi_msg_timeout {
> >   u8 msg_id;
> >   u16 timeout;
> > --
> > 2.20.1
> >
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH hmm v2 4/5] mm/hmm: remove HMM_PFN_SPECIAL

2020-05-05 Thread John Hubbard

On 2020-05-01 11:20, Jason Gunthorpe wrote:

From: Jason Gunthorpe 

This is just an alias for HMM_PFN_ERROR, nothing cares that the error was
because of a special page vs any other error case.


Reviewed-by: John Hubbard 

thanks,
--
John Hubbard
NVIDIA


Acked-by: Felix Kuehling 
Reviewed-by: Christoph Hellwig 
Signed-off-by: Jason Gunthorpe 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 1 -
  drivers/gpu/drm/nouveau/nouveau_svm.c   | 1 -
  include/linux/hmm.h | 8 
  mm/hmm.c| 2 +-
  4 files changed, 1 insertion(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 41ae7f96f48194..76b4a4fa39ed04 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -775,7 +775,6 @@ static const uint64_t hmm_range_flags[HMM_PFN_FLAG_MAX] = {
  static const uint64_t hmm_range_values[HMM_PFN_VALUE_MAX] = {
0xfffeUL, /* HMM_PFN_ERROR */
0, /* HMM_PFN_NONE */
-   0xfffcUL /* HMM_PFN_SPECIAL */
  };
  
  /**

diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c 
b/drivers/gpu/drm/nouveau/nouveau_svm.c
index c68e9317cf0740..cf0d9bd61bebf9 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -379,7 +379,6 @@ static const u64
  nouveau_svm_pfn_values[HMM_PFN_VALUE_MAX] = {
[HMM_PFN_ERROR  ] = ~NVIF_VMM_PFNMAP_V0_V,
[HMM_PFN_NONE   ] =  NVIF_VMM_PFNMAP_V0_NONE,
-   [HMM_PFN_SPECIAL] = ~NVIF_VMM_PFNMAP_V0_V,
  };
  
  /* Issue fault replay for GPU to retry accesses that faulted previously. */

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 0df27dd03d53d7..81c302c884c0e3 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -44,10 +44,6 @@ enum hmm_pfn_flag_e {
   * Flags:
   * HMM_PFN_ERROR: corresponding CPU page table entry points to poisoned memory
   * HMM_PFN_NONE: corresponding CPU page table entry is pte_none()
- * HMM_PFN_SPECIAL: corresponding CPU page table entry is special; i.e., the
- *  result of vmf_insert_pfn() or vm_insert_page(). Therefore, it should 
not
- *  be mirrored by a device, because the entry will never have 
HMM_PFN_VALID
- *  set and the pfn value is undefined.
   *
   * Driver provides values for none entry, error entry, and special entry.
   * Driver can alias (i.e., use same value) error and special, but
@@ -56,12 +52,10 @@ enum hmm_pfn_flag_e {
   * HMM pfn value returned by hmm_vma_get_pfns() or hmm_vma_fault() will be:
   * hmm_range.values[HMM_PFN_ERROR] if CPU page table entry is poisonous,
   * hmm_range.values[HMM_PFN_NONE] if there is no CPU page table entry,
- * hmm_range.values[HMM_PFN_SPECIAL] if CPU page table entry is a special one
   */
  enum hmm_pfn_value_e {
HMM_PFN_ERROR,
HMM_PFN_NONE,
-   HMM_PFN_SPECIAL,
HMM_PFN_VALUE_MAX
  };
  
@@ -110,8 +104,6 @@ static inline struct page *hmm_device_entry_to_page(const struct hmm_range *rang

return NULL;
if (entry == range->values[HMM_PFN_ERROR])
return NULL;
-   if (entry == range->values[HMM_PFN_SPECIAL])
-   return NULL;
if (!(entry & range->flags[HMM_PFN_VALID]))
return NULL;
return pfn_to_page(entry >> range->pfn_shift);
diff --git a/mm/hmm.c b/mm/hmm.c
index f06bcac948a79b..2e975eedb14f89 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -301,7 +301,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, 
unsigned long addr,
pte_unmap(ptep);
return -EFAULT;
}
-   *pfn = range->values[HMM_PFN_SPECIAL];
+   *pfn = range->values[HMM_PFN_ERROR];
return 0;
}
  



___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915: HDCP: retry link integrity check on failure

2020-05-05 Thread Oliver Barta
On Mon, May 4, 2020 at 10:24 PM Sean Paul  wrote:
>
> On Mon, May 4, 2020 at 1:32 PM Oliver Barta  wrote:
> >
> > From: Oliver Barta 
> >
> > A single Ri mismatch doesn't automatically mean that the link integrity
> > is broken. Update and check of Ri and Ri' are done asynchronously. In
> > case an update happens just between the read of Ri' and the check against
> > Ri there will be a mismatch even if the link integrity is fine otherwise.
> >
> > Signed-off-by: Oliver Barta 
> > ---
> >  drivers/gpu/drm/i915/display/intel_hdmi.c | 19 ---
> >  1 file changed, 16 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/display/intel_hdmi.c 
> > b/drivers/gpu/drm/i915/display/intel_hdmi.c
> > index 010f37240710..3156fde392f2 100644
> > --- a/drivers/gpu/drm/i915/display/intel_hdmi.c
> > +++ b/drivers/gpu/drm/i915/display/intel_hdmi.c
> > @@ -1540,7 +1540,7 @@ int intel_hdmi_hdcp_toggle_signalling(struct 
> > intel_digital_port *intel_dig_port,
> >  }
> >
> >  static
> > -bool intel_hdmi_hdcp_check_link(struct intel_digital_port *intel_dig_port)
> > +bool intel_hdmi_hdcp_check_link_once(struct intel_digital_port 
> > *intel_dig_port)
> >  {
> > struct drm_i915_private *i915 = 
> > to_i915(intel_dig_port->base.base.dev);
> > struct intel_connector *connector =
> > @@ -1563,8 +1563,7 @@ bool intel_hdmi_hdcp_check_link(struct 
> > intel_digital_port *intel_dig_port)
> > if (wait_for((intel_de_read(i915, HDCP_STATUS(i915, cpu_transcoder, 
> > port)) &
> >   (HDCP_STATUS_RI_MATCH | HDCP_STATUS_ENC)) ==
> >  (HDCP_STATUS_RI_MATCH | HDCP_STATUS_ENC), 1)) {
>
> Why doesn't the wait_for catch this?
>
> Sean
>

Hello Sean,

thank you for having a look at my patch. The wait_for can't catch this
because it is Ri' which is outdated compared to Ri. Ri', however, needs
to be read over the DDC interface, which is done only once during the
check sequence. It is not updated during the waiting time.

Oliver

> > -   drm_err(&i915->drm,
> > -   "Ri' mismatch detected, link check failed (%x)\n",
> > +   drm_dbg_kms(&i915->drm, "Ri' mismatch detected (%x)\n",
> > intel_de_read(i915, HDCP_STATUS(i915, 
> > cpu_transcoder,
> > port)));
> > return false;
> > @@ -1572,6 +1571,20 @@ bool intel_hdmi_hdcp_check_link(struct 
> > intel_digital_port *intel_dig_port)
> > return true;
> >  }
> >
> > +static
> > +bool intel_hdmi_hdcp_check_link(struct intel_digital_port *intel_dig_port)
> > +{
> > +   struct drm_i915_private *i915 = 
> > to_i915(intel_dig_port->base.base.dev);
> > +   int retry;
> > +
> > +   for (retry = 0; retry < 3; retry++)
> > +   if (intel_hdmi_hdcp_check_link_once(intel_dig_port))
> > +   return true;
> > +
> > +   drm_err(&i915->drm, "Link check failed\n");
> > +   return false;
> > +}
> > +
> >  struct hdcp2_hdmi_msg_timeout {
> > u8 msg_id;
> > u16 timeout;
> > --
> > 2.20.1
> >
> > ___
> > Intel-gfx mailing list
> > Intel-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/intel-gfx
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH hmm v2 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault

2020-05-05 Thread John Hubbard

On 2020-05-01 11:20, Jason Gunthorpe wrote:

From: Jason Gunthorpe 

Presumably the intent here was that hmm_range_fault() could put the data
into some HW specific format and thus avoid some work. However, nothing
actually does that, and it isn't clear how anything actually could do that
as hmm_range_fault() provides CPU addresses which must be DMA mapped.

Perhaps there is some special HW that does not need DMA mapping, but we
don't have any examples of this, and the theoretical performance win of
avoiding an extra scan over the pfns array doesn't seem worth the
complexity. Plus pfns needs to be scanned anyhow to sort out any
DEVICE_PRIVATE pages.

This version replaces the uint64_t with an usigned long containing a pfn
and fixed flags. On input flags is filled with the HMM_PFN_REQ_* values,
on successful output it is filled with HMM_PFN_* values, describing the
state of the pages.



Just some minor stuff below. I wasn't able to spot any errors in the code,
though, so these are just documentation nits.


...



diff --git a/Documentation/vm/hmm.rst b/Documentation/vm/hmm.rst
index 9924f2caa0184c..c9f2329113a47f 100644
--- a/Documentation/vm/hmm.rst
+++ b/Documentation/vm/hmm.rst
@@ -185,9 +185,6 @@ The usage pattern is::
range.start = ...;
range.end = ...;
range.pfns = ...;


That should be:

  range.hmm_pfns = ...;



-  range.flags = ...;
-  range.values = ...;
-  range.pfn_shift = ...;
  
if (!mmget_not_zero(interval_sub->notifier.mm))

return -EFAULT;
@@ -229,15 +226,10 @@ The hmm_range struct has 2 fields, default_flags and 
pfn_flags_mask, that specif
  fault or snapshot policy for the whole range instead of having to set them
  for each entry in the pfns array.
  
-For instance, if the device flags for range.flags are::

+For instance if the device driver wants pages for a range with at least read
+permission, it sets::
  
-range.flags[HMM_PFN_VALID] = (1 << 63);

-range.flags[HMM_PFN_WRITE] = (1 << 62);
-
-and the device driver wants pages for a range with at least read permission,
-it sets::
-
-range->default_flags = (1 << 63);
+range->default_flags = HMM_PFN_REQ_FAULT;
  range->pfn_flags_mask = 0;
  
  and calls hmm_range_fault() as described above. This will fill fault all pages

@@ -246,18 +238,18 @@ in the range with at least read permission.
  Now let's say the driver wants to do the same except for one page in the 
range for
  which it wants to have write permission. Now driver set::
  
-range->default_flags = (1 << 63);

-range->pfn_flags_mask = (1 << 62);
-range->pfns[index_of_write] = (1 << 62);
+range->default_flags = HMM_PFN_REQ_FAULT;
+range->pfn_flags_mask = HMM_PFN_REQ_WRITE;
+range->pfns[index_of_write] = HMM_PFN_REQ_WRITE;



All these choices for _WRITE behavior make it slightly confusing. I mean, it's
better than it was, but there are default flags, a mask, and an index as well,
and it looks like maybe we have a little more power and flexibility than
desirable? Nouveau for example is now just setting the mask only:

// nouveau_range_fault():
.pfn_flags_mask = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
(.default_flags is not set, so is zero)

Maybe the example should do what Nouveau is doing? And/or do we want to get rid
of either .default_flags or .pfn_flags_mask?

...


diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c 
b/drivers/gpu/drm/nouveau/nouveau_svm.c
index cf0d9bd61bebf9..99697df28bfe12 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c


...


@@ -518,9 +506,45 @@ static const struct mmu_interval_notifier_ops 
nouveau_svm_mni_ops = {
.invalidate = nouveau_svm_range_invalidate,
  };
  
+static void nouveau_hmm_convert_pfn(struct nouveau_drm *drm,

+   struct hmm_range *range, u64 *ioctl_addr)
+{
+   unsigned long i, npages;
+
+   /*
+* The ioctl_addr prepared here is passed through nvif_object_ioctl()
+* to an eventual DMA map in something like gp100_vmm_pgt_pfn()
+*
+* This is all just encoding the internal hmm reprensetation into a


"representation"

...


@@ -542,12 +564,15 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
return -EBUSY;
  
  		range.notifier_seq = mmu_interval_read_begin(range.notifier);

-   range.default_flags = 0;
-   range.pfn_flags_mask = -1UL;
down_read(&mm->mmap_sem);
ret = hmm_range_fault(&range);
up_read(&mm->mmap_sem);
if (ret) {
+   /*
+* FIXME: the input PFN_REQ flags are destroyed on
+* -EBUSY, we need to regenerate them, also for the
+* other continue below
+*/



How serious is this FIXME? It seems like we could get stuck in a loop here,
if we're not issuing a new

Re: [Intel-gfx] [PATCH hmm v2 2/5] mm/hmm: make hmm_range_fault return 0 or -1

2020-05-05 Thread John Hubbard

On 2020-05-01 11:20, Jason Gunthorpe wrote:

From: Jason Gunthorpe 

hmm_vma_walk->last is supposed to be updated after every write to the
pfns, so that it can be returned by hmm_range_fault(). However, this is
not done consistently. Fortunately nothing checks the return code of
hmm_range_fault() for anything other than error.

More importantly last must be set before returning -EBUSY as it is used to
prevent reading an output pfn as an input flags when the loop restarts.

For clarity and simplicity make hmm_range_fault() return 0 or -ERRNO. Only
set last when returning -EBUSY.


Yes, this is also a nice simplification.


...
@@ -590,10 +580,13 @@ long hmm_range_fault(struct hmm_range *range)
return -EBUSY;
ret = walk_page_range(mm, hmm_vma_walk.last, range->end,
  &hmm_walk_ops, &hmm_vma_walk);
+   /*
+* When -EBUSY is returned the loop restarts with
+* hmm_vma_walk.last set to an address that has not been stored
+* in pfns. All entries < last in the pfn array are set to their
+* output, and all >= are still at their input values.
+*/


I'm glad you added that comment. This is much easier to figure out with
that in place. After poking around this patch and eventually understanding the
.last handling, I wondered if you might like this slightly tweaked wording
instead:

/*
 * Each of the hmm_walk_ops routines returns -EBUSY if and only if
 * hmm_vma_walk.last has been set to an address that has not yet
 * been stored in pfns. All entries < last in the pfn array are
 * set to their output, and all >= are still at their input
 * values.
 */

Either way,

Reviewed-by: John Hubbard 

thanks,
--
John Hubbard
NVIDIA


} while (ret == -EBUSY);
-
-   if (ret)
-   return ret;
-   return (hmm_vma_walk.last - range->start) >> PAGE_SHIFT;
+   return ret;
  }
  EXPORT_SYMBOL(hmm_range_fault);



___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [1/6] drm/i915: Mark concurrent submissions with a weak-dependency (rev3)

2020-05-05 Thread Patchwork
== Series Details ==

Series: series starting with [1/6] drm/i915: Mark concurrent submissions with a 
weak-dependency (rev3)
URL   : https://patchwork.freedesktop.org/series/76912/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8430 -> Patchwork_17584


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17584/index.html

New tests
-

  New tests have been introduced between CI_DRM_8430 and Patchwork_17584:

### New IGT tests (1) ###

  * igt@dmabuf@all@dma_fence_proxy:
- Statuses : 40 pass(s)
- Exec time: [0.02, 0.09] s

  


Changes
---

  No changes found


Participating hosts (50 -> 43)
--

  Missing(7): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan 
fi-ctg-p8600 fi-byt-clapper fi-bdw-samus 


Build changes
-

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_8430 -> Patchwork_17584

  CI-20190529: 20190529
  CI_DRM_8430: 2daa6f8cad645f49a898158190a20a893b4aabe3 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5632: e630cb8cd2ec01d6d5358eb2a3f6ea70498b8183 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17584: 7e5f8efa9a8d423a289def24f06bd53489a08ad2 @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

7e5f8efa9a8d drm/i915/gt: Declare when we enabled timeslicing
f3459c71c608 drm/i915/gem: Allow combining submit-fences with syncobj
e796d58b1084 drm/i915/gem: Teach execbuf how to wait on future syncobj
f3138b0d84f7 drm/syncobj: Allow use of dma-fence-proxy
19243ac2eee1 dma-buf: Proxy fence, an unsignaled fence placeholder
d336bdf650f8 drm/i915: Mark concurrent submissions with a weak-dependency

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17584/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915: HDCP: retry link integrity check on failure

2020-05-05 Thread Sean Paul
On Tue, May 5, 2020 at 3:27 AM Oliver Barta  wrote:
>
> On Mon, May 4, 2020 at 10:24 PM Sean Paul  wrote:
> >
> > On Mon, May 4, 2020 at 1:32 PM Oliver Barta  wrote:
> > >
> > > From: Oliver Barta 
> > >
> > > A single Ri mismatch doesn't automatically mean that the link integrity
> > > is broken. Update and check of Ri and Ri' are done asynchronously. In
> > > case an update happens just between the read of Ri' and the check against
> > > Ri there will be a mismatch even if the link integrity is fine otherwise.
> > >
> > > Signed-off-by: Oliver Barta 
> > > ---
> > >  drivers/gpu/drm/i915/display/intel_hdmi.c | 19 ---
> > >  1 file changed, 16 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/i915/display/intel_hdmi.c 
> > > b/drivers/gpu/drm/i915/display/intel_hdmi.c
> > > index 010f37240710..3156fde392f2 100644
> > > --- a/drivers/gpu/drm/i915/display/intel_hdmi.c
> > > +++ b/drivers/gpu/drm/i915/display/intel_hdmi.c
> > > @@ -1540,7 +1540,7 @@ int intel_hdmi_hdcp_toggle_signalling(struct 
> > > intel_digital_port *intel_dig_port,
> > >  }
> > >
> > >  static
> > > -bool intel_hdmi_hdcp_check_link(struct intel_digital_port 
> > > *intel_dig_port)
> > > +bool intel_hdmi_hdcp_check_link_once(struct intel_digital_port 
> > > *intel_dig_port)
> > >  {
> > > struct drm_i915_private *i915 = 
> > > to_i915(intel_dig_port->base.base.dev);
> > > struct intel_connector *connector =
> > > @@ -1563,8 +1563,7 @@ bool intel_hdmi_hdcp_check_link(struct 
> > > intel_digital_port *intel_dig_port)
> > > if (wait_for((intel_de_read(i915, HDCP_STATUS(i915, 
> > > cpu_transcoder, port)) &
> > >   (HDCP_STATUS_RI_MATCH | HDCP_STATUS_ENC)) ==
> > >  (HDCP_STATUS_RI_MATCH | HDCP_STATUS_ENC), 1)) {
> >
> > Why doesn't the wait_for catch this?
> >
> > Sean
> >
>
> Hello Sean,
>
> thank you for having a look at my patch. The wait_for can't catch this
> because it is Ri' which is outdated compared to Ri. Ri', however, needs
> to be read over the DDC interface, which is done only once during the
> check sequence. It is not updated during the waiting time.
>

Ok, makes sense, thank you.

Reviewed-by: Sean Paul 

> Oliver
>
> > > -   drm_err(&i915->drm,
> > > -   "Ri' mismatch detected, link check failed (%x)\n",
> > > +   drm_dbg_kms(&i915->drm, "Ri' mismatch detected (%x)\n",
> > > intel_de_read(i915, HDCP_STATUS(i915, 
> > > cpu_transcoder,
> > > port)));
> > > return false;
> > > @@ -1572,6 +1571,20 @@ bool intel_hdmi_hdcp_check_link(struct 
> > > intel_digital_port *intel_dig_port)
> > > return true;
> > >  }
> > >
> > > +static
> > > +bool intel_hdmi_hdcp_check_link(struct intel_digital_port 
> > > *intel_dig_port)
> > > +{
> > > +   struct drm_i915_private *i915 = 
> > > to_i915(intel_dig_port->base.base.dev);
> > > +   int retry;
> > > +
> > > +   for (retry = 0; retry < 3; retry++)
> > > +   if (intel_hdmi_hdcp_check_link_once(intel_dig_port))
> > > +   return true;
> > > +
> > > +   drm_err(&i915->drm, "Link check failed\n");
> > > +   return false;
> > > +}
> > > +
> > >  struct hdcp2_hdmi_msg_timeout {
> > > u8 msg_id;
> > > u16 timeout;
> > > --
> > > 2.20.1
> > >
> > > ___
> > > Intel-gfx mailing list
> > > Intel-gfx@lists.freedesktop.org
> > > https://lists.freedesktop.org/mailman/listinfo/intel-gfx
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH] drm/i915: Propagate fence->error across semaphores

2020-05-05 Thread Chris Wilson
Replacing an inter-engine fence with a semaphore reduces the HW
execution latency, but that comes at a cost. For normal fences, we are
able to propagate metadata such as errors along with the signaling.
For semaphores, we are missing this error propagation, so add it in the
back channel we use to monitor the semaphore overload.

This raises a valid point as to whether error propagation is sufficient in
the semaphore case if it is coupled to a fatal error, such as EFAULT. It
is not, and we should teach ourselves not to use a semaphore if we would
chain up to an external fence whose error we must not ignore.

Fixes: ef4688497512 ("drm/i915: Propagate fence errors")
Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
Cc: Matthew Auld 
---
 drivers/gpu/drm/i915/i915_request.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index 9c5de07db47d..96a8c7a1be73 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -614,6 +614,9 @@ semaphore_notify(struct i915_sw_fence *fence, enum 
i915_sw_fence_notify state)
 
switch (state) {
case FENCE_COMPLETE:
+   if (unlikely(fence->error))
+   i915_request_set_error_once(rq, fence->error);
+
if (!(READ_ONCE(rq->sched.attr.priority) & 
I915_PRIORITY_NOSEMAPHORE)) {
i915_request_get(rq);
init_irq_work(&rq->semaphore_work, irq_semaphore_cb);
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH v2 08/22] drm/i915/rkl: Add power well support

2020-05-05 Thread Imre Deak
On Tue, May 05, 2020 at 07:39:04AM -0700, Matt Roper wrote:
> On Tue, May 05, 2020 at 10:20:58AM +0530, Anshuman Gupta wrote:
> > On 2020-05-04 at 15:52:13 -0700, Matt Roper wrote:
> > > RKL power wells are similar to TGL power wells, but have some important
> > > differences:
> > > 
> > >  * PG1 now has pipe A's VDSC (rather than sticking it in PG2)
> > >  * PG2 no longer exists
> > >  * DDI-C (aka TC-1) moves from PG1 -> PG3
> > >  * PG5 no longer exists due to the lack of a fourth pipe
> > > 
> > > Also note that what we refer to as 'DDI-C' and 'DDI-D' need to actually
> > > be programmed as TC-1 and TC-2 even though this platform doesn't have TC
> > > outputs.
> > > 
> > > Bspec: 49234
> > > Cc: Imre Deak 
> > > Cc: Lucas De Marchi 
> > > Cc: Anshuman Gupta 
> > > Signed-off-by: Matt Roper 
> > > ---
> > >  .../drm/i915/display/intel_display_power.c| 185 +-
> > >  drivers/gpu/drm/i915/display/intel_vdsc.c |   4 +-
> > >  2 files changed, 186 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c 
> > > b/drivers/gpu/drm/i915/display/intel_display_power.c
> > > index 49998906cc61..71691919d101 100644
> > > --- a/drivers/gpu/drm/i915/display/intel_display_power.c
> > > +++ b/drivers/gpu/drm/i915/display/intel_display_power.c
> > > @@ -2913,6 +2913,53 @@ void intel_display_power_put(struct 
> > > drm_i915_private *dev_priv,
> > >   BIT_ULL(POWER_DOMAIN_AUX_I_TBT) |   \
> > >   BIT_ULL(POWER_DOMAIN_TC_COLD_OFF))
> > >  
> > > +#define RKL_PW_4_POWER_DOMAINS ( \
> > > + BIT_ULL(POWER_DOMAIN_PIPE_C) |  \
> > > + BIT_ULL(POWER_DOMAIN_PIPE_C_PANEL_FITTER) | \
> > > + BIT_ULL(POWER_DOMAIN_TRANSCODER_C) |\
> > > + BIT_ULL(POWER_DOMAIN_INIT))
> > > +
> > > +#define RKL_PW_3_POWER_DOMAINS ( \
> > > + RKL_PW_4_POWER_DOMAINS |\
> > > + BIT_ULL(POWER_DOMAIN_PIPE_B) |  \
> > > + BIT_ULL(POWER_DOMAIN_PIPE_B_PANEL_FITTER) | \
> > > + BIT_ULL(POWER_DOMAIN_AUDIO) |   \
> > > + BIT_ULL(POWER_DOMAIN_VGA) | \
> > > + BIT_ULL(POWER_DOMAIN_TRANSCODER_B) |\
> > > + BIT_ULL(POWER_DOMAIN_PORT_DDI_D_LANES) |\
> > > + BIT_ULL(POWER_DOMAIN_PORT_DDI_E_LANES) |\
> > > + BIT_ULL(POWER_DOMAIN_AUX_D) |   \
> > > + BIT_ULL(POWER_DOMAIN_AUX_E) |   \
> > > + BIT_ULL(POWER_DOMAIN_INIT))
> > > +
> > > +/*
> > > + * There is no PW_2/PG_2 on RKL.
> > > + *
> > > + * RKL PW_1/PG_1 domains (under HW/DMC control):
> > > + * - DBUF function (note: registers are in PW0)
> > > + * - PIPE_A and its planes and VDSC/joining, except VGA
> > > + * - transcoder A
> > > + * - DDI_A and DDI_B
> > > + * - FBC
> > > + *
> > > + * RKL PW_0/PG_0 domains (under HW/DMC control):
> > > + * - PCI
> > > + * - clocks except port PLL
> > > + * - shared functions:
> > > + * * interrupts except pipe interrupts
> > > + * * MBus except PIPE_MBUS_DBOX_CTL
> > > + * * DBUF registers
> > > + * - central power except FBC
> > > + * - top-level GTC (DDI-level GTC is in the well associated with the DDI)
> > > + */
> > > +
> > > +#define RKL_DISPLAY_DC_OFF_POWER_DOMAINS (   \
> > > + RKL_PW_3_POWER_DOMAINS |\
> > > + BIT_ULL(POWER_DOMAIN_MODESET) | \
> > > + BIT_ULL(POWER_DOMAIN_AUX_A) |   \
> > > + BIT_ULL(POWER_DOMAIN_AUX_B) |   \
> > > + BIT_ULL(POWER_DOMAIN_INIT))
> > > +
> > >  static const struct i915_power_well_ops i9xx_always_on_power_well_ops = {
> > >   .sync_hw = i9xx_power_well_sync_hw_noop,
> > >   .enable = i9xx_always_on_power_well_noop,
> > > @@ -4283,6 +4330,140 @@ static const struct i915_power_well_desc 
> > > tgl_power_wells[] = {
> > >   },
> > >  };
> > >  
> > > +static const struct i915_power_well_desc rkl_power_wells[] = {
> > > + {
> > > + .name = "always-on",
> > > + .always_on = true,
> > > + .domains = POWER_DOMAIN_MASK,
> > > + .ops = &i9xx_always_on_power_well_ops,
> > > + .id = DISP_PW_ID_NONE,
> > > + },
> > > + {
> > > + .name = "power well 1",
> > > + /* Handled by the DMC firmware */
> > > + .always_on = true,
> > > + .domains = 0,
> > > + .ops = &hsw_power_well_ops,
> > > + .id = SKL_DISP_PW_1,
> > > + {
> > > + .hsw.regs = &hsw_power_well_regs,
> > > + .hsw.idx = ICL_PW_CTL_IDX_PW_1,
> > > + .hsw.has_fuses = true,
> > > + },
> > > + },
> > > + {
> > > + .name = "DC off",
> > > + .domains = RKL_DISPLAY_DC_OFF_POWER_DOMAINS,
> > > + .ops = &gen9_dc_off_power_well_ops,
> > > + .id = SKL_DISP_DC_OFF,
> > > + },
> > > + {
> > > + .name = "power well 3",
> > > + .domains = RKL_PW_3_POWER_DOMAINS,
> > > + .ops = &hsw_power_well_ops,
> > > + .id = ICL_DISP_PW_3,
> > 

[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/6] drm/i915: Mark concurrent submissions with a weak-dependency (rev3)

2020-05-05 Thread Patchwork
== Series Details ==

Series: series starting with [1/6] drm/i915: Mark concurrent submissions with a 
weak-dependency (rev3)
URL   : https://patchwork.freedesktop.org/series/76912/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
d336bdf650f8 drm/i915: Mark concurrent submissions with a weak-dependency
19243ac2eee1 dma-buf: Proxy fence, an unsignaled fence placeholder
-:45: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does 
MAINTAINERS need updating?
#45: 
new file mode 100644

-:387: CHECK:UNCOMMENTED_DEFINITION: spinlock_t definition without comment
#387: FILE: drivers/dma-buf/st-dma-fence-proxy.c:20:
+   spinlock_t lock;

-:547: WARNING:MEMORY_BARRIER: memory barrier without comment
#547: FILE: drivers/dma-buf/st-dma-fence-proxy.c:180:
+   smp_store_mb(container_of(cb, struct simple_cb, cb)->seen, true);

total: 0 errors, 2 warnings, 1 checks, 1050 lines checked
f3138b0d84f7 drm/syncobj: Allow use of dma-fence-proxy
e796d58b1084 drm/i915/gem: Teach execbuf how to wait on future syncobj
f3459c71c608 drm/i915/gem: Allow combining submit-fences with syncobj
7e5f8efa9a8d drm/i915/gt: Declare when we enabled timeslicing

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.BAT: success for Consider DBuf bandwidth when calculating CDCLK (rev9)

2020-05-05 Thread Patchwork
== Series Details ==

Series: Consider DBuf bandwidth when calculating CDCLK (rev9)
URL   : https://patchwork.freedesktop.org/series/74739/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8430 -> Patchwork_17583


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17583/index.html


Changes
---

  No changes found


Participating hosts (50 -> 43)
--

  Missing(7): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan 
fi-ctg-p8600 fi-byt-clapper fi-bdw-samus 


Build changes
-

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_8430 -> Patchwork_17583

  CI-20190529: 20190529
  CI_DRM_8430: 2daa6f8cad645f49a898158190a20a893b4aabe3 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5632: e630cb8cd2ec01d6d5358eb2a3f6ea70498b8183 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17583: 0792cc71d598d77d1570d73068661bd1cd504e02 @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

0792cc71d598 drm/i915: Remove unneeded hack now for CDCLK
6b07838f4a00 drm/i915: Adjust CDCLK accordingly to our DBuf bw needs
0c2d539a68ee drm/i915: Introduce for_each_dbuf_slice_in_mask macro
0debc96ba00f drm/i915: Force recalculate min_cdclk if planes config changed
b9fba3b4f20a drm/i915: Decouple cdclk calculation from modeset checks

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17583/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Consider DBuf bandwidth when calculating CDCLK (rev9)

2020-05-05 Thread Patchwork
== Series Details ==

Series: Consider DBuf bandwidth when calculating CDCLK (rev9)
URL   : https://patchwork.freedesktop.org/series/74739/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
b9fba3b4f20a drm/i915: Decouple cdclk calculation from modeset checks
0debc96ba00f drm/i915: Force recalculate min_cdclk if planes config changed
0c2d539a68ee drm/i915: Introduce for_each_dbuf_slice_in_mask macro
-:24: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__slice' - possible 
side-effects?
#24: FILE: drivers/gpu/drm/i915/display/intel_display.h:190:
+#define for_each_dbuf_slice_in_mask(__slice, __mask) \
+   for ((__slice) = DBUF_S1; (__slice) < I915_MAX_DBUF_SLICES; 
(__slice)++) \
+   for_each_if((BIT(__slice)) & (__mask))

total: 0 errors, 0 warnings, 1 checks, 20 lines checked
6b07838f4a00 drm/i915: Adjust CDCLK accordingly to our DBuf bw needs
0792cc71d598 drm/i915: Remove unneeded hack now for CDCLK

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [1/2] drm/i915: Mark concurrent submissions with a weak-dependency

2020-05-05 Thread Patchwork
== Series Details ==

Series: series starting with [1/2] drm/i915: Mark concurrent submissions with a 
weak-dependency
URL   : https://patchwork.freedesktop.org/series/76953/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8430 -> Patchwork_17582


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17582/index.html


Changes
---

  No changes found


Participating hosts (50 -> 43)
--

  Missing(7): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan 
fi-ctg-p8600 fi-byt-clapper fi-bdw-samus 


Build changes
-

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_8430 -> Patchwork_17582

  CI-20190529: 20190529
  CI_DRM_8430: 2daa6f8cad645f49a898158190a20a893b4aabe3 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5632: e630cb8cd2ec01d6d5358eb2a3f6ea70498b8183 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17582: 82d985fb10a5ba875341421463dd01193a51775c @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

82d985fb10a5 drm/i915: Ignore submit-fences on the same timeline
56d08c3dff51 drm/i915: Mark concurrent submissions with a weak-dependency

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17582/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH] drm/i915/gem: Teach execbuf how to wait on future syncobj

2020-05-05 Thread Chris Wilson
If a syncobj has not yet been assigned, treat it as a future fence and
install and wait upon a dma-fence-proxy. The proxy will be replaced by
the real fence later, and that fence will be responsible for signaling
our waiter.

Link: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4854
Signed-off-by: Chris Wilson 
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c|  21 ++-
 drivers/gpu/drm/i915/i915_request.c   | 133 ++
 drivers/gpu/drm/i915/i915_scheduler.c |  41 ++
 drivers/gpu/drm/i915/i915_scheduler.h |   3 +
 4 files changed, 196 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c 
b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 966523a8503f..7abb96505a31 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -5,6 +5,7 @@
  */
 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -2524,8 +2525,24 @@ await_fence_array(struct i915_execbuffer *eb,
continue;
 
fence = drm_syncobj_fence_get(syncobj);
-   if (!fence)
-   return -EINVAL;
+   if (!fence) {
+   struct dma_fence *old;
+
+   fence = dma_fence_create_proxy();
+   if (!fence)
+   return -ENOMEM;
+
+   spin_lock(&syncobj->lock);
+   old = rcu_dereference_protected(syncobj->fence, true);
+   if (unlikely(old)) {
+   dma_fence_put(fence);
+   fence = dma_fence_get(old);
+   } else {
+   rcu_assign_pointer(syncobj->fence,
+  dma_fence_get(fence));
+   }
+   spin_unlock(&syncobj->lock);
+   }
 
err = i915_request_await_dma_fence(eb->request, fence);
dma_fence_put(fence);
diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index d369b25e46bb..9c5de07db47d 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -23,6 +23,7 @@
  */
 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -1065,6 +1066,136 @@ i915_request_await_request(struct i915_request *to, 
struct i915_request *from)
return 0;
 }
 
+struct await_proxy {
+   struct wait_queue_entry base;
+   struct i915_request *request;
+   struct dma_fence *fence;
+   struct timer_list timer;
+   struct work_struct work;
+   int (*attach)(struct await_proxy *ap);
+   void *data;
+};
+
+static void await_proxy_work(struct work_struct *work)
+{
+   struct await_proxy *ap = container_of(work, typeof(*ap), work);
+   struct i915_request *rq = ap->request;
+
+   del_timer_sync(&ap->timer);
+
+   if (ap->fence) {
+   int err = 0;
+
+   /*
+* If the fence is external, we impose a 10s timeout.
+* However, if the fence is internal, we skip a timeout in
+* the belief that all fences are in-order (DAG, no cycles)
+* and we can enforce forward progress by resetting the GPU if
+* necessary. A future fence, provided by userspace, can trivially
+* generate a cycle in the dependency graph, and so cause
+* that entire cycle to become deadlocked, with no forward
+* progress being made and the driver kept
+* eternally awake.
+*/
+   if (dma_fence_is_i915(ap->fence) &&
+   !i915_sched_node_verify_dag(&rq->sched,
+   &to_request(ap->fence)->sched))
+   err = -EDEADLK;
+
+   if (!err) {
+   mutex_lock(&rq->context->timeline->mutex);
+   err = ap->attach(ap);
+   mutex_unlock(&rq->context->timeline->mutex);
+   }
+
+   if (err < 0)
+   i915_sw_fence_set_error_once(&rq->submit, err);
+   }
+
+   i915_sw_fence_complete(&rq->submit);
+
+   dma_fence_put(ap->fence);
+   kfree(ap);
+}
+
+static int
+await_proxy_wake(struct wait_queue_entry *entry,
+unsigned int mode,
+int flags,
+void *fence)
+{
+   struct await_proxy *ap = container_of(entry, typeof(*ap), base);
+
+   ap->fence = dma_fence_get(fence);
+   schedule_work(&ap->work);
+
+   return 0;
+}
+
+static void
+await_proxy_timer(struct timer_list *t)
+{
+   struct await_proxy *ap = container_of(t, typeof(*ap), timer);
+
+   if (dma_fence_remove_proxy_listener(ap->base.private, &ap->base)) {
+   struct i915_request *rq = ap->request;
+

Re: [Intel-gfx] [PATCH v2 08/22] drm/i915/rkl: Add power well support

2020-05-05 Thread Matt Roper
On Tue, May 05, 2020 at 10:20:58AM +0530, Anshuman Gupta wrote:
> On 2020-05-04 at 15:52:13 -0700, Matt Roper wrote:
> > RKL power wells are similar to TGL power wells, but have some important
> > differences:
> > 
> >  * PG1 now has pipe A's VDSC (rather than sticking it in PG2)
> >  * PG2 no longer exists
> >  * DDI-C (aka TC-1) moves from PG1 -> PG3
> >  * PG5 no longer exists due to the lack of a fourth pipe
> > 
> > Also note that what we refer to as 'DDI-C' and 'DDI-D' need to actually
> > be programmed as TC-1 and TC-2 even though this platform doesn't have TC
> > outputs.
> > 
> > Bspec: 49234
> > Cc: Imre Deak 
> > Cc: Lucas De Marchi 
> > Cc: Anshuman Gupta 
> > Signed-off-by: Matt Roper 
> > ---
> >  .../drm/i915/display/intel_display_power.c| 185 +-
> >  drivers/gpu/drm/i915/display/intel_vdsc.c |   4 +-
> >  2 files changed, 186 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c 
> > b/drivers/gpu/drm/i915/display/intel_display_power.c
> > index 49998906cc61..71691919d101 100644
> > --- a/drivers/gpu/drm/i915/display/intel_display_power.c
> > +++ b/drivers/gpu/drm/i915/display/intel_display_power.c
> > @@ -2913,6 +2913,53 @@ void intel_display_power_put(struct drm_i915_private 
> > *dev_priv,
> > BIT_ULL(POWER_DOMAIN_AUX_I_TBT) |   \
> > BIT_ULL(POWER_DOMAIN_TC_COLD_OFF))
> >  
> > +#define RKL_PW_4_POWER_DOMAINS (   \
> > +   BIT_ULL(POWER_DOMAIN_PIPE_C) |  \
> > +   BIT_ULL(POWER_DOMAIN_PIPE_C_PANEL_FITTER) | \
> > +   BIT_ULL(POWER_DOMAIN_TRANSCODER_C) |\
> > +   BIT_ULL(POWER_DOMAIN_INIT))
> > +
> > +#define RKL_PW_3_POWER_DOMAINS (   \
> > +   RKL_PW_4_POWER_DOMAINS |\
> > +   BIT_ULL(POWER_DOMAIN_PIPE_B) |  \
> > +   BIT_ULL(POWER_DOMAIN_PIPE_B_PANEL_FITTER) | \
> > +   BIT_ULL(POWER_DOMAIN_AUDIO) |   \
> > +   BIT_ULL(POWER_DOMAIN_VGA) | \
> > +   BIT_ULL(POWER_DOMAIN_TRANSCODER_B) |\
> > +   BIT_ULL(POWER_DOMAIN_PORT_DDI_D_LANES) |\
> > +   BIT_ULL(POWER_DOMAIN_PORT_DDI_E_LANES) |\
> > +   BIT_ULL(POWER_DOMAIN_AUX_D) |   \
> > +   BIT_ULL(POWER_DOMAIN_AUX_E) |   \
> > +   BIT_ULL(POWER_DOMAIN_INIT))
> > +
> > +/*
> > + * There is no PW_2/PG_2 on RKL.
> > + *
> > + * RKL PW_1/PG_1 domains (under HW/DMC control):
> > + * - DBUF function (note: registers are in PW0)
> > + * - PIPE_A and its planes and VDSC/joining, except VGA
> > + * - transcoder A
> > + * - DDI_A and DDI_B
> > + * - FBC
> > + *
> > + * RKL PW_0/PG_0 domains (under HW/DMC control):
> > + * - PCI
> > + * - clocks except port PLL
> > + * - shared functions:
> > + * * interrupts except pipe interrupts
> > + * * MBus except PIPE_MBUS_DBOX_CTL
> > + * * DBUF registers
> > + * - central power except FBC
> > + * - top-level GTC (DDI-level GTC is in the well associated with the DDI)
> > + */
> > +
> > +#define RKL_DISPLAY_DC_OFF_POWER_DOMAINS ( \
> > +   RKL_PW_3_POWER_DOMAINS |\
> > +   BIT_ULL(POWER_DOMAIN_MODESET) | \
> > +   BIT_ULL(POWER_DOMAIN_AUX_A) |   \
> > +   BIT_ULL(POWER_DOMAIN_AUX_B) |   \
> > +   BIT_ULL(POWER_DOMAIN_INIT))
> > +
> >  static const struct i915_power_well_ops i9xx_always_on_power_well_ops = {
> > .sync_hw = i9xx_power_well_sync_hw_noop,
> > .enable = i9xx_always_on_power_well_noop,
> > @@ -4283,6 +4330,140 @@ static const struct i915_power_well_desc 
> > tgl_power_wells[] = {
> > },
> >  };
> >  
> > +static const struct i915_power_well_desc rkl_power_wells[] = {
> > +   {
> > +   .name = "always-on",
> > +   .always_on = true,
> > +   .domains = POWER_DOMAIN_MASK,
> > +   .ops = &i9xx_always_on_power_well_ops,
> > +   .id = DISP_PW_ID_NONE,
> > +   },
> > +   {
> > +   .name = "power well 1",
> > +   /* Handled by the DMC firmware */
> > +   .always_on = true,
> > +   .domains = 0,
> > +   .ops = &hsw_power_well_ops,
> > +   .id = SKL_DISP_PW_1,
> > +   {
> > +   .hsw.regs = &hsw_power_well_regs,
> > +   .hsw.idx = ICL_PW_CTL_IDX_PW_1,
> > +   .hsw.has_fuses = true,
> > +   },
> > +   },
> > +   {
> > +   .name = "DC off",
> > +   .domains = RKL_DISPLAY_DC_OFF_POWER_DOMAINS,
> > +   .ops = &gen9_dc_off_power_well_ops,
> > +   .id = SKL_DISP_DC_OFF,
> > +   },
> > +   {
> > +   .name = "power well 3",
> > +   .domains = RKL_PW_3_POWER_DOMAINS,
> > +   .ops = &hsw_power_well_ops,
> > +   .id = ICL_DISP_PW_3,
> > +   {
> > +   .hsw.regs = &hsw_power_well_regs,
> > +   .hsw.idx = ICL_PW_CTL_IDX_PW_3,
> > +   .hsw.irq_pipe_mask = BIT(PIPE_B),
> > + 

[Intel-gfx] [PATCH v6 1/5] drm/i915: Decouple cdclk calculation from modeset checks

2020-05-05 Thread Stanislav Lisovskiy
We need to calculate cdclk after watermarks/ddb have been calculated,
as with recent hw CDCLK needs to be adjusted according to DBuf
requirements, which is not possible with the current code organization.

Setting CDCLK according to DBuf BW requirements, instead of just rejecting
the state if it doesn't satisfy those requirements, will allow us to save
power when possible and gain additional bandwidth when needed - i.e.
boosting both our power management and performance capabilities.

This patch is preparation for that: first we extract the cdclk
calculation from the modeset checks, in order to call it after wm/ddb
has been calculated.

v2: - Extract only intel_modeset_calc_cdclk from intel_modeset_checks
  (Ville Syrjälä)

v3: - Clear plls after intel_modeset_calc_cdclk

v4: - Added r-b from previous revision to commit message

Reviewed-by: Ville Syrjälä 
Signed-off-by: Stanislav Lisovskiy 
---
 drivers/gpu/drm/i915/display/intel_display.c | 22 +++-
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c 
b/drivers/gpu/drm/i915/display/intel_display.c
index fd6d63b03489..3bf6751497c8 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -14493,12 +14493,6 @@ static int intel_modeset_checks(struct 
intel_atomic_state *state)
return ret;
}
 
-   ret = intel_modeset_calc_cdclk(state);
-   if (ret)
-   return ret;
-
-   intel_modeset_clear_plls(state);
-
if (IS_HASWELL(dev_priv))
return hsw_mode_set_planes_workaround(state);
 
@@ -14830,10 +14824,6 @@ static int intel_atomic_check(struct drm_device *dev,
goto fail;
}
 
-   ret = intel_atomic_check_crtcs(state);
-   if (ret)
-   goto fail;
-
intel_fbc_choose_crtc(dev_priv, state);
ret = calc_watermark_data(state);
if (ret)
@@ -14843,6 +14833,18 @@ static int intel_atomic_check(struct drm_device *dev,
if (ret)
goto fail;
 
+   if (any_ms) {
+   ret = intel_modeset_calc_cdclk(state);
+   if (ret)
+   return ret;
+
+   intel_modeset_clear_plls(state);
+   }
+
+   ret = intel_atomic_check_crtcs(state);
+   if (ret)
+   goto fail;
+
for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,
new_crtc_state, i) {
if (!needs_modeset(new_crtc_state) &&
-- 
2.24.1.485.gad05a3d8e5

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v6 2/5] drm/i915: Force recalculate min_cdclk if planes config changed

2020-05-05 Thread Stanislav Lisovskiy
On Gen11+, whenever we might exceed the DBuf bandwidth we might need to
recalculate CDCLK, which the DBuf bandwidth is scaled with.
The total DBuf bw used might change based on particular plane needs.

In intel_atomic_check_planes we try to filter out the cases where
we definitely don't need to recalculate the required bandwidth/CDCLK.
The current code compares the number of planes and skips recalculating
if those are equal.
That requirement seems too relaxed and might even be wrong, because the
plane combination can change even though the number of planes stays the
same - which requires recalculating min cdclk and consumed bandwidth
(see the sketch below).
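
For illustration, a minimal sketch (hypothetical plane values) of the case the
old hweight8() comparison misses:

/* Sketch: same number of planes, different combination. */
static bool planes_config_changed_example(void)
{
	u8 old_active_planes = BIT(PLANE_PRIMARY) | BIT(PLANE_SPRITE0);
	u8 new_active_planes = BIT(PLANE_PRIMARY) | BIT(PLANE_SPRITE1);

	/*
	 * hweight8() is 2 for both masks, so the old check would skip the
	 * recalculation even though SPRITE0 was swapped for SPRITE1 and the
	 * consumed DBuf bandwidth may differ.
	 */
	return old_active_planes != new_active_planes;	/* true: recompute */
}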

v2: - Changed commit message to properly reflect the need why,
  we might want to change from hamming weight comparison
  to actual plane combination checking.

Signed-off-by: Stanislav Lisovskiy 
---
 drivers/gpu/drm/i915/display/intel_display.c | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c 
b/drivers/gpu/drm/i915/display/intel_display.c
index 3bf6751497c8..33f566114c81 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -14569,7 +14569,7 @@ static bool active_planes_affects_min_cdclk(struct 
drm_i915_private *dev_priv)
/* See {hsw,vlv,ivb}_plane_ratio() */
return IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv) ||
IS_CHERRYVIEW(dev_priv) || IS_VALLEYVIEW(dev_priv) ||
-   IS_IVYBRIDGE(dev_priv);
+   IS_IVYBRIDGE(dev_priv) || (INTEL_GEN(dev_priv) >= 11);
 }
 
 static int intel_atomic_check_planes(struct intel_atomic_state *state,
@@ -14615,7 +14615,13 @@ static int intel_atomic_check_planes(struct 
intel_atomic_state *state,
old_active_planes = old_crtc_state->active_planes & 
~BIT(PLANE_CURSOR);
new_active_planes = new_crtc_state->active_planes & 
~BIT(PLANE_CURSOR);
 
-   if (hweight8(old_active_planes) == hweight8(new_active_planes))
+   /*
+* Not only the number of planes: a change in the plane
+* configuration itself might already mean we need to recompute
+* min CDCLK, because different planes might consume different
+* amounts of DBuf bandwidth according to the formula:
+* BW per plane = pixel rate * bpp * pipe/plane scale factor
+*/
+   if (old_active_planes == new_active_planes)
continue;
 
ret = intel_crtc_add_planes_to_state(state, crtc, 
new_active_planes);
-- 
2.24.1.485.gad05a3d8e5

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v6 4/5] drm/i915: Adjust CDCLK accordingly to our DBuf bw needs

2020-05-05 Thread Stanislav Lisovskiy
According to BSpec, the max BW per slice is calculated using the formula
Max BW = CDCLK * 64. Currently, when calculating min CDCLK we account
only for per-plane requirements; however, in order to avoid FIFO
underruns we need to estimate the accumulated BW consumed by all planes
(ddb entries, basically) residing on that particular DBuf slice. This
will allow us to put CDCLK lower and save power when we don't need that
much bandwidth, or gain additional performance once plane consumption
grows.
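
Roughly, the constraint this implies is CDCLK >= max_bw / 64. A minimal sketch
with illustrative units (bandwidth in kB/s, CDCLK in kHz; not necessarily the
exact units the driver uses):

/* Sketch only: minimum CDCLK implied by "Max BW = CDCLK * 64". */
static int example_min_cdclk_for_bw(unsigned int max_bw)
{
	/* CDCLK * 64 >= max_bw  =>  CDCLK >= max_bw / 64, rounded up */
	return DIV_ROUND_UP(max_bw, 64);
}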

v2: - Fix long line warning
- Limited new DBuf bw checks to only gens >= 11

v3: - Lets track used Dbuf bw per slice and per crtc in bw state
  (or may be in DBuf state in future), that way we don't need
  to have all crtcs in state and those only if we detect if
  are actually going to change cdclk, just same way as we
  do with other stuff, i.e intel_atomic_serialize_global_state
  and co. Just as per Ville's paradigm.
- Made dbuf bw calculation procedure look nicer by introducing
  for_each_dbuf_slice_in_mask - we often will now need to iterate
  slices using mask.
- According to experimental results CDCLK * 64 accounts for
  overall bandwidth across all dbufs, not per dbuf.

v4: - Fixed missing const(Ville)
- Removed spurious whitespaces(Ville)
- Fixed local variable init(reduced scope where not needed)
- Added some comments about data rate for planar formats
- Changed struct intel_crtc_bw to intel_dbuf_bw
- Moved dbuf bw calculation to intel_compute_min_cdclk(Ville)

v5: - Removed unneeded macro

v6: - Prevent too frequent CDCLK switching back and forth:
  Always switch to a higher CDCLK when needed to prevent bandwidth
  issues, however don't switch to a lower CDCLK more often than once
  every 30 minutes, in order to prevent constant modeset blinking.
  We could of course not switch back at all, however this is
  bad from a power consumption point of view.

Signed-off-by: Stanislav Lisovskiy 
---
 drivers/gpu/drm/i915/display/intel_bw.c  | 73 +++-
 drivers/gpu/drm/i915/display/intel_bw.h  |  7 ++
 drivers/gpu/drm/i915/display/intel_cdclk.c   | 54 +++
 drivers/gpu/drm/i915/display/intel_cdclk.h   |  5 +-
 drivers/gpu/drm/i915/display/intel_display.c |  8 +++
 drivers/gpu/drm/i915/intel_pm.c  | 31 -
 drivers/gpu/drm/i915/intel_pm.h  |  3 +
 7 files changed, 177 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_bw.c 
b/drivers/gpu/drm/i915/display/intel_bw.c
index 6e7cc3a4f1aa..cbfab51d75ee 100644
--- a/drivers/gpu/drm/i915/display/intel_bw.c
+++ b/drivers/gpu/drm/i915/display/intel_bw.c
@@ -6,6 +6,7 @@
 #include 
 
 #include "intel_bw.h"
+#include "intel_pm.h"
 #include "intel_display_types.h"
 #include "intel_sideband.h"
 
@@ -333,7 +334,6 @@ static unsigned int intel_bw_crtc_data_rate(const struct 
intel_crtc_state *crtc_
 
return data_rate;
 }
-
 void intel_bw_crtc_update(struct intel_bw_state *bw_state,
  const struct intel_crtc_state *crtc_state)
 {
@@ -410,6 +410,77 @@ intel_atomic_get_bw_state(struct intel_atomic_state *state)
return to_intel_bw_state(bw_state);
 }
 
+int intel_bw_calc_min_cdclk(struct intel_atomic_state *state)
+{
+   struct drm_i915_private *dev_priv = to_i915(state->base.dev);
+   int i;
+   const struct intel_crtc_state *crtc_state;
+   struct intel_crtc *crtc;
+   int max_bw = 0;
+   int min_cdclk;
+   struct intel_bw_state *bw_state;
+   int slice_id;
+
+   bw_state = intel_atomic_get_bw_state(state);
+   if (IS_ERR(bw_state))
+   return PTR_ERR(bw_state);
+
+   for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
+   enum plane_id plane_id;
+   struct intel_dbuf_bw *crtc_bw = &bw_state->dbuf_bw[crtc->pipe];
+
+   memset(&crtc_bw->used_bw, 0, sizeof(crtc_bw->used_bw));
+
+   for_each_plane_id_on_crtc(crtc, plane_id) {
+   const struct skl_ddb_entry *plane_alloc =
+   &crtc_state->wm.skl.plane_ddb_y[plane_id];
+   const struct skl_ddb_entry *uv_plane_alloc =
+   &crtc_state->wm.skl.plane_ddb_uv[plane_id];
+   unsigned int data_rate = 
crtc_state->data_rate[plane_id];
+   unsigned int dbuf_mask = 0;
+
+   dbuf_mask |= skl_ddb_dbuf_slice_mask(dev_priv, 
plane_alloc);
+   dbuf_mask |= skl_ddb_dbuf_slice_mask(dev_priv, 
uv_plane_alloc);
+
+   /*
+* FIXME: To calculate this more properly we probably need
+* to split the per-plane data_rate into data_rate_y and
+* data_rate_uv for multiplanar formats, in order not to
+* account for them twice if they happen to reside on
+* different slices.
+   

[Intel-gfx] [PATCH v6 3/5] drm/i915: Introduce for_each_dbuf_slice_in_mask macro

2020-05-05 Thread Stanislav Lisovskiy
We now quite often need to iterate over only particular dbuf slices
in a mask, whether they are active or related to a particular crtc.
Let's make our life a bit easier and use a macro for that (a usage
sketch follows below).

v2: - Minor code refactoring
v3: - Use enum for max slices instead of macro
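
As a usage sketch (the used_bw[] array here is hypothetical; the macro itself
is added in the diff below):

/* Hypothetical helper: sum the bandwidth used on the slices in a mask. */
static unsigned int example_sum_bw(unsigned int dbuf_mask,
				   const unsigned int used_bw[I915_MAX_DBUF_SLICES])
{
	enum dbuf_slice slice;
	unsigned int total = 0;

	for_each_dbuf_slice_in_mask(slice, dbuf_mask)
		total += used_bw[slice];

	return total;
}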

Signed-off-by: Stanislav Lisovskiy 
---
 drivers/gpu/drm/i915/display/intel_display.h   | 7 +++
 drivers/gpu/drm/i915/display/intel_display_power.h | 1 +
 2 files changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/i915/display/intel_display.h 
b/drivers/gpu/drm/i915/display/intel_display.h
index efb4da205ea2..b7a6d56bac5f 100644
--- a/drivers/gpu/drm/i915/display/intel_display.h
+++ b/drivers/gpu/drm/i915/display/intel_display.h
@@ -187,6 +187,13 @@ enum plane_id {
for ((__p) = PLANE_PRIMARY; (__p) < I915_MAX_PLANES; (__p)++) \
for_each_if((__crtc)->plane_ids_mask & BIT(__p))
 
+#define for_each_dbuf_slice_in_mask(__slice, __mask) \
+   for ((__slice) = DBUF_S1; (__slice) < I915_MAX_DBUF_SLICES; 
(__slice)++) \
+   for_each_if((BIT(__slice)) & (__mask))
+
+#define for_each_dbuf_slice(__slice) \
+   for_each_dbuf_slice_in_mask(__slice, BIT(I915_MAX_DBUF_SLICES) - 1)
+
 enum port {
PORT_NONE = -1,
 
diff --git a/drivers/gpu/drm/i915/display/intel_display_power.h 
b/drivers/gpu/drm/i915/display/intel_display_power.h
index 6c917699293b..4d0d6f9dad26 100644
--- a/drivers/gpu/drm/i915/display/intel_display_power.h
+++ b/drivers/gpu/drm/i915/display/intel_display_power.h
@@ -314,6 +314,7 @@ intel_display_power_put_async(struct drm_i915_private *i915,
 enum dbuf_slice {
DBUF_S1,
DBUF_S2,
+   I915_MAX_DBUF_SLICES
 };
 
 #define with_intel_display_power(i915, domain, wf) \
-- 
2.24.1.485.gad05a3d8e5

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v6 5/5] drm/i915: Remove unneeded hack now for CDCLK

2020-05-05 Thread Stanislav Lisovskiy
No need to bump up CDCLK anymore, as it is now correctly
calculated, accounting for DBuf BW as BSpec says.

Reviewed-by: Manasi Navare 
Signed-off-by: Stanislav Lisovskiy 
---
 drivers/gpu/drm/i915/display/intel_cdclk.c | 12 
 1 file changed, 12 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_cdclk.c 
b/drivers/gpu/drm/i915/display/intel_cdclk.c
index 45343b9a9650..42f39066ad47 100644
--- a/drivers/gpu/drm/i915/display/intel_cdclk.c
+++ b/drivers/gpu/drm/i915/display/intel_cdclk.c
@@ -2070,18 +2070,6 @@ int intel_crtc_compute_min_cdclk(const struct 
intel_crtc_state *crtc_state)
/* Account for additional needs from the planes */
min_cdclk = max(intel_planes_min_cdclk(crtc_state), min_cdclk);
 
-   /*
-* HACK. Currently for TGL platforms we calculate
-* min_cdclk initially based on pixel_rate divided
-* by 2, accounting for also plane requirements,
-* however in some cases the lowest possible CDCLK
-* doesn't work and causing the underruns.
-* Explicitly stating here that this seems to be currently
-* rather a Hack, than final solution.
-*/
-   if (IS_TIGERLAKE(dev_priv))
-   min_cdclk = max(min_cdclk, (int)crtc_state->pixel_rate);
-
if (min_cdclk > dev_priv->max_cdclk_freq) {
drm_dbg_kms(&dev_priv->drm,
"required cdclk (%d kHz) exceeds max (%d kHz)\n",
-- 
2.24.1.485.gad05a3d8e5

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v6 0/5] Consider DBuf bandwidth when calculating CDCLK

2020-05-05 Thread Stanislav Lisovskiy
We need to calculate cdclk after watermarks/ddb have been calculated,
as with recent hw CDCLK needs to be adjusted according to DBuf
requirements, which is not possible with the current code organization.

Setting CDCLK according to DBuf BW requirements, instead of just rejecting
the state if it doesn't satisfy those requirements, will allow us to save
power when possible and gain additional bandwidth when needed - i.e.
boosting both our power management and performance capabilities.

This series is preparation for that: first we extract the cdclk
calculation from the modeset checks, in order to call it after wm/ddb
has been calculated.

Stanislav Lisovskiy (5):
  drm/i915: Decouple cdclk calculation from modeset checks
  drm/i915: Force recalculate min_cdclk if planes config changed
  drm/i915: Introduce for_each_dbuf_slice_in_mask macro
  drm/i915: Adjust CDCLK accordingly to our DBuf bw needs
  drm/i915: Remove unneeded hack now for CDCLK

 drivers/gpu/drm/i915/display/intel_bw.c   | 73 ++-
 drivers/gpu/drm/i915/display/intel_bw.h   |  7 ++
 drivers/gpu/drm/i915/display/intel_cdclk.c| 56 +++---
 drivers/gpu/drm/i915/display/intel_cdclk.h|  3 +-
 drivers/gpu/drm/i915/display/intel_display.c  | 40 +++---
 drivers/gpu/drm/i915/display/intel_display.h  |  7 ++
 .../drm/i915/display/intel_display_power.h|  1 +
 drivers/gpu/drm/i915/intel_pm.c   | 31 +++-
 drivers/gpu/drm/i915/intel_pm.h   |  3 +
 9 files changed, 194 insertions(+), 27 deletions(-)

-- 
2.24.1.485.gad05a3d8e5

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915/tgl: Put HDC flush pipe_control bit in the right dword

2020-05-05 Thread Patchwork
== Series Details ==

Series: drm/i915/tgl: Put HDC flush pipe_control bit in the right dword
URL   : https://patchwork.freedesktop.org/series/76925/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8424_full -> Patchwork_17578_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Known issues


  Here are the changes found in Patchwork_17578_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@gem_ctx_persistence@legacy-engines-mixed-process@render:
- shard-skl:  [PASS][1] -> [FAIL][2] ([i915#1528])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-skl10/igt@gem_ctx_persistence@legacy-engines-mixed-proc...@render.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17578/shard-skl5/igt@gem_ctx_persistence@legacy-engines-mixed-proc...@render.html

  * igt@gem_exec_fence@basic-await@vcs0:
- shard-skl:  [PASS][3] -> [FAIL][4] ([i915#1472]) +1 similar issue
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-skl8/igt@gem_exec_fence@basic-aw...@vcs0.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17578/shard-skl1/igt@gem_exec_fence@basic-aw...@vcs0.html

  * igt@gem_exec_params@invalid-bsd-ring:
- shard-iclb: [PASS][5] -> [SKIP][6] ([fdo#109276])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-iclb4/igt@gem_exec_par...@invalid-bsd-ring.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17578/shard-iclb7/igt@gem_exec_par...@invalid-bsd-ring.html

  * igt@i915_suspend@fence-restore-untiled:
- shard-kbl:  [PASS][7] -> [DMESG-WARN][8] ([i915#180])
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-kbl7/igt@i915_susp...@fence-restore-untiled.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17578/shard-kbl4/igt@i915_susp...@fence-restore-untiled.html

  * igt@kms_cursor_crc@pipe-c-cursor-suspend:
- shard-apl:  [PASS][9] -> [DMESG-WARN][10] ([i915#180]) +2 similar 
issues
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-apl3/igt@kms_cursor_...@pipe-c-cursor-suspend.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17578/shard-apl6/igt@kms_cursor_...@pipe-c-cursor-suspend.html

  * igt@kms_cursor_edge_walk@pipe-a-256x256-bottom-edge:
- shard-apl:  [PASS][11] -> [FAIL][12] ([i915#70] / [i915#95])
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-apl4/igt@kms_cursor_edge_w...@pipe-a-256x256-bottom-edge.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17578/shard-apl4/igt@kms_cursor_edge_w...@pipe-a-256x256-bottom-edge.html

  * igt@kms_draw_crc@draw-method-xrgb-pwrite-untiled:
- shard-skl:  [PASS][13] -> [FAIL][14] ([i915#177] / [i915#52] / 
[i915#54])
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-skl1/igt@kms_draw_...@draw-method-xrgb-pwrite-untiled.html
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17578/shard-skl8/igt@kms_draw_...@draw-method-xrgb-pwrite-untiled.html

  * igt@kms_flip_tiling@flip-changes-tiling-y:
- shard-apl:  [PASS][15] -> [FAIL][16] ([i915#95])
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-apl6/igt@kms_flip_til...@flip-changes-tiling-y.html
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17578/shard-apl3/igt@kms_flip_til...@flip-changes-tiling-y.html
- shard-kbl:  [PASS][17] -> [FAIL][18] ([i915#699] / [i915#93] / 
[i915#95])
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-kbl2/igt@kms_flip_til...@flip-changes-tiling-y.html
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17578/shard-kbl2/igt@kms_flip_til...@flip-changes-tiling-y.html

  * igt@kms_hdr@bpc-switch:
- shard-skl:  [PASS][19] -> [FAIL][20] ([i915#1188])
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-skl1/igt@kms_...@bpc-switch.html
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17578/shard-skl8/igt@kms_...@bpc-switch.html

  * igt@kms_psr@psr2_sprite_mmap_gtt:
- shard-iclb: [PASS][21] -> [SKIP][22] ([fdo#109441]) +3 similar 
issues
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-iclb2/igt@kms_psr@psr2_sprite_mmap_gtt.html
   [22]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17578/shard-iclb8/igt@kms_psr@psr2_sprite_mmap_gtt.html

  
 Possible fixes 

  * igt@gem_ctx_persistence@legacy-engines-mixed-process@render:
- shard-apl:  [FAIL][23] ([i915#1528]) -> [PASS][24]
   [23]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-apl7/igt@gem_ctx_persistence@legacy-engines-mixed-proc...@render.html
   [24]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17578/shard-apl6/igt@gem_ctx_persistence@legacy-engines-mixed-proc...@render.html

  * igt@gem_ctx_persistence@legacy-engin

Re: [Intel-gfx] [PATCH] drm/i915/gem: Teach execbuf how to wait on future syncobj

2020-05-05 Thread Chris Wilson
Quoting Chris Wilson (2020-05-05 14:48:19)
> +static void await_proxy_work(struct work_struct *work)
> +{
> +   struct await_proxy *ap = container_of(work, typeof(*ap), work);
> +   struct i915_request *rq = ap->request;
> +
> +   del_timer_sync(&ap->timer);
> +
> +   if (ap->fence) {
> +   int err = 0;
> +
> +   /*
> +* If the fence is external, we impose a 10s timeout.
> +* However, if the fence is internal, we skip a timeout in
> +* the belief that all fences are in-order (DAG, no cycles)
> +* and we can enforce forward progress by resetting the GPU if
> +* necessary. A future fence, provided by userspace, can trivially
> +* generate a cycle in the dependency graph, and so cause
> +* that entire cycle to become deadlocked, with no forward
> +* progress being made and the driver kept
> +* eternally awake.
> +*
> +* While we do have a full DAG-verifier in the i915_sw_fence
> +* debug code, that is perhaps prohibitively expensive
> +* (and is necessarily global), so we replace that by
> +* checking to see if the endpoints have a recorded cycle.
> +*/
> +   if (dma_fence_is_i915(ap->fence)) {
> +   struct i915_request *signal = to_request(ap->fence);
> +
> +   rcu_read_lock();
> +   if 
> (intel_timeline_sync_is_later(rcu_dereference(signal->timeline),
> +&rq->fence)) {
> +   i915_sw_fence_set_error_once(&signal->submit,
> +-EDEADLK);
> +   err = -EDEADLK;
> +   }
> +   rcu_read_unlock();

End points are not enough. It covers the trivial example I made for
testing, but only that. I think for this to be safe we do need the full
DAG verifier. Oh well, by the time Tvrtko has finished complaining about
it being recursive it might not be so terrible.
-Chris
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH] drm/i915/gem: Teach execbuf how to wait on future syncobj

2020-05-05 Thread Chris Wilson
If a syncobj has not yet been assigned, treat it as a future fence and
install and wait upon a dma-fence-proxy. The proxy will be replaced by
the real fence later, and that fence will be responsible for signaling
our waiter.

Link: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4854
Signed-off-by: Chris Wilson 
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c|  21 ++-
 drivers/gpu/drm/i915/i915_request.c   | 146 ++
 2 files changed, 165 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c 
b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 966523a8503f..7abb96505a31 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -5,6 +5,7 @@
  */
 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -2524,8 +2525,24 @@ await_fence_array(struct i915_execbuffer *eb,
continue;
 
fence = drm_syncobj_fence_get(syncobj);
-   if (!fence)
-   return -EINVAL;
+   if (!fence) {
+   struct dma_fence *old;
+
+   fence = dma_fence_create_proxy();
+   if (!fence)
+   return -ENOMEM;
+
+   spin_lock(&syncobj->lock);
+   old = rcu_dereference_protected(syncobj->fence, true);
+   if (unlikely(old)) {
+   dma_fence_put(fence);
+   fence = dma_fence_get(old);
+   } else {
+   rcu_assign_pointer(syncobj->fence,
+  dma_fence_get(fence));
+   }
+   spin_unlock(&syncobj->lock);
+   }
 
err = i915_request_await_dma_fence(eb->request, fence);
dma_fence_put(fence);
diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index d369b25e46bb..f7ef9fd178a0 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -23,6 +23,7 @@
  */
 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -1065,6 +1066,149 @@ i915_request_await_request(struct i915_request *to, 
struct i915_request *from)
return 0;
 }
 
+struct await_proxy {
+   struct wait_queue_entry base;
+   struct i915_request *request;
+   struct dma_fence *fence;
+   struct timer_list timer;
+   struct work_struct work;
+   int (*attach)(struct await_proxy *ap);
+   void *data;
+};
+
+static void await_proxy_work(struct work_struct *work)
+{
+   struct await_proxy *ap = container_of(work, typeof(*ap), work);
+   struct i915_request *rq = ap->request;
+
+   del_timer_sync(&ap->timer);
+
+   if (ap->fence) {
+   int err = 0;
+
+   /*
+* If the fence is external, we impose a 10s timeout.
+* However, if the fence is internal, we skip a timeout in
+* the belief that all fences are in-order (DAG, no cycles)
+* and we can enforce forward progress by resetting the GPU if
+* necessary. A future fence, provided by userspace, can trivially
+* generate a cycle in the dependency graph, and so cause
+* that entire cycle to become deadlocked, with no forward
+* progress being made and the driver kept
+* eternally awake.
+*
+* While we do have a full DAG-verifier in the i915_sw_fence
+* debug code, that is perhaps prohibitively expensive
+* (and is necessarily global), so we replace that by
+* checking to see if the endpoints have a recorded cycle.
+*/
+   if (dma_fence_is_i915(ap->fence)) {
+   struct i915_request *signal = to_request(ap->fence);
+
+   rcu_read_lock();
+   if 
(intel_timeline_sync_is_later(rcu_dereference(signal->timeline),
+&rq->fence)) {
+   i915_sw_fence_set_error_once(&signal->submit,
+-EDEADLK);
+   err = -EDEADLK;
+   }
+   rcu_read_unlock();
+   }
+
+   if (!err) {
+   mutex_lock(&rq->context->timeline->mutex);
+   err = ap->attach(ap);
+   mutex_unlock(&rq->context->timeline->mutex);
+   }
+
+   if (err < 0)
+   i915_sw_fence_set_error_once(&rq->submit, err);
+   }
+
+   i915_sw_fence_complete(&rq->submit);
+
+   dma_fence_put(ap->fence);
+   kfree(ap);
+}
+
+static i

[Intel-gfx] ✓ Fi.CI.BAT: success for SAGV support for Gen12+ (rev35)

2020-05-05 Thread Patchwork
== Series Details ==

Series: SAGV support for Gen12+ (rev35)
URL   : https://patchwork.freedesktop.org/series/75129/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8430 -> Patchwork_17581


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17581/index.html


Changes
---

  No changes found


Participating hosts (50 -> 42)
--

  Missing(8): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan 
fi-bwr-2160 fi-ctg-p8600 fi-byt-clapper fi-bdw-samus 


Build changes
-

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_8430 -> Patchwork_17581

  CI-20190529: 20190529
  CI_DRM_8430: 2daa6f8cad645f49a898158190a20a893b4aabe3 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5632: e630cb8cd2ec01d6d5358eb2a3f6ea70498b8183 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17581: c9b00f24365bb8afb5caae9aa834993b996bc66a @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

c9b00f24365b drm/i915: Enable SAGV support for Gen12
f4e7158266f6 drm/i915: Restrict qgv points which don't have enough bandwidth.
5dd210b03447 drm/i915: Add TGL+ SAGV support
9ac87f34c218 drm/i915: Separate icl and skl SAGV checking
c2e36d77edc9 drm/i915: Introduce skl_plane_wm_level accessor.

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17581/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH i-g-t 2/2] i915/gem_exec_fence: Teach invalid-wait about invalid future fences

2020-05-05 Thread Chris Wilson
When we allow a wait on a future fence, it must autoexpire if the
fence is never signaled by userspace. Also put future fences to work, as
the intention is to use them, along with WAIT_SUBMIT and semaphores, for
userspace to perform its own fine-grained scheduling. Or simply run
concurrent clients without having to flush batches between context
switches.

v2: Verify deadlock detection

Signed-off-by: Chris Wilson 
---
 tests/i915/gem_exec_fence.c | 430 +++-
 1 file changed, 427 insertions(+), 3 deletions(-)

diff --git a/tests/i915/gem_exec_fence.c b/tests/i915/gem_exec_fence.c
index 17fdaebd5..374b273e4 100644
--- a/tests/i915/gem_exec_fence.c
+++ b/tests/i915/gem_exec_fence.c
@@ -47,6 +47,15 @@ struct sync_merge_data {
 #define SYNC_IOC_MERGE _IOWR(SYNC_IOC_MAGIC, 3, struct sync_merge_data)
 #endif
 
+#define MI_SEMAPHORE_WAIT  (0x1c << 23)
+#define   MI_SEMAPHORE_POLL (1 << 15)
+#define   MI_SEMAPHORE_SAD_GT_SDD   (0 << 12)
+#define   MI_SEMAPHORE_SAD_GTE_SDD  (1 << 12)
+#define   MI_SEMAPHORE_SAD_LT_SDD   (2 << 12)
+#define   MI_SEMAPHORE_SAD_LTE_SDD  (3 << 12)
+#define   MI_SEMAPHORE_SAD_EQ_SDD   (4 << 12)
+#define   MI_SEMAPHORE_SAD_NEQ_SDD  (5 << 12)
+
 static void store(int fd, const struct intel_execution_engine2 *e,
  int fence, uint32_t target, unsigned offset_value)
 {
@@ -913,11 +922,12 @@ static void test_syncobj_invalid_wait(int fd)
struct drm_i915_gem_exec_fence fence = {
.handle = syncobj_create(fd, 0),
};
+   int out;
 
memset(&execbuf, 0, sizeof(execbuf));
execbuf.buffers_ptr = to_user_pointer(&obj);
execbuf.buffer_count = 1;
-   execbuf.flags = I915_EXEC_FENCE_ARRAY;
+   execbuf.flags = I915_EXEC_FENCE_ARRAY | I915_EXEC_FENCE_OUT;
execbuf.cliprects_ptr = to_user_pointer(&fence);
execbuf.num_cliprects = 1;
 
@@ -925,14 +935,59 @@ static void test_syncobj_invalid_wait(int fd)
obj.handle = gem_create(fd, 4096);
gem_write(fd, obj.handle, 0, &bbe, sizeof(bbe));
 
-   /* waiting before the fence is set is invalid */
+   /* waiting before the fence is set is^W may be invalid */
fence.flags = I915_EXEC_FENCE_WAIT;
-   igt_assert_eq(__gem_execbuf(fd, &execbuf), -EINVAL);
+   if (__gem_execbuf_wr(fd, &execbuf)) {
+   igt_assert_eq(__gem_execbuf(fd, &execbuf), -EINVAL);
+   return;
+   }
+
+   /* If we do allow the wait on a future fence, it should autoexpire */
+   gem_sync(fd, obj.handle);
+   out = execbuf.rsvd2 >> 32;
+   igt_assert_eq(sync_fence_status(out), -ETIMEDOUT);
+   close(out);
 
gem_close(fd, obj.handle);
syncobj_destroy(fd, fence.handle);
 }
 
+static void test_syncobj_incomplete_wait_submit(int i915)
+{
+   struct drm_i915_gem_exec_object2 obj = {
+   .handle = batch_create(i915),
+   };
+   struct drm_i915_gem_exec_fence fence = {
+   .handle = syncobj_create(i915, 0),
+   .flags = I915_EXEC_FENCE_WAIT | I915_EXEC_FENCE_WAIT_SUBMIT,
+   };
+   struct drm_i915_gem_execbuffer2 execbuf = {
+   .buffers_ptr = to_user_pointer(&obj),
+   .buffer_count = 1,
+
+   .cliprects_ptr = to_user_pointer(&fence),
+   .num_cliprects = 1,
+
+   .flags = I915_EXEC_FENCE_ARRAY | I915_EXEC_FENCE_OUT,
+   };
+   int out;
+
+   /* waiting before the fence is set is^W may be invalid */
+   if (__gem_execbuf_wr(i915, &execbuf)) {
+   igt_assert_eq(__gem_execbuf(i915, &execbuf), -EINVAL);
+   return;
+   }
+
+   /* If we do allow the wait on a future fence, it should autoexpire */
+   gem_sync(i915, obj.handle);
+   out = execbuf.rsvd2 >> 32;
+   igt_assert_eq(sync_fence_status(out), -ETIMEDOUT);
+   close(out);
+
+   gem_close(i915, obj.handle);
+   syncobj_destroy(i915, fence.handle);
+}
+
 static void test_syncobj_invalid_flags(int fd)
 {
const uint32_t bbe = MI_BATCH_BUFFER_END;
@@ -1079,6 +1134,319 @@ static void test_syncobj_wait(int fd)
}
 }
 
+static uint32_t future_batch(int i915, uint32_t offset)
+{
+   uint32_t handle = gem_create(i915, 4096);
+   const int gen = intel_gen(intel_get_drm_devid(i915));
+   uint32_t cs[16];
+   int i = 0;
+
+   cs[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
+   if (gen >= 8) {
+   cs[++i] = offset + 4000;
+   cs[++i] = 0;
+   } else if (gen >= 4) {
+   cs[++i] = 0;
+   cs[++i] = offset + 4000;
+   } else {
+   cs[i]--;
+   cs[++i] = offset + 4000;
+   }
+   cs[++i] = 1;
+   cs[i + 1] = MI_BATCH_BUFFER_END;
+   gem_write(i915, handle, 0, cs, sizeof(cs));
+
+   cs[i] = 2;
+   gem_write(i915, handle, 64, cs, sizeof(cs));
+
+   return handle;
+}

[Intel-gfx] [PATCH i-g-t 1/2] lib/i915: Report scheduler caps for timeslicing

2020-05-05 Thread Chris Wilson
Signed-off-by: Chris Wilson 
---
 include/drm-uapi/i915_drm.h |  8 +---
 lib/i915/gem_scheduler.c| 15 +++
 lib/i915/gem_scheduler.h|  1 +
 3 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/include/drm-uapi/i915_drm.h b/include/drm-uapi/i915_drm.h
index 2b55af13a..a222b6bfb 100644
--- a/include/drm-uapi/i915_drm.h
+++ b/include/drm-uapi/i915_drm.h
@@ -523,6 +523,7 @@ typedef struct drm_i915_irq_wait {
 #define   I915_SCHEDULER_CAP_PREEMPTION(1ul << 2)
 #define   I915_SCHEDULER_CAP_SEMAPHORES(1ul << 3)
 #define   I915_SCHEDULER_CAP_ENGINE_BUSY_STATS (1ul << 4)
+#define   I915_SCHEDULER_CAP_TIMESLICING   (1ul << 5)
 
 #define I915_PARAM_HUC_STATUS   42
 
@@ -1040,9 +1041,10 @@ struct drm_i915_gem_exec_fence {
 */
__u32 handle;
 
-#define I915_EXEC_FENCE_WAIT(1<<0)
-#define I915_EXEC_FENCE_SIGNAL  (1<<1)
-#define __I915_EXEC_FENCE_UNKNOWN_FLAGS (-(I915_EXEC_FENCE_SIGNAL << 1))
+#define I915_EXEC_FENCE_WAIT(1u << 0)
+#define I915_EXEC_FENCE_SIGNAL  (1u << 1)
+#define I915_EXEC_FENCE_WAIT_SUBMIT (1u << 2)
+#define __I915_EXEC_FENCE_UNKNOWN_FLAGS (-(I915_EXEC_FENCE_WAIT_SUBMIT << 1))
__u32 flags;
 };
 
diff --git a/lib/i915/gem_scheduler.c b/lib/i915/gem_scheduler.c
index 1beb85dec..a1dc694e5 100644
--- a/lib/i915/gem_scheduler.c
+++ b/lib/i915/gem_scheduler.c
@@ -131,6 +131,19 @@ bool gem_scheduler_has_engine_busy_stats(int fd)
I915_SCHEDULER_CAP_ENGINE_BUSY_STATS;
 }
 
+/**
+ * gem_scheduler_has_timeslicing:
+ * @fd: open i915 drm file descriptor
+ *
+ * Feature test macro to query whether the driver supports using HW preemption
+ * to implement timeslicing of userspace batches. This allows userspace to
+ * implement micro-level scheduling within their own batches.
+ */
+bool gem_scheduler_has_timeslicing(int fd)
+{
+   return gem_scheduler_capability(fd) & I915_SCHEDULER_CAP_TIMESLICING;
+}
+
 /**
  * gem_scheduler_print_capability:
  * @fd: open i915 drm file descriptor
@@ -151,6 +164,8 @@ void gem_scheduler_print_capability(int fd)
igt_info(" - With preemption enabled\n");
if (caps & I915_SCHEDULER_CAP_SEMAPHORES)
igt_info(" - With HW semaphores enabled\n");
+   if (caps & I915_SCHEDULER_CAP_TIMESLICING)
+   igt_info(" - With user timeslicing enabled\n");
if (caps & I915_SCHEDULER_CAP_ENGINE_BUSY_STATS)
igt_info(" - With engine busy statistics\n");
 }
diff --git a/lib/i915/gem_scheduler.h b/lib/i915/gem_scheduler.h
index 14bd4cac4..d43e84bd2 100644
--- a/lib/i915/gem_scheduler.h
+++ b/lib/i915/gem_scheduler.h
@@ -32,6 +32,7 @@ bool gem_scheduler_has_ctx_priority(int fd);
 bool gem_scheduler_has_preemption(int fd);
 bool gem_scheduler_has_semaphores(int fd);
 bool gem_scheduler_has_engine_busy_stats(int fd);
+bool gem_scheduler_has_timeslicing(int fd);
 void gem_scheduler_print_capability(int fd);
 
 #endif /* GEM_SCHEDULER_H */
-- 
2.26.2

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✗ Fi.CI.IGT: failure for Introduce Rocket Lake (rev4)

2020-05-05 Thread Patchwork
== Series Details ==

Series: Introduce Rocket Lake (rev4)
URL   : https://patchwork.freedesktop.org/series/76826/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_8424_full -> Patchwork_17577_full


Summary
---

  **FAILURE**

  Serious unknown changes coming with Patchwork_17577_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_17577_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Possible new issues
---

  Here are the unknown changes that may have been introduced in 
Patchwork_17577_full:

### IGT changes ###

 Possible regressions 

  * igt@gem_mmap_offset@open-flood:
- shard-kbl:  [PASS][1] -> [INCOMPLETE][2]
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-kbl2/igt@gem_mmap_off...@open-flood.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17577/shard-kbl7/igt@gem_mmap_off...@open-flood.html

  
Known issues


  Here are the changes found in Patchwork_17577_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@gem_eio@in-flight-suspend:
- shard-apl:  [PASS][3] -> [DMESG-WARN][4] ([i915#180]) +1 similar 
issue
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-apl1/igt@gem_...@in-flight-suspend.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17577/shard-apl4/igt@gem_...@in-flight-suspend.html

  * igt@kms_cursor_crc@pipe-c-cursor-suspend:
- shard-kbl:  [PASS][5] -> [DMESG-WARN][6] ([i915#180])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-kbl3/igt@kms_cursor_...@pipe-c-cursor-suspend.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17577/shard-kbl7/igt@kms_cursor_...@pipe-c-cursor-suspend.html

  * igt@kms_cursor_edge_walk@pipe-a-256x256-bottom-edge:
- shard-apl:  [PASS][7] -> [FAIL][8] ([i915#70] / [i915#95])
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-apl4/igt@kms_cursor_edge_w...@pipe-a-256x256-bottom-edge.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17577/shard-apl7/igt@kms_cursor_edge_w...@pipe-a-256x256-bottom-edge.html

  * igt@kms_flip_tiling@flip-changes-tiling-y:
- shard-apl:  [PASS][9] -> [FAIL][10] ([i915#95])
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-apl6/igt@kms_flip_til...@flip-changes-tiling-y.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17577/shard-apl6/igt@kms_flip_til...@flip-changes-tiling-y.html
- shard-kbl:  [PASS][11] -> [FAIL][12] ([i915#699] / [i915#93] / 
[i915#95])
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-kbl2/igt@kms_flip_til...@flip-changes-tiling-y.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17577/shard-kbl4/igt@kms_flip_til...@flip-changes-tiling-y.html

  * igt@kms_hdr@bpc-switch:
- shard-skl:  [PASS][13] -> [FAIL][14] ([i915#1188])
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-skl1/igt@kms_...@bpc-switch.html
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17577/shard-skl5/igt@kms_...@bpc-switch.html

  * igt@kms_psr@psr2_sprite_blt:
- shard-iclb: [PASS][15] -> [SKIP][16] ([fdo#109441])
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-iclb2/igt@kms_psr@psr2_sprite_blt.html
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17577/shard-iclb7/igt@kms_psr@psr2_sprite_blt.html

  * igt@kms_vblank@pipe-a-ts-continuation-suspend:
- shard-apl:  [PASS][17] -> [DMESG-WARN][18] ([i915#180] / 
[i915#95])
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-apl6/igt@kms_vbl...@pipe-a-ts-continuation-suspend.html
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17577/shard-apl6/igt@kms_vbl...@pipe-a-ts-continuation-suspend.html

  
 Possible fixes 

  * igt@gem_ctx_persistence@legacy-engines-mixed-process@render:
- shard-apl:  [FAIL][19] ([i915#1528]) -> [PASS][20]
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-apl7/igt@gem_ctx_persistence@legacy-engines-mixed-proc...@render.html
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17577/shard-apl1/igt@gem_ctx_persistence@legacy-engines-mixed-proc...@render.html

  * igt@gem_ctx_persistence@legacy-engines-mixed-process@vebox:
- shard-skl:  [FAIL][21] ([i915#1528]) -> [PASS][22]
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-skl10/igt@gem_ctx_persistence@legacy-engines-mixed-proc...@vebox.html
   [22]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17577/shard-skl1/igt@gem_ctx_persistence@legacy-engines-mixed-proc...@vebox.html

  * igt@gen9_exec_parse@allowed-all:
- shard-apl: 

[Intel-gfx] [PATCH 2/2] drm/i915: Ignore submit-fences on the same timeline

2020-05-05 Thread Chris Wilson
While we ordinarily do not skip submit-fences due to the accompanying
hook that we want to call back on execution, a submit-fence on the same
timeline is meaningless.

Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
---
 drivers/gpu/drm/i915/i915_request.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index 95edc5523a01..d369b25e46bb 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1240,6 +1240,9 @@ i915_request_await_execution(struct i915_request *rq,
continue;
}
 
+   if (fence->context == rq->fence.context)
+   continue;
+
/*
 * We don't squash repeated fence dependencies here as we
 * want to run our callback in all cases.
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 1/2] drm/i915: Mark concurrent submissions with a weak-dependency

2020-05-05 Thread Chris Wilson
We recorded the dependencies for WAIT_FOR_SUBMIT in order that we could
correctly perform priority inheritance from the parallel branches to the
common trunk. However, for the purpose of timeslicing and reset
handling, the dependency is weak -- as the pair of requests are
allowed to run in parallel and not in strict succession. So, for example,
we do not need to suspend one if the other hangs.

The real significance though is that this allows us to rearrange
groups of WAIT_FOR_SUBMIT linked requests along the single engine, and
so can resolve user level inter-batch scheduling dependencies from user
semaphores.

Fixes: c81471f5e95c ("drm/i915: Copy across scheduler behaviour flags across 
submit fences")
Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
Cc:  # v5.6+
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 9 +
 drivers/gpu/drm/i915/i915_request.c | 8 ++--
 drivers/gpu/drm/i915/i915_scheduler.c   | 4 +++-
 drivers/gpu/drm/i915/i915_scheduler.h   | 3 ++-
 drivers/gpu/drm/i915/i915_scheduler_types.h | 1 +
 5 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c 
b/drivers/gpu/drm/i915/gt/intel_lrc.c
index dc3f2ee7136d..10109f661bcb 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1880,6 +1880,9 @@ static void defer_request(struct i915_request *rq, struct 
list_head * const pl)
struct i915_request *w =
container_of(p->waiter, typeof(*w), sched);
 
+   if (p->flags & I915_DEPENDENCY_WEAK)
+   continue;
+
/* Leave semaphores spinning on the other engines */
if (w->engine != rq->engine)
continue;
@@ -2726,6 +2729,9 @@ static void __execlists_hold(struct i915_request *rq)
struct i915_request *w =
container_of(p->waiter, typeof(*w), sched);
 
+   if (p->flags & I915_DEPENDENCY_WEAK)
+   continue;
+
/* Leave semaphores spinning on the other engines */
if (w->engine != rq->engine)
continue;
@@ -2850,6 +2856,9 @@ static void __execlists_unhold(struct i915_request *rq)
struct i915_request *w =
container_of(p->waiter, typeof(*w), sched);
 
+   if (p->flags & I915_DEPENDENCY_WEAK)
+   continue;
+
/* Propagate any change in error status */
if (rq->fence.error)
i915_request_set_error_once(w, rq->fence.error);
diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index 22635bbabf06..95edc5523a01 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1038,7 +1038,9 @@ i915_request_await_request(struct i915_request *to, 
struct i915_request *from)
return 0;
 
if (to->engine->schedule) {
-   ret = i915_sched_node_add_dependency(&to->sched, &from->sched);
+   ret = i915_sched_node_add_dependency(&to->sched,
+&from->sched,
+0);
if (ret < 0)
return ret;
}
@@ -1200,7 +1202,9 @@ __i915_request_await_execution(struct i915_request *to,
 
/* Couple the dependency tree for PI on this exposed to->fence */
if (to->engine->schedule) {
-   err = i915_sched_node_add_dependency(&to->sched, &from->sched);
+   err = i915_sched_node_add_dependency(&to->sched,
+&from->sched,
+I915_DEPENDENCY_WEAK);
if (err < 0)
return err;
}
diff --git a/drivers/gpu/drm/i915/i915_scheduler.c 
b/drivers/gpu/drm/i915/i915_scheduler.c
index 37cfcf5b321b..5f4c1e49e974 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -462,7 +462,8 @@ bool __i915_sched_node_add_dependency(struct 
i915_sched_node *node,
 }
 
 int i915_sched_node_add_dependency(struct i915_sched_node *node,
-  struct i915_sched_node *signal)
+  struct i915_sched_node *signal,
+  unsigned long flags)
 {
struct i915_dependency *dep;
 
@@ -473,6 +474,7 @@ int i915_sched_node_add_dependency(struct i915_sched_node 
*node,
local_bh_disable();
 
if (!__i915_sched_node_add_dependency(node, signal, dep,
+ flags |
  I915_DEPENDENCY_EXTERNAL |

[Intel-gfx] [PATCH i-g-t] lib/i915: Reset all engine properties to defaults prior to the start of a test

2020-05-05 Thread Chris Wilson
We need each test in an isolated context, so that bad results from one
test do not interfere with the next. In particular, we want to clean up
the device and reset it to the defaults so that they are known for the
next test, and the test can focus on behaviour it wants to control.
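
(Editorial note) The helpers below rely on the per-engine sysfs interface,
where each engine directory mirrors its writable properties under a
.defaults/ subdirectory, roughly (card index and engine name illustrative):

	/sys/class/drm/card0/engine/rcs0/heartbeat_interval_ms
	/sys/class/drm/card0/engine/rcs0/preempt_timeout_ms
	/sys/class/drm/card0/engine/rcs0/.defaults/heartbeat_interval_ms
	/sys/class/drm/card0/engine/rcs0/.defaults/preempt_timeout_ms

Restoring the defaults then amounts to copying each file in .defaults/
over its sibling one level up, which is what __restore_defaults() does
for every engine found under engine/.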

Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
Cc: Joonas Lahtinen 
---
 lib/i915/gem.c | 83 ++
 1 file changed, 83 insertions(+)

diff --git a/lib/i915/gem.c b/lib/i915/gem.c
index b2717ba6a..6fa8abf21 100644
--- a/lib/i915/gem.c
+++ b/lib/i915/gem.c
@@ -22,6 +22,7 @@
  *
  */
 
+#include 
 #include 
 #include 
 
@@ -30,6 +31,87 @@
 #include "igt_debugfs.h"
 #include "igt_sysfs.h"
 
+static void __restore_defaults(int engine)
+{
+   struct dirent *de;
+   int defaults;
+   DIR *dir;
+
+   defaults = openat(engine, ".defaults", O_RDONLY);
+   if (defaults < 0)
+   return;
+
+   dir = fdopendir(defaults);
+   if (!dir) {
+   close(defaults);
+   return;
+   }
+
+   while ((de = readdir(dir))) {
+   char buf[256];
+   int fd, len;
+
+   if (*de->d_name == '.')
+   continue;
+
+   fd = openat(defaults, de->d_name, O_RDONLY);
+   if (fd < 0)
+   continue;
+
+   len = read(fd, buf, sizeof(buf));
+   close(fd);
+
+   fd = openat(engine, de->d_name, O_WRONLY);
+   if (fd < 0)
+   continue;
+
+   write(fd, buf, len);
+   close(fd);
+   }
+
+   closedir(dir);
+}
+
+static void restore_defaults(int i915)
+{
+   struct dirent *de;
+   int engines;
+   DIR *dir;
+   int sys;
+
+   sys = igt_sysfs_open(i915);
+   if (sys < 0)
+   return;
+
+   engines = openat(sys, "engine", O_RDONLY);
+   if (engines < 0)
+   goto close_sys;
+
+   dir = fdopendir(engines);
+   if (!dir) {
+   close(engines);
+   goto close_sys;
+   }
+
+   while ((de = readdir(dir))) {
+   int engine;
+
+   if (*de->d_name == '.')
+   continue;
+
+   engine = openat(engines, de->d_name, O_RDONLY);
+   if (engine < 0)
+   continue;
+
+   __restore_defaults(engine);
+   close(engine);
+   }
+
+   closedir(dir);
+close_sys:
+   close(sys);
+}
+
 static void reset_device(int i915)
 {
int dir;
@@ -66,6 +148,7 @@ void igt_require_gem(int i915)
 * sequences of batches.
 */
reset_device(i915);
+   restore_defaults(i915);
 
err = 0;
if (ioctl(i915, DRM_IOCTL_I915_GEM_THROTTLE)) {
-- 
2.26.2

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915: HDCP: retry link integrity check on failure

2020-05-05 Thread Ramalingam C
On 2020-05-05 at 14:06:51 +0200, Oliver Barta wrote:
> On Tue, May 5, 2020 at 9:38 AM Ramalingam C  wrote:
> >
> > On 2020-05-04 at 14:35:24 +0200, Oliver Barta wrote:
> > > From: Oliver Barta 
> > >
> > > A single Ri mismatch doesn't automatically mean that the link integrity
> > > is broken. Update and check of Ri and Ri' are done asynchronously. In
> > > case an update happens just between the read of Ri' and the check against
> > > Ri there will be a mismatch even if the link integrity is fine otherwise.
> >
> > Thanks for working on this. Btw, did you face this sporadic link check
> > failure, or are you fixing it theoretically?
> >
> > IMO this change will rule out possible sporadic link check failures as
> > mentioned in the commit msg, though I haven't faced this issue in my
> > testing.
> >
> > Reviewed-by: Ramalingam C 
> >
> 
> I found it by code inspection; the probability of this happening is
> very low. In order to test the patch I'm decreasing the value of
> DRM_HDCP_CHECK_PERIOD_MS to just a few ms. Once you do that it happens
> every few seconds.
Ok. That makes sense. Thanks for the explanation.

-Ram
> 
> Thanks,
> Oliver
> 
> > >
> > > Signed-off-by: Oliver Barta 
> > > ---
> > >  drivers/gpu/drm/i915/display/intel_hdmi.c | 19 ---
> > >  1 file changed, 16 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/i915/display/intel_hdmi.c 
> > > b/drivers/gpu/drm/i915/display/intel_hdmi.c
> > > index 010f37240710..3156fde392f2 100644
> > > --- a/drivers/gpu/drm/i915/display/intel_hdmi.c
> > > +++ b/drivers/gpu/drm/i915/display/intel_hdmi.c
> > > @@ -1540,7 +1540,7 @@ int intel_hdmi_hdcp_toggle_signalling(struct 
> > > intel_digital_port *intel_dig_port,
> > >  }
> > >
> > >  static
> > > -bool intel_hdmi_hdcp_check_link(struct intel_digital_port 
> > > *intel_dig_port)
> > > +bool intel_hdmi_hdcp_check_link_once(struct intel_digital_port 
> > > *intel_dig_port)
> > >  {
> > >   struct drm_i915_private *i915 = 
> > > to_i915(intel_dig_port->base.base.dev);
> > >   struct intel_connector *connector =
> > > @@ -1563,8 +1563,7 @@ bool intel_hdmi_hdcp_check_link(struct 
> > > intel_digital_port *intel_dig_port)
> > >   if (wait_for((intel_de_read(i915, HDCP_STATUS(i915, cpu_transcoder, 
> > > port)) &
> > > (HDCP_STATUS_RI_MATCH | HDCP_STATUS_ENC)) ==
> > >(HDCP_STATUS_RI_MATCH | HDCP_STATUS_ENC), 1)) {
> > > - drm_err(&i915->drm,
> > > - "Ri' mismatch detected, link check failed (%x)\n",
> > > + drm_dbg_kms(&i915->drm, "Ri' mismatch detected (%x)\n",
> > >   intel_de_read(i915, HDCP_STATUS(i915, 
> > > cpu_transcoder,
> > >   port)));
> > >   return false;
> > > @@ -1572,6 +1571,20 @@ bool intel_hdmi_hdcp_check_link(struct 
> > > intel_digital_port *intel_dig_port)
> > >   return true;
> > >  }
> > >
> > > +static
> > > +bool intel_hdmi_hdcp_check_link(struct intel_digital_port 
> > > *intel_dig_port)
> > > +{
> > > + struct drm_i915_private *i915 = 
> > > to_i915(intel_dig_port->base.base.dev);
> > > + int retry;
> > > +
> > > + for (retry = 0; retry < 3; retry++)
> > > + if (intel_hdmi_hdcp_check_link_once(intel_dig_port))
> > > + return true;
> > > +
> > > + drm_err(&i915->drm, "Link check failed\n");
> > > + return false;
> > > +}
> > > +
> > >  struct hdcp2_hdmi_msg_timeout {
> > >   u8 msg_id;
> > >   u16 timeout;
> > > --
> > > 2.20.1
> > >
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.IGT: success for Prefer drm_WARN* over WARN* (rev3)

2020-05-05 Thread Patchwork
== Series Details ==

Series: Prefer drm_WARN* over WARN* (rev3)
URL   : https://patchwork.freedesktop.org/series/75543/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8424_full -> Patchwork_17575_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Known issues


  Here are the changes found in Patchwork_17575_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@gem_eio@in-flight-suspend:
- shard-skl:  [PASS][1] -> [INCOMPLETE][2] ([i915#69]) +1 similar 
issue
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-skl2/igt@gem_...@in-flight-suspend.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17575/shard-skl7/igt@gem_...@in-flight-suspend.html
- shard-kbl:  [PASS][3] -> [DMESG-WARN][4] ([i915#180])
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-kbl1/igt@gem_...@in-flight-suspend.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17575/shard-kbl4/igt@gem_...@in-flight-suspend.html

  * igt@gem_exec_params@invalid-bsd-ring:
- shard-iclb: [PASS][5] -> [SKIP][6] ([fdo#109276])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-iclb4/igt@gem_exec_par...@invalid-bsd-ring.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17575/shard-iclb8/igt@gem_exec_par...@invalid-bsd-ring.html

  * igt@i915_pm_dc@dc6-psr:
- shard-iclb: [PASS][7] -> [FAIL][8] ([i915#454])
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-iclb7/igt@i915_pm...@dc6-psr.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17575/shard-iclb4/igt@i915_pm...@dc6-psr.html

  * igt@kms_cursor_legacy@pipe-b-torture-bo:
- shard-skl:  [PASS][9] -> [DMESG-WARN][10] ([i915#128])
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-skl9/igt@kms_cursor_leg...@pipe-b-torture-bo.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17575/shard-skl10/igt@kms_cursor_leg...@pipe-b-torture-bo.html

  * igt@kms_flip_tiling@flip-changes-tiling-y:
- shard-apl:  [PASS][11] -> [FAIL][12] ([i915#95])
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-apl6/igt@kms_flip_til...@flip-changes-tiling-y.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17575/shard-apl1/igt@kms_flip_til...@flip-changes-tiling-y.html

  * igt@kms_hdr@bpc-switch-suspend:
- shard-skl:  [PASS][13] -> [FAIL][14] ([i915#1188]) +1 similar 
issue
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-skl5/igt@kms_...@bpc-switch-suspend.html
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17575/shard-skl10/igt@kms_...@bpc-switch-suspend.html

  * igt@kms_psr@psr2_sprite_mmap_gtt:
- shard-iclb: [PASS][15] -> [SKIP][16] ([fdo#109441]) +3 similar 
issues
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-iclb2/igt@kms_psr@psr2_sprite_mmap_gtt.html
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17575/shard-iclb7/igt@kms_psr@psr2_sprite_mmap_gtt.html

  * igt@kms_vblank@pipe-a-ts-continuation-suspend:
- shard-apl:  [PASS][17] -> [DMESG-WARN][18] ([i915#180] / 
[i915#95])
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-apl6/igt@kms_vbl...@pipe-a-ts-continuation-suspend.html
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17575/shard-apl1/igt@kms_vbl...@pipe-a-ts-continuation-suspend.html

  
 Possible fixes 

  * igt@gem_ctx_persistence@legacy-engines-mixed-process@render:
- shard-apl:  [FAIL][19] ([i915#1528]) -> [PASS][20]
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-apl7/igt@gem_ctx_persistence@legacy-engines-mixed-proc...@render.html
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17575/shard-apl3/igt@gem_ctx_persistence@legacy-engines-mixed-proc...@render.html

  * igt@gem_ctx_persistence@legacy-engines-mixed-process@vebox:
- shard-skl:  [FAIL][21] ([i915#1528]) -> [PASS][22]
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-skl10/igt@gem_ctx_persistence@legacy-engines-mixed-proc...@vebox.html
   [22]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17575/shard-skl1/igt@gem_ctx_persistence@legacy-engines-mixed-proc...@vebox.html

  * igt@kms_fence_pin_leak:
- shard-kbl:  [DMESG-WARN][23] ([i915#165] / [i915#78]) -> 
[PASS][24]
   [23]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-kbl2/igt@kms_fence_pin_leak.html
   [24]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17575/shard-kbl4/igt@kms_fence_pin_leak.html

  * {igt@kms_flip@flip-vs-suspend-interruptible@b-dp1}:
- shard-kbl:  [DMESG-WARN][25] ([i915#180]) -> [PASS][26]
   [25]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8424/shard-kbl4/igt@kms_flip@flip-vs-suspend-interrupti...@b-dp1.html
   [26]: 

[Intel-gfx] [PATCH i-g-t] lib/i915: Split igt_require_gem() into i915/

2020-05-05 Thread Chris Wilson
igt_require_gem() is a peculiarity of i915/, so move it out of the core.
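
(Editorial note) After the split, a test that needs GEM pulls the helper
in explicitly; a minimal sketch of the resulting fixture, with "i915" as
a hypothetical test-local fd:

	#include "i915/gem.h"	/* igt_require_gem() now lives here */

	igt_fixture {
		i915 = drm_open_driver(DRIVER_INTEL);
		igt_require_gem(i915);
	}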

Signed-off-by: Chris Wilson 
---
 lib/Makefile.sources|  2 +
 lib/i915/gem.c  | 80 +
 lib/i915/gem.h  | 30 
 lib/i915/gem_vm.h   |  1 +
 lib/igt_dummyload.c | 11 +--
 lib/igt_gt.c|  3 +-
 lib/ioctl_wrappers.c| 49 -
 lib/ioctl_wrappers.h|  1 -
 lib/meson.build |  1 +
 tests/amdgpu/amd_prime.c|  7 +-
 tests/core_hotunplug.c  |  1 +
 tests/debugfs_test.c|  2 +
 tests/i915/gem_bad_blit.c   |  4 +-
 tests/i915/gem_bad_reloc.c  |  4 +-
 tests/i915/gem_blits.c  |  1 +
 tests/i915/gem_busy.c   |  3 +-
 tests/i915/gem_caching.c|  3 +-
 tests/i915/gem_close.c  |  1 +
 tests/i915/gem_close_race.c |  4 +-
 tests/i915/gem_concurrent_all.c |  5 +-
 tests/i915/gem_cs_prefetch.c|  1 +
 tests/i915/gem_cs_tlb.c |  4 +-
 tests/i915/gem_ctx_clone.c  |  5 +-
 tests/i915/gem_ctx_create.c |  4 +-
 tests/i915/gem_ctx_engines.c|  4 +-
 tests/i915/gem_ctx_exec.c   |  3 +-
 tests/i915/gem_ctx_freq.c   |  1 +
 tests/i915/gem_ctx_isolation.c  |  1 +
 tests/i915/gem_ctx_persistence.c|  1 +
 tests/i915/gem_ctx_ringsize.c   |  1 +
 tests/i915/gem_ctx_shared.c |  4 +-
 tests/i915/gem_ctx_sseu.c   |  4 +-
 tests/i915/gem_ctx_switch.c |  4 +-
 tests/i915/gem_ctx_thrash.c |  6 +-
 tests/i915/gem_eio.c|  3 +-
 tests/i915/gem_evict_alignment.c|  3 +-
 tests/i915/gem_evict_everything.c   |  4 +-
 tests/i915/gem_exec_alignment.c |  3 +-
 tests/i915/gem_exec_async.c |  1 +
 tests/i915/gem_exec_await.c |  9 +--
 tests/i915/gem_exec_bad_domains.c   |  4 +-
 tests/i915/gem_exec_balancer.c  |  3 +-
 tests/i915/gem_exec_big.c   |  4 +-
 tests/i915/gem_exec_capture.c   |  1 +
 tests/i915/gem_exec_create.c|  4 +-
 tests/i915/gem_exec_fence.c | 11 +--
 tests/i915/gem_exec_flush.c |  1 +
 tests/i915/gem_exec_gttfill.c   |  1 +
 tests/i915/gem_exec_latency.c   |  3 +-
 tests/i915/gem_exec_lut_handle.c|  4 +-
 tests/i915/gem_exec_nop.c   | 12 ++--
 tests/i915/gem_exec_parallel.c  |  1 +
 tests/i915/gem_exec_params.c|  8 +--
 tests/i915/gem_exec_reloc.c |  1 +
 tests/i915/gem_exec_schedule.c  |  3 +-
 tests/i915/gem_exec_store.c |  4 +-
 tests/i915/gem_exec_suspend.c   |  1 +
 tests/i915/gem_exec_whisper.c   |  1 +
 tests/i915/gem_fenced_exec_thrash.c |  1 +
 tests/i915/gem_gpgpu_fill.c |  4 +-
 tests/i915/gem_gtt_hog.c|  3 +-
 tests/i915/gem_linear_blits.c   |  3 +-
 tests/i915/gem_media_fill.c |  4 +-
 tests/i915/gem_media_vme.c  |  4 +-
 tests/i915/gem_partial_pwrite_pread.c   |  3 +-
 tests/i915/gem_pipe_control_store_loop.c|  4 +-
 tests/i915/gem_ppgtt.c  |  5 +-
 tests/i915/gem_pread_after_blit.c   |  3 +-
 tests/i915/gem_pwrite_snooped.c |  4 +-
 tests/i915/gem_read_read_speed.c|  5 +-
 tests/i915/gem_render_copy.c|  5 +-
 tests/i915/gem_render_copy_redux.c  |  3 +-
 tests/i915/gem_render_linear_blits.c|  3 +-
 tests/i915/gem_render_tiled_blits.c |  3 +-
 tests/i915/gem_request_retire.c |  3 +-
 tests/i915/gem_ringfill.c   |  3 +-
 tests/i915/gem_set_tiling_vs_blt.c  |  4 +-
 tests/i915/gem_shrink.c |  1 +
 tests/i915/gem_softpin.c|  1 +
 tests/i915/gem_spin_batch.c |  1 +
 tests/i915/gem_streaming_writes.c   |  4 +-
 tests/i915/gem_sync.c   |  1 +
 tests/i915/gem_tiled_blits.c|  3 +-
 tests/i915/gem_tiled_fence_blits.c  |  1 +
 tests/i915/gem_tiled_partial_pwrite_pread.c |  3 +-
 tests/i915/gem_unfence_active_buffers.c |  4 +-
 tests/i915/gem_unref_active_buffers.c   |  4 +-
 tests/i915/gem_userptr_blits.c  |  3 +-
 tests/i915/gem_vm_create.c  |  3 +-
 tests/i915/gem_wait.c   |  1 +
 tests/i915/gem_workarounds.c|  5 +-
 tests/i915/gen3_mixed_blits.c   |  5 +-
 tests

Re: [Intel-gfx] [PATCH v27 4/6] drm/i915: Added required new PCode commands

2020-05-05 Thread Ville Syrjälä
On Tue, May 05, 2020 at 01:22:45PM +0300, Stanislav Lisovskiy wrote:
> We need new PCode request commands and reply codes to be
> added as a preparation patch for restricting QGV points for
> the new SAGV support.
> 
> v2: - Extracted those changes into separate patch
>   (Ville Syrjälä)
> 
> v3: - Moved new PCode masks to another place from
>   PCode commands(Ville)
> 
> v4: - Moved new PCode masks to the corresponding PCode
>   command, with indentation (Ville)
> - Changed naming to ICL_ instead of GEN11_
>   to fit more nicely into existing definition
>   style.
> 
> Signed-off-by: Stanislav Lisovskiy 

Thanks. Pushed this one.

> ---
>  drivers/gpu/drm/i915/i915_reg.h   | 4 
>  drivers/gpu/drm/i915/intel_sideband.c | 2 ++
>  2 files changed, 6 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
> index fd9f2904d93c..f23a18ee28f9 100644
> --- a/drivers/gpu/drm/i915/i915_reg.h
> +++ b/drivers/gpu/drm/i915/i915_reg.h
> @@ -9064,6 +9064,7 @@ enum {
>  #define GEN7_PCODE_ILLEGAL_DATA  0x3
>  #define GEN11_PCODE_ILLEGAL_SUBCOMMAND   0x4
>  #define GEN11_PCODE_LOCKED   0x6
> +#define GEN11_PCODE_REJECTED 0x11
>  #define GEN7_PCODE_MIN_FREQ_TABLE_GT_RATIO_OUT_OF_RANGE 0x10
>  #define   GEN6_PCODE_WRITE_RC6VIDS   0x4
>  #define   GEN6_PCODE_READ_RC6VIDS0x5
> @@ -9085,6 +9086,9 @@ enum {
>  #define   ICL_PCODE_MEM_SUBSYSYSTEM_INFO 0xd
>  #define ICL_PCODE_MEM_SS_READ_GLOBAL_INFO(0x0 << 8)
>  #define ICL_PCODE_MEM_SS_READ_QGV_POINT_INFO(point)  (((point) << 
> 16) | (0x1 << 8))
> +#define   ICL_PCODE_SAGV_DE_MEM_SS_CONFIG0xe
> +#define ICL_PCODE_POINTS_RESTRICTED  0x0
> +#define ICL_PCODE_POINTS_RESTRICTED_MASK 0x1
>  #define   GEN6_PCODE_READ_D_COMP 0x10
>  #define   GEN6_PCODE_WRITE_D_COMP0x11
>  #define   ICL_PCODE_EXIT_TCCOLD  0x12
> diff --git a/drivers/gpu/drm/i915/intel_sideband.c 
> b/drivers/gpu/drm/i915/intel_sideband.c
> index d5129c1dd452..916ccd1c0e96 100644
> --- a/drivers/gpu/drm/i915/intel_sideband.c
> +++ b/drivers/gpu/drm/i915/intel_sideband.c
> @@ -371,6 +371,8 @@ static int gen7_check_mailbox_status(u32 mbox)
>   return -ENXIO;
>   case GEN11_PCODE_LOCKED:
>   return -EBUSY;
> + case GEN11_PCODE_REJECTED:
> + return -EACCES;
>   case GEN7_PCODE_MIN_FREQ_TABLE_GT_RATIO_OUT_OF_RANGE:
>   return -EOVERFLOW;
>   default:
> -- 
> 2.24.1.485.gad05a3d8e5

-- 
Ville Syrjälä
Intel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH v27 2/6] drm/i915: Separate icl and skl SAGV checking

2020-05-05 Thread Ville Syrjälä
On Tue, May 05, 2020 at 01:42:46PM +0300, Ville Syrjälä wrote:
> On Tue, May 05, 2020 at 01:22:43PM +0300, Stanislav Lisovskiy wrote:
> > Introduce platform dependent SAGV checking in
> > combination with bandwidth state pipe SAGV mask.
> > 
> > v2, v3, v4, v5, v6: Fix rebase conflict
> > 
> > Signed-off-by: Stanislav Lisovskiy 
> > ---
> >  drivers/gpu/drm/i915/intel_pm.c | 30 --
> >  1 file changed, 28 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_pm.c 
> > b/drivers/gpu/drm/i915/intel_pm.c
> > index da567fac7c93..c7d726a656b2 100644
> > --- a/drivers/gpu/drm/i915/intel_pm.c
> > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > @@ -3853,6 +3853,24 @@ static bool intel_crtc_can_enable_sagv(const struct 
> > intel_crtc_state *crtc_state
> > return true;
> >  }
> >  
> > +static bool skl_crtc_can_enable_sagv(const struct intel_crtc_state 
> > *crtc_state)
> > +{
> > +   struct intel_atomic_state *state = 
> > to_intel_atomic_state(crtc_state->uapi.state);
> > +   /*
> > +* SKL+ workaround: bspec recommends we disable SAGV when we have
> > +* more then one pipe enabled
> > +*/
> > +   if (hweight8(state->active_pipes) > 1)
> > +   return false;
> 
> That stuff should no longer be here since we now have it done properly
> in intel_can_enable_sagv().
> 
> > +
> > +   return intel_crtc_can_enable_sagv(crtc_state);
> > +}
> > +
> > +static bool icl_crtc_can_enable_sagv(const struct intel_crtc_state 
> > *crtc_state)
> > +{
> > +   return intel_crtc_can_enable_sagv(crtc_state);
> > +}
> 
> This looks the wrong way around. IMO intel_crtc_can_enable_sagv()
> should rather call the skl vs. icl variants as needed. Although we
> don't yet have the icl variant so the ordering of the patches is
> a bit weird.

Do we even need an icl variant actually? Does it use the skl or tgl
way of checking for sagv yes vs. no?

> 
> > +
> >  bool intel_can_enable_sagv(const struct intel_bw_state *bw_state)
> >  {
> > if (bw_state->active_pipes && !is_power_of_2(bw_state->active_pipes))
> > @@ -3863,22 +3881,30 @@ bool intel_can_enable_sagv(const struct 
> > intel_bw_state *bw_state)
> >  
> >  static int intel_compute_sagv_mask(struct intel_atomic_state *state)
> >  {
> > +   struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> > int ret;
> > struct intel_crtc *crtc;
> > -   struct intel_crtc_state *new_crtc_state;
> > +   const struct intel_crtc_state *new_crtc_state;
> > struct intel_bw_state *new_bw_state = NULL;
> > const struct intel_bw_state *old_bw_state = NULL;
> > int i;
> >  
> > for_each_new_intel_crtc_in_state(state, crtc,
> >  new_crtc_state, i) {
> > +   bool can_sagv;
> > +
> > new_bw_state = intel_atomic_get_bw_state(state);
> > if (IS_ERR(new_bw_state))
> > return PTR_ERR(new_bw_state);
> >  
> > old_bw_state = intel_atomic_get_old_bw_state(state);
> >  
> > -   if (intel_crtc_can_enable_sagv(new_crtc_state))
> > +   if (INTEL_GEN(dev_priv) >= 11)
> > +   can_sagv = icl_crtc_can_enable_sagv(new_crtc_state);
> > +   else
> > +   can_sagv = skl_crtc_can_enable_sagv(new_crtc_state);
> > +
> > +   if (can_sagv)
> > new_bw_state->pipe_sagv_reject &= ~BIT(crtc->pipe);
> > else
> > new_bw_state->pipe_sagv_reject |= BIT(crtc->pipe);
> > -- 
> > 2.24.1.485.gad05a3d8e5
> 
> -- 
> Ville Syrjälä
> Intel
> ___
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Ville Syrjälä
Intel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH v27 3/6] drm/i915: Add TGL+ SAGV support

2020-05-05 Thread Ville Syrjälä
On Tue, May 05, 2020 at 01:22:44PM +0300, Stanislav Lisovskiy wrote:
> Starting from TGL we need to have separate wm0
> values for SAGV and non-SAGV, which affects
> how the calculations are done.
> 
> v2: Remove long lines
> v3: Removed COLOR_PLANE enum references
> v4, v5, v6: Fixed rebase conflict
> 
> Signed-off-by: Stanislav Lisovskiy 
> ---
>  drivers/gpu/drm/i915/display/intel_display.c  |   8 +-
>  .../drm/i915/display/intel_display_types.h|   3 +
>  drivers/gpu/drm/i915/intel_pm.c   | 128 +-
>  3 files changed, 130 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_display.c 
> b/drivers/gpu/drm/i915/display/intel_display.c
> index fd6d63b03489..be5741cb7595 100644
> --- a/drivers/gpu/drm/i915/display/intel_display.c
> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> @@ -13961,7 +13961,9 @@ static void verify_wm_state(struct intel_crtc *crtc,
>   /* Watermarks */
>   for (level = 0; level <= max_level; level++) {
>   if (skl_wm_level_equals(&hw_plane_wm->wm[level],
> - &sw_plane_wm->wm[level]))
> + &sw_plane_wm->wm[level]) ||
> + (level == 0 && 
> skl_wm_level_equals(&hw_plane_wm->wm[level],
> +
> &sw_plane_wm->sagv_wm0)))
>   continue;
>  
>   drm_err(&dev_priv->drm,
> @@ -14016,7 +14018,9 @@ static void verify_wm_state(struct intel_crtc *crtc,
>   /* Watermarks */
>   for (level = 0; level <= max_level; level++) {
>   if (skl_wm_level_equals(&hw_plane_wm->wm[level],
> - &sw_plane_wm->wm[level]))
> + &sw_plane_wm->wm[level]) ||
> + (level == 0 && 
> skl_wm_level_equals(&hw_plane_wm->wm[level],
> +
> &sw_plane_wm->sagv_wm0)))
>   continue;
>  
>   drm_err(&dev_priv->drm,
> diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h 
> b/drivers/gpu/drm/i915/display/intel_display_types.h
> index 9488449e4b94..32cbbf7dddc6 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_types.h
> +++ b/drivers/gpu/drm/i915/display/intel_display_types.h
> @@ -688,11 +688,14 @@ struct skl_plane_wm {
>   struct skl_wm_level wm[8];
>   struct skl_wm_level uv_wm[8];
>   struct skl_wm_level trans_wm;
> + struct skl_wm_level sagv_wm0;
> + struct skl_wm_level uv_sagv_wm0;

As mentioned before uv_wm is not a thing on icl+, so nuke this.

>   bool is_planar;
>  };
>  
>  struct skl_pipe_wm {
>   struct skl_plane_wm planes[I915_MAX_PLANES];
> + bool can_sagv;

I would call it use_sagv_wm or somesuch to make it actually clear what
it does.

>  };
>  
>  enum vlv_wm_level {
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index c7d726a656b2..1b9925b6672c 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -3871,6 +3871,9 @@ static bool icl_crtc_can_enable_sagv(const struct 
> intel_crtc_state *crtc_state)
>   return intel_crtc_can_enable_sagv(crtc_state);
>  }
>  
> +static bool
> +tgl_crtc_can_enable_sagv(const struct intel_crtc_state *crtc_state);
> +
>  bool intel_can_enable_sagv(const struct intel_bw_state *bw_state)
>  {
>   if (bw_state->active_pipes && !is_power_of_2(bw_state->active_pipes))
> @@ -3884,7 +3887,7 @@ static int intel_compute_sagv_mask(struct 
> intel_atomic_state *state)
>   struct drm_i915_private *dev_priv = to_i915(state->base.dev);
>   int ret;
>   struct intel_crtc *crtc;
> - const struct intel_crtc_state *new_crtc_state;
> + struct intel_crtc_state *new_crtc_state;
>   struct intel_bw_state *new_bw_state = NULL;
>   const struct intel_bw_state *old_bw_state = NULL;
>   int i;
> @@ -3899,7 +3902,9 @@ static int intel_compute_sagv_mask(struct 
> intel_atomic_state *state)
>  
>   old_bw_state = intel_atomic_get_old_bw_state(state);
>  
> - if (INTEL_GEN(dev_priv) >= 11)
> + if (INTEL_GEN(dev_priv) >= 12)
> + can_sagv = tgl_crtc_can_enable_sagv(new_crtc_state);
> + else if (INTEL_GEN(dev_priv) >= 11)
>   can_sagv = icl_crtc_can_enable_sagv(new_crtc_state);
>   else
>   can_sagv = skl_crtc_can_enable_sagv(new_crtc_state);
> @@ -3921,6 +3926,24 @@ static int intel_compute_sagv_mask(struct 
> intel_atomic_state *state)
>   return ret;
>   }
>  
> + for_each_new_intel_crtc_in_state(state, crtc,
> +  new_crtc_state, i) {
> + struct skl_pipe_wm *pipe_wm = &new_crtc_state->wm.skl.

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/execlists: Record the active CCID from before reset

2020-05-05 Thread Patchwork
== Series Details ==

Series: drm/i915/execlists: Record the active CCID from before reset
URL   : https://patchwork.freedesktop.org/series/76946/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8427 -> Patchwork_17580


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17580/index.html

Known issues


  Here are the changes found in Patchwork_17580 that come from known issues:

### IGT changes ###

 Issues hit 

  * igt@i915_selftest@live@gt_engines:
- fi-bwr-2160:[PASS][1] -> [INCOMPLETE][2] ([i915#489])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8427/fi-bwr-2160/igt@i915_selftest@live@gt_engines.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17580/fi-bwr-2160/igt@i915_selftest@live@gt_engines.html

  
  [i915#489]: https://gitlab.freedesktop.org/drm/intel/issues/489


Participating hosts (50 -> 43)
--

  Missing(7): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan 
fi-ctg-p8600 fi-byt-clapper fi-bdw-samus 


Build changes
-

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_8427 -> Patchwork_17580

  CI-20190529: 20190529
  CI_DRM_8427: d7afe86abe766f68e758be1c9db6618e55bdf38d @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5632: e630cb8cd2ec01d6d5358eb2a3f6ea70498b8183 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17580: 92e7b4e3f924ed7fd4d57f69f156573ac43684cb @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

92e7b4e3f924 drm/i915/execlists: Record the active CCID from before reset

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17580/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH v27 2/6] drm/i915: Separate icl and skl SAGV checking

2020-05-05 Thread Ville Syrjälä
On Tue, May 05, 2020 at 01:22:43PM +0300, Stanislav Lisovskiy wrote:
> Introduce platform dependent SAGV checking in
> combination with bandwidth state pipe SAGV mask.
> 
> v2, v3, v4, v5, v6: Fix rebase conflict
> 
> Signed-off-by: Stanislav Lisovskiy 
> ---
>  drivers/gpu/drm/i915/intel_pm.c | 30 --
>  1 file changed, 28 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index da567fac7c93..c7d726a656b2 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -3853,6 +3853,24 @@ static bool intel_crtc_can_enable_sagv(const struct 
> intel_crtc_state *crtc_state
>   return true;
>  }
>  
> +static bool skl_crtc_can_enable_sagv(const struct intel_crtc_state 
> *crtc_state)
> +{
> + struct intel_atomic_state *state = 
> to_intel_atomic_state(crtc_state->uapi.state);
> + /*
> +  * SKL+ workaround: bspec recommends we disable SAGV when we have
> +  * more then one pipe enabled
> +  */
> + if (hweight8(state->active_pipes) > 1)
> + return false;

That stuff should no longer be here since we now have it done properly
in intel_can_enable_sagv().

> +
> + return intel_crtc_can_enable_sagv(crtc_state);
> +}
> +
> +static bool icl_crtc_can_enable_sagv(const struct intel_crtc_state 
> *crtc_state)
> +{
> + return intel_crtc_can_enable_sagv(crtc_state);
> +}

This looks the wrong way around. IMO intel_crtc_can_enable_sagv()
should rather call the skl vs. icl variants as needed. Although we
don't yet have the icl variant so the ordering of the patches is
a bit weird.

> +
>  bool intel_can_enable_sagv(const struct intel_bw_state *bw_state)
>  {
>   if (bw_state->active_pipes && !is_power_of_2(bw_state->active_pipes))
> @@ -3863,22 +3881,30 @@ bool intel_can_enable_sagv(const struct 
> intel_bw_state *bw_state)
>  
>  static int intel_compute_sagv_mask(struct intel_atomic_state *state)
>  {
> + struct drm_i915_private *dev_priv = to_i915(state->base.dev);
>   int ret;
>   struct intel_crtc *crtc;
> - struct intel_crtc_state *new_crtc_state;
> + const struct intel_crtc_state *new_crtc_state;
>   struct intel_bw_state *new_bw_state = NULL;
>   const struct intel_bw_state *old_bw_state = NULL;
>   int i;
>  
>   for_each_new_intel_crtc_in_state(state, crtc,
>new_crtc_state, i) {
> + bool can_sagv;
> +
>   new_bw_state = intel_atomic_get_bw_state(state);
>   if (IS_ERR(new_bw_state))
>   return PTR_ERR(new_bw_state);
>  
>   old_bw_state = intel_atomic_get_old_bw_state(state);
>  
> - if (intel_crtc_can_enable_sagv(new_crtc_state))
> + if (INTEL_GEN(dev_priv) >= 11)
> + can_sagv = icl_crtc_can_enable_sagv(new_crtc_state);
> + else
> + can_sagv = skl_crtc_can_enable_sagv(new_crtc_state);
> +
> + if (can_sagv)
>   new_bw_state->pipe_sagv_reject &= ~BIT(crtc->pipe);
>   else
>   new_bw_state->pipe_sagv_reject |= BIT(crtc->pipe);
> -- 
> 2.24.1.485.gad05a3d8e5

-- 
Ville Syrjälä
Intel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH i-g-t] i915/gem_ctx_exec: Exploit resource contention to verify execbuf independence

2020-05-05 Thread Chris Wilson
Even if one client is blocked on a resource, that should not impact
another client.

Signed-off-by: Chris Wilson 
---
 tests/i915/gem_ctx_exec.c | 122 +-
 1 file changed, 121 insertions(+), 1 deletion(-)

diff --git a/tests/i915/gem_ctx_exec.c b/tests/i915/gem_ctx_exec.c
index ad2f9e545..97a1e0d32 100644
--- a/tests/i915/gem_ctx_exec.c
+++ b/tests/i915/gem_ctx_exec.c
@@ -35,8 +35,9 @@
 #include 
 #include 
 #include 
-#include 
 #include 
+#include 
+#include 
 #include 
 
 #include 
@@ -331,6 +332,122 @@ static void nohangcheck_hostile(int i915)
close(i915);
 }
 
+static void kill_children(int sig)
+{
+   sighandler_t old;
+
+   old = signal(sig, SIG_IGN);
+   kill(-getpgrp(), sig);
+   signal(sig, old);
+}
+
+static bool has_persistence(int i915)
+{
+   struct drm_i915_gem_context_param p = {
+   .param = I915_CONTEXT_PARAM_PERSISTENCE,
+   };
+   uint64_t saved;
+
+   if (__gem_context_get_param(i915, &p))
+   return false;
+
+   saved = p.value;
+   p.value = 0;
+   if (__gem_context_set_param(i915, &p))
+   return false;
+
+   p.value = saved;
+   return __gem_context_set_param(i915, &p) == 0;
+}
+
+static void pi_active(int i915)
+{
+   igt_spin_t *spin = igt_spin_new(i915);
+   unsigned long count = 0;
+   bool blocked = false;
+   struct pollfd pfd;
+   int lnk[2];
+   int *done;
+
+   igt_require(gem_scheduler_enabled(i915));
+   igt_require(has_persistence(i915)); /* for graceful error recovery */
+
+   done = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED | MAP_ANON, -1, 0);
+   igt_assert(done != MAP_FAILED);
+
+   igt_assert(pipe(lnk) == 0);
+
+   igt_fork(child, 1) {
+   struct sigaction sa = { .sa_handler = alarm_handler };
+
+   sigaction(SIGHUP, &sa, NULL);
+
+   do {
+   uint32_t ctx;
+
+   if (__gem_context_clone(i915, 0,
+   I915_CONTEXT_CLONE_ENGINES |
+   I915_CONTEXT_CLONE_VM,
+   0, &ctx))
+   break;
+
+   gem_context_set_persistence(i915, ctx, false);
+   if (READ_ONCE(*done))
+   break;
+
+   spin->execbuf.rsvd1 = ctx;
+   if (__execbuf(i915, &spin->execbuf))
+   break;
+
+   count++;
+   write(lnk[1], &count, sizeof(count));
+   } while (1);
+   }
+
+   pfd.fd = lnk[0];
+   pfd.events = POLLIN;
+   close(lnk[1]);
+
+   igt_until_timeout(90) {
+   if (poll(&pfd, 1, 1000) == 0) {
+   igt_info("Child blocked after %lu active contexts\n",
+count);
+   blocked = true;
+   break;
+   }
+   read(pfd.fd, &count, sizeof(count));
+   }
+
+   if (blocked) {
+   struct sigaction old_sa, sa = { .sa_handler = alarm_handler };
+   struct itimerval itv;
+
+   sigaction(SIGALRM, &sa, &old_sa);
+   itv.it_value.tv_sec = 0;
+   itv.it_value.tv_usec = 25; /* 250ms */
+   setitimer(ITIMER_REAL, &itv, NULL);
+
+   igt_assert_f(__execbuf(i915, &spin->execbuf) == 0,
+"Active execbuf blocked for more than 250ms by %lu 
child contexts\n",
+count);
+
+   memset(&itv, 0, sizeof(itv));
+   setitimer(ITIMER_REAL, &itv, NULL);
+   sigaction(SIGALRM, &old_sa, NULL);
+   } else {
+   igt_info("Not blocked after %lu active contexts\n",
+count);
+   }
+
+   *done = 1;
+   kill_children(SIGHUP);
+   igt_waitchildren();
+   gem_quiescent_gpu(i915);
+   close(lnk[0]);
+
+   munmap(done, 4096);
+}
+
 igt_main
 {
const uint32_t batch[2] = { 0, MI_BATCH_BUFFER_END };
@@ -369,6 +486,9 @@ igt_main
igt_subtest("eviction")
big_exec(fd, handle, 0);
 
+   igt_subtest("basic-pi-active")
+   pi_active(fd);
+
igt_subtest("basic-norecovery")
norecovery(fd);
 
-- 
2.26.2

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v27 4/6] drm/i915: Added required new PCode commands

2020-05-05 Thread Stanislav Lisovskiy
We need new PCode request commands and reply codes to be
added as a preparation patch for restricting QGV points for
the new SAGV support.
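
(Editorial note) A rough sketch, not taken from this series, of how the
new command and reply definitions might be exercised, assuming the
existing skl_pcode_request() helper; the function name and points_mask
are illustrative, and the real call site is added by the follow-up
"Restrict qgv points" patch:

	/* Ask PCode to mask off a set of QGV points and poll until it
	 * confirms the restriction took effect.
	 */
	static int icl_pcode_restrict_qgv_points(struct drm_i915_private *i915,
						 u32 points_mask)
	{
		int ret;

		ret = skl_pcode_request(i915, ICL_PCODE_SAGV_DE_MEM_SS_CONFIG,
					points_mask,
					ICL_PCODE_POINTS_RESTRICTED_MASK,
					ICL_PCODE_POINTS_RESTRICTED,
					1);
		if (ret < 0) {	/* e.g. -EACCES if PCode rejected the request */
			drm_err(&i915->drm,
				"Failed to disable qgv points (%d)\n", ret);
			return ret;
		}

		return 0;
	}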

v2: - Extracted those changes into separate patch
  (Ville Syrjälä)

v3: - Moved new PCode masks to another place from
  PCode commands(Ville)

v4: - Moved new PCode masks to the corresponding PCode
  command, with indentation (Ville)
- Changed naming to ICL_ instead of GEN11_
  to fit more nicely into existing definition
  style.

Signed-off-by: Stanislav Lisovskiy 
---
 drivers/gpu/drm/i915/i915_reg.h   | 4 
 drivers/gpu/drm/i915/intel_sideband.c | 2 ++
 2 files changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index fd9f2904d93c..f23a18ee28f9 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -9064,6 +9064,7 @@ enum {
 #define GEN7_PCODE_ILLEGAL_DATA0x3
 #define GEN11_PCODE_ILLEGAL_SUBCOMMAND 0x4
 #define GEN11_PCODE_LOCKED 0x6
+#define GEN11_PCODE_REJECTED   0x11
 #define GEN7_PCODE_MIN_FREQ_TABLE_GT_RATIO_OUT_OF_RANGE 0x10
 #define   GEN6_PCODE_WRITE_RC6VIDS 0x4
 #define   GEN6_PCODE_READ_RC6VIDS  0x5
@@ -9085,6 +9086,9 @@ enum {
 #define   ICL_PCODE_MEM_SUBSYSYSTEM_INFO   0xd
 #define ICL_PCODE_MEM_SS_READ_GLOBAL_INFO  (0x0 << 8)
 #define ICL_PCODE_MEM_SS_READ_QGV_POINT_INFO(point)(((point) << 
16) | (0x1 << 8))
+#define   ICL_PCODE_SAGV_DE_MEM_SS_CONFIG  0xe
+#define ICL_PCODE_POINTS_RESTRICTED0x0
+#define ICL_PCODE_POINTS_RESTRICTED_MASK   0x1
 #define   GEN6_PCODE_READ_D_COMP   0x10
 #define   GEN6_PCODE_WRITE_D_COMP  0x11
 #define   ICL_PCODE_EXIT_TCCOLD0x12
diff --git a/drivers/gpu/drm/i915/intel_sideband.c 
b/drivers/gpu/drm/i915/intel_sideband.c
index d5129c1dd452..916ccd1c0e96 100644
--- a/drivers/gpu/drm/i915/intel_sideband.c
+++ b/drivers/gpu/drm/i915/intel_sideband.c
@@ -371,6 +371,8 @@ static int gen7_check_mailbox_status(u32 mbox)
return -ENXIO;
case GEN11_PCODE_LOCKED:
return -EBUSY;
+   case GEN11_PCODE_REJECTED:
+   return -EACCES;
case GEN7_PCODE_MIN_FREQ_TABLE_GT_RATIO_OUT_OF_RANGE:
return -EOVERFLOW;
default:
-- 
2.24.1.485.gad05a3d8e5

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v27 1/6] drm/i915: Introduce skl_plane_wm_level accessor.

2020-05-05 Thread Stanislav Lisovskiy
For the future Gen12 SAGV implementation we need to
seamlessly alter the calculated wm levels, depending
on whether we are allowed to enable SAGV or not.

This accessor will give us the additional flexibility
to do that.

Currently the accessor still simply works as a
"pass-through" function. This will change in the
next patches of this series.

v2: - plane_id -> plane->id(Ville Syrjälä)
- Moved wm_level var to have more local scope
  (Ville Syrjälä)
- Renamed yuv to color_plane(Ville Syrjälä) in
  skl_plane_wm_level

v3: - plane->id -> plane_id(this time for real, Ville Syrjälä)
- Changed colorplane id type from boolean to int as index
  (Ville Syrjälä)
- Moved crtc_state param so that it is first now
  (Ville Syrjälä)
- Moved wm_level declaration to tigher scope in
  skl_write_plane_wm(Ville Syrjälä)

v4: - Started to use enum values for color plane
- Do sizeof for a type what we are memset'ing
- Zero out wm_uv as well(Ville Syrjälä)

v5: - Fixed rebase conflict caused by COLOR_PLANE_*
  enum removal

Signed-off-by: Stanislav Lisovskiy 
---
 drivers/gpu/drm/i915/intel_pm.c | 85 ++---
 1 file changed, 67 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 416cb1a1e7cb..da567fac7c93 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -4632,6 +4632,18 @@ icl_get_total_relative_data_rate(struct intel_crtc_state 
*crtc_state,
return total_data_rate;
 }
 
+static const struct skl_wm_level *
+skl_plane_wm_level(const struct intel_crtc_state *crtc_state,
+  enum plane_id plane_id,
+  int level,
+  int color_plane)
+{
+   const struct skl_plane_wm *wm =
+   &crtc_state->wm.skl.optimal.planes[plane_id];
+
+   return color_plane == 0 ? &wm->wm[level] : &wm->uv_wm[level];
+}
+
 static int
 skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 {
@@ -4691,22 +4703,28 @@ skl_allocate_pipe_ddb(struct intel_crtc_state 
*crtc_state)
 */
for (level = ilk_wm_max_level(dev_priv); level >= 0; level--) {
blocks = 0;
+
for_each_plane_id_on_crtc(crtc, plane_id) {
-   const struct skl_plane_wm *wm =
-   &crtc_state->wm.skl.optimal.planes[plane_id];
+   const struct skl_wm_level *wm_level;
+   const struct skl_wm_level *wm_uv_level;
+
+   wm_level = skl_plane_wm_level(crtc_state, plane_id,
+ level, 0);
+   wm_uv_level = skl_plane_wm_level(crtc_state, plane_id,
+level, 1);
 
if (plane_id == PLANE_CURSOR) {
-   if (wm->wm[level].min_ddb_alloc > 
total[PLANE_CURSOR]) {
+   if (wm_level->min_ddb_alloc > 
total[PLANE_CURSOR]) {
drm_WARN_ON(&dev_priv->drm,
-   wm->wm[level].min_ddb_alloc 
!= U16_MAX);
+   wm_level->min_ddb_alloc != 
U16_MAX);
blocks = U32_MAX;
break;
}
continue;
}
 
-   blocks += wm->wm[level].min_ddb_alloc;
-   blocks += wm->uv_wm[level].min_ddb_alloc;
+   blocks += wm_level->min_ddb_alloc;
+   blocks += wm_uv_level->min_ddb_alloc;
}
 
if (blocks <= alloc_size) {
@@ -4729,11 +4747,16 @@ skl_allocate_pipe_ddb(struct intel_crtc_state 
*crtc_state)
 * proportional to its relative data rate.
 */
for_each_plane_id_on_crtc(crtc, plane_id) {
-   const struct skl_plane_wm *wm =
-   &crtc_state->wm.skl.optimal.planes[plane_id];
+   const struct skl_wm_level *wm_level;
+   const struct skl_wm_level *wm_uv_level;
u64 rate;
u16 extra;
 
+   wm_level = skl_plane_wm_level(crtc_state, plane_id,
+ level, 0);
+   wm_uv_level = skl_plane_wm_level(crtc_state, plane_id,
+level, 1);
+
if (plane_id == PLANE_CURSOR)
continue;
 
@@ -4748,7 +4771,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
extra = min_t(u16, alloc_size,
  DIV64_U64_ROUND_UP(alloc_size * rate,
 total_data_rate));
-   total[plane_id] = wm->wm[level].min_ddb_alloc + extra;

[Intel-gfx] [PATCH v27 6/6] drm/i915: Enable SAGV support for Gen12

2020-05-05 Thread Stanislav Lisovskiy
Flip the switch and enable SAGV support
for Gen12 also.

Signed-off-by: Stanislav Lisovskiy 
---
 drivers/gpu/drm/i915/intel_pm.c | 4 
 1 file changed, 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 5d0aab515e2a..a12f1d0a0be2 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3638,10 +3638,6 @@ static bool skl_needs_memory_bw_wa(struct 
drm_i915_private *dev_priv)
 static bool
 intel_has_sagv(struct drm_i915_private *dev_priv)
 {
-   /* HACK! */
-   if (IS_GEN(dev_priv, 12))
-   return false;
-
return (IS_GEN9_BC(dev_priv) || INTEL_GEN(dev_priv) >= 10) &&
dev_priv->sagv_status != I915_SAGV_NOT_CONTROLLED;
 }
-- 
2.24.1.485.gad05a3d8e5

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v27 0/6] SAGV support for Gen12+

2020-05-05 Thread Stanislav Lisovskiy
For Gen11+ platforms BSpec suggests disabling specific
QGV points separately, depending on bandwidth limitations
and the current display configuration. This required adding
a new PCode request for disabling QGV points and some
refactoring of the existing SAGV code.
The intel_can_enable_sagv function also had to be refactored,
as the current one seems outdated, relying on SKL-specific
workarounds and not following BSpec for Gen11+.

v25: Rebased patch series as part was merged already
v26: Had to resend the whole series as one more mid patch was added
v27: Patches 2,3,7 were pushed, have to resend the series to prevent
 build failure.

Stanislav Lisovskiy (6):
  drm/i915: Introduce skl_plane_wm_level accessor.
  drm/i915: Separate icl and skl SAGV checking
  drm/i915: Add TGL+ SAGV support
  drm/i915: Added required new PCode commands
  drm/i915: Restrict qgv points which don't have enough bandwidth.
  drm/i915: Enable SAGV support for Gen12

 drivers/gpu/drm/i915/display/intel_bw.c   | 139 ++--
 drivers/gpu/drm/i915/display/intel_bw.h   |   9 +
 drivers/gpu/drm/i915/display/intel_display.c  |   8 +-
 .../drm/i915/display/intel_display_types.h|   6 +
 drivers/gpu/drm/i915/i915_reg.h   |   4 +
 drivers/gpu/drm/i915/intel_pm.c   | 303 --
 drivers/gpu/drm/i915/intel_pm.h   |   2 +
 drivers/gpu/drm/i915/intel_sideband.c |   2 +
 8 files changed, 407 insertions(+), 66 deletions(-)

-- 
2.24.1.485.gad05a3d8e5

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v27 3/6] drm/i915: Add TGL+ SAGV support

2020-05-05 Thread Stanislav Lisovskiy
Starting from TGL we need to have separate wm0
values for SAGV and non-SAGV, which affects
how the calculations are done.
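
A simplified sketch of what such an accessor can look like, based only on
the skl_plane_wm fields added in this patch; the real skl_plane_wm_level()
introduced in patch 1/6 takes the crtc state, plane id, level and color
plane instead:

/*
 * Sketch only: pick the SAGV flavour of the level 0 watermark when the
 * crtc runs with SAGV, otherwise the regular wm[]/uv_wm[] entry.
 */
static const struct skl_wm_level *
sk_plane_wm_level(const struct skl_plane_wm *wm, int level,
		  bool use_sagv_wm, int color_plane)
{
	if (level == 0 && use_sagv_wm)
		return color_plane ? &wm->uv_sagv_wm0 : &wm->sagv_wm0;

	return color_plane ? &wm->uv_wm[level] : &wm->wm[level];
}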

v2: Remove long lines
v3: Removed COLOR_PLANE enum references
v4, v5, v6: Fixed rebase conflict

Signed-off-by: Stanislav Lisovskiy 
---
 drivers/gpu/drm/i915/display/intel_display.c  |   8 +-
 .../drm/i915/display/intel_display_types.h|   3 +
 drivers/gpu/drm/i915/intel_pm.c   | 128 +-
 3 files changed, 130 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c 
b/drivers/gpu/drm/i915/display/intel_display.c
index fd6d63b03489..be5741cb7595 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -13961,7 +13961,9 @@ static void verify_wm_state(struct intel_crtc *crtc,
/* Watermarks */
for (level = 0; level <= max_level; level++) {
if (skl_wm_level_equals(&hw_plane_wm->wm[level],
-   &sw_plane_wm->wm[level]))
+   &sw_plane_wm->wm[level]) ||
+   (level == 0 && skl_wm_level_equals(&hw_plane_wm->wm[level],
+  &sw_plane_wm->sagv_wm0)))
continue;
 
drm_err(&dev_priv->drm,
@@ -14016,7 +14018,9 @@ static void verify_wm_state(struct intel_crtc *crtc,
/* Watermarks */
for (level = 0; level <= max_level; level++) {
if (skl_wm_level_equals(&hw_plane_wm->wm[level],
-   &sw_plane_wm->wm[level]))
+   &sw_plane_wm->wm[level]) ||
+   (level == 0 && skl_wm_level_equals(&hw_plane_wm->wm[level],
+  &sw_plane_wm->sagv_wm0)))
continue;
 
drm_err(&dev_priv->drm,
diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h 
b/drivers/gpu/drm/i915/display/intel_display_types.h
index 9488449e4b94..32cbbf7dddc6 100644
--- a/drivers/gpu/drm/i915/display/intel_display_types.h
+++ b/drivers/gpu/drm/i915/display/intel_display_types.h
@@ -688,11 +688,14 @@ struct skl_plane_wm {
struct skl_wm_level wm[8];
struct skl_wm_level uv_wm[8];
struct skl_wm_level trans_wm;
+   struct skl_wm_level sagv_wm0;
+   struct skl_wm_level uv_sagv_wm0;
bool is_planar;
 };
 
 struct skl_pipe_wm {
struct skl_plane_wm planes[I915_MAX_PLANES];
+   bool can_sagv;
 };
 
 enum vlv_wm_level {
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index c7d726a656b2..1b9925b6672c 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3871,6 +3871,9 @@ static bool icl_crtc_can_enable_sagv(const struct 
intel_crtc_state *crtc_state)
return intel_crtc_can_enable_sagv(crtc_state);
 }
 
+static bool
+tgl_crtc_can_enable_sagv(const struct intel_crtc_state *crtc_state);
+
 bool intel_can_enable_sagv(const struct intel_bw_state *bw_state)
 {
if (bw_state->active_pipes && !is_power_of_2(bw_state->active_pipes))
@@ -3884,7 +3887,7 @@ static int intel_compute_sagv_mask(struct 
intel_atomic_state *state)
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
int ret;
struct intel_crtc *crtc;
-   const struct intel_crtc_state *new_crtc_state;
+   struct intel_crtc_state *new_crtc_state;
struct intel_bw_state *new_bw_state = NULL;
const struct intel_bw_state *old_bw_state = NULL;
int i;
@@ -3899,7 +3902,9 @@ static int intel_compute_sagv_mask(struct 
intel_atomic_state *state)
 
old_bw_state = intel_atomic_get_old_bw_state(state);
 
-   if (INTEL_GEN(dev_priv) >= 11)
+   if (INTEL_GEN(dev_priv) >= 12)
+   can_sagv = tgl_crtc_can_enable_sagv(new_crtc_state);
+   else if (INTEL_GEN(dev_priv) >= 11)
can_sagv = icl_crtc_can_enable_sagv(new_crtc_state);
else
can_sagv = skl_crtc_can_enable_sagv(new_crtc_state);
@@ -3921,6 +3926,24 @@ static int intel_compute_sagv_mask(struct 
intel_atomic_state *state)
return ret;
}
 
+   for_each_new_intel_crtc_in_state(state, crtc,
+new_crtc_state, i) {
+   struct skl_pipe_wm *pipe_wm = &new_crtc_state->wm.skl.optimal;
+
+   /*
+* Due to drm limitation at commit state, when
+* changes are written the whole atomic state is
+* zeroed away => which prevents from using it,
+* so just sticking it into pipe wm state for
+* keeping it simple - anyway this is rel

[Intel-gfx] [PATCH v27 2/6] drm/i915: Separate icl and skl SAGV checking

2020-05-05 Thread Stanislav Lisovskiy
Introduce platform dependent SAGV checking in
combination with bandwidth state pipe SAGV mask.

v2, v3, v4, v5, v6: Fix rebase conflict

Signed-off-by: Stanislav Lisovskiy 
---
 drivers/gpu/drm/i915/intel_pm.c | 30 --
 1 file changed, 28 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index da567fac7c93..c7d726a656b2 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3853,6 +3853,24 @@ static bool intel_crtc_can_enable_sagv(const struct 
intel_crtc_state *crtc_state
return true;
 }
 
+static bool skl_crtc_can_enable_sagv(const struct intel_crtc_state *crtc_state)
+{
+   struct intel_atomic_state *state = 
to_intel_atomic_state(crtc_state->uapi.state);
+   /*
+* SKL+ workaround: bspec recommends we disable SAGV when we have
+* more than one pipe enabled
+*/
+   if (hweight8(state->active_pipes) > 1)
+   return false;
+
+   return intel_crtc_can_enable_sagv(crtc_state);
+}
+
+static bool icl_crtc_can_enable_sagv(const struct intel_crtc_state *crtc_state)
+{
+   return intel_crtc_can_enable_sagv(crtc_state);
+}
+
 bool intel_can_enable_sagv(const struct intel_bw_state *bw_state)
 {
if (bw_state->active_pipes && !is_power_of_2(bw_state->active_pipes))
@@ -3863,22 +3881,30 @@ bool intel_can_enable_sagv(const struct intel_bw_state 
*bw_state)
 
 static int intel_compute_sagv_mask(struct intel_atomic_state *state)
 {
+   struct drm_i915_private *dev_priv = to_i915(state->base.dev);
int ret;
struct intel_crtc *crtc;
-   struct intel_crtc_state *new_crtc_state;
+   const struct intel_crtc_state *new_crtc_state;
struct intel_bw_state *new_bw_state = NULL;
const struct intel_bw_state *old_bw_state = NULL;
int i;
 
for_each_new_intel_crtc_in_state(state, crtc,
 new_crtc_state, i) {
+   bool can_sagv;
+
new_bw_state = intel_atomic_get_bw_state(state);
if (IS_ERR(new_bw_state))
return PTR_ERR(new_bw_state);
 
old_bw_state = intel_atomic_get_old_bw_state(state);
 
-   if (intel_crtc_can_enable_sagv(new_crtc_state))
+   if (INTEL_GEN(dev_priv) >= 11)
+   can_sagv = icl_crtc_can_enable_sagv(new_crtc_state);
+   else
+   can_sagv = skl_crtc_can_enable_sagv(new_crtc_state);
+
+   if (can_sagv)
new_bw_state->pipe_sagv_reject &= ~BIT(crtc->pipe);
else
new_bw_state->pipe_sagv_reject |= BIT(crtc->pipe);
-- 
2.24.1.485.gad05a3d8e5

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v27 5/6] drm/i915: Restrict qgv points which don't have enough bandwidth.

2020-05-05 Thread Stanislav Lisovskiy
According to BSpec 53998, we should try to
restrict qgv points, which can't provide
enough bandwidth for desired display configuration.

Currently we are just comparing against all of
those and take minimum(worst case).

v2: Fixed wrong PCode reply mask, removed hardcoded
values.

v3: Forbid simultaneous legacy SAGV PCode requests and
restricting qgv points. Put the actual restriction
to commit function, added serialization(thanks to Ville)
to prevent commit being applied out of order in case of
nonblocking and/or nomodeset commits.

v4:
- Minor code refactoring, fixed few typos(thanks to James Ausmus)
- Change the naming of qgv point
  masking/unmasking functions(James Ausmus).
- Simplify the masking/unmasking operation itself,
  as we don't need to mask only single point per request(James Ausmus)
- Reject and stick to highest bandwidth point if SAGV
  can't be enabled(BSpec)

v5:
- Add new mailbox reply codes, which seems to happen during boot
  time for TGL and indicate that QGV setting is not yet available.

v6:
- Increase number of supported QGV points to be in sync with BSpec.

v7: - Rebased and resolved conflict to fix build failure.
- Fix NUM_QGV_POINTS to 8 and moved that to header file(James Ausmus)

v8: - Don't report an error if we can't restrict qgv points, as SAGV
  can be disabled by BIOS, which is completely legal. So don't
  make CI panic. Instead if we detect that there is only 1 QGV
  point accessible just analyze if we can fit the required bandwidth
  requirements, but no need in restricting.

v9: - Fix wrong QGV transition if we have 0 planes and no SAGV
  simultaneously.

v10: - Fix CDCLK corruption, because of global state getting serialized
   without modeset, which caused copying of non-calculated cdclk
   to be copied to dev_priv(thanks to Ville for the hint).

v11: - Remove unneeded headers and spaces(Matthew Roper)
 - Remove unneeded intel_qgv_info qi struct from bw check and zero
   out the needed one(Matthew Roper)
 - Changed QGV error message to have more clear meaning(Matthew Roper)
 - Use state->modeset_set instead of any_ms(Matthew Roper)
 - Moved NUM_SAGV_POINTS from i915_reg.h to i915_drv.h where it's used
 - Keep using crtc_state->hw.active instead of .enable(Matthew Roper)
 - Moved unrelated changes to other patch(using latency as parameter
   for plane wm calculation, moved to SAGV refactoring patch)

v12: - Fix rebase conflict with own temporary SAGV/QGV fix.
 - Remove unnecessary mask being zero check when unmasking
   qgv points as this is completely legal(Matt Roper)
 - Check if we are setting the same mask as already being set
   in hardware to prevent error from PCode.
 - Fix error message when restricting/unrestricting qgv points
   to "mask/unmask" which sounds more accurate(Matt Roper)
 - Move sagv status setting to icl_get_bw_info from atomic check
   as this should be calculated only once.(Matt Roper)
 - Edited comments for the case when we can't enable SAGV and
   use only 1 QGV point with highest bandwidth to be more
   understandable.(Matt Roper)

v13: - Moved max_data_rate in bw check to closer scope(Ville Syrjälä)
 - Changed comment for zero new_mask in qgv points masking function
   to better reflect reality(Ville Syrjälä)
 - Simplified bit mask operation in qgv points masking function
   (Ville Syrjälä)
 - Moved intel_qgv_points_mask closer to gen11 SAGV disabling,
   however this still can't be under modeset condition(Ville Syrjälä)
 - Packed qgv_points_mask as u8 and moved closer to pipe_sagv_mask
   (Ville Syrjälä)
 - Extracted PCode changes to separate patch.(Ville Syrjälä)
 - Now treat num_planes 0 same as 1 to avoid confusion and
   returning max_bw as 0, which would prevent choosing QGV
   point having max bandwidth in case if SAGV is not allowed,
   as per BSpec(Ville Syrjälä)
 - Do the actual qgv_points_mask swap in the same place as
   all other global state parts like cdclk are swapped.
   In the next patch, this all will be moved to bw state as
   global state, once new global state patch series from Ville
   lands

v14: - Now using global state to serialize access to qgv points
 - Added global state locking back, otherwise we seem to read
   bw state in a wrong way.

v15: - Added TODO comment for near atomic global state locking in
   bw code.

v16: - Fixed intel_atomic_bw_* functions to be intel_bw_* as discussed
   with Jani Nikula.
 - Take bw_state_changed flag into use.

v17: - Moved qgv point related manipulations next to SAGV code, as
   those are semantically related(Ville Syrjälä)
 - Renamed those into intel_sagv_(pre)|(post)_plane_update
   (Ville Syrjälä)

v18: - Move sagv related calls from commit tail into
   intel_sagv_(pre)|(post)_plane_update(Ville Syr

Re: [Intel-gfx] [CI] drm/i915/gt: Stop holding onto the pinned_default_state

2020-05-05 Thread Chris Wilson
Quoting Chris Wilson (2020-05-05 10:21:46)
> Quoting Mika Kuoppala (2020-05-05 10:12:49)
> > > @@ -4166,8 +4163,6 @@ static void __execlists_reset(struct 
> > > intel_engine_cs *engine, bool stalled)
> > >* image back to the expected values to skip over the guilty 
> > > request.
> > >*/
> > >   __i915_request_reset(rq, stalled);
> > > - if (!stalled)
> > > - goto out_replay;
> > 
> > Why the change how to handle stalled?
> 
> The protocontext is only sufficient to recover a hung context. If we are
> resetting an innocent context, we need it to retain its register state.
> 
> stalled == guilty => really hung, we only replay for the breadcrumbs
> !stalled == innocent, global reset => we need to try and recover the
> context exactly.
> 
> The secret is that if we declare innocence too early, we kill it with
> fire in a second pass.

The real secret is that the protocontext is now applied later, when the
context is banned. And this change was made because the two paths are not
different at this point.
-Chris
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [CI] drm/i915/gt: Stop holding onto the pinned_default_state

2020-05-05 Thread Chris Wilson
Quoting Mika Kuoppala (2020-05-05 10:12:49)
> > @@ -4166,8 +4163,6 @@ static void __execlists_reset(struct intel_engine_cs 
> > *engine, bool stalled)
> >* image back to the expected values to skip over the guilty request.
> >*/
> >   __i915_request_reset(rq, stalled);
> > - if (!stalled)
> > - goto out_replay;
> 
> Why the change how to handle stalled?

The protocontext is only sufficient to recover a hung context. If we are
resetting an innocent context, we need it to retain its register state.

stalled == guilty => really hung, we only replay for the breadcrumbs
!stalled == innocent, global reset => we need to try and recover the
context exactly.

The secret is that if we declare innocence too early, we kill it with
fire in a second pass.
-Chris
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915/tgl+: Fix interrupt handling for DP AUX transactions

2020-05-05 Thread Imre Deak
On Mon, May 04, 2020 at 09:41:16PM +, Patchwork wrote:
> == Series Details ==
> 
> Series: drm/i915/tgl+: Fix interrupt handling for DP AUX transactions
> URL   : https://patchwork.freedesktop.org/series/76892/
> State : success

Pushed to -dinq, thanks for the review and re-reporting.

> 
> == Summary ==
> 
> CI Bug Log - changes from CI_DRM_8416_full -> Patchwork_17564_full
> 
> 
> Summary
> ---
> 
>   **SUCCESS**
> 
>   No regressions found.
> 
>   
> 
> Known issues
> 
> 
>   Here are the changes found in Patchwork_17564_full that come from known 
> issues:
> 
> ### IGT changes ###
> 
>  Issues hit 
> 
>   * igt@gem_exec_flush@basic-wb-ro-before-default:
> - shard-hsw:  [PASS][1] -> [INCOMPLETE][2] ([i915#61])
>[1]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8416/shard-hsw6/igt@gem_exec_fl...@basic-wb-ro-before-default.html
>[2]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17564/shard-hsw1/igt@gem_exec_fl...@basic-wb-ro-before-default.html
> 
>   * igt@gem_workarounds@suspend-resume-fd:
> - shard-apl:  [PASS][3] -> [DMESG-WARN][4] ([i915#180])
>[3]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8416/shard-apl4/igt@gem_workarou...@suspend-resume-fd.html
>[4]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17564/shard-apl6/igt@gem_workarou...@suspend-resume-fd.html
> 
>   * igt@kms_atomic_transition@plane-toggle-modeset-transition:
> - shard-apl:  [PASS][5] -> [INCOMPLETE][6] ([CI#80])
>[5]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8416/shard-apl8/igt@kms_atomic_transit...@plane-toggle-modeset-transition.html
>[6]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17564/shard-apl8/igt@kms_atomic_transit...@plane-toggle-modeset-transition.html
> 
>   * igt@kms_cursor_crc@pipe-c-cursor-64x21-onscreen:
> - shard-glk:  [PASS][7] -> [FAIL][8] ([i915#54])
>[7]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8416/shard-glk8/igt@kms_cursor_...@pipe-c-cursor-64x21-onscreen.html
>[8]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17564/shard-glk2/igt@kms_cursor_...@pipe-c-cursor-64x21-onscreen.html
> 
>   * igt@kms_cursor_edge_walk@pipe-a-256x256-bottom-edge:
> - shard-apl:  [PASS][9] -> [FAIL][10] ([i915#70] / [i915#95])
>[9]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8416/shard-apl3/igt@kms_cursor_edge_w...@pipe-a-256x256-bottom-edge.html
>[10]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17564/shard-apl4/igt@kms_cursor_edge_w...@pipe-a-256x256-bottom-edge.html
> 
>   * igt@kms_flip_tiling@flip-changes-tiling-y:
> - shard-apl:  [PASS][11] -> [FAIL][12] ([i915#95])
>[11]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8416/shard-apl1/igt@kms_flip_til...@flip-changes-tiling-y.html
>[12]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17564/shard-apl3/igt@kms_flip_til...@flip-changes-tiling-y.html
> - shard-kbl:  [PASS][13] -> [FAIL][14] ([i915#699] / [i915#93] / 
> [i915#95])
>[13]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8416/shard-kbl1/igt@kms_flip_til...@flip-changes-tiling-y.html
>[14]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17564/shard-kbl6/igt@kms_flip_til...@flip-changes-tiling-y.html
> 
>   * igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-indfb-draw-render:
> - shard-skl:  [PASS][15] -> [FAIL][16] ([i915#49])
>[15]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8416/shard-skl9/igt@kms_frontbuffer_track...@psr-1p-primscrn-pri-indfb-draw-render.html
>[16]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17564/shard-skl8/igt@kms_frontbuffer_track...@psr-1p-primscrn-pri-indfb-draw-render.html
> 
>   * igt@kms_pipe_crc_basic@hang-read-crc-pipe-a:
> - shard-snb:  [PASS][17] -> [SKIP][18] ([fdo#109271]) +3 similar 
> issues
>[17]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8416/shard-snb6/igt@kms_pipe_crc_ba...@hang-read-crc-pipe-a.html
>[18]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17564/shard-snb2/igt@kms_pipe_crc_ba...@hang-read-crc-pipe-a.html
> 
>   * igt@kms_plane_alpha_blend@pipe-a-coverage-7efc:
> - shard-skl:  [PASS][19] -> [FAIL][20] ([fdo#108145] / [i915#265])
>[19]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8416/shard-skl9/igt@kms_plane_alpha_bl...@pipe-a-coverage-7efc.html
>[20]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17564/shard-skl8/igt@kms_plane_alpha_bl...@pipe-a-coverage-7efc.html
> 
>   * igt@kms_psr@psr2_primary_mmap_gtt:
> - shard-iclb: [PASS][21] -> [SKIP][22] ([fdo#109441]) +1 similar 
> issue
>[21]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8416/shard-iclb2/igt@kms_psr@psr2_primary_mmap_gtt.html
>[22]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17564/shard-iclb5/igt@kms_psr@psr2_primary_mmap_gtt.html
> 
>   

Re: [Intel-gfx] [CI] drm/i915/gt: Stop holding onto the pinned_default_state

2020-05-05 Thread Mika Kuoppala
Chris Wilson  writes:

> As we only restore the default context state upon banning a context, we
> only need enough of the state to run the ring and nothing more. That is
> we only need our bare protocontext.
>
> Signed-off-by: Chris Wilson 
> Cc: Tvrtko Ursulin 
> Cc: Mika Kuoppala 
> Cc: Andi Shyti 
> ---
>  drivers/gpu/drm/i915/gt/intel_engine_pm.c| 14 +-
>  drivers/gpu/drm/i915/gt/intel_engine_types.h |  1 -
>  drivers/gpu/drm/i915/gt/intel_lrc.c  | 14 ++
>  drivers/gpu/drm/i915/gt/selftest_context.c   | 11 ++--
>  drivers/gpu/drm/i915/gt/selftest_lrc.c   | 53 +++-
>  5 files changed, 47 insertions(+), 46 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c 
> b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
> index 811debefebc0..d0a1078ef632 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
> @@ -21,18 +21,11 @@ static int __engine_unpark(struct intel_wakeref *wf)
>   struct intel_engine_cs *engine =
>   container_of(wf, typeof(*engine), wakeref);
>   struct intel_context *ce;
> - void *map;
>  
>   ENGINE_TRACE(engine, "\n");
>  
>   intel_gt_pm_get(engine->gt);
>  
> - /* Pin the default state for fast resets from atomic context. */
> - map = NULL;
> - if (engine->default_state)
> - map = shmem_pin_map(engine->default_state);
> - engine->pinned_default_state = map;
> -
>   /* Discard stale context state from across idling */
>   ce = engine->kernel_context;
>   if (ce) {
> @@ -42,6 +35,7 @@ static int __engine_unpark(struct intel_wakeref *wf)
>   if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM) && ce->state) {
>   struct drm_i915_gem_object *obj = ce->state->obj;
>   int type = i915_coherent_map_type(engine->i915);
> + void *map;
>  
>   map = i915_gem_object_pin_map(obj, type);
>   if (!IS_ERR(map)) {
> @@ -260,12 +254,6 @@ static int __engine_park(struct intel_wakeref *wf)
>   if (engine->park)
>   engine->park(engine);
>  
> - if (engine->pinned_default_state) {
> - shmem_unpin_map(engine->default_state,
> - engine->pinned_default_state);
> - engine->pinned_default_state = NULL;
> - }
> -
>   engine->execlists.no_priolist = false;
>  
>   /* While gt calls i915_vma_parked(), we have to break the lock cycle */
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h 
> b/drivers/gpu/drm/i915/gt/intel_engine_types.h
> index 6c676774dcd9..c84525363bb7 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
> @@ -339,7 +339,6 @@ struct intel_engine_cs {
>   unsigned long wakeref_serial;
>   struct intel_wakeref wakeref;
>   struct file *default_state;
> - void *pinned_default_state;
>  
>   struct {
>   struct intel_ring *ring;
> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c 
> b/drivers/gpu/drm/i915/gt/intel_lrc.c
> index d4ef344657b0..100ed0fce2e2 100644
> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> @@ -1271,14 +1271,11 @@ execlists_check_context(const struct intel_context 
> *ce,
>  static void restore_default_state(struct intel_context *ce,
> struct intel_engine_cs *engine)
>  {
> - u32 *regs = ce->lrc_reg_state;
> + u32 *regs;
>  
> - if (engine->pinned_default_state)
> - memcpy(regs, /* skip restoring the vanilla PPHWSP */
> -engine->pinned_default_state + LRC_STATE_OFFSET,
> -engine->context_size - PAGE_SIZE);
> + regs = memset(ce->lrc_reg_state, 0, engine->context_size - PAGE_SIZE);
> + execlists_init_reg_state(regs, ce, engine, ce->ring, true);
>  
> - execlists_init_reg_state(regs, ce, engine, ce->ring, false);
>   ce->runtime.last = intel_context_get_runtime(ce);
>  }
>  
> @@ -4166,8 +4163,6 @@ static void __execlists_reset(struct intel_engine_cs 
> *engine, bool stalled)
>* image back to the expected values to skip over the guilty request.
>*/
>   __i915_request_reset(rq, stalled);
> - if (!stalled)
> - goto out_replay;

Why the change how to handle stalled?

-Mika


>  
>   /*
>* We want a simple context + ring to execute the breadcrumb update.
> @@ -4177,9 +4172,6 @@ static void __execlists_reset(struct intel_engine_cs 
> *engine, bool stalled)
>* future request will be after userspace has had the opportunity
>* to recreate its own state.
>*/
> - GEM_BUG_ON(!intel_context_is_pinned(ce));
> - restore_default_state(ce, engine);
> -
>  out_replay:
>   ENGINE_TRACE(engine, "replay {head:%04x, tail:%04x}\n",
>head, ce->ring->tail);
> diff --git a/drivers/gpu/drm/i915/gt/selftes

Re: [Intel-gfx] [PATCH] drm: Replace drm_modeset_lock/unlock_all with DRM_MODESET_LOCK_ALL_* helpers

2020-05-05 Thread Daniel Vetter
On Tue, May 05, 2020 at 07:55:00AM +0200, Michał Orzeł wrote:
> 
> 
> On 04.05.2020 13:53, Daniel Vetter wrote:
> > On Fri, May 01, 2020 at 05:49:33PM +0200, Michał Orzeł wrote:
> >>
> >>
> >> On 30.04.2020 20:30, Daniel Vetter wrote:
> >>> On Thu, Apr 30, 2020 at 5:38 PM Sean Paul  wrote:
> 
>  On Wed, Apr 29, 2020 at 4:57 AM Jani Nikula 
>   wrote:
> >
> > On Tue, 28 Apr 2020, Michal Orzel  wrote:
> >> As suggested by the TODO list for the kernel DRM subsystem, replace
> >> the deprecated functions that take/drop modeset locks with new helpers.
> >>
> >> Signed-off-by: Michal Orzel 
> >> ---
> >>  drivers/gpu/drm/drm_mode_object.c | 10 ++
> >>  1 file changed, 6 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/drm_mode_object.c 
> >> b/drivers/gpu/drm/drm_mode_object.c
> >> index 35c2719..901b078 100644
> >> --- a/drivers/gpu/drm/drm_mode_object.c
> >> +++ b/drivers/gpu/drm/drm_mode_object.c
> >> @@ -402,12 +402,13 @@ int drm_mode_obj_get_properties_ioctl(struct 
> >> drm_device *dev, void *data,
> >>  {
> >>   struct drm_mode_obj_get_properties *arg = data;
> >>   struct drm_mode_object *obj;
> >> + struct drm_modeset_acquire_ctx ctx;
> >>   int ret = 0;
> >>
> >>   if (!drm_core_check_feature(dev, DRIVER_MODESET))
> >>   return -EOPNOTSUPP;
> >>
> >> - drm_modeset_lock_all(dev);
> >> + DRM_MODESET_LOCK_ALL_BEGIN(dev, ctx, 0, ret);
> >
> > I cry a little every time I look at the DRM_MODESET_LOCK_ALL_BEGIN and
> > DRM_MODESET_LOCK_ALL_END macros. :(
> >
> > Currently only six users... but there are ~60 calls to
> > drm_modeset_lock_all{,_ctx} that I presume are to be replaced. I wonder
> > if this will come back and haunt us.
> >
> 
>  What's the alternative? Seems like the options without the macros is
>  to use incorrect scope or have a bunch of retry/backoff cargo-cult
>  everywhere (and hope the copy source is done correctly).
> >>>
> >>> Yeah Sean & me had a bunch of bikesheds and this is the least worst
> >>> option we could come up with. You can't make it a function because of
> >>> the control flow. You don't want to open code this because it's tricky
> >>> to get right, if all you want is to just grab all locks. But it is
> >>> magic hidden behind a macro, which occasionally ends up hurting.
> >>> -Daniel
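
For reference, the retry/backoff pattern those macros wrap looks roughly
like the minimal sketch below, assuming only the standard
drm_modeset_acquire_ctx API (error handling trimmed):

static int sketch_with_all_locks(struct drm_device *dev)
{
	struct drm_modeset_acquire_ctx ctx;
	int ret;

	drm_modeset_acquire_init(&ctx, 0);
retry:
	ret = drm_modeset_lock_all_ctx(dev, &ctx);
	if (ret == -EDEADLK) {
		/* Another thread holds one of the locks; drop and retry. */
		drm_modeset_backoff(&ctx);
		goto retry;
	}

	if (!ret) {
		/* ... operate on modeset state while holding every lock ... */
	}

	drm_modeset_drop_locks(&ctx);
	drm_modeset_acquire_fini(&ctx);
	return ret;
}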
> >> So what are we doing with this problem? Should we replace at once approx. 
> >> 60 calls?
> > 
> > I'm confused by your question - gradual conversion is entirely orthogonal
> > to what exactly we're converting to. All I added here is that we've
> > discussed this at length, and the macro is the best thing we've come up
> > with. I still think it's the best compromise.
> > 
> > Flag-day conversion for over 60 calls doesn't work, no matter what.
> > -Daniel
> > 
> I agree with that. All I wanted to ask was whether I should add something 
> additional to this patch or not.

Patch looks good and passed CI, so I went ahead and applied it.

Thanks, Daniel

> 
> Thanks,
> Michal
> >>
> >> Michal
> >>>
>  Sean
> 
> > BR,
> > Jani.
> >
> >
> >>
> >>   obj = drm_mode_object_find(dev, file_priv, arg->obj_id, 
> >> arg->obj_type);
> >>   if (!obj) {
> >> @@ -427,7 +428,7 @@ int drm_mode_obj_get_properties_ioctl(struct 
> >> drm_device *dev, void *data,
> >>  out_unref:
> >>   drm_mode_object_put(obj);
> >>  out:
> >> - drm_modeset_unlock_all(dev);
> >> + DRM_MODESET_LOCK_ALL_END(ctx, ret);
> >>   return ret;
> >>  }
> >>
> >> @@ -449,12 +450,13 @@ static int set_property_legacy(struct 
> >> drm_mode_object *obj,
> >>  {
> >>   struct drm_device *dev = prop->dev;
> >>   struct drm_mode_object *ref;
> >> + struct drm_modeset_acquire_ctx ctx;
> >>   int ret = -EINVAL;
> >>
> >>   if (!drm_property_change_valid_get(prop, prop_value, &ref))
> >>   return -EINVAL;
> >>
> >> - drm_modeset_lock_all(dev);
> >> + DRM_MODESET_LOCK_ALL_BEGIN(dev, ctx, 0, ret);
> >>   switch (obj->type) {
> >>   case DRM_MODE_OBJECT_CONNECTOR:
> >>   ret = drm_connector_set_obj_prop(obj, prop, prop_value);
> >> @@ -468,7 +470,7 @@ static int set_property_legacy(struct 
> >> drm_mode_object *obj,
> >>   break;
> >>   }
> >>   drm_property_change_valid_put(prop, ref);
> >> - drm_modeset_unlock_all(dev);
> >> + DRM_MODESET_LOCK_ALL_END(ctx, ret);
> >>
> >>   return ret;
> >>  }
> >
> > --
> > Jani Nikula, Intel Open Source Graphics Center
> >>>
> >>>
> >>>
> > 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
___
Intel-g

Re: [Intel-gfx] ✗ Fi.CI.BUILD: failure for SAGV support for Gen12+ (rev34)

2020-05-05 Thread Lisovskiy, Stanislav
As patches 2, 3 and 7 were already pushed, I can't now send individual patches,
because it fails when applying the same patch twice.
So I will _have to_ resend the whole series again.

Best Regards,

Lisovskiy Stanislav

Organization: Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo


From: Patchwork 
Sent: Tuesday, May 5, 2020 11:11:06 AM
To: Lisovskiy, Stanislav
Cc: intel-gfx@lists.freedesktop.org
Subject: ✗ Fi.CI.BUILD: failure for SAGV support for Gen12+ (rev34)

== Series Details ==

Series: SAGV support for Gen12+ (rev34)
URL   : https://patchwork.freedesktop.org/series/75129/
State : failure

== Summary ==

Applying: drm/i915: Introduce skl_plane_wm_level accessor.
Applying: drm/i915: Use bw state for per crtc SAGV evaluation
Using index info to reconstruct a base tree...
M   drivers/gpu/drm/i915/display/intel_bw.h
M   drivers/gpu/drm/i915/intel_pm.c
M   drivers/gpu/drm/i915/intel_pm.h
Falling back to patching base and 3-way merge...
Auto-merging drivers/gpu/drm/i915/intel_pm.c
CONFLICT (content): Merge conflict in drivers/gpu/drm/i915/intel_pm.c
Auto-merging drivers/gpu/drm/i915/display/intel_bw.h
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0002 drm/i915: Use bw state for per crtc SAGV evaluation
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH 4/9] drm/i915/gen12: Flush L3

2020-05-05 Thread Mika Kuoppala
Chris Wilson  writes:

> Quoting Mika Kuoppala (2020-04-30 16:47:30)
>> Flush TDL and L3.
>> 
>> Signed-off-by: Mika Kuoppala 
>
> That's very misnamed bit!
>
> There's a comment that this must be paired with the corresponding pc in
> the same HW dispatch.

Not for gen12.
-Mika
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [CI] drm/i915/execlists: Record the active CCID from before reset

2020-05-05 Thread Chris Wilson
If we cannot trust the reset will flush out the CS event queue such that
process_csb() reports an accurate view of HW, we will need to search the
active and pending contexts to determine which was actually running at
the time we issued the reset.
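
A purely illustrative sketch of that search, written with stand-in types
rather than the driver's real structures (the series' actual helpers are
active_ccid() and active_context(), visible in the diff below):

/* Stand-in types for the sketch; not the driver's structures. */
struct sk_request { unsigned int ccid; };
struct sk_execlists {
	struct sk_request *active[4];	/* HW-acknowledged ports, NULL terminated */
	struct sk_request *pending[4];	/* submitted but unacknowledged, NULL terminated */
};

/*
 * Match the CCID sampled from EXECLISTS_STATUS_HI at reset_prepare time
 * against the contexts we know about: active ports first, then pending.
 */
static struct sk_request *
sk_active_context(struct sk_execlists *el, unsigned int reset_ccid)
{
	struct sk_request **port;

	for (port = el->active; *port; port++)
		if ((*port)->ccid == reset_ccid)
			return *port;

	for (port = el->pending; *port; port++)
		if ((*port)->ccid == reset_ccid)
			return *port;

	return NULL;
}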

Signed-off-by: Chris Wilson 
Reviewed-by: Mika Kuoppala 
---
 drivers/gpu/drm/i915/gt/intel_engine_types.h | 5 +
 drivers/gpu/drm/i915/gt/intel_lrc.c  | 4 +++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h 
b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index 6c676774dcd9..b1048f039552 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -179,6 +179,11 @@ struct intel_engine_execlists {
 */
u32 error_interrupt;
 
+   /**
+* @reset_ccid: Active CCID [EXECLISTS_STATUS_HI] at the time of reset
+*/
+   u32 reset_ccid;
+
/**
 * @no_priolist: priority lists disabled
 */
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c 
b/drivers/gpu/drm/i915/gt/intel_lrc.c
index c00366387b54..3ff81c89fe01 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -4074,6 +4074,8 @@ static void execlists_reset_prepare(struct 
intel_engine_cs *engine)
 */
ring_set_paused(engine, 1);
intel_engine_stop_cs(engine);
+
+   engine->execlists.reset_ccid = active_ccid(engine);
 }
 
 static void __reset_stop_ring(u32 *regs, const struct intel_engine_cs *engine)
@@ -4116,7 +4118,7 @@ static void __execlists_reset(struct intel_engine_cs 
*engine, bool stalled)
 * its request, it was still running at the time of the
 * reset and will have been clobbered.
 */
-   rq = execlists_active(execlists);
+   rq = active_context(engine, engine->execlists.reset_ccid);
if (!rq)
goto unwind;
 
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v3 07/25] drm: i915: fix common struct sg_table related issues

2020-05-05 Thread Marek Szyprowski
The Documentation/DMA-API-HOWTO.txt states that dma_map_sg returns the
number of the created entries in the DMA address space. However the
subsequent calls to dma_sync_sg_for_{device,cpu} and dma_unmap_sg must be
called with the original number of the entries passed to dma_map_sg. The
sg_table->nents in turn holds the result of the dma_map_sg call as stated
in include/linux/scatterlist.h.

This driver creatively uses sg_table->orig_nents to store the size of the
allocated scatterlist and ignores the number of the entries returned by
the dma_map_sg function. The sg_table->orig_nents is (mis)used to properly
free the (over)allocated scatterlist.

This patch only introduces common dma-mapping wrappers operating directly
on the struct sg_table objects to the dmabuf related functions, so the
other drivers, which might share buffers with i915 could rely on the
properly set nents and orig_nents values.
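
A minimal sketch of the convention being described, with the bookkeeping
spelled out by hand (kernel context assumed; the sgtable-style wrappers
used by the patch do this internally):

/* Sketch: map an sg_table and keep nents/orig_nents consistent. */
static int sketch_map_sgtable(struct device *dev, struct sg_table *st,
			      enum dma_data_direction dir)
{
	int count;

	/* Map all CPU-side entries; dma_map_sg() may merge them. */
	count = dma_map_sg(dev, st->sgl, st->orig_nents, dir);
	if (!count)
		return -ENOMEM;

	st->nents = count;	/* entries actually created in DMA space */
	return 0;
}

static void sketch_unmap_sgtable(struct device *dev, struct sg_table *st,
				 enum dma_data_direction dir)
{
	/* Unmap (and sync) must use the original entry count, not st->nents. */
	dma_unmap_sg(dev, st->sgl, st->orig_nents, dir);
}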

Signed-off-by: Marek Szyprowski 
---
For more information, see '[PATCH v3 00/25] DRM: fix struct sg_table nents
vs. orig_nents misuse' thread: https://lkml.org/lkml/2020/5/5/187
---
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c   | 13 +
 drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c |  7 +++
 2 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c 
b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 7db5a79..7e8583e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -48,12 +48,10 @@ static struct sg_table *i915_gem_map_dma_buf(struct 
dma_buf_attachment *attachme
src = sg_next(src);
}
 
-   if (!dma_map_sg_attrs(attachment->dev,
- st->sgl, st->nents, dir,
- DMA_ATTR_SKIP_CPU_SYNC)) {
-   ret = -ENOMEM;
+   ret = dma_map_sgtable_attrs(attachment->dev, st, dir,
+   DMA_ATTR_SKIP_CPU_SYNC);
+   if (ret)
goto err_free_sg;
-   }
 
return st;
 
@@ -73,9 +71,8 @@ static void i915_gem_unmap_dma_buf(struct dma_buf_attachment 
*attachment,
 {
struct drm_i915_gem_object *obj = dma_buf_to_obj(attachment->dmabuf);
 
-   dma_unmap_sg_attrs(attachment->dev,
-  sg->sgl, sg->nents, dir,
-  DMA_ATTR_SKIP_CPU_SYNC);
+   dma_unmap_sgtable_attrs(attachment->dev, sg, dir,
+   DMA_ATTR_SKIP_CPU_SYNC);
sg_free_table(sg);
kfree(sg);
 
diff --git a/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c 
b/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c
index debaf7b..756cb76 100644
--- a/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c
@@ -28,10 +28,9 @@ static struct sg_table *mock_map_dma_buf(struct 
dma_buf_attachment *attachment,
sg = sg_next(sg);
}
 
-   if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir)) {
-   err = -ENOMEM;
+   err = dma_map_sgtable(attachment->dev, st, dir);
+   if (err)
goto err_st;
-   }
 
return st;
 
@@ -46,7 +45,7 @@ static void mock_unmap_dma_buf(struct dma_buf_attachment 
*attachment,
   struct sg_table *st,
   enum dma_data_direction dir)
 {
-   dma_unmap_sg(attachment->dev, st->sgl, st->nents, dir);
+   dma_unmap_sgtable(attachment->dev, st, dir);
sg_free_table(st);
kfree(st);
 }
-- 
1.9.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915/gt: Stop holding onto the pinned_default_state (rev2)

2020-05-05 Thread Patchwork
== Series Details ==

Series: drm/i915/gt: Stop holding onto the pinned_default_state (rev2)
URL   : https://patchwork.freedesktop.org/series/76738/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8422_full -> Patchwork_17574_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Known issues


  Here are the changes found in Patchwork_17574_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@gem_exec_suspend@basic-s3:
- shard-apl:  [PASS][1] -> [DMESG-WARN][2] ([i915#180])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-apl7/igt@gem_exec_susp...@basic-s3.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17574/shard-apl4/igt@gem_exec_susp...@basic-s3.html

  * igt@gem_exec_whisper@basic-queues-forked:
- shard-kbl:  [PASS][3] -> [FAIL][4] ([i915#1479] / [i915#1772])
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-kbl1/igt@gem_exec_whis...@basic-queues-forked.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17574/shard-kbl1/igt@gem_exec_whis...@basic-queues-forked.html

  * igt@gen9_exec_parse@allowed-all:
- shard-skl:  [PASS][5] -> [DMESG-WARN][6] ([i915#716])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-skl9/igt@gen9_exec_pa...@allowed-all.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17574/shard-skl10/igt@gen9_exec_pa...@allowed-all.html

  * igt@kms_cursor_crc@pipe-c-cursor-suspend:
- shard-kbl:  [PASS][7] -> [DMESG-WARN][8] ([i915#180]) +2 similar 
issues
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-kbl7/igt@kms_cursor_...@pipe-c-cursor-suspend.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17574/shard-kbl4/igt@kms_cursor_...@pipe-c-cursor-suspend.html

  * igt@kms_flip_tiling@flip-changes-tiling-y:
- shard-apl:  [PASS][9] -> [FAIL][10] ([i915#95])
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-apl2/igt@kms_flip_til...@flip-changes-tiling-y.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17574/shard-apl6/igt@kms_flip_til...@flip-changes-tiling-y.html
- shard-kbl:  [PASS][11] -> [FAIL][12] ([i915#699] / [i915#93] / 
[i915#95])
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-kbl6/igt@kms_flip_til...@flip-changes-tiling-y.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17574/shard-kbl2/igt@kms_flip_til...@flip-changes-tiling-y.html

  * igt@kms_hdr@bpc-switch:
- shard-skl:  [PASS][13] -> [FAIL][14] ([i915#1188])
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-skl4/igt@kms_...@bpc-switch.html
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17574/shard-skl6/igt@kms_...@bpc-switch.html

  * igt@kms_psr@psr2_sprite_blt:
- shard-iclb: [PASS][15] -> [SKIP][16] ([fdo#109441]) +1 similar 
issue
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-iclb2/igt@kms_psr@psr2_sprite_blt.html
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17574/shard-iclb8/igt@kms_psr@psr2_sprite_blt.html

  
 Possible fixes 

  * {igt@gem_ctx_isolation@preservation-s3@bcs0}:
- shard-kbl:  [DMESG-WARN][17] ([i915#180]) -> [PASS][18] +2 
similar issues
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-kbl6/igt@gem_ctx_isolation@preservation...@bcs0.html
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17574/shard-kbl6/igt@gem_ctx_isolation@preservation...@bcs0.html
- shard-apl:  [DMESG-WARN][19] ([i915#180]) -> [PASS][20]
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-apl1/igt@gem_ctx_isolation@preservation...@bcs0.html
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17574/shard-apl2/igt@gem_ctx_isolation@preservation...@bcs0.html

  * igt@gem_exec_params@invalid-bsd-ring:
- shard-iclb: [SKIP][21] ([fdo#109276]) -> [PASS][22]
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-iclb6/igt@gem_exec_par...@invalid-bsd-ring.html
   [22]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17574/shard-iclb2/igt@gem_exec_par...@invalid-bsd-ring.html

  * igt@kms_cursor_crc@pipe-b-cursor-suspend:
- shard-skl:  [INCOMPLETE][23] ([i915#300]) -> [PASS][24]
   [23]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-skl7/igt@kms_cursor_...@pipe-b-cursor-suspend.html
   [24]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17574/shard-skl2/igt@kms_cursor_...@pipe-b-cursor-suspend.html

  * igt@kms_cursor_legacy@cursora-vs-flipb-toggle:
- shard-hsw:  [SKIP][25] ([fdo#109271]) -> [PASS][26]
   [25]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-hsw6/igt@kms_cursor_leg...@cursora-vs-flipb-toggle.html
   [26]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17574/sha

Re: [Intel-gfx] [PATCH v2 2/3] drm/i915: Setup MCR steering for RCS engine workarounds

2020-05-05 Thread Tvrtko Ursulin



On 05/05/2020 00:34, Matt Roper wrote:

On Mon, May 04, 2020 at 12:43:54PM +0100, Tvrtko Ursulin wrote:

On 02/05/2020 05:57, Matt Roper wrote:

Reads of multicast registers give the value associated with
slice/subslice 0 by default unless we manually steer the reads to a
different slice/subslice.  If slice/subslice 0 are fused off in hardware,
performing unsteered reads of multicast registers will return a value of
0 rather than the value we wrote into the multicast register.

To ensure we can properly readback and verify workarounds that touch
registers in a multicast range, we currently setup MCR steering to a
known-valid slice/subslice as the very first item in the GT workaround
list for gen10+.  That steering will then be in place as we verify the
rest of the registers that show up in the GT workaround list, and at
initialization the steering will also still be in effect when we move on
to applying and verifying the workarounds in the RCS engine's workaround
list (which is where most of the multicast registers actually show up).
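
Conceptually the steering both lists rely on looks like this rough sketch
(register name and steering value layout simplified and assumed; in the
driver this is done by wa_init_mcr() and friends):

/* Sketch only; the exact field layout of the steering value is omitted. */
#define SK_MCR_SELECTOR 0xfdc	/* multicast steering register discussed below */

static u32 sketch_read_steered(void __iomem *mmio, u32 reg, u32 steer)
{
	/* Point multicast reads at a slice/subslice that is not fused off. */
	writel(steer, mmio + SK_MCR_SELECTOR);

	/* Now the read returns that slice/subslice's value instead of 0. */
	return readl(mmio + reg);
}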

However we seem to run into problems during resets where RCS engine
workarounds are applied without being preceded by application of the GT
workaround list and the steering isn't in place.  Let's add the same MCR
steering to the beginning of the RCS engine's workaround list to ensure
that it's always in place and we don't get erroneous messages about RCS
engine workarounds failing to apply.

References: https://gitlab.freedesktop.org/drm/intel/issues/1222
Cc: Tvrtko Ursulin 
Cc: ch...@chris-wilson.co.uk
Signed-off-by: Matt Roper 
---
   drivers/gpu/drm/i915/gt/intel_workarounds.c | 3 +++
   1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c 
b/drivers/gpu/drm/i915/gt/intel_workarounds.c
index 4a255de13394..b11b83546696 100644
--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
@@ -1345,6 +1345,9 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct 
i915_wa_list *wal)
   {
struct drm_i915_private *i915 = engine->i915;
+   if (INTEL_GEN(i915) >= 10)
+   wa_init_mcr(i915, wal);
+
if (IS_TGL_REVID(i915, TGL_REVID_A0, TGL_REVID_A0)) {
/*
 * Wa_1607138336:tgl



No complaints, only a question - is live_engine_reset_workarounds able to
catch this, presumably sporadic, 0xfdc loss after engine reset?



From what I can see, it looks like that selftest uses a separate
ring-based approach to handling the workarounds rather than using the
CPU.  It looks like the selftest just skips all MCR registers, since we
can't steer ring accesses the way we can with the CPU.


But 0xfdc is verified after engine reset with an mmio read, because it is 
in the GT list. Strange... Ack on the series anyway.


Regards,

Tvrtko
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✗ Fi.CI.BUILD: failure for SAGV support for Gen12+ (rev34)

2020-05-05 Thread Patchwork
== Series Details ==

Series: SAGV support for Gen12+ (rev34)
URL   : https://patchwork.freedesktop.org/series/75129/
State : failure

== Summary ==

Applying: drm/i915: Introduce skl_plane_wm_level accessor.
Applying: drm/i915: Use bw state for per crtc SAGV evaluation
Using index info to reconstruct a base tree...
M   drivers/gpu/drm/i915/display/intel_bw.h
M   drivers/gpu/drm/i915/intel_pm.c
M   drivers/gpu/drm/i915/intel_pm.h
Falling back to patching base and 3-way merge...
Auto-merging drivers/gpu/drm/i915/intel_pm.c
CONFLICT (content): Merge conflict in drivers/gpu/drm/i915/intel_pm.c
Auto-merging drivers/gpu/drm/i915/display/intel_bw.h
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0002 drm/i915: Use bw state for per crtc SAGV evaluation
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915/gt: Small tidy of gen8+ breadcrumb emission

2020-05-05 Thread Patchwork
== Series Details ==

Series: drm/i915/gt: Small tidy of gen8+ breadcrumb emission
URL   : https://patchwork.freedesktop.org/series/76918/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8422_full -> Patchwork_17573_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Known issues


  Here are the changes found in Patchwork_17573_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@gem_workarounds@suspend-resume:
- shard-apl:  [PASS][1] -> [DMESG-WARN][2] ([i915#180])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-apl1/igt@gem_workarou...@suspend-resume.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17573/shard-apl4/igt@gem_workarou...@suspend-resume.html

  * igt@gen9_exec_parse@allowed-all:
- shard-skl:  [PASS][3] -> [DMESG-WARN][4] ([i915#716])
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-skl9/igt@gen9_exec_pa...@allowed-all.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17573/shard-skl7/igt@gen9_exec_pa...@allowed-all.html

  * igt@kms_cursor_crc@pipe-a-cursor-128x128-offscreen:
- shard-kbl:  [PASS][5] -> [FAIL][6] ([i915#54] / [i915#93] / 
[i915#95])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-kbl6/igt@kms_cursor_...@pipe-a-cursor-128x128-offscreen.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17573/shard-kbl7/igt@kms_cursor_...@pipe-a-cursor-128x128-offscreen.html

  * igt@kms_cursor_crc@pipe-a-cursor-256x85-onscreen:
- shard-skl:  [PASS][7] -> [FAIL][8] ([i915#54])
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-skl1/igt@kms_cursor_...@pipe-a-cursor-256x85-onscreen.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17573/shard-skl5/igt@kms_cursor_...@pipe-a-cursor-256x85-onscreen.html

  * igt@kms_cursor_legacy@2x-long-flip-vs-cursor-legacy:
- shard-glk:  [PASS][9] -> [FAIL][10] ([i915#72])
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-glk9/igt@kms_cursor_leg...@2x-long-flip-vs-cursor-legacy.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17573/shard-glk9/igt@kms_cursor_leg...@2x-long-flip-vs-cursor-legacy.html

  * igt@kms_draw_crc@draw-method-xrgb-pwrite-untiled:
- shard-skl:  [PASS][11] -> [FAIL][12] ([i915#177] / [i915#52] / 
[i915#54])
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-skl4/igt@kms_draw_...@draw-method-xrgb-pwrite-untiled.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17573/shard-skl1/igt@kms_draw_...@draw-method-xrgb-pwrite-untiled.html

  * igt@kms_flip_tiling@flip-changes-tiling-y:
- shard-apl:  [PASS][13] -> [FAIL][14] ([i915#95])
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-apl2/igt@kms_flip_til...@flip-changes-tiling-y.html
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17573/shard-apl1/igt@kms_flip_til...@flip-changes-tiling-y.html
- shard-kbl:  [PASS][15] -> [FAIL][16] ([i915#699] / [i915#93] / 
[i915#95])
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-kbl6/igt@kms_flip_til...@flip-changes-tiling-y.html
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17573/shard-kbl4/igt@kms_flip_til...@flip-changes-tiling-y.html

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a:
- shard-kbl:  [PASS][17] -> [DMESG-WARN][18] ([i915#180]) +2 
similar issues
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-kbl4/igt@kms_pipe_crc_ba...@suspend-read-crc-pipe-a.html
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17573/shard-kbl4/igt@kms_pipe_crc_ba...@suspend-read-crc-pipe-a.html

  * igt@kms_psr@psr2_cursor_plane_onoff:
- shard-iclb: [PASS][19] -> [SKIP][20] ([fdo#109441]) +2 similar 
issues
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-iclb2/igt@kms_psr@psr2_cursor_plane_onoff.html
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17573/shard-iclb3/igt@kms_psr@psr2_cursor_plane_onoff.html

  
 Possible fixes 

  * {igt@gem_ctx_isolation@preservation-s3@bcs0}:
- shard-kbl:  [DMESG-WARN][21] ([i915#180]) -> [PASS][22] +2 
similar issues
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-kbl6/igt@gem_ctx_isolation@preservation...@bcs0.html
   [22]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17573/shard-kbl7/igt@gem_ctx_isolation@preservation...@bcs0.html

  * igt@kms_cursor_crc@pipe-b-cursor-suspend:
- shard-skl:  [INCOMPLETE][23] ([i915#300]) -> [PASS][24]
   [23]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-skl7/igt@kms_cursor_...@pipe-b-cursor-suspend.html
   [24]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17573/shard-skl3/igt@kms_cursor_...@pipe-b-cursor-suspend.html

  * {igt@kms

Re: [Intel-gfx] [CI] drm/i915/gt: Small tidy of gen8+ breadcrumb emission

2020-05-05 Thread Mika Kuoppala
Chris Wilson  writes:

> Use a local to shrink a line under 80 columns, and refactor the common
> emit_xcs_breadcrumb() wrapper of ggtt-write.
>
> Signed-off-by: Chris Wilson 

Reviewed-by: Mika Kuoppala 

> ---
>  drivers/gpu/drm/i915/gt/intel_lrc.c | 34 +
>  1 file changed, 15 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c 
> b/drivers/gpu/drm/i915/gt/intel_lrc.c
> index d4ef344657b0..c00366387b54 100644
> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> @@ -4641,8 +4641,7 @@ static u32 *emit_preempt_busywait(struct i915_request 
> *request, u32 *cs)
>  }
>  
>  static __always_inline u32*
> -gen8_emit_fini_breadcrumb_footer(struct i915_request *request,
> -  u32 *cs)
> +gen8_emit_fini_breadcrumb_tail(struct i915_request *request, u32 *cs)
>  {
>   *cs++ = MI_USER_INTERRUPT;
>  
> @@ -4656,14 +4655,16 @@ gen8_emit_fini_breadcrumb_footer(struct i915_request 
> *request,
>   return gen8_emit_wa_tail(request, cs);
>  }
>  
> -static u32 *gen8_emit_fini_breadcrumb(struct i915_request *request, u32 *cs)
> +static u32 *emit_xcs_breadcrumb(struct i915_request *request, u32 *cs)
>  {
> - cs = gen8_emit_ggtt_write(cs,
> -   request->fence.seqno,
> -   
> i915_request_active_timeline(request)->hwsp_offset,
> -   0);
> + u32 addr = i915_request_active_timeline(request)->hwsp_offset;
>  
> - return gen8_emit_fini_breadcrumb_footer(request, cs);
> + return gen8_emit_ggtt_write(cs, request->fence.seqno, addr, 0);
> +}
> +
> +static u32 *gen8_emit_fini_breadcrumb(struct i915_request *rq, u32 *cs)
> +{
> + return gen8_emit_fini_breadcrumb_tail(rq, emit_xcs_breadcrumb(rq, cs));
>  }
>  
>  static u32 *gen8_emit_fini_breadcrumb_rcs(struct i915_request *request, u32 
> *cs)
> @@ -4681,7 +4682,7 @@ static u32 *gen8_emit_fini_breadcrumb_rcs(struct 
> i915_request *request, u32 *cs)
> PIPE_CONTROL_FLUSH_ENABLE |
> PIPE_CONTROL_CS_STALL);
>  
> - return gen8_emit_fini_breadcrumb_footer(request, cs);
> + return gen8_emit_fini_breadcrumb_tail(request, cs);
>  }
>  
>  static u32 *
> @@ -4697,7 +4698,7 @@ gen11_emit_fini_breadcrumb_rcs(struct i915_request 
> *request, u32 *cs)
> PIPE_CONTROL_DC_FLUSH_ENABLE |
> PIPE_CONTROL_FLUSH_ENABLE);
>  
> - return gen8_emit_fini_breadcrumb_footer(request, cs);
> + return gen8_emit_fini_breadcrumb_tail(request, cs);
>  }
>  
>  /*
> @@ -4735,7 +4736,7 @@ static u32 *gen12_emit_preempt_busywait(struct 
> i915_request *request, u32 *cs)
>  }
>  
>  static __always_inline u32*
> -gen12_emit_fini_breadcrumb_footer(struct i915_request *request, u32 *cs)
> +gen12_emit_fini_breadcrumb_tail(struct i915_request *request, u32 *cs)
>  {
>   *cs++ = MI_USER_INTERRUPT;
>  
> @@ -4749,14 +4750,9 @@ gen12_emit_fini_breadcrumb_footer(struct i915_request 
> *request, u32 *cs)
>   return gen8_emit_wa_tail(request, cs);
>  }
>  
> -static u32 *gen12_emit_fini_breadcrumb(struct i915_request *request, u32 *cs)
> +static u32 *gen12_emit_fini_breadcrumb(struct i915_request *rq, u32 *cs)
>  {
> - cs = gen8_emit_ggtt_write(cs,
> -   request->fence.seqno,
> -   
> i915_request_active_timeline(request)->hwsp_offset,
> -   0);
> -
> - return gen12_emit_fini_breadcrumb_footer(request, cs);
> + return gen12_emit_fini_breadcrumb_tail(rq, emit_xcs_breadcrumb(rq, cs));
>  }
>  
>  static u32 *
> @@ -4775,7 +4771,7 @@ gen12_emit_fini_breadcrumb_rcs(struct i915_request 
> *request, u32 *cs)
> PIPE_CONTROL_FLUSH_ENABLE |
> PIPE_CONTROL_HDC_PIPELINE_FLUSH);
>  
> - return gen12_emit_fini_breadcrumb_footer(request, cs);
> + return gen12_emit_fini_breadcrumb_tail(request, cs);
>  }
>  
>  static void execlists_park(struct intel_engine_cs *engine)
> -- 
> 2.20.1
>
> ___
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915: HDCP: retry link integrity check on failure

2020-05-05 Thread Ramalingam C
On 2020-05-04 at 14:35:24 +0200, Oliver Barta wrote:
> From: Oliver Barta 
> 
> A single Ri mismatch doesn't automatically mean that the link integrity
> is broken. Update and check of Ri and Ri' are done asynchronously. In
> case an update happens just between the read of Ri' and the check against
> Ri there will be a mismatch even if the link integrity is fine otherwise.

Thanks for working on this. Btw, did you actually hit this sporadic link check
failure, or are you fixing it theoretically?

IMO this change will rule out possible sporadic link check failures, as
mentioned in the commit msg, though I haven't faced this issue in my
testing.

Reviewed-by: Ramalingam C 

> 
> Signed-off-by: Oliver Barta 
> ---
>  drivers/gpu/drm/i915/display/intel_hdmi.c | 19 ---
>  1 file changed, 16 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_hdmi.c 
> b/drivers/gpu/drm/i915/display/intel_hdmi.c
> index 010f37240710..3156fde392f2 100644
> --- a/drivers/gpu/drm/i915/display/intel_hdmi.c
> +++ b/drivers/gpu/drm/i915/display/intel_hdmi.c
> @@ -1540,7 +1540,7 @@ int intel_hdmi_hdcp_toggle_signalling(struct 
> intel_digital_port *intel_dig_port,
>  }
>  
>  static
> -bool intel_hdmi_hdcp_check_link(struct intel_digital_port *intel_dig_port)
> +bool intel_hdmi_hdcp_check_link_once(struct intel_digital_port 
> *intel_dig_port)
>  {
>   struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev);
>   struct intel_connector *connector =
> @@ -1563,8 +1563,7 @@ bool intel_hdmi_hdcp_check_link(struct 
> intel_digital_port *intel_dig_port)
>   if (wait_for((intel_de_read(i915, HDCP_STATUS(i915, cpu_transcoder, 
> port)) &
> (HDCP_STATUS_RI_MATCH | HDCP_STATUS_ENC)) ==
>(HDCP_STATUS_RI_MATCH | HDCP_STATUS_ENC), 1)) {
> - drm_err(&i915->drm,
> - "Ri' mismatch detected, link check failed (%x)\n",
> + drm_dbg_kms(&i915->drm, "Ri' mismatch detected (%x)\n",
>   intel_de_read(i915, HDCP_STATUS(i915, cpu_transcoder,
>   port)));
>   return false;
> @@ -1572,6 +1571,20 @@ bool intel_hdmi_hdcp_check_link(struct 
> intel_digital_port *intel_dig_port)
>   return true;
>  }
>  
> +static
> +bool intel_hdmi_hdcp_check_link(struct intel_digital_port *intel_dig_port)
> +{
> + struct drm_i915_private *i915 = to_i915(intel_dig_port->base.base.dev);
> + int retry;
> +
> + for (retry = 0; retry < 3; retry++)
> + if (intel_hdmi_hdcp_check_link_once(intel_dig_port))
> + return true;
> +
> + drm_err(&i915->drm, "Link check failed\n");
> + return false;
> +}
> +
>  struct hdcp2_hdmi_msg_timeout {
>   u8 msg_id;
>   u16 timeout;
> -- 
> 2.20.1
> 
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915/tgl: Put HDC flush pipe_control bit in the right dword

2020-05-05 Thread Lionel Landwerlin

On 05/05/2020 03:09, D Scott Phillips wrote:

D Scott Phillips  writes:


Previously we set HDC_PIPELINE_FLUSH in dword 1 of gen12
pipe_control commands. HDC Pipeline flush actually resides in
dword 0, and the bit we were setting in dword 1 was Indirect State
Pointers Disable, which invalidates indirect state in the render
context. This causes failures for userspace, as things like push
constant state gets invalidated.

Cc: Mika Kuoppala 
Cc: Chris Wilson 
Signed-off-by: D Scott Phillips 

also,

Fixes: 4aa0b5d457f5 ("drm/i915/tgl: Add HDC Pipeline Flush")


I think Mika sent the same patch in "drm/i915/gen12: Fix HDC pipeline 
flush".


-Lionel

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v26 8/9] drm/i915: Restrict qgv points which don't have enough bandwidth.

2020-05-05 Thread Stanislav Lisovskiy
According to BSpec 53998, we should try to
restrict the qgv points which can't provide
enough bandwidth for the desired display configuration.

Currently we just compare against all of
them and take the minimum (worst case).
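
For illustration, a minimal sketch of building such a restriction mask;
struct qgv_info_sketch, max_bw_kbps and qgv_insufficient_points_mask()
are hypothetical names standing in for the driver's real structures and
helpers, and the "last point is the highest-bandwidth one" fallback is an
assumption:

#include <linux/bits.h>		/* BIT(), GENMASK() */
#include <linux/types.h>	/* u32 */

/* Hypothetical stand-in for the per-platform QGV point table. */
struct qgv_info_sketch {
	unsigned int num_points;
	unsigned int max_bw_kbps[8];	/* deliverable bandwidth per point */
};

/*
 * Sketch: mask the QGV points that cannot carry the required bandwidth
 * so they can be restricted via PCode, rather than only checking the
 * configuration against the worst-case point.
 */
static u32 qgv_insufficient_points_mask(const struct qgv_info_sketch *qi,
					unsigned int required_bw_kbps)
{
	u32 mask = 0;
	unsigned int i;

	for (i = 0; i < qi->num_points; i++)
		if (qi->max_bw_kbps[i] < required_bw_kbps)
			mask |= BIT(i);

	/*
	 * Never mask every point: if nothing fits, keep the point assumed
	 * to have the highest bandwidth (here, the last one) available.
	 */
	if (qi->num_points && mask == GENMASK(qi->num_points - 1, 0))
		mask &= ~BIT(qi->num_points - 1);

	return mask;
}

The patch itself ties the mask to the bandwidth/SAGV atomic state and only
sends it to PCode when it changes, as described in the changelog below.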

v2: Fixed wrong PCode reply mask, removed hardcoded
values.

v3: Forbid simultaneous legacy SAGV PCode requests and
restricting qgv points. Put the actual restriction
to commit function, added serialization(thanks to Ville)
to prevent commit being applied out of order in case of
nonblocking and/or nomodeset commits.

v4:
- Minor code refactoring, fixed few typos(thanks to James Ausmus)
- Change the naming of qgv point
  masking/unmasking functions(James Ausmus).
- Simplify the masking/unmasking operation itself,
  as we don't need to mask only single point per request(James Ausmus)
- Reject and stick to highest bandwidth point if SAGV
  can't be enabled(BSpec)

v5:
- Add new mailbox reply codes, which seems to happen during boot
  time for TGL and indicate that QGV setting is not yet available.

v6:
- Increase number of supported QGV points to be in sync with BSpec.

v7: - Rebased and resolved conflict to fix build failure.
- Fix NUM_QGV_POINTS to 8 and moved that to header file(James Ausmus)

v8: - Don't report an error if we can't restrict qgv points, as SAGV
  can be disabled by BIOS, which is completely legal. So don't
  make CI panic. Instead if we detect that there is only 1 QGV
  point accessible just analyze if we can fit the required bandwidth
  requirements, but no need in restricting.

v9: - Fix wrong QGV transition if we have 0 planes and no SAGV
  simultaneously.

v10: - Fix CDCLK corruption, because of global state getting serialized
   without modeset, which caused copying of non-calculated cdclk
   to be copied to dev_priv(thanks to Ville for the hint).

v11: - Remove unneeded headers and spaces(Matthew Roper)
 - Remove unneeded intel_qgv_info qi struct from bw check and zero
   out the needed one(Matthew Roper)
 - Changed QGV error message to have more clear meaning(Matthew Roper)
 - Use state->modeset_set instead of any_ms(Matthew Roper)
 - Moved NUM_SAGV_POINTS from i915_reg.h to i915_drv.h where it's used
 - Keep using crtc_state->hw.active instead of .enable(Matthew Roper)
 - Moved unrelated changes to other patch(using latency as parameter
   for plane wm calculation, moved to SAGV refactoring patch)

v12: - Fix rebase conflict with own temporary SAGV/QGV fix.
 - Remove unnecessary mask being zero check when unmasking
   qgv points as this is completely legal(Matt Roper)
 - Check if we are setting the same mask as already being set
   in hardware to prevent error from PCode.
 - Fix error message when restricting/unrestricting qgv points
   to "mask/unmask" which sounds more accurate(Matt Roper)
 - Move sagv status setting to icl_get_bw_info from atomic check
   as this should be calculated only once.(Matt Roper)
 - Edited comments for the case when we can't enable SAGV and
   use only 1 QGV point with highest bandwidth to be more
   understandable.(Matt Roper)

v13: - Moved max_data_rate in bw check to closer scope(Ville Syrjälä)
 - Changed comment for zero new_mask in qgv points masking function
   to better reflect reality(Ville Syrjälä)
 - Simplified bit mask operation in qgv points masking function
   (Ville Syrjälä)
 - Moved intel_qgv_points_mask closer to gen11 SAGV disabling,
   however this still can't be under modeset condition(Ville Syrjälä)
 - Packed qgv_points_mask as u8 and moved closer to pipe_sagv_mask
   (Ville Syrjälä)
 - Extracted PCode changes to separate patch.(Ville Syrjälä)
 - Now treat num_planes 0 same as 1 to avoid confusion and
   returning max_bw as 0, which would prevent choosing QGV
   point having max bandwidth in case if SAGV is not allowed,
   as per BSpec(Ville Syrjälä)
 - Do the actual qgv_points_mask swap in the same place as
   all other global state parts like cdclk are swapped.
   In the next patch, this all will be moved to bw state as
   global state, once new global state patch series from Ville
   lands

v14: - Now using global state to serialize access to qgv points
 - Added global state locking back, otherwise we seem to read
   bw state in a wrong way.

v15: - Added TODO comment for near atomic global state locking in
   bw code.

v16: - Fixed intel_atomic_bw_* functions to be intel_bw_* as discussed
   with Jani Nikula.
 - Take bw_state_changed flag into use.

v17: - Moved qgv point related manipulations next to SAGV code, as
   those are semantically related(Ville Syrjälä)
 - Renamed those into intel_sagv_(pre)|(post)_plane_update
   (Ville Syrjälä)

v18: - Move sagv related calls from commit tail into
   intel_sagv_(pre)|(post)_plane_update(Ville Syr

[Intel-gfx] [PATCH v26 6/9] drm/i915: Added required new PCode commands

2020-05-05 Thread Stanislav Lisovskiy
We need new PCode request commands and reply codes
to be added as a preparation patch for restricting
QGV points for the new SAGV support.

v2: - Extracted those changes into separate patch
  (Ville Syrjälä)

v3: - Moved new PCode masks to another place from
  PCode commands(Ville)

v4: - Moved new PCode masks to the corresponding PCode
  command, with indentation(Ville)
- Changed naming to ICL_ instead of GEN11_
  to fit more nicely into existing definition
  style.

Signed-off-by: Stanislav Lisovskiy 
---
 drivers/gpu/drm/i915/i915_reg.h   | 4 ++++
 drivers/gpu/drm/i915/intel_sideband.c | 2 ++
 2 files changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 4a1965467374..8118d1e39f6a 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -9086,6 +9086,7 @@ enum {
 #define GEN7_PCODE_ILLEGAL_DATA   0x3
 #define GEN11_PCODE_ILLEGAL_SUBCOMMAND 0x4
 #define GEN11_PCODE_LOCKED 0x6
+#define GEN11_PCODE_REJECTED   0x11
 #define GEN7_PCODE_MIN_FREQ_TABLE_GT_RATIO_OUT_OF_RANGE 0x10
 #define   GEN6_PCODE_WRITE_RC6VIDS 0x4
 #define   GEN6_PCODE_READ_RC6VIDS  0x5
@@ -9107,6 +9108,9 @@ enum {
 #define   ICL_PCODE_MEM_SUBSYSYSTEM_INFO   0xd
 #define ICL_PCODE_MEM_SS_READ_GLOBAL_INFO  (0x0 << 8)
 #define ICL_PCODE_MEM_SS_READ_QGV_POINT_INFO(point) (((point) << 16) | (0x1 << 8))
+#define   ICL_PCODE_SAGV_DE_MEM_SS_CONFIG  0xe
+#define ICL_PCODE_POINTS_RESTRICTED   0x0
+#define ICL_PCODE_POINTS_RESTRICTED_MASK   0x1
 #define   GEN6_PCODE_READ_D_COMP   0x10
 #define   GEN6_PCODE_WRITE_D_COMP  0x11
 #define   ICL_PCODE_EXIT_TCCOLD0x12
diff --git a/drivers/gpu/drm/i915/intel_sideband.c b/drivers/gpu/drm/i915/intel_sideband.c
index 14daf6af6854..59ef364549cf 100644
--- a/drivers/gpu/drm/i915/intel_sideband.c
+++ b/drivers/gpu/drm/i915/intel_sideband.c
@@ -371,6 +371,8 @@ static int gen7_check_mailbox_status(u32 mbox)
return -ENXIO;
case GEN11_PCODE_LOCKED:
return -EBUSY;
+   case GEN11_PCODE_REJECTED:
+   return -EACCES;
case GEN7_PCODE_MIN_FREQ_TABLE_GT_RATIO_OUT_OF_RANGE:
return -EOVERFLOW;
default:
-- 
2.24.1.485.gad05a3d8e5
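
For context, a sketch of how a caller might use the new command and reply
code; pcode_request_sketch() and its argument order are assumptions made up
for illustration, not the driver's real sideband API, while the
ICL_PCODE_*/GEN11_PCODE_REJECTED definitions come from the patch above:

#include "i915_drv.h"	/* struct drm_i915_private, drm_dbg_kms() */

/* Hypothetical helper standing in for the driver's PCode request API. */
int pcode_request_sketch(struct drm_i915_private *i915, u32 mbox,
			 u32 data, u32 reply_mask, u32 expected_reply);

static int restrict_qgv_points_sketch(struct drm_i915_private *i915,
				      u32 points_mask)
{
	int ret;

	/*
	 * Ask PCode to restrict the masked QGV points and wait until the
	 * reply field reports them as restricted.
	 */
	ret = pcode_request_sketch(i915, ICL_PCODE_SAGV_DE_MEM_SS_CONFIG,
				   points_mask,
				   ICL_PCODE_POINTS_RESTRICTED_MASK,
				   ICL_PCODE_POINTS_RESTRICTED);

	/* With this patch, a PCode rejection now surfaces as -EACCES. */
	if (ret == -EACCES)
		drm_dbg_kms(&i915->drm,
			    "PCode rejected QGV point mask 0x%x\n",
			    points_mask);

	return ret;
}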

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: HDCP: retry link integrity check on failure

2020-05-05 Thread Patchwork
== Series Details ==

Series: drm/i915: HDCP: retry link integrity check on failure
URL   : https://patchwork.freedesktop.org/series/76917/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8422_full -> Patchwork_17572_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Known issues


  Here are the changes found in Patchwork_17572_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@gen9_exec_parse@allowed-all:
- shard-apl:  [PASS][1] -> [DMESG-WARN][2] ([i915#716])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-apl3/igt@gen9_exec_pa...@allowed-all.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17572/shard-apl4/igt@gen9_exec_pa...@allowed-all.html

  * igt@i915_suspend@forcewake:
- shard-iclb: [PASS][3] -> [INCOMPLETE][4] ([i915#1185])
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-iclb5/igt@i915_susp...@forcewake.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17572/shard-iclb3/igt@i915_susp...@forcewake.html

  * igt@kms_cursor_crc@pipe-a-cursor-256x256-sliding:
- shard-kbl:  [PASS][5] -> [FAIL][6] ([i915#54] / [i915#93] / 
[i915#95])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-kbl6/igt@kms_cursor_...@pipe-a-cursor-256x256-sliding.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17572/shard-kbl2/igt@kms_cursor_...@pipe-a-cursor-256x256-sliding.html

  * igt@kms_flip_tiling@flip-changes-tiling-y:
- shard-apl:  [PASS][7] -> [FAIL][8] ([i915#95])
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-apl2/igt@kms_flip_til...@flip-changes-tiling-y.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17572/shard-apl6/igt@kms_flip_til...@flip-changes-tiling-y.html
- shard-kbl:  [PASS][9] -> [FAIL][10] ([i915#699] / [i915#93] / 
[i915#95])
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-kbl6/igt@kms_flip_til...@flip-changes-tiling-y.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17572/shard-kbl2/igt@kms_flip_til...@flip-changes-tiling-y.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-indfb-draw-render:
- shard-skl:  [PASS][11] -> [FAIL][12] ([i915#49])
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-skl5/igt@kms_frontbuffer_track...@psr-1p-primscrn-pri-indfb-draw-render.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17572/shard-skl7/igt@kms_frontbuffer_track...@psr-1p-primscrn-pri-indfb-draw-render.html

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a:
- shard-kbl:  [PASS][13] -> [DMESG-WARN][14] ([i915#180]) +3 
similar issues
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-kbl4/igt@kms_pipe_crc_ba...@suspend-read-crc-pipe-a.html
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17572/shard-kbl4/igt@kms_pipe_crc_ba...@suspend-read-crc-pipe-a.html

  * igt@kms_plane@plane-panning-bottom-right-suspend-pipe-b-planes:
- shard-apl:  [PASS][15] -> [DMESG-WARN][16] ([i915#180]) +2 
similar issues
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-apl8/igt@kms_pl...@plane-panning-bottom-right-suspend-pipe-b-planes.html
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17572/shard-apl4/igt@kms_pl...@plane-panning-bottom-right-suspend-pipe-b-planes.html

  * igt@kms_plane_alpha_blend@pipe-a-coverage-7efc:
- shard-skl:  [PASS][17] -> [FAIL][18] ([fdo#108145] / [i915#265])
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-skl5/igt@kms_plane_alpha_bl...@pipe-a-coverage-7efc.html
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17572/shard-skl7/igt@kms_plane_alpha_bl...@pipe-a-coverage-7efc.html

  * igt@kms_psr@psr2_sprite_blt:
- shard-iclb: [PASS][19] -> [SKIP][20] ([fdo#109441]) +1 similar 
issue
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-iclb2/igt@kms_psr@psr2_sprite_blt.html
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17572/shard-iclb7/igt@kms_psr@psr2_sprite_blt.html

  * igt@kms_vblank@pipe-a-ts-continuation-suspend:
- shard-kbl:  [PASS][21] -> [DMESG-WARN][22] ([i915#180] / 
[i915#93] / [i915#95])
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-kbl4/igt@kms_vbl...@pipe-a-ts-continuation-suspend.html
   [22]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17572/shard-kbl6/igt@kms_vbl...@pipe-a-ts-continuation-suspend.html

  
 Possible fixes 

  * {igt@gem_ctx_isolation@preservation-s3@bcs0}:
- shard-kbl:  [DMESG-WARN][23] ([i915#180]) -> [PASS][24] +1 
similar issue
   [23]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8422/shard-kbl6/igt@gem_ctx_isolation@preservation...@bcs0.html
   [24]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17572
