[Intel-gfx] [PULL] gvt-fixes

2020-05-11 Thread Zhenyu Wang
Hi, Here are two more fixes for 5.7. One is for the recent guest display probe failure, fixed by setting the correct transcoder and DPLL clock for the virtual display. The other is a regression fix for a kernel oops with older aliasing-ppgtt guests. Thanks -- The following changes since commit

Re: [Intel-gfx] [PATCH 3/3] misc/habalabs: don't set default fence_ops->wait

2020-05-11 Thread Dave Airlie
On Mon, 11 May 2020 at 19:37, Oded Gabbay wrote: > > On Mon, May 11, 2020 at 12:11 PM Daniel Vetter wrote: > > > > It's the default. > Thanks for catching that. > > > > > Also so much for "we're not going to tell the graphics people how to > > review their code", dma_fence is a pretty core piece

[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: Force pte cacheline to main memory

2020-05-11 Thread Patchwork
== Series Details == Series: drm/i915: Force pte cacheline to main memory URL : https://patchwork.freedesktop.org/series/77162/ State : success == Summary == CI Bug Log - changes from CI_DRM_8466_full -> Patchwork_17630_full Summary

[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915/mst: filter out the display mode exceed sink's capability (rev2)

2020-05-11 Thread Patchwork
== Series Details == Series: drm/i915/mst: filter out the display mode exceed sink's capability (rev2) URL : https://patchwork.freedesktop.org/series/76095/ State : success == Summary == CI Bug Log - changes from CI_DRM_8466_full -> Patchwork_17629_full

[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: Remove unused HAS_FWTABLE macro

2020-05-11 Thread Patchwork
== Series Details == Series: drm/i915: Remove unused HAS_FWTABLE macro URL : https://patchwork.freedesktop.org/series/77158/ State : success == Summary == CI Bug Log - changes from CI_DRM_8466_full -> Patchwork_17627_full Summary ---

[Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915/selftests: Always flush before unpining after writing

2020-05-11 Thread Patchwork
== Series Details == Series: drm/i915/selftests: Always flush before unpining after writing URL : https://patchwork.freedesktop.org/series/77156/ State : success == Summary == CI Bug Log - changes from CI_DRM_8465_full -> Patchwork_17625_full

Re: [Intel-gfx] [RFC] GPU-bound energy efficiency improvements for the intel_pstate driver (v2.99)

2020-05-11 Thread Francisco Jerez
Peter Zijlstra writes: > On Mon, Apr 27, 2020 at 08:22:47PM -0700, Francisco Jerez wrote: >> This addresses the technical concerns people brought up about my >> previous v2 revision of this series. Other than a few bug fixes, the >> only major change relative to v2 is that the controller is now

Re: [Intel-gfx] [PATCH 1/9] drm/msm: Don't call dma_buf_vunmap without _vmap

2020-05-11 Thread Rob Clark
On Mon, May 11, 2020 at 8:29 AM Daniel Vetter wrote: > > On Mon, May 11, 2020 at 5:24 PM Rob Clark wrote: > > > > On Mon, May 11, 2020 at 2:36 AM Daniel Vetter > > wrote: > > > > > > I honestly don't exactly understand what's going on here, but the > > > current code is wrong for sure: It

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915: Force pte cacheline to main memory

2020-05-11 Thread Patchwork
== Series Details == Series: drm/i915: Force pte cacheline to main memory URL : https://patchwork.freedesktop.org/series/77162/ State : success == Summary == CI Bug Log - changes from CI_DRM_8466 -> Patchwork_17630 Summary ---

Re: [Intel-gfx] [PATCH 2/3] dma-fence: use default wait function for mock fences

2020-05-11 Thread Ruhl, Michael J
>-Original Message- >From: dri-devel On Behalf Of >Ruhl, Michael J >Sent: Monday, May 11, 2020 2:13 PM >To: Daniel Vetter ; LKML ker...@vger.kernel.org> >Cc: Intel Graphics Development ; DRI >Development ; linaro-mm- >s...@lists.linaro.org; Vetter, Daniel ; linux- >me...@vger.kernel.org

Re: [Intel-gfx] [PATCH 2/3] dma-fence: use default wait function for mock fences

2020-05-11 Thread Ruhl, Michael J
>-Original Message- >From: Intel-gfx On Behalf Of >Daniel Vetter >Sent: Monday, May 11, 2020 5:12 AM >To: LKML >Cc: Daniel Vetter ; Intel Graphics Development >; DRI Development de...@lists.freedesktop.org>; linaro-mm-...@lists.linaro.org; Vetter, Daniel >; Sumit Semwal ; linux-

Re: [Intel-gfx] [PATCH 1/3] drm/writeback: don't set fence->ops to default

2020-05-11 Thread Ruhl, Michael J
>-Original Message- >From: dri-devel On Behalf Of >Daniel Vetter >Sent: Monday, May 11, 2020 5:12 AM >To: LKML >Cc: David Airlie ; Daniel Vetter ; >Intel Graphics Development ; DRI >Development ; Thomas Zimmermann >; Vetter, Daniel >Subject: [PATCH 1/3] drm/writeback: don't set

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/mst: filter out the display mode exceed sink's capability (rev2)

2020-05-11 Thread Patchwork
== Series Details == Series: drm/i915/mst: filter out the display mode exceed sink's capability (rev2) URL : https://patchwork.freedesktop.org/series/76095/ State : success == Summary == CI Bug Log - changes from CI_DRM_8466 -> Patchwork_17629

[Intel-gfx] ✗ Fi.CI.BAT: failure for Consider DBuf bandwidth when calculating CDCLK (rev10)

2020-05-11 Thread Patchwork
== Series Details == Series: Consider DBuf bandwidth when calculating CDCLK (rev10) URL : https://patchwork.freedesktop.org/series/74739/ State : failure == Summary == CI Bug Log - changes from CI_DRM_8466 -> Patchwork_17628 Summary

[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Consider DBuf bandwidth when calculating CDCLK (rev10)

2020-05-11 Thread Patchwork
== Series Details == Series: Consider DBuf bandwidth when calculating CDCLK (rev10) URL : https://patchwork.freedesktop.org/series/74739/ State : warning == Summary == $ dim checkpatch origin/drm-tip f95bf937bce0 drm/i915: Decouple cdclk calculation from modeset checks 4dca6193f621 drm/i915:

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915: Remove unused HAS_FWTABLE macro

2020-05-11 Thread Patchwork
== Series Details == Series: drm/i915: Remove unused HAS_FWTABLE macro URL : https://patchwork.freedesktop.org/series/77158/ State : success == Summary == CI Bug Log - changes from CI_DRM_8466 -> Patchwork_17627 Summary ---

Re: [Intel-gfx] [PATCH 3/3] drm/i915/gt: Restore Cherryview back to full-ppgtt

2020-05-11 Thread Mika Kuoppala
Chris Wilson writes: > This reverts commit 0b718ba1e884f64dce27c19311dd2859b87e56b9. > > There are still some residual issues with asynchronous binding and > execution, but since commit 92581f9fb99c ("drm/i915: Immediately execute > the fenced work") we prefer not to use asynchronous binds, and

[Intel-gfx] ✗ Fi.CI.BAT: failure for series starting with [01/23] Revert "drm/i915/gem: Drop relocation slowpath".

2020-05-11 Thread Patchwork
== Series Details == Series: series starting with [01/23] Revert "drm/i915/gem: Drop relocation slowpath". URL : https://patchwork.freedesktop.org/series/77157/ State : failure == Summary == CI Bug Log - changes from CI_DRM_8465 -> Patchwork_17626

Re: [Intel-gfx] [PATCH] drm/i915: Force pte cacheline to main memory

2020-05-11 Thread Chris Wilson
Quoting Chris Wilson (2020-05-11 17:13:52) > Quoting Mika Kuoppala (2020-05-11 17:08:03) > > We have problems of tgl not seeing a valid pte entry > > when iommu is enabled. Add heavy handed flushing > > of entry modification by flushing the cpu, cacheline > > and then wcb. This forces the pte out

Re: [Intel-gfx] [PATCH] drm/i915: Force pte cacheline to main memory

2020-05-11 Thread Chris Wilson
Quoting Mika Kuoppala (2020-05-11 17:08:03) > We have problems of tgl not seeing a valid pte entry > when iommu is enabled. Add heavy handed flushing > of entry modification by flushing the cpu, cacheline > and then wcb. This forces the pte out to main memory > past this point regardless of

[Intel-gfx] [PATCH] drm/i915: Force pte cacheline to main memory

2020-05-11 Thread Mika Kuoppala
We have problems with TGL not seeing a valid PTE entry when the IOMMU is enabled. Add heavy-handed flushing of entry modification by flushing the CPU cacheline and then the WCB. This forces the PTE out to main memory past this point regardless of promises of coherency. This is an evolution of an
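
A minimal sketch of the flush sequence described above, assuming the generic kernel helpers clflush_cache_range() and wmb(); the helper name below is hypothetical and the i915-specific wrappers actually used in the patch are not shown in this excerpt:

/* Write the PTE, push its cacheline out, then drain the write-combine
 * buffer so the update reaches main memory regardless of coherency promises.
 */
static void gen8_set_pte_flushed(gen8_pte_t *ptep, gen8_pte_t pte)
{
	WRITE_ONCE(*ptep, pte);
	clflush_cache_range(ptep, sizeof(*ptep));	/* cacheline -> memory */
	wmb();						/* sfence: flush the WCB */
}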

Re: [Intel-gfx] [PATCH] drm/i915: Remove unused HAS_FWTABLE macro

2020-05-11 Thread Chris Wilson
Quoting Pascal Terjan (2020-05-10 22:25:21) > It has been unused since commit ccb2aceaaa5f ("drm/i915: use vfuncs for > reg_read/write_fw_domains"). > > Signed-off-by: Pascal Terjan Reviewed-by: Chris Wilson -Chris ___ Intel-gfx mailing list

[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/23] Revert "drm/i915/gem: Drop relocation slowpath".

2020-05-11 Thread Patchwork
== Series Details == Series: series starting with [01/23] Revert "drm/i915/gem: Drop relocation slowpath". URL : https://patchwork.freedesktop.org/series/77157/ State : warning == Summary == $ dim checkpatch origin/drm-tip 0aaea3234436 Revert "drm/i915/gem: Drop relocation slowpath". -:80:

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/selftests: Always flush before unpining after writing

2020-05-11 Thread Patchwork
== Series Details == Series: drm/i915/selftests: Always flush before unpining after writing URL : https://patchwork.freedesktop.org/series/77156/ State : success == Summary == CI Bug Log - changes from CI_DRM_8465 -> Patchwork_17625

Re: [Intel-gfx] [PATCH] drm/i915/selftests: Always flush before unpining after writing

2020-05-11 Thread Mika Kuoppala
Chris Wilson writes: > Be consistent, and even when we know we had used a WC, flush the mapped > object after writing into it. The flush understands the mapping type and > will only clflush if !I915_MAP_WC, but will always insert a wmb [sfence] > so that we can be sure that all writes are

Re: [Intel-gfx] [PATCH 1/9] drm/msm: Don't call dma_buf_vunmap without _vmap

2020-05-11 Thread Daniel Vetter
On Mon, May 11, 2020 at 5:24 PM Rob Clark wrote: > > On Mon, May 11, 2020 at 2:36 AM Daniel Vetter wrote: > > > > I honestly don't exactly understand what's going on here, but the > > current code is wrong for sure: It calls dma_buf_vunmap without ever > > calling dma_buf_vmap. > > > > What I'm

Re: [Intel-gfx] [PATCH 1/9] drm/msm: Don't call dma_buf_vunmap without _vmap

2020-05-11 Thread Rob Clark
On Mon, May 11, 2020 at 2:36 AM Daniel Vetter wrote: > > I honestly don't exactly understand what's going on here, but the > current code is wrong for sure: It calls dma_buf_vunmap without ever > calling dma_buf_vmap. > > What I'm not sure about is whether the WARN_ON is correct: > - msm imports

[Intel-gfx] [PATCH v2] drm/i915/mst: filter out the display mode exceed sink's capability

2020-05-11 Thread Lee Shawn C
So far, the max dot clock rate for MST mode relies only on the physical bandwidth limitation. This can cause compatibility issues if the source display resolution exceeds the MST hub's output ability. For example, the source DUT has DP 1.2 output capability, and the MST dock only supports the HDMI 1.4 spec. When an HDMI 2.0 monitor
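
A minimal sketch of the kind of filtering described, assuming a hypothetical per-connector mst_max_dotclock value derived from the MST branch device's downstream capability; how the real patch obtains that limit is not shown here:

/* Reject modes whose pixel clock exceeds what the MST branch device
 * (e.g. an HDMI 1.4 dongle) can output; mst_max_dotclock is a
 * hypothetical field in kHz, matching mode->clock.
 */
static enum drm_mode_status
intel_dp_mst_mode_valid_sketch(struct drm_connector *connector,
			       struct drm_display_mode *mode)
{
	int max_dotclock = to_intel_connector(connector)->mst_max_dotclock;

	if (max_dotclock && mode->clock > max_dotclock)
		return MODE_CLOCK_HIGH;

	return MODE_OK;
}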

[Intel-gfx] [PATCH v7 6/7] drm/i915: Adjust CDCLK accordingly to our DBuf bw needs

2020-05-11 Thread Stanislav Lisovskiy
According to BSpec, max BW per slice is calculated using the formula Max BW = CDCLK * 64. Currently, when calculating min CDCLK we account only for per-plane requirements; however, in order to avoid FIFO underruns we need to estimate the accumulated BW consumed by all planes (DDB entries, basically) residing on
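
A short sketch of inverting the BSpec relation quoted above (Max BW per slice = CDCLK * 64): the minimum CDCLK needed for the busiest slice is its bandwidth divided by 64. The units (kHz vs. kB/s) and the helper name are assumptions here:

/* With max_bw_per_slice = cdclk * 64, the busiest slice sets the floor
 * for CDCLK.  E.g. a slice consuming 38,400,000 kB/s needs at least a
 * 600,000 kHz (600 MHz) CDCLK.
 */
static unsigned int min_cdclk_for_dbuf_bw(unsigned int max_slice_bw)
{
	return DIV_ROUND_UP(max_slice_bw, 64);
}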

[Intel-gfx] [PATCH v7 2/7] drm/i915: Extract cdclk requirements checking to separate function

2020-05-11 Thread Stanislav Lisovskiy
In Gen11+, whenever we might exceed DBuf bandwidth we might need to recalculate CDCLK, which DBuf bandwidth scales with. The total DBuf BW used might change based on particular plane needs. Thus, to determine whether CDCLK needs to change, it is no longer enough to check the plane configuration and plane

[Intel-gfx] [PATCH v7 4/7] drm/i915: Plane configuration affects CDCLK in Gen11+

2020-05-11 Thread Stanislav Lisovskiy
From: Stanislav Lisovskiy So let's support it. Signed-off-by: Stanislav Lisovskiy --- drivers/gpu/drm/i915/display/intel_display.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c

[Intel-gfx] [PATCH v7 1/7] drm/i915: Decouple cdclk calculation from modeset checks

2020-05-11 Thread Stanislav Lisovskiy
We need to calculate CDCLK after watermarks/DDB have been calculated, as with recent hardware CDCLK needs to be adjusted according to DBuf requirements, which is not possible with the current code organization. Setting CDCLK according to DBuf BW requirements and not just rejecting if it doesn't satisfy BW

[Intel-gfx] [PATCH v7 5/7] drm/i915: Introduce for_each_dbuf_slice_in_mask macro

2020-05-11 Thread Stanislav Lisovskiy
We now quite often need to iterate only particular DBuf slices in a mask, whether they are active or related to a particular CRTC. Let's make our life a bit easier and use a macro for that. v2: - Minor code refactoring v3: - Use enum for max slices instead of macro Signed-off-by: Stanislav

[Intel-gfx] [PATCH v7 7/7] drm/i915: Remove unneeded hack now for CDCLK

2020-05-11 Thread Stanislav Lisovskiy
No need to bump up CDCLK now, as it is now correctly calculated, accounting for DBuf BW as BSpec says. Reviewed-by: Manasi Navare Signed-off-by: Stanislav Lisovskiy --- drivers/gpu/drm/i915/display/intel_cdclk.c | 12 1 file changed, 12 deletions(-) diff --git

[Intel-gfx] [PATCH v7 0/7] Consider DBuf bandwidth when calculating CDCLK

2020-05-11 Thread Stanislav Lisovskiy
We need to calculate CDCLK after watermarks/DDB have been calculated, as with recent hardware CDCLK needs to be adjusted according to DBuf requirements, which is not possible with the current code organization. Setting CDCLK according to DBuf BW requirements and not just rejecting if it doesn't satisfy BW

[Intel-gfx] [PATCH v7 3/7] drm/i915: Check plane configuration properly

2020-05-11 Thread Stanislav Lisovskiy
From: Stanislav Lisovskiy Checking with hweight8 whether the plane configuration has changed is wrong, as different plane configs can result in the same Hamming weight. So let's check the bitmask itself. Signed-off-by: Stanislav Lisovskiy --- drivers/gpu/drm/i915/display/intel_display.c | 8
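
A small illustration of why a Hamming-weight comparison misses changes that a direct mask comparison catches; hweight8() and BIT() are the standard kernel helpers, and the function name is made up for illustration:

#include <linux/types.h>
#include <linux/bitops.h>

static bool plane_config_changed(u8 old_planes, u8 new_planes)
{
	/* e.g. old = BIT(0) | BIT(1), new = BIT(0) | BIT(2):
	 * hweight8() is 2 for both, so comparing weights reports
	 * "no change"; comparing the masks themselves does not.
	 */
	return old_planes != new_planes;
}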

[Intel-gfx] i915 flickers after return from suspend

2020-05-11 Thread Arno
My laptop (Core m5-6Y54) starts flickering after returning from suspend (to RAM) or after other commands touching the video driver (xrandr ...). From the kernel (tested with up to 5.7) I get the message: [drm:intel_cpu_fifo_underrun_irq_handler [i915]] *ERROR* CPU pipe A FIFO underrun I added a bug with

[Intel-gfx] [PATCH 20/23] drm/i915: Move i915_vma_lock in the selftests to avoid lock inversion, v2.

2020-05-11 Thread Maarten Lankhorst
Make sure vma_lock is not used as inner lock when kernel context is used, and add ww handling where appropriate. Signed-off-by: Maarten Lankhorst --- .../i915/gem/selftests/i915_gem_coherency.c | 26 ++-- .../drm/i915/gem/selftests/i915_gem_mman.c| 41 ++-

[Intel-gfx] ✓ Fi.CI.IGT: success for shmem helper untangling

2020-05-11 Thread Patchwork
== Series Details == Series: shmem helper untangling URL : https://patchwork.freedesktop.org/series/77146/ State : success == Summary == CI Bug Log - changes from CI_DRM_8462_full -> Patchwork_17623_full Summary --- **SUCCESS**

[Intel-gfx] [PATCH 13/23] drm/i915: Make sure execbuffer always passes ww state to i915_vma_pin.

2020-05-11 Thread Maarten Lankhorst
As a preparation step for full object locking and wait/wound handling during pin and object mapping, ensure that we always pass the ww context in i915_gem_execbuffer.c to i915_vma_pin, use lockdep to ensure this happens. This also requires changing the order of eb_parse slightly, to ensure we

[Intel-gfx] [PATCH 10/23] drm/i915: Nuke arguments to eb_pin_engine

2020-05-11 Thread Maarten Lankhorst
Those arguments are already set as eb.file and eb.args, so kill off the extra arguments. This will allow us to move eb_pin_engine() to after we reserved all BO's. Signed-off-by: Maarten Lankhorst --- drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 17 +++-- 1 file changed, 7

[Intel-gfx] [PATCH 17/23] drm/i915: Dirty hack to fix selftests locking inversion

2020-05-11 Thread Maarten Lankhorst
Some i915 selftests still use i915_vma_lock() as the inner lock, and intel_context_create_request() takes intel_timeline->mutex as the outer lock. Fortunately this is not an issue for selftests; they should be fixed, but we can move ahead and clean up lockdep now. Signed-off-by: Maarten Lankhorst ---

[Intel-gfx] [PATCH 22/23] drm/i915: Add ww locking to pin_to_display_plane

2020-05-11 Thread Maarten Lankhorst
Signed-off-by: Maarten Lankhorst --- drivers/gpu/drm/i915/gem/i915_gem_domain.c | 65 -- drivers/gpu/drm/i915/gem/i915_gem_object.h | 1 + 2 files changed, 49 insertions(+), 17 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c

[Intel-gfx] [PATCH 05/23] Revert "drm/i915/gem: Split eb_vma into its own allocation"

2020-05-11 Thread Maarten Lankhorst
This reverts commit 0f1dd02295f35dcdcbaafcbcbbec0753884ab974. This conflicts with the ww mutex handling, which needs to drop the references after gpu submission anyway, because otherwise we may risk unlocking a BO after first freeing it. Signed-off-by: Maarten Lankhorst ---

[Intel-gfx] [PATCH 11/23] drm/i915: Pin engine before pinning all objects, v4.

2020-05-11 Thread Maarten Lankhorst
We want to lock all gem objects, including the engine context objects, and rework the throttling to ensure that we can do this. Now we only throttle once, but can take eb_pin_engine while acquiring objects. This means we will have to drop the lock to wait. If we don't have to throttle we can still

[Intel-gfx] [PATCH 14/23] drm/i915: Convert i915_gem_object/client_blt.c to use ww locking as well, v2.

2020-05-11 Thread Maarten Lankhorst
This is the last part outside of selftests that still doesn't use the correct lock ordering of timeline->mutex vs resv_lock. With gem fixed, there are a few places that still get locking wrong: - gvt/scheduler.c - i915_perf.c - Most if not all selftests. Changes since v1: - Add

[Intel-gfx] [PATCH 08/23] drm/i915: Use ww locking in intel_renderstate.

2020-05-11 Thread Maarten Lankhorst
We want to start using ww locking in intel_context_pin, for this we need to lock multiple objects, and the single i915_gem_object_lock is not enough. Convert to using ww-waiting, and make sure we always pin intel_context_state, even if we don't have a renderstate object. Signed-off-by: Maarten

[Intel-gfx] [PATCH 03/23] drm/i915: Remove locking from i915_gem_object_prepare_read/write

2020-05-11 Thread Maarten Lankhorst
Execbuffer submission will perform its own WW locking, and we cannot rely on the implicit lock there. This also makes it clear that the GVT code will get a lockdep splat when multiple batchbuffer shadows need to be performed in the same instance, fix that up. Signed-off-by: Maarten Lankhorst

[Intel-gfx] [PATCH 07/23] drm/i915: Use per object locking in execbuf, v10.

2020-05-11 Thread Maarten Lankhorst
Now that we have changed execbuf submission slightly to allow us to do all pinning in one place, we can simply add ww versions on top of struct_mutex. All we have to do is add a separate path for -EDEADLK handling, which needs to unpin all gem BOs before dropping the lock, then start over. This

[Intel-gfx] [PATCH 16/23] drm/i915: Convert i915_perf to ww locking as well

2020-05-11 Thread Maarten Lankhorst
We have the ordering of timeline->mutex vs resv_lock wrong, convert the i915_pin_vma and intel_context_pin as well to future-proof this. We may need to do future changes to do this more transaction-like, and only get down to a single i915_gem_ww_ctx, but for now this should work. Signed-off-by:

[Intel-gfx] [PATCH 15/23] drm/i915: Kill last user of intel_context_create_request outside of selftests

2020-05-11 Thread Maarten Lankhorst
Instead of using intel_context_create_request(), use intel_context_pin() and i915_create_request directly. Now all those calls are gone outside of selftests. :) Signed-off-by: Maarten Lankhorst --- drivers/gpu/drm/i915/gt/intel_workarounds.c | 43 ++--- 1 file changed, 29

[Intel-gfx] [PATCH 06/23] drm/i915/gem: Make eb_add_lut interruptible wait on object lock.

2020-05-11 Thread Maarten Lankhorst
The lock here should be interruptible, so we can backoff if needed. Signed-off-by: Maarten Lankhorst --- drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 8 +++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c

[Intel-gfx] [PATCH 01/23] Revert "drm/i915/gem: Drop relocation slowpath".

2020-05-11 Thread Maarten Lankhorst
This reverts commit 7dc8f1143778 ("drm/i915/gem: Drop relocation slowpath"). We need the slowpath relocation for taking ww-mutex inside the page fault handler, and we will take this mutex when pinning all objects. [mlankhorst: Adjusted for reloc_gpu_flush() changes] Cc: Chris Wilson Cc: Matthew

[Intel-gfx] [PATCH 18/23] drm/i915/selftests: Fix locking inversion in lrc selftest.

2020-05-11 Thread Maarten Lankhorst
This function does not use intel_context_create_request, so it has to use the same locking order as normal code. This is required to shut up lockdep in selftests. Signed-off-by: Maarten Lankhorst --- drivers/gpu/drm/i915/gt/selftest_lrc.c | 15 --- 1 file changed, 12 insertions(+),

[Intel-gfx] [PATCH 12/23] drm/i915: Rework intel_context pinning to do everything outside of pin_mutex

2020-05-11 Thread Maarten Lankhorst
Instead of doing everything inside of pin_mutex, we move all pinning outside. Because i915_active has its own reference counting and pinning also has the same issues vs mutexes, we make sure everything is pinned first, so the pinning in i915_active only needs to bump refcounts. This allows

[Intel-gfx] [PATCH 04/23] drm/i915: Parse command buffer earlier in eb_relocate(slow)

2020-05-11 Thread Maarten Lankhorst
We want to introduce backoff logic, but we need to lock the pool object as well for command parsing. Because of this, we will need backoff logic for the engine pool obj, move the batch validation up slightly to eb_lookup_vmas, and the actual command parsing in a separate function which can get

[Intel-gfx] [PATCH 21/23] drm/i915: Add ww locking to vm_fault_gtt

2020-05-11 Thread Maarten Lankhorst
Signed-off-by: Maarten Lankhorst --- drivers/gpu/drm/i915/gem/i915_gem_mman.c | 51 +++- 1 file changed, 33 insertions(+), 18 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c index 70f5f82da288..68f503cf6284

[Intel-gfx] [PATCH 02/23] drm/i915: Add an implementation for i915_gem_ww_ctx locking, v2.

2020-05-11 Thread Maarten Lankhorst
i915_gem_ww_ctx is used to lock all gem bo's for pinning and memory eviction. We don't use it yet, but let's start adding the definition first. To use it, we have to pass a non-NULL ww to gem_object_lock, and don't unlock directly. It is done in i915_gem_ww_ctx_fini. Changes since v1: - Change
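
A minimal sketch of the usage pattern described (pass a non-NULL ww to gem_object_lock, never unlock directly, finish via i915_gem_ww_ctx_fini), assuming the helper names this patch introduces and the usual ww-mutex backoff on -EDEADLK; do_work() is a stand-in for whatever runs while the object is locked:

static int use_object_ww(struct drm_i915_gem_object *obj)
{
	struct i915_gem_ww_ctx ww;
	int err;

	i915_gem_ww_ctx_init(&ww, true);	/* true: interruptible waits */
retry:
	err = i915_gem_object_lock(obj, &ww);	/* tracked by ww, not unlocked here */
	if (!err)
		err = do_work(obj);		/* may lock further objects via ww */
	if (err == -EDEADLK) {
		err = i915_gem_ww_ctx_backoff(&ww); /* drop all locks, wait on the contended one */
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);		/* unlocks everything still held */
	return err;
}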

[Intel-gfx] [PATCH 19/23] drm/i915: Use ww pinning for intel_context_create_request()

2020-05-11 Thread Maarten Lankhorst
We want to get rid of intel_context_pin(), convert intel_context_create_request() first. :) Signed-off-by: Maarten Lankhorst --- drivers/gpu/drm/i915/gt/intel_context.c | 20 +++- 1 file changed, 15 insertions(+), 5 deletions(-) diff --git

[Intel-gfx] [PATCH 09/23] drm/i915: Add ww context handling to context_barrier_task

2020-05-11 Thread Maarten Lankhorst
This is required if we want to pass a ww context in intel_context_pin and gen6_ppgtt_pin(). Signed-off-by: Maarten Lankhorst --- drivers/gpu/drm/i915/gem/i915_gem_context.c | 55 ++- .../drm/i915/gem/selftests/i915_gem_context.c | 22 +++- 2 files changed, 48

[Intel-gfx] [PATCH 23/23] drm/i915: Ensure we hold the pin mutex

2020-05-11 Thread Maarten Lankhorst
Signed-off-by: Maarten Lankhorst --- drivers/gpu/drm/i915/gt/intel_renderstate.c | 2 +- drivers/gpu/drm/i915/i915_vma.c | 9 - drivers/gpu/drm/i915/i915_vma.h | 1 + 3 files changed, 10 insertions(+), 2 deletions(-) diff --git

[Intel-gfx] [PATCH] drm/i915/selftests: Always flush before unpining after writing

2020-05-11 Thread Chris Wilson
Be consistent, and even when we know we had used a WC mapping, flush the mapped object after writing into it. The flush understands the mapping type and will only clflush if !I915_MAP_WC, but will always insert a wmb [sfence] so that we can be sure that all writes are visible. v2: Add the unconditional
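
A rough sketch of the flush behaviour described (clflush only for non-WC mappings, an unconditional wmb either way); the helper name here is hypothetical, while I915_MAP_WC is the existing i915 mapping-type enum value:

static void flush_after_write(void *vaddr, unsigned long size,
			      enum i915_map_type type)
{
	if (type != I915_MAP_WC)
		clflush_cache_range(vaddr, size);	/* WB mapping: flush dirty cachelines */
	wmb();						/* sfence: make WC writes visible too */
}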

[Intel-gfx] ✗ Fi.CI.BAT: failure for drm/i915: Make intel_timeline_init static

2020-05-11 Thread Patchwork
== Series Details == Series: drm/i915: Make intel_timeline_init static URL : https://patchwork.freedesktop.org/series/77149/ State : failure == Summary == Applying: drm/i915: Make intel_timeline_init static Using index info to reconstruct a base tree... M

Re: [Intel-gfx] [PATCH] drm: Fix page flip ioctl format check

2020-05-11 Thread Ville Syrjälä
On Mon, May 11, 2020 at 02:41:13PM +0200, Daniel Vetter wrote: > On Mon, May 11, 2020 at 2:37 PM Ville Syrjälä > wrote: > > > > On Sat, May 09, 2020 at 12:13:02PM +0200, Daniel Vetter wrote: > > > On Fri, May 8, 2020 at 7:09 PM Rodrigo Vivi > > > wrote: > > > > > > > > On Fri, Apr 17, 2020 at

Re: [Intel-gfx] [PATCH] drm/i915/mst: filter out the display mode exceed sink's capability

2020-05-11 Thread Ville Syrjälä
On Thu, May 07, 2020 at 06:46:17PM -0400, Lyude Paul wrote: > On Thu, 2020-04-30 at 02:37 +, Lee, Shawn C wrote: > > On Sat, 2020-04-25, Lyude Paul wrote: > > > Hi! Sorry this took me a little while to get back to, I had a couple of > > > MST regressions that I had to look into > > > > > > On

Re: [Intel-gfx] [PATCH] drm: Fix page flip ioctl format check

2020-05-11 Thread Daniel Vetter
On Mon, May 11, 2020 at 2:37 PM Ville Syrjälä wrote: > > On Sat, May 09, 2020 at 12:13:02PM +0200, Daniel Vetter wrote: > > On Fri, May 8, 2020 at 7:09 PM Rodrigo Vivi wrote: > > > > > > On Fri, Apr 17, 2020 at 09:28:34PM +0300, Ville Syrjälä wrote: > > > > On Fri, Apr 17, 2020 at 08:10:26PM

Re: [Intel-gfx] [PATCH] drm: Fix page flip ioctl format check

2020-05-11 Thread Ville Syrjälä
On Sat, May 09, 2020 at 12:13:02PM +0200, Daniel Vetter wrote: > On Fri, May 8, 2020 at 7:09 PM Rodrigo Vivi wrote: > > > > On Fri, Apr 17, 2020 at 09:28:34PM +0300, Ville Syrjälä wrote: > > > On Fri, Apr 17, 2020 at 08:10:26PM +0200, Daniel Vetter wrote: > > > > On Fri, Apr 17, 2020 at 5:43 PM

Re: [Intel-gfx] [PATCH 5/9] drm/udl: Don't call get/put_pages on imported dma-buf

2020-05-11 Thread Thomas Zimmermann
Hi Am 11.05.20 um 13:37 schrieb Daniel Vetter: > On Mon, May 11, 2020 at 01:23:38PM +0200, Thomas Zimmermann wrote: >> Hi >> >> Am 11.05.20 um 11:35 schrieb Daniel Vetter: >>> There's no direct harm, because for the shmem helpers these are noops >>> on imported buffers. The trouble is in the

[Intel-gfx] ✓ Fi.CI.BAT: success for shmem helper untangling

2020-05-11 Thread Patchwork
== Series Details == Series: shmem helper untangling URL : https://patchwork.freedesktop.org/series/77146/ State : success == Summary == CI Bug Log - changes from CI_DRM_8462 -> Patchwork_17623 Summary --- **SUCCESS** No

Re: [Intel-gfx] [PATCH] drm/i915/mst: Wait for ACT sent before enabling the pipe

2020-05-11 Thread Imre Deak
On Thu, May 07, 2020 at 05:41:25PM +0300, Ville Syrjala wrote: > From: Ville Syrjälä > > The correct sequence according to bspec is to wait for the ACT sent > status before we turn on the pipe. Make it so. > > Signed-off-by: Ville Syrjälä Reviewed-by: Imre Deak > --- >

Re: [Intel-gfx] [PATCH 5/9] drm/udl: Don't call get/put_pages on imported dma-buf

2020-05-11 Thread Daniel Vetter
On Mon, May 11, 2020 at 01:23:38PM +0200, Thomas Zimmermann wrote: > Hi > > Am 11.05.20 um 11:35 schrieb Daniel Vetter: > > There's no direct harm, because for the shmem helpers these are noops > > on imported buffers. The trouble is in the locks these take - I want > > to change dma_buf_vmap

Re: [Intel-gfx] [PATCH 05/20] drm/i915: Tidy awaiting on dma-fences

2020-05-11 Thread Mika Kuoppala
Chris Wilson writes: > Just tidy up the return handling for completed dma-fences. While it may > return errors for invalid fence, we already know that we have a good > fence and the only error will be an already signaled fence. > > Signed-off-by: Chris Wilson Reviewed-by: Mika Kuoppala > ---

[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for shmem helper untangling

2020-05-11 Thread Patchwork
== Series Details == Series: shmem helper untangling URL : https://patchwork.freedesktop.org/series/77146/ State : warning == Summary == $ dim checkpatch origin/drm-tip 28f8f1b12090 drm/msm: Don't call dma_buf_vunmap without _vmap -:47: WARNING:NO_AUTHOR_SIGN_OFF: Missing Signed-off-by: line

Re: [Intel-gfx] [PATCH 04/20] drm/i915: Mark the addition of the initial-breadcrumb in the request

2020-05-11 Thread Chris Wilson
Quoting Mika Kuoppala (2020-05-11 12:21:55) > Chris Wilson writes: > > > The initial-breadcrumb is used to mark the end of the awaiting and the > > beginning of the user payload. We verify that we do not start the user > > payload before all signaler are completed, checking our semaphore setup >

Re: [Intel-gfx] [PATCH 04/20] drm/i915: Mark the addition of the initial-breadcrumb in the request

2020-05-11 Thread Mika Kuoppala
Chris Wilson writes: > The initial-breadcrumb is used to mark the end of the awaiting and the > beginning of the user payload. We verify that we do not start the user > payload before all signaler are completed, checking our semaphore setup > by looking for the initial breadcrumb being written

Re: [Intel-gfx] [PATCH 5/9] drm/udl: Don't call get/put_pages on imported dma-buf

2020-05-11 Thread Thomas Zimmermann
Hi Am 11.05.20 um 11:35 schrieb Daniel Vetter: > There's no direct harm, because for the shmem helpers these are noops > on imported buffers. The trouble is in the locks these take - I want > to change dma_buf_vmap locking, and so need to make sure that we only > ever take certain locks on one

Re: [Intel-gfx] [PATCH 3/9] drm/doc: Some polish for shmem helpers

2020-05-11 Thread Thomas Zimmermann
Am 11.05.20 um 11:35 schrieb Daniel Vetter: > - Move the shmem helper section to the drm-mm.rst file, next to the > vram helpers. Makes a lot more sense there with the now wider scope. > Also, that's where the all the other backing storage stuff resides. > It's just the framebuffer helpers

Re: [Intel-gfx] [RFC] GPU-bound energy efficiency improvements for the intel_pstate driver (v2.99)

2020-05-11 Thread Peter Zijlstra
On Mon, Apr 27, 2020 at 08:22:47PM -0700, Francisco Jerez wrote: > This addresses the technical concerns people brought up about my > previous v2 revision of this series. Other than a few bug fixes, the > only major change relative to v2 is that the controller is now exposed > as a new CPUFREQ

[Intel-gfx] [PATCH] drm/i915: Make intel_timeline_init static

2020-05-11 Thread Mika Kuoppala
Commit fb5970da1b42 ("drm/i915/gt: Use the kernel_context to measure the breadcrumb size") removed the last external user for intel_timeline_init. Mark it static. Signed-off-by: Mika Kuoppala Reviewed-by: Chris Wilson --- drivers/gpu/drm/i915/gt/intel_timeline.c | 8

[Intel-gfx] [PATCH i-g-t v3] i915/gem_ringfill: Do a basic pass over all engines simultaneously

2020-05-11 Thread Chris Wilson
Change the basic pre-mergetest to do a single pass over all engines simultaneously. This should take no longer than checking a single engine, while providing just the right amount of stress regardless of machine size. v2: Move around the quiescence and requires to avoid calling them from the

Re: [Intel-gfx] [PATCH 2/3] dma-fence: use default wait function for mock fences

2020-05-11 Thread Daniel Vetter
On Mon, May 11, 2020 at 10:41:03AM +0100, Chris Wilson wrote: > Quoting Daniel Vetter (2020-05-11 10:11:41) > > No need to micro-optimize when we're waiting in a mocked object ... > > It's setting up the expected return values for the test. Drat, I suspected something like that but didn't spot it.

Re: [Intel-gfx] [igt-dev] [PATCH i-g-t] i915/gem_ringfill: Do a basic pass over all engines simultaneously

2020-05-11 Thread Petri Latvala
On Mon, May 11, 2020 at 10:53:58AM +0100, Chris Wilson wrote: > Quoting Petri Latvala (2020-05-11 10:49:10) > > On Mon, May 11, 2020 at 10:39:24AM +0100, Chris Wilson wrote: > > > Change the basic pre-mergetest to do a single pass over all engines > > > simultaneously. This should take no longer

Re: [Intel-gfx] [PATCH i-g-t] i915/gem_ringfill: Do a basic pass over all engines simultaneously

2020-05-11 Thread Chris Wilson
Quoting Petri Latvala (2020-05-11 10:49:10) > On Mon, May 11, 2020 at 10:39:24AM +0100, Chris Wilson wrote: > > Change the basic pre-mergetest to do a single pass over all engines > > simultaneously. This should take no longer than checking a single > > engine, while providing just the right

Re: [Intel-gfx] [PATCH i-g-t] i915/gem_ringfill: Do a basic pass over all engines simultaneously

2020-05-11 Thread Petri Latvala
On Mon, May 11, 2020 at 10:39:24AM +0100, Chris Wilson wrote: > Change the basic pre-mergetest to do a single pass over all engines > simultaneously. This should take no longer than checking a single > engine, while providing just the right amount of stress regardless of > machine size. > > v2:

[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915/gt: Restore pristine per-device user messages

2020-05-11 Thread Patchwork
== Series Details == Series: drm/i915/gt: Restore pristine per-device user messages URL : https://patchwork.freedesktop.org/series/77145/ State : warning == Summary == $ dim checkpatch origin/drm-tip 9e3c95d954fd drm/i915/gt: Restore pristine per-device user messages -:202:

Re: [Intel-gfx] [PATCH 3/3] misc/habalabs: don't set default fence_ops->wait

2020-05-11 Thread Oded Gabbay
And just FYI, the driver was written internally in 2016-17, when the dma-buf module didn't check the .wait ops before calling it, and that's why the initialization of the default wait was there in the first place. I should have removed it when I upstreamed it, but it slipped past my review. Thanks, Oded

[Intel-gfx] ✗ Fi.CI.BAT: failure for series starting with [1/3] drm/writeback: don't set fence->ops to default

2020-05-11 Thread Patchwork
== Series Details == Series: series starting with [1/3] drm/writeback: don't set fence->ops to default URL : https://patchwork.freedesktop.org/series/77144/ State : failure == Summary == CI Bug Log - changes from CI_DRM_8461 -> Patchwork_17621

Re: [Intel-gfx] [PATCH 2/3] dma-fence: use default wait function for mock fences

2020-05-11 Thread Chris Wilson
Quoting Daniel Vetter (2020-05-11 10:11:41) > No need to micro-optimize when we're waiting in a mocked object ... It's setting up the expected return values for the test. -Chris ___ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org

[Intel-gfx] [PATCH i-g-t] i915/gem_ringfill: Do a basic pass over all engines simultaneously

2020-05-11 Thread Chris Wilson
Change the basic pre-mergetest to do a single pass over all engines simultaneously. This should take no longer than checking a single engine, while providing just the right amount of stress regardless of machine size. v2: Move around the quiescence and requires to avoid calling them from the

Re: [Intel-gfx] [igt-dev] [PATCH i-g-t] i915/gem_ringfill: Do a basic pass over all engines simultaneously

2020-05-11 Thread Chris Wilson
Quoting Petri Latvala (2020-05-11 10:31:49) > On Mon, May 11, 2020 at 09:21:41AM +0100, Chris Wilson wrote: > > Change the basic pre-mergetest to do a single pass over all engines > > simultaneously. This should take no longer than checking a single > > engine, while providing just the right

Re: [Intel-gfx] [PATCH 3/3] misc/habalabs: don't set default fence_ops->wait

2020-05-11 Thread Oded Gabbay
On Mon, May 11, 2020 at 12:11 PM Daniel Vetter wrote: > > It's the default. Thanks for catching that. > > Also so much for "we're not going to tell the graphics people how to > review their code", dma_fence is a pretty core piece of gpu driver > infrastructure. And it's very much uapi relevant,

[Intel-gfx] [PATCH 7/9] drm/shmem-helpers: Redirect mmap for imported dma-buf

2020-05-11 Thread Daniel Vetter
Currently this seems to work by converting the sgt into a pages array, and then treating it like a native object. Do the right thing and redirect mmap to the exporter. With this nothing is calling get_pages anymore on imported dma-buf, and we can start to remove the use of the ->pages array for

[Intel-gfx] [PATCH 1/9] drm/msm: Don't call dma_buf_vunmap without _vmap

2020-05-11 Thread Daniel Vetter
I honestly don't exactly understand what's going on here, but the current code is wrong for sure: It calls dma_buf_vunmap without ever calling dma_buf_vmap. What I'm not sure about is whether the WARN_ON is correct: - msm imports dma-buf using drm_prime_sg_to_page_addr_arrays. Which is a pretty

[Intel-gfx] [PATCH 4/9] drm/virtio: Call the right shmem helpers

2020-05-11 Thread Daniel Vetter
drm_gem_shmem_get_sg_table is meant to implement obj->funcs->get_sg_table, for prime exporting. The one we want is drm_gem_shmem_get_pages_sgt, which also handles imported dma-buf, not just native objects. v2: Rebase, this stuff moved around in commit 2f2aa13724d56829d910b2fa8e80c502d388f106

[Intel-gfx] [PATCH 8/9] drm/shmem-helpers: Ensure get_pages is not called on imported dma-buf

2020-05-11 Thread Daniel Vetter
Just a bit of light paranoia. Also sprinkle this check over drm_gem_shmem_get_sg_table, which should only be called when exporting, same for the pin/unpin functions, on which it relies to work correctly. Cc: Gerd Hoffmann Cc: Rob Herring Cc: Noralf Trønnes Signed-off-by: Daniel Vetter ---
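
A small sketch of the guard described, using the import_attach pointer that drm_gem_object carries for imported dma-bufs; the function name is made up, and whether the real patch warns, errors out, or both is not visible in this excerpt:

static int shmem_get_pages_checked(struct drm_gem_object *obj)
{
	/* get_pages-style paths must not run on an imported dma-buf:
	 * the pages belong to the exporter.
	 */
	if (WARN_ON(obj->import_attach))
		return -EINVAL;

	/* ... native shmem page allocation would continue here ... */
	return 0;
}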

[Intel-gfx] [PATCH 6/9] drm/shmem-helpers: Don't call get/put_pages on imported dma-buf in vmap

2020-05-11 Thread Daniel Vetter
There's no direct harm, because for the shmem helpers these are noops on imported buffers. The trouble is in the locks these take - I want to change dma_buf_vmap locking, and so need to make sure that we only ever take certain locks on one side of the dma-buf interface: Either for exporters, or

[Intel-gfx] [PATCH 3/9] drm/doc: Some polish for shmem helpers

2020-05-11 Thread Daniel Vetter
- Move the shmem helper section to the drm-mm.rst file, next to the vram helpers. Makes a lot more sense there with the now wider scope. Also, that's where the all the other backing storage stuff resides. It's just the framebuffer helpers that should be in the kms helper section. - Try to

[Intel-gfx] [PATCH 5/9] drm/udl: Don't call get/put_pages on imported dma-buf

2020-05-11 Thread Daniel Vetter
There's no direct harm, because for the shmem helpers these are noops on imported buffers. The trouble is in the locks these take - I want to change dma_buf_vmap locking, and so need to make sure that we only ever take certain locks on one side of the dma-buf interface: Either for exporters, or

[Intel-gfx] [PATCH 9/9] drm/shmem-helpers: Simplify dma-buf importing

2020-05-11 Thread Daniel Vetter
- Ditch the ->pages array - Make it a private gem bo, which means no shmem object, which means fireworks if anyone calls drm_gem_object_get_pages. But we've just made sure that's all covered. Cc: Gerd Hoffmann Cc: Rob Herring Cc: Noralf Trønnes Signed-off-by: Daniel Vetter ---

[Intel-gfx] [PATCH 0/9] shmem helper untangling

2020-05-11 Thread Daniel Vetter
Hi all, I've started this a while ago, with the idea to move shmem helpers over to dma_resv_lock. Big prep work for that was to untangle the layering between functions called by drivers, and functions used to implement drm_gem_object_funcs. I didn't ever get to the locking part, but I think the
