[Intel-gfx] ✓ Fi.CI.IGT: success for dma-fence: add might_sleep annotation to _wait()

2020-05-19 Thread Patchwork
== Series Details == Series: dma-fence: add might_sleep annotation to _wait() URL : https://patchwork.freedesktop.org/series/77417/ State : success == Summary == CI Bug Log - changes from CI_DRM_8505_full -> Patchwork_17713_full Summary

[Intel-gfx] ✓ Fi.CI.IGT: success for series starting with [CI,1/3] drm/i915/selftests: Add tests for timeslicing virtual engines

2020-05-19 Thread Patchwork
== Series Details == Series: series starting with [CI,1/3] drm/i915/selftests: Add tests for timeslicing virtual engines URL : https://patchwork.freedesktop.org/series/77414/ State : success == Summary == CI Bug Log - changes from CI_DRM_8505_full -> Patchwork_17712_full

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/gt: Trace the CS interrupt

2020-05-19 Thread Patchwork
== Series Details == Series: drm/i915/gt: Trace the CS interrupt URL : https://patchwork.freedesktop.org/series/77441/ State : success == Summary == CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17720 Summary --- **SUCCESS**

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/hdcp: Add additional R0' wait

2020-05-19 Thread Patchwork
== Series Details == Series: drm/i915/hdcp: Add additional R0' wait URL : https://patchwork.freedesktop.org/series/77439/ State : success == Summary == CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17719 Summary --- **SUCCESS**

[Intel-gfx] ✗ Fi.CI.BAT: failure for Consider DBuf bandwidth when calculating CDCLK (rev15)

2020-05-19 Thread Patchwork
== Series Details == Series: Consider DBuf bandwidth when calculating CDCLK (rev15) URL : https://patchwork.freedesktop.org/series/74739/ State : failure == Summary == CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17718 Summary

Re: [Intel-gfx] [PATCH v2 02/22] x86/gpu: add RKL stolen memory support

2020-05-19 Thread Lucas De Marchi
Cc'ing x...@kernel.org and maintainers On Wed, May 6, 2020 at 4:52 AM Srivatsa, Anusha wrote: > > > > > -Original Message- > > From: Intel-gfx On Behalf Of Matt > > Roper > > Sent: Tuesday, May 5, 2020 4:22 AM > > To: intel-gfx@lists.freedesktop.org > > Cc: De Marchi, Lucas > >

Re: [Intel-gfx] [PATCH v9 6/7] drm/i915: Adjust CDCLK accordingly to our DBuf bw needs

2020-05-19 Thread Manasi Navare
On Wed, May 20, 2020 at 12:25:25AM +0300, Stanislav Lisovskiy wrote: > According to BSpec max BW per slice is calculated using formula > Max BW = CDCLK * 64. Currently when calculating min CDCLK we > account only per plane requirements, however in order to avoid > FIFO underruns we need to

[Intel-gfx] ✗ Fi.CI.SPARSE: warning for Consider DBuf bandwidth when calculating CDCLK (rev15)

2020-05-19 Thread Patchwork
== Series Details == Series: Consider DBuf bandwidth when calculating CDCLK (rev15) URL : https://patchwork.freedesktop.org/series/74739/ State : warning == Summary == $ dim sparse --fast origin/drm-tip Sparse version: v0.6.0 Fast mode used, each commit won't be checked separately. -

[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Consider DBuf bandwidth when calculating CDCLK (rev15)

2020-05-19 Thread Patchwork
== Series Details == Series: Consider DBuf bandwidth when calculating CDCLK (rev15) URL : https://patchwork.freedesktop.org/series/74739/ State : warning == Summary == $ dim checkpatch origin/drm-tip 42922a1cf4d9 drm/i915: Decouple cdclk calculation from modeset checks a2e2a5f43cd7 drm/i915:

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC (rev2)

2020-05-19 Thread Patchwork
== Series Details == Series: drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC (rev2) URL : https://patchwork.freedesktop.org/series/77382/ State : success == Summary == CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17717 Summary

[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC (rev2)

2020-05-19 Thread Patchwork
== Series Details == Series: drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC (rev2) URL : https://patchwork.freedesktop.org/series/77382/ State : warning == Summary == $ dim checkpatch origin/drm-tip e71c461a0da4 drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC -:26:

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/ehl: Wa_22010271021 (rev2)

2020-05-19 Thread Patchwork
== Series Details == Series: drm/i915/ehl: Wa_22010271021 (rev2) URL : https://patchwork.freedesktop.org/series/77428/ State : success == Summary == CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17716 Summary --- **SUCCESS**

[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915/ehl: Wa_22010271021 (rev2)

2020-05-19 Thread Patchwork
== Series Details == Series: drm/i915/ehl: Wa_22010271021 (rev2) URL : https://patchwork.freedesktop.org/series/77428/ State : warning == Summary == $ dim checkpatch origin/drm-tip 60adbb75a3d8 drm/i915/ehl: Wa_22010271021 -:12: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/gem: Suppress some random warnings

2020-05-19 Thread Patchwork
== Series Details == Series: drm/i915/gem: Suppress some random warnings URL : https://patchwork.freedesktop.org/series/77431/ State : success == Summary == CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17715 Summary ---

[Intel-gfx] [CI] drm/i915/gt: Trace the CS interrupt

2020-05-19 Thread Chris Wilson
We have traces for the semaphore and the error, but not the far more frequent CS interrupts. This is likely to be too much, but for the purpose of live_unlite_preempt it may answer a question or two. Signed-off-by: Chris Wilson --- drivers/gpu/drm/i915/gt/intel_gt_irq.c | 6 +- 1 file

[Intel-gfx] [PATCH] drm/i915/hdcp: Add additional R0' wait

2020-05-19 Thread Sean Paul
From: Sean Paul We're seeing some R0' mismatches in the field, particularly with repeaters. I'm guessing the (already lenient) 300ms wait time isn't enough for some setups. So add an additional wait when R0' is mismatched. Signed-off-by: Sean Paul --- drivers/gpu/drm/i915/display/intel_hdcp.c
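The change being described boils down to "retry once with a longer grace period". A minimal sketch of that idea only — every name here (struct, helpers, the extra-wait constant) is illustrative, not the actual intel_hdcp code:

static int wait_for_r0_prime(struct hdcp_port *port)	/* hypothetical type */
{
	u32 r0_prime;
	int ret;

	/* First attempt, after the (already lenient) 300ms spec wait. */
	ret = read_r0_prime(port, &r0_prime);		/* hypothetical helper */
	if (ret)
		return ret;
	if (r0_matches(port, r0_prime))			/* hypothetical helper */
		return 0;

	/* Some repeaters are slow: grant one extra wait and re-read. */
	msleep(EXTRA_R0_WAIT_MS);			/* assumed constant */
	ret = read_r0_prime(port, &r0_prime);
	if (ret)
		return ret;

	return r0_matches(port, r0_prime) ? 0 : -ETIMEDOUT;
}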

[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915/gem: Suppress some random warnings

2020-05-19 Thread Patchwork
== Series Details == Series: drm/i915/gem: Suppress some random warnings URL : https://patchwork.freedesktop.org/series/77431/ State : warning == Summary == $ dim checkpatch origin/drm-tip c47e2d0db533 drm/i915/gem: Suppress some random warnings -:62: CHECK:COMPARISON_TO_NULL: Comparison to

Re: [Intel-gfx] [PATCH] dma-fence: add might_sleep annotation to _wait()

2020-05-19 Thread Chris Wilson
Quoting Daniel Vetter (2020-05-19 14:27:56) > Do it unconditionally, there's a separate peek function with > dma_fence_is_signalled() which can be called from atomic context. > > v2: Consensus calls for an unconditional might_sleep (Chris, > Christian) > > Full audit: > - dma-fence.h: Uses

[Intel-gfx] [PATCH v9 6/7] drm/i915: Adjust CDCLK accordingly to our DBuf bw needs

2020-05-19 Thread Stanislav Lisovskiy
According to BSpec, max BW per slice is calculated using the formula Max BW = CDCLK * 64. Currently, when calculating min CDCLK, we account only for per-plane requirements; however, in order to avoid FIFO underruns we need to estimate accumulated BW consumed by all planes (ddb entries, basically) residing on
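The quoted formula gives a direct lower bound: if Max BW per slice = CDCLK * 64, then the CDCLK floor for a slice is the accumulated bandwidth of all planes on that slice divided by 64. A sketch of that arithmetic only (not the i915 implementation; units follow the formula as quoted):

static unsigned int min_cdclk_for_dbuf_slice(const unsigned int *plane_bw,
					     unsigned int nplanes)
{
	unsigned int total_bw = 0;
	unsigned int i;

	/* Sum the bandwidth of every plane whose ddb entry sits in this slice. */
	for (i = 0; i < nplanes; i++)
		total_bw += plane_bw[i];

	/* Max BW = CDCLK * 64  =>  CDCLK >= total BW / 64 */
	return DIV_ROUND_UP(total_bw, 64);
}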

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915: Neuter virtual rq->engine on retire

2020-05-19 Thread Patchwork
== Series Details == Series: drm/i915: Neuter virtual rq->engine on retire URL : https://patchwork.freedesktop.org/series/77425/ State : success == Summary == CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17714 Summary ---

[Intel-gfx] [PATCH i-g-t] i915: Add gem_exec_endless

2020-05-19 Thread Chris Wilson
Start our preparations for guaranteeing endless execution. First, we just want to estimate the direct userspace dispatch overhead of running an endless chain of batch buffers. The legacy binding process here will be replaced by async VM_BIND, but for the moment this suffices to construct the GTT
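The usual way IGT builds such an endless chain is with batches whose final instruction is an MI_BATCH_BUFFER_START jumping on to the next batch (or back to itself), which userspace later rewrites into MI_BATCH_BUFFER_END so the chain can retire. A sketch of that gen8+ idiom, not the gem_exec_endless code itself:

#include <stdint.h>

#define MI_BATCH_BUFFER_END	(0xa << 23)
#define MI_BATCH_BUFFER_START	(0x31 << 23)

/* Emit the tail of a batch: jump on to the next link in the chain. */
static uint32_t *emit_chain_link(uint32_t *cs, uint64_t next_batch_addr)
{
	*cs++ = MI_BATCH_BUFFER_START | 1 << 8 | 1;	/* gen8+ 3-dword form, as in IGT spin batches */
	*cs++ = (uint32_t)next_batch_addr;		/* address, low 32 bits */
	*cs++ = (uint32_t)(next_batch_addr >> 32);	/* address, high 32 bits */
	return cs;
}

/* Terminate the chain so the GPU can finally retire it. */
static void terminate_chain(uint32_t *jump)
{
	*jump = MI_BATCH_BUFFER_END;
}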

[Intel-gfx] [PATCH v2] drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC

2020-05-19 Thread Swathi Dhanavanthri
This is a permanent w/a for JSL/EHL. It is to be applied to the PCH types on JSL/EHL, i.e. JSP/MCC. Bspec: 52888 v2: Fixed the wrong usage of logical OR (Ville) Signed-off-by: Swathi Dhanavanthri --- drivers/gpu/drm/i915/i915_irq.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff

Re: [Intel-gfx] [PATCH] drm/i915/ehl: Wa_22010271021

2020-05-19 Thread Dhanavanthri, Swathi
Maybe we can add JSL to the comment too. Other than that looks good to me. Reviewed-by: Swathi Dhanavanthri -Original Message- From: Intel-gfx On Behalf Of Matt Atwood Sent: Tuesday, May 19, 2020 9:26 AM To: intel-gfx@lists.freedesktop.org Subject: [Intel-gfx] [PATCH] drm/i915/ehl:

[Intel-gfx] [PATCH] drm/i915/gem: Suppress some random warnings

2020-05-19 Thread Chris Wilson
Leave the error propagation in place, but limit the warnings to only show up in CI if the unlikely errors are hit. Signed-off-by: Chris Wilson --- drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 3 +-- drivers/gpu/drm/i915/gem/i915_gem_phys.c | 3 +--
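A plausible shape for this kind of change, assuming the i915 GEM_WARN_ON() helper (which only produces a backtrace in CONFIG_DRM_I915_DEBUG_GEM builds, i.e. in CI); the surrounding names are illustrative:

static int do_operation(struct thing *t)		/* hypothetical */
{
	int err = lower_level_op(t);			/* hypothetical callee */

	/*
	 * before: if (WARN_ON(err)) — screams on every kernel;
	 * after: the error is still propagated, but the backtrace
	 * only fires in debug/CI builds.
	 */
	if (GEM_WARN_ON(err))
		return err;

	return 0;
}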

[Intel-gfx] ✓ Fi.CI.BAT: success for dma-fence: add might_sleep annotation to _wait()

2020-05-19 Thread Patchwork
== Series Details == Series: dma-fence: add might_sleep annotation to _wait() URL : https://patchwork.freedesktop.org/series/77417/ State : success == Summary == CI Bug Log - changes from CI_DRM_8505 -> Patchwork_17713 Summary ---

[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for dma-fence: add might_sleep annotation to _wait()

2020-05-19 Thread Patchwork
== Series Details == Series: dma-fence: add might_sleep annotation to _wait() URL : https://patchwork.freedesktop.org/series/77417/ State : warning == Summary == $ dim checkpatch origin/drm-tip aa2f5c93ddcf dma-fence: add might_sleep annotation to _wait() -:16: WARNING:TYPO_SPELLING: 'TIMOUT'

[Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [CI,1/3] drm/i915/selftests: Add tests for timeslicing virtual engines

2020-05-19 Thread Patchwork
== Series Details == Series: series starting with [CI,1/3] drm/i915/selftests: Add tests for timeslicing virtual engines URL : https://patchwork.freedesktop.org/series/77414/ State : success == Summary == CI Bug Log - changes from CI_DRM_8505 -> Patchwork_17712

[Intel-gfx] ✗ Fi.CI.BUILD: failure for Consider DBuf bandwidth when calculating CDCLK (rev14)

2020-05-19 Thread Patchwork
== Series Details == Series: Consider DBuf bandwidth when calculating CDCLK (rev14) URL : https://patchwork.freedesktop.org/series/74739/ State : failure == Summary == Applying: drm/i915: Decouple cdclk calculation from modeset checks Applying: drm/i915: Extract cdclk requirements checking to

Re: [Intel-gfx] [PATCH] drm/i915: Neuter virtual rq->engine on retire

2020-05-19 Thread Chris Wilson
Quoting Chris Wilson (2020-05-19 18:00:04) > Quoting Chris Wilson (2020-05-19 15:51:31) > > We do not hold a reference to rq->engine, and so if it is a virtual > > engine it may have already been freed by the time we free the request. > > The last reference we hold on the virtual engine is via

[Intel-gfx] ✗ Fi.CI.BAT: failure for drm/i915/selftests: Measure dispatch latency (rev10)

2020-05-19 Thread Patchwork
== Series Details == Series: drm/i915/selftests: Measure dispatch latency (rev10) URL : https://patchwork.freedesktop.org/series/77308/ State : failure == Summary == Applying: drm/i915/selftests: Measure dispatch latency Using index info to reconstruct a base tree... M

Re: [Intel-gfx] [PATCH] drm/i915: Neuter virtual rq->engine on retire

2020-05-19 Thread Chris Wilson
Quoting Chris Wilson (2020-05-19 15:51:31) > We do not hold a reference to rq->engine, and so if it is a virtual > engine it may have already been freed by the time we free the request. > The last reference we hold on the virtual engine is via rq->context, > and that is released on request

[Intel-gfx] [PATCH] drm/i915/ehl: Wa_22010271021

2020-05-19 Thread Matt Atwood
Reflect recent Bspec changes. Bspec: 33451 Signed-off-by: Matt Atwood --- drivers/gpu/drm/i915/gt/intel_workarounds.c | 6 ++ 1 file changed, 6 insertions(+) diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c index

Re: [Intel-gfx] [PATCH v10] drm/i915/dsb: Pre allocate and late cleanup of cmd buffer

2020-05-19 Thread Maarten Lankhorst
On 18-05-2020 14:12, Animesh Manna wrote: > Pre-allocate command buffer in atomic_commit using intel_dsb_prepare > function which also includes pinning and map in cpu domain. > > No functional change in dsb write/commit functions. > > Now dsb get/put function is removed and ref-count mechanism

[Intel-gfx] [PATCH] drm/i915: Neuter virtual rq->engine on retire

2020-05-19 Thread Chris Wilson
We do not hold a reference to rq->engine, and so if it is a virtual engine it may have already been freed by the time we free the request. The last reference we hold on the virtual engine is via rq->context, and that is released on request retirement. So if we find ourselves retiring a virtual

Re: [Intel-gfx] [PATCH 04/12] drm/i915/selftests: Check for an initial-breadcrumb in wait_for_submit()

2020-05-19 Thread Mika Kuoppala
Chris Wilson writes: > When we look at i915_request_is_started() we must be careful in case we > are using a request that does not have the initial-breadcrumb and > instead the is-started is being compared against the end of the previous > request. This will make wait_for_submit() declare that a

Re: [Intel-gfx] [PATCH] drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC

2020-05-19 Thread kbuild test robot
the system. BTW, we also suggest to use '--base' option to specify the base tree in git format-patch, please see https://stackoverflow.com/a/37406982] url: https://github.com/0day-ci/linux/commits/Swathi-Dhanavanthri/drm-i915-ehl-Extend-w-a-14010685332-to-JSP-MCC/20200519-184947 base: git

Re: [Intel-gfx] [PATCH v2 3/9] drm/i915/display/sdvo: Prefer drm_WARN* over WARN*

2020-05-19 Thread Laxminarayan Bharadiya, Pankaj
> -Original Message- > From: Jani Nikula > Sent: 19 May 2020 19:12 > To: Laxminarayan Bharadiya, Pankaj > ; dan...@ffwll.ch; intel- > g...@lists.freedesktop.org; dri-de...@lists.freedesktop.org; Joonas Lahtinen > ; Vivi, Rodrigo ; > David Airlie ; Ville Syrjälä > ; Chris > Wilson ;

Re: [Intel-gfx] [PATCH 02/12] drm/i915/selftests: Change priority overflow detection

2020-05-19 Thread Mika Kuoppala
Chris Wilson writes: > Check for integer overflow in the priority chain, rather than against a > type-constricted max-priority check. > > Signed-off-by: Chris Wilson Reviewed-by: Mika Kuoppala > --- > drivers/gpu/drm/i915/gt/selftest_lrc.c | 6 +++--- > 1 file changed, 3 insertions(+), 3

Re: [Intel-gfx] [PATCH 03/12] drm/i915/selftests: Restore to default heartbeat

2020-05-19 Thread Mika Kuoppala
Chris Wilson writes: > Since we temporarily disable the heartbeat and restore back to the > default value, we can use the stored defaults on the engine and avoid > using a local. > > Signed-off-by: Chris Wilson > --- Reviewed-by: Mika Kuoppala > drivers/gpu/drm/i915/gt/selftest_hangcheck.c

Re: [Intel-gfx] [PATCH v2 3/9] drm/i915/display/sdvo: Prefer drm_WARN* over WARN*

2020-05-19 Thread Jani Nikula
On Fri, 08 May 2020, "Laxminarayan Bharadiya, Pankaj" wrote: >> -Original Message- >> From: Jani Nikula >> Sent: 08 May 2020 12:19 >> To: Laxminarayan Bharadiya, Pankaj >> ; dan...@ffwll.ch; intel- >> g...@lists.freedesktop.org; dri-de...@lists.freedesktop.org; Joonas Lahtinen >> ;

Re: [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule

2020-05-19 Thread Mika Kuoppala
Chris Wilson writes: s/supressing/suppressing > We recorded the execlists->queue_priority_hint update for the inflight > request without kicking the tasklet. The next submitted request then > failed to be scheduled as it had a lower priority than the hint, leaving > the HW running with only

[Intel-gfx] [PATCH] dma-fence: add might_sleep annotation to _wait()

2020-05-19 Thread Daniel Vetter
Do it unconditionally, there's a separate peek function with dma_fence_is_signalled() which can be called from atomic context. v2: Consensus calls for an unconditional might_sleep (Chris, Christian) Full audit: - dma-fence.h: Uses MAX_SCHEDULE_TIMOUT, good chance this sleeps - dma-resv.c: Timeout
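The shape of the change, as a sketch against upstream dma_fence_wait_timeout() (trace hooks and surrounding code abbreviated, so treat this as the gist rather than the exact diff):

signed long
dma_fence_wait_timeout(struct dma_fence *fence, bool intr, signed long timeout)
{
	signed long ret;

	might_sleep();	/* new: unconditional, even if the fence is already signalled */

	if (WARN_ON(timeout < 0))
		return -EINVAL;

	if (fence->ops->wait)
		ret = fence->ops->wait(fence, intr, timeout);
	else
		ret = dma_fence_default_wait(fence, intr, timeout);

	return ret;
}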

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/selftests: Measure dispatch latency (rev9)

2020-05-19 Thread Patchwork
== Series Details == Series: drm/i915/selftests: Measure dispatch latency (rev9) URL : https://patchwork.freedesktop.org/series/77308/ State : success == Summary == CI Bug Log - changes from CI_DRM_8502 -> Patchwork_17709 Summary ---

[Intel-gfx] [CI 2/3] drm/i915/gt: Kick virtual siblings on timeslice out

2020-05-19 Thread Chris Wilson
If we decide to timeslice out the current virtual request, we will unsubmit it while it is still busy (ve->context.inflight == sibling[0]). If the virtual tasklet and then the other sibling tasklets run before we completely schedule out the active virtual request for the preemption, those other

[Intel-gfx] [CI 1/3] drm/i915/selftests: Add tests for timeslicing virtual engines

2020-05-19 Thread Chris Wilson
Make sure that we can execute a virtual request on an already busy engine, and conversely that we can execute a normal request if the engines are already fully occupied by virtual requests. Signed-off-by: Chris Wilson Reviewed-by: Tvrtko Ursulin --- drivers/gpu/drm/i915/gt/selftest_lrc.c | 200

[Intel-gfx] [CI 3/3] drm/i915/gt: Incorporate the virtual engine into timeslicing

2020-05-19 Thread Chris Wilson
It was quite the oversight to only factor in the normal queue to decide the timeslicing switch priority. By leaving out the next virtual request from the priority decision, we would not timeslice the current engine if there was an available virtual request. Testcase: igt/gem_exec_balancer/sliced
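Conceptually, the fix is that the priority used to decide whether to timeslice must be the maximum over both queues. A sketch with hypothetical helpers, not the execlists code:

static int timeslice_switch_priority(const struct intel_engine_execlists *el)
{
	int queue = queue_top_priority(el);	/* head of the normal priority rb-tree (hypothetical helper) */
	int virt = virtual_top_priority(el);	/* head of the virtual-engine queue (hypothetical helper) */

	/* Previously only 'queue' was considered, hiding runnable virtual requests. */
	return max(queue, virt);
}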

[Intel-gfx] [PATCH v9 4/7] drm/i915: Plane configuration affects CDCLK in Gen11+

2020-05-19 Thread Stanislav Lisovskiy
From: Stanislav Lisovskiy So let's support it. Reviewed-by: Manasi Navare Signed-off-by: Stanislav Lisovskiy --- drivers/gpu/drm/i915/display/intel_display.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/display/intel_display.c

[Intel-gfx] [PATCH v9 2/7] drm/i915: Extract cdclk requirements checking to separate function

2020-05-19 Thread Stanislav Lisovskiy
In Gen11+, whenever we might exceed DBuf bandwidth, we might need to recalculate CDCLK, which DBuf bandwidth is scaled with. Total DBuf bw used might change based on particular plane needs. Thus, to calculate if cdclk needs to be changed, it is not enough anymore to check plane configuration and plane

[Intel-gfx] [PATCH v9 6/7] drm/i915: Adjust CDCLK accordingly to our DBuf bw needs

2020-05-19 Thread Stanislav Lisovskiy
According to BSpec, max BW per slice is calculated using the formula Max BW = CDCLK * 64. Currently, when calculating min CDCLK, we account only for per-plane requirements; however, in order to avoid FIFO underruns we need to estimate accumulated BW consumed by all planes (ddb entries, basically) residing on

[Intel-gfx] [PATCH v9 0/7] Consider DBuf bandwidth when calculating CDCLK

2020-05-19 Thread Stanislav Lisovskiy
We need to calculate cdclk after watermarks/ddb have been calculated, as with recent hw CDCLK needs to be adjusted according to DBuf requirements, which is not possible with the current code organization. Setting CDCLK according to DBuf BW requirements and not just rejecting if it doesn't satisfy BW

[Intel-gfx] [PATCH v9 1/7] drm/i915: Decouple cdclk calculation from modeset checks

2020-05-19 Thread Stanislav Lisovskiy
We need to calculate cdclk after watermarks/ddb have been calculated, as with recent hw CDCLK needs to be adjusted according to DBuf requirements, which is not possible with the current code organization. Setting CDCLK according to DBuf BW requirements and not just rejecting if it doesn't satisfy BW

[Intel-gfx] [PATCH v9 7/7] drm/i915: Remove unneeded hack now for CDCLK

2020-05-19 Thread Stanislav Lisovskiy
No need to bump up CDCLK now, as it is now correctly calculated, accounting for DBuf BW as BSpec says. Reviewed-by: Manasi Navare Signed-off-by: Stanislav Lisovskiy --- drivers/gpu/drm/i915/display/intel_cdclk.c | 12 1 file changed, 12 deletions(-) diff --git

[Intel-gfx] [PATCH v9 3/7] drm/i915: Check plane configuration properly

2020-05-19 Thread Stanislav Lisovskiy
From: Stanislav Lisovskiy Checking with hweight8 if the plane configuration had changed seems to be wrong, as different plane configs can result in the same hamming weight. So let's check the bitmask itself. Reviewed-by: Manasi Navare Signed-off-by: Stanislav Lisovskiy ---
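A two-line example makes the aliasing obvious: planes {0,1} and planes {0,2} both have a hamming weight of 2, so hweight8() reports "no change" even though the configuration changed. A minimal sketch of the correct check:

static bool plane_config_changed(u8 old_active, u8 new_active)
{
	/*
	 * old_active = BIT(0) | BIT(1) = 0b0011
	 * new_active = BIT(0) | BIT(2) = 0b0101
	 * hweight8() is 2 for both, yet the masks differ.
	 */
	return old_active != new_active;	/* compare the mask, not its weight */
}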

[Intel-gfx] [PATCH v9 5/7] drm/i915: Introduce for_each_dbuf_slice_in_mask macro

2020-05-19 Thread Stanislav Lisovskiy
We now quite often need to iterate only over particular dbuf slices in a mask, whether they are active or related to a particular crtc. Let's make our life a bit easier and use a macro for that. v2: - Minor code refactoring v3: - Use enum for max slices instead of macro Reviewed-by: Manasi Navare
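The generic shape of such an iterator is to walk every possible slice index and skip the ones not set in the mask. A sketch under that assumption (the actual i915 macro, slice enum and helpers differ in detail):

/* The trailing 'if (!...) {} else' avoids the dangling-else hazard. */
#define for_each_slice_in_mask(__slice, __mask, __max_slices)		\
	for ((__slice) = 0; (__slice) < (__max_slices); (__slice)++)	\
		if (!((__mask) & BIT(__slice))) {} else

static void program_active_slices(u8 slice_mask)
{
	u8 slice;

	for_each_slice_in_mask(slice, slice_mask, MAX_DBUF_SLICES)	/* assumed limit */
		program_dbuf_slice(slice);				/* hypothetical helper */
}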

[Intel-gfx] [PATCH] drm/i915/selftests: Measure dispatch latency

2020-05-19 Thread Chris Wilson
A useful metric of the system's health is how fast we can tell the GPU to do various actions, so measure our latency. v2: Refactor all the instruction building into emitters. v3: Make the error handling, if not perfect, at least consistent. Signed-off-by: Chris Wilson Cc: Mika Kuoppala Cc:

[Intel-gfx] ✗ Fi.CI.BAT: failure for drm/i915/selftests: Measure CS_TIMESTAMP (rev3)

2020-05-19 Thread Patchwork
== Series Details == Series: drm/i915/selftests: Measure CS_TIMESTAMP (rev3) URL : https://patchwork.freedesktop.org/series/77320/ State : failure == Summary == CI Bug Log - changes from CI_DRM_8502 -> Patchwork_17708 Summary ---

Re: [Intel-gfx] [PATCH] drm/i915/selftests: Measure dispatch latency

2020-05-19 Thread Chris Wilson
Quoting Mika Kuoppala (2020-05-19 13:47:31) > Chris Wilson writes: > > > A useful metric of the system's health is how fast we can tell the GPU > > to do various actions, so measure our latency. > > > > v2: Refactor all the instruction building into emitters. > > > > Signed-off-by: Chris Wilson

Re: [Intel-gfx] [PATCH] drm/i915/selftests: Measure dispatch latency

2020-05-19 Thread Mika Kuoppala
Chris Wilson writes: > A useful metric of the system's health is how fast we can tell the GPU > to do various actions, so measure our latency. > > v2: Refactor all the instruction building into emitters. > > Signed-off-by: Chris Wilson > Cc: Mika Kuoppala > Cc: Joonas Lahtinen Not much

[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915/selftests: Measure CS_TIMESTAMP (rev3)

2020-05-19 Thread Patchwork
== Series Details == Series: drm/i915/selftests: Measure CS_TIMESTAMP (rev3) URL : https://patchwork.freedesktop.org/series/77320/ State : warning == Summary == $ dim checkpatch origin/drm-tip 4686c5234501 drm/i915/selftests: Measure CS_TIMESTAMP -:68: CHECK:USLEEP_RANGE: usleep_range is

Re: [Intel-gfx] [PATCH] drm/i915/selftests: Measure CS_TIMESTAMP

2020-05-19 Thread Ville Syrjälä
On Tue, May 19, 2020 at 11:46:54AM +0100, Chris Wilson wrote: > Quoting Ville Syrjälä (2020-05-19 11:42:45) > > On Sat, May 16, 2020 at 02:31:02PM +0100, Chris Wilson wrote: > > > Count the number of CS_TIMESTAMP ticks and check that it matches our > > > expectations. > > > > Looks ok for

[Intel-gfx] [PATCH] drm/i915/selftests: Measure dispatch latency

2020-05-19 Thread Chris Wilson
A useful metric of the system's health is how fast we can tell the GPU to do various actions, so measure our latency. v2: Refactor all the instruction building into emitters. Signed-off-by: Chris Wilson Cc: Mika Kuoppala Cc: Joonas Lahtinen --- drivers/gpu/drm/i915/selftests/i915_request.c |

Re: [Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: Fix AUX power domain toggling across TypeC mode resets

2020-05-19 Thread Imre Deak
On Fri, May 15, 2020 at 08:36:31PM +, Patchwork wrote: > == Series Details == > > Series: drm/i915: Fix AUX power domain toggling across TypeC mode resets > URL : https://patchwork.freedesktop.org/series/77280/ > State : success Thanks for the review, pushed to -dinq. > > == Summary == >

Re: [Intel-gfx] [igt-dev] [PATCH i-g-t] i915: Add gem_exec_endless

2020-05-19 Thread Chris Wilson
Quoting Mika Kuoppala (2020-05-19 11:43:16) > Chris Wilson writes: > > +static void supervisor_dispatch(struct supervisor *sv, uint64_t addr) > > +{ > > + WRITE_ONCE(*sv->dispatch, 64 << 10); > > addr << 10 ? addr :) -Chris

Re: [Intel-gfx] [PATCH] drm/i915/selftests: Measure CS_TIMESTAMP

2020-05-19 Thread Chris Wilson
Quoting Ville Syrjälä (2020-05-19 11:42:45) > On Sat, May 16, 2020 at 02:31:02PM +0100, Chris Wilson wrote: > > Count the number of CS_TIMESTAMP ticks and check that it matches our > > expectations. > > Looks ok for everything except g4x/ilk. Those would need something > like >

Re: [Intel-gfx] [PATCH i-g-t] i915: Add gem_exec_endless

2020-05-19 Thread Mika Kuoppala
Chris Wilson writes: > Start our preparations for guaranteeing endless execution. > > First, we just want to estimate the 'ultra-low latency' dispatch overhead > by running an endless chain of batch buffers. The legacy binding process > here will be replaced by async VM_BIND, but for the moment

Re: [Intel-gfx] [PATCH] drm/i915/selftests: Measure CS_TIMESTAMP

2020-05-19 Thread Ville Syrjälä
On Sat, May 16, 2020 at 02:31:02PM +0100, Chris Wilson wrote: > Count the number of CS_TIMESTAMP ticks and check that it matches our > expectations. Looks ok for everything except g4x/ilk. Those would need something like https://patchwork.freedesktop.org/patch/355944/?series=74145=1 + read

[Intel-gfx] [PATCH i-g-t] i915: Add gem_exec_endless

2020-05-19 Thread Chris Wilson
Start our preparations for guaranteeing endless execution. First, we just want to estimate the 'ultra-low latency' dispatch overhead by running an endless chain of batch buffers. The legacy binding process here will be replaced by async VM_BIND, but for the moment this suffices to construct the

Re: [Intel-gfx] [PATCH 09/12] drm/i915/gt: Incorporate the virtual engine into timeslicing

2020-05-19 Thread Tvrtko Ursulin
On 19/05/2020 07:31, Chris Wilson wrote: It was quite the oversight to only factor in the normal queue to decide the timeslicing switch priority. By leaving out the next virtual request from the priority decision, we would not timeslice the current engine if there was an available virtual

Re: [Intel-gfx] [PATCH 07/12] drm/i915/selftests: Add tests for timeslicing virtual engines

2020-05-19 Thread Tvrtko Ursulin
On 19/05/2020 07:31, Chris Wilson wrote: Make sure that we can execute a virtual request on an already busy engine, and conversely that we can execute a normal request if the engines are already fully occupied by virtual requests. Signed-off-by: Chris Wilson ---

[Intel-gfx] ✗ Fi.CI.IGT: failure for series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule

2020-05-19 Thread Patchwork
== Series Details == Series: series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule URL : https://patchwork.freedesktop.org/series/77389/ State : failure == Summary == CI Bug Log - changes from CI_DRM_8498_full -> Patchwork_17707_full

Re: [Intel-gfx] [PATCH] drm/i915/gvt: Use ARRAY_SIZE for vgpu_types

2020-05-19 Thread Zhenyu Wang
On 2020.05.18 22:00:52 +0100, Chris Wilson wrote: > Quoting Aishwarya Ramakrishnan (2020-05-18 16:03:36) > > Prefer ARRAY_SIZE instead of using sizeof > > > > Fixes coccicheck warning: Use ARRAY_SIZE > > > > Signed-off-by: Aishwarya Ramakrishnan > Reviewed-by: Chris Wilson Applied, thanks!

Re: [Intel-gfx] [PATCH] drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC

2020-05-19 Thread Ville Syrjälä
On Mon, May 18, 2020 at 05:58:32PM -0700, Swathi Dhanavanthri wrote: > This is a permanent w/a for JSL/EHL.This is to be applied to the > PCH types on JSL/EHL ie JSP/MCC > Bspec: 52888 > > Signed-off-by: Swathi Dhanavanthri > --- > drivers/gpu/drm/i915/i915_irq.c | 4 ++-- > 1 file changed, 2

[Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule

2020-05-19 Thread Patchwork
== Series Details == Series: series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule URL : https://patchwork.freedesktop.org/series/77389/ State : success == Summary == CI Bug Log - changes from CI_DRM_8498 -> Patchwork_17707

[Intel-gfx] ✗ Fi.CI.SPARSE: warning for series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule

2020-05-19 Thread Patchwork
== Series Details == Series: series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule URL : https://patchwork.freedesktop.org/series/77389/ State : warning == Summary == $ dim sparse --fast origin/drm-tip Sparse version: v0.6.0 Fast mode used, each

[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule

2020-05-19 Thread Patchwork
== Series Details == Series: series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule URL : https://patchwork.freedesktop.org/series/77389/ State : warning == Summary == $ dim checkpatch origin/drm-tip 364ab8bd9968 drm/i915: Don't set queue-priority

Re: [Intel-gfx] drm/i915: device params part 1

2020-05-19 Thread Chris Wilson
Quoting Jani Nikula (2020-05-18 17:47:47) > This is the first 3 patches of [1], because apparently patch 4 breaks > the world. I've yet to pinpoint the issue, but these could move forward > in the meanwhile. It's not you, it's igt_params.c -Chris

[Intel-gfx] [PATCH 10/12] drm/i915/gt: Use virtual_engine during execlists_dequeue

2020-05-19 Thread Chris Wilson
Rather than going back and forth between the rb_node entry and the virtual_engine type, store the ve local and reuse it. As the container_of conversion from rb_node to virtual_engine requires a variable offset, performing that conversion just once shaves off a bit of code. v2: Keep a single
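The underlying pattern is simply "convert once, reuse the local". A sketch of it; the field and helper names below are illustrative rather than the exact i915 ones:

struct rb_node *rb;

for (rb = rb_first_cached(&execlists->virtual); rb; rb = rb_next(rb)) {
	/* One container_of-style conversion, with its variable offset... */
	struct virtual_engine *ve =
		rb_entry(rb, struct virtual_engine, nodes[engine->id].rb);

	/* ...then every later access goes through 've' directly. */
	if (!ve_has_ready_request(ve))		/* hypothetical helper */
		continue;
	submit_from_virtual(engine, ve);	/* hypothetical helper */
}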

[Intel-gfx] [PATCH 06/12] drm/i915: Move saturated workload detection back to the context

2020-05-19 Thread Chris Wilson
When we introduced the saturated workload detection to tell us to back off from semaphore usage [semaphores have a noticeable impact on contended bus cycles with the CPU for some heavy workloads], we first introduced it as a per-context tracker. This allows individual contexts to try and optimise

[Intel-gfx] [PATCH 03/12] drm/i915/selftests: Restore to default heartbeat

2020-05-19 Thread Chris Wilson
Since we temporarily disable the heartbeat and restore back to the default value, we can use the stored defaults on the engine and avoid using a local. Signed-off-by: Chris Wilson --- drivers/gpu/drm/i915/gt/selftest_hangcheck.c | 25 +++ drivers/gpu/drm/i915/gt/selftest_lrc.c | 67

[Intel-gfx] [PATCH 08/12] drm/i915/gt: Kick virtual siblings on timeslice out

2020-05-19 Thread Chris Wilson
If we decide to timeslice out the current virtual request, we will unsubmit it while it is still busy (ve->context.inflight == sibling[0]). If the virtual tasklet and then the other sibling tasklets run before we completely schedule out the active virtual request for the preemption, those other

[Intel-gfx] [PATCH 07/12] drm/i915/selftests: Add tests for timeslicing virtual engines

2020-05-19 Thread Chris Wilson
Make sure that we can execute a virtual request on an already busy engine, and conversely that we can execute a normal request if the engines are already fully occupied by virtual requests. Signed-off-by: Chris Wilson --- drivers/gpu/drm/i915/gt/selftest_lrc.c | 200 - 1

[Intel-gfx] [PATCH 04/12] drm/i915/selftests: Check for an initial-breadcrumb in wait_for_submit()

2020-05-19 Thread Chris Wilson
When we look at i915_request_is_started() we must be careful in case we are using a request that does not have the initial-breadcrumb and instead the is-started is being compared against the end of the previous request. This will make wait_for_submit() declare that a request has been already

[Intel-gfx] [PATCH 12/12] drm/i915/gt: Resubmit the virtual engine on schedule-out

2020-05-19 Thread Chris Wilson
Having recognised that we do not change the sibling until we schedule out, we can then defer the decision to resubmit the virtual engine from the unwind of the active queue to scheduling out of the virtual context. By keeping the unwind order intact on the local engine, we can preserve data

[Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule

2020-05-19 Thread Chris Wilson
We recorded the execlists->queue_priority_hint update for the inflight request without kicking the tasklet. The next submitted request then failed to be scheduled as it had a lower priority than the hint, leaving the HW running with only the inflight request. Fixes: 6cebcf746f3f ("drm/i915:
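The invariant being restored, sketched conceptually (not the actual diff): the hint and the tasklet kick must travel as a pair, so either record the hint and kick, or suppress both.

/*
 * Either record the hint and kick the tasklet together, or do
 * neither; a hint recorded without a kick strands later,
 * lower-priority submissions.
 */
if (need_reschedule(execlists, prio)) {		/* hypothetical predicate */
	execlists->queue_priority_hint = prio;
	tasklet_hi_schedule(&execlists->tasklet);
}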

[Intel-gfx] [PATCH 11/12] drm/i915/gt: Decouple inflight virtual engines

2020-05-19 Thread Chris Wilson
Once a virtual engine has been bound to a sibling, it will remain bound until we finally schedule out the last active request. We cannot rebind the context to a new sibling while it is inflight as the context save will conflict, hence we wait. As we cannot then use any other sibling while the

[Intel-gfx] [PATCH 05/12] drm/i915/execlists: Shortcircuit queue_prio() for no internal levels

2020-05-19 Thread Chris Wilson
If there are no internal levels and the user priority-shift is zero, we can help the compiler eliminate some dead code:
Function old new delta
start_timeslice 169 154 -15
__execlists_submission_tasklet

[Intel-gfx] [PATCH 09/12] drm/i915/gt: Incorporate the virtual engine into timeslicing

2020-05-19 Thread Chris Wilson
It was quite the oversight to only factor in the normal queue to decide the timeslicing switch priority. By leaving out the next virtual request from the priority decision, we would not timeslice the current engine if there was an available virtual request. Testcase: igt/gem_exec_balancer/sliced

[Intel-gfx] [PATCH 02/12] drm/i915/selftests: Change priority overflow detection

2020-05-19 Thread Chris Wilson
Check for integer overflow in the priority chain, rather than against a type-constricted max-priority check. Signed-off-by: Chris Wilson --- drivers/gpu/drm/i915/gt/selftest_lrc.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c

[Intel-gfx] linux-next: manual merge of the tip tree with the drm-misc tree

2020-05-19 Thread Stephen Rothwell
Hi all, Today's linux-next merge of the tip tree got a conflict in: drivers/gpu/drm/drm_dp_mst_topology.c between commit: a4292e52106b ("drm: Match drm_dp_send_clear_payload_id_table definition to declaration") from the drm-misc tree and commit: 53965dbe5396 ("drm: Make