== Series Details ==
Series: dma-fence: add might_sleep annotation to _wait()
URL : https://patchwork.freedesktop.org/series/77417/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8505_full -> Patchwork_17713_full
Summary
== Series Details ==
Series: series starting with [CI,1/3] drm/i915/selftests: Add tests for
timeslicing virtual engines
URL : https://patchwork.freedesktop.org/series/77414/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8505_full -> Patchwork_17712_full
== Series Details ==
Series: drm/i915/gt: Trace the CS interrupt
URL : https://patchwork.freedesktop.org/series/77441/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17720
Summary
---
**SUCCESS**
== Series Details ==
Series: drm/i915/hdcp: Add additional R0' wait
URL : https://patchwork.freedesktop.org/series/77439/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17719
Summary
---
**SUCCESS**
== Series Details ==
Series: Consider DBuf bandwidth when calculating CDCLK (rev15)
URL : https://patchwork.freedesktop.org/series/74739/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17718
Summary
Cc'ing x...@kernel.org and maintainers
On Wed, May 6, 2020 at 4:52 AM Srivatsa, Anusha
wrote:
>
>
>
> > -Original Message-
> > From: Intel-gfx On Behalf Of Matt
> > Roper
> > Sent: Tuesday, May 5, 2020 4:22 AM
> > To: intel-gfx@lists.freedesktop.org
> > Cc: De Marchi, Lucas
> >
On Wed, May 20, 2020 at 12:25:25AM +0300, Stanislav Lisovskiy wrote:
> According to BSpec, the max BW per slice is calculated using the
> formula Max BW = CDCLK * 64. Currently, when calculating min CDCLK,
> we account only for per-plane requirements; however, in order to
> avoid FIFO underruns we need to
== Series Details ==
Series: Consider DBuf bandwidth when calculating CDCLK (rev15)
URL : https://patchwork.freedesktop.org/series/74739/
State : warning
== Summary ==
$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.0
Fast mode used, each commit won't be checked separately.
-
== Series Details ==
Series: Consider DBuf bandwidth when calculating CDCLK (rev15)
URL : https://patchwork.freedesktop.org/series/74739/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
42922a1cf4d9 drm/i915: Decouple cdclk calculation from modeset checks
a2e2a5f43cd7 drm/i915:
== Series Details ==
Series: drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC (rev2)
URL : https://patchwork.freedesktop.org/series/77382/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17717
Summary
== Series Details ==
Series: drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC (rev2)
URL : https://patchwork.freedesktop.org/series/77382/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
e71c461a0da4 drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC
-:26:
== Series Details ==
Series: drm/i915/ehl: Wa_22010271021 (rev2)
URL : https://patchwork.freedesktop.org/series/77428/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17716
Summary
---
**SUCCESS**
== Series Details ==
Series: drm/i915/ehl: Wa_22010271021 (rev2)
URL : https://patchwork.freedesktop.org/series/77428/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
60adbb75a3d8 drm/i915/ehl: Wa_22010271021
-:12: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit
== Series Details ==
Series: drm/i915/gem: Suppress some random warnings
URL : https://patchwork.freedesktop.org/series/77431/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17715
Summary
---
We have traces for the semaphore and the error, but not the far more
frequent CS interrupts. This is likely to be too much, but for the
purpose of live_unlite_preempt it may answer a question or two.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/intel_gt_irq.c | 6 +-
1 file
From: Sean Paul
We're seeing some R0' mismatches in the field, particularly with
repeaters. I'm guessing the (already lenient) 300ms wait time isn't
enough for some setups. So add an additional wait when R0' is
mismatched.
Signed-off-by: Sean Paul
---
drivers/gpu/drm/i915/display/intel_hdcp.c
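The retry idea described above can be sketched in a few lines. This is an illustrative userspace mock, not the intel_hdcp.c code: the function names, the poll count, and the R0' value are all invented for the example, and the real code would msleep() between attempts.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mocked receiver: returns the expected R0' only on the third poll,
 * emulating a slow repeater that needs more than the standard wait. */
static int polls;
static uint16_t read_r0_prime(void)
{
    return ++polls >= 3 ? 0xBEEF : 0;
}

/* Retry loop: on a mismatch, wait a little longer and re-read before
 * declaring failure, instead of failing after the first timeout. */
static bool r0_check_with_retry(uint16_t expected, int max_tries)
{
    int i;

    for (i = 0; i < max_tries; i++) {
        if (read_r0_prime() == expected)
            return true;
        /* real code would msleep() here for the additional wait */
    }
    return false;
}
```

With the mock above, a slow repeater that needs three polls still passes, while a genuinely wrong R0' still fails after the retries are exhausted.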
== Series Details ==
Series: drm/i915/gem: Suppress some random warnings
URL : https://patchwork.freedesktop.org/series/77431/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
c47e2d0db533 drm/i915/gem: Suppress some random warnings
-:62: CHECK:COMPARISON_TO_NULL: Comparison to
Quoting Daniel Vetter (2020-05-19 14:27:56)
> Do it unconditionally; there's a separate peek function with
> dma_fence_is_signalled() which can be called from atomic context.
>
> v2: Consensus calls for an unconditional might_sleep (Chris,
> Christian)
>
> Full audit:
> - dma-fence.h: Uses
According to BSpec, the max BW per slice is calculated using the
formula Max BW = CDCLK * 64. Currently, when calculating min CDCLK,
we account only for per-plane requirements; however, in order to avoid
FIFO underruns we need to estimate the accumulated BW consumed by
all planes (ddb entries, basically) residing on
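Inverting the BSpec relation quoted above gives the minimum CDCLK for a given accumulated slice bandwidth. A minimal sketch, with illustrative units and a made-up function name (the real i915 calculation involves more state):

```c
#include <assert.h>

/* From Max BW = CDCLK * 64: the minimum CDCLK needed to cover the
 * accumulated bandwidth on a slice is BW / 64, rounded up. */
static unsigned int min_cdclk_for_slice(unsigned int slice_bw)
{
    return (slice_bw + 63) / 64;
}
```

The rounding up matters: a bandwidth demand just one unit over a CDCLK step must bump CDCLK to the next value, or the slice can underrun.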
== Series Details ==
Series: drm/i915: Neuter virtual rq->engine on retire
URL : https://patchwork.freedesktop.org/series/77425/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17714
Summary
---
Start our preparations for guaranteeing endless execution.
First, we just want to estimate the direct userspace dispatch overhead
of running an endless chain of batch buffers. The legacy binding process
here will be replaced by async VM_BIND, but for the moment this
suffices to construct the GTT
This is a permanent w/a for JSL/EHL. It is to be applied to the
PCH types on JSL/EHL, i.e. JSP/MCC.
Bspec: 52888
v2: Fixed the wrong usage of logical OR(ville)
Signed-off-by: Swathi Dhanavanthri
---
drivers/gpu/drm/i915/i915_irq.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff
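The v2 note above ("Fixed the wrong usage of logical OR") refers to a classic C pitfall: `||` collapses its operands to 0/1, while building a register mask needs bitwise `|`. A sketch with hypothetical bit positions (not the actual i915_irq.c bits):

```c
#include <assert.h>

/* Two hypothetical interrupt-enable bits in a hardware register. */
#define IRQ_BIT_A (1u << 21)
#define IRQ_BIT_B (1u << 22)

/* Correct: bitwise OR combines both bits into one mask. With the
 * buggy logical OR, the "mask" would silently become just 1. */
static unsigned int enable_mask(void)
{
    return IRQ_BIT_A | IRQ_BIT_B;
}
```

The bug is easy to miss in review because both expressions compile cleanly and are truthy; only the written register value differs.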
Maybe we can add JSL to the comment too.
Other than that looks good to me.
Reviewed-by: Swathi Dhanavanthri
-Original Message-
From: Intel-gfx On Behalf Of Matt
Atwood
Sent: Tuesday, May 19, 2020 9:26 AM
To: intel-gfx@lists.freedesktop.org
Subject: [Intel-gfx] [PATCH] drm/i915/ehl:
Leave the error propagation in place, but limit the warnings to only
show up in CI if the unlikely errors are hit.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 3 +--
drivers/gpu/drm/i915/gem/i915_gem_phys.c | 3 +--
== Series Details ==
Series: dma-fence: add might_sleep annotation to _wait()
URL : https://patchwork.freedesktop.org/series/77417/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8505 -> Patchwork_17713
Summary
---
== Series Details ==
Series: dma-fence: add might_sleep annotation to _wait()
URL : https://patchwork.freedesktop.org/series/77417/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
aa2f5c93ddcf dma-fence: add might_sleep annotation to _wait()
-:16: WARNING:TYPO_SPELLING: 'TIMOUT'
== Series Details ==
Series: series starting with [CI,1/3] drm/i915/selftests: Add tests for
timeslicing virtual engines
URL : https://patchwork.freedesktop.org/series/77414/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8505 -> Patchwork_17712
== Series Details ==
Series: Consider DBuf bandwidth when calculating CDCLK (rev14)
URL : https://patchwork.freedesktop.org/series/74739/
State : failure
== Summary ==
Applying: drm/i915: Decouple cdclk calculation from modeset checks
Applying: drm/i915: Extract cdclk requirements checking to
Quoting Chris Wilson (2020-05-19 18:00:04)
> Quoting Chris Wilson (2020-05-19 15:51:31)
> > We do not hold a reference to rq->engine, and so if it is a virtual
> > engine it may have already been freed by the time we free the request.
> > The last reference we hold on the virtual engine is via
== Series Details ==
Series: drm/i915/selftests: Measure dispatch latency (rev10)
URL : https://patchwork.freedesktop.org/series/77308/
State : failure
== Summary ==
Applying: drm/i915/selftests: Measure dispatch latency
Using index info to reconstruct a base tree...
M
Quoting Chris Wilson (2020-05-19 15:51:31)
> We do not hold a reference to rq->engine, and so if it is a virtual
> engine it may have already been freed by the time we free the request.
> The last reference we hold on the virtual engine is via rq->context,
> and that is released on request
Reflect recent Bspec changes.
Bspec: 33451
Signed-off-by: Matt Atwood
---
drivers/gpu/drm/i915/gt/intel_workarounds.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c
b/drivers/gpu/drm/i915/gt/intel_workarounds.c
index
On 18-05-2020 at 14:12, Animesh Manna wrote:
> Pre-allocate command buffer in atomic_commit using intel_dsb_prepare
> function which also includes pinning and map in cpu domain.
>
> No functional change in the dsb write/commit functions.
>
> Now dsb get/put function is removed and ref-count mechanism
We do not hold a reference to rq->engine, and so if it is a virtual
engine it may have already been freed by the time we free the request.
The last reference we hold on the virtual engine is via rq->context,
and that is released on request retirement. So if we find ourselves
retiring a virtual
Chris Wilson writes:
> When we look at i915_request_is_started() we must be careful in case we
> are using a request that does not have the initial-breadcrumb and
> instead the is-started is being compared against the end of the previous
> request. This will make wait_for_submit() declare that a
the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]
url:
https://github.com/0day-ci/linux/commits/Swathi-Dhanavanthri/drm-i915-ehl-Extend-w-a-14010685332-to-JSP-MCC/20200519-184947
base: git
> -Original Message-
> From: Jani Nikula
> Sent: 19 May 2020 19:12
> To: Laxminarayan Bharadiya, Pankaj
> ; dan...@ffwll.ch; intel-
> g...@lists.freedesktop.org; dri-de...@lists.freedesktop.org; Joonas Lahtinen
> ; Vivi, Rodrigo ;
> David Airlie ; Ville Syrjälä
> ; Chris
> Wilson ;
Chris Wilson writes:
> Check for integer overflow in the priority chain, rather than against a
> type-constricted max-priority check.
>
> Signed-off-by: Chris Wilson
Reviewed-by: Mika Kuoppala
> ---
> drivers/gpu/drm/i915/gt/selftest_lrc.c | 6 +++---
> 1 file changed, 3 insertions(+), 3
Chris Wilson writes:
> Since we temporarily disable the heartbeat and restore back to the
> default value, we can use the stored defaults on the engine and avoid
> using a local.
>
> Signed-off-by: Chris Wilson
> ---
Reviewed-by: Mika Kuoppala
> drivers/gpu/drm/i915/gt/selftest_hangcheck.c
On Fri, 08 May 2020, "Laxminarayan Bharadiya, Pankaj"
wrote:
>> -Original Message-
>> From: Jani Nikula
>> Sent: 08 May 2020 12:19
>> To: Laxminarayan Bharadiya, Pankaj
>> ; dan...@ffwll.ch; intel-
>> g...@lists.freedesktop.org; dri-de...@lists.freedesktop.org; Joonas Lahtinen
>> ;
Chris Wilson writes:
s/supressing/suppressing
> We recorded the execlists->queue_priority_hint update for the inflight
> request without kicking the tasklet. The next submitted request then
> failed to be scheduled as it had a lower priority than the hint, leaving
> the HW running with only
Do it unconditionally; there's a separate peek function with
dma_fence_is_signalled() which can be called from atomic context.
v2: Consensus calls for an unconditional might_sleep (Chris,
Christian)
Full audit:
- dma-fence.h: Uses MAX_SCHEDULE_TIMOUT, good chance this sleeps
- dma-resv.c: Timeout
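The effect of the unconditional annotation can be shown with a small userspace mock. This is illustrative only, not the kernel source: `might_sleep()` is faked with a flag, where the real kernel checks the preempt count under CONFIG_DEBUG_ATOMIC_SLEEP and WARNs with a backtrace.

```c
#include <assert.h>
#include <stdbool.h>

static bool in_atomic_ctx;  /* stand-in for the kernel's preempt count */
static bool warned;

/* Mock of the kernel annotation: harmless in sleepable context,
 * flags the caller when invoked from atomic context. */
static void might_sleep_mock(void)
{
    if (in_atomic_ctx)
        warned = true;
}

/* Annotating unconditionally at the top of the wait path means even a
 * call that would return early (fence already signaled) gets flagged,
 * catching latent atomic-context callers before they ever block. */
static long fence_wait_sketch(long timeout)
{
    might_sleep_mock();
    return timeout;  /* the real function would block here */
}
```

This is exactly why the commit message points atomic-context users at the separate non-sleeping peek function instead.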
== Series Details ==
Series: drm/i915/selftests: Measure dispatch latency (rev9)
URL : https://patchwork.freedesktop.org/series/77308/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8502 -> Patchwork_17709
Summary
---
If we decide to timeslice out the current virtual request, we will
unsubmit it while it is still busy (ve->context.inflight == sibling[0]).
If the virtual tasklet and then the other sibling tasklets run before we
completely schedule out the active virtual request for the preemption,
those other
Make sure that we can execute a virtual request on an already busy
engine, and conversely that we can execute a normal request if the
engines are already fully occupied by virtual requests.
Signed-off-by: Chris Wilson
Reviewed-by: Tvrtko Ursulin
---
drivers/gpu/drm/i915/gt/selftest_lrc.c | 200
It was quite the oversight to only factor in the normal queue to decide
the timeslicing switch priority. By leaving out the next virtual request
from the priority decision, we would not timeslice the current engine if
there was an available virtual request.
Testcase: igt/gem_exec_balancer/sliced
From: Stanislav Lisovskiy
So let's support it.
Reviewed-by: Manasi Navare
Signed-off-by: Stanislav Lisovskiy
---
drivers/gpu/drm/i915/display/intel_display.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/display/intel_display.c
On Gen11+, whenever we might exceed the DBuf bandwidth, we may need
to recalculate CDCLK, which the DBuf bandwidth is scaled with.
The total DBuf BW used might change based on particular plane needs.
Thus, to determine whether cdclk needs to be changed, it is no longer
enough to check the plane configuration and plane
According to BSpec, the max BW per slice is calculated using the
formula Max BW = CDCLK * 64. Currently, when calculating min CDCLK,
we account only for per-plane requirements; however, in order to avoid
FIFO underruns we need to estimate the accumulated BW consumed by
all planes (ddb entries, basically) residing on
We need to calculate cdclk after watermarks/ddb have been calculated,
as with recent hw CDCLK needs to be adjusted according to DBuf
requirements, which is not possible with the current code organization.
Setting CDCLK according to DBuf BW requirements and not just rejecting
if it doesn't satisfy BW
No need to bump up CDCLK now, as it is now correctly
calculated, accounting for DBuf BW as BSpec says.
Reviewed-by: Manasi Navare
Signed-off-by: Stanislav Lisovskiy
---
drivers/gpu/drm/i915/display/intel_cdclk.c | 12
1 file changed, 12 deletions(-)
diff --git
From: Stanislav Lisovskiy
Checking with hweight8 whether the plane configuration has changed
seems wrong, as different plane configs can result in the same hamming
weight. So let's check the bitmask itself.
Reviewed-by: Manasi Navare
Signed-off-by: Stanislav Lisovskiy
---
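The pitfall the commit describes is easy to demonstrate: planes {0,1} and planes {0,2} are different configurations, yet both masks have a hamming weight of 2. A self-contained popcount sketch (the kernel's hweight8() is the same computation, typically via a lookup or builtin):

```c
#include <assert.h>
#include <stdint.h>

/* Population count of an 8-bit mask, as hweight8() computes. */
static unsigned int hweight8_sketch(uint8_t w)
{
    unsigned int n = 0;

    while (w) {
        n += w & 1;
        w >>= 1;
    }
    return n;
}
```

So a weight comparison reports "unchanged" for 0x3 vs 0x5 even though the active planes differ, which is why the fix compares the bitmasks directly.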
We quite often need now to iterate only particular dbuf slices
in mask, whether they are active or related to particular crtc.
v2: - Minor code refactoring
v3: - Use enum for max slices instead of macro
Let's make our life a bit easier and use a macro for that.
Reviewed-by: Manasi Navare
A useful metric of the system's health is how fast we can tell the GPU
to do various actions, so measure our latency.
v2: Refactor all the instruction building into emitters.
v3: Make the error handling, if not perfect, at least consistent.
Signed-off-by: Chris Wilson
Cc: Mika Kuoppala
Cc:
== Series Details ==
Series: drm/i915/selftests: Measure CS_TIMESTAMP (rev3)
URL : https://patchwork.freedesktop.org/series/77320/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8502 -> Patchwork_17708
Summary
---
Quoting Mika Kuoppala (2020-05-19 13:47:31)
> Chris Wilson writes:
>
> > A useful metric of the system's health is how fast we can tell the GPU
> > to do various actions, so measure our latency.
> >
> > v2: Refactor all the instruction building into emitters.
> >
> > Signed-off-by: Chris Wilson
Chris Wilson writes:
> A useful metric of the system's health is how fast we can tell the GPU
> to do various actions, so measure our latency.
>
> v2: Refactor all the instruction building into emitters.
>
> Signed-off-by: Chris Wilson
> Cc: Mika Kuoppala
> Cc: Joonas Lahtinen
Not much
== Series Details ==
Series: drm/i915/selftests: Measure CS_TIMESTAMP (rev3)
URL : https://patchwork.freedesktop.org/series/77320/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
4686c5234501 drm/i915/selftests: Measure CS_TIMESTAMP
-:68: CHECK:USLEEP_RANGE: usleep_range is
On Tue, May 19, 2020 at 11:46:54AM +0100, Chris Wilson wrote:
> Quoting Ville Syrjälä (2020-05-19 11:42:45)
> > On Sat, May 16, 2020 at 02:31:02PM +0100, Chris Wilson wrote:
> > > Count the number of CS_TIMESTAMP ticks and check that it matches our
> > > expectations.
> >
> > Looks ok for
A useful metric of the system's health is how fast we can tell the GPU
to do various actions, so measure our latency.
v2: Refactor all the instruction building into emitters.
Signed-off-by: Chris Wilson
Cc: Mika Kuoppala
Cc: Joonas Lahtinen
---
drivers/gpu/drm/i915/selftests/i915_request.c |
On Fri, May 15, 2020 at 08:36:31PM +, Patchwork wrote:
> == Series Details ==
>
> Series: drm/i915: Fix AUX power domain toggling across TypeC mode resets
> URL : https://patchwork.freedesktop.org/series/77280/
> State : success
Thanks for the review, pushed to -dinq.
>
> == Summary ==
>
Quoting Mika Kuoppala (2020-05-19 11:43:16)
> Chris Wilson writes:
> > +static void supervisor_dispatch(struct supervisor *sv, uint64_t addr)
> > +{
> > + WRITE_ONCE(*sv->dispatch, 64 << 10);
>
> addr << 10 ?
addr :)
-Chris
Quoting Ville Syrjälä (2020-05-19 11:42:45)
> On Sat, May 16, 2020 at 02:31:02PM +0100, Chris Wilson wrote:
> > Count the number of CS_TIMESTAMP ticks and check that it matches our
> > expectations.
>
> Looks ok for everything except g4x/ilk. Those would need something
> like
>
Chris Wilson writes:
> Start our preparations for guaranteeing endless execution.
>
> First, we just want to estimate the 'ultra-low latency' dispatch overhead
> by running an endless chain of batch buffers. The legacy binding process
> here will be replaced by async VM_BIND, but for the moment
On Sat, May 16, 2020 at 02:31:02PM +0100, Chris Wilson wrote:
> Count the number of CS_TIMESTAMP ticks and check that it matches our
> expectations.
Looks ok for everything except g4x/ilk. Those would need something
like
https://patchwork.freedesktop.org/patch/355944/?series=74145=1
+ read
Start our preparations for guaranteeing endless execution.
First, we just want to estimate the 'ultra-low latency' dispatch overhead
by running an endless chain of batch buffers. The legacy binding process
here will be replaced by async VM_BIND, but for the moment this
suffices to construct the
On 19/05/2020 07:31, Chris Wilson wrote:
It was quite the oversight to only factor in the normal queue to decide
the timeslicing switch priority. By leaving out the next virtual request
from the priority decision, we would not timeslice the current engine if
there was an available virtual
On 19/05/2020 07:31, Chris Wilson wrote:
Make sure that we can execute a virtual request on an already busy
engine, and conversely that we can execute a normal request if the
engines are already fully occupied by virtual requests.
Signed-off-by: Chris Wilson
---
== Series Details ==
Series: series starting with [01/12] drm/i915: Don't set queue-priority hint
when supressing the reschedule
URL : https://patchwork.freedesktop.org/series/77389/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8498_full -> Patchwork_17707_full
On 2020.05.18 22:00:52 +0100, Chris Wilson wrote:
> Quoting Aishwarya Ramakrishnan (2020-05-18 16:03:36)
> > Prefer ARRAY_SIZE instead of using sizeof
> >
> > Fixes coccicheck warning: Use ARRAY_SIZE
> >
> > Signed-off-by: Aishwarya Ramakrishnan
> Reviewed-by: Chris Wilson
Applied, thanks!
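The coccicheck warning in the applied patch above refers to the standard kernel idiom: compute an array's element count with a macro rather than an open-coded sizeof division at each use site. A minimal sketch (the table contents are illustrative):

```c
#include <assert.h>

/* Element count of a true array (not a pointer), as in the kernel's
 * ARRAY_SIZE; dividing total size by element size gives the count. */
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

static const int cdclk_table[] = { 307200, 312000, 324000 };
```

Besides being shorter, the macro keeps the count correct when entries are added or the element type changes, which an open-coded `sizeof(tbl)/sizeof(int)` would not.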
On Mon, May 18, 2020 at 05:58:32PM -0700, Swathi Dhanavanthri wrote:
> This is a permanent w/a for JSL/EHL.This is to be applied to the
> PCH types on JSL/EHL ie JSP/MCC
> Bspec: 52888
>
> Signed-off-by: Swathi Dhanavanthri
> ---
> drivers/gpu/drm/i915/i915_irq.c | 4 ++--
> 1 file changed, 2
== Series Details ==
Series: series starting with [01/12] drm/i915: Don't set queue-priority hint
when supressing the reschedule
URL : https://patchwork.freedesktop.org/series/77389/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8498 -> Patchwork_17707
== Series Details ==
Series: series starting with [01/12] drm/i915: Don't set queue-priority hint
when supressing the reschedule
URL : https://patchwork.freedesktop.org/series/77389/
State : warning
== Summary ==
$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.0
Fast mode used, each
== Series Details ==
Series: series starting with [01/12] drm/i915: Don't set queue-priority hint
when supressing the reschedule
URL : https://patchwork.freedesktop.org/series/77389/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
364ab8bd9968 drm/i915: Don't set queue-priority
Quoting Jani Nikula (2020-05-18 17:47:47)
> This is the first 3 patches of [1], because apparently patch 4 breaks
> the world. I've yet to pinpoint the issue, but these could move forward
> in the meanwhile.
It's not you, it's igt_params.c
-Chris
Rather than going back and forth between the rb_node entry and the
virtual_engine type, store the ve local and reuse it. As the
container_of conversion from rb_node to virtual_engine requires a
variable offset, performing that conversion just once shaves off a bit
of code.
v2: Keep a single
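The "variable offset" cost the commit mentions comes from how container_of works: it subtracts the member's offset from the member pointer, and when the member is an array element the offset differs per sibling. A sketch with invented struct names (not the i915 layout):

```c
#include <assert.h>
#include <stddef.h>

/* container_of as in the kernel: step back from a member pointer to
 * the enclosing structure. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct rb_node { struct rb_node *left, *right; };

/* Hypothetical layout: each per-sibling rb_node sits at a different
 * offset, so every conversion needs its own offset arithmetic --
 * doing it once and keeping the result in a local saves code. */
struct virtual_engine_sketch {
    int id;
    struct rb_node nodes[4];
};
```

Caching the converted pointer in a local, as the patch does, performs this subtraction once instead of at every use.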
When we introduced the saturated workload detection to tell us to back
off from semaphore usage [semaphores have a noticeable impact on
contended bus cycles with the CPU for some heavy workloads], we first
introduced it as a per-context tracker. This allows individual contexts
to try and optimise
Since we temporarily disable the heartbeat and restore back to the
default value, we can use the stored defaults on the engine and avoid
using a local.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/selftest_hangcheck.c | 25 +++
drivers/gpu/drm/i915/gt/selftest_lrc.c | 67
If we decide to timeslice out the current virtual request, we will
unsubmit it while it is still busy (ve->context.inflight == sibling[0]).
If the virtual tasklet and then the other sibling tasklets run before we
completely schedule out the active virtual request for the preemption,
those other
Make sure that we can execute a virtual request on an already busy
engine, and conversely that we can execute a normal request if the
engines are already fully occupied by virtual requests.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/selftest_lrc.c | 200 -
1
When we look at i915_request_is_started() we must be careful in case we
are using a request that does not have the initial-breadcrumb and
instead the is-started is being compared against the end of the previous
request. This will make wait_for_submit() declare that a request has
been already
Having recognised that we do not change the sibling until we schedule
out, we can then defer the decision to resubmit the virtual engine from
the unwind of the active queue to scheduling out of the virtual context.
By keeping the unwind order intact on the local engine, we can preserve
data
We recorded the execlists->queue_priority_hint update for the inflight
request without kicking the tasklet. The next submitted request then
failed to be scheduled as it had a lower priority than the hint, leaving
the HW running with only the inflight request.
Fixes: 6cebcf746f3f ("drm/i915:
Once a virtual engine has been bound to a sibling, it will remain bound
until we finally schedule out the last active request. We cannot rebind
the context to a new sibling while it is inflight, as the context save
will conflict; hence we wait. As we cannot then use any other sibling
while the
If there are no internal levels and the user priority-shift is zero, we
can help the compiler eliminate some dead code:
Function                          old     new   delta
start_timeslice                   169     154     -15
__execlists_submission_tasklet
It was quite the oversight to only factor in the normal queue to decide
the timeslicing switch priority. By leaving out the next virtual request
from the priority decision, we would not timeslice the current engine if
there was an available virtual request.
Testcase: igt/gem_exec_balancer/sliced
Check for integer overflow in the priority chain, rather than against a
type-constricted max-priority check.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/i915/gt/selftest_lrc.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c
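The distinction the commit draws can be sketched directly: instead of comparing against a type-constricted maximum-priority constant, test whether the priority bump would actually wrap. Illustrative only (the selftest operates on i915's own priority fields), and written to avoid performing the undefined signed overflow itself:

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

/* True if prio + bump would overflow int. Rearranged so the check
 * itself never overflows: bump > 0 && prio > INT_MAX - bump. */
static bool prio_bump_overflows(int prio, int bump)
{
    return bump > 0 && prio > INT_MAX - bump;
}
```

A check against a fixed max-priority constant can pass while the arithmetic still wraps for large bumps; testing the overflow condition itself catches every case.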
Hi all,
Today's linux-next merge of the tip tree got a conflict in:
drivers/gpu/drm/drm_dp_mst_topology.c
between commit:
a4292e52106b ("drm: Match drm_dp_send_clear_payload_id_table definition to
declaration")
from the drm-misc tree and commit:
53965dbe5396 ("drm: Make