== Series Details ==
Series: RFC/RFT drm/i915/oa: Drop aging-tail
URL : https://patchwork.freedesktop.org/series/56889/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_5630 -> Patchwork_12253
Summary
---
**SUCCESS**
Quoting Mika Kuoppala (2019-02-19 11:22:32)
> > diff --git a/drivers/gpu/drm/i915/i915_reset.c
> > b/drivers/gpu/drm/i915/i915_reset.c
> > index 1911e00d2581..bae88a4ea924 100644
> > --- a/drivers/gpu/drm/i915/i915_reset.c
> > +++ b/drivers/gpu/drm/i915/i915_reset.c
> > @@ -59,24 +59,29 @@ static
Chris Wilson writes:
> Currently, we accumulate a score each time a context hangs the GPU,
> offset against the number of requests it submits, and if that score
> exceeds a certain threshold, we ban that context from submitting any
> more requests (cancelling any work in flight). In contrast, we use a si
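The banning heuristic described above can be sketched in userspace C. The weights and threshold below are illustrative assumptions, not the actual i915 values, and the struct/function names are invented for this sketch:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch: each GPU hang adds to a context's score, each
 * submitted request earns back a small credit, and the context is banned
 * once the accumulated score exceeds a threshold. */

#define HANG_SCORE      10  /* added per hang (assumed weight) */
#define REQUEST_CREDIT   1  /* subtracted per submitted request */
#define BAN_THRESHOLD   30  /* ban once score exceeds this (assumed) */

struct ctx_score {
	int score;
	bool banned;
};

static void ctx_note_hang(struct ctx_score *c)
{
	c->score += HANG_SCORE;
	if (c->score > BAN_THRESHOLD)
		c->banned = true;
}

static void ctx_note_request(struct ctx_score *c)
{
	/* well-behaved contexts slowly work their score back down */
	if (c->score >= REQUEST_CREDIT)
		c->score -= REQUEST_CREDIT;
}
```

A context that hangs repeatedly without submitting useful work crosses the threshold quickly; one that hangs rarely amid many requests never does.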
== Series Details ==
Series: RFC/RFT drm/i915/oa: Drop aging-tail
URL : https://patchwork.freedesktop.org/series/56889/
State : warning
== Summary ==
$ dim sparse origin/drm-tip
Sparse version: v0.5.2
Commit: RFC/RFT drm/i915/oa: Drop aging-tail
-O:drivers/gpu/drm/i915/i915_perf.c:1422:15: war
== Series Details ==
Series: RFC/RFT drm/i915/oa: Drop aging-tail
URL : https://patchwork.freedesktop.org/series/56889/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
5d1d7354476d RFC/RFT drm/i915/oa: Drop aging-tail
-:523: CHECK:MULTIPLE_ASSIGNMENTS: multiple assignments should
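checkpatch's MULTIPLE_ASSIGNMENTS check flags chained assignments; kernel style prefers one assignment per statement. An illustrative before/after (the variable names are hypothetical, not from line 523 of the patch):

```c
#include <assert.h>

/* checkpatch would flag the chained form:
 *     *head = *tail = 0;
 * The preferred style is one assignment per statement. */
static void reset_pointers(unsigned int *head, unsigned int *tail)
{
	*head = 0;
	*tail = 0;
}
```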
On 19/02/2019 10:36, Chris Wilson wrote:
Quoting Lionel Landwerlin (2019-02-19 10:31:52)
On 19/02/2019 09:18, Chris Wilson wrote:
Quoting Lionel Landwerlin (2019-02-18 18:35:28)
We're about to introduce an option to open the perf stream, giving
the user the ability to configure how often it wants
On 19/02/2019 10:28, Chris Wilson wrote:
Switch to using coherent reads that are serialised with the register
read to avoid the memory latency problem, rather than relying on an
arbitrary delay. The only zeroes seen during testing on HSW+ have been
from configuration changes that do not update (therefore
Quoting Lionel Landwerlin (2019-02-19 10:31:52)
> On 19/02/2019 09:18, Chris Wilson wrote:
> > Quoting Lionel Landwerlin (2019-02-18 18:35:28)
> >> We're about to introduce an option to open the perf stream, giving
> >> the user the ability to configure how often it wants the kernel to poll
> >> the O
Quoting Matthew Auld (2019-02-19 10:22:57)
> On Wed, 6 Feb 2019 at 13:05, Chris Wilson wrote:
> > +static struct i915_request *dummy_request(struct intel_engine_cs *engine)
> > +{
> > + struct i915_request *rq;
> > +
> > + rq = kmalloc(sizeof(*rq), GFP_KERNEL | __GFP_ZERO);
> > +
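On the allocation in the quoted hunk: `kmalloc()` with `__GFP_ZERO` returns zeroed memory, which kernel code usually spells `kzalloc(sizeof(*rq), GFP_KERNEL)`. A userspace analogue using `calloc()`, with an invented struct standing in for `struct i915_request`:

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace sketch: calloc() plays the role of kzalloc()/__GFP_ZERO,
 * guaranteeing every field of the new object starts at zero. The struct
 * is hypothetical, not the real i915_request layout. */
struct dummy_rq {
	int seqno;
	void *engine;
};

static struct dummy_rq *dummy_rq_alloc(void)
{
	/* zeroed allocation, as __GFP_ZERO guarantees in the kernel */
	return calloc(1, sizeof(struct dummy_rq));
}
```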
On 19/02/2019 09:18, Chris Wilson wrote:
Quoting Lionel Landwerlin (2019-02-18 18:35:28)
We're about to introduce an option to open the perf stream, giving
the user the ability to configure how often it wants the kernel to poll
the OA registers for available data.
Right now the workaround against
Switch to using coherent reads that are serialised with the register
read to avoid the memory latency problem, rather than relying on an
arbitrary delay. The only zeroes seen during testing on HSW+ have been
from configuration changes that do not update (therefore were truly zero
entries and should be skipp
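The idea above can be sketched as follows: instead of waiting an arbitrary "aging" delay before trusting the tail pointer, read the report buffer directly and treat an all-zero report as data that has not landed yet, to be skipped. The report layout below is invented for illustration; real OA reports differ:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical report width; real OA report formats vary by generation. */
#define REPORT_DWORDS 8

/* A report whose dwords are all zero is treated as not yet written by the
 * hardware (a truly empty entry) and is skipped by the reader. */
static bool report_ready(const uint32_t *report)
{
	for (size_t i = 0; i < REPORT_DWORDS; i++)
		if (report[i])
			return true;	/* at least one non-zero dword landed */
	return false;			/* all zeroes: skip */
}
```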
On Wed, 6 Feb 2019 at 13:05, Chris Wilson wrote:
>
> WAIT is occasionally suppressed by virtue of preempted requests being
> promoted to NEWCLIENT if they have not already received that boost.
> Make this consistent for all WAIT boosts that they are not allowed to
> preempt executing contexts an
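The rule described above can be sketched as a priority comparison in which boost bits are carried in the low bits of the effective priority but masked out when deciding whether to preempt an already-executing context, so a WAIT boost alone never triggers preemption. The flag values and mask are illustrative, not i915's actual encoding:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical encoding: base priority in the high bits, boost flags in
 * the low two bits. */
#define BOOST_WAIT	0x1
#define BOOST_NEWCLIENT	0x2
#define PRIO_MASK	(~0x3)

/* Compare base priorities only: a queued request whose priority exceeds
 * the active one solely because of a boost bit does not preempt. */
static bool need_preempt(int active_prio, int queued_prio)
{
	return (queued_prio & PRIO_MASK) > (active_prio & PRIO_MASK);
}
```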
Quoting Chris Wilson (2019-02-18 11:46:28)
> We don't want to pre-reserve any holes in our uAPI for that is a sign of
> nefarious and hidden activity. Add a reminder about our uAPI
> expectations to encourage good practice when adding new defines/enums.
>
> Signed-off-by: Chris Wilson
> Cc: Joona
Quoting Lionel Landwerlin (2019-02-18 18:35:28)
> We're about to introduce an option to open the perf stream, giving
> the user the ability to configure how often it wants the kernel to poll
> the OA registers for available data.
>
> Right now the workaround against the OA tail pointer race condition
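A user-configurable poll period such as the one described above would typically be clamped to sane bounds before being handed to the kernel's timer. The bounds and function below are hypothetical, chosen only to illustrate the clamping, not taken from the patch:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed bounds for the user-requested OA poll period. */
#define POLL_PERIOD_MIN_NS	100000ULL	/* 100 us, assumed floor */
#define POLL_PERIOD_MAX_NS	1000000000ULL	/* 1 s, assumed ceiling */

/* Clamp the user's requested period into the supported range. */
static uint64_t clamp_poll_period(uint64_t requested_ns)
{
	if (requested_ns < POLL_PERIOD_MIN_NS)
		return POLL_PERIOD_MIN_NS;
	if (requested_ns > POLL_PERIOD_MAX_NS)
		return POLL_PERIOD_MAX_NS;
	return requested_ns;
}
```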