On Tue, Oct 25, 2016 at 10:35 PM, Matthew Auld <
matthew.william.auld at gmail.com> wrote:

> On 25 October 2016 at 00:19, Robert Bragg <robert at sixbynine.org> wrote:



>
> > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> > index 3448d05..ea24814 100644
> > --- a/drivers/gpu/drm/i915/i915_drv.h
> > +++ b/drivers/gpu/drm/i915/i915_drv.h
> > @@ -1764,6 +1764,11 @@ struct intel_wm_config {
>
> >
> >  struct drm_i915_private {
> > @@ -2149,16 +2164,46 @@ struct drm_i915_private {
> >
> >         struct {
> >                 bool initialized;
> > +
> >                 struct mutex lock;
> >                 struct list_head streams;
> >
> > +               spinlock_t hook_lock;
> > +
> >                 struct {
> > -                       u32 metrics_set;
> > +                       struct i915_perf_stream *exclusive_stream;
> > +
> > +                       u32 specific_ctx_id;
> Can we just get rid of this? Now that the vma remains pinned we can
> simply get the ggtt address at the time of configuring the OACONTROL
> register.
>

I considered that, but I'd prefer to keep it given the gen8+ patches to
come. For gen8+ (with execlists) the context ID isn't a GGTT offset.


>
> > +
> > +                       struct hrtimer poll_check_timer;
> > +                       wait_queue_head_t poll_wq;
> > +                       atomic_t pollin;
> > +
>
>

> > +/* The maximum exponent the hardware accepts is 63 (essentially it
> > + * selects one of the 64bit timestamp bits to trigger reports from)
> > + * but there's currently no known use case for sampling as
> > + * infrequently as once per 47 thousand years.
> > + *
> > + * Since the timestamps included in OA reports are only 32bits it
> > + * seems reasonable to limit the OA exponent where it's still
> > + * possible to account for overflow in OA report timestamps.
> > + */
> > +#define OA_EXPONENT_MAX 31
> > +
> > +#define INVALID_CTX_ID 0xffffffff
> We shouldn't need this anymore.
>

Yeah, I removed it and then added it back, just for the sake of
explicitly setting specific_ctx_id to an invalid ID when closing the
exclusive stream - though resetting the value isn't strictly necessary.

Also, your comment maybe assumes specific_ctx_id can be removed, whereas
I'd prefer to keep it.


> > +
> > +static int claim_specific_ctx(struct i915_perf_stream *stream)
> > +{
> pin_oa_specific_ctx, or something? Also, would it not make more sense
> to operate on the context rather than the stream?
>

Yeah, I avoided a name like that mainly because the function also
initializes specific_ctx_id, which would become an unexpected side effect
under a name that specific.

The other consideration is that in my gen8+ patches the pinning code is
conditional on whether execlists are enabled, while the function still
initializes specific_ctx_id.

Certainly not attached to the names though.

Chris has some feedback on the code, so maybe that will affect this too.


> > +       struct drm_i915_private *dev_priv = stream->dev_priv;
> > +       struct i915_vma *vma;
> > +       int ret;
> > +
> > +       ret = i915_mutex_lock_interruptible(&dev_priv->drm);
> > +       if (ret)
> > +               return ret;
> > +
> > +       /* So that we don't have to worry about updating the context ID
> > +        * in OACONTROL on the fly we make sure to pin the context
> > +        * upfront for the lifetime of the stream...
> > +        */
> > +       vma = stream->ctx->engine[RCS].state;
> > +       ret = i915_vma_pin(vma, 0, stream->ctx->ggtt_alignment,
> > +                          PIN_GLOBAL | PIN_HIGH);
> > +       if (ret) {
> > +               mutex_unlock(&dev_priv->drm.struct_mutex);
> > +               return ret;
> > +       }
> > +
> > +       dev_priv->perf.oa.specific_ctx_id = i915_ggtt_offset(vma);
> > +
> > +       mutex_unlock(&dev_priv->drm.struct_mutex);
> > +
> > +       return 0;
> > +}
>


I'll also follow up on the other notes; thanks!

- Robert