Quoting Tvrtko Ursulin (2018-01-10 10:42:34)
> 
> On 09/01/2018 21:28, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2018-01-09 16:16:20)
> >> From: Tvrtko Ursulin <tvrtko.ursu...@intel.com>
> >>
> >> Make sure busyness is correctly reported when PMU is enabled after the
> >> engine is already busy with a single long batch.
> >>
> >> Signed-off-by: Tvrtko Ursulin <tvrtko.ursu...@intel.com>
> >> ---
> >>   tests/perf_pmu.c | 42 ++++++++++++++++++++++++++++++++++++++++++
> >>   1 file changed, 42 insertions(+)
> >>
> >> diff --git a/tests/perf_pmu.c b/tests/perf_pmu.c
> >> index 45e2f6148453..e1f449d48808 100644
> >> --- a/tests/perf_pmu.c
> >> +++ b/tests/perf_pmu.c
> >> @@ -157,6 +157,41 @@ single(int gem_fd, const struct intel_execution_engine2 *e, bool busy)
> >>          gem_quiescent_gpu(gem_fd);
> >>   }
> >>   
> >> +static void
> >> +busy_start(int gem_fd, const struct intel_execution_engine2 *e)
> >> +{
> >> +       unsigned long slept;
> >> +       igt_spin_t *spin;
> >> +       uint64_t val;
> >> +       int fd;
> >> +
> >> +       /*
> >> +        * Defeat the busy stats delayed disable, we need to guarantee we are
> >> +        * the first user.
> >> +        */
> >> +       if (gem_has_execlists(gem_fd))
> >> +               sleep(2);
> > 
> > I don't have a better idea than sleep, but I don't like tying this to
> > execlists. Make the sleep unconditional for now. Is there any way we can
> > export the knowledge of the implementation through the perf API?
> > Different counters, or now different attrs for different busy-stats?
> 
> I can't come up with any reasonable idea. The best hope I have is that 
> we could also expose busyness via sysfs one day, and then I would move 
> this test out of the PMU-specific ones to a new home.
> 
> On sleeping unconditionally I am also torn: the less backend knowledge 
> and variability in the tests the better, but I would also prefer not to 
> waste time when it is not necessary.

Consider it a motivator to export enough information to avoid
assumptions :) A test is only as fast as the slowest machine ;)
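
For the record, the unconditional variant being asked for is just the same
guard with the execlists check dropped; a minimal sketch of that hunk, not
the final patch:

	/*
	 * Defeat the busy stats delayed disable, we need to guarantee we are
	 * the first user. Sleep unconditionally so the test carries no
	 * backend (execlists vs. ringbuffer) knowledge.
	 */
	sleep(2);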

> >> +
> >> +       spin = __igt_spin_batch_new(gem_fd, 0, e2ring(gem_fd, e), 0);
> >> +
> >> +       /*
> >> +        * Sleep for a bit after making the engine busy to make sure the PMU
> >> +        * gets enabled when the batch is already running.
> >> +        */
> >> +       usleep(500000);
> > 
> > Just a request for 500e3.
> > Reviewed-by: Chris Wilson <ch...@chris-wilson.co.uk>
> 
> Thanks! A shard run shows it is catching the current issue fine.
> 
> Do you plan to respin your fix so the other subtest passes as well?

I'm open to suggestions. It was only intended as a quick fix; if you
have a better idea for the API, be my guest. But a quick patch to get us
to a stable state from which we can improve isn't a horrible idea.
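
For reference, the tail of busy_start() presumably follows the same pattern
as the other subtests: open the busy counter, sample it across a measured
sleep, end the spinner and check the delta against wall time. A rough
sketch, folding in the 500e3 request; the helpers named below (open_pmu(),
pmu_read_single(), measured_usleep(), assert_within_epsilon(), plus
batch_duration_ns and tolerance) are assumed from the rest of perf_pmu.c
rather than taken from the quoted hunk:

	usleep(500e3);

	fd = open_pmu(I915_PMU_ENGINE_BUSY(e->class, e->instance));

	/* Sample the counter across a known sleep while the spinner runs. */
	val = pmu_read_single(fd);
	slept = measured_usleep(batch_duration_ns / 1000);
	val = pmu_read_single(fd) - val;

	igt_spin_batch_end(spin);

	/* Busy time reported must roughly match the time we slept. */
	assert_within_epsilon(val, slept, tolerance);

	igt_spin_batch_free(gem_fd, spin);
	close(fd);

	gem_quiescent_gpu(gem_fd);
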
-Chris