On 13/02/2021 18:52, Rob Clark wrote:
On Sat, Feb 13, 2021 at 12:04 AM Lionel Landwerlin
<lionel.g.landwer...@intel.com> wrote:
On 13/02/2021 04:20, Rob Clark wrote:
On Fri, Feb 12, 2021 at 5:56 PM Lionel Landwerlin
<lionel.g.landwer...@intel.com> wrote:
On 13/02/2021 03:38, Rob Clark wrote:
On Fri, Feb 12, 2021 at 5:08 PM Lionel Landwerlin
<lionel.g.landwer...@intel.com> wrote:
We're kind of in the same boat for Intel.

Access to GPU perf counters is exclusive to a single process if you want
to build a timeline of the work (because of preemption, etc.).
Ugh, does that mean extensions like AMD_performance_monitor don't
actually work on Intel?
It works, but only a single app can use it at a time.
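
For reference, this is roughly how an application drives AMD_performance_monitor, which is why exclusivity matters: a minimal sketch assuming a current GL context with the extension exposed, sampling only the first counter of the first group (real code would load the AMD entry points with glXGetProcAddress/eglGetProcAddress rather than relying on GL_GLEXT_PROTOTYPES).

/* Minimal sketch: sample the first counter of the first group via
 * GL_AMD_performance_monitor.  Assumes a current GL context. */
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

GLuint sample_one_counter(void)
{
    GLint num_groups = 0, num_counters = 0, max_active = 0;
    GLuint group = 0, counter = 0, monitor = 0;

    /* Enumerate counter groups and counters (only the first of each here). */
    glGetPerfMonitorGroupsAMD(&num_groups, 1, &group);
    glGetPerfMonitorCountersAMD(group, &num_counters, &max_active, 1, &counter);

    /* Create a monitor and enable the chosen counter on it. */
    glGenPerfMonitorsAMD(1, &monitor);
    glSelectPerfMonitorCountersAMD(monitor, GL_TRUE, group, 1, &counter);

    glBeginPerfMonitorAMD(monitor);
    /* ... issue the GL work to be measured ... */
    glEndPerfMonitorAMD(monitor);

    /* Wait until the result is available, then read back the raw words.
     * The result stream is (group, counter, value...) tuples; the value
     * layout depends on GL_COUNTER_TYPE_AMD. */
    GLuint available = 0;
    GLint bytes_written = 0;
    while (!available)
        glGetPerfMonitorCounterDataAMD(monitor, GL_PERFMON_RESULT_AVAILABLE_AMD,
                                       sizeof(available), &available, &bytes_written);

    GLuint result[16] = {0};
    glGetPerfMonitorCounterDataAMD(monitor, GL_PERFMON_RESULT_AMD,
                                   sizeof(result), result, &bytes_written);

    glDeletePerfMonitorsAMD(1, &monitor);
    return result[2]; /* first value word, assuming a single UNSIGNED_INT counter */
}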

I see. On the freedreno side we haven't really gone down the
preemption route yet, but we have a way to hook in some save/restore
cmdstream.

That's why I think, for Intel HW, something like gfx-pps is probably the
best way to pull out all the data on a timeline for the entire system.

Then the drivers could just provide timestamps on the timeline to
annotate it.

Looking at gfx-pps, my question is: why is this not part of the mesa
tree?  That way I could use it for freedreno (either as a stand-alone
process or as part of the driver) without duplicating all the perfcntr
tables and the information about different variants of a given generation
that is needed to interpret the raw counters into something useful for a
human.

Pulling gfx-pps into mesa seems like a sensible way forward.

BR,
-R

Yeah, I guess it depends on how your stack looks.

If the counters cover more than 3D and you have video drivers outside the mesa tree, it might make sense to keep it somewhere else.


Anyway, I didn't want to sound negative; having a gfx-pps-like daemon in mesa to report system-wide counters works for me :)

Then we can look into how to have each Intel driver add annotations on the timeline.
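
As a rough illustration of that idea: gfx-pps feeds its counter data into Perfetto, so one way a driver could drop annotations onto the same timeline is by emitting Perfetto track events around its submissions. A minimal sketch assuming the Perfetto C++ SDK; the category, event, and function names below are made-up placeholders, not an interface anyone has proposed in this thread.

// Minimal sketch using the Perfetto C++ SDK (the perfetto.h/perfetto.cc
// amalgamation).  "gpu.driver", init_tracing() and annotate_submit() are
// illustrative names only.
#include <cstdint>
#include <perfetto.h>

PERFETTO_DEFINE_CATEGORIES(
    perfetto::Category("gpu.driver").SetDescription("Driver-side timeline annotations"));
PERFETTO_TRACK_EVENT_STATIC_STORAGE();

void init_tracing(void)
{
    perfetto::TracingInitArgs args;
    args.backends = perfetto::kSystemBackend;  // connect to the system traced daemon
    perfetto::Tracing::Initialize(args);
    perfetto::TrackEvent::Register();
}

void annotate_submit(uint64_t submit_id)
{
    // Opens a scoped slice on the driver's track; in the resulting trace it
    // lines up against the system-wide counter timeline recorded by a
    // producer such as gfx-pps.
    TRACE_EVENT("gpu.driver", "cmdbuf_submit", "submit_id", submit_id);
    /* ... build and submit the command buffer ... */
}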


-Lionel
