On 24/01/18 15:14, Chris Wilson wrote:
Quoting Tvrtko Ursulin (2018-01-24 12:03:46)
On 23/01/2018 14:17, Lionel Landwerlin wrote:
Hi all,

I've been trying to expose some information to userspace about the fused
parts of the GPU.
This is the 4th attempt at getting this upstream; here are the previous
ones:
      https://patchwork.freedesktop.org/patch/185959/
      https://patchwork.freedesktop.org/series/33436/
      https://patchwork.freedesktop.org/series/33950/

This last iteration was based upon some direction from Daniel:
https://lists.freedesktop.org/archives/dri-devel/2017-December/160162.html
There was, I think, a fair point about having this work in
environments where sysfs (the mechanism used in the 3rd iteration) is
not available (containers).

Following some discussion on IRC, it seems Joonas would like this
rewritten in such a way that we essentially drop the generic mechanism
introduced in this patch and instead go for an additional ioctl() on
the drm fd just for querying the fused state of individual
slices/subslices/EUs.
The proposal is to have a single struct like:

struct drm_i915_topology {
     /* All fields are in/out */
     int slice;
     int subslice;
     int eu;

     int enabled;
};

You would set the slice field to -1 and the kernel would fill it in
with the max slice value. Same for subslice (with a valid slice value)
and eu.
When querying with slice = 0 and all other fields set to -1, the kernel
would fill in the enabled value with 0 or 1.
Essentially that would mean that an application wanting to query the
state of all of the EUs would have to go through them one by one
(roughly 100 ioctl() calls on SKL GT4, for example).
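To make that concrete, here is a rough sketch of what the per-EU loop
could look like in userspace. This is for illustration only: the
DRM_IOCTL_I915_TOPOLOGY ioctl number is made up, and the max_* counts
are assumed to have been fetched beforehand using the -1 convention
above.

#include <xf86drm.h>

/* Illustration only: DRM_IOCTL_I915_TOPOLOGY stands in for whatever
 * number such an ioctl would actually get, and the struct is the
 * drm_i915_topology proposed above. */
static void query_all_eus(int fd, int max_slices, int max_subslices,
                          int max_eus)
{
     struct drm_i915_topology t;
     int s, ss, eu;

     for (s = 0; s < max_slices; s++) {
          for (ss = 0; ss < max_subslices; ss++) {
               for (eu = 0; eu < max_eus; eu++) {
                    t.slice = s;
                    t.subslice = ss;
                    t.eu = eu;
                    t.enabled = -1;
                    if (drmIoctl(fd, DRM_IOCTL_I915_TOPOLOGY, &t))
                         continue;
                    /* t.enabled now holds 0 or 1 for this EU */
               }
          }
     }
}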

Apart from the fact that we'll probably end up adding another ioctl()
for engine discovery, I don't have any problem with what Joonas is
proposing.
It's just a bit annoying that this comes up on the 4th rewrite.
I really wouldn't like to rewrite this one more time only to get turned
down because it isn't to the taste of one of the reviewers.
So my question is: is everybody happy with what Joonas is proposing?
Anybody in favor of having a generic mechanism?
I am not very keen on this counter-proposal for two reasons.

The first is that I think it is a bit inelegant to have to query so
many times just to get the full topology out. If this ends up in some
library, we may end up running this on every trivial client startup.

Secondly, it is a kind of dispatcher in its own right, since the
operation mode will depend on the combination of field values. As such,
a generic, or at least a more explicit, dispatcher like the proposed
i915_query_ioctl sounds cleaner to me.
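For reference, this is roughly the shape such a generic query
dispatcher could take: an array of items, each naming a query and
pointing at a caller-provided buffer. Struct and field names here are
illustrative, not taken from the actual series.

#include <linux/types.h>

/* Illustrative sketch of a generic query interface: userspace passes
 * an array of items and the kernel fills in the required length (or
 * the buffer, when one is provided) for each item. */
struct i915_query_item {
     __u64 query_id;  /* e.g. topology, engine info, ... */
     __s32 length;    /* 0 on input to ask for the required size */
     __u32 flags;
     __u64 data_ptr;  /* userspace buffer filled by the kernel */
};

struct i915_query {
     __u32 num_items;
     __u32 flags;
     __u64 items_ptr; /* pointer to an array of i915_query_item */
};

The topology queries would then just be one query_id among others
behind the same ioctl.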

I take the point that we can't guess how many other users it will have
in the future. So there is a little bit of a risk of adding something
generic which ends up seeing little use.

Because apart from the three queries Lionel needs, I would be adding an
engine info query and potentially, depending on userspace interest,
engine queue depths. So that would be a maximum of five queries that I
am aware of which would use the generic framework. Maybe too few, or
maybe good enough for a start?
Another use case would be a single-shot method to gather all GETPARAMs.

There's a lot of to-ing and fro-ing at the start of Mesa trying to
determine all of the kernel's capabilities, which more or less comes
down to a long series of GETPARAM calls and parsing of the results.
Bundling all of those up into a single ioctl seems attractive to me
(bonus for it being properly defined and not a compat nightmare).
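To give a feel for it, the current pattern is more or less a helper
wrapped around DRM_IOCTL_I915_GETPARAM, called once per capability. The
sketch below is illustrative and not Mesa's actual code.

#include <xf86drm.h>
#include <i915_drm.h>

/* Illustration only, not Mesa's code: one GETPARAM ioctl per
 * capability, repeated for every capability probed at startup. */
static int get_param(int fd, int param)
{
     int value = 0;
     struct drm_i915_getparam gp = {
          .param = param,
          .value = &value,
     };

     if (drmIoctl(fd, DRM_IOCTL_I915_GETPARAM, &gp))
          return -1;
     return value;
}

/* ...followed by a long series of calls such as:
 *   has_llc        = get_param(fd, I915_PARAM_HAS_LLC);
 *   has_exec_async = get_param(fd, I915_PARAM_HAS_EXEC_ASYNC);
 *   ...
 */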
-Chris

Thanks all,

I don't really read much opposition to the current patch series. If anything, it seems we might actually want to do more.
It would be good to get the green light and land it.
I've played around quickly with Chris' idea and will add a couple of RFC patches for discussion.

Cheers,

-
Lionel

