On 02/01/2017 15:32, Thierry Reding wrote:
On Tue, Dec 27, 2016 at 01:57:21PM +0100, Axel Davy wrote:
Hi Thierry,

Could you explain why in this situation default_fd would be a
non-renderable device node?
This is preparatory work to allow drivers for split display/render
setups to be tied together within Mesa. If you want accelerated
rendering on those setups, you currently need to patch applications so
that they can deal with two separate devices (patches exist to allow
this for Weston and kmscube, and possibly others).

In order to avoid having to patch applications, the renderonly drivers
(the name is slightly confusing; Christian Gmeiner sent out patches for
them a few weeks ago) open two devices internally and contain some magic
to share buffers between the two devices.

So the typical use-case would be that you have two separate DRM devices,
one representing the scanout device and another representing the GPU. In
most cases the scanout device will be display-only, so it will have a
/dev/dri/cardX node, but no corresponding /dev/dri/renderDY node. On the
other hand the GPU will have a "dummy" /dev/dri/cardX node that doesn't
expose any outputs and a corresponding /dev/dri/renderDY node that is
used for rendering.
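
To make this concrete, here is a rough sketch (not part of the patch,
paths and counts made up) of how such a split setup shows up when
enumerating devices with libdrm's drmGetDevices2(): the scanout device
reports only a primary node, while the GPU reports both a primary and a
render node.

  /* Illustrative only: list which nodes each DRM device exposes. */
  #include <stdio.h>
  #include <xf86drm.h>

  static void list_drm_devices(void)
  {
      drmDevicePtr devices[16];
      int i, n = drmGetDevices2(0, devices, 16);

      if (n < 0)
          return;

      for (i = 0; i < n; i++) {
          if (devices[i]->available_nodes & (1 << DRM_NODE_PRIMARY))
              printf("primary node: %s\n", devices[i]->nodes[DRM_NODE_PRIMARY]);

          if (devices[i]->available_nodes & (1 << DRM_NODE_RENDER))
              printf("render node:  %s\n", devices[i]->nodes[DRM_NODE_RENDER]);
          else
              printf("  (no render node, likely display-only)\n");
      }

      drmFreeDevices(devices, n);
  }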

A bare-metal application (kmscube, Weston, ...) will have to find a node
to perform a modeset with, so it will have to find a /dev/dri/cardX node
which exposes one or more outputs. That will be the scanout device node.
But given that there's no render node there's no way to accelerate using
that device.
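
Roughly speaking (illustrative sketch, not actual application code), the
application ends up doing a check like this on each card node to find one
that is usable for modesetting:

  /* Does this node expose any connectors, i.e. can we modeset on it? */
  #include <fcntl.h>
  #include <unistd.h>
  #include <xf86drmMode.h>

  static int node_has_outputs(const char *path)
  {
      int fd = open(path, O_RDWR | O_CLOEXEC);
      drmModeResPtr res;
      int has_outputs;

      if (fd < 0)
          return 0;

      res = drmModeGetResources(fd);
      has_outputs = res && res->count_connectors > 0;

      if (res)
          drmModeFreeResources(res);
      close(fd);
      return has_outputs;
  }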

Composite drivers will bind to the scanout device node and find a render
node for the GPU (these are usually ARM SoCs, so it's fairly easy to
find the correct render node) and bind the GPU driver to that node. Then
the GPU driver will export buffers used for scanout and have the scanout
driver import them. That way the application can transparently use those
buffers for DRM/KMS.
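
In terms of the kernel interface, that buffer sharing boils down to the
usual PRIME/dma-buf dance, roughly like this (sketch only, error handling
trimmed; gpu_fd, scanout_fd and gpu_handle are assumed to exist already):

  #include <stdint.h>
  #include <unistd.h>
  #include <xf86drm.h>

  static int share_buffer(int gpu_fd, uint32_t gpu_handle,
                          int scanout_fd, uint32_t *scanout_handle)
  {
      int dmabuf_fd, ret;

      /* Export the GPU driver's GEM handle as a dma-buf fd... */
      if (drmPrimeHandleToFD(gpu_fd, gpu_handle, DRM_CLOEXEC, &dmabuf_fd))
          return -1;

      /* ...and import the same memory into the scanout device. */
      ret = drmPrimeFDToHandle(scanout_fd, dmabuf_fd, scanout_handle);
      close(dmabuf_fd); /* the import holds its own reference */
      return ret;
  }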

In my understanding, for X11 DRI3 the Xserver is supposed to give you a
renderable device node,
and for Wayland, the device path advertised is the one used by the server
for compositing.
Both X11 and Wayland will try to use the device used by the server for
compositing. In both cases they will receive the /dev/dri/cardX device
node for the scanout device.
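
You can see this by classifying the fd the client ends up with; on such a
setup a check like the following (illustrative only) reports a primary
node for a device that has no render node at all:

  #include <xf86drm.h>

  /* Returns nonzero if the fd handed back by the display server is a
   * primary (card) node rather than a render node. */
  static int fd_is_card_node(int server_fd)
  {
      return drmGetNodeTypeFromFd(server_fd) == DRM_NODE_PRIMARY;
  }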

Effectively X11 and Wayland will call back into the scanout driver for
acceleration. However with a minimal renderonly driver that's not going
to work. I had posted a complete wrapper driver a long time ago which I
think could be improved to allow even this use-case to work, but I got
significant pushback.

Anyway, both X11 and Wayland use the loader_get_user_preferred_fd()
function that this patch modifies to allow overriding the final file
descriptor to use. Currently the only way to override is by providing
the DRI_PRIME environment variable or by setting up a dri.conf
configuration file.

loader_get_user_preferred_fd() returns a file descriptor (same as the
default_fd by default, or the one referring to the DRI_PRIME node) and a
flag that specifies whether or not the devices are the same. This is
used in order to determine if DMA-BUF or FLINK should be used to share
buffers.
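
For reference, callers use that helper roughly as follows (simplified
from memory, not a verbatim quote of the tree):

  #include <stdbool.h>
  #include "loader.h"

  bool is_different_device;
  int fd = loader_get_user_preferred_fd(default_fd, &is_different_device);

  /* Same device: buffers can be shared by FLINK name or dma-buf.
   * Different devices: only dma-buf works, and an extra copy may be
   * needed before presentation. */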

X11 has special code paths to deal with the PRIME use-case involving an
extra blit, which is somewhat unfortunate because on many devices that's
not going to be necessary.

For Wayland the EGL platform code checks for availability of a render
node to determine whether or not DMA-BUF should be used. That breaks for
this particular case as well because we do have a cardX node without a
corresponding renderDY node.
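
The check in question is roughly of this shape (paraphrased, not the
actual Mesa code): look up the render node corresponding to the fd in
use, and only take the dma-buf path when one exists.

  #include <stdbool.h>
  #include <stdlib.h>
  #include <xf86drm.h>

  static bool has_render_node(int fd)
  {
      /* drmGetRenderDeviceNameFromFd() comes back empty when the device
       * behind fd has no render node, which is exactly the case for a
       * display-only scanout device. */
      char *name = drmGetRenderDeviceNameFromFd(fd);
      bool present = (name != NULL);

      free(name);
      return present;
  }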

I suspect that with a full wrapper driver this could be solved more
nicely, including avoiding the extra blit in X11 DRI3. However it's a
lot more code to maintain than piggy-backing on top of PRIME support,
and a lot more complicated to get right (and more potential for breaking
existing use-cases).

Thierry


From what you say, I understand the following; please correct me if I'm wrong:

The issue is that you need some special format/placement in order to display on the screen.

A patch series handling that case was proposed long ago, but it received negative feedback.

Because a linear buffer is fine to display, you want to force the "different device is rendering" path so that a copy to a linear buffer happens before presentation.

In the proposed scheme, X11 DRI3 open returns an fd for the device used for display (which cannot render), and Wayland advertises the path of the device used for display, which likewise cannot be used for rendering.


This seems like something that should really be discussed with the developers on #xorg-devel and #wayland.


My opinion is that it feels contrary to the spirit of X11 DRI3 to hand out an unusable device fd, and to the spirit of Wayland to advertise the path of the device the server uses for display instead of the device it uses for rendering.

What you seem to want is some sort of format/placement/tiling constraint on the buffers shared with the server (X11 or Wayland). There have been several discussions on this topic, and my understanding is that some work is being done to have a universal description for tiling, and to have the client reallocate with different constraints when the server decides it would be a good idea. If I understand correctly, for Wayland this could be done via wl_drm or some other mechanism, and X11 DRI3 could be extended with something similar.
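
(For what it's worth, my understanding is that on the KMS side an explicit layout description already exists in the form of format modifiers; an illustrative sketch is below, with bo_handle, stride, width, height and kms_fd assumed to exist already:)

  #include <drm_fourcc.h>
  #include <xf86drmMode.h>

  /* Attach an explicit layout (here: linear) to the framebuffer so the
   * display side does not have to guess the tiling. */
  uint32_t handles[4]   = { bo_handle };
  uint32_t pitches[4]   = { stride };
  uint32_t offsets[4]   = { 0 };
  uint64_t modifiers[4] = { DRM_FORMAT_MOD_LINEAR };
  uint32_t fb_id;

  drmModeAddFB2WithModifiers(kms_fd, width, height, DRM_FORMAT_XRGB8888,
                             handles, pitches, offsets, modifiers,
                             &fb_id, DRM_MODE_FB_MODIFIERS);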


The proposed patch looks like a hack to me, and I suggest moving to the Xorg and Wayland mailing lists to discuss the proper solution.


Yours,


Axel Davy

