[AMD Official Use Only - AMD Internal Distribution Only]

Hello Akihiko,
I just had a brief conversation with Pierre-Eric, and I believe the use case is slightly different from the one described. For virgl, selecting a GPU with DRI_PRIME alone would suffice, because QEMU and virgl (both using OpenGL) rely on the driver to pick up the GPU. For native context (this case), however, no GL driver is used on the virglrenderer side, which is why we inform virglrenderer of the device through the callback functions instead.

Sincerely,
Luq

________________________________
From: Akihiko Odaki <[email protected]>
Sent: Thursday, December 11, 2025 9:19:22 a.m.
To: Dmitry Osipenko <[email protected]>; Irshad, Luqmaan <[email protected]>; [email protected] <[email protected]>
Cc: [email protected] <[email protected]>; [email protected] <[email protected]>; Pelloux-Prayer, Pierre-Eric <[email protected]>; [email protected] <[email protected]>
Subject: Re: [PATCH] virtio-gpu: create drm fd based on specified render node path

On 2025/12/08 9:49, Dmitry Osipenko wrote:
> Hi,
>
> On 12/5/25 21:49, Luqmaan Irshad wrote:
>> Added a special callback function called virtio_get_drm_fd to create
>> a render node based on the path specified by the user via the QEMU
>> command line. This function is called during the virglrenderer callback
>> sequence, where we specify the get_drm_fd function pointer to call back
>> our new function, allowing us to pass the fd of our created render node.

I guess what you need can be achieved by specifying a render node for
the display. Headless displays (egl-headless and dbus) have the
rendernode property for this.
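For example, the headless display can be pinned to a specific GPU like this. This is a sketch of the invocation, not taken from the patch; the render node path is a placeholder for whatever device backs your GPU:

```shell
# Hypothetical example: select the GPU backing /dev/dri/renderD128
# for the egl-headless display. Adjust the path for your system.
qemu-system-x86_64 \
    -device virtio-gpu-gl \
    -display egl-headless,rendernode=/dev/dri/renderD128
```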
For the other displays, Mesa should choose an appropriate render node,
and it can be overridden with the DRI_PRIME environment variable:
https://docs.mesa3d.org/envvars.html#envvar-DRI_PRIME

>>
>> Based-on: [email protected]
>>
>> Signed-off-by: Luqmaan Irshad <[email protected]>
>> ---
>>   hw/display/virtio-gpu-gl.c     |  4 ++++
>>   hw/display/virtio-gpu-virgl.c  | 17 ++++++++++++++++-
>>   include/hw/virtio/virtio-gpu.h |  1 +
>>   3 files changed, 21 insertions(+), 1 deletion(-)
>
> Do you think it could be possible and worthwhile to make QEMU's EGL
> display use the same GPU as virgl automatically? I.e., we tell QEMU/EGL
> which GPU to use, and then virgl will use the same DRM device that
> backs EGL.

As far as I understand, it is already ensured that virgl uses the EGL
display QEMU uses, and I think that is what you want. Opening a different
render node and passing that node to virglrenderer breaks it.

Regards,
Akihiko Odaki
