RE: [PATCH v1 7/7] ui/spice: Create another texture with linear layout when gl=on is enabled

2024-01-25 Thread Kasireddy, Vivek
Hi Marc-Andre,

> 
> Hi
> 
> On Sat, Jan 20, 2024 at 4:54 AM Vivek Kasireddy
>  wrote:
> >
> > Since most encoders (invoked by Spice) may not work with tiled memory
> > associated with a texture, we need to create another texture that has
> > linear memory layout and use that instead.
> >
> > Note that, there does not seem to be a direct way to indicate to the
> > GL implementation that a texture's backing memory needs to be linear.
> > Instead, we have to do it in a roundabout way where we first need to
> > create a tiled texture and obtain a dmabuf fd associated with it via
> > EGL. Next, we have to create a memory object by importing the dmabuf
> > fd created earlier and finally create a linear texture by using the
> > memory object as the texture storage mechanism.
> >
> > Cc: Gerd Hoffmann 
> > Cc: Marc-André Lureau 
> > Cc: Frediano Ziglio 
> > Cc: Dongwon Kim 
> > Signed-off-by: Vivek Kasireddy 
> > ---
> >  ui/spice-display.c | 33 +++++++++++++++++++++++++++++++++
> >  1 file changed, 33 insertions(+)
> >
> > diff --git a/ui/spice-display.c b/ui/spice-display.c
> > index 08b4aec921..94cb378dbe 100644
> > --- a/ui/spice-display.c
> > +++ b/ui/spice-display.c
> > @@ -893,6 +893,7 @@ static void spice_gl_switch(DisplayChangeListener *dcl,
> >  {
> >  SimpleSpiceDisplay *ssd = container_of(dcl, SimpleSpiceDisplay, dcl);
> >  EGLint stride, fourcc;
> > +GLuint texture = 0;
> >  int fd;
> >
> >  if (ssd->ds) {
> > @@ -912,6 +913,38 @@ static void spice_gl_switch(DisplayChangeListener *dcl,
> >  return;
> >  }
> >
> > +if (remote_client && surface_format(ssd->ds) != PIXMAN_r5g6b5) {
> 
> hmm
> 
> > +/*
> > + * We really want to ensure that the memory layout of the texture
> > + * is linear; otherwise, the encoder's output may show corruption.
> > + */
> > +surface_gl_create_texture_from_fd(ssd->ds, fd, &texture);
> 
> What if the encoder actually supports tiled layout?
Right, that would be true in most cases, as the Render and Encode hardware
are mostly compatible, and my patches are not required if Spice chooses a
hardware encoder. However, the choice of encoder (hardware vs software) is
made dynamically and is internal to Spice at this point. Since there is no
easy way for QEMU to know whether the encoder chosen by Spice supports any
given tiling format, I chose to always use a linear layout, which is
guaranteed to work with all types of encoders.

> 
> Shouldn't this conversion be done at the encoder level as necessary?
Yeah, that is probably the right place, but a software encoder like x264enc
is not going to know how to convert from the various tiled formats. That is
the specific case I am trying to address, given that it is not a problem
with hardware encoders. I guess we could add a QEMU ui/spice option to
tweak this behavior when launching the VM.

> 
> It's also strange to reuse an FD associated with a tiled texture for a
> linear layout (I am uncomfortable with all this tbh)
I have looked around, but there does not seem to be a way to ask the GL
implementation to create a texture with a linear layout other than by
importing memory objects. As noted in the comments below, ownership of the
fd is taken over by the GL implementation as part of the import.

Also, I have started a conversation to figure out if there is any other way to
create linear textures more efficiently:
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27067
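
Since the fd-ownership rules here are subtle, here is a minimal,
self-contained C sketch of the control flow in the patch, with the EGL/GL
calls replaced by hypothetical stubs (create_linear_texture,
export_texture_fd and delete_texture are stand-ins for
surface_gl_create_texture_from_fd(), egl_get_fd_for_texture() and
glDeleteTextures(); they are not the real QEMU helpers):

```c
#include <assert.h>

static int stub_import_works = 1;   /* toggle to simulate import failure */
static int stub_export_works = 1;   /* toggle to simulate export failure */

/* Consumes 'fd' on success (mirrors glImportMemoryFdEXT() semantics)
 * and returns a new linear texture id, or 0 on failure. */
static unsigned create_linear_texture(int fd)
{
    (void)fd;
    return stub_import_works ? 2 : 0;
}

/* Returns a fresh dmabuf fd for 'tex', or -1 on failure. */
static int export_texture_fd(unsigned tex)
{
    return stub_export_works ? (int)(100 + tex) : -1;
}

static void delete_texture(unsigned tex) { (void)tex; }

/*
 * Mirrors the patch: try to replace the tiled scanout texture with a
 * linear one; if exporting the linear texture fails, fall back to
 * re-exporting the original tiled texture. Returns the fd to hand to
 * Spice, or -1 when the caller should destroy the surface texture.
 */
static int pick_scanout_fd(unsigned *scanout_tex, int fd)
{
    unsigned texture = create_linear_texture(fd); /* consumes fd on success */

    if (texture > 0) {
        fd = export_texture_fd(texture);
        if (fd < 0) {
            /* linear path failed: drop it and re-export the tiled texture */
            delete_texture(texture);
            fd = export_texture_fd(*scanout_tex);
            if (fd < 0) {
                return -1;
            }
        } else {
            /* adopt the linear texture as the new scanout texture */
            *scanout_tex = texture;
        }
    }
    return fd;
}
```

Running the happy path switches the scanout texture to the linear copy and
returns the newly exported fd; with the import stubbed to fail, the
original fd is returned unchanged, just as in the patch.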

> 
> > +
> > +/*
> > + * A successful return after glImportMemoryFdEXT() means that
> > + * the ownership of fd has been passed to GL. In other words,
> > + * the fd we got above should not be used anymore.
> > + */
> > +if (texture > 0) {
> > +fd = egl_get_fd_for_texture(texture,
> > +&stride, &fourcc,
> > +NULL);
> > +if (fd < 0) {
> 
> I suggest adding warnings or tracing, to help debug issues...
Sure, will do that in v2.

> 
> > +glDeleteTextures(1, &texture);
> > +fd = egl_get_fd_for_texture(ssd->ds->texture,
> > +&stride, &fourcc,
> > +NULL);
> > +if (fd < 0) {
> > +surface_gl_destroy_texture(ssd->gls, ssd->ds);
> > +return;
> > +}
> > +} else {
> > +surface_gl_destroy_texture(ssd->gls, ssd->ds);
> > +ssd->ds->texture = texture;
> 
> Have you tried this series with virgl? (I doubt the renderer accepts
> that the scanout texture is changed)
I have tried with virgl enabled and it mostly works because my patches
don't affect virgl codepaths such as 

Re: [PATCH v1 7/7] ui/spice: Create another texture with linear layout when gl=on is enabled

2024-01-22 Thread Marc-André Lureau
Hi

On Sat, Jan 20, 2024 at 4:54 AM Vivek Kasireddy
 wrote:
>
> Since most encoders (invoked by Spice) may not work with tiled memory
> associated with a texture, we need to create another texture that has
> linear memory layout and use that instead.
>
> Note that, there does not seem to be a direct way to indicate to the
> GL implementation that a texture's backing memory needs to be linear.
> Instead, we have to do it in a roundabout way where we first need to
> create a tiled texture and obtain a dmabuf fd associated with it via
> EGL. Next, we have to create a memory object by importing the dmabuf
> fd created earlier and finally create a linear texture by using the
> memory object as the texture storage mechanism.
>
> Cc: Gerd Hoffmann 
> Cc: Marc-André Lureau 
> Cc: Frediano Ziglio 
> Cc: Dongwon Kim 
> Signed-off-by: Vivek Kasireddy 
> ---
>  ui/spice-display.c | 33 +++++++++++++++++++++++++++++++++
>  1 file changed, 33 insertions(+)
>
> diff --git a/ui/spice-display.c b/ui/spice-display.c
> index 08b4aec921..94cb378dbe 100644
> --- a/ui/spice-display.c
> +++ b/ui/spice-display.c
> @@ -893,6 +893,7 @@ static void spice_gl_switch(DisplayChangeListener *dcl,
>  {
>  SimpleSpiceDisplay *ssd = container_of(dcl, SimpleSpiceDisplay, dcl);
>  EGLint stride, fourcc;
> +GLuint texture = 0;
>  int fd;
>
>  if (ssd->ds) {
> @@ -912,6 +913,38 @@ static void spice_gl_switch(DisplayChangeListener *dcl,
>  return;
>  }
>
> +if (remote_client && surface_format(ssd->ds) != PIXMAN_r5g6b5) {

hmm

> +/*
> + * We really want to ensure that the memory layout of the texture
> + * is linear; otherwise, the encoder's output may show corruption.
> + */
> +surface_gl_create_texture_from_fd(ssd->ds, fd, &texture);

What if the encoder actually supports tiled layout?

Shouldn't this conversion be done at the encoder level as necessary?

It's also strange to reuse an FD associated with a tiled texture for a
linear layout (I am uncomfortable with all this tbh)

> +
> +/*
> + * A successful return after glImportMemoryFdEXT() means that
> + * the ownership of fd has been passed to GL. In other words,
> + * the fd we got above should not be used anymore.
> + */
> +if (texture > 0) {
> +fd = egl_get_fd_for_texture(texture,
> +&stride, &fourcc,
> +NULL);
> +if (fd < 0) {

I suggest adding warnings or tracing, to help debug issues...

> +glDeleteTextures(1, &texture);
> +fd = egl_get_fd_for_texture(ssd->ds->texture,
> +&stride, &fourcc,
> +NULL);
> +if (fd < 0) {
> +surface_gl_destroy_texture(ssd->gls, ssd->ds);
> +return;
> +}
> +} else {
> +surface_gl_destroy_texture(ssd->gls, ssd->ds);
> +ssd->ds->texture = texture;

Have you tried this series with virgl? (I doubt the renderer accepts
that the scanout texture is changed)

> +}
> +}
> +}
> +
>  trace_qemu_spice_gl_surface(ssd->qxl.id,
>  surface_width(ssd->ds),
>  surface_height(ssd->ds),
> --
> 2.39.2
>
>


-- 
Marc-André Lureau



[PATCH v1 7/7] ui/spice: Create another texture with linear layout when gl=on is enabled

2024-01-19 Thread Vivek Kasireddy
Since most encoders (invoked by Spice) may not work with tiled memory
associated with a texture, we need to create another texture that has
linear memory layout and use that instead.

Note that, there does not seem to be a direct way to indicate to the
GL implementation that a texture's backing memory needs to be linear.
Instead, we have to do it in a roundabout way where we first need to
create a tiled texture and obtain a dmabuf fd associated with it via
EGL. Next, we have to create a memory object by importing the dmabuf
fd created earlier and finally create a linear texture by using the
memory object as the texture storage mechanism.

Cc: Gerd Hoffmann 
Cc: Marc-André Lureau 
Cc: Frediano Ziglio 
Cc: Dongwon Kim 
Signed-off-by: Vivek Kasireddy 
---
 ui/spice-display.c | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/ui/spice-display.c b/ui/spice-display.c
index 08b4aec921..94cb378dbe 100644
--- a/ui/spice-display.c
+++ b/ui/spice-display.c
@@ -893,6 +893,7 @@ static void spice_gl_switch(DisplayChangeListener *dcl,
 {
 SimpleSpiceDisplay *ssd = container_of(dcl, SimpleSpiceDisplay, dcl);
 EGLint stride, fourcc;
+    GLuint texture = 0;
 int fd;
 
 if (ssd->ds) {
@@ -912,6 +913,38 @@ static void spice_gl_switch(DisplayChangeListener *dcl,
 return;
 }
 
+    if (remote_client && surface_format(ssd->ds) != PIXMAN_r5g6b5) {
+        /*
+         * We really want to ensure that the memory layout of the texture
+         * is linear; otherwise, the encoder's output may show corruption.
+         */
+        surface_gl_create_texture_from_fd(ssd->ds, fd, &texture);
+
+        /*
+         * A successful return after glImportMemoryFdEXT() means that
+         * the ownership of fd has been passed to GL. In other words,
+         * the fd we got above should not be used anymore.
+         */
+        if (texture > 0) {
+            fd = egl_get_fd_for_texture(texture,
+                                        &stride, &fourcc,
+                                        NULL);
+            if (fd < 0) {
+                glDeleteTextures(1, &texture);
+                fd = egl_get_fd_for_texture(ssd->ds->texture,
+                                            &stride, &fourcc,
+                                            NULL);
+                if (fd < 0) {
+                    surface_gl_destroy_texture(ssd->gls, ssd->ds);
+                    return;
+                }
+            } else {
+                surface_gl_destroy_texture(ssd->gls, ssd->ds);
+                ssd->ds->texture = texture;
+            }
+        }
+    }
+
 trace_qemu_spice_gl_surface(ssd->qxl.id,
 surface_width(ssd->ds),
 surface_height(ssd->ds),
-- 
2.39.2