When testing with gl=on on an Intel Host, it was noticed that the R and B channels were interchanged when the Guest FB image was displayed. This was only seen if the display layer (virtio-gpu) did not directly share the dmabuf fd with Spice (i.e., blob=false).
One of the main differences between the blob=true and blob=false cases is that in the latter we create the dmabuf fd from a texture, whereas in the former we directly pass the fd from the display layer to Spice. Although the surface's format (PIXMAN_BE_b8g8r8x8) is the same in both cases, the creation of the texture (which involves copying data from the Pixman image into a GPU buffer) appears to somehow result in the R and B channels being interchanged. One way to ensure correct behavior is to use glformat=GL_RGBA while creating the texture. It looks like having glformat=GL_RGBA and gltype=GL_UNSIGNED_BYTE should work regardless of the Host's endianness, but let us limit this change to this specific use-case for now.

Cc: Gerd Hoffmann <kra...@redhat.com>
Cc: Marc-André Lureau <marcandre.lur...@redhat.com>
Cc: Frediano Ziglio <fredd...@gmail.com>
Cc: Dongwon Kim <dongwon....@intel.com>
Signed-off-by: Vivek Kasireddy <vivek.kasire...@intel.com>
---
 ui/spice-display.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/ui/spice-display.c b/ui/spice-display.c
index 90c04623ec..08b4aec921 100644
--- a/ui/spice-display.c
+++ b/ui/spice-display.c
@@ -900,6 +900,9 @@ static void spice_gl_switch(DisplayChangeListener *dcl,
     }
     ssd->ds = new_surface;
     if (ssd->ds) {
+        if (remote_client && surface_format(ssd->ds) != PIXMAN_r5g6b5) {
+            ssd->ds->target_glformat = GL_RGBA;
+        }
         surface_gl_create_texture(ssd->gls, ssd->ds);
         fd = egl_get_fd_for_texture(ssd->ds->texture, &stride, &fourcc,
-- 
2.39.2