Re: [PATCH v14 00/14] Support blob memory and venus on qemu

2024-06-23 Thread Alex Bennée
Dmitry Osipenko  writes:

> On 6/21/24 11:59, Alex Bennée wrote:
>> Dmitry Osipenko  writes:
>> 
>>> On 6/19/24 20:37, Alex Bennée wrote:
 So I've been experimenting with Aarch64 TCG with an Intel backend like
 this:

 ./qemu-system-aarch64 \
-M virt -cpu cortex-a76 \
-device virtio-net-pci,netdev=unet \
-netdev user,id=unet,hostfwd=tcp::-:22 \
-m 8192 \
-object memory-backend-memfd,id=mem,size=8G,share=on \
-serial mon:stdio \
-kernel 
 ~/lsrc/linux.git/builds/arm64.initramfs/arch/arm64/boot/Image \
-append "console=ttyAMA0" \
-device qemu-xhci -device usb-kbd -device usb-tablet \
-device virtio-gpu-gl-pci,blob=true,venus=true,hostmem=4G \
-display sdl,gl=on -d 
 plugin,guest_errors,trace:virtio_gpu_cmd_res_create_blob,trace:virtio_gpu_cmd_res_back_\*,trace:virtio_gpu_cmd_res_xfer_toh_3d,trace:virtio_gpu_cmd_res_xfer_fromh_3d,trace:address_space_map
  

 And I've noticed a couple of things. First trying to launch vkmark to
 run a KMS mode test fails with:

>>> ...
   virgl_render_server[1875931]: vkr: failed to import resource: invalid res_id 5
   virgl_render_server[1875931]: vkr: vkAllocateMemory resulted in CS error
   virgl_render_server[1875931]: vkr: ring_submit_cmd: vn_dispatch_command failed

 More interestingly when shutting stuff down we see weirdness like:

   address_space_map as:0x561b48ec48c0 addr 0x1008ac4b0:18 write:1 attrs:0x1

   virgl_render_server[1875931]: vkr: destroying context 3 (vkmark) with a valid instance
   virgl_render_server[1875931]: vkr: destroying device with valid objects

   vkr_context_remove_object: -7438602987017907480
   vkr_context_remove_object: 7
   vkr_context_remove_object: 5

 which indicates something has gone very wrong. I'm not super familiar
 with the memory allocation patterns but should stuff that is done as
 virtio_gpu_cmd_res_back_attach() be find-able in the list of resources?
>>>
>>> This is expected to fail. Vkmark creates shmem virgl GBM FB BO on guest
>>> that isn't exportable on host. AFAICT, more code changes should be
>>> needed to support this case.
>> 
>> There are a lot of acronyms there. If this is pure guest memory why
>> isn't it exportable to the host? Or should the underlying mesa library
>> be making sure the allocation happens from the shared region?
>> 
>> Is vkmark particularly special here?
>
> Actually, you can get it to work to some degree if you compile
> virglrenderer with -Dminigbm_allocation=true. On the host, use a GTK/Wayland
> display.

I'll give that a go.

> Vkmark isn't special. It's virglrenderer that has room for
> improvement. ChromeOS doesn't use KMS in VMs, so proper KMS support was
> never a priority for Venus.

Is there a tracking bug for KMS support in Venus? Or should Venus work
fine if virglrenderer can export the buffer to the host?



 This could be a false positive or it could be a race between the guest
 kernel clearing memory while we are still doing
 virtio_gpu_ctrl_response.

 What do you think?
>>>
>>> The memcpy warning looks a bit suspicious, but is likely harmless. I
>>> don't see such a warning with TSAN and an x86 VM.
>> 
>> TSAN can only pick up these interactions with TCG guests because it can
>> track guest memory accesses. With a KVM guest we have no visibility of
>> the guest accesses. 
>
> I couldn't reproduce this issue with my KVM/TCG/ARM64 setups. For x86 I
> checked both KVM and TCG; TSAN only warns about virtio-net memcpys for
> me.

Hmm OK. I'll keep an eye out as I test the next version.

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



Re: [PATCH v14 00/14] Support blob memory and venus on qemu

2024-06-21 Thread Dmitry Osipenko
On 6/21/24 11:59, Alex Bennée wrote:
> Dmitry Osipenko  writes:
> 
>> On 6/19/24 20:37, Alex Bennée wrote:
>>> So I've been experimenting with Aarch64 TCG with an Intel backend like
>>> this:
>>>
>>> ./qemu-system-aarch64 \
>>>-M virt -cpu cortex-a76 \
>>>-device virtio-net-pci,netdev=unet \
>>>-netdev user,id=unet,hostfwd=tcp::-:22 \
>>>-m 8192 \
>>>-object memory-backend-memfd,id=mem,size=8G,share=on \
>>>-serial mon:stdio \
>>>-kernel 
>>> ~/lsrc/linux.git/builds/arm64.initramfs/arch/arm64/boot/Image \
>>>-append "console=ttyAMA0" \
>>>-device qemu-xhci -device usb-kbd -device usb-tablet \
>>>-device virtio-gpu-gl-pci,blob=true,venus=true,hostmem=4G \
>>>-display sdl,gl=on -d 
>>> plugin,guest_errors,trace:virtio_gpu_cmd_res_create_blob,trace:virtio_gpu_cmd_res_back_\*,trace:virtio_gpu_cmd_res_xfer_toh_3d,trace:virtio_gpu_cmd_res_xfer_fromh_3d,trace:address_space_map
>>>  
>>>
>>> And I've noticed a couple of things. First trying to launch vkmark to
>>> run a KMS mode test fails with:
>>>
>> ...
>>>   virgl_render_server[1875931]: vkr: failed to import resource: invalid res_id 5
>>>   virgl_render_server[1875931]: vkr: vkAllocateMemory resulted in CS error
>>>   virgl_render_server[1875931]: vkr: ring_submit_cmd: vn_dispatch_command failed
>>>
>>> More interestingly when shutting stuff down we see weirdness like:
>>>
>>>   address_space_map as:0x561b48ec48c0 addr 0x1008ac4b0:18 write:1 attrs:0x1
>>>
>>>   virgl_render_server[1875931]: vkr: destroying context 3 (vkmark) with a valid instance
>>>   virgl_render_server[1875931]: vkr: destroying device with valid objects
>>>
>>>   vkr_context_remove_object: -7438602987017907480
>>>   vkr_context_remove_object: 7
>>>   vkr_context_remove_object: 5
>>>
>>> which indicates something has gone very wrong. I'm not super familiar
>>> with the memory allocation patterns but should stuff that is done as
>>> virtio_gpu_cmd_res_back_attach() be find-able in the list of resources?
>>
>> This is expected to fail. Vkmark creates shmem virgl GBM FB BO on guest
>> that isn't exportable on host. AFAICT, more code changes should be
>> needed to support this case.
> 
> There are a lot of acronyms there. If this is pure guest memory why
> isn't it exportable to the host? Or should the underlying mesa library
> be making sure the allocation happens from the shared region?
> 
> Is vkmark particularly special here?

Actually, you can get it to work to some degree if you compile
virglrenderer with -Dminigbm_allocation=true. On the host, use a GTK/Wayland
display.
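
For reference, such a build looks roughly like this (the -Dminigbm_allocation
option comes from the message above; -Dvenus=true is assumed, double-check
meson_options.txt in your checkout):

```shell
# Build virglrenderer with minigbm allocation enabled.
# -Dminigbm_allocation=true is taken from the discussion above;
# -Dvenus=true is an assumed switch, verify against meson_options.txt.
git clone https://gitlab.freedesktop.org/virgl/virglrenderer.git
cd virglrenderer
meson setup build -Dminigbm_allocation=true -Dvenus=true
ninja -C build
sudo ninja -C build install
```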

Vkmark isn't special. It's virglrenderer that has room for
improvement. ChromeOS doesn't use KMS in VMs, so proper KMS support was
never a priority for Venus.

>> Note that "destroying device with valid objects" msg is fine, won't hurt
>> to silence it in Venus to avoid confusion. It will happen every time
>> guest application is closed without explicitly releasing every VK
>> object.
> 
> I was more concerned with:
> 
>>>   vkr_context_remove_object: -7438602987017907480
> 
> which looks like a corruption of the object ids (or maybe an off-by-one)

At first glance this appears to be a valid value; otherwise Venus should've
crashed Qemu with a debug assert for an invalid ID. But I've never seen such
odd IDs in my testing.

>>> I tried running under RR to further debug but weirdly I can't get
>>> working graphics with that. I did try running under threadsan which
>>> complained about a potential data race:
>>>
>>>   vkr_context_add_object: 1 -> 0x7b2c0288
>>>   vkr_context_add_object: 2 -> 0x7b2c0270
>>>   vkr_context_add_object: 3 -> 0x7b387f28
>>>   vkr_context_add_object: 4 -> 0x7b387fa0
>>>   vkr_context_add_object: 5 -> 0x7b48000103f8
>>>   vkr_context_add_object: 6 -> 0x7b48000104a0
>>>   vkr_context_add_object: 7 -> 0x7b4800010440
>>>   virtio_gpu_cmd_res_back_attach res 0x5
>>>   virtio_gpu_cmd_res_back_attach res 0x6
>>>   vkr_context_add_object: 8 -> 0x7b48000103e0
>>>   virgl_render_server[1751430]: vkr: failed to import resource: invalid res_id 5
>>>   virgl_render_server[1751430]: vkr: vkAllocateMemory resulted in CS error

Re: [PATCH v14 00/14] Support blob memory and venus on qemu

2024-06-21 Thread Alex Bennée
Dmitry Osipenko  writes:

> On 6/19/24 20:37, Alex Bennée wrote:
>> So I've been experimenting with Aarch64 TCG with an Intel backend like
>> this:
>> 
>> ./qemu-system-aarch64 \
>>-M virt -cpu cortex-a76 \
>>-device virtio-net-pci,netdev=unet \
>>-netdev user,id=unet,hostfwd=tcp::-:22 \
>>-m 8192 \
>>-object memory-backend-memfd,id=mem,size=8G,share=on \
>>-serial mon:stdio \
>>-kernel 
>> ~/lsrc/linux.git/builds/arm64.initramfs/arch/arm64/boot/Image \
>>-append "console=ttyAMA0" \
>>-device qemu-xhci -device usb-kbd -device usb-tablet \
>>-device virtio-gpu-gl-pci,blob=true,venus=true,hostmem=4G \
>>-display sdl,gl=on -d 
>> plugin,guest_errors,trace:virtio_gpu_cmd_res_create_blob,trace:virtio_gpu_cmd_res_back_\*,trace:virtio_gpu_cmd_res_xfer_toh_3d,trace:virtio_gpu_cmd_res_xfer_fromh_3d,trace:address_space_map
>>  
>> 
>> And I've noticed a couple of things. First trying to launch vkmark to
>> run a KMS mode test fails with:
>> 
> ...
>>   virgl_render_server[1875931]: vkr: failed to import resource: invalid res_id 5
>>   virgl_render_server[1875931]: vkr: vkAllocateMemory resulted in CS error
>>   virgl_render_server[1875931]: vkr: ring_submit_cmd: vn_dispatch_command failed
>> 
>> More interestingly when shutting stuff down we see weirdness like:
>> 
>>   address_space_map as:0x561b48ec48c0 addr 0x1008ac4b0:18 write:1 attrs:0x1
>>
>>   virgl_render_server[1875931]: vkr: destroying context 3 (vkmark) with a valid instance
>>   virgl_render_server[1875931]: vkr: destroying device with valid objects
>>
>>   vkr_context_remove_object: -7438602987017907480
>>   vkr_context_remove_object: 7
>>   vkr_context_remove_object: 5
>> 
>> which indicates something has gone very wrong. I'm not super familiar
>> with the memory allocation patterns but should stuff that is done as
>> virtio_gpu_cmd_res_back_attach() be find-able in the list of resources?
>
> This is expected to fail. Vkmark creates shmem virgl GBM FB BO on guest
> that isn't exportable on host. AFAICT, more code changes should be
> needed to support this case.

There are a lot of acronyms there. If this is pure guest memory why
isn't it exportable to the host? Or should the underlying mesa library
be making sure the allocation happens from the shared region?

Is vkmark particularly special here?


> Note that "destroying device with valid objects" msg is fine, won't hurt
> to silence it in Venus to avoid confusion. It will happen every time
> guest application is closed without explicitly releasing every VK
> object.

I was more concerned with:

>>   vkr_context_remove_object: -7438602987017907480

which looks like a corruption of the object ids (or maybe an off-by-one)

>
>> I tried running under RR to further debug but weirdly I can't get
>> working graphics with that. I did try running under threadsan which
>> complained about a potential data race:
>> 
>>   vkr_context_add_object: 1 -> 0x7b2c0288
>>   vkr_context_add_object: 2 -> 0x7b2c0270
>>   vkr_context_add_object: 3 -> 0x7b387f28
>>   vkr_context_add_object: 4 -> 0x7b387fa0
>>   vkr_context_add_object: 5 -> 0x7b48000103f8
>>   vkr_context_add_object: 6 -> 0x7b48000104a0
>>   vkr_context_add_object: 7 -> 0x7b4800010440
>>   virtio_gpu_cmd_res_back_attach res 0x5
>>   virtio_gpu_cmd_res_back_attach res 0x6
>>   vkr_context_add_object: 8 -> 0x7b48000103e0
>>   virgl_render_server[1751430]: vkr: failed to import resource: invalid res_id 5
>>   virgl_render_server[1751430]: vkr: vkAllocateMemory resulted in CS error
>>   virgl_render_server[1751430]: vkr: ring_submit_cmd: vn_dispatch_command failed
>>   ==
>>   WARNING: ThreadSanitizer: data race (pid=1751256)
>> Read of size 8 at 0x7f7fa0ea9138 by main thread (mutexes: write M0):
>>   #0 memcpy  (qemu-system-aarch64+0x41fede) (BuildId: 
>> 0bab171e77cb6782341ee3407e44af7267974025)
> ..
>>   ==
>>   SUMMARY: ThreadSanitizer: data race 
>> (/home/alex/lsrc/qemu.git/builds/system.threadsan/qemu-system-aarch64+0x41fede)
>>  (BuildId: 0bab171e77cb6782341ee3407e44af7267974025) in __interceptor_memcpy
>> 
>> This could be a false positive or it could be a race between the guest
>> kernel clearing memory while we are still doing virtio_gpu_ctrl_response.

Re: [PATCH v14 00/14] Support blob memory and venus on qemu

2024-06-20 Thread Dmitry Osipenko
On 6/19/24 20:37, Alex Bennée wrote:
> So I've been experimenting with Aarch64 TCG with an Intel backend like
> this:
> 
> ./qemu-system-aarch64 \
>-M virt -cpu cortex-a76 \
>-device virtio-net-pci,netdev=unet \
>-netdev user,id=unet,hostfwd=tcp::-:22 \
>-m 8192 \
>-object memory-backend-memfd,id=mem,size=8G,share=on \
>-serial mon:stdio \
>-kernel 
> ~/lsrc/linux.git/builds/arm64.initramfs/arch/arm64/boot/Image \
>-append "console=ttyAMA0" \
>-device qemu-xhci -device usb-kbd -device usb-tablet \
>-device virtio-gpu-gl-pci,blob=true,venus=true,hostmem=4G \
>-display sdl,gl=on -d 
> plugin,guest_errors,trace:virtio_gpu_cmd_res_create_blob,trace:virtio_gpu_cmd_res_back_\*,trace:virtio_gpu_cmd_res_xfer_toh_3d,trace:virtio_gpu_cmd_res_xfer_fromh_3d,trace:address_space_map
>  
> 
> And I've noticed a couple of things. First trying to launch vkmark to
> run a KMS mode test fails with:
> 
...
>   virgl_render_server[1875931]: vkr: failed to import resource: invalid res_id 5
>   virgl_render_server[1875931]: vkr: vkAllocateMemory resulted in CS error
>   virgl_render_server[1875931]: vkr: ring_submit_cmd: vn_dispatch_command failed
> 
> More interestingly when shutting stuff down we see weirdness like:
> 
>   address_space_map as:0x561b48ec48c0 addr 0x1008ac4b0:18 write:1 attrs:0x1
>
>   virgl_render_server[1875931]: vkr: destroying context 3 (vkmark) with a valid instance
>   virgl_render_server[1875931]: vkr: destroying device with valid objects
>
>   vkr_context_remove_object: -7438602987017907480
>   vkr_context_remove_object: 7
>   vkr_context_remove_object: 5
> 
> which indicates something has gone very wrong. I'm not super familiar
> with the memory allocation patterns but should stuff that is done as
> virtio_gpu_cmd_res_back_attach() be find-able in the list of resources?

This is expected to fail. Vkmark creates a shmem virgl GBM FB BO on the guest
that isn't exportable on the host. AFAICT, more code changes would be
needed to support this case.

Note that "destroying device with valid objects" msg is fine, won't hurt
to silence it in Venus to avoid confusion. It will happen every time
guest application is closed without explicitly releasing every VK object.

> I tried running under RR to further debug but weirdly I can't get
> working graphics with that. I did try running under threadsan which
> complained about a potential data race:
> 
>   vkr_context_add_object: 1 -> 0x7b2c0288
>   vkr_context_add_object: 2 -> 0x7b2c0270
>   vkr_context_add_object: 3 -> 0x7b387f28
>   vkr_context_add_object: 4 -> 0x7b387fa0
>   vkr_context_add_object: 5 -> 0x7b48000103f8
>   vkr_context_add_object: 6 -> 0x7b48000104a0
>   vkr_context_add_object: 7 -> 0x7b4800010440
>   virtio_gpu_cmd_res_back_attach res 0x5
>   virtio_gpu_cmd_res_back_attach res 0x6
>   vkr_context_add_object: 8 -> 0x7b48000103e0
>   virgl_render_server[1751430]: vkr: failed to import resource: invalid res_id 5
>   virgl_render_server[1751430]: vkr: vkAllocateMemory resulted in CS error
>   virgl_render_server[1751430]: vkr: ring_submit_cmd: vn_dispatch_command failed
>   ==
>   WARNING: ThreadSanitizer: data race (pid=1751256)
> Read of size 8 at 0x7f7fa0ea9138 by main thread (mutexes: write M0):
>   #0 memcpy  (qemu-system-aarch64+0x41fede) (BuildId: 
> 0bab171e77cb6782341ee3407e44af7267974025)
..
>   ==
>   SUMMARY: ThreadSanitizer: data race 
> (/home/alex/lsrc/qemu.git/builds/system.threadsan/qemu-system-aarch64+0x41fede)
>  (BuildId: 0bab171e77cb6782341ee3407e44af7267974025) in __interceptor_memcpy
> 
> This could be a false positive or it could be a race between the guest
> kernel clearing memory while we are still doing
> virtio_gpu_ctrl_response.
> 
> What do you think?

The memcpy warning looks a bit suspicious, but is likely harmless. I
don't see such a warning with TSAN and an x86 VM.

-- 
Best regards,
Dmitry




Re: [PATCH v14 00/14] Support blob memory and venus on qemu

2024-06-19 Thread Alex Bennée
Dmitry Osipenko  writes:

> Hello,
>
> This series enables Vulkan Venus context support on virtio-gpu.
>
> All virglrenderer and almost all Linux kernel prerequisite changes
> needed by Venus are already upstream. For the kernel there is a pending
> KVM patchset that fixes mapping of compound pages needed for DRM drivers
> using TTM [1]; otherwise hostmem blob mapping will fail with a KVM error
> from Qemu.

So I've been experimenting with Aarch64 TCG with an Intel backend like
this:

./qemu-system-aarch64 \
   -M virt -cpu cortex-a76 \
   -device virtio-net-pci,netdev=unet \
   -netdev user,id=unet,hostfwd=tcp::-:22 \
   -m 8192 \
   -object memory-backend-memfd,id=mem,size=8G,share=on \
   -serial mon:stdio \
   -kernel 
~/lsrc/linux.git/builds/arm64.initramfs/arch/arm64/boot/Image \
   -append "console=ttyAMA0" \
   -device qemu-xhci -device usb-kbd -device usb-tablet \
   -device virtio-gpu-gl-pci,blob=true,venus=true,hostmem=4G \
   -display sdl,gl=on -d 
plugin,guest_errors,trace:virtio_gpu_cmd_res_create_blob,trace:virtio_gpu_cmd_res_back_\*,trace:virtio_gpu_cmd_res_xfer_toh_3d,trace:virtio_gpu_cmd_res_xfer_fromh_3d,trace:address_space_map
 

And I've noticed a couple of things. First trying to launch vkmark to
run a KMS mode test fails with:

  vkr_context_add_object: 5 -> 0x7f24b81d7198
  address_space_map as:0x561b48ec48c0 addr 0x1008ac648:20 write:0 attrs:0x1
  address_space_map as:0x561b48ec48c0 addr 0x109dc5be0:18 write:0 attrs:0x1
  address_space_map as:0x561b48ec48c0 addr 0x1008ac668:18 write:1 attrs:0x1
  vkr_context_add_object: 6 -> 0x7f24b81d7240
  address_space_map as:0x561b48ec48c0 addr 0x1008ac648:20 write:0 attrs:0x1
  address_space_map as:0x561b48ec48c0 addr 0x109dc5be0:18 write:0 attrs:0x1
  address_space_map as:0x561b48ec48c0 addr 0x1008ac668:18 write:1 attrs:0x1
  vkr_context_add_object: 7 -> 0x7f24b81d71e0
  address_space_map as:0x561b48ec48c0 addr 0x1008ac648:48 write:0 attrs:0x1
  address_space_map as:0x561b48ec48c0 addr 0x1008ac690:18 write:1 attrs:0x1
  address_space_map as:0x561b48ec48c0 addr 0x1008ac570:20 write:0 attrs:0x1
  address_space_map as:0x561b48ec48c0 addr 0x101d64300:40 write:0 attrs:0x1
  address_space_map as:0x561b48ec48c0 addr 0x1008ac590:18 write:1 attrs:0x1
  address_space_map as:0x561b48ec48c0 addr 0x1008ac720:20 write:0 attrs:0x1
  address_space_map as:0x561b48ec48c0 addr 0x1008ac740:18 write:1 attrs:0x1
  virtio_gpu_cmd_res_back_attach res 0x5, 4 entries
  address_space_map as:0x561b48ec48c0 addr 0x109fd5000:2b000 write:0 attrs:0x1
  address_space_map as:0x561b48ec48c0 addr 0x10220:10 write:0 attrs:0x1
  address_space_map as:0x561b48ec48c0 addr 0x100e0:20 write:0 attrs:0x1
  address_space_map as:0x561b48ec48c0 addr 0x10a00:bd000 write:0 attrs:0x1

[PATCH v14 00/14] Support blob memory and venus on qemu

2024-06-15 Thread Dmitry Osipenko
Hello,

This series enables Vulkan Venus context support on virtio-gpu.

All virglrenderer and almost all Linux kernel prerequisite changes
needed by Venus are already upstream. For the kernel there is a pending
KVM patchset that fixes mapping of compound pages needed for DRM drivers
using TTM [1]; otherwise hostmem blob mapping will fail with a KVM error
from Qemu.

[1] https://lore.kernel.org/kvm/20240229025759.1187910-1-steve...@google.com/

You'll need to use recent Mesa version containing patch that removes
dependency on cross-device feature from Venus that isn't supported by
Qemu [2].

[2] 
https://gitlab.freedesktop.org/mesa/mesa/-/commit/087e9a96d13155e26987befae78b6ccbb7ae242b

Example Qemu cmdline that enables Venus:

  qemu-system-x86_64 -device virtio-vga-gl,hostmem=4G,blob=true,venus=true \
  -machine q35,accel=kvm,memory-backend=mem1 \
  -object memory-backend-memfd,id=mem1,size=8G -m 8G


Changes from V13 to V14

- Fixed erroneous fall-through in renderer_state's switch-case that was
  spotted by Marc-André Lureau.

- Reworked HOSTMEM_MR_FINISH_UNMAPPING handling as was suggested by
  Akihiko Odaki. Now it shares the same code path with HOSTMEM_MR_MAPPED.

- Made use of g_autofree in virgl_cmd_resource_create_blob() as was
  suggested by Akihiko Odaki.

- Removed virtio_gpu_virgl_deinit() and moved all deinit code to
  virtio_gpu_gl_device_unrealize() as was suggested by Marc-André Lureau.

- Replaced HAVE_FEATURE in meson.build with virglrenderer's VERSION_MAJOR
  check as was suggested by Marc-André Lureau.

- Added trace event for cmd-suspension as was suggested by Marc-André Lureau.

- Added a patch to replace in-flight printf's with trace events, as was
  suggested by Marc-André Lureau.

Changes from V12 to V13

- Replaced the `res->async_unmap_in_progress` flag with a mapping state and
  moved it to virtio_gpu_virgl_hostmem_region, as suggested
  by Akihiko Odaki.

- Renamed blob_unmap function and added back cmd_suspended argument
  to it. Suggested by Akihiko Odaki.

- Reordered VirtIOGPUGL refactoring patches to minimize code changes,
  as suggested by Akihiko Odaki.

- Replaced gl->renderer_inited with gl->renderer_state, as suggested
  by Alex Bennée.

- Added gl->renderer state resetting to gl_device_unrealize(), for
  consistency. Suggested by Alex Bennée.

- Added rb's from Alex and Manos.

- Fixed compiling with !HAVE_VIRGL_RESOURCE_BLOB.

Changes from V11 to V12

- Fixed virgl_cmd_resource_create_blob() error handling. Now it doesn't
  corrupt resource list and releases resource properly on error. Thanks
  to Akihiko Odaki for spotting the bug.

- Added a new patch that handles virtio_gpu_virgl_init() failure gracefully,
  fixing a Qemu crash. Besides fixing the crash, it allows implementing
  a cleaner virtio_gpu_virgl_deinit().

- virtio_gpu_virgl_deinit() now assumes that virgl was previously
  initialized successfully, if it was initialized at all. Suggested by
  Akihiko Odaki.

- Fixed missed freeing of print_stats timer in virtio_gpu_virgl_deinit()

- Added back blob unmapping on RESOURCE_UNREF that was requested
  by Akihiko Odaki. Added a comment to the code explaining how
  async unmapping works. Added back the `res->async_unmap_in_progress`
  flag and a comment explaining why it's needed.

- Moved cmdq_resume_bh to VirtIOGPUGL and made coding style changes
  suggested by Akihiko Odaki.

- Added patches that move fence_poll and print_stats timers to VirtIOGPUGL
  for consistency with cmdq_resume_bh.

Changes from V10 to V11

- Replaced cmd_resume bool in struct ctrl_command with
  "cmd->finished + !VIRTIO_GPU_FLAG_FENCE" checking as was requested
  by Akihiko Odaki.

- Reworked virgl_cmd_resource_unmap/unref_blob() to avoid re-adding
  the 'async_unmap_in_progress' flag that was dropped in v9:

1. virgl_cmd_resource_[un]map_blob() no longer checks itself whether
   the resource was previously mapped and lets virglrenderer do the
   checking.

2. The error returned by virgl_renderer_resource_unmap() is now handled
   and reported properly; previously the error wasn't checked.
   virgl_renderer_resource_unmap() fails if the resource wasn't mapped.

3. virgl_cmd_resource_unref_blob() no longer allows unreferencing a
   resource that is mapped; it's an error condition if the guest didn't
   unmap the resource before doing the unref. Previously, unref implicitly
   unmapped the resource.

Changes from V9 to V10

- Dropped the 'async_unmap_in_progress' variable and switched to using
  aio_bh_new() instead of the oneshot variant in the "blob commands" patch.

- Further improved error messages by printing the error code when the
  actual error occurs and using ERR_UNSPEC instead of ERR_ENOMEM when we
  don't really know that it was ENOMEM for sure.

- Added vdc->unrealize for the virtio GL device and freed virgl data.

- Dropped the UUID and doc/migration patches. The UUID feature isn't needed
  anymore; instead we changed the Mesa Venus driver to not require UUID.

- Renamed virtio-