On Mon, Mar 13, 2023 at 11:44 AM Gurchetan Singh <
gurchetansi...@chromium.org> wrote:

> On Mon, Mar 13, 2023 at 5:58 AM Marc-André Lureau
> <marcandre.lur...@gmail.com> wrote:
> >
> > Hi Gurchetan
> >
> > On Tue, Mar 7, 2023 at 2:41 AM Gurchetan Singh
> > <gurchetansi...@chromium.org> wrote:
> > >
> > > On Tue, Jan 31, 2023 at 3:15 PM Dmitry Osipenko
> > > <dmitry.osipe...@collabora.com> wrote:
> > > >
> > > > Hello,
> > > >
> > > > On 1/30/23 20:00, Alex Bennée wrote:
> > > > >
> > > > > Antonio Caggiano <antonio.caggi...@collabora.com> writes:
> > > > >
> > > > >> This series of patches enables support for the Venus VirtIO-GPU Vulkan
> > > > >> driver by adding some features required by the driver:
> > > > >>
> > > > >> - CONTEXT_INIT
> > > > >> - HOSTMEM
> > > > >> - RESOURCE_UUID
> > > > >> - BLOB_RESOURCES
> > > > >>
> > > > >> In addition to these features, Venus capset support was required
> > > > >> together with the implementation for Virgl blob resource commands.
> > > > >
> > > > > I managed to apply this to current master, but I needed a bunch of
> > > > > patches to get it to compile with my old virgl:
> > > >
> > > > Thank you for reviewing and testing the patches! Antonio isn't working
> > > > on Venus anymore; I'm going to continue this effort. Last year we
> > > > stabilized some of the virglrenderer Venus APIs; this year Venus may
> > > > transition to supporting per-context fences only and require initializing
> > > > a render server, which will result in more changes to QEMU. I'm going to
> > > > wait a bit for Venus to settle down and then make a v4.
> > > >
> > > > In the end, we will either need to add more #ifdefs if we want to
> > > > keep supporting older virglrenderer versions in QEMU, or bump the
> > > > minimum required virglrenderer version.
> > >
> > > Hi Dmitry,
> > >
> > > Thanks for working on this, it's great to see QEMU graphics moving
> > > forward.  I noticed a few things from your patchset:
> > >
> > > 1)  Older versions of virglrenderer -- supported or not?
> > >
> > > As you alluded to, there have been significant changes to
> > > virglrenderer since the last QEMU graphics update.  For example, the
> > > asynchronous callback introduces an entirely different and
> > > incompatible way to signal fence completion.
> > >
> > > Notionally, QEMU must support older versions of virglrenderer, though
> > > in practice I'm not sure how much that is true.  If we want to keep up
> > > the notion that older versions must be supported, you'll need:
> > >
> > > a) virtio-gpu-virgl.c
> > > b) virtio-gpu-virgl2.c (or an equivalent)
> > >
> > > Similarly for the vhost-user paths (if you want to support those).  If
> > > older versions of virglrenderer don't need to be supported, that would
> > > reduce the number of additional paths/#ifdefs.
> >
> > We should support old versions of virgl (as described in
> > https://www.qemu.org/docs/master/about/build-platforms.html#linux-os-macos-freebsd-netbsd-openbsd).
> >
> > Whether a new virtio-gpu-virgl2.c (or equivalent) is necessary, we
> > can't really tell without seeing the changes involved.
>
> Ack.  Something to keep in mind as Dmitry refactors.
>
> >
> > >
> > > 2) Additional context type: gfxstream [i]?
> > >
> > > One of the major motivations for adding context types in the
> > > virtio-gpu spec was supporting gfxstream.  gfxstream is used in the
> > > Android Studio emulator (a variant of QEMU) [ii], among other places.
> > > That would move the Android emulator closer to the goal of using
> > > upstream QEMU for everything.
> >
> > What is the advantage of using gfxstream over virgl? or zink+venus?
>
> History/backstory:
>
> gfxstream development has its roots in the development of the Android
> Emulator (circa 2010).  In those days, both DRM and Android were
> relatively new and the communities didn't know much about each other.
>
> A method was devised to auto-generate GLES calls (that's all Android
> needed) and stream it over an interface very similar to pipe(..).
> Host-generated IDs were used to track shareable buffers.
>
> That same method used to auto-generate GLES was expanded to Vulkan and
> support for coherent memory was added.  In 2018 the Android Emulator
> was the first to ship CTS-compliant virtualized Vulkan via downstream
> kernel interfaces, before work on venus began.
>
> As virtio-gpu continued to mature, gfxstream was actually the first to
> ship both blob resources [1] and context types [2] in production via
> crosvm to form a completely upstreamable solution (I consider AOSP to
> be an "upstream" as well).
>
> [1]
> https://patchwork.kernel.org/project/dri-devel/cover/20200814024000.2485-1-gurchetansi...@chromium.org/
> [2] https://lists.oasis-open.org/archives/virtio-dev/202108/msg00141.html
>
> With this history out of the way, here are some advantages of
> gfxstream GLES over virgl:
>
> - gfxstream GLES actually has far fewer rendering artifacts than virgl
> since it's auto-generated and not hand-written.  Using a Gallium
> command stream is lossy (partly because the GLES spec is ambiguous and
> drivers are buggy), and we always had better dEQP runs on gfxstream
> GLES than on virgl (especially on closed-source drivers).
>
> - Better memory management: virgl makes heavy use of
> RESOURCE_CREATE_3D, which creates shadow buffers for every GL
> texture/buffer.  gfxstream just uses a single guest memory buffer per
> DRM instance for buffer/texture upload.  For example, gfxstream
> doesn't need the virtio-gpu shrinker as much [3] since it doesn't use
> as much guest memory.  I know there have been recent fixes for this in
> virgl, but I'm speaking from a design point of view.
>
> - Performance:  In 2020, a vendor ran the GPU emulation stress test
> comparing virgl and gfxstream GLES.  Here are some results:
>
> CVD: drm_virgl - 7.01 fps
> CVD: gfxstream - 43.68 fps
>
> I've seen similarly dramatic results with gfxbench/3D Mark on some
> automotive platforms.
>
> - Multi-threaded by design:  gfxstream GLES is multi-threaded by
> design.  Each guest GL thread gets its own host thread to decode
> commands.  virgl is single-threaded (before the asynchronous callback,
> which hasn't landed in QEMU yet).
>
> - Cross-platform:  Windows, MacOS, and Linux support (though with
> downstream QEMU patches unfortunately).  virgl is more a Linux thing.
>
> - Snapshots: Not possible with virgl.  We don't intend to upstream
> live migration snapshot support in the initial CL, but it's worth
> noting as something users like.
>
> gfxstream is the "native" solution for Android and is thus better
> optimized, just like virgl is the native solution for Linux guests.
>
> Re: Zink/ANGLE/venus versus ANGLE/gfxstream VK
>
> venus in many ways has design characteristics similar to gfxstream VK
> (auto-generated, multi-threaded).  gfxstream VK has better
> cross-platform support, shipping via the Android Emulator
> and Google Play Games [4] on PC.  venus is designed with open-source
> Linux platforms in mind, with Chromebook gaming as the initial use
> case [5].
>
> That leads to different design decisions, mostly centered around
> resource sharing/state-tracking.  Snapshots are also a goal for
> gfxstream VK, though not ready yet.
>
> Both venus and gfxstream are Google-sponsored.  There were meetings
> between Android and ChromeOS bigwigs about gfxstream VK/venus in 2019,
> and the outcome seemed to be "we'll share work where it makes sense,
> but there might not be a one-size-fits-all solution".
>
> Layering (e.g. Zink or ANGLE) that passes CTS is expected to take
> quite a while, especially for a cross-platform target such as the
> emulator.  It would be great to have gfxstream GLES support alone in
> the interim.
>
> [3]
> https://lore.kernel.org/lkml/20230305221011.1404672-1-dmitry.osipe...@collabora.com/
> [4] https://play.google.com/googleplaygames
> [5] https://www.xda-developers.com/how-to-run-steam-chromebook/
>
> >
> > Only AOSP can run with virgl perhaps? I am not familiar with Android
> > development... I guess it doesn't make use of Mesa, and thus no virgl
> > at all?
>
> Some AOSP targets (Cuttlefish) can use virgl along with gfxstream,
> just for testing's sake.  It's not hard to support both via crosvm, so
> we do it.
>
> https://source.android.com/docs/setup/create/cuttlefish-ref-gpu
>
> The Android Emulator (the most relevant use case here) does ship
> gfxstream when a developer uses Android Studio, though, and it plans
> to keep doing so in the future.
>
> >
> > >
> > > If (1) is resolved, I don't think it's actually too bad to add
> > > gfxstream support.  We just need an additional layer of dispatch
> > > between virglrenderer and gfxstream (thus, virtio-gpu-virgl2.c would
> > > be renamed virtio-gpu-context-types.c or something similar).  The QEMU
> > > command line will have to be modified to pass in the enabled context
> > > type (--context={virgl, venus, gfxstream}).  crosvm has been using the
> > > same trick.
> > >
> > > If (1) is resolved in v4, I would estimate adding gfxstream support
> > > would take at most 1-2 months for a single engineer.  I'm not saying
> > > gfxstream necessarily needs to be part of a v5 patch-stack, but given
> > > that this patch-stack has been around for over a year, it certainly
> > > could be.  We can at least design things in such a way that adding
> > > gfxstream later is easy.
> > >
> > > The hardest part is actually package management (Debian) for
> > > gfxstream, but that can be resolved.
> > >
> >
> > It looks like gfxstream is actually offering an API similar to
> > virglrenderer's (with a "pipe_" prefix).
>
> For gfxstream, my ideal solution would not use that "pipe_" API
> directly from QEMU (though vulkan-cereal will be packaged properly).
> Instead, I intend to package the "rutabaga_gfx_ffi.h"  proxy library
> over gfxstream [6]:
>
>
> https://chromium.googlesource.com/chromiumos/platform/crosvm/+/refs/heads/main/rutabaga_gfx/ffi/src/include/rutabaga_gfx_ffi.h
>
> The advantage of this approach is that one gets Wayland passthrough [7]
> (which is written in Rust) for Linux guests, along with gfxstream.
> The main issues are around Debian Rust packaging.
>
> As a rough sketch, here's what I think we might need:
>
> a) virtio-gpu-virgl-legacy.c for older versions of virglrenderer
> b) virtio-gpu-virgl2.c
> c) virtio-gpu-rutabaga.c or virtio-gpu-gfxstream.c (depending on rust
> packaging investigations)
>
> Though Wayland passthrough is a "nice to have", upstreaming gfxstream
> for the Android Emulator is the most important product goal.  So if
> Rust Debian packaging becomes too onerous (virtio-gpu-rutabaga.c), we
> can fall back to a simpler solution (virtio-gpu-gfxstream.c).
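>
> To make that split a bit more concrete, here is a minimal,
> self-contained C sketch of the kind of backend dispatch it implies.
> This is not existing QEMU code and every name in it is hypothetical;
> the point is only that each file above would fill in one entry of a
> small vtable, and virtio-gpu would route commands through it:
>
>   /* Hypothetical sketch only -- not existing QEMU code. */
>   #include <stddef.h>
>   #include <stdio.h>
>   #include <string.h>
>
>   typedef struct GpuBackend {
>       const char *name;                   /* "virgl", "gfxstream", ... */
>       void (*process_cmd)(const void *cmd);
>   } GpuBackend;
>
>   static void virgl_process_cmd(const void *cmd)
>   {
>       (void)cmd;
>       puts("virgl backend: forward to virglrenderer");
>   }
>
>   static void gfxstream_process_cmd(const void *cmd)
>   {
>       (void)cmd;
>       puts("gfxstream backend: forward to gfxstream/rutabaga");
>   }
>
>   static const GpuBackend backends[] = {
>       { "virgl",     virgl_process_cmd },
>       { "gfxstream", gfxstream_process_cmd },
>   };
>
>   /* Pick the backend named by a hypothetical --context=... option. */
>   static const GpuBackend *select_backend(const char *name)
>   {
>       for (size_t i = 0; i < sizeof(backends) / sizeof(backends[0]); i++) {
>           if (strcmp(backends[i].name, name) == 0) {
>               return &backends[i];
>           }
>       }
>       return NULL;
>   }
>
>   int main(void)
>   {
>       const GpuBackend *b = select_backend("gfxstream");
>       if (b) {
>           b->process_cmd(NULL);   /* would pass a virtio-gpu command */
>       }
>       return 0;
>   }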
>
> [6] it can also proxy virglrenderer calls, but I'll leave that decision
> to the virglrenderer maintainers
> [7] try out the feature here:
> https://crosvm.dev/book/devices/wayland.html
>
> > I suppose the guest needs to be
> > configured in a special way then (how? without mesa?).
>
> For AOSP, the androidboot.hardware.vulkan and androidboot.hardware.egl
> properties allow toggling the GLES and Vulkan implementations.  QEMU
> won't have to do anything special given the way the launchers are
> designed (there's an equivalent of a "virt-manager").
>
> There needs to be logic around context selection for Linux guests.
> QEMU needs a "--ctx_type={virgl, venus, drm, gfxstream}" argument.
> See crosvm for an example:
>
>
> https://chromium.googlesource.com/chromiumos/platform/crosvm/+/refs/heads/main/rutabaga_gfx/src/rutabaga_core.rs#910
>
> This argument is important for the upcoming Linux "DRM native" context
> types [8] as well.
>
> [8] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/21658
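>
> Purely as an illustration of what such an argument could boil down to,
> here is a small, self-contained C sketch of parsing a context-type list
> into a bitmask.  The option name and the enum values are hypothetical;
> real code would use the capset IDs from virtio_gpu.h (or the rutabaga
> FFI header) instead of this local enum:
>
>   /* Hypothetical sketch: "--ctx_type=virgl,venus" -> bitmask. */
>   #include <stdint.h>
>   #include <stdio.h>
>   #include <string.h>
>
>   enum {
>       CTX_VIRGL     = 1u << 0,
>       CTX_VENUS     = 1u << 1,
>       CTX_GFXSTREAM = 1u << 2,
>       CTX_DRM       = 1u << 3,
>   };
>
>   static uint32_t parse_ctx_types(const char *list)
>   {
>       static const struct { const char *name; uint32_t bit; } table[] = {
>           { "virgl", CTX_VIRGL },         { "venus", CTX_VENUS },
>           { "gfxstream", CTX_GFXSTREAM }, { "drm", CTX_DRM },
>       };
>       uint32_t mask = 0;
>       char buf[128];
>
>       snprintf(buf, sizeof(buf), "%s", list);
>       for (char *tok = strtok(buf, ",:"); tok; tok = strtok(NULL, ",:")) {
>           for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
>               if (strcmp(tok, table[i].name) == 0) {
>                   mask |= table[i].bit;
>               }
>           }
>       }
>       return mask;
>   }
>
>   int main(void)
>   {
>       /* e.g. the value of a hypothetical --ctx_type=virgl,venus option */
>       printf("mask: 0x%x\n", parse_ctx_types("virgl,venus"));
>       return 0;
>   }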
>
> >
> > > I'm not sure exactly how QEMU accelerated graphics are used in
> > > actual user-facing products currently, so I'm not sure what the
> > > standard is.
> > >
> > > What do QEMU maintainers and users think about these issues,
> > > particularly about the potential gfxstream addition in QEMU as a
> > > context type?  We are most interested in Android guests.
> >
> > It would be great if the Android emulator were more aligned with
> > upstream QEMU development!
>
> Awesome!  I envisage the initial gfxstream integration as just a first
> step.  With the graphics solution upstreamed, subsequent
> macOS/Windows-specific patches will start to make more sense.
>

Okay, I think the next step would actually be code so you can see our
vision.  I have a few questions that will help with my RFC:

1)  Packaging -- before or after?

gfxstream does not have a package in upstream Portage or Debian (though
there are downstream implementations).  Is it sufficient to have a
versioned release (i.e., a Git tag) without a package before the change can
be merged into QEMU?

Or is packaging required before merging into QEMU?

2) Optional Rust dependencies

To achieve seamless Wayland windowing with the same implementation as
crosvm, we'll need optional Rust dependencies.  There actually has been
interest in making Rust a non-optional dependency:

https://wiki.qemu.org/RustInQemu
https://lists.gnu.org/archive/html/qemu-devel/2021-09/msg04589.html

I actually only want Rust as an optional dependency on Linux, Windows, and
macOS -- where Rust support is quite good.  Is there any problem with using
a Rust library with a C API from QEMU?
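
For concreteness, here is a minimal, self-contained C sketch of the
pattern I have in mind: QEMU-side C code consuming a library that
exposes a C API, where the implementation happens to be written in
Rust.  All names here are hypothetical placeholders (the real entry
points are the ones in rutabaga_gfx_ffi.h), and the stub bodies exist
only so the sketch compiles standalone -- in the real build the symbols
would come from the Rust cdylib/staticlib:

  /* Hypothetical placeholders, not the real rutabaga_gfx_ffi.h API. */
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  struct example_renderer { uint64_t capset_mask; };

  /* In a real setup these prototypes come from a C header shipped with
   * the Rust crate, exported on the Rust side with #[no_mangle]. */
  static int32_t example_renderer_init(struct example_renderer **out,
                                       uint64_t capset_mask)
  {
      *out = calloc(1, sizeof(**out));
      if (!*out) {
          return -1;
      }
      (*out)->capset_mask = capset_mask;
      return 0;
  }

  static int32_t example_renderer_submit(struct example_renderer *r,
                                         const void *cmds, uint32_t size)
  {
      (void)r; (void)cmds;
      printf("submitted %u bytes of commands\n", (unsigned)size);
      return 0;
  }

  static void example_renderer_finish(struct example_renderer **r)
  {
      free(*r);
      *r = NULL;
  }

  int main(void)
  {
      struct example_renderer *r = NULL;
      uint8_t cmd[64] = { 0 };

      if (example_renderer_init(&r, /* capset_mask = */ 0) != 0) {
          return 1;
      }
      example_renderer_submit(r, cmd, sizeof(cmd));
      example_renderer_finish(&r);
      return 0;
  }

From QEMU's point of view, the idea is that this is just another C
library found at configure time (e.g. via pkg-config); the Rust
toolchain would only be needed to build that library, not QEMU itself.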

3) Rust "Build-Depends" in Debian

This is mostly a question for Debian packagers (CC: mjt@).

Any Rust package would likely depend on 10-30 additional packages (that's
just the way Rust works), but they are all in Debian stable right now.

https://packages.debian.org/stable/rust/

I noticed that when virgl was enabled, there were complaints about a ton
of UI libraries being pulled in.

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=813658

That necessitated the creation of the `qemu-system-gui` package for people
who don't need a UI.  I want to make gfxstream a suggested package of
qemu-system-gui, but that would potentially pull in the 10-30 additional
Rust build dependencies I mentioned.

Would the 10-30 Rust build dependencies be problematic?  I think QEMU
already has hundreds of build dependencies right now.

Thanks!


> >
> > thanks
> >
> > --
> > Marc-André Lureau
>
