Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-12-17 Thread David Stevens
> > > Of course only virtio drivers would try step (2), other drivers (when
> > > sharing buffers between intel gvt device and virtio-gpu for example)
> > > would go straight to (3).
> >
> > For virtio-gpu as it is today, it's not clear to me that they're
> > equivalent. As I read it, the virtio-gpu spec makes a distinction
> > between the guest memory and the host resource. If virtio-gpu is
> > communicating with non-virtio devices, then obviously you'd just be
> > working with guest memory. But if it's communicating with another
> > virtio device, then there are potentially distinct guest and host
> > buffers that could be used. The spec shouldn't leave any room for
> > ambiguity as to how this distinction is handled.
>
> Yep.  It should be the host-side buffer.

I agree that it should be the host-side buffer. I just want to make
sure that the meaning of 'import' is clear, and to establish the fact
that importing a buffer by uuid is not necessarily the same thing as
creating a new buffer in a different device from the same sglist (for
example, sharing a guest sglist might require more flushes).

-David



Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-12-16 Thread Gerd Hoffmann
  Hi,

> > Of course only virtio drivers would try step (2), other drivers (when
> > sharing buffers between intel gvt device and virtio-gpu for example)
> > would go straight to (3).
> 
> For virtio-gpu as it is today, it's not clear to me that they're
> equivalent. As I read it, the virtio-gpu spec makes a distinction
> between the guest memory and the host resource. If virtio-gpu is
> communicating with non-virtio devices, then obviously you'd just be
> working with guest memory. But if it's communicating with another
> virtio device, then there are potentially distinct guest and host
> buffers that could be used. The spec shouldn't leave any room for
> ambiguity as to how this distinction is handled.

Yep.  It should be the host-side buffer.  The whole point is to avoid
the round trip through the guest after all.  Or does someone see a
useful use case for the guest buffer?  If so we might have to add some
way to explicitly specify whether we want the guest or host buffer.

cheers,
  Gerd




Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-12-12 Thread David Stevens
> > > Without buffer sharing support the driver importing a virtio-gpu dma-buf
> > > can send the buffer scatter list to the host.  So both virtio-gpu and
> > > the other device would actually access the same guest pages, but they
> > > are not aware that the buffer is shared between devices.
> >
> > With the uuid approach, how should this case be handled? Should it be
> > equivalent to exporting and importing the buffer which was created
> > first? Should the spec say it's undefined behavior that might work as
> > expected but might not, depending on the device implementation? Does
> > the spec even need to say anything about it?
>
> Using the uuid is an optional optimization.  I'd expect the workflow to be
> roughly this:
>
>   (1) exporting driver exports a dma-buf as usual, additionally attaches
>   a uuid to it and notifies the host (using device-specific commands).
>   (2) importing driver will ask the host to use the buffer referenced by
>   the given uuid.
>   (3) if (2) fails for some reason use the dma-buf scatter list instead.
>
> Of course only virtio drivers would try step (2), other drivers (when
> sharing buffers between intel gvt device and virtio-gpu for example)
> would go straight to (3).

For virtio-gpu as it is today, it's not clear to me that they're
equivalent. As I read it, the virtio-gpu spec makes a distinction
between the guest memory and the host resource. If virtio-gpu is
communicating with non-virtio devices, then obviously you'd just be
working with guest memory. But if it's communicating with another
virtio device, then there are potentially distinct guest and host
buffers that could be used. The spec shouldn't leave any room for
ambiguity as to how this distinction is handled.

> > Not just buffers not backed by guest ram, but things like fences. I
> > would suggest the uuids represent 'exported resources' rather than
> > 'exported buffers'.
>
> Hmm, I can't see how this is useful.  Care to outline how you envision
> this to work in a typical use case?

Looking at the spec again, it seems like there's some more work that
would need to be done before this would be possible. But the use case
I was thinking of would be to export a fence from virtio-gpu and share
it with a virtio decoder, to set up a decode pipeline that doesn't
need to go back into the guest for synchronization. I'm fine dropping
this point for now, though, and revisiting it as a separate proposal.

-David



Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-12-12 Thread Gerd Hoffmann
On Thu, Dec 12, 2019 at 09:26:32PM +0900, David Stevens wrote:
> > > > Second I think it is a bad idea
> > > > from the security point of view.  When explicitly exporting buffers it
> > > > is easy to restrict access to the actual exports.
> > >
> > > Restricting access to actual exports could perhaps help catch bugs.
> > > However, I don't think it provides any security guarantees, since the
> > > guest can always just export every buffer before using it.
> >
> > Probably not on the guest/host boundary.
> >
> > It's important for security inside the guest though.  You don't want
> > process A being able to access process B private resources via buffer
> > sharing support, by guessing implicit buffer identifiers.
> 
> At least for the linux guest implementation, I wouldn't think the
> uuids would be exposed from the kernel. To me, it seems like something
> that should be handled internally by the virtio drivers.

That would be one possible use case, yes.  The exporting driver attaches
a uuid to the dma-buf.  The importing driver can see the attached uuid
and use it (if supported, otherwise run with the scatter list).  That
will be transparent to userspace, apps will just export/import dma-bufs
as usual and not even notice the uuid.
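
As a rough sketch of how that in-kernel plumbing could look (nothing
like this exists today; the ops wrapper and helpers below are made-up
names for illustration):

#include <linux/dma-buf.h>
#include <linux/uuid.h>

/* Hypothetical: a uuid-aware exporter wraps its dma-buf ops so that
 * importing virtio drivers can query the host-visible uuid. */
struct virtio_exp_dma_buf_ops {
        struct dma_buf_ops ops;
        int (*get_uuid)(struct dma_buf *buf, uuid_t *uuid);
};

/* Importer side: fills *uuid if the exporter is uuid-aware, returns
 * -ENODEV otherwise so the caller falls back to the scatter list. */
static int virtio_exp_get_uuid(struct dma_buf *buf, uuid_t *uuid)
{
        const struct virtio_exp_dma_buf_ops *ops;

        if (!is_virtio_exported_dma_buf(buf))   /* hypothetical check */
                return -ENODEV;
        ops = container_of(buf->ops, struct virtio_exp_dma_buf_ops, ops);
        return ops->get_uuid(buf, uuid);
}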

I can see other valid use cases though:  A wayland proxy could use
virtio-gpu buffer exports for shared memory and send the buffer uuid
to the host over some stream protocol (vsock, tcp, ...).  For that to
work we have to export the uuid to userspace, for example using a ioctl
on the dma-buf file handle.
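
Something along these lines, as a sketch (the ioctl name, number and
struct are hypothetical; no such ioctl exists today):

#include <sys/ioctl.h>
#include <linux/types.h>

/* Hypothetical dma-buf ioctl, for illustration only. */
struct dma_buf_uuid {
        __u8 uuid[16];
};
#define DMA_BUF_IOCTL_GET_UUID _IOR('b', 0x20, struct dma_buf_uuid)

/* The wayland proxy would call this on the exported dma-buf fd and
 * forward the 16 uuid bytes to the host over vsock/tcp. */
static int dmabuf_get_uuid(int dmabuf_fd, struct dma_buf_uuid *out)
{
        return ioctl(dmabuf_fd, DMA_BUF_IOCTL_GET_UUID, out);
}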

> If you use some other guest with untrusted
> userspace drivers, or if you're pulling the uuids out of the kernel to
> give to some non-virtio transport, then I can see it being a concern.

I strongly prefer a design where we don't have to worry about that
concern in the first place instead of discussing whether we should be
worried or not.

> > Without buffer sharing support the driver importing a virtio-gpu dma-buf
> > can send the buffer scatter list to the host.  So both virtio-gpu and
> > the other device would actually access the same guest pages, but they
> > are not aware that the buffer is shared between devices.
> 
> With the uuid approach, how should this case be handled? Should it be
> equivalent to exporting and importing the buffer which was created
> first? Should the spec say it's undefined behavior that might work as
> expected but might not, depending on the device implementation? Does
> the spec even need to say anything about it?

Using the uuid is an optional optimization.  I'd expect the workflow to be
roughly this:

  (1) exporting driver exports a dma-buf as usual, additionally attaches
  a uuid to it and notifies the host (using device-specific commands).
  (2) importing driver will ask the host to use the buffer referenced by
  the given uuid.
  (3) if (2) fails for some reason use the dma-buf scatter list instead.

Of course only virtio drivers would try step (2), other drivers (when
sharing buffers between intel gvt device and virtio-gpu for example)
would go straight to (3).
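
Driver-side that boils down to something like this sketch, where
send_import_by_uuid() and send_attach_sglist() stand in for whatever
device-specific commands the importing driver actually defines:

/* Try the uuid fast path (2), fall back to the sg-list (3). */
static int import_shared_buffer(struct my_dev *dev, struct dma_buf *buf,
                                const uuid_t *uuid)
{
        int ret = -ENODEV;

        if (uuid)                       /* step (2): host-side lookup */
                ret = send_import_by_uuid(dev, uuid);
        if (ret)                        /* step (3): sg-list fallback */
                ret = send_attach_sglist(dev, buf);
        return ret;
}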

> > With buffer sharing virtio-gpu would attach a uuid to the dma-buf, and
> > the importing driver can send the uuid (instead of the scatter list) to
> > the host.  So the device can simply lookup the buffer on the host side
> > and use it directly.  Another advantage is that this enables some more
> > use cases like sharing buffers between devices which are not backed by
> > guest ram.
> 
> Not just buffers not backed by guest ram, but things like fences. I
> would suggest the uuids represent 'exported resources' rather than
> 'exported buffers'.

Hmm, I can't see how this is useful.  Care to outline how you envision
this to work in a typical use case?

cheers,
  Gerd




Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-12-12 Thread David Stevens
> > > Second I think it is a bad idea
> > > from the security point of view.  When explicitly exporting buffers it
> > > is easy to restrict access to the actual exports.
> >
> > Restricting access to actual exports could perhaps help catch bugs.
> > However, I don't think it provides any security guarantees, since the
> > guest can always just export every buffer before using it.
>
> Probably not on the guest/host boundary.
>
> It's important for security inside the guest though.  You don't want
> process A being able to access process B private resources via buffer
> sharing support, by guessing implicit buffer identifiers.

At least for the linux guest implementation, I wouldn't think the
uuids would be exposed from the kernel. To me, it seems like something
that should be handled internally by the virtio drivers. Especially
since the 'export' process would be very much a virtio-specific
action, it's likely that it wouldn't fit nicely into existing
userspace software. If you use some other guest with untrusted
userspace drivers, or if you're pulling the uuids out of the kernel to
give to some non-virtio transport, then I can see it being a concern.

> > > Instead of using a dedicated buffer sharing device we can also use
> > > virtio-gpu (or any other driver which supports dma-buf exports) to
> > > manage buffers.

Ah, okay. I misunderstood the original statement. I read the sentence
as 'we can use virtio-gpu in place of the dedicated buffer sharing
device', rather than 'every device can manage its own buffers'. I can
agree with the second meaning.

> Without buffer sharing support the driver importing a virtio-gpu dma-buf
> can send the buffer scatter list to the host.  So both virtio-gpu and
> the other device would actually access the same guest pages, but they
> are not aware that the buffer is shared between devices.

With the uuid approach, how should this case be handled? Should it be
equivalent to exporting and importing the buffer which was created
first? Should the spec say it's undefined behavior that might work as
expected but might not, depending on the device implementation? Does
the spec even need to say anything about it?

> With buffer sharing virtio-gpu would attach a uuid to the dma-buf, and
> the importing driver can send the uuid (instead of the scatter list) to
> the host.  So the device can simply lookup the buffer on the host side
> and use it directly.  Another advantage is that this enables some more
> use cases like sharing buffers between devices which are not backed by
> guest ram.

Not just buffers not backed by guest ram, but things like fences. I
would suggest the uuids represent 'exported resources' rather than
'exported buffers'.

> Well, security-wise you want to have buffer identifiers which can't be
> easily guessed.  And guessing uuids is pretty much impossible due to
> the namespace being huge.

I guess this depends on what you're passing around within the guest.
If you're passing around the raw uuids, sure. But I would argue it's
better to pass around unforgeable identifiers (e.g. fds), and to
restrict the uuids to when talking directly to the virtio transport.
But I guess there are likely situations where that's not possible.

-David



Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-12-12 Thread Gerd Hoffmann
  Hi,

> > First the addressing is non-trivial, especially with the "transport
> > specific device address" in the tuple.
> 
> There is complexity here, but I think it would also be present in the
> buffer sharing device case. With a buffer sharing device, the same
> identifying information would need to be provided from the exporting
> driver to the buffer sharing driver, so the buffer sharing device
> would be able to identify the right device in the vmm.

No.  The idea is that the buffer sharing device will allocate and manage
the buffers (including identifiers), i.e. it will only export buffers,
never import.

> > Second I think it is a bad idea
> > from the security point of view.  When explicitly exporting buffers it
> > is easy to restrict access to the actual exports.
> 
> Restricting access to actual exports could perhaps help catch bugs.
> However, I don't think it provides any security guarantees, since the
> guest can always just export every buffer before using it.

Probably not on the guest/host boundary.

It's important for security inside the guest though.  You don't want
process A being able to access process B private resources via buffer
sharing support, by guessing implicit buffer identifiers.

With explicit buffer exports that opportunity doesn't exist in the first
place.  Anything not exported can't be accessed via buffer sharing,
period.  And to access the exported buffers you need to know the uuid,
which in turn allows the guest to implement any access restrictions it
wants.

> > Instead of using a dedicated buffer sharing device we can also use
> > virtio-gpu (or any other driver which supports dma-buf exports) to
> > manage buffers.
> 
> I don't think adding generic buffer management to virtio-gpu (or any
> specific device type) is a good idea,

There isn't much to add btw.  virtio-gpu has buffer management; buffers
are called "resources" in virtio-gpu terminology.  You can already
export them as dma-bufs (just landed in 5.5-rc1) and import them into
other drivers.

Without buffer sharing support the driver importing a virtio-gpu dma-buf
can send the buffer scatter list to the host.  So both virtio-gpu and
the other device would actually access the same guest pages, but they
are not aware that the buffer is shared between devices.

With buffer sharing virtio-gpu would attach a uuid to the dma-buf, and
the importing driver can send the uuid (instead of the scatter list) to
the host.  So the device can simply lookup the buffer on the host side
and use it directly.  Another advantage is that this enables some more
use cases like sharing buffers between devices which are not backed by
guest ram.

> since that device would then
> become a requirement for buffer sharing between unrelated devices.

No.  When we drop the buffer sharing device idea (which is quite
likely), then any device can create buffers.  If virtio-gpu is involved
anyway, for example because you want to show the images from the
virtio-camera device on the virtio-gpu display, it makes sense to use
virtio-gpu of course.  But any other device can create and export
buffers in a similar way.  Without a buffer sharing device there is no
central instance managing the buffers.  A virtio-video spec (video
encoder/decoder) is in discussion at the moment; it will probably get
resource management similar to virtio-gpu for the video frames, and it
will be able to export/import those buffers (probably not in the first
revision, but it is on the radar).

> > With no central instance (buffer sharing device) being there managing
> > the buffer identifiers I think using uuids as identifiers would be a
> > good idea, to avoid clashes.  Also good for security because it's pretty
> > much impossible to guess buffer identifiers then.
> 
> Using uuids to identify buffers would work. The fact that it provides
> a single way to refer to both guest and host allocated buffers is
> nice. And it could also directly apply to sharing resources other than
> buffers (e.g. fences). Although unless we're positing that there are
> different levels of trust within the guest, I don't think uuids really
> provide much security.

Well, security-wise you want to have buffer identifiers which can't be
easily guessed.  And guessing uuids is pretty much impossible due to
the namespace being huge.
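
(For scale: a version-4 uuid carries 122 random bits, so a blind guess
succeeds with probability 2^-122, roughly 1.9e-37 per attempt.)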

> If we're talking about uuids, they could also be used to simplify my
> proposed implicit addressing scheme. Each device could be assigned a
> uuid, which would simplify the shared resource identifier to
> (device-uuid, shmid, offset).

See above for the security aspects of implicit vs. explicit buffer
identifiers.

cheers,
  Gerd




Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-12-11 Thread David Stevens
> First the addressing is non-trivial, especially with the "transport
> specific device address" in the tuple.

There is complexity here, but I think it would also be present in the
buffer sharing device case. With a buffer sharing device, the same
identifying information would need to be provided from the exporting
driver to the buffer sharing driver, so the buffer sharing device
would be able to identify the right device in the vmm. And then in
both import cases, the buffer is just identified by some opaque bytes
that need to be given to a buffer manager in the vmm to resolve the
actual buffer.

> Second I think it is a bad idea
> from the security point of view.  When explicitly exporting buffers it
> is easy to restrict access to the actual exports.

Restricting access to actual exports could perhaps help catch bugs.
However, I don't think it provides any security guarantees, since the
guest can always just export every buffer before using it. Using
implicit addresses doesn't mean that the buffer import actually has to
be allowed - it can be thought of as fusing the buffer export and
buffer import operations into a single operation. The vmm can still
perform exactly the same security checks.

> Instead of using a dedicated buffer sharing device we can also use
> virtio-gpu (or any other driver which supports dma-buf exports) to
> manage buffers.

I don't think adding generic buffer management to virtio-gpu (or any
specific device type) is a good idea, since that device would then
become a requirement for buffer sharing between unrelated devices. For
example, it's easy to imagine a device with a virtio-camera and a
virtio-encoder (although such protocols don't exist today). It
wouldn't make sense to require a virtio-gpu device to allow those two
devices to share buffers.

> With no central instance (buffer sharing device) being there managing
> the buffer identifiers I think using uuids as identifiers would be a
> good idea, to avoid clashes.  Also good for security because it's pretty
> much impossible to guess buffer identifiers then.

Using uuids to identify buffers would work. The fact that it provides
a single way to refer to both guest and host allocated buffers is
nice. And it could also directly apply to sharing resources other than
buffers (e.g. fences). Although unless we're positing that there are
different levels of trust within the guest, I don't think uuids really
provide much security.

If we're talking about uuids, they could also be used to simplify my
proposed implicit addressing scheme. Each device could be assigned a
uuid, which would simplify the shared resource identifier to
(device-uuid, shmid, offset).
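
As a sketch, the two identifier layouts side by side (field names are
illustrative only; neither struct exists in any spec):

/* Fully qualified name, as in the original proposal. */
struct shared_rsc_id_explicit {
        __le32 transport_type;          /* pci, mmio, ccw, ... */
        __u8   dev_addr[16];            /* transport-specific address */
        __u8   shmid;
        __le64 offset;
};

/* Simplified variant with a per-device uuid. */
struct shared_rsc_id_uuid {
        __u8   device_uuid[16];
        __u8   shmid;
        __le64 offset;
};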

In my opinion, the implicit buffer addressing scheme is fairly similar
to the uuid proposal. As I see it, the difference is that one is
referring to resources as uuids in a global namespace, whereas the
other is referring to resources with fully qualified names. Beyond
that, the implementations would be fairly similar.

-David



Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-12-11 Thread Enrico Granata
On Wed, Dec 11, 2019 at 1:26 AM Gerd Hoffmann  wrote:
>
>   Hi,
>
> > None of the proposals directly address the use case of sharing host
> > allocated buffers between devices, but I think they can be extended to
> > support it. Host buffers can be identified by the following tuple:
> > (transport type enum, transport specific device address, shmid,
> > offset). I think this is sufficient even for host-allocated buffers
> > that aren't visible to the guest (e.g. protected memory, vram), since
> > they can still be given address space in some shared memory region,
> > even if those addresses are actually inaccessible to the guest. At
> > this point, the host buffer identifier can simply be passed in place
> > of the guest ram scatterlist with either proposed buffer sharing
> > mechanism.
>
> > I think the main question here is whether or not the complexity of
> > generic buffers and a buffer sharing device is worth it compared to
> > the more implicit definition of buffers.
>
> Here are two issues mixed up.  First is whether we'll define a
> buffer sharing device or not.  Second is how we are going to address
> buffers.
>
> I think defining (and addressing) buffers implicitly is a bad idea.
> First the addressing is non-trivial, especially with the "transport
> specific device address" in the tuple.  Second I think it is a bad idea
> from the security point of view.  When explicitly exporting buffers it
> is easy to restrict access to the actual exports.
>

Strong +1 to the above. There are definitely use cases of interest
where it makes sense to be able to attach security attributes to
buffers.
Having an explicit interface that can handle all of this, instead of
duplicating logic in several subsystems, seems a worthy endeavor to
me.

> Instead of using a dedicated buffer sharing device we can also use
> virtio-gpu (or any other driver which supports dma-buf exports) to
> manage buffers.  virtio-gpu would create an identifier when exporting a
> buffer (dma-buf exports inside the guest), attach the identifier to the
> dma-buf so other drivers importing the buffer can see and use it.  Maybe
> add an ioctl to query, so guest userspace can query the identifier too.
> Also send the identifier to the host so it can also be used on the host
> side to lookup and access the buffer.
>
> With no central instance (buffer sharing device) being there managing
> the buffer identifiers I think using uuids as identifiers would be a
> good idea, to avoid clashes.  Also good for security because it's pretty
> much impossible to guess buffer identifiers then.
>
> cheers,
>   Gerd
>
>



Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-11-26 Thread Gerd Hoffmann
> > I'm not convinced this is useful for audio ...
> > 
> > I basically see two modes of operation which are useful:
> > 
> >   (1) send audio data via virtqueue.
> >   (2) map host audio buffers into the guest address space.
> > 
> > The audio driver api (i.e. alsa) typically allows mmap()ing the audio
> > data buffers, so it is the host audio driver which handles the
> > allocation. 
> 
> Yes, in regular non-VM mode, it's the host driver which allocs the
> buffers.
> 
> My end goal is to be able to share physical SG pages from host to
> guests and HW (including DSP firmwares). 

Yep.  So the host driver would allocate the pages, in a way that the hw
can access them of course.  qemu (or another vmm) would mmap() those
buffer pages, using the usual sound app interface, which would be alsa
on linux.

Virtio got support for shared memory recently (it is in the version 1.2
draft); the virtio-pci transport uses a pci bar for the shared memory
regions.  qemu (or other vmms) can use that to map the buffer pages into
guest address space.
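
On the guest driver side the lookup would end up looking roughly like
this (this is the shape the helper later took in Linux, so treat the
exact names as an assumption):

#include <linux/virtio.h>
#include <linux/virtio_config.h>
#include <linux/io.h>

/* Find shared memory region <id> of a virtio device and map it, so
 * the guest driver can reach the host-allocated buffer pages. */
static void __iomem *map_shm_region(struct virtio_device *vdev, u8 id)
{
        struct virtio_shm_region region;

        if (!virtio_get_shm_region(vdev, &region, id))
                return NULL;
        return ioremap(region.addr, region.len);
}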

There are plans to use shared memory in virtio-gpu too, for pretty much the
same reasons.  Some kinds of gpu buffers must be allocated by the host
gpu driver, to make sure the host hardware can use the buffers as
intended.

> >  Letting the audio hardware dma from/to userspace-allocated
> > buffers is not possible[1], but we would need that to allow qemu (or
> > other vmms) to use guest-allocated buffers.
> 
> My misunderstanding here on how the various proposals being discussed
> all pass buffers between guests & host. I'm reading that some are
> passing buffers via userspace descriptors and this would not be
> workable for audio.

Yep, dma-buf based buffer passing doesn't help much for audio.

cheers,
  Gerd




Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-11-25 Thread Liam Girdwood
On Wed, 2019-11-20 at 10:53 +0100, Gerd Hoffmann wrote:
>   Hi,
> 
> > > > DSP FW has no access to userspace so we would need some
> > > > additional
> > > > API
> > > > on top of DMA_BUF_SET_NAME etc to get physical hardware pages ?
> > > 
> > > The dma-buf api currently can share guest memory sg-lists.
> > 
> > Ok, IIUC buffers can either be shared using the GPU proposed APIs
> > (above) or using the dma-buf API to share via userspace? My
> > preference
> > would be to use the more direct GPU APIs sending physical page
> > addresses from Guest to device driver. I guess this is your use
> > case
> > too?
> 
> I'm not convinced this is useful for audio ...
> 
> I basically see two modes of operation which are useful:
> 
>   (1) send audio data via virtqueue.
>   (2) map host audio buffers into the guest address space.
> 
> The audio driver api (i.e. alsa) typically allows mmap()ing the audio
> data buffers, so it is the host audio driver which handles the
> allocation. 

Yes, in regular non-VM mode, it's the host driver which allocs the
buffers.

My end goal is to be able to share physical SG pages from host to
guests and HW (including DSP firmwares). 

>  Letting the audio hardware dma from/to userspace-allocated
> buffers is not possible[1], but we would need that to allow qemu (or
> other vmms) to use guest-allocated buffers.

My misunderstanding here on how the various proposals being discussed
all pass buffers between guests & host. I'm reading that some are
passing buffers via userspace descriptors and this would not be
workable for audio.

> 
> cheers,
>   Gerd
> 
> [1] Disclaimer: It's been a while since I looked at alsa more closely, so
> there is a chance this might have changed without /me noticing.
> 

You're all good here from audio. Disclaimer: I'm new to virtio.

Liam 





Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-11-20 Thread Gerd Hoffmann
  Hi,

> > > DSP FW has no access to userspace so we would need some additional
> > > API
> > > on top of DMA_BUF_SET_NAME etc to get physical hardware pages ?
> > 
> > The dma-buf api currently can share guest memory sg-lists.
> 
> Ok, IIUC buffers can either be shared using the GPU proposed APIs
> (above) or using the dma-buf API to share via userspace? My preference
> would be to use the more direct GPU APIs sending physical page
> addresses from Guest to device driver. I guess this is your use case
> too?

I'm not convinced this is useful for audio ...

I basically see two modes of operation which are useful:

  (1) send audio data via virtqueue.
  (2) map host audio buffers into the guest address space.

The audio driver api (i.e. alsa) typically allows mmap()ing the audio
data buffers, so it is the host audio driver which handles the
allocation.  Letting the audio hardware dma from/to userspace-allocated
buffers is not possible[1], but we would need that to allow qemu (or
other vmms) to use guest-allocated buffers.
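
For reference, the vmm side of (2) would be the usual alsa-lib mmap
access pattern, roughly like this (interleaved format assumed, error
handling elided; mmap_begin may also shrink 'frames', which a real
loop has to handle):

#include <string.h>
#include <alsa/asoundlib.h>

/* Copy one chunk of guest audio into the mmap()ed device buffer. */
static void push_audio(snd_pcm_t *pcm, const void *guest_data,
                       snd_pcm_uframes_t frames, size_t frame_bytes)
{
        const snd_pcm_channel_area_t *areas;
        snd_pcm_uframes_t offset;

        snd_pcm_avail_update(pcm);              /* sync h/w pointer */
        snd_pcm_mmap_begin(pcm, &areas, &offset, &frames);
        memcpy((char *)areas[0].addr + offset * frame_bytes,
               guest_data, frames * frame_bytes);
        snd_pcm_mmap_commit(pcm, offset, frames);
}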

cheers,
  Gerd

[1] Disclaimer: It's been a while since I looked at alsa more closely, so
there is a chance this might have changed without /me noticing.




Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-11-19 Thread Gurchetan Singh
On Tue, Nov 19, 2019 at 7:31 AM Liam Girdwood
 wrote:
>
> On Tue, 2019-11-12 at 14:55 -0800, Gurchetan Singh wrote:
> > On Tue, Nov 12, 2019 at 5:56 AM Liam Girdwood
> >  wrote:
> > >
> > > On Mon, 2019-11-11 at 16:54 -0800, Gurchetan Singh wrote:
> > > > On Tue, Nov 5, 2019 at 2:55 AM Gerd Hoffmann 
> > > > wrote:
> > > > > Each buffer also has some properties to carry metadata, some
> > > > > fixed
> > > > > (id, size, application), but
> > > > > also allow free form (name = value, framebuffers would have
> > > > > width/height/stride/format for example).
> > > >
> > > > Sounds a lot like the recently added DMA_BUF_SET_NAME ioctls:
> > > >
> > > > https://patchwork.freedesktop.org/patch/310349/
> > > >
> > > > For virtio-wayland + virtio-vdec, the problem is sharing -- not
> > > > allocation.
> > > >
> > >
> > > Audio also needs to share buffers with firmware running on DSPs.
> > >
> > > > As the buffer reaches a kernel boundary, its properties devolve
> > > > into
> > > > [fd, size].  Userspace can typically handle sharing
> > > > metadata.  The
> > > > issue is the guest dma-buf fd doesn't mean anything on the host.
> > > >
> > > > One scenario could be:
> > > >
> > > > 1) Guest userspace (say, gralloc) allocates using virtio-
> > > > gpu.  When
> > > > allocating, we call uuidgen() and then pass that via
> > > > RESOURCE_CREATE
> > > > hypercall to the host.
> > > > 2) When exporting the dma-buf, we call DMA_BUF_SET_NAME (the
> > > > buffer
> > > > name will be "virtgpu-buffer-${UUID}").
> > > > 3) When importing, virtio-{vdec, video} reads the dma-buf name in
> > > > userspace, and calls fd-to-handle.  The name is sent to the host
> > > > via
> > > > a
> > > > hypercall, giving host virtio-{vdec, video} enough information to
> > > > identify the buffer.
> > > >
> > > > This solution is entirely userspace -- we can probably come up
> > > > with
> > > > something in kernel space [generate_random_uuid()] if need
> > > > be.  We
> > > > only need two universal IDs: {device ID, buffer ID}.
> > > >
> > >
> > > I need something where I can take a guest buffer and then convert
> > > it to
> > > physical scatter gather page list. I can then either pass the SG
> > > page
> > > list to the DSP firmware (for DMAC IP programming) or have the host
> > > driver program the DMAC directly using the page list (who programs
> > > DMAC
> > > depends on DSP architecture).
> >
> > So you need the HW address space from a guest allocation?
>
> Yes.
>
> >  Would your
> > allocation hypercalls use something like the virtio_gpu_mem_entry
> > (virtio_gpu.h) and the draft virtio_video_mem_entry (draft)?
>
> IIUC, this looks like generic SG buffer allocation?
>
> >
> > struct {
> > __le64 addr;
> > __le32 length;
> > __le32 padding;
> > };
> >
> > /* VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING */
> > struct virtio_gpu_resource_attach_backing {
> > struct virtio_gpu_ctrl_hdr hdr;
> > __le32 resource_id;
> > __le32 nr_entries;
> >   /* followed by nr_entries * struct virtio_gpu_mem_entry */
> > };
> >
> > struct virtio_video_mem_entry {
> > __le64 addr;
> > __le32 length;
> > __u8 padding[4];
> > };
> >
> > struct virtio_video_resource_attach_backing {
> > struct virtio_video_ctrl_hdr hdr;
> > __le32 resource_id;
> > __le32 nr_entries;
> > };
> >
> > >
> > > DSP FW has no access to userspace so we would need some additional
> > > API
> > > on top of DMA_BUF_SET_NAME etc to get physical hardware pages ?
> >
> > The dma-buf api currently can share guest memory sg-lists.
>
> Ok, IIUC buffers can either be shared using the GPU proposed APIs
> (above) or using the dma-buf API to share via userspace?

If we restrict ourselves to guest-sg lists only, then the current
dma-buf API is sufficient to share buffers.  For example, virtio-gpu
can allocate with the following hypercall (as it does now):

struct virtio_gpu_resource_attach_backing {
 struct virtio_gpu_ctrl_hdr hdr;
 __le32 resource_id;
 __le32 nr_entries;
   /* followed by nr_entries * struct virtio_gpu_mem_entry */
};

Then in the guest kernel, virtio-{video, snd} can get the sg-list via
dma_buf_map_attachment, and then have a hypercall of its own:

struct virtio_video_resource_import {
 struct virtio_video_ctrl_hdr hdr;
 __le32 video_resource_id;
 __le32 nr_entries;
   /* followed by nr_entries * struct virtio_gpu_mem_entry */
};

Then it can create a dma-buf on the host or get the HW addresses from the SG list.
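
The guest-kernel step in the middle would look roughly like this
(my_video_dev, add_mem_entry() and send_attach_backing() are
hypothetical stand-ins; the dma-buf calls are the real API):

#include <linux/dma-buf.h>
#include <linux/scatterlist.h>

/* Map the imported dma-buf and translate its sg-list into the
 * mem-entry array that the attach_backing hypercall expects. */
static int video_attach_dmabuf(struct my_video_dev *vd,
                               struct dma_buf *buf, u32 resource_id)
{
        struct dma_buf_attachment *at;
        struct sg_table *sgt;
        struct scatterlist *sg;
        int i;

        at = dma_buf_attach(buf, vd->dev);
        if (IS_ERR(at))
                return PTR_ERR(at);
        sgt = dma_buf_map_attachment(at, DMA_BIDIRECTIONAL);
        if (IS_ERR(sgt)) {
                dma_buf_detach(buf, at);
                return PTR_ERR(sgt);
        }
        for_each_sg(sgt->sgl, sg, sgt->nents, i)
                add_mem_entry(vd, resource_id,
                              sg_dma_address(sg), sg_dma_len(sg));
        return send_attach_backing(vd, resource_id);
}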

The complications come in from sharing host allocated buffers ... for
that we may need a method to translate from guest fds to universal
"virtualized" resource IDs.  I've heard talk about the need to
translate from guest fence fds to host fence fds as well.

> My preference
> would be to use the more direct GPU APIs sending physical page
> addresses from Guest to device driver. I guess this is your use case
> too?

For my use case, guest memory is sufficient, especially given the
direction towards 

Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-11-19 Thread Liam Girdwood
On Tue, 2019-11-12 at 14:55 -0800, Gurchetan Singh wrote:
> On Tue, Nov 12, 2019 at 5:56 AM Liam Girdwood
>  wrote:
> > 
> > On Mon, 2019-11-11 at 16:54 -0800, Gurchetan Singh wrote:
> > > On Tue, Nov 5, 2019 at 2:55 AM Gerd Hoffmann 
> > > wrote:
> > > > Each buffer also has some properties to carry metadata, some
> > > > fixed
> > > > (id, size, application), but
> > > > also allow free form (name = value, framebuffers would have
> > > > width/height/stride/format for example).
> > > 
> > > Sounds a lot like the recently added DMA_BUF_SET_NAME ioctls:
> > > 
> > > https://patchwork.freedesktop.org/patch/310349/
> > > 
> > > For virtio-wayland + virtio-vdec, the problem is sharing -- not
> > > allocation.
> > > 
> > 
> > Audio also needs to share buffers with firmware running on DSPs.
> > 
> > > As the buffer reaches a kernel boundary, its properties devolve
> > > into
> > > [fd, size].  Userspace can typically handle sharing
> > > metadata.  The
> > > issue is the guest dma-buf fd doesn't mean anything on the host.
> > > 
> > > One scenario could be:
> > > 
> > > 1) Guest userspace (say, gralloc) allocates using virtio-
> > > gpu.  When
> > > allocating, we call uuidgen() and then pass that via
> > > RESOURCE_CREATE
> > > hypercall to the host.
> > > 2) When exporting the dma-buf, we call DMA_BUF_SET_NAME (the
> > > buffer
> > > name will be "virtgpu-buffer-${UUID}").
> > > 3) When importing, virtio-{vdec, video} reads the dma-buf name in
> > > userspace, and calls fd-to-handle.  The name is sent to the host
> > > via
> > > a
> > > hypercall, giving host virtio-{vdec, video} enough information to
> > > identify the buffer.
> > > 
> > > This solution is entirely userspace -- we can probably come up
> > > with
> > > something in kernel space [generate_random_uuid()] if need
> > > be.  We
> > > only need two universal IDs: {device ID, buffer ID}.
> > > 
> > 
> > I need something where I can take a guest buffer and then convert
> > it to
> > physical scatter gather page list. I can then either pass the SG
> > page
> > list to the DSP firmware (for DMAC IP programming) or have the host
> > driver program the DMAC directly using the page list (who programs
> > DMAC
> > depends on DSP architecture).
> 
> So you need the HW address space from a guest allocation? 

Yes.

>  Would your
> allocation hypercalls use something like the virtio_gpu_mem_entry
> (virtio_gpu.h) and the draft virtio_video_mem_entry (draft)?

IIUC, this looks like generic SG buffer allocation?

> 
> struct {
> __le64 addr;
> __le32 length;
> __le32 padding;
> };
> 
> /* VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING */
> struct virtio_gpu_resource_attach_backing {
> struct virtio_gpu_ctrl_hdr hdr;
> __le32 resource_id;
> __le32 nr_entries;
>   /* followed by nr_entries * struct virtio_gpu_mem_entry */
> };
> 
> struct virtio_video_mem_entry {
> __le64 addr;
> __le32 length;
> __u8 padding[4];
> };
> 
> struct virtio_video_resource_attach_backing {
> struct virtio_video_ctrl_hdr hdr;
> __le32 resource_id;
> __le32 nr_entries;
> };
> 
> > 
> > DSP FW has no access to userspace so we would need some additional
> > API
> > on top of DMA_BUF_SET_NAME etc to get physical hardware pages ?
> 
> The dma-buf api currently can share guest memory sg-lists.

Ok, IIUC buffers can either be shared using the GPU proposed APIs
(above) or using the dma-buf API to share via userspace? My preference
would be to use the more direct GPU APIs sending physical page
addresses from Guest to device driver. I guess this is your use case
too?

Thanks

Liam

> 
> > 
> > Liam
> > 
> > 
> > 




Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-11-12 Thread Gurchetan Singh
On Tue, Nov 12, 2019 at 5:56 AM Liam Girdwood
 wrote:
>
> On Mon, 2019-11-11 at 16:54 -0800, Gurchetan Singh wrote:
> > On Tue, Nov 5, 2019 at 2:55 AM Gerd Hoffmann 
> > wrote:
> > > Each buffer also has some properties to carry metadata, some fixed
> > > (id, size, application), but
> > > also allow free form (name = value, framebuffers would have
> > > width/height/stride/format for example).
> >
> > Sounds a lot like the recently added DMA_BUF_SET_NAME ioctls:
> >
> > https://patchwork.freedesktop.org/patch/310349/
> >
> > For virtio-wayland + virtio-vdec, the problem is sharing -- not
> > allocation.
> >
>
> Audio also needs to share buffers with firmware running on DSPs.
>
> > As the buffer reaches a kernel boundary, its properties devolve into
> > [fd, size].  Userspace can typically handle sharing metadata.  The
> > issue is the guest dma-buf fd doesn't mean anything on the host.
> >
> > One scenario could be:
> >
> > 1) Guest userspace (say, gralloc) allocates using virtio-gpu.  When
> > allocating, we call uuidgen() and then pass that via RESOURCE_CREATE
> > hypercall to the host.
> > 2) When exporting the dma-buf, we call DMA_BUF_SET_NAME (the buffer
> > name will be "virtgpu-buffer-${UUID}").
> > 3) When importing, virtio-{vdec, video} reads the dma-buf name in
> > userspace, and calls fd-to-handle.  The name is sent to the host via
> > a
> > hypercall, giving host virtio-{vdec, video} enough information to
> > identify the buffer.
> >
> > This solution is entirely userspace -- we can probably come up with
> > something in kernel space [generate_random_uuid()] if need be.  We
> > only need two universal IDs: {device ID, buffer ID}.
> >
>
> I need something where I can take a guest buffer and then convert it to
> physical scatter gather page list. I can then either pass the SG page
> list to the DSP firmware (for DMAC IP programming) or have the host
> driver program the DMAC directly using the page list (who programs DMAC
> depends on DSP architecture).

So you need the HW address space from a guest allocation?  Would your
allocation hypercalls use something like the virtio_gpu_mem_entry
(virtio_gpu.h) and the draft virtio_video_mem_entry (draft)?

struct {
__le64 addr;
__le32 length;
__le32 padding;
};

/* VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING */
struct virtio_gpu_resource_attach_backing {
struct virtio_gpu_ctrl_hdr hdr;
__le32 resource_id;
__le32 nr_entries;
  /* followed by nr_entries * struct virtio_gpu_mem_entry */
};

struct virtio_video_mem_entry {
__le64 addr;
__le32 length;
__u8 padding[4];
};

struct virtio_video_resource_attach_backing {
struct virtio_video_ctrl_hdr hdr;
__le32 resource_id;
__le32 nr_entries;
};

>
> DSP FW has no access to userspace so we would need some additional API
> on top of DMA_BUF_SET_NAME etc to get physical hardware pages ?

The dma-buf api currently can share guest memory sg-lists.

>
> Liam
>
>
>



Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-11-12 Thread Liam Girdwood
On Mon, 2019-11-11 at 16:54 -0800, Gurchetan Singh wrote:
> On Tue, Nov 5, 2019 at 2:55 AM Gerd Hoffmann 
> wrote:
> > Each buffer also has some properties to carry metadata, some fixed
> > (id, size, application), but
> > also allow free form (name = value, framebuffers would have
> > width/height/stride/format for example).
> 
> Sounds a lot like the recently added DMA_BUF_SET_NAME ioctls:
> 
> https://patchwork.freedesktop.org/patch/310349/
> 
> For virtio-wayland + virtio-vdec, the problem is sharing -- not
> allocation.
> 

Audio also needs to share buffers with firmware running on DSPs.

> As the buffer reaches a kernel boundary, its properties devolve into
> [fd, size].  Userspace can typically handle sharing metadata.  The
> issue is the guest dma-buf fd doesn't mean anything on the host.
> 
> One scenario could be:
> 
> 1) Guest userspace (say, gralloc) allocates using virtio-gpu.  When
> allocating, we call uuidgen() and then pass that via RESOURCE_CREATE
> hypercall to the host.
> 2) When exporting the dma-buf, we call DMA_BUF_SET_NAME (the buffer
> name will be "virtgpu-buffer-${UUID}").
> 3) When importing, virtio-{vdec, video} reads the dma-buf name in
> userspace, and calls fd-to-handle.  The name is sent to the host via
> a
> hypercall, giving host virtio-{vdec, video} enough information to
> identify the buffer.
> 
> This solution is entirely userspace -- we can probably come up with
> something in kernel space [generate_random_uuid()] if need be.  We
> only need two universal IDs: {device ID, buffer ID}.
> 

I need something where I can take a guest buffer and then convert it to
a physical scatter-gather page list. I can then either pass the SG page
list to the DSP firmware (for DMAC IP programming) or have the host
driver program the DMAC directly using the page list (who programs DMAC
depends on DSP architecture).

DSP FW has no access to userspace so we would need some additional API
on top of DMA_BUF_SET_NAME etc. to get physical hardware pages?

Liam





Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-11-11 Thread Gurchetan Singh
On Tue, Nov 5, 2019 at 2:55 AM Gerd Hoffmann  wrote:
> Each buffer also has some properties to carry metadata, some fixed (id, size, 
> application), but
> also allow free form (name = value, framebuffers would have
> width/height/stride/format for example).

Sounds a lot like the recently added DMA_BUF_SET_NAME ioctls:

https://patchwork.freedesktop.org/patch/310349/

For virtio-wayland + virtio-vdec, the problem is sharing -- not allocation.

As the buffer reaches a kernel boundary, its properties devolve into
[fd, size].  Userspace can typically handle sharing metadata.  The
issue is the guest dma-buf fd doesn't mean anything on the host.

One scenario could be:

1) Guest userspace (say, gralloc) allocates using virtio-gpu.  When
allocating, we call uuidgen() and then pass that via RESOURCE_CREATE
hypercall to the host.
2) When exporting the dma-buf, we call DMA_BUF_SET_NAME (the buffer
name will be "virtgpu-buffer-${UUID}").
3) When importing, virtio-{vdec, video} reads the dma-buf name in
userspace, and calls fd-to-handle.  The name is sent to the host via a
hypercall, giving host virtio-{vdec, video} enough information to
identify the buffer.

This solution is entirely userspace -- we can probably come up with
something in kernel space [generate_random_uuid()] if need be.  We
only need two universal IDs: {device ID, buffer ID}.
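
Step 2) is a one-liner in userspace, as a sketch (libuuid assumed for
formatting; note the kernel caps dma-buf names at DMA_BUF_NAME_LEN, so
a real implementation may need a shorter encoding of the uuid):

#include <stdio.h>
#include <sys/ioctl.h>
#include <uuid/uuid.h>
#include <linux/dma-buf.h>

/* Tag an exported dma-buf fd with the uuid chosen at allocation. */
static int tag_buffer(int dmabuf_fd, uuid_t id)
{
        char str[37], name[64];

        uuid_unparse(id, str);
        snprintf(name, sizeof(name), "virtgpu-buffer-%s", str);
        return ioctl(dmabuf_fd, DMA_BUF_SET_NAME, name);
}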

> On Wed, Nov 6, 2019 at 2:28 PM Geoffrey McRae  wrote:
> The entire point of this for our purposes is due to the fact that we can
> not allocate the buffer, it's either provided by the GPU driver or
> DirectX. If virtio-gpu were to allocate the buffer we might as well
> forget
> all this and continue using the ivshmem device.

We have a similar problem with closed source drivers.  As @lfy
mentioned, it's possible to map memory directly into virtio-gpu's PCI
bar and it's actually a planned feature.  Would that work for your use
case?



Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-11-11 Thread Liam Girdwood
On Mon, 2019-11-11 at 12:04 +0900, David Stevens wrote:
> Having a centralized buffer allocator device is one way to deal with
> sharing buffers, since it gives a definitive buffer identifier that
> can be used by all drivers/devices to refer to the buffer. That being
> said, I think the device as proposed is insufficient, as such a
> centralized buffer allocator should probably be responsible for
> allocating all shared buffers, not just linear guest ram buffers.

This would work for audio. I need to be able to:

1) Allocate buffers on guests that I can pass as SG physical pages to
DMA engine (via privileged VM driver) for audio data. Can be any memory
as long as it's DMA-able.

2) Export hardware mailbox memory (in a real device PCI BAR) as RO to
each guest to give guests low latency information on each audio stream.
To support use cases like voice calls, gaming, system notifications and
general audio processing.

Liam




Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-11-07 Thread Gerd Hoffmann
On Thu, Nov 07, 2019 at 11:16:18AM +, Dr. David Alan Gilbert wrote:
> * Gerd Hoffmann (kra...@redhat.com) wrote:
> >   Hi,
> > 
> > > > This is not about host memory, buffers are in guest ram, everything else
> > > > would make sharing those buffers between drivers inside the guest (as
> > > > dma-buf) quite difficult.
> > > 
> > > Given it's just guest memory, can the guest just have a virt queue on
> > > which it places pointers to the memory it wants to share as elements in
> > > the queue?
> > 
> > Well, good question.  I'm actually wondering what the best approach is
> > to handle long-living, large buffers in virtio ...
> > 
> > virtio-blk (and others) are using the approach you describe.  They put a
> > pointer to the io request header, followed by pointer(s) to the io
> > buffers directly into the virtqueue.  That works great with storage for
> > example.  The queue entries are tagged as "in" or "out" (driver to
> > device or vice versa), so the virtio transport can set up dma mappings
> > accordingly or even transparently copy data if needed.
> > 
> > For long-living buffers where data can potentially flow both ways this
> > model doesn't fit very well though.  So what virtio-gpu does instead is
> > transferring the scatter list as virtio payload.  Does feel a bit
> > unclean as it doesn't really fit the virtio architecture.  It assumes
> > the host can directly access guest memory for example (which is usually
> > the case but explicitly not required by virtio).  It also requires
> > quirks in virtio-gpu to handle VIRTIO_F_IOMMU_PLATFORM properly, which
> > in theory should be handled fully transparently by the virtio-pci
> > transport.
> > 
> > We could instead have a "create-buffer" command which adds the buffer
> > pointers as elements to the virtqueue as you describe.  Then simply
> > continue using the buffer even after completing the "create-buffer"
> > command.  Which isn't exactly clean either.  It would likewise assume
> > direct access to guest memory, and it would likewise need quirks for
> > VIRTIO_F_IOMMU_PLATFORM as the virtio-pci transport tears down the dma
> > mappings for the virtqueue entries after command completion.
> > 
> > Comments, suggestions, ideas?
> 
> What about not completing the command while the device is using the
> memory?

Thought about that too, but I don't think this is a good idea for
buffers which exist for a long time.

Example #1:  A video decoder would set up a bunch of buffers and use
them round-robin, so they would exist until the video playback is
finished.

Example #2:  virtio-gpu creates a framebuffer for fbcon which exists
forever.  And virtio-gpu potentially needs lots of buffers.  With 3d
active there can be tons of objects.  Although they typically don't
stay around that long we would still need a pretty big virtqueue to
store them all I guess.

And it also doesn't fully match the virtio spirit; it still assumes
direct guest memory access.  Without direct guest memory access,
updates to the fbcon object would never reach the host, for example.
In case an iommu is present we might need additional dma map flushes
for updates happening after submitting the lingering "create-buffer"
command.

cheers,
  Gerd




Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-11-07 Thread Dr. David Alan Gilbert
* Gerd Hoffmann (kra...@redhat.com) wrote:
>   Hi,
> 
> > > This is not about host memory, buffers are in guest ram, everything else
> > > would make sharing those buffers between drivers inside the guest (as
> > > dma-buf) quite difficult.
> > 
> > Given it's just guest memory, can the guest just have a virt queue on
> > which it places pointers to the memory it wants to share as elements in
> > the queue?
> 
> Well, good question.  I'm actually wondering what the best approach is
> to handle long-living, large buffers in virtio ...
> 
> virtio-blk (and others) are using the approach you describe.  They put a
> pointer to the io request header, followed by pointer(s) to the io
> buffers directly into the virtqueue.  That works great with storage for
> example.  The queue entries are tagged as "in" or "out" (driver to
> device or vice versa), so the virtio transport can set up dma mappings
> accordingly or even transparently copy data if needed.
> 
> For long-living buffers where data can potentially flow both ways this
> model doesn't fit very well though.  So what virtio-gpu does instead is
> transferring the scatter list as virtio payload.  Does feel a bit
> unclean as it doesn't really fit the virtio architecture.  It assumes
> the host can directly access guest memory for example (which is usually
> the case but explicitly not required by virtio).  It also requires
> quirks in virtio-gpu to handle VIRTIO_F_IOMMU_PLATFORM properly, which
> in theory should be handled fully transparently by the virtio-pci
> transport.
> 
> We could instead have a "create-buffer" command which adds the buffer
> pointers as elements to the virtqueue as you describe.  Then simply
> continue using the buffer even after completing the "create-buffer"
> command.  Which isn't exactly clean either.  It would likewise assume
> direct access to guest memory, and it would likewise need quirks for
> VIRTIO_F_IOMMU_PLATFORM as the virtio-pci transport tears down the dma
> mappings for the virtqueue entries after command completion.
> 
> Comments, suggestions, ideas?

What about not completing the command while the device is using the
memory?

Dave

> cheers,
>   Gerd
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-11-07 Thread Gerd Hoffmann
  Hi,

> > This is not about host memory, buffers are in guest ram, everything else
> > would make sharing those buffers between drivers inside the guest (as
> > dma-buf) quite difficult.
> 
> Given it's just guest memory, can the guest just have a virt queue on
> which it places pointers to the memory it wants to share as elements in
> the queue?

Well, good question.  I'm actually wondering what the best approach is
to handle long-living, large buffers in virtio ...

virtio-blk (and others) are using the approach you describe.  They put a
pointer to the io request header, followed by pointer(s) to the io
buffers directly into the virtqueue.  That works great with storage for
example.  The queue entries are tagged as "in" or "out" (driver to
device or vice versa), so the virtio transport can set up dma mappings
accordingly or even transparently copy data if needed.

For long-living buffers where data can potentially flow both ways this
model doesn't fit very well though.  So what virtio-gpu does instead is
transferring the scatter list as virtio payload.  Does feel a bit
unclean as it doesn't really fit the virtio architecture.  It assumes
the host can directly access guest memory for example (which is usually
the case but explicitly not required by virtio).  It also requires
quirks in virtio-gpu to handle VIRTIO_F_IOMMU_PLATFORM properly, which
in theory should be handled fully transparently by the virtio-pci
transport.

We could instead have a "create-buffer" command which adds the buffer
pointers as elements to the virtqueue as you describe.  Then simply
continue using the buffer even after completing the "create-buffer"
command.  Which isn't exactly clean either.  It would likewise assume
direct access to guest memory, and it would likewise need quirks for
VIRTIO_F_IOMMU_PLATFORM as the virtio-pci transport tears down the dma
mappings for the virtqueue entries after command completion.
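
For concreteness, such a lingering command could be laid out like this
(entirely hypothetical, just to illustrate the "buffer pointers as
virtqueue elements" variant):

/* Hypothetical: the header goes in as an "out" element, followed by
 * nr_pages descriptors for the buffer pages themselves, which the
 * device keeps using after completing the command. */
struct virtio_create_buffer {
        __le32 buffer_id;
        __le32 nr_pages;
        /* descriptor chain continues with the guest buffer pages */
};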

Comments, suggestions, ideas?

cheers,
  Gerd




Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-11-06 Thread Dr. David Alan Gilbert
* Gerd Hoffmann (kra...@redhat.com) wrote:
>   Hi,
> 
> > > Reason is:  Meanwhile I'm wondering whether "just use virtio-gpu
> > > resources" is really a good answer for all the different use cases
> > > we have collected over time.  Maybe it is better to have a dedicated
> > > buffer sharing virtio device?  Here is the rough idea:
> > 
> > My concern is that buffer sharing isn't a "device".  It's a primitive
> > used in building other devices.  When someone asks for just buffer
> > sharing it's often because they do not intend to upstream a
> > specification for their device.
> 
> Well, "vsock" isn't a classic device (aka nic/storage/gpu/...) either.
> It is more a service to allow communication between host and guest.
> 
> That buffer sharing device falls into the same category.  Maybe it even
> makes sense to build that as a virtio-vsock extension.  Not sure how well
> that would work with the multi-transport architecture of vsock though.
> 
> > If this buffer sharing device's main purpose is for building proprietary
> > devices without contributing to VIRTIO, then I don't think it makes
> > sense for the VIRTIO community to assist in its development.
> 
> One possible use case would be building a wayland proxy, using vsock for
> the wayland protocol messages and virtio-buffers for the shared buffers
> (wayland client window content).
> 
> It could also simplify buffer sharing between devices (feed decoded
> video frames from decoder to gpu), although in that case it is less
> clear that it'll actually simplify things because virtio-gpu is
> involved anyway.
> 
> We can't prevent people from using that for proprietary stuff (same goes
> for vsock).
> 
> There is the option to use virtio-gpu instead, i.e. add support to qemu
> to export dma-buf handles for virtio-gpu resources to other processes
> (such as a wayland proxy).  That would provide very similar
> functionality (and thereby create the same loophole).
> 
> > VIRTIO recently gained a shared memory resource concept for access to
> > host memory.  It is being used in virtio-pmem and virtio-fs (and
> > virtio-gpu?).
> 
> virtio-gpu is in progress still unfortunately (all kinds of fixes for
> the qemu drm drivers and virtio-gpu guest driver refactoring kept me
> busy for quite a while ...).
> 
> > If another flavor of shared memory is required it can be
> > added to the spec and new VIRTIO device types can use it.  But it's not
> > clear why this should be its own device.
> 
> This is not about host memory, buffers are in guest ram, everything else
> would make sharing those buffers between drivers inside the guest (as
> dma-buf) quite difficult.

Given it's just guest memory, can the guest just have a virt queue on
which it places pointers to the memory it wants to share as elements in
the queue?

Dave

> > My question would be "what is the actual problem you are trying to
> > solve?".
> 
> Typical use cases center around sharing graphics data between guest
> and host.
> 
> cheers,
>   Gerd
> 
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK