Re: [RFC PATCH v2 0/5] vhost-user: Add SHMEM_MAP/UNMAP requests

2024-07-15 Thread David Stevens
On Fri, Jul 12, 2024 at 2:47 PM Michael S. Tsirkin  wrote:
>
> On Fri, Jul 12, 2024 at 11:06:49AM +0900, David Stevens wrote:
> > On Thu, Jul 11, 2024 at 7:56 PM Alyssa Ross  wrote:
> > >
> > > Adding David Stevens, who implemented SHMEM_MAP and SHMEM_UNMAP in
> > > crosvm a couple of years ago.
> > >
> > > David, I'd be particularly interested for your thoughts on the MEM_READ
> > > and MEM_WRITE commands, since as far as I know crosvm doesn't implement
> > > anything like that.  The discussion leading to those being added starts
> > > here:
> > >
> > > https://lore.kernel.org/qemu-devel/20240604185416.gb90...@fedora.redhat.com/
> > >
> > > It would be great if this could be standardised between QEMU and crosvm
> > > (and therefore have a clearer path toward being implemented in other 
> > > VMMs)!
> >
> > Setting aside vhost-user for a moment, the DAX example given by Stefan
> > won't work in crosvm today.
> >
> > Is universal access to virtio shared memory regions actually mandated
> > by the virtio spec? Copying from virtiofs DAX to virtiofs sharing
> > seems reasonable enough, but what about virtio-pmem to virtio-blk?
> > What about screenshotting a framebuffer in virtio-gpu shared memory to
> > virtio-scsi? I guess with some plumbing in the VMM, it's solvable in a
> > virtualized environment. But what about when you have real hardware
> > that speaks virtio involved? That's outside my wheelhouse, but it
> > doesn't seem like that would be easy to solve.
>
> Yes, it can work for physical devices if allowed by host configuration.
> E.g. VFIO supports that I think. Don't think VDPA does.

I'm sure it can work, but that sounds more like a SHOULD (or even a
MAY) than a MUST.

> > For what it's worth, my interpretation of the target scenario:
> >
> > > Other backends don't see these mappings. If the guest submits a vring
> > > descriptor referencing a mapping to another backend, then that backend
> > > won't be able to access this memory
> >
> > is that it's omitting how the implementation is reconciled with
> > section 2.10.1 of v1.3 of the virtio spec, which states that:
> >
> > > References into shared memory regions are represented as offsets from
> > > the beginning of the region instead of absolute memory addresses. Offsets
> > > are used both for references between structures stored within shared
> > > memory and for requests placed in virtqueues that refer to shared memory.
> >
> > My interpretation of that statement is that putting raw guest physical
> > addresses corresponding to virtio shared memory regions into a vring
> > is a driver spec violation.
> >
> > -David
>
> This really applies within device I think. Should be clarified ...

You mean that a virtio device can use absolute memory addresses for
other devices' shared memory regions, but it can't use absolute memory
addresses for its own shared memory regions? That's a rather strange
requirement. Or is the statement simply giving an addressing strategy
that device type specifications are free to ignore?

-David



Re: [RFC PATCH v2 0/5] vhost-user: Add SHMEM_MAP/UNMAP requests

2024-07-11 Thread David Stevens
On Thu, Jul 11, 2024 at 7:56 PM Alyssa Ross  wrote:
>
> Adding David Stevens, who implemented SHMEM_MAP and SHMEM_UNMAP in
> crosvm a couple of years ago.
>
> David, I'd be particularly interested for your thoughts on the MEM_READ
> and MEM_WRITE commands, since as far as I know crosvm doesn't implement
> anything like that.  The discussion leading to those being added starts
> here:
>
> https://lore.kernel.org/qemu-devel/20240604185416.gb90...@fedora.redhat.com/
>
> It would be great if this could be standardised between QEMU and crosvm
> (and therefore have a clearer path toward being implemented in other VMMs)!

Setting aside vhost-user for a moment, the DAX example given by Stefan
won't work in crosvm today.

Is universal access to virtio shared memory regions actually mandated
by the virtio spec? Copying from virtiofs DAX to virtiofs sharing
seems reasonable enough, but what about virtio-pmem to virtio-blk?
What about screenshotting a framebuffer in virtio-gpu shared memory to
virtio-scsi? I guess with some plumbing in the VMM, it's solvable in a
virtualized environment. But what about when you have real hardware
that speaks virtio involved? That's outside my wheelhouse, but it
doesn't seem like that would be easy to solve.

For what it's worth, my interpretation of the target scenario:

> Other backends don't see these mappings. If the guest submits a vring
> descriptor referencing a mapping to another backend, then that backend
> won't be able to access this memory

is that it's omitting how the implementation is reconciled with
section 2.10.1 of v1.3 of the virtio spec, which states that:

> References into shared memory regions are represented as offsets from
> the beginning of the region instead of absolute memory addresses. Offsets
> are used both for references between structures stored within shared
> memory and for requests placed in virtqueues that refer to shared memory.

My interpretation of that statement is that putting raw guest physical
addresses corresponding to virtio shared memory regions into a vring
is a driver spec violation.

-David



[virtio-comment] Request vote for the patch: Cross-device resource sharing

2020-04-27 Thread David Stevens
Request for a vote.

Fixes: https://github.com/oasis-tcs/virtio-spec/issues/76

Thanks,
David

On Fri, Mar 20, 2020 at 3:41 PM Gerd Hoffmann  wrote:
>
> On Thu, Mar 19, 2020 at 11:18:21AM +0900, David Stevens wrote:
> > Hi all,
> >
> > This is the next iteration of patches for adding support for sharing
> > resources between different virtio devices. The corresponding Linux
> > implementation is [1].
> >
> > In addition to these patches, the most recent virtio-video patchset
> > includes a patch for importing objects into that device [2].
>
> Looks good to me.
>
> So, open a github issue to kick the TC vote process and get this merged?
> (see virtio-spec/.github/PULL_REQUEST_TEMPLATE.md).
>
> cheers,
>   Gerd
>



Re: [PATCH v4 0/2] Cross-device resource sharing

2020-03-22 Thread David Stevens
Thanks for taking a look at this. I've opened a github issue.

Fixes: https://github.com/oasis-tcs/virtio-spec/issues/76

Thanks,
David

On Fri, Mar 20, 2020 at 3:41 PM Gerd Hoffmann  wrote:
>
> On Thu, Mar 19, 2020 at 11:18:21AM +0900, David Stevens wrote:
> > Hi all,
> >
> > This is the next iteration of patches for adding support for sharing
> > resources between different virtio devices. The corresponding Linux
> > implementation is [1].
> >
> > In addition to these patches, the most recent virtio-video patchset
> > includes a patch for importing objects into that device [2].
>
> Looks good to me.
>
> So, open a github issue to kick the TC vote process and get this merged?
> (see virtio-spec/.github/PULL_REQUEST_TEMPLATE.md).
>
> cheers,
>   Gerd
>



[PATCH v4 1/2] content: define what an exported object is

2020-03-18 Thread David Stevens
Define a mechanism for sharing objects between different virtio
devices.

Signed-off-by: David Stevens 
---
 content.tex  | 12 
 introduction.tex |  4 
 2 files changed, 16 insertions(+)

diff --git a/content.tex b/content.tex
index b1ea9b9..c8a367b 100644
--- a/content.tex
+++ b/content.tex
@@ -373,6 +373,18 @@ \section{Driver Notifications} \label{sec:Virtqueues / 
Driver notifications}
 
 \input{shared-mem.tex}
 
+\section{Exporting Objects}\label{sec:Basic Facilities of a Virtio Device / Exporting Objects}
+
+When an object created by one virtio device needs to be
+shared with a seperate virtio device, the first device can
+export the object by generating a UUID which can then
+be passed to the second device to identify the object.
+
+What constitutes an object, how to export objects, and
+how to import objects are defined by the individual device
+types. It is RECOMMENDED that devices generate version 4
+UUIDs as specified by \hyperref[intro:rfc4122]{[RFC4122]}.
+
 \chapter{General Initialization And Device Operation}\label{sec:General 
Initialization And Device Operation}
 
 We start with an overview of device initialization, then expand on the
diff --git a/introduction.tex b/introduction.tex
index 40f16f8..fc2aa50 100644
--- a/introduction.tex
+++ b/introduction.tex
@@ -40,6 +40,10 @@ \section{Normative References}\label{sec:Normative 
References}
\phantomsection\label{intro:rfc2119}\textbf{[RFC2119]} &
 Bradner S., ``Key words for use in RFCs to Indicate Requirement
 Levels'', BCP 14, RFC 2119, March 1997. 
\newline\url{http://www.ietf.org/rfc/rfc2119.txt}\\
+   \phantomsection\label{intro:rfc4122}\textbf{[RFC4122]} &
+Leach, P., Mealling, M., and R. Salz, ``A Universally Unique
+IDentifier (UUID) URN Namespace'', RFC 4122, DOI 10.17487/RFC4122,
+July 2005. \newline\url{http://www.ietf.org/rfc/rfc4122.txt}\\
\phantomsection\label{intro:S390 PoP}\textbf{[S390 PoP]} & 
z/Architecture Principles of Operation, IBM Publication SA22-7832, 
\newline\url{http://publibfi.boulder.ibm.com/epubs/pdf/dz9zr009.pdf}, and any 
future revisions\\
\phantomsection\label{intro:S390 Common I/O}\textbf{[S390 Common I/O]} 
& ESA/390 Common I/O-Device and Self-Description, IBM Publication SA22-7204, 
\newline\url{http://publibfp.dhe.ibm.com/cgi-bin/bookmgr/BOOKS/dz9ar501/CCONTENTS},
 and any future revisions\\
\phantomsection\label{intro:PCI}\textbf{[PCI]} &
-- 
2.25.1.481.gfbce0eb801-goog




[PATCH v4 2/2] virtio-gpu: add the ability to export resources

2020-03-18 Thread David Stevens
Signed-off-by: David Stevens 
---
 virtio-gpu.tex | 30 ++
 1 file changed, 30 insertions(+)

diff --git a/virtio-gpu.tex b/virtio-gpu.tex
index af4ca61..e75aafa 100644
--- a/virtio-gpu.tex
+++ b/virtio-gpu.tex
@@ -35,6 +35,8 @@ \subsection{Feature bits}\label{sec:Device Types / GPU Device 
/ Feature bits}
 \begin{description}
 \item[VIRTIO_GPU_F_VIRGL (0)] virgl 3D mode is supported.
 \item[VIRTIO_GPU_F_EDID  (1)] EDID is supported.
+\item[VIRTIO_GPU_F_RESOURCE_UUID (2)] assigning resources UUIDs for export
+  to other virtio devices is supported.
 \end{description}
 
 \subsection{Device configuration layout}\label{sec:Device Types / GPU Device / 
Device configuration layout}
@@ -181,6 +183,7 @@ \subsubsection{Device Operation: Request 
header}\label{sec:Device Types / GPU De
 VIRTIO_GPU_CMD_GET_CAPSET_INFO,
 VIRTIO_GPU_CMD_GET_CAPSET,
 VIRTIO_GPU_CMD_GET_EDID,
+VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID,
 
 /* cursor commands */
 VIRTIO_GPU_CMD_UPDATE_CURSOR = 0x0300,
@@ -192,6 +195,7 @@ \subsubsection{Device Operation: Request 
header}\label{sec:Device Types / GPU De
 VIRTIO_GPU_RESP_OK_CAPSET_INFO,
 VIRTIO_GPU_RESP_OK_CAPSET,
 VIRTIO_GPU_RESP_OK_EDID,
+VIRTIO_GPU_RESP_OK_RESOURCE_UUID,
 
 /* error responses */
 VIRTIO_GPU_RESP_ERR_UNSPEC = 0x1200,
@@ -454,6 +458,32 @@ \subsubsection{Device Operation: 
controlq}\label{sec:Device Types / GPU Device /
 This detaches any backing pages from a resource, to be used in case of
 guest swapping or object destruction.
 
+\item[VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID] Creates an exported object from
+  a resource. Request data is \field{struct
+virtio_gpu_resource_assign_uuid}.  Response type is
+  VIRTIO_GPU_RESP_OK_RESOURCE_UUID, response data is \field{struct
+virtio_gpu_resp_resource_uuid}. Support is optional and negotiated
+using the VIRTIO_GPU_F_RESOURCE_UUID feature flag.
+
+\begin{lstlisting}
+struct virtio_gpu_resource_assign_uuid {
+struct virtio_gpu_ctrl_hdr hdr;
+le32 resource_id;
+le32 padding;
+};
+
+struct virtio_gpu_resp_resource_uuid {
+struct virtio_gpu_ctrl_hdr hdr;
+u8 uuid[16];
+};
+\end{lstlisting}
+
+The response contains a UUID which identifies the exported object created from
+the host private resource. Note that if the resource has an attached backing,
+modifications made to the host private resource through the exported object by
+other devices are not visible in the attached backing until they are transferred
+into the backing.
+
 \end{description}
 
 \subsubsection{Device Operation: cursorq}\label{sec:Device Types / GPU Device 
/ Device Operation / Device Operation: cursorq}
-- 
2.25.1.481.gfbce0eb801-goog




[PATCH v4 0/2] Cross-device resource sharing

2020-03-18 Thread David Stevens
Hi all,

This is the next iteration of patches for adding support for sharing
resources between different virtio devices. The corresponding Linux
implementation is [1].

In addition to these patches, the most recent virtio-video patchset
includes a patch for importing objects into that device [2].

[1] https://markmail.org/thread/bfy6uk4q4v4cus7h
[2] https://markmail.org/message/wxdne5re7aaugbjg

Changes v3 -> v4:
 - Add virtio-gpu feature bit
 - Move virtio-gpu assign uuid command into 2d command group
 - Rename virtio-gpu uuid response

David Stevens (2):
  content: define what an exported object is
  virtio-gpu: add the ability to export resources

 content.tex  | 12 
 introduction.tex |  4 
 virtio-gpu.tex   | 29 +
 3 files changed, 45 insertions(+)

-- 
2.25.1.481.gfbce0eb801-goog




[RFC PATCH v3 0/2] Cross-device resource sharing

2020-02-06 Thread David Stevens
Hi all,

This is the next iteration of patches for adding support for sharing
resources between different virtio devices. In addition to these
patches, the most recent virtio-video patchset includes a patch for
importing objects into that device [1].

[1] https://markmail.org/message/wxdne5re7aaugbjg

Changes v2 -> v3:
* Replace references to guest/host
* Remove unnecessary paragraph and field in exported object section
* Recommend RFC4122 version 4 UUIDs
* Represent UUID as u8[16] instead of le64 pair

David Stevens (2):
  content: define what an exported object is
  virtio-gpu: add the ability to export resources

 content.tex  | 12 
 introduction.tex |  4 
 virtio-gpu.tex   | 29 +
 3 files changed, 45 insertions(+)

-- 
2.25.0.341.g760bfbb309-goog




[RFC PATCH v3 1/2] content: define what an exported object is

2020-02-06 Thread David Stevens
Define a mechanism for sharing objects between different virtio
devices.

Signed-off-by: David Stevens 
---
 content.tex  | 12 
 introduction.tex |  4 
 2 files changed, 16 insertions(+)

diff --git a/content.tex b/content.tex
index b1ea9b9..ad3723c 100644
--- a/content.tex
+++ b/content.tex
@@ -373,6 +373,18 @@ \section{Driver Notifications} \label{sec:Virtqueues / 
Driver notifications}
 
 \input{shared-mem.tex}
 
+\section{Exporting Objects}\label{sec:Basic Facilities of a Virtio Device / Exporting Objects}
+
+When an object created by one virtio device needs to be
+shared with a separate virtio device, the first device can
+export the object by generating a UUID which can then
+be passed to the second device to identify the object.
+
+What constitutes an object, how to export objects, and
+how to import objects are defined by the individual device
+types. It is RECOMMENDED that devices generate version 4
+UUIDs as specified by \hyperref[intro:rfc4122]{[RFC4122]}.
+
 \chapter{General Initialization And Device Operation}\label{sec:General 
Initialization And Device Operation}
 
 We start with an overview of device initialization, then expand on the
diff --git a/introduction.tex b/introduction.tex
index 40f16f8..fc2aa50 100644
--- a/introduction.tex
+++ b/introduction.tex
@@ -40,6 +40,10 @@ \section{Normative References}\label{sec:Normative 
References}
\phantomsection\label{intro:rfc2119}\textbf{[RFC2119]} &
 Bradner S., ``Key words for use in RFCs to Indicate Requirement
 Levels'', BCP 14, RFC 2119, March 1997. 
\newline\url{http://www.ietf.org/rfc/rfc2119.txt}\\
+   \phantomsection\label{intro:rfc4122}\textbf{[RFC4122]} &
+Leach, P., Mealling, M., and R. Salz, ``A Universally Unique
+IDentifier (UUID) URN Namespace'', RFC 4122, DOI 10.17487/RFC4122,
+July 2005. \newline\url{http://www.ietf.org/rfc/rfc4122.txt}\\
\phantomsection\label{intro:S390 PoP}\textbf{[S390 PoP]} & 
z/Architecture Principles of Operation, IBM Publication SA22-7832, 
\newline\url{http://publibfi.boulder.ibm.com/epubs/pdf/dz9zr009.pdf}, and any 
future revisions\\
\phantomsection\label{intro:S390 Common I/O}\textbf{[S390 Common I/O]} 
& ESA/390 Common I/O-Device and Self-Description, IBM Publication SA22-7204, 
\newline\url{http://publibfp.dhe.ibm.com/cgi-bin/bookmgr/BOOKS/dz9ar501/CCONTENTS},
 and any future revisions\\
\phantomsection\label{intro:PCI}\textbf{[PCI]} &
-- 
2.25.0.341.g760bfbb309-goog




[RFC PATCH v3 2/2] virtio-gpu: add the ability to export resources

2020-02-06 Thread David Stevens
Signed-off-by: David Stevens 
---
 virtio-gpu.tex | 29 +
 1 file changed, 29 insertions(+)

diff --git a/virtio-gpu.tex b/virtio-gpu.tex
index af4ca61..e950ad3 100644
--- a/virtio-gpu.tex
+++ b/virtio-gpu.tex
@@ -186,12 +186,16 @@ \subsubsection{Device Operation: Request 
header}\label{sec:Device Types / GPU De
 VIRTIO_GPU_CMD_UPDATE_CURSOR = 0x0300,
 VIRTIO_GPU_CMD_MOVE_CURSOR,
 
+/* misc commands */
+VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID = 0x0400,
+
 /* success responses */
 VIRTIO_GPU_RESP_OK_NODATA = 0x1100,
 VIRTIO_GPU_RESP_OK_DISPLAY_INFO,
 VIRTIO_GPU_RESP_OK_CAPSET_INFO,
 VIRTIO_GPU_RESP_OK_CAPSET,
 VIRTIO_GPU_RESP_OK_EDID,
+VIRTIO_GPU_RESP_OK_RESOURCE_ASSIGN_UUID,
 
 /* error responses */
 VIRTIO_GPU_RESP_ERR_UNSPEC = 0x1200,
@@ -454,6 +458,31 @@ \subsubsection{Device Operation: 
controlq}\label{sec:Device Types / GPU Device /
 This detaches any backing pages from a resource, to be used in case of
 guest swapping or object destruction.
 
+\item[VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID] Creates an exported object from
+  a resource. Request data is \field{struct
+virtio_gpu_resource_assign_uuid}.  Response type is
+  VIRTIO_GPU_RESP_OK_RESOURCE_ASSIGN_UUID, response data is \field{struct
+virtio_gpu_resp_resource_assign_uuid}.
+
+\begin{lstlisting}
+struct virtio_gpu_resource_assign_uuid {
+struct virtio_gpu_ctrl_hdr hdr;
+le32 resource_id;
+le32 padding;
+};
+
+struct virtio_gpu_resp_resource_assign_uuid {
+struct virtio_gpu_ctrl_hdr hdr;
+u8 uuid[16];
+};
+\end{lstlisting}
+
+The response contains a UUID which identifies the exported object created from
+the host private resource. Note that if the resource has an attached backing,
+modifications made to the host private resource through the exported object by
+other devices are not visible in the attached backing until they are transferred
+into the backing.
+
 \end{description}
 
 \subsubsection{Device Operation: cursorq}\label{sec:Device Types / GPU Device 
/ Device Operation / Device Operation: cursorq}
-- 
2.25.0.341.g760bfbb309-goog




Re: [virtio-dev][RFC PATCH v1 2/2] virtio-gpu: add the ability to export resources

2020-01-22 Thread David Stevens
> ok but how is this then used? will there be more commands to pass
> this uuid to another device?

This is intended to be used with the virtio video device being
discussed here https://markmail.org/thread/ingyqlps4rbcuazh. I don't
have a specific patch for how that will work, but it will likely be an
extension to VIRTIO_VIDEO_T_RESOURCE_CREATE.

> > +The response contains a uuid which identifies the exported object created from
> > +the host private resource.
>
> Are the uuids as specified in rfc-4122? I guess we need to link to that spec 
> then

I don't think it's terribly important to specify how the uuids are
generated, as long as they're actually unique. That being said, I'm
not opposed to defining them as rfc-4122 version 4 uuids. Although if
we do that, it should go in the patch that defines what exported
objects and uuids are in the context of virtio, not in the virtio-gpu
section.

> > Note that if the resource has an attached backing,
> > +modifications made to the host private resource through the exported object by
> > +other devices are not visible in the attached backing until they are transferred
> > +into the backing.
> > +
>
> s/host/device/?

The virtio-gpu device is based around "resources private to the host", to
quote the existing specification. I think consistency with that
language is important.

-David



Re: [virtio-dev][RFC PATCH v1 1/2] content: define what an exported object is

2020-01-22 Thread David Stevens
> > +When an object created by one virtio device needs to be
> > +shared with a separate virtio device, the first device can
> > +export the object by generating a \field{uuid}
>
> This is a field where?

It's a property of the exported object, but I guess it doesn't really
correspond to any concrete field. I'll remove \field.

> > which the
> > +guest can pass to the second device to identify the object.
>
> s/guest/Driver/ ?

The uuid can be passed to a second device controlled by a different
driver, so I think 'driver' by itself is ambiguous. I'm using guest as
a shorthand for 'system which includes the drivers and software which
sits on top of the drivers', and that meaning does seem to be
compatible with language in the rest of the spec. If that shorthand
isn't acceptable, I can rewrite the sentence passively as '... a uuid
which can then be passed to a second device ...'.

> Also - what are guest and host here?

There are a number of places in the virtio spec where 'guest' is used
to refer to the system where drivers run and where 'host' is used to
refer to the system where devices run. I guess those terms aren't
concretely defined within the spec, but they do seem to have a well
understood meaning. Or is the guest/host language discouraged in new
additions to the spec?

-David



[virtio-dev][RFC PATCH v1 2/2] virtio-gpu: add the ability to export resources

2020-01-21 Thread David Stevens
Signed-off-by: David Stevens 
---
 virtio-gpu.tex | 30 ++
 1 file changed, 30 insertions(+)

diff --git a/virtio-gpu.tex b/virtio-gpu.tex
index af4ca61..a1f0210 100644
--- a/virtio-gpu.tex
+++ b/virtio-gpu.tex
@@ -186,12 +186,16 @@ \subsubsection{Device Operation: Request
header}\label{sec:Device Types / GPU De
 VIRTIO_GPU_CMD_UPDATE_CURSOR = 0x0300,
 VIRTIO_GPU_CMD_MOVE_CURSOR,

+/* misc commands */
+VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID = 0x0400,
+
 /* success responses */
 VIRTIO_GPU_RESP_OK_NODATA = 0x1100,
 VIRTIO_GPU_RESP_OK_DISPLAY_INFO,
 VIRTIO_GPU_RESP_OK_CAPSET_INFO,
 VIRTIO_GPU_RESP_OK_CAPSET,
 VIRTIO_GPU_RESP_OK_EDID,
+VIRTIO_GPU_RESP_OK_RESOURCE_ASSIGN_UUID,

 /* error responses */
 VIRTIO_GPU_RESP_ERR_UNSPEC = 0x1200,
@@ -454,6 +458,32 @@ \subsubsection{Device Operation:
controlq}\label{sec:Device Types / GPU Device /
 This detaches any backing pages from a resource, to be used in case of
 guest swapping or object destruction.

+\item[VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID] Creates an exported object from
+  a resource. Request data is \field{struct
+virtio_gpu_resource_assign_uuid}.  Response type is
+  VIRTIO_GPU_RESP_OK_RESOURCE_ASSIGN_UUID, response data is \field{struct
+virtio_gpu_resp_resource_assign_uuid}.
+
+\begin{lstlisting}
+struct virtio_gpu_resource_assign_uuid {
+struct virtio_gpu_ctrl_hdr hdr;
+le32 resource_id;
+le32 padding;
+};
+
+struct virtio_gpu_resp_resource_assign_uuid {
+struct virtio_gpu_ctrl_hdr hdr;
+le64 uuid_low;
+le64 uuid_high;
+};
+\end{lstlisting}
+
+The response contains a uuid which identifies the exported object created from
+the host private resource. Note that if the resource has an attached backing,
+modifications made to the host private resource through the exported object by
+other devices are not visible in the attached backing until they are transferred
+into the backing.
+
 \end{description}

 \subsubsection{Device Operation: cursorq}\label{sec:Device Types /
GPU Device / Device Operation / Device Operation: cursorq}
-- 
2.25.0.341.g760bfbb309-goog



[virtio-dev][RFC PATCH v2 0/2] Cross-device resource sharing

2020-01-21 Thread David Stevens
This RFC comes from the recent discussion on buffer sharing [1],
specifically about the need to share resources between different
virtio devices. For a concrete use case, this can be used to share
virtio-gpu allocated buffers with the recently proposed virtio video
device [2], without the need to memcpy decoded frames through the
guest.

[1] https://markmail.org/thread/jeh5xjjxvylyrbur
[2] https://markmail.org/thread/yb25fim2dqfuktgf

Changes v1 -> v2:
Rename exported resource to exported object
Rename the virtio-gpu export command

David Stevens (2):
  content: define what an exported object is
  virtio-gpu: add the ability to export resources

 content.tex| 18 ++
 virtio-gpu.tex | 30 ++
 2 files changed, 48 insertions(+)



[virtio-dev][RFC PATCH v1 1/2] content: define what an exported object is

2020-01-21 Thread David Stevens
Define a mechanism for sharing objects between different virtio
devices.

Signed-off-by: David Stevens 
---
 content.tex | 18 ++
 1 file changed, 18 insertions(+)

diff --git a/content.tex b/content.tex
index b1ea9b9..6c6dd59 100644
--- a/content.tex
+++ b/content.tex
@@ -373,6 +373,24 @@ \section{Driver Notifications}
\label{sec:Virtqueues / Driver notifications}

 \input{shared-mem.tex}

+\section{Exporting Objects}\label{sec:Basic Facilities of a Virtio Device / Exporting Objects}
+
+When an object created by one virtio device needs to be
+shared with a separate virtio device, the first device can
+export the object by generating a \field{uuid} which the
+guest can pass to the second device to identify the object.
+
+What constitutes an object, how to export objects, and
+how to import objects are defined by the individual device
+types. The generation method of a \field{uuid} is dependent
+upon the implementation of the exporting device.
+
+Whether a particular exported object can be imported into
+a device is dependent upon the implementations of the exporting
+and importing devices. Generally speaking, the guest should
+have some knowledge of the host configuration before trying to
+use exported objects.
+
 \chapter{General Initialization And Device
Operation}\label{sec:General Initialization And Device Operation}

 We start with an overview of device initialization, then expand on the
-- 
2.25.0.341.g760bfbb309-goog



Re: [virtio-dev][RFC PATCH v1 1/2] content: define what exporting a resource is

2020-01-09 Thread David Stevens
> > that isn't just a leaf node of the spec. I think it's better to define
> > 'resource' as a top level concept for virtio devices, even if the specifics
> > of what a 'resource' is are defined by individual device types.
>
> Your patch doesn't define what a resource is though.  It only refers to
> something it calls 'resource' ...

Reading it again, what I wrote was a little ambiguous. Stating things
more clearly, the top level defines an 'exported resource' as a
'resource' associated with a uuid for the purpose of sharing between
different virtio devices. It leaves the definition of what constitutes
a 'resource' to individual device types. Perhaps it would be better to
use 'object' or something instead of 'resource', to avoid the
collision with virtio-gpu resources.

-David



Re: [virtio-dev][RFC PATCH v1 1/2] content: define what exporting a resource is

2020-01-08 Thread David Stevens
>
> Hmm, I'd suggest to move the whole thing into the virtio-gpu section.
> There is no such thing as a "resource" in general virtio context ...
>

If this is moved into the virtio-gpu section, then any device type that
imports resources will have to refer to something defined by the GPU device
type. This would make the GPU device type a sort of special device type
that isn't just a leaf node of the spec. I think it's better to define
'resource' as a top level concept for virtio devices, even if the specifics
of what a 'resource' is are defined by individual device types.

-David


Re: [virtio-dev][RFC PATCH v1 2/2] virtio-gpu: add the ability to export resources

2020-01-08 Thread David Stevens
> Is there a specific reason why you want the host pick the uuid?  I would
> let the guest define the uuid, i.e. move the uuid fields to
> virtio_gpu_export_resource and scratch virtio_gpu_resp_export_resource.

Sending the uuid in the original request doesn't really buy us
anything, at least in terms of asynchronicity. The guest would still
need to wait for the response to arrive before it could safely pass
the uuid to any other virtio devices, to prevent a race where the
import fails because it is processed before virtio-gpu processes the
export. Perhaps this wouldn't be the case if we supported sharing
fences between virtio devices, but even then, fences are more of a
thing for the operation of a pipeline, not for the setup of a
pipeline.

At that point, I think it's just a matter of aesthetics. I lean
slightly towards returning the uuid from the host, since that rules
out any implementation with the aforementioned race. That being said,
if there are any specific reasons or preferences to assigning the uuid
from the guest, I can switch to that direction.

-David



[virtio-dev][RFC PATCH v1 1/2] content: define what exporting a resource is

2020-01-08 Thread David Stevens
Define a mechanism for sharing resources between different virtio
devices.

Signed-off-by: David Stevens 
---
 content.tex | 18 ++
 1 file changed, 18 insertions(+)

diff --git a/content.tex b/content.tex
index b1ea9b9..73bd28e 100644
--- a/content.tex
+++ b/content.tex
@@ -373,6 +373,24 @@ \section{Driver Notifications}
\label{sec:Virtqueues / Driver notifications}

 \input{shared-mem.tex}

+\section{Exporting Resources}\label{sec:Basic Facilities of a Virtio Device / Exporting Resources}
+
+When a resource created by one virtio device needs to be
+shared with a separate virtio device, the first device can
+export the resource by generating a \field{uuid} which the
+guest can pass to the second device to identify the resource.
+
+What constitutes a resource, how to export resources, and
+how to import resources are defined by the individual device
+types. The generation method of a \field{uuid} is dependent
+upon the implementation of the exporting device.
+
+Whether a particular exported resource can be imported into
+a device is dependent upon the implementations of the exporting
+and importing devices. Generally speaking, the guest should
+have some knowledge of the host configuration before trying to
+use exported resources.
+
 \chapter{General Initialization And Device
Operation}\label{sec:General Initialization And Device Operation}

 We start with an overview of device initialization, then expand on the
-- 
2.24.1.735.g03f4e72817-goog



[virtio-dev][RFC PATCH v1 0/2] Cross-device resource sharing

2020-01-08 Thread David Stevens
This RFC comes from the recent discussion on buffer sharing [1],
specifically about the need to share resources between different
virtio devices. For a concrete use case, this can be used to share
virtio-gpu allocated buffers with the recently proposed virtio video
device [2], without the need to memcpy decoded frames through the
guest.

[1] https://markmail.org/thread/jeh5xjjxvylyrbur
[2] https://markmail.org/thread/yb25fim2dqfuktgf



[virtio-dev][RFC PATCH v1 2/2] virtio-gpu: add the ability to export resources

2020-01-08 Thread David Stevens
Signed-off-by: David Stevens 
---
 virtio-gpu.tex | 29 +
 1 file changed, 29 insertions(+)

diff --git a/virtio-gpu.tex b/virtio-gpu.tex
index af4ca61..522f478 100644
--- a/virtio-gpu.tex
+++ b/virtio-gpu.tex
@@ -186,12 +186,16 @@ \subsubsection{Device Operation: Request
header}\label{sec:Device Types / GPU De
 VIRTIO_GPU_CMD_UPDATE_CURSOR = 0x0300,
 VIRTIO_GPU_CMD_MOVE_CURSOR,

+/* misc commands */
+VIRTIO_GPU_CMD_EXPORT_RESOURCE = 0x0400,
+
 /* success responses */
 VIRTIO_GPU_RESP_OK_NODATA = 0x1100,
 VIRTIO_GPU_RESP_OK_DISPLAY_INFO,
 VIRTIO_GPU_RESP_OK_CAPSET_INFO,
 VIRTIO_GPU_RESP_OK_CAPSET,
 VIRTIO_GPU_RESP_OK_EDID,
+VIRTIO_GPU_RESP_OK_EXPORT_RESOURCE,

 /* error responses */
 VIRTIO_GPU_RESP_ERR_UNSPEC = 0x1200,
@@ -454,6 +458,31 @@ \subsubsection{Device Operation:
controlq}\label{sec:Device Types / GPU Device /
 This detaches any backing pages from a resource, to be used in case of
 guest swapping or object destruction.

+\item[VIRTIO_GPU_CMD_EXPORT_RESOURCE] Exports a resource for use by other
+  virtio devices. Request data is \field{struct
+virtio_gpu_export_resource}.  Response type is
+  VIRTIO_GPU_RESP_OK_EXPORT_RESOURCE, response data is \field{struct
+virtio_gpu_resp_export_resource}.
+
+\begin{lstlisting}
+struct virtio_gpu_export_resource {
+struct virtio_gpu_ctrl_hdr hdr;
+le32 resource_id;
+le32 padding;
+};
+
+struct virtio_gpu_resp_export_resource {
+struct virtio_gpu_ctrl_hdr hdr;
+le64 uuid_low;
+le64 uuid_high;
+};
+\end{lstlisting}
+
+The response contains a uuid which identifies the host private resource to
+other virtio devices. Note that if the resource has an attached backing,
+modifications made to an exported resource by other devices are not visible
+in the attached backing until they are transferred into the backing.
+
 \end{description}

 \subsubsection{Device Operation: cursorq}\label{sec:Device Types / GPU Device / Device Operation / Device Operation: cursorq}
-- 
2.24.1.735.g03f4e72817-goog



Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-12-17 Thread David Stevens
> > > Of course only virtio drivers would try step (2), other drivers (when
> > > sharing buffers between intel gvt device and virtio-gpu for example)
> > > would go straight to (3).
> >
> > For virtio-gpu as it is today, it's not clear to me that they're
> > equivalent. As I read it, the virtio-gpu spec makes a distinction
> > between the guest memory and the host resource. If virtio-gpu is
> > communicating with non-virtio devices, then obviously you'd just be
> > working with guest memory. But if it's communicating with another
> > virtio device, then there are potentially distinct guest and host
> > buffers that could be used. The spec shouldn't leave any room for
> > ambiguity as to how this distinction is handled.
>
> Yep.  It should be the host side buffer.

I agree that it should be the host side buffer. I just want to make
sure that the meaning of 'import' is clear, and to establish the fact
that importing a buffer by uuid is not necessarily the same thing as
creating a new buffer in a different device from the same sglist (for
example, sharing a guest sglist might require more flushes).

-David



Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-12-12 Thread David Stevens
> > > Without buffer sharing support the driver importing a virtio-gpu dma-buf
> > > can send the buffer scatter list to the host.  So both virtio-gpu and
> > > the other device would actually access the same guest pages, but they
> > > are not aware that the buffer is shared between devices.
> >
> > With the uuid approach, how should this case be handled? Should it be
> > equivalent to exporting and importing the buffer which was created
> > first? Should the spec say it's undefined behavior that might work as
> > expected but might not, depending on the device implementation? Does
> > the spec even need to say anything about it?
>
> Using the uuid is an optional optimization.  I'd expect the workflow to be
> roughly this:
>
>   (1) exporting driver exports a dma-buf as usual, additionally attaches
>   a uuid to it and notifies the host (using device-specific commands).
>   (2) importing driver will ask the host to use the buffer referenced by
>   the given uuid.
>   (3) if (2) fails for some reason use the dma-buf scatter list instead.
>
> Of course only virtio drivers would try step (2), other drivers (when
> sharing buffers between intel gvt device and virtio-gpu for example)
> would go straight to (3).

For virtio-gpu as it is today, it's not clear to me that they're
equivalent. As I read it, the virtio-gpu spec makes a distinction
between the guest memory and the host resource. If virtio-gpu is
communicating with non-virtio devices, then obviously you'd just be
working with guest memory. But if it's communicating with another
virtio device, then there are potentially distinct guest and host
buffers that could be used. The spec shouldn't leave any room for
ambiguity as to how this distinction is handled.
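
For reference, steps (2) and (3) of the quoted workflow might be sketched
on the driver side roughly as follows. All names here are hypothetical,
and the stubs merely stand in for device-specific import commands:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct scatter_entry {
    uint64_t addr; /* guest-physical address of the range */
    uint32_t len;  /* length of the range in bytes */
};

/* Stubs standing in for device-specific commands (assumptions only). */
static int host_import_by_uuid(const uint8_t uuid[16])
{
    (void)uuid;
    return -1; /* pretend the host cannot resolve this uuid */
}

static int host_import_by_sglist(const struct scatter_entry *sgl, size_t n)
{
    (void)sgl;
    (void)n;
    return 0;
}

/* Step (2): ask the host to use the buffer referenced by the uuid.
 * Step (3): if that fails, fall back to the dma-buf scatter list. */
int import_shared_buffer(const uint8_t uuid[16],
                         const struct scatter_entry *sgl, size_t sgl_len)
{
    if (host_import_by_uuid(uuid) == 0)
        return 0;
    return host_import_by_sglist(sgl, sgl_len);
}
```

The point of the sketch is only that the uuid path is an optimization
layered on top of the scatterlist path, so non-virtio exporters fall
through to step (3) naturally.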

> > Not just buffers not backed by guest ram, but things like fences. I
> > would suggest the uuids represent 'exported resources' rather than
> > 'exported buffers'.
>
> Hmm, I can't see how this is useful.  Care to outline how you envision
> this to work in a typical use case?

Looking at the spec again, it seems like there's some more work that
would need to be done before this would be possible. But the use case
I was thinking of would be to export a fence from virtio-gpu and share
it with a virtio decoder, to set up a decode pipeline that doesn't
need to go back into the guest for synchronization. I'm fine dropping
this point for now, though, and revisiting it as a separate proposal.

-David



Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-12-12 Thread David Stevens
> > > Second I think it is a bad idea
> > > from the security point of view.  When explicitly exporting buffers it
> > > is easy to restrict access to the actual exports.
> >
> > Restricting access to actual exports could perhaps help catch bugs.
> > However, I don't think it provides any security guarantees, since the
> > guest can always just export every buffer before using it.
>
> Probably not on the guest/host boundary.
>
> It's important for security inside the guest though.  You don't want
> process A being able to access process B private resources via buffer
> sharing support, by guessing implicit buffer identifiers.

At least for the linux guest implementation, I wouldn't think the
uuids would be exposed from the kernel. To me, it seems like something
that should be handled internally by the virtio drivers. Especially
since the 'export' process would be very much a virtio-specific
action, so it's likely that it wouldn't fit nicely into existing
userspace software. If you use some other guest with untrusted
userspace drivers, or if you're pulling the uuids out of the kernel to
give to some non-virtio transport, then I can see it being a concern.

> > > Instead of using a dedicated buffer sharing device we can also use
> > > virtio-gpu (or any other driver which supports dma-buf exports) to
> > > manage buffers.

Ah, okay. I misunderstood the original statement. I read the sentence
as 'we can use virtio-gpu in place of the dedicated buffer sharing
device', rather than 'every device can manage its own buffers'. I can
agree with the second meaning.

> Without buffer sharing support the driver importing a virtio-gpu dma-buf
> can send the buffer scatter list to the host.  So both virtio-gpu and
> the other device would actually access the same guest pages, but they
> are not aware that the buffer is shared between devices.

With the uuid approach, how should this case be handled? Should it be
equivalent to exporting and importing the buffer which was created
first? Should the spec say it's undefined behavior that might work as
expected but might not, depending on the device implementation? Does
the spec even need to say anything about it?

> With buffer sharing virtio-gpu would attach a uuid to the dma-buf, and
> the importing driver can send the uuid (instead of the scatter list) to
> the host.  So the device can simply lookup the buffer on the host side
> and use it directly.  Another advantage is that this enables some more
> use cases like sharing buffers between devices which are not backed by
> guest ram.

Not just buffers not backed by guest ram, but things like fences. I
would suggest the uuids represent 'exported resources' rather than
'exported buffers'.

> Well, security-wise you want to have buffer identifiers which can't be
> easily guessed.  And guessing uuid is pretty much impossible due to
> the namespace being huge.

I guess this depends on what you're passing around within the guest.
If you're passing around the raw uuids, sure. But I would argue it's
better to pass around unforgeable identifiers (e.g. fds), and to
restrict the uuids to when talking directly to the virtio transport.
But I guess there are likely situations where that's not possible.

-David



Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-12-11 Thread David Stevens
> First the addressing is non-trivial, especially with the "transport
> specific device address" in the tuple.

There is complexity here, but I think it would also be present in the
buffer sharing device case. With a buffer sharing device, the same
identifying information would need to be provided from the exporting
driver to the buffer sharing driver, so the buffer sharing device
would be able to identify the right device in the vmm. And then in
both import cases, the buffer is just identified by some opaque bytes
that need to be given to a buffer manager in the vmm to resolve the
actual buffer.

> Second I think it is a bad idea
> from the security point of view.  When explicitly exporting buffers it
> is easy to restrict access to the actual exports.

Restricting access to actual exports could perhaps help catch bugs.
However, I don't think it provides any security guarantees, since the
guest can always just export every buffer before using it. Using
implicit addresses doesn't mean that the buffer import actually has to
be allowed - it can be thought of as fusing the buffer export and
buffer import operations into a single operation. The vmm can still
perform exactly the same security checks.

> Instead of using a dedicated buffer sharing device we can also use
> virtio-gpu (or any other driver which supports dma-buf exports) to
> manage buffers.

I don't think adding generic buffer management to virtio-gpu (or any
specific device type) is a good idea, since that device would then
become a requirement for buffer sharing between unrelated devices. For
example, it's easy to imagine a device with a virtio-camera and a
virtio-encoder (although such protocols don't exist today). It
wouldn't make sense to require a virtio-gpu device to allow those two
devices to share buffers.

> With no central instance (buffer sharing device) being there managing
> the buffer identifiers I think using uuids as identifiers would be a
> good idea, to avoid clashes.  Also good for security because it's pretty
> much impossible to guess buffer identifiers then.

Using uuids to identify buffers would work. The fact that it provides
a single way to refer to both guest and host allocated buffers is
nice. And it could also directly apply to sharing resources other than
buffers (e.g. fences). Although unless we're positing that there are
different levels of trust within the guest, I don't think uuids really
provide much security.

If we're talking about uuids, they could also be used to simplify my
proposed implicit addressing scheme. Each device could be assigned a
uuid, which would simplify the shared resource identifier to
(device-uuid, shmid, offset).
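
As a rough illustration, that simplified identifier could be encoded as a
fixed-size struct like the one below. Field names and widths are purely
hypothetical, not taken from any spec:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical wire encoding of the (device-uuid, shmid, offset)
 * identifier described above. */
struct virtio_shared_resource_id {
    uint8_t  device_uuid[16]; /* uuid assigned to the exporting device */
    uint32_t shmid;           /* shared memory region within that device */
    uint32_t padding;         /* keeps offset naturally aligned */
    uint64_t offset;          /* byte offset into the region */
};
```

A fixed 32-byte layout like this could then be passed in place of a guest
scatterlist in a device-specific command.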

In my opinion, the implicit buffer addressing scheme is fairly similar
to the uuid proposal. As I see it, the difference is that one is
referring to resources as uuids in a global namespace, whereas the
other is referring to resources with fully qualified names. Beyond
that, the implementations would be fairly similar.

-David



Re: guest / host buffer sharing ...

2019-12-10 Thread David Stevens
There are three issues being discussed here that aren't being clearly
delineated: sharing guest allocated memory with the host, sharing host
allocated memory with the guest, and sharing buffers between devices.

Right now, guest allocated memory can be shared with the host through
the virtqueues or by passing a scatterlist in the virtio payload (i.e.
what virtio-gpu does). Host memory can be shared with the guest using
the new shared memory regions. As far as I can tell, these mechanisms
should be sufficient for sharing memory between the guest and host and
vice versa.

Where things are not sufficient is when we talk about sharing buffers
between devices. For starters, a 'buffer' as we're discussing here is
not something that is currently defined by the virtio spec. The
original proposal defines a buffer as a generic object that is guest
ram+id+metadata, and is created by a special buffer allocation device.
With this approach, buffers can be cleanly shared between devices.

An alternative that Tomasz suggested would be to avoid defining a
generic buffer object, and instead state that the scatterlist which
virtio-gpu currently uses is the 'correct' way for virtio device
protocols to define buffers. With this approach, sharing buffers
between devices potentially requires the host to map different
scatterlists back to a consistent representation of a buffer.

None of the proposals directly address the use case of sharing host
allocated buffers between devices, but I think they can be extended to
support it. Host buffers can be identified by the following tuple:
(transport type enum, transport specific device address, shmid,
offset). I think this is sufficient even for host-allocated buffers
that aren't visible to the guest (e.g. protected memory, vram), since
they can still be given address space in some shared memory region,
even if those addresses are actually inaccessible to the guest. At
this point, the host buffer identifier can simply be passed in place
of the guest ram scatterlist with either proposed buffer sharing
mechanism.
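
One possible encoding of that tuple, purely as a sketch (none of the enum
values or field widths below come from the spec):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical encoding of the host-buffer tuple described above:
 * (transport type enum, transport-specific device address, shmid,
 * offset). */
enum virtio_transport_type {
    VIRTIO_TRANSPORT_PCI  = 0,
    VIRTIO_TRANSPORT_MMIO = 1,
    VIRTIO_TRANSPORT_CCW  = 2,
};

struct virtio_host_buffer_id {
    uint32_t transport;      /* enum virtio_transport_type */
    uint32_t shmid;          /* shared memory region id on the device */
    uint64_t device_address; /* transport-specific, e.g. a PCI BDF */
    uint64_t offset;         /* offset within the shared memory region */
};
```

The transport-specific address field is the awkward part: each transport
would need to define what uniquely identifies a device instance.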

I think the main question here is whether or not the complexity of
generic buffers and a buffer sharing device is worth it compared to
the more implicit definition of buffers. Personally, I lean towards
the implicit definition of buffers, since a buffer sharing device
brings a lot of complexity and there aren't any clear clients of the
buffer metadata feature.

Cheers,
David

On Thu, Dec 5, 2019 at 7:22 AM Dylan Reid  wrote:
>
> On Thu, Nov 21, 2019 at 4:59 PM Tomasz Figa  wrote:
> >
> > On Thu, Nov 21, 2019 at 6:41 AM Geoffrey McRae  
> > wrote:
> > >
> > >
> > >
> > > On 2019-11-20 23:13, Tomasz Figa wrote:
> > > > Hi Geoffrey,
> > > >
> > > > On Thu, Nov 7, 2019 at 7:28 AM Geoffrey McRae 
> > > > wrote:
> > > >>
> > > >>
> > > >>
> > > >> On 2019-11-06 23:41, Gerd Hoffmann wrote:
> > > >> > On Wed, Nov 06, 2019 at 05:36:22PM +0900, David Stevens wrote:
> > > >> >> > (1) The virtio device
> > > >> >> > =
> > > >> >> >
> > > >> >> > Has a single virtio queue, so the guest can send commands to 
> > > >> >> > register
> > > >> >> > and unregister buffers.  Buffers are allocated in guest ram.  
> > > >> >> > Each buffer
> > > >> >> > has a list of memory ranges for the data. Each buffer also has 
> > > >> >> > some
> > > >> >>
> > > >> >> Allocating from guest ram would work most of the time, but I think
> > > >> >> it's insufficient for many use cases. It doesn't really support 
> > > >> >> things
> > > >> >> such as contiguous allocations, allocations from carveouts or <4GB,
> > > >> >> protected buffers, etc.
> > > >> >
> > > >> > If there are additional constrains (due to gpu hardware I guess)
> > > >> > I think it is better to leave the buffer allocation to virtio-gpu.
> > > >>
> > > >> The entire point of this for our purposes is due to the fact that we
> > > >> can
> > > >> not allocate the buffer, it's either provided by the GPU driver or
> > > >> DirectX. If virtio-gpu were to allocate the buffer we might as well
> > > >> forget
> > > >> all this and continue using the ivshmem device.
> > > >
> > > > I don't understand why virtio-gpu couldn't allocate those buffers.
> > > > Allocation doesn't necessarily mean creating new memory. Since t

Re: guest / host buffer sharing ...

2019-11-10 Thread David Stevens
> My question would be "what is the actual problem you are trying to
> solve?".

One problem that needs to be solved is sharing buffers between
devices. With the out-of-tree Wayland device, to share virtio-gpu
buffers we've been using the virtio resource id. However, that
approach isn't necessarily the right approach, especially once there
are more devices allocating/sharing buffers. Specifically, this issue
came up in the recent RFC about adding a virtio video decoder device.

Having a centralized buffer allocator device is one way to deal with
sharing buffers, since it gives a definitive buffer identifier that
can be used by all drivers/devices to refer to the buffer. That being
said, I think the device as proposed is insufficient, as such a
centralized buffer allocator should probably be responsible for
allocating all shared buffers, not just linear guest ram buffers.

-David



Re: guest / host buffer sharing ...

2019-11-06 Thread David Stevens
> (1) The virtio device
> =
>
> Has a single virtio queue, so the guest can send commands to register
> and unregister buffers.  Buffers are allocated in guest ram.  Each buffer
> has a list of memory ranges for the data. Each buffer also has some

Allocating from guest ram would work most of the time, but I think
it's insufficient for many use cases. It doesn't really support things
such as contiguous allocations, allocations from carveouts or <4GB,
protected buffers, etc.

> properties to carry metadata, some fixed (id, size, application), but

What exactly do you mean by application?

> also allow free form (name = value, framebuffers would have
> width/height/stride/format for example).

Is this approach expected to handle allocating buffers with
hardware-specific constraints such as stride/height alignment or
tiling? Or would there need to be some alternative channel for
determining those values and then calculating the appropriate buffer
size?

-David

On Tue, Nov 5, 2019 at 7:55 PM Gerd Hoffmann  wrote:
>
>   Hi folks,
>
> The issue of sharing buffers between guests and hosts keeps popping
> up again and again in different contexts.  Most recently here:
>
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg656685.html
>
> So, I'm grabbing the recipient list of the virtio-vdec thread and some
> more people I know might be interested in this, hoping to have everyone
> included.
>
> Reason is:  Meanwhile I'm wondering whether "just use virtio-gpu
> resources" is really a good answer for all the different use cases
> we have collected over time.  Maybe it is better to have a dedicated
> buffer sharing virtio device?  Here is the rough idea:
>
>
> (1) The virtio device
> =
>
> Has a single virtio queue, so the guest can send commands to register
> and unregister buffers.  Buffers are allocated in guest ram.  Each buffer
> has a list of memory ranges for the data.  Each buffer also has some
> properties to carry metadata, some fixed (id, size, application), but
> also allow free form (name = value, framebuffers would have
> width/height/stride/format for example).
>
>
> (2) The linux guest implementation
> ==
>
> I guess I'd try to make it a drm driver, so we can re-use drm
> infrastructure (shmem helpers for example).  Buffers are dumb drm
> buffers.  dma-buf import and export is supported (shmem helpers
> get us that for free).  Some device-specific ioctls to get/set
> properties and to register/unregister the buffers on the host.
>
>
> (3) The qemu host implementation
> 
>
> qemu (likewise other vmms) can use the udmabuf driver to create
> host-side dma-bufs for the buffers.  The dma-bufs can be passed to
> anyone interested, inside and outside qemu.  We'll need some protocol
> for communication between qemu and external users interested in those
> buffers, to receive dma-bufs (via unix file descriptor passing) and
> update notifications.  Dispatching updates could be done based on the
> application property, which could be "virtio-vdec" or "wayland-proxy"
> for example.
>
>
> comments?
>
> cheers,
>   Gerd
>



[Qemu-devel] Re: [PATCH] virtio: invoke set_features on load

2010-05-10 Thread David Stevens
Michael S. Tsirkin m...@redhat.com wrote on 05/09/2010 09:42:09 AM:

 After migration, vhost was not getting features
 acked because set_features callback was never invoked.
 The fix is just to invoke that callback.
 
 Reported-by: David L Stevens dlstev...@us.ibm.com
 Signed-off-by: Michael S. Tsirkin m...@redhat.com
 ---
 
 David, a tested-by tag would be appreciated.

Tested-by: David L Stevens dlstev...@us.ibm.com

 
  hw/virtio.c |    2 ++
  1 files changed, 2 insertions(+), 0 deletions(-)
 
 diff --git a/hw/virtio.c b/hw/virtio.c
 index 5d686f0..74c450d 100644
 --- a/hw/virtio.c
 +++ b/hw/virtio.c
 @@ -692,6 +692,8 @@ int virtio_load(VirtIODevice *vdev, QEMUFile *f)
                       features, supported_features);
          return -1;
      }
 +    if (vdev->set_features)
 +        vdev->set_features(vdev, features);
      vdev->guest_features = features;
      vdev->config_len = qemu_get_be32(f);
      qemu_get_buffer(f, vdev->config, vdev->config_len);
 -- 
 1.7.1.12.g42b7f