[virtio-dev] Re: [VIRTIO GPU PATCH v2 1/1] virtio-gpu: Add new feature flag VIRTIO_GPU_F_FREEZING

2023-07-19 Thread Gerd Hoffmann
> +\begin{lstlisting}
> +struct virtio_gpu_status_freezing {
> +  struct virtio_gpu_ctrl_hdr hdr;
> +  __u32 freezing;
> +};
> +\end{lstlisting}

> +This is added to support the S3 function in guests with virtio-gpu.
> +When the guest does S3, it notifies QEMU via \field{freezing} whether
> +virtio-gpu is in freezing status. Zero means it is not in freezing
> +status, non-zero means it is. While virtio-gpu is in freezing status,
> +QEMU will not destroy resources created by the
> +VIRTIO_GPU_CMD_RESOURCE_CREATE_* commands, so that the guest can use
> +those resources to resume the display.

Good, thanks for the update.

> +Note: this change is not enough to solve the problems of the S4
> +function. QEMU may lose resources after hibernation; this needs more
> +research and development.

So we will likely see changes to this in the future.  I'd suggest
using an enum for freezing, so this can be extended with more modes in
the future, i.e. something along the lines of:

enum virtio_gpu_freeze_mode {
    VIRTIO_GPU_FREEZE_MODE_UNFREEZE  = 0,
    VIRTIO_GPU_FREEZE_MODE_FREEZE_S3 = 3,
};

Maybe also add an "_S3" suffix to the feature flag name.  In case we
add S4 support at some point in the future we will need another feature
flag for that.
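A minimal sketch of how such an enum could be consumed; the helper
function and the S4 comment are hypothetical illustrations, not part of
the patch under review:

```c
/* Hypothetical sketch, not driver/device code: an extensible freeze
 * mode as suggested above.  Only UNFREEZE and FREEZE_S3 come from the
 * suggestion; everything else here is illustrative. */

enum virtio_gpu_freeze_mode {
	VIRTIO_GPU_FREEZE_MODE_UNFREEZE  = 0,
	VIRTIO_GPU_FREEZE_MODE_FREEZE_S3 = 3,
	/* a future spec revision could add an S4 mode here, guarded by
	 * its own feature flag */
};

/* Should the device keep VIRTIO_GPU_CMD_RESOURCE_CREATE_* resources
 * alive across the suspend?  Any non-UNFREEZE mode means yes. */
int virtio_gpu_resources_pinned(enum virtio_gpu_freeze_mode mode)
{
	return mode != VIRTIO_GPU_FREEZE_MODE_UNFREEZE;
}
```

With an enum, adding an S4 mode later only extends the value space
instead of overloading a boolean.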

take care,
  Gerd


-
To unsubscribe, e-mail: virtio-dev-unsubscr...@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-h...@lists.oasis-open.org



[virtio-dev] Re: [VIRTIO GPU PATCH 1/1] virtio-gpu: Add new feature flag VIRTIO_GPU_F_FREEZING

2023-07-06 Thread Gerd Hoffmann
  Hi,

> +\item[VIRTIO_GPU_CMD_STATUS_FREEZING]
> +Notify freezing status through controlq.
> +Request data is \field{struct virtio_gpu_status_freezing}.
> +Response type is VIRTIO_GPU_RESP_OK_NODATA.
> +
> +The guest notifies QEMU via \field{freezing} whether virtio-gpu is in
> +freezing status. Zero means it is not in freezing status, non-zero
> +means it is. When the guest is in freezing status, QEMU will not
> +destroy resources.

This should be clearer about which resources are meant here.  Just
resources created by VIRTIO_GPU_CMD_RESOURCE_CREATE_* commands?
Or all objects (including cursors, scanouts, 3D contexts, ...)?
Or some other set?

I've seen that the topic of different suspend modes (S3/S4) came up in
the (code) patch discussion.
What was the conclusion there?
Should a note about this be added to the spec too?

take care,
  Gerd





[virtio-dev] Re: [PATCH v3 00/12] Context types, v3

2021-09-29 Thread Gerd Hoffmann
> Gurchetan Singh (10):
>   virtio-gpu api: multiple context types with explicit initialization
>   drm/virtgpu api: create context init feature
>   drm/virtio: implement context init: track valid capabilities in a mask
>   drm/virtio: implement context init: track {ring_idx, emit_fence_info}
> in virtio_gpu_fence
>   drm/virtio: implement context init: plumb {base_fence_ctx, ring_idx}
> to virtio_gpu_fence_alloc
>   drm/virtio: implement context init: stop using drv->context when
> creating fence
>   drm/virtio: implement context init: allocate an array of fence
> contexts
>   drm/virtio: implement context init: handle
> VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK
>   drm/virtio: implement context init: add virtio_gpu_fence_event
>   drm/virtio: implement context init: advertise feature to userspace

Pushed to drm-misc-next.

thanks,
  Gerd





Re: [virtio-dev] Re: [PATCH v1 08/12] drm/virtio: implement context init: stop using drv->context when creating fence

2021-09-15 Thread Gerd Hoffmann
  Hi,

> > I guess you need to also update virtio_gpu_fence_event_process()
> > then?  It currently has the strict ordering logic baked in ...
> 
> The update to virtio_gpu_fence_event_process was done as a preparation a
> few months back:
> 
> https://cgit.freedesktop.org/drm/drm-misc/commit/drivers/gpu/drm/virtio/virtgpu_fence.c?id=36549848ed27c22bb2ffd5d1468efc6505b05f97

Ah, ok, missed the detail that the context check is already there.

thanks,
  Gerd





[virtio-dev] Re: [PATCH v1 08/12] drm/virtio: implement context init: stop using drv->context when creating fence

2021-09-14 Thread Gerd Hoffmann
On Wed, Sep 08, 2021 at 06:37:13PM -0700, Gurchetan Singh wrote:
> The plumbing is all here to do this.  Since we always use the
> default fence context when allocating a fence, this makes no
> functional difference.
> 
> We can't process just the largest fence id anymore, since it's
> associated with different timelines.  It's fine for fence_id
> 260 to signal before 259.  As such, process each fence_id
> individually.

I guess you need to also update virtio_gpu_fence_event_process()
then?  It currently has the strict ordering logic baked in ...
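To illustrate the ordering problem: with per-context timelines,
completion ids from different timelines can arrive out of order, so
each completed id has to be matched individually instead of signaling
everything up to the largest id.  A self-contained sketch (not the
actual virtio_gpu code; all names are made up):

```c
/* Sketch only: why "signal everything up to the largest fence id"
 * breaks with multiple timelines.  With one global timeline, ids
 * complete in order, so seeing id N implies all ids <= N are done.
 * With per-context timelines, id 260 may legitimately signal before
 * 259, so each completed id is matched individually. */

#include <stddef.h>

struct pending_fence {
	unsigned long long id;
	unsigned int context;  /* timeline the fence belongs to */
	int signaled;
};

/* Signal exactly the fence whose id completed; returns the number of
 * fences signaled (0 or 1). */
int signal_one(struct pending_fence *fences, size_t n,
	       unsigned long long done_id)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (fences[i].id == done_id && !fences[i].signaled) {
			fences[i].signaled = 1;
			return 1;
		}
	}
	return 0;
}
```

Processing id 260 this way leaves the still-pending 259 untouched,
which the "largest id" logic would have wrongly signaled.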

take care,
  Gerd





Re: [virtio-dev] Is virtio-snd merged and upstreamed to qemu.org

2021-02-25 Thread Gerd Hoffmann
On Thu, Feb 25, 2021 at 01:51:16PM +0530, Veera wrote:
> Hi,
> 
> Are the virtio-snd patches merged and upstreamed to qemu.org?

There are no qemu patches right now (there are other hypervisors besides
qemu/kvm ...).

Shreyansh Chouhan (added to Cc) is working on a qemu host
implementation.

> When will the Linux kernel virtio-snd driver be upstreamed to kernel.org
> or Greg KH's testing branch?

mst picked up the recently posted patches; they should land in mainline
in 5.12 or 5.13.

take care,
  Gerd





Re: [virtio-dev] Re: [PATCH v8] virtio-snd: add virtio sound device specification

2021-01-11 Thread Gerd Hoffmann
On Fri, Jan 08, 2021 at 02:56:31PM +0100, Guennadi Liakhovetski wrote:
> Hi Anton,
> 
> I see the standard has been merged into the VirtIO spec - congratulations! I
> also saw, that there was a GSOC project for VirtIO sound support in QEMU.

No one looked at qemu support in GSoC 2020.
A few days ago Shreyansh Chouhan (added to Cc:) indicated he wants to
work on this (outside GSoC).  Some early discussions are happening on
qemu-devel right now.

While we are at it: what is the Linux guest driver status?  It seems
not to be merged upstream yet ...

take care,
  Gerd





[virtio-dev] Re: [virtio-comment] [PATCH v4] virtio-i2c: add the device specification

2020-11-17 Thread Gerd Hoffmann
On Tue, Nov 17, 2020 at 02:55:14PM +0800, Jie Deng wrote:
> 
> On 2020/11/16 16:16, Paolo Bonzini wrote:
> > On 16/11/20 03:12, Jie Deng wrote:
> > > For example, the frontend may kick the sequence "write read read
> > > ..." to the backend at a time.
> > > 
> > > The segments can be aggregated into "i2c_msg list" and sent to the
> > > hardware at a time by the backend.
> > > 
> > > The host native drivers will finally send these segments using the
> > > methods they support.
> > 
> > If so, the spec needs to specify what the backend must/should/may
> > aggregate into a single host-side transaction.
> > 
> > What my proposal does is essentially saying that the backend MUST
> > aggregate a write and a read segment into a single transaction if it is
> > sent with a single virtio-i2c request.
> > 
> > Paolo
> > 
> This may depend on the host hardware. I can add a host feature bit to
> indicate it.

That is not enough.  You also need that for the transactions.  If the
driver sends a write and a read message, the device needs to know
whether that is one or two transactions.  So if you want to continue
with the i2c_msg list idea, you need some way to group your messages
into transactions.
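One way such grouping could be encoded, sketched below: a per-request
flag marking "the next request belongs to the same transaction".  All
names and the layout here are illustrative, not from any spec draft:

```c
/* Sketch of one possible wire encoding for grouping i2c messages into
 * transactions: a flag bit that links a request to the one that
 * follows it.  The header layout and names are hypothetical. */

#include <stddef.h>
#include <stdint.h>

#define I2C_REQ_F_NEXT (1u << 0)  /* more segments follow in this txn */

struct i2c_req_hdr {
	uint16_t addr;   /* 7-bit device address */
	uint16_t flags;  /* I2C_REQ_F_NEXT links to the next request */
};

/* Count how many transactions a request stream encodes: a transaction
 * ends at the first request without the NEXT flag. */
size_t count_transactions(const struct i2c_req_hdr *reqs, size_t n)
{
	size_t i, txns = 0;

	for (i = 0; i < n; i++)
		if (!(reqs[i].flags & I2C_REQ_F_NEXT))
			txns++;
	return txns;
}
```

A write+read pair linked by the flag is one transaction; the same two
requests without the flag are two independent transactions, which is
exactly the distinction the device needs.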

Or you go with Paolo's idea, which looks simpler to me.

take care,
  Gerd





Re: [virtio-dev] Re: [PATCH v3 0/6] Add virtio-iommu built-in topology

2020-09-25 Thread Gerd Hoffmann
  Hi,

> Many power platforms are OF based, thus without ACPI or DT support.

pseries has lots of stuff below /proc/device-tree.  Dunno whether that
is the same kind of device tree we have on arm ...

take care,
  Gerd





[virtio-dev] Re: [PATCH v3 0/6] Add virtio-iommu built-in topology

2020-09-24 Thread Gerd Hoffmann
On Thu, Sep 24, 2020 at 12:02:55PM +0200, Joerg Roedel wrote:
> On Thu, Sep 24, 2020 at 05:38:13AM -0400, Michael S. Tsirkin wrote:
> > On Thu, Sep 24, 2020 at 11:21:29AM +0200, Joerg Roedel wrote:
> > > On Thu, Sep 24, 2020 at 05:00:35AM -0400, Michael S. Tsirkin wrote:
> > > > OK so this looks good. Can you pls repost with the minor tweak
> > > > suggested and all acks included, and I will queue this?
> > > 
> > > My NACK still stands, as long as a few questions are open:
> > > 
> > >   1) The format used here will be the same as in the ACPI table? I
> > >  think the answer to this questions must be Yes, so this leads
> > >  to the real question:
> > 
> > I am not sure it's a must.
> 
> It is, having only one parser for the ACPI and MMIO descriptions was one
> of the selling points for MMIO in past discussions and I think it makes
> sense to keep them in sync.

So that requirement basically kills the "we have something to play with
while the acpi table spec is in progress" argument.  Also note that qemu
microvm gained acpi support in the meantime.

Are there other cases where neither ACPI nor DT are available?

take care,
  Gerd





[virtio-dev] Re: [PATCH v2 10/23] virtio-gpu api: host visible feature

2020-09-09 Thread Gerd Hoffmann
  Hi,

> --- a/include/uapi/drm/virtgpu_drm.h

kernel <-> userspace API.

> --- a/include/uapi/linux/virtio_gpu.h

host <-> guest API.

Please create separate patches for these.

thanks,
  Gerd





[virtio-dev] Re: [PATCH v2 10/23] virtio-gpu api: host visible feature

2020-09-09 Thread Gerd Hoffmann
  Hi,

> +enum virtio_gpu_shm_id {
> + VIRTIO_GPU_SHM_ID_UNDEFINED = 0,
> + VIRTIO_GPU_SHM_ID_HOST_VISIBLE = 1
> +};

I think this is also not in the virtio spec update.





[virtio-dev] Re: [PATCH v2 04/23] virtio: Add get_shm_region method

2020-09-09 Thread Gerd Hoffmann
On Wed, Sep 02, 2020 at 05:00:25PM -0700, Gurchetan Singh wrote:
> On Wed, Sep 2, 2020 at 3:15 PM Vivek Goyal  wrote:
> 
> > Hi Gurchetan,
> >
> > Now Miklos has queued, these tree virtio patches for shared memory
> > region in his tree as part of virtiofs dax patch series.
> >
> > I am hoping this will get merged in 5.10 through his tree.
> >
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse.git/log/?h=dax
> 
> 
> Terrific ... !  Maybe we can queue the version Miklos has in drm-misc-next
> to avoid merge conflicts ?!?

I guess it would either be merging the fuse tree into drm-misc-next,
or cherry-picking the three virtio shm patches from the fuse tree.

Daniel?  What is the usual way to handle this?

thanks,
  Gerd





[virtio-dev] Re: [PATCH v2 09/23] virtio-gpu api: blob resources

2020-09-09 Thread Gerd Hoffmann
  Hi,

> @@ -100,7 +102,7 @@ struct drm_virtgpu_resource_info {
>   __u32 bo_handle;
>   __u32 res_handle;
>   __u32 size;
> - __u32 stride;
> + __u32 blob_mem;
>  };

Huh?  This is not in the virtio spec update proposed.

>  struct drm_virtgpu_3d_box {
> @@ -117,6 +119,8 @@ struct drm_virtgpu_3d_transfer_to_host {
>   struct drm_virtgpu_3d_box box;
>   __u32 level;
>   __u32 offset;
> + __u32 stride;
> + __u32 layer_stride;
>  };

Same here.

>  struct drm_virtgpu_3d_transfer_from_host {
> @@ -124,6 +128,8 @@ struct drm_virtgpu_3d_transfer_from_host {
>   struct drm_virtgpu_3d_box box;
>   __u32 level;
>   __u32 offset;
> + __u32 stride;
> + __u32 layer_stride;
>  };

And here.

take care,
  Gerd

PS: cherry-picked patches 1-3 into drm-misc-next.





Re: [virtio-dev] Re: virtio-input for copy/paste?

2020-08-20 Thread Gerd Hoffmann
  Hi,

> > Guest UI too (you can make the same argument the other way around) if we
> > want to support guest->host cut+paste.
> 
> OK, I was less worried about the host stealing from the guest, since it
> normally already can.

Yes, typically that would be less concerning.  I wouldn't completely
ignore it though, SEV exists for a reason.  Also you might not want
cut+paste inside the guest to overwrite the host clipboard.

take care,
  Gerd





Re: [virtio-dev] Re: virtio-input for copy/paste?

2020-08-19 Thread Gerd Hoffmann
  Hi,

> Right; what I was interested in was whether there would be a way to
> plumb copy/paste through for a VNC or local Gtk display; the guest view
> should be independent of the transport protocol.

Well.  Full-blown cut+paste is quite complex.  spice goes all-in and
supports cut+paste of everything both ways.  The minimal thing would be
host -> guest support for "text/plain; charset=utf-8".  A quick google
search suggests the vnc protocol supports just that.

So, what exactly are we talking about?

> Perhaps that way is just to standardise on the virtio-serial channel
> that spice already uses and provide that for other transports;
> but if not then it feels like there should be some standard.

Not that easy, I think: the channel is not used exclusively for
cut+paste but also for some other, spice-specific stuff.

We could try to integrate this into the qemu guest agent.

Or use something completely separate.  The qemu guest agent works at the
system level whereas cut+paste would work at the user session level.
spice solves this by having both a system and a user daemon, where the
user daemon talks to the system daemon.  With a separate channel you
wouldn't need the system daemon as a middle man though.

> > You also need the agent as user interface, so the user can explicitly
> > enable clipboard access for the other side (you don't want to do this
> > automatically for security reasons: the guest should not be able to
> > sniff the passwords which you cut+paste on the host from password
> > manager to browser).
> 
> That should be on the host-side UI, shouldn't it?

The guest UI too (you can make the same argument the other way around)
if we want to support guest->host cut+paste.

take care,
  Gerd





[virtio-dev] Re: [PATCH] virtio: fix build for configs without dma-bufs

2020-08-19 Thread Gerd Hoffmann
On Wed, Aug 19, 2020 at 12:10:11PM +0900, David Stevens wrote:
> Reported-by: kernel test robot 
> Signed-off-by: David Stevens 

Pushed to drm-misc-next

thanks for the quick fix,
  Gerd





[virtio-dev] Re: [PATCH v6 0/3] Support virtio cross-device resources

2020-08-18 Thread Gerd Hoffmann
On Tue, Aug 18, 2020 at 10:37:41AM +0900, David Stevens wrote:
> This patchset implements the current proposal for virtio cross-device
> resource sharing [1]. It will be used to import virtio resources into
> the virtio-video driver currently under discussion [2]. The patch
> under consideration to add support in the virtio-video driver is [3].
> It uses the APIs from v3 of this series, but the changes to update it
> are relatively minor.
> 
> This patchset adds a new flavor of dma-bufs that supports querying the
> underlying virtio object UUID, as well as adding support for exporting
> resources from virtgpu.
> 
> [1] https://markmail.org/thread/2ypjt5cfeu3m6lxu
> [2] https://markmail.org/thread/p5d3k566srtdtute
> [3] https://markmail.org/thread/j4xlqaaim266qpks
> 
> v5 -> v6 changes:
>  - Fix >80 character comment

Hmm, checkpatch still complains, full log below.

IIRC "dim checkpatch" runs scripts/checkpatch.pl with --strict
so it is a bit more picky ...

The FILE_PATH_CHANGES isn't a problem given that the new file
is covered by an existing wildcard.

take care,
  Gerd

---
+ dim checkpatch drm-misc-next..drm-qemu-next drm-misc
ee194dc222ae virtio: add dma-buf support for exported objects
-:29: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does 
MAINTAINERS need updating?
#29: 
new file mode 100644

-:65: CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
#65: FILE: include/linux/virtio_dma_buf.h:32:
+struct dma_buf *virtio_dma_buf_export(

-:112: CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
#112: FILE: drivers/virtio/virtio_dma_buf.c:19:
+struct dma_buf *virtio_dma_buf_export(

-:115: CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
#115: FILE: drivers/virtio/virtio_dma_buf.c:22:
+   const struct virtio_dma_buf_ops *virtio_ops = container_of(

-:119: CHECK:LOGICAL_CONTINUATIONS: Logical continuations should be on the 
previous line
#119: FILE: drivers/virtio/virtio_dma_buf.c:26:
+   if (!exp_info->ops
+   || exp_info->ops->attach != &virtio_dma_buf_attach

-:120: CHECK:LOGICAL_CONTINUATIONS: Logical continuations should be on the 
previous line
#120: FILE: drivers/virtio/virtio_dma_buf.c:27:
+   || exp_info->ops->attach != &virtio_dma_buf_attach
+   || !virtio_ops->get_uuid) {

-:135: CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
#135: FILE: drivers/virtio/virtio_dma_buf.c:42:
+   const struct virtio_dma_buf_ops *ops = container_of(

-:167: CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
#167: FILE: drivers/virtio/virtio_dma_buf.c:74:
+   const struct virtio_dma_buf_ops *ops = container_of(

total: 0 errors, 1 warnings, 7 checks, 144 lines checked
76c9c2abbe6b virtio-gpu: add VIRTIO_GPU_F_RESOURCE_UUID feature
9c3f3edd1cc4 (HEAD -> drm-qemu-next, kraxel.org/drm-qemu-next) drm/virtio: 
Support virtgpu exported resources
-:53: CHECK:UNCOMMENTED_DEFINITION: spinlock_t definition without comment
#53: FILE: drivers/gpu/drm/virtio/virtgpu_drv.h:222:
+   spinlock_t resource_export_lock;

-:250: CHECK:PREFER_KERNEL_TYPES: Prefer kernel type 'u32' over 'uint32_t'
#250: FILE: drivers/gpu/drm/virtio/virtgpu_vq.c:1118:
+   uint32_t resp_type = le32_to_cpu(resp->hdr.type);

-:256: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#256: FILE: drivers/gpu/drm/virtio/virtgpu_vq.c:1124:
+   if (resp_type == VIRTIO_GPU_RESP_OK_RESOURCE_UUID &&
+   obj->uuid_state == UUID_INITIALIZING) {

-:286: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#286: FILE: drivers/gpu/drm/virtio/virtgpu_vq.c:1154:
+   cmd_p = virtio_gpu_alloc_cmd_resp(vgdev,
+   virtio_gpu_cmd_resource_uuid_cb, &vbuf, sizeof(*cmd_p),

total: 0 errors, 0 warnings, 4 checks, 250 lines checked
+ exit 1





Re: [virtio-dev] Re: [PATCH v5 0/3] Support virtio cross-device resources

2020-08-17 Thread Gerd Hoffmann
On Mon, Aug 17, 2020 at 12:50:08PM +0200, Gerd Hoffmann wrote:
> On Tue, Jun 23, 2020 at 10:31:28AM +0900, David Stevens wrote:
> > Unless there are any remaining objections to these patches, what are
> > the next steps towards getting these merged? Sorry, I'm not familiar
> > with the workflow for contributing patches to Linux.
> 
> Sorry, just have been busy and not paying as much attention to drm
> patches as usual.  Playing catch-up now.  Queued for drm-misc-next,
> unless something goes wrong in my testing the patches should land
> in linux-next soon and be merged upstream in the next merge window.

Oh, spoke too soon.  scripts/checkpatch.pl has a bunch of codestyle
warnings.  Can you fix them and resend?

thanks,
  Gerd





Re: [virtio-dev] Re: [PATCH v5 0/3] Support virtio cross-device resources

2020-08-17 Thread Gerd Hoffmann
On Tue, Jun 23, 2020 at 10:31:28AM +0900, David Stevens wrote:
> Unless there are any remaining objections to these patches, what are
> the next steps towards getting these merged? Sorry, I'm not familiar
> with the workflow for contributing patches to Linux.

Sorry, just have been busy and not paying as much attention to drm
patches as usual.  Playing catch-up now.  Queued for drm-misc-next,
unless something goes wrong in my testing the patches should land
in linux-next soon and be merged upstream in the next merge window.

take care,
  Gerd





[virtio-dev] Re: virtio-input for copy/paste?

2020-08-16 Thread Gerd Hoffmann
On Mon, Aug 10, 2020 at 05:41:13PM +0100, Dr. David Alan Gilbert wrote:
> Hi,
>   Is there anywhere that virtio has where host/guest copy/paste would
> fit?

Well, spice supports it, with the spice client and the spice guest agent
talking to each other using a virtio-serial channel (guest <->
spice-server) and a spice protocol tunnel (spice-server <-> spice-client).

You'll need a guest agent anyway, to bridge between the window system
(x11/wayland/...) and the device, so running some protocol over a stream
doesn't look bad to me as a design choice.  These days you might choose
vsock instead (depends on how you design the host side though).

You also need the agent as a user interface, so the user can explicitly
enable clipboard access for the other side (you don't want to do this
automatically for security reasons: the guest should not be able to
sniff the passwords which you cut+paste on the host from password
manager to browser).

> I don't think it's something we have defined at the moment
> and I was wondering if virtio-input would be the right place to add it.
> If the client driver supported the feature then the host would
> hand it to the client, if it didn't it could insert a string
> of key events.

The problem with such a fallback is keyboard maps.  virtio-input sends
scancodes, not keysyms.  Even with us-ascii this is a problem (qwerty
vs. qwertz), and it doesn't become easier with € æ ä ß and emoji ...
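A tiny illustration of why injected key events depend on the guest
keymap (layouts simplified; only the KEY_Y code matches
linux/input-event-codes.h, the rest is illustrative):

```c
/* Sketch: the same physical scancode yields different characters
 * under different layouts, so the host cannot pick scancodes for a
 * clipboard string without knowing the guest's keymap. */

#define SCANCODE_KEY_Y 21  /* KEY_Y: physical position, layout-independent */

/* US layout: the KEY_Y position types 'y'. */
char scancode_to_char_qwerty(int code)
{
	return code == SCANCODE_KEY_Y ? 'y' : '?';
}

/* German layout: the same position types 'z' (y and z are swapped). */
char scancode_to_char_qwertz(int code)
{
	return code == SCANCODE_KEY_Y ? 'z' : '?';
}
```

So a host injecting scancode 21 to "type" the letter y would silently
paste z into a German-configured guest.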

take care,
  Gerd





[virtio-dev] Re: [PATCH v4 0/3] Support virtio cross-device resources

2020-05-28 Thread Gerd Hoffmann
On Tue, May 26, 2020 at 07:58:08PM +0900, David Stevens wrote:
> This patchset implements the current proposal for virtio cross-device
> resource sharing [1]. It will be used to import virtio resources into
> the virtio-video driver currently under discussion [2]. The patch
> under consideration to add support in the virtio-video driver is [3].
> It uses the APIs from v3 of this series, but the changes to update it
> are relatively minor.
> 
> This patchset adds a new flavor of dma-bufs that supports querying the
> underlying virtio object UUID, as well as adding support for exporting
> resources from virtgpu.
> 
> [1] https://markmail.org/thread/2ypjt5cfeu3m6lxu
> [2] https://markmail.org/thread/p5d3k566srtdtute
> [3] https://markmail.org/thread/j4xlqaaim266qpks
> 
> v3 -> v4 changes:
>  - Replace dma-buf hooks with virtio dma-buf from v1.
>  - Remove virtio_attach callback, as the work that had been done
>in that callback is now done on dma-buf export. The documented
>requirement that get_uuid only be called on attached virtio
>dma-bufs is also removed.
>  - Rebase and add call to virtio_gpu_notify for ASSIGN_UUID.
> 
> David Stevens (3):
>   virtio: add dma-buf support for exported objects
>   virtio-gpu: add VIRTIO_GPU_F_RESOURCE_UUID feature
>   drm/virtio: Support virtgpu exported resources

Looks all sane to me.  mst, have you looked at the virtio core changes?
How we are going to merge this?  If you ack I can merge via
drm-misc-next.  Merging through virtio queue would be fine too.

thanks,
  Gerd





[virtio-dev] Re: [PATCH v3 1/4] dma-buf: add support for virtio exported objects

2020-05-14 Thread Gerd Hoffmann
  Hi,

> - for the runtime upcasting the usual approach is to check the ->ops
> pointer. Which means that would need to be the same for all virtio
> dma_bufs, which might get a bit awkward. But I'd really prefer we not
> add allocator specific stuff like this to dma-buf.

This is exactly the problem, it gets messy quickly, also when it comes
to using the drm_prime.c helpers ...

take care,
  Gerd





[virtio-dev] Re: [PATCH v8] virtio-snd: add virtio sound device specification

2020-04-28 Thread Gerd Hoffmann
On Tue, Apr 14, 2020 at 12:41:52PM +0200, Anton Yakovlev wrote:
> Hi all,
> 
> There seems to be no additional comments here. Is it ready to vote then?

Yes, I think so.  I see you did that in the meantime.

> On 30.03.2020 18:34, Anton Yakovlev wrote:
> > From: Anton Yakovlev 
> > 
> > This patch proposes a virtio specification for a new virtio sound
> > device that may be useful when audio is required but device
> > passthrough or emulation is not an option.
> > 
> > Signed-off-by: Anton Yakovlev 

Reviewed-by: Gerd Hoffmann 

take care,
  Gerd





[virtio-dev] Re: [PATCH v8] virtio-snd: add virtio sound device specification

2020-04-03 Thread Gerd Hoffmann
  Hi,

> This patch proposes a virtio specification for a new virtio sound
> device that may be useful when audio is required but device
> passthrough or emulation is not an option.
> 
> Signed-off-by: Anton Yakovlev 
> ---
> 
> v7->v8 changes:
> 1. Add a universal control request to query information about any
>configuration item (jacks, streams and channel maps). The request
>allows to get information in one by one or in a batch manner.
> 2. Add a common information header that allows to link items together
>in a unified way.
> 3. Rework a jack configuration as a jack information structure.
> 4. Rework a stream configuration as a stream information structure.
> 5. Make a channel map as a separate item and add the
>"Channel Map Control Messages" section.

Looks good to me.  The new *INFO commands are certainly an improvement.
I guess you have both guest and host implementations for this by now?

take care,
  Gerd





[virtio-dev] Re: [PATCH v3 0/4] Support virtio cross-device resources

2020-03-20 Thread Gerd Hoffmann
On Wed, Mar 11, 2020 at 08:20:00PM +0900, David Stevens wrote:
> This patchset implements the current proposal for virtio cross-device
> resource sharing [1], with minor changes based on recent comments. It
> is expected that this will be used to import virtio resources into the
> virtio-video driver currently under discussion [2].
> 
> This patchset adds a new hook to dma-buf, for querying the dma-buf's
> underlying virtio UUID. This hook is then plumbed through DRM PRIME
> buffers, and finally implemented in virtgpu.

Looks all fine to me.  We should wait for the virtio protocol update
(patch 3/4) being accepted into the virtio specification.  When this is
done I'll go commit this to drm-misc-next.

cheers,
  Gerd





Re: [virtio-dev] Re: [PATCH v7] virtio-snd: add virtio sound device specification

2020-03-17 Thread Gerd Hoffmann
On Mon, Mar 16, 2020 at 09:30:59PM +0100, Anton Yakovlev wrote:
> Hi,
> 
> On 16.03.2020 11:19, Gerd Hoffmann wrote:
> >Hi,
> > 
> > > 2. Rework jack configuration structure, now it's more aligned with the 
> > > HDA spec.
> > 
> > Yep, that looks good.
> > 
> > > +\item[\field{hda_fn_nid}] indicates a functional node identifier
> > > +(see \hyperref[intro:HDA]{HDA}, section 7.1.2).
> > 
> > How is the nid used?
> 
> Since a functional group in HDA links entities of different kinds
> together, as an example, the linux driver can provide different groups
> as different PCM devices (and this well aligns with hardware case when
> streams with different capabilities live in different groups). This may
> be necessary to reassign the jack, because you need to know which
> streams are attached to the jacks. In addition, it will allow you to
> associate device controls with the stream/PCM device.

Ah, there is virtio_snd_pcm_config.hda_fn_nid.  So this is how jacks are
associated with streams.  Haven't noticed this update before.

> > In general some of the defconf and caps fields might need some
> > clarification on whether and how they are used, given we just cherry-pick
> > some items from the HDA spec ...
> 
> Yes, I also thought about it. But will this lead to a direct dependence
> on the HDA specification? If something changes there, we should update
> here. Therefore, I left these elements transparent and allowed the
> device/driver to decide on the wise use of these fields.

I'm mostly thinking about virtio-sound specific stuff which obviously
isn't covered by the HDA spec, like this:

> > If the "Presence Detect Capable" bit is set the status is available in
> > the "connected" field.

Most fields don't need a comment in virtio-sound, things like the jack
color should be pretty clear without that.

cheers,
  Gerd





[virtio-dev] Re: [PATCH v7] virtio-snd: add virtio sound device specification

2020-03-16 Thread Gerd Hoffmann
  Hi,

> 2. Rework jack configuration structure, now it's more aligned with the HDA 
> spec.

Yep, that looks good.

> +\item[\field{hda_fn_nid}] indicates a functional node identifier
> +(see \hyperref[intro:HDA]{HDA}, section 7.1.2).

How is the nid used?

In general some of the defconf and caps fields might need some
clarification on whether and how they are used, given we just cherry-pick
some items from the HDA spec ...

> +\item[\field{hda_reg_caps}] indicates a pin capabilities value
> +(see \hyperref[intro:HDA]{HDA}, section 7.3.4.9).

... for example:

If the "Presence Detect Capable" bit is set the status is available in
the "connected" field.

"VRef Control" is not supported and must be zero.

> +\item[\field{connected}] indicates the current jack connection status
> +(1 - connected, 0 - disconnected).

cheers,
  Gerd





Re: [virtio-dev] [PATCH v2 4/4] drm/virtio: Support virtgpu exported resources

2020-03-04 Thread Gerd Hoffmann
On Tue, Mar 03, 2020 at 11:42:22AM +0900, David Stevens wrote:
> > cmd_p->hdr.ctx_id =
> >
> > Before this completion of this hypercall, this resource can be
> > considered context local, while afterward it can be considered
> > "exported".
> 
> Maybe I'm misunderstanding render contexts, but exporting a resource
> doesn't seem related to render contexts.

It isn't indeed.  Binding resources to contexts might need dma-buf
imports/exports on the host side, but that is another story and not
related to dma-buf exports inside the guest.

cheers,
  Gerd





[virtio-dev] Re: [PATCH v2 4/4] drm/virtio: Support virtgpu exported resources

2020-03-04 Thread Gerd Hoffmann
  Hi,

> + if (vgdev->has_resource_assign_uuid) {
> + spin_lock(&vgdev->resource_export_lock);
> + if (bo->uuid_state == UUID_NOT_INITIALIZED) {
> + bo->uuid_state = UUID_INITIALIZING;
> + needs_init = true;
> + }
> + spin_unlock(&vgdev->resource_export_lock);
> +
> + if (needs_init) {
> + ret = virtio_gpu_cmd_resource_assign_uuid(vgdev, bo);

You can submit a fenced command, then wait on the fence here.  Removes
the need for UUID_INITIALIZING.

Also note that this function will be called only once, on the first
export.  When exporting the same object again drm will simply reuse
the existing dmabuf.  You can drop UUID_NOT_INITIALIZED and needs_init.

So you are left with only two uuid_state states.  You could turn uuid
into a pointer, so it gets only allocated when needed.  Also uuid ==
NULL can be used for "uuid not available" then.
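A rough userspace model of that two-state scheme (the names and the synchronous stand-in for the fenced VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID command are illustrative, not the actual driver code):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* bo->uuid == NULL encodes "uuid not available"; a non-NULL pointer
 * means the host already assigned one.  No separate state field. */
struct toy_bo {
    unsigned char *uuid; /* 16 bytes when assigned */
};

/* Stand-in for submitting a fenced ASSIGN_UUID command and waiting
 * on the fence; here the "host" reply is just copied in. */
int toy_assign_uuid(struct toy_bo *bo, const unsigned char host_uuid[16])
{
    bo->uuid = malloc(16);
    if (!bo->uuid)
        return -1;
    memcpy(bo->uuid, host_uuid, 16);
    return 0;
}

/* Called on first export; drm reuses the dma-buf afterwards, but the
 * NULL check keeps repeated calls harmless anyway. */
int toy_export(struct toy_bo *bo, const unsigned char host_uuid[16])
{
    if (bo->uuid)
        return 0;
    return toy_assign_uuid(bo, host_uuid);
}
```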

cheers,
  Gerd





[virtio-dev] Re: [PATCH v2 1/4] dma-buf: add support for virtio exported objects

2020-03-03 Thread Gerd Hoffmann
On Mon, Mar 02, 2020 at 09:15:21PM +0900, David Stevens wrote:
> This change adds a new dma-buf operation that allows dma-bufs to be used
> by virtio drivers to share exported objects. The new operation allows
> the importing driver to query the exporting driver for the UUID which
> identifies the underlying exported object.
> 
> Signed-off-by: David Stevens 
> ---
>  drivers/dma-buf/dma-buf.c | 14 ++
>  include/linux/dma-buf.h   | 22 ++
>  2 files changed, 36 insertions(+)
> 
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index d4097856c86b..a04632284ec2 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -1158,6 +1158,20 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, void 
> *vaddr)
>  }
>  EXPORT_SYMBOL_GPL(dma_buf_vunmap);
>  
> +#ifdef CONFIG_VIRTIO
> +int dma_buf_get_uuid(struct dma_buf *dmabuf, uuid_t *uuid)

Hmm, I think I would drop the #ifdef

cheers,
  Gerd





[virtio-dev] Re: [PATCH v3 1/2] virtio-video: Add virtio video device specification

2020-03-03 Thread Gerd Hoffmann
  Hi,

> > With a feature flag both driver and device can choose whether they want
> > to support v1 or v2 or both.  With a version config field this is more
> > limited, the device can't decide to support both.  So the bonus points
> > for a smooth transition go to the feature flags not the version field ;)
> 
> I agree that feature flags would be preferable in general, but I'm
> concerned by the fact that there is (IIUC) a limited number of them.

We have 64 total, some reserved, 24 are available to devices right now,
see https://www.kraxel.org/virtio/virtio-v1.1-cs01-video-v3.html#x1-130002

> Video tends to change significantly over time, and to have optional
> features that would also be presented as feature flags. After a while
> we may run out of them, while a new protocol version would allow us to
> extend the config struct with some new flags. Or am I missing
> something?

Not everything needs a feature flag.  For example we have
VIRTIO_VIDEO_CMD_QUERY_CAPABILITY, and we can add new video formats
without needing a feature flag.  Maybe it is a good idea to explicitly
say in the specs that this can happen and that the driver should simply
ignore any unknown format returned by the device.
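For instance, a driver processing QUERY_CAPABILITY results could simply skip format codes it doesn't recognize; a userspace sketch with made-up format values, not actual virtio-video code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative format codes, not the spec's values. */
#define FORMAT_NV12   3
#define FORMAT_YUV420 4

bool driver_knows(uint32_t fmt)
{
    return fmt == FORMAT_NV12 || fmt == FORMAT_YUV420;
}

/* Copy supported formats out of the device's list; unknown codes are
 * silently ignored rather than treated as an error. */
size_t filter_formats(const uint32_t *dev, size_t n, uint32_t *out)
{
    size_t kept = 0;
    for (size_t i = 0; i < n; i++)
        if (driver_knows(dev[i]))
            out[kept++] = dev[i];
    return kept;
}
```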

> I also wonder how "support v1 or v2 or both" would work with feature
> flags. In order to make it possible to opt out of v1, I guess we would
> need "v1 supported" flag to begin with?

The driver can ignore any device without v2 feature flag set.
The device can refuse to accept a driver without v2 support (don't allow
setting the FEATURES_OK bit).

An explicit v1 feature flag would allow the guest driver to print a more
specific error message ("device doesn't support v1" instead of "feature
negotiation failed"), but that's it.
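That negotiation could be modelled like this (the feature bit number is made up for illustration; real virtio device feature bits are assigned by the spec):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define F_VERSION_2 (1ull << 40) /* illustrative device feature bit */

/* The driver ignores any device without the v2 bit set. */
bool driver_accepts_device(uint64_t device_features)
{
    return (device_features & F_VERSION_2) != 0;
}

/* The device refuses FEATURES_OK when the driver did not accept v2. */
bool device_accepts_features_ok(uint64_t driver_features)
{
    return (driver_features & F_VERSION_2) != 0;
}
```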

cheers,
  Gerd





Re: [virtio-dev] Re: [PATCH v6] virtio-snd: add virtio sound device specification

2020-03-03 Thread Gerd Hoffmann
On Tue, Mar 03, 2020 at 01:27:38PM +0100, Anton Yakovlev wrote:
> On 03.03.2020 13:10, Gerd Hoffmann wrote:
> > On Tue, Mar 03, 2020 at 12:28:32PM +0100, Anton Yakovlev wrote:
> > > Hi,
> > > 
> > > On 03.03.2020 12:20, Gerd Hoffmann wrote:
> > > > Hi,
> > > > 
> > > > > 4. Introduce the polling mode feature for a message-based transport.
> > > > 
> > > > BTW: is that driver -> device or device -> driver or both?
> > > > In case both: should we have two separate feature flags maybe?
> > > 
> > > It's driver -> device direction. The driver needs notifications from the
> > > device in any case, so it makes no sense to suppress notifications in this
> > > direction (device -> driver).
> > 
> > Hmm, unless I missed something it doesn't matter whether the guest
> > kernel checks virtqueues from irq handler or timer ...
> 
> Because in the general case, it is impossible to predict when a notification
> will come from the device. Even if we are talking about a period time, this
> does not mean that the device will send notifications precisely at these
> intervals. And this means that the driver must do a tricky poll, trying
> to guess when the notification will arrive, i.e. just waste CPU resources.

Well, the simplest way to handle this would be that each time the driver
submits a buffer to the device it also checks whether the device has
completed buffers.  No irq, no timer -> no extra wakeups to collect
completed buffers.

Drawback is that completed buffers might be unnoticed for a while then
due to jitter, the driver might see sometimes no buffers and sometimes
multiple buffers.  For latency-sensitive workloads this will probably
not work very well.
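A toy model of that scheme, just to make the bookkeeping concrete (a plain counter stands in for the virtqueue; nothing here is driver code):

```c
#include <assert.h>

struct toy_queue {
    int completed; /* buffers the device finished since the last reap */
    int in_flight; /* buffers currently owned by the device */
};

/* Every submit first collects whatever completed meanwhile, then
 * queues the new buffer -- no irq and no timer needed. */
int submit_and_reap(struct toy_queue *q)
{
    int reaped = q->completed;
    q->in_flight -= reaped;
    q->completed = 0;
    q->in_flight++;
    return reaped;
}
```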

So, I'm not sure whether it makes sense to have that as an option or
whether guest drivers would just use notifications unconditionally
anyway.

But in any case we should clarify in the spec that this means the driver
doesn't notify and the device polls.

cheers,
  Gerd





Re: [virtio-dev] Re: [PATCH v6] virtio-snd: add virtio sound device specification

2020-03-03 Thread Gerd Hoffmann
On Tue, Mar 03, 2020 at 12:28:32PM +0100, Anton Yakovlev wrote:
> Hi,
> 
> On 03.03.2020 12:20, Gerd Hoffmann wrote:
> >Hi,
> > 
> > > 4. Introduce the polling mode feature for a message-based transport.
> > 
> > BTW: is that driver -> device or device -> driver or both?
> > In case both: should we have two separate feature flags maybe?
> 
> It's driver -> device direction. The driver needs notifications from the
> device in any case, so it makes no sense to suppress notifications in this
> direction (device -> driver).

Hmm, unless I missed something it doesn't matter whether the guest
kernel checks virtqueues from irq handler or timer ...

cheers,
  Gerd





[virtio-dev] Re: [PATCH v6] virtio-snd: add virtio sound device specification

2020-03-03 Thread Gerd Hoffmann
  Hi,

> 4. Introduce the polling mode feature for a message-based transport.

BTW: is that driver -> device or device -> driver or both?
In case both: should we have two separate feature flags maybe?

Otherwise this looks good to me.

cheers,
  Gerd





Re: [virtio-dev] Re: [PATCH v5] virtio-snd: add virtio sound device specification

2020-02-28 Thread Gerd Hoffmann
On Thu, Feb 27, 2020 at 03:08:59PM +0100, Anton Yakovlev wrote:
> Hello all,
> 
> We have completed the implementation of the PoC specification v5. Below are
> our comments on what might or should be improved in the specification.
> 
> Initialization:
> 
> 1. I think we should add the direction of the jack (in/out) to the
> configuration of the jack. Because some properties (for example, a label for
> some jack types) are based on this direction. The Linux HDA subsystem takes
> this information from a pin/complex configuration. But we don't have (and
> don't need) such entities.

Sounds good.

> 2. The stream configuration contains “features” that partially work as normal
> virtio features, and partially do not. Maybe it is better to rename the field
> to “flags” (as well as in the set_params request) and enter two sets of flags:
> one is reported in the stream configuration, and the other is used in the
> set_params request? This should make things less confusing.

Maybe have both features and flags fields?

> 1. In the latest version of the specification, we resized the “features” field
> in the stream configuration (u8->le32), but forgot to do the same in the
> set_params request. This should be fixed.

Sure.

> 2. The device may report polling support for message-based transport. In this
> case, the driver can be optimized so that it does not kick virtqueue for each
> message.

Makes sense too, especially given that the packets should arrive at
regular intervals and not (unlike for example network packets) at
unpredictable times.

> 3. For the input stream (capture), we decided to report the actual filled size
> using the len field in the virtqueue descriptor. Should the specification
> clearly indicate that the value contains sizeof(struct virtio_snd_pcm_status)
> + the size of the recorded frames?

Doesn't hurt to explicitly say so even though it should be clear that
the descriptor size covers the complete payload and not only the
recorded frames.

> 4. We also need to add a device requirement to complete all pending messages
> for a stream on a RELEASE request. Otherwise, they may become stuck in the
> virtqueue.

Yes, that makes sense.  Maybe also explicitly say that the RELEASE
request should be completed last (only after all other pending messages
are completed).
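That device-side ordering requirement could be sketched like this (toy arrays instead of virtqueues; the message codes are illustrative, and the sketch assumes exactly one RELEASE among the pending messages):

```c
#include <assert.h>
#include <stddef.h>

enum { MSG_PCM_XFER, MSG_RELEASE };

/* Writes the completion order into 'order' and returns the count:
 * all pending I/O messages are completed first, RELEASE strictly
 * last, so nothing is left stuck in the virtqueue. */
size_t complete_on_release(const int *pending, size_t n, int *order)
{
    size_t k = 0;
    for (size_t i = 0; i < n; i++)
        if (pending[i] != MSG_RELEASE)
            order[k++] = pending[i];
    order[k++] = MSG_RELEASE; /* the RELEASE request itself goes last */
    return k;
}
```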

cheers,
  Gerd





Re: [virtio-dev] [RFC] Upstreaming virtio-wayland (or an alternative)

2020-02-28 Thread Gerd Hoffmann
On Fri, Feb 28, 2020 at 07:11:40PM +0900, David Stevens wrote:
> > But there also is "unix socket", or maybe a somewhat broader "stream",
> > which would be another feature flag I guess because virtio-ipc would
> > just tunnel the stream without the help from other devices.
> 
> Can you elaborate on what you mean by this? I can envision how
> virtio-ipc would be a generic mechanism for passing data+virtio
> resources, including any new types of resources it itself defines.
> However, if "unix sockets" or a generic "stream" expands beyond
> virtio, that seems too broad, with too many edge cases to be feasible
> to implement.

As far as I know this is exactly what virtio-wayland does today if you try
to pass a unix socket file descriptor to the other side, so I assume
this functionality is needed ...

cheers,
  Gerd





Re: [virtio-dev] [RFC] Upstreaming virtio-wayland (or an alternative)

2020-02-28 Thread Gerd Hoffmann
  Hi,

> > Yes, sure, we need to exactly specify the different kinds of file
> > handles / resources. I think it makes sense to have a virtio feature
> > flag for each of them, so guest+host can easily negotiate what they are
> > able to handle and what not.
> 
> I was expecting that to be a feature of the resource producers
> (virtio-gpu, virtio-fs, ...) rather than a feature of virtio-ipc
> itself.

"resources from other virtio devices" would be one virtio-ipc feature
flag.  And, yes, that would for the most part have the other device
handle the problem.

But there also is "unix socket", or maybe a somewhat broader "stream",
which would be another feature flag I guess because virtio-ipc would
just tunnel the stream without the help from other devices.

Possibly there will be more ...

cheers,
  Gerd





Re: [virtio-dev] [RFC] Upstreaming virtio-wayland (or an alternative)

2020-02-27 Thread Gerd Hoffmann
  Hi,

> > > Can you provide more detail about the envisioned scope of this
> > > framework?
> >
> > The scope is "generic message+FD passing" interface, which is pretty
> > much what virtio-wl provides.
> 
> I think that scope is too broad. A socket is a 'generic message+FD'
> interface. Unless there's the expectation that the interface should
> eventually be as flexible as a regular domain socket, I think it would
> be a good idea to frame the scope of the interface more precisely.

Yes, sure, we need to exactly specify the different kinds of file
handles / resources.  I think it makes sense to have a virtio feature
flag for each of them, so guest+host can easily negotiate what they are
able to handle and what not.

cheers,
  Gerd





Re: [virtio-dev] [RFC] Upstreaming virtio-wayland (or an alternative)

2020-02-27 Thread Gerd Hoffmann
  Hi,

> > So one possible approach for the host side implementation would be to
> > write the virtio protocol parser as support library, then implement the
> > host applications (i.e. the host wayland proxy) as vhost-user process
> > using that library.
> > 
> > That would not work well with the singleton device approach though.
> 
> Hm, so you'd have several virtio-ipc devices exposed to the guest and
> the guest user would have to select one,

Yes.  We would have to attach names to the devices so the guest can
pick the one it wants, something like protocol=org.freedesktop.wayland

> or would the guest only see one virtio-ipc device? When I said
> singleton it was from the guest perspective (though I thought this
> would also apply to the host).

Yes, you'll have the same model on both guest and host.

A singleton device would work too, but would require an extra middle man
on the host side.  The vmm directly (qemu/crosvm), or a vhost-user
process, or a vhost kernel module would have to dispatch messages to the
actual applications (wayland proxy, ...).

A protocol-specific vhost-user process could drive the virtio rings
directly instead.  Of course this approach has its drawbacks too.
Porting applications is probably easier if the familiar socket interface
is available.

> guest side prime export + FD transfer on an IPC connection:
> ---
> 
> * guest_virtio_gpu->prime_handle_to_fd()
>   + virtio_gpu_resource_to_uuid() request sent on link3 to get a UUID
> - virglrenderer_get_resource_uuid() (the UUID is generated if it
>   doesn't exist yet)
> - host_virtio_gpu exports the resource to a pseudo-'struct file'
>   representation ('struct file' only exists kernel side, but maybe
>   we can add the concept of resource on the host user space side
>   and define an interface for this resource object?)

Yep, that needs to be sorted.  If everything happens inside the same
process this should be easy, it'll just be a vmm implementation detail.

Supporting multi-process vmms (see vhost-user) will be more tricky.
Let the vmm manage it in this case too?  Or should the kernel help here?
Or should we have a dbus service for that?  Other ideas?

>   We'll also need resource objs to be exported as plain FDs so we
>   can pass them through a unix socket.

virtio-gpu would use dma-bufs for that.

> I'm still unsure how FDs coming from a host application can be
> converted to resource objects (and then UUIDs so they can be passed
> to the ipc_connection) if they've not been previously created/passed
> by the guest though.

Depends on the kind of fd I think.  For unix sockets or virtio-fs files
it should be easy.  dma-bufs or sysv shmem would be more tricky because
we have to map them into the guest address space (if we want to support
that).  Not impossible though, we could use the new shared memory
support for that.

cheers,
  Gerd





[virtio-dev] Re: [PATCH v3 1/2] virtio-video: Add virtio video device specification

2020-02-27 Thread Gerd Hoffmann
  Hi,

> Dmitry's virtio-video driver
> https://patchwork.linuxtv.org/patch/61717/.
> Once it becomes fully functional, I'll post a list of possible
> improvements of protocol.

Cool.  Actually implementing things can uncover design problems
in the protocol that you didn't notice earlier.

> > > +\begin{description}
> > > +\item[\field{version}] is the protocol version that the device talks.
> > > +  The device MUST set this to 0.
> >
> > What is the intended use case for this?
> >
> > Given that virtio has feature flags to negotiate support for optional
> > features and protocol extensions between driver and device, why do you
> > think this is needed?
> 
> While feature flags work well when we "extend" the protocol with an
> optional feature, they don't when we want to "drop" or "modify"
> features.
> For example, I guess it'd be useful when we want:
> * to abandon a non-optional command,
> * to change a non-optional struct's layout,or
> * to change the order of commands in which the device expects to be sent.
> 
> Though it might be possible to handle these changes by feature flags,
> I suspect the version number allow us to transition protocols more
> smoothly.

Feature flags can be mandatory, both device and driver can fail
initialization when a specific feature is not supported by the other
end.  So in case we did screw up things so badly that we have to
effectively start over (which I hope wouldn't be the case) we can add a
VERSION_2 feature flag for a new set of commands with new structs and
new semantics.

With a feature flag both driver and device can choose whether they want
to support v1 or v2 or both.  With a version config field this is more
limited, the device can't decide to support both.  So the bonus points
for a smooth transition go to the feature flags not the version field ;)

cheers,
  Gerd





Re: [virtio-dev] Re: [PATCH 1/2] virtio: add dma-buf support for exported objects

2020-02-26 Thread Gerd Hoffmann
On Wed, Feb 26, 2020 at 12:56:58PM +0900, David Stevens wrote:
> On Tue, Feb 25, 2020 at 3:10 PM Gerd Hoffmann  wrote:
> >
> > How about dma_buf_{get,set}_uuid, similar to dma_buf_set_name?
> 
> While I'm not opposed to such an API, I'm also hesitant to make
> changes to the dma-buf API for a single use case.

See virtio-wayland discussion.  I expect we will see more cases show up.
Maybe this should even go one level up, to struct file.

cheers,
  Gerd





Re: [virtio-dev] [RFC] Upstreaming virtio-wayland (or an alternative)

2020-02-26 Thread Gerd Hoffmann
  Hi,
 
> So, I'm about to start working on virtio-pipe (I realize the name is
> not that great since pipes are normally unidirectional, but I'm sure
> we'll have plenty of time to bikeshed on that particular aspect once
> the other bits are sorted out :)).

virtio-ipc?

> This device would be a singleton
> (there can only be one per VM),

Why?

Alex already mentioned vhost-user.  This is basically a way to emulate
virtio devices outside qemu (not sure whether other vmms support that
too).  With vhost, emulation is done in the kernel; with vhost-user,
emulation is done in another process instead.  Communication with qemu
runs over a socket with messages roughly equivalent to vhost ioctls.

There is a vhost-user implementation of virtio-gpu (see
contrib/vhost-user-gpu/ in qemu git repo).  Main motivation is that a
separate process can have a much stricter sandbox (no need to have
access to network or guest disk images), which is especially useful for
a complex thing like 3D rendering.

So one possible approach for the host side implementation would be to
write the virtio protocol parser as support library, then implement the
host applications (i.e. the host wayland proxy) as vhost-user process
using that library.

That would not work well with the singleton device approach though.

A vhost-user approach would allow for multiple vmms sharing the
implementation.  It would also make it easy to pick a language other
than C (mostly relevant for qemu, crosvm isn't C anyway ...).

> * manage a central UUID <-> 'struct file' map that allows virtio-pipe
>   to convert FDs to UUIDs, pass UUIDs through a pipe and convert those
>   UUIDs back to FDs on the other end
>   - we need to expose an API to let each subsystem register/unregister
> their UUID <-> FD mapping (subsystems are responsible for the UUID
> creation/negotiation)

Do you have a rough plan how to do that?
On the guest side?
On the host side?

cheers,
  Gerd





[virtio-dev] Re: [PATCH v3 2/2] virtio-video: Define a feature for exported objects from different virtio devices

2020-02-25 Thread Gerd Hoffmann
  Hi,

> +/*
> + * Followed by either
> + * - struct virtio_video_mem_entry entries[]
> + *   for VIRTIO_VIDEO_MEM_TYPE_GUEST_PAGES
> + * - struct virtio_video_object_entry entries[]
> + *   for VIRTIO_VIDEO_MEM_TYPE_VIRTIO_OBJECT

Wouldn't that be a single virtio_video_object_entry?
Or could it be one per plane?

cheers,
  Gerd





[virtio-dev] Re: [PATCH v3 1/2] virtio-video: Add virtio video device specification

2020-02-25 Thread Gerd Hoffmann
On Thu, Feb 06, 2020 at 07:20:57PM +0900, Keiichi Watanabe wrote:
> From: Dmitry Sepp 
> 
> The virtio video encoder device and decoder device provide functionalities to
> encode and decode video stream respectively.
> Though video encoder and decoder are provided as different devices, they use a
> same protocol.
> 
> Signed-off-by: Dmitry Sepp 
> Signed-off-by: Keiichi Watanabe 

Finally found the time for a closer look.
Pretty good overall, some minor nits below ...

> +\begin{description}
> +\item[\field{version}] is the protocol version that the device talks.
> +  The device MUST set this to 0.

What is the intended use case for this?

Given that virtio has feature flags to negotiate support for optional
features and protocol extensions between driver and device, why do you
think this is needed?

> +The format description \field{virtio_video_format_desc} is defined as
> +follows:
> +\begin{lstlisting}
> +enum virtio_video_format {
> +/* Raw formats */
> +VIRTIO_VIDEO_FORMAT_RAW_MIN = 1,
> +VIRTIO_VIDEO_FORMAT_ARGB = VIRTIO_VIDEO_FORMAT_RAW_MIN,
> +VIRTIO_VIDEO_FORMAT_BGRA,
> +VIRTIO_VIDEO_FORMAT_NV12, /* 12  Y/CbCr 4:2:0  */
> +VIRTIO_VIDEO_FORMAT_YUV420, /* 12  YUV 4:2:0 */
> +VIRTIO_VIDEO_FORMAT_YVU420, /* 12  YVU 4:2:0 */
> +VIRTIO_VIDEO_FORMAT_RAW_MAX = VIRTIO_VIDEO_FORMAT_YVU420,

I'm wondering what the *_MIN and *_MAX values here (and elsewhere) are
good for?  I doubt drivers would actually loop over formats from min to
max, I'd expect they check for specific formats they can handle instead.

If you want to define the range for valid raw formats I'd suggest leaving
some room, so new formats can be added without changing MAX values, i.e.
use -- for example -- RAW_MIN = 0x100, RAW_MAX = 0x1ff, CODED_MIN=0x200,
CODED_MAX=0x2ff.  Or just drop them ...
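Spelled out, the suggested layout with room for growth might look like this (format names and the exact boundaries are illustrative):

```c
#include <assert.h>

/* Reserved ranges: raw formats get 0x100..0x1ff, coded formats
 * 0x200..0x2ff, so new entries can be appended to either range
 * without ever moving the MIN/MAX markers. */
enum video_format {
    FORMAT_RAW_MIN   = 0x100,
    FORMAT_ARGB      = FORMAT_RAW_MIN,
    FORMAT_NV12,
    FORMAT_RAW_MAX   = 0x1ff,

    FORMAT_CODED_MIN = 0x200,
    FORMAT_H264      = FORMAT_CODED_MIN,
    FORMAT_CODED_MAX = 0x2ff,
};
```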

> +struct virtio_video_query_control_level {
> +le32 profile; /* One of VIRTIO_VIDEO_PROFILE_* */
^^^  LEVEL ?

cheers,
  Gerd





[virtio-dev] Re: [PATCH 2/2] drm/virtio: Support virtgpu exported resources

2020-02-24 Thread Gerd Hoffmann
  Hi,

> +struct dma_buf *virtgpu_gem_prime_export(struct drm_gem_object *obj,
> +  int flags)
> +{
[ ... ]
> +}
> +
> +struct drm_gem_object *virtgpu_gem_prime_import(struct drm_device *dev,
> + struct dma_buf *buf)
> +{
[ ... ]
> +}

More code duplication.

> diff --git a/include/uapi/linux/virtio_gpu.h b/include/uapi/linux/virtio_gpu.h
> index 0c85914d9369..9c428ef03060 100644
> --- a/include/uapi/linux/virtio_gpu.h
> +++ b/include/uapi/linux/virtio_gpu.h

API change should go to a separate patch.

> +/*
> + * VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID
> + */
> +#define VIRTIO_GPU_F_CROSS_DEVICE2

Hmm, how about VIRTIO_GPU_F_RESOURCE_UUID ?

> @@ -87,6 +92,7 @@ enum virtio_gpu_ctrl_type {
>   VIRTIO_GPU_RESP_OK_CAPSET_INFO,
>   VIRTIO_GPU_RESP_OK_CAPSET,
>   VIRTIO_GPU_RESP_OK_EDID,
> + VIRTIO_GPU_RESP_OK_RESOURCE_ASSIGN_UUID,

The "assign" doesn't make sense in the reply.  I'd name that
VIRTIO_GPU_RESP_OK_RESOURCE_UUID or just VIRTIO_GPU_RESP_OK_UUID,

> +/* VIRTIO_GPU_RESP_OK_RESOURCE_ASSIGN_UUID */
> +struct virtio_gpu_resp_resource_assign_uuid {
> + struct virtio_gpu_ctrl_hdr hdr;
> + __u8 uuid[16];
> +};

Same here.

cheers,
  Gerd





[virtio-dev] Re: [PATCH 1/2] virtio: add dma-buf support for exported objects

2020-02-24 Thread Gerd Hoffmann
On Wed, Feb 19, 2020 at 05:06:36PM +0900, David Stevens wrote:
> This change adds a new flavor of dma-bufs that can be used by virtio
> drivers to share exported objects. A virtio dma-buf can be queried by
> virtio drivers to obtain the UUID which identifies the underlying
> exported object.

That duplicates a bunch of code from dma-buf.c in virtio_dma_buf.c.

How about dma_buf_{get,set}_uuid, similar to dma_buf_set_name?
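The shape of such an API might be (hypothetical signatures modelled on dma_buf_set_name; the struct here is a userspace stand-in, not the kernel's struct dma_buf, and none of this is an existing interface):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

struct dma_buf {            /* toy stand-in for the kernel struct */
    unsigned char uuid[16];
    bool uuid_set;
};

/* The exporter attaches a uuid to the buffer once. */
int dma_buf_set_uuid(struct dma_buf *buf, const unsigned char uuid[16])
{
    memcpy(buf->uuid, uuid, 16);
    buf->uuid_set = true;
    return 0;
}

/* An importing driver queries the uuid; fails if the exporter
 * never assigned one. */
int dma_buf_get_uuid(const struct dma_buf *buf, unsigned char uuid[16])
{
    if (!buf->uuid_set)
        return -1;
    memcpy(uuid, buf->uuid, 16);
    return 0;
}
```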

cheers,
  Gerd





[virtio-dev] Re: [RFC] Upstreaming virtio-wayland (or an alternative)

2020-02-24 Thread Gerd Hoffmann
  Hi,

> But let's say we go for a dedicated virtio-device to preserve this
> granularity. Should we aim at providing a generic virtio-msg device or
> should we keep this so-called wayland-specific virtio device (I'd like
> to remind you that it's actually protocol-agnostic)?

I think it totally makes sense to have something protocol-agnostic,
especially given that people figured it isn't really wayland-specific
and already using the existing implementation for non-wayland things.

cheers,
  Gerd





[virtio-dev] Re: [RFC] Upstreaming virtio-wayland (or an alternative)

2020-02-24 Thread Gerd Hoffmann
  Hi,

> > Because wayland benefits from allocation and sharing of
> > virtio-gpu buffers, a virtio-gpu combo device simplifies access to
> > those buffers, whereas the separate virtio devices as implemented in
> > crosvm requires bridging of resource handles (in guest kernel) and FDs
> > (in crosvm).
> 
> I must admit I'm not a huge fan of integrating a protocol-agnostic
> message passing interface (with FD-passing support) to virtio-gpu.
> Sounds like something that could be re-used without virtio-gpu being
> involved to me.

Agree.

> As for the resource bridging thing, that's more or less
> what I proposed with the generic interface to translate resource handles
> (what I called VFDs) to local FDs.

Yes, I think it makes sense to figure out a sensible way to translate/manage
resource handles and have multiple virtio devices work together.  What if
you want to pass a handle to a virtio-fs file instead of a virtio-gpu
dma-buf between host and guest?

cheers,
  Gerd





Re: [virtio-dev] Re: [RFC] Upstreaming virtio-wayland (or an alternative)

2020-02-17 Thread Gerd Hoffmann
  Hi,

> As pointed in my reply to David's email, I'm a bit worried by the
> security implications of this approach. As long as the dmabuf -> UUID
> conversion stays in kernel space we should be safe, but if we start
> allowing a guest proxy (running in userland) to send raw UUIDs on a
> VSOCK connection, we lose the ability to check if the process is
> actually allowed to use this resource.

Correct.

The UUID namespace is huge (128bit), so playing guessing games to access
resources you don't own isn't exactly trivial.  I would not be concerned
too much.  If you nevertheless want to avoid uuids, the only way is option
#1 (extend vsock), I think.

> to attach UUIDs to resources. This could be addressed with the
> virtio-dmabuf infrastructure you proposed, but it looks like David's
> proposal does not generalize this interface (it's a virtio-gpu-only
> thing right now).

The idea is that virtio-gpu guest driver adds a uuid to any dma-buf it
exports.  Then other drivers importing the dma-buf can figure what the
uuid is.  We could also add a ioctl (on the dma-buf fd) to query the
uuid.

> > Also it is a fact that approach #1 didn't go anywhere so far but we
> > have a working implementation of approach #3.  So I guess I wouldn't
> > veto approach #3 if you pick it after evaluating the options on the
> > table.
> 
> I'd really like to investigate option #1, unless you have strong
> reasons to think this is a dead end.

Last time I discussed this with Stefan Hajnoczi he didn't like the idea
too much.  IIRC the main concern was that it would work on specific
kinds of file handles only and that vsock would need special support
code for every file handle type.

It was quite a while ago though, maybe it's time for a re-evaluation.  The
limitation to specific file handles wouldn't go away, and likewise the
need for special support code.  We have a more clear plan for dma-buf
support though.  Also we got virtio-fs meanwhile and being able to pass
virtio-fs file handles over vsock looks like a pretty cool feature to
me.

Stefan?

cheers,
  Gerd





[virtio-dev] Re: [RFC] Upstreaming virtio-wayland (or an alternative)

2020-02-10 Thread Gerd Hoffmann
  Hi,

> #1 might require extra care if we want to make it safe, as pointed
> out by Stefan here [4] (but I wonder if the problem is not the same
> for a virtio-wayland based solution). Of course you also need a bit of
> infrastructure to register FD <-> VFD mappings (VFD being a virtual
> file descriptor that's only used as unique IDs identifying the resource
> backed by the local FD). FD <-> VFD mappings would have to be created
> by the subsystem in charge of the object backing the FD (virtio-gpu for
> exported GEM buffers, virtio-vdec for video buffers, vsock for unix
> sockets if we decide to bridge unix and vsock sockets to make it
> transparent, ...). The FD <-> VFD mapping would also have to be created
> on the host side, probably by the virtio device implementation
> (virglrenderer for GEM bufs for instance), which means host and guest
> need a way to inform the other end that a new FD <-> VFD mapping has
> been created so the other end can create a similar mapping (I guess this
> requires extra device-specific commands to work). Note that this
> solution doesn't look so different from the virtio-dmabuf [5] approach
> proposed by Gerd a few months back, it's just extended to be a global
> VFD <-> FD registry instead of a dmabuf <-> unique-handle one. One
> great thing about this approach is that we can re-use it for any kind
> of FD sharing, not just dmabufs.

A dedicated device is most likely not going to happen.  Probably
virtio-gpu, and possibly other devices too, will support attaching a
unique handle (probably a uuid) to a dma-buf, then share buffers like
this:

  (1) guest userspace export virtio-gpu resource as dma-buf
  (2) guest kernel generates uuid, sends uuid to the host (or maybe we
  let the host generate the uuid, not clear yet), attaches it to
  the (guest) dma-buf.
  (3) guest userspace imports the dma-buf into virtio-vdec
  (4) virtio-vdec driver finds the uuid attached to the buffer and passes
      it on to the host,
  (5) virtio-vdec device can use the uuid to look up the buffer on the
      host, then have the host decoder send the data directly to the
      host gpu ...

It certainly makes sense to use this for wayland too, so you just pass
around the uuid if you want pass around a dma-buf (no matter which of
the three solutions).

> #2 is a bit challenging, since it requires the proxy to know about all
> possible kind of FDs and do a FD <-> unique handle conversion with some
> help from the subsystem backing the FD. For dmabufs, that means we
> need to know who created the dmabuf, or assume that only one device is
> used for all allocations (virtio-gpu?).

See above, the uuid idea should simplify this.

> #3 is pretty similar to #1 in its design except that, instead of using
> the VSOCK infrastructure it's using a new type of virtio device. I
> guess it has the same pros and cons #1 has, and the name should probably
> be changed to reflect the fact that it can transmit any kind of data not
> just wayland.

Even though vsock looks simple at first, it isn't when you look at the
details.  You'll want to support more streams than virtqueues, so you'll
go multiplex.  You want good performance, but you also don't want to
allow streams to DoS the device by filling up the queue.

That's why I don't like the new virtio device idea much and would prefer
vhost being reused, either directly (#1) or via proxy (#2).

Note that vhost aims to be hypervisor-agnostic and we have (unless I
missed something) three transports for it: virtio, vmware and hyperv.
So extending that with virtio-only features might not be the best idea.

Also it is a fact that approach #1 didn't go anywhere so far, but we
have a working implementation of approach #3.  So I guess I wouldn't
veto approach #3 if you pick it after evaluating the options on the
table.

Final note: We have a new kid on the block: virtio-fs.  I think
virtio-fs with dax enabled should allow for shared file mappings between
host and guest.  That is something a proxy (#2) might be able to make
use of.

cheers,
  Gerd





Re: [virtio-dev] Re: [PATCH v2 0/1] VirtIO video device specification

2020-01-20 Thread Gerd Hoffmann
> > > Can't this problem be solved by adding "offset" field in 
> > > virtio_video_mem_entry?
> > >
> > > struct virtio_video_mem_entry {
> > >   le64 addr;
> > >   le32 length;
> > >   le32 offset;
> > >   u8 padding[4];
> > > };
> > >
> > > Here, "addr" must be the same in every mem_entry for (1) hardware.
> >
> > No.  virtio_video_mem_entry is basically a scatter list entry, you use
> > an *array* of these entries to describe your buffer (unless you are
> > using CMA due to hardware requirements, in this special case you have
> > only one entry in your array).
> 
> I see. I forgot about scatter list.
> However, I'm still not sure about the usage for CMA.
> if we're using CMA for a multiplanar format, how can the device know
> where the second plane start from?
> In my understanding, the number of entries in this case should be the
> same with the number of planes and
> "entries[0].addr + entries[0].length == entries[1].addr" should hold.

With the one-buffer-per-frame model you could add a plane_offsets[4]
field to virtio_video_resource_create.

The virtio_video_mem_entry array describes the whole buffer for all
planes, the plane_offsets array says where the individual planes start
inside the buffer.

cheers,
  Gerd





Re: [virtio-dev] Re: [PATCH v2 0/1] VirtIO video device specification

2020-01-20 Thread Gerd Hoffmann
  Hi,

> > Hmm, using the (ii) API, then checking whether your three plane buffers
> > happen to have the correct layout for (1) hardware looks somewhat
> > backwards to me.
> 
> Can't this problem be solved by adding "offset" field in 
> virtio_video_mem_entry?
> 
> struct virtio_video_mem_entry {
>   le64 addr;
>   le32 length;
>   le32 offset;
>   u8 padding[4];
> };
> 
> Here, "addr" must be the same in every mem_entry for (1) hardware.

No.  virtio_video_mem_entry is basically a scatter list entry, you use
an *array* of these entries to describe your buffer (unless you are
using CMA due to hardware requirements, in this special case you have
only one entry in your array).

> > I'd suggest using the (i) API and allowing the device to specify
> > alignment requirements.  So (1) hardware would say "need_align=0",
> > whereas (3) hardware would probably say "need_align=PAGE_SIZE" so it
> > can easily split the single buffer into three per-plane buffers.
> 
> Just to confirm, is "need_align" a field added in virtio_video_format_desc?

Given that different formats might have different alignment
requirements this looks like a good place to me.  Maybe rename to
plane_align to make clear what kind of alignment this is.

cheers,
  Gerd





Re: [virtio-dev] Re: [PATCH v2 0/1] VirtIO video device specification

2020-01-15 Thread Gerd Hoffmann
  Hi,

> > If you have (1) hardware you simply can't import buffers with arbitrary
> > plane offsets, so I'd expect software would prefer the single buffer
> > layout (i) over (ii), even when using another driver + dmabuf
> > export/import, to be able to support as much hardware as possible.
> > So (ii) might end up being unused in practice.
> >
> > But maybe not, was just an idea, feel free to scratch it.
> 
> That's true, simple user space would often do that. However, if more
> devices are in the game, often some extra alignment or padding between
> planes is needed and that is not allowed by (1), even though all the
> planes are in the same buffer.
> 
> My suggestion, based on the latest V4L2 discussion on unifying the
> UAPI of i) and ii), is that we may want to instead always specify
> buffers on a per-plane basis. Any additional requirements would be
> then validated by the host, which could check if the planes end up in
> the same buffer (or different buffers for (3)) and/or at the right
> offsets.

Hmm, using the (ii) API, then checking whether your three plane buffers
happen to have the correct layout for (1) hardware looks somewhat
backwards to me.

I'd suggest using the (i) API and allowing the device to specify
alignment requirements.  So (1) hardware would say "need_align=0",
whereas (3) hardware would probably say "need_align=PAGE_SIZE" so it can
easily split the single buffer into three per-plane buffers.

cheers,
  Gerd





[virtio-dev] Re: [PATCH v5] virtio-snd: add virtio sound device specification

2020-01-14 Thread Gerd Hoffmann
On Thu, Jan 09, 2020 at 03:07:43PM +0100, Anton Yakovlev wrote:
> This patch proposes virtio specification for a new virtio sound device,
> that may be useful in case when having audio is required but a device
> passthrough or emulation is not an option.

Looks good to me.  Where do we stand in terms of device/driver
implementations?  Trying to actually implement things sometimes
finds problems in the specification ...

cheers,
  Gerd





Re: [virtio-dev] Re: [PATCH v2 0/1] VirtIO video device specification

2020-01-13 Thread Gerd Hoffmann
  Hi,

> > Well, no.  Tomasz Figa split the devices into three groups:
> >
> >   (1) requires single buffer.
> >   (2) allows any layout (including the one (1) devices want).
> >   (3) requires per-plane buffers.
> >
> > Category (3) devices are apparently rare and old.  Both category (1)+(2)
> > devices can handle single buffers just fine.  So maybe support only
> > that?
> 
> From the guest implementation point of view, Linux V4L2 currently
> supports 2 cases, if used in allocation-mode (i.e. the buffers are
> allocated locally by V4L2):
> 
> i) single buffer with plane offsets predetermined by the format, (can
> be handled by devices from category 1) and 2))
> ii) per-plane buffers with planes at the beginning of their own
> buffers. (can be handled by devices from category 2) and 3))
> 
> Support for ii) is required if one wants to be able to import buffers
> with arbitrary plane offsets, so I'd consider it unavoidable.

If you have (1) hardware you simply can't import buffers with arbitrary
plane offsets, so I'd expect software would prefer the single buffer
layout (i) over (ii), even when using another driver + dmabuf
export/import, to be able to support as much hardware as possible.
So (ii) might end up being unused in practice.

But maybe not, was just an idea, feel free to scratch it.

cheers,
  Gerd





Re: [virtio-dev] Re: [PATCH v2 0/1] VirtIO video device specification

2020-01-13 Thread Gerd Hoffmann
On Mon, Jan 13, 2020 at 11:41:45AM +0100, Dmitry Sepp wrote:
> Hi Gerd,
> 
> Thanks for reviewing!
> 
> On Montag, 13. Januar 2020 10:56:36 CET Gerd Hoffmann wrote:
> >   Hi,
> > 
> > > This also means that we cannot have unspec for planes layout. Device
> > > either
> > > expects planes in separate buffers or in one buffer with some offsets,
> > > there cannot be mixed cases.
> > 
> > Hmm.  Is it useful to support both?  Or maybe support the "one buffer +
> > offsets" case only?  Splitting one buffer into multiple smaller ones
> > internally if needed shouldn't be a problem, and it would simplify the
> > interface a bit ...
> > 
> 
> Ok, let's consider the following case:
>  - the device expects planes in separate buffers.
>  - say, Android on the guest side also imports planes in separate buffers 
> into the driver.
>  - but the driver only supports one buffer + offsets.
> 
> Do you mean the driver can then join the underlying page lists and set
> offsets?  Yes, this would probably make sense.

Well, no.  Tomasz Figa split the devices into three groups:

  (1) requires single buffer.
  (2) allows any layout (including the one (1) devices want).
  (3) requires per-plane buffers.

Category (3) devices are apparently rare and old.  Both category (1)+(2)
devices can handle single buffers just fine.  So maybe support only
that?

Hope it's more clear this time,
  Gerd





Re: [virtio-dev] Re: [PATCH v2 0/1] VirtIO video device specification

2020-01-13 Thread Gerd Hoffmann
  Hi,

> This also means that we cannot have unspec for planes layout. Device either 
> expects planes in separate buffers or in one buffer with some offsets, there 
> cannot be mixed cases.

Hmm.  Is it useful to support both?  Or maybe support the "one buffer +
offsets" case only?  Splitting one buffer into multiple smaller ones
internally if needed shouldn't be a problem, and it would simplify the
interface a bit ...

cheers,
  Gerd





[virtio-dev] Re: [PATCH v4] virtio-snd: add virtio sound device specification

2020-01-09 Thread Gerd Hoffmann
  Hi,

> > Repeating "device-readable" or "device-writable" for each struct field
> > looks a bit odd because this applies to the whole struct.  Not so much
> > for these structs with a single field only, but there are structs with
> > more fields further down the spec ...
> 
> Well, I'm not sure here. Previously, I was told to mark explicitly every
> field.

Was that for virtqueue structs, or for config space?

For config space, which allows random access, it surely makes sense.  In
contrast the virtqueue has in and out buffers, so request structs are
placed in "out" buffers and are device-readable, whereas response
structs are placed in "in" buffers and are device-writable.

> Also, regarding requirement for populating tx/rx queues with period size
> buffers. Maybe it's better to replace MUST with SHOULD? It would give more
> flexibility for the driver. What do you think?

I think relaxing this to SHOULD is fine.  I can't think of any bad side
effects of that.   The driver obviously can't use the buffer completion
as period notification then.  But that is the driver's choice and it
wouldn't break the virtio protocol ...

cheers,
  Gerd





Re: [virtio-dev] Re: [PATCH v2 1/1] virtio-video: Add virtio video device specification

2020-01-09 Thread Gerd Hoffmann
  Hi,

> Regarding re-using, the driver can simply re-queue buffers returned by
> the device. If the device needs a buffer as reference frame, it must
> not return the buffer.

Ok, that'll work.

> I'll describe this rule in the next version of the patch.

Good.  You should also add a note about ordering.  If the device returns
the buffers as soon as they are no longer needed for decoding they
might be completed out-of-order, specifically B-Frames might complete
before the reference I/P frame.  Is the device allowed to do that or
should it complete the buffers in playback order?

cheers,
  Gerd





Re: [virtio-dev][RFC PATCH v1 2/2] virtio-gpu: add the ability to export resources

2020-01-09 Thread Gerd Hoffmann
  Hi,

> At that point, I think it's just a matter of aesthetics. I lean
> slightly towards returning the uuid from the host, since that rules
> out any implementation with the aforementioned race.

Ok, design the API in a way that you can't get it wrong.  Makes sense.
I'd still name it resource_assign_uuid though.

cheers,
  Gerd





Re: [virtio-dev][RFC PATCH v1 1/2] content: define what exporting a resource is

2020-01-09 Thread Gerd Hoffmann
  Hi,

> that isn't just a leaf node of the spec. I think it's better to define
> 'resource' as a top level concept for virtio devices, even if the specifics
> of what a 'resource' is are defined by individual device types.

Your patch doesn't define what a resource is though.  It only refers to
something it calls 'resource' ...

cheers,
  Gerd





Re: [virtio-dev][RFC PATCH v1 2/2] virtio-gpu: add the ability to export resources

2020-01-08 Thread Gerd Hoffmann
> +\begin{lstlisting}
> +struct virtio_gpu_export_resource {
> +struct virtio_gpu_ctrl_hdr hdr;
> +le32 resource_id;
> +le32 padding;
> +};
> +
> +struct virtio_gpu_resp_export_resource {
> +struct virtio_gpu_ctrl_hdr hdr;
> +le64 uuid_low;
> +le64 uuid_high;
> +};
> +\end{lstlisting}

Is there a specific reason why you want the host to pick the uuid?  I
would let the guest define the uuid, i.e. move the uuid fields to
virtio_gpu_export_resource and scratch virtio_gpu_resp_export_resource.

Also I'd suggest naming the command (and struct) RESOURCE_ASSIGN_UUID.

cheers,
  Gerd





Re: [virtio-dev][RFC PATCH v1 1/2] content: define what exporting a resource is

2020-01-08 Thread Gerd Hoffmann
On Wed, Jan 08, 2020 at 06:01:58PM +0900, David Stevens wrote:
> Define a mechanism for sharing resources between different virtio
> devices.
> 
> Signed-off-by: David Stevens 
> ---
>  content.tex | 18 ++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/content.tex b/content.tex
> index b1ea9b9..73bd28e 100644
> --- a/content.tex
> +++ b/content.tex
> @@ -373,6 +373,24 @@ \section{Driver Notifications}
> \label{sec:Virtqueues / Driver notifications}
> 
>  \input{shared-mem.tex}
> 
> +\section{Exporting Resources}\label{sec:Basic Facilities of a Virtio
> Device / Exporting Resources}
> +
> +When a resource created by one virtio device needs to be
> +shared with a separate virtio device, the first device can
> +export the resource by generating a \field{uuid} which the
> +guest can pass to the second device to identify the resource.
> +
> +What constitutes a resource, how to export resources, and
> +how to import resources are defined by the individual device
> +types. The generation method of a \field{uuid} is dependent
> +upon the implementation of the exporting device.
> +
> +Whether a particular exported resource can be imported into
> +a device is dependent upon the implementations of the exporting
> +and importing devices. Generally speaking, the guest should
> +have some knowledge of the host configuration before trying to
> +use exported resources.

Hmm, I'd suggest to move the whole thing into the virtio-gpu section.
There is no such thing as a "resource" in general virtio context ...

cheers,
  Gerd





[virtio-dev] Re: [PATCH v4] virtio-snd: add virtio sound device specification

2020-01-08 Thread Gerd Hoffmann
On Tue, Jan 07, 2020 at 01:50:49PM +0100, Anton Yakovlev wrote:
> This patch proposes virtio specification for a new virtio sound device,
> that may be useful in case when having audio is required but a device
> passthrough or emulation is not an option.

Looks pretty good overall.  Some small nits:

> +/* a common header */
> +struct virtio_snd_hdr {
> +le32 code;
> +};
> +
> +/* an event notification */
> +struct virtio_snd_event {
> +le32 type;

Use "struct virtio_snd_hdr hdr" instead of "type", for consistency with
the other structs?

> +\begin{description}
> +\item[\field{code}] (device-readable) specifies a device request type
> +(VIRTIO_SND_R_*).
> +\end{description}
> +
> +A response part has, or consists of, a common header containing the following
> +field:
> +
> +\begin{description}
> +\item[\field{code}] (device-writable) specifies a device request status
> +(VIRTIO_SND_S_*).
> +\end{description}

Repeating "device-readable" or "device-writable" for each struct field
looks a bit odd because this applies to the whole struct.  Not so much
for these structs with a single field only, but there are structs with
more fields further down the spec ...

> +\begin{description}
> +\item[\field{type}] (device-writable) specifies an event type 
> (VIRTIO_SND_EVT_*).
> +\item[\field{data}] (device-writable) specifies an optional event data.
> +\end{description}

... here for example.

> +/* a response containing PCM stream capabilities */
> +struct virtio_snd_pcm_caps {
> +struct virtio_snd_hdr hdr; /* VIRTIO_SND_S_XXX */
> +u8 directions; /* 1 << VIRTIO_SND_PCM_T_XXX */
> +u8 channels_min;
> +u8 channels_max;
> +u8 features; /* 1 << VIRTIO_SND_PCM_F_XXX */
> +le64 formats; /* 1 << VIRTIO_SND_PCM_FMT_XXX */
> +le32 rates; /* 1 << VIRTIO_SND_PCM_RATE_XXX */

Use le64 for rates too, just in case?

For features a few more bits wouldn't hurt either; we already have 5
defined, so having only 8 bits total available isn't much room for
possible extensions ...

> +\paragraph{VIRTIO_SND_R_PCM_PAUSE}
> +\paragraph{VIRTIO_SND_R_PCM_UNPAUSE}

Still here.

So there are good reasons to keep them instead of just using stop/start?

cheers,
  Gerd





[virtio-dev] Re: [PATCH v2 1/1] virtio-video: Add virtio video device specification

2020-01-06 Thread Gerd Hoffmann
  Hi,

> How should one deal with multiplanar formats? Do we create one resource per 
> plane? Otherwise we need a way to send mem entries for each plane in one 
> request.

DRM uses arrays of handles and offsets (see struct drm_framebuffer).  A
handle references a gem object (roughly the same as a resource), and the
offset specifies the start of the plane within the gem object.  That
allows both a single gem object with planes stored at different offsets
and one gem object per plane.  virtio-video could do the same, or pick
one of the two approaches and support only that.

cheers,
  Gerd





Re: [virtio-dev] Re: [PATCH v2 1/1] virtio-video: Add virtio video device specification

2020-01-06 Thread Gerd Hoffmann
  Hi,

> > We also see the need to add a max_streams value to this structure so as to
> > explicitly provide a limit on the number of streams the guest can create.
> 
> What would be the advantage over just trying to create one and
> failing? The maximum number would be only meaningful for the special
> case when the streams are always only created by one user space
> process. Otherwise, if several processes do that, they are not aware
> of each other and the number could be higher than they can actually
> create, because other processes could have some streams created
> already.

Also the number of streams might not be fixed but depend on stream
parameters, e.g. hardware can decode one hd stream or two sd streams ...

cheers,
  Gerd





Re: [virtio-dev] Re: [PATCH v3] virtio-snd: add virtio sound device specification

2019-12-20 Thread Gerd Hoffmann
  Hi,

> > Period notification would be implicit (playback buffer completion) or
> > explicit event queue message?
> 
> Good question. I think for message-based transport they would be implicit,
> since we will require to enqueue period_size length buffers. The only
> exception here will be the end of the stream for playback.

Why is the end-of-stream different?

> As for shared memory, I think that they could work as an optional notification
> (as suggested earlier), and the driver should enable them using the set params
> request (setting the feature bit).

Sounds good.

> > Yes, a simple enum would do the trick I think.
> 
> And just reuse existing values from the HDA spec, right?

Looks useful, simplifies things.

> > Not sure if we want to add more properties.  hda codecs can also specify
> > the color coding of physical jacks (line-in is red, ...) for example.
> > Probably not useful for most use cases (i.e. when sound data lands @
> > pulseaudio where the user can re-route things as he wants), but it might
> > be useful when a fixed physical host output is assigned to the guest.
> 
> Well, if we are gonna introduce jacks colors, then we should introduce a jack
> entity itself. Like, define some kind of jack descriptor, enumerate all
> available jacks for the stream, putting into the descriptor its color and
> other properties, maybe allow jack hotplug/unplug, etc.

We also need routing then (enable/disable jacks per stream), and link
detection (is something plugged into the jack?) could be useful too.

Should we make that an optional extension (and possibly hash out these
details later)?  If you assign host machine jacks to your guest all this
is useful.  If your guest's sound is routed to pulseaudio for playback,
not so much.

Note: I'll be offline for xmas and new year until january.

cheers,
  Gerd





[virtio-dev] Re: [PATCH v3] virtio-snd: add virtio sound device specification

2019-12-19 Thread Gerd Hoffmann
  Hi,

> > qemu has rather small backend buffers, to keep latencies low.
> > So, yes it would copy data to backend buffers.  No, it would most likely
> > not copy over everything immediately.  It will most likely leave buffers
> > in the virtqueue, reading the data piecewise in the audio backend timer
> > callback, complete the virtio request when it has read all data.  So
> > there wouldn't be a huge gap between completing the request and feeding
> > the data to the host hardware.
> 
> Yes, there is a gap, and this causes the following two concerns:
> 
> 1. An application can rewind its pointer and rewrite part of this buffer. The
> worst case is if the application rewrites the part currently being read by the
> device.

Depends on the driver implementation.  If the driver copies the data
over to a buffer not visible to the userspace application, this can't
happen.  If the driver allows userspace to mmap the buffer, then queues
virtio requests with pointers to those buffers, then yes, the
application could modify buffers which are sitting in the queue waiting
to be processed by the device.

> > While talking about latencies:  What happened to the idea to signal
> > additional device latencies (backend buffering and processing ...) to
> > the driver?
> 
> Yeah, such an additional latency has a runtime nature, and we could report it
> as part of an I/O request completion. We only need to choose units (time or
> bytes).

I'd stick to bytes, for consistency with period and buffer sizes.

> Now we have only optional notifications (periods and xrun). But let's say,
> that the device MAY send the xrun notifications and MUST send the period
> notifications, and remove them from feature bits.
> 
> Then we define requirements for message-based transport:
> 1. the tx and rx queues are filled in with buffers having period_bytes size,
> 2. buffer_bytes % period_bytes == 0,
> 3. total buffers size in the queue is buffer_bytes.
> 
> It is still not clear, when to complete playback buffers, but at least such
> approach makes the driver being interrupt-driven.

Period notification would be implicit (playback buffer completion) or
explicit event queue message?

On playback buffer completion: adding a latency field to the completion
message could solve that nicely I think.  The latency field would specify
how long it will take until the device finishes playing that buffer.  So
in case the device completes the buffer when playback is finished it
would fill in "0" there.  In case the device completes the buffer after
copying over the data to the internal buffer it would fill in the
internal buffer size there (or maybe even the number of bytes which are
currently used in the internal buffer).

> > Completely different topic:  I think we should consider adding tags to
> > streams.  Some background:  In qemu we have two almost identical hda
> > codecs: hda-duplex and hda-micro.  The only difference is that
> > hda-duplex has the input stream tagged as "line-in" and the output
> > stream as "line-out", whereas hda-micro has the input channel tagged as
> > "microphone" and the output channel as "speaker".  hda-micro was created
> > because some windows application refused to accept "line-in" as audio
> > source.  So applications clearly do care, and in case you have two input
> > streams it would help applications pick a reasonable default.
> 
> Should it be a fixed-size field in the capabilities structure?

Yes, a simple enum would do the trick I think.

Not sure if we want to add more properties.  hda codecs can also specify
the color coding of physical jacks (line-in is red, ...) for example.
Probably not useful for most use cases (i.e. when sound data lands @
pulseaudio where the user can re-route things as he wants), but it might
be useful when a fixed physical host output is assigned to the guest.

cheers,
  Gerd





Re: [virtio-dev] Re: [PATCH v2 1/1] virtio-video: Add virtio video device specification

2019-12-19 Thread Gerd Hoffmann
  Hi,

> > However that still doesn't let the driver know which buffers will be
> > dequeued when. A simple example of this scenario is when the guest is
> > done displaying a frame and requeues the buffer back to the decoder.
> > Then the decoder will not choose it for decoding next frames into as
> > long as the frame in that buffer is still used as a reference frame,
> > even if one sends the drain request.
> It might be that I'm getting your point wrong, but do you mean some hardware 
> can mark a buffer as ready to be displayed yet still use the underlying 
> memory to decode other frames?

Yes, this is how I understand Tomasz Figa.

> This means, if you occasionally/intentionally 
> write to the buffer you mess up the whole decoding pipeline.

And to avoid this the buffer handling aspect must be clarified in the
specification.  Is the device allowed to continue using the buffer after
finishing decoding and completing the queue request?  If so, how do we
hand buffer ownership back to the driver so it can free the pages?  A
drain request?  How do we handle re-using buffers?  Can the driver
simply re-queue them and expect the device to figure out by itself
whether it can use the buffer or whether it is still needed as a
reference frame?

cheers,
  Gerd





Re: [virtio-dev] Re: [PATCH v2 1/1] virtio-video: Add virtio video device specification

2019-12-19 Thread Gerd Hoffmann
  Hi,

> > Not clearly defined in the spec:  When is the decoder supposed to send
> > the response for a queue request?  When it finished decoding (i.e. frame
> > is ready for playback), or when it doesn't need the buffer any more for
> > decoding (i.e. buffer can be re-queued or pages can be released)?
> In my eyes the both statements mean almost the same and both are valid.

Well, no.  When the device has decoded a P-frame it can notify the
driver, saying "here is your decoded frame".  But the device might still
need the buffer with the decoded frame to properly decode the following
B/I-frames which reference the P-frame.

cheers,
  Gerd





[virtio-dev] Re: [PATCH v2 1/1] virtio-video: Add virtio video device specification

2019-12-19 Thread Gerd Hoffmann
  Hi,

> > I also can't see why the flag is needed in the first place.  The driver
> > should know which buffers are still queued and be able to figure out
> > whether the drain is complete or not without depending on that flag.
> > So I'd suggest to simply drop it.
> This flag is not used for drain only. It marks the completion of whatever
> specific buffer sequence, like a full end-of-stream, resolution change, drain
> etc. We also need this to handle nested sequences. For instance, a resolution
> change event might happen while in drain.

Ah, ok.  That makes sense (please clarify this in the spec).

cheers,
  Gerd





Re: [virtio-dev] Re: [PATCH v2 1/1] virtio-video: Add virtio video device specification

2019-12-18 Thread Gerd Hoffmann
On Wed, Dec 18, 2019 at 11:08:37PM +0900, Tomasz Figa wrote:
> On Wed, Dec 18, 2019 at 10:40 PM Gerd Hoffmann  wrote:
> >
> >   Hi,
> >
> > > +The device MUST mark the last buffer with the
> > > +VIRTIO_VIDEO_BUFFER_F_EOS flag to denote completion of the drain
> > > +sequence.
> >
> > No, that would build a race condition into the protocol.  The device
> > could complete the last buffer after the driver has sent the drain
> > command but before the device saw it.  So the flag would not be
> > reliable.
> >
> > I also can't see why the flag is needed in the first place.  The driver
> > should know which buffers are still queued and be able to figure out
> > whether the drain is complete or not without depending on that flag.
> > So I'd suggest to simply drop it.
> 
> Unfortunately video decoders are not that simple. There are always
> going to be some buffers on the decoder side used as reference frames.
> Only the decoder knows when to release them, as it continues decoding
> the stream.

Not clearly defined in the spec:  When is the decoder supposed to send
the response for a queue request?  When it finished decoding (i.e. frame
is ready for playback), or when it doesn't need the buffer any more for
decoding (i.e. buffer can be re-queued or pages can be released)?

> How we defined this in the V4L2 stateful decoder interface is that if
> the decoder happened to return the last framebuffer before the drain
> request arrived, it would return one more, empty.

Ok.  That is not clear from the spec.  I've read the drain request as
"please stop decoding and complete all queue requests which are in the
virtqueue, even when you didn't process them yet".  In which case
completing the last pending queue request would clearly indicate the
draining request is complete.  Also completing the drain request should
only happen when the operation is finished.

I think the various states a buffer can have and how queuing & decoding
& draining change these states should be clarified in the
specification.

cheers,
  Gerd





[virtio-dev] Re: [PATCH v3] virtio-snd: add virtio sound device specification

2019-12-18 Thread Gerd Hoffmann
  Hi,

> > The driver can still split the data from the application into a set of
> > smaller virtio buffers.  And this is how I would write a driver.  Create
> > a bunch of buffers, period_bytes each, enough to cover buffer_bytes.
> > Then go submit them, re-use them round-robin (each time the device
> > completes a buffer re-fill and re-submit it), without a special case
> > with one big buffer for the (playback) stream start.
> 
> Yes, but what about device side? Most likely, it will copy the contents of the
> buffer to another location and immediately complete the request (otherwise it
> is not clear when to complete).

qemu has rather small backend buffers, to keep latencies low.
So, yes it would copy data to backend buffers.  No, it would most likely
not copy over everything immediately.  It will most likely leave buffers
in the virtqueue, reading the data piecewise in the audio backend timer
callback, complete the virtio request when it has read all data.  So
there wouldn't be a huge gap between completing the request and feeding
the data to the host hardware.

Another reason for this (beside keeping latencies low) is that it is a
good idea to avoid or at least limit guest-triggered allocations on the
host for security reasons.

While talking about latencies:  What happened to the idea to signal
additional device latencies (backend buffering and processing ...) to
the driver?

> In addition, the driver must advance the
> hardware position, and completing the request is the only possible place to do
> this. In most cases, all this will not coincide with the period notification.

So, period notification should be triggered when the device has actually
completed processing the data (i.e. host playback finished)?  I can see
that this can be useful.  I think qemu wouldn't be able to support that
with the current audio backend design, but that isn't a blocker given
this is an optional feature.  Also I think the spec should clarify
how period notification is supposed to work and how that relates to
buffer completion and latencies.

> This means that the notification itself is an optional feature that may or may
> not be enabled for message-based transport.

Which transports are expected to support that?
If multiple transports:  Can a device signal it supports these
notifications for one transport but not for another?

> > > I think, it's better not to set queued buffer size restrictions.
> > 
> > Lets make it SHOULD then.
> 
> Again, if periods are not used, then there's no period_bytes. And they are not
> used, for example, in most low-latency applications.
> 
> (Well, technically speaking, low-latency applications in Linux must configure
> period size for ALSA. But then they usually suppress period notifications and
> send very small chunks of data, which usually do not correspond to the size of
> the period at all.)

So you want to pass the data on to the device in whatever chunks the
userspace application hands them to the driver?  Ok, reasonable.

Should we define period_bytes as upper limit then, i.e. virtio buffers
should be period_bytes or smaller?

Completely different topic:  I think we should consider adding tags to
streams.  Some background:  In qemu we have two almost identical hda
codecs: hda-duplex and hda-micro.  The only difference is that
hda-duplex has the input stream tagged as "line-in" and the output
stream as "line-out", whereas hda-micro has the input channel tagged as
"microphone" and the output channel as "speaker".  hda-micro was created
because some windows application refused to accept "line-in" as audio
source.  So applications clearly do care, and in case you have two input
streams it would help applications pick a reasonable default.

cheers,
  Gerd





[virtio-dev] Re: [PATCH v2 1/1] virtio-video: Add virtio video device specification

2019-12-18 Thread Gerd Hoffmann
  Hi,

> +The device MUST mark the last buffer with the
> +VIRTIO_VIDEO_BUFFER_F_EOS flag to denote completion of the drain
> +sequence.

No, that would build a race condition into the protocol.  The device
could complete the last buffer after the driver has sent the drain
command but before the device saw it.  So the flag would not be
reliable.

I also can't see why the flag is needed in the first place.  The driver
should know which buffers are still queued and be able to figure out
whether the drain is complete or not without depending on that flag.
So I'd suggest to simply drop it.

That is the only issue I've spotted in the protocol on a first quick
look.  There are a few places where the spec text could be improved.
I'll try to set aside some time to go over this in detail, but I can't
promise I'll find the time to do that before xmas and new year.

cheers,
  Gerd





[virtio-dev] Re: [PATCH v3] virtio-snd: add virtio sound device specification

2019-12-18 Thread Gerd Hoffmann
> > > +/* supported PCM stream features */
> > > +enum {
> > > +VIRTIO_SND_PCM_F_HOST_MEM = 0,
> > > +VIRTIO_SND_PCM_F_GUEST_MEM,
> > 
> > Is this useful as stream property?  I would expect when supported by a
> > device it would work for all streams.
> 
> Since we allowed different capabilities for different streams, now it's
> possible to have different backends for them. I’m not sure how much this can
> be a real scenario, but it is theoretically possible to have one backend that
> is able to work with shared memory, and the other is not. The same can be said
> for other stream feature bits.

Hmm, ok.  Point.

Independent of that, there should either be a note that these two are
not (yet) specified and are just placeholders, or they should simply be
deleted and re-added with the actual specification for guestmem + hostmem.

> > > +VIRTIO_SND_PCM_F_EVT_PERIOD,
> > 
> > I think this is only useful for hostmem/guestmem.
> 
> Using message-based transport, an application can send the entire playback
> buffer at the beginning, and then send period-sized parts on each period
> notification.

The driver can still split the data from the application into a set of
smaller virtio buffers.  And this is how I would write a driver.  Create
a bunch of buffers, period_bytes each, enough to cover buffer_bytes.
Then go submit them, re-use them round-robin (each time the device
completes a buffer, re-fill and re-submit it), without a special case
of one big buffer for the (playback) stream start.

I still think this is not needed for message-based transport.

Maybe this is not needed for hostmem/guestmem either.  The point of
using shared memory is to avoid system calls and vmexits for buffer
management, so VIRTIO_SND_PCM_F_EVT_PERIOD doesn't look that useful
here too.  But maybe it is nice to have that as an option.

> > > +\subsubsection{PCM I/O Messages}\label{sec:Device Types / Sound Device / 
> > > Device Operation / PCM IO Messages}
> > > +
> > > +An I/O message consists of the header part, followed by the buffer, and 
> > > then
> > > +the status part.
> > 
> > Each buffer MUST (or SHOULD?) be period_bytes in size.
> > 
> > The driver SHOULD queue (buffer_bytes / period_bytes) virtio buffers, so
> > the device has buffer_bytes total buffer space available.
> 
> I think, it's better not to set queued buffer size restrictions.

Lets make it SHOULD then.

> Periods are an optional feature. If the driver is not interested in
> period notifications, it's unclear what period_bytes means here.

Well, the driver has to manage buffers, so I can't see how a driver is
not interested in completion notifications.  If the driver isn't
interested in keeping the latency low by using lots of small buffers it
still needs to queue at least two (larger) buffers, so the device still
has data to continue playback when it completed a buffer.  And the
driver needs to know when the device is done with a buffer so it can
either recycle the buffer or free the pages.

> Although, for the input stream we could say, that the total size of
> buffers in the rx queue should be buffer_bytes.

Yes.

cheers,
  Gerd





Re: [virtio-dev] [RFC RESEND] virtio-video: Add virtio video device specification

2019-12-17 Thread Gerd Hoffmann
On Tue, Dec 17, 2019 at 05:13:59PM +0100, Dmitry Sepp wrote:
> Hi,
> 
> On Dienstag, 17. Dezember 2019 15:09:16 CET Keiichi Watanabe wrote:
> > Hi,
> > 
> > Thanks Tomasz and Gerd for the suggestions and information.
> > 
> > On Tue, Dec 17, 2019 at 10:39 PM Gerd Hoffmann  wrote:
> > >   Hi,
> > >   
> > > > On the host side, the encode and decode APIs are different as well, so
> > > > having separate implementation decoder and encoder, possibly just
> > > > sharing some helper code, would make much more sense.
> > > 
> > > When going down that route I'd suggest to use two device ids (even when
> > > specifying both variants in one spec section and one header file due to
> > > the overlaps) instead of feature flags.
> > 
> > Sounds good. It makes sense to use different device IDs for different
> > devices.
> Does this mean one driver handles both? Or we have two separate drivers?

That is the driver writers choice.  You can have a single kernel module
registering as driver for both IDs.  Or you can have two kernel modules,
each registering for one of the IDs, possibly with a third library
module with helper code for common bits like buffer management.

cheers,
  Gerd





[virtio-dev] Re: [PATCH v3] virtio-snd: add virtio sound device specification

2019-12-17 Thread Gerd Hoffmann
  Hi,

> +\subsection{Feature bits}\label{sec:Device Types / Sound Device / Feature 
> bits}
> +
> +None currently defined.

Flags for hostmem & guestmem here?

This could be an information the driver might want to know before
initializing the virtqueues.

> +\item START
> +\item PAUSE
> +\item UNPAUSE
> +\item STOP

Why pause/unpause?

I don't see a compelling reason for this, given that the driver can restart
a stream (i.e. set-params -> prepare -> start -> stop -> start -> stop ->
release).  Would a device handle stop and pause in a different way?

> +/* supported PCM stream features */
> +enum {
> +VIRTIO_SND_PCM_F_HOST_MEM = 0,
> +VIRTIO_SND_PCM_F_GUEST_MEM,

Is this useful as stream property?  I would expect when supported by a
device it would work for all streams.

> +VIRTIO_SND_PCM_F_EVT_PERIOD,

I think this is only useful for hostmem/guestmem.

> +struct virtio_snd_pcm_set_params {
> +struct virtio_snd_pcm_hdr hdr; /* .code = VIRTIO_SND_R_PCM_SET_PARAMS */
> +u8 features;
> +u8 channels;
> +u8 format;
> +u8 rate;
> +le32 buffer_bytes;
> +le32 period_bytes;
> +};
> +\end{lstlisting}
> +
> +The request contains the following fields:
> +
> +\begin{description}
> +\item[\field{features}] (device-readable) is a selected feature bit map:
> +\begin{itemize*}
> +\item VIRTIO_SND_PCM_F_HOST_MEM: use shared memory allocated by the host 
> (TBD),
> +\item VIRTIO_SND_PCM_F_GUEST_MEM: use shared memory allocated by the guest 
> (TBD),

I'd suggest to drop those bits here.  For hostmem + guestmem we probably
need a virtio_snd_pcm_set_params_{hostmem,guestmem} struct (and command)
with additional parameters anyway.

> +\item VIRTIO_SND_PCM_F_EVT_PERIOD: enable elapsed period notifications,

Same comment as above ;)

> +\begin{itemize*}
> +\item VIRTIO_SND_EVT_PCM_PERIOD: send notifications about each elapsed 
> period,
> +the size of the period is specified in the \field{period_bytes} field.

Same comment as above ;)

> +\subsubsection{PCM I/O Messages}\label{sec:Device Types / Sound Device / 
> Device Operation / PCM IO Messages}
> +
> +An I/O message consists of the header part, followed by the buffer, and then
> +the status part.

Each buffer MUST (or SHOULD?) be period_bytes in size.

The driver SHOULD queue (buffer_bytes / period_bytes) virtio buffers, so
the device has buffer_bytes total buffer space available.

cheers,
  Gerd





Re: [virtio-dev] [RFC RESEND] virtio-video: Add virtio video device specification

2019-12-17 Thread Gerd Hoffmann
  Hi,

> On the host side, the encode and decode APIs are different as well, so
> having separate implementation decoder and encoder, possibly just
> sharing some helper code, would make much more sense.

When going down that route I'd suggest to use two device ids (even when
specifying both variants in one spec section and one header file due to
the overlaps) instead of feature flags.

> > I don't think using fourcc is a problem, and given that both drm and
> > v4l2 use fourcc already this would be a good choice I think.
> 
> Both DRM and V4L2 use two mutually incompatible sets of FourCCs, so
> I'm not sure how it could be a good choice. At least unless we decide
> to pick a specific set of FourCC. It doesn't help that Windows/DirectX
> has its own set of FourCCs that's again slightly different than the
> two mentioned before.

Ouch, wasn't aware of that.  That makes reusing fourcc codes much less
useful.

> > But the definition should be more specific than just "fourcc".  Best
> > would be to explicitly list and define each format supported by the
> > spec.
> 
> Why not be consistent with virtio-gpu and just define new formats as
> we add support for them as sequential integers?

Yes, lets do that.

cheers,
  Gerd





Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-12-16 Thread Gerd Hoffmann
  Hi,

> > Of course only virtio drivers would try step (2), other drivers (when
> > sharing buffers between intel gvt device and virtio-gpu for example)
> > would go straight to (3).
> 
> For virtio-gpu as it is today, it's not clear to me that they're
> equivalent. As I read it, the virtio-gpu spec makes a distinction
> between the guest memory and the host resource. If virtio-gpu is
> communicating with non-virtio devices, then obviously you'd just be
> working with guest memory. But if it's communicating with another
> virtio device, then there are potentially distinct guest and host
> buffers that could be used. The spec shouldn't leave any room for
> ambiguity as to how this distinction is handled.

Yep.  It should be the host side buffer.  The whole point is to avoid
the round trip through the guest after all.  Or does someone see a
useful use case for the guest buffer?  If so we might have to add some
way to explicitly specify whether we want the guest or host buffer.

cheers,
  Gerd





Re: [virtio-dev] [RFC RESEND] virtio-video: Add virtio video device specification

2019-12-16 Thread Gerd Hoffmann
  Hi,

> > Hmm, modern GPUs support both encoding and decoding ...
> 
> Many SoC architectures have completely separate IP blocks for encoding
> and decoding. Similarly, in GPUs those are usually completely separate
> parts of the pipeline.

In the OS there is one driver per GPU handling both ...

> > I don't think we should bake that restriction into the specification.
> > It probably makes sense to use one virtqueue per function though, that
> > will simplify dispatching in both host and guest.
> >
> 
> Wouldn't it make the handling easier if we had one virtio device per function?

I don't think there is much of a difference between one device per
function and one virtqueue per function.  You'll probably have a single
driver for both anyway, to share common code for buffer management etc.

> > Use separate pixel_format (fourcc) and stream_format (H.264 etc.) enums?
> 
> I'd specifically avoid FourCC here, as it's very loosely defined and
> could introduce confusion.

I don't think using fourcc is a problem, and given that both drm and
v4l2 use fourcc already this would be a good choice I think.

But the definition should be more specific than just "fourcc".  Best
would be to explicitly list and define each format supported by the
spec.

cheers,
  Gerd





Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-12-12 Thread Gerd Hoffmann
On Thu, Dec 12, 2019 at 09:26:32PM +0900, David Stevens wrote:
> > > > Second I think it is a bad idea
> > > > from the security point of view.  When explicitly exporting buffers it
> > > > is easy to restrict access to the actual exports.
> > >
> > > Restricting access to actual exports could perhaps help catch bugs.
> > > However, I don't think it provides any security guarantees, since the
> > > guest can always just export every buffer before using it.
> >
> > Probably not on the guest/host boundary.
> >
> > It's important for security inside the guest though.  You don't want
> > process A being able to access process B private resources via buffer
> > sharing support, by guessing implicit buffer identifiers.
> 
> At least for the linux guest implementation, I wouldn't think the
> uuids would be exposed from the kernel. To me, it seems like something
> that should be handled internally by the virtio drivers.

That would be one possible use case, yes.  The exporting driver attaches
a uuid to the dma-buf.  The importing driver can see the attached uuid
and use it (if supported, otherwise run with the scatter list).  That
will be transparent to userspace, apps will just export/import dma-bufs
as usual and not even notice the uuid.

I can see other valid use cases though:  A wayland proxy could use
virtio-gpu buffer exports for shared memory and send the buffer uuid
to the host over some stream protocol (vsock, tcp, ...).  For that to
work we have to export the uuid to userspace, for example using an ioctl
on the dma-buf file handle.

> If you use some other guest with untrusted
> userspace drivers, or if you're pulling the uuids out of the kernel to
> give to some non-virtio transport, then I can see it being a concern.

I strongly prefer a design where we don't have to worry about that
concern in the first place instead of discussing whether we should be
worried or not.

> > Without buffer sharing support the driver importing a virtio-gpu dma-buf
> > can send the buffer scatter list to the host.  So both virtio-gpu and
> > the other device would actually access the same guest pages, but they
> > are not aware that the buffer is shared between devices.
> 
> With the uuid approach, how should this case be handled? Should it be
> equivalent to exporting and importing the buffer which was created
> first? Should the spec say it's undefined behavior that might work as
> expected but might not, depending on the device implementation? Does
> the spec even need to say anything about it?

Using the uuid is an optional optimization.  I'd expect the workflow to
be roughly this:

  (1) exporting driver exports a dma-buf as usual, additionally attaches
  a uuid to it and notifies the host (using device-specific commands).
  (2) importing driver will ask the host to use the buffer referenced by
  the given uuid.
  (3) if (2) fails for some reason use the dma-buf scatter list instead.

Of course only virtio drivers would try step (2), other drivers (when
sharing buffers between intel gvt device and virtio-gpu for example)
would go straight to (3).

> > With buffer sharing virtio-gpu would attach a uuid to the dma-buf, and
> > the importing driver can send the uuid (instead of the scatter list) to
> > the host.  So the device can simply lookup the buffer on the host side
> > and use it directly.  Another advantage is that this enables some more
> > use cases like sharing buffers between devices which are not backed by
> > guest ram.
> 
> Not just buffers not backed by guest ram, but things like fences. I
> would suggest the uuids represent 'exported resources' rather than
> 'exported buffers'.

Hmm, I can't see how this is useful.  Care to outline how you envision
this to work in a typical use case?

cheers,
  Gerd





Re: [virtio-dev] Re: guest / host buffer sharing ...

2019-12-12 Thread Gerd Hoffmann
  Hi,

> > First the addressing is non-trivial, especially with the "transport
> > specific device address" in the tuple.
> 
> There is complexity here, but I think it would also be present in the
> buffer sharing device case. With a buffer sharing device, the same
> identifying information would need to be provided from the exporting
> driver to the buffer sharing driver, so the buffer sharing device
> would be able to identify the right device in the vmm.

No.  The idea is that the buffer sharing device will allocate and manage
the buffers (including identifiers), i.e. it will only export buffers,
never import.

> > Second I think it is a bad idea
> > from the security point of view.  When explicitly exporting buffers it
> > is easy to restrict access to the actual exports.
> 
> Restricting access to actual exports could perhaps help catch bugs.
> However, I don't think it provides any security guarantees, since the
> guest can always just export every buffer before using it.

Probably not on the guest/host boundary.

It's important for security inside the guest though.  You don't want
process A being able to access process B private resources via buffer
sharing support, by guessing implicit buffer identifiers.

With explicit buffer exports that opportunity doesn't exist in the first
place.  Anything not exported can't be accessed via buffer sharing,
period.  And to access the exported buffers you need to know the uuid,
which in turn allows the guest implement any access restrictions it
wants.

> > Instead of using a dedicated buffer sharing device we can also use
> > virtio-gpu (or any other driver which supports dma-buf exports) to
> > manage buffers.
> 
> I don't think adding generic buffer management to virtio-gpu (or any
> specific device type) is a good idea,

There isn't much to add btw.  virtio-gpu has buffer management, buffers
are called "resources" in virtio-gpu terminology.  You can already
export them as dma-bufs (just landed in 5.5-rc1) and import them into
other drivers.

Without buffer sharing support the driver importing a virtio-gpu dma-buf
can send the buffer scatter list to the host.  So both virtio-gpu and
the other device would actually access the same guest pages, but they
are not aware that the buffer is shared between devices.

With buffer sharing virtio-gpu would attach a uuid to the dma-buf, and
the importing driver can send the uuid (instead of the scatter list) to
the host.  So the device can simply lookup the buffer on the host side
and use it directly.  Another advantage is that this enables some more
use cases like sharing buffers between devices which are not backed by
guest ram.

> since that device would then
> become a requirement for buffer sharing between unrelated devices.

No.  When we drop the buffer sharing device idea (which is quite
likely), then any device can create buffers.  If virtio-gpu is involved
anyway, for example because you want to show the images from the
virtio-camera device on the virtio-gpu display, it makes sense to use
virtio-gpu of course.  But any other device can create and export
buffers in a similar way.  Without a buffer sharing device there is no
central instance managing the buffers.  A virtio-video spec (video
encoder/decoder) is in discussion at the moment, it will probably get
resource management simliar to virtio-gpu for the video frames, and it
will be able to export/import those buffers (probably not in the first
revision, but it is on the radar).

> > With no central instance (buffer sharing device) being there managing
> > the buffer identifiers I think using uuids as identifiers would be a
> > good idea, to avoid clashes.  Also good for security because it's pretty
> > much impossible to guess buffer identifiers then.
> 
> Using uuids to identify buffers would work. The fact that it provides
> a single way to refer to both guest and host allocated buffers is
> nice. And it could also directly apply to sharing resources other than
> buffers (e.g. fences). Although unless we're positing that there are
> different levels of trust within the guest, I don't think uuids really
> provides much security.

Well, security-wise you want to have buffer identifiers which can't be
easily guessed.  And guessing a uuid is pretty much impossible due to
the namespace being huge.

> If we're talking about uuids, they could also be used to simplify my
> proposed implicit addressing scheme. Each device could be assigned a
> uuid, which would simplify the shared resource identifier to
> (device-uuid, shmid, offset).

See above for the security aspects of implicit vs. explicit buffer
identifiers.

cheers,
  Gerd





[virtio-dev] Re: guest / host buffer sharing ...

2019-12-11 Thread Gerd Hoffmann
  Hi,

> None of the proposals directly address the use case of sharing host
> allocated buffers between devices, but I think they can be extended to
> support it. Host buffers can be identified by the following tuple:
> (transport type enum, transport specific device address, shmid,
> offset). I think this is sufficient even for host-allocated buffers
> that aren't visible to the guest (e.g. protected memory, vram), since
> they can still be given address space in some shared memory region,
> even if those addresses are actually inaccessible to the guest. At
> this point, the host buffer identifier can simply be passed in place
> of the guest ram scatterlist with either proposed buffer sharing
> mechanism.

> I think the main question here is whether or not the complexity of
> generic buffers and a buffer sharing device is worth it compared to
> the more implicit definition of buffers.

There are two issues mixed up here.  First, whether we'll define a
buffer sharing device or not.  Second, how we are going to address
buffers.

I think defining (and addressing) buffers implicitly is a bad idea.
First the addressing is non-trivial, especially with the "transport
specific device address" in the tuple.  Second I think it is a bad idea
from the security point of view.  When explicitly exporting buffers it
is easy to restrict access to the actual exports.

Instead of using a dedicated buffer sharing device we can also use
virtio-gpu (or any other driver which supports dma-buf exports) to
manage buffers.  virtio-gpu would create an identifier when exporting a
buffer (dma-buf exports inside the guest), attach the identifier to the
dma-buf so other drivers importing the buffer can see and use it.  Maybe
add an ioctl to query, so guest userspace can query the identifier too.
Also send the identifier to the host so it can also be used on the host
side to lookup and access the buffer.

With no central instance (buffer sharing device) being there managing
the buffer identifiers I think using uuids as identifiers would be a
good idea, to avoid clashes.  Also good for security because it's pretty
much impossible to guess buffer identifiers then.

cheers,
  Gerd





Re: [virtio-dev] Re: [PATCH] virtio-gpu: add shared resource feature

2019-12-10 Thread Gerd Hoffmann
  Hi,

> > They are never taken away from guest ram.  The guest is free to do
> > whatever it wants with the pages.  It's the job of the guest driver to
> > make sure the pages are not released until the host stopped using them.
> > So the host must drop all references before completing the
> > DETACH_BACKING command.
> 
> Let's write that in the spec.

Yes, saying that explicitly totally makes sense.
I'll update the patch accordingly.

> We (you?) need to modify drm_gem_shmem_* helpers to always be cached
> for virtgpu for now, since the TTM removal changed that behavior.

We can just roll our own mmap function to handle that.
I'll prepare a patch (that is independent from F_SHARED anyway).

> Then, we should expose host properties to the guest, like [1] suggests
> (maybe something like VIRTIO_GPU_F_GUEST_ATTRIBUTE_OVERRIDE,
> VIRTIO_GPU_F_GUEST_CACHE_OPS).  If there's already a method to do
> this, that would be awesome.

We'll need that for sure when mapping host resources into the guest, but
we are not there yet with F_SHARED only.

> > Also: what is special in virtualization here?  There are arm drivers
> > using drm_gem_shmem_* helpers.  How do they manage this?
> 
> Panfrost, v3d.  I'm surprised they haven't hit this issue yet.

Maybe because all mappings are consistent.  With virtio-gpu you could
end up in a situation where the guest uses wc whereas the host uses
cached and things go boom from there.

cheers,
  Gerd





Re: [virtio-dev] Re: [PATCH] virtio-gpu: add shared resource feature

2019-12-09 Thread Gerd Hoffmann
> > > 1) Driver sends RESOURCE_CREATE_2D shared request
> > > 2) Driver sends ATTACH_BACKING request
> > > 3) Device creates a shared resource
> > > 4) Driver sends SET_SCANOUT request
> > > 5) Device sends shared resource to display
> > > 6) Driver sends DETACH_BACKING request
> >
> > Hmm, I think you can crash the current qemu code that way (didn't test
> > though).  I think the proper solution would be to return -EBUSY on the
> > DETACH_BACKING request.  Guess I'll add a new error code for that.
> 
> That would add the extra complication of a "delayed detach list" in
> the guest kernel.

Why?  I can't see how that can happen with the linux guest driver.  You
need a drm framebuffer to map a gem object to a scanout.  The drm
framebuffer holds a reference to the gem object, so the linux kernel
wouldn't try to delete the gem object (-> DETACH_BACKING + UNREF) while
it is mapped to the scanout, because the refcount wouldn't go down to
zero.

> I don't know if it's absolutely necessary --
> because the open source implementation (udmabuf) seems to take a
> reference on all constituent pages like DRM gem does [a][b][c][d].  We
> should figure out if the extra reference is sufficient to prevent the
> pages from going back to guest RAM (the guest kernel takes references
> on those pages too -- I don't know if guest references are treated
> differently).

They are never taken away from guest ram.  The guest is free to do
whatever it wants with the pages.  It's the job of the guest driver to
make sure the pages are not released until the host stopped using them.
So the host must drop all references before completing the
DETACH_BACKING command.

> > I think we can do 90% of that optimization without changing the
> > protocol.  Display updates typically involve multiple commands.  A
> > transfer, possibly a set-scanout (when page-flipping), finally flush.
> > The driver can first queue all commands, then notify the host, so we
> > will have only one vmexit per display update instead of one vmexit per
> > command.
> 
> We also want to prevent this scenario:
> 
> 1) Guest creates a writecombine mapping (like drm_gem_shmem_* helpers
> do), but the pages were previously accessed and thus some data is in
> the cache.
> 2) Cache eviction
> 3) Non-zero data in buffer.

This can happen on arm I guess?  I know caching is more complicated on
arm than on x86.  Care to explain this (or do you have a good link?).

Also: what is special in virtualization here?  There are arm drivers
using drm_gem_shmem_* helpers.  How do they manage this?

> > Is there a good reason why the driver should be aware of that?
> 
> Technically TRANSFER_TO_HOST is legal for 2D resources (though rarely
> used).

It's used for dumb gem objects.

> I suppose we can modify RESOURCE_INFO to inform userspace the
> resource is shared, and thus avoid hypercalls.
> 
> If we leave the decision to create the shared resource to the driver
> and not the device, *_2D_SHARED makes sense.  Right now, it's
> *_2D_MAYBE_SHARED.

Yes, the guest can't force it.

cheers,
  Gerd





Re: [virtio-dev] [RFC RESEND] virtio-video: Add virtio video device specification

2019-12-09 Thread Gerd Hoffmann
  Hi,

> > For (1) you'll simply do a QUEUE_BUFFER.  The command carries references
> > to the buffer pages.  No resource management needed.
> >
> > For (2) you'll have RESOURCE_CREATE + RESOURCE_DESTROY + QUEUE_RESOURCE,
> > where RESOURCE_CREATE passes the scatter list of buffer pages to the
> > host and QUEUE_RESOURCE will carry just the resource id.
> >
> > For (3) you'll have RESOURCE_IMPORT + RESOURCE_DESTROY + QUEUE_RESOURCE.
> >
> 
> Thanks for the clarification.
> On second thought, however, I'm wondering if we should keep all
> RESOURCE controls.
> This should be an option (4).

Well, that's actually (2), aka "we use RESOURCE_* commands to manage
resources".  With the commands in the latest draft that would be:

  (1) RESOURCE_CREATE
  (2) RESOURCE_ATTACH_BACKING
  (3) RESOURCE_QUEUE (resource can be used multiple times here)
  (4) RESOURCE_DETACH_BACKING
  (5) RESOURCE_DESTROY

I'm not sure we need the separate *_BACKING commands.  I think we could
also have RESOURCE_CREATE pass the buffer pages scatter list instead.

Why is it done this way?  Just because the design was copied from
virtio-gpu?  Or is there some specific reason?

cheers,
  Gerd





Re: [virtio-dev] [PATCH] snd: Add virtio sound device specification

2019-12-06 Thread Gerd Hoffmann
  Hi,

> > I don't think there is a use case which guestmem can handle but hostmem
> > can not.
> 
> There may be a special security requirement like "a guest must not have any
> access to a host memory" (especially in case of type 1 hv).

Hmm, ok.  Point.

> Also, sharing memory implies sharing of two things: buffer itself and a page
> containing position counter. And it goes in both directions (a host shares
> buffer and hardware position or a guest shares buffer and application
> position). And it seems, that overall solution can benefit if there will be
> four independent entities. For example, message-based transport can have
> some benefits, if a host will provide its hardware pointer in a shared page.

Hmm, I'm not sure mixing things is a good idea.  When using shared
memory I would use it for both data and pointer.

> > > > So, it's not a "here is a buffer, fill it please", "here is the next,
> > > > ..." ping pong game between driver and device.  There is a queue with
> > > > multiple buffers instead, and the device fills them one by one.
> > > 
> > > Then, should we make this pattern to be mandatory?
> > 
> > It's pretty standard for virtio.  The network receive queue works
> > fundamentally the same way for example, except that network buffers
> > actually might be half-filled only because you get network packets
> > where you don't know the size beforehand instead of a constant stream
> > of sound samples which you can easily pack into buffers as you like.
> 
> Except the end of the capture stream case. Can we use the len field in
> virtq_used_elem for storing actual captured size then?

Yes.

> > Note that it depends on the use case whenever long latencies are
> > actually worse.  Some use cases don't care much (music playback).  For
> > some use cases it is good enough to know the exact latency so you can
> > take it into account for synchronization (video playback).  Some
> > usecases are pretty much impossible with long latencies (games, voip).
> 
> By the way, there is a good point here. Maybe it makes sense for the device
> somehow reports its current/additionally latency value? Like, as a part of a
> response to the set_params request?

You mean latencies in addition to the ones implied by the buffer size?
Yes, that sounds useful.

> > > There's always a trade-off and an implementer might choose
> > > whatever fits better.
> > 
> > Sure.  But you choose once, when configuring the stream, not for each
> > individual buffer, right?  So the driver will:
> > 
> >(1) set-params (including period_bytes), taking into account what
> >userspace asked for.
> >(2) queue buffers for the stream, each being period_bytes in size.
> >(3) start stream.
> 
> Originally, we introduced period_bytes for enabling period notification
> feature (which is optional). At least, it was not supposed to be used for
> slicing a buffer into pieces (although, the driver may chose to work in this
> way).

Ah, I've missed the detail that you've decoupled buffers and period
notification.  Guess we talked past each other because of that.  So I'm
trying to outline things again, just to be sure we are on the same page:

The driver picks the total buffer size (say 16k for example) and the
period size (say 2k).  The driver allocates 8 virtio buffers of 2k each.
Allocating one big buffer and slicing it works too, of course.  Then the
driver queues these 8 virtio buffers on the virtqueue.  The device
returns the filled virtio buffers to the driver.  Then you don't need a
separate period notification in the first place; you can simply use the
buffer completion notifications for that.

cheers,
  Gerd





Re: [virtio-dev] [RFC RESEND] virtio-video: Add virtio video device specification

2019-12-05 Thread Gerd Hoffmann
  Hi,

> > > > > +\subsection{Device ID}\label{sec:Device Types / Video Device / 
> > > > > Device ID}
> > > > > +
> > > > > +TBD.
> > > >
> > > > I'm wondering how and when we can determine and reserve this ID?
> > >
> > > Grab the next free, update the spec accordingly, submit the one-line
> > > patch.
> 
> Thanks. I will do so at the next version of the patch.

No.  Submit it as a separate one-liner patch which does nothing but grab
an ID.

> > > > I'm wondering if we can use FOURCC instead. So, we can avoid 
> > > > reinventing a
> > > > mapping from formats to integers.
> > > > Also, I suppose the word "pixel formats" means only raw (decoded) 
> > > > formats.
> > > > But, it can be encoded format like H.264. So, I guess "image format" or
> > > > "fourcc" is a better word choice.
> > >
> > > Use separate pixel_format (fourcc) and stream_format (H.264 etc.) enums?
> > >
> 
> Fourcc is used for both raw and coded formats.
> I'm not sure if it makes much sense to define different enums for raw
> and coded formats, as
> we reuse any other structs for both types of formats.
> 
> What I'd suggest is like this:
> 
> #define FOURCC(a,b,c,d) (a | (b << 8) | (c << 16) | (d << 24))
> 
> enum virtio_video_fourcc {
> VIRTIO_VIDEO_FOURCC_UNDEFINED = 0,
> 
> /* Coded formats */
> VIRTIO_VIDEO_FOURCC_H264 = FOURCC('H', '2', '6', '4'),
> ...
> 
> /* Raw formats */
> VIRTIO_VIDEO_FOURCC_NV12 = FOURCC('N', 'V', '1', '2'),
> ...
> }

Ok, that'll work.

I've linked fourcc to drm fourcc codes in my head, and drm hasn't codes
for the compressed formats.

When defining things this way we should of course make sure that the raw
format codes are identical to the ones drm uses.

Is there a formal standard for these codes btw?

> > As an interim solution, this form of "manual resource backing-store
> > management" could be specified as a feature flag.
> > Once there is an established solution for buffer sharing, we would
> > then go ahead and introduce a new feature flag for "works with the
> > buffer sharing mechanism", as an alternative to this manual mechanism.
> >
> > wdyt?
> 
> It'd be a good idea to change buffer management method by a feature flag.

I don't think so.  A device might want to support multiple kinds of
buffer management, most notably both its own buffers and imported
buffers.  Indicating which methods are available can be done with
feature flags, but actually picking one cannot.

> > > Well.  For buffer management there are a bunch of options.
> > >
> > >  (1) Simply stick the buffers (well, pointers to the buffer pages) into
> > >  the virtqueue.  This is the standard virtio way.
> > >
> > >  (2) Create resources, then put the resource ids into the virtqueue.
> > >  virtio-gpu uses that model.  First, because virtio-gpu needs an id
> > >  to reference resources in the rendering command stream
> > >  (virtio-video doesn't need this).  Also because (some kinds of)
> > >  resources are around for a long time and the guest-physical ->
> > >  host-virtual mapping needs to be done only once that way (which
> > >  I think would be the case for virtio-video too because v4l2
> > >  re-uses buffers in round-robin fashion).  Drawback is this
> > >  assumes shared memory between host and guest (which is the case
> > >  in typical use cases but it is not mandated by the virtio spec).
> > >
> > >  (3) Import external resources (from virtio-gpu for example).
> > >  Out of scope for now, will probably added as optional feature
> > >  later.
> > >
> > > I guess long-term we want support either (1)+(3) or (2)+(3).
> > >
> 
> In the first version of spec, we might want to support the minimal workable 
> set
> of controls. Then, we'll be able to add additional controls with a new feature
> flag as Enrico suggested.
> 
> So, the problem is what's the simplest scenario and which types of controls 
> are
> required there. I guess it's enough for (1) and (2) if we have 
> T_RESOURCE_CREATE
> and T_RESOURCE_DESTROY.

For (1) you'll simply do a QUEUE_BUFFER.  The command carries references
to the buffer pages.  No resource management needed.

For (2) you'll have RESOURCE_CREATE + RESOURCE_DESTROY + QUEUE_RESOURCE,
where RESOURCE_CREATE passes the scatter list of buffer pages to the
host and QUEUE_RESOURCE will carry just the resource id.

For (3) you'll have RESOURCE_IMPORT + RESOURCE_DESTROY + QUEUE_RESOURCE.

cheers,
  Gerd





Re: [virtio-dev] Re: [PATCH] virtio-gpu: add shared resource feature

2019-12-04 Thread Gerd Hoffmann
  Hi,

> If the following scenario happens:
> 
> 1) Driver sends RESOURCE_CREATE_2D shared request
> 2) Driver sends ATTACH_BACKING request
> 3) Device creates a shared resource
> 4) Driver sends SET_SCANOUT request
> 5) Device sends shared resource to display
> 6) Driver sends DETACH_BACKING request

Hmm, I think you can crash the current qemu code that way (didn't test
though).  I think the proper solution would be to return -EBUSY on the
DETACH_BACKING request.  Guess I'll add a new error code for that.

> > +Otherwise the shared resources are used like normal resources.
> > +Especially the driver must send explicit VIRTIO_GPU_CMD_TRANSFER_*
> > +commands to the device for both normal and shared resources.  Reason:
> > +The device might have to flush caches.
> 
> Can we make this spec stronger to avoid to avoid transfers all the
> time (i.e, in virtio_gpu_update_dumb_bo)?
> 
> drm_gem_shmem_* helpers seem to use WC buffers, and dumb buffers are
> traditionally WC as well.  If we mandate the host ("device?" here)
> must properly clean caches at creation time, it may be possible to
> avoid hypercalls for 2D_SHARED resources.

I think we can do 90% of that optimization without changing the
protocol.  Display updates typically involve multiple commands.  A
transfer, possibly a set-scanout (when page-flipping), finally flush.
The driver can first queue all commands, then notify the host, so we
will have only one vmexit per display update instead of one vmexit per
command.

> > The device MAY also choose to
> > +not create mapping for the shared resource.  Especially for small
> > +resources it might be more efficient to just copy the data instead of
> > +establishing a shared mapping.
> 
> Should the device send a response code  (OK_SHARED} to inform the
> driver that the resource is shared?

Is there a good reason why the driver should be aware of that?

I'd prefer to make that an internal implementation detail of
the device and not inform the guest.

cheers,
  Gerd





Re: [virtio-dev] [PATCH] snd: Add virtio sound device specification

2019-12-04 Thread Gerd Hoffmann
  Hi,

> > > For example, in case
> > > of GUEST_MEM the request could be followed by a buffer sg-list.
> > 
> > I'm not convinced guest_mem is that useful.  host_mem allows giving the
> > guest access to the buffers used by the host's sound hardware, which is
> > probably what you need if the MSG transport can't handle the latency
> > requirements you have.
> 
> Actually, it might be pretty useful.
> 
> If a device is not capable of sharing host memory with a guest but still
> capable of using guest shared memory, then there's a good use case for that:

Note that "hostmem" only means that the buffer is allocated by the host
(i.e. the device).  Whether the allocation is backed by actual host
sound buffers or just normal ram depends on the device implementation
and maybe the host sound hardware capabilities.  Qemu would probably use
normal ram due to the extra indirection caused by the sound backend
interface.

I don't think there is a use case which guestmem can handle but hostmem
can not.

> when a buffer is mapped into a user space application. It assumes, that the
> driver is not involved in frame transfer at all (thus, it can not queue
> buffers into a virtqueue and send notifications to the device).

The sound data can actually be in userspace mapped buffers even for the
message transport.  Adding a buffer to the virtqueue (which actually
stores pointer(s) to the buffer pages) effectively moves the application
position then.  But, yes, userspace must do a syscall for that which is
not needed when updating the position in a shared page.

> But if that
> memory is shared with the device (as long as some piece of memory containing
> an application position as well), then it's possible to implement quite
> simple poller in the device. And it would be pretty aligned with common
> software mixer workflow (like in PA or QEmu), where a mixer invokes client's
> callbacks for reading/writing next piece of data. The device will need only
> to check an application position and directly read from/write to a shared
> buffer.

Note that it is possible to store things in the virtqueue without
notifying the device and expect the device to poll the virtqueue
instead.  Likewise for the other direction.  That allows operating the
queue without vmexits and might work more efficiently with lots of small
buffers.  Of course, device and driver must negotiate whether they want
to use polling or notifications.

> > > If we gonna introduce any buffer constrains, it must be set by the
> > > device in a stream configuration.
> > 
> > If we want to allow the device to specify the min/max period_bytes it can
> > handle, then yes, that should go into the stream configuration.
> > 
> > Or we use negotiation: the driver asks for period_bytes in set-params, the
> > device picks the closest period_bytes value it can handle and returns
> > that.
> 
> As I said before, periods are not used everywhere. Also, even in ALSA such
> kind of negotiations may be non trivial. I would propose to leave choosing the
> period_bytes value up to the driver. We could add yet one mandatory field to
> the set_params request - driver's buffer size. (If the driver wants to use
> period notification feature, then buffer_bytes % period_bytes must be 0).

Sounds good to me.

> > > But you described exactly "blockable" case: an I/O request is completed 
> > > not
> > > immediately but upon some condition (the buffer is full). In case of 
> > > message-
> > > based transport, both the device and the driver will have its own buffers.
> > 
> > Well, no.  The device doesn't need any buffers, it can use the buffers
> > submitted by the driver.  Typical workflow:
> > 
> >(1) The driver puts a bunch of empty buffers into the rx (record/read)
> >virtqueue (each being period_bytes in size).
> >(2) The driver starts recording.
> >(3) The device fills the first buffer with recorded sound data.
> >(4) When the buffer is full the device returns it to the driver,
> >takes the next from the virtqueue to continue recording.
> >(5) The driver takes the filled buffer and does whatever it wants do
> >with the data (typically pass on to the userspace app).
> >(6) The driver submits a new empty buffer to the virtqueue to make
> >sure the device doesn't run out of buffers.
> > 
> > So, it's not a "here is a buffer, fill it please", "here is the next,
> > ..." ping pong game between driver and device.  There is a queue with
> > multiple buffers instead, and the device fills them one by one.
> 
> Then, should we make this pattern to be mandatory?

It's pretty standard for virtio.  The network receive queue works
fundamentally the same way, for example, except that network buffers
might end up only partially filled: you get network packets whose size
you don't know beforehand, instead of a constant stream of sound samples
which you can easily pack into buffers as you like.

But, yes, it probably makes sense to explicitly say so in the spec.

Re: [virtio-dev] [RFC RESEND] virtio-video: Add virtio video device specification

2019-12-04 Thread Gerd Hoffmann
  Hi,

> 1. Focus on only decoder/encoder functionalities first.
> 
> As Tomasz said earlier in this thread, it'd be too complicated to support 
> camera
> usage at the same time. So, I'd suggest to make it just a generic mem-to-mem
> video processing device protocol for now.
> If we finally decide to support camera in this protocol, we can add it later.

Agree.

> 2. Only one feature bit can be specified for one device.
> 
> I'd like to have a decoder device and encoder device separately.
> It'd be natural to assume it because a decoder and an encoder are provided as
> different hardware.

Hmm, modern GPUs support both encoding and decoding ...

I don't think we should bake that restriction into the specification.
It probably makes sense to use one virtqueue per function though, that
will simplify dispatching in both host and guest.

> 3. Separate buffer allocation functionalities from virtio-video protocol.
> 
> To support various ways of guest/host buffer sharing, we might want to have a
> dedicated buffer sharing device as we're discussing in another thread. Until 
> we
> reach consensus there, it'd be good not to have buffer allocation
> functionalities in virtio-video.

I think virtio-video should be able to work as a stand-alone device,
so we need some way to allocate buffers ...

Buffer sharing with other devices can be added later.

> > +The virtio video device is a virtual video streaming device that supports 
> > the
> > +following functions: encoder, decoder, capture, output.
> > +
> > +\subsection{Device ID}\label{sec:Device Types / Video Device / Device ID}
> > +
> > +TBD.
> 
> I'm wondering how and when we can determine and reserve this ID?

Grab the next free, update the spec accordingly, submit the one-line
patch.

> > +\begin{lstlisting}
> > +enum virtio_video_pixel_format {
> > +   VIRTIO_VIDEO_PIX_FMT_UNDEFINED = 0,
> > +
> > +   VIRTIO_VIDEO_PIX_FMT_H264 = 0x0100,
> > +   VIRTIO_VIDEO_PIX_FMT_NV12,
> > +   VIRTIO_VIDEO_PIX_FMT_NV21,
> > +   VIRTIO_VIDEO_PIX_FMT_I420,
> > +   VIRTIO_VIDEO_PIX_FMT_I422,
> > +   VIRTIO_VIDEO_PIX_FMT_XBGR,
> > +};
> 
> I'm wondering if we can use FOURCC instead. So, we can avoid reinventing a
> mapping from formats to integers.
> Also, I suppose the word "pixel formats" means only raw (decoded) formats.
> But, it can be encoded format like H.264. So, I guess "image format" or
> "fourcc" is a better word choice.

Use separate pixel_format (fourcc) and stream_format (H.264 etc.) enums?

> > +\begin{lstlisting}
> > +struct virtio_video_function {
> > +   struct virtio_video_desc desc;
> > +   __le32 function_type; /* One of VIRTIO_VIDEO_FUNC_* types */
> > +   __le32 function_id;
> > +   struct virtio_video_params in_params;
> > +   struct virtio_video_params out_params;
> > +   __le32 num_caps;
> > +   __u8 padding[4];
> > +   /* Followed by struct virtio_video_capability video_caps[]; */
> > +};
> > +\end{lstlisting}
> 
> If one device only has one functionality, virtio_video_function's fields will 
> be
> no longer needed except in_params and out_params. So, we'd be able to remove
> virtio_video_function and have in_params and out_params in
> virtio_video_capability instead.

Same goes for per-function virtqueues (used virtqueue implies function).

> > +\begin{lstlisting}
> > +struct virtio_video_resource_detach_backing {
> > +   struct virtio_video_ctrl_hdr hdr;
> > +   __le32 resource_id;
> > +   __u8 padding[4];
> > +};
> > +\end{lstlisting}
> > +
> > +\begin{description}
> > +\item[\field{resource_id}] internal id of the resource.
> > +\end{description}
> 
> I suppose that it'd be better not to have the above series of T_RESOURCE
> controls at least until we reach a conclusion in the thread of buffer-sharing
> device. If we end up concluding this type of controls is the best way, we'll 
> be
> able to revisit here.

Well.  For buffer management there are a bunch of options.

 (1) Simply stick the buffers (well, pointers to the buffer pages) into
 the virtqueue.  This is the standard virtio way.

 (2) Create resources, then put the resource ids into the virtqueue.
 virtio-gpu uses that model.  First, because virtio-gpu needs an id
 to reference resources in the rendering command stream
 (virtio-video doesn't need this).  Also because (some kinds of)
 resources are around for a long time and the guest-physical ->
 host-virtual mapping needs to be done only once that way (which
 I think would be the case for virtio-video too because v4l2
 re-uses buffers in round-robin fashion).  Drawback is this
 assumes shared memory between host and guest (which is the case
 in typical use cases but it is not mandated by the virtio spec).

 (3) Import external resources (from virtio-gpu for example).
 Out of scope for now, will probably added as optional feature
 later.

I guess long-term we want support either (1)+(3) or (2)+(3).

cheers,
  Gerd



Re: [virtio-dev] [PATCH] virtio-gpu: add shared resource feature

2019-12-03 Thread Gerd Hoffmann
  Hi,

> > > Do we already have a mechanism
> > > to import memory to virtio-gpu via importing a bo handle from somewhere
> > > else?
> >
> > Inside the guest?  Import is not working yet, that needs another update
> > (blob resources) due to current virtio-gpu resources requiring a format
> > at creation time and dmabufs not having one.
> >
> >
> Gotcha, I was just poking around related code and it looked like I could
> use a resource w/ R8 format and width = bytes.

Yep, that trick might work for some use cases.  But mapping such an
imported dma-buf to a scanout isn't going to work for example ...

cheers,
  Gerd





[virtio-dev] Re: [PATCH] virtio-gpu: some edid clarifications

2019-12-03 Thread Gerd Hoffmann
On Thu, Nov 28, 2019 at 12:42:43PM +0100, Gerd Hoffmann wrote:
> Add some notes about fetching the EDID information.

https://github.com/oasis-tcs/virtio-spec/issues/64

cheers,
  Gerd





Re: [virtio-dev] [PATCH] virtio-gpu: add shared resource feature

2019-12-03 Thread Gerd Hoffmann
On Mon, Dec 02, 2019 at 09:24:31AM -0800, Frank Yang wrote:
> Interesting; what's the use case for this?

Mostly reduce data copying.

> Do we already have a mechanism
> to import memory to virtio-gpu via importing a bo handle from somewhere
> else?

Inside the guest?  Import is not working yet, that needs another update
(blob resources) due to current virtio-gpu resources requiring a format
at creation time and dmabufs not having one.

See https://www.kraxel.org/blog/2019/11/virtio-gpu-status-and-plans/ for
some background.

Exporting virtio-gpu resources works in the drm-misc-next branch, should
land in mainline this merge window (i.e. 5.5-rc1).

cheers,
  Gerd





Re: [virtio-dev] [PATCH] snd: Add virtio sound device specification

2019-12-03 Thread Gerd Hoffmann
On Mon, Dec 02, 2019 at 02:06:53PM +0100, Anton Yakovlev wrote:
> Hi,
> 
> Sorry for the late reply, I was not available for a week.
> 
> 
> On 25.11.2019 10:31, Gerd Hoffmann wrote:
> >Hi,
> > 
> > > > Instead of PCM_OUTPUT and PCM_INPUT I would put the number of input and
> > > > output streams into the device config space (and explicitly allow count
> > > > being zero).
> > > 
> > > There are several options for the number of streams:
> > > 
> > > 1. The device can provide one input and/or output stream.
> > > 2. The device can provide several input and/or output streams, and all of 
> > > them
> > > share the same capabilities.
> > > 3. The device can provide several input and/or output streams, and they 
> > > can
> > > have different set of capabilities.
> > > 
> > > Overall design depends on chosen option. Current draft is based on p.1. 
> > > But we
> > > have never discussed what is the best solution here.
> > 
> > (3) looks most useful to me.
> 
> Then we need to refactor device configuration. I would propose to put into
> configuration space only a total number of all available PCM streams and
> introduce special control request to query per-stream configuration. A
> response should contain a stream type (input/output) and all its capabilities.

Yes, that will work fine too.

> > > > PCM_MSG -- I would drop the feature bit and make that mandatory, so we
> > > > have a common baseline on which all drivers and devices can agree on.
> > > 
> > > Then we need to inform device what "transport" will be in use (I assumed 
> > > it
> > > would be feature negotiation).
> > 
> > Whenever other transports (i.e. via shared memory) are supported: yes,
> > that should be a feature bit.
> > 
> > Not sure about choosing the transport.  If both msg (i.e. via virtqueue)
> > and shared memory are available, does it make sense to allow the driver
> > choose the transport each time it starts a stream?
> 
> Shared memory based transport in any case will require some additional
> actions. For HOST_MEM case the driver will need to get an access to host
> buffer somehow. In GUEST_MEM case the driver will need to provide a buffer for
> the host.
> 
> At first sight, we could extend the set_params request with the
> transport_type field and some additional information.

Or have a per-transport set_params request command.

> For example, in case
> of GUEST_MEM the request could be followed by a buffer sg-list.

I'm not convinced guest_mem is that useful.  host_mem allows giving the
guest access to the buffers used by the host's sound hardware, which is
probably what you need if the MSG transport can't handle the latency
requirements you have.

> > > > > struct virtio_snd_pcm_status {
> > > > >   le32 status;
> > > > >   le32 actual_length;
> > > > > };
> > > > 
> > > > Hmm.  Why actual_length?  Do we want to allow short transfers?  Why?
> > > > Wouldn't it be more useful to just fill the buffer completely?
> > > 
> > > In the current design we have no buffer size requirements. It means
> > > that, in theory, the device and the driver could have buffers with
> > > different sizes.
> > 
> > Should we define that the (virtio) buffer size must be period_bytes then?
> 
> It makes no sense, since the driver chooses period_bytes at runtime.

Yep, the driver will choose period_bytes when starting a stream.
The driver will also queue the buffers (for the MSG transport),
so it should be no problem for the driver to make the two match.

> If we are going to introduce any buffer constraints, they must be set by
> the device in the stream configuration.

If we want to allow the device to specify the min/max period_bytes it can
handle, then yes, that should go into the stream configuration.

Or we use negotiation: the driver asks for a period_bytes value in
set-params, the device picks the closest period_bytes value it can handle
and returns that.
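That clamp-to-closest step could be sketched like this; the helper name, the limits struct, and the example values are made up for illustration:

```c
#include <stdint.h>

/* Sketch of the negotiation idea: the driver requests a period size and
 * the device answers with the closest value it can actually handle.
 * The limits struct is hypothetical. */
struct pcm_limits {
    uint32_t period_min;    /* smallest supported period, in bytes */
    uint32_t period_max;    /* largest supported period, in bytes */
    uint32_t period_align;  /* device-side granularity, e.g. frame size */
};

/* Return the supported period size closest to (and not above) the request. */
static uint32_t negotiate_period_bytes(const struct pcm_limits *lim,
                                       uint32_t requested)
{
    uint32_t v = requested;
    if (v < lim->period_min) v = lim->period_min;
    if (v > lim->period_max) v = lim->period_max;
    v -= v % lim->period_align;       /* round down to the granularity */
    if (v < lim->period_min) v = lim->period_min;
    return v;
}
```

A device answering this way keeps set-params a single round trip, at the cost of the driver having to accept a value it did not ask for.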

> > > Also, the capture stream is a special case. Right now we don't state
> > > explicitly whether a read request is blockable or not.
> > 
> > The concept of "blockable" doesn't exist at that level.  The driver
> > submits buffers to the device, the device fills them and notifies the
> > driver when the buffer is full.  It simply doesn't work like a read(2)
> > syscall.
> 
> But you described exactly the "blockable" case: an I/O request is completed
> not immediately but upon some condition (the buffer is full). In case of
> message-based tr

[virtio-dev] [PATCH] virtio-gpu: add 3d command overview

2019-11-29 Thread Gerd Hoffmann
Add 3d commands to the command enumeration.
Add a section with a very short overview.

Signed-off-by: Gerd Hoffmann 
---
 virtio-gpu.tex | 35 +++
 1 file changed, 35 insertions(+)

diff --git a/virtio-gpu.tex b/virtio-gpu.tex
index eced3095a494..631aaf24ea18 100644
--- a/virtio-gpu.tex
+++ b/virtio-gpu.tex
@@ -213,6 +213,16 @@ \subsubsection{Device Operation: Request header}\label{sec:Device Types / GPU De
 VIRTIO_GPU_CMD_GET_EDID,
 VIRTIO_GPU_CMD_RESOURCE_CREATE_2D_SHARED,
 
+/* 3d commands (OpenGL) */
+VIRTIO_GPU_CMD_CTX_CREATE = 0x0200,
+VIRTIO_GPU_CMD_CTX_DESTROY,
+VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE,
+VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE,
+VIRTIO_GPU_CMD_RESOURCE_CREATE_3D,
+VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D,
+VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D,
+VIRTIO_GPU_CMD_SUBMIT_3D,
+
 /* cursor commands */
 VIRTIO_GPU_CMD_UPDATE_CURSOR = 0x0300,
 VIRTIO_GPU_CMD_MOVE_CURSOR,
@@ -488,6 +498,31 @@ \subsubsection{Device Operation: controlq}\label{sec:Device Types / GPU Device /
 
 \end{description}
 
+\subsubsection{Device Operation: controlq (3d)}\label{sec:Device Types / GPU Device / Device Operation / Device Operation: controlq (3d)}
+
+These commands are supported by the device if the VIRTIO_GPU_F_VIRGL
+feature flag is set.
+
+\begin{description}
+
+\item[VIRTIO_GPU_CMD_CTX_CREATE]
+\item[VIRTIO_GPU_CMD_CTX_DESTROY]
+\item[VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE]
+\item[VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE]
+  Manage OpenGL contexts.
+
+\item[VIRTIO_GPU_CMD_RESOURCE_CREATE_3D]
+  Create OpenGL resources.
+
+\item[VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D]
+\item[VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D]
+  Transfer data from and to OpenGL resources.
+
+\item[VIRTIO_GPU_CMD_SUBMIT_3D]
+  Submit rendering commands (mesa gallium command stream).
+
+\end{description}
+
 \subsubsection{Device Operation: cursorq}\label{sec:Device Types / GPU Device / Device Operation / Device Operation: cursorq}
 
 Both cursorq commands use the same command struct.
-- 
2.18.1
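All of the commands above reuse the existing virtio_gpu_ctrl_hdr request header. A minimal sketch of how a driver might fill it for one of the new 3d commands; the header layout follows the current spec (with __le types simplified to plain integers), the command value comes from the enum in this patch, and the helper itself is made up:

```c
#include <stdint.h>
#include <string.h>

/* Header layout from the existing virtio-gpu spec (byte-order handling
 * and __le types omitted for brevity). */
struct virtio_gpu_ctrl_hdr {
    uint32_t type;      /* VIRTIO_GPU_CMD_* */
    uint32_t flags;     /* e.g. VIRTIO_GPU_FLAG_FENCE */
    uint64_t fence_id;
    uint32_t ctx_id;    /* rendering context, used by the CTX_* commands */
    uint32_t padding;
};

#define VIRTIO_GPU_CMD_CTX_CREATE 0x0200  /* value from the enum above */

/* Illustrative helper: build the header for a context-create request. */
static void fill_ctx_create_hdr(struct virtio_gpu_ctrl_hdr *hdr,
                                uint32_t ctx_id)
{
    memset(hdr, 0, sizeof(*hdr));
    hdr->type = VIRTIO_GPU_CMD_CTX_CREATE;
    hdr->ctx_id = ctx_id;  /* contexts are addressed by id */
}
```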


-
To unsubscribe, e-mail: virtio-dev-unsubscr...@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-h...@lists.oasis-open.org



Re: [virtio-dev] [PATCH] virtio-gpu: add shared resource feature

2019-11-28 Thread Gerd Hoffmann
On Thu, Nov 28, 2019 at 07:04:23PM +0100, Matti Moell wrote:
> Hi Gerd,
> 
> I really like this!
> 
> On 28.11.19 14:19, Gerd Hoffmann wrote:
> > @@ -186,6 +211,7 @@ \subsubsection{Device Operation: Request header}\label{sec:Device Types / GPU De
> >   VIRTIO_GPU_CMD_GET_CAPSET_INFO,
> >   VIRTIO_GPU_CMD_GET_CAPSET,
> >   VIRTIO_GPU_CMD_GET_EDID,
> > +VIRTIO_GPU_CMD_RESOURCE_CREATE_2D_SHARED,
> Shouldn't there also be a RESOURCE_CREATE_3D_SHARED? I know that 3D isn't
> specced here, but it looks awkward not to reserve it.

Yep, none of the 3D commands are in the spec, that's why
RESOURCE_CREATE_3D_SHARED isn't there either (but, yes, it exists).

Guess it might be a good idea to at least list the commands,
but that is something for a separate patch ...

cheers,
  Gerd





[virtio-dev] [PATCH] virtio-gpu: add shared resource feature

2019-11-28 Thread Gerd Hoffmann
This patch adds a new virtio feature for shared resources.

Shared resources are allocated by the guest from normal ram, like
traditional resources.  They can be used by the guest almost like
traditional resources; the only exception is that shared resources can
only be used with backing storage attached (the linux driver does that
anyway, so the workflow doesn't change).  The host can map these
resources and use them directly (using udmabuf), so any guest changes to
these resources can be visible on the host without an explicit transfer
request.  Transfer requests are still needed though, as the host might
still have to synchronize the resource, for example in case importing
the udmabuf failed for some reason.

guest/host interface: Two new commands are added to create 2D and 3D
shared resources.  They work exactly like the non-shared counterparts.

qemu patches:
  https://git.kraxel.org/cgit/qemu/log/?h=sirius/virtio-gpu-feature-shared

linux patches:
  https://git.kraxel.org/cgit/linux/log/?h=drm-virtio-feature-shared

Signed-off-by: Gerd Hoffmann 
---
 virtio-gpu.tex | 27 +++
 1 file changed, 27 insertions(+)

diff --git a/virtio-gpu.tex b/virtio-gpu.tex
index 15dbf9f2ec82..ca27b3d5fa45 100644
--- a/virtio-gpu.tex
+++ b/virtio-gpu.tex
@@ -35,6 +35,7 @@ \subsection{Feature bits}\label{sec:Device Types / GPU Device / Feature bits}
 \begin{description}
 \item[VIRTIO_GPU_F_VIRGL (0)] virgl 3D mode is supported.
 \item[VIRTIO_GPU_F_EDID  (1)] EDID is supported.
+\item[VIRTIO_GPU_F_RESOURCE_SHARED  (2)] shared resources are supported.
 \end{description}
 
 \subsection{Device configuration layout}\label{sec:Device Types / GPU Device / Device configuration layout}
@@ -103,6 +104,8 @@ \subsubsection{Device Operation: Create a framebuffer and configure scanout}
 
 \begin{itemize*}
 \item Create a host resource using VIRTIO_GPU_CMD_RESOURCE_CREATE_2D.
+  In case VIRTIO_GPU_F_RESOURCE_SHARED is negotiated the driver can
+  also use VIRTIO_GPU_CMD_RESOURCE_CREATE_2D_SHARED.
 \item Allocate a framebuffer from guest ram, and attach it as backing
   storage to the resource just created, using
   VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING.  Scatter lists are
@@ -166,6 +169,28 @@ \subsubsection{Device Operation: Configure mouse cursor}
 the pointer shape and position.  To move the pointer without updating
 the shape use VIRTIO_GPU_CMD_MOVE_CURSOR instead.
 
+\subsubsection{Device Operation: Shared resources}
+
+In case VIRTIO_GPU_F_RESOURCE_SHARED is negotiated the driver can use
+the VIRTIO_GPU_CMD_RESOURCE_CREATE_2D_SHARED command to create shared
+resources.  Normal resources have both a guest buffer and a host
+buffer for the resource, and the VIRTIO_GPU_CMD_TRANSFER_* commands are
+used to transfer data between host and guest.  Shared (guest
+allocated) buffers CAN be used directly by the host, to remove or
+reduce the data copies needed.
+
+The driver MUST attach backing storage to a shared resource before
+using it.  Any changes on the shared resource MAY be instantly visible
+on the host.
+
+Otherwise shared resources are used like normal resources.  In
+particular, the driver must send explicit VIRTIO_GPU_CMD_TRANSFER_*
+commands to the device for both normal and shared resources.  Reason:
+the device might have to flush caches.  The device MAY also choose
+not to create a mapping for the shared resource.  Especially for small
+resources it might be more efficient to just copy the data instead of
+establishing a shared mapping.
+
 \subsubsection{Device Operation: Request header}\label{sec:Device Types / GPU Device / Device Operation / Device Operation: Request header}
 
 All requests and responses on the virt queues have a fixed header
@@ -186,6 +211,7 @@ \subsubsection{Device Operation: Request header}\label{sec:Device Types / GPU De
 VIRTIO_GPU_CMD_GET_CAPSET_INFO,
 VIRTIO_GPU_CMD_GET_CAPSET,
 VIRTIO_GPU_CMD_GET_EDID,
+VIRTIO_GPU_CMD_RESOURCE_CREATE_2D_SHARED,
 
 /* cursor commands */
 VIRTIO_GPU_CMD_UPDATE_CURSOR = 0x0300,
@@ -205,6 +231,7 @@ \subsubsection{Device Operation: Request header}\label{sec:Device Types / GPU De
 VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID,
 VIRTIO_GPU_RESP_ERR_INVALID_CONTEXT_ID,
 VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER,
+VIRTIO_GPU_RESP_ERR_NO_BACKING_STORAGE,
 };
 
 #define VIRTIO_GPU_FLAG_FENCE (1 << 0)
-- 
2.18.1
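The driver-side choice described in the patch, use the shared create command only when the feature was negotiated, can be sketched as below. The value for the shared command is an assumption (the next free slot after VIRTIO_GPU_CMD_GET_EDID), and the helper is made up:

```c
#include <stdint.h>

#define VIRTIO_GPU_F_RESOURCE_SHARED (1ull << 2)  /* feature bit 2 */

#define VIRTIO_GPU_CMD_RESOURCE_CREATE_2D        0x0101  /* existing spec */
#define VIRTIO_GPU_CMD_RESOURCE_CREATE_2D_SHARED 0x010b  /* assumed value */

/* Illustrative helper: pick the create command based on the negotiated
 * feature bits.  Either way, the driver must attach backing storage
 * before using a shared resource. */
static uint32_t pick_create_2d_cmd(uint64_t negotiated_features)
{
    if (negotiated_features & VIRTIO_GPU_F_RESOURCE_SHARED)
        return VIRTIO_GPU_CMD_RESOURCE_CREATE_2D_SHARED;
    return VIRTIO_GPU_CMD_RESOURCE_CREATE_2D;
}
```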





[virtio-dev] [PATCH] virtio-gpu: some edid clarifications

2019-11-28 Thread Gerd Hoffmann
Add some notes about fetching the EDID information.

Signed-off-by: Gerd Hoffmann 
---
 virtio-gpu.tex | 13 +
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/virtio-gpu.tex b/virtio-gpu.tex
index af4ca610d235..15dbf9f2ec82 100644
--- a/virtio-gpu.tex
+++ b/virtio-gpu.tex
@@ -70,16 +70,21 @@ \subsubsection{Events}
 \begin{description}
 \item[VIRTIO_GPU_EVENT_DISPLAY] Display configuration has changed.
   The driver SHOULD use the VIRTIO_GPU_CMD_GET_DISPLAY_INFO command to
-  fetch the information from the device.
+  fetch the information from the device.  In case EDID support is
+  negotiated (VIRTIO_GPU_F_EDID feature flag) the driver SHOULD also
+  fetch the updated EDID blobs using the VIRTIO_GPU_CMD_GET_EDID
+  command.
 \end{description}
 
\devicenormative{\subsection}{Device Initialization}{Device Types / GPU Device / Device Initialization}
 
 The driver SHOULD query the display information from the device using
 the VIRTIO_GPU_CMD_GET_DISPLAY_INFO command and use that information
-for the initial scanout setup.  In case no information is available or
-all displays are disabled the driver MAY choose to use a fallback,
-such as 1024x768 at display 0.
+for the initial scanout setup.  In case EDID support is negotiated
+(VIRTIO_GPU_F_EDID feature flag) the driver SHOULD also fetch the EDID
+information using the VIRTIO_GPU_CMD_GET_EDID command.  If no
+information is available or all displays are disabled the driver MAY
+choose to use a fallback, such as 1024x768 at display 0.
 
 \subsection{Device Operation}\label{sec:Device Types / GPU Device / Device Operation}
 
-- 
2.18.1
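The initialization order described above (query display info, fetch EDID when negotiated, fall back to 1024x768 at display 0 otherwise) can be sketched like this; the helper and the info struct are made up, and the real command round-trips are stubbed out:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical result of VIRTIO_GPU_CMD_GET_DISPLAY_INFO for one scanout. */
struct scanout_info {
    uint32_t width, height;
    bool enabled;
};

/* Illustrative helper: pick the initial mode for scanout 0, falling back
 * to 1024x768 when no usable display information is available. */
static void pick_initial_mode(const struct scanout_info *info, bool have_info,
                              uint32_t *w, uint32_t *h)
{
    if (have_info && info->enabled) {
        *w = info->width;
        *h = info->height;
    } else {
        *w = 1024;   /* fallback resolution at display 0 */
        *h = 768;
    }
}
```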




