Re: [virtio-dev] Re: [RFC PATCH v6] virtio-video: Add virtio video device specification

2023-05-07 Thread Alexandre Courbot
On Fri, May 5, 2023 at 8:55 PM Alexander Gordeev
 wrote:
>
> On 03.05.23 16:04, Cornelia Huck wrote:
> > On Fri, Apr 28 2023, Alexander Gordeev  
> > wrote:
> >
> >> On 27.04.23 15:16, Alexandre Courbot wrote:
> >>> But in any case, that's irrelevant to the guest-host interface, and I
> >>> think a big part of the disagreement stems from the misconception that
> >>> V4L2 absolutely needs to be used on either the guest or the host,
> >>> which is absolutely not the case.
> >>
> >> I understand this, of course. I'm arguing that it is harder to
> >> implement, get right, and then maintain over the years. It also brings
> >> limitations that can sometimes be worked around in the virtio spec,
> >> but always at the cost of decreased readability and increased
> >> complexity. Overall it looks like a clear downgrade compared to
> >> virtio-video for our use case. And I believe it would be the same for
> >> every developer who has to actually implement the spec rather than just
> >> do the pass-through. So if we think of V4L2 UAPI pass-through as a
> >> compatibility device (which I believe it is), then it is fine to have
> >> both and keep improving virtio-video, including taking the best
> >> ideas from V4L2 and overall using it as a reference to make writing
> >> the driver simpler.
> >
> > Let me jump in here and ask another question:
> >
> > Imagine that, some years in the future, somebody wants to add a virtio
> > device for handling video encoding/decoding to their hypervisor.
> >
> > Option 1: There are different devices to choose from. How is the person
> > implementing this supposed to pick a device? They might have a narrow
> > use case, where it is clear which of the devices is the one that needs to
> > be supported; but they might also have multiple, diverse use cases, and
> > end up needing to implement all of the devices.
>
> I think in this case virtio-v4l2 should be used exclusively as a
> compatibility device. This means discouraging any further increase of its
> complexity through additional spec patches. virtio-video should eventually
> cover all the use cases of V4L2, so I think it is reasonable to use it
> both in complex use cases and in simple use cases where there is no
> decoder/encoder V4L2 device on the host.
>
> > Option 2: There is one device with various optional features. The person
> > implementing this can start off with a certain subset of features
> > depending on their expected use cases, and add to it later, if needed;
> > but the upfront complexity might be too high for specialized use cases.

I don't see that many negotiable features we can provide for a
decoder/encoder device - at least not many that are not considered
basic (like guest buffers). In terms of the features provided for
codecs, virtio-video and virtio-v4l2 are essentially equivalent.

The question is more: do we want one decoder/encoder specification and
another one for cameras, or do we want something that covers video
devices in general? And is it desirable to reuse an already existing
protocol for communication between a non-privileged entity and a
privileged one (V4L2), or to define our own?

About the latter point, Alex Bennée mentioned that it was difficult to
find the engineering time to define virtio-camera. And virtio-video
has also not been particularly fast to come together. virtio-v4l2
basically serves us both on a plate for a lower effort.

>
> In this case I'd prefer to have the simpler device first, that is the
> current virtio-video, and then to add features incrementally using feature
> flags, taking the virtualization context into account. V4L2 is a
> complex thing from a different context. They already tried to carve out
> some of the use cases, like the stateful decoder/encoder API, but this work
> is not finished (struct v4l2_buffer can serve as evidence). This is
> like dissecting a monolith. Also, it has to be patched to make it more
> appropriate for virtualization (we can see this in Alexandre's PoC already).
>
> > Leaving concrete references to V4L2 out of the picture, we're currently
> > trying to decide whether our future will be more like Option 1 or Option
> > 2, with their respective trade-offs.
>
> I'd like to rely on the opinions of people who know more about virtio
> development and its goals. I would be happy to present or reiterate my
> arguments to anyone interested if necessary.
>
> > I'm slightly biased towards Option 2; does it look feasible at all, or
> > am I missing something essential here? (I had the impression that some
> > previous confusion had been cleared up; apologies in advance if I'm
> > misrepresenting things.)
>
> Indeed some of the previous confusion has been cleared up. But not the
> key thing. Alexandre still claims, for example, that this patched V4L2
> UAPI pass-through is only marginally more complex. I don't agree with
> this, and I have evidence. We haven't finished discussing this evidence.

Are you talking about v4l2_buffer?
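
For context, here is an abridged sketch of that structure, roughly as it
appears in the Linux UAPI header include/uapi/linux/videodev2.h (field list
shortened and reproduced from memory, so treat it as illustrative rather
than authoritative). The raw pointer, unsigned long and struct timeval
members are what make its layout ABI-dependent and awkward to carry
unchanged across a virtualization boundary:

/* Abridged, illustrative rendition of the V4L2 UAPI buffer descriptor. */
struct v4l2_buffer {
	__u32			index;
	__u32			type;
	__u32			bytesused;
	__u32			flags;
	__u32			field;
	struct timeval		timestamp;	/* layout differs between 32/64-bit ABIs */
	struct v4l2_timecode	timecode;
	__u32			sequence;
	__u32			memory;
	union {
		__u32			offset;		/* V4L2_MEMORY_MMAP */
		unsigned long		userptr;	/* V4L2_MEMORY_USERPTR: raw pointer */
		struct v4l2_plane	*planes;	/* multi-planar: pointer to an array */
		__s32			fd;		/* V4L2_MEMORY_DMABUF */
	} m;
	__u32			length;
	__u32			reserved2;
	union {
		__s32			request_fd;
		__u32			reserved;
	};
};

In the multi-planar case, m.planes points to a separate array of
struct v4l2_plane, adding a further level of indirection.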


[virtio-dev] Re: [PATCH v2 0/2] transport-pci: Introduce legacy registers access using AQ

2023-05-07 Thread Jason Wang
On Sun, May 7, 2023 at 9:44 PM Michael S. Tsirkin  wrote:
>
> On Sat, May 06, 2023 at 10:31:30AM +0800, Jason Wang wrote:
> > On Sat, May 6, 2023 at 8:02 AM Parav Pandit  wrote:
> > >
> > > This short series introduces legacy register access commands for the
> > > group owner PCI PF to access the legacy registers of its member VFs.
> > >
> > > If in the future any SIOV devices need to support legacy registers, they
> > > can easily be supported with the same commands by using the group
> > > member identifiers of those SIOV devices.
> > >
> > > More details on the overview, motivation and use case are described
> > > below.
> > >
> > > Patch summary:
> > > --
> > > patch-1 adds administrative virtqueue commands
> > > patch-2 adds its conformance section
> > >
> > > This short series is on top of the latest work [1] from Michael.
> > > It uses the newly introduced administrative virtqueue facility, adding 3 new
> > > commands which use the existing virtio_admin_cmd.
> > >
> > > [1] 
> > > https://lists.oasis-open.org/archives/virtio-comment/202305/msg00112.html
> > >
> > > Usecase:
> > > 
> > > 1. A hypervisor/system needs to provide transitional
> > >    virtio devices to the guest VM at a scale of thousands,
> > >    typically one to eight devices per VM.
> > >
> > > 2. A hypervisor/system needs to provide such devices using a
> > >    vendor-agnostic driver in the hypervisor system.
> > >
> > > 3. A hypervisor system prefers to have a single stack regardless of
> > >    virtio device type (net/blk) and to be future-compatible with a
> > >    single vfio stack using SR-IOV or another scalable device
> > >    virtualization technology to map PCI devices to the guest VM
> > >    (as transitional or otherwise).
> > >
> > > Motivation/Background:
> > > --
> > > The existing virtio transitional PCI device is missing support for
> > > PCI SR-IOV based devices. Currently it does not work beyond the
> > > PCI PF, or in reality only as a software-emulated device. It
> > > has the system-level limitations cited below:
> > >
> > > [a] PCIe spec citation:
> > > VFs do not support I/O Space and thus VF BARs shall not indicate I/O 
> > > Space.
> > >
> > > [b] cpu arch citation:
> > > Intel 64 and IA-32 Architectures Software Developer’s Manual:
> > > The processor’s I/O address space is separate and distinct from
> > > the physical-memory address space. The I/O address space consists
> > > of 64K individually addressable 8-bit I/O ports, numbered 0 through FFFFH.
> > >
> > > [c] PCIe spec citation:
> > > If a bridge implements an I/O address range,...I/O address range will be
> > > aligned to a 4 KB boundary.
> > >
> > > Overview:
> > > -
> > > The above use case requirements can be met by the PCI PF group owner
> > > accessing its group member PCI VFs' legacy registers using an admin
> > > virtqueue of the group owner PCI PF.
> > >
> > > Two new admin virtqueue commands are added which read/write PCI VF
> > > registers.
> > >
> > > The third command suggested by Jason queries the VF device's driver
> > > notification region.
> > >
> > > Software usage example:
> > > ---
> > > One way to use this and map it to the guest VM is via the vfio driver
> > > framework in the Linux kernel.
> > >
> > >                 +----------------------+
> > >                 | pci_dev_id = 0x100X  |
> > > +---------------| pci_rev_id = 0x0     |-----+
> > > | vfio device   | BAR0 = I/O region    |     |
> > > |               | Other attributes     |     |
> > > |               +----------------------+     |
> > > |                                            |
> > > |  +---------------+  +------------------+   |
> > > |  | I/O BAR to AQ |  | Other vfio       |   |
> > > |  | rd/wr mapper  |  | functionalities  |   |
> > > |  +---------------+  +------------------+   |
> > > |                                            |
> > > +------------+----------+--------------------+
> > >              |          |
> > >   +----------+------+   |   +-----------------+
> > >   | +-----+         |   |   | PCI VF device A |
> > >   | | AQ  |---------+---+-->| +-------------+ |
> > >   | +-----+         |   |   | | legacy regs | |
> > >   | PCI PF device   |   |   | +-------------+ |
> > >   +-----------------+   |   +-----------------+
> > >                         |
> > >                         |   +-----------------+
> > >                         |   | PCI VF device N |
> > >                         +-->| +-------------+ |
> > >                             | | legacy regs | |
> > >                             | +-------------+ |
> > >                             +-----------------+
> > >
> > > 2. Bind the virtio PCI driver to the listed device id and
> > >    use it as a native device in the host.
> > >
> > > 3. Use it in a lightweight hypervisor to run a bare-metal OS.
> > >
> > > Please review.
> > >
> > > Fixes: https://github.com/oasis-tcs/virtio-spec/issues/167
> > > Signed-off-by: Parav Pandit 
> 
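
To make the "I/O BAR to AQ rd/wr mapper" box above more concrete, here is a
minimal, hypothetical C sketch of how a hypervisor-side component could
translate a trapped guest write to the emulated legacy I/O BAR into an
admin-virtqueue command issued through the owner PF. Every name in it
(HYP_ADMIN_CMD_LEGACY_REG_WRITE, struct hyp_legacy_reg_access,
hyp_admin_vq_submit) is invented for illustration and does not correspond to
the commands or structures defined in this series; the little-endian encoding
of the value is likewise an assumption of the sketch.

/*
 * Hypothetical sketch only: the command name, opcode value and struct
 * layout below are invented for illustration and are NOT the ones
 * defined in this patch series.
 */
#include <stdint.h>
#include <stddef.h>

#define HYP_ADMIN_CMD_LEGACY_REG_WRITE  0x1000	/* invented opcode */

struct hyp_legacy_reg_access {
	uint16_t group_member_id;	/* which VF in the PF's SR-IOV group */
	uint16_t offset;		/* offset into the legacy register window */
	uint8_t  size;			/* access width: 1, 2 or 4 bytes */
	uint8_t  data[4];		/* register value, little-endian (assumed) */
};

/* Stub standing in for "submit a command on the owner PF's admin VQ". */
static int hyp_admin_vq_submit(uint16_t opcode, const void *payload, size_t len)
{
	(void)opcode; (void)payload; (void)len;
	return 0;
}

/*
 * Called by the "I/O BAR to AQ rd/wr mapper" when a guest write to the
 * emulated legacy I/O BAR of VF 'member_id' has been trapped.
 */
static int legacy_io_bar_write(uint16_t member_id, uint16_t offset,
			       uint32_t value, uint8_t size)
{
	struct hyp_legacy_reg_access cmd = {
		.group_member_id = member_id,
		.offset = offset,
		.size = size,
	};

	/* Pack the trapped value into the command payload. */
	for (unsigned int i = 0; i < size && i < sizeof(cmd.data); i++)
		cmd.data[i] = (value >> (8 * i)) & 0xff;

	return hyp_admin_vq_submit(HYP_ADMIN_CMD_LEGACY_REG_WRITE,
				   &cmd, sizeof(cmd));
}

A read path would work symmetrically: issue a read command carrying the
member id, offset and size, and return the value from the device-writable
part of the command buffer to the trapped guest access.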

[virtio-dev] Re: [PATCH v2 0/2] transport-pci: Introduce legacy registers access using AQ

2023-05-07 Thread Michael S. Tsirkin
On Sat, May 06, 2023 at 10:31:30AM +0800, Jason Wang wrote:
> On Sat, May 6, 2023 at 8:02 AM Parav Pandit  wrote:
> >
> > This short series introduces legacy register access commands for the
> > group owner PCI PF to access the legacy registers of its member VFs.
> >
> > If in the future any SIOV devices need to support legacy registers, they
> > can easily be supported with the same commands by using the group
> > member identifiers of those SIOV devices.
> >
> > More details on the overview, motivation and use case are described
> > below.
> >
> > Patch summary:
> > --
> > patch-1 adds administrative virtqueue commands
> > patch-2 adds its conformance section
> >
> > This short series is on top of the latest work [1] from Michael.
> > It uses the newly introduced administrative virtqueue facility, adding 3 new
> > commands which use the existing virtio_admin_cmd.
> >
> > [1] 
> > https://lists.oasis-open.org/archives/virtio-comment/202305/msg00112.html
> >
> > Usecase:
> > 
> > 1. A hypervisor/system needs to provide transitional
> >    virtio devices to the guest VM at a scale of thousands,
> >    typically one to eight devices per VM.
> >
> > 2. A hypervisor/system needs to provide such devices using a
> >    vendor-agnostic driver in the hypervisor system.
> >
> > 3. A hypervisor system prefers to have a single stack regardless of
> >    virtio device type (net/blk) and to be future-compatible with a
> >    single vfio stack using SR-IOV or another scalable device
> >    virtualization technology to map PCI devices to the guest VM
> >    (as transitional or otherwise).
> >
> > Motivation/Background:
> > --
> > The existing virtio transitional PCI device is missing support for
> > PCI SR-IOV based devices. Currently it does not work beyond the
> > PCI PF, or in reality only as a software-emulated device. It
> > has the system-level limitations cited below:
> >
> > [a] PCIe spec citation:
> > VFs do not support I/O Space and thus VF BARs shall not indicate I/O Space.
> >
> > [b] cpu arch citation:
> > Intel 64 and IA-32 Architectures Software Developer’s Manual:
> > The processor’s I/O address space is separate and distinct from
> > the physical-memory address space. The I/O address space consists
> > of 64K individually addressable 8-bit I/O ports, numbered 0 through FFFFH.
> >
> > [c] PCIe spec citation:
> > If a bridge implements an I/O address range,...I/O address range will be
> > aligned to a 4 KB boundary.
> >
> > Overview:
> > -
> > The above use case requirements can be met by the PCI PF group owner
> > accessing its group member PCI VFs' legacy registers using an admin
> > virtqueue of the group owner PCI PF.
> >
> > Two new admin virtqueue commands are added which read/write PCI VF
> > registers.
> >
> > The third command suggested by Jason queries the VF device's driver
> > notification region.
> >
> > Software usage example:
> > ---
> > One way to use this and map it to the guest VM is via the vfio driver
> > framework in the Linux kernel.
> >
> >                 +----------------------+
> >                 | pci_dev_id = 0x100X  |
> > +---------------| pci_rev_id = 0x0     |-----+
> > | vfio device   | BAR0 = I/O region    |     |
> > |               | Other attributes     |     |
> > |               +----------------------+     |
> > |                                            |
> > |  +---------------+  +------------------+   |
> > |  | I/O BAR to AQ |  | Other vfio       |   |
> > |  | rd/wr mapper  |  | functionalities  |   |
> > |  +---------------+  +------------------+   |
> > |                                            |
> > +------------+----------+--------------------+
> >              |          |
> >   +----------+------+   |   +-----------------+
> >   | +-----+         |   |   | PCI VF device A |
> >   | | AQ  |---------+---+-->| +-------------+ |
> >   | +-----+         |   |   | | legacy regs | |
> >   | PCI PF device   |   |   | +-------------+ |
> >   +-----------------+   |   +-----------------+
> >                         |
> >                         |   +-----------------+
> >                         |   | PCI VF device N |
> >                         +-->| +-------------+ |
> >                             | | legacy regs | |
> >                             | +-------------+ |
> >                             +-----------------+
> >
> > 2. Bind the virtio PCI driver to the listed device id and
> >    use it as a native device in the host.
> >
> > 3. Use it in a lightweight hypervisor to run a bare-metal OS.
> >
> > Please review.
> >
> > Fixes: https://github.com/oasis-tcs/virtio-spec/issues/167
> > Signed-off-by: Parav Pandit 
> >
> > ---
> > changelog:
> > v1->v2:
> > - addressed comments from Michael
> > - added theory of operation
> > - grammar corrections
> > - removed group fields description from individual commands as
> >   it is already present in generic section
> > - added endianness normative for legacy device registers region

[virtio-dev] Re: [PATCH v2 0/2] transport-pci: Introduce legacy registers access using AQ

2023-05-07 Thread Michael S. Tsirkin
On Sat, May 06, 2023 at 03:01:33AM +0300, Parav Pandit wrote:
> This short series introduces legacy register access commands for the
> group owner PCI PF to access the legacy registers of its member VFs.
> 
> If in the future any SIOV devices need to support legacy registers, they
> can easily be supported with the same commands by using the group
> member identifiers of those SIOV devices.
> 
> More details on the overview, motivation and use case are described
> below.
> 
> Patch summary:
> --
> patch-1 adds administrative virtqueue commands
> patch-2 adds its conformance section
> 
> This short series is on top of the latest work [1] from Michael.
> It uses the newly introduced administrative virtqueue facility, adding 3 new
> commands which use the existing virtio_admin_cmd.
> 
> [1] https://lists.oasis-open.org/archives/virtio-comment/202305/msg00112.html
> 
> Usecase:
> 
> 1. A hypervisor/system needs to provide transitional
>    virtio devices to the guest VM at a scale of thousands,
>    typically one to eight devices per VM.
> 
> 2. A hypervisor/system needs to provide such devices using a
>    vendor-agnostic driver in the hypervisor system.
> 
> 3. A hypervisor system prefers to have a single stack regardless of
>    virtio device type (net/blk) and to be future-compatible with a
>    single vfio stack using SR-IOV or another scalable device
>    virtualization technology to map PCI devices to the guest VM
>    (as transitional or otherwise).
> 
> Motivation/Background:
> --
> The existing virtio transitional PCI device is missing support for
> PCI SR-IOV based devices. Currently it does not work beyond the
> PCI PF, or in reality only as a software-emulated device. It
> has the system-level limitations cited below:
> 
> [a] PCIe spec citation:
> VFs do not support I/O Space and thus VF BARs shall not indicate I/O Space.
> 
> [b] cpu arch citation:
> Intel 64 and IA-32 Architectures Software Developer’s Manual:
> The processor’s I/O address space is separate and distinct from
> the physical-memory address space. The I/O address space consists
> of 64K individually addressable 8-bit I/O ports, numbered 0 through FFFFH.
> 
> [c] PCIe spec citation:
> If a bridge implements an I/O address range,...I/O address range will be
> aligned to a 4 KB boundary.
> 
> Overview:
> -
> The above use case requirements can be met by the PCI PF group owner
> accessing its group member PCI VFs' legacy registers using an admin
> virtqueue of the group owner PCI PF.
> 
> Two new admin virtqueue commands are added which read/write PCI VF
> registers.
> 
> The third command suggested by Jason queries the VF device's driver
> notification region.
> 
> Software usage example:
> ---
> One way to use this and map it to the guest VM is via the vfio driver
> framework in the Linux kernel.
> 
>                 +----------------------+
>                 | pci_dev_id = 0x100X  |
> +---------------| pci_rev_id = 0x0     |-----+
> | vfio device   | BAR0 = I/O region    |     |
> |               | Other attributes     |     |
> |               +----------------------+     |
> |                                            |
> |  +---------------+  +------------------+   |
> |  | I/O BAR to AQ |  | Other vfio       |   |
> |  | rd/wr mapper  |  | functionalities  |   |
> |  +---------------+  +------------------+   |
> |                                            |
> +------------+----------+--------------------+
>              |          |
>   +----------+------+   |   +-----------------+
>   | +-----+         |   |   | PCI VF device A |
>   | | AQ  |---------+---+-->| +-------------+ |
>   | +-----+         |   |   | | legacy regs | |
>   | PCI PF device   |   |   | +-------------+ |
>   +-----------------+   |   +-----------------+
>                         |
>                         |   +-----------------+
>                         |   | PCI VF device N |
>                         +-->| +-------------+ |
>                             | | legacy regs | |
>                             | +-------------+ |
>                             +-----------------+
> 
> 2. Bind the virtio PCI driver to the listed device id and
>    use it as a native device in the host.
> 
> 3. Use it in a lightweight hypervisor to run a bare-metal OS.
> 
> Please review.
> 
> Fixes: https://github.com/oasis-tcs/virtio-spec/issues/167
> Signed-off-by: Parav Pandit 
> 
> ---
> changelog:
> v1->v2:
> - addressed comments from Michael
> - added theory of operation
> - grammar corrections
> - removed group fields description from individual commands as
>   it is already present in generic section
> - added endianness normative for legacy device registers region
> - renamed the file to drop vf and add legacy prefix
> - added overview in commit log
> - renamed subsection to reflect command

One thing I still don't see addressed here is support for
legacy interrupts. A legacy driver can disable MSI-X, and
interrupts will then be