[virtio-dev] Re: [PATCH v2 1/2] transport-pci: Introduce legacy registers access commands

2023-05-16 Thread Michael S. Tsirkin
On Sat, May 06, 2023 at 03:01:34AM +0300, Parav Pandit wrote:
> +\begin{lstlisting}
> +struct virtio_admin_cmd_lreg_wr_data {
> + u8 offset; /* Starting byte offset of the register(s) to write */
> + u8 register[];
> +};
> +\end{lstlisting}

So to summarize, I think the main comment here was to have separate
commands for access to common versus device config. Just today I thought
some more and came up with another reason why this is a good idea: if we
ever want legacy MMIO emulation, we can reuse the device config command.

Jason also feels the common config command can be shared
with the vq transport effort. We can discuss this, for sure;
I guess it's a question of putting code in hardware versus in the hypervisor,
but if there are hardware vendors who strictly want
this code in hardware I don't have a big problem with that,
even if I don't exactly get why.

With this idea, all the hypervisor has to do is subtract the offset from
device config accesses, which sounds like a small price to pay.  Does this sound
like a reasonable compromise to you?
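
To make the split concrete, here is a rough sketch of what two separate
commands plus the hypervisor-side offset subtraction could look like. All
struct, macro and helper names below are invented for illustration and are
not the proposed or final layout; the 20/24 device config offset is the
usual legacy PCI layout without/with MSI-X enabled.

/* Illustrative sketch only; names are not from the patch or the spec. */
#include <stddef.h>
#include <stdint.h>

#define LEGACY_DEV_CFG_OFF(msix_enabled)  ((msix_enabled) ? 24 : 20)

struct lcommon_cfg_wr_data {    /* hypothetical: legacy common config write */
    uint8_t offset;             /* offset within the legacy common config */
    uint8_t registers[];
};

struct ldev_cfg_wr_data {       /* hypothetical: legacy device config write */
    uint8_t offset;             /* offset within the device-specific config */
    uint8_t registers[];
};

/* Assumed helpers that wrap the corresponding admin queue commands. */
int send_lcommon_cfg_wr(uint8_t offset, const uint8_t *buf, size_t len);
int send_ldev_cfg_wr(uint8_t offset, const uint8_t *buf, size_t len);

/* Hypervisor side: trap a legacy BAR write and pick the right command,
 * subtracting the device config offset as described above.  An access
 * straddling dev_cfg_off would have to be split or rejected. */
static int emulate_legacy_write(uint8_t bar_off, const uint8_t *buf,
                                size_t len, int msix_enabled)
{
    uint8_t dev_cfg_off = LEGACY_DEV_CFG_OFF(msix_enabled);

    if (bar_off < dev_cfg_off)
        return send_lcommon_cfg_wr(bar_off, buf, len);
    return send_ldev_cfg_wr(bar_off - dev_cfg_off, buf, len);
}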

-- 
MST





[virtio-dev] Re: [libcamera-devel] [virtio-dev] Re: [RFC PATCH v6] virtio-video: Add virtio video device specification

2023-05-16 Thread Tomasz Figa
On Sat, May 6, 2023 at 5:12 PM Laurent Pinchart via libcamera-devel
 wrote:
>
> Hello,
>
> On Fri, May 05, 2023 at 04:55:33PM +0100, Alex Bennée via libcamera-devel 
> wrote:
> > Kieran Bingham writes:
> >
> > > Hi All,
> > >
> > > Coming in late, thanks to lei/lore spotting the libcamera keyword.
> > >
> > > + Cc: libcamera-devel to raise awareness of the discussion there.
> > >
> > > Quoting Alexander Gordeev (2023-05-05 10:57:29)
> > >> On 03.05.23 17:53, Cornelia Huck wrote:
> > >> > On Wed, May 03 2023, Alex Bennée  wrote:
> > >> >> Cornelia Huck  writes:
> > >> >>> On Fri, Apr 28 2023, Alexander Gordeev 
> > >> >>>  wrote:
> > >>  On 27.04.23 15:16, Alexandre Courbot wrote:
> > >> > But in any case, that's irrelevant to the guest-host interface, 
> > >> > and I
> > >> > think a big part of the disagreement stems from the misconception 
> > >> > that
> > >> > V4L2 absolutely needs to be used on either the guest or the host,
> > >> > which is absolutely not the case.
> > >> 
> > >>  I understand this, of course. I'm arguing, that it is harder to
> > >>  implement it, get it straight and then maintain it over years. Also 
> > >>  it
> > >>  brings limitations, that sometimes can be workarounded in the virtio
> > >>  spec, but this always comes at a cost of decreased readability and
> > >>  increased complexity. Overall it looks clearly as a downgrade 
> > >>  compared
> > >>  to virtio-video for our use-case. And I believe it would be the 
> > >>  same for
> > >>  every developer, that has to actually implement the spec, not just 
> > >>  do
> > >>  the pass through. So if we think of V4L2 UAPI pass through as a
> > >>  compatibility device (which I believe it is), then it is fine to 
> > >>  have
> > >>  both and keep improving the virtio-video, including taking the best
> > >>  ideas from the V4L2 and overall using it as a reference to make 
> > >>  writing
> > >>  the driver simpler.
> > >> >>>
> > >> >>> Let me jump in here and ask another question:
> > >> >>>
> > >> >>> Imagine that, some years in the future, somebody wants to add a 
> > >> >>> virtio
> > >> >>> device for handling video encoding/decoding to their hypervisor.
> > >> >>>
> > >> >>> Option 1: There are different devices to chose from. How is the 
> > >> >>> person
> > >> >>> implementing this supposed to pick a device? They might have a narrow
> > >> >>> use case, where it is clear which of the devices is the one that 
> > >> >>> needs to
> > >> >>> be supported; but they also might have multiple, diverse use cases, 
> > >> >>> and
> > >> >>> end up needing to implement all of the devices.
> > >> >>>
> > >> >>> Option 2: There is one device with various optional features. The 
> > >> >>> person
> > >> >>> implementing this can start off with a certain subset of features
> > >> >>> depending on their expected use cases, and add to it later, if 
> > >> >>> needed;
> > >> >>> but the upfront complexity might be too high for specialized use 
> > >> >>> cases.
> > >> >>>
> > >> >>> Leaving concrete references to V4L2 out of the picture, we're 
> > >> >>> currently
> > >> >>> trying to decide whether our future will be more like Option 1 or 
> > >> >>> Option
> > >> >>> 2, with their respective trade-offs.
> > >> >>>
> > >> >>> I'm slightly biased towards Option 2; does it look feasible at all, 
> > >> >>> or
> > >> >>> am I missing something essential here? (I had the impression that 
> > >> >>> some
> > >> >>> previous confusion had been cleared up; apologies in advance if I'm
> > >> >>> misrepresenting things.)
> > >> >>>
> > >> >>> I'd really love to see some kind of consensus for 1.3, if at all
> > >> >>> possible :)
> > >> >>
> > >> >> I think feature discovery and extensibility is a key part of the 
> > >> >> VirtIO
> > >> >> paradigm which is why I find the virtio-v4l approach limiting. By
> > >> >> pegging the device to a Linux API we effectively limit the growth of 
> > >> >> the
> > >> >> device specification to as fast as the Linux API changes. I'm not 
> > >> >> fully
> > >> >> immersed in v4l but I don't think it is seeing any additional features
> > >> >> developed for it and its limitations for camera are one of the reasons
> > >> >> stuff is being pushed to userspace in solutions like libcamera:
> > >> >>
> > >> >>How is libcamera different from V4L2?
> > >> >>
> > >> >>We see libcamera as a continuation of V4L2. One that can more 
> > >> >> easily
> > >> >>handle the recent advances in hardware design. As embedded cameras 
> > >> >> have
> > >> >>developed, all of the complexity has been pushed on to the 
> > >> >> developers.
> > >> >>With libcamera, all of that complexity is simplified and a single 
> > >> >> model
> > >> >>is presented to application developers.
> > >> >
> > >> > Ok, that is interesting; thanks for the information.
> > >> >
> > >> >>
> > >> >> That said its not totally our exp

[virtio-dev] Re: [virtio-comment] Re: [PATCH v2 0/2] transport-pci: Introduce legacy registers access using AQ

2023-05-16 Thread Michael S. Tsirkin
On Tue, May 16, 2023 at 09:49:19PM +, Parav Pandit wrote:
> 
> > From: Michael S. Tsirkin 
> > Sent: Tuesday, May 16, 2023 5:12 PM
> 
> > 
> > Nah, we don't need a "break randomly unless it's a full moon and you cross 
> > your
> > heart three times" mode. If you are going to implement support for legacy
> > emulation implement it in a way that either predictably works or predictably
> > refuses to load.
> 
> It works with same level of predictability as current legacy interface 
> definition for x86_64 platform when msix in guest is supported.
> 
> Supporting intx we discussed to deal incrementally after this.

The only thing I am worried about is that we end up deciding not to go back
to the MSI-X vector after all because of the shifting offset
of the device config. As I said, split the device and transport
configs into separate commands rather than just using an offset,
and then I think it's ok to defer the INTx work a bit.

Though I feel it's valuable and will help us move towards
more important goals like LM.
-- 
MST





[virtio-dev] Re: [PATCH v2 0/2] transport-pci: Introduce legacy registers access using AQ

2023-05-16 Thread Michael S. Tsirkin
On Tue, May 16, 2023 at 09:41:07PM +, Parav Pandit wrote:
> 
> > From: Michael S. Tsirkin 
> > Sent: Tuesday, May 16, 2023 5:09 PM
> 
> > > Possibly one can set. I don’t know if any actual device really supported
> > endianness.
> > > No users have asked for it, even asking explicitly to those 
> > > non_little_endian
> > users.
> > 
> > For sure, ARM BE does exist and Red Hat supports it because yes, it was
> > requested. 
> Interesting.
> Any vdpa device support this? How is the guest endianness conveyed to this 
> vdpa device?

I don't think so, so far.

> And which barrier this ARM uses before issuing notification on legacy?

Would be challenging with hardware, but can work for software virtio
for nested virt I guess?

> > You can not just go by what marketing tells you today we either try
> > to build future proof interfaces or we don't bother at all - by the time 
> > these
> > things are in the field everything shifted.
> > 
> Huh.
> One can add the BE support incrementally when the device plans to support it.
> 
> 1.x is already future proof from endianness, so no need to bring endianness 
> for it.
> Only legacy can work in the single platform with hw in wider deployment as 
> David acked.
> So over engineering is avoided by not introducing BE support.
> 
> > > So may be when there is/if a real user, it can be done in future.
> > 
> > The concern is if you assume LE is default, no way to say "I do not support 
> > LE".
> > Something like an explicit "SET LE" and "SET BE" commands will automatically
> > allow discovering which endian-ness is supported.
> > 
> This is good suggestion.
> The LE for legacy is assumed because barrier etc cannot work.
> So, avoiding theoretical spec commands here that may not find any user.

Maybe at least outline what happens if some device needs to add BE and
drop LE?


> > > > >
> > > > > > For any case, having a simple and deterministic device design is
> > > > > > always better assuming we've agreed that mediation is a must for
> > > > > > legacy drivers. Using dedicated commands can make sure the
> > > > > > implementation won't need to go with corner cases of legacy.
> > > > > >
> > > > > Corner case handling just moved from the device to hypervisor.
> > > >
> > > > That's not safe, the device should be ready for malicious inputs.
> > > >
> > > With current offset, register, device will be just fine as most 
> > > implementations
> > have to program control path etc on these registers write/read etc.
> > > So, device will be able to handle them with plain 2 commands.
> > 
> > Except legacy is broken broken broken.  It just does not work for physical
> > devices in a crazy number of ways. How about the weird 48 bit limitation on
> > PAs? Because no one ever will need any more.
> > 
> Legacy is broken and works only in small but widely used platform.
> Hence, it attempts to support it.
> 
> 48-bit PA limitation in virtio?

For queue base, yes. One of the crazy things in virtio ...

> > I have 0 sympathy to this idea of reviving all these bugs then circling 
> > back and
> > coming up with weird work arounds.  Please, just build a sane transport and
> > have software deal with bugs as best it can.
> > 
> When I proposed the register interface that adheres to the alignment 
> boundary, you suggested to follow legacy semantics.
> Now when legacy semantics is followed, you go opposite direction of let sw 
> decide it.

Sorry if I was unclear.
I was making a distinction between device-specific config with arbitrary
alignment, where we should just follow legacy semantics, and
common config, where I think sw should decide.


> Can we agree that alignment, width and offset to be decided by the sw?

For device type specific config we can't, since the spec says we can't :(
For common config, sure.

-- 
MST





[virtio-dev] RE: [virtio-comment] Re: [PATCH v2 0/2] transport-pci: Introduce legacy registers access using AQ

2023-05-16 Thread Parav Pandit

> From: Michael S. Tsirkin 
> Sent: Tuesday, May 16, 2023 5:12 PM

> 
> Nah, we don't need a "break randomly unless it's a full moon and you cross 
> your
> heart three times" mode. If you are going to implement support for legacy
> emulation implement it in a way that either predictably works or predictably
> refuses to load.

It works with the same level of predictability as the current legacy interface 
definition for the x86_64 platform when MSI-X is supported in the guest.

We discussed dealing with INTx support incrementally after this.


[virtio-dev] Re: [PATCH] content: Replace guest OS with driver

2023-05-16 Thread Michael S. Tsirkin
On Tue, May 16, 2023 at 09:31:43PM +, Parav Pandit wrote:
> 
> > From: Michael S. Tsirkin 
> > Sent: Tuesday, May 16, 2023 5:25 PM
> 
> > > Flavor of a RHEL has inbuilt.
> > 
> > What does this mean exactly? That virtio core is compiled into kernel?
> > That does not matter at all.
> > 
> Why does it does not matter? What is the definition of driver, is the 
> device_structure structure or module binary?

Neither. It's the part that handles the transport or the specific device.




> > > > > Does driver only matter with device_driver structure or module 
> > > > > binary?...
> > > >
> > > > Can't parse your question.
> > > >
> > > > > Driver is largely the software entity that drives the device.
> > > > > I think we can keep the spec simple enough to not mix these
> > > > > details and just
> > > > call it a "driver".
> > > >
> > > > Not just linux there are lots of drivers like this.  the two bits
> > > > pass useful information the way you changed it this distinction is lots.
> > > > I agree it is worth thinking what exactly does it mean.
> > > > Since you researched it - what exactly do drivers such as uefi and
> > > > the unnamed "some OS variant" do exactly?
> > > There is just one "driver" virtio_net_pci that has sets the required bits.
> > >
> > > > when do they set ACKNOWLEDGE and when DRIVER?
> > > >
> > > Not any different flow.
> > > Entity is one.
> > 
> > question is, what happens
> > - before ACKNOWLEDGE
> > - after ACKNOWLEDGE before DRIVER
> > ?
> It follows the same sequence described in the spec as "driver" not as "guest 
> OS".


Looks like my description is spot on then - check the vendor ID,
set ACKNOWLEDGE, check the device ID, see that it's a known good value
matching the device type we expect, set DRIVER.

Maybe replace with "transport part of the driver" and
"device type specific part of the driver"? A bit verbose but if
it rocks your boat ...
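
For what it's worth, a minimal sketch of that sequence in C (the probe
structure and all helper names are assumed; only the vendor ID and the
ACKNOWLEDGE/DRIVER bit values come from the spec):

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

#define VIRTIO_PCI_VENDOR_ID  0x1af4
#define VIRTIO_STATUS_ACK     1   /* ACKNOWLEDGE */
#define VIRTIO_STATUS_DRIVER  2   /* DRIVER */

/* Assumed helpers provided by the PCI/transport binding. */
uint16_t pci_vendor_id(void *dev);
uint16_t virtio_device_type(void *dev);
bool have_driver_for(uint16_t device_type);
void set_status_bit(void *dev, uint8_t bit);

int probe_virtio(void *dev)
{
    /* Transport part of the driver: recognize a valid virtio device
     * transport by its vendor ID. */
    if (pci_vendor_id(dev) != VIRTIO_PCI_VENDOR_ID)
        return -ENODEV;
    set_status_bit(dev, VIRTIO_STATUS_ACK);

    /* Device type specific part of the driver: the device ID maps to a
     * known device type we have a driver for; attach to it. */
    if (!have_driver_for(virtio_device_type(dev)))
        return -ENODEV;
    set_status_bit(dev, VIRTIO_STATUS_DRIVER);

    return 0;
}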

-- 
MST





[virtio-dev] RE: [PATCH v2 0/2] transport-pci: Introduce legacy registers access using AQ

2023-05-16 Thread Parav Pandit

> From: Michael S. Tsirkin 
> Sent: Tuesday, May 16, 2023 5:09 PM

> > Possibly one can set. I don’t know if any actual device really supported
> endianness.
> > No users have asked for it, even asking explicitly to those 
> > non_little_endian
> users.
> 
> For sure, ARM BE does exist and Red Hat supports it because yes, it was
> requested. 
Interesting.
Does any vdpa device support this? How is the guest endianness conveyed to such a
vdpa device?
And which barrier does this ARM use before issuing a notification on legacy?

> You can not just go by what marketing tells you today we either try
> to build future proof interfaces or we don't bother at all - by the time these
> things are in the field everything shifted.
> 
Huh.
One can add BE support incrementally when a device plans to support it.

1.x is already future proof with respect to endianness, so there is no need to
bring endianness into this.
Only legacy needs to work on the single platform with hw in wider deployment, as 
David acked.
So over-engineering is avoided by not introducing BE support.

> > So may be when there is/if a real user, it can be done in future.
> 
> The concern is if you assume LE is default, no way to say "I do not support 
> LE".
> Something like an explicit "SET LE" and "SET BE" commands will automatically
> allow discovering which endian-ness is supported.
> 
This is a good suggestion.
LE is assumed for legacy because barriers etc. cannot work.
So I am avoiding theoretical spec commands here that may not find any user.

> > > >
> > > > > For any case, having a simple and deterministic device design is
> > > > > always better assuming we've agreed that mediation is a must for
> > > > > legacy drivers. Using dedicated commands can make sure the
> > > > > implementation won't need to go with corner cases of legacy.
> > > > >
> > > > Corner case handling just moved from the device to hypervisor.
> > >
> > > That's not safe, the device should be ready for malicious inputs.
> > >
> > With current offset, register, device will be just fine as most 
> > implementations
> have to program control path etc on these registers write/read etc.
> > So, device will be able to handle them with plain 2 commands.
> 
> Except legacy is broken broken broken.  It just does not work for physical
> devices in a crazy number of ways. How about the weird 48 bit limitation on
> PAs? Because no one ever will need any more.
> 
Legacy is broken and works only on a small but widely used set of platforms.
Hence, this attempts to support it.

48-bit PA limitation in virtio?

> I have 0 sympathy to this idea of reviving all these bugs then circling back 
> and
> coming up with weird work arounds.  Please, just build a sane transport and
> have software deal with bugs as best it can.
> 
When I proposed a register interface that adheres to the alignment boundary,
you suggested following legacy semantics.
Now that legacy semantics are followed, you go in the opposite direction of
letting sw decide it.

Can we agree that alignment, width and offset are to be decided by the sw?


[virtio-dev] RE: [PATCH] content: Replace guest OS with driver

2023-05-16 Thread Parav Pandit


> From: Michael S. Tsirkin 
> Sent: Tuesday, May 16, 2023 5:25 PM

> > Flavor of a RHEL has inbuilt.
> 
> What does this mean exactly? That virtio core is compiled into kernel?
> That does not matter at all.
> 
Why does it not matter? What is the definition of a driver: the
device_driver structure or the module binary?

> > > > Does driver only matter with device_driver structure or module 
> > > > binary?...
> > >
> > > Can't parse your question.
> > >
> > > > Driver is largely the software entity that drives the device.
> > > > I think we can keep the spec simple enough to not mix these
> > > > details and just
> > > call it a "driver".
> > >
> > > Not just linux there are lots of drivers like this.  the two bits
> > > pass useful information the way you changed it this distinction is lots.
> > > I agree it is worth thinking what exactly does it mean.
> > > Since you researched it - what exactly do drivers such as uefi and
> > > the unnamed "some OS variant" do exactly?
> > There is just one "driver" virtio_net_pci that has sets the required bits.
> >
> > > when do they set ACKNOWLEDGE and when DRIVER?
> > >
> > Not any different flow.
> > Entity is one.
> 
> question is, what happens
> - before ACKNOWLEDGE
> - after ACKNOWLEDGE before DRIVER
> ?
It follows the same sequence described in the spec, as a "driver", not as a
"guest OS".





[virtio-dev] RE: [PATCH v2 0/2] transport-pci: Introduce legacy registers access using AQ

2023-05-16 Thread Parav Pandit


> From: Michael S. Tsirkin 
> Sent: Tuesday, May 16, 2023 5:23 PM
> 
> On Tue, May 16, 2023 at 09:19:02PM +, Parav Pandit wrote:
> >
> > > From: Michael S. Tsirkin 
> > > Sent: Tuesday, May 16, 2023 4:58 PM
> >
> > > > > Let me give you an example. I feel that device specific config
> > > > > should allow arbitrary length accesses so that e.g.
> > > > > mac can be read and written atomically.
> > > > >
> > > > > On the other hand, this is not the case for common config.
> > > > >
> > > > Why is this not the case with common config?
> > > > Spec snippet: "When using the legacy interface the driver MAY
> > > > access the device-specific configuration region using any width 
> > > > accesses"
> > >
> > > I mean, what you said above exactly?
> > > device-specific configuration excludes common config.
> > >
> > >
> > So, what prevents the device to fail AQ command with the error code when
> alignment/width/size is not right?
> 
> Nothing but you then need to specify what is and what isn't right.
> 
It follows the existing legacy interface section; you asked not to duplicate it.

> 
> > > > > I feel we do not need to allow e.g. access to both common config
> > > > > and device specific one in a single operation, that is just messy.
> > > > It is just an offset and value.
> > > > What part bothers you?
> > >
> > > E.g. that it can cross the coundary between common and device config.
> > >
> > This is the legacy transport.
> >
> > > Look we didn't build modern because we wanted to make work, we did
> > > because legacy is broken.  So either let legacy die already or let's
> > > build a sane interface to emulate it please.
> > Almost everyone building virtio device would prefer for legacy to die.
> > I am still trying to understand that why bisecting register definition of 
> > legacy
> makes it sane.
> 
> Because legacy common config is very evil, do not copy it.
> Legacy device config is only bad in minor tolerable ways.
> 
It is not used outside of the legacy interface.

The point is that the guest driver will follow a certain config write/read pattern.
The hypervisor has no way to fail it.
So what do you expect the hypervisor to do on an undesired sequence?
Do you propose to correct it and send it to the device?
And if so, is the intention to build a sane device for the legacy interface? If
that is the intention, it sounds fine.




[virtio-dev] Re: [PATCH] content: Replace guest OS with driver

2023-05-16 Thread Michael S. Tsirkin
On Tue, May 16, 2023 at 08:59:32PM +, Parav Pandit wrote:
> 
> > From: Michael S. Tsirkin 
> > Sent: Tuesday, May 16, 2023 4:48 PM
> > 
> > On Tue, May 16, 2023 at 07:50:48PM +, Parav Pandit wrote:
> > >
> > > > From: Michael S. Tsirkin 
> > > > Sent: Tuesday, May 16, 2023 6:05 AM
> > >
> > > >
> > > > SO I propose:
> > > >
> > > > \item[ACKNOWLEDGE (1)] Indicates that a transport driver has found the
> > > > device and recognized it as a valid virtio device transport.
> > > >
> > > > \item[DRIVER (2)] Indicates that a device type specific driver was found
> > > > and will attempt to attach to the device.
> > > >
> > > Above bisection is a implementation specific example of Linux (though 
> > > valid
> > and widely used one).
> > >
> > > The UEFI virtio driver doesn't even have such two drivers.
> > > In some OS variant drivers are merged to single kernel binary.
> > 
> > which one?
> > 
> Flavor of a RHEL has inbuilt.

What does this mean exactly? That virtio core is compiled into kernel?
That does not matter at all.

> > > Does driver only matter with device_driver structure or module binary?...
> > 
> > Can't parse your question.
> > 
> > > Driver is largely the software entity that drives the device.
> > > I think we can keep the spec simple enough to not mix these details and 
> > > just
> > call it a "driver".
> > 
> > Not just linux there are lots of drivers like this.  the two bits pass 
> > useful
> > information the way you changed it this distinction is lots.
> > I agree it is worth thinking what exactly does it mean.
> > Since you researched it - what exactly do drivers such as uefi and the 
> > unnamed
> > "some OS variant" do exactly?
> There is just one "driver" virtio_net_pci that has sets the required bits.
> 
> > when do they set ACKNOWLEDGE and when DRIVER?
> > 
> Not any different flow.
> Entity is one.

question is, what happens
- before ACKNOWLEDGE
- after ACKNOWLEDGE before DRIVER 
?





[virtio-dev] Re: [PATCH v2 0/2] transport-pci: Introduce legacy registers access using AQ

2023-05-16 Thread Michael S. Tsirkin
On Tue, May 16, 2023 at 09:19:02PM +, Parav Pandit wrote:
> 
> > From: Michael S. Tsirkin 
> > Sent: Tuesday, May 16, 2023 4:58 PM
> 
> > > > Let me give you an example. I feel that device specific config
> > > > should allow arbitrary length accesses so that e.g.
> > > > mac can be read and written atomically.
> > > >
> > > > On the other hand, this is not the case for common config.
> > > >
> > > Why is this not the case with common config?
> > > Spec snippet: "When using the legacy interface the driver MAY access
> > > the device-specific configuration region using any width accesses"
> > 
> > I mean, what you said above exactly?
> > device-specific configuration excludes common config.
> > 
> > 
> So, what prevents the device to fail AQ command with the error code when 
> alignment/width/size is not right?

Nothing, but then you need to specify what is and what isn't right.


> > > > I feel we do not need to allow e.g. access to both common config and
> > > > device specific one in a single operation, that is just messy.
> > > It is just an offset and value.
> > > What part bothers you?
> > 
> > E.g. that it can cross the coundary between common and device config.
> > 
> This is the legacy transport.
> 
> > Look we didn't build modern because we wanted to make work, we did because
> > legacy is broken.  So either let legacy die already or let's build a sane 
> > interface
> > to emulate it please.
> Almost everyone building virtio device would prefer for legacy to die.
> I am still trying to understand that why bisecting register definition of 
> legacy makes it sane.

Because legacy common config is very evil, do not copy it.
Legacy device config is only bad in minor tolerable ways.

> Device behaves based on the register offset, is fine for the legacy 
> transport, guest driver will make the register accesses anyway it prefers.





[virtio-dev] RE: [PATCH v2 0/2] transport-pci: Introduce legacy registers access using AQ

2023-05-16 Thread Parav Pandit

> From: Michael S. Tsirkin 
> Sent: Tuesday, May 16, 2023 4:58 PM

> > > Let me give you an example. I feel that device specific config
> > > should allow arbitrary length accesses so that e.g.
> > > mac can be read and written atomically.
> > >
> > > On the other hand, this is not the case for common config.
> > >
> > Why is this not the case with common config?
> > Spec snippet: "When using the legacy interface the driver MAY access
> > the device-specific configuration region using any width accesses"
> 
> I mean, what you said above exactly?
> device-specific configuration excludes common config.
> 
> 
So, what prevents the device from failing the AQ command with an error code when 
the alignment/width/size is not right?

> > > I feel we do not need to allow e.g. access to both common config and
> > > device specific one in a single operation, that is just messy.
> > It is just an offset and value.
> > What part bothers you?
> 
> E.g. that it can cross the coundary between common and device config.
> 
This is the legacy transport.

> Look we didn't build modern because we wanted to make work, we did because
> legacy is broken.  So either let legacy die already or let's build a sane 
> interface
> to emulate it please.
Almost everyone building a virtio device would prefer for legacy to die.
I am still trying to understand why bisecting the legacy register definition
makes it sane.
The device behaving based on the register offset is fine for the legacy transport; 
the guest driver will make the register accesses any way it prefers.


[virtio-dev] Re: [virtio-comment] Re: [PATCH v2 0/2] transport-pci: Introduce legacy registers access using AQ

2023-05-16 Thread Michael S. Tsirkin
On Tue, May 16, 2023 at 07:35:20PM +, Parav Pandit wrote:
> 
> > From: virtio-comm...@lists.oasis-open.org On Behalf Of Jason Wang
> > Sent: Monday, May 15, 2023 11:55 PM
> > > I don’t see how this is being any different than register-offset 
> > > interface.
> 
> > > It bisects more things at hypervisor level that makes things hard to add 
> > > #12th
> > entry.
> > >
> > >> 1) device features
> > >> 2) driver features
> > >> 3) queue address
> > >> 4) queue size
> > >> 5) queue select
> > >> 6) queue notify
> > >> 7) device status
> > >> 8) ISR status
> > >> 9) config msix
> > >> 10) queue msix
> > >> 11) device configuration space
> > >>
> > >> It focuses on the facilities instead of transport specific details like 
> > >> registers
> > (we
> > >> don't even need legacy registers in this case), I gives more 
> > >> deterministic
> > >> behavior so we don't need to care about the cross registers read/write.
> > >>
> > > 1.x has these registers at raw level and that seems fine.
> > 
> > 
> > Note that 1.x has more, the above is dumped from the section of "Legacy
> > Interfaces: A Note on PCI Device Layout".
> 
> Yeah.
> Current two commands proposal has one box to transport legacy registers that 
> follows the legacy semantics.
> Above commands in future with legacy commands of this work will be able to 
> work together in future anyway.

Nah, we don't need a "break randomly unless it's a full moon and you
cross your heart three times" mode. If you are going to implement
support for legacy emulation implement it in a way that either
predictably works or predictably refuses to load.

-- 
MST





[virtio-dev] Re: [PATCH v2 0/2] transport-pci: Introduce legacy registers access using AQ

2023-05-16 Thread Michael S. Tsirkin
On Tue, May 16, 2023 at 07:29:44PM +, Parav Pandit wrote:
> 
> > From: Jason Wang 
> > Sent: Tuesday, May 16, 2023 12:08 AM
> 
> > > For guest and by guest are two different things.
> > > I am proposing that a cfgq/cmdq should be used directly by the guest 
> > > driver as
> > base line like any other VQ.
> > 
> > I'm fine with this, but if I understand correctly, but I don't see any 
> > proposal like
> > this that is posted to the list?
> > 
> Because it is orthogonal to the work here, it's hard to make progress by 
> mixing everything.
> 
> > > And if there is a need for mediation, it should be possible.
> > >
> > > > I think we've discussed many times that legacy is tricky for hardware.
> > > And we want to proceed for the described use case and requirements.
> > >
> > > > And your proposal proves this further by introducing modern alike
> > > > notification areas.
> > > Not really. A modern device is offering the notification area like legacy 
> > > to the
> > legacy interface.
> > > So this is just fine.
> > 
> > I don't say it's not fine, it's pretty fine and transport virtqueue have 
> > similar
> > commands. And this also answer the question that you think tunnel PCI over
> > adminq is not scalable. We can keep this command in that case as well, or
> > invent new capabilities.
> > 
> Ok. so virtio_admin_q_cmd_lq_notify_result is fine.
> 
> > >
> > > > We probably need to deal with others like endianess.
> > > >
> > > No point in discussing this again. One size does not fit all.
> > > It solves large part of the use cases so it is valuable and will be 
> > > supported.
> > 
> > This is doable, we need to leave some space for future development. Or is it
> > just complicated to have an endian command?
> > 
> Possibly one can set. I don’t know if any actual device really supported 
> endianness.
> No users have asked for it, even asking explicitly to those non_little_endian 
> users.

For sure, ARM BE does exist and Red Hat supports it because yes,
it was requested. You can not just go by what marketing tells you
today; we either try to build future-proof interfaces or we don't
bother at all - by the time these things are in the field everything
has shifted.

> So may be when there is/if a real user, it can be done in future.

The concern is that if you assume LE is the default, there is no way to say
"I do not support LE". Something like explicit
"SET LE" and "SET BE" commands would automatically allow
discovering which endian-ness is supported.
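
Purely to illustrate the discovery argument (the opcode names and values
below are made up, not spec text), explicit commands make the capability
visible through the existing admin command list mechanism:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical opcodes, invented for illustration only. */
#define VIRTIO_ADMIN_CMD_LEGACY_SET_LE  0x100
#define VIRTIO_ADMIN_CMD_LEGACY_SET_BE  0x101

/* Assumed helper: true if the opcode is advertised by the device's admin
 * command list (e.g. via VIRTIO_ADMIN_CMD_LIST_QUERY). */
bool admin_cmd_supported(uint16_t opcode);

/* A device that only advertises SET_BE thereby says "I do not support LE",
 * which is impossible to express if LE is simply assumed. */
static bool legacy_endianness_supported(bool guest_is_big_endian)
{
    return admin_cmd_supported(guest_is_big_endian ?
                               VIRTIO_ADMIN_CMD_LEGACY_SET_BE :
                               VIRTIO_ADMIN_CMD_LEGACY_SET_LE);
}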

> > >
> > > > For any case, having a simple and deterministic device design is
> > > > always better assuming we've agreed that mediation is a must for
> > > > legacy drivers. Using dedicated commands can make sure the
> > > > implementation won't need to go with corner cases of legacy.
> > > >
> > > Corner case handling just moved from the device to hypervisor.
> > 
> > That's not safe, the device should be ready for malicious inputs.
> > 
> With current offset, register, device will be just fine as most 
> implementations have to program control path etc on these registers 
> write/read etc.
> So, device will be able to handle them with plain 2 commands.

Except legacy is broken broken broken.  It just does not work for
physical devices in a crazy number of ways. How about the weird 48 bit
limitation on PAs? Because no one will ever need any more.

I have 0 sympathy to this idea of reviving all these bugs then circling
back and coming up with weird work arounds.  Please, just build a sane
transport and have software deal with bugs as best it can.



> > > For SIOV there is no backward compatibility requirement.
> > 
> > It has several ways to support current virtio-pci drivers in the guest, and 
> > such
> > compatibility is almost a must for the public cloud.
> > We can't enforce the user to reinstall/recompile their kernels/application 
> > in
> > order to work for SIOV.
> Sure, an SIOV device emulating PCI device for guest is one requirement.
> 
> And SIOV device working natively spawned from the parent PF and used without 
> guest is another requirement.
> 
> And second can be done bit elegantly in self-contained way without depending 
> on the parent PF.
> And requires more work unrelated to this one.
> 
> > I think we are discussing what it can be, so see the above reply for the
> > notification areas, we can stick virtio_admin_cmd_lq_notify_query_result
> > command for PCI over adminq, or invent new capabilities to make it fully 
> > self
> > contained.
> > 
> As we both understand well that a queue is not going to transport a doorbell 
> and actual IMS interrupts, we will mostly have commands to service some needs 
> for backward compat.
> It is a separate work when there are no PCI VF devices.
> 
> > >
> > > >
> > > > >
> > > > > You only want to exchange currently defined config registers,
> > > > > interrupts and
> > > > notification configuration using AQ.
> > > > >
> > > > > Transport VQ is NOT transporting actual data, it is not
> > > > > transporting
> > > 

[virtio-dev] RE: [PATCH] content: Replace guest OS with driver

2023-05-16 Thread Parav Pandit


> From: Michael S. Tsirkin 
> Sent: Tuesday, May 16, 2023 4:48 PM
> 
> On Tue, May 16, 2023 at 07:50:48PM +, Parav Pandit wrote:
> >
> > > From: Michael S. Tsirkin 
> > > Sent: Tuesday, May 16, 2023 6:05 AM
> >
> > >
> > > SO I propose:
> > >
> > > \item[ACKNOWLEDGE (1)] Indicates that a transport driver has found the
> > > device and recognized it as a valid virtio device transport.
> > >
> > > \item[DRIVER (2)] Indicates that a device type specific driver was found
> > > and will attempt to attach to the device.
> > >
> > Above bisection is a implementation specific example of Linux (though valid
> and widely used one).
> >
> > The UEFI virtio driver doesn't even have such two drivers.
> > In some OS variant drivers are merged to single kernel binary.
> 
> which one?
> 
A flavor of RHEL has it built in.

> > Does driver only matter with device_driver structure or module binary?...
> 
> Can't parse your question.
> 
> > Driver is largely the software entity that drives the device.
> > I think we can keep the spec simple enough to not mix these details and just
> call it a "driver".
> 
> Not just linux there are lots of drivers like this.  the two bits pass useful
> information the way you changed it this distinction is lots.
> I agree it is worth thinking what exactly does it mean.
> Since you researched it - what exactly do drivers such as uefi and the unnamed
> "some OS variant" do exactly?
There is just one "driver" virtio_net_pci that has sets the required bits.

> when do they set ACKNOWLEDGE and when DRIVER?
> 
The flow is not any different.
The entity is one.




[virtio-dev] Re: [PATCH v2 0/2] transport-pci: Introduce legacy registers access using AQ

2023-05-16 Thread Michael S. Tsirkin
On Tue, May 16, 2023 at 07:11:36PM +, Parav Pandit wrote:
> 
> > From: Michael S. Tsirkin 
> > Sent: Tuesday, May 16, 2023 2:22 AM
> > 
> > On Mon, May 15, 2023 at 03:59:41PM +, Parav Pandit wrote:
> > >
> > > > From: Jason Wang 
> > > > Sent: Monday, May 15, 2023 3:31 AM
> > >
> > > > > What do we gain from bisecting it?
> > > >
> > > >
> > > > 1) No need to deal with the offset in the hardware
> > > >
> > > In context of legacy is doesn’t matter much given that its over AQ.
> > 
> > Let me give you an example. I feel that device specific config should allow
> > arbitrary length accesses so that e.g.
> > mac can be read and written atomically.
> > 
> > On the other hand, this is not the case for common config.
> > 
> Why is this not the case with common config?
> Spec snippet: "When using the legacy interface the driver MAY access the 
> device-specific configuration region using any
> width accesses"

I mean, exactly what you said above:
device-specific configuration excludes common config.


> > I feel we do not need to allow e.g. access to both common config and device
> > specific one in a single operation, that is just messy. 
> It is just an offset and value.
> What part bothers you?

E.g. that it can cross the boundary between common and device config.


> > Now sure, we can add text
> > with an explanation, but if instead there's a flag saying "this is common 
> > cfg"
> > or "this is device cfg" then it fall out automatically.
> > 
> > Is this a deal breaker? No, but note how neither I nor Jason did not think 
> > about
> > this ahead of the time and the correct usage just falls out of the interface
> > automatically.  This is a mark of a good interface design. Lots of corner 
> > cases
> > that one has to be careful around is not.
> It just moves corner case one layer above in multiple software hypervisors 
> and hypervisor is not failing it either on cfg read/writes as expected by the 
> guest.
> So bisecting is not adding anything extra. Those failure checks are going to 
> be ignored in hypervisor anyway.

Look, we didn't build modern because we wanted to make work; we did it
because legacy is broken.  So either let legacy die already or let's
build a sane interface to emulate it, please.

-- 
MST





[virtio-dev] Re: [PATCH] content: Replace guest OS with driver

2023-05-16 Thread Michael S. Tsirkin
On Tue, May 16, 2023 at 07:50:48PM +, Parav Pandit wrote:
> 
> > From: Michael S. Tsirkin 
> > Sent: Tuesday, May 16, 2023 6:05 AM
> 
> > 
> > SO I propose:
> > 
> > \item[ACKNOWLEDGE (1)] Indicates that a transport driver has found the
> > device and recognized it as a valid virtio device transport.
> > 
> > \item[DRIVER (2)] Indicates that a device type specific driver was found
> > and will attempt to attach to the device.
> > 
> Above bisection is a implementation specific example of Linux (though valid 
> and widely used one).
> 
> The UEFI virtio driver doesn't even have such two drivers.
> In some OS variant drivers are merged to single kernel binary.

which one?

> Does driver only matter with device_driver structure or module binary?...

Can't parse your question.

> Driver is largely the software entity that drives the device.
> I think we can keep the spec simple enough to not mix these details and just 
> call it a "driver".

Not just Linux, there are lots of drivers like this.  The two bits pass
useful information; the way you changed it, this distinction is lost.
I agree it is worth thinking about what exactly it means.
Since you researched it - what exactly do drivers
such as UEFI and the unnamed "some OS variant" do?
When do they set ACKNOWLEDGE and when DRIVER?




> > 
> > BTW somewhat related, I would maybe fix
> > device-types/mem/description.tex:change
> > not to say "device driver", just "driver" for brevity.
> > 
> Ok. will fix.





[virtio-dev] Re: [virtio-comment] Re: [PATCH v2 0/2] transport-pci: Introduce legacy registers access using AQ

2023-05-16 Thread Michael S. Tsirkin
On Tue, May 16, 2023 at 06:45:58PM +, Parav Pandit wrote:
> > From: Michael S. Tsirkin 
> > Sent: Tuesday, May 16, 2023 12:33 AM
> 
> [..]
> > > > > A better way is to have an eventq of depth = num_vfs, like many
> > > > > other virtio devices have it.
> > > > >
> > > > > An eventq can hold per VF interrupt entry including the isr value
> > > > > that you suggest above.
> > > > >
> > > > > Something like,
> > > > >
> > > > > union eventq_entry {
> > > > >   u8 raw_data[16];
> > > > >   struct intx_entry {
> > > > >   u8 event_opcode;
> > > > >   u8 group_type;
> > > > >   u8 reserved[6];
> > > > >   le64 group_identifier;
> > > > >   u8 isr_status;
> > > > >   };
> > > > > };
> > > > >
> > > > > This eventq resides on the owner parent PF.
> > > > > isr_status is read on clear like today.
> > > >
> > > > This is what I wrote no?
> > > > lore.kernel.org/all/20230507050146-mutt-send-email-
> > > > mst%40kernel.org/t.mbox.gz
> > > >
> > > > how about a special command that is used when device would
> > > > normally send INT#x? it can also return ISR to reduce latency.
> > > >
> > > In response to your above suggestion of AQ command, I suggested the
> > > eventq that contains the isr_status that reduces latency as you suggest.
> > 
> > I don't see why we need to keep adding queues though.
> > Just use one of admin queues.
> > 
> Admin queue is cmd->rsp mode.
> The purpose is different here. A device wants to async, notify the interrupt 
> for each VF.
> Like a nic rq.
> Filling up the AQ full of these commands is sub-optimal for those VFs who may 
> generate interrupt.

I don't get the difference, sorry. What you do with an rq
is exactly fill it up with buffers; here you fill the adminq with
commands.



> > > An eventq is a variation of it, where device can keep reporting the events
> > without doing the extra mapping and without too many commands.
> > 
> > I don't get the difference then. The format you showed seems very close to
> > admin command. What is the difference? How do you avoid the need to add a
> > command per VF using INTx#?
> > 
> The main difference is it behaves like nic rq.
> Driver posts empty descriptors, device reports the consumed entries.
> Specific entry is not attached to any specific VF.
> The eventq_entry structure is written by the device for a VF there fired the 
> interrupt fired.

If this is what you want, we already have commands that do not refer to
a specific VF, specifically VIRTIO_ADMIN_CMD_LIST_QUERY.
But I think this is misguided, see below.


> > How do you avoid the need to add a command per VF using INTx#?
> The empty buffer is posted pointing to eventq_entry.
> Total number of buffers posted = number_of_vfs in intx mode.
> So entry is not attached to a specific VF, entries are in a pool (like nic 
> rq).

There is a very simple problem with your approach:
you need to report the VF's read of ISR, since that
clears the interrupt line.
This command has to be per VF.
As I tried to explain before, I proposed combining the two:
the same command clears ISR for a VF, and if ISR is not 0 it
returns immediately. If it is 0 the buffer is queued and
used later for the interrupt. I prefer the approach
where the buffer is reused by the same VF,
relying on out-of-order use.
But yes, if you want to use in-order, this does not work.
Write the VF that triggered the interrupt in the command-specific
data, maybe?
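
To spell the combined command out (a sketch only; the names, layout and
field sizes below are assumptions based on the description above, not a
proposal):

#include <stdint.h>

/* Hypothetical "read-and-clear ISR or wait for INT#x" admin command. */
struct virtio_admin_cmd_vf_intx_wait_data {    /* command-specific data */
    uint64_t group_member_id;                  /* the VF this buffer targets */
};

struct virtio_admin_cmd_vf_intx_wait_result {  /* command-specific result */
    uint8_t isr_status;                        /* ISR value, read-to-clear */
};

/*
 * Device-side semantics as described above:
 *  - if the VF's ISR is non-zero, clear it and complete the command
 *    immediately, returning the old value in isr_status;
 *  - if it is zero, keep the buffer queued and complete it later, at the
 *    point where the device would otherwise have asserted INT#x.
 * An in-order variant would instead return the triggering VF in the
 * command-specific result rather than binding each buffer to one VF.
 */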



> > > Intel CPU has 256 per core (per vcpu). So they are really a lot.
> > > One needs to connect lot more devices to the cpu to run out of it.
> > > So yes, I would like to try the command to fail.
> > 
> > order of 128 functions then for a 1vcpu VM. You were previously talking 
> > about
> > tends of 1000s of functions as justification for avoiding config space.
> > 
> Will try.
> 
> > 
> > > > Do specific customers event use guests with msi-x disabled? Maybe no.
> > > > Does anyone use virtio with msi-x disabled? Most likely yes.
> > > I just feel that INTx emulation is extremely rare/narrow case of some
> > applications that may never find its use on hw based devices.
> > 
> > If we use a dedicated command for this, I guess devices can just avoid
> > implementing the command if they do not feel like it?
> > 
> I didn't follow. The guest device?
> Do you mean guest driver?

The device. Clear the bit in VIRTIO_ADMIN_CMD_LIST_QUERY and
then the hypervisor knows it can't support INTx.

> > > A driver will be easily able to fail the call on INTx configuration 
> > > failing the
> > guest.
> > 
> > There's no configuration - INTx is the default - and no way to fail 
> > gracefully for
> > legacy. That is one of the things we should fix, at least hypervisor should 
> > be able
> > to detect failures.
> For sure hypervisor can detect the failure on INTx configuration done by the 
> guest and fail the call for this config.
> What we are discussing is not anything legacy specific, it applies to 1.x as 
> well sadly.

Problem is, enablin

[virtio-dev] RE: [PATCH] content: Replace guest OS with driver

2023-05-16 Thread Parav Pandit


> From: Michael S. Tsirkin 
> Sent: Tuesday, May 16, 2023 6:05 AM

> 
> SO I propose:
> 
> \item[ACKNOWLEDGE (1)] Indicates that a transport driver has found the
> device and recognized it as a valid virtio device transport.
> 
> \item[DRIVER (2)] Indicates that a device type specific driver was found
> and will attempt to attach to the device.
> 
The above bisection is an implementation-specific example from Linux (though a
valid and widely used one).

The UEFI virtio driver doesn't even have two such drivers.
In some OS variants the drivers are merged into a single kernel binary.
Does "driver" only mean the device_driver structure or the module binary?...

A driver is largely the software entity that drives the device.
I think we can keep the spec simple enough to not mix in these details and just 
call it a "driver".

> 
> BTW somewhat related, I would maybe fix
> device-types/mem/description.tex:change
> not to say "device driver", just "driver" for brevity.
> 
Ok. will fix.




[virtio-dev] RE: [virtio-comment] Re: [PATCH v2 0/2] transport-pci: Introduce legacy registers access using AQ

2023-05-16 Thread Parav Pandit

> From: virtio-comm...@lists.oasis-open.org On Behalf Of Jason Wang
> Sent: Monday, May 15, 2023 11:55 PM
> > I don’t see how this is being any different than register-offset interface.

> > It bisects more things at hypervisor level that makes things hard to add 
> > #12th
> entry.
> >
> >> 1) device features
> >> 2) driver features
> >> 3) queue address
> >> 4) queue size
> >> 5) queue select
> >> 6) queue notify
> >> 7) device status
> >> 8) ISR status
> >> 9) config msix
> >> 10) queue msix
> >> 11) device configuration space
> >>
> >> It focuses on the facilities instead of transport specific details like 
> >> registers
> (we
> >> don't even need legacy registers in this case), I gives more deterministic
> >> behavior so we don't need to care about the cross registers read/write.
> >>
> > 1.x has these registers at raw level and that seems fine.
> 
> 
> Note that 1.x has more, the above is dumped from the section of "Legacy
> Interfaces: A Note on PCI Device Layout".

Yeah.
The current two-command proposal has one box to transport legacy registers that 
follows the legacy semantics.
The above commands and the legacy commands of this work will be able to work
together in the future anyway.


[virtio-dev] RE: [PATCH v2 0/2] transport-pci: Introduce legacy registers access using AQ

2023-05-16 Thread Parav Pandit

> From: Jason Wang 
> Sent: Tuesday, May 16, 2023 12:08 AM

> > For guest and by guest are two different things.
> > I am proposing that a cfgq/cmdq should be used directly by the guest driver 
> > as
> base line like any other VQ.
> 
> I'm fine with this, but if I understand correctly, but I don't see any 
> proposal like
> this that is posted to the list?
> 
Because it is orthogonal to the work here, it's hard to make progress by mixing 
everything.

> > And if there is a need for mediation, it should be possible.
> >
> > > I think we've discussed many times that legacy is tricky for hardware.
> > And we want to proceed for the described use case and requirements.
> >
> > > And your proposal proves this further by introducing modern alike
> > > notification areas.
> > Not really. A modern device is offering the notification area like legacy 
> > to the
> legacy interface.
> > So this is just fine.
> 
> I don't say it's not fine, it's pretty fine and transport virtqueue have 
> similar
> commands. And this also answer the question that you think tunnel PCI over
> adminq is not scalable. We can keep this command in that case as well, or
> invent new capabilities.
> 
Ok. so virtio_admin_q_cmd_lq_notify_result is fine.

> >
> > > We probably need to deal with others like endianess.
> > >
> > No point in discussing this again. One size does not fit all.
> > It solves large part of the use cases so it is valuable and will be 
> > supported.
> 
> This is doable, we need to leave some space for future development. Or is it
> just complicated to have an endian command?
> 
Possibly one can be set. I don't know if any actual device really supports
other endianness.
No users have asked for it, even when asking those non-little-endian
users explicitly.
So maybe when/if there is a real user, it can be done in the future.

> >
> > > For any case, having a simple and deterministic device design is
> > > always better assuming we've agreed that mediation is a must for
> > > legacy drivers. Using dedicated commands can make sure the
> > > implementation won't need to go with corner cases of legacy.
> > >
> > Corner case handling just moved from the device to hypervisor.
> 
> That's not safe, the device should be ready for malicious inputs.
> 
With the current offset+register scheme the device will be just fine, as most
implementations have to program the control path on these register writes/reads
anyway.
So the device will be able to handle them with just the two plain commands.

> > For SIOV there is no backward compatibility requirement.
> 
> It has several ways to support current virtio-pci drivers in the guest, and 
> such
> compatibility is almost a must for the public cloud.
> We can't enforce the user to reinstall/recompile their kernels/application in
> order to work for SIOV.
Sure, an SIOV device emulating a PCI device for a guest is one requirement.

An SIOV device working natively, spawned from the parent PF and used without a
guest, is another requirement.

The second can be done a bit more elegantly in a self-contained way without
depending on the parent PF.
And it requires more work unrelated to this one.

> I think we are discussing what it can be, so see the above reply for the
> notification areas, we can stick virtio_admin_cmd_lq_notify_query_result
> command for PCI over adminq, or invent new capabilities to make it fully self
> contained.
> 
As we both understand well, a queue is not going to transport a doorbell
and actual IMS interrupts; we will mostly have commands to service some needs
for backward compat.
It is separate work for when there are no PCI VF devices.

> >
> > >
> > > >
> > > > You only want to exchange currently defined config registers,
> > > > interrupts and
> > > notification configuration using AQ.
> > > >
> > > > Transport VQ is NOT transporting actual data, it is not
> > > > transporting
> > > notifications etc.
> > > > It is just a config channel.
> > > > Hence, they are just commands not a "full transport".
> > >
> > >
> > > For any case, a virtqueue could not be a full transport. It needs
> > > bus facilities to do bootstrap/DMA/notifications at least. The idea
> > > is to save for transport specific resources and use a more scalable
> > > way to config devices.
> > >
> > Hence it only needs a cfgq, not the full transport. :)
> >
> 
> Well, not a native speaker but I'm fine if you want to call it configq.
Michael called it admin vq, so will stick to it for now. :)


[virtio-dev] RE: [PATCH v2 0/2] transport-pci: Introduce legacy registers access using AQ

2023-05-16 Thread Parav Pandit

> From: Michael S. Tsirkin 
> Sent: Tuesday, May 16, 2023 2:22 AM
> 
> On Mon, May 15, 2023 at 03:59:41PM +, Parav Pandit wrote:
> >
> > > From: Jason Wang 
> > > Sent: Monday, May 15, 2023 3:31 AM
> >
> > > > What do we gain from bisecting it?
> > >
> > >
> > > 1) No need to deal with the offset in the hardware
> > >
> > In context of legacy is doesn’t matter much given that its over AQ.
> 
> Let me give you an example. I feel that device specific config should allow
> arbitrary length accesses so that e.g.
> mac can be read and written atomically.
> 
> On the other hand, this is not the case for common config.
> 
Why is this not the case with common config?
Spec snippet: "When using the legacy interface the driver MAY access the 
device-specific configuration region using any
width accesses"
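
As an aside, the mac example quoted above is easy to sketch: with an
arbitrary-length device config access the 6-byte mac is a single command,
so it cannot be observed half-updated (the helper name and command
plumbing below are assumed):

#include <stdint.h>

#define VIRTIO_NET_MAC_OFF  0   /* mac starts at offset 0 of the net device config */
#define MAC_LEN             6

/* Assumed helper issuing one legacy device-config read of 'len' bytes. */
int legacy_dev_cfg_read(uint8_t offset, void *buf, uint32_t len);

static int read_mac_atomically(uint8_t mac[MAC_LEN])
{
    /* One 6-byte access; with fixed 1/2/4 byte accesses the read would
     * have to be split and could race with a mac update. */
    return legacy_dev_cfg_read(VIRTIO_NET_MAC_OFF, mac, MAC_LEN);
}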

> I feel we do not need to allow e.g. access to both common config and device
> specific one in a single operation, that is just messy. 
It is just an offset and value.
What part bothers you?

> Now sure, we can add text
> with an explanation, but if instead there's a flag saying "this is common cfg"
> or "this is device cfg" then it fall out automatically.
> 
> Is this a deal breaker? No, but note how neither I nor Jason did not think 
> about
> this ahead of the time and the correct usage just falls out of the interface
> automatically.  This is a mark of a good interface design. Lots of corner 
> cases
> that one has to be careful around is not.
It just moves the corner case one layer up, into multiple software hypervisors,
and the hypervisor is not failing it either on cfg reads/writes, as expected by
the guest.
So bisecting is not adding anything extra. Those failure checks are going to be 
ignored in the hypervisor anyway.


[virtio-dev] RE: [virtio-comment] Re: [PATCH v2 0/2] transport-pci: Introduce legacy registers access using AQ

2023-05-16 Thread Parav Pandit
> From: Michael S. Tsirkin 
> Sent: Tuesday, May 16, 2023 12:33 AM

[..]
> > > > A better way is to have an eventq of depth = num_vfs, like many
> > > > other virtio devices have it.
> > > >
> > > > An eventq can hold per VF interrupt entry including the isr value
> > > > that you suggest above.
> > > >
> > > > Something like,
> > > >
> > > > union eventq_entry {
> > > > u8 raw_data[16];
> > > > struct intx_entry {
> > > > u8 event_opcode;
> > > > u8 group_type;
> > > > u8 reserved[6];
> > > > le64 group_identifier;
> > > > u8 isr_status;
> > > > };
> > > > };
> > > >
> > > > This eventq resides on the owner parent PF.
> > > > isr_status is read on clear like today.
> > >
> > > This is what I wrote no?
> > > lore.kernel.org/all/20230507050146-mutt-send-email-
> > > mst%40kernel.org/t.mbox.gz
> > >
> > >   how about a special command that is used when device would
> > >   normally send INT#x? it can also return ISR to reduce latency.
> > >
> > In response to your above suggestion of AQ command, I suggested the
> > eventq that contains the isr_status that reduces latency as you suggest.
> 
> I don't see why we need to keep adding queues though.
> Just use one of admin queues.
> 
The admin queue is in cmd->rsp mode.
The purpose is different here: a device wants to asynchronously notify the
interrupt for each VF, like a nic rq.
Filling up the AQ with these commands is sub-optimal for those VFs that may 
generate an interrupt.

> > An eventq is a variation of it, where device can keep reporting the events
> without doing the extra mapping and without too many commands.
> 
> I don't get the difference then. The format you showed seems very close to
> admin command. What is the difference? How do you avoid the need to add a
> command per VF using INTx#?
> 
The main difference is that it behaves like a NIC RQ.
The driver posts empty descriptors, and the device reports the consumed entries.
A specific entry is not attached to any specific VF.
The eventq_entry structure is written by the device for the VF that fired the
interrupt.

> How do you avoid the need to add a command per VF using INTx#?
An empty buffer is posted pointing to an eventq_entry.
Total number of buffers posted = number_of_vfs in INTx mode.
So an entry is not attached to a specific VF; entries sit in a pool (like a NIC RQ).
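
A minimal sketch of the driver side, assuming the eventq_entry layout shown
above (the helper names here are hypothetical, not an existing API):

/* Pre-post one empty entry per VF; the device fills in group_identifier
 * and isr_status for whichever VF fires INTx, NIC-RQ style. */
for (i = 0; i < num_vfs; i++) {
        struct eventq_entry *e = alloc_zeroed(sizeof(*e));    /* hypothetical */

        eventq_add_inbuf(eventq, e, sizeof(*e));              /* hypothetical */
}
eventq_kick(eventq);                                          /* hypothetical */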

> > Intel CPUs have 256 per core (per vCPU), so that is really a lot.
> > One needs to connect a lot more devices to the CPU to run out of them.
> > So yes, I would like to try having the command fail.
> 
> On the order of 128 functions then for a 1-vCPU VM. You were previously talking
> about tens of 1000s of functions as justification for avoiding config space.
> 
Will try.

> 
> > > Do specific customers event use guests with msi-x disabled? Maybe no.
> > > Does anyone use virtio with msi-x disabled? Most likely yes.
> > I just feel that INTx emulation is an extremely rare/narrow case for some
> > applications that may never find its use on hw based devices.
> 
> If we use a dedicated command for this, I guess devices can just avoid
> implementing the command if they do not feel like it?
> 
I didn't follow. The guest device?
Do you mean guest driver?

> > A driver will easily be able to fail the call on INTx configuration, failing
> > the guest.
> 
> There's no configuration - INTx is the default - and no way to fail 
> gracefully for
> legacy. That is one of the things we should fix, at least hypervisor should 
> be able
> to detect failures.
For sure the hypervisor can detect the failure of the INTx configuration done by
the guest and fail the call for this config.

What we are discussing is not legacy specific; it applies to 1.x as well, sadly.
So we don't need to mix INTx support into the register emulation proposal; it is
better to handle it in a follow-up proposal.
I also have some patches to improve driver resiliency so it can work without
INTx when the guest is not involved.

And INTx can still work from the system and device point of view, because INTx
generation does not carry a function id in it.
Existing PCIe INTx can work if the PCI PF device can trigger INTx for the VFs.
So the eventq scheme is still optional and orthogonal to legacy.

I would like to discuss it separately, outside of the legacy proposal.

-
To unsubscribe, e-mail: virtio-dev-unsubscr...@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-h...@lists.oasis-open.org



Re: [virtio-dev] Re: [RFC PATCH v6] virtio-video: Add virtio video device specification

2023-05-16 Thread Alexander Gordeev

On 12.05.23 06:09, Alexandre Courbot wrote:

On Thu, May 11, 2023 at 5:50 PM Alexander Gordeev
 wrote:


On 08.05.23 06:55, Alexandre Courbot wrote:

On Fri, May 5, 2023 at 8:55 PM Alexander Gordeev
 wrote:


On 03.05.23 16:04, Cornelia Huck wrote:

On Fri, Apr 28 2023, Alexander Gordeev  
wrote:


On 27.04.23 15:16, Alexandre Courbot wrote:

But in any case, that's irrelevant to the guest-host interface, and I
think a big part of the disagreement stems from the misconception that
V4L2 absolutely needs to be used on either the guest or the host,
which is absolutely not the case.


I understand this, of course. I'm arguing, that it is harder to
implement it, get it straight and then maintain it over years. Also it
brings limitations, that sometimes can be workarounded in the virtio
spec, but this always comes at a cost of decreased readability and
increased complexity. Overall it looks clearly as a downgrade compared
to virtio-video for our use-case. And I believe it would be the same for
every developer, that has to actually implement the spec, not just do
the pass through. So if we think of V4L2 UAPI pass through as a
compatibility device (which I believe it is), then it is fine to have
both and keep improving the virtio-video, including taking the best
ideas from the V4L2 and overall using it as a reference to make writing
the driver simpler.


Let me jump in here and ask another question:

Imagine that, some years in the future, somebody wants to add a virtio
device for handling video encoding/decoding to their hypervisor.

Option 1: There are different devices to chose from. How is the person
implementing this supposed to pick a device? They might have a narrow
use case, where it is clear which of the devices is the one that needs to
be supported; but they also might have multiple, diverse use cases, and
end up needing to implement all of the devices.


I think in this case virtio-v4l2 should be used as a compatibility
device exclusively. This means discouraging any further growth of its
complexity with more patches to the spec. virtio-video should eventually
cover all the use-cases of V4L2, so I think it is reasonable to use it
in both complex use-cases and in simple use-cases where there is no
decoder/encoder V4L2 device on the host.


Option 2: There is one device with various optional features. The person
implementing this can start off with a certain subset of features
depending on their expected use cases, and add to it later, if needed;
but the upfront complexity might be too high for specialized use cases.


I don't see that many negotiable features we can provide for a
decoder/encoder device - at least not many that are not considered
basic (like guest buffers). In terms of provided features for codecs,
virtio-video and virtio-v4l2 are essentially equivalent.


Actually I see a lot of potential in using the virtio feature flag
negotiation for virtio-video:

1. We already have some feature flags related to memory management.

2. I think it would be great to take V4L2 controls negotiation and turn
it into feature flag negotiation. I really like this idea and I'd
like to implement it. Not all the controls at once, of course. Still it
would be very easy to port more given the existing process. They
correspond well enough to each other, I think. This way we don't need to
introduce something like VIDIOC_QUERYCTRL/VIDIOC_QUERY_EXT_CTRL, we
don't need two mechanisms for feature negotiation (like it would be
with virtio-v4l2, right?), and all the features would be in one place.
Then we can directly reference some enums from V4L2, like
v4l2_mpeg_video_h264_profile or v4l2_mpeg_video_h264_level. That's what
I call taking the best from V4L2. (A rough sketch follows this list.)

3. We can have things, that V4L2 doesn't support in their stateful UAPI.
For example, dequeuing output buffers in decoder order.
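
A rough sketch of point 2, with bit numbers and names invented purely for
illustration (they are not taken from any spec draft):

/* Hypothetical virtio-video feature bits standing in for V4L2-style
 * control/capability queries, so support is learned during feature
 * negotiation instead of via QUERYCTRL-like requests. */
#define VIRTIO_VIDEO_F_EXAMPLE_H264_DECODE        (1ULL << 32)
#define VIRTIO_VIDEO_F_EXAMPLE_H264_HIGH_PROFILE  (1ULL << 33)

/* Driver side: accept only the subset both sides understand. */
negotiated_features = device_features & driver_features;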


... provided the host can do that (i.e. has a stateless decoder
interface). What is the use-case for this btw?


We have discussed this already. All the relevant quotes can be seen
closer to the end of the email here:
https://markmail.org/message/skotipxlqfiijj7c#query:+page:1+mid:hvkyxsj4tjyq56hj+state:results


The question is more: do we want a decoder/encoder specification and
another one for cameras or do we want something that covers video
devices in general. And is it desirable to reuse an already existing
protocol for communication between a non-privileged entity and a
privileged one (V4L2) or define our own?


Please note that virtio-video could be extended to support everything
V4L2 does in the future, including cameras if necessary. So it can cover
video devices in general too.


That's a very, very bold claim, and if you do you will inevitably need
to add things that are not relevant to the codec case, which is your
complaint about V4L2. Not to mention the many new pages of spec that
this will require.


Hmm, you're right. My statement was not specific enough. Indeed V4L2
does a lot of stuff. I'll rephrase.
I think

[virtio-dev] Re: [libcamera-devel] [virtio-dev] Re: [RFC PATCH v6] virtio-video: Add virtio video device specification

2023-05-16 Thread Alexander Gordeev

Hello Laurent,

+CC: Andrii

On 06.05.23 10:16, Laurent Pinchart wrote:

I'm also CC'ing the linux-me...@vger.kernel.org mailing list for these
discussions, I'm sure there are folks there who are interested in codec
and camera virtualization.

On Sat, May 06, 2023 at 11:12:29AM +0300, Laurent Pinchart via libcamera-devel 
wrote:

On Fri, May 05, 2023 at 04:55:33PM +0100, Alex Bennée via libcamera-devel wrote:

Kieran Bingham writes:

Quoting Alexander Gordeev (2023-05-05 10:57:29)

On 03.05.23 17:53, Cornelia Huck wrote:

On Wed, May 03 2023, Alex Bennée  wrote:

Cornelia Huck  writes:

On Fri, Apr 28 2023, Alexander Gordeev  
wrote:

On 27.04.23 15:16, Alexandre Courbot wrote:

But in any case, that's irrelevant to the guest-host interface, and I
think a big part of the disagreement stems from the misconception that
V4L2 absolutely needs to be used on either the guest or the host,
which is absolutely not the case.


I understand this, of course. I'm arguing, that it is harder to
implement it, get it straight and then maintain it over years. Also it
brings limitations, that sometimes can be workarounded in the virtio
spec, but this always comes at a cost of decreased readability and
increased complexity. Overall it looks clearly as a downgrade compared
to virtio-video for our use-case. And I believe it would be the same for
every developer, that has to actually implement the spec, not just do
the pass through. So if we think of V4L2 UAPI pass through as a
compatibility device (which I believe it is), then it is fine to have
both and keep improving the virtio-video, including taking the best
ideas from the V4L2 and overall using it as a reference to make writing
the driver simpler.


Let me jump in here and ask another question:

Imagine that, some years in the future, somebody wants to add a virtio
device for handling video encoding/decoding to their hypervisor.

Option 1: There are different devices to chose from. How is the person
implementing this supposed to pick a device? They might have a narrow
use case, where it is clear which of the devices is the one that needs to
be supported; but they also might have multiple, diverse use cases, and
end up needing to implement all of the devices.

Option 2: There is one device with various optional features. The person
implementing this can start off with a certain subset of features
depending on their expected use cases, and add to it later, if needed;
but the upfront complexity might be too high for specialized use cases.

Leaving concrete references to V4L2 out of the picture, we're currently
trying to decide whether our future will be more like Option 1 or Option
2, with their respective trade-offs.

I'm slightly biased towards Option 2; does it look feasible at all, or
am I missing something essential here? (I had the impression that some
previous confusion had been cleared up; apologies in advance if I'm
misrepresenting things.)

I'd really love to see some kind of consensus for 1.3, if at all
possible :)


I think feature discovery and extensibility is a key part of the VirtIO
paradigm which is why I find the virtio-v4l approach limiting. By
pegging the device to a Linux API we effectively limit the growth of the
device specification to as fast as the Linux API changes. I'm not fully
immersed in v4l but I don't think it is seeing any additional features
developed for it and its limitations for camera are one of the reasons
stuff is being pushed to userspace in solutions like libcamera:

How is libcamera different from V4L2?

We see libcamera as a continuation of V4L2. One that can more easily
handle the recent advances in hardware design. As embedded cameras have
developed, all of the complexity has been pushed on to the developers.
With libcamera, all of that complexity is simplified and a single model
is presented to application developers.


Ok, that is interesting; thanks for the information.



That said its not totally our experience to have virtio devices act as
simple pipes for some higher level protocol. The virtio-gpu spec says
very little about the details of how 3D devices work and simply offers
an opaque pipe to push a (potentially proprietary) command stream to the
back end. As far as I'm aware the proposals for Vulkan and Wayland
device support doesn't even offer a feature bit but simply changes the
graphics stream type in the command packets.

We could just offer a VIRTIO_VIDEO_F_V4L feature bit, document it as
incompatible with other feature bits and make that the baseline
implementation but it's not really in the spirit of what VirtIO is
trying to achieve.


I'd not be in favour of an incompatible feature flag,
either... extensions are good, but conflicting features is something
that I'd like to avoid.

So, given that I'd still prefer to have a single device: How well does
the proposed virtio-video device map to a Linux driver implementation
that hooks into V4L2?


IMO it hooks into V4L2 pretty well. And I'm going

Re: [virtio-dev] Re: [RFC PATCH v6] virtio-video: Add virtio video device specification

2023-05-16 Thread Alexander Gordeev

On 05.05.23 17:55, Alex Bennée wrote:


Kieran Bingham  writes:


Hi All,

Coming in late, thanks to lei/lore spotting the libcamera keyword.

+ Cc: libcamera-devel to raise awareness of the discussion there.

Quoting Alexander Gordeev (2023-05-05 10:57:29)

On 03.05.23 17:53, Cornelia Huck wrote:

On Wed, May 03 2023, Alex Bennée  wrote:


Cornelia Huck  writes:


On Fri, Apr 28 2023, Alexander Gordeev  
wrote:


On 27.04.23 15:16, Alexandre Courbot wrote:

But in any case, that's irrelevant to the guest-host interface, and I
think a big part of the disagreement stems from the misconception that
V4L2 absolutely needs to be used on either the guest or the host,
which is absolutely not the case.


I understand this, of course. I'm arguing, that it is harder to
implement it, get it straight and then maintain it over years. Also it
brings limitations, that sometimes can be workarounded in the virtio
spec, but this always comes at a cost of decreased readability and
increased complexity. Overall it looks clearly as a downgrade compared
to virtio-video for our use-case. And I believe it would be the same for
every developer, that has to actually implement the spec, not just do
the pass through. So if we think of V4L2 UAPI pass through as a
compatibility device (which I believe it is), then it is fine to have
both and keep improving the virtio-video, including taking the best
ideas from the V4L2 and overall using it as a reference to make writing
the driver simpler.


Let me jump in here and ask another question:

Imagine that, some years in the future, somebody wants to add a virtio
device for handling video encoding/decoding to their hypervisor.

Option 1: There are different devices to chose from. How is the person
implementing this supposed to pick a device? They might have a narrow
use case, where it is clear which of the devices is the one that needs to
be supported; but they also might have multiple, diverse use cases, and
end up needing to implement all of the devices.

Option 2: There is one device with various optional features. The person
implementing this can start off with a certain subset of features
depending on their expected use cases, and add to it later, if needed;
but the upfront complexity might be too high for specialized use cases.

Leaving concrete references to V4L2 out of the picture, we're currently
trying to decide whether our future will be more like Option 1 or Option
2, with their respective trade-offs.

I'm slightly biased towards Option 2; does it look feasible at all, or
am I missing something essential here? (I had the impression that some
previous confusion had been cleared up; apologies in advance if I'm
misrepresenting things.)

I'd really love to see some kind of consensus for 1.3, if at all
possible :)


I think feature discovery and extensibility is a key part of the VirtIO
paradigm which is why I find the virtio-v4l approach limiting. By
pegging the device to a Linux API we effectively limit the growth of the
device specification to as fast as the Linux API changes. I'm not fully
immersed in v4l but I don't think it is seeing any additional features
developed for it and its limitations for camera are one of the reasons
stuff is being pushed to userspace in solutions like libcamera:

How is libcamera different from V4L2?

We see libcamera as a continuation of V4L2. One that can more easily
handle the recent advances in hardware design. As embedded cameras have
developed, all of the complexity has been pushed on to the developers.
With libcamera, all of that complexity is simplified and a single model
is presented to application developers.


Ok, that is interesting; thanks for the information.



That said its not totally our experience to have virtio devices act as
simple pipes for some higher level protocol. The virtio-gpu spec says
very little about the details of how 3D devices work and simply offers
an opaque pipe to push a (potentially proprietary) command stream to the
back end. As far as I'm aware the proposals for Vulkan and Wayland
device support doesn't even offer a feature bit but simply changes the
graphics stream type in the command packets.

We could just offer a VIRTIO_VIDEO_F_V4L feature bit, document it as
incompatible with other feature bits and make that the baseline
implementation but it's not really in the spirit of what VirtIO is
trying to achieve.


I'd not be in favour of an incompatible feature flag,
either... extensions are good, but conflicting features is something
that I'd like to avoid.

So, given that I'd still prefer to have a single device: How well does
the proposed virtio-video device map to a Linux driver implementation
that hooks into V4L2?


IMO it hooks into V4L2 pretty well. And I'm going to spend the next few
months making the existing driver fully V4L2 compliant. If this goal
requires changing the spec, then we still have time to do that. I don't
expect a lot of problems on this side. There might be problems wi

[virtio-dev] Re: [PATCH] content: Replace guest OS with driver

2023-05-16 Thread Cornelia Huck
On Tue, May 16 2023, "Michael S. Tsirkin"  wrote:

> On Tue, May 16, 2023 at 10:24:12AM +0200, Cornelia Huck wrote:
>> On Tue, May 16 2023, "Michael S. Tsirkin"  wrote:
>> 
>> > On Tue, May 16, 2023 at 06:01:39AM +0300, Parav Pandit wrote:
>> >> diff --git a/content.tex b/content.tex
>> >> index 9df81b8..417d476 100644
>> >> --- a/content.tex
>> >> +++ b/content.tex
>> >> @@ -26,10 +26,10 @@ \section{\field{Device Status} Field}\label{sec:Basic 
>> >> Facilities of a Virtio Dev
>> >>  following bits are defined (listed below in the order in which
>> >>  they would be typically set):
>> >>  \begin{description}
>> >> -\item[ACKNOWLEDGE (1)] Indicates that the guest OS has found the
>> >> +\item[ACKNOWLEDGE (1)] Indicates that the driver has found the
>> >>device and recognized it as a valid virtio device.
>> >>  
>> >> -\item[DRIVER (2)] Indicates that the guest OS knows how to drive the
>> >> +\item[DRIVER (2)] Indicates that the driver knows how to drive the
>> >>device.
>> >>\begin{note}
>> >>  There could be a significant (or infinite) delay before setting
>> >
>> > Actually, there is a subtle difference here that this is losing.
>> > "guest OS" really refers to e.g. Linux virtio core code here.
>> >
>> >
>> > ACKNOWLEDGE and DRIVER are used by virtio core.
>> >
>> > ACKNOWLEDGE tells you virtio core attached to device, and DRIVER
>> > tells you core found a device specific driver.
>> >
>> >
>> >
>> > If you really want to make things better, let's find a way to explain
>> > all this.
>> 
>> Agreed, this is a really old part of the spec, and likely had been
>> written with the Linux device probing sequence in mind.
>> 
>> Basically, we want to distinguish between "something on the driver side
>> has discovered the device" and "something on the driver side knows how
>> to drive this specific device". If we consider "driver" as a catch-all
>> of the whole thing talking to a device, we need to be extra descriptive
>> (and we can add examples, as this is a non-normative section.)
>> 
>> For ACKNOWLEDGE, maybe "indicates that the driver has discovered the
>> device and recognized it as a valid virtio device" (i.e. mostly what we
>> have now), but also add "For example, this can indicate that non-device
>> specific virtio driver code has attached to the device."
>> 
>> For DRIVER, maybe "indicates that the driver has discovered that it
>> knows how to drive this device specifically. For example, this can
>> indicate that device-specific driver code has attached to the device."
>
>
> I feel examples are a weak way to document it - if we can not say
> what this is specifically, what purpose does it serve?

Not really documenting, but rather illustrating it.

> Actually, we do have a distinction, between transport and device type.
> Can't we use that? It seems more consistent than "non-device
> specific" and "device specific".

What do we consider to be the "transport"? {pci,mmio,ccw} + virtio ring,
or only {pci,mmio,ccw}? This might be confusing, since at least in the
Linux case, it is the virtio ring/generic code that sets ACKNOWLEDGE,
while the pci/mmio/ccw code is not fiddling with the status at that
stage.

> SO I propose:
>
> \item[ACKNOWLEDGE (1)] Indicates that a transport driver has found the
> device and recognized it as a valid virtio device transport.

What is a "virtio device transport"?

>   
> \item[DRIVER (2)] Indicates that a device type specific driver was found
> and will attempt to attach to the device.
>
>
>
> BTW somewhat related, I would maybe fix
> device-types/mem/description.tex:change
> not to say "device driver", just "driver" for brevity.

That one makes sense to me.


-
To unsubscribe, e-mail: virtio-dev-unsubscr...@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-h...@lists.oasis-open.org



[virtio-dev] Re: [PATCH] content: Replace guest OS with driver

2023-05-16 Thread Michael S. Tsirkin
On Tue, May 16, 2023 at 10:24:12AM +0200, Cornelia Huck wrote:
> On Tue, May 16 2023, "Michael S. Tsirkin"  wrote:
> 
> > On Tue, May 16, 2023 at 06:01:39AM +0300, Parav Pandit wrote:
> >> diff --git a/content.tex b/content.tex
> >> index 9df81b8..417d476 100644
> >> --- a/content.tex
> >> +++ b/content.tex
> >> @@ -26,10 +26,10 @@ \section{\field{Device Status} Field}\label{sec:Basic 
> >> Facilities of a Virtio Dev
> >>  following bits are defined (listed below in the order in which
> >>  they would be typically set):
> >>  \begin{description}
> >> -\item[ACKNOWLEDGE (1)] Indicates that the guest OS has found the
> >> +\item[ACKNOWLEDGE (1)] Indicates that the driver has found the
> >>device and recognized it as a valid virtio device.
> >>  
> >> -\item[DRIVER (2)] Indicates that the guest OS knows how to drive the
> >> +\item[DRIVER (2)] Indicates that the driver knows how to drive the
> >>device.
> >>\begin{note}
> >>  There could be a significant (or infinite) delay before setting
> >
> > Actually, there is a subtle difference here that this is losing.
> > "guest OS" really refers to e.g. Linux virtio core code here.
> >
> >
> > ACKNOWLEDGE and DRIVER are used by virtio core.
> >
> > ACKNOWLEDGE tells you virtio core attached to device, and DRIVER
> > tells you core found a device specific driver.
> >
> >
> >
> > If you really want to make things better, let's find a way to explain
> > all this.
> 
> Agreed, this is a really old part of the spec, and likely had been
> written with the Linux device probing sequence in mind.
> 
> Basically, we want to distinguish between "something on the driver side
> has discovered the device" and "something on the driver side knows how
> to drive this specific device". If we consider "driver" as a catch-all
> of the whole thing talking to a device, we need to be extra descriptive
> (and we can add examples, as this is a non-normative section.)
> 
> For ACKNOWLEDGE, maybe "indicates that the driver has discovered the
> device and recognized it as a valid virtio device" (i.e. mostly what we
> have now), but also add "For example, this can indicate that non-device
> specific virtio driver code has attached to the device."
> 
> For DRIVER, maybe "indicates that the driver has discovered that it
> knows how to drive this device specifically. For example, this can
> indicate that device-specific driver code has attached to the device."


I feel examples are a weak way to document it - if we can not say
what this is specifically, what purpose does it serve?
Actually, we do have a distinction, between transport and device type.
Can't we use that? It seems more consistent than "non-device
specific" and "device specific".


So I propose:

\item[ACKNOWLEDGE (1)] Indicates that a transport driver has found the
device and recognized it as a valid virtio device transport.
  
\item[DRIVER (2)] Indicates that a device type specific driver was found
and will attempt to attach to the device.
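
As a purely illustrative sketch of how that split could look on the driver
side (the status values are the ones from the spec; the function and device
handle here are hypothetical, not Linux code):

/* Transport driver: recognized a valid virtio device. */
add_status(vdev, 1 /* ACKNOWLEDGE */);

/* Device type specific driver: decided it will attempt to attach. */
add_status(vdev, 2 /* DRIVER */);
/* ... feature negotiation, FEATURES_OK and DRIVER_OK follow ... */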



BTW somewhat related, I would maybe fix
device-types/mem/description.tex:change
not to say "device driver", just "driver" for brevity.





> Probably need some more overhaul :) Not an editorial change in any case.
> 
> >> @@ -473,13 +473,13 @@ \section{Device Initialization}\label{sec:General 
> >> Initialization And Device Oper
> >>  \begin{enumerate}
> >>  \item Reset the device.
> >>  
> >> -\item Set the ACKNOWLEDGE status bit: the guest OS has noticed the device.
> >> +\item Set the ACKNOWLEDGE status bit: the driver has noticed the device.
> >>  
> >> -\item Set the DRIVER status bit: the guest OS knows how to drive the 
> >> device.
> >> +\item Set the DRIVER status bit: the driver knows how to drive the device.
> >
> > besides the above, "drivers knows how to drive" sounds bad.
> >
> >>  \item\label{itm:General Initialization And Device Operation /
> >>  Device Initialization / Read feature bits} Read device feature bits, and 
> >> write the subset of feature bits
> >> -   understood by the OS and driver to the device.  During this step the
> >> +   understood by the driver to the device.  During this step the
> >
> > Again the "the OS" here referred to core virtio (e.g. ring features).
> > Less of a problem to remove but if we come up with
> > a better terminology for ACKNOWLEDGE/DRIVER then I guess we can use it
> > here, too.
> 
> Hm, I'm not sure how far we need to distinguish between generic and
> device-specific features in this case. The "driver" as the whole entity
> driving the device needs to decide on the subset; at this stage, it does
> not really matter which parts of the driver code accepted which
> feature. We probably want to be explicit that features are ring,
> transport, and device features, though.

OK.

-- 
MST


-
To unsubscribe, e-mail: virtio-dev-unsubscr...@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-h...@lists.oasis-open.org



[virtio-dev] Re: [PATCH] content: Replace guest OS with driver

2023-05-16 Thread Cornelia Huck
On Tue, May 16 2023, "Michael S. Tsirkin"  wrote:

> On Tue, May 16, 2023 at 06:01:39AM +0300, Parav Pandit wrote:
>> diff --git a/content.tex b/content.tex
>> index 9df81b8..417d476 100644
>> --- a/content.tex
>> +++ b/content.tex
>> @@ -26,10 +26,10 @@ \section{\field{Device Status} Field}\label{sec:Basic 
>> Facilities of a Virtio Dev
>>  following bits are defined (listed below in the order in which
>>  they would be typically set):
>>  \begin{description}
>> -\item[ACKNOWLEDGE (1)] Indicates that the guest OS has found the
>> +\item[ACKNOWLEDGE (1)] Indicates that the driver has found the
>>device and recognized it as a valid virtio device.
>>  
>> -\item[DRIVER (2)] Indicates that the guest OS knows how to drive the
>> +\item[DRIVER (2)] Indicates that the driver knows how to drive the
>>device.
>>\begin{note}
>>  There could be a significant (or infinite) delay before setting
>
> Actually, there is a subtle difference here that this is losing.
> "guest OS" really refers to e.g. Linux virtio core code here.
>
>
> ACKNOWLEDGE and DRIVER are used by virtio core.
>
> ACKNOWLEDGE tells you virtio core attached to device, and DRIVER
> tells you core found a device specific driver.
>
>
>
> If you really want to make things better, let's find a way to explain
> all this.

Agreed, this is a really old part of the spec, and likely had been
written with the Linux device probing sequence in mind.

Basically, we want to distinguish between "something on the driver side
has discovered the device" and "something on the driver side knows how
to drive this specific device". If we consider "driver" as a catch-all
of the whole thing talking to a device, we need to be extra descriptive
(and we can add examples, as this is a non-normative section.)

For ACKNOWLEDGE, maybe "indicates that the driver has discovered the
device and recognized it as a valid virtio device" (i.e. mostly what we
have now), but also add "For example, this can indicate that non-device
specific virtio driver code has attached to the device."

For DRIVER, maybe "indicates that the driver has discovered that it
knows how to drive this device specifically. For example, this can
indicate that device-specific driver code has attached to the device."

Probably need some more overhaul :) Not an editorial change in any case.

>> @@ -473,13 +473,13 @@ \section{Device Initialization}\label{sec:General 
>> Initialization And Device Oper
>>  \begin{enumerate}
>>  \item Reset the device.
>>  
>> -\item Set the ACKNOWLEDGE status bit: the guest OS has noticed the device.
>> +\item Set the ACKNOWLEDGE status bit: the driver has noticed the device.
>>  
>> -\item Set the DRIVER status bit: the guest OS knows how to drive the device.
>> +\item Set the DRIVER status bit: the driver knows how to drive the device.
>
> besides the above, "drivers knows how to drive" sounds bad.
>
>>  \item\label{itm:General Initialization And Device Operation /
>>  Device Initialization / Read feature bits} Read device feature bits, and 
>> write the subset of feature bits
>> -   understood by the OS and driver to the device.  During this step the
>> +   understood by the driver to the device.  During this step the
>
> Again the "the OS" here referred to core virtio (e.g. ring features).
> Less of a problem to remove but if we come up with
> a better terminology for ACKNOWLEDGE/DRIVER then I guess we can use it
> here, too.

Hm, I'm not sure how far we need to distinguish between generic and
device-specific features in this case. The "driver" as the whole entity
driving the device needs to decide on the subset; at this stage, it does
not really matter which parts of the driver code accepted which
feature. We probably want to be explicit that features are ring,
transport, and device features, though.


-
To unsubscribe, e-mail: virtio-dev-unsubscr...@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-h...@lists.oasis-open.org