[virtio-dev] RE: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-11 Thread Parav Pandit


> From: Jason Wang 
> Sent: Wednesday, April 12, 2023 1:38 AM

> > Modern device says VERSION_1 must be offered and must be negotiated by
> driver.
> > Legacy has MAC as a RW area. (hypervisor can do it).
> > Reset flow differs between legacy and modern.
> 
> Just to make sure we're on the same page. We're talking in the context of
> mediation. Without mediation, your proposal can't work.
>
Right.
 
> So in this case, the guest driver is not talking with the device directly. 
> Qemu
> needs to trap whatever it wants to achieve the
> mediation:
> 
I prefer to avoid picking a specific sw component here, but yes. QEMU can trap.

> 1) It's perfectly fine that Qemu negotiated VERSION_1 but presented a
> mediated legacy device to guests.
Right, but if VERSION_1 is negotiated, the device will work as V_1 with the 12B
virtio_net_hdr.
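
For reference, the two virtio-net header layouts in question, as a sketch based
on the virtio spec; the modern header adds num_buffers, making it 12 bytes
whenever VERSION_1 is negotiated:

#include <stdint.h>

/* Legacy virtio-net header: 10 bytes (num_buffers exists only when
 * VIRTIO_NET_F_MRG_RXBUF was negotiated). Multi-byte fields are le16. */
struct virtio_net_hdr_legacy {
    uint8_t  flags;
    uint8_t  gso_type;
    uint16_t hdr_len;
    uint16_t gso_size;
    uint16_t csum_start;
    uint16_t csum_offset;
};

/* Modern (VERSION_1) header: num_buffers is always present, so 12 bytes. */
struct virtio_net_hdr_v1 {
    uint8_t  flags;
    uint8_t  gso_type;
    uint16_t hdr_len;
    uint16_t gso_size;
    uint16_t csum_start;
    uint16_t csum_offset;
    uint16_t num_buffers;
};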

> 2) For MAC and Reset, Qemu can trap and do anything it wants.
>
The idea is not to poke into the fields even though such sw can.
MAC is RW in legacy.
MAC is RO in 1.x.

So QEMU cannot turn a RO register into a RW one.

The proposed solution in this series enables it and avoids per-field sw
interpretation and mediation in parsing values etc.

> > The PCI device exposed is transitional to the guest VM, so it can do legacy 
> > as
> well as newer features.
> > And if BAR 0 is hard coded, it may not be able to support features that may
> need an additional BAR.
> 
> This part I don't understand, you can just put existing modern capabilities in
> BAR0, then everything is fine.
>
Not sure I follow.
Maybe a step back.

What is proposed here is that:
a. legacy registers are emulated as MMIO in a BAR.
b. This can be either BAR0 or some other BAR.

Your question was: why this flexibility?

The reason is:
a. if a device prefers to implement only two BARs, it can do so and have a
window for these 60+ config registers in an existing BAR.
b. if a device prefers to implement a new BAR dedicated to legacy register
emulation, that is fine too.

A mediating sw will be able to forward them regardless.
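
As a rough illustration of that forwarding model (a sketch, not from the patch
series; the window layout and helper names are hypothetical), the mediating sw
relays each trapped legacy access into the device's MMR window instead of
interpreting individual fields:

#include <stdint.h>

/* Hypothetical descriptor for the legacy register window the device
 * exposes inside a memory BAR (either BAR0 or another BAR, per above). */
struct legacy_mmr_window {
    volatile uint8_t *base;   /* mapped BAR address plus window offset */
    uint32_t          size;   /* covers the 60+ legacy config registers */
};

/* The guest touched legacy I/O BAR0 at 'offset'; the mediating sw traps the
 * access and forwards it verbatim -- no per-field parsing or value rewriting. */
static void forward_legacy_write(struct legacy_mmr_window *w,
                                 uint32_t offset, uint32_t val, int len)
{
    switch (len) {
    case 1: *(volatile uint8_t  *)(w->base + offset) = (uint8_t)val;  break;
    case 2: *(volatile uint16_t *)(w->base + offset) = (uint16_t)val; break;
    case 4: *(volatile uint32_t *)(w->base + offset) = val;           break;
    }
}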

> > Right, it doesn't. But the spec shouldn't say BAR0 is only for legacy MMIO
> emulation; that would prevent other BAR0 usage.
> 
> How can it be prevented? Can you give me an example?

I mean to say that if we write spec text like the below,

A device exposes BAR 0 of size X bytes for supporting legacy configuration and
device specific registers as a memory mapped region.

The above line would prevent using BAR0 for anything beyond legacy register emulation.



[virtio-dev] RE: [virtio-comment] Re: [PATCH 00/11] Introduce transitional mmr pci device

2023-04-11 Thread Parav Pandit


> From: Michael S. Tsirkin 
> Sent: Wednesday, April 12, 2023 1:24 AM

> You are writing into modern register here. *what* are you writing there?
> with modern register the format depends on features. here you did not
> negotiate any features. 
Q notify content.

> what if NOTIFICATION_DATA is a required feature?
How can it be required when it is not defined in the legacy spec?

> with legacy there's no FEATURES_OK so no way to report failure.  driver 
> barrels
> on, sends wrong data in the kick and hangs.
>
There is no need for FEATURES_OK because the 1.x configuration registers are not
touched.

The device provides the q notify register region for forwarding.
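
Concretely, a legacy kick is just a 16-bit vq index write, so the forwarding
can be a single relayed store (a sketch; the names are hypothetical):

#include <stdint.h>

/* Legacy QueueNotify carries only the 16-bit virtqueue index; the hypervisor
 * relays it into the q notify region the device exposes for forwarding.
 * No 1.x feature negotiation (and hence no NOTIFICATION_DATA) is involved. */
static void forward_queue_notify(volatile uint16_t *notify_reg,
                                 uint16_t vq_index)
{
    *notify_reg = vq_index;   /* same format as legacy: just the vq index */
}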

> 
> Look, I know I proposed this originally. I thought it's a small thing too.
> It was an idea. I am not sure it pans out though. Not all ideas work.

This one does work and we have already been testing it.
The described changes with modern devices are indeed simpler than the original
proposal.






[virtio-dev] Re: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-11 Thread Jason Wang
On Wed, Apr 12, 2023 at 1:30 PM Parav Pandit  wrote:
>
>
>
> > From: Jason Wang 
> > Sent: Wednesday, April 12, 2023 1:15 AM
>
> > Because the spec for the modern device does not allow it. Discussed in these
> > threads.
> >
> > Can you please tell me which part of the spec disallows it? There's a long
> > discussion in the virtualization-list a few months ago about mediating 
> > legacy
> > devices on top of modern. We don't see any blocker, do you?
> >
> Modern device says VERSION_1 must be offered and must be negotiated by
> driver.
> Legacy has MAC as a RW area. (hypervisor can do it).
> Reset flow differs between legacy and modern.

Just to make sure we're on the same page. We're talking in the context
of mediation. Without mediation, your proposal can't work.

So in this case, the guest driver is not talking with the device
directly. Qemu needs to trap whatever it wants to achieve the
mediation:

1) It's perfectly fine that Qemu negotiated VERSION_1 but presented a
mediated legacy device to guests.
2) For MAC and Reset, Qemu can trap and do anything it wants.

>
> > >
> > > > 1) legacy MMIO bar: spec changes, hypervisor mediation
> > > > 2) modern device: no spec changes, hypervisor mediation
> > > >
> > > This question repeats the same discussion that occurred in this patch series.
> > > You might want to refer to it again to avoid repeating it all over.
> >
> > No, I think you miss the point that modern devices could be used for 
> > mediating
> > legacy devices.
> >
> > >
> > > > 1) it's better not invent any of new facilities for legacy
> > > > 2) if legacy is insisted, allow MMIO BAR0 is much simpler and better
> > > You have missed a few emails. :)
> > > The MMIO BAR is proposed here and it is not limited to BAR 0.
> >
> > In the context of mediation, why do you need that flexibility?
> >
> The PCI device exposed is transitional to the guest VM, so it can do legacy 
> as well as newer features.
> And if BAR 0 is hard coded, it may not be able to support features that may 
> need an additional BAR.

This part I don't understand, you can just put existing modern
capabilities in BAR0, then everything is fine.

>
> > > It is left to the device to either map something in an existing BAR or use 
> > > BAR 0, because PCI has only 3 (64-bit) BARs.
> > > A device may want to support legacy and non-legacy functionality both at 
> > > the
> > same time.
> >
> > This is perfectly fine, this is how Qemu did:
> >
> > /*
> >  * virtio pci bar layout used by default.
> >  * subclasses can re-arrange things if needed.
> >  *
> >  *   region 0   --  virtio legacy io bar
> >  *   region 1   --  msi-x bar
> >  *   region 2   --  virtio modern io bar (off by default)
> >  *   region 4+5 --  virtio modern memory (64bit) bar
> >  *
> >  */
> >
> > So better not to hard-wire BAR 0.
> >
> > Using BAR0 doesn't prevent you from adding modern capabilities in BAR0, no?
> >
> Right, it doesn't. But the spec shouldn't say BAR0 is only for legacy MMIO 
> emulation; that would prevent other BAR0 usage.

How can it be prevented? Can you give me an example?

Thanks





[virtio-dev] Re: [virtio-comment] Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Jason Wang
On Wed, Apr 12, 2023 at 1:25 PM Michael S. Tsirkin  wrote:
>
> On Wed, Apr 12, 2023 at 12:53:52PM +0800, Jason Wang wrote:
> > On Wed, Apr 12, 2023 at 12:20 PM Michael S. Tsirkin  wrote:
> > >
> > > On Wed, Apr 12, 2023 at 12:07:26PM +0800, Jason Wang wrote:
> > > > On Wed, Apr 12, 2023 at 5:25 AM Michael S. Tsirkin  
> > > > wrote:
> > > > >
> > > > > On Tue, Apr 11, 2023 at 07:01:16PM +, Parav Pandit wrote:
> > > > > >
> > > > > > > From: virtio-dev@lists.oasis-open.org 
> > > > > > >  On
> > > > > > > Behalf Of Jason Wang
> > > > > > > Sent: Monday, April 10, 2023 11:29 PM
> > > > > >
> > > > > > > > However, it is not backward compatible; if the device places 
> > > > > > > > them in an
> > > > > > > > extended capability, it will not work.
> > > > > > > >
> > > > > > >
> > > > > > > It is kind of intended since it is only used for new PCI-E 
> > > > > > > features:
> > > > > > >
> > > > > > New fields in the new extended pci cap area are fine.
> > > > > > Migrating old fields to be present in the new extended pci cap is 
> > > > > > not your intention, right?
> > > > > >
> > > > > > > "
> > > > > > > +The location of the virtio structures that depend on the PCI 
> > > > > > > Express
> > > > > > > +capability are specified using a vendor-specific extended 
> > > > > > > capabilities
> > > > > > > +on the extended capabilities list in PCI Express extended 
> > > > > > > configuration
> > > > > > > +space of the device.
> > > > > > > "
> > > > > > >
> > > > > > > > To make it backward compatible, a device needs to expose 
> > > > > > > > existing
> > > > > > > > structure in legacy area. And extended structure for same 
> > > > > > > > capability
> > > > > > > > in extended pci capability region.
> > > > > > > >
> > > > > > > > In other words, it will have to be in both places.
> > > > > > >
> > > > > > > Then we will run out of config space again?
> > > > > > No.
> > > > > > Only currently defined caps to be placed in two places.
> > > > > > New fields don’t need to be placed in PCI cap, because no driver is 
> > > > > > looking there.
> > > > > >
> > > > > > We probably already discussed this in previous email by now.
> > > > > >
> > > > > > > Otherwise we need to deal with the
> > > > > > > case when existing structures were only placed at extended 
> > > > > > > capability. Michael
> > > > > > > suggest to add a new feature, but the driver may not negotiate 
> > > > > > > the feature
> > > > > > > which requires more thought.
> > > > > > >
> > > > > > Not sure I understand feature bit.
> > > > >
> > > > > This is because we have a concept of dependency between
> > > > > features but not a concept of dependency of feature on
> > > > > capability.
> > > > >
> > > > > > PCI transport fields existence is usually not dependent on upper 
> > > > > > layer protocol.
> > > > > >
> > > > > > > > We may need it even sooner than this because the AQ patch is 
> > > > > > > > expanding
> > > > > > > > the structure located in legacy area.
> > > > > > >
> > > > > > > Just to make sure I understand this, assuming we have adminq, any 
> > > > > > > reason a
> > > > > > > dedicated pcie ext cap is required?
> > > > > > >
> > > > > > No, it was my short-sightedness. I responded right after the above text 
> > > > > > that AQ doesn't need cap extension.
> > > > >
> > > > >
> > > > >
> > > > > You know, thinking about this, I begin to feel that we should
> > > > > require that if at least one extended config exists then
> > > > > all caps present in the regular config are *also*
> > > > > mirrored in the extended config. IOW extended >= regular.
> > > > > The reason is that extended config can be emulated more efficiently
> > > > > (2x less exits).
> > > >
> > > > Any reason for it to get less exits?
> > >
> > > For a variety of reasons having to do with buggy hardware e.g. linux
> > > likes to use cf8/cfc for legacy ranges. 2 accesses are required for each
> > > read/write.  extended space is just 1.
> > >
> >
> > Ok.
> >
> > >
> > > > At least it has not been done in
> > > > current Qemu's emulation. (And do we really care about the performance
> > > > of config space access?)
> > > >
> > > > Thanks
> > >
> > > For boot speed, yes. Not minor 5% things but 2x, sure.
> >
> > If we care about boot speed we should avoid using the PCI layer in the
> > guest completely.
> >
> > Thanks
>
> Whoa. And do what? Add a ton of functionality in a PV way to MMIO?

Probably, we have microVM already. And Hyper-V dropped PCI since Gen2.

> NUMA, MSI, power management ... the list goes on and on.
> If you have pci on the host it is way easier to pass that
> through to guest than do a completely different thing.

It's a balance. If you want functionality, PCI is probably a must. But
if you care only about boot speed, the boot speed is not slowed
down by a single device but by the whole PCI layer.

Thanks

>
> --
> MST
>



[virtio-dev] RE: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-11 Thread Parav Pandit


> From: Jason Wang 
> Sent: Wednesday, April 12, 2023 1:15 AM

> > Because the spec for the modern device does not allow it. Discussed in these
> threads.
> 
> Can you please tell me which part of the spec disallows it? There's a long
> discussion in the virtualization-list a few months ago about mediating legacy
> devices on top of modern. We don't see any blocker, do you?
>
Modern device says VERSION_1 must be offered and must be negotiated by the driver.
Legacy has MAC as a RW area. (hypervisor can do it).
Reset flow differs between legacy and modern.

> >
> > > 1) legacy MMIO bar: spec changes, hypervisor mediation
> > > 2) modern device: no spec changes, hypervisor mediation
> > >
> > This question repeats the same discussion that occurred in this patch series.
> > You might want to refer to it again to avoid repeating it all over.
> 
> No, I think you miss the point that modern devices could be used for mediating
> legacy devices.
> 
> >
> > > 1) it's better not invent any of new facilities for legacy
> > > 2) if legacy is insisted, allow MMIO BAR0 is much simpler and better
> > You have missed a few emails. :)
> > The MMIO BAR is proposed here and it is not limited to BAR 0.
> 
> In the context of mediation, why do you need that flexibility?
> 
The PCI device exposed is transitional to the guest VM, so it can do legacy as 
well as newer features.
And if BAR 0 is hard coded, it may not be able to support features that may 
need an additional BAR.

> > It is left to the device to either map something in an existing BAR or use 
> > BAR 0, because PCI has only 3 (64-bit) BARs.
> > A device may want to support legacy and non-legacy functionality both at the
> same time.
> 
> This is perfectly fine, this is how Qemu did:
> 
> /*
>  * virtio pci bar layout used by default.
>  * subclasses can re-arrange things if needed.
>  *
>  *   region 0   --  virtio legacy io bar
>  *   region 1   --  msi-x bar
>  *   region 2   --  virtio modern io bar (off by default)
>  *   region 4+5 --  virtio modern memory (64bit) bar
>  *
>  */
> 
> > So better not to hard-wire BAR 0.
> 
> Using BAR0 doesn't prevent you from adding modern capabilities in BAR0, no?
>
Right, it doesn't. But the spec shouldn't say BAR0 is only for legacy MMIO 
emulation; that would prevent other BAR0 usage.


[virtio-dev] Re: [virtio-comment] Re: [PATCH 10/11] transport-pci: Use driver notification PCI capability

2023-04-11 Thread Michael S. Tsirkin
On Wed, Apr 12, 2023 at 05:24:57AM +, Parav Pandit wrote:
> 
> 
> > From: virtio-comm...@lists.oasis-open.org On Behalf Of Michael S. Tsirkin
> 
> > > Such registers do not start the VQ data path engines.
> > > The 1.x spec did the right thing in having a dedicated notification region, 
> > > which is
> > something an actual pci hw can implement.
> > 
> > okay so.. in fact it turns out existing hardware is not really happy to 
> > emulate
> > legacy. it does not really fit that well.
> > so maybe we should stop propagating the legacy interface to modern hardware
> > then.
> It is not about current hw; utilize what hw has to offer and utilize what sw 
> has to offer.
> The key efficiency comes from reusing what 1.x already has to offer.

Just ... no. Either run 0.X or 1.X. This mix opens a ton of corner
cases. NOTIFICATION_DATA is just off the top of my head, there's more
for sure.  It's a can of worms we won't be able to close.

> > Add the legacy net header size feature and be done with it.
> The hypervisor needs to mediate and participate in the whole life cycle of the device.
> 
> One wants to run the block device too. The current proposal with the other changes we 
> discussed covers a wider case.





[virtio-dev] Re: [virtio-comment] Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Michael S. Tsirkin
On Wed, Apr 12, 2023 at 12:53:52PM +0800, Jason Wang wrote:
> On Wed, Apr 12, 2023 at 12:20 PM Michael S. Tsirkin  wrote:
> >
> > On Wed, Apr 12, 2023 at 12:07:26PM +0800, Jason Wang wrote:
> > > On Wed, Apr 12, 2023 at 5:25 AM Michael S. Tsirkin  
> > > wrote:
> > > >
> > > > On Tue, Apr 11, 2023 at 07:01:16PM +, Parav Pandit wrote:
> > > > >
> > > > > > From: virtio-dev@lists.oasis-open.org 
> > > > > >  On
> > > > > > Behalf Of Jason Wang
> > > > > > Sent: Monday, April 10, 2023 11:29 PM
> > > > >
> > > > > > > However, it is not backward compatible; if the device places them 
> > > > > > > in an
> > > > > > > extended capability, it will not work.
> > > > > > >
> > > > > >
> > > > > > It is kind of intended since it is only used for new PCI-E features:
> > > > > >
> > > > > New fields in the new extended pci cap area are fine.
> > > > > Migrating old fields to be present in the new extended pci cap is 
> > > > > not your intention, right?
> > > > >
> > > > > > "
> > > > > > +The location of the virtio structures that depend on the PCI 
> > > > > > Express
> > > > > > +capability are specified using a vendor-specific extended 
> > > > > > capabilities
> > > > > > +on the extended capabilities list in PCI Express extended 
> > > > > > configuration
> > > > > > +space of the device.
> > > > > > "
> > > > > >
> > > > > > > To make it backward compatible, a device needs to expose existing
> > > > > > > structure in legacy area. And extended structure for same 
> > > > > > > capability
> > > > > > > in extended pci capability region.
> > > > > > >
> > > > > > > In other words, it will have to be in both places.
> > > > > >
> > > > > > Then we will run out of config space again?
> > > > > No.
> > > > > Only currently defined caps to be placed in two places.
> > > > > New fields don’t need to be placed in PCI cap, because no driver is 
> > > > > looking there.
> > > > >
> > > > > We probably already discussed this in previous email by now.
> > > > >
> > > > > > Otherwise we need to deal with the
> > > > > > case when existing structures were only placed at extended 
> > > > > > capability. Michael
> > > > > > suggest to add a new feature, but the driver may not negotiate the 
> > > > > > feature
> > > > > > which requires more thought.
> > > > > >
> > > > > Not sure I understand feature bit.
> > > >
> > > > This is because we have a concept of dependency between
> > > > features but not a concept of dependency of feature on
> > > > capability.
> > > >
> > > > > PCI transport fields existence is usually not dependent on upper 
> > > > > layer protocol.
> > > > >
> > > > > > > We may need it even sooner than this because the AQ patch is 
> > > > > > > expanding
> > > > > > > the structure located in legacy area.
> > > > > >
> > > > > > Just to make sure I understand this, assuming we have adminq, any 
> > > > > > reason a
> > > > > > dedicated pcie ext cap is required?
> > > > > >
> > > > > No, it was my short-sightedness. I responded right after the above text that AQ 
> > > > > doesn't need cap extension.
> > > >
> > > >
> > > >
> > > > You know, thinking about this, I begin to feel that we should
> > > > require that if at least one extended config exists then
> > > > all caps present in the regular config are *also*
> > > > mirrored in the extended config. IOW extended >= regular.
> > > > The reason is that extended config can be emulated more efficiently
> > > > (2x less exits).
> > >
> > > Any reason for it to get less exits?
> >
> > For a variety of reasons having to do with buggy hardware e.g. linux
> > likes to use cf8/cfc for legacy ranges. 2 accesses are required for each
> > read/write.  extended space is just 1.
> >
> 
> Ok.
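
For context, the exit-count difference comes from the classic x86 CF8h/CFCh
port-I/O config mechanism versus memory-mapped extended config; a sketch under
standard x86 assumptions (sys/io.h port helpers, ioperm granted):

#include <stdint.h>
#include <sys/io.h>   /* outl()/inl(): x86 port I/O, Linux glibc */

/* Legacy mechanism: one port write to select the address, one port read for
 * the data -- two trapped accesses per config dword. */
static uint32_t cf8_config_read(uint8_t bus, uint8_t dev, uint8_t fn,
                                uint8_t reg)
{
    uint32_t addr = 0x80000000u | ((uint32_t)bus << 16) |
                    ((uint32_t)dev << 11) | ((uint32_t)fn << 8) | (reg & 0xfcu);
    outl(addr, 0xCF8);   /* exit #1: address register */
    return inl(0xCFC);   /* exit #2: data register */
}

/* ECAM/MMCONFIG: the function's 4KB extended config space is memory mapped,
 * so a single MMIO read (one exit) suffices. */
static uint32_t ecam_config_read(volatile uint8_t *fn_base, uint16_t reg)
{
    return *(volatile uint32_t *)(fn_base + reg);
}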
> 
> >
> > > At least it has not been done in
> > > current Qemu's emulation. (And do we really care about the performance
> > > of config space access?)
> > >
> > > Thanks
> >
> > For boot speed, yes. Not minor 5% things but 2x, sure.
> 
> If we care about boot speed we should avoid using the PCI layer in the
> guest completely.
> 
> Thanks

Whoa. And do what? Add a ton of functionality in a PV way to MMIO?
NUMA, MSI, power management ... the list goes on and on.
If you have pci on the host it is way easier to pass that
through to guest than do a completely different thing.

-- 
MST





[virtio-dev] RE: [virtio-comment] Re: [PATCH 10/11] transport-pci: Use driver notification PCI capability

2023-04-11 Thread Parav Pandit



> From: virtio-comm...@lists.oasis-open.org On Behalf Of Michael S. Tsirkin

> > Such registers do not start the VQ data path engines.
> > The 1.x spec did the right thing in having a dedicated notification region, 
> > which is
> something an actual pci hw can implement.
> 
> okay so.. in fact it turns out existing hardware is not really happy to 
> emulate
> legacy. it does not really fit that well.
> so maybe we should stop propagating the legacy interface to modern hardware
> then.
It is not about current hw; utilize what hw has to offer and utilize what sw 
has to offer.
The key efficiency comes from reusing what 1.x already has to offer.

> Add the legacy net header size feature and be done with it.
The hypervisor needs to mediate and participate in the whole life cycle of the device.

One wants to run the block device too. The current proposal with the other changes we 
discussed covers a wider case.




[virtio-dev] Re: [virtio-comment] Re: [PATCH 00/11] Introduce transitional mmr pci device

2023-04-11 Thread Michael S. Tsirkin
On Wed, Apr 12, 2023 at 05:15:40AM +, Parav Pandit wrote:
> 
> 
> > From: virtio-comm...@lists.oasis-open.org On Behalf Of Michael S. Tsirkin
> 
> > New issue I found today:
> > - if the guest disables MSI-X, the host can not disable MSI-X;
> >   some other channel is needed to notify the device about this.
> >
> I will look into it. 
>  
> > Old issues we discussed before today:
> > - reset needs some special handling because real hardware
> >   can not guarantee returning 0 on the 1st read
> When done through the legacy MMR as we discussed, it will.
> 
> > - if guest writes into mac, reusing host mac (which is RO)
> >   will not work, need extra registers
> No. Again, the legacy interface section of the MMR behaves as legacy.
> 
> > - something about notification makes you want to poke
> >   at modern notification register? which of course
> >   is its own can of worms with VIRTIO_F_NOTIFICATION_DATA
> >   changing the format completely.
> There is no NOTIFICATION_DATA with legacy. So it is not applicable.
> The format is the same, with only the vq index.

You are writing into modern register here. *what* are you writing there?
with modern register the format depends on features. here you did not
negotiate any features. what if NOTIFICATION_DATA is a required feature?
with legacy there's no FEATURES_OK so no way to report failure.  driver
barrels on, sends wrong data in the kick and hangs.


Sure, we can add a section about the notification register and come up with
some behaviour. Without even trying to guess how messy or clean that
will be, far from being a small, clean, contained thing, this feature is
sticking its fingers in lots of pies.



Look, I know I proposed this originally. I thought it's a small thing too.
It was an idea. I am not sure it pans out though. Not all ideas work.

-- 
MST





[virtio-dev] Re: [virtio-comment] Re: [PATCH 10/11] transport-pci: Use driver notification PCI capability

2023-04-11 Thread Michael S. Tsirkin
On Wed, Apr 12, 2023 at 05:06:09AM +, Parav Pandit wrote:
> 
> 
> From: virtio-comm...@lists.oasis-open.org On Behalf Of Michael S. Tsirkin
> 
> > > On that occurrence, hypervisor driver forwards the q notification using 
> > > the q
> > notification region which is defined by struct virtio_pci_notify_cap.
> > 
> > What is wrong with forwarding it on the legacy window?
> The legacy window is configuration registers.
> It has the q notification register sandwiched in the middle of all the config 
> registers.
> Such registers do not start the VQ data path engines.
> The 1.x spec did the right thing in having a dedicated notification region, which is 
> something an actual pci hw can implement.

okay so.. in fact it turns out existing hardware is not
really happy to emulate legacy. it does not really fit that well.
so maybe we should stop propagating the legacy interface to
modern hardware then.
Add the legacy net header size feature and be done with it.

-- 
MST





[virtio-dev] RE: [virtio-comment] Re: [PATCH 00/11] Introduce transitional mmr pci device

2023-04-11 Thread Parav Pandit



> From: virtio-comm...@lists.oasis-open.org On Behalf Of Michael S. Tsirkin

> New issue I found today:
> - if the guest disables MSI-X, the host can not disable MSI-X;
>   some other channel is needed to notify the device about this.
>
I will look into it. 
 
> Old issues we discussed before today:
> - reset needs some special handling because real hardware
>   can not guarantee returning 0 on the 1st read
When done through the legacy MMR as we discussed, it will (see the reset sketch after this list).

> - if guest writes into mac, reusing host mac (which is RO)
>   will not work, need extra registers
No. Again, the legacy interface section of the MMR behaves as legacy.

> - something about notification makes you want to poke
>   at modern notification register? which of course
>   is its own can of worms with VIRTIO_F_NOTIFICATION_DATA
>   changing the format completely.
There is no NOTIFICATION_DATA with legacy. So it is not applicable.
The format is the same, with only the vq index.
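
The reset-flow difference referenced above, as a minimal sketch (device_status
semantics per the legacy and 1.x specs; the helper names are hypothetical):

#include <stdint.h>

/* Modern (1.x) reset: the driver writes 0 to device_status and MUST then
 * re-read until the device reports 0, so hardware gets time to finish. */
static void modern_reset(volatile uint8_t *device_status)
{
    *device_status = 0;
    while (*device_status != 0)
        ;   /* poll until reset completes */
}

/* Legacy reset: a single write of 0 with no polling requirement -- the driver
 * assumes the reset is already done, which is why real hardware "can not
 * guarantee returning 0 on the 1st read" without the MMR window's help. */
static void legacy_reset(volatile uint8_t *device_status)
{
    *device_status = 0;
}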




[virtio-dev] Re: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-11 Thread Jason Wang
On Wed, Apr 12, 2023 at 1:01 PM Parav Pandit  wrote:
>
>
>
> > From: Jason Wang 
> > Sent: Wednesday, April 12, 2023 12:51 AM
> >
> > Yes, but this proposal will drag us back to legacy, won't it?
> No. This proposal supports the legacy transitional pci device.
>
> > Or if it is used only in
> > the virtualization environment, what's the advantage of doing it over simply
> > mediating on top of a modern device?
> >
Because the spec for the modern device does not allow it. Discussed in these 
threads.

Can you please tell me which part of the spec disallows it? There's a
long discussion in the virtualization-list a few months ago about
mediating legacy devices on top of modern. We don't see any blocker, do
you?

>
> > 1) legacy MMIO bar: spec changes, hypervisor mediation
> > 2) modern device: no spec changes, hypervisor mediation
> >
> This question repeats the same discussion that occurred in this patch series.
> You might want to refer to it again to avoid repeating it all over.

No, I think you miss the point that modern devices could be used for
mediating legacy devices.

>
> > 1) it's better not invent any of new facilities for legacy
> > 2) if legacy is insisted, allow MMIO BAR0 is much simpler and better
> You have missed a few emails. :)
> The MMIO BAR is proposed here and it is not limited to BAR 0.

In the context of mediation, why do you need that flexibility?

> It is left to the device to either map something in an existing BAR or use BAR 0,
> because PCI has only 3 (64-bit) BARs.
> A device may want to support legacy and non-legacy functionality both at the 
> same time.

This is perfectly fine, this is how Qemu did:

/*
 * virtio pci bar layout used by default.
 * subclasses can re-arrange things if needed.
 *
 *   region 0   --  virtio legacy io bar
 *   region 1   --  msi-x bar
 *   region 2   --  virtio modern io bar (off by default)
 *   region 4+5 --  virtio modern memory (64bit) bar
 *
 */

> So better not to hard-wire BAR 0.

Using BAR0 doesn't prevent you from adding modern capabilities in BAR0, no?

Thanks





[virtio-dev] Re: [PATCH 00/11] Introduce transitional mmr pci device

2023-04-11 Thread Michael S. Tsirkin
On Wed, Apr 12, 2023 at 04:52:09AM +, Parav Pandit wrote:
> 
> > From: Michael S. Tsirkin 
> > Sent: Wednesday, April 12, 2023 12:48 AM
> 
> > Here is a counter proposal:
> > 
> > #define VIRTIO_NET_F_LEGACY_HEADER  52  /* Use the legacy 10 byte
> > header for all packets */
> > 
> > 
> > Yes, sorry to say, you need to emulate legacy pci in software.
> > 
> > With notification hacks, and reset hacks, and legacy interrupt hacks, and
> > writeable mac ...  this thing best belongs in vdpa anyway.
> 
> What? I don't follow.
> Suddenly you label everything as a hack with little explanation.
> 

Again, hacks are not a bad thing, but they are an attempt at reusing things in
unexpected ways.

New issue I found today:
- if the guest disables MSI-X, the host can not disable MSI-X;
  some other channel is needed to notify the device about this.

Old issues we discussed before today:
- reset needs some special handling because real hardware
  can not guarantee returning 0 on the 1st read
- if guest writes into mac, reusing host mac (which is RO)
  will not work, need extra registers
- something about notification makes you want to poke
  at modern notification register? which of course
  is its own can of worms with VIRTIO_F_NOTIFICATION_DATA
  changing the format completely.


-- 
MST





Re: [virtio-dev] [PATCH 00/11] Introduce transitional mmr pci device

2023-04-11 Thread Halil Pasic
On Fri, 31 Mar 2023 01:58:23 +0300
Parav Pandit  wrote:

> Overview:
> -
> The Transitional MMR device is a variant of the transitional PCI device.
> It has its own small Device ID range. It does not have I/O
> region BAR; instead it exposes legacy configuration and device
> specific registers at an offset in the memory region BAR.
> 

I have some conceptual problems with the current wording. I have to
admit the ongoing discussion is for me quite difficult to follow.
Nevertheless I have the feeling most of my concerns were already
voiced, mostly by Michael.

 
[..]
> Please review.
> 

After reading through everything, I have a feeling I have at least a
basic understanding of what you are trying to accomplish. If you
don't mind I would prefer to wait for a v2; maybe things will
settle down a little by then. I feel like me commenting right now
would just muddy the waters even further, without any real merit. But if
you insist, you can get some...

Regards,
Halil

[..]





[virtio-dev] RE: [virtio-comment] Re: [PATCH 10/11] transport-pci: Use driver notification PCI capability

2023-04-11 Thread Parav Pandit



> From: virtio-comm...@lists.oasis-open.org On Behalf Of Michael S. Tsirkin

> > On that occurrence, hypervisor driver forwards the q notification using the 
> > q
> notification region which is defined by struct virtio_pci_notify_cap.
> 
> What is wrong with forwarding it on the legacy window?
The legacy window is configuration registers.
It has the q notification register sandwiched in the middle of all the config 
registers.
Such registers do not start the VQ data path engines.
The 1.x spec did the right thing in having a dedicated notification region, which is 
something an actual pci hw can implement.




[virtio-dev] Re: [PATCH 10/11] transport-pci: Use driver notification PCI capability

2023-04-11 Thread Michael S. Tsirkin
On Wed, Apr 12, 2023 at 04:48:41AM +, Parav Pandit wrote:
> > From: Michael S. Tsirkin 
> > Sent: Wednesday, April 12, 2023 12:43 AM
> > To: Parav Pandit 
> > Cc: virtio-dev@lists.oasis-open.org; coh...@redhat.com; virtio-
> > comm...@lists.oasis-open.org; Shahaf Shuler ;
> > Satananda Burla 
> > Subject: Re: [PATCH 10/11] transport-pci: Use driver notification PCI 
> > capability
> > 
> > On Wed, Apr 12, 2023 at 04:37:05AM +, Parav Pandit wrote:
> > >
> > >
> > > > From: Michael S. Tsirkin 
> > > > Sent: Wednesday, April 12, 2023 12:31 AM
> > > >
> > > > On Fri, Mar 31, 2023 at 01:58:33AM +0300, Parav Pandit wrote:
> > > > > PCI devices support memory BAR regions for performant driver
> > > > > notifications using the notification capability.
> > > > > Enable transitional MMR devices to use it in a simpler manner.
> > > > >
> > > > > Co-developed-by: Satananda Burla 
> > > > > Signed-off-by: Parav Pandit 
> > > > > ---
> > > > >  transport-pci.tex | 28 
> > > > >  1 file changed, 28 insertions(+)
> > > > >
> > > > > diff --git a/transport-pci.tex b/transport-pci.tex index
> > > > > 55a6aa0..4fd9898 100644
> > > > > --- a/transport-pci.tex
> > > > > +++ b/transport-pci.tex
> > > > > @@ -763,6 +763,34 @@ \subsubsection{Notification structure
> > > > > layout}\label{sec:Virtio Transport Options  cap.length >=
> > > > > queue_notify_off * notify_off_multiplier + 4  \end{lstlisting}
> > > > >
> > > > > +\paragraph{Transitional MMR Interface: A note on Notification
> > > > > +Capability} \label{sec:Virtio Transport Options / Virtio Over PCI
> > > > > +Bus / Virtio Structure PCI Capabilities / Notification capability
> > > > > +/ Transitional MMR Interface}
> > > > > +
> > > > > +The transitional MMR device benefits from receiving driver
> > > > > +notifications at the Queue Notification address offered using the
> > > > > +notification capability, rather than via the memory mapped legacy
> > > > > +QueueNotify configuration register.
> > > > > +
> > > > > +Transitional MMR device uses same Queue Notification address
> > > > > +within a BAR for all virtqueues:
> > > > > +\begin{lstlisting}
> > > > > +cap.offset
> > > > > +\end{lstlisting}
> > > > > +
> > > > > +The transitional MMR device MUST support Queue Notification
> > > > > +address within a BAR for all virtqueues at:
> > > > > +\begin{lstlisting}
> > > > > +cap.offset
> > > > > +\end{lstlisting}
> > > > > +
> > > > > +The transitional MMR driver that wants to use driver
> > > > > +notifications offered using notification capability MUST use same
> > > > > +Queue Notification address within a BAR for all virtqueues at:
> > > > > +
> > > > > +\begin{lstlisting}
> > > > > +cap.offset
> > > > > +\end{lstlisting}
> > > > > +
> > > > Why? What exactly is going on here? legacy drivers will not do this.
> > >
> > > The legacy driver does this in the q notify register that was sandwiched in 
> > > between
> > the slow configuration registers.
> > > This is the notification offset for the hypervisor driver to perform the
> > notification on behalf of the guest driver so that the acceleration 
> > available for
> > the non-transitional device can be utilized here as well.
> > 
> > I don't get it. What acceleration? for guests you need a separate page so 
> > card
> > can be mapped directly while config causes an exit. But hypervisor can 
> > access
> > any register without vmexits.
> 
> Typically, when the guest VM writes to the IOBAR q notification register, a vmexit 
> occurs.
> On that occurrence, the hypervisor driver forwards the q notification using the q 
> notification region which is defined by struct virtio_pci_notify_cap.

What is wrong with forwarding it on the legacy window?

-- 
MST





[virtio-dev] RE: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-11 Thread Parav Pandit


> From: Jason Wang 
> Sent: Wednesday, April 12, 2023 12:51 AM
> 
> Yes, but this proposal will drag us back to legacy, won't it?
No. This proposal supports the legacy transitional pci device.

> Or if it is used only in
> the virtualization environment, what's the advantage of doing it over simply
> mediating on top of a modern device?
> 
Because the spec for the modern device does not allow it. Discussed in these threads.

> 1) legacy MMIO bar: spec changes, hypervisor mediation
> 2) modern device: no spec changes, hypervisor mediation
>
This question repeats the same discussion that occurred in this patch series.
You might want to refer to it again to avoid repeating it all over.

> 1) it's better not invent any of new facilities for legacy
> 2) if legacy is insisted, allow MMIO BAR0 is much simpler and better
You have missed a few emails. :)
The MMIO BAR is proposed here and it is not limited to BAR 0.
It is left to the device to either map something in an existing BAR or use BAR 0,
because PCI has only 3 (64-bit) BARs.
A device may want to support legacy and non-legacy functionality both at the 
same time.
So better not to hard-wire BAR 0.


Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Jason Wang
On Wed, Apr 12, 2023 at 12:21 PM Michael S. Tsirkin  wrote:
>
> On Wed, Apr 12, 2023 at 12:04:45PM +0800, Jason Wang wrote:
> > On Wed, Apr 12, 2023 at 3:01 AM Parav Pandit  wrote:
> > >
> > >
> > > > From: virtio-dev@lists.oasis-open.org  
> > > > On
> > > > Behalf Of Jason Wang
> > > > Sent: Monday, April 10, 2023 11:29 PM
> > >
> > > > > However, it is not backward compatible; if the device places them in an
> > > > > extended capability, it will not work.
> > > > >
> > > >
> > > > It is kind of intended since it is only used for new PCI-E features:
> > > >
> > > New fields in the new extended pci cap area are fine.
> > > Migrating old fields to be present in the new extended pci cap is not 
> > > your intention, right?
> >
> > Right, but what I want to say is, such migration may cause unnecessary
> > problems. And I don't see why it is a must for your legacy MMIO bar
> > proposal.
> >
> > >
> > > > "
> > > > +The location of the virtio structures that depend on the PCI Express
> > > > +capability are specified using a vendor-specific extended capabilities
> > > > +on the extended capabilities list in PCI Express extended configuration
> > > > +space of the device.
> > > > "
> > > >
> > > > > To make it backward compatible, a device needs to expose existing
> > > > > structure in legacy area. And extended structure for same capability
> > > > > in extended pci capability region.
> > > > >
> > > > In other words, it will have to be in both places.
> > > >
> > > > Then we will run out of config space again?
> > > No.
> > > Only currently defined caps to be placed in two places.
> >
> > What's the advantage of doing this?
> >
> > New drivers should provide backward compatibility so they must scan the pci 
> > cap.
>
> No, they can start with the express cap. Finding one, they can skip
> the old cap completely.

Then the driver can't work on the old device.

Thanks
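
A minimal sketch of the scan order being debated, using the existing Linux
helpers pci_find_capability() and pci_find_ext_capability(); the
extended-first-with-fallback policy shown here is one possible resolution of
the backward-compatibility concern, not what either side proposed:

#include <linux/pci.h>

/* Sketch: look in the PCI Express extended capability list first and fall
 * back to the classic capability list, so old devices keep working. */
static int virtio_find_vendor_cap(struct pci_dev *dev)
{
    /* PCI_EXT_CAP_ID_VNDR (0x0b): vendor-specific extended capability. */
    int pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_VNDR);
    if (pos)
        return pos;   /* new device: the old cap list can be skipped */

    /* Old device: only the classic vendor-specific capability (0x09). */
    return pci_find_capability(dev, PCI_CAP_ID_VNDR);
}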

>
> > The old driver can only scan the pci cap.
> >
> > > New fields don’t need to be placed in PCI cap, because no driver is 
> > > looking there.
> >
> > It would be much simpler if we forbade placing new fields in the
> > PCI cap; it is already out of space.
> >
> > Thanks
> >
> > >
> > > We probably already discussed this in previous email by now.
> > >
> > > > Otherwise we need to deal with the
> > > > case when existing structures were only placed at extended capability. 
> > > > Michael
> > > > suggest to add a new feature, but the driver may not negotiate the 
> > > > feature
> > > > which requires more thought.
> > > >
> > > Not sure I understand feature bit.
> > > PCI transport fields existence is usually not dependent on upper layer 
> > > protocol.
> > >
> > > > > We may need it even sooner than this because the AQ patch is expanding
> > > > > the structure located in legacy area.
> > > >
> > > > Just to make sure I understand this, assuming we have adminq, any 
> > > > reason a
> > > > dedicated pcie ext cap is required?
> > > >
> > > No, it was my short-sightedness. I responded right after the above text that AQ 
> > > doesn't need cap extension.
>





[virtio-dev] Re: [virtio-comment] Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Jason Wang
On Wed, Apr 12, 2023 at 12:20 PM Michael S. Tsirkin  wrote:
>
> On Wed, Apr 12, 2023 at 12:07:26PM +0800, Jason Wang wrote:
> > On Wed, Apr 12, 2023 at 5:25 AM Michael S. Tsirkin  wrote:
> > >
> > > On Tue, Apr 11, 2023 at 07:01:16PM +, Parav Pandit wrote:
> > > >
> > > > > From: virtio-dev@lists.oasis-open.org 
> > > > >  On
> > > > > Behalf Of Jason Wang
> > > > > Sent: Monday, April 10, 2023 11:29 PM
> > > >
> > > > > > However, it is not backward compatible; if the device places them in an
> > > > > > extended capability, it will not work.
> > > > > >
> > > > >
> > > > > It is kind of intended since it is only used for new PCI-E features:
> > > > >
> > > > New fields in the new extended pci cap area are fine.
> > > > Migrating old fields to be present in the new extended pci cap is not 
> > > > your intention, right?
> > > >
> > > > > "
> > > > > +The location of the virtio structures that depend on the PCI Express
> > > > > +capability are specified using a vendor-specific extended 
> > > > > capabilities
> > > > > +on the extended capabilities list in PCI Express extended 
> > > > > configuration
> > > > > +space of the device.
> > > > > "
> > > > >
> > > > > > To make it backward compatible, a device needs to expose existing
> > > > > > structure in legacy area. And extended structure for same capability
> > > > > > in extended pci capability region.
> > > > > >
> > > > > In other words, it will have to be in both places.
> > > > >
> > > > > Then we will run out of config space again?
> > > > No.
> > > > Only currently defined caps to be placed in two places.
> > > > New fields don’t need to be placed in PCI cap, because no driver is 
> > > > looking there.
> > > >
> > > > We probably already discussed this in previous email by now.
> > > >
> > > > > Otherwise we need to deal with the
> > > > > case when existing structures were only placed at extended 
> > > > > capability. Michael
> > > > > suggest to add a new feature, but the driver may not negotiate the 
> > > > > feature
> > > > > which requires more thought.
> > > > >
> > > > Not sure I understand feature bit.
> > >
> > > This is because we have a concept of dependency between
> > > features but not a concept of dependency of feature on
> > > capability.
> > >
> > > > PCI transport fields existence is usually not dependent on upper layer 
> > > > protocol.
> > > >
> > > > > > We may need it even sooner than this because the AQ patch is 
> > > > > > expanding
> > > > > > the structure located in legacy area.
> > > > >
> > > > > Just to make sure I understand this, assuming we have adminq, any 
> > > > > reason a
> > > > > dedicated pcie ext cap is required?
> > > > >
> > > > No, it was my short-sightedness. I responded right after the above text that AQ 
> > > > doesn't need cap extension.
> > >
> > >
> > >
> > > You know, thinking about this, I begin to feel that we should
> > > require that if at least one extended config exists then
> > > all caps present in the regular config are *also*
> > > mirrored in the extended config. IOW extended >= regular.
> > > The reason is that extended config can be emulated more efficiently
> > > (2x less exits).
> >
> > Any reason for it to get less exits?
>
> For a variety of reasons having to do with buggy hardware e.g. linux
> likes to use cf8/cfc for legacy ranges. 2 accesses are required for each
> read/write.  extended space is just 1.
>

Ok.

>
> > At least it has not been done in
> > current Qemu's emulation. (And do we really care about the performance
> > of config space access?)
> >
> > Thanks
>
> For boot speed, yes. Not minor 5% things but 2x, sure.

If we care about boot speed we should avoid using the PCI layer in the
guest completely.

Thanks

>
> > > WDYT?
> > >
> > >
> > > --
> > > MST
> > >
>
>





[virtio-dev] RE: [PATCH 00/11] Introduce transitional mmr pci device

2023-04-11 Thread Parav Pandit


> From: Michael S. Tsirkin 
> Sent: Wednesday, April 12, 2023 12:48 AM

> Here is a counter proposal:
> 
> #define VIRTIO_NET_F_LEGACY_HEADER  52  /* Use the legacy 10 byte
> header for all packets */
> 
> 
> Yes, sorry to say, you need to emulate legacy pci in software.
> 
> With notification hacks, and reset hacks, and legacy interrupt hacks, and
> writeable mac ...  this thing best belongs in vdpa anyway.

What? I don't follow.
Suddenly you label everything as a hack with little explanation.






[virtio-dev] Re: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-11 Thread Jason Wang
On Wed, Apr 12, 2023 at 12:15 PM Michael S. Tsirkin  wrote:
>
> On Wed, Apr 12, 2023 at 11:58:39AM +0800, Jason Wang wrote:
> > On Tue, Apr 11, 2023 at 6:42 PM Michael S. Tsirkin  wrote:
> > >
> > > On Tue, Apr 11, 2023 at 05:01:59PM +0800, Jason Wang wrote:
> > > > On Tue, Apr 11, 2023 at 3:04 PM Michael S. Tsirkin  
> > > > wrote:
> > > > >
> > > > > On Tue, Apr 11, 2023 at 10:13:40AM +0800, Jason Wang wrote:
> > > > > > On Mon, Apr 10, 2023 at 6:06 PM Michael S. Tsirkin 
> > > > > >  wrote:
> > > > > > >
> > > > > > > On Mon, Apr 10, 2023 at 03:20:52PM +0800, Jason Wang wrote:
> > > > > > > > On Mon, Apr 10, 2023 at 2:40 PM Michael S. Tsirkin 
> > > > > > > >  wrote:
> > > > > > > > >
> > > > > > > > > On Mon, Apr 10, 2023 at 02:20:16PM +0800, Jason Wang wrote:
> > > > > > > > > > On Mon, Apr 10, 2023 at 2:15 PM Michael S. Tsirkin 
> > > > > > > > > >  wrote:
> > > > > > > > > > >
> > > > > > > > > > > On Mon, Apr 10, 2023 at 09:33:32AM +0800, Jason Wang 
> > > > > > > > > > > wrote:
> > > > > > > > > > > > This is fine for vDPA but not for virtio if the design 
> > > > > > > > > > > > can only work
> > > > > > > > > > > > for some specific setups (OSes/archs).
> > > > > > > > > > > >
> > > > > > > > > > > > Thanks
> > > > > > > > > > >
> > > > > > > > > > > Well virtio legacy has a long history of documenting 
> > > > > > > > > > > existing hacks :)
> > > > > > > > > >
> > > > > > > > > > Exactly, so the legacy behaviour is not (or can't be) 
> > > > > > > > > > defined by the
> > > > > > > > > > spec but by the code.
> > > > > > > > >
> > > > > > > > > I mean driver behaviour derives from the code but we do 
> > > > > > > > > document it in
> > > > > > > > > the spec to help people build devices.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > > But yes, VIRTIO_F_ORDER_PLATFORM has to be documented.
> > > > > > > > > > > And we have to decide what to do about ACCESS_PLATFORM 
> > > > > > > > > > > since
> > > > > > > > > > > there's a security problem if device allows not acking it.
> > > > > > > > > > > Two options:
> > > > > > > > > > > - relax the rules a bit and say device will assume 
> > > > > > > > > > > ACCESS_PLATFORM
> > > > > > > > > > >   is acked anyway
> > > > > > > > > >
> > > > > > > > > > This will break legacy drivers which assume physical 
> > > > > > > > > > addresses.
> > > > > > > > >
> > > > > > > > > not that they are not already broken.
> > > > > > > >
> > > > > > > > I may be missing something; the whole point is to allow legacy 
> > > > > > > > drivers to
> > > > > > > > run, otherwise a modern device is sufficient?
> > > > > > >
> > > > > > > yes and if legacy drivers don't work in a given setup then we
> > > > > > > should not worry.
> > > > > > >
> > > > > > > > >
> > > > > > > > > > > - a new flag that is insecure (so useful for sec but 
> > > > > > > > > > > useless for dpdk) but optional
> > > > > > > > > >
> > > > > > > > > > This looks like a new "hack" for the legacy hacks.
> > > > > > > > >
> > > > > > > > > it's not just for legacy.
> > > > > > > >
> > > > > > > > We have the ACCESS_PLATFORM feature bit, what is the usage for 
> > > > > > > > this new flag?
> > > > > > >
> > > > > > >
> > > > > > > ACCESS_PLATFORM is also a security boundary. so devices must fail
> > > > > > > negotiation if it's not there. this new one won't be.
> > > > > > >
> > > > > > >
> > > > > > > > >
> > > > > > > > > > And what about ORDER_PLATFORM, I don't think we can modify 
> > > > > > > > > > legacy drivers...
> > > > > > > > > >
> > > > > > > > > > Thanks
> > > > > > > > >
> > > > > > > > > You play some tricks with shadow VQ I guess.
> > > > > > > >
> > > > > > > > Do we really want to add a new feature in the virtio spec that 
> > > > > > > > can
> > > > > > > > only work with the datapath mediation?
> > > > > > > >
> > > > > > > > Thanks
> > > > > > >
> > > > > > > As long as a feature is useful and can't be supported otherwise
> > > > > > > we are out of options.
> > > > > >
> > > > > > Probably not? Is it as simple as relaxing this:
> > > > > >
> > > > > > "Transitional devices MUST expose the Legacy Interface in I/O space 
> > > > > > in BAR0."
> > > > > >
> > > > > > To allow memory space.
> > > > > >
> > > > > > This works for both software and hardware devices (I had handy
> > > > > > hardware that supports legacy virtio drivers in this way).
> > > > > >
> > > > > > Thanks
> > > > >
> > > > > Yes it is certainly simpler.
> > > > >
> > > > > Question: what happens if you try to run existing windows guests or 
> > > > > dpdk
> > > > > on these? Do they crash horribly or exit gracefully?
> > > >
> > > > Haven't tried DPDK and windows. But I remember DPDK supported legacy
> > > > MMIO bars for a while.
> > > >
> > > > Adding Maxime and Yan for comments here.
> > > >
> > > > >
> > > > > The point of the capability is to allow using modern device ID so such
> > > > > guests will not even try to bind.
> > > >
> > > > It means a mediation layer is required to use. Then it's not 

[virtio-dev] RE: [PATCH 10/11] transport-pci: Use driver notification PCI capability

2023-04-11 Thread Parav Pandit
> From: Michael S. Tsirkin 
> Sent: Wednesday, April 12, 2023 12:43 AM
> To: Parav Pandit 
> Cc: virtio-dev@lists.oasis-open.org; coh...@redhat.com; virtio-
> comm...@lists.oasis-open.org; Shahaf Shuler ;
> Satananda Burla 
> Subject: Re: [PATCH 10/11] transport-pci: Use driver notification PCI 
> capability
> 
> On Wed, Apr 12, 2023 at 04:37:05AM +, Parav Pandit wrote:
> >
> >
> > > From: Michael S. Tsirkin 
> > > Sent: Wednesday, April 12, 2023 12:31 AM
> > >
> > > On Fri, Mar 31, 2023 at 01:58:33AM +0300, Parav Pandit wrote:
> > > > PCI devices support memory BAR regions for performant driver
> > > > notifications using the notification capability.
> > > > Enable transitional MMR devices to use it in a simpler manner.
> > > >
> > > > Co-developed-by: Satananda Burla 
> > > > Signed-off-by: Parav Pandit 
> > > > ---
> > > >  transport-pci.tex | 28 
> > > >  1 file changed, 28 insertions(+)
> > > >
> > > > diff --git a/transport-pci.tex b/transport-pci.tex index
> > > > 55a6aa0..4fd9898 100644
> > > > --- a/transport-pci.tex
> > > > +++ b/transport-pci.tex
> > > > @@ -763,6 +763,34 @@ \subsubsection{Notification structure
> > > > layout}\label{sec:Virtio Transport Options  cap.length >=
> > > > queue_notify_off * notify_off_multiplier + 4  \end{lstlisting}
> > > >
> > > > +\paragraph{Transitional MMR Interface: A note on Notification
> > > > +Capability} \label{sec:Virtio Transport Options / Virtio Over PCI
> > > > +Bus / Virtio Structure PCI Capabilities / Notification capability
> > > > +/ Transitional MMR Interface}
> > > > +
> > > > +The transitional MMR device benefits from receiving driver
> > > > +notifications at the Queue Notification address offered using the
> > > > +notification capability, rather than via the memory mapped legacy
> > > > +QueueNotify configuration register.
> > > > +
> > > > +Transitional MMR device uses same Queue Notification address
> > > > +within a BAR for all virtqueues:
> > > > +\begin{lstlisting}
> > > > +cap.offset
> > > > +\end{lstlisting}
> > > > +
> > > > +The transitional MMR device MUST support Queue Notification
> > > > +address within a BAR for all virtqueues at:
> > > > +\begin{lstlisting}
> > > > +cap.offset
> > > > +\end{lstlisting}
> > > > +
> > > > +The transitional MMR driver that wants to use driver
> > > > +notifications offered using notification capability MUST use same
> > > > +Queue Notification address within a BAR for all virtqueues at:
> > > > +
> > > > +\begin{lstlisting}
> > > > +cap.offset
> > > > +\end{lstlisting}
> > > > +
> > > Why? What exactly is going on here? legacy drivers will not do this.
> >
> The legacy driver does this in the q notify register that was sandwiched in between 
> the slow configuration registers.
> > This is the notification offset for the hypervisor driver to perform the
> notification on behalf of the guest driver so that the acceleration available 
> for
> the non-transitional device can be utilized here as well.
> 
> I don't get it. What acceleration? for guests you need a separate page so card
> can be mapped directly while config causes an exit. But hypervisor can access
> any register without vmexits.

Typically, when the guest VM writes to the IOBAR q notification register, a vmexit 
occurs.
On that occurrence, the hypervisor driver forwards the q notification using the q 
notification region which is defined by struct virtio_pci_notify_cap.
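
For reference, the 1.x notification capability being reused here and the spec's
notify address formula (struct layout per virtio 1.2; the transitional MMR
proposal pins the address to cap.offset for all virtqueues):

#include <stdint.h>

/* From the virtio 1.x spec (virtio 1.2 layout). */
struct virtio_pci_cap {
    uint8_t  cap_vndr;   /* generic PCI field: PCI_CAP_ID_VNDR */
    uint8_t  cap_next;
    uint8_t  cap_len;
    uint8_t  cfg_type;   /* VIRTIO_PCI_CAP_NOTIFY_CFG for this capability */
    uint8_t  bar;
    uint8_t  id;
    uint8_t  padding[2];
    uint32_t offset;     /* le32: offset within the BAR */
    uint32_t length;     /* le32 */
};

struct virtio_pci_notify_cap {
    struct virtio_pci_cap cap;
    uint32_t notify_off_multiplier;   /* le32 */
};

/* 1.x notify address for a queue. The transitional MMR proposal fixes this
 * at cap.offset (a queue_notify_off of 0) for all virtqueues. */
static inline uint64_t notify_addr(uint64_t bar_base,
                                   const struct virtio_pci_notify_cap *n,
                                   uint16_t queue_notify_off)
{
    return bar_base + n->cap.offset +
           (uint64_t)queue_notify_off * n->notify_off_multiplier;
}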





[virtio-dev] Re: [PATCH 00/11] Introduce transitional mmr pci device

2023-04-11 Thread Michael S. Tsirkin
On Fri, Mar 31, 2023 at 01:58:23AM +0300, Parav Pandit wrote:
> Overview:
> -
> The Transitional MMR device is a variant of the transitional PCI device.
> It has its own small Device ID range. It does not have I/O
> region BAR; instead it exposes legacy configuration and device
> specific registers at an offset in the memory region BAR.
> 
> Such transitional MMR devices will be used at the scale of
> thousands of devices using PCI SR-IOV and/or future scalable
> virtualization technology to provide backward
> compatibility (for legacy devices) and also future
> compatibility with new features.
> 
> Usecase:
> 
> 1. A hypervisor/system needs to provide transitional
>virtio devices to the guest VM at scale of thousands,
>typically, one to eight devices per VM.
> 
> 2. A hypervisor/system needs to provide such devices using a
>vendor agnostic driver in the hypervisor system.
> 
> 3. A hypervisor system prefers to have single stack regardless of
>virtio device type (net/blk) and be future compatible with a
>single vfio stack using SR-IOV or other scalable device
>virtualization technology to map PCI devices to the guest VM.
>(as transitional or otherwise)


The more I look at it, the more issues I see.

Here is a counter proposal:

#define VIRTIO_NET_F_LEGACY_HEADER  52  /* Use the legacy 10 byte header 
for all packets */


Yes, sorry to say, you need to emulate legacy pci in software.

With notification hacks, and reset hacks, and legacy interrupt hacks,
and writeable mac ...  this thing best belongs in vdpa anyway.
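
What the counter-proposal would mean for a device implementation, as a sketch
(the feature bit is only proposed in this mail; bit 52 is not allocated):

#include <stddef.h>
#include <stdint.h>

#define VIRTIO_NET_F_LEGACY_HEADER  52   /* proposed above; not allocated */

/* Under the counter-proposal, a VERSION_1 device would use the 10-byte
 * legacy header for all packets when this feature is negotiated, so the
 * sw legacy-PCI emulation never has to rewrite headers. */
static inline size_t net_hdr_len(uint64_t negotiated_features)
{
    if (negotiated_features & (1ULL << VIRTIO_NET_F_LEGACY_HEADER))
        return 10;   /* legacy virtio_net_hdr, no num_buffers */
    return 12;       /* virtio_net_hdr_v1, includes num_buffers */
}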


-- 
MST





[virtio-dev] Re: [PATCH 10/11] transport-pci: Use driver notification PCI capability

2023-04-11 Thread Michael S. Tsirkin
On Wed, Apr 12, 2023 at 04:37:05AM +, Parav Pandit wrote:
> 
> 
> > From: Michael S. Tsirkin 
> > Sent: Wednesday, April 12, 2023 12:31 AM
> > 
> > On Fri, Mar 31, 2023 at 01:58:33AM +0300, Parav Pandit wrote:
> > > PCI devices support memory BAR regions for performant driver
> > > notifications using the notification capability.
> > > Enable transitional MMR devices to use it in a simpler manner.
> > >
> > > Co-developed-by: Satananda Burla 
> > > Signed-off-by: Parav Pandit 
> > > ---
> > >  transport-pci.tex | 28 
> > >  1 file changed, 28 insertions(+)
> > >
> > > diff --git a/transport-pci.tex b/transport-pci.tex index
> > > 55a6aa0..4fd9898 100644
> > > --- a/transport-pci.tex
> > > +++ b/transport-pci.tex
> > > @@ -763,6 +763,34 @@ \subsubsection{Notification structure layout}\label{sec:Virtio Transport Options
> > >  cap.length >= queue_notify_off * notify_off_multiplier + 4
> > >  \end{lstlisting}
> > >
> > > +\paragraph{Transitional MMR Interface: A note on Notification
> > > +Capability} \label{sec:Virtio Transport Options / Virtio Over PCI Bus
> > > +/ Virtio Structure PCI Capabilities / Notification capability /
> > > +Transitional MMR Interface}
> > > +
> > > +The transitional MMR device benefits from receiving driver
> > > +notifications at the Queue Notification address offered using the
> > > +notification capability, rather than via the memory mapped legacy
> > > +QueueNotify configuration register.
> > > +
> > > +The transitional MMR device uses the same Queue Notification address
> > > +within a BAR for all virtqueues:
> > > +\begin{lstlisting}
> > > +cap.offset
> > > +\end{lstlisting}
> > > +
> > > +The transitional MMR device MUST support Queue Notification address
> > > +within a BAR for all virtqueues at:
> > > +\begin{lstlisting}
> > > +cap.offset
> > > +\end{lstlisting}
> > > +
> > > +A transitional MMR driver that wants to use driver notifications
> > > +offered using the notification capability MUST use the same Queue
> > > +Notification address within a BAR for all virtqueues at:
> > > +
> > > +\begin{lstlisting}
> > > +cap.offset
> > > +\end{lstlisting}
> > > +
> > Why? What exactly is going on here? Legacy drivers will not do this.
> 
> The legacy driver writes to the q notify register that was sandwiched in
> between the slow configuration registers.
> This is the notification offset for the hypervisor driver to perform the 
> notification on behalf of the guest driver so that the acceleration available 
> for the non-transitional device can be utilized here as well.

I don't get it. What acceleration? For guests you need a separate page
so the card can be mapped directly while config access causes an exit. But
the hypervisor can access any register without vmexits.
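
For context, the difference between the two schemes under discussion can be
sketched as address math; the modern formula follows the notification
capability in the spec, the helper names are illustrative:

#include <stdint.h>

/* Modern device: per-queue notification address derived from the
 * notification capability (cap.offset, notify_off_multiplier) and the
 * per-queue queue_notify_off. */
static inline uint64_t modern_notify_addr(uint64_t bar, uint32_t cap_offset,
                                          uint32_t notify_off_multiplier,
                                          uint16_t queue_notify_off)
{
    return bar + cap_offset +
           (uint64_t)queue_notify_off * notify_off_multiplier;
}

/* Transitional MMR proposal: one shared notification address, cap.offset,
 * for every virtqueue. */
static inline uint64_t mmr_notify_addr(uint64_t bar, uint32_t cap_offset)
{
    return bar + cap_offset;
}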

-- 
MST





[virtio-dev] RE: [PATCH 10/11] transport-pci: Use driver notification PCI capability

2023-04-11 Thread Parav Pandit



> From: Michael S. Tsirkin 
> Sent: Wednesday, April 12, 2023 12:31 AM
> 
> On Fri, Mar 31, 2023 at 01:58:33AM +0300, Parav Pandit wrote:
> > PCI devices support memory BAR regions for performant driver
> > notifications using the notification capability.
> > Enable transitional MMR devices to use it in a simpler manner.
> >
> > Co-developed-by: Satananda Burla 
> > Signed-off-by: Parav Pandit 
> > ---
> >  transport-pci.tex | 28 
> >  1 file changed, 28 insertions(+)
> >
> > diff --git a/transport-pci.tex b/transport-pci.tex index
> > 55a6aa0..4fd9898 100644
> > --- a/transport-pci.tex
> > +++ b/transport-pci.tex
> > @@ -763,6 +763,34 @@ \subsubsection{Notification structure layout}\label{sec:Virtio Transport Options
> >  cap.length >= queue_notify_off * notify_off_multiplier + 4
> >  \end{lstlisting}
> >
> > +\paragraph{Transitional MMR Interface: A note on Notification
> > +Capability} \label{sec:Virtio Transport Options / Virtio Over PCI Bus
> > +/ Virtio Structure PCI Capabilities / Notification capability /
> > +Transitional MMR Interface}
> > +
> > +The transitional MMR device benefits from receiving driver
> > +notifications at the Queue Notification address offered using the
> > +notification capability, rather than via the memory mapped legacy
> > +QueueNotify configuration register.
> > +
> > +The transitional MMR device uses the same Queue Notification address
> > +within a BAR for all virtqueues:
> > +\begin{lstlisting}
> > +cap.offset
> > +\end{lstlisting}
> > +
> > +The transitional MMR device MUST support Queue Notification address
> > +within a BAR for all virtqueues at:
> > +\begin{lstlisting}
> > +cap.offset
> > +\end{lstlisting}
> > +
> > +A transitional MMR driver that wants to use driver notifications
> > +offered using the notification capability MUST use the same Queue
> > +Notification address within a BAR for all virtqueues at:
> > +
> > +\begin{lstlisting}
> > +cap.offset
> > +\end{lstlisting}
> > +
> Why? What exactly is going on here? Legacy drivers will not do this.

The legacy driver writes to the q notify register that was sandwiched in
between the slow configuration registers.
This is the notification offset for the hypervisor driver to perform the
notification on behalf of the guest driver so that the acceleration available
for the non-transitional device can be utilized here as well.





[virtio-dev] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-11 Thread Michael S. Tsirkin
On Fri, Mar 31, 2023 at 01:58:32AM +0300, Parav Pandit wrote:
> Legacy virtio configuration registers and adjacent
> device configuration registers are located somewhere
> in a memory BAR.
> 
> A new capability supplies the location of these registers
> which a driver can use to map I/O access to legacy
> memory mapped registers.
> 
> This gives the ability to locate legacy registers in either
> the existing memory BAR or in a completely new BAR at BAR 0.
> 
> The example diagram below attempts to depict it in an existing
> memory BAR.
> 
> +--+
> |Transitional  |
> |MMR SRIOV VF  |
> |  |
> ++---+ |
> ||dev_id =   | |
> ||{0x10f9-0x10ff}| |
> |+---+ |
> |  |
> +++|
> || PCIe ext cap = 0xB ||
> || cfg_type = 10  ||
> || offset   = 0x1000  ||
> || bar  = A {0..5}||
> |+--|-+|
> |   |  |
> |   |  |
> |   |+---+ |
> |   || Memory BAR = A| |
> |   ||   | |
> |   +-->+--+ | |
> ||  |legacy virtio | | |
> ||  |+ dev cfg | | |
> ||  |registers | | |
> ||  +--+ | |
> |+-+ | |
> +--+
> 
> Co-developed-by: Satananda Burla 
> Signed-off-by: Parav Pandit 
> ---
>  transport-pci.tex | 33 +++--
>  1 file changed, 31 insertions(+), 2 deletions(-)
> 
> diff --git a/transport-pci.tex b/transport-pci.tex
> index aeda4a1..55a6aa0 100644
> --- a/transport-pci.tex
> +++ b/transport-pci.tex
> @@ -168,6 +168,7 @@ \subsection{Virtio Structure PCI 
> Capabilities}\label{sec:Virtio Transport Option
>  \item ISR Status
>  \item Device-specific configuration (optional)
>  \item PCI configuration access
> +\item Legacy memory mapped configuration registers (optional)
>  \end{itemize}
>  
>  Each structure can be mapped by a Base Address register (BAR) belonging to
> @@ -228,6 +229,8 @@ \subsection{Virtio Structure PCI 
> Capabilities}\label{sec:Virtio Transport Option
>  #define VIRTIO_PCI_CAP_SHARED_MEMORY_CFG 8
>  /* Vendor-specific data */
>  #define VIRTIO_PCI_CAP_VENDOR_CFG9
> +/* Legacy configuration registers capability */
> +#define VIRTIO_PCI_CAP_LEGACY_MMR_CFG10
>  \end{lstlisting}
>  
>  Any other value is reserved for future use.
> @@ -682,6 +685,18 @@ \subsubsection{Common configuration structure 
> layout}\label{sec:Virtio Transport
>  Configuration Space / Legacy Interface: Device Configuration
>  Space}~\nameref{sec:Basic Facilities of a Virtio Device / Device 
> Configuration Space / Legacy Interface: Device Configuration Space} for 
> workarounds.
>  
> +\paragraph{Transitional MMR Interface: A Note on Configuration Registers}
> +\label{sec:Virtio Transport Options / Virtio Over PCI Bus / Virtio Structure 
> PCI Capabilities / Common configuration structure layout / Transitional MMR 
> Interface: A Note on Configuration Registers}
> +
> +The transitional MMR device MUST present legacy virtio registers
> +consisting of legacy common configuration registers followed by
> +legacy device specific configuration registers described in section
> +\ref{sec:Virtio Transport Options / Virtio Over PCI Bus / Virtio Structure 
> PCI Capabilities / Common configuration structure layout / Legacy Interfaces: 
> A Note on Configuration Registers}
> +in a memory region PCI BAR.


So, considering the common legacy registers: how exactly is INT#x
handled? It's required for old guests, since they fall back to it
when MSI-X fails for some reason.




> +
> +The transitional MMR device MUST provide the location of the
> +legacy virtio configuration registers using a legacy memory mapped
> +registers capability described in section \ref{sec:Virtio Transport Options 
> / Virtio Over PCI Bus / Virtio Structure PCI Capabilities / Transitional MMR 
> Interface: Legacy Memory Mapped Configuration Registers Capability}.
>  
>  \subsubsection{Notification structure layout}\label{sec:Virtio Transport 
> Options / Virtio Over PCI Bus / PCI Device Layout / Notification capability}
>  
> @@ -956,9 +971,23 @@ \subsubsection{PCI configuration access 
> capability}\label{sec:Virtio Transport O
>  specified by some other Virtio Structure PCI Capability
>  of type other than \field{VIRTIO_PCI_CAP_PCI_CFG}.
>  
> +\subsubsection{Transitional MMR Interface: Legacy Memory Mapped 
> Configuration Registers Capability}
> +\label{sec:Virtio Transport Options / Virtio Over PCI Bus / Virtio Structure 
> PCI Capabilities / Transitional MMR Interface: Legacy Memory Mapped 
> Configuration Registers Capability}
> +
> +The optional VIRTIO_PCI_CAP_LEGACY_MMR_CFG capability defines
> +the location of the legacy virtio configuration 
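
A rough sketch of how a driver might locate such a window, assuming the new
capability reuses the standard struct virtio_pci_cap layout from the spec;
pci_first_cap()/pci_read_cap() are hypothetical config-space accessors, not
from any real API:

#include <stdint.h>

#define VIRTIO_PCI_CAP_LEGACY_MMR_CFG 10   /* cfg_type from the patch above */

struct virtio_pci_cap {            /* standard layout from the spec */
    uint8_t  cap_vndr;             /* PCI_CAP_ID_VNDR (0x09) */
    uint8_t  cap_next;             /* link to the next capability */
    uint8_t  cap_len;
    uint8_t  cfg_type;             /* structure type, 10 for the MMR window */
    uint8_t  bar;                  /* BAR holding the window, A in {0..5} */
    uint8_t  id;
    uint8_t  padding[2];
    uint32_t offset;               /* le32: window offset, e.g. 0x1000 */
    uint32_t length;               /* le32: window length in bytes */
};

extern uint8_t pci_first_cap(void);
extern void pci_read_cap(uint8_t off, struct virtio_pci_cap *out);

static int find_legacy_mmr_cap(struct virtio_pci_cap *out)
{
    uint8_t off;

    for (off = pci_first_cap(); off != 0; off = out->cap_next) {
        pci_read_cap(off, out);
        if (out->cap_vndr == 0x09 &&
            out->cfg_type == VIRTIO_PCI_CAP_LEGACY_MMR_CFG)
            return 0;              /* map BAR 'bar' at 'offset' */
    }
    return -1;
}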

[virtio-dev] Re: [PATCH 10/11] transport-pci: Use driver notification PCI capability

2023-04-11 Thread Michael S. Tsirkin
On Fri, Mar 31, 2023 at 01:58:33AM +0300, Parav Pandit wrote:
> PCI devices support memory BAR regions for performant driver
> notifications using the notification capability.
> Enable transitional MMR devices to use it in a simpler manner.
> 
> Co-developed-by: Satananda Burla 
> Signed-off-by: Parav Pandit 
> ---
>  transport-pci.tex | 28 
>  1 file changed, 28 insertions(+)
> 
> diff --git a/transport-pci.tex b/transport-pci.tex
> index 55a6aa0..4fd9898 100644
> --- a/transport-pci.tex
> +++ b/transport-pci.tex
> @@ -763,6 +763,34 @@ \subsubsection{Notification structure 
> layout}\label{sec:Virtio Transport Options
>  cap.length >= queue_notify_off * notify_off_multiplier + 4
>  \end{lstlisting}
>  
> +\paragraph{Transitional MMR Interface: A note on Notification Capability}
> +\label{sec:Virtio Transport Options / Virtio Over PCI Bus / Virtio Structure 
> PCI Capabilities / Notification capability / Transitional MMR Interface}
> +
> +The transitional MMR device benefits from receiving driver
> +notifications at the Queue Notification address offered using
> +the notification capability, rather than via the memory mapped
> +legacy QueueNotify configuration register.
> +
> +The transitional MMR device uses the same Queue Notification address
> +within a BAR for all virtqueues:
> +\begin{lstlisting}
> +cap.offset
> +\end{lstlisting}
> +
> +The transitional MMR device MUST support Queue Notification
> +address within a BAR for all virtqueues at:
> +\begin{lstlisting}
> +cap.offset
> +\end{lstlisting}
> +
> +A transitional MMR driver that wants to use driver
> +notifications offered using the notification capability MUST use the
> +same Queue Notification address within a BAR for all virtqueues at:
> +
> +\begin{lstlisting}
> +cap.offset
> +\end{lstlisting}
> +
>  \subsubsection{ISR status capability}\label{sec:Virtio Transport Options / 
> Virtio Over PCI Bus / PCI Device Layout / ISR status capability}
>  
>  The VIRTIO_PCI_CAP_ISR_CFG capability

Why? What exactly is going on here? Legacy drivers will
not do this.

> -- 
> 2.26.2





Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Michael S. Tsirkin
On Wed, Apr 12, 2023 at 12:04:45PM +0800, Jason Wang wrote:
> On Wed, Apr 12, 2023 at 3:01 AM Parav Pandit  wrote:
> >
> >
> > > From: virtio-dev@lists.oasis-open.org  On
> > > Behalf Of Jason Wang
> > > Sent: Monday, April 10, 2023 11:29 PM
> >
> > > > However, it is not backward compatible, if the device place them in
> > > > extended capability, it will not work.
> > > >
> > >
> > > It is kind of intended since it is only used for new PCI-E features:
> > >
> > New fields in new extended pci cap area is fine.
> > Migrating old fields to be present in the new extended pci cap, is not your 
> > intention. Right?
> 
> Right, but what I want to say is, such migration may cause unnecessary
> problems. And I don't see why it is a must for your legacy MMIO bar
> proposal.
> 
> >
> > > "
> > > +The location of the virtio structures that depend on the PCI Express
> > > +capability are specified using a vendor-specific extended capabilities
> > > +on the extended capabilities list in PCI Express extended configuration
> > > +space of the device.
> > > "
> > >
> > > > To make it backward compatible, a device needs to expose existing
> > > > structure in legacy area. And extended structure for same capability
> > > > in extended pci capability region.
> > > >
> > > > In other words, it will have to be a both places.
> > >
> > > Then we will run out of config space again?
> > No.
> > Only currently defined caps to be placed in two places.
> 
> What's the advantage of doing this?
> 
> New drivers should provide backward compatibility so they must scan the pci 
> cap.

No, they can start with the express cap. Finding one, they can skip
the old cap completely.

> The old driver can only scan the pci cap.
> 
> > New fields don’t need to be placed in PCI cap, because no driver is looking 
> > there.
> 
> It would be much more simple if we forbid placing new fields in the
> PCI cap, it is already out of space.
> 
> Thanks
> 
> >
> > We probably already discussed this in previous email by now.
> >
> > > Otherwise we need to deal with the
> > > case when existing structures were only placed at extended capability. 
> > > Michael
> > > suggest to add a new feature, but the driver may not negotiate the feature
> > > which requires more thought.
> > >
> > Not sure I understand feature bit.
> > PCI transport fields existence is usually not dependent on upper layer 
> > protocol.
> >
> > > > We may need it even sooner than this because the AQ patch is expanding
> > > > the structure located in legacy area.
> > >
> > > Just to make sure I understand this, assuming we have adminq, any reason a
> > > dedicated pcie ext cap is required?
> > >
> > No. it was my short sight. I responded right after above text that AQ 
> > doesn’t need cap extension.





Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Michael S. Tsirkin
On Wed, Apr 12, 2023 at 12:07:26PM +0800, Jason Wang wrote:
> On Wed, Apr 12, 2023 at 5:25 AM Michael S. Tsirkin  wrote:
> >
> > On Tue, Apr 11, 2023 at 07:01:16PM +, Parav Pandit wrote:
> > >
> > > > From: virtio-dev@lists.oasis-open.org  
> > > > On
> > > > Behalf Of Jason Wang
> > > > Sent: Monday, April 10, 2023 11:29 PM
> > >
> > > > > However, it is not backward compatible, if the device place them in
> > > > > extended capability, it will not work.
> > > > >
> > > >
> > > > It is kind of intended since it is only used for new PCI-E features:
> > > >
> > > New fields in new extended pci cap area is fine.
> > > Migrating old fields to be present in the new extended pci cap, is not 
> > > your intention. Right?
> > >
> > > > "
> > > > +The location of the virtio structures that depend on the PCI Express
> > > > +capability are specified using a vendor-specific extended capabilities
> > > > +on the extended capabilities list in PCI Express extended configuration
> > > > +space of the device.
> > > > "
> > > >
> > > > > To make it backward compatible, a device needs to expose existing
> > > > > structure in legacy area. And extended structure for same capability
> > > > > in extended pci capability region.
> > > > >
> > > > > In other words, it will have to be a both places.
> > > >
> > > > Then we will run out of config space again?
> > > No.
> > > Only currently defined caps to be placed in two places.
> > > New fields don’t need to be placed in PCI cap, because no driver is 
> > > looking there.
> > >
> > > We probably already discussed this in previous email by now.
> > >
> > > > Otherwise we need to deal with the
> > > > case when existing structures were only placed at extended capability. 
> > > > Michael
> > > > suggest to add a new feature, but the driver may not negotiate the 
> > > > feature
> > > > which requires more thought.
> > > >
> > > Not sure I understand feature bit.
> >
> > This is because we have a concept of dependency between
> > features but not a concept of dependency of feature on
> > capability.
> >
> > > PCI transport fields existence is usually not dependent on upper layer 
> > > protocol.
> > >
> > > > > We may need it even sooner than this because the AQ patch is expanding
> > > > > the structure located in legacy area.
> > > >
> > > > Just to make sure I understand this, assuming we have adminq, any 
> > > > reason a
> > > > dedicated pcie ext cap is required?
> > > >
> > > No. it was my short sight. I responded right after above text that AQ 
> > > doesn’t need cap extension.
> >
> >
> >
> > You know, thinking about this, I begin to feel that we should
> > require that if at least one extended config exists then
> > all caps present in the regular config are *also*
> > mirrored in the extended config. IOW extended >= regular.
> > The reason is that extended config can be emulated more efficiently
> > (2x less exits).
> 
> Any reason for it to get less exits?

For a variety of reasons having to do with buggy hardware, e.g. Linux
likes to use cf8/cfc for legacy ranges. Two accesses are required for each
read/write; extended space needs just one.
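
A sketch of the two access patterns being compared, assuming x86 port I/O for
cf8/cfc (Linux-style outl(value, port)/inl(port)) and a memory-mapped ECAM
window; the helper names are illustrative:

#include <stdint.h>

extern void outl(uint32_t value, uint16_t port);
extern uint32_t inl(uint16_t port);

/* Legacy mechanism: one write to select, one read for data -> two exits. */
uint32_t cf8_cfg_read(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
{
    uint32_t addr = 0x80000000u | ((uint32_t)bus << 16) |
                    ((uint32_t)dev << 11) | ((uint32_t)fn << 8) | (off & 0xfc);
    outl(addr, 0xcf8);
    return inl(0xcfc);
}

/* ECAM: the whole extended config space is memory mapped -> one exit. */
uint32_t ecam_cfg_read(volatile uint8_t *ecam, uint8_t bus, uint8_t dev,
                       uint8_t fn, uint16_t off)
{
    return *(volatile uint32_t *)(ecam + (((uint32_t)bus << 20) |
           ((uint32_t)dev << 15) | ((uint32_t)fn << 12) | off));
}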


> At least it has not been done in
> current Qemu's emulation. (And do we really care about the performance
> of config space access?)
> 
> Thanks

For boot speed, yes. Not minor 5% things but 2x, sure.

> > WDYT?
> >
> >
> > --
> > MST
> >





[virtio-dev] Re: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-11 Thread Michael S. Tsirkin
On Wed, Apr 12, 2023 at 11:58:39AM +0800, Jason Wang wrote:
> On Tue, Apr 11, 2023 at 6:42 PM Michael S. Tsirkin  wrote:
> >
> > On Tue, Apr 11, 2023 at 05:01:59PM +0800, Jason Wang wrote:
> > > On Tue, Apr 11, 2023 at 3:04 PM Michael S. Tsirkin  
> > > wrote:
> > > >
> > > > On Tue, Apr 11, 2023 at 10:13:40AM +0800, Jason Wang wrote:
> > > > > On Mon, Apr 10, 2023 at 6:06 PM Michael S. Tsirkin  
> > > > > wrote:
> > > > > >
> > > > > > On Mon, Apr 10, 2023 at 03:20:52PM +0800, Jason Wang wrote:
> > > > > > > On Mon, Apr 10, 2023 at 2:40 PM Michael S. Tsirkin 
> > > > > > >  wrote:
> > > > > > > >
> > > > > > > > On Mon, Apr 10, 2023 at 02:20:16PM +0800, Jason Wang wrote:
> > > > > > > > > On Mon, Apr 10, 2023 at 2:15 PM Michael S. Tsirkin 
> > > > > > > > >  wrote:
> > > > > > > > > >
> > > > > > > > > > On Mon, Apr 10, 2023 at 09:33:32AM +0800, Jason Wang wrote:
> > > > > > > > > > > This is fine for vDPA but not for virtio if the design 
> > > > > > > > > > > can only work
> > > > > > > > > > > for some specific setups (OSes/archs).
> > > > > > > > > > >
> > > > > > > > > > > Thanks
> > > > > > > > > >
> > > > > > > > > > Well virtio legacy has a long history of documenting 
> > > > > > > > > > existing hacks :)
> > > > > > > > >
> > > > > > > > > Exactly, so the legacy behaviour is not (or can't be) defined 
> > > > > > > > > by the
> > > > > > > > > spec but the codes.
> > > > > > > >
> > > > > > > > I mean driver behaviour derives from the code but we do 
> > > > > > > > document it in
> > > > > > > > the spec to help people build devices.
> > > > > > > >
> > > > > > > >
> > > > > > > > > > But yes, VIRTIO_F_ORDER_PLATFORM has to be documented.
> > > > > > > > > > And we have to decide what to do about ACCESS_PLATFORM since
> > > > > > > > > > there's a security problem if device allows not acking it.
> > > > > > > > > > Two options:
> > > > > > > > > > - relax the rules a bit and say device will assume 
> > > > > > > > > > ACCESS_PLATFORM
> > > > > > > > > >   is acked anyway
> > > > > > > > >
> > > > > > > > > This will break legacy drivers which assume physical 
> > > > > > > > > addresses.
> > > > > > > >
> > > > > > > > not that they are not already broken.
> > > > > > >
> > > > > > > I may miss something, the whole point is to allow legacy drivers 
> > > > > > > to
> > > > > > > run otherwise a modern device is sufficient?
> > > > > >
> > > > > > yes and if legacy drivers don't work in a given setup then we
> > > > > > should not worry.
> > > > > >
> > > > > > > >
> > > > > > > > > > - a new flag that is insecure (so useful for sec but 
> > > > > > > > > > useless for dpdk) but optional
> > > > > > > > >
> > > > > > > > > This looks like a new "hack" for the legacy hacks.
> > > > > > > >
> > > > > > > > it's not just for legacy.
> > > > > > >
> > > > > > > We have the ACCESS_PLATFORM feature bit, what is the useage for 
> > > > > > > this new flag?
> > > > > >
> > > > > >
> > > > > > ACCESS_PLATFORM is also a security boundary. so devices must fail
> > > > > > negotiation if it's not there. this new one won't be.
> > > > > >
> > > > > >
> > > > > > > >
> > > > > > > > > And what about ORDER_PLATFORM, I don't think we can modify 
> > > > > > > > > legacy drivers...
> > > > > > > > >
> > > > > > > > > Thanks
> > > > > > > >
> > > > > > > > You play some tricks with shadow VQ I guess.
> > > > > > >
> > > > > > > Do we really want to add a new feature in the virtio spec that can
> > > > > > > only work with the datapath mediation?
> > > > > > >
> > > > > > > Thanks
> > > > > >
> > > > > > As long as a feature is useful and can't be supported otherwise
> > > > > > we are out of options.
> > > > >
> > > > > Probably not? Is it as simple as relaxing this:
> > > > >
> > > > > "Transitional devices MUST expose the Legacy Interface in I/O space 
> > > > > in BAR0."
> > > > >
> > > > > To allow memory space.
> > > > >
> > > > > This works for both software and hardware devices (I had a handy
> > > > > hardware that supports legacy virtio drivers in this way).
> > > > >
> > > > > Thanks
> > > >
> > > > Yes it is certainly simpler.
> > > >
> > > > Question: what happens if you try to run existing windows guests or dpdk
> > > > on these? Do they crash horribly or exit gracefully?
> > >
> > > Haven't tried DPDK and windows. But I remember DPDK supported legacy
> > > MMIO bars for a while.
> > >
> > > Adding Maxime and Yan for comments here.
> > >
> > > >
> > > > The point of the capability is to allow using modern device ID so such
> > > > guests will not even try to bind.
> > >
> > > It means a mediation layer is required to use. Then it's not an issue
> > > for this simple relaxing any more?
> > >
> > > An advantage of such relaxing is that, for the legacy drivers like an
> > > ancient Linux version that can already do MMIO access to legacy BAR it
> > > can work without any mediation.
> >
> > Yes. But the capability approach does not prevent that -
> > just use a transitional ID.
> > The disadvantage of a 

RE: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Parav Pandit


> From: Jason Wang 
> Sent: Wednesday, April 12, 2023 12:05 AM

> > > It is kind of intended since it is only used for new PCI-E features:
> > >
> > New fields in new extended pci cap area is fine.
> > Migrating old fields to be present in the new extended pci cap, is not your
> intention. Right?
> 
> Right, but what I want to say is, such migration may cause unnecessary
> problems. And I don't see why it is a must for your legacy MMIO bar proposal.
>
As I explained in the commit log, there is very little space left in the PCI
capabilities.
Last time I counted, our VF doesn't even have space left in the 192-byte PCI
capability area anymore.

So we are starting with the extended PCI cap for the legacy MMIO BAR window.

> > Only currently defined caps to be placed in two places.
> 
> What's the advantage of doing this?
>
I think Michael found one advantage of it being 2x faster?
 
> New drivers should provide backward compatibility so they must scan the pci
> cap.
Yes.
> The old driver can only scan the pci cap.
> 
Yes.
> > New fields don’t need to be placed in PCI cap, because no driver is looking
> there.
> 
> It would be much more simple if we forbid placing new fields in the PCI cap, 
> it is
> already out of space.
> 
Yes. We both are saying same point. :)



Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Jason Wang
On Wed, Apr 12, 2023 at 5:25 AM Michael S. Tsirkin  wrote:
>
> On Tue, Apr 11, 2023 at 07:01:16PM +, Parav Pandit wrote:
> >
> > > From: virtio-dev@lists.oasis-open.org  On
> > > Behalf Of Jason Wang
> > > Sent: Monday, April 10, 2023 11:29 PM
> >
> > > > However, it is not backward compatible, if the device place them in
> > > > extended capability, it will not work.
> > > >
> > >
> > > It is kind of intended since it is only used for new PCI-E features:
> > >
> > New fields in new extended pci cap area is fine.
> > Migrating old fields to be present in the new extended pci cap, is not your 
> > intention. Right?
> >
> > > "
> > > +The location of the virtio structures that depend on the PCI Express
> > > +capability are specified using a vendor-specific extended capabilities
> > > +on the extended capabilities list in PCI Express extended configuration
> > > +space of the device.
> > > "
> > >
> > > > To make it backward compatible, a device needs to expose existing
> > > > structure in legacy area. And extended structure for same capability
> > > > in extended pci capability region.
> > > >
> > > > In other words, it will have to be a both places.
> > >
> > > Then we will run out of config space again?
> > No.
> > Only currently defined caps to be placed in two places.
> > New fields don’t need to be placed in PCI cap, because no driver is looking 
> > there.
> >
> > We probably already discussed this in previous email by now.
> >
> > > Otherwise we need to deal with the
> > > case when existing structures were only placed at extended capability. 
> > > Michael
> > > suggest to add a new feature, but the driver may not negotiate the feature
> > > which requires more thought.
> > >
> > Not sure I understand feature bit.
>
> This is because we have a concept of dependency between
> features but not a concept of dependency of feature on
> capability.
>
> > PCI transport fields existence is usually not dependent on upper layer 
> > protocol.
> >
> > > > We may need it even sooner than this because the AQ patch is expanding
> > > > the structure located in legacy area.
> > >
> > > Just to make sure I understand this, assuming we have adminq, any reason a
> > > dedicated pcie ext cap is required?
> > >
> > No. it was my short sight. I responded right after above text that AQ 
> > doesn’t need cap extension.
>
>
>
> You know, thinking about this, I begin to feel that we should
> require that if at least one extended config exists then
> all caps present in the regular config are *also*
> mirrored in the extended config. IOW extended >= regular.
> The reason is that extended config can be emulated more efficiently
> (2x less exits).

Any reason for it to get less exits? At least it has not been done in
current Qemu's emulation. (And do we really care about the performance
of config space access?)

Thanks

> WDYT?
>
>
> --
> MST
>





Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Jason Wang
On Wed, Apr 12, 2023 at 3:01 AM Parav Pandit  wrote:
>
>
> > From: virtio-dev@lists.oasis-open.org  On
> > Behalf Of Jason Wang
> > Sent: Monday, April 10, 2023 11:29 PM
>
> > > However, it is not backward compatible, if the device place them in
> > > extended capability, it will not work.
> > >
> >
> > It is kind of intended since it is only used for new PCI-E features:
> >
> New fields in new extended pci cap area is fine.
> Migrating old fields to be present in the new extended pci cap, is not your 
> intention. Right?

Right, but what I want to say is, such migration may cause unnecessary
problems. And I don't see why it is a must for your legacy MMIO bar
proposal.

>
> > "
> > +The location of the virtio structures that depend on the PCI Express
> > +capability are specified using a vendor-specific extended capabilities
> > +on the extended capabilities list in PCI Express extended configuration
> > +space of the device.
> > "
> >
> > > To make it backward compatible, a device needs to expose existing
> > > structure in legacy area. And extended structure for same capability
> > > in extended pci capability region.
> > >
> > > In other words, it will have to be a both places.
> >
> > Then we will run out of config space again?
> No.
> Only currently defined caps to be placed in two places.

What's the advantage of doing this?

New drivers should provide backward compatibility, so they must scan the pci cap.
The old driver can only scan the pci cap.

> New fields don’t need to be placed in PCI cap, because no driver is looking 
> there.

It would be much more simple if we forbid placing new fields in the
PCI cap, it is already out of space.

Thanks

>
> We probably already discussed this in previous email by now.
>
> > Otherwise we need to deal with the
> > case when existing structures were only placed at extended capability. 
> > Michael
> > suggest to add a new feature, but the driver may not negotiate the feature
> > which requires more thought.
> >
> Not sure I understand feature bit.
> PCI transport fields existence is usually not dependent on upper layer 
> protocol.
>
> > > We may need it even sooner than this because the AQ patch is expanding
> > > the structure located in legacy area.
> >
> > Just to make sure I understand this, assuming we have adminq, any reason a
> > dedicated pcie ext cap is required?
> >
> No. it was my short sight. I responded right after above text that AQ doesn’t 
> need cap extension.





[virtio-dev] Re: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-11 Thread Jason Wang
On Tue, Apr 11, 2023 at 6:42 PM Michael S. Tsirkin  wrote:
>
> On Tue, Apr 11, 2023 at 05:01:59PM +0800, Jason Wang wrote:
> > On Tue, Apr 11, 2023 at 3:04 PM Michael S. Tsirkin  wrote:
> > >
> > > On Tue, Apr 11, 2023 at 10:13:40AM +0800, Jason Wang wrote:
> > > > On Mon, Apr 10, 2023 at 6:06 PM Michael S. Tsirkin  
> > > > wrote:
> > > > >
> > > > > On Mon, Apr 10, 2023 at 03:20:52PM +0800, Jason Wang wrote:
> > > > > > On Mon, Apr 10, 2023 at 2:40 PM Michael S. Tsirkin 
> > > > > >  wrote:
> > > > > > >
> > > > > > > On Mon, Apr 10, 2023 at 02:20:16PM +0800, Jason Wang wrote:
> > > > > > > > On Mon, Apr 10, 2023 at 2:15 PM Michael S. Tsirkin 
> > > > > > > >  wrote:
> > > > > > > > >
> > > > > > > > > On Mon, Apr 10, 2023 at 09:33:32AM +0800, Jason Wang wrote:
> > > > > > > > > > This is fine for vDPA but not for virtio if the design can 
> > > > > > > > > > only work
> > > > > > > > > > for some specific setups (OSes/archs).
> > > > > > > > > >
> > > > > > > > > > Thanks
> > > > > > > > >
> > > > > > > > > Well virtio legacy has a long history of documenting existing 
> > > > > > > > > hacks :)
> > > > > > > >
> > > > > > > > Exactly, so the legacy behaviour is not (or can't be) defined 
> > > > > > > > by the
> > > > > > > > spec but the codes.
> > > > > > >
> > > > > > > I mean driver behaviour derives from the code but we do document 
> > > > > > > it in
> > > > > > > the spec to help people build devices.
> > > > > > >
> > > > > > >
> > > > > > > > > But yes, VIRTIO_F_ORDER_PLATFORM has to be documented.
> > > > > > > > > And we have to decide what to do about ACCESS_PLATFORM since
> > > > > > > > > there's a security problem if device allows not acking it.
> > > > > > > > > Two options:
> > > > > > > > > - relax the rules a bit and say device will assume 
> > > > > > > > > ACCESS_PLATFORM
> > > > > > > > >   is acked anyway
> > > > > > > >
> > > > > > > > This will break legacy drivers which assume physical addresses.
> > > > > > >
> > > > > > > not that they are not already broken.
> > > > > >
> > > > > > I may miss something, the whole point is to allow legacy drivers to
> > > > > > run otherwise a modern device is sufficient?
> > > > >
> > > > > yes and if legacy drivers don't work in a given setup then we
> > > > > should not worry.
> > > > >
> > > > > > >
> > > > > > > > > - a new flag that is insecure (so useful for sec but useless 
> > > > > > > > > for dpdk) but optional
> > > > > > > >
> > > > > > > > This looks like a new "hack" for the legacy hacks.
> > > > > > >
> > > > > > > it's not just for legacy.
> > > > > >
> > > > > > We have the ACCESS_PLATFORM feature bit, what is the useage for 
> > > > > > this new flag?
> > > > >
> > > > >
> > > > > ACCESS_PLATFORM is also a security boundary. so devices must fail
> > > > > negotiation if it's not there. this new one won't be.
> > > > >
> > > > >
> > > > > > >
> > > > > > > > And what about ORDER_PLATFORM, I don't think we can modify 
> > > > > > > > legacy drivers...
> > > > > > > >
> > > > > > > > Thanks
> > > > > > >
> > > > > > > You play some tricks with shadow VQ I guess.
> > > > > >
> > > > > > Do we really want to add a new feature in the virtio spec that can
> > > > > > only work with the datapath mediation?
> > > > > >
> > > > > > Thanks
> > > > >
> > > > > As long as a feature is useful and can't be supported otherwise
> > > > > we are out of options.
> > > >
> > > > Probably not? Is it as simple as relaxing this:
> > > >
> > > > "Transitional devices MUST expose the Legacy Interface in I/O space in 
> > > > BAR0."
> > > >
> > > > To allow memory space.
> > > >
> > > > This works for both software and hardware devices (I had a handy
> > > > hardware that supports legacy virtio drivers in this way).
> > > >
> > > > Thanks
> > >
> > > Yes it is certainly simpler.
> > >
> > > Question: what happens if you try to run existing windows guests or dpdk
> > > on these? Do they crash horribly or exit gracefully?
> >
> > Haven't tried DPDK and windows. But I remember DPDK supported legacy
> > MMIO bars for a while.
> >
> > Adding Maxime and Yan for comments here.
> >
> > >
> > > The point of the capability is to allow using modern device ID so such
> > > guests will not even try to bind.
> >
> > It means a mediation layer is required to use. Then it's not an issue
> > for this simple relaxing any more?
> >
> > An advantage of such relaxing is that, for the legacy drivers like an
> > ancient Linux version that can already do MMIO access to legacy BAR it
> > can work without any mediation.
>
> Yes. But the capability approach does not prevent that -
> just use a transitional ID.
> The disadvantage of a transitional ID is some old drivers might crash, or fail
> to work in other ways.

If the driver is written correctly, it should fail gracefully if it can't
request legacy I/O regions.

> Using a modern ID with a capability
> prevents old drivers from attaching.

Just to make sure we are on the same page.

It looks like the motivation for this 

[virtio-dev] Re: [PATCH v12] virtio-net: support inner header hash

2023-04-11 Thread Parav Pandit



On 4/11/2023 11:05 PM, Heng Qi wrote:

NOTE: We only have two modes now. The mode is not equal to the command. 
We can add VIRTIO_NET_CTRL_MQ_TUNNEL_CONFIG for commands, but there are 
still only two modes:

"
This specification defines the following modes that a device MAY 
implement for operation with multiple transmit/receive virtqueues:

\begin{itemize}
\item Automatic receive steering as defined in \ref{sec:Device Types / 
Network Device / Device Operation / Control Virtqueue / Automatic 
receive steering in multiqueue mode}.
   If a device supports this mode, it offers the VIRTIO_NET_F_MQ feature 
bit.
\item Receive-side scaling as defined in \ref{devicenormative:Device 
Types / Network Device / Device Operation / Control Virtqueue / 
Receive-side scaling (RSS) / RSS processing}.
   If a device supports this mode, it offers the VIRTIO_NET_F_RSS 
feature bit.

\end{itemize}



So, it is not incorrect that we put VIRTIO_NET_CTRL_MQ_TUNNEL_CONFIG here.


Got it. I got confused between mode and command with the text

"The command selects the mode of multiqueue operation, as follows:"

But it's clear to me now.
Thanks.
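
As a sketch of where this command sits on the wire: a control-virtqueue
request carries the standard header with the MQ class, followed by the config
payload and an ack byte. The class value is the existing VIRTIO_NET_CTRL_MQ;
the opcode value comes from the proposed patch; the struct names mirror the
spec but are written out here for illustration:

#include <stdint.h>

#define VIRTIO_NET_CTRL_MQ                4  /* existing class */
#define VIRTIO_NET_CTRL_MQ_TUNNEL_CONFIG  3  /* proposed in the patch */

struct virtio_net_ctrl_hdr {     /* standard cvq request header */
    uint8_t class_;              /* VIRTIO_NET_CTRL_MQ */
    uint8_t cmd;                 /* VIRTIO_NET_CTRL_MQ_TUNNEL_CONFIG */
};

struct virtio_net_hash_tunnel_config {  /* payload from the patch */
    uint32_t hash_tunnel_types;  /* le32 bitmask of enabled tunnel types */
};

/* The driver queues three descriptors: hdr (device-readable), the config
 * payload (device-readable), and a one-byte ack the device fills with
 * VIRTIO_NET_OK or VIRTIO_NET_ERR. */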




[virtio-dev] Re: [PATCH v12] virtio-net: support inner header hash

2023-04-11 Thread Heng Qi




On 2023/4/12 5:03 AM, Parav Pandit wrote:



On 4/3/2023 12:58 AM, Heng Qi wrote:

To achieve this, the device can calculate a suitable hash based on 
the inner headers
of this flow, for example using the Toeplitz combined with a 
symmetric hash key.


I am not sure you need a symmetric hash key. Toeplitz with symmetric
hashing, without the symmetric key, is possible too.


So just mentioning it as 'combined with symmetric hashing' is enough.


Yes, as discussed with Michael, we will also support XOR hashing or
Toeplitz symmetric hashing, or even both. I'm thinking of starting a
draft in a separate thread so we can have initial discussions.


I will modify the statement here.
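
One known way to get symmetric results out of regular Toeplitz without a
special key, sketched below: normalize the tuple ordering before hashing so
both directions of a flow feed identical bytes to the hash. The toeplitz()
helper is assumed, not defined here:

#include <stdint.h>

struct flow_tuple {
    uint32_t ip_lo, ip_hi;
    uint16_t port_lo, port_hi;
};

/* Order the endpoints canonically so (A,B) and (B,A) hash identically. */
static void normalize(uint32_t sip, uint16_t sport,
                      uint32_t dip, uint16_t dport, struct flow_tuple *t)
{
    if (sip < dip || (sip == dip && sport < dport)) {
        t->ip_lo = sip; t->port_lo = sport;
        t->ip_hi = dip; t->port_hi = dport;
    } else {
        t->ip_lo = dip; t->port_lo = dport;
        t->ip_hi = sip; t->port_hi = sport;
    }
    /* toeplitz(key, t, sizeof(*t)) is now direction-independent. */
}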



  \subsubsection{Legacy Interface: Feature bits}\label{sec:Device 
Types / Network Device / Feature bits / Legacy Interface: Feature bits}
@@ -198,6 +202,7 @@ \subsection{Device configuration 
layout}\label{sec:Device Types / Network Device

  u8 rss_max_key_size;
  le16 rss_max_indirection_table_length;
  le32 supported_hash_types;
+    le32 supported_tunnel_hash_types;
  };
Given that a set command is added via the cvq, it makes sense to also do
the symmetric work to get it via the cvq.
This is also similar to the latest work for notification coalescing
for VQs, where get and set are done using a single channel, the cvq.


Only set is given to match the existing RSS/HASH configuration methods. 
But we should really look ahead as you suggest.




Granted that RSS and other fields are done differently, but that was done in
the past.


Yes.



With that, there is no need to define two fields at two different places, in
the config area and also in the cvq.


Just the new opcode is needed for GET and things will be fine.


Right.



+If VIRTIO_NET_F_HASH_TUNNEL has been negotiated, the device supports 
inner header hash and
+the driver can configure the inner header hash calculation for 
encapsulated packets \ref{sec:Device Types / Network Device / Device 
OperatiHn / Processing of Incoming Packets / Hash calculation for 
incoming packets / Tunnel/Encapsulated packet}
+by issuing the command VIRTIO_NET_CTRL_MQ_TUNNEL_CONFIG from the 
class VIRTIO_NET_CTRL_MQ.
+The command sets \field{hash_tunnel_types} in the structure 
virtio_net_hash_tunnel_config.

+
+struct virtio_net_hash_tunnel_config {
+    le32 hash_tunnel_types;
+};
+

VIRTIO_NET_CTRL_MQ_TUNNEL_CONFIG_SET
and
VIRTIO_NET_CTRL_MQ_TUNNEL_CONFIG_GET


Will do.



+Field \field{hash_tunnel_types} contains a bitmask of supported hash 
tunnel types as
+defined in \ref{sec:Device Types / Network Device / Device Operation 
/ Processing of Incoming Packets / Hash calculation for incoming 
packets / Supported/enabled hash tunnel types}.

+
+\subparagraph{Tunnel/Encapsulated packet}
+\label{sec:Device Types / Network Device / Device Operation / 
Processing of Incoming Packets / Hash calculation for incoming 
packets / Tunnel/Encapsulated packet}

+
+A tunnel packet is encapsulated from the original packet based on 
the tunneling
+protocol (only a single level of encapsulation is currently 
supported). The
+encapsulated packet contains an outer header and an inner header, 
and the device

+calculates the hash over either the inner header or the outer header.
+
+If VIRTIO_NET_F_HASH_TUNNEL is negotiated and a received 
encapsulated packet's
+outer header matches one of the supported \field{hash_tunnel_types}, 
the hash

+of the inner header is calculated.

s/one of the supported/one of the configured/
Because support comes from GET or config space area; out of which a 
smaller or equal subset of tunnel types are configured.


Yes, configured is obviously more accurate than supported.



+\devicenormative{\subparagraph}{Inner Header Hash}{Device Types / 
Network Device / Device Operation / Control Virtqueue / Inner Header 
Hash}

+
+The device MUST calculate the outer header hash if the received 
encapsulated packet has an encapsulation type not in 
\field{supported_tunnel_hash_types}.

+

Since the configured set can be smaller, a better reword is:
The device MUST calculate the hash from the outer header if the
received encapsulated packet type does not match hash_tunnel_types.


Will modify.



+The device MUST respond to the VIRTIO_NET_CTRL_MQ_TUNNEL_CONFIG 
command with VIRTIO_NET_ERR if the device
+received an unrecognized or unsupported VIRTIO_NET_HASH_TUNNEL_TYPE_ 
flag.

+
+Upon reset, the device MUST initialize \field{hash_tunnel_type} to 0.
+
+\drivernormative{\subparagraph}{Inner Header Hash}{Device Types / 
Network Device / Device Operation / Control Virtqueue / Inner Header 
Hash}

+
+The driver MUST have negotiated the feature VIRTIO_NET_F_HASH_TUNNEL 
when issuing the command VIRTIO_NET_CTRL_MQ_TUNNEL_CONFIG.

+
+The driver MUST NOT set any VIRTIO_NET_HASH_TUNNEL_TYPE_ flags that 
are not supported by the device.

+
  \paragraph{Hash reporting for incoming packets}
  \label{sec:Device Types / Network Device / Device Operation / 
Processing of Incoming Packets / Hash reporting for 

Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Michael S. Tsirkin
On Wed, Apr 12, 2023 at 12:40:53AM +, Parav Pandit wrote:
> > From: Michael S. Tsirkin 
> > Sent: Tuesday, April 11, 2023 5:25 PM
> > 
> > On Tue, Apr 11, 2023 at 07:01:16PM +, Parav Pandit wrote:
> > >
> > > > From: virtio-dev@lists.oasis-open.org
> > > >  On Behalf Of Jason Wang
> > > > Sent: Monday, April 10, 2023 11:29 PM
> > >
> > > > > However, it is not backward compatible, if the device place them
> > > > > in extended capability, it will not work.
> > > > >
> > > >
> > > > It is kind of intended since it is only used for new PCI-E features:
> > > >
> > > New fields in new extended pci cap area is fine.
> > > Migrating old fields to be present in the new extended pci cap, is not 
> > > your
> > intention. Right?
> > >
> > > > "
> > > > +The location of the virtio structures that depend on the PCI
> > > > +Express capability are specified using a vendor-specific extended
> > > > +capabilities on the extended capabilities list in PCI Express
> > > > +extended configuration space of the device.
> > > > "
> > > >
> > > > > To make it backward compatible, a device needs to expose existing
> > > > > structure in legacy area. And extended structure for same
> > > > > capability in extended pci capability region.
> > > > >
> > > > > In other words, it will have to be a both places.
> > > >
> > > > Then we will run out of config space again?
> > > No.
> > > Only currently defined caps to be placed in two places.
> > > New fields don’t need to be placed in PCI cap, because no driver is 
> > > looking
> > there.
> > >
> > > We probably already discussed this in previous email by now.
> > >
> > > > Otherwise we need to deal with the
> > > > case when existing structures were only placed at extended
> > > > capability. Michael suggest to add a new feature, but the driver may
> > > > not negotiate the feature which requires more thought.
> > > >
> > > Not sure I understand feature bit.
> > 
> > This is because we have a concept of dependency between features but not a
> > concept of dependency of feature on capability.
> > 
> A feature bit is accessible after virtio level initialization is complete.
> Virtio level initialization depends on the PCI layer transport attributes.
> So pci level attributes should exist regardless of upper layer
> initialization.

sure

> > > PCI transport fields existence is usually not dependent on upper layer
> > protocol.
> > >
> > > > > We may need it even sooner than this because the AQ patch is
> > > > > expanding the structure located in legacy area.
> > > >
> > > > Just to make sure I understand this, assuming we have adminq, any
> > > > reason a dedicated pcie ext cap is required?
> > > >
> > > No. it was my short sight. I responded right after above text that AQ 
> > > doesn’t
> > need cap extension.
> > 
> > 
> > 
> > You know, thinking about this, I begin to feel that we should require that 
> > if at
> > least one extended config exists then all caps present in the regular 
> > config are
> > *also* mirrored in the extended config. IOW extended >= regular.
> Fair, but why is this mirroring a requirement?
> Is the point below about improving the perf?

yes

> If so, it is fine, but I guess it is optional.

It cannot be optional, otherwise the guest has to scan the legacy capability
list too, losing the perf advantage :).

> > The reason is that extended config can be emulated more efficiently (2x less
> > exits).
> > WDYT?
> > 
> > 
> > --
> > MST
> 





RE: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Parav Pandit
> From: Michael S. Tsirkin 
> Sent: Tuesday, April 11, 2023 5:25 PM
> 
> On Tue, Apr 11, 2023 at 07:01:16PM +, Parav Pandit wrote:
> >
> > > From: virtio-dev@lists.oasis-open.org
> > >  On Behalf Of Jason Wang
> > > Sent: Monday, April 10, 2023 11:29 PM
> >
> > > > However, it is not backward compatible, if the device place them
> > > > in extended capability, it will not work.
> > > >
> > >
> > > It is kind of intended since it is only used for new PCI-E features:
> > >
> > New fields in new extended pci cap area is fine.
> > Migrating old fields to be present in the new extended pci cap, is not your
> intention. Right?
> >
> > > "
> > > +The location of the virtio structures that depend on the PCI
> > > +Express capability are specified using a vendor-specific extended
> > > +capabilities on the extended capabilities list in PCI Express
> > > +extended configuration space of the device.
> > > "
> > >
> > > > To make it backward compatible, a device needs to expose existing
> > > > structure in legacy area. And extended structure for same
> > > > capability in extended pci capability region.
> > > >
> > > > In other words, it will have to be a both places.
> > >
> > > Then we will run out of config space again?
> > No.
> > Only currently defined caps to be placed in two places.
> > New fields don’t need to be placed in PCI cap, because no driver is looking
> there.
> >
> > We probably already discussed this in previous email by now.
> >
> > > Otherwise we need to deal with the
> > > case when existing structures were only placed at extended
> > > capability. Michael suggest to add a new feature, but the driver may
> > > not negotiate the feature which requires more thought.
> > >
> > Not sure I understand feature bit.
> 
> This is because we have a concept of dependency between features but not a
> concept of dependency of feature on capability.
> 
A feature bit is accessible after virtio level initialization is complete.
Virtio level initialization depends on the PCI layer transport attributes.
So pci level attributes should exist regardless of upper layer initialization.

> > PCI transport fields existence is usually not dependent on upper layer
> protocol.
> >
> > > > We may need it even sooner than this because the AQ patch is
> > > > expanding the structure located in legacy area.
> > >
> > > Just to make sure I understand this, assuming we have adminq, any
> > > reason a dedicated pcie ext cap is required?
> > >
> > No. it was my short sight. I responded right after above text that AQ 
> > doesn’t
> need cap extension.
> 
> 
> 
> You know, thinking about this, I begin to feel that we should require that if 
> at
> least one extended config exists then all caps present in the regular config 
> are
> *also* mirrored in the extended config. IOW extended >= regular.
Fair, but why is this mirroring a requirement?
Is the point below about improving the perf?
If so, it is fine, but I guess it is optional.

> The reason is that extended config can be emulated more efficiently (2x less
> exits).
> WDYT?
> 
> 
> --
> MST



Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Michael S. Tsirkin
On Tue, Apr 11, 2023 at 07:01:16PM +, Parav Pandit wrote:
> 
> > From: virtio-dev@lists.oasis-open.org  On
> > Behalf Of Jason Wang
> > Sent: Monday, April 10, 2023 11:29 PM
> 
> > > However, it is not backward compatible, if the device place them in
> > > extended capability, it will not work.
> > >
> > 
> > It is kind of intended since it is only used for new PCI-E features:
> > 
> New fields in new extended pci cap area is fine.
> Migrating old fields to be present in the new extended pci cap, is not your 
> intention. Right?
> 
> > "
> > +The location of the virtio structures that depend on the PCI Express
> > +capability are specified using a vendor-specific extended capabilities
> > +on the extended capabilities list in PCI Express extended configuration
> > +space of the device.
> > "
> > 
> > > To make it backward compatible, a device needs to expose existing
> > > structure in legacy area. And extended structure for same capability
> > > in extended pci capability region.
> > >
> > > In other words, it will have to be a both places.
> > 
> > Then we will run out of config space again? 
> No. 
> Only currently defined caps to be placed in two places.
> New fields don’t need to be placed in PCI cap, because no driver is looking 
> there.
> 
> We probably already discussed this in previous email by now.
> 
> > Otherwise we need to deal with the
> > case when existing structures were only placed at extended capability. 
> > Michael
> > suggest to add a new feature, but the driver may not negotiate the feature
> > which requires more thought.
> > 
> Not sure I understand feature bit.

This is because we have a concept of dependency between
features but not a concept of dependency of feature on
capability.

> PCI transport fields existence is usually not dependent on upper layer 
> protocol.
> 
> > > We may need it even sooner than this because the AQ patch is expanding
> > > the structure located in legacy area.
> > 
> > Just to make sure I understand this, assuming we have adminq, any reason a
> > dedicated pcie ext cap is required?
> > 
> No. it was my short sight. I responded right after above text that AQ doesn’t 
> need cap extension.



You know, thinking about this, I begin to feel that we should
require that if at least one extended config exists then
all caps present in the regular config are *also*
mirrored in the extended config. IOW extended >= regular.
The reason is that extended config can be emulated more efficiently
(2x less exits).
WDYT?


-- 
MST





[virtio-dev] [PATCH] vimrc: Increase number of characters per line to 80

2023-04-11 Thread Parav Pandit
Increase the number of characters per line to 80 to avoid aggressively short
lines.

Signed-off-by: Parav Pandit 
---
 _vimrc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_vimrc b/_vimrc
index 045b3c5..375a043 100644
--- a/_vimrc
+++ b/_vimrc
@@ -2,4 +2,4 @@
 "following settings (without the " symbol) as last two lines in $HOME/.vimrc:
 "set secure
 "set exrc
-set textwidth=65
+set textwidth=80
-- 
2.26.2





[virtio-dev] Re: [PATCH v12] virtio-net: support inner header hash

2023-04-11 Thread Parav Pandit




On 4/3/2023 12:58 AM, Heng Qi wrote:


To achieve this, the device can calculate a suitable hash based on the inner 
headers
of this flow, for example using the Toeplitz combined with a symmetric hash key.

I am not sure you need a symmetric hash key. Toeplitz with symmetric
hashing, without the symmetric key, is possible too.


So just mentioning it as 'combined with symmetric hashing' is enough.


  \subsubsection{Legacy Interface: Feature bits}\label{sec:Device Types / 
Network Device / Feature bits / Legacy Interface: Feature bits}
@@ -198,6 +202,7 @@ \subsection{Device configuration layout}\label{sec:Device 
Types / Network Device
  u8 rss_max_key_size;
  le16 rss_max_indirection_table_length;
  le32 supported_hash_types;
+le32 supported_tunnel_hash_types;
  };
Given that a set command is added via the cvq, it makes sense to also do
the symmetric work to get it via the cvq.
This is also similar to the latest work for notification coalescing for
VQs, where get and set are done using a single channel, the cvq.


Granted that RSS and other fields are done differently, but that was done in
the past.


With that, there is no need to define two fields at two different places, in
the config area and also in the cvq.


Just the new opcode is needed for GET and things will be fine.


+If VIRTIO_NET_F_HASH_TUNNEL has been negotiated, the device supports inner 
header hash and
+the driver can configure the inner header hash calculation for encapsulated 
packets \ref{sec:Device Types / Network Device / Device Operation / Processing 
of Incoming Packets / Hash calculation for incoming packets / 
Tunnel/Encapsulated packet}
+by issuing the command VIRTIO_NET_CTRL_MQ_TUNNEL_CONFIG from the class 
VIRTIO_NET_CTRL_MQ.
+The command sets \field{hash_tunnel_types} in the structure 
virtio_net_hash_tunnel_config.
+
+struct virtio_net_hash_tunnel_config {
+le32 hash_tunnel_types;
+};
+

VIRTIO_NET_CTRL_MQ_TUNNEL_CONFIG_SET
and
VIRTIO_NET_CTRL_MQ_TUNNEL_CONFIG_GET


+Field \field{hash_tunnel_types} contains a bitmask of supported hash tunnel 
types as
+defined in \ref{sec:Device Types / Network Device / Device Operation / 
Processing of Incoming Packets / Hash calculation for incoming packets / 
Supported/enabled hash tunnel types}.
+
+\subparagraph{Tunnel/Encapsulated packet}
+\label{sec:Device Types / Network Device / Device Operation / Processing of 
Incoming Packets / Hash calculation for incoming packets / Tunnel/Encapsulated 
packet}
+
+A tunnel packet is encapsulated from the original packet based on the tunneling
+protocol (only a single level of encapsulation is currently supported). The
+encapsulated packet contains an outer header and an inner header, and the 
device
+calculates the hash over either the inner header or the outer header.
+
+If VIRTIO_NET_F_HASH_TUNNEL is negotiated and a received encapsulated packet's
+outer header matches one of the supported \field{hash_tunnel_types}, the hash
+of the inner header is calculated.

s/one of the supported/one of the configured/
Because support comes from GET or config space area; out of which a 
smaller or equal subset of tunnel types are configured.



+\devicenormative{\subparagraph}{Inner Header Hash}{Device Types / Network 
Device / Device Operation / Control Virtqueue / Inner Header Hash}
+
+The device MUST calculate the outer header hash if the received encapsulated 
packet has an encapsulation type not in \field{supported_tunnel_hash_types}.
+

Since the configured set can be smaller, a better rewording is:
The device MUST calculate the hash from the outer header if the received 
encapsulated packet type does not match hash_tunnel_types.
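
A minimal sketch of the resulting device-side selection (the device
state and helper names are illustrative, not from the patch):

#include <stdint.h>
#include <stdbool.h>

struct packet;

/* hypothetical helpers */
uint32_t hash_header(const struct packet *pkt, bool inner);
uint32_t tunnel_type_flag(const struct packet *pkt); /* 0 if not encapsulated */

struct vnet_device {
    uint32_t hash_tunnel_types; /* configured via TUNNEL_CONFIG */
};

uint32_t rx_hash(const struct vnet_device *dev, const struct packet *pkt)
{
    uint32_t type = tunnel_type_flag(pkt);

    /* hash the inner header only for configured tunnel types;
     * otherwise fall back to the outer header */
    if (type && (dev->hash_tunnel_types & type))
        return hash_header(pkt, true);
    return hash_header(pkt, false);
}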



+The device MUST respond to the VIRTIO_NET_CTRL_MQ_TUNNEL_CONFIG command with 
VIRTIO_NET_ERR if the device
+received an unrecognized or unsupported VIRTIO_NET_HASH_TUNNEL_TYPE_ flag.
+
+Upon reset, the device MUST initialize \field{hash_tunnel_types} to 0.
+
+\drivernormative{\subparagraph}{Inner Header Hash}{Device Types / Network 
Device / Device Operation / Control Virtqueue / Inner Header Hash}
+
+The driver MUST have negotiated the feature VIRTIO_NET_F_HASH_TUNNEL when 
issuing the command VIRTIO_NET_CTRL_MQ_TUNNEL_CONFIG.
+
+The driver MUST NOT set any VIRTIO_NET_HASH_TUNNEL_TYPE_ flags that are not 
supported by the device.
+
  \paragraph{Hash reporting for incoming packets}
  \label{sec:Device Types / Network Device / Device Operation / Processing of 
Incoming Packets / Hash reporting for incoming packets}
  
@@ -1308,6 +1450,7 @@ \subsubsection{Control Virtqueue}\label{sec:Device Types / Network Device / Devi

 #define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET 0 (for automatic receive 
steering)
   #define VIRTIO_NET_CTRL_MQ_RSS_CONFIG  1 (for configurable receive 
steering)
   #define VIRTIO_NET_CTRL_MQ_HASH_CONFIG 2 (for configurable hash 
calculation)
+ #define VIRTIO_NET_CTRL_MQ_TUNNEL_CONFIG   3 (for configurable inner 
header hash)

MQ_TUNNEL_CONFIG is not mutually exclusive with HASH 

[virtio-dev] [PATCH] transport-mmio: Replace virtual queue with virtqueue

2023-04-11 Thread Parav Pandit
Basic facilities define the virtqueue construct for device <-> driver
communication.

The PCI transport and the individual device descriptions also refer to it
as a virtqueue.

MMIO refers to it as a 'virtual queue'.

Align the MMIO transport description to call this object a virtqueue.

This patch is on top of [1].

[1] https://lists.oasis-open.org/archives/virtio-dev/202304/msg00253.html
Signed-off-by: Parav Pandit 
---
 transport-mmio.tex | 50 +++---
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/transport-mmio.tex b/transport-mmio.tex
index 2d24b4c..f5aaace 100644
--- a/transport-mmio.tex
+++ b/transport-mmio.tex
@@ -108,15 +108,15 @@ \subsection{MMIO Device Register Layout}\label{sec:Virtio 
Transport Options / Vi
 bits accessible by writing to \field{DriverFeatures}.
   }
   \hline
-  \mmioreg{QueueSel}{Virtual queue index}{0x030}{W}{%
-Writing to this register selects the virtual queue that the
+  \mmioreg{QueueSel}{Virtqueue index}{0x030}{W}{%
+Writing to this register selects the virtqueue that the
 following operations on \field{QueueSizeMax},
 \field{QueueSize}, \field{QueueReady},
\field{QueueDescLow}, \field{QueueDescHigh}, \field{QueueDriverLow}, 
\field{QueueDriverHigh},
 \field{QueueDeviceLow}, \field{QueueDeviceHigh} and \field{QueueReset} 
apply to.
   }
   \hline
-  \mmioreg{QueueSizeMax}{Maximum virtual queue size}{0x034}{R}{%
+  \mmioreg{QueueSizeMax}{Maximum virtqueue size}{0x034}{R}{%
 Reading from the register returns the maximum size (number of
 elements) of the queue the device is ready to process or
 zero (0x0) if the queue is not available. This applies to the
@@ -126,7 +126,7 @@ \subsection{MMIO Device Register Layout}\label{sec:Virtio 
Transport Options / Vi
 \end{note}
   }
   \hline
-  \mmioreg{QueueSize}{Virtual queue size}{0x038}{W}{%
+  \mmioreg{QueueSize}{Virtqueue size}{0x038}{W}{%
 Queue size is the number of elements in the queue.
 Writing to this register notifies the device what size of the
 queue the driver will use. This applies to the queue selected by
@@ -136,9 +136,9 @@ \subsection{MMIO Device Register Layout}\label{sec:Virtio 
Transport Options / Vi
 \end{note}
   }
   \hline
-  \mmioreg{QueueReady}{Virtual queue ready bit}{0x044}{RW}{%
+  \mmioreg{QueueReady}{Virtqueue ready bit}{0x044}{RW}{%
 Writing one (0x1) to this register notifies the device that it can
-execute requests from this virtual queue. Reading from this register
+execute requests from this virtqueue. Reading from this register
 returns the last value written to it. Both read and write
 accesses apply to the queue selected by writing to \field{QueueSel}.
   }
@@ -166,7 +166,7 @@ \subsection{MMIO Device Register Layout}\label{sec:Virtio 
Transport Options / Vi
 \begin{description}
   \item[Used Buffer Notification] - bit 0 - the interrupt was asserted
 because the device has used a buffer
-in at least one of the active virtual queues.
+in at least one of the active virtqueues.
   \item [Configuration Change Notification] - bit 1 - the interrupt was
 asserted because the configuration of the device has changed.
 \end{description}
@@ -187,21 +187,21 @@ \subsection{MMIO Device Register Layout}\label{sec:Virtio 
Transport Options / Vi
 See also p. \ref{sec:Virtio Transport Options / Virtio Over MMIO / 
MMIO-specific Initialization And Device Operation / Device 
Initialization}~\nameref{sec:Virtio Transport Options / Virtio Over MMIO / 
MMIO-specific Initialization And Device Operation / Device Initialization}.
   }
   \hline
-  \mmiodreg{QueueDescLow}{QueueDescHigh}{Virtual queue's Descriptor Area 64 
bit long physical address}{0x080}{0x084}{W}{%
+  \mmiodreg{QueueDescLow}{QueueDescHigh}{Virtqueue's Descriptor Area 64 bit 
long physical address}{0x080}{0x084}{W}{%
 Writing to these two registers (lower 32 bits of the address
 to \field{QueueDescLow}, higher 32 bits to \field{QueueDescHigh}) notifies
 the device about location of the Descriptor Area of the queue
 selected by writing to \field{QueueSel} register.
   }
   \hline
-  \mmiodreg{QueueDriverLow}{QueueDriverHigh}{Virtual queue's Driver Area 64 
bit long physical address}{0x090}{0x094}{W}{%
+  \mmiodreg{QueueDriverLow}{QueueDriverHigh}{Virtqueue's Driver Area 64 bit 
long physical address}{0x090}{0x094}{W}{%
 Writing to these two registers (lower 32 bits of the address
 to \field{QueueDriverLow}, higher 32 bits to \field{QueueDriverHigh}) 
notifies
 the device about location of the Driver Area of the queue
 selected by writing to \field{QueueSel}.
   }
   \hline
-  \mmiodreg{QueueDeviceLow}{QueueDeviceHigh}{Virtual queue's Device Area 64 
bit long physical address}{0x0a0}{0x0a4}{W}{%
+  \mmiodreg{QueueDeviceLow}{QueueDeviceHigh}{Virtqueue's Device Area 64 bit 
long physical address}{0x0a0}{0x0a4}{W}{%
 Writing to these two registers 

[virtio-dev] [PATCH] device-types/multiple: replace queues with enqueues

2023-04-11 Thread Parav Pandit
Queue is both a verb and a noun. Replacing it with enqueue avoids the
ambiguity between the plural noun 'queues' and the verb; this is similar
to the virtio fs device description.

Suggested-by: Michael S. Tsirkin 
Signed-off-by: Parav Pandit 
---
 device-types/blk/description.tex   | 2 +-
 device-types/gpio/description.tex  | 4 ++--
 device-types/i2c/description.tex   | 2 +-
 device-types/scsi/description.tex  | 2 +-
 device-types/vsock/description.tex | 2 +-
 5 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/device-types/blk/description.tex b/device-types/blk/description.tex
index 517b012..f04c932 100644
--- a/device-types/blk/description.tex
+++ b/device-types/blk/description.tex
@@ -427,7 +427,7 @@ \subsubsection{Legacy Interface: Device 
Initialization}\label{sec:Device Types /
 
 \subsection{Device Operation}\label{sec:Device Types / Block Device / Device 
Operation}
 
-The driver queues requests to the virtqueues, and they are used by
+The driver enqueues requests to the virtqueues, and they are used by
 the device (not necessarily in order). Each request except
 VIRTIO_BLK_T_ZONE_APPEND is of form:
 
diff --git a/device-types/gpio/description.tex 
b/device-types/gpio/description.tex
index 8e5c7f0..d51fbe1 100644
--- a/device-types/gpio/description.tex
+++ b/device-types/gpio/description.tex
@@ -358,7 +358,7 @@ \subsubsection{requestq Operation: Set IRQ 
Type}\label{sec:Device Types / GPIO D
 \subsubsection{requestq Operation: Message Flow}\label{sec:Device Types / GPIO 
Device / requestq Operation / Message Flow}
 
 \begin{itemize}
-\item The driver queues \field{struct virtio_gpio_request} and
+\item The driver enqueues \field{struct virtio_gpio_request} and
 \field{virtio_gpio_response} buffers to the \field{requestq} virtqueue,
 after filling all fields of the \field{struct virtio_gpio_request} buffer 
as
 defined by the specific message type.
@@ -458,7 +458,7 @@ \subsection{Device Operation: eventq}\label{sec:Device 
Types / GPIO Device / eve
 
 The \field{eventq} virtqueue is used by the driver to unmask the interrupts and
 used by the device to notify the driver of newly sensed interrupts. In order to
-unmask interrupt on a GPIO line, the driver queues a pair of buffers,
+unmask interrupt on a GPIO line, the driver enqueues a pair of buffers,
 \field{struct virtio_gpio_irq_request} (filled by driver) and \field{struct
 virtio_gpio_irq_response} (to be filled by device later), to the \field{eventq}
 virtqueue. A separate pair of buffers must be queued for each GPIO line, the
diff --git a/device-types/i2c/description.tex b/device-types/i2c/description.tex
index 5d407cb..861529a 100644
--- a/device-types/i2c/description.tex
+++ b/device-types/i2c/description.tex
@@ -47,7 +47,7 @@ \subsection{Device Operation}\label{sec:Device Types / I2C 
Adapter Device / Devi
 
 \subsubsection{Device Operation: Request Queue}\label{sec:Device Types / I2C 
Adapter Device / Device Operation: Request Queue}
 
-The driver queues requests to the virtqueue, and they are used by the
+The driver enqueues requests to the virtqueue, and they are used by the
 device. The request is the representation of segments of an I2C
 transaction. Each request is of the form:
 
diff --git a/device-types/scsi/description.tex 
b/device-types/scsi/description.tex
index 904c4a7..478b558 100644
--- a/device-types/scsi/description.tex
+++ b/device-types/scsi/description.tex
@@ -158,7 +158,7 @@ \subsection{Device Operation}\label{sec:Device Types / SCSI 
Host Device / Device
 
 \subsubsection{Device Operation: Request Queues}\label{sec:Device Types / SCSI 
Host Device / Device Operation / Device Operation: Request Queues}
 
-The driver queues requests to an arbitrary request queue, and
+The driver enqueues requests to an arbitrary request queue, and
 they are used by the device on that same queue. It is the
 responsibility of the driver to ensure strict request ordering
 for commands placed on different queues, because they will be
diff --git a/device-types/vsock/description.tex 
b/device-types/vsock/description.tex
index 105bb30..07909d6 100644
--- a/device-types/vsock/description.tex
+++ b/device-types/vsock/description.tex
@@ -218,7 +218,7 @@ \subsubsection{Buffer Space Management}\label{sec:Device 
Types / Socket Device /
 \field{buf_alloc} and \field{fwd_cnt} fields.
 
 \subsubsection{Receive and Transmit}\label{sec:Device Types / Socket Device / 
Device Operation / Receive and Transmit}
-The driver queues outgoing packets on the tx virtqueue and incoming packet
+The driver enqueues outgoing packets on the tx virtqueue and incoming packet
 receive buffers on the rx virtqueue. Packets are of the following form:
 
 \begin{lstlisting}
-- 
2.26.2





[virtio-dev] [PATCH 0/2] transport-pci: msix register desc improve

2023-04-11 Thread Parav Pandit
Problem:

The current wording for queue_msix_vector and config_msix_vector says
"for MSI-X", which is a bit confusing.

config_msix_vector and queue_msix_vector hold the MSI-X vector
numbers for configuration change and queue related interrupts, respectively.

Hence, reword it.
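
A minimal sketch of the semantics being clarified, with illustrative
offsets per the common configuration structure and hypothetical
accessor helpers:

#include <stdint.h>

#define VIRTIO_MSI_NO_VECTOR 0xffff

enum {
    COMMON_Q_SELECT      = 0x16, /* queue_select */
    COMMON_Q_MSIX_VECTOR = 0x1a, /* queue_msix_vector */
};

/* hypothetical common-config accessors */
uint16_t cfg_read16(uint16_t off);
void cfg_write16(uint16_t off, uint16_t val);

/* map virtqueue 'index' to MSI-X vector 'vec'; returns 0 on success */
int map_queue_vector(uint16_t index, uint16_t vec)
{
    cfg_write16(COMMON_Q_SELECT, index);    /* select the virtqueue */
    cfg_write16(COMMON_Q_MSIX_VECTOR, vec); /* write the vector number */

    /* the device reports VIRTIO_MSI_NO_VECTOR when it could not
     * associate the vector with this virtqueue */
    if (cfg_read16(COMMON_Q_MSIX_VECTOR) == VIRTIO_MSI_NO_VECTOR)
        return -1;
    return 0;
}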

---
changelog:
v0->v1:
- added 'here'
- dropped 'receiving'
- dropped already merged patch for empty line

Parav Pandit (2):
  transport-pci: Improve config msix vector description
  transport-pci: Improve queue msix vector register desc

 transport-pci.tex | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

-- 
2.26.2





[virtio-dev] [PATCH 2/2] transport-pci: Improve queue msix vector register desc

2023-04-11 Thread Parav Pandit
The queue_msix_vector register is for receiving interrupts from the device
for the virtqueue.

"for MSI-X" is a confusing term.

Also, it is a register that the driver "writes" to, similar to
many other registers such as queue_desc, queue_driver, etc.

Hence, replace the verb "use" with "write".

Signed-off-by: Parav Pandit 
Reviewed-by: Max Gurtovoy 
---
changelog:
v0->v1:
- added 'here'
- dropped 'receiving'
---
 transport-pci.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/transport-pci.tex b/transport-pci.tex
index cb9fadd..00669b5 100644
--- a/transport-pci.tex
+++ b/transport-pci.tex
@@ -367,7 +367,7 @@ \subsubsection{Common configuration structure 
layout}\label{sec:Virtio Transport
 A 0 means the queue is unavailable.
 
 \item[\field{queue_msix_vector}]
-The driver uses this to specify the queue vector for MSI-X.
+The driver writes an MSI-X vector number here for virtqueue interrupts.
 
 \item[\field{queue_enable}]
 The driver uses this to selectively prevent the device from executing 
requests from this virtqueue.
-- 
2.26.2





[virtio-dev] [PATCH 1/2] transport-pci: Improve config msix vector description

2023-04-11 Thread Parav Pandit
config_msix_vector is the register that holds the MSI-X vector number
for receiving configuration change related interrupts.

It is not "for MSI-X".

Hence, replace the confusing text with an appropriate description.

Reviewed-by: Max Gurtovoy 
Signed-off-by: Parav Pandit 

---
changelog:
v0->v1:
- added 'here'
- dropped 'receiving'
---
 transport-pci.tex | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/transport-pci.tex b/transport-pci.tex
index 42be072..cb9fadd 100644
--- a/transport-pci.tex
+++ b/transport-pci.tex
@@ -343,7 +343,8 @@ \subsubsection{Common configuration structure 
layout}\label{sec:Virtio Transport
 Driver Feature Bits selected by \field{driver_feature_select}.
 
 \item[\field{config_msix_vector}]
-The driver sets the Configuration Vector for MSI-X.
+The driver writes an MSI-X vector number here for Configuration change
+interrupts.
 
 \item[\field{num_queues}]
 The device specifies the maximum number of virtqueues supported here.
-- 
2.26.2





[virtio-dev] [PATCH] introduction: Add missing helping verb

2023-04-11 Thread Parav Pandit
The terminology for Transitional device and driver is missing the helping
verb 'is', unlike the other terminology entries.

Hence, add it to complete the sentences.

Signed-off-by: Parav Pandit 
---
 introduction.tex | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/introduction.tex b/introduction.tex
index 287c5fc..e8b34e3 100644
--- a/introduction.tex
+++ b/introduction.tex
@@ -145,14 +145,14 @@ \subsection{Legacy Interface: 
Terminology}\label{intro:Legacy
 
 \begin{description}
 \item[Transitional Device]
-a device supporting both drivers conforming to this
+is a device supporting both drivers conforming to this
 specification, and allowing legacy drivers.
 \end{description}
 
 Similarly, a driver MAY implement:
 \begin{description}
 \item[Transitional Driver]
-a driver supporting both devices conforming to this
+is a driver supporting both devices conforming to this
 specification, and legacy devices.
 \end{description}
 
-- 
2.26.2





[virtio-dev] [PATCH v13 10/11] virtio-net: Describe RSS using rss rq id

2023-04-11 Thread Parav Pandit
The content of the indirection table and unclassified_queue were
originally described based on mathematical operations. In order to
make it easier to understand and to avoid intermixing the array
index with the vq index, introduce a structure
rss_rq_id (RSS receive queue
ID) and use it to describe the unclassified_queue and
indirection_table fields.

As part of it, add an example that uses a non-zero virtqueue
index, which helps to better map the receiveX object to the
virtqueue index and the actual value in the indirection table.
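
A minimal sketch (illustrative helper names) of the mapping the new
structure encodes, matching the example below: a vq_index_1_16 value
of 3 corresponds to virtqueue index 6, i.e. receiveq4:

#include <stdint.h>

/* receive virtqueues have even indices: receiveqN uses index 2*(N-1) */
static inline uint16_t rss_rq_id_to_vq_index(uint16_t vq_index_1_16)
{
    return vq_index_1_16 << 1;  /* restore bit 0, which is always zero */
}

static inline uint16_t vq_index_to_receiveq(uint16_t vq_index)
{
    return vq_index / 2 + 1;    /* e.g. index 6 -> receiveq4 */
}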

Fixes: https://github.com/oasis-tcs/virtio-spec/issues/163
Signed-off-by: Parav Pandit 

---
changelog:
v12->v13:
- rename vqn to vq_index
- renamed vq index to virtqueue index
v11->v12:
- use 'virtqueue index' instead of 'virtqueue number'
v10->v11:
- commit log updated to drop old reference to rq_handle, replaced with
  rss_rq_id detail
v8->v9:
- reword rss_rq_id and unclassified_packets description
- use article
- use 'vq number' instead of 'virtqueue number'
v4->v5:
- change subject to reflect rss_rq_id
- fixed accidental removal of "unclassified packets".
- simplified text around indirection_table mask to use the term
  destination receive virtqueue. This aligns with next line about queue
  reset.
- removed rss_rq_id references as indirection table and
  unclassified_queue data type is self explanatory
v3->v4:
- renamed rq_handle to rss_rq_id
- moved rss_rq_id definition close to its usage in rss_config struct
v2->v3:
- moved rq_handle definition before using it
- removed duplicate example as rq_handle is now described first
v0->v1:
- new patch suggested by Michael Tsirkin
---
 device-types/net/description.tex | 27 ---
 1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/device-types/net/description.tex b/device-types/net/description.tex
index f3f9f1d..e4d236e 100644
--- a/device-types/net/description.tex
+++ b/device-types/net/description.tex
@@ -1434,11 +1434,16 @@ \subsubsection{Control Virtqueue}\label{sec:Device 
Types / Network Device / Devi
 
 Driver sends a VIRTIO_NET_CTRL_MQ_RSS_CONFIG command using the following 
format for \field{command-specific-data}:
 \begin{lstlisting}
+struct rss_rq_id {
+   le16 vq_index_1_16: 15; /* Bits 1 to 16 of the virtqueue index */
+   le16 reserved: 1; /* Set to zero */
+};
+
 struct virtio_net_rss_config {
 le32 hash_types;
 le16 indirection_table_mask;
-le16 unclassified_queue;
-le16 indirection_table[indirection_table_length];
+struct rss_rq_id unclassified_queue;
+struct rss_rq_id indirection_table[indirection_table_length];
 le16 max_tx_vq;
 u8 hash_key_length;
 u8 hash_key_data[hash_key_length];
@@ -1453,10 +1458,15 @@ \subsubsection{Control Virtqueue}\label{sec:Device 
Types / Network Device / Devi
 \field{indirection_table} array.
 Number of entries in \field{indirection_table} is 
(\field{indirection_table_mask} + 1).
 
-Field \field{unclassified_queue} contains the 0-based index of
-the receive virtqueue to place unclassified packets in. Index 0 corresponds to 
receiveq1.
+\field{rss_rq_id} is a receive virtqueue id. \field{vq_index_1_16}
+consists of bits 1 to 16 of a virtqueue index. For example, a
+\field{vq_index_1_16} value of 3 corresponds to virtqueue index 6,
+which maps to receiveq4.
+
+Field \field{unclassified_queue} contains the receive virtqueue
+in which to place unclassified packets.
 
-Field \field{indirection_table} contains an array of 0-based indices of 
receive virtqueues. Index 0 corresponds to receiveq1.
+Field \field{indirection_table} is an array of receive virtqueues.
 
 A driver sets \field{max_tx_vq} to inform a device how many transmit 
virtqueues it may use (transmitq1\ldots transmitq \field{max_tx_vq}).
 
@@ -1466,7 +1476,8 @@ \subsubsection{Control Virtqueue}\label{sec:Device Types 
/ Network Device / Devi
 
 A driver MUST NOT send the VIRTIO_NET_CTRL_MQ_RSS_CONFIG command if the 
feature VIRTIO_NET_F_RSS has not been negotiated.
 
-A driver MUST fill the \field{indirection_table} array only with indices of 
enabled queues. Index 0 corresponds to receiveq1.
+A driver MUST fill the \field{indirection_table} array only with
+enabled receive virtqueues.
 
 The number of entries in \field{indirection_table} 
(\field{indirection_table_mask} + 1) MUST be a power of two.
 
@@ -1479,7 +1490,9 @@ \subsubsection{Control Virtqueue}\label{sec:Device Types 
/ Network Device / Devi
 \begin{itemize}
 \item Calculate the hash of the packet as defined in \ref{sec:Device Types / 
Network Device / Device Operation / Processing of Incoming Packets / Hash 
calculation for incoming packets}.
 \item If the device did not calculate the hash for the specific packet, the 
device directs the packet to the receiveq specified by 
\field{unclassified_queue} of virtio_net_rss_config structure.
-\item Apply \field{indirection_table_mask} to the calculated hash and use the 
result as the index in the indirection table to get 0-based number of 

[virtio-dev] [PATCH v13 02/11] content.tex Replace virtqueue number with index

2023-04-11 Thread Parav Pandit
Replace virtqueue number with index to align with the rest of the
specification.

Fixes: https://github.com/oasis-tcs/virtio-spec/issues/163
Signed-off-by: Parav Pandit 

---
changelog:
v11->v12:
- new patch
---
 content.tex | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/content.tex b/content.tex
index 43be58b..e79d722 100644
--- a/content.tex
+++ b/content.tex
@@ -405,7 +405,7 @@ \section{Driver Notifications} \label{sec:Basic Facilities 
of a Virtio Device /
 
 When VIRTIO_F_NOTIFICATION_DATA has not been negotiated,
 this notification involves sending the
-virtqueue number to the device (method depending on the transport).
+virtqueue index to the device (method depending on the transport).
 
 However, some devices benefit from the ability to find out the
 amount of available data in the queue without accessing the virtqueue in 
memory:
@@ -789,13 +789,13 @@ \chapter{Reserved Feature Bits}\label{sec:Reserved 
Feature Bits}
   buffer notifications.
   As mentioned in section \ref{sec:Basic Facilities of a Virtio Device / 
Driver notifications}, when the
   driver is required to send an available buffer notification to the device, it
-  sends the virtqueue number to be notified. The method of delivering
+  sends the virtqueue index to be notified. The method of delivering
   notifications is transport specific.
   With the PCI transport, the device can optionally provide a per-virtqueue 
value
-  for the driver to use in driver notifications, instead of the virtqueue 
number.
+  for the driver to use in driver notifications, instead of the virtqueue 
index.
   Some devices may benefit from this flexibility by providing, for example,
   an internal virtqueue identifier, or an internal offset related to the
-  virtqueue number.
+  virtqueue index.
 
   This feature indicates the availability of such value. The definition of the
   data to be provided in driver notification and the delivery method is
-- 
2.26.2





[virtio-dev] [PATCH v13 06/11] transport-mmio: Avoid referring to zero based index

2023-04-11 Thread Parav Pandit
The VQ range is already described by the first patch in the basic virtqueue
section. Hence remove the duplicate reference to it.

Fixes: https://github.com/oasis-tcs/virtio-spec/issues/163
Signed-off-by: Parav Pandit 

---
changelog:
v12->v13:
- corrected number to index
v11->v12:
- remove changes related to 'vq number'
v8->v9:
- added 'by' at two places
- replaced 'queue number' with 'vq number'

v6->v7:
- remove text around first vq as it is already covered in the basic
  virtqueues facility section
---
 transport-mmio.tex | 13 -
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/transport-mmio.tex b/transport-mmio.tex
index 164e640..2d24b4c 100644
--- a/transport-mmio.tex
+++ b/transport-mmio.tex
@@ -113,8 +113,7 @@ \subsection{MMIO Device Register Layout}\label{sec:Virtio 
Transport Options / Vi
 following operations on \field{QueueSizeMax},
 \field{QueueSize}, \field{QueueReady},
\field{QueueDescLow}, \field{QueueDescHigh}, \field{QueueDriverLow}, 
\field{QueueDriverHigh},
-\field{QueueDeviceLow}, \field{QueueDeviceHigh} and \field{QueueReset} 
apply to. The index
-number of the first queue is zero (0x0).
+\field{QueueDeviceLow}, \field{QueueDeviceHigh} and \field{QueueReset} 
apply to.
   }
   \hline
   \mmioreg{QueueSizeMax}{Maximum virtual queue size}{0x034}{R}{%
@@ -363,8 +362,7 @@ \subsubsection{Virtqueue Configuration}\label{sec:Virtio 
Transport Options / Vir
 The driver will typically initialize the virtual queue in the following way:
 
 \begin{enumerate}
-\item Select the queue writing its index (first queue is 0) to
-   \field{QueueSel}.
+\item Select the queue by writing its index to \field{QueueSel}.
 
 \item Check if the queue is not already in use: read \field{QueueReady},
and expect a returned value of zero (0x0).
@@ -474,9 +472,7 @@ \subsection{Legacy interface}\label{sec:Virtio Transport 
Options / Virtio Over M
 Writing to this register selects the virtual queue that the
 following operations on the \field{QueueSizeMax},
 \field{QueueSize}, \field{QueueAlign}
-and \field{QueuePFN} registers apply to. The index
-number of the first queue is zero (0x0).
-.
+and \field{QueuePFN} registers apply to.
   }
   \hline
   \mmioreg{QueueSizeMax}{Maximum virtual queue size}{0x034}{R}{%
@@ -550,8 +546,7 @@ \subsection{Legacy interface}\label{sec:Virtio Transport 
Options / Virtio Over M
 
 The virtual queue is configured as follows:
 \begin{enumerate}
-\item Select the queue writing its index (first queue is 0) to
-   \field{QueueSel}.
+\item Select the queue by writing its index to \field{QueueSel}.
 
 \item Check if the queue is not already in use: read \field{QueuePFN},
expecting a returned value of zero (0x0).
-- 
2.26.2





[virtio-dev] [PATCH v13 09/11] virtio-net: Avoid duplicate receive queue example

2023-04-11 Thread Parav Pandit
The receive queue number/index example is a duplicate of what is already
defined in the Setting RSS parameters section.

Hence, remove the duplicate example and prepare for the subsequent
patch, which describes this using a receive queue handle.

Fixes: https://github.com/oasis-tcs/virtio-spec/issues/163
Reviewed-by: Cornelia Huck 
Signed-off-by: Parav Pandit 
---
 device-types/net/description.tex | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/device-types/net/description.tex b/device-types/net/description.tex
index 875c4e6..f3f9f1d 100644
--- a/device-types/net/description.tex
+++ b/device-types/net/description.tex
@@ -1478,8 +1478,8 @@ \subsubsection{Control Virtqueue}\label{sec:Device Types 
/ Network Device / Devi
 The device MUST determine the destination queue for a network packet as 
follows:
 \begin{itemize}
 \item Calculate the hash of the packet as defined in \ref{sec:Device Types / 
Network Device / Device Operation / Processing of Incoming Packets / Hash 
calculation for incoming packets}.
-\item If the device did not calculate the hash for the specific packet, the 
device directs the packet to the receiveq specified by 
\field{unclassified_queue} of virtio_net_rss_config structure (value of 0 
corresponds to receiveq1).
-\item Apply \field{indirection_table_mask} to the calculated hash and use the 
result as the index in the indirection table to get 0-based number of 
destination receiveq (value of 0 corresponds to receiveq1).
+\item If the device did not calculate the hash for the specific packet, the 
device directs the packet to the receiveq specified by 
\field{unclassified_queue} of virtio_net_rss_config structure.
+\item Apply \field{indirection_table_mask} to the calculated hash and use the 
result as the index in the indirection table to get 0-based number of 
destination receiveq.
 \item If the destination receive queue is being reset (See \ref{sec:Basic 
Facilities of a Virtio Device / Virtqueues / Virtqueue Reset}), the device MUST 
drop the packet.
 \end{itemize}
 
-- 
2.26.2





[virtio-dev] [PATCH v13 00/11] Rename queue number to queue index

2023-04-11 Thread Parav Pandit
1. Currently, a virtqueue is identified between the driver and the device
interchangeably using either number or index terminology.

2. Between PCI and MMIO transport the queue size (depth) is
defined as queue_size and QueueNum respectively.

To avoid confusion and to have consistency, unify them to use
index.

3. Field name vqn in the driver notification structure is
ambiguous as it is supposed to hold either vq index or device
supplied vq config data.

4. The device really supplies a queue identifier or opaque data in the
queue_notify_data register, and this often gets confused with the
very similar looking feature bit NOTIFICATION_DATA.

Solution:
a. Use virtqueue index description, and rename MMIO register as QueueSize.
b. Replace virtqueue number with virtqueue index
c. RSS area of virtio net has inherited some logic, describe it
using abstract rss_rq_id.
d. rename queue_notify_data to queue_notify_config_data.
e. rename vqn to vq_notify_config_data to reflect that it can hold either a
vq index or a device supplied id.

Patch summary:
patch-1 introduces vq index as a generic term
patch-2 renames number to index for the pci transport
patch-3 rename queue_notify_data to queue_notify_config_data
patch-4 remove first vq index reference
patch-5 renames mmio register from Num to Size
patch-6 renames number to index for the mmio transport
patch-7 renames num field to size for ccw transport
patch-8 renames vq by its index for ccw transport
patch-9 for virtio-net removes duplicate example from requirements
patch-10 for virtio-net updates rss description to use vq index
patch-11 for virtio-net to update cvq notifications coalescing commands

This series only improves the documentation, it does not change any
transport or device functionality.

Please review.
This series fixes the issue [1].

[1] https://github.com/oasis-tcs/virtio-spec/issues/163

---
changelog:
v12->v13:
- renamed queue_notify_id to queue_notify_config_data
- added patch to cover notifications coalescing commands after rebase
- fixed left out virtqueue number to virtqueue index
- dropped abbreviation of virtqueue to vq
v11->v12:
- replace number to index
- avoid confusion around vqn field and rename to vq_notify_id
- rename queue_notify_data to avoid confusing with NOTIFY_DATA
v10->v11:
- added Reviewed-by for all the reviewed patches
- updated commit log of patch-8 to drop rq_handle reference
- skipped comment to further use rss_rq_id, as rss_rq_id usage
  and structure are self describing
v9->v10:
- added virtqueue number part in content in braces
- replaced queue_select to vqn in ccw
- avoided aggressive alignment of 65 chars
- updated commit log to drop reference to already merged patches
- added review-by tag for already reviewed patches
v8->v9:
- addressed comments from David
- few corrections with article
- renaming 'virtqueue number' to 'vq number'
- improving text and wording for rss_rq_id, avail notification
- commit log of specific change in individual patches
v7->v8:
- remove note about first virtqueue number
- skipped Max's comment to put word 'structure' in same line as its
  crosses 65 chars limit per line
- reworded queue_notification data set line, as '=' and vq number
  wording was odd
v6->v7:
- remove text around first vq as it is already covered in the basic
  virtqueues facility section
v5->v6:
- moved the vq number description from middle of vq operation
  to beginning of vq introduction
v4->v5:
- fixed accidental removal of "unclassified packets".
- simplified text around indirection_table mask
- removed rss_rq_id references as indirection table and
  unclassified_queue data type is self explanatory
v3->v4:
- moved note to comment for ccw
- renamed rq_handle to rss_rq_id
- moved rss_rq_id next to rss_config structure
- define rss_config structure using rss_rq_id
v2->v3:
- addressed comments from Michael
- added previous definitions for ccw fields
- moved rq_handle definition before using it
- added first patch to describe vq number
- updated pci for available buffer notification section
v1->v2:
- added patches for virtio net for rss area
- added patches for covering ccw transport
- added missing entries to refer in mmio transport

Parav Pandit (11):
  content: Add vq index text
  content.tex Replace virtqueue number with index
  content: Rename confusing queue_notify_data and vqn names
  transport-pci: Avoid first vq index reference
  transport-mmio: Rename QueueNum register
  transport-mmio: Avoid referring to zero based index
  transport-ccw: Rename queue depth/size to other transports
  transport-ccw: Refer to the vq by its index
  virtio-net: Avoid duplicate receive queue example
  virtio-net: Describe RSS using rss rq id
  virtio-net: Update vqn to vq_index for cvq cmds

 content.tex  | 20 
 device-types/net/description.tex | 53 ++
 notifications-be.c   |  2 +-
 notifications-le.c   |  2 +-
 transport-ccw.tex| 15 +
 transport-mmio.tex

[virtio-dev] [PATCH v13 04/11] transport-pci: Avoid first vq index reference

2023-04-11 Thread Parav Pandit
Drop reference to first virtqueue as it is already
covered now by the generic section in first patch.

Fixes: https://github.com/oasis-tcs/virtio-spec/issues/163
Signed-off-by: Parav Pandit 
---
changelog:
v11->v12:
- drop changes related to vq number
v9->v10:
- updated commit log to drop reference to old patch
v8->v9:
- reword the sentence to avoid future tense, like rest of the other
  fields description
- reword the sentence to avoid multiple verbs use and put -> uses
- use shorter name 'vq number' instead of 'virtqueue number'
v7->v8:
- remove note about first virtqueue number
- skipped Max's comment to put word 'structure' in same line as its
  crosses 65 chars limit per line
- reworded queue_notification data set line, as '=' and vq number
  wording was odd
v2->v3:
- addressed comments from Michael
- changed vqn to virtqueue number in the Note
- refer to vqn field instead of virtqueue number
---
 transport-pci.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/transport-pci.tex b/transport-pci.tex
index 53c8ee6..42be072 100644
--- a/transport-pci.tex
+++ b/transport-pci.tex
@@ -1011,7 +1011,7 @@ \subsubsection{Device Initialization}\label{sec:Virtio 
Transport Options / Virti
 The driver typically does this as follows, for each virtqueue a device has:
 
 \begin{enumerate}
-\item Write the virtqueue index (first queue is 0) to \field{queue_select}.
+\item Write the virtqueue index to \field{queue_select}.
 
 \item Read the virtqueue size from \field{queue_size}. This controls how big 
the virtqueue is
   (see \ref{sec:Basic Facilities of a Virtio Device / 
Virtqueues}~\nameref{sec:Basic Facilities of a Virtio Device / Virtqueues}). If 
this field is 0, the virtqueue does not exist.
-- 
2.26.2





[virtio-dev] [PATCH v13 11/11] virtio-net: Update vqn to vq_index for cvq cmds

2023-04-11 Thread Parav Pandit
Replace the field name vqn with vq_index for the recent virtqueue level commands.

Fixes: https://github.com/oasis-tcs/virtio-spec/issues/163
Signed-off-by: Parav Pandit 

---
changelog:
v12->v13:
- new patch
---
 device-types/net/description.tex | 24 
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/device-types/net/description.tex b/device-types/net/description.tex
index e4d236e..a4c 100644
--- a/device-types/net/description.tex
+++ b/device-types/net/description.tex
@@ -1560,7 +1560,7 @@ \subsubsection{Control Virtqueue}\label{sec:Device Types 
/ Network Device / Devi
 };
 
 struct virtio_net_ctrl_coal_vq {
-le16 vqn;
+le16 vq_index;
 le16 reserved;
 struct virtio_net_ctrl_coal coal;
 };
@@ -1574,7 +1574,7 @@ \subsubsection{Control Virtqueue}\label{sec:Device Types 
/ Network Device / Devi
 
 Coalescing parameters:
 \begin{itemize}
-\item \field{vqn}: The virtqueue number of an enabled transmit or receive 
virtqueue.
+\item \field{vq_index}: The virtqueue index of an enabled transmit or receive 
virtqueue.
 \item \field{max_usecs} for RX: Maximum number of microseconds to delay a RX 
notification.
 \item \field{max_usecs} for TX: Maximum number of microseconds to delay a TX 
notification.
 \item \field{max_packets} for RX: Maximum number of packets to receive before 
a RX notification.
@@ -1587,7 +1587,7 @@ \subsubsection{Control Virtqueue}\label{sec:Device Types 
/ Network Device / Devi
 \begin{itemize}
 \item For commands VIRTIO_NET_CTRL_NOTF_COAL_TX_SET and 
VIRTIO_NET_CTRL_NOTF_COAL_RX_SET, the structure virtio_net_ctrl_coal is 
write-only for the driver.
 \item For the command VIRTIO_NET_CTRL_NOTF_COAL_VQ_SET, the structure 
virtio_net_ctrl_coal_vq is write-only for the driver.
-\item For the command VIRTIO_NET_CTRL_NOTF_COAL_VQ_GET, \field{vqn} and 
\field{reserved} are write-only
+\item For the command VIRTIO_NET_CTRL_NOTF_COAL_VQ_GET, \field{vq_index} and 
\field{reserved} are write-only
   for the driver, and the structure virtio_net_ctrl_coal is read-only for 
the driver.
 \end{itemize}
 
@@ -1596,9 +1596,9 @@ \subsubsection{Control Virtqueue}\label{sec:Device Types 
/ Network Device / Devi
 \item VIRTIO_NET_CTRL_NOTF_COAL_TX_SET: use the structure virtio_net_ctrl_coal 
to set the \field{max_usecs} and \field{max_packets} parameters for all 
transmit virtqueues.
 \item VIRTIO_NET_CTRL_NOTF_COAL_RX_SET: use the structure virtio_net_ctrl_coal 
to set the \field{max_usecs} and \field{max_packets} parameters for all receive 
virtqueues.
 \item VIRTIO_NET_CTRL_NOTF_COAL_VQ_SET: use the structure 
virtio_net_ctrl_coal_vq to set the \field{max_usecs} and \field{max_packets} 
parameters
-for an enabled transmit/receive 
virtqueue whose number is \field{vqn}.
+for an enabled transmit/receive 
virtqueue whose index is \field{vq_index}.
 \item VIRTIO_NET_CTRL_NOTF_COAL_VQ_GET: use the structure 
virtio_net_ctrl_coal_vq to get the \field{max_usecs} and \field{max_packets} 
parameters
-for an enabled transmit/receive 
virtqueue whose number is \field{vqn}.
+for an enabled transmit/receive 
virtqueue whose index is \field{vq_index}.
 \end{enumerate}
 
 The device may generate notifications more or less frequently than specified 
by set commands of the VIRTIO_NET_CTRL_NOTF_COAL class.
@@ -1608,12 +1608,12 @@ \subsubsection{Control Virtqueue}\label{sec:Device 
Types / Network Device / Devi
 with two pairs of virtqueues as an example:
 Each of the following commands sets \field{max_usecs} and \field{max_packets} 
parameters for virtqueues.
 \begin{itemize}
-\item Command1: VIRTIO_NET_CTRL_NOTF_COAL_RX_SET sets coalescing parameters 
for virtqueues having vqn 0 and vqn 2. Virtqueues having vqn 1 and vqn 3 retain 
their previous parameters.
-\item Command2: VIRTIO_NET_CTRL_NOTF_COAL_VQ_SET with \field{vqn} = 0 sets 
coalescing parameters for virtqueue having vqn 0. Virtqueue having vqn 2 
retains the parameters from command1.
-\item Command3: VIRTIO_NET_CTRL_NOTF_COAL_VQ_GET with \field{vqn} = 0, the 
device responds with coalescing parameters of vqn 0 set by command2.
-\item Command4: VIRTIO_NET_CTRL_NOTF_COAL_VQ_SET with \field{vqn} = 1 sets 
coalescing parameters for virtqueue having vqn 1. Virtqueue having vqn 3 
retains its previous parameters.
-\item Command5: VIRTIO_NET_CTRL_NOTF_COAL_TX_SET sets coalescing parameters 
for virtqueues having vqn 1 and vqn 3, and overrides the parameters set by 
command4.
-\item Command6: VIRTIO_NET_CTRL_NOTF_COAL_VQ_GET with \field{vqn} = 1, the 
device responds with coalescing parameters of vqn 1 set by command5.
+\item Command1: VIRTIO_NET_CTRL_NOTF_COAL_RX_SET sets coalescing parameters 
for virtqueues having index 0 and index 2. Virtqueues having index 1 and index 
3 retain their previous parameters.
+\item Command2: VIRTIO_NET_CTRL_NOTF_COAL_VQ_SET with 

[virtio-dev] [PATCH v13 08/11] transport-ccw: Refer to the vq by its index

2023-04-11 Thread Parav Pandit
Currently the specification uses virtqueue index and
number interchangeably to refer to the virtqueue.

Instead, refer to it by its index.

Fixes: https://github.com/oasis-tcs/virtio-spec/issues/163
Signed-off-by: Parav Pandit 
---
changelog:
v11->v12:
- removed changes related to index
- replace number with index
- added 'only' to make it more clear that
  notification has only vq index
v9->v10:
- replaced queue_select with vqn
- used lower case letter for first word in comment
v8->v9:
- replaced 'named' with 'known'
- replaced 'queue number' with 'vq number'
v3->v4:
- moved note to comment
v2->v3:
- added comment note for queue_select similar to max_queue_size
v0->v1:
- new patch
---
 transport-ccw.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/transport-ccw.tex b/transport-ccw.tex
index cb476c7..86272d1 100644
--- a/transport-ccw.tex
+++ b/transport-ccw.tex
@@ -545,7 +545,7 @@ \subsubsection{Guest->Host Notification}\label{sec:Virtio 
Transport Options / Vi
 \end{tabular}
 
 When VIRTIO_F_NOTIFICATION_DATA has not been negotiated,
-the \field{Notification data} contains the Virtqueue number.
+the \field{Notification data} contains the virtqueue index.
 
 When VIRTIO_F_NOTIFICATION_DATA has been negotiated,
 the value has the following format:
-- 
2.26.2





[virtio-dev] [PATCH v13 05/11] transport-mmio: Rename QueueNum register

2023-04-11 Thread Parav Pandit
The queue size registers are named differently between the pci and mmio
transports. The PCI transport indicates queue size as queue_size.

To bring consistency between pci and mmio transport,
rename the QueueNumMax and QueueNum
registers to QueueSizeMax and QueueSize respectively.
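
A minimal sketch of queue-size negotiation with the renamed registers
(offsets from the MMIO layout below; the MMIO accessors are hypothetical):

#include <stdint.h>

#define QUEUE_SEL      0x030
#define QUEUE_SIZE_MAX 0x034  /* previously QueueNumMax */
#define QUEUE_SIZE     0x038  /* previously QueueNum */

/* hypothetical MMIO accessors */
uint32_t mmio_read32(uint32_t off);
void mmio_write32(uint32_t off, uint32_t val);

/* negotiate the size of virtqueue 'index'; returns the chosen size,
 * or 0 if the queue is not available */
uint32_t setup_queue_size(uint32_t index, uint32_t want)
{
    mmio_write32(QUEUE_SEL, index);

    uint32_t max = mmio_read32(QUEUE_SIZE_MAX);
    if (max == 0)
        return 0;

    uint32_t size = want < max ? want : max;
    mmio_write32(QUEUE_SIZE, size); /* must not exceed QueueSizeMax */
    return size;
}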

Fixes: https://github.com/oasis-tcs/virtio-spec/issues/163
Reviewed-by: Cornelia Huck 
Reviewed-by: Jiri Pirko 
Signed-off-by: Parav Pandit 

---
changelog:
v8->v9:
- added field tag to indicate field name instead of English word
v0->v1:
- replaced references of QueueNumMax to QueueSizeMax
- replaced references of QueueNum to QueueSize
- added note for renamed fields old name suggested by @Michael Tsirkin
---
 transport-mmio.tex | 42 --
 1 file changed, 28 insertions(+), 14 deletions(-)

diff --git a/transport-mmio.tex b/transport-mmio.tex
index f884a2c..164e640 100644
--- a/transport-mmio.tex
+++ b/transport-mmio.tex
@@ -110,24 +110,31 @@ \subsection{MMIO Device Register Layout}\label{sec:Virtio 
Transport Options / Vi
   \hline
   \mmioreg{QueueSel}{Virtual queue index}{0x030}{W}{%
 Writing to this register selects the virtual queue that the
-following operations on \field{QueueNumMax}, \field{QueueNum}, 
\field{QueueReady},
+following operations on \field{QueueSizeMax},
+\field{QueueSize}, \field{QueueReady},
\field{QueueDescLow}, \field{QueueDescHigh}, \field{QueueDriverLow}, 
\field{QueueDriverHigh},
 \field{QueueDeviceLow}, \field{QueueDeviceHigh} and \field{QueueReset} 
apply to. The index
 number of the first queue is zero (0x0).
   }
   \hline
-  \mmioreg{QueueNumMax}{Maximum virtual queue size}{0x034}{R}{%
+  \mmioreg{QueueSizeMax}{Maximum virtual queue size}{0x034}{R}{%
 Reading from the register returns the maximum size (number of
 elements) of the queue the device is ready to process or
 zero (0x0) if the queue is not available. This applies to the
 queue selected by writing to \field{QueueSel}.
+\begin{note}
+\field{QueueSizeMax} was previously known as \field{QueueNumMax}.
+\end{note}
   }
   \hline
-  \mmioreg{QueueNum}{Virtual queue size}{0x038}{W}{%
+  \mmioreg{QueueSize}{Virtual queue size}{0x038}{W}{%
 Queue size is the number of elements in the queue.
 Writing to this register notifies the device what size of the
 queue the driver will use. This applies to the queue selected by
 writing to \field{QueueSel}.
+\begin{note}
+\field{QueueSize} was previously known as \field{QueueNum}.
+\end{note}
   }
   \hline
   \mmioreg{QueueReady}{Virtual queue ready bit}{0x044}{RW}{%
@@ -308,11 +315,11 @@ \subsection{MMIO Device Register Layout}\label{sec:Virtio 
Transport Options / Vi
 
 Before writing to the \field{DriverFeatures} register, the driver MUST write a 
value to the \field{DriverFeaturesSel} register.
 
-The driver MUST write a value to \field{QueueNum} which is less than
-or equal to the value presented by the device in \field{QueueNumMax}.
+The driver MUST write a value to \field{QueueSize} which is less than
+or equal to the value presented by the device in \field{QueueSizeMax}.
 
 When \field{QueueReady} is not zero, the driver MUST NOT access
-\field{QueueNum}, \field{QueueDescLow}, \field{QueueDescHigh},
+\field{QueueSize}, \field{QueueDescLow}, \field{QueueDescHigh},
 \field{QueueDriverLow}, \field{QueueDriverHigh}, \field{QueueDeviceLow}, 
\field{QueueDeviceHigh}.
 
 To stop using the queue the driver MUST write zero (0x0) to this
@@ -363,14 +370,14 @@ \subsubsection{Virtqueue Configuration}\label{sec:Virtio 
Transport Options / Vir
and expect a returned value of zero (0x0).
 
 \item Read maximum queue size (number of elements) from
-   \field{QueueNumMax}. If the returned value is zero (0x0) the
+   \field{QueueSizeMax}. If the returned value is zero (0x0) the
queue is not available.
 
 \item Allocate and zero the queue memory, making sure the memory
is physically contiguous.
 
 \item Notify the device about the queue size by writing the size to
-   \field{QueueNum}.
+   \field{QueueSize}.
 
 \item Write physical addresses of the queue's Descriptor Area,
Driver Area and Device Area to (respectively) the
@@ -465,25 +472,32 @@ \subsection{Legacy interface}\label{sec:Virtio Transport 
Options / Virtio Over M
   \hline
   \mmioreg{QueueSel}{Virtual queue index}{0x030}{W}{%
 Writing to this register selects the virtual queue that the
-following operations on the \field{QueueNumMax}, \field{QueueNum}, 
\field{QueueAlign}
+following operations on the \field{QueueSizeMax},
+\field{QueueSize}, \field{QueueAlign}
 and \field{QueuePFN} registers apply to. The index
 number of the first queue is zero (0x0).
 .
   }
   \hline
-  \mmioreg{QueueNumMax}{Maximum virtual queue size}{0x034}{R}{%
+  \mmioreg{QueueSizeMax}{Maximum virtual queue size}{0x034}{R}{%
 Reading from the register returns the maximum size of the queue
 the device is ready to process or zero 

[virtio-dev] [PATCH v13 07/11] transport-ccw: Rename queue depth/size to other transports

2023-04-11 Thread Parav Pandit
The max_num field reflects the maximum queue size/depth. Hence, align the
name of this field with the similar field in the PCI and MMIO transports by
renaming it to max_queue_size.
Similarly, rename 'num' to 'size'.

Fixes: https://github.com/oasis-tcs/virtio-spec/issues/163
Signed-off-by: Parav Pandit 

---
changelog:
v9->v10:
- used lower case letter for comment first word
v8->v9:
- replaced 'named' as 'known'
v3->v4:
- moved note to comment
---
 transport-ccw.tex | 13 +++--
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/transport-ccw.tex b/transport-ccw.tex
index c492cb9..cb476c7 100644
--- a/transport-ccw.tex
+++ b/transport-ccw.tex
@@ -237,12 +237,12 @@ \subsubsection{Configuring a Virtqueue}\label{sec:Virtio 
Transport Options / Vir
 \begin{lstlisting}
 struct vq_config_block {
 be16 index;
-be16 max_num;
+be16 max_queue_size; /* previously known as max_num */
 };
 \end{lstlisting}
 
 The requested number of buffers for queue \field{index} is returned in
-\field{max_num}.
+\field{max_queue_size}.
 
 Afterwards, CCW_CMD_SET_VQ is issued by the driver to inform the
 device about the location used for its queue. The transmitted
@@ -253,7 +253,7 @@ \subsubsection{Configuring a Virtqueue}\label{sec:Virtio 
Transport Options / Vir
 be64 desc;
 be32 res0;
 be16 index;
-be16 num;
+be16 size; /* previously known as num */
 be64 driver;
 be64 device;
 };
@@ -262,7 +262,7 @@ \subsubsection{Configuring a Virtqueue}\label{sec:Virtio 
Transport Options / Vir
 \field{desc}, \field{driver} and \field{device} contain the guest
 addresses for the descriptor area,
 available area and used area for queue \field{index}, respectively. The actual
-virtqueue size (number of allocated buffers) is transmitted in \field{num}.
+virtqueue size (number of allocated buffers) is transmitted in \field{size}.
 
 \devicenormative{\paragraph}{Configuring a Virtqueue}{Virtio Transport Options 
/ Virtio over channel I/O / Device Initialization / Configuring a Virtqueue}
 
@@ -278,11 +278,12 @@ \subsubsection{Configuring a Virtqueue}\label{sec:Virtio 
Transport Options / Vir
 be64 queue;
 be32 align;
 be16 index;
-be16 num;
+be16 size; /* previously known as num */
 };
 \end{lstlisting}
 
-\field{queue} contains the guest address for queue \field{index}, \field{num} 
the number of buffers
+\field{queue} contains the guest address for queue \field{index},
+\field{size} the number of buffers
 and \field{align} the alignment. The queue layout follows \ref{sec:Basic 
Facilities of a Virtio Device / Virtqueues / Legacy Interfaces: A Note on 
Virtqueue Layout}~\nameref{sec:Basic Facilities of a Virtio Device / Virtqueues 
/ Legacy Interfaces: A Note on Virtqueue Layout}.
 
 \subsubsection{Communicating Status Information}\label{sec:Virtio Transport 
Options / Virtio over channel I/O / Device Initialization / Communicating 
Status Information}
-- 
2.26.2





[virtio-dev] [PATCH v13 01/11] content: Add vq index text

2023-04-11 Thread Parav Pandit
Introduce vq index and its range so that subsequent patches can refer
to it.

Fixes: https://github.com/oasis-tcs/virtio-spec/issues/163
Signed-off-by: Parav Pandit 
---
changelog:
v12->v13:
- avoid virtqueue -> vq abbreviation
- removed Cornelia's reviewed-by due to vq abbreviation change
v11->v12:
- renamed 'number' to 'index'
v9->v10:
- added braces around vq number wording
- added vqn as another term for vq number
v8->v9:
- added inclusive when describing the vq number range
- skipped comment to put virtqueue number wording first because we
  prefer to use shorter vq number as much as possible
v5->v6:
- moved description close to introduction, it was in middle of
  queue data transfer description
v2->v3:
- new patch
---
 content.tex | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/content.tex b/content.tex
index cff548a..43be58b 100644
--- a/content.tex
+++ b/content.tex
@@ -298,6 +298,9 @@ \section{Virtqueues}\label{sec:Basic Facilities of a Virtio 
Device / Virtqueues}
 virtqueues\footnote{For example, the simplest network device has one virtqueue 
for
 transmit and one for receive.}.
 
+Each virtqueue is identified by a virtqueue index; virtqueue index range is
+from 0 to 65535 inclusive.
+
 Driver makes requests available to device by adding
 an available buffer to the queue, i.e., adding a buffer
 describing the request to a virtqueue, and optionally triggering
-- 
2.26.2





[virtio-dev] [PATCH v13 03/11] content: Rename confusing queue_notify_data and vqn names

2023-04-11 Thread Parav Pandit
Currently the queue_notify_data register indicates the device
internal queue notification content. This register is
meaningful only when the feature bit VIRTIO_F_NOTIF_CONFIG_DATA is
negotiated.

However, the above register name often gets confusingly associated
with the very similar feature bit VIRTIO_F_NOTIFICATION_DATA.

When the VIRTIO_F_NOTIFICATION_DATA feature bit is negotiated, the
notification really involves sending additional queue progress
related information (not a queue identifier or index).

Hence
1. to avoid any misunderstanding and association of
queue_notify_data with similar name VIRTIO_F_NOTIFICATION_DATA,

and
2. to reflect that queue_notify_data is the actual device
internal virtqueue identifier/index/data/cookie,

a. rename queue_notify_data to queue_notify_config_data.

b. rename ambiguous vqn to a union of vq_index and vq_config_data

c. The driver notification section assumes that the queue notification
always contains the vq index. The CONFIG_DATA feature bit introduction
missed updating the driver notification section. Hence, correct it.
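
A minimal sketch (illustrative names) of the driver-side choice being
documented here:

#include <stdint.h>
#include <stdbool.h>

struct vq {
    uint16_t index;                    /* virtqueue index */
    uint16_t queue_notify_config_data; /* read from common config when
                                          VIRTIO_F_NOTIF_CONFIG_DATA is
                                          negotiated */
};

/* value placed in an available buffer notification when
 * VIRTIO_F_NOTIFICATION_DATA is not negotiated */
uint16_t notify_value(const struct vq *vq, bool notif_config_data)
{
    return notif_config_data ? vq->queue_notify_config_data : vq->index;
}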

Signed-off-by: Parav Pandit 

---
changelog:
v12->v13:
- replace _id with _config_data
- dropped vq identifier
- dropped the idea of union as description is for config data feature
v11->v12:
- new patch
---
 content.tex| 11 ---
 notifications-be.c |  2 +-
 notifications-le.c |  2 +-
 transport-pci.tex  | 26 --
 4 files changed, 26 insertions(+), 15 deletions(-)

diff --git a/content.tex b/content.tex
index e79d722..601c069 100644
--- a/content.tex
+++ b/content.tex
@@ -404,8 +404,12 @@ \section{Driver Notifications} \label{sec:Basic Facilities 
of a Virtio Device /
 notification to the device.
 
 When VIRTIO_F_NOTIFICATION_DATA has not been negotiated,
-this notification involves sending the
-virtqueue index to the device (method depending on the transport).
+this notification contains either virtqueue
+index if VIRTIO_F_NOTIF_CONFIG_DATA is not negotiated or device
+supplied virtqueue notification data if
+VIRTIO_F_NOTIF_CONFIG_DATA is negotiated.
+
+A method to supply such virtqueue notification data is transport specific.
 
 However, some devices benefit from the ability to find out the
 amount of available data in the queue without accessing the virtqueue in 
memory:
@@ -416,7 +420,8 @@ \section{Driver Notifications} \label{sec:Basic Facilities 
of a Virtio Device /
 the following information:
 
 \begin{description}
-\item [vqn] VQ number to be notified.
+\item [vq_index_config_data] either virtqueue index or device supplied
+  queue notification config data corresponding to a virtqueue.
 \item [next_off] Offset
   within the ring where the next available ring entry
   will be written.
diff --git a/notifications-be.c b/notifications-be.c
index 5be947e..02f0624 100644
--- a/notifications-be.c
+++ b/notifications-be.c
@@ -1,5 +1,5 @@
 be32 {
-   vqn : 16;
+   vq_config_data: 16; /* previously known as vqn */
next_off : 15;
next_wrap : 1;
 };
diff --git a/notifications-le.c b/notifications-le.c
index fe51267..f73c6a5 100644
--- a/notifications-le.c
+++ b/notifications-le.c
@@ -1,5 +1,5 @@
 le32 {
-   vqn : 16;
+   vq_config_data: 16; /* previously known as vqn */
next_off : 15;
next_wrap : 1;
 };
diff --git a/transport-pci.tex b/transport-pci.tex
index 5d98467..53c8ee6 100644
--- a/transport-pci.tex
+++ b/transport-pci.tex
@@ -319,7 +319,7 @@ \subsubsection{Common configuration structure 
layout}\label{sec:Virtio Transport
 le64 queue_desc;/* read-write */
 le64 queue_driver;  /* read-write */
 le64 queue_device;  /* read-write */
-le16 queue_notify_data; /* read-only for driver */
+le16 queue_notify_config_data;  /* read-only for driver */
 le16 queue_reset;   /* read-write */
 };
 \end{lstlisting}
@@ -388,17 +388,21 @@ \subsubsection{Common configuration structure 
layout}\label{sec:Virtio Transport
 \item[\field{queue_device}]
 The driver writes the physical address of Device Area here.  See 
section \ref{sec:Basic Facilities of a Virtio Device / Virtqueues}.
 
-\item[\field{queue_notify_data}]
+\item[\field{queue_notify_config_data}]
 This field exists only if VIRTIO_F_NOTIF_CONFIG_DATA has been 
negotiated.
-The driver will use this value to put it in the 'virtqueue number' 
field
-in the available buffer notification structure.
+The driver will use this value when driver sends available buffer
+notification to the device.
 See section \ref{sec:Virtio Transport Options / Virtio Over PCI Bus / 
PCI-specific Initialization And Device Operation / Available Buffer 
Notifications}.
 \begin{note}
 This field provides the device with flexibility to determine how 
virtqueues
 will be referred to in available buffer notifications.
-In a trivial case the device can set 

RE: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Parav Pandit

> From: virtio-dev@lists.oasis-open.org  On
> Behalf Of Jason Wang
> Sent: Monday, April 10, 2023 11:29 PM

> > However, it is not backward compatible, if the device place them in
> > extended capability, it will not work.
> >
> 
> It is kind of intended since it is only used for new PCI-E features:
> 
New fields in a new extended PCI cap area are fine.
Migrating old fields to the new extended PCI cap is not your intention,
right?

> "
> +The location of the virtio structures that depend on the PCI Express
> +capability are specified using a vendor-specific extended capabilities
> +on the extended capabilities list in PCI Express extended configuration
> +space of the device.
> "
> 
> > To make it backward compatible, a device needs to expose existing
> > structure in legacy area. And extended structure for same capability
> > in extended pci capability region.
> >
> > In other words, it will have to be a both places.
> 
> Then we will run out of config space again? 
No.
Only the currently defined caps need to be placed in two places.
New fields don't need to be placed in the PCI cap area, because no driver is
looking there.

We have probably already discussed this in a previous email by now.

> Otherwise we need to deal with the
> case when existing structures were only placed at extended capability. Michael
> suggest to add a new feature, but the driver may not negotiate the feature
> which requires more thought.
> 
Not sure I understand the feature bit.
The existence of PCI transport fields usually does not depend on the upper layer protocol.

> > We may need it even sooner than this because the AQ patch is expanding
> > the structure located in legacy area.
> 
> Just to make sure I understand this, assuming we have adminq, any reason a
> dedicated pcie ext cap is required?
> 
No, that was short-sightedness on my part. I responded right after the above text
that the AQ doesn't need the cap extension.


[virtio-dev] RE: [PATCH v11 04/10] admin: introduce virtio admin virtqueues

2023-04-11 Thread Parav Pandit



 From: Michael S. Tsirkin 
> Sent: Tuesday, April 11, 2023 10:01 AM
> 
> On Tue, Apr 11, 2023 at 01:39:32PM +, Parav Pandit wrote:
> > > From: Michael S. Tsirkin 
> > > Sent: Monday, April 10, 2023 4:05 PM
> > > > >
> > > > With my limited knowledge, when written for third person doing the
> adding
> > > entries to the queue, "enqueue" is suitable.
> > >
> > > Look it up in a dictionary.
> >
> > Queue is verb and noun both.
> > As opposed to that enqueue is not ambiguous being only verb.
> >
> > When "driver queues" is read, it needs tiny more efforts to realize that
> queues here is verb and not plural driver queues.
> >
> > So, without strong opinion, enqueue is preferred but it's up to you.
> 
> If you feel like this feel free to change all of text to enqueue.

Ok. Thanks.
Will change. AQ series doesn't need to wait for this cleanup.




[virtio-dev] RE: [virtio-comment] Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Parav Pandit



> From: Michael S. Tsirkin 
> Sent: Tuesday, April 11, 2023 10:10 AM
> 
> well, of course, my question was for a new, future device, can we put all
> capabilities in the extended space? will we be able to teach bios to access
> these?
I cannot speak for BIOS vendors,
but from the PCI spec side, extended capabilities have existed for more than a
decade. ACS, SR-IOV, etc. are part of the extended space.

We still need to teach the BIOS where to look for the new device.
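As a rough C sketch of what such a scan involves (extended capabilities start at configuration space offset 0x100, above the 256 bytes a legacy BIOS walks; the pci_cfg_read32() accessor is hypothetical):

#include <stdint.h>

#define PCI_EXT_CAP_START   0x100   /* extended caps begin above the classic 256 bytes */
#define PCI_EXT_CAP_ID_VNDR 0x000b  /* vendor-specific extended capability ID */

extern uint32_t pci_cfg_read32(uint16_t offset); /* hypothetical accessor */

/* Return the offset of the first extended capability with the given
 * ID, or 0 if the device has no such capability. Each header dword
 * holds the cap ID in bits 15:0 and the next pointer in bits 31:20. */
static uint16_t find_ext_cap(uint16_t cap_id)
{
    uint16_t offset = PCI_EXT_CAP_START;

    while (offset) {
        uint32_t hdr = pci_cfg_read32(offset);

        if (hdr == 0 || hdr == 0xffffffff)  /* empty list or absent device */
            return 0;
        if ((hdr & 0xffff) == cap_id)
            return offset;
        offset = (hdr >> 20) & 0xffc;       /* next pointer, dword aligned */
    }
    return 0;
}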





Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Michael S. Tsirkin
On Tue, Apr 11, 2023 at 05:07:11PM +0800, Jason Wang wrote:
> > now, imagine someone building a new device. if existing drivers are not
> > a concern, it is possible to move capabilities all to extended space. is
> > that possible while keeping the bios working?
> 
> This is possible but I'm not sure it's worthwhile. What happens if the
> device puts all capabilities in the extended space but the bios can't
> scan there?

that's my question. why can't bios scan there?

-- 
MST





[virtio-dev] Re: [virtio-comment] Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Michael S. Tsirkin
On Tue, Apr 11, 2023 at 02:07:27PM +, Parav Pandit wrote:
> 
> 
> > From: virtio-comm...@lists.oasis-open.org On Behalf Of Michael S. Tsirkin
> 
> > > >
> > > > now, imagine someone building a new device. if existing drivers are
> > > > not a concern, it is possible to move capabilities all to extended
> > > > space. is that possible while keeping the bios working?
> > > >
> > > unlikely as bios is looking in the legacy section.
> > 
> > what do you mean by the legacy section?
> >
> Existing virtio PCI capabilities which are in the PCI capabilities region in 
> first 256 bytes of the PCI config space.

well, of course, my question was for a new, future device, can
we put all capabilities in the extended space? will
we be able to teach bios to access these?

> > > New capabilities like legacy (and pasid) has no backward compat
> > > requirement from bios and drivers.
> > > so they can start from new.
> > 
> > sure.





[virtio-dev] RE: [virtio-comment] Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Parav Pandit



> From: virtio-comm...@lists.oasis-open.org On Behalf Of Michael S. Tsirkin

> > >
> > > now, imagine someone building a new device. if existing drivers are
> > > not a concern, it is possible to move capabilities all to extended
> > > space. is that possible while keeping the bios working?
> > >
> > unlikely as bios is looking in the legacy section.
> 
> what do you mean by the legacy section?
>
Existing virtio PCI capabilities, which are in the PCI capabilities region in the
first 256 bytes of the PCI config space.
 
> > New capabilities like legacy (and pasid) has no backward compat
> > requirement from bios and drivers.
> > so they can start from new.
> 
> sure.




Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Michael S. Tsirkin
On Tue, Apr 11, 2023 at 09:47:50AM -0400, Parav Pandit wrote:
> 
> 
> On 4/11/2023 3:00 AM, Michael S. Tsirkin wrote:
> 
> > > I meant, it depends on the capability semantics. Both PASID and legacy
> > > MMIO don't need to be accessed by BIOS. We can't change legacy BIOS to
> > > use legacy MMIO bars.
> > > 
> > > Thanks
> > 
> > makes sense.
> > 
> > 
> > now, imagine someone building a new device. if existing drivers are not
> > a concern, it is possible to move capabilities all to extended space. is
> > that possible while keeping the bios working?
> > 
> unlikely as bios is looking in the legacy section.

what do you mean by the legacy section?

> New capabilities like legacy (and pasid) has no backward compat requirement
> from bios and drivers.
> so they can start from new.

sure.

-- 
MST





[virtio-dev] Re: [PATCH v11 04/10] admin: introduce virtio admin virtqueues

2023-04-11 Thread Michael S. Tsirkin
On Tue, Apr 11, 2023 at 01:39:32PM +, Parav Pandit wrote:
> > From: Michael S. Tsirkin 
> > Sent: Monday, April 10, 2023 4:05 PM
> > > >
> > > With my limited knowledge, when written for third person doing the adding
> > entries to the queue, "enqueue" is suitable.
> > 
> > Look it up in a dictionary.
> 
> Queue is verb and noun both.
> As opposed to that enqueue is not ambiguous being only verb.
> 
> When "driver queues" is read, it needs tiny more efforts to realize that 
> queues here is verb and not plural driver queues.
> 
> So, without strong opinion, enqueue is preferred but it's up to you.

If you feel like this feel free to change all of text to enqueue.

-- 
MST





RE: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Parav Pandit


> From: Jason Wang 
> Sent: Tuesday, April 11, 2023 5:07 AM

> This is possible but I'm not sure it's worthwhile. What happens if the device
> puts all capabilities in the extended space but the bios can't scan there? We 
> can
> place them at both but it then doesn't address the out of space issue. Things
> will be easier if we allow new features/capabilities to be placed on the
> extended space.

+1.
The driver-side code to look in the extended PCI area for a new cap is
negligible, since extended caps are far more common outside of virtio.


[virtio-dev] RE: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-11 Thread Parav Pandit

> From: Jason Wang 
> Sent: Tuesday, April 11, 2023 5:02 AM

> > The point of the capability is to allow using modern device ID so such
> > guests will not even try to bind.
> 
> It means a mediation layer is required to use. Then it's not an issue for this
> simple relaxing any more?
> 
It is desired for the best performance, because the 2-byte notification register is
sandwiched in the middle of the legacy configuration registers.
So, as described in patch 10, a mediation layer will be able to use a dedicated
notification region.

Secondly, PCI effectively has only 3 BARs.
It is better to reuse one of the existing BARs when possible, depending on the
features the device offers.
So it is better not to always hard-code BAR0.

> An advantage of such relaxing is that, for the legacy drivers like an ancient 
> Linux
> version that can already do MMIO access to legacy BAR it can work without any
> mediation.
Maybe; such MMIO BAR mapping to user space will need a page-aligned BAR instead
of 32B or 64B
(you asked about 4K in the other thread).
It is unlikely to be performant due to the intermixing of notification and config
registers.

So letting it be driven by the notification region is better; maybe some devices can
map it to the same notification register in the middle of the config area.
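For reference, a small C sketch of the 1.x per-queue notification address computation that a dedicated notification region relies on (the formula comes from the PCI notify capability; the names here are illustrative):

#include <stdint.h>

/* Per the 1.x PCI transport, a queue's notification address is
 *   BAR base + notify cap offset + queue_notify_off * notify_off_multiplier
 * with queue_notify_off read from common config for that queue. A
 * device that sets notify_off_multiplier to a page size gives each
 * queue its own page, which is what makes mapping the notification
 * region directly into a guest or user process practical. */
static inline volatile uint16_t *
vq_notify_addr(uint8_t *bar_base, uint32_t cap_offset,
               uint16_t queue_notify_off, uint32_t notify_off_multiplier)
{
    return (volatile uint16_t *)
        (bar_base + cap_offset +
         (uint64_t)queue_notify_off * notify_off_multiplier);
}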


Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Parav Pandit




On 4/11/2023 3:00 AM, Michael S. Tsirkin wrote:

> > I meant, it depends on the capability semantics. Both PASID and legacy
> > MMIO don't need to be accessed by BIOS. We can't change legacy BIOS to
> > use legacy MMIO bars.
> > 
> > Thanks
> 
> makes sense.
> 
> now, imagine someone building a new device. if existing drivers are not
> a concern, it is possible to move capabilities all to extended space. is
> that possible while keeping the bios working?
> 
Unlikely, as the BIOS is looking in the legacy section.

New capabilities like legacy (and PASID) have no backward compat
requirement from the BIOS and drivers, so they can start fresh.




[virtio-dev] RE: [PATCH v11 04/10] admin: introduce virtio admin virtqueues

2023-04-11 Thread Parav Pandit
> From: Michael S. Tsirkin 
> Sent: Monday, April 10, 2023 4:05 PM
> > >
> > With my limited knowledge, when written for third person doing the adding
> entries to the queue, "enqueue" is suitable.
> 
> Look it up in a dictionary.

Queue is verb and noun both.
As opposed to that enqueue is not ambiguous being only verb.

When "driver queues" is read, it needs tiny more efforts to realize that queues 
here is verb and not plural driver queues.

So, without strong opinion, enqueue is preferred but it's up to you.




[virtio-dev] RE: [virtio-comment] RE: [virtio-dev] RE: [PATCH v12 03/10] content: Rename confusing queue_notify_data and vqn names

2023-04-11 Thread Parav Pandit


> From: Cornelia Huck 
> Sent: Tuesday, April 11, 2023 4:56 AM

> 
> Yes, please leave it as F_CONFIG_DATA, as we're just putting some "data"
> there in the end (and F_CONFIG_COOKIE might indeed be confusing for the
> ccw case.)

Since Halil didn't respond for 5+ days, and Michael and you propose to continue
using CONFIG_DATA, and this is a rarely used field, I will rename

vqn to union of

vq_index
vq_config_data

Thanks. Will roll v13.
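A sketch of what the renamed structure could look like with that union, in the style of notifications-le.c (an illustration of the plan above, not the final v13 text):

le32 {
    union {
        vq_index : 16;        /* VIRTIO_F_NOTIF_CONFIG_DATA not negotiated */
        vq_config_data : 16;  /* VIRTIO_F_NOTIF_CONFIG_DATA negotiated */
    };
    next_off : 15;
    next_wrap : 1;
};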




[virtio-dev] Re: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-11 Thread Yan Vugenfirer




> On 11 Apr 2023, at 13:39, Michael S. Tsirkin  wrote:
> 
> On Tue, Apr 11, 2023 at 12:19:14PM +0300, Yan Vugenfirer wrote:
>>> On Tue, Apr 11, 2023 at 12:02 PM Jason Wang  wrote:
>>> 
>>> On Tue, Apr 11, 2023 at 3:04 PM Michael S. Tsirkin  wrote:
 
 On Tue, Apr 11, 2023 at 10:13:40AM +0800, Jason Wang wrote:
> On Mon, Apr 10, 2023 at 6:06 PM Michael S. Tsirkin  
> wrote:
>> 
>> On Mon, Apr 10, 2023 at 03:20:52PM +0800, Jason Wang wrote:
>>> On Mon, Apr 10, 2023 at 2:40 PM Michael S. Tsirkin  
>>> wrote:
 
 On Mon, Apr 10, 2023 at 02:20:16PM +0800, Jason Wang wrote:
> On Mon, Apr 10, 2023 at 2:15 PM Michael S. Tsirkin  
> wrote:
>> 
>> On Mon, Apr 10, 2023 at 09:33:32AM +0800, Jason Wang wrote:
>>> This is fine for vDPA but not for virtio if the design can only work
>>> for some specific setups (OSes/archs).
>>> 
>>> Thanks
>> 
>> Well virtio legacy has a long history of documenting existing hacks 
>> :)
> 
> Exactly, so the legacy behaviour is not (or can't be) defined by the
> spec but the codes.
 
 I mean driver behaviour derives from the code but we do document it in
 the spec to help people build devices.
 
 
>> But yes, VIRTIO_F_ORDER_PLATFORM has to be documented.
>> And we have to decide what to do about ACCESS_PLATFORM since
>> there's a security problem if device allows not acking it.
>> Two options:
>> - relax the rules a bit and say device will assume ACCESS_PLATFORM
>>  is acked anyway
> 
> This will break legacy drivers which assume physical addresses.
 
 not that they are not already broken.
>>> 
>>> I may miss something, the whole point is to allow legacy drivers to
>>> run otherwise a modern device is sufficient?
>> 
>> yes and if legacy drivers don't work in a given setup then we
>> should not worry.
>> 
 
>> - a new flag that is insecure (so useful for sec but useless for 
>> dpdk) but optional
> 
> This looks like a new "hack" for the legacy hacks.
 
 it's not just for legacy.
>>> 
>>> We have the ACCESS_PLATFORM feature bit, what is the useage for this 
>>> new flag?
>> 
>> 
>> ACCESS_PLATFORM is also a security boundary. so devices must fail
>> negotiation if it's not there. this new one won't be.
>> 
>> 
 
> And what about ORDER_PLATFORM, I don't think we can modify legacy 
> drivers...
> 
> Thanks
 
 You play some tricks with shadow VQ I guess.
>>> 
>>> Do we really want to add a new feature in the virtio spec that can
>>> only work with the datapath mediation?
>>> 
>>> Thanks
>> 
>> As long as a feature is useful and can't be supported otherwise
>> we are out of options.
> 
> Probably not? Is it as simple as relaxing this:
> 
> "Transitional devices MUST expose the Legacy Interface in I/O space in 
> BAR0."
> 
> To allow memory space.
> 
> This works for both software and hardware devices (I had a handy
> hardware that supports legacy virtio drivers in this way).
> 
> Thanks
 
 Yes it is certainly simpler.
 
 Question: what happens if you try to run existing windows guests or dpdk
 on these? Do they crash horribly or exit gracefully?
>>> 
>>> Haven't tried DPDK and windows. But I remember DPDK supported legacy
>>> MMIO bars for a while.
>> 
>> Regarding Windows drivers:
>> 1. We are acking VIRTIO_F_ACCESS_PLATFORM in the driver. But, if you
>> remember the "ATS" issue (Windows is either not detecting it or, even
>> if detected, not using it) - then actually, we are not forcing Windows
>> to remap the memory because Window fails to work with it correctly.
>> 2. Current Windows drivers implementation doesn't support MMIO bars.
>> We can enable the support if needed.
>> 
>> Best regards,
>> Yan.
> 
> The question was about old legacy drivers, not modern ones.
> What if they attach to a device and BAR0
> is a memory not an IO bar?
> Will they fail gracefully or crash?
The drivers should fail to load gracefully. There will be no crash, except
when virtio-blk or virtio-scsi is the boot device.

> 
> 
>> 
>>> 
>>> Adding Maxime and Yan for comments here.
>>> 
 
 The point of the capability is to allow using modern device ID so such
 guests will not even try to bind.
>>> 
>>> It means a mediation layer is required to use. Then it's not an issue
>>> for this simple relaxing any more?
>>> 
>>> An advantage of such relaxing is that, for the legacy drivers like an
>>> ancient Linux version that can already do MMIO access to legacy BAR it
>>> can work without any mediation.
>>> 
>>> At least, if ID 

Re: [virtio-dev] [PATCH v14] virtio-net: support the virtqueue coalescing moderation

2023-04-11 Thread Cornelia Huck
On Tue, Apr 11 2023, "Michael S. Tsirkin"  wrote:

> On Tue, Apr 11, 2023 at 10:05:11AM +0200, Cornelia Huck wrote:
>> On Thu, Mar 23 2023, Heng Qi  wrote:
>> 
>> > Hi all! If after reviewing it you feel it meets the necessary 
>> > requirements, I kindly request a vote.
>> >
>> > Thanks a lot!
>> >
>> > Fixes: https://github.com/oasis-tcs/virtio-spec/issues/166
>> 
>> So this has been voted upon and accepted, but we have further discussion
>> downthread.
>> 
>> Should we merge the voted-upon version now and do further changes on
>> top, or wait for the discussion to conclude first so that we can merge
>> all of it in one go?
>
> I think we should merge as is. vqn/idx/id are not new discussions
> and have been going on since before voting took place, so the TC
> was informed. Parav is working on making tree-wide changes
> around the vq index; applying this will not make his work
> measurably harder.

I had not looked at the discussion in detail and was unsure whether it
was covering something new... I went ahead and applied it.





Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Michael S. Tsirkin
On Tue, Apr 11, 2023 at 05:07:11PM +0800, Jason Wang wrote:
> On Tue, Apr 11, 2023 at 3:00 PM Michael S. Tsirkin  wrote:
> >
> > On Tue, Apr 11, 2023 at 10:19:39AM +0800, Jason Wang wrote:
> > > On Mon, Apr 10, 2023 at 6:04 PM Michael S. Tsirkin  
> > > wrote:
> > > >
> > > > On Mon, Apr 10, 2023 at 03:16:46PM +0800, Jason Wang wrote:
> > > > > On Mon, Apr 10, 2023 at 2:24 PM Michael S. Tsirkin  
> > > > > wrote:
> > > > > >
> > > > > > On Mon, Apr 10, 2023 at 09:36:17AM +0800, Jason Wang wrote:
> > > > > > > On Fri, Mar 31, 2023 at 7:00 AM Parav Pandit  
> > > > > > > wrote:
> > > > > > > >
> > > > > > > > PCI device configuration space for capabilities is limited to 
> > > > > > > > only 192
> > > > > > > > bytes shared by many PCI capabilities of generic PCI device and 
> > > > > > > > virtio
> > > > > > > > specific.
> > > > > > > >
> > > > > > > > Hence, introduce virtio extended capability that uses PCI 
> > > > > > > > Express
> > > > > > > > extended capability.
> > > > > > > > Subsequent patch uses this virtio extended capability.
> > > > > > > >
> > > > > > > > Co-developed-by: Satananda Burla 
> > > > > > > > Signed-off-by: Parav Pandit 
> > > > > > >
> > > > > > > Can you explain the differences compared to what I've used to 
> > > > > > > propose?
> > > > > > >
> > > > > > > https://www.mail-archive.com/virtio-dev@lists.oasis-open.org/msg08078.html
> > > > > > >
> > > > > > > This can save time for everybody.
> > > > > > >
> > > > > > > Thanks
> > > > > >
> > > > > > BTW another advantage of extended capabilities is - these are 
> > > > > > actually
> > > > > > cheaper to access from a VM than classic config space.
> > > > >
> > > > > Config space/BAR is allowed by both of the proposals or anything I 
> > > > > missed?
> > > > >
> > > > > >
> > > > > >
> > > > > > Several points
> > > > > > - I don't like it that yours is 32 bit. We do not need 2 variants 
> > > > > > just
> > > > > >   make it all 64 bit
> > > > >
> > > > > That's fine.
> > > > >
> > > > > > - We need to document that if driver does not scan extended 
> > > > > > capbilities it will not find them.
> > > > >
> > > > > This is implicit since I remember we don't have such documentation for
> > > > > pci capability, anything makes pcie special?
> > > >
> > > > yes - the fact that there are tons of existing drivers expecting
> > > > everything in standard space.
> > > >
> > > >
> > > > > >   And existing drivers do not scan them. So what is safe
> > > > > >   to put there? vendor specific? extra access types?
> > > > >
> > > > > For PASID at least, since it's a PCI-E feature, vendor specific should
> > > > > be fine. Not sure about legacy MMIO then.
> > > > >
> > > > > >   Can we make scanning these mandatory in future drivers? future 
> > > > > > devices?
> > > > > >   I guess we can add a feature bit to flag that.
> > > > >
> > > > > For PASID, it doesn't need this, otherwise we may duplicate transport
> > > > > specific features.
> > > >
> > > > i don't get it. what does PASID have to do with it?
> > >
> > > My proposal is to allow PASID capability to be placed on top.
> >
> > Assuming you mean a patch applied on top of this one.
> >
> > > So what
> > > I meant is:
> > >
> > > if the driver needs to use PASID, it needs to scan extend capability
> > >
> > > So it is only used for future drivers. I think this applies to legacy
> > > MMIO as well.
> >
> > sure
> >
> > > > A new feature will allow clean split at least:
> > > > we make any new features and new devices that expect
> > > > express capability depend on this new feature bit.
> > > >
> > > > > >   Is accessing these possible from bios?
> > > > >
> > > > > Not at least for the two use cases now PASID or legacy MMIO.
> > > >
> > > > can't parse english here. what does this mean?
> > >
> > > I meant, it depends on the capability semantics. Both PASID and legacy
> > > MMIO don't need to be accessed by BIOS. We can't change legacy BIOS to
> > > use legacy MMIO bars.
> > >
> > > Thanks
> >
> > makes sense.
> >
> >
> > now, imagine someone building a new device. if existing drivers are not
> > a concern, it is possible to move capabilities all to extended space. is
> > that possible while keeping the bios working?
> 
> This is possible but I'm not sure it's worthwhile. What happens if the
> device puts all capabilities in the extended space but the bios can't
> scan there?

that's my question. can seabios access extended caps?

> We can place them at both but it then doesn't address the
> out of space issue. Things will be easier if we allow new
> features/capabilities to be placed on the extended space.
> 
> Thanks
> 
> >
> > > >
> > > >
> > > > > >
> > > > > > So I like this one better as a basis - care reviewing it and adding
> > > > > > stuff?
> > > > >
> > > > > There are very few differences and I will have a look.
> > > > >
> > > > > Thanks
> > > > >
> > > > > >
> > > > > > --
> > > > > > MST
> > > > > >
> > > >
> >



[virtio-dev] Re: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-11 Thread Michael S. Tsirkin
On Tue, Apr 11, 2023 at 05:01:59PM +0800, Jason Wang wrote:
> On Tue, Apr 11, 2023 at 3:04 PM Michael S. Tsirkin  wrote:
> >
> > On Tue, Apr 11, 2023 at 10:13:40AM +0800, Jason Wang wrote:
> > > On Mon, Apr 10, 2023 at 6:06 PM Michael S. Tsirkin  
> > > wrote:
> > > >
> > > > On Mon, Apr 10, 2023 at 03:20:52PM +0800, Jason Wang wrote:
> > > > > On Mon, Apr 10, 2023 at 2:40 PM Michael S. Tsirkin  
> > > > > wrote:
> > > > > >
> > > > > > On Mon, Apr 10, 2023 at 02:20:16PM +0800, Jason Wang wrote:
> > > > > > > On Mon, Apr 10, 2023 at 2:15 PM Michael S. Tsirkin 
> > > > > > >  wrote:
> > > > > > > >
> > > > > > > > On Mon, Apr 10, 2023 at 09:33:32AM +0800, Jason Wang wrote:
> > > > > > > > > This is fine for vDPA but not for virtio if the design can 
> > > > > > > > > only work
> > > > > > > > > for some specific setups (OSes/archs).
> > > > > > > > >
> > > > > > > > > Thanks
> > > > > > > >
> > > > > > > > Well virtio legacy has a long history of documenting existing 
> > > > > > > > hacks :)
> > > > > > >
> > > > > > > Exactly, so the legacy behaviour is not (or can't be) defined by 
> > > > > > > the
> > > > > > > spec but the codes.
> > > > > >
> > > > > > I mean driver behaviour derives from the code but we do document it 
> > > > > > in
> > > > > > the spec to help people build devices.
> > > > > >
> > > > > >
> > > > > > > > But yes, VIRTIO_F_ORDER_PLATFORM has to be documented.
> > > > > > > > And we have to decide what to do about ACCESS_PLATFORM since
> > > > > > > > there's a security problem if device allows not acking it.
> > > > > > > > Two options:
> > > > > > > > - relax the rules a bit and say device will assume 
> > > > > > > > ACCESS_PLATFORM
> > > > > > > >   is acked anyway
> > > > > > >
> > > > > > > This will break legacy drivers which assume physical addresses.
> > > > > >
> > > > > > not that they are not already broken.
> > > > >
> > > > > I may miss something, the whole point is to allow legacy drivers to
> > > > > run otherwise a modern device is sufficient?
> > > >
> > > > yes and if legacy drivers don't work in a given setup then we
> > > > should not worry.
> > > >
> > > > > >
> > > > > > > > - a new flag that is insecure (so useful for sec but useless 
> > > > > > > > for dpdk) but optional
> > > > > > >
> > > > > > > This looks like a new "hack" for the legacy hacks.
> > > > > >
> > > > > > it's not just for legacy.
> > > > >
> > > > > We have the ACCESS_PLATFORM feature bit, what is the useage for this 
> > > > > new flag?
> > > >
> > > >
> > > > ACCESS_PLATFORM is also a security boundary. so devices must fail
> > > > negotiation if it's not there. this new one won't be.
> > > >
> > > >
> > > > > >
> > > > > > > And what about ORDER_PLATFORM, I don't think we can modify legacy 
> > > > > > > drivers...
> > > > > > >
> > > > > > > Thanks
> > > > > >
> > > > > > You play some tricks with shadow VQ I guess.
> > > > >
> > > > > Do we really want to add a new feature in the virtio spec that can
> > > > > only work with the datapath mediation?
> > > > >
> > > > > Thanks
> > > >
> > > > As long as a feature is useful and can't be supported otherwise
> > > > we are out of options.
> > >
> > > Probably not? Is it as simple as relaxing this:
> > >
> > > "Transitional devices MUST expose the Legacy Interface in I/O space in 
> > > BAR0."
> > >
> > > To allow memory space.
> > >
> > > This works for both software and hardware devices (I had a handy
> > > hardware that supports legacy virtio drivers in this way).
> > >
> > > Thanks
> >
> > Yes it is certainly simpler.
> >
> > Question: what happens if you try to run existing windows guests or dpdk
> > on these? Do they crash horribly or exit gracefully?
> 
> Haven't tried DPDK and windows. But I remember DPDK supported legacy
> MMIO bars for a while.
> 
> Adding Maxime and Yan for comments here.
> 
> >
> > The point of the capability is to allow using modern device ID so such
> > guests will not even try to bind.
> 
> It means a mediation layer is required to use. Then it's not an issue
> for this simple relaxing any more?
> 
> An advantage of such relaxing is that, for the legacy drivers like an
> ancient Linux version that can already do MMIO access to legacy BAR it
> can work without any mediation.

Yes. But the capability approach does not prevent that -
just use a transitional ID.
The disadvantage of a transitional ID is that some old drivers might crash, or fail
to work in other ways. Using a modern ID with a capability
prevents old drivers from attaching. With a capability, the system
designer is in control and can decide which drivers to use.



> At least, if ID can help, it can be used with this as well.
> 
> Thanks


I don't know what that means.

> 
> 
> >
> >
> > > > Keeping field practice things out of the
> > > > spec helps no one.
> > > >
> > > > > >
> > > > > > --
> > > > > > MST
> > > > > >
> > > >
> >



[virtio-dev] Re: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-11 Thread Michael S. Tsirkin
On Tue, Apr 11, 2023 at 12:19:14PM +0300, Yan Vugenfirer wrote:
> On Tue, Apr 11, 2023 at 12:02 PM Jason Wang  wrote:
> >
> > On Tue, Apr 11, 2023 at 3:04 PM Michael S. Tsirkin  wrote:
> > >
> > > On Tue, Apr 11, 2023 at 10:13:40AM +0800, Jason Wang wrote:
> > > > On Mon, Apr 10, 2023 at 6:06 PM Michael S. Tsirkin  
> > > > wrote:
> > > > >
> > > > > On Mon, Apr 10, 2023 at 03:20:52PM +0800, Jason Wang wrote:
> > > > > > On Mon, Apr 10, 2023 at 2:40 PM Michael S. Tsirkin 
> > > > > >  wrote:
> > > > > > >
> > > > > > > On Mon, Apr 10, 2023 at 02:20:16PM +0800, Jason Wang wrote:
> > > > > > > > On Mon, Apr 10, 2023 at 2:15 PM Michael S. Tsirkin 
> > > > > > > >  wrote:
> > > > > > > > >
> > > > > > > > > On Mon, Apr 10, 2023 at 09:33:32AM +0800, Jason Wang wrote:
> > > > > > > > > > This is fine for vDPA but not for virtio if the design can 
> > > > > > > > > > only work
> > > > > > > > > > for some specific setups (OSes/archs).
> > > > > > > > > >
> > > > > > > > > > Thanks
> > > > > > > > >
> > > > > > > > > Well virtio legacy has a long history of documenting existing 
> > > > > > > > > hacks :)
> > > > > > > >
> > > > > > > > Exactly, so the legacy behaviour is not (or can't be) defined 
> > > > > > > > by the
> > > > > > > > spec but the codes.
> > > > > > >
> > > > > > > I mean driver behaviour derives from the code but we do document 
> > > > > > > it in
> > > > > > > the spec to help people build devices.
> > > > > > >
> > > > > > >
> > > > > > > > > But yes, VIRTIO_F_ORDER_PLATFORM has to be documented.
> > > > > > > > > And we have to decide what to do about ACCESS_PLATFORM since
> > > > > > > > > there's a security problem if device allows not acking it.
> > > > > > > > > Two options:
> > > > > > > > > - relax the rules a bit and say device will assume 
> > > > > > > > > ACCESS_PLATFORM
> > > > > > > > >   is acked anyway
> > > > > > > >
> > > > > > > > This will break legacy drivers which assume physical addresses.
> > > > > > >
> > > > > > > not that they are not already broken.
> > > > > >
> > > > > > I may miss something, the whole point is to allow legacy drivers to
> > > > > > run otherwise a modern device is sufficient?
> > > > >
> > > > > yes and if legacy drivers don't work in a given setup then we
> > > > > should not worry.
> > > > >
> > > > > > >
> > > > > > > > > - a new flag that is insecure (so useful for sec but useless 
> > > > > > > > > for dpdk) but optional
> > > > > > > >
> > > > > > > > This looks like a new "hack" for the legacy hacks.
> > > > > > >
> > > > > > > it's not just for legacy.
> > > > > >
> > > > > > We have the ACCESS_PLATFORM feature bit, what is the useage for 
> > > > > > this new flag?
> > > > >
> > > > >
> > > > > ACCESS_PLATFORM is also a security boundary. so devices must fail
> > > > > negotiation if it's not there. this new one won't be.
> > > > >
> > > > >
> > > > > > >
> > > > > > > > And what about ORDER_PLATFORM, I don't think we can modify 
> > > > > > > > legacy drivers...
> > > > > > > >
> > > > > > > > Thanks
> > > > > > >
> > > > > > > You play some tricks with shadow VQ I guess.
> > > > > >
> > > > > > Do we really want to add a new feature in the virtio spec that can
> > > > > > only work with the datapath mediation?
> > > > > >
> > > > > > Thanks
> > > > >
> > > > > As long as a feature is useful and can't be supported otherwise
> > > > > we are out of options.
> > > >
> > > > Probably not? Is it as simple as relaxing this:
> > > >
> > > > "Transitional devices MUST expose the Legacy Interface in I/O space in 
> > > > BAR0."
> > > >
> > > > To allow memory space.
> > > >
> > > > This works for both software and hardware devices (I had a handy
> > > > hardware that supports legacy virtio drivers in this way).
> > > >
> > > > Thanks
> > >
> > > Yes it is certainly simpler.
> > >
> > > Question: what happens if you try to run existing windows guests or dpdk
> > > on these? Do they crash horribly or exit gracefully?
> >
> > Haven't tried DPDK and windows. But I remember DPDK supported legacy
> > MMIO bars for a while.
> 
> Regarding Windows drivers:
> 1. We are acking VIRTIO_F_ACCESS_PLATFORM in the driver. But, if you
> remember the "ATS" issue (Windows is either not detecting it or, even
> if detected, not using it) - then actually, we are not forcing Windows
> to remap the memory because Window fails to work with it correctly.
> 2. Current Windows drivers implementation doesn't support MMIO bars.
> We can enable the support if needed.
> 
> Best regards,
> Yan.

The question was about old legacy drivers, not modern ones.
What if they attach to a device and BAR0
is a memory not an IO bar?
Will they fail gracefully or crash?
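(A relevant detail: a driver can tell a memory BAR from an I/O BAR by bit 0 of the BAR register, so a careful legacy driver could bail out cleanly; a minimal C sketch, with a hypothetical config-space accessor:)

#include <stdbool.h>
#include <stdint.h>

#define PCI_BASE_ADDRESS_0        0x10  /* BAR0 in config space */
#define PCI_BASE_ADDRESS_SPACE_IO 0x01  /* bit 0: 1 = I/O, 0 = memory */

extern uint32_t pci_cfg_read32(uint16_t offset); /* hypothetical accessor */

/* A legacy driver that assumes an I/O BAR0 would need a check like
 * this to fail gracefully when BAR0 is a memory BAR instead. */
static bool bar0_is_io(void)
{
    return pci_cfg_read32(PCI_BASE_ADDRESS_0) & PCI_BASE_ADDRESS_SPACE_IO;
}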


> 
> >
> > Adding Maxime and Yan for comments here.
> >
> > >
> > > The point of the capability is to allow using modern device ID so such
> > > guests will not even try to bind.
> >
> > It means a mediation layer is required to use. Then it's not an issue
> > for this simple relaxing any more?
> >
> > An advantage of 

Re: [virtio-dev] [PATCH v14] virtio-net: support the virtqueue coalescing moderation

2023-04-11 Thread Michael S. Tsirkin
On Wed, Apr 05, 2023 at 04:54:44PM +0200, Halil Pasic wrote:
> On Wed, 5 Apr 2023 05:12:12 -0400
> "Michael S. Tsirkin"  wrote:
> 
> > > > it's not necessarily an identifier. can be e.g. just 0 for all vqs.
> > > > whatever the device needs.  
> > > For driver its just an id, content doesn't matter.  
> > 
> > No, this value might or might not be somehow related to the vq but it
> > does not have to identify it. So it's some data, not an id.  Let's try
> > to use words for what they mean in english not try to come up with our
> > own language.
> 
> I agree with Michael wholeheartedly, we should try to use words for what
> they mean in English, especially in Computer Science English, and even
> try to pick the most fitting and least ambiguous option if multiple
> options are possible.
> 
> In that spirit I would say that "queue_notify_data" is actually a magic
> cookie. https://en.wikipedia.org/wiki/Magic_cookie A possible name
> or abbreviation would be vqn_cookie like "a virtqueue
> notification cookie" (I'm not sure about mixing in the direction, but
> this is only about driver -> device notifications). 
> 
> 
> And then 
> 
> le32 {
> vqn : 16;
> next_off : 15;
> next_wrap : 1;
> };
> 
> could become something like
> 
> struct vq_notif_data {
> union {
>   le16 vqn_cookie;
>   le16 vq_index;
>   };
> le16 {
>   next_off : 15;
>   next_wrap : 1;
>   };
> };
> 
> BTW since "identifier" and "unique identifier" are not the same, in my
> opinion "identifier" is still viable, if we explain that it is called
> identifier because the idea behind that field is to be used to identify
> the queue, but that there is in fact no requirement on selectivity let
> alone uniqueness.
> 
> 
> Regards,
> Halil 
> 

Well "identifier" seems to come from "identity" meaning "same" so I
think yes, it implies a 1:1 relationship.
And more importantly, it might not identify the queue, there is
in fact no requirement on how it is used at all.

-- 
MST





Re: [virtio-dev] [PATCH v14] virtio-net: support the virtqueue coalescing moderation

2023-04-11 Thread Michael S. Tsirkin
On Tue, Apr 11, 2023 at 10:05:11AM +0200, Cornelia Huck wrote:
> On Thu, Mar 23 2023, Heng Qi  wrote:
> 
> > Hi all! If after reviewing it you feel it meets the necessary 
> > requirements, I kindly request a vote.
> >
> > Thanks a lot!
> >
> > Fixes: https://github.com/oasis-tcs/virtio-spec/issues/166
> 
> So this has been voted upon and accepted, but we have further discussion
> downthread.
> 
> Should we merge the voted-upon version now and do further changes on
> top, or wait for the discussion to conclude first so that we can merge
> all of it in one go?

I think we should merge as is. vqn/idx/id are not new discussions
and have been going on since before voting took place, so the TC
was informed. Parav is working on making tree-wide changes
around the vq index; applying this will not make his work
measurably harder.

-- 
MST





Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Jason Wang
On Tue, Apr 11, 2023 at 3:00 PM Michael S. Tsirkin  wrote:
>
> On Tue, Apr 11, 2023 at 10:19:39AM +0800, Jason Wang wrote:
> > On Mon, Apr 10, 2023 at 6:04 PM Michael S. Tsirkin  wrote:
> > >
> > > On Mon, Apr 10, 2023 at 03:16:46PM +0800, Jason Wang wrote:
> > > > On Mon, Apr 10, 2023 at 2:24 PM Michael S. Tsirkin  
> > > > wrote:
> > > > >
> > > > > On Mon, Apr 10, 2023 at 09:36:17AM +0800, Jason Wang wrote:
> > > > > > On Fri, Mar 31, 2023 at 7:00 AM Parav Pandit  
> > > > > > wrote:
> > > > > > >
> > > > > > > PCI device configuration space for capabilities is limited to 
> > > > > > > only 192
> > > > > > > bytes shared by many PCI capabilities of generic PCI device and 
> > > > > > > virtio
> > > > > > > specific.
> > > > > > >
> > > > > > > Hence, introduce virtio extended capability that uses PCI Express
> > > > > > > extended capability.
> > > > > > > Subsequent patch uses this virtio extended capability.
> > > > > > >
> > > > > > > Co-developed-by: Satananda Burla 
> > > > > > > Signed-off-by: Parav Pandit 
> > > > > >
> > > > > > Can you explain the differences compared to what I've used to 
> > > > > > propose?
> > > > > >
> > > > > > https://www.mail-archive.com/virtio-dev@lists.oasis-open.org/msg08078.html
> > > > > >
> > > > > > This can save time for everybody.
> > > > > >
> > > > > > Thanks
> > > > >
> > > > > BTW another advantage of extended capabilities is - these are actually
> > > > > cheaper to access from a VM than classic config space.
> > > >
> > > > Config space/BAR is allowed by both of the proposals or anything I 
> > > > missed?
> > > >
> > > > >
> > > > >
> > > > > Several points
> > > > > - I don't like it that yours is 32 bit. We do not need 2 variants just
> > > > >   make it all 64 bit
> > > >
> > > > That's fine.
> > > >
> > > > > - We need to document that if driver does not scan extended 
> > > > > capbilities it will not find them.
> > > >
> > > > This is implicit since I remember we don't have such documentation for
> > > > pci capability, anything makes pcie special?
> > >
> > > yes - the fact that there are tons of existing drivers expecting
> > > everything in standard space.
> > >
> > >
> > > > >   And existing drivers do not scan them. So what is safe
> > > > >   to put there? vendor specific? extra access types?
> > > >
> > > > For PASID at least, since it's a PCI-E feature, vendor specific should
> > > > be fine. Not sure about legacy MMIO then.
> > > >
> > > > >   Can we make scanning these mandatory in future drivers? future 
> > > > > devices?
> > > > >   I guess we can add a feature bit to flag that.
> > > >
> > > > For PASID, it doesn't need this, otherwise we may duplicate transport
> > > > specific features.
> > >
> > > i don't get it. what does PASID have to do with it?
> >
> > My proposal is to allow PASID capability to be placed on top.
>
> Assuming you mean a patch applied on top of this one.
>
> > So what
> > I meant is:
> >
> > if the driver needs to use PASID, it needs to scan extend capability
> >
> > So it is only used for future drivers. I think this applies to legacy
> > MMIO as well.
>
> sure
>
> > > A new feature will allow clean split at least:
> > > we make any new features and new devices that expect
> > > express capability depend on this new feature bit.
> > >
> > > > >   Is accessing these possible from bios?
> > > >
> > > > Not at least for the two use cases now PASID or legacy MMIO.
> > >
> > > can't parse english here. what does this mean?
> >
> > I meant, it depends on the capability semantics. Both PASID and legacy
> > MMIO don't need to be accessed by BIOS. We can't change legacy BIOS to
> > use legacy MMIO bars.
> >
> > Thanks
>
> makes sense.
>
>
> now, imagine someone building a new device. if existing drivers are not
> a concern, it is possible to move capabilities all to extended space. is
> that possible while keeping the bios working?

This is possible but I'm not sure it's worthwhile. What happens if the
device puts all capabilities in the extended space but the bios can't
scan there? We can place them at both but it then doesn't address the
out of space issue. Things will be easier if we allow new
features/capabilities to be placed on the extended space.

Thanks

>
> > >
> > >
> > > > >
> > > > > So I like this one better as a basis - care reviewing it and adding
> > > > > stuff?
> > > >
> > > > There are very few differences and I will have a look.
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > --
> > > > > MST
> > > > >
> > >
>





[virtio-dev] Re: [virtio-comment] RE: [virtio-dev] RE: [PATCH v12 03/10] content: Rename confusing queue_notify_data and vqn names

2023-04-11 Thread Cornelia Huck
On Fri, Apr 07 2023, "Michael S. Tsirkin"  wrote:

> On Wed, Apr 05, 2023 at 03:58:55PM +, Parav Pandit wrote:
>> For sure "cookie" is better than "config_data" and I don't have objection to 
>> "cookie".
>> 
>> But I disagree to the claim that "identifier" is less good than "cookie".
>> 
>> It is pointless debate of "identifier" vs "cookie".
>> 
>> The union format is very useful to describe this crisply, I will use it.
>
> I guess I'm fine with "cookie" in that in CS it is by now widely
> understood to mean "some opaque data".
> identifier comes from "identical", so implies a 1:1 mapping IMO.

Agreed, a "cookie" is not the same as an "identifier", although it may
contain one.

>
>
> The logic behind using a cookie is that it's a bit similar
> to host cookie from ccw.
> However, with ccw host cookie is used unconditionally, as
> opposed to depending on the flag.
>
>
>
>> Do you prefer to rename F_CONFIG_DATA To F_CONFIG_COOKIE?
>
> It should all be consistent, but I worry about ccw which uses cookies
> unconditionally. I am fine with leaving it as F_CONFIG_DATA for now
> unless we see a good way to avoid confusion for ccw.

Yes, please leave it as F_CONFIG_DATA, as we're just putting some "data"
there in the end (and F_CONFIG_COOKIE might indeed be confusing for the
ccw case.)





[virtio-dev] Re: [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Cornelia Huck
On Mon, Apr 10 2023, "Michael S. Tsirkin"  wrote:

> On Mon, Apr 10, 2023 at 07:57:08PM +, Parav Pandit wrote:
>> 
>> > From: Michael S. Tsirkin 
>> > Sent: Monday, April 10, 2023 3:49 PM
>> 
>> > Attribution is nice but Signed-off-by is not that.
>> 
>> Then what is Signed-off-by for virtio spec?
>
> we never defined it. using it kind of by 
>
>> Can it be same definition as what Linux kernel and many other projects use 
>> like [1]?
>> 
>> [1] 
>> https://www.kernel.org/doc/html/latest/process/submitting-patches.html?highlight=signed%20off
>
> That is DCO. useful for linux pointless for us as is since
> we want people to agree to our IPR.
> Unless you want to make DCO refer to our IPR.
> That might be possible.
> We will need to float this past OASIS stuff.
> I prefer explicit agreement to license personally.

In most projects, s-o-b either means "I abide by the DCO", or "I'm
adding this because it seems to be the usual thing to do". "Agreeing to
the IPR" would be a somewhat surprising meaning, so I'd prefer to keep
IPR agreement separate as well. Not sure if we want to actually
specify what we mean by s-o-b; when I attach it while merging, I'm
using it mostly in the "let's record the chain" meaning. (Specifying
that we indeed use it in that sense, and not the full DCO sense, might
make sense -- or it might be overkill.)

>
>> > And fundamentally people go read the PDF where the Signed-off-by does not
>> > appear at all no one pokes at git history.
>> When people read PDF, they do not care about the sign-off. Signed-off-by is 
>> not for that.
>> 
>> > Let's just do:
>> > 
>> > Thanks-to: name 
>> > 
>> Why to now learn a new term?
>> Why terminology of [1] is not enough like AQ status codes? :)
>
> I have no idea what problem you are trying to address.
> If it is attribution Signed-off-by is not that.
> If it is IPR Signed-off-by is not that either but might be
> made to imply that.

Yes, s-o-b is distinct from attribution (and that's why the Linux kernel
requires it on top of a Co-developed-by: to satisfy the DCO, not instead
of it.) If we think that Co-developed-by: without a s-o-b might be
confusing, a Thanks-to: could be a better term (and also a broader one
meaning "I discussed this with this person, and I want to acknowledge
them".)





Re: [virtio-dev] [PATCH v14] virtio-net: support the virtqueue coalescing moderation

2023-04-11 Thread Cornelia Huck
On Thu, Mar 23 2023, Heng Qi  wrote:

> Hi all! If after reviewing it you feel it meets the necessary 
> requirements, I kindly request a vote.
>
> Thanks a lot!
>
> Fixes: https://github.com/oasis-tcs/virtio-spec/issues/166

So this has been voted upon and accepted, but we have further discussion
downthread.

Should we merge the voted-upon version now and do further changes on
top, or wait for the discussion to conclude first so that we can merge
all of it in one go?





[virtio-dev] Re: [virtio-comment] Re: [PATCH 09/11] transport-pci: Describe PCI MMR dev config registers

2023-04-11 Thread Michael S. Tsirkin
On Tue, Apr 11, 2023 at 10:13:40AM +0800, Jason Wang wrote:
> On Mon, Apr 10, 2023 at 6:06 PM Michael S. Tsirkin  wrote:
> >
> > On Mon, Apr 10, 2023 at 03:20:52PM +0800, Jason Wang wrote:
> > > On Mon, Apr 10, 2023 at 2:40 PM Michael S. Tsirkin  
> > > wrote:
> > > >
> > > > On Mon, Apr 10, 2023 at 02:20:16PM +0800, Jason Wang wrote:
> > > > > On Mon, Apr 10, 2023 at 2:15 PM Michael S. Tsirkin  
> > > > > wrote:
> > > > > >
> > > > > > On Mon, Apr 10, 2023 at 09:33:32AM +0800, Jason Wang wrote:
> > > > > > > This is fine for vDPA but not for virtio if the design can only 
> > > > > > > work
> > > > > > > for some specific setups (OSes/archs).
> > > > > > >
> > > > > > > Thanks
> > > > > >
> > > > > > Well virtio legacy has a long history of documenting existing hacks 
> > > > > > :)
> > > > >
> > > > > Exactly, so the legacy behaviour is not (or can't be) defined by the
> > > > > spec but the codes.
> > > >
> > > > I mean driver behaviour derives from the code but we do document it in
> > > > the spec to help people build devices.
> > > >
> > > >
> > > > > > But yes, VIRTIO_F_ORDER_PLATFORM has to be documented.
> > > > > > And we have to decide what to do about ACCESS_PLATFORM since
> > > > > > there's a security problem if device allows not acking it.
> > > > > > Two options:
> > > > > > - relax the rules a bit and say device will assume ACCESS_PLATFORM
> > > > > >   is acked anyway
> > > > >
> > > > > This will break legacy drivers which assume physical addresses.
> > > >
> > > > not that they are not already broken.
> > >
> > > I may miss something, the whole point is to allow legacy drivers to
> > > run otherwise a modern device is sufficient?
> >
> > yes and if legacy drivers don't work in a given setup then we
> > should not worry.
> >
> > > >
> > > > > > - a new flag that is insecure (so useful for sec but useless for 
> > > > > > dpdk) but optional
> > > > >
> > > > > This looks like a new "hack" for the legacy hacks.
> > > >
> > > > it's not just for legacy.
> > >
> > > We have the ACCESS_PLATFORM feature bit, what is the useage for this new 
> > > flag?
> >
> >
> > ACCESS_PLATFORM is also a security boundary. so devices must fail
> > negotiation if it's not there. this new one won't be.
> >
> >
> > > >
> > > > > And what about ORDER_PLATFORM, I don't think we can modify legacy 
> > > > > drivers...
> > > > >
> > > > > Thanks
> > > >
> > > > You play some tricks with shadow VQ I guess.
> > >
> > > Do we really want to add a new feature in the virtio spec that can
> > > only work with the datapath mediation?
> > >
> > > Thanks
> >
> > As long as a feature is useful and can't be supported otherwise
> > we are out of options.
> 
> Probably not? Is it as simple as relaxing this:
> 
> "Transitional devices MUST expose the Legacy Interface in I/O space in BAR0."
> 
> To allow memory space.
> 
> This works for both software and hardware devices (I had a handy
> hardware that supports legacy virtio drivers in this way).
> 
> Thanks

Yes it is certainly simpler.

Question: what happens if you try to run existing windows guests or dpdk
on these? Do they crash horribly or exit gracefully?

The point of the capability is to allow using modern device ID so such
guests will not even try to bind.


> > Keeping field practice things out of the
> > spec helps no one.
> >
> > > >
> > > > --
> > > > MST
> > > >
> >





Re: [virtio-dev] [PATCH 08/11] transport-pci: Introduce virtio extended capability

2023-04-11 Thread Michael S. Tsirkin
On Tue, Apr 11, 2023 at 10:19:39AM +0800, Jason Wang wrote:
> On Mon, Apr 10, 2023 at 6:04 PM Michael S. Tsirkin  wrote:
> >
> > On Mon, Apr 10, 2023 at 03:16:46PM +0800, Jason Wang wrote:
> > > On Mon, Apr 10, 2023 at 2:24 PM Michael S. Tsirkin  
> > > wrote:
> > > >
> > > > On Mon, Apr 10, 2023 at 09:36:17AM +0800, Jason Wang wrote:
> > > > > On Fri, Mar 31, 2023 at 7:00 AM Parav Pandit  wrote:
> > > > > >
> > > > > > PCI device configuration space for capabilities is limited to only 
> > > > > > 192
> > > > > > bytes shared by many PCI capabilities of generic PCI device and 
> > > > > > virtio
> > > > > > specific.
> > > > > >
> > > > > > Hence, introduce virtio extended capability that uses PCI Express
> > > > > > extended capability.
> > > > > > Subsequent patch uses this virtio extended capability.
> > > > > >
> > > > > > Co-developed-by: Satananda Burla 
> > > > > > Signed-off-by: Parav Pandit 
> > > > >
> > > > > Can you explain the differences compared to what I've used to propose?
> > > > >
> > > > > https://www.mail-archive.com/virtio-dev@lists.oasis-open.org/msg08078.html
> > > > >
> > > > > This can save time for everybody.
> > > > >
> > > > > Thanks
> > > >
> > > > BTW another advantage of extended capabilities is - these are actually
> > > > cheaper to access from a VM than classic config space.
> > >
> > > Config space/BAR is allowed by both of the proposals or anything I missed?
> > >
> > > >
> > > >
> > > > Several points
> > > > - I don't like it that yours is 32 bit. We do not need 2 variants just
> > > >   make it all 64 bit
> > >
> > > That's fine.
> > >
> > > > - We need to document that if driver does not scan extended capbilities 
> > > > it will not find them.
> > >
> > > This is implicit since I remember we don't have such documentation for
> > > pci capability, anything makes pcie special?
> >
> > yes - the fact that there are tons of existing drivers expecting
> > everything in standard space.
> >
> >
> > > >   And existing drivers do not scan them. So what is safe
> > > >   to put there? vendor specific? extra access types?
> > >
> > > For PASID at least, since it's a PCI-E feature, vendor specific should
> > > be fine. Not sure about legacy MMIO then.
> > >
> > > >   Can we make scanning these mandatory in future drivers? future 
> > > > devices?
> > > >   I guess we can add a feature bit to flag that.
> > >
> > > For PASID, it doesn't need this, otherwise we may duplicate transport
> > > specific features.
> >
> > i don't get it. what does PASID have to do with it?
> 
> My proposal is to allow PASID capability to be placed on top.

Assuming you mean a patch applied on top of this one.

> So what
> I meant is:
> 
> if the driver needs to use PASID, it needs to scan extend capability
> 
> So it is only used for future drivers. I think this applies to legacy
> MMIO as well.

sure

> > A new feature will allow clean split at least:
> > we make any new features and new devices that expect
> > express capability depend on this new feature bit.
> >
> > > >   Is accessing these possible from bios?
> > >
> > > Not at least for the two use cases now PASID or legacy MMIO.
> >
> > can't parse english here. what does this mean?
> 
> I meant, it depends on the capability semantics. Both PASID and legacy
> MMIO don't need to be accessed by BIOS. We can't change legacy BIOS to
> use legacy MMIO bars.
> 
> Thanks

makes sense.


now, imagine someone building a new device. if existing drivers are not
a concern, it is possible to move capabilities all to extended space. is
that possible while keeping the bios working?

> >
> >
> > > >
> > > > So I like this one better as a basis - care reviewing it and adding
> > > > stuff?
> > >
> > > There are very few differences and I will have a look.
> > >
> > > Thanks
> > >
> > > >
> > > > --
> > > > MST
> > > >
> >

