Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt feature support

2020-02-12 Thread Jason Wang



On 2020/2/12 5:16 PM, Michael S. Tsirkin wrote:

Thanks for the advice. :)

Actually, when looking into PCI, queue_msix_vector/msix_config holds the
MSI vector index, which is the same as the MMIO register MsiVecSel
(0x0d0).

So we don't introduce two extra registers for mapping even in sharing
mode.

What do you think?


I'm not sure I get the point, but I still prefer the separate vector_sel
from queue_msix_vector.

Btw, Michael proposed per-vq registers, which could also work.

Thanks


Right and I'd even ask a question: do we need shared MSI at all?



I guess it is still needed, at least for the current virtio code.
Technically we may have thousands of queues.


Thanks



Is it somehow better than legacy interrupt? And why?
Performance numbers please.






Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt feature support

2020-02-12 Thread Michael S. Tsirkin
On Wed, Feb 12, 2020 at 05:06:52PM +0800, Jason Wang wrote:
> 
> > On 2020/2/12 11:54 AM, Liu, Jing2 wrote:
> > 
> > 
> > On 2/11/2020 3:40 PM, Jason Wang wrote:
> > > 
> > > > On 2020/2/11 2:02 PM, Liu, Jing2 wrote:
> > > > 
> > > > 
> > > > On 2/11/2020 12:02 PM, Jason Wang wrote:
> > > > > 
> > > > > > On 2020/2/11 11:35 AM, Liu, Jing2 wrote:
> > > > > > 
> > > > > > On 2/11/2020 11:17 AM, Jason Wang wrote:
> > > > > > > 
> > > > > > > > On 2020/2/10 5:05 PM, Zha Bin wrote:
> > > > > > > > From: Liu Jiang
> > > > > > > > 
> > > > > > > > Userspace VMMs (e.g. Qemu microvm, Firecracker)
> > > > > > > > take advantage of using
> > > > > > > > virtio over mmio devices as a lightweight machine model for 
> > > > > > > > modern
> > > > > > > > cloud. The standard virtio over MMIO transport
> > > > > > > > layer only supports one
> > > > > > > > legacy interrupt, which is much heavier than
> > > > > > > > virtio over PCI transport
> > > > > > > > layer using MSI. Legacy interrupt has long work
> > > > > > > > path and causes specific
> > > > > > > > VMExits in following cases, which would considerably slow down 
> > > > > > > > the
> > > > > > > > performance:
> > > > > > > > 
> > > > > > > > 1) read interrupt status register
> > > > > > > > 2) update interrupt status register
> > > > > > > > 3) write IOAPIC EOI register
> > > > > > > > 
> > > > > > > > We proposed to add MSI support for virtio over MMIO via new 
> > > > > > > > feature
> > > > > > > > bit VIRTIO_F_MMIO_MSI[1] which increases the interrupt 
> > > > > > > > performance.
> > > > > > > > 
> > > > > > > > With the VIRTIO_F_MMIO_MSI feature bit
> > > > > > > > supported, the virtio-mmio MSI
> > > > > > > > uses msi_sharing[1] to indicate the event and vector mapping.
> > > > > > > > Bit 1 is 0: device uses non-sharing and fixed
> > > > > > > > vector per event mapping.
> > > > > > > > Bit 1 is 1: device uses sharing mode and dynamic mapping.
> > > > > > > 
> > > > > > > 
> > > > > > > I believe dynamic mapping should cover the case of fixed vector?
> > > > > > > 
> > > > > > Actually this bit *aims* for msi sharing or msi non-sharing.
> > > > > > 
> > > > > > It means, when msi sharing bit is 1, device doesn't want
> > > > > > vector per queue
> > > > > > 
> > > > > > (it wants msi vector sharing as name) and doesn't want a
> > > > > > high interrupt rate.
> > > > > > 
> > > > > > So driver turns to !per_vq_vectors and has to do dynamic mapping.
> > > > > > 
> > > > > > So they are opposite not superset.
> > > > > > 
> > > > > > Thanks!
> > > > > > 
> > > > > > Jing
> > > > > 
> > > > > 
> > > > > I think you need to add more comments on the command.
> > > > > 
> > > > > E.g. if I want to map vector 0 to queue 1, what do I need to do?
> > > > > 
> > > > > write(1, queue_sel);
> > > > > write(0, vector_sel);
> > > > 
> > > > That's true. Besides, two commands are used for msi sharing mode,
> > > > 
> > > > VIRTIO_MMIO_MSI_CMD_MAP_CONFIG and VIRTIO_MMIO_MSI_CMD_MAP_QUEUE.
> > > > 
> > > > "To set up the event and vector mapping for MSI sharing mode,
> > > > driver SHOULD write a valid MsiVecSel followed by
> > > > VIRTIO_MMIO_MSI_CMD_MAP_CONFIG/VIRTIO_MMIO_MSI_CMD_MAP_QUEUE
> > > > command to map the configuration change/selected queue events
> > > > respectively.  " (See spec patch 5/5)
> > > > 
> > > > So if driver detects the msi sharing mode, when it does setup
> > > > vq, writes the queue_sel (this already exists in setup vq),
> > > > vector sel and then MAP_QUEUE command to do the queue event
> > > > mapping.
> > > > 
> > > 
> > > So actually the per vq msix could be done through this.
> > 
> > Right, per vq msix can also be mapped by the 2 commands if we want.
> > 
> > The current design benefits those devices requesting per-vq MSI: the
> > driver doesn't have to set up the mapping during each queue setup,
> > since we define the relationship by default.
> > 
> 
> Well, since you've defined dynamic mapping, having some "default" mapping
> won't reduce the complexity but increase it.
> 
> 
> > 
> > > I don't get why you need to introduce MSI_SHARING_MASK, which is the
> > > driver's responsibility instead of the device's.
> > 
> > MSI_SHARING_MASK is just for identifying the msi_sharing bit in
> > readl(MsiState) (0x0c4). The device tells whether it wants msi_sharing.
> > 
> > MsiState register: R
> > 
> > le32 {
> >     msi_enabled : 1;
> >     msi_sharing: 1;
> >     reserved : 30;
> > };
> > 
> 
> The question is why the device wants such information.
> 
> 
> > 
> > > The interrupt rate should have no direct relationship with whether
> > > it has been shared or not.
> > 
> > > 
> > > Btw, you introduce mask/unmask without pending; how do you deal with
> > > the lost interrupt during masking then?
> > > 
> > > 
> > > > For msi non-sharing mode, no special action is needed because we
> > > > make the rule of per_vq_vector and fixed relationship.
> > > > 
> > > > Correct me if this is not that clear for spec/code comments.
> > > > 
> > > 
> > > The ABI is not as 

Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt feature support

2020-02-12 Thread Michael S. Tsirkin
On Wed, Feb 12, 2020 at 05:03:00PM +0800, Jason Wang wrote:
> 
> > On 2020/2/11 10:00 PM, Michael S. Tsirkin wrote:
> > > Yes, it can work but it may bring extra effort when you want to mask a
> > > vector which is shared by a lot of queues.
> > > 
> > > Thanks
> > > 
> > masking should be per vq too.
> > 
> > -- 
> 
> 
> Yes, but if the vector is shared by e.g. 16 queues, then all those virtqueues
> need to be masked, which is expensive.
> 
> Thanks

I think that's ok - masking is rare. In fact, if vectors can be
changed atomically, I'm no longer sure mask/unmask are needed;
maybe a destructive "disable" which can lose interrupts
is enough.

-- 
MST




Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt feature support

2020-02-12 Thread Jason Wang



On 2020/2/12 11:54 AM, Liu, Jing2 wrote:



On 2/11/2020 3:40 PM, Jason Wang wrote:


On 2020/2/11 2:02 PM, Liu, Jing2 wrote:



On 2/11/2020 12:02 PM, Jason Wang wrote:


On 2020/2/11 11:35 AM, Liu, Jing2 wrote:


On 2/11/2020 11:17 AM, Jason Wang wrote:


On 2020/2/10 5:05 PM, Zha Bin wrote:

From: Liu Jiang

Userspace VMMs (e.g. Qemu microvm, Firecracker) take advantage of using
virtio over mmio devices as a lightweight machine model for the modern
cloud. The standard virtio over MMIO transport layer supports only one
legacy interrupt, which is much heavier than the virtio over PCI
transport layer using MSI. A legacy interrupt has a long work path and
causes specific VMExits in the following cases, which considerably slow
down performance:

1) read interrupt status register
2) update interrupt status register
3) write IOAPIC EOI register

We proposed to add MSI support for virtio over MMIO via the new feature
bit VIRTIO_F_MMIO_MSI[1], which increases interrupt performance.

With the VIRTIO_F_MMIO_MSI feature bit supported, virtio-mmio MSI
uses msi_sharing[1] to indicate the event and vector mapping.
Bit 1 is 0: device uses non-sharing and a fixed vector per event mapping.
Bit 1 is 1: device uses sharing mode and dynamic mapping.



I believe dynamic mapping should cover the case of fixed vector?


Actually this bit *aims* for msi sharing or msi non-sharing.

It means: when the msi sharing bit is 1, the device doesn't want a
vector per queue (it wants msi vector sharing, as the name says) and
doesn't expect a high interrupt rate.

So the driver turns to !per_vq_vectors and has to do dynamic mapping.

So they are opposites, not a superset.

Thanks!

Jing



I think you need to add more comments on the command.

E.g. if I want to map vector 0 to queue 1, what do I need to do?

write(1, queue_sel);
write(0, vector_sel);


That's true. Besides, two commands are used for msi sharing mode,
VIRTIO_MMIO_MSI_CMD_MAP_CONFIG and VIRTIO_MMIO_MSI_CMD_MAP_QUEUE:

"To set up the event and vector mapping for MSI sharing mode, driver
SHOULD write a valid MsiVecSel followed by
VIRTIO_MMIO_MSI_CMD_MAP_CONFIG/VIRTIO_MMIO_MSI_CMD_MAP_QUEUE command
to map the configuration change/selected queue events respectively."
(See spec patch 5/5)

So if the driver detects msi sharing mode, when it sets up a vq it
writes queue_sel (this already exists in vq setup), then the vector
sel, and then the MAP_QUEUE command to do the queue event mapping.




So actually the per vq msix could be done through this. 


Right, per vq msix can also be mapped by the 2 commands if we want.

The current design benefits those devices requesting per-vq MSI: the
driver doesn't have to set up the mapping during each queue setup,
since we define the relationship by default.



Well, since you've defined dynamic mapping, having some "default"
mapping won't reduce the complexity but increase it.





I don't get why you need to introduce MSI_SHARING_MASK, which is the
driver's responsibility instead of the device's.


MSI_SHARING_MASK is just for identifying the msi_sharing bit in 
readl(MsiState) (0x0c4). The device tells whether it wants msi_sharing.


MsiState register: R

le32 {
    msi_enabled : 1;
    msi_sharing: 1;
    reserved : 30;
};



The question is why the device wants such information.




The interrupt rate should have no direct relationship with whether it 
has been shared or not.




Btw, you introduce mask/unmask without pending; how do you deal with
the lost interrupt during masking then?



For msi non-sharing mode, no special action is needed because we
make per_vq_vector the rule, with a fixed relationship.


Correct me if this is not that clear for spec/code comments.



The ABI is not as straightforward as PCI's. Why not just reuse the
PCI layout?


E.g having

queue_sel
queue_msix_vector
msix_config

for configuring map between msi vector and queues/config


Thanks for the advice. :)

Actually, when looking into PCI, queue_msix_vector/msix_config holds
the MSI vector index, which is the same as the MMIO register MsiVecSel
(0x0d0).


So we don't introduce two extra registers for mapping even in sharing 
mode.


What do you think?



I'm not sure I get the point, but I still prefer the separate vector_sel 
from queue_msix_vector.


Btw, Michael proposed per-vq registers, which could also work.

Thanks






Then

vector_sel
address
data
pending
mask
unmask

for configuring msi table?


A PCI-like MSI-X table is not introduced to the device; instead,
commands are simply used for mask/configure/enable.


Thanks!

Jing



Thanks



Thanks!

Jing




?

Thanks






Thanks



- 


To unsubscribe, e-mail: virtio-dev-unsubscr...@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-h...@lists.oasis-open.org













Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt feature support

2020-02-12 Thread Jason Wang



On 2020/2/11 10:00 PM, Michael S. Tsirkin wrote:

Yes, it can work but it may bring extra effort when you want to mask a
vector which is shared by a lot of queues.

Thanks


masking should be per vq too.

--



Yes, but if the vector is shared by e.g. 16 queues, then all those
virtqueues need to be masked, which is expensive.


Thanks





Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt feature support

2020-02-11 Thread Michael S. Tsirkin
On Wed, Feb 12, 2020 at 11:54:53AM +0800, Liu, Jing2 wrote:
> 
> On 2/11/2020 3:40 PM, Jason Wang wrote:
> 
> 
> On 2020/2/11 2:02 PM, Liu, Jing2 wrote:
> 
> 
> 
> On 2/11/2020 12:02 PM, Jason Wang wrote:
> 
> 
> On 2020/2/11 11:35 AM, Liu, Jing2 wrote:
> 
> 
> On 2/11/2020 11:17 AM, Jason Wang wrote:
> 
> 
> On 2020/2/10 5:05 PM, Zha Bin wrote:
> 
> From: Liu Jiang
> 
> Userspace VMMs (e.g. Qemu microvm, Firecracker) take
> advantage of using
> virtio over mmio devices as a lightweight machine 
> model
> for modern
> cloud. The standard virtio over MMIO transport layer
> only supports one
> legacy interrupt, which is much heavier than virtio
> over PCI transport
> layer using MSI. Legacy interrupt has long work path
> and causes specific
> VMExits in following cases, which would considerably
> slow down the
> performance:
> 
> 1) read interrupt status register
> 2) update interrupt status register
> 3) write IOAPIC EOI register
> 
> We proposed to add MSI support for virtio over MMIO 
> via
> new feature
> bit VIRTIO_F_MMIO_MSI[1] which increases the interrupt
> performance.
> 
> With the VIRTIO_F_MMIO_MSI feature bit supported, the
> virtio-mmio MSI
> uses msi_sharing[1] to indicate the event and vector
> mapping.
> Bit 1 is 0: device uses non-sharing and fixed vector
> per event mapping.
> Bit 1 is 1: device uses sharing mode and dynamic
> mapping.
> 
> 
> 
> I believe dynamic mapping should cover the case of fixed
> vector?
> 
> 
> Actually this bit *aims* for msi sharing or msi non-sharing.
> 
> It means, when msi sharing bit is 1, device doesn't want 
> vector
> per queue
> 
> (it wants msi vector sharing as name) and doesn't want a high
> interrupt rate.
> 
> So driver turns to !per_vq_vectors and has to do dynamic
> mapping.
> 
> So they are opposite not superset.
> 
> Thanks!
> 
> Jing
> 
> 
> 
> I think you need to add more comments on the command.
> 
> E.g. if I want to map vector 0 to queue 1, what do I need to do?
> 
> write(1, queue_sel);
> write(0, vector_sel);
> 
> 
> That's true. Besides, two commands are used for msi sharing mode,
> 
> VIRTIO_MMIO_MSI_CMD_MAP_CONFIG and VIRTIO_MMIO_MSI_CMD_MAP_QUEUE.
> 
> "To set up the event and vector mapping for MSI sharing mode, driver
> SHOULD write a valid MsiVecSel followed by
> VIRTIO_MMIO_MSI_CMD_MAP_CONFIG/VIRTIO_MMIO_MSI_CMD_MAP_QUEUE command 
> to
> map the configuration change/selected queue events respectively.  "
> (See spec patch 5/5)
> 
> So if driver detects the msi sharing mode, when it does setup vq,
> writes the queue_sel (this already exists in setup vq), vector sel and
> then MAP_QUEUE command to do the queue event mapping.
> 
> 
> 
> So actually the per vq msix could be done through this.
> 
> Right, per vq msix can also be mapped by the 2 commands if we want. 
> 
> The current design benefits those devices requesting per-vq MSI: the driver
> doesn't have to set up the mapping during each queue setup,
> since we define the relationship by default.

Point being to save some exits for configuration? How much does it
save? IMHO we need to find a way to drastically simplify this interface,
to cut down the new LOC to at most 100-200, proportionally to the
performance gain it gives.


> 
> I don't get why you need to introduce MSI_SHARING_MASK, which is the driver's
> responsibility instead of the device's.
> 
> MSI_SHARING_MASK is just for identifying the msi_sharing bit in 
> readl(MsiState)
> (0x0c4). The device tells whether it wants msi_sharing.
> 
> MsiState register: R
> 
> le32 {
>     msi_enabled : 1;
>     msi_sharing: 1;
>     reserved : 30;
> };
> 
> The interrupt rate should have no direct relationship with whether it has
> been shared or not.
> 
> 
> 
> Btw, you introduce mask/unmask without pending; how do you deal with the lost
> interrupt during masking then?
> 
> 
> 
> For msi non-sharing mode, no special action is needed because we make
>

Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt feature support

2020-02-11 Thread Liu, Jing2


On 2/11/2020 3:40 PM, Jason Wang wrote:


On 2020/2/11 2:02 PM, Liu, Jing2 wrote:



On 2/11/2020 12:02 PM, Jason Wang wrote:


On 2020/2/11 11:35 AM, Liu, Jing2 wrote:


On 2/11/2020 11:17 AM, Jason Wang wrote:


On 2020/2/10 5:05 PM, Zha Bin wrote:

From: Liu Jiang

Userspace VMMs (e.g. Qemu microvm, Firecracker) take advantage of using
virtio over mmio devices as a lightweight machine model for the modern
cloud. The standard virtio over MMIO transport layer supports only one
legacy interrupt, which is much heavier than the virtio over PCI
transport layer using MSI. A legacy interrupt has a long work path and
causes specific VMExits in the following cases, which considerably slow
down performance:

1) read interrupt status register
2) update interrupt status register
3) write IOAPIC EOI register

We proposed to add MSI support for virtio over MMIO via the new feature
bit VIRTIO_F_MMIO_MSI[1], which increases interrupt performance.

With the VIRTIO_F_MMIO_MSI feature bit supported, virtio-mmio MSI
uses msi_sharing[1] to indicate the event and vector mapping.
Bit 1 is 0: device uses non-sharing and a fixed vector per event mapping.
Bit 1 is 1: device uses sharing mode and dynamic mapping.



I believe dynamic mapping should cover the case of fixed vector?


Actually this bit *aims* for msi sharing or msi non-sharing.

It means: when the msi sharing bit is 1, the device doesn't want a
vector per queue (it wants msi vector sharing, as the name says) and
doesn't expect a high interrupt rate.

So the driver turns to !per_vq_vectors and has to do dynamic mapping.

So they are opposites, not a superset.

Thanks!

Jing



I think you need to add more comments on the command.

E.g. if I want to map vector 0 to queue 1, what do I need to do?

write(1, queue_sel);
write(0, vector_sel);


That's true. Besides, two commands are used for msi sharing mode,
VIRTIO_MMIO_MSI_CMD_MAP_CONFIG and VIRTIO_MMIO_MSI_CMD_MAP_QUEUE:

"To set up the event and vector mapping for MSI sharing mode, driver
SHOULD write a valid MsiVecSel followed by
VIRTIO_MMIO_MSI_CMD_MAP_CONFIG/VIRTIO_MMIO_MSI_CMD_MAP_QUEUE command
to map the configuration change/selected queue events respectively."
(See spec patch 5/5)

So if the driver detects msi sharing mode, when it sets up a vq it
writes queue_sel (this already exists in vq setup), then the vector
sel, and then the MAP_QUEUE command to do the queue event mapping.




So actually the per vq msix could be done through this. 


Right, per vq msix can also be mapped by the 2 commands if we want.

The current design benefits those devices requesting per-vq MSI: the
driver doesn't have to set up the mapping during each queue setup,
since we define the relationship by default.


I don't get why you need to introduce MSI_SHARING_MASK, which is the
driver's responsibility instead of the device's.


MSI_SHARING_MASK is just for identifying the msi_sharing bit in 
readl(MsiState) (0x0c4). The device tells whether it wants msi_sharing.


MsiState register: R

le32 {
    msi_enabled : 1;
    msi_sharing: 1;
    reserved : 30;
};

The interrupt rate should have no direct relationship with whether it 
has been shared or not.




Btw, you introduce mask/unmask without pending; how do you deal with
the lost interrupt during masking then?



For msi non-sharing mode, no special action is needed because we make
per_vq_vector the rule, with a fixed relationship.


Correct me if this is not that clear for spec/code comments.



The ABI is not as straightforward as PCI's. Why not just reuse the
PCI layout?


E.g having

queue_sel
queue_msix_vector
msix_config

for configuring map between msi vector and queues/config


Thanks for the advice. :)

Actually, when looking into PCI, queue_msix_vector/msix_config holds the
MSI vector index, which is the same as the MMIO register MsiVecSel (0x0d0).


So we don't introduce two extra registers for mapping even in sharing mode.

What do you think?




Then

vector_sel
address
data
pending
mask
unmask

for configuring msi table?


A PCI-like MSI-X table is not introduced to the device; instead,
commands are simply used for mask/configure/enable.


Thanks!

Jing



Thanks



Thanks!

Jing




?

Thanks






Thanks












Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt feature support

2020-02-11 Thread Michael S. Tsirkin
On Tue, Feb 11, 2020 at 08:18:54PM +0800, Jason Wang wrote:
> 
> On 2020/2/11 8:08 PM, Michael S. Tsirkin wrote:
> > On Tue, Feb 11, 2020 at 08:04:24PM +0800, Jason Wang wrote:
> > > On 2020/2/11 7:58 PM, Michael S. Tsirkin wrote:
> > > > On Tue, Feb 11, 2020 at 03:40:23PM +0800, Jason Wang wrote:
> > > > > On 2020/2/11 2:02 PM, Liu, Jing2 wrote:
> > > > > > On 2/11/2020 12:02 PM, Jason Wang wrote:
> > > > > > > On 2020/2/11 11:35 AM, Liu, Jing2 wrote:
> > > > > > > > On 2/11/2020 11:17 AM, Jason Wang wrote:
> > > > > > > > > On 2020/2/10 5:05 PM, Zha Bin wrote:
> > > > > > > > > > From: Liu Jiang
> > > > > > > > > > 
> > > > > > > > > > Userspace VMMs (e.g. Qemu microvm, Firecracker) take
> > > > > > > > > > advantage of using
> > > > > > > > > > virtio over mmio devices as a lightweight machine model for 
> > > > > > > > > > modern
> > > > > > > > > > cloud. The standard virtio over MMIO transport layer
> > > > > > > > > > only supports one
> > > > > > > > > > legacy interrupt, which is much heavier than virtio over
> > > > > > > > > > PCI transport
> > > > > > > > > > layer using MSI. Legacy interrupt has long work path and
> > > > > > > > > > causes specific
> > > > > > > > > > VMExits in following cases, which would considerably slow 
> > > > > > > > > > down the
> > > > > > > > > > performance:
> > > > > > > > > > 
> > > > > > > > > > 1) read interrupt status register
> > > > > > > > > > 2) update interrupt status register
> > > > > > > > > > 3) write IOAPIC EOI register
> > > > > > > > > > 
> > > > > > > > > > We proposed to add MSI support for virtio over MMIO via new 
> > > > > > > > > > feature
> > > > > > > > > > bit VIRTIO_F_MMIO_MSI[1] which increases the interrupt 
> > > > > > > > > > performance.
> > > > > > > > > > 
> > > > > > > > > > With the VIRTIO_F_MMIO_MSI feature bit supported, the 
> > > > > > > > > > virtio-mmio MSI
> > > > > > > > > > uses msi_sharing[1] to indicate the event and vector 
> > > > > > > > > > mapping.
> > > > > > > > > > Bit 1 is 0: device uses non-sharing and fixed vector per
> > > > > > > > > > event mapping.
> > > > > > > > > > Bit 1 is 1: device uses sharing mode and dynamic mapping.
> > > > > > > > > I believe dynamic mapping should cover the case of fixed 
> > > > > > > > > vector?
> > > > > > > > > 
> > > > > > > > Actually this bit *aims* for msi sharing or msi non-sharing.
> > > > > > > > 
> > > > > > > > It means, when msi sharing bit is 1, device doesn't want vector
> > > > > > > > per queue
> > > > > > > > 
> > > > > > > > (it wants msi vector sharing as name) and doesn't want a high
> > > > > > > > interrupt rate.
> > > > > > > > 
> > > > > > > > So driver turns to !per_vq_vectors and has to do dynamic
> > > > > > > > mapping.
> > > > > > > > 
> > > > > > > > So they are opposite not superset.
> > > > > > > > 
> > > > > > > > Thanks!
> > > > > > > > 
> > > > > > > > Jing
> > > > > > > I think you need to add more comments on the command.
> > > > > > > 
> > > > > > > E.g. if I want to map vector 0 to queue 1, what do I need to do?
> > > > > > > 
> > > > > > > write(1, queue_sel);
> > > > > > > write(0, vector_sel);
> > > > > > That's true. Besides, two commands are used for msi sharing mode,
> > > > > > 
> > > > > > VIRTIO_MMIO_MSI_CMD_MAP_CONFIG and VIRTIO_MMIO_MSI_CMD_MAP_QUEUE.
> > > > > > 
> > > > > > "To set up the event and vector mapping for MSI sharing mode, driver
> > > > > > SHOULD write a valid MsiVecSel followed by
> > > > > > VIRTIO_MMIO_MSI_CMD_MAP_CONFIG/VIRTIO_MMIO_MSI_CMD_MAP_QUEUE 
> > > > > > command to
> > > > > > map the configuration change/selected queue events respectively.  " 
> > > > > > (See
> > > > > > spec patch 5/5)
> > > > > > 
> > > > > > So if driver detects the msi sharing mode, when it does setup vq, 
> > > > > > writes
> > > > > > the queue_sel (this already exists in setup vq), vector sel and then
> > > > > > MAP_QUEUE command to do the queue event mapping.
> > > > > > 
> > > > > So actually the per vq msix could be done through this. I don't get why
> > > > > you need to introduce MSI_SHARING_MASK, which is the driver's
> > > > > responsibility instead of the device's. The interrupt rate should have no
> > > > > direct relationship with whether it has been shared or not.
> > > > > 
> > > > > Btw, you introduce mask/unmask without pending; how do you deal with
> > > > > the lost interrupt during masking then?
> > > > pending can be an internal device register. As long as the device
> > > > does not lose interrupts while masked, all's well.
> > > 
> > > You mean the interrupt is raised automatically during unmask?
> > > 
> > 
> > yes - that's also what pci does.
> > 
> > the guest visible pending bit is partially implemented in qemu
> > as per pci spec but it's unused.
> 
> 
> Ok.
> 
> 
> > 
> > > > There's value in being able to say "this queue sends no
> > > > interrupts, do not bother checking the used notification area",
> > > > so we need a way to say that. So I guess an enable interrupts
> > > > 

Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt feature support

2020-02-11 Thread Jason Wang



On 2020/2/11 8:08 PM, Michael S. Tsirkin wrote:

On Tue, Feb 11, 2020 at 08:04:24PM +0800, Jason Wang wrote:

On 2020/2/11 7:58 PM, Michael S. Tsirkin wrote:

On Tue, Feb 11, 2020 at 03:40:23PM +0800, Jason Wang wrote:

On 2020/2/11 2:02 PM, Liu, Jing2 wrote:

On 2/11/2020 12:02 PM, Jason Wang wrote:

On 2020/2/11 11:35 AM, Liu, Jing2 wrote:

On 2/11/2020 11:17 AM, Jason Wang wrote:

On 2020/2/10 5:05 PM, Zha Bin wrote:

From: Liu Jiang

Userspace VMMs (e.g. Qemu microvm, Firecracker) take advantage of using
virtio over mmio devices as a lightweight machine model for the modern
cloud. The standard virtio over MMIO transport layer supports only one
legacy interrupt, which is much heavier than the virtio over PCI
transport layer using MSI. A legacy interrupt has a long work path and
causes specific VMExits in the following cases, which considerably slow
down performance:

1) read interrupt status register
2) update interrupt status register
3) write IOAPIC EOI register

We proposed to add MSI support for virtio over MMIO via the new feature
bit VIRTIO_F_MMIO_MSI[1], which increases interrupt performance.

With the VIRTIO_F_MMIO_MSI feature bit supported, virtio-mmio MSI
uses msi_sharing[1] to indicate the event and vector mapping.
Bit 1 is 0: device uses non-sharing and a fixed vector per event mapping.
Bit 1 is 1: device uses sharing mode and dynamic mapping.

I believe dynamic mapping should cover the case of fixed vector?


Actually this bit *aims* for msi sharing or msi non-sharing.

It means: when the msi sharing bit is 1, the device doesn't want a
vector per queue (it wants msi vector sharing, as the name says) and
doesn't expect a high interrupt rate.

So the driver turns to !per_vq_vectors and has to do dynamic mapping.

So they are opposites, not a superset.

Thanks!

Jing

I think you need to add more comments on the command.

E.g. if I want to map vector 0 to queue 1, what do I need to do?

write(1, queue_sel);
write(0, vector_sel);

That's true. Besides, two commands are used for msi sharing mode,
VIRTIO_MMIO_MSI_CMD_MAP_CONFIG and VIRTIO_MMIO_MSI_CMD_MAP_QUEUE:

"To set up the event and vector mapping for MSI sharing mode, driver
SHOULD write a valid MsiVecSel followed by
VIRTIO_MMIO_MSI_CMD_MAP_CONFIG/VIRTIO_MMIO_MSI_CMD_MAP_QUEUE command to
map the configuration change/selected queue events respectively." (See
spec patch 5/5)

So if the driver detects msi sharing mode, when it sets up a vq it writes
queue_sel (this already exists in vq setup), then the vector sel, and then
the MAP_QUEUE command to do the queue event mapping.


So actually the per vq msix could be done through this. I don't get why you
need to introduce MSI_SHARING_MASK, which is the driver's responsibility
instead of the device's. The interrupt rate should have no direct
relationship with whether it has been shared or not.

Btw, you introduce mask/unmask without pending; how do you deal with the
lost interrupt during masking then?

pending can be an internal device register. As long as the device
does not lose interrupts while masked, all's well.


You mean the interrupt is raised automatically during unmask?



yes - that's also what PCI does.

The guest-visible pending bit is partially implemented in QEMU
as per the PCI spec, but it's unused.



Ok.





There's value in being able to say "this queue sends no
interrupts, do not bother checking the used notification area",
so we need a way to say that. So I guess an enable-interrupts
register might have some value...
But besides that, it's enough to have mask/unmask/address/data
per vq.


Just to check, do you mean "per vector" here?

Thanks


No, per VQ. An indirection VQ -> vector -> address/data isn't
necessary for PCI either, but that ship has sailed.



Yes, it can work but it may bring extra effort when you want to mask a
vector which is shared by a lot of queues.


Thanks








Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt feature support

2020-02-11 Thread Michael S. Tsirkin
On Tue, Feb 11, 2020 at 08:04:24PM +0800, Jason Wang wrote:
> 
> > On 2020/2/11 7:58 PM, Michael S. Tsirkin wrote:
> > On Tue, Feb 11, 2020 at 03:40:23PM +0800, Jason Wang wrote:
> > > On 2020/2/11 2:02 PM, Liu, Jing2 wrote:
> > > > On 2/11/2020 12:02 PM, Jason Wang wrote:
> > > > > On 2020/2/11 11:35 AM, Liu, Jing2 wrote:
> > > > > > On 2/11/2020 11:17 AM, Jason Wang wrote:
> > > > > > > On 2020/2/10 5:05 PM, Zha Bin wrote:
> > > > > > > > From: Liu Jiang
> > > > > > > > 
> > > > > > > > Userspace VMMs (e.g. Qemu microvm, Firecracker) take
> > > > > > > > advantage of using
> > > > > > > > virtio over mmio devices as a lightweight machine model for 
> > > > > > > > modern
> > > > > > > > cloud. The standard virtio over MMIO transport layer
> > > > > > > > only supports one
> > > > > > > > legacy interrupt, which is much heavier than virtio over
> > > > > > > > PCI transport
> > > > > > > > layer using MSI. Legacy interrupt has long work path and
> > > > > > > > causes specific
> > > > > > > > VMExits in following cases, which would considerably slow down 
> > > > > > > > the
> > > > > > > > performance:
> > > > > > > > 
> > > > > > > > 1) read interrupt status register
> > > > > > > > 2) update interrupt status register
> > > > > > > > 3) write IOAPIC EOI register
> > > > > > > > 
> > > > > > > > We proposed to add MSI support for virtio over MMIO via new 
> > > > > > > > feature
> > > > > > > > bit VIRTIO_F_MMIO_MSI[1] which increases the interrupt 
> > > > > > > > performance.
> > > > > > > > 
> > > > > > > > With the VIRTIO_F_MMIO_MSI feature bit supported, the 
> > > > > > > > virtio-mmio MSI
> > > > > > > > uses msi_sharing[1] to indicate the event and vector mapping.
> > > > > > > > Bit 1 is 0: device uses non-sharing and fixed vector per
> > > > > > > > event mapping.
> > > > > > > > Bit 1 is 1: device uses sharing mode and dynamic mapping.
> > > > > > > I believe dynamic mapping should cover the case of fixed vector?
> > > > > > > 
> > > > > > Actually this bit *aims* for msi sharing or msi non-sharing.
> > > > > > 
> > > > > > It means, when msi sharing bit is 1, device doesn't want vector
> > > > > > per queue
> > > > > > 
> > > > > > (it wants msi vector sharing as name) and doesn't want a high
> > > > > > interrupt rate.
> > > > > > 
> > > > > > So driver turns to !per_vq_vectors and has to do dynamic mapping.
> > > > > > 
> > > > > > So they are opposite not superset.
> > > > > > 
> > > > > > Thanks!
> > > > > > 
> > > > > > Jing
> > > > > I think you need to add more comments on the command.
> > > > > 
> > > > > E.g. if I want to map vector 0 to queue 1, what do I need to do?
> > > > > 
> > > > > write(1, queue_sel);
> > > > > write(0, vector_sel);
> > > > That's true. Besides, two commands are used for msi sharing mode,
> > > > 
> > > > VIRTIO_MMIO_MSI_CMD_MAP_CONFIG and VIRTIO_MMIO_MSI_CMD_MAP_QUEUE.
> > > > 
> > > > "To set up the event and vector mapping for MSI sharing mode, driver
> > > > SHOULD write a valid MsiVecSel followed by
> > > > VIRTIO_MMIO_MSI_CMD_MAP_CONFIG/VIRTIO_MMIO_MSI_CMD_MAP_QUEUE command to
> > > > map the configuration change/selected queue events respectively.  " (See
> > > > spec patch 5/5)
> > > > 
> > > > So if driver detects the msi sharing mode, when it does setup vq, writes
> > > > the queue_sel (this already exists in setup vq), vector sel and then
> > > > MAP_QUEUE command to do the queue event mapping.
> > > > 
> > > So actually the per vq msix could be done through this. I don't get why
> > > you need to introduce MSI_SHARING_MASK, which is the driver's
> > > responsibility instead of the device's. The interrupt rate should have no
> > > direct relationship with whether it has been shared or not.
> > > 
> > > Btw, you introduce mask/unmask without pending, how to deal with the lost
> > > interrupt during the masking then?
> > pending can be an internal device register. as long as device
> > does not lose interrupts while masked, all's well.
> 
> 
> You meant raise the interrupt during unmask automatically?
> 


yes - that's also what pci does.

the guest visible pending bit is partially implemented in qemu
as per pci spec but it's unused.

> > 
> > There's value in being able to say "this queue sends no
> > interrupts, do not bother checking the used notification area",
> > so we need a way to say that. So I guess an enable interrupts
> > register might have some value...
> > But besides that, it's enough to have mask/unmask/address/data
> > per vq.
> 
> 
> Just to check, do you mean "per vector" here?
> 
> Thanks
> 

No, per VQ. An indirection VQ -> vector -> address/data isn't
necessary for PCI either, but that ship has sailed.

-- 
MST




Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt feature support

2020-02-11 Thread Jason Wang



On 2020/2/11 下午7:58, Michael S. Tsirkin wrote:

On Tue, Feb 11, 2020 at 03:40:23PM +0800, Jason Wang wrote:

On 2020/2/11 下午2:02, Liu, Jing2 wrote:

On 2/11/2020 12:02 PM, Jason Wang wrote:

On 2020/2/11 上午11:35, Liu, Jing2 wrote:

On 2/11/2020 11:17 AM, Jason Wang wrote:

On 2020/2/10 下午5:05, Zha Bin wrote:

From: Liu Jiang

Userspace VMMs (e.g. Qemu microvm, Firecracker) take
advantage of using
virtio over mmio devices as a lightweight machine model for modern
cloud. The standard virtio over MMIO transport layer
only supports one
legacy interrupt, which is much heavier than virtio over
PCI transport
layer using MSI. Legacy interrupt has a long work path and
causes specific
VMExits in the following cases, which would considerably slow down the
performance:

1) read interrupt status register
2) update interrupt status register
3) write IOAPIC EOI register

We proposed to add MSI support for virtio over MMIO via new feature
bit VIRTIO_F_MMIO_MSI[1] which increases the interrupt performance.

With the VIRTIO_F_MMIO_MSI feature bit supported, the virtio-mmio MSI
uses msi_sharing[1] to indicate the event and vector mapping.
Bit 1 is 0: device uses non-sharing and fixed vector per
event mapping.
Bit 1 is 1: device uses sharing mode and dynamic mapping.

I believe dynamic mapping should cover the case of fixed vector?


Actually this bit *aims* for msi sharing or msi non-sharing.

It means, when the msi sharing bit is 1, the device doesn't want a vector
per queue

(it wants msi vector sharing, as the name suggests) and doesn't want a high
interrupt rate.

So the driver turns to !per_vq_vectors and has to do dynamic mapping.

So they are opposites, not a superset.

Thanks!

Jing

I think you need to add more comments on the command.

E.g. if I want to map vector 0 to queue 1, what do I need to do?

write(1, queue_sel);
write(0, vector_sel);

That's true. Besides, two commands are used for msi sharing mode,

VIRTIO_MMIO_MSI_CMD_MAP_CONFIG and VIRTIO_MMIO_MSI_CMD_MAP_QUEUE.

"To set up the event and vector mapping for MSI sharing mode, driver
SHOULD write a valid MsiVecSel followed by
VIRTIO_MMIO_MSI_CMD_MAP_CONFIG/VIRTIO_MMIO_MSI_CMD_MAP_QUEUE command to
map the configuration change/selected queue events respectively.  " (See
spec patch 5/5)

So if driver detects the msi sharing mode, when it does setup vq, writes
the queue_sel (this already exists in setup vq), vector sel and then
MAP_QUEUE command to do the queue event mapping.


So actually the per vq msix could be done through this. I don't get why you
need to introduce MSI_SHARING_MASK, which is the driver's responsibility
instead of the device's. The interrupt rate should have no direct
relationship with whether it has been shared or not.

Btw, you introduce mask/unmask without pending, how to deal with the lost
interrupt during the masking then?

pending can be an internal device register. as long as device
does not lose interrupts while masked, all's well.



You meant raise the interrupt during unmask automatically?




There's value in being able to say "this queue sends no
interrupts, do not bother checking the used notification area",
so we need a way to say that. So I guess an enable interrupts
register might have some value...
But besides that, it's enough to have mask/unmask/address/data
per vq.



Just to check, do you mean "per vector" here?

Thanks










Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt feature support

2020-02-11 Thread Michael S. Tsirkin
On Tue, Feb 11, 2020 at 03:40:23PM +0800, Jason Wang wrote:
> 
> On 2020/2/11 下午2:02, Liu, Jing2 wrote:
> > 
> > 
> > On 2/11/2020 12:02 PM, Jason Wang wrote:
> > > 
> > > On 2020/2/11 上午11:35, Liu, Jing2 wrote:
> > > > 
> > > > On 2/11/2020 11:17 AM, Jason Wang wrote:
> > > > > 
> > > > > On 2020/2/10 下午5:05, Zha Bin wrote:
> > > > > > From: Liu Jiang
> > > > > > 
> > > > > > Userspace VMMs (e.g. Qemu microvm, Firecracker) take
> > > > > > advantage of using
> > > > > > virtio over mmio devices as a lightweight machine model for modern
> > > > > > cloud. The standard virtio over MMIO transport layer
> > > > > > only supports one
> > > > > > legacy interrupt, which is much heavier than virtio over
> > > > > > PCI transport
> > > > > > layer using MSI. Legacy interrupt has a long work path and
> > > > > > causes specific
> > > > > > VMExits in the following cases, which would considerably slow down the
> > > > > > performance:
> > > > > > 
> > > > > > 1) read interrupt status register
> > > > > > 2) update interrupt status register
> > > > > > 3) write IOAPIC EOI register
> > > > > > 
> > > > > > We proposed to add MSI support for virtio over MMIO via new feature
> > > > > > bit VIRTIO_F_MMIO_MSI[1] which increases the interrupt performance.
> > > > > > 
> > > > > > With the VIRTIO_F_MMIO_MSI feature bit supported, the virtio-mmio 
> > > > > > MSI
> > > > > > uses msi_sharing[1] to indicate the event and vector mapping.
> > > > > > Bit 1 is 0: device uses non-sharing and fixed vector per
> > > > > > event mapping.
> > > > > > Bit 1 is 1: device uses sharing mode and dynamic mapping.
> > > > > 
> > > > > 
> > > > > I believe dynamic mapping should cover the case of fixed vector?
> > > > > 
> > > > Actually this bit *aims* for msi sharing or msi non-sharing.
> > > > 
> > > > It means, when the msi sharing bit is 1, the device doesn't want a
> > > > vector per queue
> > > > 
> > > > (it wants msi vector sharing, as the name suggests) and doesn't want
> > > > a high interrupt rate.
> > > > 
> > > > So the driver turns to !per_vq_vectors and has to do dynamic mapping.
> > > > 
> > > > So they are opposites, not a superset.
> > > > 
> > > > Thanks!
> > > > 
> > > > Jing
> > > 
> > > 
> > > I think you need to add more comments on the command.
> > > 
> > > E.g. if I want to map vector 0 to queue 1, what do I need to do?
> > > 
> > > write(1, queue_sel);
> > > write(0, vector_sel);
> > 
> > That's true. Besides, two commands are used for msi sharing mode,
> > 
> > VIRTIO_MMIO_MSI_CMD_MAP_CONFIG and VIRTIO_MMIO_MSI_CMD_MAP_QUEUE.
> > 
> > "To set up the event and vector mapping for MSI sharing mode, driver
> > SHOULD write a valid MsiVecSel followed by
> > VIRTIO_MMIO_MSI_CMD_MAP_CONFIG/VIRTIO_MMIO_MSI_CMD_MAP_QUEUE command to
> > map the configuration change/selected queue events respectively.  " (See
> > spec patch 5/5)
> > 
> > So if driver detects the msi sharing mode, when it does setup vq, writes
> > the queue_sel (this already exists in setup vq), vector sel and then
> > MAP_QUEUE command to do the queue event mapping.
> > 
> 
> So actually the per vq msix could be done through this. I don't get why you
> need to introduce MSI_SHARING_MASK, which is the driver's responsibility
> instead of the device's. The interrupt rate should have no direct
> relationship with whether it has been shared or not.
> 
> Btw, you introduce mask/unmask without pending, how to deal with the lost
> interrupt during the masking then?

pending can be an internal device register. as long as device
does not lose interrupts while masked, all's well.

There's value in being able to say "this queue sends no
interrupts, do not bother checking the used notification area",
so we need a way to say that. So I guess an enable interrupts
register might have some value...
But besides that, it's enough to have mask/unmask/address/data
per vq.


> 
> > For msi non-sharing mode, no special action is needed because we make
> > the rule of per_vq_vector and fixed relationship.
> > 
> > Correct me if this is not that clear for spec/code comments.
> > 
> 
> The ABI is not as straightforward as PCI did. Why not just reuse the PCI
> layout?
> 
> E.g having
> 
> queue_sel
> queue_msix_vector
> msix_config
> 
> for configuring map between msi vector and queues/config
> 
> Then
> 
> vector_sel
> address
> data
> pending
> mask
> unmask
> 
> for configuring msi table?
> 
> Thanks
> 
> 
> > Thanks!
> > 
> > Jing
> > 
> > 
> > > 
> > > ?
> > > 
> > > Thanks
> > > 
> > > 
> > > > 
> > > > 
> > > > > Thanks
> > > > > 
> > > > > 
> > > > > 
> > > > > -
> > > > > To unsubscribe, e-mail: virtio-dev-unsubscr...@lists.oasis-open.org
> > > > > For additional commands, e-mail: virtio-dev-h...@lists.oasis-open.org
> > > > > 
> > > > 
> > > 




Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt feature support

2020-02-11 Thread Michael S. Tsirkin
On Tue, Feb 11, 2020 at 11:35:43AM +0800, Liu, Jing2 wrote:
> 
> On 2/11/2020 11:17 AM, Jason Wang wrote:
> > 
> > On 2020/2/10 下午5:05, Zha Bin wrote:
> > > From: Liu Jiang
> > > 
> > > Userspace VMMs (e.g. Qemu microvm, Firecracker) take advantage of using
> > > virtio over mmio devices as a lightweight machine model for modern
> > > cloud. The standard virtio over MMIO transport layer only supports one
> > > legacy interrupt, which is much heavier than virtio over PCI transport
> > > layer using MSI. Legacy interrupt has a long work path and causes specific
> > > VMExits in the following cases, which would considerably slow down the
> > > performance:
> > > 
> > > 1) read interrupt status register
> > > 2) update interrupt status register
> > > 3) write IOAPIC EOI register
> > > 
> > > We proposed to add MSI support for virtio over MMIO via new feature
> > > bit VIRTIO_F_MMIO_MSI[1] which increases the interrupt performance.
> > > 
> > > With the VIRTIO_F_MMIO_MSI feature bit supported, the virtio-mmio MSI
> > > uses msi_sharing[1] to indicate the event and vector mapping.
> > > Bit 1 is 0: device uses non-sharing and fixed vector per event mapping.
> > > Bit 1 is 1: device uses sharing mode and dynamic mapping.
> > 
> > 
> > I believe dynamic mapping should cover the case of fixed vector?
> > 
> Actually this bit *aims* for msi sharing or msi non-sharing.
> 
> It means, when the msi sharing bit is 1, the device doesn't want a vector
> per queue
> 
> (it wants msi vector sharing, as the name suggests) and doesn't want a high
> interrupt rate.
> 
> So the driver turns to !per_vq_vectors and has to do dynamic mapping.
> 
> So they are opposites, not a superset.
> 
> Thanks!
> 
> Jing

what's the point of all this flexibility? the cover letter
complains about the code size of pci, then you go back
and add a ton of options with no justification.



> 
> > Thanks
> > 
> > 
> > 
> > 




Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt feature support

2020-02-10 Thread Jason Wang



On 2020/2/11 下午2:02, Liu, Jing2 wrote:



On 2/11/2020 12:02 PM, Jason Wang wrote:


On 2020/2/11 上午11:35, Liu, Jing2 wrote:


On 2/11/2020 11:17 AM, Jason Wang wrote:


On 2020/2/10 下午5:05, Zha Bin wrote:

From: Liu Jiang

Userspace VMMs (e.g. Qemu microvm, Firecracker) take advantage of 
using

virtio over mmio devices as a lightweight machine model for modern
cloud. The standard virtio over MMIO transport layer only supports 
one
legacy interrupt, which is much heavier than virtio over PCI 
transport
layer using MSI. Legacy interrupt has a long work path and causes specific
VMExits in the following cases, which would considerably slow down the
performance:

1) read interrupt status register
2) update interrupt status register
3) write IOAPIC EOI register

We proposed to add MSI support for virtio over MMIO via new feature
bit VIRTIO_F_MMIO_MSI[1] which increases the interrupt performance.

With the VIRTIO_F_MMIO_MSI feature bit supported, the virtio-mmio MSI
uses msi_sharing[1] to indicate the event and vector mapping.
Bit 1 is 0: device uses non-sharing and fixed vector per event 
mapping.

Bit 1 is 1: device uses sharing mode and dynamic mapping.



I believe dynamic mapping should cover the case of fixed vector?


Actually this bit *aims* for msi sharing or msi non-sharing.

It means, when the msi sharing bit is 1, the device doesn't want a vector
per queue

(it wants msi vector sharing, as the name suggests) and doesn't want a high
interrupt rate.

So the driver turns to !per_vq_vectors and has to do dynamic mapping.

So they are opposites, not a superset.

Thanks!

Jing



I think you need to add more comments on the command.

E.g. if I want to map vector 0 to queue 1, what do I need to do?

write(1, queue_sel);
write(0, vector_sel);


That's true. Besides, two commands are used for msi sharing mode,

VIRTIO_MMIO_MSI_CMD_MAP_CONFIG and VIRTIO_MMIO_MSI_CMD_MAP_QUEUE.

"To set up the event and vector mapping for MSI sharing mode, driver 
SHOULD write a valid MsiVecSel followed by 
VIRTIO_MMIO_MSI_CMD_MAP_CONFIG/VIRTIO_MMIO_MSI_CMD_MAP_QUEUE command 
to map the configuration change/selected queue events respectively.  " 
(See spec patch 5/5)


So if driver detects the msi sharing mode, when it does setup vq, 
writes the queue_sel (this already exists in setup vq), vector sel and 
then MAP_QUEUE command to do the queue event mapping.




So actually the per vq msix could be done through this. I don't get why
you need to introduce MSI_SHARING_MASK, which is the driver's
responsibility instead of the device's. The interrupt rate should have no
direct relationship with whether it has been shared or not.


Btw, you introduce mask/unmask without pending, how to deal with the 
lost interrupt during the masking then?



For msi non-sharing mode, no special action is needed because we make 
the rule of per_vq_vector and fixed relationship.


Correct me if this is not that clear for spec/code comments.



The ABI is not as straightforward as PCI did. Why not just reuse the PCI 
layout?


E.g having

queue_sel
queue_msix_vector
msix_config

for configuring map between msi vector and queues/config

Then

vector_sel
address
data
pending
mask
unmask

for configuring msi table?

Thanks



Thanks!

Jing




?

Thanks






Thanks













Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt feature support

2020-02-10 Thread Liu, Jing2


On 2/11/2020 12:02 PM, Jason Wang wrote:


On 2020/2/11 上午11:35, Liu, Jing2 wrote:


On 2/11/2020 11:17 AM, Jason Wang wrote:


On 2020/2/10 下午5:05, Zha Bin wrote:

From: Liu Jiang

Userspace VMMs (e.g. Qemu microvm, Firecracker) take advantage of 
using

virtio over mmio devices as a lightweight machine model for modern
cloud. The standard virtio over MMIO transport layer only supports one
legacy interrupt, which is much heavier than virtio over PCI transport
layer using MSI. Legacy interrupt has a long work path and causes specific
VMExits in the following cases, which would considerably slow down the
performance:

1) read interrupt status register
2) update interrupt status register
3) write IOAPIC EOI register

We proposed to add MSI support for virtio over MMIO via new feature
bit VIRTIO_F_MMIO_MSI[1] which increases the interrupt performance.

With the VIRTIO_F_MMIO_MSI feature bit supported, the virtio-mmio MSI
uses msi_sharing[1] to indicate the event and vector mapping.
Bit 1 is 0: device uses non-sharing and fixed vector per event 
mapping.

Bit 1 is 1: device uses sharing mode and dynamic mapping.



I believe dynamic mapping should cover the case of fixed vector?


Actually this bit *aims* for msi sharing or msi non-sharing.

It means, when the msi sharing bit is 1, the device doesn't want a vector
per queue

(it wants msi vector sharing, as the name suggests) and doesn't want a high
interrupt rate.

So the driver turns to !per_vq_vectors and has to do dynamic mapping.

So they are opposites, not a superset.

Thanks!

Jing



I think you need to add more comments on the command.

E.g. if I want to map vector 0 to queue 1, what do I need to do?

write(1, queue_sel);
write(0, vector_sel);


That's true. Besides, two commands are used for msi sharing mode,

VIRTIO_MMIO_MSI_CMD_MAP_CONFIG and VIRTIO_MMIO_MSI_CMD_MAP_QUEUE.

"To set up the event and vector mapping for MSI sharing mode, driver 
SHOULD write a valid MsiVecSel followed by 
VIRTIO_MMIO_MSI_CMD_MAP_CONFIG/VIRTIO_MMIO_MSI_CMD_MAP_QUEUE command to 
map the configuration change/selected queue events respectively.  " (See 
spec patch 5/5)


So if driver detects the msi sharing mode, when it does setup vq, writes 
the queue_sel (this already exists in setup vq), vector sel and then 
MAP_QUEUE command to do the queue event mapping.


For msi non-sharing mode, no special action is needed because we make 
the rule of per_vq_vector and fixed relationship.


Correct me if this is not that clear for spec/code comments.

Thanks!

Jing




?

Thanks






Thanks










Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt feature support

2020-02-10 Thread Jason Wang



On 2020/2/11 上午11:35, Liu, Jing2 wrote:


On 2/11/2020 11:17 AM, Jason Wang wrote:


On 2020/2/10 下午5:05, Zha Bin wrote:

From: Liu Jiang

Userspace VMMs (e.g. Qemu microvm, Firecracker) take advantage of using
virtio over mmio devices as a lightweight machine model for modern
cloud. The standard virtio over MMIO transport layer only supports one
legacy interrupt, which is much heavier than virtio over PCI transport
layer using MSI. Legacy interrupt has a long work path and causes specific
VMExits in the following cases, which would considerably slow down the
performance:

1) read interrupt status register
2) update interrupt status register
3) write IOAPIC EOI register

We proposed to add MSI support for virtio over MMIO via new feature
bit VIRTIO_F_MMIO_MSI[1] which increases the interrupt performance.

With the VIRTIO_F_MMIO_MSI feature bit supported, the virtio-mmio MSI
uses msi_sharing[1] to indicate the event and vector mapping.
Bit 1 is 0: device uses non-sharing and fixed vector per event mapping.
Bit 1 is 1: device uses sharing mode and dynamic mapping.



I believe dynamic mapping should cover the case of fixed vector?


Actually this bit *aims* for msi sharing or msi non-sharing.

It means, when the msi sharing bit is 1, the device doesn't want a vector
per queue

(it wants msi vector sharing, as the name suggests) and doesn't want a high
interrupt rate.

So the driver turns to !per_vq_vectors and has to do dynamic mapping.

So they are opposites, not a superset.

Thanks!

Jing



I think you need to add more comments on the command.

E.g. if I want to map vector 0 to queue 1, what do I need to do?

write(1, queue_sel);
write(0, vector_sel);

?

Thanks






Thanks











Re: [virtio-dev] Re: [PATCH v2 4/5] virtio-mmio: add MSI interrupt feature support

2020-02-10 Thread Liu, Jing2



On 2/11/2020 11:17 AM, Jason Wang wrote:


On 2020/2/10 下午5:05, Zha Bin wrote:

From: Liu Jiang

Userspace VMMs (e.g. Qemu microvm, Firecracker) take advantage of using
virtio over mmio devices as a lightweight machine model for modern
cloud. The standard virtio over MMIO transport layer only supports one
legacy interrupt, which is much heavier than virtio over PCI transport
layer using MSI. Legacy interrupt has a long work path and causes specific
VMExits in the following cases, which would considerably slow down the
performance:

1) read interrupt status register
2) update interrupt status register
3) write IOAPIC EOI register

We proposed to add MSI support for virtio over MMIO via new feature
bit VIRTIO_F_MMIO_MSI[1] which increases the interrupt performance.

With the VIRTIO_F_MMIO_MSI feature bit supported, the virtio-mmio MSI
uses msi_sharing[1] to indicate the event and vector mapping.
Bit 1 is 0: device uses non-sharing and fixed vector per event mapping.
Bit 1 is 1: device uses sharing mode and dynamic mapping.



I believe dynamic mapping should cover the case of fixed vector?


Actually this bit *aims* for msi sharing or msi non-sharing.

It means, when the msi sharing bit is 1, the device doesn't want a vector
per queue

(it wants msi vector sharing, as the name suggests) and doesn't want a high
interrupt rate.

So the driver turns to !per_vq_vectors and has to do dynamic mapping.

So they are opposites, not a superset.

Thanks!

Jing



Thanks


