[dpdk-dev] [PATCH v5 resend 03/12] vhost: vring queue setup for multiple queue support

2015-09-24 Thread Marcel Apfelbaum
On 09/23/2015 06:46 AM, Yuanhan Liu wrote:
> On Tue, Sep 22, 2015 at 05:51:02PM +0300, Marcel Apfelbaum wrote:
[...]
>>> It's proved to work after the fix (at least in my testing), but
>>> it's late here and I'm gonna send a new version tomorrow, addressing
>>> some other comments as well. Please do more testing then :)
>>>
>
> It's unlikely that I will send another version unless I have a clear clue
> how to address a comment from Michael about vring flush.

Hi,

I don't pretend to understand exactly how this works in DPDK, but since the
objective is to have no packets left in the vring, you could:

1. Disable vq processing of new packets. (You introduced the 'enable' flag.)
2. Wait a reasonable amount of time until the processing cores
finish dealing with the packets currently in flight.
3. Check the vqs to confirm that no packets are left waiting for processing.

Again, this is only a suggestion and may be incomplete (or naive).
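
As a very rough sketch of steps 1-3 (the names below, demo_vq/enabled/inflight,
are illustrative placeholders only, not the real DPDK vhost structures):

    #include <stdbool.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Illustrative queue state only; not the actual vhost_virtqueue layout. */
    struct demo_vq {
            volatile bool     enabled;   /* cleared to stop accepting new packets */
            volatile uint32_t inflight;  /* packets the processing cores still hold */
    };

    /* Drain a queue: disable it, then poll until it is empty or we time out. */
    static bool demo_vq_drain(struct demo_vq *vq, int timeout_ms)
    {
            vq->enabled = false;            /* 1. disable processing of new packets */
            while (timeout_ms-- > 0) {      /* 2. give the cores time to finish     */
                    if (vq->inflight == 0)  /* 3. nothing left waiting: flush done  */
                            return true;
                    usleep(1000);           /* re-check every millisecond           */
            }
            return vq->inflight == 0;
    }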

>
> But anyway, you could still help me prove the fix works. You can
> apply the attachment on top of my old patchset, and it should work.
>

I tested it and it works just fine!

Thanks again,
Marcel

>   --yliu
>>
>> That is very good news!
>> Tomorrow we have a holiday, but the day after that I'll try it for sure.
>



[dpdk-dev] [PATCH v5 resend 03/12] vhost: vring queue setup for multiple queue support

2015-09-23 Thread Yuanhan Liu
On Tue, Sep 22, 2015 at 05:51:02PM +0300, Marcel Apfelbaum wrote:
> >It's proved to work after the fix (at least in my testing), but
> >it's late here and I'm gonna send a new version tomorrow, addressing
> >some other comments as well. Please do more testing then :)
> >

It's unlikely that I will send another version unless I have a clear clue
how to address a comment from Michael about vring flush.

But anyway, you could still help me prove the fix works. You can
apply the attachment on top of my old patchset, and it should work.

--yliu
> 
> That is very good news!
> Tomorrow we have a holiday, but the day after that I'll try it for sure.

-- next part --
diff --git a/lib/librte_vhost/virtio-net.c b/lib/librte_vhost/virtio-net.c
index 33bdacd..d304ee6 100644
--- a/lib/librte_vhost/virtio-net.c
+++ b/lib/librte_vhost/virtio-net.c
@@ -467,6 +467,8 @@ static int
 set_features(struct vhost_device_ctx ctx, uint64_t *pu)
 {
struct virtio_net *dev;
+   uint16_t vhost_hlen;
+   uint16_t i;

dev = get_device(ctx);
if (dev == NULL)
@@ -474,27 +476,26 @@ set_features(struct vhost_device_ctx ctx, uint64_t *pu)
if (*pu & ~VHOST_FEATURES)
return -1;

-   /* Store the negotiated feature list for the device. */
dev->features = *pu;
-
-   /* Set the vhost_hlen depending on if VIRTIO_NET_F_MRG_RXBUF is set. */
if (dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF)) {
LOG_DEBUG(VHOST_CONFIG,
"(%"PRIu64") Mergeable RX buffers enabled\n",
dev->device_fh);
-   dev->virtqueue[VIRTIO_RXQ]->vhost_hlen =
-   sizeof(struct virtio_net_hdr_mrg_rxbuf);
-   dev->virtqueue[VIRTIO_TXQ]->vhost_hlen =
-   sizeof(struct virtio_net_hdr_mrg_rxbuf);
+   vhost_hlen = sizeof(struct virtio_net_hdr_mrg_rxbuf);
} else {
LOG_DEBUG(VHOST_CONFIG,
"(%"PRIu64") Mergeable RX buffers disabled\n",
dev->device_fh);
-   dev->virtqueue[VIRTIO_RXQ]->vhost_hlen =
-   sizeof(struct virtio_net_hdr);
-   dev->virtqueue[VIRTIO_TXQ]->vhost_hlen =
-   sizeof(struct virtio_net_hdr);
+   vhost_hlen = sizeof(struct virtio_net_hdr);
+   }
+
+   for (i = 0; i < dev->virt_qp_nb; i++) {
+   uint16_t base_idx = i * VIRTIO_QNUM;
+
+   dev->virtqueue[base_idx + VIRTIO_RXQ]->vhost_hlen = vhost_hlen;
+   dev->virtqueue[base_idx + VIRTIO_TXQ]->vhost_hlen = vhost_hlen;
}
+
return 0;
 }



[dpdk-dev] [PATCH v5 resend 03/12] vhost: vring queue setup for multiple queue support

2015-09-22 Thread Yuanhan Liu
On Tue, Sep 22, 2015 at 01:06:17PM +0300, Marcel Apfelbaum wrote:
> On 09/22/2015 12:21 PM, Yuanhan Liu wrote:
> >On Tue, Sep 22, 2015 at 11:47:34AM +0300, Marcel Apfelbaum wrote:
> >>On 09/22/2015 11:34 AM, Yuanhan Liu wrote:
> >>>On Tue, Sep 22, 2015 at 11:10:13AM +0300, Marcel Apfelbaum wrote:
> On 09/22/2015 10:31 AM, Yuanhan Liu wrote:
> >On Mon, Sep 21, 2015 at 08:56:30PM +0300, Marcel Apfelbaum wrote:
> [...]
> >>>
> >>>Hi,
> >>>
> >>>I made 4 cleanup patches a few weeks ago, including the patch
> >>>to define kickfd and callfd as int type, and they have already got
> >>>the ACK from Huawei Xie and Changchun Ouyang. It's likely that
> >>>they will be merged, hence I made this patchset based on them.
> >>>
> >>>This will also answer the question from your other email: can't
> >>>apply.
> >>
> >>Hi,
> >>Thank you for the response, it makes sense now.
> >>
> >>I have another issue, maybe you can help.
> >>I have some problems making it work with OVS/DPDK backend and 
> >>virtio-net driver in guest.
> >>
> >>I am using a simple setup:
> >> http://wiki.qemu.org/Features/vhost-user-ovs-dpdk
> >>that connects 2 VMs using OVS's dpdkvhostuser ports (regular virtio-net 
> >>driver in guest, not the PMD driver).
> >>
> >>The setup worked fine with the prev DPDK MQ implementation (V4), 
> >>however on this one the traffic stops
> >>once I set queues=n in guest.
> >
> >Hi,
> >
> >Could you be more specific about that? It also would be helpful if you
> >could tell me the steps, besides those setup steps you mentioned in the
> >qemu wiki and this email, you did for testing.
> >
> 
> Hi,
> Thank you for your help.
> 
> I am sorry the wiki is not enough, I'll be happy to add all the missing 
> parts.
> In the meantime maybe you can tell me where the problem is, I also 
> suggest to
> post here the output of journalctl command.
> 
> We only need a regular machine and we want traffic between 2 VMs. I'll 
> try to summarize the steps:
> 
> 1. Be sure you have enough hugepages enabled (2M pages are enough) and 
> mounted.
> 2. Configure and start OVS following the wiki
> - we only want one bridge with 2 dpdkvhostuser ports.
> 3. Start VMs using the wiki command line
> - check journalctl for possible errors. You can use
>  journalctl  --since `date +%T --date="-10 minutes"`
>   to see only last 10 minutes.
> 4. Configure the guests IPs.
> - Disable the Network Manager as described below in the mail.
> 5. At this point you should be able to ping between guests.
> 
> Please let me know if you have any problem until this point.
> I'll be happy to help. Please point any special steps you made that
> are not in the WIKI. The journalctl logs would also help.
> 
> Does the ping between VMs work now?
> >>>
> >>>Yes, it works, too. I can ping the other vm inside a vm.
> >>>
> >>> [root@dpdk-kvm ~]# ethtool -l eth0
> >>> Channel parameters for eth0:
> >>> Pre-set maximums:
> >>> RX: 0
> >>> TX: 0
> >>> Other:  0
> >>> Combined:   2
> >>> Current hardware settings:
> >>> RX: 0
> >>> TX: 0
> >>> Other:  0
> >>> Combined:   2
> >>>
> >>> [root@dpdk-kvm ~]# ifconfig eth0
> >>> eth0: flags=4163  mtu 1500
> >>> inet 192.168.100.11  netmask 255.255.255.0  broadcast 
> >>> 192.168.100.255
> >>> inet6 fe80::5054:ff:fe12:3459  prefixlen 64  scopeid 
> >>> 0x20
> >>> ether 52:54:00:12:34:59  txqueuelen 1000  (Ethernet)
> >>> RX packets 56  bytes 5166 (5.0 KiB)
> >>> RX errors 0  dropped 0  overruns 0  frame 0
> >>> TX packets 84  bytes 8303 (8.1 KiB)
> >>> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> >>>
> >>> [root@dpdk-kvm ~]# ping 192.168.100.10
> >>> PING 192.168.100.10 (192.168.100.10) 56(84) bytes of data.
> >>> 64 bytes from 192.168.100.10: icmp_seq=1 ttl=64 time=0.213 ms
> >>> 64 bytes from 192.168.100.10: icmp_seq=2 ttl=64 time=0.094 ms
> >>> 64 bytes from 192.168.100.10: icmp_seq=3 ttl=64 time=0.246 ms
> >>> 64 bytes from 192.168.100.10: icmp_seq=4 ttl=64 time=0.153 ms
> >>> 64 bytes from 192.168.100.10: icmp_seq=5 ttl=64 time=0.104 ms
> >>> ^C
> 
> If yes, please let me know and I'll go over MQ enabling.
> >>>
> >>>I'm just wondering why it doesn't work on your side.
> >>
> >>Hi,
> >>
> >>This is working also for me, but without enabling the MQ. (ethtool -L eth0 
> >>combined n (n>1) )
> >>The problem starts when I am applying the patches and I enable MQ. (Need a 
> >>slightly different QEMU commandline)
> >>
> >>>
> 
> 

[dpdk-dev] [PATCH v5 resend 03/12] vhost: vring queue setup for multiple queue support

2015-09-22 Thread Marcel Apfelbaum
On 09/22/2015 05:22 PM, Yuanhan Liu wrote:
> On Tue, Sep 22, 2015 at 01:06:17PM +0300, Marcel Apfelbaum wrote:
>> On 09/22/2015 12:21 PM, Yuanhan Liu wrote:
>>> On Tue, Sep 22, 2015 at 11:47:34AM +0300, Marcel Apfelbaum wrote:
 On 09/22/2015 11:34 AM, Yuanhan Liu wrote:
> On Tue, Sep 22, 2015 at 11:10:13AM +0300, Marcel Apfelbaum wrote:
>> On 09/22/2015 10:31 AM, Yuanhan Liu wrote:
>>> On Mon, Sep 21, 2015 at 08:56:30PM +0300, Marcel Apfelbaum wrote:
>> [...]
>
> Hi,
>
> I made 4 cleanup patches a few weeks ago, including the patch
> to define kickfd and callfd as int type, and they have already got
> the ACK from Huawei Xie and Changchun Ouyang. It's likely that
> they will be merged, hence I made this patchset based on them.
>
> This will also answer the question from your other email: can't
> apply.

 Hi,
 Thank you for the response, it makes sense now.

 I have another issue, maybe you can help.
 I have some problems making it work with OVS/DPDK backend and 
 virtio-net driver in guest.

 I am using a simple setup:
  http://wiki.qemu.org/Features/vhost-user-ovs-dpdk
 that connects 2 VMs using OVS's dpdkvhostuser ports (regular 
 virtio-net driver in guest, not the PMD driver).

 The setup worked fine with the prev DPDK MQ implementation (V4), 
 however on this one the traffic stops
 once I set queues=n in guest.
>>>
>>> Hi,
>>>
>>> Could you be more specific about that? It also would be helpful if you
>>> could tell me the steps, besides those setup steps you mentioned in the
>>> qemu wiki and this email, you did for testing.
>>>
>>
>> Hi,
>> Thank you for your help.
>>
>> I am sorry the wiki is not enough, I'll be happy to add all the missing 
>> parts.
>> In the meantime maybe you can tell me where the problem is, I also 
>> suggest to
>> post here the output of journalctl command.
>>
>> We only need a regular machine and we want traffic between 2 VMs. I'll 
>> try to summarize the steps:
>>
>> 1. Be sure you have enough hugepages enabled (2M pages are enough) and 
>> mounted.
>> 2. Configure and start OVS following the wiki
>> - we only want one bridge with 2 dpdkvhostuser ports.
>> 3. Start VMs using the wiki command line
>> - check journalctl for possible errors. You can use
>>  journalctl  --since `date +%T --date="-10 minutes"`
>>   to see only last 10 minutes.
>> 4. Configure the guests IPs.
>> - Disable the Network Manager as described below in the mail.
>> 5. At this point you should be able to ping between guests.
>>
>> Please let me know if you have any problem until this point.
>> I'll be happy to help. Please point any special steps you made that
>> are not in the WIKI. The journalctl logs would also help.
>>
>> Does the ping between VMs work now?
>
> Yes, it works, too. I can ping the other vm inside a vm.
>
>  [root@dpdk-kvm ~]# ethtool -l eth0
>  Channel parameters for eth0:
>  Pre-set maximums:
>  RX: 0
>  TX: 0
>  Other:  0
>  Combined:   2
>  Current hardware settings:
>  RX: 0
>  TX: 0
>  Other:  0
>  Combined:   2
>
>  [root@dpdk-kvm ~]# ifconfig eth0
>  eth0: flags=4163  mtu 1500
>  inet 192.168.100.11  netmask 255.255.255.0  broadcast 
> 192.168.100.255
>  inet6 fe80::5054:ff:fe12:3459  prefixlen 64  scopeid 
> 0x20
>  ether 52:54:00:12:34:59  txqueuelen 1000  (Ethernet)
>  RX packets 56  bytes 5166 (5.0 KiB)
>  RX errors 0  dropped 0  overruns 0  frame 0
>  TX packets 84  bytes 8303 (8.1 KiB)
>  TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
>  [root@dpdk-kvm ~]# ping 192.168.100.10
>  PING 192.168.100.10 (192.168.100.10) 56(84) bytes of data.
>  64 bytes from 192.168.100.10: icmp_seq=1 ttl=64 time=0.213 ms
>  64 bytes from 192.168.100.10: icmp_seq=2 ttl=64 time=0.094 ms
>  64 bytes from 192.168.100.10: icmp_seq=3 ttl=64 time=0.246 ms
>  64 bytes from 192.168.100.10: icmp_seq=4 ttl=64 time=0.153 ms
>  64 bytes from 192.168.100.10: icmp_seq=5 ttl=64 time=0.104 ms
>  ^C
>>
>> If yes, please let me know and I'll go over MQ enabling.
>
> I'm just wondering why it doesn't work on your side.

 Hi,

 This is working also for me, but without enabling the MQ. (ethtool -L eth0 
 combined n (n>1) )
 The 

[dpdk-dev] [PATCH v5 resend 03/12] vhost: vring queue setup for multiple queue support

2015-09-22 Thread Yuanhan Liu
On Tue, Sep 22, 2015 at 11:47:34AM +0300, Marcel Apfelbaum wrote:
> On 09/22/2015 11:34 AM, Yuanhan Liu wrote:
> >On Tue, Sep 22, 2015 at 11:10:13AM +0300, Marcel Apfelbaum wrote:
> >>On 09/22/2015 10:31 AM, Yuanhan Liu wrote:
> >>>On Mon, Sep 21, 2015 at 08:56:30PM +0300, Marcel Apfelbaum wrote:
> >>[...]
> >
> >Hi,
> >
> >I made 4 cleanup patches a few weeks ago, including the patch
> >to define kickfd and callfd as int type, and they have already got
> >the ACK from Huawei Xie and Changchun Ouyang. It's likely that
> >they will be merged, hence I made this patchset based on them.
> >
> >This will also answer the question from your other email: can't
> >apply.
> 
> Hi,
> Thank you for the response, it makes sense now.
> 
> I have another issue, maybe you can help.
> I have some problems making it work with OVS/DPDK backend and virtio-net 
> driver in guest.
> 
> I am using a simple setup:
>  http://wiki.qemu.org/Features/vhost-user-ovs-dpdk
> that connects 2 VMs using OVS's dpdkvhostuser ports (regular virtio-net 
> driver in guest, not the PMD driver).
> 
> The setup worked fine with the prev DPDK MQ implementation (V4), however 
> on this one the traffic stops
> once I set queues=n in guest.
> >>>
> >>>Hi,
> >>>
> >>>Could you be more specific about that? It also would be helpful if you
> >>>could tell me the steps, besides those setup steps you mentioned in the
> >>>qemu wiki and this email, you did for testing.
> >>>
> >>
> >>Hi,
> >>Thank you for your help.
> >>
> >>I am sorry the wiki is not enough, I'll be happy to add all the missing 
> >>parts.
> >>In the meantime maybe you can tell me where the problem is, I also suggest 
> >>to
> >>post here the output of journalctl command.
> >>
> >>We only need a regular machine and we want traffic between 2 VMs. I'll try 
> >>to summarize the steps:
> >>
> >>1. Be sure you have enough hugepages enabled (2M pages are enough) and 
> >>mounted.
> >>2. Configure and start OVS following the wiki
> >>- we only want one bridge with 2 dpdkvhostuser ports.
> >>3. Start VMs using the wiki command line
> >>- check journalctl for possible errors. You can use
> >> journalctl  --since `date +%T --date="-10 minutes"`
> >>  to see only last 10 minutes.
> >>4. Configure the guests IPs.
> >>- Disable the Network Manager as described below in the mail.
> >>5. At this point you should be able to ping between guests.
> >>
> >>Please let me know if you have any problem until this point.
> >>I'll be happy to help. Please point any special steps you made that
> >>are not in the WIKI. The journalctl logs would also help.
> >>
> >>Does the ping between VMs work now?
> >
> >Yes, it works, too. I can ping the other vm inside a vm.
> >
> > [root@dpdk-kvm ~]# ethtool -l eth0
> > Channel parameters for eth0:
> > Pre-set maximums:
> > RX: 0
> > TX: 0
> > Other:  0
> > Combined:   2
> > Current hardware settings:
> > RX: 0
> > TX: 0
> > Other:  0
> > Combined:   2
> >
> > [root@dpdk-kvm ~]# ifconfig eth0
> > eth0: flags=4163  mtu 1500
> > inet 192.168.100.11  netmask 255.255.255.0  broadcast 
> > 192.168.100.255
> > inet6 fe80::5054:ff:fe12:3459  prefixlen 64  scopeid 0x20
> > ether 52:54:00:12:34:59  txqueuelen 1000  (Ethernet)
> > RX packets 56  bytes 5166 (5.0 KiB)
> > RX errors 0  dropped 0  overruns 0  frame 0
> > TX packets 84  bytes 8303 (8.1 KiB)
> > TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> >
> > [root@dpdk-kvm ~]# ping 192.168.100.10
> > PING 192.168.100.10 (192.168.100.10) 56(84) bytes of data.
> > 64 bytes from 192.168.100.10: icmp_seq=1 ttl=64 time=0.213 ms
> > 64 bytes from 192.168.100.10: icmp_seq=2 ttl=64 time=0.094 ms
> > 64 bytes from 192.168.100.10: icmp_seq=3 ttl=64 time=0.246 ms
> > 64 bytes from 192.168.100.10: icmp_seq=4 ttl=64 time=0.153 ms
> > 64 bytes from 192.168.100.10: icmp_seq=5 ttl=64 time=0.104 ms
> > ^C
> >>
> >>If yes, please let me know and I'll go over MQ enabling.
> >
> >I'm just wondering why it doesn't work on your side.
> 
> Hi,
> 
> This is working also for me, but without enabling the MQ. (ethtool -L eth0 
> combined n (n>1) )
> The problem starts when I am applying the patches and I enable MQ. (Need a 
> slightly different QEMU commandline)
> 
> >
> >>
> >>>I did some very rough testing based on your test guides, and I indeed found
> >>>an issue: the IP address assigned by "ifconfig" disappears soon in the
> >>>first few times and after about 2 or 3 times reset, it never changes.
> >>>
> >>>(well, I saw that quite a few times before while trying different QEMU
> >>>net devices. So, it might be a system configuration 

[dpdk-dev] [PATCH v5 resend 03/12] vhost: vring queue setup for multiple queue support

2015-09-22 Thread Yuanhan Liu
On Tue, Sep 22, 2015 at 11:10:13AM +0300, Marcel Apfelbaum wrote:
> On 09/22/2015 10:31 AM, Yuanhan Liu wrote:
> >On Mon, Sep 21, 2015 at 08:56:30PM +0300, Marcel Apfelbaum wrote:
> [...]
> >>>
> >>>Hi,
> >>>
> >>>I made 4 cleanup patches a few weeks ago, including the patch
> >>>to define kickfd and callfd as int type, and they have already got
> >>>the ACK from Huawei Xie and Changchun Ouyang. It's likely that
> >>>they will be merged, hence I made this patchset based on them.
> >>>
> >>>This will also answer the question from your other email: can't
> >>>apply.
> >>
> >>Hi,
> >>Thank you for the response, it makes sense now.
> >>
> >>I have another issue, maybe you can help.
> >>I have some problems making it work with OVS/DPDK backend and virtio-net 
> >>driver in guest.
> >>
> >>I am using a simple setup:
> >> http://wiki.qemu.org/Features/vhost-user-ovs-dpdk
> >>that connects 2 VMs using OVS's dpdkvhostuser ports (regular virtio-net 
> >>driver in guest, not the PMD driver).
> >>
> >>The setup worked fine with the prev DPDK MQ implementation (V4), however on 
> >>this one the traffic stops
> >>once I set queues=n in guest.
> >
> >Hi,
> >
> >Could you be more specific about that? It also would be helpful if you
> >could tell me the steps, besides those setup steps you mentioned in the
> >qemu wiki and this email, you did for testing.
> >
> 
> Hi,
> Thank you for your help.
> 
> I am sorry the wiki is not enough, I'll be happy to add all the missing parts.
> In the meantime maybe you can tell me where the problem is, I also suggest to
> post here the output of journalctl command.
> 
> We only need a regular machine and we want traffic between 2 VMs. I'll try to 
> summarize the steps:
> 
> 1. Be sure you have enough hugepages enabled (2M pages are enough) and 
> mounted.
> 2. Configure and start OVS following the wiki
>- we only want one bridge with 2 dpdkvhostuser ports.
> 3. Start VMs using the wiki command line
>- check journalctl for possible errors. You can use
> journalctl  --since `date +%T --date="-10 minutes"`
>  to see only last 10 minutes.
> 4. Configure the guests IPs.
>- Disable the Network Manager as described below in the mail.
> 5. At this point you should be able to ping between guests.
> 
> Please let me know if you have any problem until this point.
> I'll be happy to help. Please point any special steps you made that
> are not in the WIKI. The journalctl logs would also help.
> 
> Does the ping between VMs work now?

Yes, it works, too. I can ping the other vm inside a vm.

[root@dpdk-kvm ~]# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX: 0
TX: 0
Other:  0
Combined:   2
Current hardware settings:
RX: 0
TX: 0
Other:  0
Combined:   2

[root@dpdk-kvm ~]# ifconfig eth0
eth0: flags=4163  mtu 1500
inet 192.168.100.11  netmask 255.255.255.0  broadcast 
192.168.100.255
inet6 fe80::5054:ff:fe12:3459  prefixlen 64  scopeid 0x20
ether 52:54:00:12:34:59  txqueuelen 1000  (Ethernet)
RX packets 56  bytes 5166 (5.0 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 84  bytes 8303 (8.1 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@dpdk-kvm ~]# ping 192.168.100.10
PING 192.168.100.10 (192.168.100.10) 56(84) bytes of data.
64 bytes from 192.168.100.10: icmp_seq=1 ttl=64 time=0.213 ms
64 bytes from 192.168.100.10: icmp_seq=2 ttl=64 time=0.094 ms
64 bytes from 192.168.100.10: icmp_seq=3 ttl=64 time=0.246 ms
64 bytes from 192.168.100.10: icmp_seq=4 ttl=64 time=0.153 ms
64 bytes from 192.168.100.10: icmp_seq=5 ttl=64 time=0.104 ms
^C
> 
> If yes, please let me know and I'll go over MQ enabling.

I'm just wondering why it doesn't work on your side.

> 
> >I did some very rough testing based on your test guides, and I indeed found
> >an issue: the IP address assigned by "ifconfig" disappears soon in the
> >first few times and after about 2 or 3 times reset, it never changes.
> >
> >(well, I saw that quite a few times before while trying different QEMU
> >net devices. So, it might be a system configuration issue, or something
> >else?)
> >
> 
> You are right, this is a guest config issue, I think you should disable 
> NetworkManager

Yeah, I figured it out by myself, and it worked when I hardcoded it at
/etc/sysconfig/network-scripts/ifcfg-eth0.
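
A minimal static configuration for that file might look like the following
(illustrative only, using the usual Fedora/RHEL ifcfg keys and the guest
address shown earlier in the thread):

    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.100.11
    NETMASK=255.255.255.0
    NM_CONTROLLED=no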

> for static IP addresses. Please use only the virtio-net device.
> 
> You can try this:
> sudo systemctl stop NetworkManager
> sudo systemctl disable NetworkManager

Thanks for the info and tip!

> 
> >Besides that, it works, say, I can wget a big file from host.
> >
> 
> The target here is traffic between 2 VMs.
> We want to be able to ping (for example) between VMs when MQ > 1 is enabled
> on 

[dpdk-dev] [PATCH v5 resend 03/12] vhost: vring queue setup for multiple queue support

2015-09-22 Thread Yuanhan Liu
On Mon, Sep 21, 2015 at 08:56:30PM +0300, Marcel Apfelbaum wrote:
> On 09/21/2015 05:06 AM, Yuanhan Liu wrote:
> >On Sun, Sep 20, 2015 at 04:58:42PM +0300, Marcel Apfelbaum wrote:
> >>On 09/18/2015 06:10 PM, Yuanhan Liu wrote:
> >>>All queue pairs, including the default (the first) queue pair,
> >>>are allocated dynamically, when a vring_call message is received
> >>>first time for a specific queue pair.
> >>>
> >>>This is a refactor work for enabling vhost-user multiple queue;
> >>>it should not break anything as it does no functional changes:
> >>>we don't support mq set, so there is only one mq at max.
> >>>
> >>>This patch is based on Changchun's patch.
> >>>
> >>>Signed-off-by: Yuanhan Liu 
> >>>---
> >>>  lib/librte_vhost/rte_virtio_net.h |   3 +-
> >>>  lib/librte_vhost/vhost_user/virtio-net-user.c |  44 +-
> >>>  lib/librte_vhost/virtio-net.c | 121 
> >>> --
> >>>  3 files changed, 102 insertions(+), 66 deletions(-)
> >>>
> >>>diff --git a/lib/librte_vhost/rte_virtio_net.h 
> >>>b/lib/librte_vhost/rte_virtio_net.h
> >>>index e3a21e5..5dd6493 100644
> >>>--- a/lib/librte_vhost/rte_virtio_net.h
> >>>+++ b/lib/librte_vhost/rte_virtio_net.h
> >>>@@ -96,7 +96,7 @@ struct vhost_virtqueue {
> >>>   * Device structure contains all configuration information relating to 
> >>> the device.
> >>>   */
> >>>  struct virtio_net {
> >>>-  struct vhost_virtqueue  *virtqueue[VIRTIO_QNUM];/**< Contains 
> >>>all virtqueue information. */
> >>>+  struct vhost_virtqueue  *virtqueue[VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX];
> >>>/**< Contains all virtqueue information. */
> >>>   struct virtio_memory*mem;   /**< QEMU memory and memory 
> >>> region information. */
> >>>   uint64_tfeatures;   /**< Negotiated feature set. */
> >>>   uint64_tprotocol_features;  /**< Negotiated 
> >>> protocol feature set. */
> >>>@@ -104,6 +104,7 @@ struct virtio_net {
> >>>   uint32_tflags;  /**< Device flags. Only used to 
> >>> check if device is running on data core. */
> >>>  #define IF_NAME_SZ (PATH_MAX > IFNAMSIZ ? PATH_MAX : IFNAMSIZ)
> >>>   charifname[IF_NAME_SZ]; /**< Name of the tap 
> >>> device or socket path. */
> >>>+  uint32_tvirt_qp_nb; /**< number of queue pair we 
> >>>have allocated */
> >>>   void*priv;  /**< private context */
> >>>  } __rte_cache_aligned;
> >>>
> >>>diff --git a/lib/librte_vhost/vhost_user/virtio-net-user.c 
> >>>b/lib/librte_vhost/vhost_user/virtio-net-user.c
> >>>index 360254e..e83d279 100644
> >>>--- a/lib/librte_vhost/vhost_user/virtio-net-user.c
> >>>+++ b/lib/librte_vhost/vhost_user/virtio-net-user.c
> >>>@@ -206,25 +206,33 @@ err_mmap:
> >>>  }
> >>>
> >>
> >>Hi,
> >>
> >>>  static int
> >>>+vq_is_ready(struct vhost_virtqueue *vq)
> >>>+{
> >>>+  return vq && vq->desc   &&
> >>>+ vq->kickfd != -1 &&
> >>>+ vq->callfd != -1;
> >>
> >>  kickfd and callfd are unsigned
> >
> >Hi,
> >
> >I made 4 cleanup patches a few weeks ago, including the patch
> >to define kickfd and callfd as int type, and they have already got
> >the ACK from Huawei Xie and Changchun Ouyang. It's likely that
> >they will be merged, hence I made this patchset based on them.
> >
> >This will also answer the question from your other email: can't
> >apply.
> 
> Hi,
> Thank you for the response, it makes sense now.
> 
> I have another issue, maybe you can help.
> I have some problems making it work with OVS/DPDK backend and virtio-net 
> driver in guest.
> 
> I am using a simple setup:
> http://wiki.qemu.org/Features/vhost-user-ovs-dpdk
> that connects 2 VMs using OVS's dpdkvhostuser ports (regular virtio-net 
> driver in guest, not the PMD driver).
> 
> The setup worked fine with the prev DPDK MQ implementation (V4), however on 
> this one the traffic stops
> once I set queues=n in guest.

Hi,

Could you be more specific about that? It also would be helpful if you
could tell me the steps, besides those setup steps you mentioned in the
qemu wiki and this email, you did for testing.

I did some very rough testing based on your test guides, and I indeed found
an issue: the IP address assigned by "ifconfig" soon disappears during the
first few tries, and after about 2 or 3 resets it never changes.

(well, I saw that quite a few times before while trying different QEMU
net devices. So, it might be a system configuration issue, or something
else?)

Besides that, it works, say, I can wget a big file from host.

--yliu

> (virtio-net uses only one queue when the guest starts, even if QEMU has 
> multiple queues).
> 
> Two steps are required in order to enable multiple queues in OVS.
> 1. Apply the following patch:
>  - https://www.mail-archive.com/dev@openvswitch.org/msg49198.html
>  - It needs merging (I think)
> 2. Configure ovs for multiqueue:
>  - ovs-vsctl set Open_vSwitch . 

[dpdk-dev] [PATCH v5 resend 03/12] vhost: vring queue setup for multiple queue support

2015-09-22 Thread Marcel Apfelbaum
On 09/22/2015 12:21 PM, Yuanhan Liu wrote:
> On Tue, Sep 22, 2015 at 11:47:34AM +0300, Marcel Apfelbaum wrote:
>> On 09/22/2015 11:34 AM, Yuanhan Liu wrote:
>>> On Tue, Sep 22, 2015 at 11:10:13AM +0300, Marcel Apfelbaum wrote:
 On 09/22/2015 10:31 AM, Yuanhan Liu wrote:
> On Mon, Sep 21, 2015 at 08:56:30PM +0300, Marcel Apfelbaum wrote:
 [...]
>>>
>>> Hi,
>>>
>>> I made 4 cleanup patches a few weeks ago, including the patch
>>> to define kickfd and callfd as int type, and they have already got
>>> the ACK from Huawei Xie and Changchun Ouyang. It's likely that
>>> they will be merged, hence I made this patchset based on them.
>>>
>>> This will also answer the question from your other email: can't
>>> apply.
>>
>> Hi,
>> Thank you for the response, it makes sense now.
>>
>> I have another issue, maybe you can help.
>> I have some problems making it work with OVS/DPDK backend and virtio-net 
>> driver in guest.
>>
>> I am using a simple setup:
>>  http://wiki.qemu.org/Features/vhost-user-ovs-dpdk
>> that connects 2 VMs using OVS's dpdkvhostuser ports (regular virtio-net 
>> driver in guest, not the PMD driver).
>>
>> The setup worked fine with the prev DPDK MQ implementation (V4), however 
>> on this one the traffic stops
>> once I set queues=n in guest.
>
> Hi,
>
> Could you be more specific about that? It also would be helpful if you
> could tell me the steps, besides those setup steps you mentioned in the
> qemu wiki and this email, you did for testing.
>

 Hi,
 Thank you for your help.

 I am sorry the wiki is not enough, I'll be happy to add all the missing 
 parts.
 In the meantime maybe you can tell me where the problem is, I also suggest 
 to
 post here the output of journalctl command.

 We only need a regular machine and we want traffic between 2 VMs. I'll try 
 to summarize the steps:

 1. Be sure you have enough hugepages enabled (2M pages are enough) and 
 mounted.
 2. Configure and start OVS following the wiki
 - we only want one bridge with 2 dpdkvhostuser ports.
 3. Start VMs using the wiki command line
 - check journalctl for possible errors. You can use
  journalctl  --since `date +%T --date="-10 minutes"`
   to see only last 10 minutes.
 4. Configure the guests IPs.
 - Disable the Network Manager as described below in the mail.
 5. At this point you should be able to ping between guests.

 Please let me know if you have any problem until this point.
 I'll be happy to help. Please point any special steps you made that
 are not in the WIKI. The journalctl logs would also help.

 Does the ping between VMs work now?
>>>
>>> Yes, it works, too. I can ping the other vm inside a vm.
>>>
>>>  [root@dpdk-kvm ~]# ethtool -l eth0
>>>  Channel parameters for eth0:
>>>  Pre-set maximums:
>>>  RX: 0
>>>  TX: 0
>>>  Other:  0
>>>  Combined:   2
>>>  Current hardware settings:
>>>  RX: 0
>>>  TX: 0
>>>  Other:  0
>>>  Combined:   2
>>>
>>>  [root@dpdk-kvm ~]# ifconfig eth0
>>>  eth0: flags=4163  mtu 1500
>>>  inet 192.168.100.11  netmask 255.255.255.0  broadcast 
>>> 192.168.100.255
>>>  inet6 fe80::5054:ff:fe12:3459  prefixlen 64  scopeid 0x20
>>>  ether 52:54:00:12:34:59  txqueuelen 1000  (Ethernet)
>>>  RX packets 56  bytes 5166 (5.0 KiB)
>>>  RX errors 0  dropped 0  overruns 0  frame 0
>>>  TX packets 84  bytes 8303 (8.1 KiB)
>>>  TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>>>
>>>  [root@dpdk-kvm ~]# ping 192.168.100.10
>>>  PING 192.168.100.10 (192.168.100.10) 56(84) bytes of data.
>>>  64 bytes from 192.168.100.10: icmp_seq=1 ttl=64 time=0.213 ms
>>>  64 bytes from 192.168.100.10: icmp_seq=2 ttl=64 time=0.094 ms
>>>  64 bytes from 192.168.100.10: icmp_seq=3 ttl=64 time=0.246 ms
>>>  64 bytes from 192.168.100.10: icmp_seq=4 ttl=64 time=0.153 ms
>>>  64 bytes from 192.168.100.10: icmp_seq=5 ttl=64 time=0.104 ms
>>>  ^C

 If yes, please let me know and I'll go over MQ enabling.
>>>
>>> I'm just wondering why it doesn't work on your side.
>>
>> Hi,
>>
>> This is working also for me, but without enabling the MQ. (ethtool -L eth0 
>> combined n (n>1) )
>> The problem starts when I am applying the patches and I enable MQ. (Need a 
>> slightly different QEMU commandline)
>>
>>>

> I did some very rough testing based on your test guides, and I indeed found
> an issue: the IP address assigned by "ifconfig" disappears soon in the
> first few times and after about 2 or 3 times reset, it never changes.

[dpdk-dev] [PATCH v5 resend 03/12] vhost: vring queue setup for multiple queue support

2015-09-22 Thread Marcel Apfelbaum
On 09/22/2015 11:34 AM, Yuanhan Liu wrote:
> On Tue, Sep 22, 2015 at 11:10:13AM +0300, Marcel Apfelbaum wrote:
>> On 09/22/2015 10:31 AM, Yuanhan Liu wrote:
>>> On Mon, Sep 21, 2015 at 08:56:30PM +0300, Marcel Apfelbaum wrote:
>> [...]
>
> Hi,
>
> I made 4 cleanup patches a few weeks ago, including the patch
> to define kickfd and callfd as int type, and they have already got
> the ACK from Huawei Xie and Changchun Ouyang. It's likely that
> they will be merged, hence I made this patchset based on them.
>
> This will also answer the question from your other email: can't
> apply.

 Hi,
 Thank you for the response, it makes sense now.

 I have another issue, maybe you can help.
 I have some problems making it work with OVS/DPDK backend and virtio-net 
 driver in guest.

 I am using a simple setup:
  http://wiki.qemu.org/Features/vhost-user-ovs-dpdk
 that connects 2 VMs using OVS's dpdkvhostuser ports (regular virtio-net 
 driver in guest, not the PMD driver).

 The setup worked fine with the prev DPDK MQ implementation (V4), however 
 on this one the traffic stops
 once I set queues=n in guest.
>>>
>>> Hi,
>>>
>>> Could you be more specific about that? It also would be helpful if you
>>> could tell me the steps, besides those setup steps you mentioned in the
>>> qemu wiki and this email, you did for testing.
>>>
>>
>> Hi,
>> Thank you for your help.
>>
>> I am sorry the wiki is not enough, I'll be happy to add all the missing 
>> parts.
>> In the meantime maybe you can tell me where the problem is, I also suggest to
>> post here the output of journalctl command.
>>
>> We only need a regular machine and we want traffic between 2 VMs. I'll try 
>> to summarize the steps:
>>
>> 1. Be sure you have enough hugepages enabled (2M pages are enough) and 
>> mounted.
>> 2. Configure and start OVS following the wiki
>> - we only want one bridge with 2 dpdkvhostuser ports.
>> 3. Start VMs using the wiki command line
>> - check journalctl for possible errors. You can use
>>  journalctl  --since `date +%T --date="-10 minutes"`
>>   to see only last 10 minutes.
>> 4. Configure the guests IPs.
>> - Disable the Network Manager as described below in the mail.
>> 5. At this point you should be able to ping between guests.
>>
>> Please let me know if you have any problem until this point.
>> I'll be happy to help. Please point any special steps you made that
>> are not in the WIKI. The journalctl logs would also help.
>>
>> Does the ping between VMs work now?
>
> Yes, it works, too. I can ping the other vm inside a vm.
>
>  [root@dpdk-kvm ~]# ethtool -l eth0
>  Channel parameters for eth0:
>  Pre-set maximums:
>  RX: 0
>  TX: 0
>  Other:  0
>  Combined:   2
>  Current hardware settings:
>  RX: 0
>  TX: 0
>  Other:  0
>  Combined:   2
>
>  [root@dpdk-kvm ~]# ifconfig eth0
>  eth0: flags=4163  mtu 1500
>  inet 192.168.100.11  netmask 255.255.255.0  broadcast 
> 192.168.100.255
>  inet6 fe80::5054:ff:fe12:3459  prefixlen 64  scopeid 0x20
>  ether 52:54:00:12:34:59  txqueuelen 1000  (Ethernet)
>  RX packets 56  bytes 5166 (5.0 KiB)
>  RX errors 0  dropped 0  overruns 0  frame 0
>  TX packets 84  bytes 8303 (8.1 KiB)
>  TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
>  [root@dpdk-kvm ~]# ping 192.168.100.10
>  PING 192.168.100.10 (192.168.100.10) 56(84) bytes of data.
>  64 bytes from 192.168.100.10: icmp_seq=1 ttl=64 time=0.213 ms
>  64 bytes from 192.168.100.10: icmp_seq=2 ttl=64 time=0.094 ms
>  64 bytes from 192.168.100.10: icmp_seq=3 ttl=64 time=0.246 ms
>  64 bytes from 192.168.100.10: icmp_seq=4 ttl=64 time=0.153 ms
>  64 bytes from 192.168.100.10: icmp_seq=5 ttl=64 time=0.104 ms
>  ^C
>>
>> If yes, please let me know and I'll go over MQ enabling.
>
> I'm just wondering why it doesn't work on your side.

Hi,

This is also working for me, but without enabling MQ (ethtool -L eth0
combined n, with n > 1).
The problem starts when I apply the patches and enable MQ. (This needs a
slightly different QEMU command line; see the sketch below.)
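
For reference, the MQ variant of the wiki's QEMU command line is roughly of
this shape (a sketch only: the socket path and ids follow the wiki, queues=2
is an example value, vectors should be 2*queues+2, and it requires a QEMU
with vhost-user multiqueue support):

    -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user-1 \
    -netdev type=vhost-user,id=net0,chardev=char0,vhostforce,queues=2 \
    -device virtio-net-pci,netdev=net0,mq=on,vectors=6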

>
>>
>>> I did some very rough testing based on your test guides, and I indeed found
>>> an issue: the IP address assigned by "ifconfig" disappears soon in the
>>> first few times and after about 2 or 3 times reset, it never changes.
>>>
>>> (well, I saw that quite a few times before while trying different QEMU
>>> net devices. So, it might be a system configuration issue, or something
>>> else?)
>>>
>>
>> You are right, this is a guest config issue, I think you should disable 
>> NetworkManager
>
> Yeah, I figured it out by myself, and it worked when I hardcoded it at
> 

[dpdk-dev] [PATCH v5 resend 03/12] vhost: vring queue setup for multiple queue support

2015-09-22 Thread Marcel Apfelbaum
On 09/22/2015 10:31 AM, Yuanhan Liu wrote:
> On Mon, Sep 21, 2015 at 08:56:30PM +0300, Marcel Apfelbaum wrote:
[...]
>>>
>>> Hi,
>>>
>>> I made 4 cleanup patches a few weeks ago, including the patch
>>> to define kickfd and callfd as int type, and they have already got
>>> the ACK from Huawei Xie and Changchun Ouyang. It's likely that
>>> they will be merged, hence I made this patchset based on them.
>>>
>>> This will also answer the question from your other email: can't
>>> apply.
>>
>> Hi,
>> Thank you for the response, it makes sense now.
>>
>> I have another issue, maybe you can help.
>> I have some problems making it work with OVS/DPDK backend and virtio-net 
>> driver in guest.
>>
>> I am using a simple setup:
>>  http://wiki.qemu.org/Features/vhost-user-ovs-dpdk
>> that connects 2 VMs using OVS's dpdkvhostuser ports (regular virtio-net 
>> driver in guest, not the PMD driver).
>>
>> The setup worked fine with the prev DPDK MQ implementation (V4), however on 
>> this one the traffic stops
>> once I set queues=n in guest.
>
> Hi,
>
> Could you be more specific about that? It also would be helpful if you
> could tell me the steps, besides those setup steps you mentioned in the
> qemu wiki and this email, you did for testing.
>

Hi,
Thank you for your help.

I am sorry the wiki is not enough; I'll be happy to add all the missing parts.
In the meantime maybe you can tell me where the problem is; I also suggest
posting the output of the journalctl command here.

We only need a regular machine and we want traffic between 2 VMs. I'll try to 
summarize the steps:

1. Be sure you have enough hugepages enabled (2M pages are enough) and mounted.
2. Configure and start OVS following the wiki
- we only want one bridge with 2 dpdkvhostuser ports.
3. Start VMs using the wiki command line
- check journalctl for possible errors. You can use
 journalctl  --since `date +%T --date="-10 minutes"`
  to see only last 10 minutes.
4. Configure the guests IPs.
- Disable the Network Manager as described below in the mail.
5. At this point you should be able to ping between guests.
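
For steps 1-3 above, the OVS side boils down to something like the following
(a rough sketch only; the bridge/port names and the hugepage count are
illustrative, and the rest follows the wiki's defaults):

    # 1. hugepages (2M pages are enough), mounted for OVS/DPDK and QEMU
    echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    mount -t hugetlbfs none /dev/hugepages

    # 2. one bridge with two dpdkvhostuser ports
    ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
    ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser
    ovs-vsctl add-port br0 vhost-user-2 -- set Interface vhost-user-2 type=dpdkvhostuser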

Please let me know if you have any problem until this point.
I'll be happy to help. Please point out any special steps you made that
are not in the wiki. The journalctl logs would also help.

Does the ping between VMs work now?

If yes, please let me know and I'll go over MQ enabling.

> I did some very rough testing based on your test guides, and I indeed found
> an issue: the IP address assigned by "ifconfig" disappears soon in the
> first few times and after about 2 or 3 times reset, it never changes.
>
> (well, I saw that quite a few times before while trying different QEMU
> net devices. So, it might be a system configuration issue, or something
> else?)
>

You are right, this is a guest config issue, I think you should disable 
NetworkManager
for static IP addresses. Please use only the virtio-net device.

You can try this:
sudo systemctl stop NetworkManager
sudo systemctl disable NetworkManager


> Besides that, it works, say, I can wget a big file from host.
>

The target here is traffic between 2 VMs.
We want to be able to ping (for example) between VMs when MQ > 1 is enabled on
both guests:
- ethtool -L eth0 combined <n>

Thank you again for the involvement, this is very much appreciated!
Marcel

>   --yliu
>
>> (virtio-net uses only one queue when the guest starts, even if QEMU has 
>> multiple queues).
>>
>> Two steps are required in order to enable multiple queues in OVS.
>> 1. Apply the following patch:
>>   - https://www.mail-archive.com/dev@openvswitch.org/msg49198.html
>>   - It needs merging (I think)
>> 2. Configure ovs for multiqueue:
>>   - ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=<number of queues, same as QEMU>
>>   - ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<cpu mask for the queues, say 0xff00>
>> 3. In order to set queues=n in guest use:
>>   - ethtool -L eth0 combined <n>
>>
>> Any pointers/ideas would be appreciated.
>>
>> Thank you,
>> Marcel
>>
>>
>>
>>>
[...]


[dpdk-dev] [PATCH v5 resend 03/12] vhost: vring queue setup for multiple queue support

2015-09-21 Thread Marcel Apfelbaum
On 09/21/2015 05:06 AM, Yuanhan Liu wrote:
> On Sun, Sep 20, 2015 at 04:58:42PM +0300, Marcel Apfelbaum wrote:
>> On 09/18/2015 06:10 PM, Yuanhan Liu wrote:
>>> All queue pairs, including the default (the first) queue pair,
>>> are allocated dynamically, when a vring_call message is received
>>> first time for a specific queue pair.
>>>
>>> This is a refactor work for enabling vhost-user multiple queue;
>>> it should not break anything as it does no functional changes:
>>> we don't support mq set, so there is only one mq at max.
>>>
>>> This patch is based on Changchun's patch.
>>>
>>> Signed-off-by: Yuanhan Liu 
>>> ---
>>>   lib/librte_vhost/rte_virtio_net.h |   3 +-
>>>   lib/librte_vhost/vhost_user/virtio-net-user.c |  44 +-
>>>   lib/librte_vhost/virtio-net.c | 121 
>>> --
>>>   3 files changed, 102 insertions(+), 66 deletions(-)
>>>
>>> diff --git a/lib/librte_vhost/rte_virtio_net.h 
>>> b/lib/librte_vhost/rte_virtio_net.h
>>> index e3a21e5..5dd6493 100644
>>> --- a/lib/librte_vhost/rte_virtio_net.h
>>> +++ b/lib/librte_vhost/rte_virtio_net.h
>>> @@ -96,7 +96,7 @@ struct vhost_virtqueue {
>>>* Device structure contains all configuration information relating to 
>>> the device.
>>>*/
>>>   struct virtio_net {
>>> -   struct vhost_virtqueue  *virtqueue[VIRTIO_QNUM];/**< Contains 
>>> all virtqueue information. */
>>> +   struct vhost_virtqueue  *virtqueue[VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX];
>>> /**< Contains all virtqueue information. */
>>> struct virtio_memory*mem;   /**< QEMU memory and memory 
>>> region information. */
>>> uint64_tfeatures;   /**< Negotiated feature set. */
>>> uint64_tprotocol_features;  /**< Negotiated 
>>> protocol feature set. */
>>> @@ -104,6 +104,7 @@ struct virtio_net {
>>> uint32_tflags;  /**< Device flags. Only used to 
>>> check if device is running on data core. */
>>>   #define IF_NAME_SZ (PATH_MAX > IFNAMSIZ ? PATH_MAX : IFNAMSIZ)
>>> charifname[IF_NAME_SZ]; /**< Name of the tap 
>>> device or socket path. */
>>> +   uint32_tvirt_qp_nb; /**< number of queue pair we 
>>> have allocated */
>>> void*priv;  /**< private context */
>>>   } __rte_cache_aligned;
>>>
>>> diff --git a/lib/librte_vhost/vhost_user/virtio-net-user.c 
>>> b/lib/librte_vhost/vhost_user/virtio-net-user.c
>>> index 360254e..e83d279 100644
>>> --- a/lib/librte_vhost/vhost_user/virtio-net-user.c
>>> +++ b/lib/librte_vhost/vhost_user/virtio-net-user.c
>>> @@ -206,25 +206,33 @@ err_mmap:
>>>   }
>>>
>>
>> Hi,
>>
>>>   static int
>>> +vq_is_ready(struct vhost_virtqueue *vq)
>>> +{
>>> +   return vq && vq->desc   &&
>>> +  vq->kickfd != -1 &&
>>> +  vq->callfd != -1;
>>
>>   kickfd and callfd are unsigned
>
> Hi,
>
> I made 4 cleanup patches a few weeks ago, including the patch
> to define kickfd and callfd as int type, and they have already got
> the ACK from Huawei Xie and Changchun Ouyang. It's likely that
> they will be merged, hence I made this patchset based on them.
>
> This will also answer the question from your other email: can't
> apply.

Hi,
Thank you for the response, it makes sense now.

I have another issue, maybe you can help.
I have some problems making it work with OVS/DPDK backend and virtio-net driver 
in guest.

I am using a simple setup:
 http://wiki.qemu.org/Features/vhost-user-ovs-dpdk
that connects 2 VMs using OVS's dpdkvhostuser ports (regular virtio-net driver 
in guest, not the PMD driver).

The setup worked fine with the prev DPDK MQ implementation (V4), however on 
this one the traffic stops
once I set queues=n in guest. (virtio-net uses only one queue when the guest 
starts, even if QEMU has multiple queues).

Two steps are required in order to enable multiple queues in OVS (plus one in the guest):
1. Apply the following patch:
  - https://www.mail-archive.com/dev@openvswitch.org/msg49198.html
  - It needs merging (I think)
2. Configure ovs for multiqueue:
  - ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=<number of queues, same as QEMU>
  - ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<cpu mask for the queues, say 0xff00>
3. In order to set queues=n in guest use:
  - ethtool -L eth0 combined <n>
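
Spelled out with example values (illustrative only: 2 queues to match a
queues=2 QEMU setting, and a pmd-cpu-mask of 0xff00 as mentioned above):

    # host side: have OVS poll 2 rx queues per port, on the PMD cores in 0xff00
    ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=2
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xff00

    # guest side: actually enable both queue pairs in the virtio-net driver
    ethtool -L eth0 combined 2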

Any pointers/ideas would be appreciated.

Thank you,
Marcel



>
> Sorry for not pointing it out, as I assume Thomas(cc'ed) will apply
> them soon. And thanks for the review, anyway.
>
>   --yliu
>>
>>> +}
>>> +
>>> +static int
>>>   virtio_is_ready(struct virtio_net *dev)
>>>   {
>>> struct vhost_virtqueue *rvq, *tvq;
>>> +   uint32_t i;
>>>
>>> -   /* mq support in future.*/
>>> -   rvq = dev->virtqueue[VIRTIO_RXQ];
>>> -   tvq = dev->virtqueue[VIRTIO_TXQ];
>>> -   if (rvq && tvq && rvq->desc && tvq->desc &&
>>> -   (rvq->kickfd != -1) &&
>>> -   (rvq->callfd != -1) &&
>>> -   (tvq->kickfd != -1) &&
>>> -   (tvq->callfd != -1)) {
>>> - 

[dpdk-dev] [PATCH v5 resend 03/12] vhost: vring queue setup for multiple queue support

2015-09-21 Thread Michael S. Tsirkin
On Sun, Sep 20, 2015 at 04:58:42PM +0300, Marcel Apfelbaum wrote:
> On 09/18/2015 06:10 PM, Yuanhan Liu wrote:
> >All queue pairs, including the default (the first) queue pair,
> >are allocated dynamically, when a vring_call message is received
> >first time for a specific queue pair.
> >
> >This is a refactor work for enabling vhost-user multiple queue;
> >it should not break anything as it does no functional changes:
> >we don't support mq set, so there is only one mq at max.
> >
> >This patch is based on Changchun's patch.
> >
> >Signed-off-by: Yuanhan Liu 
> >---
> >  lib/librte_vhost/rte_virtio_net.h |   3 +-
> >  lib/librte_vhost/vhost_user/virtio-net-user.c |  44 +-
> >  lib/librte_vhost/virtio-net.c | 121 
> > --
> >  3 files changed, 102 insertions(+), 66 deletions(-)
> >
> >diff --git a/lib/librte_vhost/rte_virtio_net.h 
> >b/lib/librte_vhost/rte_virtio_net.h
> >index e3a21e5..5dd6493 100644
> >--- a/lib/librte_vhost/rte_virtio_net.h
> >+++ b/lib/librte_vhost/rte_virtio_net.h
> >@@ -96,7 +96,7 @@ struct vhost_virtqueue {
> >   * Device structure contains all configuration information relating to the 
> > device.
> >   */
> >  struct virtio_net {
> >-struct vhost_virtqueue  *virtqueue[VIRTIO_QNUM];/**< Contains 
> >all virtqueue information. */
> >+struct vhost_virtqueue  *virtqueue[VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX];
> >/**< Contains all virtqueue information. */
> > struct virtio_memory*mem;   /**< QEMU memory and memory 
> > region information. */
> > uint64_tfeatures;   /**< Negotiated feature set. */
> > uint64_tprotocol_features;  /**< Negotiated 
> > protocol feature set. */
> >@@ -104,6 +104,7 @@ struct virtio_net {
> > uint32_tflags;  /**< Device flags. Only used to 
> > check if device is running on data core. */
> >  #define IF_NAME_SZ (PATH_MAX > IFNAMSIZ ? PATH_MAX : IFNAMSIZ)
> > charifname[IF_NAME_SZ]; /**< Name of the tap 
> > device or socket path. */
> >+uint32_tvirt_qp_nb; /**< number of queue pair we 
> >have allocated */
> > void*priv;  /**< private context */
> >  } __rte_cache_aligned;
> >
> >diff --git a/lib/librte_vhost/vhost_user/virtio-net-user.c 
> >b/lib/librte_vhost/vhost_user/virtio-net-user.c
> >index 360254e..e83d279 100644
> >--- a/lib/librte_vhost/vhost_user/virtio-net-user.c
> >+++ b/lib/librte_vhost/vhost_user/virtio-net-user.c
> >@@ -206,25 +206,33 @@ err_mmap:
> >  }
> >
> 
> Hi,
> 
> >  static int
> >+vq_is_ready(struct vhost_virtqueue *vq)
> >+{
> >+return vq && vq->desc   &&
> >+   vq->kickfd != -1 &&
> >+   vq->callfd != -1;
> 
>  kickfd and callfd are unsigned

That's probably a bug.
fds are signed, and -1 is what qemu uses to mean "nop".
This comparison will convert -1 to unsigned int so it'll work.
The >= ones below won't work.

I think fd types need to be fixed.
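
A tiny standalone illustration of that point (not DPDK code, just the
signed/unsigned comparison behaviour):

    #include <stdio.h>

    int main(void)
    {
            unsigned int kickfd = -1;  /* "-1" stored in an unsigned field wraps around */

            /* Still works: the -1 literal is converted to unsigned too, so this
             * prints "false", i.e. the sentinel is correctly recognised. */
            printf("kickfd != -1 : %s\n", kickfd != -1 ? "true" : "false");

            /* Cannot work: an unsigned value is always >= 0, so this prints "true"
             * even though the fd holds the "-1" sentinel. */
            printf("kickfd >= 0  : %s\n", kickfd >= 0 ? "true" : "false");

            return 0;
    }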


> >+}
> >+
> >+static int
> >  virtio_is_ready(struct virtio_net *dev)
> >  {
> > struct vhost_virtqueue *rvq, *tvq;
> >+uint32_t i;
> >
> >-/* mq support in future.*/
> >-rvq = dev->virtqueue[VIRTIO_RXQ];
> >-tvq = dev->virtqueue[VIRTIO_TXQ];
> >-if (rvq && tvq && rvq->desc && tvq->desc &&
> >-(rvq->kickfd != -1) &&
> >-(rvq->callfd != -1) &&
> >-(tvq->kickfd != -1) &&
> >-(tvq->callfd != -1)) {
> >-RTE_LOG(INFO, VHOST_CONFIG,
> >-"virtio is now ready for processing.\n");
> >-return 1;
> >+for (i = 0; i < dev->virt_qp_nb; i++) {
> >+rvq = dev->virtqueue[i * VIRTIO_QNUM + VIRTIO_RXQ];
> >+tvq = dev->virtqueue[i * VIRTIO_QNUM + VIRTIO_TXQ];
> >+
> >+if (!vq_is_ready(rvq) || !vq_is_ready(tvq)) {
> >+RTE_LOG(INFO, VHOST_CONFIG,
> >+"virtio is not ready for processing.\n");
> >+return 0;
> >+}
> > }
> >+
> > RTE_LOG(INFO, VHOST_CONFIG,
> >-"virtio isn't ready for processing.\n");
> >-return 0;
> >+"virtio is now ready for processing.\n");
> >+return 1;
> >  }
> >
> >  void
> >@@ -290,13 +298,9 @@ user_get_vring_base(struct vhost_device_ctx ctx,
> >  * sent and only sent in vhost_vring_stop.
> >  * TODO: cleanup the vring, it isn't usable since here.
> >  */
> >-if ((dev->virtqueue[VIRTIO_RXQ]->kickfd) >= 0) {
> >-close(dev->virtqueue[VIRTIO_RXQ]->kickfd);
> >-dev->virtqueue[VIRTIO_RXQ]->kickfd = -1;
> >-}
> >-if ((dev->virtqueue[VIRTIO_TXQ]->kickfd) >= 0) {
> >-close(dev->virtqueue[VIRTIO_TXQ]->kickfd);
> >-dev->virtqueue[VIRTIO_TXQ]->kickfd = -1;
> >+if ((dev->virtqueue[state->index]->kickfd) >= 0) {
> 
> always >= 0
> 
> >+close(dev->virtqueue[state->index]->kickfd);

[dpdk-dev] [PATCH v5 resend 03/12] vhost: vring queue setup for multiple queue support

2015-09-21 Thread Yuanhan Liu
On Sun, Sep 20, 2015 at 04:58:42PM +0300, Marcel Apfelbaum wrote:
> On 09/18/2015 06:10 PM, Yuanhan Liu wrote:
> >All queue pairs, including the default (the first) queue pair,
> >are allocated dynamically, when a vring_call message is received
> >first time for a specific queue pair.
> >
> >This is a refactor work for enabling vhost-user multiple queue;
> >it should not break anything as it does no functional changes:
> >we don't support mq set, so there is only one mq at max.
> >
> >This patch is based on Changchun's patch.
> >
> >Signed-off-by: Yuanhan Liu 
> >---
> >  lib/librte_vhost/rte_virtio_net.h |   3 +-
> >  lib/librte_vhost/vhost_user/virtio-net-user.c |  44 +-
> >  lib/librte_vhost/virtio-net.c | 121 
> > --
> >  3 files changed, 102 insertions(+), 66 deletions(-)
> >
> >diff --git a/lib/librte_vhost/rte_virtio_net.h 
> >b/lib/librte_vhost/rte_virtio_net.h
> >index e3a21e5..5dd6493 100644
> >--- a/lib/librte_vhost/rte_virtio_net.h
> >+++ b/lib/librte_vhost/rte_virtio_net.h
> >@@ -96,7 +96,7 @@ struct vhost_virtqueue {
> >   * Device structure contains all configuration information relating to the 
> > device.
> >   */
> >  struct virtio_net {
> >-struct vhost_virtqueue  *virtqueue[VIRTIO_QNUM];/**< Contains 
> >all virtqueue information. */
> >+struct vhost_virtqueue  *virtqueue[VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX];
> >/**< Contains all virtqueue information. */
> > struct virtio_memory*mem;   /**< QEMU memory and memory 
> > region information. */
> > uint64_tfeatures;   /**< Negotiated feature set. */
> > uint64_tprotocol_features;  /**< Negotiated 
> > protocol feature set. */
> >@@ -104,6 +104,7 @@ struct virtio_net {
> > uint32_tflags;  /**< Device flags. Only used to 
> > check if device is running on data core. */
> >  #define IF_NAME_SZ (PATH_MAX > IFNAMSIZ ? PATH_MAX : IFNAMSIZ)
> > charifname[IF_NAME_SZ]; /**< Name of the tap 
> > device or socket path. */
> >+uint32_tvirt_qp_nb; /**< number of queue pair we 
> >have allocated */
> > void*priv;  /**< private context */
> >  } __rte_cache_aligned;
> >
> >diff --git a/lib/librte_vhost/vhost_user/virtio-net-user.c 
> >b/lib/librte_vhost/vhost_user/virtio-net-user.c
> >index 360254e..e83d279 100644
> >--- a/lib/librte_vhost/vhost_user/virtio-net-user.c
> >+++ b/lib/librte_vhost/vhost_user/virtio-net-user.c
> >@@ -206,25 +206,33 @@ err_mmap:
> >  }
> >
> 
> Hi,
> 
> >  static int
> >+vq_is_ready(struct vhost_virtqueue *vq)
> >+{
> >+return vq && vq->desc   &&
> >+   vq->kickfd != -1 &&
> >+   vq->callfd != -1;
> 
>  kickfd and callfd are unsigned

Hi,

I made 4 cleanup patches a few weeks ago, including the patch
to define kickfd and callfd as int type, and they have already got
the ACK from Huawei Xie and Changchun Ouyang. It's likely that
they will be merged, hence I made this patchset based on them.

This will also answer the question from your other email: can't
apply.

Sorry for not pointing it out, as I assume Thomas(cc'ed) will apply
them soon. And thanks for the review, anyway.

--yliu
> 
> >+}
> >+
> >+static int
> >  virtio_is_ready(struct virtio_net *dev)
> >  {
> > struct vhost_virtqueue *rvq, *tvq;
> >+uint32_t i;
> >
> >-/* mq support in future.*/
> >-rvq = dev->virtqueue[VIRTIO_RXQ];
> >-tvq = dev->virtqueue[VIRTIO_TXQ];
> >-if (rvq && tvq && rvq->desc && tvq->desc &&
> >-(rvq->kickfd != -1) &&
> >-(rvq->callfd != -1) &&
> >-(tvq->kickfd != -1) &&
> >-(tvq->callfd != -1)) {
> >-RTE_LOG(INFO, VHOST_CONFIG,
> >-"virtio is now ready for processing.\n");
> >-return 1;
> >+for (i = 0; i < dev->virt_qp_nb; i++) {
> >+rvq = dev->virtqueue[i * VIRTIO_QNUM + VIRTIO_RXQ];
> >+tvq = dev->virtqueue[i * VIRTIO_QNUM + VIRTIO_TXQ];
> >+
> >+if (!vq_is_ready(rvq) || !vq_is_ready(tvq)) {
> >+RTE_LOG(INFO, VHOST_CONFIG,
> >+"virtio is not ready for processing.\n");
> >+return 0;
> >+}
> > }
> >+
> > RTE_LOG(INFO, VHOST_CONFIG,
> >-"virtio isn't ready for processing.\n");
> >-return 0;
> >+"virtio is now ready for processing.\n");
> >+return 1;
> >  }
> >
> >  void
> >@@ -290,13 +298,9 @@ user_get_vring_base(struct vhost_device_ctx ctx,
> >  * sent and only sent in vhost_vring_stop.
> >  * TODO: cleanup the vring, it isn't usable since here.
> >  */
> >-if ((dev->virtqueue[VIRTIO_RXQ]->kickfd) >= 0) {
> >-close(dev->virtqueue[VIRTIO_RXQ]->kickfd);
> >-dev->virtqueue[VIRTIO_RXQ]->kickfd = -1;
> >-}
> >-if 

[dpdk-dev] [PATCH v5 resend 03/12] vhost: vring queue setup for multiple queue support

2015-09-20 Thread Marcel Apfelbaum
On 09/18/2015 06:10 PM, Yuanhan Liu wrote:
> All queue pairs, including the default (the first) queue pair,
> are allocated dynamically, when a vring_call message is received
> first time for a specific queue pair.
>
> This is a refactor work for enabling vhost-user multiple queue;
> it should not break anything as it does no functional changes:
> we don't support mq set, so there is only one mq at max.
>
> This patch is based on Changchun's patch.
>
> Signed-off-by: Yuanhan Liu 
> ---
>   lib/librte_vhost/rte_virtio_net.h |   3 +-
>   lib/librte_vhost/vhost_user/virtio-net-user.c |  44 +-
>   lib/librte_vhost/virtio-net.c | 121 
> --
>   3 files changed, 102 insertions(+), 66 deletions(-)
>
> diff --git a/lib/librte_vhost/rte_virtio_net.h 
> b/lib/librte_vhost/rte_virtio_net.h
> index e3a21e5..5dd6493 100644
> --- a/lib/librte_vhost/rte_virtio_net.h
> +++ b/lib/librte_vhost/rte_virtio_net.h
> @@ -96,7 +96,7 @@ struct vhost_virtqueue {
>* Device structure contains all configuration information relating to the 
> device.
>*/
>   struct virtio_net {
> - struct vhost_virtqueue  *virtqueue[VIRTIO_QNUM];/**< Contains 
> all virtqueue information. */
> + struct vhost_virtqueue  *virtqueue[VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX];
> /**< Contains all virtqueue information. */
>   struct virtio_memory*mem;   /**< QEMU memory and memory 
> region information. */
>   uint64_tfeatures;   /**< Negotiated feature set. */
>   uint64_tprotocol_features;  /**< Negotiated 
> protocol feature set. */
> @@ -104,6 +104,7 @@ struct virtio_net {
>   uint32_tflags;  /**< Device flags. Only used to 
> check if device is running on data core. */
>   #define IF_NAME_SZ (PATH_MAX > IFNAMSIZ ? PATH_MAX : IFNAMSIZ)
>   charifname[IF_NAME_SZ]; /**< Name of the tap 
> device or socket path. */
> + uint32_tvirt_qp_nb; /**< number of queue pair we 
> have allocated */
>   void*priv;  /**< private context */
>   } __rte_cache_aligned;
>
> diff --git a/lib/librte_vhost/vhost_user/virtio-net-user.c 
> b/lib/librte_vhost/vhost_user/virtio-net-user.c
> index 360254e..e83d279 100644
> --- a/lib/librte_vhost/vhost_user/virtio-net-user.c
> +++ b/lib/librte_vhost/vhost_user/virtio-net-user.c
> @@ -206,25 +206,33 @@ err_mmap:
>   }
>

Hi,

>   static int
> +vq_is_ready(struct vhost_virtqueue *vq)
> +{
> + return vq && vq->desc   &&
> +vq->kickfd != -1 &&
> +vq->callfd != -1;

  kickfd and callfd are unsigned

> +}
> +
> +static int
>   virtio_is_ready(struct virtio_net *dev)
>   {
>   struct vhost_virtqueue *rvq, *tvq;
> + uint32_t i;
>
> - /* mq support in future.*/
> - rvq = dev->virtqueue[VIRTIO_RXQ];
> - tvq = dev->virtqueue[VIRTIO_TXQ];
> - if (rvq && tvq && rvq->desc && tvq->desc &&
> - (rvq->kickfd != -1) &&
> - (rvq->callfd != -1) &&
> - (tvq->kickfd != -1) &&
> - (tvq->callfd != -1)) {
> - RTE_LOG(INFO, VHOST_CONFIG,
> - "virtio is now ready for processing.\n");
> - return 1;
> + for (i = 0; i < dev->virt_qp_nb; i++) {
> + rvq = dev->virtqueue[i * VIRTIO_QNUM + VIRTIO_RXQ];
> + tvq = dev->virtqueue[i * VIRTIO_QNUM + VIRTIO_TXQ];
> +
> + if (!vq_is_ready(rvq) || !vq_is_ready(tvq)) {
> + RTE_LOG(INFO, VHOST_CONFIG,
> + "virtio is not ready for processing.\n");
> + return 0;
> + }
>   }
> +
>   RTE_LOG(INFO, VHOST_CONFIG,
> - "virtio isn't ready for processing.\n");
> - return 0;
> + "virtio is now ready for processing.\n");
> + return 1;
>   }
>
>   void
> @@ -290,13 +298,9 @@ user_get_vring_base(struct vhost_device_ctx ctx,
>* sent and only sent in vhost_vring_stop.
>* TODO: cleanup the vring, it isn't usable since here.
>*/
> - if ((dev->virtqueue[VIRTIO_RXQ]->kickfd) >= 0) {
> - close(dev->virtqueue[VIRTIO_RXQ]->kickfd);
> - dev->virtqueue[VIRTIO_RXQ]->kickfd = -1;
> - }
> - if ((dev->virtqueue[VIRTIO_TXQ]->kickfd) >= 0) {
> - close(dev->virtqueue[VIRTIO_TXQ]->kickfd);
> - dev->virtqueue[VIRTIO_TXQ]->kickfd = -1;
> + if ((dev->virtqueue[state->index]->kickfd) >= 0) {

always >= 0

> + close(dev->virtqueue[state->index]->kickfd);
> + dev->virtqueue[state->index]->kickfd = -1;

again unsigned

>   }
>
>   return 0;
> diff --git a/lib/librte_vhost/virtio-net.c b/lib/librte_vhost/virtio-net.c
> index deac6b9..643a92e 100644
> --- a/lib/librte_vhost/virtio-net.c
> +++ b/lib/librte_vhost/virtio-net.c
> @@ -36,6 +36,7 @@
>   #include 
>   #include 
>   #include 
> +#include 
>  

[dpdk-dev] [PATCH v5 resend 03/12] vhost: vring queue setup for multiple queue support

2015-09-19 Thread Yuanhan Liu
All queue pairs, including the default (the first) queue pair,
are allocated dynamically, when a vring_call message is first
received for a specific queue pair.

This is refactoring work for enabling vhost-user multiple queue;
it should not break anything as it makes no functional changes:
we don't support mq set, so there is only one queue pair at most.

This patch is based on Changchun's patch.

Signed-off-by: Yuanhan Liu 
---
 lib/librte_vhost/rte_virtio_net.h |   3 +-
 lib/librte_vhost/vhost_user/virtio-net-user.c |  44 +-
 lib/librte_vhost/virtio-net.c | 121 --
 3 files changed, 102 insertions(+), 66 deletions(-)

diff --git a/lib/librte_vhost/rte_virtio_net.h 
b/lib/librte_vhost/rte_virtio_net.h
index e3a21e5..5dd6493 100644
--- a/lib/librte_vhost/rte_virtio_net.h
+++ b/lib/librte_vhost/rte_virtio_net.h
@@ -96,7 +96,7 @@ struct vhost_virtqueue {
  * Device structure contains all configuration information relating to the 
device.
  */
 struct virtio_net {
-   struct vhost_virtqueue  *virtqueue[VIRTIO_QNUM];/**< Contains 
all virtqueue information. */
+   struct vhost_virtqueue  *virtqueue[VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX];
/**< Contains all virtqueue information. */
struct virtio_memory*mem;   /**< QEMU memory and memory 
region information. */
uint64_tfeatures;   /**< Negotiated feature set. */
uint64_tprotocol_features;  /**< Negotiated 
protocol feature set. */
@@ -104,6 +104,7 @@ struct virtio_net {
uint32_tflags;  /**< Device flags. Only used to 
check if device is running on data core. */
 #define IF_NAME_SZ (PATH_MAX > IFNAMSIZ ? PATH_MAX : IFNAMSIZ)
charifname[IF_NAME_SZ]; /**< Name of the tap 
device or socket path. */
+   uint32_tvirt_qp_nb; /**< number of queue pair we 
have allocated */
void*priv;  /**< private context */
 } __rte_cache_aligned;

diff --git a/lib/librte_vhost/vhost_user/virtio-net-user.c 
b/lib/librte_vhost/vhost_user/virtio-net-user.c
index 360254e..e83d279 100644
--- a/lib/librte_vhost/vhost_user/virtio-net-user.c
+++ b/lib/librte_vhost/vhost_user/virtio-net-user.c
@@ -206,25 +206,33 @@ err_mmap:
 }

 static int
+vq_is_ready(struct vhost_virtqueue *vq)
+{
+   return vq && vq->desc   &&
+  vq->kickfd != -1 &&
+  vq->callfd != -1;
+}
+
+static int
 virtio_is_ready(struct virtio_net *dev)
 {
struct vhost_virtqueue *rvq, *tvq;
+   uint32_t i;

-   /* mq support in future.*/
-   rvq = dev->virtqueue[VIRTIO_RXQ];
-   tvq = dev->virtqueue[VIRTIO_TXQ];
-   if (rvq && tvq && rvq->desc && tvq->desc &&
-   (rvq->kickfd != -1) &&
-   (rvq->callfd != -1) &&
-   (tvq->kickfd != -1) &&
-   (tvq->callfd != -1)) {
-   RTE_LOG(INFO, VHOST_CONFIG,
-   "virtio is now ready for processing.\n");
-   return 1;
+   for (i = 0; i < dev->virt_qp_nb; i++) {
+   rvq = dev->virtqueue[i * VIRTIO_QNUM + VIRTIO_RXQ];
+   tvq = dev->virtqueue[i * VIRTIO_QNUM + VIRTIO_TXQ];
+
+   if (!vq_is_ready(rvq) || !vq_is_ready(tvq)) {
+   RTE_LOG(INFO, VHOST_CONFIG,
+   "virtio is not ready for processing.\n");
+   return 0;
+   }
}
+
RTE_LOG(INFO, VHOST_CONFIG,
-   "virtio isn't ready for processing.\n");
-   return 0;
+   "virtio is now ready for processing.\n");
+   return 1;
 }

 void
@@ -290,13 +298,9 @@ user_get_vring_base(struct vhost_device_ctx ctx,
 * sent and only sent in vhost_vring_stop.
 * TODO: cleanup the vring, it isn't usable since here.
 */
-   if ((dev->virtqueue[VIRTIO_RXQ]->kickfd) >= 0) {
-   close(dev->virtqueue[VIRTIO_RXQ]->kickfd);
-   dev->virtqueue[VIRTIO_RXQ]->kickfd = -1;
-   }
-   if ((dev->virtqueue[VIRTIO_TXQ]->kickfd) >= 0) {
-   close(dev->virtqueue[VIRTIO_TXQ]->kickfd);
-   dev->virtqueue[VIRTIO_TXQ]->kickfd = -1;
+   if ((dev->virtqueue[state->index]->kickfd) >= 0) {
+   close(dev->virtqueue[state->index]->kickfd);
+   dev->virtqueue[state->index]->kickfd = -1;
}

return 0;
diff --git a/lib/librte_vhost/virtio-net.c b/lib/librte_vhost/virtio-net.c
index deac6b9..643a92e 100644
--- a/lib/librte_vhost/virtio-net.c
+++ b/lib/librte_vhost/virtio-net.c
@@ -36,6 +36,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #ifdef RTE_LIBRTE_VHOST_NUMA
@@ -178,6 +179,15 @@ add_config_ll_entry(struct virtio_net_config_ll 
*new_ll_dev)

 }

+static void
+cleanup_vq(struct vhost_virtqueue *vq)
+{
+   if (vq->callfd >= 0)
+   close(vq->callfd);
+   if