On 07.03.2016 17:13, Wang, Zhihong wrote:
> 
> 
>> -----Original Message-----
>> From: Ilya Maximets [mailto:i.maxim...@samsung.com]
>> Sent: Friday, March 4, 2016 7:14 PM
>> To: Wang, Zhihong <zhihong.w...@intel.com>; dev@openvswitch.org
>> Cc: Flavio Leitner <f...@redhat.com>; Traynor, Kevin <kevin.tray...@intel.com>; Dyasly Sergey <s.dya...@samsung.com>
>> Subject: Re: vhost-user invalid txqid cause discard of packets
>>
>> On 04.03.2016 13:50, Wang, Zhihong wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: Ilya Maximets [mailto:i.maxim...@samsung.com]
>>>> Sent: Friday, March 4, 2016 6:00 PM
>>>> To: Wang, Zhihong <zhihong.w...@intel.com>; dev@openvswitch.org
>>>> Cc: Flavio Leitner <f...@redhat.com>; Traynor, Kevin <kevin.tray...@intel.com>; Dyasly Sergey <s.dya...@samsung.com>
>>>> Subject: Re: vhost-user invalid txqid cause discard of packets
>>>>
>>>> Hi, Zhihong.
>>>> I can't reproduce this in my environment.
>>>> Could you please provide ovs-vswitchd.log with VLOG_DBG enabled
>>>> for netdev-dpdk and the outputs of the following commands:
>>>> # ovs-vsctl show
>>>> # ovs-appctl dpctl/show
>>>> # ovs-appctl dpif-netdev/pmd-rxq-show
>>>> in 'good' and 'bad' states?
>>>>
>>>> Also, are you sure that the VM was started with exactly 4 queues?
>>>
>>>
>>> Yes, it's exactly 4 queues.
>>> Please see the command output below.
>>>
>>> In "bad" case only vhost txq 0, 1 are sending packets, I believe the other 2
>>> become -1 after the lookup.
>>
>> I don't think so. The reconfiguration code can't affect the vHost queue mapping.
>> Only the distribution of queues between threads changed here.
>> Can you reproduce this issue with another cpu-mask?
>> For example : '00ff0' and '003f0'.
> 
> 
> The result is always the same no matter which 8 cores or 6 cores are used,
> e.g. fc0c or f00c.
> 
> The way I launch OVS is shown below; could you check if there's anything wrong?

Looks normal.

> You should be able to reproduce this issue with it.
> The QEMU version is 2.5; OVS and DPDK were pulled from master on Mar 1st.
> -------------------------------------------------------------
> export DB_SOCK=/var/run/openvswitch/db.sock
> #cd ovsdb
> ./ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
> ./ovsdb-server --remote=punix:$DB_SOCK --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
> 
> #cd utilities
> ./ovs-vsctl --no-wait init
> cd $1/vswitchd
> ./ovs-vswitchd --dpdk -c 0xf -n 3 --socket-mem 1024 -- unix:$DB_SOCK --pidfile --log-file=/var/log/openvswitch/ovs-vswitchd.log &
> 
> #cd utilities
> ./ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
> ./ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
> ./ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk
> ./ovs-vsctl set Interface vhost-user1 options:n_rxq=4
> ./ovs-vsctl set Interface dpdk0 options:n_rxq=4
> ./ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3fc00
> ./ovs-ofctl del-flows ovsbr0
> ./ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
> ./ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1
> -------------------------------------------------------------
> 
> Some logs are below; I'm not sure which part you're looking for.

I'm looking for something like this:
2016-03-04T09:34:51Z|00459|dpdk(vhost_thread2)|DBG|TX queue mapping for /var/run/openvswitch/vhost-user1
2016-03-04T09:34:51Z|00460|dpdk(vhost_thread2)|DBG| 0 -->  0
2016-03-04T09:34:51Z|00461|dpdk(vhost_thread2)|DBG| 1 -->  1
2016-03-04T09:34:51Z|00462|dpdk(vhost_thread2)|DBG| 2 -->  2
2016-03-04T09:34:51Z|00463|dpdk(vhost_thread2)|DBG| 3 -->  3
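
Each line of this dump shows the TX queue id OVS uses on the left and the
virtio queue it is currently mapped to on the right; -1 on the right means
that vring was never enabled, so packets sent through that queue are
discarded. To make that concrete, here is a minimal self-contained sketch;
only the tx_q[].map and real_n_txq names from the lookup quoted later in
this thread are taken from the actual code, the struct layout and helper
are my own illustration:

/* Illustrative sketch, not verbatim OVS code. */
#include <stdio.h>

#define MAX_QUEUES 4

struct dpdk_tx_queue {
    int map;                   /* Enabled virtio queue id, or -1. */
};

struct vhost_dev {
    struct dpdk_tx_queue tx_q[MAX_QUEUES];
    int real_n_txq;            /* Queues actually enabled by the guest. */
};

/* Mirrors the lookup in __netdev_dpdk_vhost_send(): an unmapped queue
 * yields -1 and the batch is discarded instead of transmitted. */
static int
vhost_send(struct vhost_dev *dev, int qid, int n_pkts)
{
    qid = dev->tx_q[qid % dev->real_n_txq].map;
    return qid < 0 ? 0 : n_pkts;
}

int
main(void)
{
    struct vhost_dev dev = { .real_n_txq = 4 };

    for (int i = 0; i < MAX_QUEUES; i++) {
        dev.tx_q[i].map = i;   /* Healthy: "i -->  i", as in the dump. */
    }
    dev.tx_q[2].map = -1;      /* The reported "bad" state: two vrings */
    dev.tx_q[3].map = -1;      /* left unmapped.                       */

    for (int qid = 0; qid < MAX_QUEUES; qid++) {
        printf("%d --> %2d%s\n", qid, dev.tx_q[qid].map,
               vhost_send(&dev, qid, 32) ? "" : "  (packets dropped)");
    }
    return 0;
}

If the dump shows -1 entries right after the pmd-cpu-mask change, that
would confirm where the drops come from.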

> 
> "bad" 6c:
> 2016-03-07T06:37:32.291Z|00001|dpif_netdev(pmd84)|DBG|Core 12 processing port 'vhost-user1' with queue-id 0
> 2016-03-07T06:37:32.291Z|00002|dpif_netdev(pmd84)|DBG|Core 12 processing port 'dpdk0' with queue-id 2
> 2016-03-07T06:37:32.292Z|00001|dpif_netdev(pmd80)|DBG|Core 17 processing port 'dpdk0' with queue-id 1
> 2016-03-07T06:37:32.292Z|00001|dpif_netdev(pmd79)|DBG|Core 16 processing port 'dpdk0' with queue-id 0
> 2016-03-07T06:37:32.292Z|00001|dpif_netdev(pmd83)|DBG|Core 13 processing port 'vhost-user1' with queue-id 1
> 2016-03-07T06:37:32.292Z|00002|dpif_netdev(pmd83)|DBG|Core 13 processing port 'dpdk0' with queue-id 3
> 2016-03-07T06:37:32.293Z|00001|dpif_netdev(pmd81)|DBG|Core 15 processing port 'vhost-user1' with queue-id 3
> 2016-03-07T06:37:32.294Z|00001|dpif_netdev(pmd82)|DBG|Core 14 processing port 'vhost-user1' with queue-id 2
> 
> "good" 8c:
> 2016-03-07T06:40:34.200Z|00001|dpif_netdev(pmd86)|DBG|Core 10 processing port 'vhost-user1' with queue-id 0
> 2016-03-07T06:40:34.202Z|00001|dpif_netdev(pmd79)|DBG|Core 13 processing port 'vhost-user1' with queue-id 3
> 2016-03-07T06:40:34.202Z|00001|dpif_netdev(pmd80)|DBG|Core 14 processing port 'dpdk0' with queue-id 0
> 2016-03-07T06:40:34.202Z|00001|dpif_netdev(pmd85)|DBG|Core 11 processing port 'vhost-user1' with queue-id 1
> 2016-03-07T06:40:34.202Z|00001|dpif_netdev(pmd84)|DBG|Core 12 processing port 'vhost-user1' with queue-id 2
> 2016-03-07T06:40:34.202Z|00001|dpif_netdev(pmd81)|DBG|Core 15 processing port 'dpdk0' with queue-id 1
> 2016-03-07T06:40:34.202Z|00001|dpif_netdev(pmd83)|DBG|Core 17 processing port 'dpdk0' with queue-id 3
> 2016-03-07T06:40:34.202Z|00001|dpif_netdev(pmd82)|DBG|Core 16 processing port 'dpdk0' with queue-id 2
> 
> 
>>
>> We will see the exact mapping in ovs-vswitchd.log with VLOG_DBG enabled for
>> netdev-dpdk (the -vdpdk:all:dbg option to ovs-vswitchd).
>>
>> One more comment is inlined below.
>>
>>>
>>> "good":
>>> ------------------------------------------------------
>>> [20160301]# ./ovs/utilities/ovs-vsctl show
>>> a71febbd-fc2b-4a0a-beb2-d6fe0ae68d58
>>>     Bridge "ovsbr0"
>>>         Port "ovsbr0"
>>>             Interface "ovsbr0"
>>>                 type: internal
>>>         Port "vhost-user1"
>>>             Interface "vhost-user1"
>>>                 type: dpdkvhostuser
>>>                 options: {n_rxq="4"}
>>>         Port "dpdk0"
>>>             Interface "dpdk0"
>>>                 type: dpdk
>>>                 options: {n_rxq="4"}
>>> [20160301]# ./ovs/utilities/ovs-appctl dpctl/show
>>> netdev@ovs-netdev:
>>>         lookups: hit:2642744165 missed:8 lost:0
>>>         flows: 8
>>>         port 0: ovs-netdev (internal)
>>>         port 1: ovsbr0 (tap)
>>>         port 2: vhost-user1 (dpdkvhostuser: configured_rx_queues=4, configured_tx_queues=4, requested_rx_queues=4, requested_tx_queues=73)
>>>         port 3: dpdk0 (dpdk: configured_rx_queues=4, configured_tx_queues=64, requested_rx_queues=4, requested_tx_queues=73)
>>> [20160301]# ./ovs/utilities/ovs-appctl dpif-netdev/pmd-rxq-show
>>> pmd thread numa_id 0 core_id 16:
>>>         port: dpdk0     queue-id: 2
>>> pmd thread numa_id 0 core_id 10:
>>>         port: vhost-user1       queue-id: 0
>>> pmd thread numa_id 0 core_id 12:
>>>         port: vhost-user1       queue-id: 2
>>> pmd thread numa_id 0 core_id 13:
>>>         port: vhost-user1       queue-id: 3
>>> pmd thread numa_id 0 core_id 14:
>>>         port: dpdk0     queue-id: 0
>>> pmd thread numa_id 0 core_id 15:
>>>         port: dpdk0     queue-id: 1
>>> pmd thread numa_id 0 core_id 11:
>>>         port: vhost-user1       queue-id: 1
>>> pmd thread numa_id 0 core_id 17:
>>>         port: dpdk0     queue-id: 3
>>> ------------------------------------------------------
>>>
>>> "bad":
>>> ------------------------------------------------------
>>> [20160301]# ./ovs/utilities/ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3f000
>>> 2016-03-04T03:33:30Z|00041|ovs_numa|WARN|Invalid cpu mask: x
>>
>> Just in case, have you tried using the valid mask
>> (other_config:pmd-cpu-mask=3f000)?
> 
> Sure.
> 
>>
>>> 2016-03-04T03:33:30Z|00042|dpif_netdev|INFO|Created 6 pmd threads on numa node 0
>>> [20160301]# ./ovs/utilities/ovs-vsctl show
>>> a71febbd-fc2b-4a0a-beb2-d6fe0ae68d58
>>>     Bridge "ovsbr0"
>>>         Port "ovsbr0"
>>>             Interface "ovsbr0"
>>>                 type: internal
>>>         Port "vhost-user1"
>>>             Interface "vhost-user1"
>>>                 type: dpdkvhostuser
>>>                 options: {n_rxq="4"}
>>>         Port "dpdk0"
>>>             Interface "dpdk0"
>>>                 type: dpdk
>>>                 options: {n_rxq="4"}
>>> [20160301]# ./ovs/utilities/ovs-appctl dpctl/show
>>> netdev@ovs-netdev:
>>>         lookups: hit:181693955 missed:7 lost:0
>>>         flows: 6
>>>         port 0: ovs-netdev (internal)
>>>         port 1: ovsbr0 (tap)
>>>         port 2: vhost-user1 (dpdkvhostuser: configured_rx_queues=4, configured_tx_queues=4, requested_rx_queues=4, requested_tx_queues=73)
>>>         port 3: dpdk0 (dpdk: configured_rx_queues=4, configured_tx_queues=64, requested_rx_queues=4, requested_tx_queues=73)
>>> [20160301]# ./ovs/utilities/ovs-appctl dpif-netdev/pmd-rxq-show
>>> pmd thread numa_id 0 core_id 13:
>>>         port: vhost-user1       queue-id: 1
>>>         port: dpdk0     queue-id: 3
>>> pmd thread numa_id 0 core_id 14:
>>>         port: vhost-user1       queue-id: 2
>>> pmd thread numa_id 0 core_id 16:
>>>         port: dpdk0     queue-id: 0
>>> pmd thread numa_id 0 core_id 17:
>>>         port: dpdk0     queue-id: 1
>>> pmd thread numa_id 0 core_id 12:
>>>         port: vhost-user1       queue-id: 0
>>>         port: dpdk0     queue-id: 2
>>> pmd thread numa_id 0 core_id 15:
>>>         port: vhost-user1       queue-id: 3
>>> ------------------------------------------------------
>>>
>>>
>>>>
>>>> Best regards, Ilya Maximets.
>>>>
>>>> On 03.03.2016 18:24, Wang, Zhihong wrote:
>>>>> Hi,
>>>>>
>>>>> I ran an OVS multiqueue test with a very simple traffic topology:
>>>>> 2 ports, each with 4 queues, 8 rxqs in total, like below:
>>>>>
>>>>> Pktgen <=4q=> PHY <=4q=> OVS <=4q=> testpmd in the guest
>>>>>
>>>>> First I set pmd-cpu-mask to 8 cores, and everything worked fine: each
>>>>> rxq got a core, and all txqids were valid.
>>>>>
>>>>> Then I set pmd-cpu-mask to 6 cores, and 2 txqids became invalid in
>>>>> __netdev_dpdk_vhost_send():
>>>>> qid = vhost_dev->tx_q[qid % vhost_dev->real_n_txq].map;
>>>>>
>>>>> The lookup returns -1 for qid, which leads to the packets being discarded.
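
By the way, the qid here is first folded onto the guest's real queue count
and then remapped, since OVS requests far more TX queues than the guest
provides (requested_tx_queues=73 in the dpctl/show output above, presumably
one per core plus one). A self-contained illustration of that two-step
selection; only the quoted lookup line is from the actual code, the rest is
made up for clarity:

#include <stdio.h>

int
main(void)
{
    int real_n_txq = 4;             /* Guest negotiated 4 queue pairs. */
    int map[4] = { 0, 1, -1, -1 };  /* The reported "bad" state. */

    /* qids 0..7 stand in for TX queues used by different threads. */
    for (int qid = 0; qid < 8; qid++) {
        int folded = qid % real_n_txq;  /* Fold onto the real count. */
        int target = map[folded];       /* Remap to an enabled vring. */
        printf("qid %d -> folded %d -> virtio txq %d%s\n",
               qid, folded, target, target < 0 ? " (dropped)" : "");
    }
    return 0;
}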
>>>>>
>>>>> Consequently, in testpmd in the VM we see only 2 queues working, and
>>>>> throughput drops by more than half.
>>>>>
>>>>> It works again when I set pmd-cpu-mask to 4 cores.
>>>>>
>>>>> My OVS and DPDK code were pulled from the repos on March 1st, 2016.
>>>>>
>>>>> Let me know if you need more info to reproduce this issue.
>>>>>
>>>>>
>>>>> Thanks
>>>>> Zhihong
>>>
>>>
>>
>> Best regards, Ilya Maximets.
> 
> 