Re: [ovs-discuss] how many CPU cannot allocate for PMD thread?

2017-10-16 Thread BALL SUN
Thanks for sharing.

I think I missed an important piece of information: our setup is on a VM
guest, so I am not sure whether that is related.

On Mon, Oct 16, 2017 at 7:26 PM, Guoshuai Li  wrote:
> I cannot answer your question, but I can share my environment:
>
>
> I have 32 CPUs:
>
>
> [root@gateway1 ~]# cat /proc/cpuinfo | grep processor | wc -l
> 32
> [root@gateway1 ~]#
>
>
> I configure my pmd-cpu-mask as 0xff00.
>
> [root@gateway1 ~]# ovs-vsctl get Open_vSwitch . other_config
> {dpdk-init="true", pmd-cpu-mask="0xff00"}
>
>
> I configure my DPDK ports with "n_rxq=4"; this configuration is important:
>
> Bridge br-ext
> Port bond-ext
> Interface "ext-dpdk-2"
> type: dpdk
> options: {dpdk-devargs=":84:00.1", n_rxq="4"}
> Interface "ext-dpdk-1"
> type: dpdk
> options: {dpdk-devargs=":84:00.0", n_rxq="4"}
> Bridge br-agg
> Port bond-agg
> Interface "agg-dpdk-2"
> type: dpdk
> options: {dpdk-devargs=":07:00.1", n_rxq="4"}
> Interface "agg-dpdk-1"
> type: dpdk
> options: {dpdk-devargs=":07:00.0", n_rxq="4"}
>
> And then CPU usage is 1600%:
>
>
> top - 19:24:27 up 18 days, 24 min,  6 users,  load average: 16.00, 16.00, 16.00
> Tasks: 419 total,   1 running, 418 sleeping,   0 stopped,   0 zombie
> %Cpu(s): 50.0 us,  0.0 sy,  0.0 ni, 50.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> KiB Mem : 26409787+total, 25773403+free,  5427996 used,   935844 buff/cache
> KiB Swap:  4194300 total,  4194300 free,        0 used. 25799068+avail Mem
>
>   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
> 32426 openvsw+  10 -10 5772520 653044  14888 S  1599  0.2   2267:10 ovs-vswitchd
>
>
>
> [root@gateway1 ~]# top
> top - 19:24:50 up 18 days, 25 min,  6 users,  load average: 16.00, 16.00, 16.00
> Tasks: 419 total,   1 running, 418 sleeping,   0 stopped,   0 zombie
> %Cpu0  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu1  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu2  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu3  :  0.0 us,  0.3 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu4  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu5  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu6  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu7  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu8  :  0.3 us,  0.3 sy,  0.0 ni, 99.3 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu9  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu10 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu11 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu12 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu13 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu14 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu15 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu16 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu17 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu18 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu19 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu20 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu21 :  0.0 us,  0.3 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu22 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu23 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu24 :  0.0 us,  0.3 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu25 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu26 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu27 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu28 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu29 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu30 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> %Cpu31 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
> KiB Mem : 26409787+total, 25773369+free,  5428244 used,   935924 buff/cache
> KiB Swap:  4194300 total,  4194300 free,        0 used. 25799040+avail Mem
>
>
>
>
>
> On 2017/10/16 16:07, BALL SUN wrote:
>>
>> sorry for the late reply
>>
>> we have reinstalled OVS, but are still having the same issue.
>>
>> we tried to set the 

Re: [ovs-discuss] how many CPU cannot allocate for PMD thread?

2017-10-16 Thread Guoshuai Li

I cannot answer your question, but I can share my environment:


I have 32 CPUs:


[root@gateway1 ~]# cat /proc/cpuinfo | grep processor | wc -l
32
[root@gateway1 ~]#


I configure my pmd-cpu-mask as 0xff00.

[root@gateway1 ~]# ovs-vsctl get Open_vSwitch . other_config
{dpdk-init="true", pmd-cpu-mask="0xff00"}


I configure my DPDK ports with "n_rxq=4"; this configuration is important:

    Bridge br-ext
    Port bond-ext
    Interface "ext-dpdk-2"
    type: dpdk
    options: {dpdk-devargs=":84:00.1", n_rxq="4"}
    Interface "ext-dpdk-1"
    type: dpdk
    options: {dpdk-devargs=":84:00.0", n_rxq="4"}
    Bridge br-agg
    Port bond-agg
    Interface "agg-dpdk-2"
    type: dpdk
    options: {dpdk-devargs=":07:00.1", n_rxq="4"}
    Interface "agg-dpdk-1"
    type: dpdk
    options: {dpdk-devargs=":07:00.0", n_rxq="4"}
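[Editorial note: each rx queue is polled by at most one PMD thread, so the total rx queue count bounds how many PMD cores can actually be kept busy. A quick back-of-the-envelope check of this setup, as illustrative Python (not part of OVS):]

```python
# Four dpdk ports, each configured with n_rxq=4, gives 16 rx queues in
# total; each rx queue is polled by at most one PMD thread, so up to 16
# PMD cores can have at least one queue to poll.
ports = ["ext-dpdk-1", "ext-dpdk-2", "agg-dpdk-1", "agg-dpdk-2"]
n_rxq = 4
total_rxqs = len(ports) * n_rxq
print(total_rxqs)  # -> 16
```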

And then CPU usage is 1600%:


top - 19:24:27 up 18 days, 24 min,  6 users,  load average: 16.00, 16.00, 16.00
Tasks: 419 total,   1 running, 418 sleeping,   0 stopped,   0 zombie
%Cpu(s): 50.0 us,  0.0 sy,  0.0 ni, 50.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
KiB Mem : 26409787+total, 25773403+free,  5427996 used,   935844 buff/cache
KiB Swap:  4194300 total,  4194300 free,        0 used. 25799068+avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
32426 openvsw+  10 -10 5772520 653044  14888 S  1599  0.2   2267:10 ovs-vswitchd




[root@gateway1 ~]# top
top - 19:24:50 up 18 days, 25 min,  6 users,  load average: 16.00, 16.00, 16.00
Tasks: 419 total,   1 running, 418 sleeping,   0 stopped,   0 zombie
%Cpu0  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu1  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu2  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu3  :  0.0 us,  0.3 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu4  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu5  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu6  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu7  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu8  :  0.3 us,  0.3 sy,  0.0 ni, 99.3 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu9  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu10 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu11 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu12 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu13 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu14 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu15 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu16 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu17 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu18 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu19 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu20 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu21 :  0.0 us,  0.3 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu22 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu23 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu24 :  0.0 us,  0.3 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu25 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu26 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu27 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu28 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu29 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu30 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu31 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st

KiB Mem : 26409787+total, 25773369+free,  5428244 used,   935924 buff/cache
KiB Swap:  4194300 total,  4194300 free,        0 used. 25799040+avail Mem




On 2017/10/16 16:07, BALL SUN wrote:

sorry for the late reply

we have reinstalled OVS, but are still having the same issue.

we tried to set pmd-cpu-mask=3, but only one CPU is occupied.
%Cpu0  :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu2  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu3  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st

#  /usr/local/bin/ovs-vsctl get Open_vSwitch . 

Re: [ovs-discuss] how many CPU cannot allocate for PMD thread?

2017-10-16 Thread BALL SUN
sorry for the late reply

we have reinstalled OVS, but are still having the same issue.

we tried to set pmd-cpu-mask=3, but only one CPU is occupied.
%Cpu0  :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu2  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu3  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st

#  /usr/local/bin/ovs-vsctl get Open_vSwitch . other_config
{dpdk-init="true", pmd-cpu-mask="3"}

# /usr/local/bin/ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 0:
isolated : false
port: dpdk0 queue-id: 0
pmd thread numa_id 0 core_id 1:
isolated : false

Is it because there is only one NUMA node available?

#  numactl -H
available: 1 nodes (0)
node 0 cpus: 0 1 2 3
node 0 size: 8191 MB
node 0 free: 2633 MB
node distances:
node   0
  0:  10
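[Editorial note: the output above is consistent with the queue math rather than with NUMA. Mask 3 (binary 11) creates PMD threads on cores 0 and 1, both on node 0, but dpdk0 exposes only a single rx queue, so one PMD busy-polls at 100% while the other has nothing to poll. A rough round-robin sketch in Python; the real OVS queue-assignment logic is more elaborate, and this helper is purely illustrative:]

```python
def assign_rxqs(rxqs, pmd_cores):
    """Distribute rx queues over PMD cores round-robin (a simplified
    model of the assignment; real OVS scheduling is more sophisticated)."""
    assignment = {core: [] for core in pmd_cores}
    for i, rxq in enumerate(rxqs):
        assignment[pmd_cores[i % len(pmd_cores)]].append(rxq)
    return assignment

# pmd-cpu-mask=3 (binary 11) -> PMD threads on cores 0 and 1, but only
# one rx queue exists, so core 0 polls it and core 1 stays idle.
print(assign_rxqs(["dpdk0 queue 0"], [0, 1]))
# -> {0: ['dpdk0 queue 0'], 1: []}
```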







On Fri, Sep 22, 2017 at 9:16 PM, Flavio Leitner  wrote:
> On Fri, 22 Sep 2017 15:02:20 +0800
> Sun Paul  wrote:
>
>> hi
>>
>> we have tried that, e.g. if we set it to 0x22, we are still only able to
>> see 2 CPUs at 100%. Why?
>
> Because that's what you told OVS to do.
> The mask 0x22 is 0010 0010 and each '1' there represents a CPU.
>
> --
> Flavio
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] how many CPU cannot allocate for PMD thread?

2017-09-22 Thread Flavio Leitner
On Fri, 22 Sep 2017 15:02:20 +0800
Sun Paul  wrote:

> hi
> 
> we have tried that, e.g. if we set it to 0x22, we are still only able to
> see 2 CPUs at 100%. Why?

Because that's what you told OVS to do.
The mask 0x22 is 0010 0010 and each '1' there represents a CPU.
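[Editorial note: the bit-to-CPU mapping described above can be sketched in a few lines of illustrative Python; `mask_to_cpus` is a hypothetical helper, not part of OVS:]

```python
def mask_to_cpus(mask):
    """CPU core ids selected by a pmd-cpu-mask value: bit i of the mask
    corresponds to core i, with the lowest-order bit being core 0."""
    return [i for i in range(mask.bit_length()) if mask & (1 << i)]

# 0x22 is binary 0010 0010: bits 1 and 5 are set, so exactly two PMD
# threads can be created, pinned to cores 1 and 5.
print(mask_to_cpus(0x22))  # -> [1, 5]
```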

-- 
Flavio



Re: [ovs-discuss] how many CPU cannot allocate for PMD thread?

2017-09-22 Thread Per-Erik Westerberg
Hi,

Using bit-mask 0x22 still sets only two bits, which results in two
CPUs being used; use 0x33 or 0x0f for four CPUs, etc.
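[Editorial note: going the other direction, building the mask from a core list, can be sketched as illustrative Python; `cpus_to_mask` is a hypothetical helper, not an OVS utility:]

```python
def cpus_to_mask(cpus):
    """Build a pmd-cpu-mask hex string from a list of CPU core ids."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu  # bit i of the mask selects core i
    return hex(mask)

# Four CPUs either way, just pinned to different cores:
print(cpus_to_mask([0, 1, 2, 3]))  # -> 0xf (same value as 0x0f)
print(cpus_to_mask([0, 1, 4, 5]))  # -> 0x33
```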

  Regards / Per-Erik

On fre, 2017-09-22 at 15:02 +0800, Sun Paul wrote:
> hi
> 
> we have tried that, e.g. if we set it to 0x22, we are still only able to
> see 2 CPUs at 100%. Why?
> 
> # ovs-vsctl get Open_vSwitch . other_config
> {dpdk-init="true", n-dpdk-rxqs="2", pmd-cpu-mask="0x22"}
> 
> 
> Sep 22 22:54:29 host1 ovs-vswitchd[3504]:
> ovs|00196|netdev_dpdk|WARN|Failed to enable flow control on device 0
> Sep 22 22:54:29 host1 ovs-vswitchd[3504]:
> ovs|00197|dpif_netdev|INFO|PMD thread on numa_id: 0, core id:  2
> destroyed.
> Sep 22 22:54:29 host1 ovs-vswitchd[3504]:
> ovs|00198|dpif_netdev|INFO|PMD thread on numa_id: 0, core id:  5
> created.
> Sep 22 22:54:29 host1 ovs-vswitchd[3504]:
> ovs|00199|dpif_netdev|INFO|There are 2 pmd threads on numa node 0
> 
> 
> 
> On Wed, Sep 20, 2017 at 8:59 PM, Flavio Leitner 
> wrote:
> > 
> > On Wed, 20 Sep 2017 09:13:55 +0800
> > Sun Paul  wrote:
> > 
> > > 
> > > sorry about that
> > > 
> > > # ovs-vsctl get Open_vSwitch . other_config
> > > {dpdk-init="true", n-dpdk-rxqs="2", pmd-cpu-mask="0x6"}
> > 
> > Have you tried to change pmd-cpu-mask? Because that is mask of bits
> > representing the CPUs you allow PMDs to be created.  In this case,
> > you are saying '0x6' (binary mask: 0110), so only two CPUs.
> > 
> > Also check ovs-vswitchd.conf.db(5) man-page:
> > 
> >    other_config : pmd-cpu-mask: optional string
> >           Specifies CPU mask for setting the cpu affinity of PMD (Poll Mode
> >           Driver) threads. Value should be in the form of hex string, similar
> >           to the dpdk EAL ’-c COREMASK’ option input or the ’taskset’ mask
> >           input.
> >
> >           The lowest order bit corresponds to the first CPU core. A set bit
> >           means the corresponding core is available and a pmd thread will be
> >           created and pinned to it. If the input does not cover all cores,
> >           those uncovered cores are considered not set.
> >
> >           If not specified, one pmd thread will be created for each numa node
> >           and pinned to any available core on the numa node by default.
> > 
> > fbl
> > 
> > > 
> > > 
> > > On Tue, Sep 19, 2017 at 8:02 PM, Flavio Leitner  > > > wrote:
> > > > 
> > > > On Tue, 19 Sep 2017 13:43:25 +0800
> > > > Sun Paul  wrote:
> > > > 
> > > > > 
> > > > > Hi
> > > > > 
> > > > > below is the output. currently, I am only able to set to use
> > > > > two CPU for PMD.
> > > > 
> > > > 
> > > > I was referring to the output of
> > > > ovs-vsctl get Open_vSwitch . other_config
> > > > 
> > > > fbl
> > > > 
> > > > > 
> > > > > 
> > > > > # ovs-vsctl show
> > > > > ea7f2b40-b7b3-4f11-a81f-cf25a56f8172
> > > > > Bridge "gtp1"
> > > > > Port "dpdk0"
> > > > > Interface "dpdk0"
> > > > > type: dpdk
> > > > > options: {dpdk-devargs=":04:00.2",
> > > > > n_rxq="4"}
> > > > > Port "gtp1"
> > > > > Interface "gtp1"
> > > > > type: internal
> > > > > Port "dpdk1"
> > > > > Interface "dpdk1"
> > > > > type: dpdk
> > > > > options: {dpdk-devargs=":04:00.3",
> > > > > n_rxq="4"}
> > > > > 
> > > > > 
> > > > > 
> > > > > On Tue, Sep 19, 2017 at 4:09 AM, Flavio Leitner  > > > > .org> wrote:
> > > > > > 
> > > > > > On Mon, 18 Sep 2017 16:51:33 +0800
> > > > > > Sun Paul  wrote:
> > > > > > 
> > > > > > > 
> > > > > > > Hi
> > > > > > > 
> > > > > > > I have two interfaces mapped to DPDK and run OVS on top of
> > > > > > > them. I tried to set the CPU mask, but I cannot allocate
> > > > > > > more than 2 CPUs for PMD threads. Any idea?
> > > > > > > 
> > > > > > > # /usr/local/bin/ovs-appctl dpif-netdev/pmd-rxq-show
> > > > > > > pmd thread numa_id 0 core_id 1:
> > > > > > > isolated : false
> > > > > > > port: dpdk0 queue-id: 0
> > > > > > > pmd thread numa_id 0 core_id 2:
> > > > > > > isolated : false
> > > > > > > port: dpdk1 queue-id: 0
> > > > > > 
> > > > > > Could you post the DPDK configuration and what do you want?
> > > > > > 
> > > > > > Thanks,
> > > > > > --
> > > > > > Flavio
> > > > > > 
> > > > 
> > > > 
> > > > 
> > > > --
> > > > Flavio
> > > > 
> > 
> > 
> > 
> > --
> > Flavio
> > 



Re: [ovs-discuss] how many CPU cannot allocate for PMD thread?

2017-09-22 Thread Sun Paul
hi

we have tried that, e.g. if we set it to 0x22, we are still only able to
see 2 CPUs at 100%. Why?

# ovs-vsctl get Open_vSwitch . other_config
{dpdk-init="true", n-dpdk-rxqs="2", pmd-cpu-mask="0x22"}


Sep 22 22:54:29 host1 ovs-vswitchd[3504]:
ovs|00196|netdev_dpdk|WARN|Failed to enable flow control on device 0
Sep 22 22:54:29 host1 ovs-vswitchd[3504]:
ovs|00197|dpif_netdev|INFO|PMD thread on numa_id: 0, core id:  2
destroyed.
Sep 22 22:54:29 host1 ovs-vswitchd[3504]:
ovs|00198|dpif_netdev|INFO|PMD thread on numa_id: 0, core id:  5
created.
Sep 22 22:54:29 host1 ovs-vswitchd[3504]:
ovs|00199|dpif_netdev|INFO|There are 2 pmd threads on numa node 0



On Wed, Sep 20, 2017 at 8:59 PM, Flavio Leitner  wrote:
> On Wed, 20 Sep 2017 09:13:55 +0800
> Sun Paul  wrote:
>
>> sorry about that
>>
>> # ovs-vsctl get Open_vSwitch . other_config
>> {dpdk-init="true", n-dpdk-rxqs="2", pmd-cpu-mask="0x6"}
>
> Have you tried to change pmd-cpu-mask? Because that is mask of bits
> representing the CPUs you allow PMDs to be created.  In this case,
> you are saying '0x6' (binary mask: 0110), so only two CPUs.
>
> Also check ovs-vswitchd.conf.db(5) man-page:
>
>    other_config : pmd-cpu-mask: optional string
>           Specifies CPU mask for setting the cpu affinity of PMD (Poll Mode
>           Driver) threads. Value should be in the form of hex string, similar
>           to the dpdk EAL ’-c COREMASK’ option input or the ’taskset’ mask
>           input.
>
>           The lowest order bit corresponds to the first CPU core. A set bit
>           means the corresponding core is available and a pmd thread will be
>           created and pinned to it. If the input does not cover all cores,
>           those uncovered cores are considered not set.
>
>           If not specified, one pmd thread will be created for each numa node
>           and pinned to any available core on the numa node by default.
>
> fbl
>
>>
>> On Tue, Sep 19, 2017 at 8:02 PM, Flavio Leitner  wrote:
>> > On Tue, 19 Sep 2017 13:43:25 +0800
>> > Sun Paul  wrote:
>> >
>> >> Hi
>> >>
>> Below is the output. Currently, I am only able to use two CPUs for PMD.
>> >
>> >
>> > I was referring to the output of
>> > ovs-vsctl get Open_vSwitch . other_config
>> >
>> > fbl
>> >
>> >>
>> >> # ovs-vsctl show
>> >> ea7f2b40-b7b3-4f11-a81f-cf25a56f8172
>> >> Bridge "gtp1"
>> >> Port "dpdk0"
>> >> Interface "dpdk0"
>> >> type: dpdk
>> >> options: {dpdk-devargs=":04:00.2", n_rxq="4"}
>> >> Port "gtp1"
>> >> Interface "gtp1"
>> >> type: internal
>> >> Port "dpdk1"
>> >> Interface "dpdk1"
>> >> type: dpdk
>> >> options: {dpdk-devargs=":04:00.3", n_rxq="4"}
>> >>
>> >>
>> >>
>> >> On Tue, Sep 19, 2017 at 4:09 AM, Flavio Leitner  wrote:
>> >> > On Mon, 18 Sep 2017 16:51:33 +0800
>> >> > Sun Paul  wrote:
>> >> >
>> >> >> Hi
>> >> >>
>> >> I have two interfaces mapped to DPDK and run OVS on top of them. I
>> >> tried to set the CPU mask, but I cannot allocate more than 2 CPUs
>> >> for PMD threads. Any idea?
>> >> >>
>> >> >> # /usr/local/bin/ovs-appctl dpif-netdev/pmd-rxq-show
>> >> >> pmd thread numa_id 0 core_id 1:
>> >> >> isolated : false
>> >> >> port: dpdk0 queue-id: 0
>> >> >> pmd thread numa_id 0 core_id 2:
>> >> >> isolated : false
>> >> >> port: dpdk1 queue-id: 0
>> >> >
>> >> > Could you post the DPDK configuration and what do you want?
>> >> >
>> >> > Thanks,
>> >> > --
>> >> > Flavio
>> >> >
>> >
>> >
>> >
>> > --
>> > Flavio
>> >
>
>
>
> --
> Flavio
>


Re: [ovs-discuss] how many CPU cannot allocate for PMD thread?

2017-09-20 Thread Flavio Leitner
On Wed, 20 Sep 2017 09:13:55 +0800
Sun Paul  wrote:

> sorry about that
> 
> # ovs-vsctl get Open_vSwitch . other_config
> {dpdk-init="true", n-dpdk-rxqs="2", pmd-cpu-mask="0x6"}

Have you tried to change pmd-cpu-mask? Because that is mask of bits
representing the CPUs you allow PMDs to be created.  In this case,
you are saying '0x6' (binary mask: 0110), so only two CPUs.
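[Editorial note: changing the mask value without changing its number of set bits cannot change the PMD count. Both masks seen in this thread have exactly two set bits, which a quick illustrative Python loop shows:]

```python
# 0x6 and 0x22 each have two set bits, so each yields exactly two PMD
# threads, just pinned to different cores.
for mask in (0x6, 0x22):
    cores = [i for i in range(8) if (mask >> i) & 1]
    print(f"{mask:#x} -> cores {cores}")
# 0x6 -> cores [1, 2]
# 0x22 -> cores [1, 5]
```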

Also check ovs-vswitchd.conf.db(5) man-page:

   other_config : pmd-cpu-mask: optional string
          Specifies CPU mask for setting the cpu affinity of PMD (Poll Mode
          Driver) threads. Value should be in the form of hex string, similar
          to the dpdk EAL ’-c COREMASK’ option input or the ’taskset’ mask
          input.

          The lowest order bit corresponds to the first CPU core. A set bit
          means the corresponding core is available and a pmd thread will be
          created and pinned to it. If the input does not cover all cores,
          those uncovered cores are considered not set.

          If not specified, one pmd thread will be created for each numa node
          and pinned to any available core on the numa node by default.

fbl

> 
> On Tue, Sep 19, 2017 at 8:02 PM, Flavio Leitner  wrote:
> > On Tue, 19 Sep 2017 13:43:25 +0800
> > Sun Paul  wrote:
> >  
> >> Hi
> >>
> >> Below is the output. Currently, I am only able to use two CPUs for PMD.
> >
> >
> > I was referring to the output of
> > ovs-vsctl get Open_vSwitch . other_config
> >
> > fbl
> >  
> >>
> >> # ovs-vsctl show
> >> ea7f2b40-b7b3-4f11-a81f-cf25a56f8172
> >> Bridge "gtp1"
> >> Port "dpdk0"
> >> Interface "dpdk0"
> >> type: dpdk
> >> options: {dpdk-devargs=":04:00.2", n_rxq="4"}
> >> Port "gtp1"
> >> Interface "gtp1"
> >> type: internal
> >> Port "dpdk1"
> >> Interface "dpdk1"
> >> type: dpdk
> >> options: {dpdk-devargs=":04:00.3", n_rxq="4"}
> >>
> >>
> >>
> >> On Tue, Sep 19, 2017 at 4:09 AM, Flavio Leitner  wrote: 
> >>  
> >> > On Mon, 18 Sep 2017 16:51:33 +0800
> >> > Sun Paul  wrote:
> >> >  
> >> >> Hi
> >> >>
> >> >> I have two interfaces mapped to DPDK and run OVS on top of them. I
> >> >> tried to set the CPU mask, but I cannot allocate more than 2 CPUs
> >> >> for PMD threads. Any idea?
> >> >>
> >> >> # /usr/local/bin/ovs-appctl dpif-netdev/pmd-rxq-show
> >> >> pmd thread numa_id 0 core_id 1:
> >> >> isolated : false
> >> >> port: dpdk0 queue-id: 0
> >> >> pmd thread numa_id 0 core_id 2:
> >> >> isolated : false
> >> >> port: dpdk1 queue-id: 0  
> >> >
> >> > Could you post the DPDK configuration and what do you want?
> >> >
> >> > Thanks,
> >> > --
> >> > Flavio
> >> >  
> >
> >
> >
> > --
> > Flavio
> >  



-- 
Flavio



Re: [ovs-discuss] how many CPU cannot allocate for PMD thread?

2017-09-19 Thread Sun Paul
sorry about that

# ovs-vsctl get Open_vSwitch . other_config
{dpdk-init="true", n-dpdk-rxqs="2", pmd-cpu-mask="0x6"}

On Tue, Sep 19, 2017 at 8:02 PM, Flavio Leitner  wrote:
> On Tue, 19 Sep 2017 13:43:25 +0800
> Sun Paul  wrote:
>
>> Hi
>>
>> Below is the output. Currently, I am only able to use two CPUs for PMD.
>
>
> I was referring to the output of
> ovs-vsctl get Open_vSwitch . other_config
>
> fbl
>
>>
>> # ovs-vsctl show
>> ea7f2b40-b7b3-4f11-a81f-cf25a56f8172
>> Bridge "gtp1"
>> Port "dpdk0"
>> Interface "dpdk0"
>> type: dpdk
>> options: {dpdk-devargs=":04:00.2", n_rxq="4"}
>> Port "gtp1"
>> Interface "gtp1"
>> type: internal
>> Port "dpdk1"
>> Interface "dpdk1"
>> type: dpdk
>> options: {dpdk-devargs=":04:00.3", n_rxq="4"}
>>
>>
>>
>> On Tue, Sep 19, 2017 at 4:09 AM, Flavio Leitner  wrote:
>> > On Mon, 18 Sep 2017 16:51:33 +0800
>> > Sun Paul  wrote:
>> >
>> >> Hi
>> >>
>> >> I have two interfaces mapped to DPDK and run OVS on top of them. I
>> >> tried to set the CPU mask, but I cannot allocate more than 2 CPUs
>> >> for PMD threads. Any idea?
>> >>
>> >> # /usr/local/bin/ovs-appctl dpif-netdev/pmd-rxq-show
>> >> pmd thread numa_id 0 core_id 1:
>> >> isolated : false
>> >> port: dpdk0 queue-id: 0
>> >> pmd thread numa_id 0 core_id 2:
>> >> isolated : false
>> >> port: dpdk1 queue-id: 0
>> >
>> > Could you post the DPDK configuration and what do you want?
>> >
>> > Thanks,
>> > --
>> > Flavio
>> >
>
>
>
> --
> Flavio
>


Re: [ovs-discuss] how many CPU cannot allocate for PMD thread?

2017-09-19 Thread Flavio Leitner
On Tue, 19 Sep 2017 13:43:25 +0800
Sun Paul  wrote:

> Hi
> 
> Below is the output. Currently, I am only able to use two CPUs for PMD.


I was referring to the output of
ovs-vsctl get Open_vSwitch . other_config

fbl

> 
> # ovs-vsctl show
> ea7f2b40-b7b3-4f11-a81f-cf25a56f8172
> Bridge "gtp1"
> Port "dpdk0"
> Interface "dpdk0"
> type: dpdk
> options: {dpdk-devargs=":04:00.2", n_rxq="4"}
> Port "gtp1"
> Interface "gtp1"
> type: internal
> Port "dpdk1"
> Interface "dpdk1"
> type: dpdk
> options: {dpdk-devargs=":04:00.3", n_rxq="4"}
> 
> 
> 
> On Tue, Sep 19, 2017 at 4:09 AM, Flavio Leitner  wrote:
> > On Mon, 18 Sep 2017 16:51:33 +0800
> > Sun Paul  wrote:
> >  
> >> Hi
> >>
> >> I have two interfaces mapped to DPDK and run OVS on top of them. I
> >> tried to set the CPU mask, but I cannot allocate more than 2 CPUs
> >> for PMD threads. Any idea?
> >>
> >> # /usr/local/bin/ovs-appctl dpif-netdev/pmd-rxq-show
> >> pmd thread numa_id 0 core_id 1:
> >> isolated : false
> >> port: dpdk0 queue-id: 0
> >> pmd thread numa_id 0 core_id 2:
> >> isolated : false
> >> port: dpdk1 queue-id: 0  
> >
> > Could you post the DPDK configuration and what do you want?
> >
> > Thanks,
> > --
> > Flavio
> >  



-- 
Flavio



Re: [ovs-discuss] how many CPU cannot allocate for PMD thread?

2017-09-18 Thread Sun Paul
Hi

Below is the output. Currently, I am only able to use two CPUs for PMD.

# ovs-vsctl show
ea7f2b40-b7b3-4f11-a81f-cf25a56f8172
Bridge "gtp1"
Port "dpdk0"
Interface "dpdk0"
type: dpdk
options: {dpdk-devargs=":04:00.2", n_rxq="4"}
Port "gtp1"
Interface "gtp1"
type: internal
Port "dpdk1"
Interface "dpdk1"
type: dpdk
options: {dpdk-devargs=":04:00.3", n_rxq="4"}



On Tue, Sep 19, 2017 at 4:09 AM, Flavio Leitner  wrote:
> On Mon, 18 Sep 2017 16:51:33 +0800
> Sun Paul  wrote:
>
>> Hi
>>
>> I have two interfaces mapped to DPDK and run OVS on top of them. I
>> tried to set the CPU mask, but I cannot allocate more than 2 CPUs
>> for PMD threads. Any idea?
>>
>> # /usr/local/bin/ovs-appctl dpif-netdev/pmd-rxq-show
>> pmd thread numa_id 0 core_id 1:
>> isolated : false
>> port: dpdk0 queue-id: 0
>> pmd thread numa_id 0 core_id 2:
>> isolated : false
>> port: dpdk1 queue-id: 0
>
> Could you post the DPDK configuration and what do you want?
>
> Thanks,
> --
> Flavio
>


Re: [ovs-discuss] how many CPU cannot allocate for PMD thread?

2017-09-18 Thread Flavio Leitner
On Mon, 18 Sep 2017 16:51:33 +0800
Sun Paul  wrote:

> Hi
> 
> I have two interfaces mapped to DPDK and run OVS on top of them. I
> tried to set the CPU mask, but I cannot allocate more than 2 CPUs
> for PMD threads. Any idea?
> 
> # /usr/local/bin/ovs-appctl dpif-netdev/pmd-rxq-show
> pmd thread numa_id 0 core_id 1:
> isolated : false
> port: dpdk0 queue-id: 0
> pmd thread numa_id 0 core_id 2:
> isolated : false
> port: dpdk1 queue-id: 0

Could you post the DPDK configuration and what do you want?

Thanks,
-- 
Flavio



[ovs-discuss] how many CPU cannot allocate for PMD thread?

2017-09-18 Thread Sun Paul
Hi

I have two interfaces mapped to DPDK and run OVS on top of them. I
tried to set the CPU mask, but I cannot allocate more than 2 CPUs
for PMD threads. Any idea?

# /usr/local/bin/ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 1:
isolated : false
port: dpdk0 queue-id: 0
pmd thread numa_id 0 core_id 2:
isolated : false
port: dpdk1 queue-id: 0