[dpdk-dev] Performance hit - NICs on different CPU sockets

2016-06-16 Thread Take Ceara
On Thu, Jun 16, 2016 at 10:19 PM, Wiles, Keith <keith.wiles at intel.com> wrote:

> [snip: quoted exchange, repeated in full in later entries of this digest]

[dpdk-dev] Performance hit - NICs on different CPU sockets

2016-06-16 Thread Take Ceara
On Thu, Jun 16, 2016 at 9:33 PM, Wiles, Keith <keith.wiles at intel.com> wrote:

> [snip: quoted exchange, repeated in full in later entries of this digest]

[dpdk-dev] Performance hit - NICs on different CPU sockets

2016-06-16 Thread Take Ceara
On Thu, Jun 16, 2016 at 6:59 PM, Wiles, Keith <keith.wiles at intel.com> wrote:
>
> [snip: quoted port map and hyper-threading discussion, repeated in full in later entries]
>
>>The info below is most likely the best performance and utilization of your
>>system, if I got the values right:
>>
>>./warp17 -c 0x0FFFe0 -m 32768 -w 0000:81:00.3 -w 0000:01:00.3 --
>>--qmap 0.0x0003FE --qmap 1.0x0FFE00
>>
>>8 cores on first socket leaving 0-1 lcores for Linux.
>
> 9 cores and leaving the first core or two lcores for Linux
>
> [snip]

The values were almost right :) But that's because we reserve the
first two lcores 

[dpdk-dev] Performance hit - NICs on different CPU sockets

2016-06-16 Thread Wiles, Keith

On 6/16/16, 3:16 PM, "dev on behalf of Wiles, Keith" <dev at dpdk.org on behalf of keith.wiles at intel.com> wrote:

> [snip: quoted exchange, repeated in full in later entries of this digest]

[dpdk-dev] Performance hit - NICs on different CPU sockets

2016-06-16 Thread Wiles, Keith

On 6/16/16, 3:00 PM, "Take Ceara" <dumitru.ceara at gmail.com> wrote:

> [snip: quoted exchange, repeated in full in later entries of this digest]

[dpdk-dev] Performance hit - NICs on different CPU sockets

2016-06-16 Thread Take Ceara
On Thu, Jun 16, 2016 at 5:29 PM, Wiles, Keith  wrote:

>
> Right now I do not know what the issue is with the system. Could be too many
> Rx/Tx ring pairs per port limiting the memory in the NICs, which is why
> you get better performance when you have 8 cores per port. I am not really
> seeing the whole picture of how DPDK is configured, so it is hard to help more. Sorry.

I doubt that there is a limitation wrt running 16 cores per port vs 8
cores per port: I've tried with two different machines connected
back to back, each with one X710 port and 16 cores on each of them
running on that port. In that case our performance doubled as
expected.

>
> Maybe seeing the DPDK command line would help.

The command line I use with ports 01:00.3 and 81:00.3 is:
./warp17 -c 0xF3 -m 32768 -w 0000:81:00.3 -w 0000:01:00.3 --
--qmap 0.0x003FF003F0 --qmap 1.0x0FC00FFC00

Our own qmap args allow the user to control exactly how cores are
split between ports. In this case we end up with:

warp17> show port map
Port 0[socket: 0]:
   Core 4[socket:0] (Tx: 0, Rx: 0)
   Core 5[socket:0] (Tx: 1, Rx: 1)
   Core 6[socket:0] (Tx: 2, Rx: 2)
   Core 7[socket:0] (Tx: 3, Rx: 3)
   Core 8[socket:0] (Tx: 4, Rx: 4)
   Core 9[socket:0] (Tx: 5, Rx: 5)
   Core 20[socket:0] (Tx: 6, Rx: 6)
   Core 21[socket:0] (Tx: 7, Rx: 7)
   Core 22[socket:0] (Tx: 8, Rx: 8)
   Core 23[socket:0] (Tx: 9, Rx: 9)
   Core 24[socket:0] (Tx: 10, Rx: 10)
   Core 25[socket:0] (Tx: 11, Rx: 11)
   Core 26[socket:0] (Tx: 12, Rx: 12)
   Core 27[socket:0] (Tx: 13, Rx: 13)
   Core 28[socket:0] (Tx: 14, Rx: 14)
   Core 29[socket:0] (Tx: 15, Rx: 15)

Port 1[socket: 1]:
   Core 10[socket:1] (Tx: 0, Rx: 0)
   Core 11[socket:1] (Tx: 1, Rx: 1)
   Core 12[socket:1] (Tx: 2, Rx: 2)
   Core 13[socket:1] (Tx: 3, Rx: 3)
   Core 14[socket:1] (Tx: 4, Rx: 4)
   Core 15[socket:1] (Tx: 5, Rx: 5)
   Core 16[socket:1] (Tx: 6, Rx: 6)
   Core 17[socket:1] (Tx: 7, Rx: 7)
   Core 18[socket:1] (Tx: 8, Rx: 8)
   Core 19[socket:1] (Tx: 9, Rx: 9)
   Core 30[socket:1] (Tx: 10, Rx: 10)
   Core 31[socket:1] (Tx: 11, Rx: 11)
   Core 32[socket:1] (Tx: 12, Rx: 12)
   Core 33[socket:1] (Tx: 13, Rx: 13)
   Core 34[socket:1] (Tx: 14, Rx: 14)
   Core 35[socket:1] (Tx: 15, Rx: 15)
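
(For readers decoding the qmap masks: each set bit in the hex value selects one
lcore ID. A minimal standalone sketch, not part of WARP17, that expands a mask
into lcore numbers:)

    /* mask2lcores.c - expand a DPDK-style hex core mask into lcore IDs.
     * Example: ./mask2lcores 0x003FF003F0 prints lcores 4-9 and 20-29,
     * matching the port 0 map above. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        uint64_t mask = strtoull(argc > 1 ? argv[1] : "0x003FF003F0", NULL, 16);

        for (unsigned lcore = 0; lcore < 64; lcore++)
            if (mask & (1ULL << lcore))
                printf("lcore %u\n", lcore);
        return 0;
    }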

Just for reference, the cpu_layout script shows:
$ $RTE_SDK/tools/cpu_layout.py

Core and Socket Information (as reported by '/proc/cpuinfo')


cores =  [0, 1, 2, 3, 4, 8, 9, 10, 11, 12]
sockets =  [0, 1]

        Socket 0        Socket 1

Core 0  [0, 20] [10, 30]
Core 1  [1, 21] [11, 31]
Core 2  [2, 22] [12, 32]
Core 3  [3, 23] [13, 33]
Core 4  [4, 24] [14, 34]
Core 8  [5, 25] [15, 35]
Core 9  [6, 26] [16, 36]
Core 10 [7, 27] [17, 37]
Core 11 [8, 28] [18, 38]
Core 12 [9, 29] [19, 39]

I know it might be complicated to figure out exactly what's happening
in our setup with our own code, so please let me know if you need
additional information.

I appreciate the help!

Thanks,
Dumitru


[dpdk-dev] Performance hit - NICs on different CPU sockets

2016-06-16 Thread Take Ceara
On Thu, Jun 16, 2016 at 4:58 PM, Wiles, Keith  wrote:
>
> From the output below it appears the x710 devices 01:00.[0-3] are on socket 0
> And the x710 devices 02:00.[0-3] sit on socket 1.
>

I assume there's a mistake here. The x710 devices on socket 0 are:
$ lspci | grep -ie "01:.*x710"
01:00.0 Ethernet controller: Intel Corporation Ethernet Controller
X710 for 10GbE SFP+ (rev 01)
01:00.1 Ethernet controller: Intel Corporation Ethernet Controller
X710 for 10GbE SFP+ (rev 01)
01:00.2 Ethernet controller: Intel Corporation Ethernet Controller
X710 for 10GbE SFP+ (rev 01)
01:00.3 Ethernet controller: Intel Corporation Ethernet Controller
X710 for 10GbE SFP+ (rev 01)

and the X710 devices on socket 1 are:
$ lspci | grep -ie "81:.*x710"
81:00.0 Ethernet controller: Intel Corporation Ethernet Controller
X710 for 10GbE SFP+ (rev 01)
81:00.1 Ethernet controller: Intel Corporation Ethernet Controller
X710 for 10GbE SFP+ (rev 01)
81:00.2 Ethernet controller: Intel Corporation Ethernet Controller
X710 for 10GbE SFP+ (rev 01)
81:00.3 Ethernet controller: Intel Corporation Ethernet Controller
X710 for 10GbE SFP+ (rev 01)

> This means the ports on 01.00.xx should be handled by socket 0 CPUs and 
> 02:00.xx should be handled by Socket 1. I can not tell if that is the case 
> for you here. The CPUs or lcores from the cpu_layout.py should help 
> understand the layout.
>

That was the first scenario I tried:
- assign 16 CPUs from socket 0 to port 0 (01:00.3)
- assign 16 CPUs from socket 1 to port 1 (81:00.3)

Our performance measurements show then a setup rate of 1.6M sess/s,
which is less than half of what I get when I install both X710 ports on
socket 1 and use only 16 CPUs from socket 1 for both ports.

I double checked the cpu layout. We also have our own CLI and warnings
when using cores that are not on the same socket as the port they're
assigned to, so the mapping should be fine.
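
(A sketch of that kind of check, using the standard DPDK APIs rather than
WARP17's actual code: compare the socket of each lcore in a port's qmap
against the socket the port's PCI device sits on.)

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_log.h>

    /* Warn when an lcore assigned to a port lives on a different NUMA
     * socket than the port's PCI device. */
    static void check_port_core_locality(uint8_t port_id, unsigned lcore_id)
    {
        int port_socket = rte_eth_dev_socket_id(port_id);
        unsigned core_socket = rte_lcore_to_socket_id(lcore_id);

        if (port_socket >= 0 && (unsigned)port_socket != core_socket)
            RTE_LOG(WARNING, USER1,
                    "lcore %u (socket %u) mapped to port %u (socket %d); "
                    "NIC-core traffic will cross QPI\n",
                    lcore_id, core_socket, port_id, port_socket);
    }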

Thanks,
Dumitru


[dpdk-dev] Performance hit - NICs on different CPU sockets

2016-06-16 Thread Wiles, Keith

On 6/16/16, 11:56 AM, "dev on behalf of Wiles, Keith" <dev at dpdk.org on behalf of keith.wiles at intel.com> wrote:

> [snip: quoted exchange and proposed qmap, repeated in full in the next entry]
>
>8 cores on first socket leaving 0-1 lcores for Linux.

9 cores and leaving the first core or two lcores for Linux
>
> [snip]

[dpdk-dev] Performance hit - NICs on different CPU sockets

2016-06-16 Thread Wiles, Keith

On 6/16/16, 11:20 AM, "Take Ceara"  wrote:

>On Thu, Jun 16, 2016 at 5:29 PM, Wiles, Keith  wrote:
>
>>
>> Right now I do not know what the issue is with the system. Could be too many
>> Rx/Tx ring pairs per port limiting the memory in the NICs, which is why
>> you get better performance when you have 8 cores per port. I am not really
>> seeing the whole picture of how DPDK is configured, so it is hard to help more. Sorry.
>
>I doubt that there is a limitation wrt running 16 cores per port vs 8
>cores per port: I've tried with two different machines connected
>back to back, each with one X710 port and 16 cores on each of them
>running on that port. In that case our performance doubled as
>expected.
>
>>
>> Maybe seeing the DPDK command line would help.
>
>The command line I use with ports 01:00.3 and 81:00.3 is:
>./warp17 -c 0xF3 -m 32768 -w 0000:81:00.3 -w 0000:01:00.3 --
>--qmap 0.0x003FF003F0 --qmap 1.0x0FC00FFC00
>
>Our own qmap args allow the user to control exactly how cores are
>split between ports. In this case we end up with:
>
>warp17> show port map
>Port 0[socket: 0]:
>   Core 4[socket:0] (Tx: 0, Rx: 0)
>   Core 5[socket:0] (Tx: 1, Rx: 1)
>   Core 6[socket:0] (Tx: 2, Rx: 2)
>   Core 7[socket:0] (Tx: 3, Rx: 3)
>   Core 8[socket:0] (Tx: 4, Rx: 4)
>   Core 9[socket:0] (Tx: 5, Rx: 5)
>   Core 20[socket:0] (Tx: 6, Rx: 6)
>   Core 21[socket:0] (Tx: 7, Rx: 7)
>   Core 22[socket:0] (Tx: 8, Rx: 8)
>   Core 23[socket:0] (Tx: 9, Rx: 9)
>   Core 24[socket:0] (Tx: 10, Rx: 10)
>   Core 25[socket:0] (Tx: 11, Rx: 11)
>   Core 26[socket:0] (Tx: 12, Rx: 12)
>   Core 27[socket:0] (Tx: 13, Rx: 13)
>   Core 28[socket:0] (Tx: 14, Rx: 14)
>   Core 29[socket:0] (Tx: 15, Rx: 15)
>
>Port 1[socket: 1]:
>   Core 10[socket:1] (Tx: 0, Rx: 0)
>   Core 11[socket:1] (Tx: 1, Rx: 1)
>   Core 12[socket:1] (Tx: 2, Rx: 2)
>   Core 13[socket:1] (Tx: 3, Rx: 3)
>   Core 14[socket:1] (Tx: 4, Rx: 4)
>   Core 15[socket:1] (Tx: 5, Rx: 5)
>   Core 16[socket:1] (Tx: 6, Rx: 6)
>   Core 17[socket:1] (Tx: 7, Rx: 7)
>   Core 18[socket:1] (Tx: 8, Rx: 8)
>   Core 19[socket:1] (Tx: 9, Rx: 9)
>   Core 30[socket:1] (Tx: 10, Rx: 10)
>   Core 31[socket:1] (Tx: 11, Rx: 11)
>   Core 32[socket:1] (Tx: 12, Rx: 12)
>   Core 33[socket:1] (Tx: 13, Rx: 13)
>   Core 34[socket:1] (Tx: 14, Rx: 14)
>   Core 35[socket:1] (Tx: 15, Rx: 15)

On each socket you have 10 physical cores or 20 lcores per socket for 40 lcores 
total.

The above is listing LCORES (or hyper-threads) and not COREs, which I
understand some like to think are interchangeable. The problem is that the
hyper-threads are logically interchangeable, but not performance-wise. If you
have two run-to-completion threads on a single physical core, each on a
different hyper-thread of that core [0,1], then the second lcore or thread (1)
on that physical core will only get at most about 20-30% of the CPU cycles.
Normally it is much less, unless you tune the code to make sure the threads are
not trying to share the internal execution units, but some internal execution
units are always shared.

To get the best performance when hyper-threading is enabled, do not run both
threads on a single physical core; run only hyper-thread 0.

In the table below (from cpu_layout.py) each row lists a physical core id and
its lcore ids per socket. Use the first lcore of each pair for the best
performance:
Core 1  [1, 21] [11, 31]
Use lcore 1 or 11 depending on the socket you are on.

The info below is most likely the best performance and utilization of your
system, if I got the values right:

./warp17 -c 0x0FFFe0 -m 32768 -w 0000:81:00.3 -w 0000:01:00.3 --
--qmap 0.0x0003FE --qmap 1.0x0FFE00

Port 0[socket: 0]:
   Core 2[socket:0] (Tx: 0, Rx: 0)
   Core 3[socket:0] (Tx: 1, Rx: 1)
   Core 4[socket:0] (Tx: 2, Rx: 2)
   Core 5[socket:0] (Tx: 3, Rx: 3)
   Core 6[socket:0] (Tx: 4, Rx: 4)
   Core 7[socket:0] (Tx: 5, Rx: 5)
   Core 8[socket:0] (Tx: 6, Rx: 6)
   Core 9[socket:0] (Tx: 7, Rx: 7)

8 cores on first socket leaving 0-1 lcores for Linux.

Port 1[socket: 1]:
   Core 10[socket:1] (Tx: 0, Rx: 0)
   Core 11[socket:1] (Tx: 1, Rx: 1)
   Core 12[socket:1] (Tx: 2, Rx: 2)
   Core 13[socket:1] (Tx: 3, Rx: 3)
   Core 14[socket:1] (Tx: 4, Rx: 4)
   Core 15[socket:1] (Tx: 5, Rx: 5)
   Core 16[socket:1] (Tx: 6, Rx: 6)
   Core 17[socket:1] (Tx: 7, Rx: 7)
   Core 18[socket:1] (Tx: 8, Rx: 8)
   Core 19[socket:1] (Tx: 9, Rx: 9)

All 10 cores on the second socket.

++Keith

>
> [snip: quoted cpu_layout output, shown in full in a later entry]
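
(One way to act on the hyper-threading advice above: Linux exports each CPU's
sibling list in sysfs, so a launcher can keep only the first hyper-thread of
every physical core. A minimal sketch, assuming the usual sysfs layout and no
DPDK dependency:)

    /* ht_primaries.c - print only the first hyper-thread of each physical
     * core by reading the kernel's thread_siblings_list for each CPU. */
    #include <stdio.h>

    int main(void)
    {
        char path[128];

        for (int cpu = 0; cpu < 1024; cpu++) {
            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
                     cpu);
            FILE *f = fopen(path, "r");
            if (f == NULL)
                break;              /* no more CPUs */
            int first = -1;
            if (fscanf(f, "%d", &first) != 1)
                first = -1;         /* the list starts with the lowest sibling */
            fclose(f);
            if (first == cpu)       /* this CPU is hyper-thread 0 of its core */
                printf("%d\n", cpu);
        }
        return 0;
    }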

[dpdk-dev] Performance hit - NICs on different CPU sockets

2016-06-16 Thread Wiles, Keith
On 6/16/16, 9:36 AM, "Take Ceara"  wrote:

>Hi Keith,
>
>On Tue, Jun 14, 2016 at 3:47 PM, Wiles, Keith <keith.wiles at intel.com> wrote:
>
>> [snip: quoted June 13-14 exchange about PCI bus topology and lspci, shown in full in later entries]
>
>I retested with two 10G X710 ports connected back to back:
>port 0: 0000:01:00.3 - socket 0
>port 1: 0000:81:00.3 - socket 1

Please provide the output from tools/cpu_layout.py.

>
>I ran the following scenarios:
>- assign 16 threads from CPU 0 on socket 0 to port 0 and 16 threads
>from CPU 1 to port 1 => setup rate of 1.6M sess/s
>- assign only the 16 threads from CPU0 for both ports (so 8 threads on
>socket 0 for port 0 and 8 threads on socket 0 for port 1) => setup
>rate of 3M sess/s
>- assign only the 16 threads from CPU1 for both ports (so 8 threads on
>socket 1 for port 0 and 8 threads on socket 1 for port 1) => setup
>rate of 3M sess/s
>
>I also tried a scenario with two machines connected back to back each
>of which had a NIC on socket 1. I assigned 16 threads from socket 1 on
>each machine to the port and performance scaled to 6M sess/s as
>expected.
>
>I double checked all our memory allocations and, at least in the
>tested scenario, we never use memory that's not on the same socket as
>the core.
>
>I pasted below the output of lspci -tv. I see that 0000:01:00.3 and
>0000:81:00.3 are connected to different PCI bridges, but on each of
>those bridges there are also "Intel Corporation Xeon E7 v3/Xeon E5
>v3/Core i7 DMA Channel" devices.
>
>It would be great if you could also take a look in case I
>missed/misunderstood something.
>
>Thanks,
>Dumitru
>


[dpdk-dev] Performance hit - NICs on different CPU sockets

2016-06-14 Thread Wiles, Keith

On 6/14/16, 2:46 AM, "Take Ceara"  wrote:

>Hi Keith,
>
>On Mon, Jun 13, 2016 at 9:35 PM, Wiles, Keith  wrote:
>>
>> [snip: quoted original question and PCI bus topology explanation, shown in full in later entries]
>
>This is the motherboard we use on our system:
>
>http://www.supermicro.com/products/motherboard/Xeon/C600/X10DRX.cfm
>
>I need to swap some NICs around (as now we moved everything on socket
>1) before I can share the lspci output.

FYI: the option for lspci is "lspci -tv", but maybe more options too.

>
>Thanks,
>Dumitru
>





[dpdk-dev] Performance hit - NICs on different CPU sockets

2016-06-14 Thread Take Ceara
Hi Bruce,

On Mon, Jun 13, 2016 at 4:28 PM, Bruce Richardson wrote:
> On Mon, Jun 13, 2016 at 04:07:37PM +0200, Take Ceara wrote:
>> [snip: original question, quoted in full in a later entry]
>>
> Hi,
>
> so long as each thread only ever accesses the NIC on its own local socket,
> then
> there is no performance penalty. It's only when a thread on one socket works
> using a NIC on a remote socket that you start seeing a penalty, with all
> NIC-core communication having to go across QPI.
>
> /Bruce

Thanks for the confirmation. We'll go through our code again to double
check that no thread accesses the NIC or memory on a remote socket.
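
(For reference, the usual DPDK pattern for keeping packet memory NUMA-local is
to create the mbuf pool on the port's own socket; a minimal sketch, with
illustrative sizes and names not taken from WARP17:)

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Create an mbuf pool on the same socket as the port's PCI device,
     * so RX/TX buffers never sit across QPI from the NIC. */
    static struct rte_mempool *
    create_local_pool(uint8_t port_id)
    {
        char name[RTE_MEMPOOL_NAMESIZE];
        int socket = rte_eth_dev_socket_id(port_id);

        snprintf(name, sizeof(name), "mbuf_pool_p%u", port_id);
        return rte_pktmbuf_pool_create(name, 16384 /* mbufs */,
                                       256 /* per-lcore cache */, 0,
                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                       socket < 0 ? SOCKET_ID_ANY : socket);
    }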

Regards,
Dumitru


[dpdk-dev] Performance hit - NICs on different CPU sockets

2016-06-14 Thread Take Ceara
Hi Keith,

On Mon, Jun 13, 2016 at 9:35 PM, Wiles, Keith  wrote:
>
> [snip: quoted original question and PCI bus topology explanation, shown in full in later entries]

This is the motherboard we use on our system:

http://www.supermicro.com/products/motherboard/Xeon/C600/X10DRX.cfm

I need to swap some NICs around (as now we moved everything on socket
1) before I can share the lspci output.

Thanks,
Dumitru


[dpdk-dev] Performance hit - NICs on different CPU sockets

2016-06-13 Thread Wiles, Keith

On 6/13/16, 9:07 AM, "dev on behalf of Take Ceara" <dev at dpdk.org on behalf of dumitru.ceara at gmail.com> wrote:

>Hi,
>
> [snip: original question, quoted in full in a later entry]

Normally the limitation is in the hardware, basically how the PCI bus is
connected to the CPUs (or sockets). How the PCI buses are connected to the
system depends on the motherboard design. I normally see the buses attached to
socket 0, but you could have some of the buses attached to the other sockets or
all on one socket via a PCI bridge device.

No easy way around the problem if some of your PCI buses are split or all on a
single socket. You need to look at your system docs, or at lspci: it has an
option to dump the PCI bus as an ASCII tree, at least on Ubuntu.
>
>Thanks,
>Dumitru Ceara
>





[dpdk-dev] Performance hit - NICs on different CPU sockets

2016-06-13 Thread Take Ceara
Hi,

I'm reposting here as I didn't get any answers on the dpdk-users mailing list.

We're working on a stateful traffic generator (www.warp17.net) using
DPDK and we would like to control two XL710 NICs (one on each socket)
to maximize CPU usage. It looks like we run into the following
limitation:

http://dpdk.org/doc/guides/linux_gsg/nic_perf_intel_platform.html
section 7.2, point 3

We completely split memory/cpu/NICs across the two sockets. However,
the performance with a single CPU and both NICs on the same socket is
better.
Why do all the NICs have to be on the same socket, is there a
driver/hw limitation?

Thanks,
Dumitru Ceara


[dpdk-dev] Performance hit - NICs on different CPU sockets

2016-06-13 Thread Bruce Richardson
On Mon, Jun 13, 2016 at 04:07:37PM +0200, Take Ceara wrote:
> [snip: original question, quoted in full in the previous entry]
>
Hi,

so long as each thread only ever accesses the NIC on its own local socket, then
there is no performance penalty. It's only when a thread on one socket works
using a NIC on a remote socket that you start seeing a penalty, with all
NIC-core communication having to go across QPI.

/Bruce
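
(To make the above concrete: passing the port's own socket ID at queue setup
keeps the descriptor rings local, so an lcore on that socket polls the NIC
without crossing QPI. A minimal sketch with an illustrative ring size, using a
pool created on the same socket as in the earlier sketch:)

    #include <rte_ethdev.h>

    /* Set up one RX queue with its descriptor ring allocated on the
     * port's local socket; poll it only from an lcore on that socket. */
    static int
    setup_local_rxq(uint8_t port_id, uint16_t queue_id,
                    struct rte_mempool *local_pool)
    {
        int socket = rte_eth_dev_socket_id(port_id);

        return rte_eth_rx_queue_setup(port_id, queue_id, 512 /* descriptors */,
                                      socket < 0 ? SOCKET_ID_ANY : socket,
                                      NULL /* default rxconf */, local_pool);
    }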