Re: [vpp-dev] Multicore Performance test

2018-11-22 Thread kyunghwan kim
Korian,

While running the performance test again with num-mbufs set to 128000,
I hit the same bug.
We are testing bidirectionally at 40 Gbps.
One of the two ports receives only a single frame ("rx packets 1" below).

Do you have any other ideas?
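
For reference, the relevant part of my startup.conf looks roughly like the sketch below. Treat it as a minimal sketch rather than my literal file; the dev lines are just the PCI addresses implied by the interface names in the output further down.

dpdk {
  # NIC PCI addresses (inferred from FortyGigabitEthernet3/0/0 and 4/0/0)
  dev 0000:03:00.0
  dev 0000:04:00.0
  # per-socket mbuf pool size; the default is 16384
  num-mbufs 128000
}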


vpp# show interface
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
FortyGigabitEthernet3/0/0         1      up          9000/0/0/0     rx packets             863402200
                                                                    rx bytes            107061872736
                                                                    tx packets             431639482
                                                                    tx bytes             53523295686
                                                                    drops                      52392
                                                                    ip4                    863402199
                                                                    rx-miss                  1591588
FortyGigabitEthernet4/0/0         2      up          9000/0/0/0     rx packets                     1
                                                                    rx bytes                      60
                                                                    tx packets             431710328
                                                                    tx bytes             53532080590
                                                                    drops                          1
local0                            0     down          0/0/0/0
vpp# show dpdk buffer
name="dpdk_mbuf_pool_socket0"  available =  110565 allocated =   17369 total =  127934
name="dpdk_mbuf_pool_socket1"  available =  128000 allocated =       0 total =  128000
vpp#

Regards,
Kyunghwan Kim

On Thu, Nov 22, 2018 at 11:42 AM, kyunghwan kim wrote:

> Korian,
>
> Thanks for your reply,
> I solved the problem.
>
> Previously, num-mbufs was at its default:
> vpp# show dpdk buffer
> name="dpdk_mbuf_pool_socket0"  available = 7938   allocated = 8446  total = 16384
> name="dpdk_mbuf_pool_socket1"  available = 16384  allocated = 0     total = 16384
> vpp#
>
> After increasing num-mbufs to 128000 in startup.conf:
> vpp# show dpdk buffer
> name="dpdk_mbuf_pool_socket0"  available = 119552 allocated = 8448  total = 128000
> name="dpdk_mbuf_pool_socket1"  available = 128000 allocated = 0     total = 128000
> vpp#
>
> Under load at 40 Gbps with 64-byte frames:
> vpp# show dpdk buffer
> name="dpdk_mbuf_pool_socket0"  available = 102069 allocated = 25776 total = 127845
> name="dpdk_mbuf_pool_socket1"  available = 128000 allocated = 0     total = 128000
> vpp#
>
> I can also see that some buffers go missing under load (the pool total drops from 128000 to 127845).
> Thank you so much.
>
> Regards,
> Kyunghwan Kim
>
>
> On Wed, Nov 21, 2018 at 9:29 PM, korian edeline wrote:
>
>> Hello,
>>
>> On 11/21/18 1:10 PM, kyunghwan kim wrote:
>> > rx-no-buf  1128129034176
>>
>>
> >> You should be able to fix this particular problem by increasing
> >> num-mbufs in startup.conf. You can check the allocation with
> >> "vpp# sh dpdk buffer".
>>
>>
> >> > rx-miss      951486596
>>
>> This is probably another problem.
>>
>>
>> Cheers,
>>
>> Korian
>>
>>
>
> --
> 
> キム、キョンファン
> Tel : 080-3600-2306
> E-mail : gpi...@gmail.com
> 


-- 

キム、キョンファン
Tel : 080-3600-2306
E-mail : gpi...@gmail.com



Re: [vpp-dev] Multicore Performance test

2018-11-21 Thread kyunghwan kim
Korian,

Thanks for your reply,
I solved the problem.

Previously, num-mbufs was at its default:
vpp# show dpdk buffer
name="dpdk_mbuf_pool_socket0"  available = 7938   allocated = 8446  total = 16384
name="dpdk_mbuf_pool_socket1"  available = 16384  allocated = 0     total = 16384
vpp#

After increasing num-mbufs to 128000 in startup.conf:
vpp# show dpdk buffer
name="dpdk_mbuf_pool_socket0"  available = 119552 allocated = 8448  total = 128000
name="dpdk_mbuf_pool_socket1"  available = 128000 allocated = 0     total = 128000
vpp#

Under load at 40 Gbps with 64-byte frames:
vpp# show dpdk buffer
name="dpdk_mbuf_pool_socket0"  available = 102069 allocated = 25776 total = 127845
name="dpdk_mbuf_pool_socket1"  available = 128000 allocated = 0     total = 128000
vpp#
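
As a rough sanity check on the load (my own back-of-the-envelope figure, not something the test gear reported):

  64 B frame + 20 B preamble/IFG = 84 B = 672 bits on the wire
  40e9 bit/s / 672 bit ~= 59.5 Mpps per direction

which helps explain why the default pool of 16384 mbufs ran out of buffers so quickly.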

I can also see that some buffers go missing under load (the pool total drops from 128000 to 127845).
Thank you so much.

Regards,
Kyunghwan Kim


On Wed, Nov 21, 2018 at 9:29 PM, korian edeline wrote:

> Hello,
>
> On 11/21/18 1:10 PM, kyunghwan kim wrote:
> > rx-no-buf  1128129034176
>
>
> You should be able to fix this particular problem by increasing
> num-mbufs in startup.conf. You can check the allocation with
> "vpp# sh dpdk buffer".
>
>
> > rx-miss      951486596
>
> This is probably another problem.
>
>
> Cheers,
>
> Korian
>
>

-- 

キム、キョンファン
Tel : 080-3600-2306
E-mail : gpi...@gmail.com



[vpp-dev] Multicore Performance test

2018-11-21 Thread kyunghwan kim
  tx frames ok                                   41954781733
  tx bytes ok                                  5202392934828
  rx frames ok                                   83904541699
  rx bytes ok                                 10522147508516
  rx missed                                        951486596
  rx no bufs                                    864158443136
  extended stats:
    rx good packets                              83904541699
    tx good packets                              41954781733
    rx good bytes                              10522147508516
    tx good bytes                               5202392934828
    rx missed errors                               951486596
    rx mbuf allocation errors                   864158443136
    rx unicast packets                           84856028295
    rx unknown protocol packets                   3251649671
    tx unicast packets                           41954781732
    tx broadcast packets                                   1
    rx size 64 packets                                     1
    rx size 128 to 255 packets                   84856028294
    tx size 64 packets                                     1
    tx size 65 to 127 packets                            293
    tx size 128 to 255 packets                   41954781732
local0                             0    down  local0
  local
vpp#
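
(For reference, these per-NIC counters, including the "extended stats:" block, are driver-level statistics; something like "vpp# show hardware-interfaces detail" prints this kind of dump.)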

Regards,

Kyunghwan Kim