Korian,

Thanks for your reply. I solved the problem.

Previously, num-mbufs was left at the default:
vpp # show dpdk buffer
name = "dpdk_mbuf_pool_socket 0" available = 7938 allocated = 8446 total =
16384
name = "dpdk_mbuf_pool_socket 1" available = 16384 allocated = 0 total =
16384
vpp #
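(Note that available + allocated sums to the pool total: 7938 + 8446 = 16384. That invariant is the check I use below.)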

I then increased num-mbufs in startup.conf.
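For anyone hitting the same issue, the setting goes in the dpdk section of
startup.conf; something like this (128000 matches the totals shown below,
pick a value for your own traffic):

dpdk {
  num-mbufs 128000
}

After restarting vpp with this setting: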
vpp # show dpdk buffer
name = "dpdk_mbuf_pool_socket 0" available = 119552 allocated = 8448 total
= 128000
name = "dpdk_mbuf_pool_socket 1" available = 128000 allocated = 0 total =
128000
vpp #

With traffic flowing at 40 Gbps with 64-byte packets:
vpp # show dpdk buffer
name = "dpdk_mbuf_pool_socket 0" available = 102069 allocated = 25776 total
= 127845
name = "dpdk_mbuf_pool_socket 1" available = 128000 allocated = 0 total =
128000
vpp #

This is how I found out that buffers are going missing: under load, the
socket 0 pool total drops from 128000 to 127845, so 155 buffers are
unaccounted for.
Thank you so much.

Regards,
Kyunghwan Kim


On Wed, Nov 21, 2018 at 9:29 PM, korian edeline <korian.edel...@ulg.ac.be> wrote:

> Hello,
>
> On 11/21/18 1:10 PM, kyunghwan kim wrote:
> > rx-no-buf          1128129034176
>
>
> You should be able to fix this particular problem by increasing
> num-mbufs in startup.conf; you can check the allocation with vpp# sh
> dpdk buffer
>
>
> > rx-miss                951486596
>
> This is probably another problem.
>
>
> Cheers,
>
> Korian
>
>

-- 
====================
Kim, Kyunghwan
Tel : 080-3600-2306
E-mail : gpi...@gmail.com
====================