Re: [vpp-dev] Query Regarding Ipv6 traffic issue ( lcp )

2022-03-09 Thread truring truring
Hi Ole,

By lcp I mean the Linux control plane plugin. We are not seeing IPv6 ND
packets arriving on the tap interface.
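Some diagnostics that might help narrow down where the ND packets are dropped (a sketch; the interface name is a placeholder and CLI output varies by build):

```shell
# Check that the lcp pair exists and which host tap it maps to
vppctl show lcp

# Trace packets from the NIC to see which node consumes the ND packets
vppctl clear trace
vppctl trace add dpdk-input 50
# ... wait for some ND traffic, then:
vppctl show trace

# IPv6 state on the VPP-side interface, and per-node error counters
vppctl show ip6 interface GigabitEthernet0/8/0
vppctl show errors
```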



Best Regards
Puneet Singh

On Wed, 9 Mar 2022 at 12:43, Ole Troan  wrote:

>
> I am using VPP 21.06 with an lcp that handles IPv4 traffic properly.
>
>
> What do you mean by “an lcp…”?
>
> I am facing an issue with IPv6 traffic: it is not reaching the tap
> interface and is instead handled by the VPP node. The docs mention
> disabling ND; how can I disable ND?
>
>
> You need to describe your problem in more detail.
>
> Ole
>
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20997): https://lists.fd.io/g/vpp-dev/message/20997
Mute This Topic: https://lists.fd.io/mt/89656729/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-




[vpp-dev] Query Regarding Ipv6 traffic issue ( lcp )

2022-03-08 Thread truring truring
Hi Team ,


I am using VPP 21.06 with an lcp that handles IPv4 traffic properly.

I am facing an issue with IPv6 traffic: it is not reaching the tap
interface and is instead handled by the VPP node. The docs mention
disabling ND; how can I disable ND?
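Depending on which side the docs mean, ND/RA handling can be adjusted either in VPP or on the Linux tap. A sketch of both (interface names are placeholders; verify the exact CLI syntax against your VPP build):

```shell
# VPP side: suppress router advertisements on the VPP interface
# (the "ip6 nd ... ra-suppress" CLI from the ip6-nd node)
vppctl ip6 nd GigabitEthernet0/8/0 ra-suppress

# Linux side: stop the kernel from acting on RAs received on the tap
sysctl -w net.ipv6.conf.tap0.accept_ra=0
```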

Best Regards

Puneet Singh

-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#20983): https://lists.fd.io/g/vpp-dev/message/20983
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Question Regarding vlib Memory Leak Debugging

2022-03-07 Thread truring truring
Thanks, Ben, for your reply; the link is helpful for memory leak issues.

Could you please suggest how to debug a vlib buffer leak happening
inside a plugin? Is there a way to print the contents of an allocated
vlib buffer?
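For a buffer you still hold an index for, something along these lines works inside a plugin (a sketch against the vlib/vppinfra API; it only compiles inside the VPP tree, and the function name is made up for illustration):

```c
#include <vlib/vlib.h>

/* Dump metadata and a hexdump of one buffer, given its index.
 * 'vm' is the vlib_main_t of the calling thread. */
static void
dump_buffer (vlib_main_t * vm, u32 buffer_index)
{
  vlib_buffer_t *b = vlib_get_buffer (vm, buffer_index);
  u8 *data = vlib_buffer_get_current (b);

  clib_warning ("buffer %u: current_length %u flags 0x%x",
                buffer_index, b->current_length, b->flags);
  /* format_hexdump is a standard vppinfra formatter */
  clib_warning ("%U", format_hexdump, data, (uword) b->current_length);
}
```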




On Mon, 7 Mar 2022 at 13:36, Benoit Ganne (bganne)  wrote:

> > What are the general guidelines for debugging a vlib buffer leak?
> > Is there a way to print the details of buffers that are in an
> > allocated state?
>
> You can track allocations with memory traces:
> https://s3-docs.fd.io/vpp/22.06/gettingstarted/troubleshooting/mem.html
>
> Best
> ben
>

-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#20976): https://lists.fd.io/g/vpp-dev/message/20976
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] Question Regarding vlib Memory Leak Debugging

2022-03-06 Thread truring truring
Hi Team,



What are the general guidelines for debugging a vlib buffer leak?



Is there a way to print the details of buffers that are in an allocated state?
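For reference, the memory-trace workflow from the troubleshooting docs looks roughly like this (CLI names from recent VPP releases; check against your version):

```shell
# Record allocation call stacks on the main heap
vppctl memory-trace on main-heap
# ... run traffic for a while, then dump outstanding allocations:
vppctl show memory main-heap verbose

# Buffer pool usage; a steadily growing used count suggests a buffer leak
vppctl show buffers
```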


Best Regards

Puneet

-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#20960): https://lists.fd.io/g/vpp-dev/message/20960
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] "Incompatible UPT version" error when running VPP v18.01 with DPDK v17.11 on VMware with VMXNET3 interface, ESXi version 6.5/6.7

2018-10-01 Thread truring truring
Hi Everyone,

We're trying to run VPP 18.01 with the DPDK plugin in a guest machine
running Red Hat 7.5. The host is ESXi version 6.5/6.7.

The guest machine has a VMXNET3 interface, and I am getting the
following error while running VPP:

PMD: eth_vmxnet3_dev_init():  >>
PMD: eth_vmxnet3_dev_init(): Hardware version : 1
PMD: eth_vmxnet3_dev_init(): Using device version 1
PMD: eth_vmxnet3_dev_init(): UPT hardware version : 0
PMD: eth_vmxnet3_dev_init(): Incompatible UPT version.


Any help resolving the above issue would be greatly appreciated. Thanks!
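Not a fix, but two checks that may help localize the problem, since UPT negotiation depends on how the device is exposed to the guest (standard DPDK 17.11 tooling; paths may differ in your install):

```shell
# Confirm how the VMXNET3 device is bound (igb_uio/vfio-pci vs the
# vmxnet3 kernel driver)
./usertools/dpdk-devbind.py --status

# Confirm the virtual hardware VMware exposes to the guest
lspci -nn | grep -i vmxnet
```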



Regards
Puneet
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#10723): https://lists.fd.io/g/vpp-dev/message/10723
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] memif: multiple queues VPP-memif slave icmp_responder master Not working with VPP v18.10-rc0~354-g49ca260

2018-09-11 Thread truring truring
Hi ,
I am trying to run the "multiple queues VPP-memif slave icmp_responder
master" example with VPP version 18.10, but I am observing a crash in the
icmpr-epoll binary. It crashes when I bring the memif0/0 interface up.
Steps followed:
Bring up the icmpr-epoll binary in master mode:

libmemif version: 2.0 (debug)
memif version: 512

conn 0 1
MEMIF_DEBUG:src/main.c:memif_create:764: creating socket file
MEMIF_DEBUG:src/main.c:memif_create:805: socket 10 created
MEMIF_DEBUG:src/main.c:memif_create:809: sockopt
MEMIF_DEBUG:src/main.c:memif_create:816: bind
MEMIF_DEBUG:src/main.c:memif_create:822: listen
MEMIF_DEBUG:src/main.c:memif_create:828: stat
ICMP_Responder:add_epoll_fd:240: fd 10 added to epoll
MEMIF_DEBUG:src/socket.c:memif_conn_fd_accept_ready:880: accept called
MEMIF_DEBUG:src/socket.c:memif_conn_fd_accept_ready:890: accept fd 10
MEMIF_DEBUG:src/socket.c:memif_conn_fd_accept_ready:891: conn fd 11
ICMP_Responder:add_epoll_fd:240: fd 11 added to epoll
MEMIF_DEBUG:src/socket.c:memif_msg_send:66: Message type 2 sent
MEMIF_DEBUG:src/socket.c:memif_read_ready:907: call recv
MEMIF_DEBUG:src/socket.c:memif_msg_receive:682: recvmsg fd 11
MEMIF_DEBUG:src/socket.c:memif_msg_receive:684: done
MEMIF_DEBUG:src/socket.c:memif_msg_receive:714: Message type 3 received
MEMIF_DEBUG:src/socket.c:memif_read_ready:909: recv finished
MEMIF_DEBUG:src/socket.c:memif_msg_send:66: Message type 1 sent
MEMIF_DEBUG:src/socket.c:memif_msg_receive:682: recvmsg fd 11
MEMIF_DEBUG:src/socket.c:memif_msg_receive:684: done
MEMIF_DEBUG:src/socket.c:memif_msg_receive:714: Message type 4 received
MEMIF_DEBUG:src/socket.c:memif_msg_send:66: Message type 1 sent
MEMIF_DEBUG:src/socket.c:memif_msg_receive:682: recvmsg fd 11
MEMIF_DEBUG:src/socket.c:memif_msg_receive:684: done
MEMIF_DEBUG:src/socket.c:memif_msg_receive:714: Message type 4 received
MEMIF_DEBUG:src/socket.c:memif_msg_send:66: Message type 1 sent
MEMIF_DEBUG:src/socket.c:memif_msg_receive:682: recvmsg fd 11
MEMIF_DEBUG:src/socket.c:memif_msg_receive:684: done
MEMIF_DEBUG:src/socket.c:memif_msg_receive:714: Message type 5 received
MEMIF_DEBUG:src/socket.c:memif_msg_send:66: Message type 1 sent
MEMIF_DEBUG:src/socket.c:memif_msg_receive:682: recvmsg fd 11
MEMIF_DEBUG:src/socket.c:memif_msg_receive:684: done
MEMIF_DEBUG:src/socket.c:memif_msg_receive:714: Message type 5 received
INFO: memif disconnected!
MEMIF_DEBUG:src/socket.c:memif_msg_send:66: Message type 8 sent
ICMP_Responder:del_epoll_fd:280: fd 11 removed from epoll
ICMP_Responder:del_epoll_fd:277: epoll_ctl: No such file or directory fd 14

Program received signal SIGSEGV, Segmentation fault.
memif_disconnect_internal (c=0x6081c0) at src/main.c:1198
1198  if (c->regions[i].fd > 0)
(gdb) bt
#0  memif_disconnect_internal (c=0x6081c0) at src/main.c:1198
#1  0x77bd326c in memif_control_fd_handler (fd=11,
events=) at src/main.c:1008
#2  0x00403883 in poll_event (timeout=timeout@entry=-1)
at examples/icmp_responder-epoll/main.c:1256
#3  0x00401242 in main ()
at examples/icmp_responder-epoll/main.c:1324
(gdb)
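From the backtrace, `c->regions` appears to be NULL (or already freed) when `memif_disconnect_internal` walks it after the second disconnect message. A defensive guard of the following shape would avoid the SIGSEGV, though the underlying double-disconnect is still worth reporting (a sketch against libmemif's src/main.c; the loop-bound field name is an assumption, only `c->regions[i].fd` is taken from the trace):

```c
/* sketch of a guard in memif_disconnect_internal () around
 * src/main.c:1198; verify field names against your libmemif tree */
if (c->regions != NULL)
  {
    for (i = 0; i < c->regions_num; i++)
      if (c->regions[i].fd > 0)
        close (c->regions[i].fd);
  }
```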

VPP in slave mode:
startup.conf

unix {
  interactive
  nodaemon
  full-coredump
}

cpu {
  workers 2
}

create interface memif id 0 slave rx-queues 2 tx-queues 2
set int state memif0/0 up
set int ip address memif0/0 192.168.1.1/24




Thanks & Regards
Puneet Singh
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#10466): https://lists.fd.io/g/vpp-dev/message/10466
-=-=-=-=-=-=-=-=-=-=-=-