Re: [vpp-dev] Connection issue between container (slave) and host vpp (master) with memif
Thank you, Damjan. Let me look into that option; it seems like a very good idea. I could use T-Rex to send packets in through an AF interface, bridge them inside the container, and send them back out through another AF interface on the host. Let me look into your idea for benchmarking memif. Thanks again for all your help.

Regards,
Chakri

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#9794): https://lists.fd.io/g/vpp-dev/message/9794
Mute This Topic: https://lists.fd.io/mt/22892000/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-
Re: [vpp-dev] Connection issue between container (slave) and host vpp (master) with memif
It depends on what you expect from the packet generator. I personally map a physical NIC VF into the container and use it to feed packets in from a remote T-Rex box. You can also use the VPP packet-generator to play a PCAP file or craft packets and send them to a memif interface, but I would not expect great performance. A last option would be to use libmemif to write your own packet-generator code.

--
Damjan

> On 29 Jun 2018, at 16:30, chakravarthy.arise...@viasat.com wrote:
>
> Hi Damjan,
>
> Thanks for the reply.
>
> You correctly pointed out my mistake. I was not mapping the host-container
> filesystem for the socket communication.
> After launching the container with the option (-v "/run/vpp/:/run/vpp/" or
> -v "/tmp:/tmp"), it started working.
>
> Along the same lines, I'd like to ask one more question.
>
> Do you know if there is an existing packet generator with a memif interface?
> I'd like to send packets from a container with memif on host1 to a container
> on host2. I'm trying hard to find an existing packet generator with memif.
> Attaching my topology.
>
> Can you guide me or suggest an alternative option?
>
> Thanks again for your time.
>
> Regards,
> Chakri
>
> View/Reply Online (#9744): https://lists.fd.io/g/vpp-dev/message/9744

View/Reply Online (#9745): https://lists.fd.io/g/vpp-dev/message/9745
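[Editor's note] For the second option above, a rough sketch of driving the memif master from the host's VPP packet-generator might look like the following. The stream name, packet counts, and header values are purely illustrative, and the exact stream options (in particular how the output interface is selected) vary between VPP releases, so check `packet-generator new` help in your build before relying on this:

```
vpp# packet-generator new {
  name memif-test
  limit 10000
  size 64-64
  node memif11/33-output
  data { IP4: 1.2.3 -> 4.5.6
         UDP: 192.168.10.1 -> 192.168.10.2
         UDP: 1234 -> 2345
         incrementing 30 }
}
vpp# packet-generator enable-stream memif-test
```

As Damjan notes, the pg node is single-threaded and not optimized for throughput, so this is suitable for functional tests rather than benchmarking.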
Re: [vpp-dev] Connection issue between container (slave) and host vpp (master) with memif
Hi Chakri,

How do you map the socket file into the container filesystem?

Tnx,

--
Damjan

> On 29 Jun 2018, at 07:40, chakravarthy.arise...@viasat.com wrote:
>
> Hi,
>
> How do we connect a memif inside the host to a memif inside a container?
> Somehow, the container is not able to communicate with the host.
> Can someone point out what I'm missing?
>
> Thanks,
> Chakri
>
> VPP inside the host
> -------------------
> vpp# show memif
> sockets
>   id  listener  filename
>    0  yes (1)   /run/vpp/memif.sock
>   11  yes (1)   /tmp/memif1.sock
>
> interface memif11/33
>   socket-id 11 id 33 mode ethernet
>   flags admin-up
>   listener-fd 22 conn-fd 0
>   num-s2m-rings 0 num-m2s-rings 0 buffer-size 0 num-regions 0
> interface memif0/0
>   socket-id 0 id 0 mode ethernet
>   flags
>   listener-fd 21 conn-fd 0
>   num-s2m-rings 1 num-m2s-rings 1 buffer-size 0 num-regions 0
>   local-disc-reason "disconnected"
>
> vpp# show int
>   Name          Idx  State
>   local0          0   up
>   memif0/0        6   up
>   memif11/33      4   up
>
> Container VPP configuration
> ---------------------------
> vpp# show memif
> sockets
>   id  listener  filename
>    0  no        /run/vpp/memif.sock
>   11  no        /tmp/memif1.sock
>
> interface memif11/33
>   socket-id 11 id 33 mode ethernet
>   flags admin-up slave zero-copy
>   listener-fd 0 conn-fd 0
>   num-s2m-rings 0 num-m2s-rings 0 buffer-size 0 num-regions 0
>
> vpp# sh int
>   Name          Idx  State  Counter    Count
>   local0          0   up    drops      0
>   memif0/0        2   up
>   memif11/33      1   up    drops      0
>                             tx-error   0
>
> On the host, these commands are used to create the master socket:
>
> create memif socket id 11 filename /tmp/memif1.sock
> create interface memif id 33 socket-id 11 master
> set int state memif11/33 up
>
> Inside the container, these commands are used to create the slave socket:
>
> create memif socket id 11 filename /tmp/memif1.sock
> create interface memif id 33 socket-id 11 slave
> set int state memif11/33 up
>
> Interestingly, host vpp is able to connect to the client (icmpr-epoll) on
> the host. The issue is only with the client socket inside the container.
> View/Reply Online (#9736): https://lists.fd.io/g/vpp-dev/message/9736

View/Reply Online (#9738): https://lists.fd.io/g/vpp-dev/message/9738
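[Editor's note] Damjan's question points at the root cause, and the fix later confirmed in this thread is to bind-mount the directory holding the memif control socket into the container so that master and slave see the same path. A hedged sketch of the container launch (the image name is a placeholder, and depending on your setup VPP in a container may need further flags, e.g. for hugepages):

```
# Map the directory containing /tmp/memif1.sock into the container,
docker run -it -v /tmp:/tmp vpp-container-image

# or, when using the default socket location /run/vpp/memif.sock:
docker run -it -v /run/vpp:/run/vpp vpp-container-image
```

Without one of these `-v` mounts, the slave's connect attempt never reaches the master's listener, which matches the `listener-fd 0 conn-fd 0` state shown in the container's `show memif` output.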
[vpp-dev] Connection issue between container (slave) and host vpp (master) with memif
Hi,

How do we connect a memif inside the host to a memif inside a container? Somehow, the container is not able to communicate with the host. Can someone point out what I'm missing?

Thanks,
Chakri

VPP inside the host
-------------------
vpp# show memif
sockets
  id  listener  filename
   0  yes (1)   /run/vpp/memif.sock
  11  yes (1)   /tmp/memif1.sock

interface memif11/33
  socket-id 11 id 33 mode ethernet
  flags admin-up
  listener-fd 22 conn-fd 0
  num-s2m-rings 0 num-m2s-rings 0 buffer-size 0 num-regions 0
interface memif0/0
  socket-id 0 id 0 mode ethernet
  flags
  listener-fd 21 conn-fd 0
  num-s2m-rings 1 num-m2s-rings 1 buffer-size 0 num-regions 0
  local-disc-reason "disconnected"

vpp# show int
  Name          Idx  State
  local0          0   up
  memif0/0        6   up
  memif11/33      4   up

Container VPP configuration
---------------------------
vpp# show memif
sockets
  id  listener  filename
   0  no        /run/vpp/memif.sock
  11  no        /tmp/memif1.sock

interface memif11/33
  socket-id 11 id 33 mode ethernet
  flags admin-up slave zero-copy
  listener-fd 0 conn-fd 0
  num-s2m-rings 0 num-m2s-rings 0 buffer-size 0 num-regions 0

vpp# sh int
  Name          Idx  State  Counter    Count
  local0          0   up    drops      0
  memif0/0        2   up
  memif11/33      1   up    drops      0
                            tx-error   0

On the host, these commands are used to create the master socket:

create memif socket id 11 filename /tmp/memif1.sock
create interface memif id 33 socket-id 11 master
set int state memif11/33 up

Inside the container, these commands are used to create the slave socket:

create memif socket id 11 filename /tmp/memif1.sock
create interface memif id 33 socket-id 11 slave
set int state memif11/33 up

Interestingly, host vpp is able to connect to the client (icmpr-epoll) on the host. The issue is only with the client socket inside the container.
View/Reply Online (#9736): https://lists.fd.io/g/vpp-dev/message/9736
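[Editor's note] The memif control channel that the commands above set up is an ordinary AF_UNIX socket: the master binds and listens on a filesystem path, and the slave connects to that same path. A minimal Python sketch (not memif itself, just the underlying socket mechanism) of why both sides must see the same file, which is exactly what a container in its own mount namespace does not, unless the directory is bind-mounted:

```python
import os
import socket
import tempfile
import threading

# "Master" side: bind a listener on a filesystem path, as the host VPP
# does for /tmp/memif1.sock.
path = os.path.join(tempfile.mkdtemp(), "memif-demo.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(1)

def accept_one():
    conn, _ = server.accept()
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

# "Slave" side: connecting works only because this process can see the
# same path in its filesystem.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(path)
greeting = client.recv(16)
t.join()
print(greeting.decode())

# A path that does not exist in the connecting process's view of the
# filesystem fails immediately -- the same failure a container without
# the bind mount hits.
missing = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    missing.connect(path + ".does-not-exist")
    err = None
except FileNotFoundError as e:
    err = e
```

This is why the thread's fix is simply `-v /tmp:/tmp` (or `-v /run/vpp:/run/vpp`) on the `docker run` command line: it makes the master's listening socket file visible at the same path inside the container.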