It looks like I was too optimistic about the syscalls I was planning to use.
I was not able to get more than 3 Mpps, so I switched to standard shared memory.

After a bit of tuning, I’m getting the following results:

Broadwell, 3.2 GHz, Turbo Boost disabled:

IXIA - XL710-40G - VPP1 - MEMIF - VPP2 - XL710-40G - IXIA

Both VPP instances are running single-core, so it is a symmetrical setup where
each VPP forwards between the physical NIC and MEMIF.

With 64B packets, I’m getting 13.6 Mpps aggregate throughput.
With 1500B packets, I’m getting around 29 Gbps.

A nice thing about this new setup is that both VPPs can run inside unprivileged
containers.

New code is in gerrit...
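
For anyone who wants a rough mental model before reading the code: here is a
minimal sketch of what a shared-memory descriptor ring between the two
processes could look like, based only on the descriptor-ring description in
the quoted mail below. Every name and field here is hypothetical; the layout
actually used is the one in the gerrit change.

#include <stdint.h>

#define RING_SIZE 1024                 /* assumed power-of-two number of slots */

typedef struct
{
  uint32_t offset;                     /* buffer offset inside the shared region */
  uint32_t length;                     /* valid bytes in the buffer */
  uint16_t flags;                      /* e.g. "packet continues in next slot" */
} ring_desc_t;

typedef struct
{
  volatile uint32_t head;              /* advanced by the producer (slave) */
  volatile uint32_t tail;              /* advanced by the consumer (master) */
  ring_desc_t desc[RING_SIZE];         /* descriptor slots */
} ring_t;

The slave allocates the region, passes its fd to the master, and produces
descriptors on both rings (full buffers for the master's rx, empty buffers for
the master's tx); the master consumes them, as described below.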


> On 14 Feb 2017, at 14:21, Damjan Marion (damarion) <damar...@cisco.com> wrote:
> 
> 
> I got first pings running over new shared memory interface driver.
> Code [1] is still very fragile, but basic packet forwarding works ...
> 
> This interface defines master/slave relationship.
> 
> Some characteristics:
> - slave can run inside un-privileged containers
> - master can run inside container, but it requires global PID namespace and 
> PTRACE capability
> - initial connection is done over the unix socket, so for container 
> networking socket file needs to be mapped into container
> - slave allocates shared memory for descriptor rings and passes the FD to the
> master (see the SCM_RIGHTS sketch after this list)
> - slave is ring producer for both tx and rx, it fills rings with either full 
> or empty buffers
> - master is ring consumer, it reads descriptors and executes memcpy from/to 
> buffer
> - process_vm_readv and process_vm_writev Linux system calls are used to copy
> data directly between master and slave virtual memory (avoids a 2nd memcpy)
> - process_vm_* system calls are executed once per vector of packets (see the
> process_vm_readv sketch after this list)
> - from security perspective, slave doesn’t have access to master memory
> - currently polling-only
> - reconnection should just work - slave runs a reconnect process in case the
> master disappears
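> 
> To illustrate the fd handover mentioned in the list above: a minimal
> SCM_RIGHTS sketch of how the slave could send its shared-memory fd to the
> master over the already-connected unix socket. Only the libc/kernel APIs
> (sendmsg and the CMSG_* macros) are real; the function name and the lack of
> error handling are simplifications, not the actual memif code.
> 
> #include <string.h>
> #include <sys/socket.h>
> #include <sys/uio.h>
> 
> /* send one file descriptor (the shared-memory region) over a connected
>  * AF_UNIX socket using SCM_RIGHTS ancillary data */
> static int
> send_region_fd (int sock_fd, int mem_fd)
> {
>   char byte = 0;                       /* sendmsg() needs >= 1 byte of payload */
>   struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
>   char ctl[CMSG_SPACE (sizeof (int))];
>   struct msghdr msg;
>   struct cmsghdr *cmsg;
> 
>   memset (&msg, 0, sizeof (msg));
>   memset (ctl, 0, sizeof (ctl));
>   msg.msg_iov = &iov;
>   msg.msg_iovlen = 1;
>   msg.msg_control = ctl;
>   msg.msg_controllen = sizeof (ctl);
> 
>   cmsg = CMSG_FIRSTHDR (&msg);
>   cmsg->cmsg_level = SOL_SOCKET;
>   cmsg->cmsg_type = SCM_RIGHTS;        /* "this message carries an fd" */
>   cmsg->cmsg_len = CMSG_LEN (sizeof (int));
>   memcpy (CMSG_DATA (cmsg), &mem_fd, sizeof (int));
> 
>   return sendmsg (sock_fd, &msg, 0) < 0 ? -1 : 0;
> }
> 
> The master side would do the mirror-image recvmsg() and then mmap() the
> received fd; SCM_RIGHTS is simply the standard way to move an fd between
> processes over a unix socket, so the real code may differ in details.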
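> 
> And for the process_vm_* copy path: a sketch of the "once per vector of
> packets" idea. process_vm_readv() takes an array of iovecs for each side, so
> the master can pull every packet of a vector out of the slave's address space
> in a single call (this is also what requires the PTRACE capability and the
> shared PID namespace mentioned above). Only the syscall itself is real; the
> helper is made up, and per the newer mail at the top this path was later
> replaced by plain shared memory.
> 
> #define _GNU_SOURCE                    /* for process_vm_readv() */
> #include <sys/types.h>
> #include <sys/uio.h>
> 
> /* copy n packets from the slave's address space into local buffers;
>  * local[i] / remote[i] describe the i-th packet on each side */
> static ssize_t
> copy_vector_from_slave (pid_t slave_pid,
>                         struct iovec *local, struct iovec *remote,
>                         unsigned long n)
> {
>   /* one system call regardless of how many packets are in the vector;
>    * process_vm_writev() is the mirror image for the tx direction */
>   return process_vm_readv (slave_pid,
>                            local, n,   /* destination iovecs (master side) */
>                            remote, n,  /* source iovecs (slave addresses)  */
>                            0);         /* flags: must be 0 */
> }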
> 
> TODO:
> - multi-queue
> - interrupt mode (likely simple byte read/write to file descriptor)
> - lightweight library to be used for non-VPP clients
> - L3 mode ???
> - perf tuning
> - user-mode memcpy - master maps slave buffer memory directly…
> - docs / specification
> 
> At this point I would really like to hear feedback from people,
> especially from the usability side.
> 
> config is basically:
> 
> create memif socket /path/to/unix_socket.file [master|slave]
> set int state memif0 up
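> 
> For a back-to-back test like the ping below, the two sides might be
> configured along these lines (addresses taken from the output below; which
> end is master and which is slave is arbitrary here, and the address command
> is just the usual "set int ip address"):
> 
>   create memif socket /path/to/unix_socket.file master
>   set int state memif0 up
>   set int ip address memif0 172.16.0.1/24
> 
> and on the other instance:
> 
>   create memif socket /path/to/unix_socket.file slave
>   set int state memif0 up
>   set int ip address memif0 172.16.0.2/24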
> 
> DBGvpp# show interfaces
>              Name               Idx       State          Counter          Count
> local0                            0        down
> memif0                            1         up
> DBGvpp# show interfaces address
> local0 (dn):
> memif0 (up):
>  172.16.0.2/24
> DBGvpp# ping 172.16.0.1
> 64 bytes from 172.16.0.1: icmp_seq=1 ttl=64 time=18.4961 ms
> 64 bytes from 172.16.0.1: icmp_seq=2 ttl=64 time=18.4282 ms
> 64 bytes from 172.16.0.1: icmp_seq=3 ttl=64 time=26.4333 ms
> 64 bytes from 172.16.0.1: icmp_seq=4 ttl=64 time=18.4255 ms
> 64 bytes from 172.16.0.1: icmp_seq=5 ttl=64 time=14.4133 ms
> 
> Statistics: 5 sent, 5 received, 0% packet loss
> DBGvpp# show interfaces
>              Name               Idx       State          Counter          Count
> local0                            0        down
> memif0                            1         up       rx packets                 5
>                                                      rx bytes                 490
>                                                      tx packets                 5
>                                                      tx bytes                 490
>                                                      drops                      5
>                                                      ip4                        5
> 
> [1] https://gerrit.fd.io/r/#/c/5004/

_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
