You are passing each packet through hostvpp twice (eth1 -> memif0 on the
way in, memif1 -> eth2 on the way out), so at 4 Mpps of external
throughput your hostvpp is effectively forwarding 8 Mpps per core.
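
You can verify this on hostvpp with the runtime counters (illustrative;
the exact node names depend on your interfaces):

  clear runtime
  <let trex run for a few seconds>
  show runtime

The input nodes feeding the eth1->memif0 and memif1->eth2 paths should
each show about 4 Mpps, i.e. roughly 8 Mpps of total work on that one core.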

There are several factors that can impact performance (CPU speed, NUMA
locality, memory channel utilisation), but even so you cannot expect an
order-of-magnitude improvement.
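
One thing worth checking is that both instances are pinned to cores on the
same NUMA node as the NIC. A minimal startup.conf sketch (core numbers are
placeholders, pick them for your topology):

  cpu {
    main-core 0
    corelist-workers 1
  }

You can confirm the placement afterwards with "show threads".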

The best performance I have managed to get so far is 14 Mpps on a 3.2 GHz
Broadwell Xeon, but that was with a snake test setup (l2patch from a
physical interface to memif). In your case you are using the full L2
bridging path, which includes two hash lookups per packet (MAC learn and
forward).
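
If you want to see how much the bridging path costs you, try replacing the
bridge domains on hostvpp with L2 cross-connects, which skip the MAC
lookups entirely (illustrative, using your eth1/eth2 shorthand; the real
VPP interface names will differ):

  set interface l2 xconnect eth1 memif0
  set interface l2 xconnect memif0 eth1
  set interface l2 xconnect eth2 memif1
  set interface l2 xconnect memif1 eth2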

Damjan

On 7 August 2017 at 15:29:18, khers (s3m2e1.6s...@gmail.com) wrote:

> Hi,
> I am interested in connecting two instances of vpp with memif (one vpp
> running on the host and another vpp running in an lxc container).
> I have the setup working functionally, but I have a problem with the
> performance test.
> I did the following steps:
> 1. First of all, I installed lxc on my system.
> 2. Then I built and installed vpp in the container (I'll call it lxcvpp).
> 3. I installed vpp on the host (I'll call it hostvpp).
> 4. I created two memif interfaces on hostvpp:
> create memif socket /tmp/unix_socket.file1 master
> create memif socket /tmp/unix_socket.file2 slave
> 5. I created two memif interfaces on lxcvpp:
> create memif socket /share/unix_socket.file1 slave
> create memif socket /share/unix_socket.file2 master
> 6. I have two physical interfaces bound to hostvpp (I'll call them eth1
> and eth2). I bridged my input interface (eth1) with memif0 (bridge-domain
> 1) and also bridged eth2 with memif1 (bridge-domain 2); the corresponding
> commands are sketched after the diagram below.
>
> 7. Moreover, I bridged memif0 and memif1 in lxcvpp.
>
> 8. I used trex as the traffic generator. Traffic is transmitted from trex
> to hostvpp via eth1 and received back from hostvpp's eth2 interface. The
> packet flow of this scenario is shown below.
>
> trex---->eth1(hostvpp)---->memif0(hostvpp)---->memif0(lxcvpp)---->
> memif1(lxcvpp)---->memif1(hostvpp)---->eth2(hostvpp)---->trex
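>
> For reference, the bridging in steps 6 and 7 corresponds to CLI along
> these lines (interface names follow my eth1/eth2 shorthand, and the
> bridge-domain id on lxcvpp is just an illustrative choice):
>
> hostvpp:
>   set interface l2 bridge eth1 1
>   set interface l2 bridge memif0 1
>   set interface l2 bridge eth2 2
>   set interface l2 bridge memif1 2
>
> lxcvpp:
>   set interface l2 bridge memif0 1
>   set interface l2 bridge memif1 1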
>
> After running trex, I got 4 Mpps with 64B packets. Is this the maximum
> throughput of memif in this scenario?
> I expected much higher throughput than 4 Mpps. Is there any solution or
> tuning to get more throughput? (I allocated one core to hostvpp and
> another core to lxcvpp.)
>
> Cheers,
> khers
_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
