Hi, Sara,

May I know what image you are using to achieve the good performance with VPP?
I assume you are running the same iperf3 tests?

Thanks very much!

Regards,
Yichen

From: <vpp-dev@lists.fd.io> on behalf of Sara Gittlin <sara.gitt...@gmail.com>
Date: Sunday, March 25, 2018 at 5:31 AM
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

FYI - when I launch another image (not based on the clear-linux-kvm one), I'm achieving good performance (exactly like OVS-DPDK, which is expected for a single flow/MAC).
Any idea why?
This is the launch cmdline (though I still can't find a way to set the tx/rx queue size):

sudo qemu-system-x86_64 -name vhost1 -m 2048M -smp 4 -cpu host \
    -hda /var/lib/libvirt/images/vm1.qcow2 \
    -boot c -enable-kvm -no-reboot -net none \
    -chardev socket,id=char3,path=/var/run/vpp/sock2.sock \
    -netdev type=vhost-user,id=mynet1,chardev=char3,vhostforce \
    -device virtio-net-pci,mac=c0:00:00:00:00:ff,netdev=mynet1 \
    -object memory-backend-file,id=mem,size=2048M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem -mem-prealloc
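
For what it's worth, newer QEMU exposes the virtqueue depth as virtio-net-pci device properties (rx_queue_size needs QEMU >= 2.7, tx_queue_size needs QEMU >= 2.10), so a recent-enough QEMU could extend the device argument above roughly like this (a sketch, untested here):

    -device virtio-net-pci,mac=c0:00:00:00:00:ff,netdev=mynet1,rx_queue_size=1024,tx_queue_size=1024 \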
Thank you
-Sara


On Sun, Mar 25, 2018 at 11:56 AM, Sara Gittlin <sara.gitt...@gmail.com> wrote:
Thanks Steven and Alec,
I'm using iperf as a basic sanity check - I'll run pktgen on the VMs later.
The result I got - 20 Mbps (~20K pps) throughput and 1 ms latency - is not an iperf issue.
Steven - you suggested setting the rx-queue-size when launching the VM, but unfortunately I can't find this option on the qemu-system-x86_64 command line.
I'm using this:

qemu-system-x86_64 \
    -enable-kvm -m 1024 \
    -bios OVMF.fd \
    -smp 4 -cpu host \
    -vga none -nographic \
    -drive file="1-clear-14200-kvm.img",if=virtio,aio=threads \
    -chardev socket,id=char1,path=/var/run/vpp/sock1.sock \
    -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
    -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem -mem-prealloc \
    -debugcon file:debug.log -global isa-debugcon.iobase=0x402



-Sara


On Thu, Mar 22, 2018 at 6:10 PM, steven luong <slu...@cisco.com> wrote:
Sara,

Iperf3 is not blasting traffic fast enough. You could try specifying multiple parallel streams using -P. Then you will likely encounter vhost-user dropping packets, since you are using the default virtqueue size of 256. You'll need to specify rx_queue_size=1024 in the qemu start command when you launch the VM to reduce the drops.
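
For example, something like the following drives several parallel UDP streams with no bandwidth cap (10.0.0.2 is a placeholder for the receiving VM's address):

    # on the receiving VM
    iperf3 -s
    # on the sending VM: 8 parallel UDP streams, unlimited rate, 30 seconds
    iperf3 -c 10.0.0.2 -u -b 0 -P 8 -t 30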

Steven

From: <vpp-dev@lists.fd.io> on behalf of Sara Gittlin <sara.gitt...@gmail.com>
Date: Thursday, March 22, 2018 at 8:12 AM
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Cc: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>

Subject: Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

Hi John and Steven,

Setting this in the startup config didn't help:
vhost-user {
  coalesce-frame 0
}
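
For reference, a minimal /etc/vpp/startup.conf carrying that stanza might look like the sketch below; the cpu section is illustrative only (core numbers depend on your host):

    unix {
      nodaemon
      cli-listen /run/vpp/cli.sock
    }
    cpu {
      main-core 1
      corelist-workers 2-3
    }
    vhost-user {
      coalesce-frame 0
    }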
John - I'm using ping -f for latency and iperf3 for pps testing.
I'll run pktgen in the VMs later.
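
For instance, a flood ping such as the following (10.0.0.2 again being a placeholder address) prints min/avg/max rtt in its summary line:

    # flood ping from one VM to the other; requires root
    sudo ping -f -c 100000 10.0.0.2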

output:
sho int
              Name               Idx       State          Counter          Count
VirtualEthernet0/0/0              1         up       rx packets                11748
                                                     rx bytes                 868648
                                                     tx packets                40958
                                                     tx bytes               58047352
                                                     drops                        29
VirtualEthernet0/0/1              2         up       rx packets                40958
                                                     rx bytes               58047352
                                                     tx packets                11719
                                                     tx bytes                 862806
                                                     tx-error                     29
local0                            0         up


 show hardware
              Name                Idx   Link  Hardware
VirtualEthernet0/0/0               1     up   VirtualEthernet0/0/0
  Ethernet address 02:fe:25:2f:bd:c2
VirtualEthernet0/0/1               2     up   VirtualEthernet0/0/1
  Ethernet address 02:fe:40:16:70:1b
local0                             0    down  local0
  local

-Sara


On Thu, Mar 22, 2018 at 4:53 PM, John DeNisco <jdeni...@cisco.com> wrote:

Hi Sara,

Can you also send the results from show hardware and show interfaces?

What are you using to test your performance?

John


From: <vpp-dev@lists.fd.io> on behalf of Sara Gittlin <sara.gitt...@gmail.com>
Date: Thursday, March 22, 2018 at 9:27 AM
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

This is the output of:

show vhost-user VirtualEthernet0/0/0
Virtio vhost-user interfaces
Global:
  coalesce frames 32 time 1e-3
  number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 1)
virtio_net_hdr_sz 12
 features mask (0xffffffffffffffff):
 features (0x58208000):
   VIRTIO_NET_F_MRG_RXBUF (15)
   VIRTIO_NET_F_GUEST_ANNOUNCE (21)
   VIRTIO_F_ANY_LAYOUT (27)
   VIRTIO_F_INDIRECT_DESC (28)
   VHOST_USER_F_PROTOCOL_FEATURES (30)
  protocol features (0x3)
   VHOST_USER_PROTOCOL_F_MQ (0)
   VHOST_USER_PROTOCOL_F_LOG_SHMFD (1)

 socket filename /var/run/vpp/sock1.sock type server errno "Success"

 rx placement:
   thread 1 on vring 1, polling
 tx placement: spin-lock
   thread 0 on vring 0
   thread 1 on vring 0
   thread 2 on vring 0

 Memory regions (total 2)
 region fd    guest_phys_addr    memory_size        userspace_addr     mmap_offset        mmap_addr
 ====== ===== ================== ================== ================== ================== ==================
  0     32    0x0000000000000000 0x00000000000c0000 0x00007f5a6a600000 0x0000000000000000 0x00007f47c4200000
  1     33    0x0000000000100000 0x000000003ff00000 0x00007f5a6a700000 0x0000000000100000 0x00007f46f4100000

 Virtqueue 0 (TX)
  qsz 256 last_avail_idx 63392 last_used_idx 63392
  avail.flags 0 avail.idx 63530 used.flags 1 used.idx 63392
  kickfd 34 callfd 35 errfd -1

 Virtqueue 1 (RX)
  qsz 256 last_avail_idx 32414 last_used_idx 32414
  avail.flags 1 avail.idx 32414 used.flags 1 used.idx 32414
  kickfd 30 callfd 36 errfd -1

On Thu, Mar 22, 2018 at 3:07 PM, Sara Gittlin <sara.gitt...@gmail.com> wrote:
I don't think these are error counters - anyway, very poor pps.

On Thu, Mar 22, 2018 at 2:55 PM, Sara Gittlin <sara.gitt...@gmail.com> wrote:
In the show err output I see that the l2-output, l2-learn, and l2-input counters are continuously incremented:
show err
   Count                    Node                  Reason
        11                l2-output               L2 output packets
        11                l2-learn                L2 learn packets
        11                l2-input                L2 input packets
         3                l2-flood                L2 flood packets
   8479644                l2-output               L2 output packets
   8479644                l2-learn                L2 learn packets
   8479644                l2-input                L2 input packets

On Thu, Mar 22, 2018 at 11:59 AM, Sara Gittlin <sara.gitt...@gmail.com> wrote:
Hello,
I set up 2 VMs connected to VPP as per the guide:
https://wiki.fd.io/view/VPP/Use_VPP_to_connect_VMs_Using_Vhost-User_Interface
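
Roughly, that guide has VPP create two vhost-user server sockets and put both resulting interfaces into one L2 bridge domain - a sketch from memory, since the exact CLI spelling varies across VPP releases (older ones use "create vhost-user socket ..."):

    vpp# create vhost socket /var/run/vpp/sock1.sock server
    vpp# create vhost socket /var/run/vpp/sock2.sock server
    vpp# set interface state VirtualEthernet0/0/0 up
    vpp# set interface state VirtualEthernet0/0/1 up
    vpp# set interface l2 bridge VirtualEthernet0/0/0 1
    vpp# set interface l2 bridge VirtualEthernet0/0/1 1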

The performance looks very bad: very low pps and large latencies.

UDP packet size 100 B - throughput 500 Mb
average latency is 900 us

I have 2 PMD threads (200% CPU) on the host; in the VMs I see low CPU load (10%).

Can you please tell me what is wrong with my setup?

Thank you in advance
- Sara