Hi John and Steven,
Setting this in the startup config didn't help:
vhost-user {
coalesce-frame 0
}
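For context, a sketch of the startup.conf this stanza sits in (the core-pinning values are illustrative assumptions, not from my setup; note also that recent VPP releases spell the key coalesce-frames, plural, so a singular coalesce-frame may not take effect):

```
unix {
  nodaemon
  log /var/log/vpp/vpp.log
}
cpu {
  main-core 1
  corelist-workers 2-3    # two workers, matching the two PMD threads seen on the host
}
vhost-user {
  coalesce-frames 0       # disable tx coalescing
  coalesce-time 0
}
```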
John,
I'm using ping -f for latency and iperf3 for pps testing.
Later I'll run pktgen in the VMs.
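For reference, the kind of commands used for these tests, plus a back-of-envelope check on the numbers reported earlier in this thread. The peer address is an assumption, and the pps figure assumes the reported 500 Mb/s counts only the 100-byte payloads (wire overhead ignored):

```shell
# Illustrative invocations (assumed peer address 10.0.0.2):
#   ping -f 10.0.0.2                   # flood ping for latency
#   iperf3 -c 10.0.0.2 -u -l 100 -b 0  # 100-byte UDP for pps/throughput

# Packets per second implied by 500 Mb/s of 100-byte packets:
bits_per_pkt=$((100 * 8))                  # 800 bits per packet
pps=$((500 * 1000 * 1000 / bits_per_pkt))  # 500 Mb/s / 800 b/pkt
echo "$pps"                                # 625000 pps
```

That is well under what a vhost-user path is normally capable of, which is consistent with something being wrong in the setup rather than a hard limit.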
Output:
show int
              Name               Idx    State       Counter          Count
VirtualEthernet0/0/0              1      up         rx packets           11748
                                                    rx bytes            868648
                                                    tx packets           40958
                                                    tx bytes          58047352
                                                    drops                   29
VirtualEthernet0/0/1              2      up         rx packets           40958
                                                    rx bytes          58047352
                                                    tx packets           11719
                                                    tx bytes            862806
                                                    tx-error                29
local0                            0      up
show hardware
              Name                Idx   Link  Hardware
VirtualEthernet0/0/0               1     up   VirtualEthernet0/0/0
  Ethernet address 02:fe:25:2f:bd:c2
VirtualEthernet0/0/1               2     up   VirtualEthernet0/0/1
  Ethernet address 02:fe:40:16:70:1b
local0                             0    down  local0
  local
-Sara
On Thu, Mar 22, 2018 at 4:53 PM, John DeNisco <[email protected]> wrote:
>
> Hi Sara,
>
> Can you also send the results from show hardware and show interfaces?
>
> What are you using to test your performance?
>
> John
>
> *From: *<[email protected]> on behalf of Sara Gittlin <
> [email protected]>
> *Date: *Thursday, March 22, 2018 at 9:27 AM
> *To: *"[email protected]" <[email protected]>
> *Subject: *Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser
>
> This is the output of:
>
> show vhost-user VirtualEthernet0/0/0
> Virtio vhost-user interfaces
> Global:
> coalesce frames 32 time 1e-3
> number of rx virtqueues in interrupt mode: 0
> Interface: VirtualEthernet0/0/0 (ifindex 1)
> virtio_net_hdr_sz 12
> features mask (0xffffffffffffffff):
> features (0x58208000):
> VIRTIO_NET_F_MRG_RXBUF (15)
> VIRTIO_NET_F_GUEST_ANNOUNCE (21)
> VIRTIO_F_ANY_LAYOUT (27)
> VIRTIO_F_INDIRECT_DESC (28)
> VHOST_USER_F_PROTOCOL_FEATURES (30)
> protocol features (0x3)
> VHOST_USER_PROTOCOL_F_MQ (0)
> VHOST_USER_PROTOCOL_F_LOG_SHMFD (1)
>
> socket filename /var/run/vpp/sock1.sock type server errno "Success"
>
> rx placement:
> thread 1 on vring 1, polling
> tx placement: spin-lock
> thread 0 on vring 0
> thread 1 on vring 0
> thread 2 on vring 0
>
> Memory regions (total 2)
> region  fd  guest_phys_addr     memory_size         userspace_addr      mmap_offset         mmap_addr
> ======  ==  ==================  ==================  ==================  ==================  ==================
>   0     32  0x0000000000000000  0x00000000000c0000  0x00007f5a6a600000  0x0000000000000000  0x00007f47c4200000
>   1     33  0x0000000000100000  0x000000003ff00000  0x00007f5a6a700000  0x0000000000100000  0x00007f46f4100000
>
> Virtqueue 0 (TX)
> qsz 256 last_avail_idx 63392 last_used_idx 63392
> avail.flags 0 avail.idx 63530 used.flags 1 used.idx 63392
> kickfd 34 callfd 35 errfd -1
>
> Virtqueue 1 (RX)
> qsz 256 last_avail_idx 32414 last_used_idx 32414
> avail.flags 1 avail.idx 32414 used.flags 1 used.idx 32414
> kickfd 30 callfd 36 errfd -1
>
> On Thu, Mar 22, 2018 at 3:07 PM, Sara Gittlin <[email protected]>
> wrote:
>
> I don't think these are error counters; in any case, very poor pps.
>
> On Thu, Mar 22, 2018 at 2:55 PM, Sara Gittlin <[email protected]>
> wrote:
>
> In the show err output I see that the l2-output, l2-learn, and l2-input
> counters are continuously incrementing:
> show err
>    Count            Node              Reason
>       11          l2-output        L2 output packets
>       11          l2-learn         L2 learn packets
>       11          l2-input         L2 input packets
>        3          l2-flood         L2 flood packets
>
>  8479644          l2-output        L2 output packets
>  8479644          l2-learn         L2 learn packets
>  8479644          l2-input         L2 input packets
>
> On Thu, Mar 22, 2018 at 11:59 AM, Sara Gittlin <[email protected]>
> wrote:
>
> Hello,
> I set up 2 VMs connected to VPP as per the guide:
> https://wiki.fd.io/view/VPP/Use_VPP_to_connect_VMs_Using_Vhost-User_Interface
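> For reference, the VPP side of that guide amounts to something like the
> following sketch (the second socket path and the bridge-domain id are
> assumptions, and the exact CLI spelling varies by VPP release):
>
> ```
> create vhost socket /var/run/vpp/sock1.sock server
> create vhost socket /var/run/vpp/sock2.sock server
> set interface state VirtualEthernet0/0/0 up
> set interface state VirtualEthernet0/0/1 up
> set interface l2 bridge VirtualEthernet0/0/0 1
> set interface l2 bridge VirtualEthernet0/0/1 1
> ```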
>
> The performance looks very bad: very low pps and large latencies.
>
> UDP packet size 100 B, throughput 500 Mb/s,
> average latency 900 us.
>
> I have 2 PMD threads (200% CPU) in the host; in the VMs I see low
> CPU load (10%).
>
> Can you please tell me what is wrong with my setup?
>
> Thank you in advance
> - Sara