Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-26 Thread Sara Gittlin
Hello Yichen and Steven,

1. Steven - I will upgrade QEMU to 2.8 and let you know the performance results.
2. Yichen - I took an Ubuntu 16.04 ISO and launched its qcow2 like this:

sudo qemu-system-x86_64 -name vhost1 -m 2048M -smp 4 -cpu host -hda
/var/lib/libvirt/images/vm1.qcow2 -boot c -enable-kvm -no-reboot -net none
-chardev socket,id=char1,path=/var/run/vpp/sock2.sock -netdev
type=vhost-user,id=mynet1,chardev=char1,vhostforce -device
virtio-net-pci,mac=c0:00:00:00:00:ff,netdev=mynet1 -object
memory-backend-file,id=mem,size=2048M,mem-path=/dev/hugepages,share=on
-numa node,memdev=mem -mem-prealloc
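
For reference, once QEMU is 2.8.0 or newer (per Steven's note quoted below), the larger RX ring can be requested directly on the virtio-net-pci device. A hedged sketch of the same command with that one addition (the rx_queue_size property is a standard QEMU virtio-net option, not something taken from this thread):

sudo qemu-system-x86_64 -name vhost1 -m 2048M -smp 4 -cpu host -hda \
  /var/lib/libvirt/images/vm1.qcow2 -boot c -enable-kvm -no-reboot -net none \
  -chardev socket,id=char1,path=/var/run/vpp/sock2.sock \
  -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
  -device virtio-net-pci,mac=c0:00:00:00:00:ff,netdev=mynet1,rx_queue_size=1024 \
  -object memory-backend-file,id=mem,size=2048M,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem -mem-prealloc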

-
Other steps are as in
https://wiki.fd.io/view/VPP/Use_VPP_to_connect_VMs_Using_Vhost-User_Interface

I get pps performance like OVS-DPDK's. Regarding latency: when I ping I get
~100 us latency, but when I ping -f I still get almost 1 ms latency.
Maybe it has to do with MAX_BURST_SIZE (=256), because ping -f has a
relatively high rate and DPDK fills the buffer with 256 packets, unlike the
regular ping at low rate, where DPDK exits the loop *before* filling
the buffer with 256 packets...
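
A quick way to see the rate dependence is to compare a paced ping with a flood ping (hedged example; 10.0.0.2 stands in for the peer VM's address, which isn't given in this thread):

ping -i 0.2 -c 100 10.0.0.2        # low rate: one echo every 200 ms, ~100 us RTT observed
sudo ping -f -c 100000 10.0.0.2    # flood: back-to-back echoes, ~1 ms RTT observed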

-Sara


On Mon, Mar 26, 2018 at 6:48 AM, steven luong  wrote:

> Sara,
>
>
>
> You need qemu 2.8.0 or later for the rx_queue_size option to work.
>
>
>
> Steven
>
>
>
> *From: * on behalf of Sara Gittlin <
> sara.gitt...@gmail.com>
> *Date: *Sunday, March 25, 2018 at 1:57 AM
>
> *To: *"vpp-dev@lists.fd.io" 
> *Cc: *"vpp-dev@lists.fd.io" 
> *Subject: *Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser
>
>
>
> Thank Steven and Alec
>
> I'm using the iperf as a basic tool to check sanity - i'll run pktgen on
> the VMs later.
>
> the resut i'm got 20Mbps (~20K pps) throughput and latency of 1ms is not
> iperf issue.
>
> Steven -  you suggest to set the rx-queue-size when lanuching  the VM but
> unfortunately i dont find this option in the qemy-system-x86_64 command line
>
> i'm using this:
>
> qemu-system-x86_64 \
>
> -enable-kvm -m 1024 \
>
> -bios OVMF.fd \
>
> -smp 4 -cpu host \
>
> -vga none -nographic \
>
> -drive file="1-clear-14200-kvm.img",if=virtio,aio=threads \
>
> -chardev socket,id=char1,path=/var/run/vpp/sock1.sock \
>
> -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
>
> -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \
>
> -object 
> memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
>
> -numa node,memdev=mem -mem-prealloc \
>
> -debugcon file:debug.log -global isa-debugcon.iobase=0x402
>
>
>
> -Sara
>
>
>
>
>
> On Thu, Mar 22, 2018 at 6:10 PM, steven luong  wrote:
>
> Sara,
>
>
>
> Iperf3 is not blasting traffic fast enough. You could try specifying
> multiple parallel streams using -P. Then, you will likely encounter
> vhost-user dropping packets as you are using the default virtqueue size
> 256. You’ll need to specify rx_queue_size=1024 when you launch the VM in
> the qemu start command to reduce the drops.
>
>
>
> Steven
>
>
>
> *From: * on behalf of Sara Gittlin <
> sara.gitt...@gmail.com>
> *Date: *Thursday, March 22, 2018 at 8:12 AM
> *To: *"vpp-dev@lists.fd.io" 
> *Cc: *"vpp-dev@lists.fd.io" 
>
>
> *Subject: *Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser
>
>
>
> Hi John and Steven
>
>
>
> setting this in the startup config didnt help
>
> vhost-user {
>
>   coalesce-frame 0
>
> }
>
> John
>
>
>
> I'm using ping -f  for latency and iperf3 for pps testing.
>
> later i'll run pktgen in the vm's
>
>
>
> output :
>
sho int
              Name               Idx       State          Counter          Count
VirtualEthernet0/0/0              1         up       rx packets                   11748
                                                     rx bytes                    868648
                                                     tx packets                   40958
                                                     tx bytes                  58047352
                                                     drops                           29
VirtualEthernet0/0/1              2         up       rx packets                   40958
                                                     rx bytes                  58047352
                                                     tx packets                   11719
                                                     tx bytes                    862806
                                                     tx-error                        29
local0                            0         up
>
>
>
>
>
>  show h

Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-25 Thread steven luong
Sara,

You need qemu 2.8.0 or later for the rx_queue_size option to work.

Steven

From:  on behalf of Sara Gittlin 
Date: Sunday, March 25, 2018 at 1:57 AM
To: "vpp-dev@lists.fd.io" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

Thanks Steven and Alec,
I'm using iperf as a basic tool to check sanity - I'll run pktgen on the
VMs later.
The result I got - 20 Mbps (~20K pps) throughput and 1 ms latency - is not an
iperf issue.
Steven - you suggested setting the rx-queue-size when launching the VM, but
unfortunately I don't find this option on the qemu-system-x86_64 command line.
I'm using this:

qemu-system-x86_64 \

-enable-kvm -m 1024 \

-bios OVMF.fd \

-smp 4 -cpu host \

-vga none -nographic \

-drive file="1-clear-14200-kvm.img",if=virtio,aio=threads \

-chardev socket,id=char1,path=/var/run/vpp/sock1.sock \

-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \

-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \

-object 
memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \

-numa node,memdev=mem -mem-prealloc \

-debugcon file:debug.log -global isa-debugcon.iobase=0x402



-Sara


On Thu, Mar 22, 2018 at 6:10 PM, steven luong <slu...@cisco.com> wrote:
Sara,

Iperf3 is not blasting traffic fast enough. You could try specifying multiple 
parallel streams using -P. Then, you will likely encounter vhost-user dropping 
packets as you are using the default virtqueue size 256. You’ll need to specify 
rx_queue_size=1024 when you launch the VM in the qemu start command to reduce 
the drops.

Steven

From: <vpp-dev@lists.fd.io> on behalf of Sara Gittlin <sara.gitt...@gmail.com>
Date: Thursday, March 22, 2018 at 8:12 AM
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Cc: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>

Subject: Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

Hi John and Steven

Setting this in the startup config didn't help:
vhost-user {
  coalesce-frame 0
}
John,

I'm using ping -f for latency and iperf3 for pps testing.
Later I'll run pktgen in the VMs.

output :
sho int
              Name               Idx       State          Counter          Count
VirtualEthernet0/0/0              1         up       rx packets                   11748
                                                     rx bytes                    868648
                                                     tx packets                   40958
                                                     tx bytes                  58047352
                                                     drops                           29
VirtualEthernet0/0/1              2         up       rx packets                   40958
                                                     rx bytes                  58047352
                                                     tx packets                   11719
                                                     tx bytes                    862806
                                                     tx-error                        29
local0                            0         up


 show hardware
  NameIdx   Link  Hardware
VirtualEthernet0/0/0   1 up   VirtualEthernet0/0/0
  Ethernet address 02:fe:25:2f:bd:c2
VirtualEthernet0/0/1   2 up   VirtualEthernet0/0/1
  Ethernet address 02:fe:40:16:70:1b
local0 0down  local0
  local

-Sara


On Thu, Mar 22, 2018 at 4:53 PM, John DeNisco <jdeni...@cisco.com> wrote:

Hi Sara,

Can you also send the results from show hardware and show interfaces?

What are you using to test your performance?

John


From: <vpp-dev@lists.fd.io> on behalf of Sara Gittlin <sara.gitt...@gmail.com>
Date: Thursday, March 22, 2018 at 9:27 AM
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

this is the output of:

show vhost-user VirtualEthernet0/0/0
Virtio vhost-user interfaces
Global:
  coalesce frames 32 time 1e-3
  number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 1)
virtio_net_hdr_sz 12
 features mask (0x):
 features (0x58208000):
   VIRTIO_NET_F_MRG_RXBUF (15)
   VIRTIO_NET_F_GUEST_ANNOUNCE (21)
   VIRTIO_F_ANY_LAYOUT (27)
   VIRTIO_F_INDIRECT_DESC (28)
   VHOST_USER_F_PROTOCOL_FEATURES (30)
  protocol features (0x3)
   VHOST_USER_PROTOCOL_F_MQ (0)
   VHOST_USER_PROTOCOL_F_LOG_SHMFD (1)

 socket filename /var/run/vpp/sock1.sock type server errno "Success"

 rx placement:
   thread 1 on vring 1, polling
 tx placem

Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-25 Thread Yichen Wang
Hi, Sara,

May I know what image you are using to achieve the good performance with VPP?
I assume you are running the same iperf3 tests?

Thanks very much!

Regards,
Yichen

From:  on behalf of Sara Gittlin 
Date: Sunday, March 25, 2018 at 5:31 AM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

FYI,
When I launch another image (not based on clear-linux-kvm) I'm achieving
good performance (exactly like OVS-DPDK, which is expected for a single flow/MAC).
Any idea why?
This is the launch cmdline (anyway, I can't find a way to set the
tx/rx-queue-size):

 sudo qemu-system-x86_64 -name vhost1 -m 2048M -smp 4 -cpu host -hda 
/var/lib/libvirt/images/vm1.qcow2 -boot c -enable-kvm -no-reboot -net none 
-chardev socket,id=char3,path=/var/run/vpp/sock2.sock -netdev 
type=vhost-user,id=mynet1,chardev=char3,vhostforce -device 
virtio-net-pci,mac=c0:00:00:00:00:ff,netdev=mynet1 -object 
memory-backend-file,id=mem,size=2048M,mem-path=/dev/hugepages,share=on -numa 
node,memdev=mem -mem-prealloc
Thank you
-Sara


On Sun, Mar 25, 2018 at 11:56 AM, Sara Gittlin <sara.gitt...@gmail.com> wrote:
Thanks Steven and Alec,
I'm using iperf as a basic tool to check sanity - I'll run pktgen on the
VMs later.
The result I got - 20 Mbps (~20K pps) throughput and 1 ms latency - is not an
iperf issue.
Steven - you suggested setting the rx-queue-size when launching the VM, but
unfortunately I don't find this option on the qemu-system-x86_64 command line.
I'm using this:

qemu-system-x86_64 \

-enable-kvm -m 1024 \

-bios OVMF.fd \

-smp 4 -cpu host \

-vga none -nographic \

-drive file="1-clear-14200-kvm.img",if=virtio,aio=threads \

-chardev socket,id=char1,path=/var/run/vpp/sock1.sock \

-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \

-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \

-object 
memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \

-numa node,memdev=mem -mem-prealloc \

-debugcon file:debug.log -global isa-debugcon.iobase=0x402



-Sara


On Thu, Mar 22, 2018 at 6:10 PM, steven luong <slu...@cisco.com> wrote:
Sara,

Iperf3 is not blasting traffic fast enough. You could try specifying multiple 
parallel streams using -P. Then, you will likely encounter vhost-user dropping 
packets as you are using the default virtqueue size 256. You’ll need to specify 
rx_queue_size=1024 when you launch the VM in the qemu start command to reduce 
the drops.

Steven

From: <vpp-dev@lists.fd.io> on behalf of Sara Gittlin <sara.gitt...@gmail.com>
Date: Thursday, March 22, 2018 at 8:12 AM
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Cc: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>

Subject: Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

Hi John and Steven

Setting this in the startup config didn't help:
vhost-user {
  coalesce-frame 0
}
John,

I'm using ping -f for latency and iperf3 for pps testing.
Later I'll run pktgen in the VMs.

output :
sho int
              Name               Idx       State          Counter          Count
VirtualEthernet0/0/0              1         up       rx packets                   11748
                                                     rx bytes                    868648
                                                     tx packets                   40958
                                                     tx bytes                  58047352
                                                     drops                           29
VirtualEthernet0/0/1              2         up       rx packets                   40958
                                                     rx bytes                  58047352
                                                     tx packets                   11719
                                                     tx bytes                    862806
                                                     tx-error                        29
local0                            0         up


 show hardware
  NameIdx   Link  Hardware
VirtualEthernet0/0/0   1 up   VirtualEthernet0/0/0
  Ethernet address 02:fe:25:2f:bd:c2
VirtualEthernet0/0/1   2 up   VirtualEthernet0/0/1
  Ethernet address 02:fe:40:16:70:1b
local0 0down  local0
  local

-Sara


On Thu, Mar 22, 2018 at 4:53 PM, John DeNisco <jdeni...@cisco.com> wrote:

Hi Sara,

Can you also send the results from show hardware and show interfaces?

What are you using to test your performance?

John


From: <vpp-dev@lists.fd.io> on behalf of Sara Gittlin <sara.gitt...@gmail.com>
Date: Thursday, March 22, 2018 at 9:27 AM
To: "vpp-d

Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-25 Thread Sara Gittlin
FYI,
When I launch another image (not based on clear-linux-kvm) I'm
achieving good performance (exactly like OVS-DPDK, which is expected for a
single flow/MAC).
Any idea why?
This is the launch cmdline (anyway, I can't find a way to set the
tx/rx-queue-size):

 sudo qemu-system-x86_64 -name vhost1 -m 2048M -smp 4 -cpu host -hda
/var/lib/libvirt/images/vm1.qcow2 -boot c -enable-kvm -no-reboot -net none
-chardev socket,id=char3,path=/var/run/vpp/sock2.sock -netdev
type=vhost-user,id=mynet1,chardev=char3,vhostforce -device
virtio-net-pci,mac=c0:00:00:00:00:ff,netdev=mynet1 -object
memory-backend-file,id=mem,size=2048M,mem-path=/dev/hugepages,share=on
-numa node,memdev=mem -mem-prealloc

Thank you
-Sara


On Sun, Mar 25, 2018 at 11:56 AM, Sara Gittlin 
wrote:

> Thank Steven and Alec
> I'm using the iperf as a basic tool to check sanity - i'll run pktgen on
> the VMs later.
> the resut i'm got 20Mbps (~20K pps) throughput and latency of 1ms is not
> iperf issue.
> Steven -  you suggest to set the rx-queue-size when lanuching  the VM but
> unfortunately i dont find this option in the qemy-system-x86_64 command line
>
> i'm using this:
>
> qemu-system-x86_64 \
> -enable-kvm -m 1024 \
> -bios OVMF.fd \
> -smp 4 -cpu host \
> -vga none -nographic \
> -drive file="1-clear-14200-kvm.img",if=virtio,aio=threads \
> -chardev socket,id=char1,path=/var/run/vpp/sock1.sock \
> -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
> -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \
> -object 
> memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
> -numa node,memdev=mem -mem-prealloc \
> -debugcon file:debug.log -global isa-debugcon.iobase=0x402
>
> -Sara
>
>
>
> On Thu, Mar 22, 2018 at 6:10 PM, steven luong  wrote:
>
>> Sara,
>>
>>
>>
>> Iperf3 is not blasting traffic fast enough. You could try specifying
>> multiple parallel streams using -P. Then, you will likely encounter
>> vhost-user dropping packets as you are using the default virtqueue size
>> 256. You’ll need to specify rx_queue_size=1024 when you launch the VM in
>> the qemu start command to reduce the drops.
>>
>>
>>
>> Steven
>>
>>
>>
>> *From: * on behalf of Sara Gittlin <
>> sara.gitt...@gmail.com>
>> *Date: *Thursday, March 22, 2018 at 8:12 AM
>> *To: *"vpp-dev@lists.fd.io" 
>> *Cc: *"vpp-dev@lists.fd.io" 
>>
>> *Subject: *Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser
>>
>>
>>
>> Hi John and Steven
>>
>>
>>
>> setting this in the startup config didnt help
>>
>> vhost-user {
>>
>>   coalesce-frame 0
>>
>> }
>>
>> John
>>
>>
>>
>> I'm using ping -f  for latency and iperf3 for pps testing.
>>
>> later i'll run pktgen in the vm's
>>
>>
>>
>> output :
>>
>> sho int
>>               Name               Idx       State          Counter          Count
>> VirtualEthernet0/0/0              1         up       rx packets                   11748
>>                                                      rx bytes                    868648
>>                                                      tx packets                   40958
>>                                                      tx bytes                  58047352
>>                                                      drops                           29
>> VirtualEthernet0/0/1              2         up       rx packets                   40958
>>                                                      rx bytes                  58047352
>>                                                      tx packets                   11719
>>                                                      tx bytes                    862806
>>                                                      tx-error                        29
>> local0                            0         up
>>
>>
>>
>>
>>
>>  show hardware
>>   NameIdx   Link  Hardware
>> VirtualEthernet0/0/0   1 up   VirtualEthernet0/0/0
>>   Ethernet address 02:fe:25:2f:bd:c2
>> VirtualEthernet0/0/1   2 up   VirtualEthernet0/0/1
>>   Ethernet address 02:fe:40:16:70:1b
>> local0 0down  local0
>>   local
>>
>>
>>
>> -Sara
>>
>>
>>
>>
>>
>> On Thu, Mar 22, 2018 at 4:53 PM, John DeNisco  wrote:
>>
>>
>>
>> Hi Sara,
>>
>>
>>
>>

Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-25 Thread Sara Gittlin
Thanks Steven and Alec,
I'm using iperf as a basic tool to check sanity - I'll run pktgen on
the VMs later.
The result I got - 20 Mbps (~20K pps) throughput and 1 ms latency - is not
an iperf issue.
Steven - you suggested setting the rx-queue-size when launching the VM, but
unfortunately I don't find this option on the qemu-system-x86_64 command line.

I'm using this:

qemu-system-x86_64 \
-enable-kvm -m 1024 \
-bios OVMF.fd \
-smp 4 -cpu host \
-vga none -nographic \
-drive file="1-clear-14200-kvm.img",if=virtio,aio=threads \
-chardev socket,id=char1,path=/var/run/vpp/sock1.sock \
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \
-object 
memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on
\
-numa node,memdev=mem -mem-prealloc \
-debugcon file:debug.log -global isa-debugcon.iobase=0x402

-Sara



On Thu, Mar 22, 2018 at 6:10 PM, steven luong  wrote:

> Sara,
>
>
>
> Iperf3 is not blasting traffic fast enough. You could try specifying
> multiple parallel streams using -P. Then, you will likely encounter
> vhost-user dropping packets as you are using the default virtqueue size
> 256. You’ll need to specify rx_queue_size=1024 when you launch the VM in
> the qemu start command to reduce the drops.
>
>
>
> Steven
>
>
>
> *From: * on behalf of Sara Gittlin <
> sara.gitt...@gmail.com>
> *Date: *Thursday, March 22, 2018 at 8:12 AM
> *To: *"vpp-dev@lists.fd.io" 
> *Cc: *"vpp-dev@lists.fd.io" 
>
> *Subject: *Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser
>
>
>
> Hi John and Steven
>
>
>
> setting this in the startup config didnt help
>
> vhost-user {
>
>   coalesce-frame 0
>
> }
>
> John
>
>
>
> I'm using ping -f  for latency and iperf3 for pps testing.
>
> later i'll run pktgen in the vm's
>
>
>
> output :
>
> sho int
>               Name               Idx       State          Counter          Count
> VirtualEthernet0/0/0              1         up       rx packets                   11748
>                                                      rx bytes                    868648
>                                                      tx packets                   40958
>                                                      tx bytes                  58047352
>                                                      drops                           29
> VirtualEthernet0/0/1              2         up       rx packets                   40958
>                                                      rx bytes                  58047352
>                                                      tx packets                   11719
>                                                      tx bytes                    862806
>                                                      tx-error                        29
> local0                            0         up
>
>
>
>
>
>  show hardware
>   NameIdx   Link  Hardware
> VirtualEthernet0/0/0   1 up   VirtualEthernet0/0/0
>   Ethernet address 02:fe:25:2f:bd:c2
> VirtualEthernet0/0/1   2 up   VirtualEthernet0/0/1
>   Ethernet address 02:fe:40:16:70:1b
> local0 0down  local0
>   local
>
>
>
> -Sara
>
>
>
>
>
> On Thu, Mar 22, 2018 at 4:53 PM, John DeNisco  wrote:
>
>
>
> Hi Sara,
>
>
>
> Can you also send the results from show hardware and show interfaces?
>
>
>
> What are you using to test your performance.
>
>
>
> John
>
>
>
>
>
> *From: * on behalf of Sara Gittlin <
> sara.gitt...@gmail.com>
> *Date: *Thursday, March 22, 2018 at 9:27 AM
> *To: *"vpp-dev@lists.fd.io" 
> *Subject: *Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser
>
>
>
> this is the output of:
>
>
> show vhost-user VirtualEthernet0/0/0
> Virtio vhost-user interfaces
> Global:
>   coalesce frames 32 time 1e-3
>   number of rx virtqueues in interrupt mode: 0
> Interface: VirtualEthernet0/0/0 (ifindex 1)
> virtio_net_hdr_sz 12
>  features mask (0x):
>  features (0x58208000):
>VIRTIO_NET_F_MRG_RXBUF (15)
>VIRTIO_NET_F_GUEST_ANNOUNCE (21)
>VIRTIO_F_ANY_LAYOUT (27)
>VIRTIO_F_INDIRECT_DESC (28)
>VHOST_USER_F_PROTOCOL_FEATURES (30)
>   protocol features (0x3)
>VHOST_USER_PROTOCOL_F_MQ (0)
>VHOST_USER_PROTOCOL_F_LOG_SHMFD (1)
>
>  socket filename /var/run/vpp/sock1.sock type server errno "Success"
>
>  rx placement:
>thread 1 on vring 1, polling
>  tx placement: spin-lock
>thread 0 on vring 0
>thread 1 

Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-22 Thread steven luong
Sara,

Iperf3 is not blasting traffic fast enough. You could try specifying multiple 
parallel streams using -P. Then, you will likely encounter vhost-user dropping 
packets as you are using the default virtqueue size 256. You’ll need to specify 
rx_queue_size=1024 when you launch the VM in the qemu start command to reduce 
the drops.
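
For instance (a hedged sketch; these are standard iperf3 flags and the stock QEMU virtio-net property, not commands copied from this thread):

# iperf3 with parallel streams
iperf3 -s                              # in the receiving VM
iperf3 -c <receiver-vm-ip> -P 4 -t 30  # in the sending VM

# larger RX ring on the guest NIC, QEMU 2.8+
-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,rx_queue_size=1024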

Steven

From:  on behalf of Sara Gittlin 
Date: Thursday, March 22, 2018 at 8:12 AM
To: "vpp-dev@lists.fd.io" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

Hi John and Steven

Setting this in the startup config didn't help:
vhost-user {
  coalesce-frame 0
}
John,

I'm using ping -f for latency and iperf3 for pps testing.
Later I'll run pktgen in the VMs.

output :
sho int
              Name               Idx       State          Counter          Count
VirtualEthernet0/0/0              1         up       rx packets                   11748
                                                     rx bytes                    868648
                                                     tx packets                   40958
                                                     tx bytes                  58047352
                                                     drops                           29
VirtualEthernet0/0/1              2         up       rx packets                   40958
                                                     rx bytes                  58047352
                                                     tx packets                   11719
                                                     tx bytes                    862806
                                                     tx-error                        29
local0                            0         up


 show hardware
  NameIdx   Link  Hardware
VirtualEthernet0/0/0   1 up   VirtualEthernet0/0/0
  Ethernet address 02:fe:25:2f:bd:c2
VirtualEthernet0/0/1   2 up   VirtualEthernet0/0/1
  Ethernet address 02:fe:40:16:70:1b
local0 0down  local0
  local

-Sara


On Thu, Mar 22, 2018 at 4:53 PM, John DeNisco <jdeni...@cisco.com> wrote:

Hi Sara,

Can you also send the results from show hardware and show interfaces?

What are you using to test your performance?

John


From: <vpp-dev@lists.fd.io> on behalf of Sara Gittlin <sara.gitt...@gmail.com>
Date: Thursday, March 22, 2018 at 9:27 AM
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

this is the output of:

show vhost-user VirtualEthernet0/0/0
Virtio vhost-user interfaces
Global:
  coalesce frames 32 time 1e-3
  number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 1)
virtio_net_hdr_sz 12
 features mask (0x):
 features (0x58208000):
   VIRTIO_NET_F_MRG_RXBUF (15)
   VIRTIO_NET_F_GUEST_ANNOUNCE (21)
   VIRTIO_F_ANY_LAYOUT (27)
   VIRTIO_F_INDIRECT_DESC (28)
   VHOST_USER_F_PROTOCOL_FEATURES (30)
  protocol features (0x3)
   VHOST_USER_PROTOCOL_F_MQ (0)
   VHOST_USER_PROTOCOL_F_LOG_SHMFD (1)

 socket filename /var/run/vpp/sock1.sock type server errno "Success"

 rx placement:
   thread 1 on vring 1, polling
 tx placement: spin-lock
   thread 0 on vring 0
   thread 1 on vring 0
   thread 2 on vring 0

 Memory regions (total 2)
 region fdguest_phys_addrmemory_sizeuserspace_addr 
mmap_offsetmmap_addr
 == = == == == 
== ==
  0 320x 0x000c 0x7f5a6a60 
0x 0x7f47c420
  1 330x0010 0x3ff0 0x7f5a6a70 
0x0010 0x7f46f410

 Virtqueue 0 (TX)
  qsz 256 last_avail_idx 63392 last_used_idx 63392
  avail.flags 0 avail.idx 63530 used.flags 1 used.idx 63392
  kickfd 34 callfd 35 errfd -1

 Virtqueue 1 (RX)
  qsz 256 last_avail_idx 32414 last_used_idx 32414
  avail.flags 1 avail.idx 32414 used.flags 1 used.idx 32414
  kickfd 30 callfd 36 errfd -1

On Thu, Mar 22, 2018 at 3:07 PM, Sara Gittlin <sara.gitt...@gmail.com> wrote:
I don't think these are error counters - anyway, very poor pps.

On Thu, Mar 22, 2018 at 2:55 PM, Sara Gittlin <sara.gitt...@gmail.com> wrote:
In the show err output I see that the l2-output, l2-learn, and l2-input counters are
continuously incrementing:
show err
   Count      Node          Reason
      11    l2-output       L2 output packets
      11    l2-learn        L2 learn packets
      11    l2-input        L2 input packets
       3    l2-flood        L2 flood packets
 8479644    l2-output

Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-22 Thread Alec

You should not use iperf for pps because you’re going to incur the overhead of 
running a guest user space app over socket + guest kernel tcp-ip stack + virtio 
over vhost user (times 2 since both your end points run iperf).
You might get decent throughput with large write buffer size (e.g. 64k writes) 
as those do not require as many pps.
For pps testing in a VM it is better to use pktgen or an equivalent DPDK-based
traffic generator.
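
As a concrete illustration (hedged; standard iperf3 flags, shown only to make the buffer-size point, not taken from this thread):

iperf3 -c <receiver-vm-ip> -l 64K -t 30    # large 64 KB writes: decent throughput with relatively few packets
iperf3 -c <receiver-vm-ip> -u -l 100 -b 0  # 100-byte UDP at unlimited rate: closer to a pps test,
                                           # but still bounded by the guest kernel stack, hence pktgen/DPDK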

  Alec


From:  on behalf of Sara Gittlin 
Date: Thursday, March 22, 2018 at 8:12 AM
To: "vpp-dev@lists.fd.io" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

Hi John and Steven

Setting this in the startup config didn't help:
vhost-user {
  coalesce-frame 0
}
John,

I'm using ping -f for latency and iperf3 for pps testing.
Later I'll run pktgen in the VMs.

output :
sho int
              Name               Idx       State          Counter          Count
VirtualEthernet0/0/0              1         up       rx packets                   11748
                                                     rx bytes                    868648
                                                     tx packets                   40958
                                                     tx bytes                  58047352
                                                     drops                           29
VirtualEthernet0/0/1              2         up       rx packets                   40958
                                                     rx bytes                  58047352
                                                     tx packets                   11719
                                                     tx bytes                    862806
                                                     tx-error                        29
local0                            0         up


 show hardware
  NameIdx   Link  Hardware
VirtualEthernet0/0/0   1 up   VirtualEthernet0/0/0
  Ethernet address 02:fe:25:2f:bd:c2
VirtualEthernet0/0/1   2 up   VirtualEthernet0/0/1
  Ethernet address 02:fe:40:16:70:1b
local0 0down  local0
  local

-Sara


On Thu, Mar 22, 2018 at 4:53 PM, John DeNisco <jdeni...@cisco.com> wrote:

Hi Sara,

Can you also send the results from show hardware and show interfaces?

What are you using to test your performance?

John


From: <vpp-dev@lists.fd.io> on behalf of Sara Gittlin <sara.gitt...@gmail.com>
Date: Thursday, March 22, 2018 at 9:27 AM
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

this is the output of:

show vhost-user VirtualEthernet0/0/0
Virtio vhost-user interfaces
Global:
  coalesce frames 32 time 1e-3
  number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 1)
virtio_net_hdr_sz 12
 features mask (0x):
 features (0x58208000):
   VIRTIO_NET_F_MRG_RXBUF (15)
   VIRTIO_NET_F_GUEST_ANNOUNCE (21)
   VIRTIO_F_ANY_LAYOUT (27)
   VIRTIO_F_INDIRECT_DESC (28)
   VHOST_USER_F_PROTOCOL_FEATURES (30)
  protocol features (0x3)
   VHOST_USER_PROTOCOL_F_MQ (0)
   VHOST_USER_PROTOCOL_F_LOG_SHMFD (1)

 socket filename /var/run/vpp/sock1.sock type server errno "Success"

 rx placement:
   thread 1 on vring 1, polling
 tx placement: spin-lock
   thread 0 on vring 0
   thread 1 on vring 0
   thread 2 on vring 0

 Memory regions (total 2)
 region fdguest_phys_addrmemory_sizeuserspace_addr 
mmap_offsetmmap_addr
 == = == == == 
== ==
  0 320x 0x000c 0x7f5a6a60 
0x 0x7f47c420
  1 330x0010 0x3ff0 0x7f5a6a70 
0x0010 0x7f46f410

 Virtqueue 0 (TX)
  qsz 256 last_avail_idx 63392 last_used_idx 63392
  avail.flags 0 avail.idx 63530 used.flags 1 used.idx 63392
  kickfd 34 callfd 35 errfd -1

 Virtqueue 1 (RX)
  qsz 256 last_avail_idx 32414 last_used_idx 32414
  avail.flags 1 avail.idx 32414 used.flags 1 used.idx 32414
  kickfd 30 callfd 36 errfd -1

On Thu, Mar 22, 2018 at 3:07 PM, Sara Gittlin <sara.gitt...@gmail.com> wrote:
I don't think these are error counters - anyway, very poor pps.

On Thu, Mar 22, 2018 at 2:55 PM, Sara Gittlin <sara.gitt...@gmail.com> wrote:
In the show err output I see that the l2-output, l2-learn, and l2-input counters are
continuously incrementing:
show err
   Count      Node          Reason
      11    l2-output       L2 output packets
      11    l2-learn        L2 learn packets
      11    l2-input        L2 input packets
 

Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-22 Thread Sara Gittlin
Hi John and Steven

Setting this in the startup config didn't help:

vhost-user {

  coalesce-frame 0

}

John


I'm using ping -f for latency and iperf3 for pps testing.

Later I'll run pktgen in the VMs.


output :

sho int
              Name               Idx       State          Counter          Count
VirtualEthernet0/0/0              1         up       rx packets                   11748
                                                     rx bytes                    868648
                                                     tx packets                   40958
                                                     tx bytes                  58047352
                                                     drops                           29
VirtualEthernet0/0/1              2         up       rx packets                   40958
                                                     rx bytes                  58047352
                                                     tx packets                   11719
                                                     tx bytes                    862806
                                                     tx-error                        29
local0                            0         up



 show hardware
  NameIdx   Link  Hardware
VirtualEthernet0/0/0   1 up   VirtualEthernet0/0/0
  Ethernet address 02:fe:25:2f:bd:c2
VirtualEthernet0/0/1   2 up   VirtualEthernet0/0/1
  Ethernet address 02:fe:40:16:70:1b
local0 0down  local0
  local


-Sara


On Thu, Mar 22, 2018 at 4:53 PM, John DeNisco  wrote:

>
>
> Hi Sara,
>
>
>
> Can you also send the results from show hardware and show interfaces?
>
>
>
> What are you using to test your performance.
>
>
>
> John
>
>
>
>
>
> *From: * on behalf of Sara Gittlin <
> sara.gitt...@gmail.com>
> *Date: *Thursday, March 22, 2018 at 9:27 AM
> *To: *"vpp-dev@lists.fd.io" 
> *Subject: *Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser
>
>
>
> this is the output of:
>
> show vhost-user VirtualEthernet0/0/0
> Virtio vhost-user interfaces
> Global:
>   coalesce frames 32 time 1e-3
>   number of rx virtqueues in interrupt mode: 0
> Interface: VirtualEthernet0/0/0 (ifindex 1)
> virtio_net_hdr_sz 12
>  features mask (0x):
>  features (0x58208000):
>VIRTIO_NET_F_MRG_RXBUF (15)
>VIRTIO_NET_F_GUEST_ANNOUNCE (21)
>VIRTIO_F_ANY_LAYOUT (27)
>VIRTIO_F_INDIRECT_DESC (28)
>VHOST_USER_F_PROTOCOL_FEATURES (30)
>   protocol features (0x3)
>VHOST_USER_PROTOCOL_F_MQ (0)
>VHOST_USER_PROTOCOL_F_LOG_SHMFD (1)
>
>  socket filename /var/run/vpp/sock1.sock type server errno "Success"
>
>  rx placement:
>thread 1 on vring 1, polling
>  tx placement: spin-lock
>thread 0 on vring 0
>thread 1 on vring 0
>thread 2 on vring 0
>
>  Memory regions (total 2)
>  region fdguest_phys_addrmemory_sizeuserspace_addr
> mmap_offsetmmap_addr
>  == = == == ==
> == ==
>   0 320x 0x000c 0x7f5a6a60
> 0x 0x7f47c420
>   1 330x0010 0x3ff0 0x7f5a6a70
> 0x0010 0x7f46f410
>
>  Virtqueue 0 (TX)
>   qsz 256 last_avail_idx 63392 last_used_idx 63392
>   avail.flags 0 avail.idx 63530 used.flags 1 used.idx 63392
>   kickfd 34 callfd 35 errfd -1
>
>  Virtqueue 1 (RX)
>   qsz 256 last_avail_idx 32414 last_used_idx 32414
>   avail.flags 1 avail.idx 32414 used.flags 1 used.idx 32414
>   kickfd 30 callfd 36 errfd -1
>
>
>
> On Thu, Mar 22, 2018 at 3:07 PM, Sara Gittlin 
> wrote:
>
> i dont think these are error counters - anyway very poor pps
>
>
>
> On Thu, Mar 22, 2018 at 2:55 PM, Sara Gittlin 
> wrote:
>
> in the show err output i see that  l2-output  l2-learn l2-input counters
> are continuously  incremented :
> show  err
>CountNode  Reason
> 11l2-output   L2 output packets
> 11l2-learnL2 learn packets
> 11l2-inputL2 input packets
>  3l2-floodL2 flood packets
>
>
>  8479644    l2-output       L2 output packets
>  8479644    l2-learn        L2 learn packets
>  8479644    l2-input        L2 input packets
>
>
>
> On Thu, Mar 22, 2018 at 11:59 AM, Sara Gittlin 
> wrote:
>
> Hello
> i setup 2 vm connected to VPP as per the guide :
> https://wiki.fd.io/view/VPP/Use_VPP_to_connect_VMs_Using_
> Vhost-User_Interface
>
> The performance looks very bad very low pps and large latencies
>
> udp pkt size 100B - throughput 500Mb
> average latency  is 900us
>
> i have  2 PMDs threads (200% cpu) in the host, in the VMs i see low
> cpu load (10%)
>
> Please can you tell me what is wrong with my setup ?
>
> Thank you in advance
> - Sara
>
>
>
>
>
>
>
>
>
> 
>
>


Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-22 Thread John Lo (loj)
Hi Sara,

To address your question on “show err”, your output is displaying running
counts for the l2-output, l2-learn and l2-input nodes with respect to how many packets each
node processed.  These are not error counts, unless the “Reason” column
indicates some kind of error.  The CLI name “show err” is misleading in that it
is showing stats kept by VPP nodes for various reasons, not just errors.  I
typically tell people to use the CLI “show node counters”, which is more
appropriate and produces the same output.  The CLI “show err” was there for
historical reasons and is kept for backward compatibility.  You are welcome to
keep on using “show err” as it is shorter and faster to type.
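
For example, on the VPP debug CLI both spellings display the same per-node counters, and the counters can be reset between test runs (a minimal illustration using standard VPP CLI names, worth confirming on your build):

vpp# show node counters
vpp# show err
vpp# clear errors        # reset the counters before the next run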

Regards,
John

From: vpp-dev@lists.fd.io  On Behalf Of Sara Gittlin
Sent: Thursday, March 22, 2018 8:56 AM
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

In the show err output I see that the l2-output, l2-learn, and l2-input counters are
continuously incrementing:
show err
   Count      Node          Reason
      11    l2-output       L2 output packets
      11    l2-learn        L2 learn packets
      11    l2-input        L2 input packets
       3    l2-flood        L2 flood packets
 8479644    l2-output       L2 output packets
 8479644    l2-learn        L2 learn packets
 8479644    l2-input        L2 input packets

On Thu, Mar 22, 2018 at 11:59 AM, Sara Gittlin <sara.gitt...@gmail.com> wrote:
Hello,
I set up 2 VMs connected to VPP as per the guide:
https://wiki.fd.io/view/VPP/Use_VPP_to_connect_VMs_Using_Vhost-User_Interface

The performance looks very bad: very low pps and large latencies.

UDP packet size 100 B - throughput 500 Mb
Average latency is 900 us

I have 2 PMD threads (200% CPU) on the host; in the VMs I see low
CPU load (10%).

Can you please tell me what is wrong with my setup?

Thank you in advance,
- Sara





Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-22 Thread John DeNisco

Hi Sara,

Can you also send the results from show hardware and show interfaces?

What are you using to test your performance?

John


From:  on behalf of Sara Gittlin 
Date: Thursday, March 22, 2018 at 9:27 AM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

this is the output of:
show vhost-user VirtualEthernet0/0/0
Virtio vhost-user interfaces
Global:
  coalesce frames 32 time 1e-3
  number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 1)
virtio_net_hdr_sz 12
 features mask (0x):
 features (0x58208000):
   VIRTIO_NET_F_MRG_RXBUF (15)
   VIRTIO_NET_F_GUEST_ANNOUNCE (21)
   VIRTIO_F_ANY_LAYOUT (27)
   VIRTIO_F_INDIRECT_DESC (28)
   VHOST_USER_F_PROTOCOL_FEATURES (30)
  protocol features (0x3)
   VHOST_USER_PROTOCOL_F_MQ (0)
   VHOST_USER_PROTOCOL_F_LOG_SHMFD (1)

 socket filename /var/run/vpp/sock1.sock type server errno "Success"

 rx placement:
   thread 1 on vring 1, polling
 tx placement: spin-lock
   thread 0 on vring 0
   thread 1 on vring 0
   thread 2 on vring 0

 Memory regions (total 2)
 region fdguest_phys_addrmemory_sizeuserspace_addr 
mmap_offsetmmap_addr
 == = == == == 
== ==
  0 320x 0x000c 0x7f5a6a60 
0x 0x7f47c420
  1 330x0010 0x3ff0 0x7f5a6a70 
0x0010 0x7f46f410

 Virtqueue 0 (TX)
  qsz 256 last_avail_idx 63392 last_used_idx 63392
  avail.flags 0 avail.idx 63530 used.flags 1 used.idx 63392
  kickfd 34 callfd 35 errfd -1

 Virtqueue 1 (RX)
  qsz 256 last_avail_idx 32414 last_used_idx 32414
  avail.flags 1 avail.idx 32414 used.flags 1 used.idx 32414
  kickfd 30 callfd 36 errfd -1


On Thu, Mar 22, 2018 at 3:07 PM, Sara Gittlin <sara.gitt...@gmail.com> wrote:
I don't think these are error counters - anyway, very poor pps.

On Thu, Mar 22, 2018 at 2:55 PM, Sara Gittlin <sara.gitt...@gmail.com> wrote:
In the show err output I see that the l2-output, l2-learn, and l2-input counters are
continuously incrementing:
show err
   Count      Node          Reason
      11    l2-output       L2 output packets
      11    l2-learn        L2 learn packets
      11    l2-input        L2 input packets
       3    l2-flood        L2 flood packets
 8479644    l2-output       L2 output packets
 8479644    l2-learn        L2 learn packets
 8479644    l2-input        L2 input packets

On Thu, Mar 22, 2018 at 11:59 AM, Sara Gittlin <sara.gitt...@gmail.com> wrote:
Hello,
I set up 2 VMs connected to VPP as per the guide:
https://wiki.fd.io/view/VPP/Use_VPP_to_connect_VMs_Using_Vhost-User_Interface

The performance looks very bad: very low pps and large latencies.

UDP packet size 100 B - throughput 500 Mb
Average latency is 900 us

I have 2 PMD threads (200% CPU) on the host; in the VMs I see low
CPU load (10%).

Can you please tell me what is wrong with my setup?

Thank you in advance,
- Sara







Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-22 Thread steven luong
Sara,

Could you please try again after adding the config below to startup.conf? I remember
some drivers don’t blast traffic at full speed unless they get a kick for
each packet sent. I am not sure if you are using one of those.

vhost-user {
  coalesce-frame 0
}
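
For context, a minimal sketch of where this stanza goes (assuming a stock package install, which the thread doesn't spell out: /etc/vpp/startup.conf, followed by a VPP restart):

unix {
  nodaemon
  cli-listen /run/vpp/cli.sock
}
vhost-user {
  coalesce-frame 0
}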

Steven

From:  on behalf of Sara Gittlin 
Date: Thursday, March 22, 2018 at 6:27 AM
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

this is the output of:
show vhost-user VirtualEthernet0/0/0
Virtio vhost-user interfaces
Global:
  coalesce frames 32 time 1e-3
  number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 1)
virtio_net_hdr_sz 12
 features mask (0x):
 features (0x58208000):
   VIRTIO_NET_F_MRG_RXBUF (15)
   VIRTIO_NET_F_GUEST_ANNOUNCE (21)
   VIRTIO_F_ANY_LAYOUT (27)
   VIRTIO_F_INDIRECT_DESC (28)
   VHOST_USER_F_PROTOCOL_FEATURES (30)
  protocol features (0x3)
   VHOST_USER_PROTOCOL_F_MQ (0)
   VHOST_USER_PROTOCOL_F_LOG_SHMFD (1)

 socket filename /var/run/vpp/sock1.sock type server errno "Success"

 rx placement:
   thread 1 on vring 1, polling
 tx placement: spin-lock
   thread 0 on vring 0
   thread 1 on vring 0
   thread 2 on vring 0

 Memory regions (total 2)
 region fdguest_phys_addrmemory_sizeuserspace_addr 
mmap_offsetmmap_addr
 == = == == == 
== ==
  0 320x 0x000c 0x7f5a6a60 
0x 0x7f47c420
  1 330x0010 0x3ff0 0x7f5a6a70 
0x0010 0x7f46f410

 Virtqueue 0 (TX)
  qsz 256 last_avail_idx 63392 last_used_idx 63392
  avail.flags 0 avail.idx 63530 used.flags 1 used.idx 63392
  kickfd 34 callfd 35 errfd -1

 Virtqueue 1 (RX)
  qsz 256 last_avail_idx 32414 last_used_idx 32414
  avail.flags 1 avail.idx 32414 used.flags 1 used.idx 32414
  kickfd 30 callfd 36 errfd -1


On Thu, Mar 22, 2018 at 3:07 PM, Sara Gittlin <sara.gitt...@gmail.com> wrote:
I don't think these are error counters - anyway, very poor pps.

On Thu, Mar 22, 2018 at 2:55 PM, Sara Gittlin <sara.gitt...@gmail.com> wrote:
In the show err output I see that the l2-output, l2-learn, and l2-input counters are
continuously incrementing:
show err
   Count      Node          Reason
      11    l2-output       L2 output packets
      11    l2-learn        L2 learn packets
      11    l2-input        L2 input packets
       3    l2-flood        L2 flood packets
 8479644    l2-output       L2 output packets
 8479644    l2-learn        L2 learn packets
 8479644    l2-input        L2 input packets

On Thu, Mar 22, 2018 at 11:59 AM, Sara Gittlin <sara.gitt...@gmail.com> wrote:
Hello,
I set up 2 VMs connected to VPP as per the guide:
https://wiki.fd.io/view/VPP/Use_VPP_to_connect_VMs_Using_Vhost-User_Interface

The performance looks very bad: very low pps and large latencies.

UDP packet size 100 B - throughput 500 Mb
Average latency is 900 us

I have 2 PMD threads (200% CPU) on the host; in the VMs I see low
CPU load (10%).

Can you please tell me what is wrong with my setup?

Thank you in advance,
- Sara







Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-22 Thread Sara Gittlin
this is the output of:
show vhost-user VirtualEthernet0/0/0
Virtio vhost-user interfaces
Global:
  coalesce frames 32 time 1e-3
  number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 1)
virtio_net_hdr_sz 12
 features mask (0x):
 features (0x58208000):
   VIRTIO_NET_F_MRG_RXBUF (15)
   VIRTIO_NET_F_GUEST_ANNOUNCE (21)
   VIRTIO_F_ANY_LAYOUT (27)
   VIRTIO_F_INDIRECT_DESC (28)
   VHOST_USER_F_PROTOCOL_FEATURES (30)
  protocol features (0x3)
   VHOST_USER_PROTOCOL_F_MQ (0)
   VHOST_USER_PROTOCOL_F_LOG_SHMFD (1)

 socket filename /var/run/vpp/sock1.sock type server errno "Success"

 rx placement:
   thread 1 on vring 1, polling
 tx placement: spin-lock
   thread 0 on vring 0
   thread 1 on vring 0
   thread 2 on vring 0

 Memory regions (total 2)
 region fdguest_phys_addrmemory_sizeuserspace_addr
mmap_offsetmmap_addr
 == = == == ==
== ==
  0 320x 0x000c 0x7f5a6a60
0x 0x7f47c420
  1 330x0010 0x3ff0 0x7f5a6a70
0x0010 0x7f46f410

 Virtqueue 0 (TX)
  qsz 256 last_avail_idx 63392 last_used_idx 63392
  avail.flags 0 avail.idx 63530 used.flags 1 used.idx 63392
  kickfd 34 callfd 35 errfd -1

 Virtqueue 1 (RX)
  qsz 256 last_avail_idx 32414 last_used_idx 32414
  avail.flags 1 avail.idx 32414 used.flags 1 used.idx 32414
  kickfd 30 callfd 36 errfd -1



On Thu, Mar 22, 2018 at 3:07 PM, Sara Gittlin 
wrote:

> i dont think these are error counters - anyway very poor pps
>
> On Thu, Mar 22, 2018 at 2:55 PM, Sara Gittlin 
> wrote:
>
>> in the show err output i see that  l2-output  l2-learn l2-input counters
>> are continuously  incremented :
>> show  err
>>CountNode  Reason
>> 11l2-output   L2 output packets
>> 11l2-learnL2 learn packets
>> 11l2-inputL2 input packets
>>  3l2-floodL2 flood packets
>>
>>
>>  8479644    l2-output       L2 output packets
>>  8479644    l2-learn        L2 learn packets
>>  8479644    l2-input        L2 input packets
>>
>> On Thu, Mar 22, 2018 at 11:59 AM, Sara Gittlin 
>> wrote:
>>
>>> Hello
>>> i setup 2 vm connected to VPP as per the guide :
>>> https://wiki.fd.io/view/VPP/Use_VPP_to_connect_VMs_Using_Vho
>>> st-User_Interface
>>>
>>> The performance looks very bad very low pps and large latencies
>>>
>>> udp pkt size 100B - throughput 500Mb
>>> average latency  is 900us
>>>
>>> i have  2 PMDs threads (200% cpu) in the host, in the VMs i see low
>>> cpu load (10%)
>>>
>>> Please can you tell me what is wrong with my setup ?
>>>
>>> Thank you in advance
>>> - Sara
>>>
>>>
>>>
>>>
>>
> 
>
>


Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-22 Thread Sara Gittlin
I don't think these are error counters - anyway, very poor pps.

On Thu, Mar 22, 2018 at 2:55 PM, Sara Gittlin 
wrote:

> in the show err output i see that  l2-output  l2-learn l2-input counters
> are continuously  incremented :
> show  err
>CountNode  Reason
> 11l2-output   L2 output packets
> 11l2-learnL2 learn packets
> 11l2-inputL2 input packets
>  3l2-floodL2 flood packets
>
>
>  8479644    l2-output       L2 output packets
>  8479644    l2-learn        L2 learn packets
>  8479644    l2-input        L2 input packets
>
> On Thu, Mar 22, 2018 at 11:59 AM, Sara Gittlin 
> wrote:
>
>> Hello
>> i setup 2 vm connected to VPP as per the guide :
>> https://wiki.fd.io/view/VPP/Use_VPP_to_connect_VMs_Using_Vho
>> st-User_Interface
>>
>> The performance looks very bad very low pps and large latencies
>>
>> udp pkt size 100B - throughput 500Mb
>> average latency  is 900us
>>
>> i have  2 PMDs threads (200% cpu) in the host, in the VMs i see low
>> cpu load (10%)
>>
>> Please can you tell me what is wrong with my setup ?
>>
>> Thank you in advance
>> - Sara
>>
>> 
>>
>>
>


Re: [vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-22 Thread Sara Gittlin
In the show err output I see that the l2-output, l2-learn, and l2-input counters
are continuously incrementing:
show err
   Count      Node          Reason
      11    l2-output       L2 output packets
      11    l2-learn        L2 learn packets
      11    l2-input        L2 input packets
       3    l2-flood        L2 flood packets
 8479644    l2-output       L2 output packets
 8479644    l2-learn        L2 learn packets
 8479644    l2-input        L2 input packets

On Thu, Mar 22, 2018 at 11:59 AM, Sara Gittlin 
wrote:

> Hello
> i setup 2 vm connected to VPP as per the guide :
> https://wiki.fd.io/view/VPP/Use_VPP_to_connect_VMs_Using_
> Vhost-User_Interface
>
> The performance looks very bad very low pps and large latencies
>
> udp pkt size 100B - throughput 500Mb
> average latency  is 900us
>
> i have  2 PMDs threads (200% cpu) in the host, in the VMs i see low
> cpu load (10%)
>
> Please can you tell me what is wrong with my setup ?
>
> Thank you in advance
> - Sara
>
> 
>
>


[vpp-dev] Very poor performance vm to vm via VPP vhostuser

2018-03-22 Thread Sara Gittlin
Hello,
I set up 2 VMs connected to VPP as per the guide:
https://wiki.fd.io/view/VPP/Use_VPP_to_connect_VMs_Using_Vhost-User_Interface

The performance looks very bad: very low pps and large latencies.

UDP packet size 100 B - throughput 500 Mb
Average latency is 900 us

I have 2 PMD threads (200% CPU) on the host; in the VMs I see low
CPU load (10%).

Can you please tell me what is wrong with my setup?

Thank you in advance,
- Sara
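
For reference, the guide linked above boils down to creating two vhost-user interfaces in VPP and putting them into the same L2 bridge domain; a hedged sketch of the VPP CLI (interface names follow VPP's VirtualEthernet naming, and the exact command syntax can differ between VPP releases):

vpp# create vhost-user socket /var/run/vpp/sock1.sock server
vpp# create vhost-user socket /var/run/vpp/sock2.sock server
vpp# set interface state VirtualEthernet0/0/0 up
vpp# set interface state VirtualEthernet0/0/1 up
vpp# set interface l2 bridge VirtualEthernet0/0/0 1
vpp# set interface l2 bridge VirtualEthernet0/0/1 1

Each VM then attaches to one of the sockets through its -chardev/-netdev vhost-user options, as in the qemu command lines earlier in the thread.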
