[dpdk-dev] KNI performance numbers...

2015-06-25 Thread Maciej Grochowski
I met a similar issue with KNI-connected VMs, but in my case I ran 2 VM
guests based on KNI and measured network performance between them:

session:

### I just started the kni demo

./build/kni -c 0xf0 -n 4 -- -P -p 0x3 --config="(0,4,6,8),(1,5,7,9)"

###starting...

### enable the socket on vEthX so the VM can connect (as in the example)

echo 1 > /sys/class/net/vEth0_0/sock_en
fd=`cat /sys/class/net/vEth0_0/sock_fd`

## start first guest VM
kvm -nographic -name vm1 -cpu host -m 2048 -smp 1 -hda
.../debian_squeeze_amd64.qcow2 -netdev tap,fd=$fd,id=hostnet1,vhost=on
-device virtio-net-pci,netdev=hostnet1,id=net1,bus=pci.0,addr=0x4

## start second guest VM
echo 1 > /sys/class/net/vEth1_0/sock_en
fd=`cat /sys/class/net/vEth1_0/sock_fd`

kvm -nographic -name vm2 -cpu host -m 2048 -smp 1 -hda
.../debian_squeeze2_amd64.qcow2 -netdev tap,fd=$fd,id=hostnet1,vhost=on
-device virtio-net-pci,netdev=hostnet1,id=net1,bus=pci.0,addr=0x4

### END: set up the 2 KVM virtual guests
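
### Inside the guests the addresses then need to be assigned; a minimal sketch
### (the eth0 name and the 10.0.0.x/24 addressing are assumptions here, only
### the 10.0.0.200 server address appears in the netperf run below)

## first guest (netperf server)
ip addr add 10.0.0.200/24 dev eth0
ip link set dev eth0 up

## second guest (netperf client, address illustrative)
ip addr add 10.0.0.100/24 dev eth0
ip link set dev eth0 up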


### on the first VM node, start the netperf server
 netserver -p 22113

### performance from second VM guest to first (server) using netperf

root@debian-amd64:~# netperf -H 10.0.0.200 -p 22113 -t TCP_STREAM
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
10.0.0.200 () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.01      219.86

So I got ~220 Mbit/s between the two VMs using KNI, but it was only an
experiment (I didn't analyze it deeply).
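
To compare more directly with the iperf numbers quoted below, netperf can be
pinned to similar send sizes; a sketch (not something from the original run):

netperf -H 10.0.0.200 -p 22113 -t TCP_STREAM -l 30 -- -m 1470   # 1470-byte sends, 30 s run
netperf -H 10.0.0.200 -p 22113 -t TCP_STREAM -l 30 -- -m 512    # 512-byte sends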

On Wed, Jun 24, 2015 at 7:58 AM, Vithal S Mohare 
wrote:

> Hi,
>
> I am running a DPDK KNI application on a Linux (3.18 kernel) VM (ESXi 5.5),
> directly connected to another Linux box, to measure throughput using the iperf
> tool.  Link speed: 1 Gbps.  The maximum throughput I get is about 50% of that,
> with 1470-byte packets.  With 512 B packet sizes, throughput drops to 282 Mbps.
>
> Tried using KNI loopback modes (and traffic from Ixia), but no change in
> throughput.
>
> KNI is running in single-thread mode.  One lcore for rx, one for tx and
> another for the kni thread.
>
> Is the result expected?  Has anybody got better numbers?  I'd appreciate any
> input and relevant info.
>
> Thanks,
> -Vithal
>


[dpdk-dev] Vhost user no connection vm2vm

2015-05-22 Thread Maciej Grochowski
Thank you Andriy, you are right.

Below I put the tested KVM configuration that gets packets into the vhost DPDK
data plane:

export TLBFS_DIR=/mnt/huge
export UVH_PREFIX=/home/ubuntu/esi_ee/dpdk/examples/vhost/usvhost
export VM1_MAC=00:01:04:00:01:00
kvm -cpu host -smp 2 -enable-kvm \
-drive if=virtio,file=debian_min_2.qcow2,cache=none \
-object
memory-backend-file,id=mem,size=756M,mem-path=${TLBFS_DIR},share=on \
-numa node,memdev=mem \
-m 756 -nographic \
-chardev socket,id=charnet0,path=${UVH_PREFIX} \
-netdev type=vhost-user,id=hostnet0,chardev=charnet0 \
-device virtio-net-pci,netdev=hostnet0,mac=${VM1_MAC}

Notice that:
memory-backend-file must be paired with the matching -numa node,memdev=mem option

So yes, the right configuration is essential :)
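
A quick way to sanity-check the shared hugepage backing; a sketch, assuming the
kvm wrapper launches qemu-system-x86_64 and ${TLBFS_DIR} is /mnt/huge as above:

# free 2 MB pages should drop once the guest boots
cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
# and the qemu process should have mappings backed by the hugetlbfs mount
grep /mnt/huge /proc/$(pidof -s qemu-system-x86_64)/maps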


On Fri, May 22, 2015 at 12:54 PM, Andriy Berestovskyy 
wrote:

> Hi guys,
> I guess you just miss the qemu flag to map the memory as shared, i.e.:
> -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on
> (the keyword is share=on)
>
> Here is an example script:
>
> https://github.com/srajag/contrail-vrouter/blob/dpdk-devel/examples/vms/VROUTER1/80.start-vm.sh
>
> Regards,
> Andriy
>
> On Fri, May 22, 2015 at 12:04 PM, Maciej Grochowski
>  wrote:
> > I checked this, results below
> >
> > #before script:
> > root@# cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
> > 494
> > #after 1 qemu script
> > root@# cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
> > 366
> >
> > So qemu consumed 128 huge pages = 262144 kB (256 MB), which matches the
> > -m 256 given to the guest in the script
> >
> > On Fri, May 22, 2015 at 11:58 AM, Tetsuya Mukawa 
> wrote:
> >
> >> Hi Maciej,
> >>
> >> I guess it's nice to make sure guest memory is actually allocated by
> >> hugepages.
> >> So please check like below.
> >>
> >> $ cat /sys/kernel/mm/hugepage/x/free_hugepages
> >> $ ./start_qemu.sh
> >> $ cat /sys/kernel/mm/hugepage/x/free_hugepages
> >>
> >> If qemu guest allocates memory from hugepages, 2nd cat command will
> >> indicate it.
> >>
> >> Thanks,
> >> Tetsuya
> >>
> >>
> >> On 2015/05/22 18:28, Maciej Grochowski wrote:
> >> > "Do you use some command I suggest before,
> >> > In case of you miss the previous mail, just copy it again:"
> >> >
> >> > -Yes but it didn't help me ;/
> >> >
> >> > I will describe it step by step to ensure that the configuration is
> >> > made the right way
> >> >
> >> >
> >> > I started vhost:
> >> >
> >> > ./build/app/vhost-switch -c f -n 4  --huge-dir /mnt/huge --socket-mem
> >> 3712
> >> > -- -p 0x1 --dev-basename usvhost --vm2vm 1 --stats 9
> >> >
> >> > Now I run two VM machines, with the following configuration
> >> >
> >> > VM1   __  __  VM2
> >> > eth0 >  \/  > eth0
> >> > eth1 >__/\__> eth1
> >> >
> >> > So I will connect VM1.eth0 with VM2.eth1 and VM1.eth1 with VM2.eth0
> >> > Because it is test env and I didn't have other network connection on
> >> vhost
> >> > I will create two networks 192.168.0.x and 192.168.1.x
> >> >  VM1.eth0 with VM2.eth1 will be placed in 192.168.0.x and VM1.eth1
> with
> >> > VM2.eth0 in 192.168.1.x
> >> >
> >> > ## I started the first VM (VM1) as follows
> >> > kvm -nographic -boot c -machine pc-i440fx-1.4,accel=kvm -name vm2 -cpu
> >> host
> >> > -smp 1 \
> >> > -hda /home/ubuntu/esi_ee/qemu/debian_min_1.qcow2 -m 256 -mem-path
> >> /mnt/huge
> >> > -mem-prealloc \
> >> > -chardev
> >> > socket,id=char3,path=/home/ubuntu/esi_ee/dpdk/examples/vhost/usvhost \
> >> > -netdev type=vhost-user,id=hostnet3,chardev=char3 \
> >> > -device
> >> >
> >>
> virtio-net-pci,netdev=hostnet3,id=net3,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
> >> > \
> >> > -chardev
> >> > socket,id=char4,path=/home/ubuntu/esi_ee/dpdk/examples/vhost/usvhost \
> >> > -netdev type=vhost-user,id=hostnet4,chardev=char4 \
> >> > -device
> >> >
> >>
> virtio-net-pci,netdev=hostnet4,id=net4,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
> >> > ## qemu gives the following output
> >> > qemu-system-x86_64: -netdev type=vhost-user,id=hostnet3,chardev=char3:
> >> > chardev "char3" went up
> >> > qemu-system-x86_64: -n

[dpdk-dev] Vhost user no connection vm2vm

2015-05-22 Thread Maciej Grochowski
I checked this, results below

#before script:
root@# cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
494
#after 1 qemu script
root@# cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
366

So qemu consumed 128 huge pages = 262144 kB (256 MB), which matches the -m 256
given to the guest in the script
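
The arithmetic behind that, as a sketch:

# 494 - 366 = 128 huge pages of 2048 kB each
echo $(( (494 - 366) * 2048 )) kB    # -> 262144 kB = 256 MB, matching -m 256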

On Fri, May 22, 2015 at 11:58 AM, Tetsuya Mukawa  wrote:

> Hi Maciej,
>
> I guess it's nice to make sure guest memory is actually allocated by
> hugepages.
> So please check like below.
>
> $ cat /sys/kernel/mm/hugepage/x/free_hugepages
> $ ./start_qemu.sh
> $ cat /sys/kernel/mm/hugepage/x/free_hugepages
>
> If qemu guest allocates memory from hugepages, 2nd cat command will
> indicate it.
>
> Thanks,
> Tetsuya
>
>
> On 2015/05/22 18:28, Maciej Grochowski wrote:
> > "Do you use some command I suggest before,
> > In case of you miss the previous mail, just copy it again:"
> >
> > -Yes but it didn't help me ;/
> >
> > I will describe it step by step to ensure that the configuration is made
> > the right way
> >
> >
> > I started vhost:
> >
> > ./build/app/vhost-switch -c f -n 4  --huge-dir /mnt/huge --socket-mem
> 3712
> > -- -p 0x1 --dev-basename usvhost --vm2vm 1 --stats 9
> >
> > Now I run two VM machines, with the following configuration
> >
> > VM1   __  __  VM2
> > eth0 >  \/  > eth0
> > eth1 >__/\__> eth1
> >
> > So I will connect VM1.eth0 with VM2.eth1 and VM1.eth1 with VM2.eth0
> > Because it is test env and I didn't have other network connection on
> vhost
> > I will create two networks 192.168.0.x and 192.168.1.x
> >  VM1.eth0 with VM2.eth1 will be placed in 192.168.0.x and VM1.eth1 with
> > VM2.eth0 in 192.168.1.x
> >
> > ## I started the first VM (VM1) as follows
> > kvm -nographic -boot c -machine pc-i440fx-1.4,accel=kvm -name vm2 -cpu
> host
> > -smp 1 \
> > -hda /home/ubuntu/esi_ee/qemu/debian_min_1.qcow2 -m 256 -mem-path
> /mnt/huge
> > -mem-prealloc \
> > -chardev
> > socket,id=char3,path=/home/ubuntu/esi_ee/dpdk/examples/vhost/usvhost \
> > -netdev type=vhost-user,id=hostnet3,chardev=char3 \
> > -device
> >
> virtio-net-pci,netdev=hostnet3,id=net3,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
> > \
> > -chardev
> > socket,id=char4,path=/home/ubuntu/esi_ee/dpdk/examples/vhost/usvhost \
> > -netdev type=vhost-user,id=hostnet4,chardev=char4 \
> > -device
> >
> virtio-net-pci,netdev=hostnet4,id=net4,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
> > ## qemu gives the following output
> > qemu-system-x86_64: -netdev type=vhost-user,id=hostnet3,chardev=char3:
> > chardev "char3" went up
> > qemu-system-x86_64: -netdev type=vhost-user,id=hostnet4,chardev=char4:
> > chardev "char4" went up
> >
> > ## second VM2
> > kvm -nographic -boot c -machine pc-i440fx-1.4,accel=kvm -name vm1 -cpu
> host
> > -smp 1 \
> > -hda /home/ubuntu/esi_ee/qemu/debian_min_2.qcow2 -m 256 -mem-path
> /mnt/huge
> > -mem-prealloc \
> > -chardev
> > socket,id=char1,path=/home/ubuntu/esi_ee/dpdk/examples/vhost/usvhost \
> > -netdev type=vhost-user,id=hostnet1,chardev=char1 \
> > -device
> >
> virtio-net-pci,netdev=hostnet1,id=net1,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
> > \
> > -chardev
> > socket,id=char2,path=/home/ubuntu/esi_ee/dpdk/examples/vhost/usvhost \
> > -netdev type=vhost-user,id=hostnet2,chardev=char2 \
> > -device
> >
> virtio-net-pci,netdev=hostnet2,id=net2,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
> > ## second output
> > qemu-system-x86_64: -netdev type=vhost-user,id=hostnet1,chardev=char1:
> > chardev "char1" went up
> > qemu-system-x86_64: -netdev type=vhost-user,id=hostnet2,chardev=char2:
> > chardev "char2" went up
> >
> >
> >
> > After that I had MAC conflict between VM2 and VM1
> >
> > VM1: -ifconfig -a
> > eth0  Link encap:Ethernet  HWaddr 52:54:00:12:34:56
> >   inet6 addr: fe80::5054:ff:fe12:3456/64 Scope:Link
> >   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> >   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
> >   collisions:0 txqueuelen:1000
> >   RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
> >
> > eth1  Link encap:Ethernet  HWaddr 52:54:00:12:34:57
> >   BROADCAST MULTICAST  MTU:1500  Metric:1
> >   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> >   TX packets:0 errors:0 dro

[dpdk-dev] Vhost user no connection vm2vm

2015-05-22 Thread Maciej Grochowski
"Do you use some command I suggest before,
In case of you miss the previous mail, just copy it again:"

-Yes but it didn't help me ;/

I will describe it step by step to ensure that the configuration is made the
right way


I started vhost:

./build/app/vhost-switch -c f -n 4  --huge-dir /mnt/huge --socket-mem 3712
-- -p 0x1 --dev-basename usvhost --vm2vm 1 --stats 9

Now I run two VM machines, with the following configuration

VM1   __  __  VM2
eth0 >  \/  > eth0
eth1 >__/\__> eth1

So I will connect VM1.eth0 with VM2.eth1 and VM1.eth1 with VM2.eth0.
Because it is a test env and I don't have another network connection on the
vhost, I will create two networks, 192.168.0.x and 192.168.1.x:
VM1.eth0 and VM2.eth1 will be placed in 192.168.0.x, and VM1.eth1 and VM2.eth0
in 192.168.1.x.

## I started the first VM (VM1) as follows
kvm -nographic -boot c -machine pc-i440fx-1.4,accel=kvm -name vm2 -cpu host
-smp 1 \
-hda /home/ubuntu/esi_ee/qemu/debian_min_1.qcow2 -m 256 -mem-path /mnt/huge
-mem-prealloc \
-chardev
socket,id=char3,path=/home/ubuntu/esi_ee/dpdk/examples/vhost/usvhost \
-netdev type=vhost-user,id=hostnet3,chardev=char3 \
-device
virtio-net-pci,netdev=hostnet3,id=net3,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
\
-chardev
socket,id=char4,path=/home/ubuntu/esi_ee/dpdk/examples/vhost/usvhost \
-netdev type=vhost-user,id=hostnet4,chardev=char4 \
-device
virtio-net-pci,netdev=hostnet4,id=net4,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
## qemu gives the following output
qemu-system-x86_64: -netdev type=vhost-user,id=hostnet3,chardev=char3:
chardev "char3" went up
qemu-system-x86_64: -netdev type=vhost-user,id=hostnet4,chardev=char4:
chardev "char4" went up

## second VM2
kvm -nographic -boot c -machine pc-i440fx-1.4,accel=kvm -name vm1 -cpu host
-smp 1 \
-hda /home/ubuntu/esi_ee/qemu/debian_min_2.qcow2 -m 256 -mem-path /mnt/huge
-mem-prealloc \
-chardev
socket,id=char1,path=/home/ubuntu/esi_ee/dpdk/examples/vhost/usvhost \
-netdev type=vhost-user,id=hostnet1,chardev=char1 \
-device
virtio-net-pci,netdev=hostnet1,id=net1,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
\
-chardev
socket,id=char2,path=/home/ubuntu/esi_ee/dpdk/examples/vhost/usvhost \
-netdev type=vhost-user,id=hostnet2,chardev=char2 \
-device
virtio-net-pci,netdev=hostnet2,id=net2,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
## second output
qemu-system-x86_64: -netdev type=vhost-user,id=hostnet1,chardev=char1:
chardev "char1" went up
qemu-system-x86_64: -netdev type=vhost-user,id=hostnet2,chardev=char2:
chardev "char2" went up



After that I had a MAC conflict between VM2 and VM1

VM1: -ifconfig -a
eth0  Link encap:Ethernet  HWaddr 52:54:00:12:34:56
  inet6 addr: fe80::5054:ff:fe12:3456/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth1  Link encap:Ethernet  HWaddr 52:54:00:12:34:57
  BROADCAST MULTICAST  MTU:1500  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


VM2: -ifconfig -a
eth0  Link encap:Ethernet  HWaddr 52:54:00:12:34:56
  inet6 addr: fe80::5054:ff:fe12:3456/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth1  Link encap:Ethernet  HWaddr 52:54:00:12:34:57
  BROADCAST MULTICAST  MTU:1500  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

In the KNI example I had something similar (also no packet flow) and the
solution was to change the MAC addresses

#VM1
/etc/init.d/networking stop
ifconfig eth0 hw ether 00:01:04:00:01:00
ifconfig eth1 hw ether 00:01:04:00:01:01
/etc/init.d/networking start
ifconfig eth0
ifconfig eth1

#VM2
/etc/init.d/networking stop
ifconfig eth0 hw ether 00:01:04:00:02:00
ifconfig eth1 hw ether 00:01:04:00:02:01
/etc/init.d/networking start
ifconfig eth0
ifconfig eth1
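
The same MAC change can be done with iproute2 instead of ifconfig; a sketch
using the VM2 eth0 address from above:

ip link set dev eth0 down
ip link set dev eth0 address 00:01:04:00:02:00
ip link set dev eth0 up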

Then I made the configuration that you showed:

#VM1
ip addr add 192.168.0.100/24 dev eth0
ip addr add 192.168.1.100/24 dev eth1
ip neigh add 192.168.0.200 lladdr 00:01:04:00:02:01 dev eth0
ip link set dev eth0 up
ip neigh add 192.168.1.200 lladdr 00:01:04:00:02:00 dev eth1
ip link set dev eth1 up

eth0  Link encap:Ethernet  HWaddr 00:01:04:00:01:00
  inet addr:192.168.0.100  Bcast:0.0.0.0  Mask:255.255.255.0
  inet6 addr: fe80::201:4ff:fe00:100/64 
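
The VM2 side is cut off above; it would mirror the VM1 commands, roughly like
this (a sketch inferred from the addressing plan and the MACs set earlier):

#VM2
ip addr add 192.168.1.200/24 dev eth0
ip addr add 192.168.0.200/24 dev eth1
ip neigh add 192.168.1.100 lladdr 00:01:04:00:01:01 dev eth0
ip link set dev eth0 up
ip neigh add 192.168.0.100 lladdr 00:01:04:00:01:00 dev eth1
ip link set dev eth1 up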

[dpdk-dev] FW: Vhost user no connection vm2vm

2015-05-22 Thread Maciej Grochowski
Unfortunately not, I have the same issue in the rte_vhost_dequeue_burst
function.

What kernel version are you using on host/guest? In my case the host had
3.13.0 and the guests an old 3.2 Debian.

I just looked deeper into the virtio back-end (vhost), but at first glance it
seems like nothing is coming from virtio.

What I'm going to do today is compile the newest kernel for the host and the
guests and debug where the packet flow gets stuck; I will report the results.

On Thu, May 21, 2015 at 11:12 AM, Gaohaifeng (A) 
wrote:

> Hi Maciej
> Did you solve your problem? I am hitting this problem just like in your case,
> and I found that avail_idx (in the rte_vhost_dequeue_burst function) is always
> zero although I do send packets in the VM.
>
> Thanks.
>
>
> > Hello, I have strange issue with example/vhost app.
> >
> > I had compiled DPDK to run a vhost example app with followed flags
> >
> > CONFIG_RTE_LIBRTE_VHOST=y
> > CONFIG_RTE_LIBRTE_VHOST_USER=y
> > CONFIG_RTE_LIBRTE_VHOST_DEBUG=n
> >
> > then I run vhost app based on documentation:
> >
> >  ./build/app/vhost-switch -c f -n 4  --huge-dir /mnt/huge --socket-mem
> > 3712
> > -- -p 0x1 --dev-basename usvhost --vm2vm 1 --stats 9
> >
> > -I use this strange --socket-mem 3712 because of the physical memory limit
> > on the device
> > -with this vhost user I run two KVM machines with the following parameters
> >
> > kvm -nographic -boot c -machine pc-i440fx-1.4,accel=kvm -name vm1 -cpu
> > host -smp 2 -hda /home/ubuntu/qemu/debian_squeeze2_amd64.qcow2 -m
> > 1024 -mem-path /mnt/huge -mem-prealloc -chardev
> > socket,id=char1,path=/home/ubuntu/dpdk/examples/vhost/usvhost
> > -netdev type=vhost-user,id=hostnet1,chardev=char1
> > -device
> > virtio-net-pci,netdev=hostnet1,id=net1,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
> > -chardev
> > socket,id=char2,path=/home/ubuntu/dpdk/examples/vhost/usvhost
> > -netdev type=vhost-user,id=hostnet2,chardev=char2
> > -device
> > virtio-net-pci,netdev=hostnet2,id=net2,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
> >
> > After running KVM virtio correctly starting (below logs from vhost app)
> ...
> > VHOST_CONFIG: mapped region 0 fd:31 to 0x2aaabae0 sz:0xa
> > off:0x0
> > VHOST_CONFIG: mapped region 1 fd:37 to 0x2aaabb00 sz:0x1000
> > off:0xc
> > VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
> > VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
> > VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
> > VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
> > VHOST_CONFIG: vring kick idx:0 file:38
> > VHOST_CONFIG: virtio isn't ready for processing.
> > VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
> > VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
> > VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
> > VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
> > VHOST_CONFIG: vring kick idx:1 file:39
> > VHOST_CONFIG: virtio is now ready for processing.
> > VHOST_DATA: (1) Device has been added to data core 2
> >
> > So everything looking good.
> >
> > Maybe it is something trivial but using options: --vm2vm 1 (or) 2
> > --stats 9 it seems that I didn't have connection between VM2VM
> > communication. I set manually IP for eth0 and eth1:
> >
> > on 1 VM
> > ifconfig eth0 192.168.0.100 netmask 255.255.255.0 up ifconfig eth1
> > 192.168.1.101 netmask 255.255.255.0 up
> >
> > on 2 VM
> > ifconfig eth0 192.168.1.200 netmask 255.255.255.0 up ifconfig eth1
> > 192.168.0.202 netmask 255.255.255.0 up
> >
> > I notice that in vhostapp are one directional rx/tx queue so I tryied
> > to ping between VM1 to VM2 using both interfaces ping -I eth0
> > 192.168.1.200 ping -I
> > eth1 192.168.1.200 ping -I eth0 192.168.0.202 ping -I eth1
> > 192.168.0.202
> >
> > on VM2 using tcpdump on both interfaces I didn't see any ICMP requests
> > or traffic
> >
> > And I cant ping between any IP/interfaces, moreover stats show me that:
> >
> > Device statistics 
> > Statistics for device 0 --
> > TX total:   0
> > TX dropped: 0
> > TX successful:  0
> > RX total:   0
> > RX dropped: 0
> > RX successful:  0
> > Statistics for device 1 --
> > TX total:   0
> > TX dropped: 0
> > TX successful:  0
> > RX total:   0
> > RX dropped: 0
> > RX successful:  0
> > Statistics for device 2 --
> > TX total:   0
> > TX dropped: 0
> > TX successful:  0
> > RX total:   0
> > RX dropped: 0
> > RX successful:  0
> > Statistics for device 3 --
> > TX total:   0
> > TX dropped: 0
> > TX successful:  0
> > RX total:   0
> > RX dropped: 0
> > RX successful:  0
> > ==
> >
> > So it seems 

[dpdk-dev] Vhost user no connection vm2vm

2015-05-15 Thread Maciej Grochowski
Hello, I have a strange issue with the example/vhost app.

I compiled DPDK to run the vhost example app with the following flags

CONFIG_RTE_LIBRTE_VHOST=y
CONFIG_RTE_LIBRTE_VHOST_USER=y
CONFIG_RTE_LIBRTE_VHOST_DEBUG=n

then I ran the vhost app based on the documentation:

 ./build/app/vhost-switch -c f -n 4  --huge-dir /mnt/huge --socket-mem 3712
-- -p 0x1 --dev-basename usvhost --vm2vm 1 --stats 9

-I use this strange --socket-mem 3712 because of the physical memory limit on
the device
-with this vhost user I run two KVM machines with the following parameters

kvm -nographic -boot c -machine pc-i440fx-1.4,accel=kvm -name vm1 -cpu host
-smp 2
-hda /home/ubuntu/qemu/debian_squeeze2_amd64.qcow2 -m 1024 -mem-path
/mnt/huge -mem-prealloc
-chardev socket,id=char1,path=/home/ubuntu/dpdk/examples/vhost/usvhost
-netdev type=vhost-user,id=hostnet1,chardev=char1
-device
virtio-net-pci,netdev=hostnet1,id=net1,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
-chardev socket,id=char2,path=/home/ubuntu/dpdk/examples/vhost/usvhost
-netdev type=vhost-user,id=hostnet2,chardev=char2
-device
virtio-net-pci,netdev=hostnet2,id=net2,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off

After running KVM, virtio starts correctly (below, the logs from the vhost app)
...
VHOST_CONFIG: mapped region 0 fd:31 to 0x2aaabae0 sz:0xa off:0x0
VHOST_CONFIG: mapped region 1 fd:37 to 0x2aaabb00 sz:0x1000
off:0xc
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:38
VHOST_CONFIG: virtio isn't ready for processing.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:39
VHOST_CONFIG: virtio is now ready for processing.
VHOST_DATA: (1) Device has been added to data core 2

So everything is looking good.

Maybe it is something trivial, but with the options --vm2vm 1 (or 2) --stats 9
it seems that I don't get any VM-to-VM connectivity. I set IPs manually for
eth0 and eth1:

on 1 VM
ifconfig eth0 192.168.0.100 netmask 255.255.255.0 up
ifconfig eth1 192.168.1.101 netmask 255.255.255.0 up

on 2 VM
ifconfig eth0 192.168.1.200 netmask 255.255.255.0 up
ifconfig eth1 192.168.0.202 netmask 255.255.255.0 up

I noticed that in the vhost app the rx/tx queues are one-directional,
so I tried to ping from VM1 to VM2 using both interfaces:
ping -I eth0 192.168.1.200
ping -I eth1 192.168.1.200
ping -I eth0 192.168.0.202
ping -I eth1 192.168.0.202

on VM2, using tcpdump on both interfaces, I didn't see any ICMP requests or
other traffic

And I can't ping between any IPs/interfaces; moreover, the stats show:

Device statistics 
Statistics for device 0 --
TX total:   0
TX dropped: 0
TX successful:  0
RX total:   0
RX dropped: 0
RX successful:  0
Statistics for device 1 --
TX total:   0
TX dropped: 0
TX successful:  0
RX total:   0
RX dropped: 0
RX successful:  0
Statistics for device 2 --
TX total:   0
TX dropped: 0
TX successful:  0
RX total:   0
RX dropped: 0
RX successful:  0
Statistics for device 3 --
TX total:   0
TX dropped: 0
TX successful:  0
RX total:   0
RX dropped: 0
RX successful:  0
==

So it seems like no packet leaves my VM;
also the ARP table is empty on each VM.

ifconfig -a shows that no packets go across eth0 or eth1, which I used for the
pings, but everything goes across the local loopback

eth0  Link encap:Ethernet  HWaddr 52:54:00:12:34:56
  inet addr:192.168.0.200  Bcast:192.168.0.255  Mask:255.255.255.0
  inet6 addr: fe80::5054:ff:fe12:3456/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth1  Link encap:Ethernet  HWaddr 52:54:00:12:34:57
  inet addr:192.168.1.202  Bcast:192.168.1.255  Mask:255.255.255.0
  inet6 addr: fe80::5054:ff:fe12:3457/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

loLink 

[dpdk-dev] Issues with example/vhost with running VM

2015-05-14 Thread Maciej Grochowski
 be wrong with configuration between VM2VM?


On Wed, May 13, 2015 at 7:53 PM, Xie, Huawei  wrote:

> Try --socket-mem or -m 2048 to limit the vhost switch's memory
> consumption; note that the vswitch requires several GB of memory due to some
> issue in the example, so try allocating more huge pages.
>
> > -Original Message-
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Maciej Grochowski
> > Sent: Wednesday, May 13, 2015 11:00 PM
> > To: dev at dpdk.org
> > Subject: [dpdk-dev] Issues with example/vhost with running VM
> >
> > Hello, I am trying to create a vm2vm benchmark on my Ubuntu (14.04) based
> > platform.
> >
> > I had compiled DPDK to run a vhost example app with followed flags
> >
> > CONFIG_RTE_LIBRTE_VHOST=y
> > CONFIG_RTE_LIBRTE_VHOST_USER=y
> > CONFIG_RTE_LIBRTE_VHOST_DEBUG=n
> >
> >
> > then I run vhost app based on documentation:
> >
> > ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- -p 0x1
> > --dev-basename usvhost
> >
> > then I trying to start kvm VM
> >
> > kvm -nographic -boot c -machine pc-i440fx-1.4,accel=kvm -name vm1 -cpu
> > host
> > -smp 2 -mem-path /mnt/huge -mem-prealloc \
> > -hda /home/ubuntu/qemu/debian_squeeze2_amd64.qcow2 -m 4096  \
> > -chardev
> > socket,id=char1,path=/home/ubuntu/dpdk/examples/vhost/usvhost \
> > -netdev type=vhost-user,id=hostnet1,chardev=char1 \
> > -device
> > virtio-net-
> > pci,netdev=hostnet1,id=net1,csum=off,gso=off,guest_tso4=off,guest_tso6=o
> > ff,guest_ecn=off
> > \
> > -chardev
> > socket,id=char2,path=/home/ubuntu/dpdk/examples/vhost/usvhost \
> > -netdev type=vhost-user,id=hostnet2,chardev=char2 \
> > -device
> > virtio-net-
> > pci,netdev=hostnet2,id=net2,csum=off,gso=off,guest_tso4=off,guest_tso6=o
> > ff,guest_ecn=off
> >
> > but this give me an error:
> >
> > qemu-system-x86_64: -netdev type=vhost-user,id=hostnet1,chardev=char1:
> > chardev "char1" went up
> > qemu-system-x86_64: unable to map backing store for hugepages: Cannot
> > allocate memory
> >
> >
> > On vhost app in logs I can see:
> >
> > VHOST_DATA: Procesing on Core 1 started
> > VHOST_DATA: Procesing on Core 2 started
> > VHOST_DATA: Procesing on Core 3 started
> > VHOST_CONFIG: socket created, fd:25
> > VHOST_CONFIG: bind to usvhost
> > VHOST_CONFIG: new virtio connection is 26
> > VHOST_CONFIG: new device, handle is 0
> > VHOST_CONFIG: read message VHOST_USER_SET_OWNER
> > VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
> > VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
> > VHOST_CONFIG: vring call idx:0 file:27
> > VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
> > VHOST_CONFIG: vring call idx:1 file:28
> > VHOST_CONFIG: recvmsg failed
> > VHOST_CONFIG: vhost peer closed
> >
> >
> > So that looks at huge page memory problem.
> > On my machine I had 2048k huge pages, and I can allocate 2479.
> >
> > before I run vhost "cat /proc/meminfo | grep Huge" show
> >
> > AnonHugePages:  4096 kB
> > HugePages_Total:2479
> > HugePages_Free: 2479
> > HugePages_Rsvd:0
> > HugePages_Surp:0
> > Hugepagesize:   2048 kB
> >
> > and while running vhost:
> >
> > AnonHugePages:  4096 kB
> > HugePages_Total:2479
> > HugePages_Free:0
> > HugePages_Rsvd:0
> > HugePages_Surp:0
> > Hugepagesize:   2048 kB
> >
> > so that looks that I didn't have free hugepages for my VM. But this looks
> > as independently if I reserve 1k 2k or 2.5k memory always example/vhost
> got
> > whole memory.
> >
> > Any help will be greatly appreciated
>
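
A minimal sketch of that suggestion applied to the command line used here (the
--socket-mem value is illustrative; it has to leave enough 2 MB pages for the
guest's -mem-path):

./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge --socket-mem 2048 \
    -- -p 0x1 --dev-basename usvhost
# with 2479 x 2 MB pages in total, this leaves roughly (2479 - 1024) x 2 MB,
# about 2.8 GB of huge pages, for the VM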


[dpdk-dev] Issues with example/vhost with running VM

2015-05-13 Thread Maciej Grochowski
Hello, I am trying to create a vm2vm benchmark on my Ubuntu (14.04) based
platform.

I compiled DPDK to run the vhost example app with the following flags

CONFIG_RTE_LIBRTE_VHOST=y
CONFIG_RTE_LIBRTE_VHOST_USER=y
CONFIG_RTE_LIBRTE_VHOST_DEBUG=n


then I ran the vhost app based on the documentation:

./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- -p 0x1
--dev-basename usvhost

then I try to start the kvm VM

kvm -nographic -boot c -machine pc-i440fx-1.4,accel=kvm -name vm1 -cpu host
-smp 2 -mem-path /mnt/huge -mem-prealloc \
-hda /home/ubuntu/qemu/debian_squeeze2_amd64.qcow2 -m 4096  \
-chardev socket,id=char1,path=/home/ubuntu/dpdk/examples/vhost/usvhost \
-netdev type=vhost-user,id=hostnet1,chardev=char1 \
-device
virtio-net-pci,netdev=hostnet1,id=net1,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
\
-chardev socket,id=char2,path=/home/ubuntu/dpdk/examples/vhost/usvhost \
-netdev type=vhost-user,id=hostnet2,chardev=char2 \
-device
virtio-net-pci,netdev=hostnet2,id=net2,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off

but this gives me an error:

qemu-system-x86_64: -netdev type=vhost-user,id=hostnet1,chardev=char1:
chardev "char1" went up
qemu-system-x86_64: unable to map backing store for hugepages: Cannot
allocate memory


In the vhost app logs I can see:

VHOST_DATA: Procesing on Core 1 started
VHOST_DATA: Procesing on Core 2 started
VHOST_DATA: Procesing on Core 3 started
VHOST_CONFIG: socket created, fd:25
VHOST_CONFIG: bind to usvhost
VHOST_CONFIG: new virtio connection is 26
VHOST_CONFIG: new device, handle is 0
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:27
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:28
VHOST_CONFIG: recvmsg failed
VHOST_CONFIG: vhost peer closed


So that looks like a huge page memory problem.
On my machine I have 2048 kB huge pages, and I can allocate 2479 of them.

before I run vhost, "cat /proc/meminfo | grep Huge" shows

AnonHugePages:  4096 kB
HugePages_Total:2479
HugePages_Free: 2479
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB

and while running vhost:

AnonHugePages:  4096 kB
HugePages_Total:2479
HugePages_Free:0
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB

so it looks like I don't have any free hugepages left for my VM. And this seems
to happen independently of how much I reserve: whether I reserve 1k, 2k or 2.5k
huge pages, example/vhost always grabs all of the memory.
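
One way to check and grow the reservation at runtime; a sketch with
illustrative numbers (whether the kernel can actually reach the requested count
depends on free, unfragmented memory):

echo 4096 > /proc/sys/vm/nr_hugepages    # request more 2 MB pages
grep Huge /proc/meminfo                  # verify HugePages_Total / HugePages_Free
mount | grep /mnt/huge || mount -t hugetlbfs nodev /mnt/huge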

Any help will be greatly appreciated