Re: [ovs-dev] Traffic fails in vhost user port

2017-04-04 Thread Nadathur, Sundar
You are perfectly right, Ilya! That fixed it. 

Thank you very much.

Regards,
Sundar

> -Original Message-
> From: Ilya Maximets [mailto:i.maxim...@samsung.com]
> Sent: Tuesday, April 4, 2017 2:45 AM
> To: Nadathur, Sundar <sundar.nadat...@intel.com>; ovs-d...@openvswitch.org
> Subject: Re: [ovs-dev] Traffic fails in vhost user port
>
> [snip]

Re: [ovs-dev] Traffic fails in vhost user port

2017-04-04 Thread Ilya Maximets
On 04.04.2017 12:26, Nadathur, Sundar wrote:
> [snip]

OK. I got it. Memory is not shared between OVS and VM.
To make vhost-user work, you must use the 'share' option for QEMU memory backing.

Please refer to Documentation/topics/dpdk/vhost-user.rst for a libvirt XML
example.  "memAccess='shared'" is what you need.

QEMU cmdline should contain something like this:
-object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,share=yes,size=10737418240,host-nodes=0,policy=bind
Maybe you can avoid using hugepages, but 'share=yes' is required for vhost-user
to work.
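The libvirt equivalent of QEMU's 'share=yes' is the memAccess attribute on the guest's NUMA cell, as described in vhost-user.rst. A minimal sketch, assuming a single-node 2 GB guest backed by 2 MB hugepages; the cell ID, CPU range, and sizes below are illustrative, not taken from this thread:

```xml
<memoryBacking>
  <hugepages>
    <page size='2' unit='M' nodeset='0'/>
  </hugepages>
</memoryBacking>
<cpu>
  <numa>
    <!-- memAccess='shared' makes QEMU map guest RAM with share=yes,
         so the vhost-user backend (OVS) can access it -->
    <cell id='0' cpus='0-1' memory='2097152' unit='KiB' memAccess='shared'/>
  </numa>
</cpu>
```

With this in the domain XML, libvirt generates a memory-backend-file object like the one shown above.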

Best regards, Ilya Maximets.



> [snip]

Re: [ovs-dev] Traffic fails in vhost user port

2017-04-04 Thread Nadathur, Sundar
Thanks, Ilya. 

# ovs-vsctl list Interface vi1
_uuid   : 30d1600a-ff7d-4bf5-9fdb-b0767af3611c
admin_state : up
bfd : {}
bfd_status  : {}
cfm_fault   : []
cfm_fault_status: []
cfm_flap_count  : []
cfm_health  : []
cfm_mpid: []
cfm_remote_mpids: []
cfm_remote_opstate  : []
duplex  : []
error   : []
external_ids: {}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current: []
link_resets : 0
link_speed  : []
link_state  : up
lldp: {}
mac : []
mac_in_use  : "00:00:00:00:00:00"
mtu : 1500
mtu_request : []
name: "vi1"
ofport  : 5
ofport_request  : []
options : {}
other_config: {}
statistics  : {"rx_1024_to_1518_packets"=0, "rx_128_to_255_packets"=0, 
"rx_1523_to_max_packets"=0, "rx_1_to_64_packets"=0, "rx_256_to_511_packets"=0, 
"rx_512_to_1023_packets"=0, "rx_65_to_127_packets"=0, rx_bytes=0, rx_dropped=0, 
rx_errors=0, tx_bytes=0, tx_dropped=11}
status  : {}
type: dpdkvhostuser

Here is the qemu command line split for readability:
/usr/libexec/qemu-kvm -name guest=vhu-vm1,debug-threads=on -S
   -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-3-vhu-vm1/master-key.aes
   -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off
   -m 2048 -mem-prealloc -mem-path /dev/hugepages/libvirt/qemu -realtime mlock=off
   -smp 2,sockets=2,cores=1,threads=1
   -uuid f5b8c05b-9c7a-3211-49b9-2bd635f7e2aa -no-user-config -nodefaults
   -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-3-vhu-vm1/monitor.sock,server,nowait
   -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
   -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
   -drive file=/home/nfv/Images/vm1.qcow2,format=qcow2,if=none,id=drive-virtio-disk0
   -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
   -chardev socket,id=charnet0,path=/usr/local/var/run/openvswitch/vi1
   -netdev vhost-user,chardev=charnet0,id=hostnet0
   -device virtio-net-pci,netdev=hostnet0,id=net0,mac=3a:19:09:52:14:50,bus=pci.0,addr=0x3
   -vnc 0.0.0.0:1
   -device cirrus-vga,id=video0,bus=pci.0,addr=0x2
   -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on

Re. ifconfig from VM, I have difficulty getting it right now over VPN, but I 
will get it by tomorrow morning. The 'ifconfig ' state is UP in the VM, IP 
address is configured as 200.1.1.2/24 in the virtio-net interface in the VM. 
Within the VM, the local address 200.1.1.2 can be pinged. 

Is there any good way to monitor packets flowing over vhost-user interface, 
such as wireshark for eth interfaces? 


Regards,
Sundar

> -Original Message-
> From: Ilya Maximets [mailto:i.maxim...@samsung.com]
> Sent: Tuesday, April 4, 2017 2:13 AM
> To: Nadathur, Sundar <sundar.nadat...@intel.com>; ovs-d...@openvswitch.org
> Subject: Re: [ovs-dev] Traffic fails in vhost user port
>
> [snip]

Re: [ovs-dev] Traffic fails in vhost user port

2017-04-04 Thread Ilya Maximets
On 04.04.2017 11:29, Nadathur, Sundar wrote:
> [snip]
Blank fields in 'list Interface' output are normal for vhost-user.

Looks like something is wrong with the VM.
Please provide the output of 'ip a' or 'ifconfig -a' from the VM and the full
output of 'ovs-vsctl list Interface vi1'. Also, the QEMU cmdline or libvirt XML
can be helpful.


Best regards, Ilya Maximets.
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] Traffic fails in vhost user port

2017-04-04 Thread Nadathur, Sundar
Since port 5 is dropping packets on the tx side, are there logs or more
granular error counters?

FWIW, here are snippets from the qemu process's command line:
   ...  -m 2048 -mem-prealloc -mem-path /dev/hugepages/libvirt/qemu ... 
   ... -chardev socket,id=charnet0,path=/usr/local/var/run/openvswitch/vi1 
-netdev vhost-user,chardev=charnet0,id=hostnet0 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=3a:19:09:52:14:50,bus=pci.0,addr=0x3 
...

srwxr-xr-x 1 root root 0 Apr  3 14:51 /usr/local/var/run/openvswitch/vi1 
This is root-owned, following instructions from the libvirt part of:
 http://docs.openvswitch.org/en/latest/topics/dpdk/vhost-user/ 
libvirtd and qemu-kvm are running as root. 

Regards,
Sundar

> -Original Message-
> From: ovs-dev-boun...@openvswitch.org [mailto:ovs-dev-boun...@openvswitch.org] On Behalf Of Nadathur, Sundar
> Sent: Tuesday, April 4, 2017 1:30 AM
> To: Ilya Maximets <i.maxim...@samsung.com>; ovs-dev@openvswitch.org
> Subject: Re: [ovs-dev] Traffic fails in vhost user port
>
> [snip]


Re: [ovs-dev] Traffic fails in vhost user port

2017-04-04 Thread Nadathur, Sundar
> -Original Message-
> From: Ilya Maximets [mailto:i.maxim...@samsung.com]
> Sent: Tuesday, April 4, 2017 12:07 AM
> To: ovs-dev@openvswitch.org; Nadathur, Sundar
> <sundar.nadat...@intel.com>
> Subject: [ovs-dev] Traffic fails in vhost user port
> 
> Hi Sundar.

> > The flows are configured as below:
> > # ovs-ofctl dump-flows br0
> > NXST_FLOW reply (xid=0x4):
> > cookie=0x0, duration=2833.612s, table=0, n_packets=0, n_bytes=0,
> > idle_age=2833, in_port=1 actions=output:5 cookie=0x2,
> > duration=2819.820s, table=0, n_packets=0, n_bytes=0, idle_age=2819,
> > in_port=5 actions=output:1
> 
> I guess, your flow table configured in a wrong way.
> OpenFlow port of br0 is LOCAL, not 1.
> Try this:
> 
> # ovs-ofctl del-flows br0
> 
> # ovs-ofctl add-flow br0 in_port=5,actions=output:LOCAL # ovs-ofctl add-flow
> br0 in_port=LOCAL,actions=output:5

Thank you, Ilya. I did as you suggested, but the ping traffic from br0 (LOCAL) 
is dropped by the output port 5:
# ovs-ofctl dump-flows br0
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=1922.876s, table=0, n_packets=0, n_bytes=0, 
idle_age=1922, in_port=5 actions=LOCAL
 cookie=0x0, duration=1915.458s, table=0, n_packets=6, n_bytes=252, 
idle_age=116, in_port=LOCAL actions=output:5

# ovs-ofctl dump-ports br0 # <-- Drops in port 5
OFPST_PORT reply (xid=0x2): 2 ports
  port  5: rx pkts=?, bytes=0, drop=0, errs=0, frame=?, over=?, crc=?
   tx pkts=?, bytes=0, drop=5, errs=?, coll=?
  port LOCAL: rx pkts=43, bytes=2118, drop=0, errs=0, frame=0, over=0, crc=0
   tx pkts=0, bytes=0, drop=0, errs=0, coll=0

Wireshark shows that br0 sends out 3 ARP requests but there is no response. 

> or
> 
> # ovs-ofctl add-flow br0 actions=NORMAL
I tried this too after doing del-flows. The LOCAL port's MAC is learnt, but
Wireshark still shows br0 sending out ARP requests with no response.

BTW, 'ovs-vsctl list Interface' shows the vi1 (VM port, #5) is up (most fields 
are blank):
_uuid   : 30d1600a-ff7d-4bf5-9fdb-b0767af3611c
admin_state : up
. . .
link_speed  : []
link_state  : up
. . .
mac_in_use  : "00:00:00:00:00:00"
mtu : 1500
mtu_request : []
name: "vi1"
. . .
statistics  : {"rx_1024_to_1518_packets"=0, "rx_128_to_255_packets"=0, 
"rx_1523_to_max_packets"=0, "rx_1_to_64_packets"=0, "rx_256_to_511_packets"=0, 
"rx_512_to_1023_packets"=0, "rx_65_to_127_packets"=0, rx_bytes=0, rx_dropped=0, 
rx_errors=0, tx_bytes=0, tx_dropped=8}
status  : {}
type: dpdkvhostuser

Is there any way to do the equivalent of a tcpdump or wireshark on a vhost user 
port?

Thanks,
Sundar
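The packet-capture question above goes unanswered in the thread. One generic approach that does not depend on the port type is OVS port mirroring: copy the vhost-user port's traffic to a capture port and run tcpdump there. A sketch, assuming bridge br0 and port vi1; the tap name mirror0 is made up, and whether a kernel tap can be attached depends on the datapath in use:

```shell
# Create a tap device and attach it to the bridge as a mirror target.
ip tuntap add mode tap mirror0
ip link set mirror0 up
ovs-vsctl add-port br0 mirror0

# Mirror all traffic to/from vi1 onto mirror0, then capture there.
ovs-vsctl -- --id=@p get port vi1 \
          -- --id=@out get port mirror0 \
          -- --id=@m create mirror name=m0 select-src-port=@p select-dst-port=@p output-port=@out \
          -- set bridge br0 mirrors=@m
tcpdump -i mirror0
```

To undo, run 'ovs-vsctl clear bridge br0 mirrors' and delete the tap. Newer OVS releases also ship an ovs-tcpdump helper that automates these steps.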


[ovs-dev] Traffic fails in vhost user port

2017-04-04 Thread Ilya Maximets
Hi Sundar.

> [snip]
> The flows are configured as below:
> # ovs-ofctl dump-flows br0
> NXST_FLOW reply (xid=0x4):
> cookie=0x0, duration=2833.612s, table=0, n_packets=0, n_bytes=0, 
> idle_age=2833, in_port=1 actions=output:5
> cookie=0x2, duration=2819.820s, table=0, n_packets=0, n_bytes=0, 
> idle_age=2819, in_port=5 actions=output:1

I guess your flow table is configured in a wrong way.
The OpenFlow port of br0 is LOCAL, not 1.
Try this:

# ovs-ofctl del-flows br0

# ovs-ofctl add-flow br0 in_port=5,actions=output:LOCAL
# ovs-ofctl add-flow br0 in_port=LOCAL,actions=output:5

or

# ovs-ofctl add-flow br0 actions=NORMAL



[ovs-dev] Traffic fails in vhost user port

2017-04-03 Thread Nadathur, Sundar
Hi,
I have an OVS bridge br0 with no NICs and one vhost-user port, which is
connected to a VM. But ping fails between the VM and the br0 port, either way.
The stats show zero all the time. Inside the VM, tcpdump shows nothing.

This is with OVS 2.7.0 and DPDK 17.02. Please indicate what could be going 
wrong.

In the host, the bridge's internal port is up.
# ip addr show br0
24: br0:  mtu 1500 qdisc noqueue state UNKNOWN 
qlen 500
link/ether 52:2f:57:13:d8:40 brd ff:ff:ff:ff:ff:ff
inet 200.1.1.1/24 scope global br0
   valid_lft forever preferred_lft forever

In the VM, the eth interface is up with address 200.1.1.2/24.

The ports are in the following state, even after "ovs-ofctl mod-port br0 vi1 
up":
# ovs-ofctl dump-ports-desc br0
OFPST_PORT_DESC reply (xid=0x2):
5(vi1): addr:00:00:00:00:00:00
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
LOCAL(br0): addr:52:2f:57:13:d8:40
 config: 0
 state:  0
 current:10MB-FD COPPER
 speed: 10 Mbps now, 0 Mbps max

The flows are configured as below:
# ovs-ofctl dump-flows br0
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=2833.612s, table=0, n_packets=0, n_bytes=0, idle_age=2833, 
in_port=1 actions=output:5
cookie=0x2, duration=2819.820s, table=0, n_packets=0, n_bytes=0, idle_age=2819, 
in_port=5 actions=output:1

The table info is as below:
# ovs-ofctl dump-tables br0 | more
OFPST_TABLE reply (xid=0x2):
  table 0 ("classifier"):
active=2, lookup=37, matched=28
max_entries=100
matching:
  in_port: exact match or wildcard
  eth_src: exact match or wildcard
  eth_dst: exact match or wildcard
  eth_type: exact match or wildcard
  vlan_vid: exact match or wildcard
  vlan_pcp: exact match or wildcard
  ip_src: exact match or wildcard
  ip_dst: exact match or wildcard
  nw_proto: exact match or wildcard
  nw_tos: exact match or wildcard
  tcp_src: exact match or wildcard
  tcp_dst: exact match or wildcard

  table 1 ("table1"):
   active=0, lookup=0, matched=0
(same features)

Thanks,
Sundar



