[ovs-discuss] A doubt about slow speed of ovs in userspace at about 3KB/s

2018-11-19 Thread ????????
Hi,
  I'm an engineer who often uses Open vSwitch and also develops on it. A few
days ago I found what may be a bug, or at least something I cannot explain.
  I deployed OVS on a physical machine running CentOS 7 with two onboard PCI
NICs and configured OVS to work in userspace, without the kernel module:
"ovs-vsctl set bridge br0 datapath_type=netdev".
Something strange happened: throughput was only 3KB/s (about 24Kbps). There
was no such problem with the kernel module; in kernel space OVS works as a
Gigabit switch.
I attempted the following:
1. I tried other PCI NICs (RealTek RTL8111 Gigabit, Intel 82566DM Gigabit,
Broadcom BCM5705 Gigabit); the bug followed me everywhere, always about
3KB/s.
2. I tried USB NICs (Realtek RTL8152 100Mbps, ASIX AX88772B 100Mbps):
a. The AX88772B transferred at 12MB/s, about 100Mbps of bandwidth;
b. The RTL8152 showed the same bug as the PCI NICs, only 3KB/s.
For detail:


May I guess as follows?
1. Is something wrong with how PCI NICs are driven in userspace?
2. Do some USB NICs work correctly in userspace while others do not?
3. Is the difference in their drivers, or in the Linux system?
I would appreciate help with the following:
1. I want to capture all packets passing through the function
"dp_netdev_execute_actions()" in dpif-netdev.c;
2. Is there any other way to capture all packets in userspace when OVS is
configured to use the kernel datapath?
3. Is this a bug, and will it be fixed in a future version?
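On question 1, one approach that does not require patching dpif-netdev.c is
to turn up logging for the userspace datapath and dump its installed flows.
This is only a sketch of the usual debugging route, not a substitute for a
real packet capture, and it assumes a default log location:

```shell
# Log userspace-datapath activity at debug level
# (the vlog module for dpif-netdev.c is named dpif_netdev)
ovs-appctl vlog/set dpif_netdev:file:dbg

# Dump the datapath flows the userspace switch has installed
ovs-appctl dpctl/dump-flows

# Follow the log for the per-packet details
tail -f /var/log/openvswitch/ovs-vswitchd.log
```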


Thanks! 


Nefusmzj
461123...@qq.com
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-19 Thread Siva Teja ARETI
On Mon, Nov 19, 2018 at 7:17 PM Gregory Rose  wrote:

>
> Hi Siva,
>
> One more request  - I need to see the underlying network configuration
> of the hypervisor running the two VMs.
> Are both VMs on the same machine?  If so then just the network
> configuration of the base machine
> running the VMs, otherwise the network configuration of each base
> machine running their respective
> VM.
>
> This is turning into quite the investigation and I apologize that it is
> taking so long.  Please bear with me
> if you can and we'll see if we can't get this problem solved.  I've seen
> some puzzling bugs before and
> this one is turning out to be one of the best.  Or worst depends on
> your outlook.  :)
>
> Thanks for all your help so far!
>
> - Greg
>

Hi Greg,

Both VMs run on the same hypervisor in my setup. I created the VMs and virtual
networks using virsh commands. The virsh XML for the networks looks like below:

[user@hyp1 ] virsh net-dumpxml route1

  route1
  2c935aaf-ebde-5b76-a903-4fccb115ff75
  
  
  
  

  

  


[user@hyp1 ] virsh net-dumpxml route2

  route2
  2c935baf-ebde-5b76-a903-4fccb115ff75
  
  
  
  

  

  


Each VM is connected to both networks.

Here is some of the network configuration of the hypervisor:

[user@hyp-1] ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: enp5s0:  mtu 1400 qdisc pfifo_fast
state UP group default qlen 1000
link/ether  brd ff:ff:ff:ff:ff:ff
inet A.B.C.D/24 brd X.Y.Z.W scope global dynamic enp5s0
   valid_lft 318349sec preferred_lft 318349sec
3: virbr0:  mtu 1500 qdisc noqueue state
UP group default qlen 1000
link/ether fe:54:00:0a:d3:70 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
   valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc pfifo_fast state DOWN
group default qlen 1000
link/ether 52:54:00:94:4e:04 brd ff:ff:ff:ff:ff:ff
11: docker0:  mtu 1500 qdisc noqueue state
UP group default
link/ether 02:42:89:28:db:a5 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
   valid_lft forever preferred_lft forever
inet6 fe80::42:89ff:fe28:dba5/64 scope link
   valid_lft forever preferred_lft forever
96: vboxnet0:  mtu 1500 qdisc pfifo_fast
state DOWN group default qlen 1000
link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 192.168.99.1/24 brd 192.168.99.255 scope global vboxnet0
   valid_lft forever preferred_lft forever
inet6 fe80::800:27ff:fe00:0/64 scope link
   valid_lft forever preferred_lft forever
193: testbr0:  mtu 1500 qdisc noqueue
state DOWN group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 10.10.0.1/24 brd 10.10.0.255 scope global testbr0
   valid_lft forever preferred_lft forever
194: testbr0-nic:  mtu 1500 qdisc pfifo_fast state
DOWN group default qlen 1000
link/ether 42:54:00:94:4e:04 brd ff:ff:ff:ff:ff:ff
227: testbr1:  mtu 1500 qdisc noqueue
state UP group default qlen 1000
link/ether fe:54:00:05:93:7c brd ff:ff:ff:ff:ff:ff
inet 20.20.0.1/24 brd 20.20.0.255 scope global testbr1
   valid_lft forever preferred_lft forever
228: testbr1-nic:  mtu 1500 qdisc pfifo_fast state
DOWN group default qlen 1000
link/ether 42:54:00:84:4e:04 brd ff:ff:ff:ff:ff:ff
229: testbr2:  mtu 1500 qdisc noqueue
state UP group default qlen 1000
link/ether fe:54:00:79:ef:92 brd ff:ff:ff:ff:ff:ff
inet 30.30.0.1/24 brd 30.30.0.255 scope global testbr2
   valid_lft forever preferred_lft forever
230: testbr2-nic:  mtu 1500 qdisc pfifo_fast state
DOWN group default qlen 1000
link/ether 42:54:10:84:4e:04 brd ff:ff:ff:ff:ff:ff
231: vnet0:  mtu 1500 qdisc pfifo_fast
master virbr0 state UNKNOWN group default qlen 1000
link/ether fe:54:00:0a:d3:70 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe0a:d370/64 scope link
   valid_lft forever preferred_lft forever
232: vnet1:  mtu 1500 qdisc pfifo_fast
master testbr2 state UNKNOWN group default qlen 1000
link/ether fe:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:feb8:5be/64 scope link
   valid_lft forever preferred_lft forever
233: vnet2:  mtu 1500 qdisc pfifo_fast
master testbr1 state UNKNOWN group default qlen 1000
link/ether fe:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fef0:6437/64 scope link
   valid_lft forever preferred_lft forever
234: vnet3:  mtu 1500 qdisc pfifo_fast
master virbr0 state UNKNOWN group default qlen 1000
link/ether fe:54:00:56:cb:89 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe56:cb89/64 scope link
   valid_lft forever preferred_lft forever
235: vnet4:  mtu 1500 qdisc pfifo_fast
master testbr2 state UNKNOWN group default qlen 1000
link/ether fe:54:00:79:ef:92 brd ff:ff:ff:ff:ff:ff

Re: [ovs-discuss] [ovs-dev] Packet Drop Issue in OVS-DPDK L2FWD Application

2018-11-19 Thread Robert Brooks
On Mon, Nov 19, 2018 at 5:36 AM Ian Stokes  wrote:

> On 11/18/2018 8:16 PM, vkrishnabhat k wrote:
> > Hi Team,
> >
> > I am new to OVS and DPDK. While I am using l2fwd application with OVS and
> > DPDK I am seeing packet drop issue in OVS bridge.
> >
> > Topology : My topology has Ubuntu machine (Ubuntu 18.04 LTS). I have
> > installed Qemu-KVM 2.11.1 version. Also I am using OVS-DPDK. Please find
> > the detailed topology attached with this mail. I have bound two NICs
> (Intel
> > 82599ES 10-gigabit ) to dpdk IGB_UIO driver and also have added same
> ports
> > in to OVS bridge "br0". I am trying to send the bidirectional traffic
> from
> > both the port and measure the throughput value for the l2fwd application
>

I also saw drops on br0 using OVS-DPDK with 82599ES cards; see the earlier
thread "poor ovs-dpdk performance vs ovs". Unfortunately the stats there got
truncated, so I re-ran the tests and include them here:

AutoAttach table: no rows.

Bridge table (one row):

_uuid         : 28911b6f-4f85-4a77-982c-d16b0e284e1a
datapath_id   : "001b21a6ddc4"
datapath_type : netdev
name          : "br0"
ports         : [257ad852-9078-4378-a996-3cbb7772457e,
                 2cff7d6e-2f3a-4aec-8f1c-f29125760771]
mcast_snooping_enable, rstp_enable, stp_enable: false; all other columns
(auto_attach, controller, external_ids, fail_mode, flood_vlans, flow_tables,
ipfix, mirrors, netflow, other_config, protocols, rstp_status, sflow, status)
empty.

Controller table: no rows.

Flow_Sample_Collector_Set table: no rows.

Flow_Table table: no rows.

IPFIX table: no rows.

Interface table: column headers only; the row data was truncated.

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-19 Thread Gregory Rose


Hi Siva,

One more request  - I need to see the underlying network configuration 
of the hypervisor running the two VMs.
Are both VMs on the same machine?  If so then just the network
configuration of the base machine running the VMs, otherwise the network
configuration of each base machine running their respective VM.

This is turning into quite the investigation and I apologize that it is 
taking so long.  Please bear with me
if you can and we'll see if we can't get this problem solved.  I've seen 
some puzzling bugs before and
this one is turning out to be one of the best.  Or worst depends on 
your outlook.  :)


Thanks for all your help so far!

- Greg


Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-19 Thread Siva Teja ARETI
On Mon, Nov 19, 2018 at 11:18 AM Gregory Rose  wrote:

>
> On 11/19/2018 7:50 AM, Siva Teja ARETI wrote:
>
>
>
> On Fri, Nov 16, 2018 at 4:52 PM Gregory Rose  wrote:
>
>> On 11/6/2018 8:51 AM, Siva Teja ARETI wrote:
>>
>> Hi Greg,
>>
>> Thanks for looking into this.
>>
>> I have two VMs in my setup each with two interfaces. Trying to setup the
>> VXLAN tunnels across these interfaces which are in different subnets. A
>> docker container is attached to ovs bridge using ovs-docker utility on each
>> VM and doing a ping from one container to another.
>>
>> *VM1 details:*
>>
>> [root@vm1 ~]# ip a
>> ...
>> 3: eth1:  mtu 1500 qdisc pfifo_fast
>> state UP qlen 1000
>> link/ether 52:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
>> inet 30.30.0.59/24 brd 30.30.0.255 scope global dynamic eth1
>>valid_lft 3002sec preferred_lft 3002sec
>> inet6 fe80::5054:ff:feb8:5be/64 scope link
>>valid_lft forever preferred_lft forever
>> 4: eth2:  mtu 1500 qdisc pfifo_fast
>> state UP qlen 1000
>> link/ether 52:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
>> inet 20.20.0.183/24 brd 20.20.0.255 scope global dynamic eth2
>>valid_lft 3248sec preferred_lft 3248sec
>> inet6 fe80::5054:ff:fef0:6437/64 scope link
>>valid_lft forever preferred_lft forever
>> ...
>>
>>
>> Hi Siva,
>>
>> I have a question.  Are you able to ping between the two interfaces on
>> VM1 with this command?:
>>
>> # ping 20.20.0.183 -I eth1
>>
>> thanks,
>>
>> - Greg
>>
> Hi Greg,
>
> Sorry for the late reply.
>
> Yes, I am able to ping between two interfaces.
>
> [root@localhost ~]# ovs-appctl dpif/show
> system@ovs-system: hit:2799 missed:198775
> testbr0:
> a0769422cfc04_l 2/3: (system)
> testbr0 65534/1: (internal)
> vxlan0 10/2: (vxlan: local_ip=30.30.0.193,
> remote_ip=20.20.0.183)
> [root@localhost ~]# ping 20.20.0.183 -I 30.30.0.193
> PING 20.20.0.183 (20.20.0.183) from 30.30.0.193 : 56(84) bytes of data.
> 64 bytes from 20.20.0.183: icmp_seq=1 ttl=64 time=0.470 ms
> 64 bytes from 20.20.0.183: icmp_seq=2 ttl=64 time=0.657 ms
> 64 bytes from 20.20.0.183: icmp_seq=3 ttl=64 time=0.685 ms
> 64 bytes from 20.20.0.183: icmp_seq=4 ttl=64 time=0.721 ms
> 64 bytes from 20.20.0.183: icmp_seq=5 ttl=64 time=0.630 ms
> 64 bytes from 20.20.0.183: icmp_seq=6 ttl=64 time=0.629 ms
> ^C
>
>
> Well that's probably where my setup isn't configured right.  What is the
> output of 'ip route' on that system?
>
> Thanks,
>
> - Greg
>

Hi Greg,

Here is the output of 'ip route' command.

[root@vm1 ~]# ip route
default via 192.168.122.1 dev eth0
20.20.0.0/24 dev eth2 proto kernel scope link src 20.20.0.64
30.30.0.0/24 dev eth1 proto kernel scope link src 30.30.0.193
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.165

Siva Teja.
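As a side note, the source-based routing shown in the earlier `ip rule list`
and `ip route show table siva` output can be reproduced with commands like
the following; the table name, table number, and addresses are simply the
values from this thread and would differ in another deployment:

```shell
# One-time: register a named routing table (100 is an arbitrary number)
echo "100 siva" >> /etc/iproute2/rt_tables

# Traffic sourced from the tunnel local_ip consults the "siva" table
ip rule add from 20.20.0.183 lookup siva

# That table sends it out eth2 with the matching source address
ip route add default dev eth2 scope link src 20.20.0.183 table siva
```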

> --- 20.20.0.183 ping statistics ---
> 6 packets transmitted, 6 received, 0% packet loss, time 5000ms
> rtt min/avg/max/mdev = 0.470/0.632/0.721/0.079 ms
> [root@localhost ~]#
>
>  Siva Teja.
>
>> [root@vm1 ~]# ovs-vsctl show
>> ff70c814-d1b0-4018-aee8-8b635187afee
>> Bridge "testbr0"
>> Port "gre0"
>> Interface "gre0"
>> type: gre
>> options: {local_ip="20.20.0.183", remote_ip="30.30.0.193"}
>> Port "testbr0"
>> Interface "testbr0"
>> type: internal
>> Port "2cfb62a9b0f04_l"
>> Interface "2cfb62a9b0f04_l"
>> ovs_version: "2.9.2"
>> [root@vm1 ~]# ip rule list
>> 0:  from all lookup local
>> 32765:  from 20.20.0.183 lookup siva
>> 32766:  from all lookup main
>> 32767:  from all lookup default
>> [root@vm1 ~]# ip route show table siva
>> default dev eth2 scope link src 20.20.0.183
>> [root@vm1 ~]# # A docker container is attached
>> to ovs bridge using ovs-docker utility
>> [root@vm1 ~]# docker ps
>> CONTAINER IDIMAGE   COMMAND CREATED
>>STATUS  PORTS   NAMES
>> be4ab434db99busybox "sh"5 days ago
>>   Up 5 days   admiring_euclid
>> [root@vm1 ~]# nsenter -n -t `docker inspect be4 --format={{.State.Pid}}`
>> -- ip a
>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> inet 127.0.0.1/8 scope host lo
>>valid_lft forever preferred_lft forever
>> inet6 ::1/128 scope host
>>valid_lft forever preferred_lft forever
>> 2: gre0@NONE:  mtu 1476 qdisc noop state DOWN qlen 1
>> link/gre 0.0.0.0 brd 0.0.0.0
>> 3: gretap0@NONE:  mtu 1462 qdisc noop state DOWN
>> qlen 1000
>> link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
>> 9: eth0@if10:  mtu 1500 qdisc noqueue
>> state UP qlen 1000
>> link/ether 22:98:41:0f:e8:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
>> inet 70.70.0.10/24 scope global eth0
>>valid_lft f

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-19 Thread Gregory Rose


On 11/19/2018 7:50 AM, Siva Teja ARETI wrote:



On Fri, Nov 16, 2018 at 4:52 PM Gregory Rose > wrote:


On 11/6/2018 8:51 AM, Siva Teja ARETI wrote:

Hi Greg,

Thanks for looking into this.

I have two VMs in my setup each with two interfaces. Trying to
setup the VXLAN tunnels across these interfaces which are in
different subnets. A docker container is attached to ovs bridge
using ovs-docker utility on each VM and doing a ping from one
container to another.

*VM1 details:*

[root@vm1 ~]# ip a
...
3: eth1:  mtu 1500 qdisc
pfifo_fast state UP qlen 1000
    link/ether 52:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
    inet 30.30.0.59/24  brd 30.30.0.255
scope global dynamic eth1
 valid_lft 3002sec preferred_lft 3002sec
    inet6 fe80::5054:ff:feb8:5be/64 scope link
 valid_lft forever preferred_lft forever
4: eth2:  mtu 1500 qdisc
pfifo_fast state UP qlen 1000
    link/ether 52:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
    inet 20.20.0.183/24  brd 20.20.0.255
scope global dynamic eth2
 valid_lft 3248sec preferred_lft 3248sec
    inet6 fe80::5054:ff:fef0:6437/64 scope link
 valid_lft forever preferred_lft forever
...


Hi Siva,

I have a question.  Are you able to ping between the two
interfaces on VM1 with this command?:

# ping 20.20.0.183 -I eth1

thanks,

- Greg

Hi Greg,

Sorry for the late reply.

Yes, I am able to ping between two interfaces.

[root@localhost ~]# ovs-appctl dpif/show
system@ovs-system: hit:2799 missed:198775
        testbr0:
                a0769422cfc04_l 2/3: (system)
                testbr0 65534/1: (internal)
                vxlan0 10/2: (vxlan: local_ip=30.30.0.193, 
remote_ip=20.20.0.183)

[root@localhost ~]# ping 20.20.0.183 -I 30.30.0.193
PING 20.20.0.183 (20.20.0.183) from 30.30.0.193 : 56(84) bytes of data.
64 bytes from 20.20.0.183 : icmp_seq=1 ttl=64 
time=0.470 ms
64 bytes from 20.20.0.183 : icmp_seq=2 ttl=64 
time=0.657 ms
64 bytes from 20.20.0.183 : icmp_seq=3 ttl=64 
time=0.685 ms
64 bytes from 20.20.0.183 : icmp_seq=4 ttl=64 
time=0.721 ms
64 bytes from 20.20.0.183 : icmp_seq=5 ttl=64 
time=0.630 ms
64 bytes from 20.20.0.183 : icmp_seq=6 ttl=64 
time=0.629 ms

^C


Well that's probably where my setup isn't configured right.  What is the 
output of 'ip route' on that system?


Thanks,

- Greg


--- 20.20.0.183 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5000ms
rtt min/avg/max/mdev = 0.470/0.632/0.721/0.079 ms
[root@localhost ~]#

 Siva Teja.


[root@vm1 ~]# ovs-vsctl show
ff70c814-d1b0-4018-aee8-8b635187afee
    Bridge "testbr0"
        Port "gre0"
Interface "gre0"
type: gre
options: {local_ip="20.20.0.183", remote_ip="30.30.0.193"}
        Port "testbr0"
Interface "testbr0"
type: internal
        Port "2cfb62a9b0f04_l"
Interface "2cfb62a9b0f04_l"
ovs_version: "2.9.2"
[root@vm1 ~]# ip rule list
0:      from all lookup local
32765:  from 20.20.0.183 lookup siva
32766:  from all lookup main
32767:  from all lookup default
[root@vm1 ~]# ip route show table siva
default dev eth2 scope link src 20.20.0.183
[root@vm1 ~]# # A docker container is
attached to ovs bridge using ovs-docker utility
[root@vm1 ~]# docker ps
CONTAINER ID   IMAGE  COMMAND  CREATED  STATUS PORTS  NAMES
be4ab434db99   busybox  "sh" 5 days ago Up 5 days  admiring_euclid
[root@vm1 ~]# nsenter -n -t `docker inspect be4
--format={{.State.Pid}}` -- ip a
1: lo:  mtu 65536 qdisc noqueue state
UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8  scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: gre0@NONE:  mtu 1476 qdisc noop state DOWN qlen 1
    link/gre 0.0.0.0 brd 0.0.0.0
3: gretap0@NONE:  mtu 1462 qdisc noop state
DOWN qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
9: eth0@if10:  mtu 1500 qdisc
noqueue state UP qlen 1000
    link/ether 22:98:41:0f:e8:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 70.70.0.10/24  scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2098:41ff:fe0f:e850/64 scope link
       valid_lft forever preferred_lft forever


*VM2 details:*
*
*
[root@vm2 ~]# ip a

3: eth1:  mtu 1500 qdisc
pfifo_fast state UP qlen 1000
    link/ether 52:54:00:79:ef:92 brd ff:ff:ff:ff:ff:ff
    inet 30.30.0.193/24  brd 30.30.0.255
scope global dynamic eth1
 va

[ovs-discuss] Web interface?

2018-11-19 Thread Alexandre Bruyere
Hello!

As the title says: I was wondering whether Open vSwitch has a web
interface, or whether one can be installed.

Thanks!


Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-19 Thread Siva Teja ARETI
On Fri, Nov 16, 2018 at 4:52 PM Gregory Rose  wrote:

> On 11/6/2018 8:51 AM, Siva Teja ARETI wrote:
>
> Hi Greg,
>
> Thanks for looking into this.
>
> I have two VMs in my setup each with two interfaces. Trying to setup the
> VXLAN tunnels across these interfaces which are in different subnets. A
> docker container is attached to ovs bridge using ovs-docker utility on each
> VM and doing a ping from one container to another.
>
> *VM1 details:*
>
> [root@vm1 ~]# ip a
> ...
> 3: eth1:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
> link/ether 52:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
> inet 30.30.0.59/24 brd 30.30.0.255 scope global dynamic eth1
>valid_lft 3002sec preferred_lft 3002sec
> inet6 fe80::5054:ff:feb8:5be/64 scope link
>valid_lft forever preferred_lft forever
> 4: eth2:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
> link/ether 52:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
> inet 20.20.0.183/24 brd 20.20.0.255 scope global dynamic eth2
>valid_lft 3248sec preferred_lft 3248sec
> inet6 fe80::5054:ff:fef0:6437/64 scope link
>valid_lft forever preferred_lft forever
> ...
>
>
> Hi Siva,
>
> I have a question.  Are you able to ping between the two interfaces on VM1
> with this command?:
>
> # ping 20.20.0.183 -I eth1
>
> thanks,
>
> - Greg
>
> Hi Greg,

Sorry for the late reply.

Yes, I am able to ping between two interfaces.

[root@localhost ~]# ovs-appctl dpif/show
system@ovs-system: hit:2799 missed:198775
testbr0:
a0769422cfc04_l 2/3: (system)
testbr0 65534/1: (internal)
vxlan0 10/2: (vxlan: local_ip=30.30.0.193,
remote_ip=20.20.0.183)
[root@localhost ~]# ping 20.20.0.183 -I 30.30.0.193
PING 20.20.0.183 (20.20.0.183) from 30.30.0.193 : 56(84) bytes of data.
64 bytes from 20.20.0.183: icmp_seq=1 ttl=64 time=0.470 ms
64 bytes from 20.20.0.183: icmp_seq=2 ttl=64 time=0.657 ms
64 bytes from 20.20.0.183: icmp_seq=3 ttl=64 time=0.685 ms
64 bytes from 20.20.0.183: icmp_seq=4 ttl=64 time=0.721 ms
64 bytes from 20.20.0.183: icmp_seq=5 ttl=64 time=0.630 ms
64 bytes from 20.20.0.183: icmp_seq=6 ttl=64 time=0.629 ms
^C
--- 20.20.0.183 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5000ms
rtt min/avg/max/mdev = 0.470/0.632/0.721/0.079 ms
[root@localhost ~]#

 Siva Teja.

> [root@vm1 ~]# ovs-vsctl show
> ff70c814-d1b0-4018-aee8-8b635187afee
> Bridge "testbr0"
> Port "gre0"
> Interface "gre0"
> type: gre
> options: {local_ip="20.20.0.183", remote_ip="30.30.0.193"}
> Port "testbr0"
> Interface "testbr0"
> type: internal
> Port "2cfb62a9b0f04_l"
> Interface "2cfb62a9b0f04_l"
> ovs_version: "2.9.2"
> [root@vm1 ~]# ip rule list
> 0:  from all lookup local
> 32765:  from 20.20.0.183 lookup siva
> 32766:  from all lookup main
> 32767:  from all lookup default
> [root@vm1 ~]# ip route show table siva
> default dev eth2 scope link src 20.20.0.183
> [root@vm1 ~]# # A docker container is attached to
> ovs bridge using ovs-docker utility
> [root@vm1 ~]# docker ps
> CONTAINER IDIMAGE   COMMAND CREATED
>  STATUS  PORTS   NAMES
> be4ab434db99busybox "sh"5 days ago
>   Up 5 days   admiring_euclid
> [root@vm1 ~]# nsenter -n -t `docker inspect be4 --format={{.State.Pid}}`
> -- ip a
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: gre0@NONE:  mtu 1476 qdisc noop state DOWN qlen 1
> link/gre 0.0.0.0 brd 0.0.0.0
> 3: gretap0@NONE:  mtu 1462 qdisc noop state DOWN
> qlen 1000
> link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 9: eth0@if10:  mtu 1500 qdisc noqueue
> state UP qlen 1000
> link/ether 22:98:41:0f:e8:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> inet 70.70.0.10/24 scope global eth0
>valid_lft forever preferred_lft forever
> inet6 fe80::2098:41ff:fe0f:e850/64 scope link
>valid_lft forever preferred_lft forever
>
>
> *VM2 details:*
>
> [root@vm2 ~]# ip a
> 
> 3: eth1:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
> link/ether 52:54:00:79:ef:92 brd ff:ff:ff:ff:ff:ff
> inet 30.30.0.193/24 brd 30.30.0.255 scope global dynamic eth1
>valid_lft 2406sec preferred_lft 2406sec
> inet6 fe80::5054:ff:fe79:ef92/64 scope link
>valid_lft forever preferred_lft forever
> 4: eth2:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
> link/ether 52:54:00:05:93:7c brd ff:ff:ff:ff:ff:ff
> inet 20.20.0.64/24 brd 20.20.0.255 scope global dynamic eth2
>valid_lft 2775sec preferred_lft 2775sec
> inet6 fe80::5054:ff:fe05:9

Re: [ovs-discuss] VLAN mode=dot1q-tunnel and tags in OVS

2018-11-19 Thread Sim Paul
>
>
> > I am still trying to understand the test case behavior that i pasted in
> my
> > previous email.
> > In my first test case when vlan-limit=1, the ping worked because
> > only the outside VLAN tag (36) was inspected ??
> > But in second case when i set vlan-limit=2, ping stopped working because
> > both tags 36 and 120 were inspected ?
> >
> > Shouldn't the ping work even in second test case ?
>
> I'm not sure. Your configuration is a bit odd. dot1q-tunnel should only
> be configured at the ends, but it sounds like you've added it to the
> patch ports as well.
>
Are you saying you are able to ping a virtual machine sitting on a
neighboring ovs bridge by simply configuring dot1q-tunnel at the end points
(the VM NICs)? Please confirm.
For me, if I don't configure all 4 ports (two VM vNICs and two patch ports)
as dot1q-tunnel, VM1 sitting on ovsbr1 CANNOT ping VM2 sitting on ovsbr2.
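For what it's worth, the usual minimal QinQ setup marks only the
customer-facing ports, roughly as below; the port name is hypothetical and
tag 36 is the outer VLAN from the test discussed above:

```shell
# VM-facing port: push outer tag 36 on ingress, pop it on egress
ovs-vsctl set port vm1-vnic vlan_mode=dot1q-tunnel tag=36

# Let OVS match more than one VLAN tag (needed for QinQ, OVS 2.8+)
ovs-vsctl set Open_vSwitch . other_config:vlan-limit=2
```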


Re: [ovs-discuss] OVN: MAC_Binding entries not getting updated leads to unreachable destinations

2018-11-19 Thread Numan Siddique
On Mon, Nov 19, 2018 at 2:56 PM Daniel Alvarez Sanchez 
wrote:

> Having thought this again, I'd rather merge the patch I proposed in my
> previous email (I'd need tests and propose a formal patch after your
> feedback) but in the long term I think it'd make sense to also implement
> some sort of aging to the MAC_Binding entries so that they eventually
> expire, especially for entries that come from external networks.
>
> On Fri, Nov 16, 2018 at 6:41 PM Daniel Alvarez Sanchez <
> dalva...@redhat.com> wrote:
>
>>
>> On Sat, Nov 10, 2018 at 12:21 AM Ben Pfaff  wrote:
>> >
>> > On Mon, Oct 29, 2018 at 05:21:13PM +0530, Numan Siddique wrote:
>> > > On Mon, Oct 29, 2018 at 5:00 PM Daniel Alvarez Sanchez <
>> dalva...@redhat.com>
>> > > wrote:
>> > >
>> > > > Hi,
>> > > >
>> > > > After digging further. The problem seems to be reduced to reusing an
>> > > > old gateway IP address for a dnat_and_snat entry.
>> > > > When a gateway port is bound to a chassis, its entry will show up in
>> > > > the MAC_Binding table (at least when that Logical Switch is
>> connected
>> > > > to more than one Logical Router). After deleting the Logical Router
>> > > > and all its ports, this entry will remain there. If a new Logical
>> > > > Router is created and a Floating IP (dnat_and_snat) is assigned to a
>> > > > VM with the old gw IP address, it will become unreachable.
>> > > >
>> > > > A workaround now from networking-ovn (OpenStack integration) is to
>> > > > delete MAC_Binding entries for that IP address upon a FIP creation.
>> I
>> > > > think that this however should be done from OVN, what do you folks
>> > > > think?
>> > > >
>> > > >
>> > > Agree. Since the MAC_Binding table row is created by ovn-controller,
>> it
>> > > should
>> > > be handled properly within OVN.
>> >
>> > I see that this has been sitting here for a while.  The solution seems
>> > reasonable to me.  Are either of you working on it?
>>
>> I started working on it. I came up with a solution (see patch below)
>> which works but I wanted to give you a bit more of context and get your
>> feedback:
>>
>>
>>^ localnet
>>|
>>+---+---+
>>|   |
>> +--+  pub  +--+
>> |  |   |  |
>> |  +---+  |
>> |172.24.4.0/24|
>> | |
>>172.24.4.220 | | 172.24.4.221
>> +---+---+ +---+---+
>> |   | |   |
>> |  LR0  | |  LR1  |
>> |   | |   |
>> +---+---+ +---+---+
>>  10.0.0.254 | | 20.0.0.254
>> | |
>> +---+---+ +---+---+
>> |   | |   |
>> 10.0.0.0/24 |  SW0  | |  SW1  | 20.0.0.0/24
>> |   | |   |
>> +---+---+ +---+---+
>> | |
>> | |
>> +---+---+ +---+---+
>> |   | |   |
>> |  VM0  | |  VM1  |
>> |   | |   |
>> +---+ +---+
>> 10.0.0.10 20.0.0.10
>>   172.24.4.100   172.24.4.200
>>
>>
>> When I ping VM1 floating IP from the external network, a new entry for
>> 172.24.4.221 in the LR0 datapath appears in the MAC_Binding table:
>>
>> _uuid   : 85e30e87-3c59-423e-8681-ec4cfd9205f9
>> datapath: ac5984b9-0fea-485f-84d4-031bdeced29b
>> ip  : "172.24.4.221"
>> logical_port: "lrp02"
>> mac : "00:00:02:01:02:04"
>>
>>
>> Now, if LR1 gets removed and the old gateway IP (172.24.4.221) is reused
>> for VM2 FIP with different MAC and new gateway IP is created (for example
>> 172.24.4.222 00:00:02:01:02:99),  VM2 FIP becomes unreachable from VM1
>> until the old MAC_Binding entry gets deleted as pinging 172.24.4.221 will
>> use the wrong address ("00:00:02:01:02:04").
>>
>> With the patch below, removing LR1 results in deleting all MAC_Binding
>> entries for every datapath where '172.24.4.221' appears in the 'ip' column
>> so the problem goes away.
>>
>> Another solution would be implementing some kind of 'aging' for
>> MAC_Binding entries but perhaps it's more complex.
>> Looking forward for your comments :)
>>
>>
As discussed with you offline, ageing by itself might not solve this issue. We
could still hit the problem until the MAC_Binding entry ages out and is
flushed. Your proposed solution seems fine to me.

Thanks
Numan
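Until a fix is merged, the networking-ovn-style workaround described earlier
can be done by hand against the southbound database; the IP and UUID below
are just the stale example values from this thread:

```shell
# List MAC_Binding rows for the reused gateway IP
ovn-sbctl --columns=_uuid,ip,mac,logical_port find mac_binding ip=172.24.4.221

# Destroy the stale row by its UUID
ovn-sbctl destroy mac_binding 85e30e87-3c59-423e-8681-ec4cfd9205f9
```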


>> diff --git a/ovn/northd/ovn-northd.c b/ovn/northd/ovn-northd.c
>> index 58bef7d..a86733e 100644
>> --- a/ovn/northd/ovn-northd.c
>> +++ b/ovn/northd/ovn-northd.c
>> @@ -232

Re: [ovs-discuss] [ovs-dev] Packet Drop Issue in OVS-DPDK L2FWD Application

2018-11-19 Thread Ian Stokes

On 11/18/2018 8:16 PM, vkrishnabhat k wrote:

Hi Team,

I am new to OVS and DPDK. While I am using l2fwd application with OVS and
DPDK I am seeing packet drop issue in OVS bridge.

Topology : My topology has Ubuntu machine (Ubuntu 18.04 LTS). I have
installed Qemu-KVM 2.11.1 version. Also I am using OVS-DPDK. Please find
the detailed topology attached with this mail. I have bound two NICs (Intel
82599ES 10-gigabit ) to dpdk IGB_UIO driver and also have added same ports
in to OVS bridge "br0". I am trying to send the bidirectional traffic from
both the port and measure the throughput value for the l2fwd application.




Could you please help me with below questions to understand l2fwd better.



Hi, just a few questions to clarify: I assume you mean you are running
the DPDK sample app 'l2fwd' in a virtual machine that is also connected
to bridge br0 via a vhostuser port?



What is the reason for packet drops in OVS bridge ?
What is the expected throughput value for l2fwd ?
How to improve the performance of l2fwd to get better throughput value ?
Is it possible to send or can l2fwd handle layer 7 traffic ?

I tried tuning performance by adding more Rx queues and increasing the Rx
queue size as per http://docs.openvswitch.org/en/latest/intro/install/dpdk/,
but it didn't help much.
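For reference, the rx-queue tuning described above is typically applied per
DPDK interface with commands like these; the port name, queue count, ring
sizes, and core mask are examples that would need adjusting to the machine:

```shell
# Use 4 receive queues on the physical DPDK port
ovs-vsctl set interface dpdk0 options:n_rxq=4

# Larger rx/tx descriptor rings (powers of two)
ovs-vsctl set interface dpdk0 options:n_rxq_desc=4096 options:n_txq_desc=4096

# Dedicate specific cores to PMD threads (mask is machine-specific)
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
```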



Can you provide what versions of OVS and DPDK are being used in the host 
and VM instances?



I have attached screen shots of the Topology, DPDK port statistics, OVS
configurations with this mail.


I don't see these attached; they may have been filtered. Could you copy
their output into the mail as text?


Thanks
Ian



It will be really great if you could help me with this.

Looking forward to hearing from you.

Thanks in advance.

Regards,
Venkat



___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev





Re: [ovs-discuss] OVN: MAC_Binding entries not getting updated leads to unreachable destinations

2018-11-19 Thread Daniel Alvarez Sanchez
Having thought this again, I'd rather merge the patch I proposed in my
previous email (I'd need tests and propose a formal patch after your
feedback) but in the long term I think it'd make sense to also implement
some sort of aging to the MAC_Binding entries so that they eventually
expire, especially for entries that come from external networks.

On Fri, Nov 16, 2018 at 6:41 PM Daniel Alvarez Sanchez 
wrote:

>
> On Sat, Nov 10, 2018 at 12:21 AM Ben Pfaff  wrote:
> >
> > On Mon, Oct 29, 2018 at 05:21:13PM +0530, Numan Siddique wrote:
> > > On Mon, Oct 29, 2018 at 5:00 PM Daniel Alvarez Sanchez <
> dalva...@redhat.com>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > After digging further. The problem seems to be reduced to reusing an
> > > > old gateway IP address for a dnat_and_snat entry.
> > > > When a gateway port is bound to a chassis, its entry will show up in
> > > > the MAC_Binding table (at least when that Logical Switch is connected
> > > > to more than one Logical Router). After deleting the Logical Router
> > > > and all its ports, this entry will remain there. If a new Logical
> > > > Router is created and a Floating IP (dnat_and_snat) is assigned to a
> > > > VM with the old gw IP address, it will become unreachable.
> > > >
> > > > A workaround now from networking-ovn (OpenStack integration) is to
> > > > delete MAC_Binding entries for that IP address upon a FIP creation. I
> > > > think that this however should be done from OVN, what do you folks
> > > > think?
> > > >
> > > >
> > > Agree. Since the MAC_Binding table row is created by ovn-controller, it
> > > should
> > > be handled properly within OVN.
> >
> > I see that this has been sitting here for a while.  The solution seems
> > reasonable to me.  Are either of you working on it?
>
> I started working on it. I came up with a solution (see patch below) which
> works but I wanted to give you a bit more of context and get your feedback:
>
>
>^ localnet
>|
>+---+---+
>|   |
> +--+  pub  +--+
> |  |   |  |
> |  +---+  |
> |172.24.4.0/24|
> | |
>172.24.4.220 | | 172.24.4.221
> +---+---+ +---+---+
> |   | |   |
> |  LR0  | |  LR1  |
> |   | |   |
> +---+---+ +---+---+
>  10.0.0.254 | | 20.0.0.254
> | |
> +---+---+ +---+---+
> |   | |   |
> 10.0.0.0/24 |  SW0  | |  SW1  | 20.0.0.0/24
> |   | |   |
> +---+---+ +---+---+
> | |
> | |
> +---+---+ +---+---+
> |   | |   |
> |  VM0  | |  VM1  |
> |   | |   |
> +---+ +---+
> 10.0.0.10 20.0.0.10
>   172.24.4.100   172.24.4.200
>
>
> When I ping VM1 floating IP from the external network, a new entry for
> 172.24.4.221 in the LR0 datapath appears in the MAC_Binding table:
>
> _uuid   : 85e30e87-3c59-423e-8681-ec4cfd9205f9
> datapath: ac5984b9-0fea-485f-84d4-031bdeced29b
> ip  : "172.24.4.221"
> logical_port: "lrp02"
> mac : "00:00:02:01:02:04"
>
>
> Now, if LR1 gets removed and the old gateway IP (172.24.4.221) is reused
> for VM2 FIP with different MAC and new gateway IP is created (for example
> 172.24.4.222 00:00:02:01:02:99),  VM2 FIP becomes unreachable from VM1
> until the old MAC_Binding entry gets deleted as pinging 172.24.4.221 will
> use the wrong address ("00:00:02:01:02:04").
>
> With the patch below, removing LR1 results in deleting all MAC_Binding
> entries for every datapath where '172.24.4.221' appears in the 'ip' column
> so the problem goes away.
>
> Another solution would be implementing some kind of 'aging' for
> MAC_Binding entries but perhaps it's more complex.
> Looking forward for your comments :)
>
>
> diff --git a/ovn/northd/ovn-northd.c b/ovn/northd/ovn-northd.c
> index 58bef7d..a86733e 100644
> --- a/ovn/northd/ovn-northd.c
> +++ b/ovn/northd/ovn-northd.c
> @@ -2324,6 +2324,18 @@ cleanup_mac_bindings(struct northd_context *ctx,
> struct hmap *ports)
>  }
>  }
>
> +static void
> +delete_mac_binding_by_ip(struct northd_context *ctx, const char *ip)
> +{
> +const struct sbrec_mac_binding *b, *n;
> +SBREC_MAC_BINDING_FOR_EACH_SAFE (b, n, ctx->ovnsb_idl) {
> +if (strstr(ip, b->ip)) {
> +sbrec_mac_binding_delete(b);
> +}
> +}
> +}
> +
> +
>  /* U