Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-12-17 Thread Siva Teja ARETI
Thanks, Greg, for explaining the correct way to do this.

Siva Teja.

On Fri, Nov 30, 2018 at 12:55 PM Gregory Rose  wrote:

>
>
> On 11/28/2018 3:15 PM, Siva Teja ARETI wrote:
>
> Hi Greg,
>
> Please find the answers inline below.
>
> On Tue, Nov 27, 2018 at 1:35 PM Gregory Rose  wrote:
>
>> Siva,
>>
>> You have a routing issue.
>> See inter alia
>> https://github.com/OpenNebula/one/issues/2161
>>
>> http://wwwaem.brocade.com/content/html/en/brocade-validated-design/brocade-vcs-fabric-ip-storage-bvd/GUID-CB5BFC4D-B2BE-4E9C-BA91-7E7E9BD35FCC.html
>>
>> http://blog.arunsriraman.com/2017/02/how-to-setting-up-gre-or-vxlan-tunnel.html
>>
>> For this to work you must be able to ping from the local IP to the remote
>> IP *through* the remote IP address. As we have seen, that doesn't work.
>>
>
> Did you mean being able to ping using the remote interface? I am able to get
> this to work when I connect the two bridges using a veth pair.
>
> [root@vm1 ~]# ping 30.30.0.193 -I eth2
> PING 30.30.0.193 (30.30.0.193) from 20.20.0.183 eth2: 56(84) bytes of data.
> 64 bytes from 30.30.0.193: icmp_seq=1 ttl=64 time=0.655 ms
> 64 bytes from 30.30.0.193: icmp_seq=2 ttl=64 time=0.574 ms
> 64 bytes from 30.30.0.193: icmp_seq=3 ttl=64 time=0.600 ms
> 64 bytes from 30.30.0.193: icmp_seq=4 ttl=64 time=0.604 ms
> 64 bytes from 30.30.0.193: icmp_seq=5 ttl=64 time=0.607 ms
> 64 bytes from 30.30.0.193: icmp_seq=6 ttl=64 time=0.620 ms
> 64 bytes from 30.30.0.193: icmp_seq=7 ttl=64 time=0.466 ms
> 64 bytes from 30.30.0.193: icmp_seq=8 ttl=64 time=0.623 ms
> ^C
> --- 30.30.0.193 ping statistics ---
> 8 packets transmitted, 8 received, 0% packet loss, time 7000ms
> rtt min/avg/max/mdev = 0.466/0.593/0.655/0.059 ms
>
> Even with this routing setup, the local_ip option does not seem to work with
> vxlan tunnels, while GRE tunnels work.
>
>
> So what you did there with the veth pair is not routing, it's bridging.
>
>
> As an aside, why do you have two bridges to the same VMs?  Your
>> configuration makes it impossible to
>> set a route because  you have two sets of IP addresses and routes all on
>> two bridges going into the same
>> VMs.  In that configuration the local ip option makes  no sense.  You
>> don't need it - you're already bridged.
>>
>
> I was trying to mimic a use case with two hypervisors where each
> hypervisor is connected to two different underlay networks, so I used Linux
> bridges when imitating the topology with VMs. Please advise if this is not
> the right approach.
>
>
> I don't see how that can work - there does not seem to be enough
> isolation.  The VMs are still connected to
> a single hypervisor and they're all bridged, not routed.
>
>
> I understand that you have seen the gre configuration work and I'm not
>> sure why because it has the same
>> requirements for the local ip to be routable through the remote ip.  And
>> again, there is no point to the
>> local ip option because the ip addresses do not need to be routed to
>> reach each other.
>>
>> In any case, I'm going to set up a valid configuration and then make sure
>> that the local ip option does work
>> or not.  I'll report back when I'm done.
>>
>>
> I will look out for your conclusions.
>
>
> So I have gotten both gre and vxlan to work with the local_ip option.
>
> Below is my setup for vxlan. The one for gre is identical except it is gre
> tunneling instead of vxlan tunneling.
> The notable configuration here is the vxlan0 port with its local_ip and
> remote_ip options.  With this setup I can do this:
>
> From Machine B to Machine A:
> # ip netns exec ns0 ping 10.1.1.1
> PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
> 64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=0.966 ms
> 64 bytes from 10.1.1.1: icmp_seq=2 ttl=64 time=0.128 ms
> 64 bytes from 10.1.1.1: icmp_seq=3 ttl=64 time=0.116 ms
> 64 bytes from 10.1.1.1: icmp_seq=4 ttl=64 time=0.113 ms
> 64 bytes from 10.1.1.1: icmp_seq=5 ttl=64 time=0.155 ms
> 64 bytes from 10.1.1.1: icmp_seq=6 ttl=64 time=0.124 ms
> 64 bytes from 10.1.1.1: icmp_seq=7 ttl=64 time=0.133 ms
>
> As you can see the vxlan tunnel with local_ip option works fine when the
> base configuration is done
> correctly.  I think a lot of confusion in this case has been between
> bridging and routing.  They are
> really separate concepts.
>
> I hope this helps.
>
> Thanks,
>
> - Greg
>
> Setup follows:
>
> Machine A:
> # ovs-vsctl show
> e4490ab5-ba93-4291-8a4f-c6f71292310b
> Bridge br-test
>     Port "vxlan0"
>         Interface "vxlan0"
>             type: vxlan
>             options: {key="100", local_ip="201.20.20.1", remote_ip="200.0.0.2"}
>     Port "p1"
>         Interface "p1"
>     Port br-test
>         Interface br-test
>             type: internal
> Bridge "br0"
>     Port "br0-peer"
>         Interface "br0-peer"
>             type: patch
>             options: {peer="br1-peer"}
>     Port "em2"
>         Interface "em2"
>     Port "br0"
>         Interface "br0"
>             type: internal
>

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-30 Thread Gregory Rose



On 11/28/2018 3:15 PM, Siva Teja ARETI wrote:

Hi Greg,

Please find the answers inline below.

On Tue, Nov 27, 2018 at 1:35 PM Gregory Rose wrote:


Siva,

You have a routing issue.

See inter alia
https://github.com/OpenNebula/one/issues/2161

http://wwwaem.brocade.com/content/html/en/brocade-validated-design/brocade-vcs-fabric-ip-storage-bvd/GUID-CB5BFC4D-B2BE-4E9C-BA91-7E7E9BD35FCC.html

http://blog.arunsriraman.com/2017/02/how-to-setting-up-gre-or-vxlan-tunnel.html

For this to work you must be able to ping from the local IP to the
remote IP *through* the remote IP address. As we have seen, that
doesn't work.
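
A quick way to check this (a sketch, not a command from the original exchange; substitute the addresses from your own setup) is to ask the kernel which route it picks for that source address and then force the source on a ping:

# ip route get 30.30.0.193 from 20.20.0.183
# ping -I 20.20.0.183 30.30.0.193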


Did you mean being able to ping using the remote interface? I am able to
get this to work when I connect the two bridges using a veth pair.
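
(For reference, a veth pair joining two Linux bridges can be set up along these lines - a sketch with hypothetical names veth-a/veth-b; testbr1/testbr2 are the hypervisor bridges shown in the brctl output later in the thread:)

# ip link add veth-a type veth peer name veth-b
# ip link set veth-a master testbr1 up
# ip link set veth-b master testbr2 up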


[root@vm1 ~]# ping 30.30.0.193 -I eth2
PING 30.30.0.193 (30.30.0.193) from 20.20.0.183 eth2: 56(84) bytes of data.
64 bytes from 30.30.0.193: icmp_seq=1 ttl=64 time=0.655 ms
64 bytes from 30.30.0.193: icmp_seq=2 ttl=64 time=0.574 ms
64 bytes from 30.30.0.193: icmp_seq=3 ttl=64 time=0.600 ms
64 bytes from 30.30.0.193: icmp_seq=4 ttl=64 time=0.604 ms
64 bytes from 30.30.0.193: icmp_seq=5 ttl=64 time=0.607 ms
64 bytes from 30.30.0.193: icmp_seq=6 ttl=64 time=0.620 ms
64 bytes from 30.30.0.193: icmp_seq=7 ttl=64 time=0.466 ms
64 bytes from 30.30.0.193: icmp_seq=8 ttl=64 time=0.623 ms
^C
--- 30.30.0.193 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7000ms
rtt min/avg/max/mdev = 0.466/0.593/0.655/0.059 ms
Even with this routing setup, the local_ip option does not seem to work
with vxlan tunnels, while GRE tunnels work.


So what you did there with the veth pair is not routing, it's bridging.



As an aside, why do you have two bridges to the same VMs?  Your
configuration makes it impossible to
set a route because  you have two sets of IP addresses and routes
all on two bridges going into the same
VMs.  In that configuration the local ip option makes no sense. 
You don't need it - you're already bridged.


I was trying to mimic a use case with two hypervisors where each
hypervisor is connected to two different underlay networks, so I used
Linux bridges when imitating the topology with VMs. Please advise if
this is not the right approach.


I don't see how that can work - there does not seem to be enough
isolation.  The VMs are still connected to a single hypervisor and
they're all bridged, not routed.



I understand that you have seen the gre configuration work and I'm
not sure why because it has the same
requirements for the local ip to be routable through the remote
ip.  And again, there is no point to the
local ip option because the ip addresses do not need to be routed
to reach each other.

In any case, I'm going to set up a valid configuration and then
make sure that the local ip option does work
or not.  I'll report back when I'm done.


I will look out for your conclusions.



So I have gotten both gre and vxlan to work with the local_ip option.

Below is my setup for vxlan. The one for gre is identical except it is 
gre tunneling instead of vxlan tunneling.
The notable configuration here is the vxlan0 port with its local_ip and
remote_ip options.  With this setup I can do this:


From Machine B to Machine A:
# ip netns exec ns0 ping 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=0.966 ms
64 bytes from 10.1.1.1: icmp_seq=2 ttl=64 time=0.128 ms
64 bytes from 10.1.1.1: icmp_seq=3 ttl=64 time=0.116 ms
64 bytes from 10.1.1.1: icmp_seq=4 ttl=64 time=0.113 ms
64 bytes from 10.1.1.1: icmp_seq=5 ttl=64 time=0.155 ms
64 bytes from 10.1.1.1: icmp_seq=6 ttl=64 time=0.124 ms
64 bytes from 10.1.1.1: icmp_seq=7 ttl=64 time=0.133 ms

As you can see the vxlan tunnel with local_ip option works fine when the
base configuration is done correctly.  I think a lot of confusion in this
case has been between bridging and routing.  They are really separate
concepts.

I hope this helps.

Thanks,

- Greg

Setup follows:

Machine A:
# ovs-vsctl show
e4490ab5-ba93-4291-8a4f-c6f71292310b
    Bridge br-test
        Port "vxlan0"
            Interface "vxlan0"
                type: vxlan
                options: {key="100", local_ip="201.20.20.1", remote_ip="200.0.0.2"}
        Port "p1"
            Interface "p1"
        Port br-test
            Interface br-test
                type: internal
    Bridge "br0"
        Port "br0-peer"
            Interface "br0-peer"
                type: patch
                options: {peer="br1-peer"}
        Port "em2"
            Interface "em2"
        Port "br0"
            Interface "br0"
                type: internal
    Bridge "br1"
        Port "br1-peer"
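
(For reference, a tunnel port like "vxlan0" above is created with ovs-vsctl along these lines - a sketch using the names from this setup, not the literal commands used here; for the GRE variant the same command takes type=gre:)

# ovs-vsctl add-port br-test vxlan0 -- set interface vxlan0 type=vxlan \
    options:key=100 options:local_ip=201.20.20.1 options:remote_ip=200.0.0.2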

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-28 Thread Siva Teja ARETI
Hi Greg,

Please find the answers inline below.

On Tue, Nov 27, 2018 at 1:35 PM Gregory Rose  wrote:

> Siva,
>
> You have a routing issue.
> See inter alia
> https://github.com/OpenNebula/one/issues/2161
>
> http://wwwaem.brocade.com/content/html/en/brocade-validated-design/brocade-vcs-fabric-ip-storage-bvd/GUID-CB5BFC4D-B2BE-4E9C-BA91-7E7E9BD35FCC.html
>
> http://blog.arunsriraman.com/2017/02/how-to-setting-up-gre-or-vxlan-tunnel.html
>
> For this to work you must be able to ping from the local IP to the remote
> IP *through* the remote IP address. As we have seen, that doesn't work.
>

Did you mean being able to ping using the remote interface? I am able to get
this to work when I connect the two bridges using a veth pair.

[root@vm1 ~]# ping 30.30.0.193 -I eth2
PING 30.30.0.193 (30.30.0.193) from 20.20.0.183 eth2: 56(84) bytes of data.
64 bytes from 30.30.0.193: icmp_seq=1 ttl=64 time=0.655 ms
64 bytes from 30.30.0.193: icmp_seq=2 ttl=64 time=0.574 ms
64 bytes from 30.30.0.193: icmp_seq=3 ttl=64 time=0.600 ms
64 bytes from 30.30.0.193: icmp_seq=4 ttl=64 time=0.604 ms
64 bytes from 30.30.0.193: icmp_seq=5 ttl=64 time=0.607 ms
64 bytes from 30.30.0.193: icmp_seq=6 ttl=64 time=0.620 ms
64 bytes from 30.30.0.193: icmp_seq=7 ttl=64 time=0.466 ms
64 bytes from 30.30.0.193: icmp_seq=8 ttl=64 time=0.623 ms
^C
--- 30.30.0.193 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7000ms
rtt min/avg/max/mdev = 0.466/0.593/0.655/0.059 ms

Even with this routing setup, the local_ip option does not seem to work
with vxlan tunnels, while GRE tunnels work.

As an aside, why do you have two bridges to the same VMs?  Your
> configuration makes it impossible to
> set a route because  you have two sets of IP addresses and routes all on
> two bridges going into the same
> VMs.  In that configuration the local ip option makes  no sense.  You
> don't need it - you're already bridged.
>

I was trying to mimic a use case with two hypervisors where each
hypervisor is connected to two different underlay networks, so I used Linux
bridges when imitating the topology with VMs. Please advise if this is not
the right approach.

I understand that you have seen the gre configuration work and I'm not sure
> why because it has the same
> requirements for the local ip to be routable through the remote ip.  And
> again, there is no point to the
> local ip option because the ip addresses do not need to be routed to reach
> each other.
>
> In any case, I'm going to set up a valid configuration and then make sure
> that the local ip option does work
> or not.  I'll report back when I'm done.
>
>
I will look out for your conclusions.


> Thanks,
>
> - Greg
>
> On 11/20/2018 10:13 AM, Gregory Rose wrote:
>
>
> On 11/20/2018 10:03 AM, Siva Teja ARETI wrote:
>
>
>
> On Tue, Nov 20, 2018 at 12:59 PM Gregory Rose 
> wrote:
>
>> On 11/19/2018 6:30 PM, Siva Teja ARETI wrote:
>>
>>
>> [user@hyp-1] ip route
>> default via A.B.C.D dev enp5s0  proto static  metric 100
>> 10.10.0.0/24 dev testbr0  proto kernel  scope link  src 10.10.0.1
>> linkdown
>> 20.20.0.0/24 dev testbr1  proto kernel  scope link  src 20.20.0.1
>> 30.30.0.0/24 dev testbr2  proto kernel  scope link  src 30.30.0.1
>>
>> Hi Siva,
>>
>> I'm curious about these bridges.  Are they Linux bridges or OVS bridges?
>>
>> If they are Linux bridges please provide the output of 'brctl show'.
>> If they are OVS bridges then please provide the output of 'ovs-vsctl
>> show'.
>>
>> Thanks!
>>
>> - Greg
>>
>
> Hi Greg,
>
> These are linux bridges.
>
> [user@hyp1 ] brctl show
> bridge name bridge id STP enabled interfaces
> docker0 8000.02428928dba5 no veth6079ee7
> testbr0 8000. yes
> testbr1 8000.fe540005937c yes vnet2
> vnet5
> testbr2 8000.fe540079ef92 yes vnet1
> vnet4
> virbr0 8000.fe54000ad370 yes vnet0
> vnet3
>
>  Siva Teja.
>
>
> Thanks Siva!  I'll follow up when I have more questions and/or results.
>
> - Greg
>
>
>


Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-27 Thread Gregory Rose

Siva,

You have a routing issue.

See inter alia
https://github.com/OpenNebula/one/issues/2161
http://wwwaem.brocade.com/content/html/en/brocade-validated-design/brocade-vcs-fabric-ip-storage-bvd/GUID-CB5BFC4D-B2BE-4E9C-BA91-7E7E9BD35FCC.html
http://blog.arunsriraman.com/2017/02/how-to-setting-up-gre-or-vxlan-tunnel.html

For this to work you must be able to ping from the local IP to the
remote IP *through* the remote IP address. As we have seen, that doesn't work.


As an aside, why do you have two bridges to the same VMs?  Your
configuration makes it impossible to set a route because you have two
sets of IP addresses and routes all on two bridges going into the same
VMs.  In that configuration the local ip option makes no sense.  You
don't need it - you're already bridged.


I understand that you have seen the gre configuration work and I'm not 
sure why because it has the same
requirements for the local ip to be routable through the remote ip. And 
again, there is no point to the
local ip option because the ip addresses do not need to be routed to 
reach each other.


In any case, I'm going to set up a valid configuration and then make
sure that the local ip option does work or not.  I'll report back when
I'm done.

Thanks,

- Greg

On 11/20/2018 10:13 AM, Gregory Rose wrote:


On 11/20/2018 10:03 AM, Siva Teja ARETI wrote:



On Tue, Nov 20, 2018 at 12:59 PM Gregory Rose wrote:


On 11/19/2018 6:30 PM, Siva Teja ARETI wrote:


[user@hyp-1] ip route
default via A.B.C.D dev enp5s0  proto static  metric 100
10.10.0.0/24 dev testbr0  proto kernel  scope link  src 10.10.0.1 linkdown
20.20.0.0/24 dev testbr1  proto kernel  scope link  src 20.20.0.1
30.30.0.0/24 dev testbr2  proto kernel  scope link  src 30.30.0.1


Hi Siva,

I'm curious about these bridges.  Are they Linux bridges or OVS
bridges?

If they are Linux bridges please provide the output of 'brctl show'.
If they are OVS bridges then please provide the output of
'ovs-vsctl show'.

Thanks!

- Greg


Hi Greg,

These are linux bridges.

[user@hyp1 ] brctl show
bridge name     bridge id           STP enabled     interfaces
docker0         8000.02428928dba5   no              veth6079ee7
testbr0         8000.               yes
testbr1         8000.fe540005937c   yes             vnet2
                                                    vnet5
testbr2         8000.fe540079ef92   yes             vnet1
                                                    vnet4
virbr0          8000.fe54000ad370   yes             vnet0
                                                    vnet3

 Siva Teja.


Thanks Siva!  I'll follow up when I have more questions and/or results.

- Greg




Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-20 Thread Gregory Rose


On 11/20/2018 10:03 AM, Siva Teja ARETI wrote:



On Tue, Nov 20, 2018 at 12:59 PM Gregory Rose wrote:


On 11/19/2018 6:30 PM, Siva Teja ARETI wrote:


[user@hyp-1] ip route
default via A.B.C.D dev enp5s0  proto static  metric 100
10.10.0.0/24 dev testbr0  proto kernel  scope link  src 10.10.0.1 linkdown
20.20.0.0/24 dev testbr1  proto kernel  scope link  src 20.20.0.1
30.30.0.0/24 dev testbr2  proto kernel  scope link  src 30.30.0.1


Hi Siva,

I'm curious about these bridges.  Are they Linux bridges or OVS
bridges?

If they are Linux bridges please provide the output of 'brctl show'.
If they are OVS bridges then please provide the output of
'ovs-vsctl show'.

Thanks!

- Greg


Hi Greg,

These are linux bridges.

[user@hyp1 ] brctl show
bridge name     bridge id           STP enabled     interfaces
docker0         8000.02428928dba5   no              veth6079ee7
testbr0         8000.               yes
testbr1         8000.fe540005937c   yes             vnet2
                                                    vnet5
testbr2         8000.fe540079ef92   yes             vnet1
                                                    vnet4
virbr0          8000.fe54000ad370   yes             vnet0
                                                    vnet3

 Siva Teja.


Thanks Siva!  I'll follow up when I have more questions and/or results.

- Greg


Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-20 Thread Siva Teja ARETI
On Tue, Nov 20, 2018 at 12:59 PM Gregory Rose  wrote:

> On 11/19/2018 6:30 PM, Siva Teja ARETI wrote:
>
>
> [user@hyp-1] ip route
> default via A.B.C.D dev enp5s0  proto static  metric 100
> 10.10.0.0/24 dev testbr0  proto kernel  scope link  src 10.10.0.1 linkdown
> 20.20.0.0/24 dev testbr1  proto kernel  scope link  src 20.20.0.1
> 30.30.0.0/24 dev testbr2  proto kernel  scope link  src 30.30.0.1
>
> Hi Siva,
>
> I'm curious about these bridges.  Are they Linux bridges or OVS bridges?
>
> If they are Linux bridges please provide the output of 'brctl show'.
> If they are OVS bridges then please provide the output of 'ovs-vsctl show'.
>
> Thanks!
>
> - Greg
>

Hi Greg,

These are linux bridges.

[user@hyp1 ] brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.02428928dba5 no veth6079ee7
testbr0 8000. yes
testbr1 8000.fe540005937c yes vnet2
vnet5
testbr2 8000.fe540079ef92 yes vnet1
vnet4
virbr0 8000.fe54000ad370 yes vnet0
vnet3

 Siva Teja.


Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-20 Thread Gregory Rose

On 11/19/2018 6:30 PM, Siva Teja ARETI wrote:


[user@hyp-1] ip route
default via A.B.C.D dev enp5s0  proto static  metric 100
10.10.0.0/24 dev testbr0  proto kernel  scope link  src 10.10.0.1 linkdown
20.20.0.0/24 dev testbr1  proto kernel  scope link  src 20.20.0.1
30.30.0.0/24 dev testbr2  proto kernel  scope link  src 30.30.0.1



Hi Siva,

I'm curious about these bridges.  Are they Linux bridges or OVS bridges?

If they are Linux bridges please provide the output of 'brctl show'.
If they are OVS bridges then please provide the output of 'ovs-vsctl show'.

Thanks!

- Greg


Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-20 Thread Gregory Rose


On 11/19/2018 6:30 PM, Siva Teja ARETI wrote:



On Mon, Nov 19, 2018 at 7:17 PM Gregory Rose wrote:



Hi Siva,

One more request - I need to see the underlying network configuration
of the hypervisor running the two VMs.
Are both VMs on the same machine?  If so then just the network
configuration of the base machine running the VMs, otherwise the
network configuration of each base machine running their respective VM.

This is turning into quite the investigation and I apologize that it is
taking so long.  Please bear with me if you can and we'll see if we
can't get this problem solved.  I've seen some puzzling bugs before and
this one is turning out to be one of the best.  Or worst, depending on
your outlook.  :)

Thanks for all your help so far!

- Greg


Hi Greg,

Both the VMs run on same hypervisor in my setup. Created VMs and 
virtual networks using virsh commands. Virsh XMLs for networks look 
like below


[user@hyp1 ] virsh net-dumpxml route1
<network>
  <name>route1</name>
  <uuid>2c935aaf-ebde-5b76-a903-4fccb115ff75</uuid>
  [remaining elements stripped by the list archive]
</network>

[user@hyp1 ] virsh net-dumpxml route2
<network>
  <name>route2</name>
  <uuid>2c935baf-ebde-5b76-a903-4fccb115ff75</uuid>
  [remaining elements stripped by the list archive]
</network>

Each VM is connected to both the networks.

Some network configuration of the hypervisor.

[user@hyp-1] ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp5s0:  mtu 1400 qdisc pfifo_fast state UP group default qlen 1000
    link/ether  brd ff:ff:ff:ff:ff:ff
    inet A.B.C.D/24 brd X.Y.Z.W scope global dynamic enp5s0
       valid_lft 318349sec preferred_lft 318349sec
3: virbr0:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:54:00:0a:d3:70 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether 52:54:00:94:4e:04 brd ff:ff:ff:ff:ff:ff
11: docker0:  mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:89:28:db:a5 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:89ff:fe28:dba5/64 scope link
       valid_lft forever preferred_lft forever
96: vboxnet0:  mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.1/24 brd 192.168.99.255 scope global vboxnet0
       valid_lft forever preferred_lft forever
    inet6 fe80::800:27ff:fe00:0/64 scope link
       valid_lft forever preferred_lft forever
193: testbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 10.10.0.1/24 brd 10.10.0.255 scope global testbr0
       valid_lft forever preferred_lft forever
194: testbr0-nic:  mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether 42:54:00:94:4e:04 brd ff:ff:ff:ff:ff:ff
227: testbr1:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:54:00:05:93:7c brd ff:ff:ff:ff:ff:ff
    inet 20.20.0.1/24 brd 20.20.0.255 scope global testbr1
       valid_lft forever preferred_lft forever
228: testbr1-nic:  mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether 42:54:00:84:4e:04 brd ff:ff:ff:ff:ff:ff
229: testbr2:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:54:00:79:ef:92 brd ff:ff:ff:ff:ff:ff
    inet 30.30.0.1/24 brd 30.30.0.255 scope global testbr2
       valid_lft forever preferred_lft forever
230: testbr2-nic:  mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether 42:54:10:84:4e:04 brd ff:ff:ff:ff:ff:ff
231: vnet0:  mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:0a:d3:70 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe0a:d370/64 scope link
       valid_lft forever preferred_lft forever
232: vnet1:  mtu 1500 qdisc pfifo_fast master testbr2 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:feb8:5be/64 scope link
       valid_lft forever preferred_lft forever
233: vnet2:  mtu 1500 qdisc pfifo_fast master testbr1 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fef0:6437/64 scope link
       valid_lft forever preferred_lft forever
234: vnet3:  mtu 1500 qdisc pfifo_fast master virbr0 state 

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-19 Thread Gregory Rose


Hi Siva,

One more request - I need to see the underlying network configuration
of the hypervisor running the two VMs.
Are both VMs on the same machine?  If so then just the network
configuration of the base machine running the VMs, otherwise the
network configuration of each base machine running their respective VM.

This is turning into quite the investigation and I apologize that it is
taking so long.  Please bear with me if you can and we'll see if we
can't get this problem solved.  I've seen some puzzling bugs before and
this one is turning out to be one of the best.  Or worst, depending on
your outlook.  :)


Thanks for all your help so far!

- Greg


Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-19 Thread Siva Teja ARETI
On Mon, Nov 19, 2018 at 11:18 AM Gregory Rose  wrote:

>
> On 11/19/2018 7:50 AM, Siva Teja ARETI wrote:
>
>
>
> On Fri, Nov 16, 2018 at 4:52 PM Gregory Rose  wrote:
>
>> On 11/6/2018 8:51 AM, Siva Teja ARETI wrote:
>>
>> Hi Greg,
>>
>> Thanks for looking into this.
>>
>> I have two VMs in my setup, each with two interfaces. I am trying to set up
>> VXLAN tunnels across these interfaces, which are in different subnets. A
>> docker container is attached to the ovs bridge using the ovs-docker utility
>> on each VM, and I am doing a ping from one container to another.
>>
>> *VM1 details:*
>>
>> [root@vm1 ~]# ip a
>> ...
>> 3: eth1:  mtu 1500 qdisc pfifo_fast
>> state UP qlen 1000
>> link/ether 52:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
>> inet 30.30.0.59/24 brd 30.30.0.255 scope global dynamic eth1
>>valid_lft 3002sec preferred_lft 3002sec
>> inet6 fe80::5054:ff:feb8:5be/64 scope link
>>valid_lft forever preferred_lft forever
>> 4: eth2:  mtu 1500 qdisc pfifo_fast
>> state UP qlen 1000
>> link/ether 52:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
>> inet 20.20.0.183/24 brd 20.20.0.255 scope global dynamic eth2
>>valid_lft 3248sec preferred_lft 3248sec
>> inet6 fe80::5054:ff:fef0:6437/64 scope link
>>valid_lft forever preferred_lft forever
>> ...
>>
>>
>> Hi Siva,
>>
>> I have a question.  Are you able to ping between the two interfaces on
>> VM1 with this command?:
>>
>> # ping 20.20.0.183 -I eth1
>>
>> thanks,
>>
>> - Greg
>>
> Hi Greg,
>
> Sorry for the late reply.
>
> Yes, I am able to ping between two interfaces.
>
> [root@localhost ~]# ovs-appctl dpif/show
> system@ovs-system: hit:2799 missed:198775
> testbr0:
> a0769422cfc04_l 2/3: (system)
> testbr0 65534/1: (internal)
> vxlan0 10/2: (vxlan: local_ip=30.30.0.193,
> remote_ip=20.20.0.183)
> [root@localhost ~]# ping 20.20.0.183 -I 30.30.0.193
> PING 20.20.0.183 (20.20.0.183) from 30.30.0.193 : 56(84) bytes of data.
> 64 bytes from 20.20.0.183: icmp_seq=1 ttl=64 time=0.470 ms
> 64 bytes from 20.20.0.183: icmp_seq=2 ttl=64 time=0.657 ms
> 64 bytes from 20.20.0.183: icmp_seq=3 ttl=64 time=0.685 ms
> 64 bytes from 20.20.0.183: icmp_seq=4 ttl=64 time=0.721 ms
> 64 bytes from 20.20.0.183: icmp_seq=5 ttl=64 time=0.630 ms
> 64 bytes from 20.20.0.183: icmp_seq=6 ttl=64 time=0.629 ms
> ^C
>
>
> Well that's probably where my setup isn't configured right.  What is the
> output of 'ip route' on that system?
>
> Thanks,
>
> - Greg
>

Hi Greg,

Here is the output of 'ip route' command.

[root@vm1 ~]# ip route
default via 192.168.122.1 dev eth0
20.20.0.0/24 dev eth2 proto kernel scope link src 20.20.0.64
30.30.0.0/24 dev eth1 proto kernel scope link src 30.30.0.193
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.165

Siva Teja.

> --- 20.20.0.183 ping statistics ---
> 6 packets transmitted, 6 received, 0% packet loss, time 5000ms
> rtt min/avg/max/mdev = 0.470/0.632/0.721/0.079 ms
> [root@localhost ~]#
>
>  Siva Teja.
>
>> [root@vm1 ~]# ovs-vsctl show
>> ff70c814-d1b0-4018-aee8-8b635187afee
>> Bridge "testbr0"
>> Port "gre0"
>> Interface "gre0"
>> type: gre
>> options: {local_ip="20.20.0.183", remote_ip="30.30.0.193"}
>> Port "testbr0"
>> Interface "testbr0"
>> type: internal
>> Port "2cfb62a9b0f04_l"
>> Interface "2cfb62a9b0f04_l"
>> ovs_version: "2.9.2"
>> [root@vm1 ~]# ip rule list
>> 0:  from all lookup local
>> 32765:  from 20.20.0.183 lookup siva
>> 32766:  from all lookup main
>> 32767:  from all lookup default
>> [root@vm1 ~]# ip route show table siva
>> default dev eth2 scope link src 20.20.0.183
>> [root@vm1 ~]# # A docker container is attached
>> to ovs bridge using ovs-docker utility
>> [root@vm1 ~]# docker ps
>> CONTAINER IDIMAGE   COMMAND CREATED
>>STATUS  PORTS   NAMES
>> be4ab434db99busybox "sh"5 days ago
>>   Up 5 days   admiring_euclid
>> [root@vm1 ~]# nsenter -n -t `docker inspect be4 --format={{.State.Pid}}`
>> -- ip a
>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> inet 127.0.0.1/8 scope host lo
>>valid_lft forever preferred_lft forever
>> inet6 ::1/128 scope host
>>valid_lft forever preferred_lft forever
>> 2: gre0@NONE:  mtu 1476 qdisc noop state DOWN qlen 1
>> link/gre 0.0.0.0 brd 0.0.0.0
>> 3: gretap0@NONE:  mtu 1462 qdisc noop state DOWN
>> qlen 1000
>> link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
>> 9: eth0@if10:  mtu 1500 qdisc noqueue
>> state UP qlen 1000
>> link/ether 22:98:41:0f:e8:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
>> inet 70.70.0.10/24 scope global eth0
>>valid_lft 

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-19 Thread Gregory Rose


On 11/19/2018 7:50 AM, Siva Teja ARETI wrote:



On Fri, Nov 16, 2018 at 4:52 PM Gregory Rose wrote:


On 11/6/2018 8:51 AM, Siva Teja ARETI wrote:

Hi Greg,

Thanks for looking into this.

I have two VMs in my setup each with two interfaces. Trying to
setup the VXLAN tunnels across these interfaces which are in
different subnets. A docker container is attached to ovs bridge
using ovs-docker utility on each VM and doing a ping from one
container to another.

*VM1 details:*

[root@vm1 ~]# ip a
...
3: eth1:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
    inet 30.30.0.59/24 brd 30.30.0.255 scope global dynamic eth1
       valid_lft 3002sec preferred_lft 3002sec
    inet6 fe80::5054:ff:feb8:5be/64 scope link
       valid_lft forever preferred_lft forever
4: eth2:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
    inet 20.20.0.183/24 brd 20.20.0.255 scope global dynamic eth2
       valid_lft 3248sec preferred_lft 3248sec
    inet6 fe80::5054:ff:fef0:6437/64 scope link
       valid_lft forever preferred_lft forever
...


Hi Siva,

I have a question.  Are you able to ping between the two
interfaces on VM1 with this command?:

# ping 20.20.0.183 -I eth1

thanks,

- Greg

Hi Greg,

Sorry for the late reply.

Yes, I am able to ping between two interfaces.

[root@localhost ~]# ovs-appctl dpif/show
system@ovs-system: hit:2799 missed:198775
        testbr0:
                a0769422cfc04_l 2/3: (system)
                testbr0 65534/1: (internal)
                vxlan0 10/2: (vxlan: local_ip=30.30.0.193, remote_ip=20.20.0.183)

[root@localhost ~]# ping 20.20.0.183 -I 30.30.0.193
PING 20.20.0.183 (20.20.0.183) from 30.30.0.193: 56(84) bytes of data.
64 bytes from 20.20.0.183: icmp_seq=1 ttl=64 time=0.470 ms
64 bytes from 20.20.0.183: icmp_seq=2 ttl=64 time=0.657 ms
64 bytes from 20.20.0.183: icmp_seq=3 ttl=64 time=0.685 ms
64 bytes from 20.20.0.183: icmp_seq=4 ttl=64 time=0.721 ms
64 bytes from 20.20.0.183: icmp_seq=5 ttl=64 time=0.630 ms
64 bytes from 20.20.0.183: icmp_seq=6 ttl=64 time=0.629 ms
^C


Well that's probably where my setup isn't configured right.  What is the 
output of 'ip route' on that system?


Thanks,

- Greg


--- 20.20.0.183 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5000ms
rtt min/avg/max/mdev = 0.470/0.632/0.721/0.079 ms
[root@localhost ~]#

 Siva Teja.


[root@vm1 ~]# ovs-vsctl show
ff70c814-d1b0-4018-aee8-8b635187afee
    Bridge "testbr0"
        Port "gre0"
Interface "gre0"
type: gre
options: {local_ip="20.20.0.183", remote_ip="30.30.0.193"}
        Port "testbr0"
Interface "testbr0"
type: internal
        Port "2cfb62a9b0f04_l"
Interface "2cfb62a9b0f04_l"
ovs_version: "2.9.2"
[root@vm1 ~]# ip rule list
0:      from all lookup local
32765:  from 20.20.0.183 lookup siva
32766:  from all lookup main
32767:  from all lookup default
[root@vm1 ~]# ip route show table siva
default dev eth2 scope link src 20.20.0.183
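
(A rule and table like the above are typically created along these lines - a sketch, assuming the "siva" table was first registered in /etc/iproute2/rt_tables:)

# echo "100 siva" >> /etc/iproute2/rt_tables
# ip rule add from 20.20.0.183 lookup siva
# ip route add default dev eth2 scope link src 20.20.0.183 table siva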
[root@vm1 ~]# # A docker container is
attached to ovs bridge using ovs-docker utility
[root@vm1 ~]# docker ps
CONTAINER ID   IMAGE  COMMAND  CREATED  STATUS PORTS  NAMES
be4ab434db99   busybox  "sh" 5 days ago Up 5 days  admiring_euclid
[root@vm1 ~]# nsenter -n -t `docker inspect be4
--format={{.State.Pid}}` -- ip a
1: lo:  mtu 65536 qdisc noqueue state
UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8  scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: gre0@NONE:  mtu 1476 qdisc noop state DOWN qlen 1
    link/gre 0.0.0.0 brd 0.0.0.0
3: gretap0@NONE:  mtu 1462 qdisc noop state
DOWN qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
9: eth0@if10:  mtu 1500 qdisc
noqueue state UP qlen 1000
    link/ether 22:98:41:0f:e8:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 70.70.0.10/24  scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2098:41ff:fe0f:e850/64 scope link
       valid_lft forever preferred_lft forever


*VM2 details:*
[root@vm2 ~]# ip a

3: eth1:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:79:ef:92 brd ff:ff:ff:ff:ff:ff
    inet 30.30.0.193/24 brd 30.30.0.255 scope global dynamic eth1

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-19 Thread Siva Teja ARETI
On Fri, Nov 16, 2018 at 4:52 PM Gregory Rose  wrote:

> On 11/6/2018 8:51 AM, Siva Teja ARETI wrote:
>
> Hi Greg,
>
> Thanks for looking into this.
>
> I have two VMs in my setup, each with two interfaces. I am trying to set up
> VXLAN tunnels across these interfaces, which are in different subnets. A
> docker container is attached to the ovs bridge using the ovs-docker utility
> on each VM, and I am doing a ping from one container to another.
>
> *VM1 details:*
>
> [root@vm1 ~]# ip a
> ...
> 3: eth1:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
> link/ether 52:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
> inet 30.30.0.59/24 brd 30.30.0.255 scope global dynamic eth1
>valid_lft 3002sec preferred_lft 3002sec
> inet6 fe80::5054:ff:feb8:5be/64 scope link
>valid_lft forever preferred_lft forever
> 4: eth2:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
> link/ether 52:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
> inet 20.20.0.183/24 brd 20.20.0.255 scope global dynamic eth2
>valid_lft 3248sec preferred_lft 3248sec
> inet6 fe80::5054:ff:fef0:6437/64 scope link
>valid_lft forever preferred_lft forever
> ...
>
>
> Hi Siva,
>
> I have a question.  Are you able to ping between the two interfaces on VM1
> with this command?:
>
> # ping 20.20.0.183 -I eth1
>
> thanks,
>
> - Greg
>
Hi Greg,

Sorry for the late reply.

Yes, I am able to ping between two interfaces.

[root@localhost ~]# ovs-appctl dpif/show
system@ovs-system: hit:2799 missed:198775
testbr0:
a0769422cfc04_l 2/3: (system)
testbr0 65534/1: (internal)
vxlan0 10/2: (vxlan: local_ip=30.30.0.193,
remote_ip=20.20.0.183)
[root@localhost ~]# ping 20.20.0.183 -I 30.30.0.193
PING 20.20.0.183 (20.20.0.183) from 30.30.0.193 : 56(84) bytes of data.
64 bytes from 20.20.0.183: icmp_seq=1 ttl=64 time=0.470 ms
64 bytes from 20.20.0.183: icmp_seq=2 ttl=64 time=0.657 ms
64 bytes from 20.20.0.183: icmp_seq=3 ttl=64 time=0.685 ms
64 bytes from 20.20.0.183: icmp_seq=4 ttl=64 time=0.721 ms
64 bytes from 20.20.0.183: icmp_seq=5 ttl=64 time=0.630 ms
64 bytes from 20.20.0.183: icmp_seq=6 ttl=64 time=0.629 ms
^C
--- 20.20.0.183 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5000ms
rtt min/avg/max/mdev = 0.470/0.632/0.721/0.079 ms
[root@localhost ~]#

 Siva Teja.

> [root@vm1 ~]# ovs-vsctl show
> ff70c814-d1b0-4018-aee8-8b635187afee
> Bridge "testbr0"
> Port "gre0"
> Interface "gre0"
> type: gre
> options: {local_ip="20.20.0.183", remote_ip="30.30.0.193"}
> Port "testbr0"
> Interface "testbr0"
> type: internal
> Port "2cfb62a9b0f04_l"
> Interface "2cfb62a9b0f04_l"
> ovs_version: "2.9.2"
> [root@vm1 ~]# ip rule list
> 0:  from all lookup local
> 32765:  from 20.20.0.183 lookup siva
> 32766:  from all lookup main
> 32767:  from all lookup default
> [root@vm1 ~]# ip route show table siva
> default dev eth2 scope link src 20.20.0.183
> [root@vm1 ~]# # A docker container is attached to
> ovs bridge using ovs-docker utility
> [root@vm1 ~]# docker ps
> CONTAINER IDIMAGE   COMMAND CREATED
>  STATUS  PORTS   NAMES
> be4ab434db99busybox "sh"5 days ago
>   Up 5 days   admiring_euclid
> [root@vm1 ~]# nsenter -n -t `docker inspect be4 --format={{.State.Pid}}`
> -- ip a
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: gre0@NONE:  mtu 1476 qdisc noop state DOWN qlen 1
> link/gre 0.0.0.0 brd 0.0.0.0
> 3: gretap0@NONE:  mtu 1462 qdisc noop state DOWN
> qlen 1000
> link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 9: eth0@if10:  mtu 1500 qdisc noqueue
> state UP qlen 1000
> link/ether 22:98:41:0f:e8:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> inet 70.70.0.10/24 scope global eth0
>valid_lft forever preferred_lft forever
> inet6 fe80::2098:41ff:fe0f:e850/64 scope link
>valid_lft forever preferred_lft forever
>
>
> *VM2 details:*
>
> [root@vm2 ~]# ip a
> 
> 3: eth1:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
> link/ether 52:54:00:79:ef:92 brd ff:ff:ff:ff:ff:ff
> inet 30.30.0.193/24 brd 30.30.0.255 scope global dynamic eth1
>valid_lft 2406sec preferred_lft 2406sec
> inet6 fe80::5054:ff:fe79:ef92/64 scope link
>valid_lft forever preferred_lft forever
> 4: eth2:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
> link/ether 52:54:00:05:93:7c brd ff:ff:ff:ff:ff:ff
> inet 20.20.0.64/24 brd 20.20.0.255 scope global dynamic eth2
>valid_lft 2775sec preferred_lft 2775sec
> inet6 

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-16 Thread Gregory Rose

On 11/6/2018 8:51 AM, Siva Teja ARETI wrote:

Hi Greg,

Thanks for looking into this.

I have two VMs in my setup, each with two interfaces. I am trying to set
up VXLAN tunnels across these interfaces, which are in different subnets.
A docker container is attached to the ovs bridge using the ovs-docker
utility on each VM, and I am doing a ping from one container to another.
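
(For reference, ovs-docker attaches a container to the bridge roughly like this - a sketch built from the container, bridge, and address shown in the listings below, not the literal command that was used:)

# ovs-docker add-port testbr0 eth0 be4ab434db99 --ipaddress=70.70.0.10/24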


*VM1 details:*

[root@vm1 ~]# ip a
...
3: eth1:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
    inet 30.30.0.59/24 brd 30.30.0.255 scope global dynamic eth1
       valid_lft 3002sec preferred_lft 3002sec
    inet6 fe80::5054:ff:feb8:5be/64 scope link
       valid_lft forever preferred_lft forever
4: eth2:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
    inet 20.20.0.183/24 brd 20.20.0.255 scope global dynamic eth2
       valid_lft 3248sec preferred_lft 3248sec
    inet6 fe80::5054:ff:fef0:6437/64 scope link
       valid_lft forever preferred_lft forever
...


Hi Siva,

I have a question.  Are you able to ping between the two interfaces on 
VM1 with this command?:


# ping 20.20.0.183 -I eth1

thanks,

- Greg


[root@vm1 ~]# ovs-vsctl show
ff70c814-d1b0-4018-aee8-8b635187afee
    Bridge "testbr0"
        Port "gre0"
            Interface "gre0"
                type: gre
                options: {local_ip="20.20.0.183", remote_ip="30.30.0.193"}
        Port "testbr0"
            Interface "testbr0"
                type: internal
        Port "2cfb62a9b0f04_l"
            Interface "2cfb62a9b0f04_l"
    ovs_version: "2.9.2"
[root@vm1 ~]# ip rule list
0:      from all lookup local
32765:  from 20.20.0.183 lookup siva
32766:  from all lookup main
32767:  from all lookup default
[root@vm1 ~]# ip route show table siva
default dev eth2 scope link src 20.20.0.183
[root@vm1 ~]# # A docker container is attached to ovs bridge using ovs-docker utility
[root@vm1 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
be4ab434db99        busybox             "sh"                5 days ago          Up 5 days                               admiring_euclid
[root@vm1 ~]# nsenter -n -t `docker inspect be4 --format={{.State.Pid}}` -- ip a

1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8  scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: gre0@NONE:  mtu 1476 qdisc noop state DOWN qlen 1
    link/gre 0.0.0.0 brd 0.0.0.0
3: gretap0@NONE:  mtu 1462 qdisc noop state DOWN 
qlen 1000

    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
9: eth0@if10:  mtu 1500 qdisc noqueue 
state UP qlen 1000

    link/ether 22:98:41:0f:e8:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 70.70.0.10/24  scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2098:41ff:fe0f:e850/64 scope link
       valid_lft forever preferred_lft forever


*VM2 details:*
[root@vm2 ~]# ip a

3: eth1:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:79:ef:92 brd ff:ff:ff:ff:ff:ff
    inet 30.30.0.193/24 brd 30.30.0.255 scope global dynamic eth1
       valid_lft 2406sec preferred_lft 2406sec
    inet6 fe80::5054:ff:fe79:ef92/64 scope link
       valid_lft forever preferred_lft forever
4: eth2:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:05:93:7c brd ff:ff:ff:ff:ff:ff
    inet 20.20.0.64/24 brd 20.20.0.255 scope global dynamic eth2
       valid_lft 2775sec preferred_lft 2775sec
    inet6 fe80::5054:ff:fe05:937c/64 scope link
       valid_lft forever preferred_lft forever
...
[root@vm2 ~]# ovs-vsctl show
b85514db-3f29-4f7a-9001-37d70adfca34
    Bridge "testbr0"
        Port "gre0"
            Interface "gre0"
                type: gre
                options: {local_ip="30.30.0.193", remote_ip="20.20.0.183"}
        Port "a0769422cfc04_l"
            Interface "a0769422cfc04_l"
        Port "testbr0"
            Interface "testbr0"
                type: internal
    ovs_version: "2.9.2"
[root@vm2 ~]# ip rule list
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default
[root@vm2 ~]# # A docker container is attached to ovs bridge using ovs-docker utility
[root@vm2 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
86214f0d99e8        busybox:latest      "sh"                5 days ago          Up 5 days                               peaceful_snyder
[root@vm2 ~]# nsenter -n -t `docker inspect 862 --format={{.State.Pid}}` -- ip a

1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-15 Thread Gregory Rose

Hi Siva,

I have some updates but I am traveling today so I'll provide them tomorrow.

Thanks,

- Greg


On 11/13/2018 4:02 PM, Gregory Rose wrote:


On 11/13/2018 1:44 PM, Siva Teja ARETI wrote:

Hi Greg,

Did you happen to get a chance to investigate this further?


Unfortunately not.  The IT team replaced a switch in the lab over the
weekend and my access to the test machines is down.
I have a ticket in to get it fixed and will resume debugging then.

Sorry for the delay.

- Greg



Siva Teja.

On Fri, Nov 9, 2018 at 1:26 PM Gregory Rose wrote:



On 11/8/2018 4:16 PM, Gregory Rose wrote:

On 11/8/2018 3:48 PM, Siva Teja ARETI wrote:



Siva,


When you see the error condition with the local_ip option
on vxlan can you provide me the output of
this command?

*# ip -s link show vxlan_sys_4789*
70: vxlan_sys_4789:  mtu
65470 qdisc noqueue master ovs-system state UNKNOWN mode
DEFAULT group default qlen 1000
    link/ether 0e:9b:58:4a:6e:44 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    0  0    0   0 0   0
    TX: bytes  packets  errors  dropped carrier collsns
    0  0    99  8 99  0

Hi Greg,

Here is the output.

[root@vm1 ~]# ip -s link show vxlan_sys_4789
27: vxlan_sys_4789:  mtu 65000
qdisc noqueue master ovs-system state UNKNOWN mode DEFAULT qlen
1000
    link/ether ca:8f:0d:13:08:1f brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    0          0        0       0       0      0
    TX: bytes  packets  errors  dropped carrier collsns
    3666796    130957   0       0       0      0

 Siva Teja.

It will help me understand which error you're encountering.

Thanks!

- Greg



Well then obviously I still have errors in my own setup.

Back to the drawing board but I think it's a routing issue in my
case.

Thanks!



Siva,

I've made progress.  I misconfigured my network which led to the
errors you were seeing.  Now I've got that fixed up and I think
I'm reproducing the error you are seeing. When adding the local
IP option the packets are getting
delivered to the VXLAN port but not getting delivered over to the
bridge with the local ip address.

I have two machines A and B.  They are bare metal running OVS
with kvm virtual machines.  Here is the config:

A) IP 10.172.208.214
Bridge test-vxlan   < ip=10.1.1.3
    Port test-vxlan
    Interface test-vxlan
    type: internal
    Port "vxlan0"
    Interface "vxlan0"
    type: vxlan
    options: {key="100", local_ip="10.1.1.3",
remote_ip="10.172.208.215"}
    Port "vnet4"
    Interface "vnet4"  < VM 1 with IP 10.1.1.1


B) IP 10.172.208.215
Bridge test-vxlan <- ip=10.1.1.4
    Port "vxlan0"
    Interface "vxlan0"
    type: vxlan
    options: {key="100", local_ip="10.1.1.4",
remote_ip="10.172.208.214"}
    Port "vnet6"
    Interface "vnet6"  < VM 2 with IP 10.1.1.2
    Port test-vxlan
    Interface test-vxlan
    type: internal

From VM 2 on machine B I start a ping from 10.1.1.2 -> 10.1.1.1

roseg@ubuntu-1604-base:~$ ping 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
From 10.1.1.2 icmp_seq=1 Destination Host Unreachable
From 10.1.1.2 icmp_seq=2 Destination Host Unreachable
From 10.1.1.2 icmp_seq=3 Destination Host Unreachable
From 10.1.1.2 icmp_seq=4 Destination Host Unreachable

On machine B we can see the vxlan_sys_4789 tx counter increasing:

[root@sc2-hs2-b2515 ~]# ip -s link show vxlan_sys_4789
76: vxlan_sys_4789:  mtu 65470
qdisc noqueue master ovs-system state UNKNOWN mode DEFAULT group
default qlen 1000
    link/ether f2:3a:d4:fd:b3:46 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    0  0    0   0   0   0
    TX: bytes  packets  errors  dropped carrier collsns
    4200   150  0   8   0   0

On machine A we can see the vxlan_sys_4789 rx counter increasing:

53: vxlan_sys_4789:  mtu 65470 qdisc noqueue master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 06:4b:21:d8:af:8b brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    4200   150  0   0   0   0
    TX: bytes  packets  errors  dropped carrier collsns
    0  0    0   8   0   0

However, even though there is no indication of drops the packets
are not getting over to the test-vxlan 

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-13 Thread Gregory Rose


On 11/13/2018 1:44 PM, Siva Teja ARETI wrote:

Hi Greg,

Did you happen to get a chance to investigate this further?


Unfortunately not.  The IT team replaced a switch in the lab over the
weekend and my access to the test machines is down.
I have a ticket in to get it fixed and will resume debugging then.

Sorry for the delay.

- Greg



Siva Teja.

On Fri, Nov 9, 2018 at 1:26 PM Gregory Rose wrote:



On 11/8/2018 4:16 PM, Gregory Rose wrote:

On 11/8/2018 3:48 PM, Siva Teja ARETI wrote:



Siva,


When you see the error condition with the local_ip option on
vxlan can you provide me the output of
this command?

*# ip -s link show vxlan_sys_4789*
70: vxlan_sys_4789:  mtu
65470 qdisc noqueue master ovs-system state UNKNOWN mode
DEFAULT group default qlen 1000
    link/ether 0e:9b:58:4a:6e:44 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    0  0    0   0 0   0
    TX: bytes  packets  errors  dropped carrier collsns
    0  0    99  8 99  0

Hi Greg,

Here is the output.

[root@vm1 ~]# ip -s link show vxlan_sys_4789
27: vxlan_sys_4789:  mtu 65000
qdisc noqueue master ovs-system state UNKNOWN mode DEFAULT qlen 1000
    link/ether ca:8f:0d:13:08:1f brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    0          0        0       0       0    0
    TX: bytes  packets  errors  dropped carrier collsns
    3666796    130957   0       0       0    0

 Siva Teja.

It will help me understand which error you're encountering.

Thanks!

- Greg



Well then obviously I still have errors in my own setup.

Back to the drawing board but I think it's a routing issue in my
case.

Thanks!



Siva,

I've made progress.  I misconfigured my network which led to the
errors you were seeing.  Now I've got that fixed up and I think
I'm reproducing the error you are seeing.  When adding the local
IP option the packets are getting
delivered to the VXLAN port but not getting delivered over to the
bridge with the local ip address.

I have two machines A and B.  They are bare metal running OVS with
kvm virtual machines.  Here is the config:

A) IP 10.172.208.214
Bridge test-vxlan   < ip=10.1.1.3
    Port test-vxlan
    Interface test-vxlan
    type: internal
    Port "vxlan0"
    Interface "vxlan0"
    type: vxlan
    options: {key="100", local_ip="10.1.1.3",
remote_ip="10.172.208.215"}
    Port "vnet4"
    Interface "vnet4"  < VM 1 with IP 10.1.1.1


B) IP 10.172.208.215
Bridge test-vxlan <- ip=10.1.1.4
    Port "vxlan0"
    Interface "vxlan0"
    type: vxlan
    options: {key="100", local_ip="10.1.1.4",
remote_ip="10.172.208.214"}
    Port "vnet6"
    Interface "vnet6"  < VM 2 with IP 10.1.1.2
    Port test-vxlan
    Interface test-vxlan
    type: internal

From VM 2 on machine B I start a ping from 10.1.1.2 -> 10.1.1.1

roseg@ubuntu-1604-base:~$ ping 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
From 10.1.1.2 icmp_seq=1 Destination Host Unreachable
From 10.1.1.2 icmp_seq=2 Destination Host Unreachable
From 10.1.1.2 icmp_seq=3 Destination Host Unreachable
From 10.1.1.2 icmp_seq=4 Destination Host Unreachable

On machine B we can see the vxlan_sys_4789 tx counter increasing:

[root@sc2-hs2-b2515 ~]# ip -s link show vxlan_sys_4789
76: vxlan_sys_4789:  mtu 65470
qdisc noqueue master ovs-system state UNKNOWN mode DEFAULT group
default qlen 1000
    link/ether f2:3a:d4:fd:b3:46 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    0  0    0   0   0   0
    TX: bytes  packets  errors  dropped carrier collsns
    4200   150  0   8   0   0

On machine A we can see the vxlan_sys_4789 rx counter increasing:

53: vxlan_sys_4789:  mtu 65470 qdisc noqueue master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 06:4b:21:d8:af:8b brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    4200   150  0   0   0   0
    TX: bytes  packets  errors  dropped carrier collsns
    0  0    0   8   0   0

However, even though there is no indication of drops the packets
are not getting over to the test-vxlan bridge
which has the local 10.1.1.3 ip address:

35: test-vxlan:  mtu 1500 qdisc
noqueue state UNKNOWN mode DEFAULT group default qlen 1000
  

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-13 Thread Siva Teja ARETI
Hi Greg,

Did you happen to get a chance to investigate this further?

Siva Teja.

On Fri, Nov 9, 2018 at 1:26 PM Gregory Rose  wrote:

>
> On 11/8/2018 4:16 PM, Gregory Rose wrote:
>
> On 11/8/2018 3:48 PM, Siva Teja ARETI wrote:
>
>
>
> Siva,
>
>>
>> When you see the error condition with the local_ip option on vxlan can
>> you provide me the output of
>> this command?
>>
>> *# ip -s link show vxlan_sys_4789*
>> 70: vxlan_sys_4789:  mtu 65470 qdisc
>> noqueue master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
>> link/ether 0e:9b:58:4a:6e:44 brd ff:ff:ff:ff:ff:ff
>> RX: bytes  packets  errors  dropped overrun mcast
>> 0          0        0       0       0       0
>> TX: bytes  packets  errors  dropped carrier collsns
>> 0          0        99      8       99      0
>>
>> Hi Greg,
>
> Here is the output.
>
> [root@vm1 ~]# ip -s link show vxlan_sys_4789
> 27: vxlan_sys_4789:  mtu 65000 qdisc
> noqueue master ovs-system state UNKNOWN mode DEFAULT qlen 1000
> link/ether ca:8f:0d:13:08:1f brd ff:ff:ff:ff:ff:ff
> RX: bytes  packets  errors  dropped overrun mcast
> 0          0        0       0       0       0
> TX: bytes  packets  errors  dropped carrier collsns
> 3666796    130957   0       0       0       0
>
>  Siva Teja.
>
>> It will help me understand which error you're encountering.
>>
>> Thanks!
>>
>> - Greg
>>
>
> Well then obviously I still have errors in my own setup.
>
> Back to the drawing board but I think it's a routing issue in my case.
>
> Thanks!
>
>
> Siva,
>
> I've made progress.  I misconfigured my network which led to the errors
> you were seeing.  Now I've got that fixed up and I think I'm reproducing
> the error you are seeing.  When adding the local IP option the packets are
> getting
> delivered to the VXLAN port but not getting delivered over to the bridge
> with the local ip address.
>
> I have two machines A and B.  They are bare metal running OVS with kvm
> virtual machines.  Here is the config:
>
> A) IP 10.172.208.214
> Bridge test-vxlan   < ip=10.1.1.3
> Port test-vxlan
> Interface test-vxlan
> type: internal
> Port "vxlan0"
> Interface "vxlan0"
> type: vxlan
> options: {key="100", local_ip="10.1.1.3",
> remote_ip="10.172.208.215"}
> Port "vnet4"
> Interface "vnet4"  < VM 1 with IP 10.1.1.1
>
>
> B) IP 10.172.208.215
> Bridge test-vxlan <- ip=10.1.1.4
> Port "vxlan0"
> Interface "vxlan0"
> type: vxlan
> options: {key="100", local_ip="10.1.1.4",
> remote_ip="10.172.208.214"}
> Port "vnet6"
> Interface "vnet6"  < VM 2 with IP 10.1.1.2
> Port test-vxlan
> Interface test-vxlan
> type: internal
>
> From VM 2 on machine B I start a ping from 10.1.1.2 -> 10.1.1.1
>
> roseg@ubuntu-1604-base:~$ ping 10.1.1.1
> PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
> From 10.1.1.2 icmp_seq=1 Destination Host Unreachable
> From 10.1.1.2 icmp_seq=2 Destination Host Unreachable
> From 10.1.1.2 icmp_seq=3 Destination Host Unreachable
> From 10.1.1.2 icmp_seq=4 Destination Host Unreachable
>
> On machine B we can see the vxlan_sys_4789 tx counter increasing:
>
> [root@sc2-hs2-b2515 ~]# ip -s link show vxlan_sys_4789
> 76: vxlan_sys_4789:  mtu 65470 qdisc
> noqueue master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
> link/ether f2:3a:d4:fd:b3:46 brd ff:ff:ff:ff:ff:ff
> RX: bytes  packets  errors  dropped overrun mcast
> 0          0        0       0       0       0
> TX: bytes  packets  errors  dropped carrier collsns
> 4200       150      0       8       0       0
>
> On machine A we can see the vxlan_sys_4789 rx counter increasing:
>
> 53: vxlan_sys_4789:  mtu 65470 qdisc noqueue master ovs-system state
> UNKNOWN mode DEFAULT group default qlen 1000
> link/ether 06:4b:21:d8:af:8b brd ff:ff:ff:ff:ff:ff
> RX: bytes  packets  errors  dropped overrun mcast
> 4200       150      0       0       0       0
> TX: bytes  packets  errors  dropped carrier collsns
> 0          0        0       8       0       0
>
> However, even though there is no indication of drops the packets are not
> getting over to the test-vxlan bridge
> which has the local 10.1.1.3 ip address:
>
> 35: test-vxlan:  mtu 1500 qdisc noqueue
> state UNKNOWN mode DEFAULT group default qlen 1000
> link/ether 86:9b:1f:ae:ba:42 brd ff:ff:ff:ff:ff:ff
> RX: bytes  packets  errors  dropped overrun mcast
> 0          0        0       0       0       0
> TX: bytes  packets  errors  dropped carrier collsns
> 0          0        0       0       0       0
>
> They're just not seen at all - none of the counters are increasing.  When
> I remove the local_ip option from
> the vxlan tunnels then the ping between the VMs works as expected which
> you have shown:
>
> 

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-09 Thread Gregory Rose


On 11/8/2018 4:16 PM, Gregory Rose wrote:

On 11/8/2018 3:48 PM, Siva Teja ARETI wrote:



Siva,


When you see the error condition with the local_ip option on
vxlan can you provide me the output of
this command?

*# ip -s link show vxlan_sys_4789*
70: vxlan_sys_4789:  mtu 65470
qdisc noqueue master ovs-system state UNKNOWN mode DEFAULT group
default qlen 1000
    link/ether 0e:9b:58:4a:6e:44 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    0  0    0   0   0   0
    TX: bytes  packets  errors  dropped carrier collsns
    0  0    99  8   99  0

Hi Greg,

Here is the output.

[root@vm1 ~]# ip -s link show vxlan_sys_4789
27: vxlan_sys_4789:  mtu 65000 qdisc 
noqueue master ovs-system state UNKNOWN mode DEFAULT qlen 1000

    link/ether ca:8f:0d:13:08:1f brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    0          0        0       0       0       0
    TX: bytes  packets  errors  dropped carrier collsns
    3666796    130957   0       0       0       0

 Siva Teja.

It will help me understand which error you're encountering.

Thanks!

- Greg



Well then obviously I still have errors in my own setup.

Back to the drawing board but I think it's a routing issue in my case.

Thanks!



Siva,

I've made progress.  I misconfigured my network, which led to the errors 
you were seeing.  Now I've got that fixed up and I think I'm reproducing 
the error you are seeing.  When adding the local IP option, the packets 
are getting delivered to the VXLAN port but not getting delivered over 
to the bridge with the local IP address.


I have two machines A and B.  They are bare metal running OVS with kvm 
virtual machines.  Here is the config:


A) IP 10.172.208.214
    Bridge test-vxlan   < ip=10.1.1.3
    Port test-vxlan
    Interface test-vxlan
    type: internal
    Port "vxlan0"
    Interface "vxlan0"
    type: vxlan
    options: {key="100", local_ip="10.1.1.3", 
remote_ip="10.172.208.215"}

    Port "vnet4"
    Interface "vnet4"  < VM 1 with IP 10.1.1.1


B) IP 10.172.208.215
    Bridge test-vxlan <- ip=10.1.1.4
    Port "vxlan0"
    Interface "vxlan0"
    type: vxlan
    options: {key="100", local_ip="10.1.1.4", 
remote_ip="10.172.208.214"}

    Port "vnet6"
    Interface "vnet6"  < VM 2 with IP 10.1.1.2
    Port test-vxlan
    Interface test-vxlan
    type: internal
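
For reference, a sketch of the ovs-vsctl commands that would produce this
configuration (the vnet ports are normally created by libvirt when the VMs
attach, so those add-port lines are illustrative):

# on machine A (10.172.208.214)
ovs-vsctl add-br test-vxlan
ip addr add 10.1.1.3/24 dev test-vxlan
ip link set test-vxlan up
ovs-vsctl add-port test-vxlan vxlan0 -- set interface vxlan0 type=vxlan \
    options:key=100 options:local_ip=10.1.1.3 options:remote_ip=10.172.208.215

# on machine B (10.172.208.215), mirrored
ovs-vsctl add-br test-vxlan
ip addr add 10.1.1.4/24 dev test-vxlan
ip link set test-vxlan up
ovs-vsctl add-port test-vxlan vxlan0 -- set interface vxlan0 type=vxlan \
    options:key=100 options:local_ip=10.1.1.4 options:remote_ip=10.172.208.214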

From VM 2 on machine B I start a ping from 10.1.1.2 -> 10.1.1.1

roseg@ubuntu-1604-base:~$ ping 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
From 10.1.1.2 icmp_seq=1 Destination Host Unreachable
From 10.1.1.2 icmp_seq=2 Destination Host Unreachable
From 10.1.1.2 icmp_seq=3 Destination Host Unreachable
From 10.1.1.2 icmp_seq=4 Destination Host Unreachable

On machine B we can see the vxlan_sys_4789 tx counter increasing:

[root@sc2-hs2-b2515 ~]# ip -s link show vxlan_sys_4789
76: vxlan_sys_4789:  mtu 65470 qdisc 
noqueue master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000

    link/ether f2:3a:d4:fd:b3:46 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    0  0    0   0   0   0
    TX: bytes  packets  errors  dropped carrier collsns
    4200   150  0   8   0   0

On machine A we can see the vxlan_sys_4789 rx counter increasing:

53: vxlan_sys_4789:  mtu 65470 qdisc 
noqueue master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 06:4b:21:d8:af:8b brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    4200   150  0   0   0   0
    TX: bytes  packets  errors  dropped carrier collsns
    0  0    0   8   0   0

However, even though there is no indication of drops, the packets are not 
getting over to the test-vxlan bridge, which has the local 10.1.1.3 IP 
address:

35: test-vxlan:  mtu 1500 qdisc noqueue 
state UNKNOWN mode DEFAULT group default qlen 1000

    link/ether 86:9b:1f:ae:ba:42 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    0      0    0   0   0   0
    TX: bytes  packets  errors  dropped carrier collsns
    0  0    0   0   0   0

They're just not seen at all - none of the counters are increasing.  When 
I remove the local_ip option from the vxlan tunnels, the ping between the 
VMs works as expected, which you have shown:


roseg@ubuntu-1604-base:~$ ping 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=2.04 ms
64 bytes from 10.1.1.1: icmp_seq=2 ttl=64 time=0.366 ms
64 bytes from 10.1.1.1: icmp_seq=3 ttl=64 time=0.332 ms
64 bytes from 10.1.1.1: icmp_seq=4 ttl=64 time=0.335 ms
64 bytes 
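
For completeness, removing the option in place can be done with ovs-vsctl
(a sketch, using the port name above):

ovs-vsctl remove interface vxlan0 options local_ip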

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-08 Thread Gregory Rose

On 11/8/2018 3:48 PM, Siva Teja ARETI wrote:



On Thu, Nov 8, 2018 at 6:30 PM Gregory Rose wrote:


On 11/7/2018 4:14 PM, Siva Teja ARETI wrote:



On Wed, Nov 7, 2018 at 6:43 PM Gregory Rose <gvrose8...@gmail.com> wrote:

On 11/6/2018 3:17 PM, Gregory Rose wrote:
>
> I see.  It appears you are right and I misread the
documentation. OK,
> I'll investigate further then.
>
> - Greg
>
>

Siva,

I am still looking into this but wanted to update you on my
status.

I am seeing problems with the vxlan local_ip option myself.  Either I
don't actually understand the docs and examples I've looked at or else
there is a real bug, but given the reports from you and Marcus (in the
email thread you first referenced) I think there must be some issue.

Flavio mentioned that it might be an MTU issue, but the testing I'm
doing would not be affected by the MTU, so I think there is something
else.

Hopefully I can make some progress on this soon - I'll let
you know.

Thanks,

- Greg

Thanks for the update.


Siva,

When you see the error condition with the local_ip option on vxlan
can you provide me the output of
this command?

*# ip -s link show vxlan_sys_4789*
70: vxlan_sys_4789:  mtu 65470
qdisc noqueue master ovs-system state UNKNOWN mode DEFAULT group
default qlen 1000
    link/ether 0e:9b:58:4a:6e:44 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    0  0    0   0   0   0
    TX: bytes  packets  errors  dropped carrier collsns
    0  0    99  8   99  0

Hi Greg,

Here is the output.

[root@vm1 ~]# ip -s link show vxlan_sys_4789
27: vxlan_sys_4789:  mtu 65000 qdisc 
noqueue master ovs-system state UNKNOWN mode DEFAULT qlen 1000

    link/ether ca:8f:0d:13:08:1f brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    0          0        0       0       0       0
    TX: bytes  packets  errors  dropped carrier collsns
    3666796    130957   0       0       0       0

 Siva Teja.

It will help me understand which error you're encountering.

Thanks!

- Greg



Well then obviously I still have errors in my own setup.

Back to the drawing board but I think it's a routing issue in my case.

Thanks!



Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-08 Thread Siva Teja ARETI
On Thu, Nov 8, 2018 at 6:30 PM Gregory Rose  wrote:

> On 11/7/2018 4:14 PM, Siva Teja ARETI wrote:
>
>
>
> On Wed, Nov 7, 2018 at 6:43 PM Gregory Rose  wrote:
>
>> On 11/6/2018 3:17 PM, Gregory Rose wrote:
>> >
>> > I see.  It appears you are right and I misread the documentation. OK,
>> > I'll investigate further then.
>> >
>> > - Greg
>> >
>> >
>>
>> Siva,
>>
>> I am still looking into this but wanted to update you on my status.
>>
>> I am seeing problems with the vxlan local_ip option myself.  Either I
>> don't actually understand the docs
>> and examples I've looked at or else there is a real bug but given the
>> reports from  you and Marcus (in
>> the email thread you first referenced) I think there must be some issue.
>>
>> Flavio mentioned that it might be an MTU issue but the testing I'm doing
>> would not be affected by
>> the MTU so I think there is something else.
>>
>> Hopefully I can make some progress on this soon - I'll let you know.
>>
>> Thanks,
>>
>> - Greg
>>
>>
> Thanks for the update.
>
>
> Siva,
>
> When you see the error condition with the local_ip option on vxlan can you
> provide me the output of
> this command?
>
> *# ip -s link show vxlan_sys_4789*
> 70: vxlan_sys_4789:  mtu 65470 qdisc
> noqueue master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
> link/ether 0e:9b:58:4a:6e:44 brd ff:ff:ff:ff:ff:ff
> RX: bytes  packets  errors  dropped overrun mcast
> 0  00   0   0   0
> TX: bytes  packets  errors  dropped carrier collsns
> 0  099  8   99  0
>
> Hi Greg,

Here is the output.

[root@vm1 ~]# ip -s link show vxlan_sys_4789
27: vxlan_sys_4789:  mtu 65000 qdisc
noqueue master ovs-system state UNKNOWN mode DEFAULT qlen 1000
link/ether ca:8f:0d:13:08:1f brd ff:ff:ff:ff:ff:ff
RX: bytes  packets  errors  dropped overrun mcast
0  00   0   0   0
TX: bytes  packets  errors  dropped carrier collsns
3666796130957   0   0   0   0

 Siva Teja.

> It will help me understand which error you're encountering.
>
> Thanks!
>
> - Greg
>


Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-08 Thread Gregory Rose

On 11/7/2018 4:14 PM, Siva Teja ARETI wrote:



On Wed, Nov 7, 2018 at 6:43 PM Gregory Rose wrote:


On 11/6/2018 3:17 PM, Gregory Rose wrote:
>
> I see.  It appears you are right and I misread the
documentation. OK,
> I'll investigate further then.
>
> - Greg
>
>

Siva,

I am still looking into this but wanted to update you on my status.

I am seeing problems with the vxlan local_ip option myself.  Either I
don't actually understand the docs and examples I've looked at or else
there is a real bug, but given the reports from you and Marcus (in the
email thread you first referenced) I think there must be some issue.

Flavio mentioned that it might be an MTU issue, but the testing I'm
doing would not be affected by the MTU, so I think there is something
else.

Hopefully I can make some progress on this soon - I'll let you know.

Thanks,

- Greg

Thanks for the update.


Siva,

When you see the error condition with the local_ip option on vxlan can
you provide me the output of this command?

*# ip -s link show vxlan_sys_4789*
70: vxlan_sys_4789:  mtu 65470 qdisc 
noqueue master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000

    link/ether 0e:9b:58:4a:6e:44 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    0  0    0   0   0   0
    TX: bytes  packets  errors  dropped carrier collsns
    0  0    99  8   99  0

It will help me understand which error you're encountering.

Thanks!

- Greg


Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-07 Thread Siva Teja ARETI
On Wed, Nov 7, 2018 at 6:43 PM Gregory Rose  wrote:

> On 11/6/2018 3:17 PM, Gregory Rose wrote:
> >
> > I see.  It appears you are right and I misread the documentation. OK,
> > I'll investigate further then.
> >
> > - Greg
> >
> >
>
> Siva,
>
> I am still looking into this but wanted to update you on my status.
>
> I am seeing problems with the vxlan local_ip option myself.  Either I
> don't actually understand the docs
> and examples I've looked at or else there is a real bug but given the
> reports from  you and Marcus (in
> the email thread you first referenced) I think there must be some issue.
>
> Flavio mentioned that it might be an MTU issue but the testing I'm doing
> would not be affected by
> the MTU so I think there is something else.
>
> Hopefully I can make some progress on this soon - I'll let you know.
>
> Thanks,
>
> - Greg
>
>
Thanks for the update.


Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-07 Thread Gregory Rose

On 11/6/2018 3:17 PM, Gregory Rose wrote:


I see.  It appears you are right and I misread the documentation. OK, 
I'll investigate further then.


- Greg




Siva,

I am still looking into this but wanted to update you on my status.

I am seeing problems with the vxlan local_ip option myself.  Either I
don't actually understand the docs and examples I've looked at or else
there is a real bug, but given the reports from you and Marcus (in the
email thread you first referenced) I think there must be some issue.

Flavio mentioned that it might be an MTU issue, but the testing I'm
doing would not be affected by the MTU, so I think there is something
else.

Hopefully I can make some progress on this soon - I'll let you know.

Thanks,

- Greg




Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-07 Thread Flavio Leitner
On Tue, Nov 06, 2018 at 06:21:22PM -0500, Siva Teja ARETI wrote:
> Yes. Packet counts are incremented.
> 
> [root@vm1 ~]# ovs-ofctl dump-ports testbr0
> OFPST_PORT reply (xid=0x2): 3 ports
>   port LOCAL: rx pkts=0, bytes=0, drop=68726, errs=0, frame=0, over=0, crc=0
>tx pkts=0, bytes=0, drop=0, errs=0, coll=0
>   port  vxlan0: rx pkts=0, bytes=0, drop=?, errs=?, frame=?, over=?, crc=?
>tx pkts=58, bytes=2436, drop=?, errs=?, coll=?
>   port  "2cfb62a9b0f04_l": rx pkts=69211, bytes=2918374, drop=0, errs=0,
> frame=0, over=0, crc=0
>tx pkts=190, bytes=17532, drop=0, errs=0, coll=0


It sounds like you have an MTU issue. When using VXLAN, the encapsulated
packet is larger and may not be passing somewhere along the path.

fbl
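
One way to test the MTU hypothesis is a don't-fragment ping sweep (a
sketch; the container peer address is illustrative).  VXLAN adds 50 bytes
of overhead (outer IP + UDP + VXLAN headers plus the inner Ethernet
header), so a 1500-byte underlay leaves an inner IP MTU of 1450:

# from inside the container, toward the peer container (address illustrative)
# 1422 = 1450 - 28 bytes of inner ICMP/IP headers
ping -M do -s 1422 70.70.0.20
# directly between the tunnel endpoints on the underlay
ping -M do -s 1472 30.30.0.193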


> 
> Siva Teja.
> 
> On Tue, Nov 6, 2018 at 6:15 PM Flavio Leitner  wrote:
> 
> > On Tue, Nov 06, 2018 at 02:09:23PM -0500, Siva Teja ARETI wrote:
> > > Answers in line.
> > >
> > > Siva Teja.
> > >
> > > On Tue, Nov 6, 2018 at 1:56 PM Flavio Leitner  wrote:
> > >
> > > > On Tue, Nov 06, 2018 at 11:51:49AM -0500, Siva Teja ARETI wrote:
> > > > > Hi Greg,
> > > > >
> > > > > Thanks for looking into this.
> > > > >
> > > > > I have two VMs in my setup each with two interfaces. Trying to setup
> > the
> > > > > VXLAN tunnels across these interfaces which are in different
> > subnets. A
> > > > > docker container is attached to ovs bridge using ovs-docker utility
> > on
> > > > each
> > > > > VM and doing a ping from one container to another.
> > > >
> > > > Do you see any interesting related messages in 'dmesg' output or in
> > > > ovs-vswitchd.log?
> > > >
> > >
> > > I could not find any interesting messages in dmesg or in ovs-vswitchd.log
> > > output.
> > >
> > >
> > > > If I recall correctly, the "ip l" should show the vxlan dev named
> > > > vxlan_sys_
> > > >
> > >
> > > Yes. I can see the dev on both of my VMs
> > >
> > > [root@vm1 ~]# ifconfig vxlan_sys_4789
> > > vxlan_sys_4789: flags=4163  mtu 65000
> > > inet6 fe80::2a:28ff:fed2:d4f6  prefixlen 64  scopeid 0x20
> > > ether 02:2a:28:d2:d4:f6  txqueuelen 1000  (Ethernet)
> > > RX packets 0  bytes 0 (0.0 B)
> > > RX errors 0  dropped 0  overruns 0  frame 0
> > > TX packets 48  bytes 1680 (1.6 KiB)
> > > TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> >
> > Do you see TX increasing as you execute the test?
> > or in ovs-ofctl dump-ports  ?
> >
> > Thanks,
> > fbl
> >
> > >
> > >
> > >
> > > > fbl
> > > >
> > > > >
> > > > > *VM1 details:*
> > > > >
> > > > > [root@vm1 ~]# ip a
> > > > > ...
> > > > > 3: eth1:  mtu 1500 qdisc pfifo_fast
> > > > state
> > > > > UP qlen 1000
> > > > > link/ether 52:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
> > > > > inet 30.30.0.59/24 brd 30.30.0.255 scope global dynamic eth1
> > > > >valid_lft 3002sec preferred_lft 3002sec
> > > > > inet6 fe80::5054:ff:feb8:5be/64 scope link
> > > > >valid_lft forever preferred_lft forever
> > > > > 4: eth2:  mtu 1500 qdisc pfifo_fast
> > > > state
> > > > > UP qlen 1000
> > > > > link/ether 52:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
> > > > > inet 20.20.0.183/24 brd 20.20.0.255 scope global dynamic eth2
> > > > >valid_lft 3248sec preferred_lft 3248sec
> > > > > inet6 fe80::5054:ff:fef0:6437/64 scope link
> > > > >valid_lft forever preferred_lft forever
> > > > > ...
> > > > > [root@vm1 ~]# ovs-vsctl show
> > > > > ff70c814-d1b0-4018-aee8-8b635187afee
> > > > > Bridge "testbr0"
> > > > > Port "gre0"
> > > > > Interface "gre0"
> > > > > type: gre
> > > > > options: {local_ip="20.20.0.183",
> > > > remote_ip="30.30.0.193"}
> > > > > Port "testbr0"
> > > > > Interface "testbr0"
> > > > > type: internal
> > > > > Port "2cfb62a9b0f04_l"
> > > > > Interface "2cfb62a9b0f04_l"
> > > > > ovs_version: "2.9.2"
> > > > > [root@vm1 ~]# ip rule list
> > > > > 0:  from all lookup local
> > > > > 32765:  from 20.20.0.183 lookup siva
> > > > > 32766:  from all lookup main
> > > > > 32767:  from all lookup default
> > > > > [root@vm1 ~]# ip route show table siva
> > > > > default dev eth2 scope link src 20.20.0.183
> > > > > [root@vm1 ~]# # A docker container is
> > attached
> > > > to
> > > > > ovs bridge using ovs-docker utility
> > > > > [root@vm1 ~]# docker ps
> > > > > CONTAINER IDIMAGE   COMMAND CREATED
> > > > >  STATUS  PORTS   NAMES
> > > > > be4ab434db99busybox "sh"5 days
> > ago
> > > > > Up 5 days   admiring_euclid
> > > > > [root@vm1 ~]# nsenter -n -t `docker inspect be4
> > > > --format={{.State.Pid}}` --
> > > > > ip a
> > > > > 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
> > qlen
> > > > 1
> > > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > > inet 127.0.0.1/8 scope 

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-06 Thread Siva Teja ARETI
Yes. Packet counts are incremented.

[root@vm1 ~]# ovs-ofctl dump-ports testbr0
OFPST_PORT reply (xid=0x2): 3 ports
  port LOCAL: rx pkts=0, bytes=0, drop=68726, errs=0, frame=0, over=0, crc=0
   tx pkts=0, bytes=0, drop=0, errs=0, coll=0
  port  vxlan0: rx pkts=0, bytes=0, drop=?, errs=?, frame=?, over=?, crc=?
   tx pkts=58, bytes=2436, drop=?, errs=?, coll=?
  port  "2cfb62a9b0f04_l": rx pkts=69211, bytes=2918374, drop=0, errs=0,
frame=0, over=0, crc=0
   tx pkts=190, bytes=17532, drop=0, errs=0, coll=0
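
A convenient way to watch these counters move while the ping runs (a
side-note sketch):

watch -d 'ovs-ofctl dump-ports testbr0'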

Siva Teja.

On Tue, Nov 6, 2018 at 6:15 PM Flavio Leitner  wrote:

> On Tue, Nov 06, 2018 at 02:09:23PM -0500, Siva Teja ARETI wrote:
> > Answers in line.
> >
> > Siva Teja.
> >
> > On Tue, Nov 6, 2018 at 1:56 PM Flavio Leitner  wrote:
> >
> > > On Tue, Nov 06, 2018 at 11:51:49AM -0500, Siva Teja ARETI wrote:
> > > > Hi Greg,
> > > >
> > > > Thanks for looking into this.
> > > >
> > > > I have two VMs in my setup each with two interfaces. Trying to setup
> the
> > > > VXLAN tunnels across these interfaces which are in different
> subnets. A
> > > > docker container is attached to ovs bridge using ovs-docker utility
> on
> > > each
> > > > VM and doing a ping from one container to another.
> > >
> > > Do you see any interesting related messages in 'dmesg' output or in
> > > ovs-vswitchd.log?
> > >
> >
> > I could not find any interesting messages in dmesg or in ovs-vswitchd.log
> > output.
> >
> >
> > > If I recall correctly, the "ip l" should show the vxlan dev named
> > > vxlan_sys_
> > >
> >
> > Yes. I can see the dev on both of my VMs
> >
> > [root@vm1 ~]# ifconfig vxlan_sys_4789
> > vxlan_sys_4789: flags=4163  mtu 65000
> > inet6 fe80::2a:28ff:fed2:d4f6  prefixlen 64  scopeid 0x20
> > ether 02:2a:28:d2:d4:f6  txqueuelen 1000  (Ethernet)
> > RX packets 0  bytes 0 (0.0 B)
> > RX errors 0  dropped 0  overruns 0  frame 0
> > TX packets 48  bytes 1680 (1.6 KiB)
> > TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
> Do you see TX increasing as you execute the test?
> or in ovs-ofctl dump-ports  ?
>
> Thanks,
> fbl
>
> >
> >
> >
> > > fbl
> > >
> > > >
> > > > *VM1 details:*
> > > >
> > > > [root@vm1 ~]# ip a
> > > > ...
> > > > 3: eth1:  mtu 1500 qdisc pfifo_fast
> > > state
> > > > UP qlen 1000
> > > > link/ether 52:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
> > > > inet 30.30.0.59/24 brd 30.30.0.255 scope global dynamic eth1
> > > >valid_lft 3002sec preferred_lft 3002sec
> > > > inet6 fe80::5054:ff:feb8:5be/64 scope link
> > > >valid_lft forever preferred_lft forever
> > > > 4: eth2:  mtu 1500 qdisc pfifo_fast
> > > state
> > > > UP qlen 1000
> > > > link/ether 52:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
> > > > inet 20.20.0.183/24 brd 20.20.0.255 scope global dynamic eth2
> > > >valid_lft 3248sec preferred_lft 3248sec
> > > > inet6 fe80::5054:ff:fef0:6437/64 scope link
> > > >valid_lft forever preferred_lft forever
> > > > ...
> > > > [root@vm1 ~]# ovs-vsctl show
> > > > ff70c814-d1b0-4018-aee8-8b635187afee
> > > > Bridge "testbr0"
> > > > Port "gre0"
> > > > Interface "gre0"
> > > > type: gre
> > > > options: {local_ip="20.20.0.183",
> > > remote_ip="30.30.0.193"}
> > > > Port "testbr0"
> > > > Interface "testbr0"
> > > > type: internal
> > > > Port "2cfb62a9b0f04_l"
> > > > Interface "2cfb62a9b0f04_l"
> > > > ovs_version: "2.9.2"
> > > > [root@vm1 ~]# ip rule list
> > > > 0:  from all lookup local
> > > > 32765:  from 20.20.0.183 lookup siva
> > > > 32766:  from all lookup main
> > > > 32767:  from all lookup default
> > > > [root@vm1 ~]# ip route show table siva
> > > > default dev eth2 scope link src 20.20.0.183
> > > > [root@vm1 ~]# # A docker container is
> attached
> > > to
> > > > ovs bridge using ovs-docker utility
> > > > [root@vm1 ~]# docker ps
> > > > CONTAINER IDIMAGE   COMMAND CREATED
> > > >  STATUS  PORTS   NAMES
> > > > be4ab434db99busybox "sh"5 days
> ago
> > > > Up 5 days   admiring_euclid
> > > > [root@vm1 ~]# nsenter -n -t `docker inspect be4
> > > --format={{.State.Pid}}` --
> > > > ip a
> > > > 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
> qlen
> > > 1
> > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > inet 127.0.0.1/8 scope host lo
> > > >valid_lft forever preferred_lft forever
> > > > inet6 ::1/128 scope host
> > > >valid_lft forever preferred_lft forever
> > > > 2: gre0@NONE:  mtu 1476 qdisc noop state DOWN qlen 1
> > > > link/gre 0.0.0.0 brd 0.0.0.0
> > > > 3: gretap0@NONE:  mtu 1462 qdisc noop state
> DOWN
> > > qlen
> > > > 1000
> > > > link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> > > > 9: eth0@if10:  mtu 

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-06 Thread Gregory Rose

On 11/6/2018 3:08 PM, Siva Teja ARETI wrote:



On Tue, Nov 6, 2018 at 5:42 PM Gregory Rose wrote:



On 11/6/2018 8:51 AM, Siva Teja ARETI wrote:

Hi Greg,

Thanks for looking into this.

I have two VMs in my setup, each with two interfaces. I am trying to
set up VXLAN tunnels across these interfaces, which are in different
subnets. A docker container is attached to the OVS bridge using the
ovs-docker utility on each VM, and I am doing a ping from one
container to another.


Hi Siva,

In reading through the documentation and looking at your
configuration I noticed that when using the
local_ip option the remote_ip is not set to flow. If the local_ip
option is specified then remote_ip must
equal flow.

From the documentation (man ovs-vswitchd.conf.db):

   options : local_ip: optional string
  Optional.  The  tunnel destination IP that received
packets must
  match. Default is to match all addresses. If
specified,  may  be
  one of:

  ·  An IPv4/IPv6 address (not a DNS name), e.g.
192.168.12.3.

  ·  The  word flow. The tunnel accepts packets
sent to any of
 the local IP addresses of  the system 
running  OVS.  To
 process  only  packets sent to a specific IP
address, the
 flow entries may match on  the tun_dst  or 
tun_ipv6_dst
 field.  When  sending  packets to a
local_ip=flow tunnel,
 the flow  actions  may explicitly  set  the 
tun_src  or
 tun_ipv6_src field to the desired IP address,
e.g. with a
 set_field action. However, while  routing 
the  tunneled
 packet  out,  the local system may override
the specified
 address with the local IP address configured
for the out‐
 going system interface.

 This  option  is  valid  only for tunnels
also configured
 with the remote_ip=flow option.


As I understand this documentation, option local_ip=flow can only be 
specified when remote_ip=flow is also specified. Otherwise, ovs-vsctl 
throws this error I guess?


[root@vm1 ~]# ovs-vsctl add-port testbr0 vxlan1 -- set interface 
vxlan1 type=vxlan options:local_ip=flow options:remote_ip=30.30.0.193 
options:dst_port=4789
ovs-vsctl: Error detected while setting up 'vxlan1': vxlan1: vxlan 
type requires 'remote_ip=flow' with 'local_ip=flow'.  See ovs-vswitchd 
log for details.

ovs-vsctl: The default log directory is "/var/log/openvswitch".


Please try using the remote_ip=flow option and then configuring
the proper flow and action.


Anyways, I also tried specifying the remote_ip=flow option when 
creating the tunnel, but I still see the same issue.


I see.  It appears you are right and I misread the documentation. OK, 
I'll investigate further then.


- Greg




Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-06 Thread Flavio Leitner
On Tue, Nov 06, 2018 at 02:09:23PM -0500, Siva Teja ARETI wrote:
> Answers in line.
> 
> Siva Teja.
> 
> On Tue, Nov 6, 2018 at 1:56 PM Flavio Leitner  wrote:
> 
> > On Tue, Nov 06, 2018 at 11:51:49AM -0500, Siva Teja ARETI wrote:
> > > Hi Greg,
> > >
> > > Thanks for looking into this.
> > >
> > > I have two VMs in my setup each with two interfaces. Trying to setup the
> > > VXLAN tunnels across these interfaces which are in different subnets. A
> > > docker container is attached to ovs bridge using ovs-docker utility on
> > each
> > > VM and doing a ping from one container to another.
> >
> > Do you see any interesting related messages in 'dmesg' output or in
> > ovs-vswitchd.log?
> >
> 
> I could not find any interesting messages in dmesg or in ovs-vswitchd.log
> output.
> 
> 
> > If I recall correctly, the "ip l" should show the vxlan dev named
> > vxlan_sys_
> >
> 
> Yes. I can see the dev on both of my VMs
> 
> [root@vm1 ~]# ifconfig vxlan_sys_4789
> vxlan_sys_4789: flags=4163  mtu 65000
> inet6 fe80::2a:28ff:fed2:d4f6  prefixlen 64  scopeid 0x20
> ether 02:2a:28:d2:d4:f6  txqueuelen 1000  (Ethernet)
> RX packets 0  bytes 0 (0.0 B)
> RX errors 0  dropped 0  overruns 0  frame 0
> TX packets 48  bytes 1680 (1.6 KiB)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Do you see TX increasing as you execute the test?
or in ovs-ofctl dump-ports  ?

Thanks,
fbl

> 
> 
> 
> > fbl
> >
> > >
> > > *VM1 details:*
> > >
> > > [root@vm1 ~]# ip a
> > > ...
> > > 3: eth1:  mtu 1500 qdisc pfifo_fast
> > state
> > > UP qlen 1000
> > > link/ether 52:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
> > > inet 30.30.0.59/24 brd 30.30.0.255 scope global dynamic eth1
> > >valid_lft 3002sec preferred_lft 3002sec
> > > inet6 fe80::5054:ff:feb8:5be/64 scope link
> > >valid_lft forever preferred_lft forever
> > > 4: eth2:  mtu 1500 qdisc pfifo_fast
> > state
> > > UP qlen 1000
> > > link/ether 52:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
> > > inet 20.20.0.183/24 brd 20.20.0.255 scope global dynamic eth2
> > >valid_lft 3248sec preferred_lft 3248sec
> > > inet6 fe80::5054:ff:fef0:6437/64 scope link
> > >valid_lft forever preferred_lft forever
> > > ...
> > > [root@vm1 ~]# ovs-vsctl show
> > > ff70c814-d1b0-4018-aee8-8b635187afee
> > > Bridge "testbr0"
> > > Port "gre0"
> > > Interface "gre0"
> > > type: gre
> > > options: {local_ip="20.20.0.183",
> > remote_ip="30.30.0.193"}
> > > Port "testbr0"
> > > Interface "testbr0"
> > > type: internal
> > > Port "2cfb62a9b0f04_l"
> > > Interface "2cfb62a9b0f04_l"
> > > ovs_version: "2.9.2"
> > > [root@vm1 ~]# ip rule list
> > > 0:  from all lookup local
> > > 32765:  from 20.20.0.183 lookup siva
> > > 32766:  from all lookup main
> > > 32767:  from all lookup default
> > > [root@vm1 ~]# ip route show table siva
> > > default dev eth2 scope link src 20.20.0.183
> > > [root@vm1 ~]# # A docker container is attached
> > to
> > > ovs bridge using ovs-docker utility
> > > [root@vm1 ~]# docker ps
> > > CONTAINER IDIMAGE   COMMAND CREATED
> > >  STATUS  PORTS   NAMES
> > > be4ab434db99busybox "sh"5 days ago
> > > Up 5 days   admiring_euclid
> > > [root@vm1 ~]# nsenter -n -t `docker inspect be4
> > --format={{.State.Pid}}` --
> > > ip a
> > > 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen
> > 1
> > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > inet 127.0.0.1/8 scope host lo
> > >valid_lft forever preferred_lft forever
> > > inet6 ::1/128 scope host
> > >valid_lft forever preferred_lft forever
> > > 2: gre0@NONE:  mtu 1476 qdisc noop state DOWN qlen 1
> > > link/gre 0.0.0.0 brd 0.0.0.0
> > > 3: gretap0@NONE:  mtu 1462 qdisc noop state DOWN
> > qlen
> > > 1000
> > > link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> > > 9: eth0@if10:  mtu 1500 qdisc noqueue
> > > state UP qlen 1000
> > > link/ether 22:98:41:0f:e8:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> > > inet 70.70.0.10/24 scope global eth0
> > >valid_lft forever preferred_lft forever
> > > inet6 fe80::2098:41ff:fe0f:e850/64 scope link
> > >valid_lft forever preferred_lft forever
> > >
> > >
> > > *VM2 details:*
> > >
> > > [root@vm2 ~]# ip a
> > > 
> > > 3: eth1:  mtu 1500 qdisc pfifo_fast
> > state
> > > UP qlen 1000
> > > link/ether 52:54:00:79:ef:92 brd ff:ff:ff:ff:ff:ff
> > > inet 30.30.0.193/24 brd 30.30.0.255 scope global dynamic eth1
> > >valid_lft 2406sec preferred_lft 2406sec
> > > inet6 fe80::5054:ff:fe79:ef92/64 scope link
> > >valid_lft forever preferred_lft forever
> > > 4: eth2:  mtu 1500 qdisc pfifo_fast
> > state
> > > UP qlen 1000
> 

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-06 Thread Siva Teja ARETI
On Tue, Nov 6, 2018 at 5:42 PM Gregory Rose  wrote:

>
> On 11/6/2018 8:51 AM, Siva Teja ARETI wrote:
>
> Hi Greg,
>
> Thanks for looking into this.
>
> I have two VMs in my setup each with two interfaces. Trying to setup the
> VXLAN tunnels across these interfaces which are in different subnets. A
> docker container is attached to ovs bridge using ovs-docker utility on each
> VM and doing a ping from one container to another.
>
>
> Hi Siva,
>
> In reading through the documentation and looking at your configuration I
> noticed that when using the
> local_ip option the remote_ip is not set to flow.  If the local_ip option
> is specified then remote_ip must
> equal flow.
>
> From the documentation (man ovs-vswitchd.conf.db):
>
>options : local_ip: optional string
>   Optional.  The  tunnel destination IP that received packets
> must
>   match. Default is to match all addresses. If specified,
> may  be
>   one of:
>
>   ·  An IPv4/IPv6 address (not a DNS name), e.g.
> 192.168.12.3.
>
>   ·  The  word flow. The tunnel accepts packets sent to
> any of
>  the local IP addresses of  the  system  running
> OVS.  To
>  process  only  packets sent to a specific IP address,
> the
>  flow entries may match on  the  tun_dst  or
> tun_ipv6_dst
>  field.  When  sending  packets to a local_ip=flow
> tunnel,
>  the flow  actions  may  explicitly  set  the
> tun_src  or
>  tun_ipv6_src field to the desired IP address, e.g.
> with a
>  set_field action. However,  while  routing  the
> tunneled
>  packet  out,  the local system may override the
> specified
>  address with the local IP address configured for the
> out‐
>  going system interface.
>
>  This  option  is  valid  only for tunnels also
> configured
>  with the remote_ip=flow option.
>

As I understand this documentation, option local_ip=flow can only be
specified when remote_ip=flow is also specified. Otherwise, ovs-vsctl
throws this error I guess?

[root@vm1 ~]# ovs-vsctl add-port testbr0 vxlan1 -- set interface vxlan1
type=vxlan options:local_ip=flow options:remote_ip=30.30.0.193
options:dst_port=4789
ovs-vsctl: Error detected while setting up 'vxlan1': vxlan1: vxlan type
requires 'remote_ip=flow' with 'local_ip=flow'.  See ovs-vswitchd log for
details.
ovs-vsctl: The default log directory is "/var/log/openvswitch".


Please try using the remote_ip=flow option and then configuring the proper
> flow and action.
>

Anyways, I also tried specifying the remote_ip=flow option when creating
the tunnel, but I still see the same issue:

[root@vm1 ~]# ovs-vsctl add-port testbr0 vxlan0 -- set interface vxlan0
type=vxlan options:local_ip=20.20.0.183 options:remote_ip=flow
options:dst_port=4789
[root@vm1 ~]# ovs-appctl dpif/show
system@ovs-system: hit:69275 missed:1050
testbr0:
2cfb62a9b0f04_l 2/3: (system)
testbr0 65534/1: (internal)
vxlan0 3/2: (vxlan: local_ip=20.20.0.183, remote_ip=flow)
[root@vm1 ~]# ovs-ofctl add-flow testbr0 'table=0 priority=100 in_port=2
actions=set_tunnel:3,set_field:30.30.0.193->tun_dst,output:3'
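
With remote_ip=flow, return traffic arriving on the tunnel port also needs
a flow delivering it to the container port.  A sketch, using the OpenFlow
port numbers from the dpif/show output above (a tun_id match could be
added if several VNIs share the port):

ovs-ofctl add-flow testbr0 'table=0 priority=100 in_port=3 actions=output:2'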


> Thanks,
>
> - Greg
>
>
> *VM1 details:*
>
> [root@vm1 ~]# ip a
> ...
> 3: eth1:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
> link/ether 52:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
> inet 30.30.0.59/24 brd 30.30.0.255 scope global dynamic eth1
>valid_lft 3002sec preferred_lft 3002sec
> inet6 fe80::5054:ff:feb8:5be/64 scope link
>valid_lft forever preferred_lft forever
> 4: eth2:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
> link/ether 52:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
> inet 20.20.0.183/24 brd 20.20.0.255 scope global dynamic eth2
>valid_lft 3248sec preferred_lft 3248sec
> inet6 fe80::5054:ff:fef0:6437/64 scope link
>valid_lft forever preferred_lft forever
> ...
> [root@vm1 ~]# ovs-vsctl show
> ff70c814-d1b0-4018-aee8-8b635187afee
> Bridge "testbr0"
> Port "gre0"
> Interface "gre0"
> type: gre
> options: {local_ip="20.20.0.183", remote_ip="30.30.0.193"}
> Port "testbr0"
> Interface "testbr0"
> type: internal
> Port "2cfb62a9b0f04_l"
> Interface "2cfb62a9b0f04_l"
> ovs_version: "2.9.2"
> [root@vm1 ~]# ip rule list
> 0:  from all lookup local
> 32765:  from 20.20.0.183 lookup siva
> 32766:  from all lookup main
> 32767:  from all lookup default
> [root@vm1 ~]# ip route show table siva
> default dev eth2 scope link src 20.20.0.183
> [root@vm1 ~]# # A docker container is attached to
> ovs bridge using ovs-docker utility
> [root@vm1 ~]# docker ps
> CONTAINER IDIMAGE   

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-06 Thread Gregory Rose


On 11/6/2018 8:51 AM, Siva Teja ARETI wrote:

Hi Greg,

Thanks for looking into this.

I have two VMs in my setup, each with two interfaces. I am trying to set 
up VXLAN tunnels across these interfaces, which are in different subnets. 
A docker container is attached to the OVS bridge using the ovs-docker 
utility on each VM, and I am doing a ping from one container to another.


Hi Siva,

In reading through the documentation and looking at your configuration I 
noticed that when using the local_ip option the remote_ip is not set to 
flow.  If the local_ip option is specified then remote_ip must equal flow.

From the documentation (man ovs-vswitchd.conf.db):

   options : local_ip: optional string
          Optional.  The tunnel destination IP that received packets must
          match.  Default is to match all addresses.  If specified, may be
          one of:

          ·  An IPv4/IPv6 address (not a DNS name), e.g. 192.168.12.3.

          ·  The word flow.  The tunnel accepts packets sent to any of
             the local IP addresses of the system running OVS.  To
             process only packets sent to a specific IP address, the
             flow entries may match on the tun_dst or tun_ipv6_dst
             field.  When sending packets to a local_ip=flow tunnel,
             the flow actions may explicitly set the tun_src or
             tun_ipv6_src field to the desired IP address, e.g. with a
             set_field action.  However, while routing the tunneled
             packet out, the local system may override the specified
             address with the local IP address configured for the
             outgoing system interface.

             This option is valid only for tunnels also configured
             with the remote_ip=flow option.

Please try using the remote_ip=flow option and then configuring the 
proper flow and action.
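
A sketch of what that might look like with the addresses from this setup
(untested here, assuming a vxlan0 port on testbr0; the flow mirrors the
set_field example from the man page excerpt above):

ovs-vsctl set interface vxlan0 type=vxlan options:local_ip=flow options:remote_ip=flow
ovs-ofctl add-flow testbr0 'in_port=2 actions=set_field:20.20.0.183->tun_src,set_field:30.30.0.193->tun_dst,output:3'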


Thanks,

- Greg



*VM1 details:*

[root@vm1 ~]# ip a
...
3: eth1:  mtu 1500 qdisc pfifo_fast 
state UP qlen 1000

    link/ether 52:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
    inet 30.30.0.59/24  brd 30.30.0.255 scope 
global dynamic eth1

       valid_lft 3002sec preferred_lft 3002sec
    inet6 fe80::5054:ff:feb8:5be/64 scope link
       valid_lft forever preferred_lft forever
4: eth2:  mtu 1500 qdisc pfifo_fast 
state UP qlen 1000

    link/ether 52:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
    inet 20.20.0.183/24  brd 20.20.0.255 scope 
global dynamic eth2

       valid_lft 3248sec preferred_lft 3248sec
    inet6 fe80::5054:ff:fef0:6437/64 scope link
       valid_lft forever preferred_lft forever
...
[root@vm1 ~]# ovs-vsctl show
ff70c814-d1b0-4018-aee8-8b635187afee
    Bridge "testbr0"
        Port "gre0"
            Interface "gre0"
                type: gre
                options: {local_ip="20.20.0.183", remote_ip="30.30.0.193"}
        Port "testbr0"
            Interface "testbr0"
                type: internal
        Port "2cfb62a9b0f04_l"
            Interface "2cfb62a9b0f04_l"
    ovs_version: "2.9.2"
[root@vm1 ~]# ip rule list
0:      from all lookup local
32765:  from 20.20.0.183 lookup siva
32766:  from all lookup main
32767:  from all lookup default
[root@vm1 ~]# ip route show table siva
default dev eth2 scope link src 20.20.0.183
[root@vm1 ~]# # A docker container is attached 
to ovs bridge using ovs-docker utility

[root@vm1 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED      STATUS      PORTS   NAMES
be4ab434db99   busybox   "sh"      5 days ago   Up 5 days           admiring_euclid
[root@vm1 ~]# nsenter -n -t `docker inspect be4 
--format={{.State.Pid}}` -- ip a

1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8  scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: gre0@NONE:  mtu 1476 qdisc noop state DOWN qlen 1
    link/gre 0.0.0.0 brd 0.0.0.0
3: gretap0@NONE:  mtu 1462 qdisc noop state DOWN 
qlen 1000

    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
9: eth0@if10:  mtu 1500 qdisc noqueue 
state UP qlen 1000

    link/ether 22:98:41:0f:e8:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 70.70.0.10/24  scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2098:41ff:fe0f:e850/64 scope link
       valid_lft forever preferred_lft forever


*VM2 details:*
[root@vm2 ~]# ip a

3: eth1:  mtu 1500 qdisc pfifo_fast 
state UP qlen 1000

    link/ether 52:54:00:79:ef:92 brd ff:ff:ff:ff:ff:ff
    inet 30.30.0.193/24  brd 30.30.0.255 scope 
global dynamic eth1

       valid_lft 2406sec preferred_lft 

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-06 Thread Siva Teja ARETI
Answers inline.

Siva Teja.

On Tue, Nov 6, 2018 at 1:56 PM Flavio Leitner  wrote:

> On Tue, Nov 06, 2018 at 11:51:49AM -0500, Siva Teja ARETI wrote:
> > Hi Greg,
> >
> > Thanks for looking into this.
> >
> > I have two VMs in my setup each with two interfaces. Trying to setup the
> > VXLAN tunnels across these interfaces which are in different subnets. A
> > docker container is attached to ovs bridge using ovs-docker utility on
> each
> > VM and doing a ping from one container to another.
>
> Do you see any interesting related messages in 'dmesg' output or in
> ovs-vswitchd.log?
>

I could not find any interesting messages in dmesg or in ovs-vswitchd.log
output.


> If I recall correctly, the "ip l" should show the vxlan dev named
> vxlan_sys_
>

Yes. I can see the dev on both of my VMs

[root@vm1 ~]# ifconfig vxlan_sys_4789
vxlan_sys_4789: flags=4163  mtu 65000
inet6 fe80::2a:28ff:fed2:d4f6  prefixlen 64  scopeid 0x20
ether 02:2a:28:d2:d4:f6  txqueuelen 1000  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 48  bytes 1680 (1.6 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0



> fbl
>
> >
> > *VM1 details:*
> >
> > [root@vm1 ~]# ip a
> > ...
> > 3: eth1:  mtu 1500 qdisc pfifo_fast
> state
> > UP qlen 1000
> > link/ether 52:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
> > inet 30.30.0.59/24 brd 30.30.0.255 scope global dynamic eth1
> >valid_lft 3002sec preferred_lft 3002sec
> > inet6 fe80::5054:ff:feb8:5be/64 scope link
> >valid_lft forever preferred_lft forever
> > 4: eth2:  mtu 1500 qdisc pfifo_fast
> state
> > UP qlen 1000
> > link/ether 52:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
> > inet 20.20.0.183/24 brd 20.20.0.255 scope global dynamic eth2
> >valid_lft 3248sec preferred_lft 3248sec
> > inet6 fe80::5054:ff:fef0:6437/64 scope link
> >valid_lft forever preferred_lft forever
> > ...
> > [root@vm1 ~]# ovs-vsctl show
> > ff70c814-d1b0-4018-aee8-8b635187afee
> > Bridge "testbr0"
> > Port "gre0"
> > Interface "gre0"
> > type: gre
> > options: {local_ip="20.20.0.183",
> remote_ip="30.30.0.193"}
> > Port "testbr0"
> > Interface "testbr0"
> > type: internal
> > Port "2cfb62a9b0f04_l"
> > Interface "2cfb62a9b0f04_l"
> > ovs_version: "2.9.2"
> > [root@vm1 ~]# ip rule list
> > 0:  from all lookup local
> > 32765:  from 20.20.0.183 lookup siva
> > 32766:  from all lookup main
> > 32767:  from all lookup default
> > [root@vm1 ~]# ip route show table siva
> > default dev eth2 scope link src 20.20.0.183
> > [root@vm1 ~]# # A docker container is attached
> to
> > ovs bridge using ovs-docker utility
> > [root@vm1 ~]# docker ps
> > CONTAINER IDIMAGE   COMMAND CREATED
> >  STATUS  PORTS   NAMES
> > be4ab434db99busybox "sh"5 days ago
> > Up 5 days   admiring_euclid
> > [root@vm1 ~]# nsenter -n -t `docker inspect be4
> --format={{.State.Pid}}` --
> > ip a
> > 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen
> 1
> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > inet 127.0.0.1/8 scope host lo
> >valid_lft forever preferred_lft forever
> > inet6 ::1/128 scope host
> >valid_lft forever preferred_lft forever
> > 2: gre0@NONE:  mtu 1476 qdisc noop state DOWN qlen 1
> > link/gre 0.0.0.0 brd 0.0.0.0
> > 3: gretap0@NONE:  mtu 1462 qdisc noop state DOWN
> qlen
> > 1000
> > link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> > 9: eth0@if10:  mtu 1500 qdisc noqueue
> > state UP qlen 1000
> > link/ether 22:98:41:0f:e8:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> > inet 70.70.0.10/24 scope global eth0
> >valid_lft forever preferred_lft forever
> > inet6 fe80::2098:41ff:fe0f:e850/64 scope link
> >valid_lft forever preferred_lft forever
> >
> >
> > *VM2 details:*
> >
> > [root@vm2 ~]# ip a
> > 
> > 3: eth1:  mtu 1500 qdisc pfifo_fast
> state
> > UP qlen 1000
> > link/ether 52:54:00:79:ef:92 brd ff:ff:ff:ff:ff:ff
> > inet 30.30.0.193/24 brd 30.30.0.255 scope global dynamic eth1
> >valid_lft 2406sec preferred_lft 2406sec
> > inet6 fe80::5054:ff:fe79:ef92/64 scope link
> >valid_lft forever preferred_lft forever
> > 4: eth2:  mtu 1500 qdisc pfifo_fast
> state
> > UP qlen 1000
> > link/ether 52:54:00:05:93:7c brd ff:ff:ff:ff:ff:ff
> > inet 20.20.0.64/24 brd 20.20.0.255 scope global dynamic eth2
> >valid_lft 2775sec preferred_lft 2775sec
> > inet6 fe80::5054:ff:fe05:937c/64 scope link
> >valid_lft forever preferred_lft forever
> > ...
> > [root@vm2 ~]# ovs-vsctl show
> > b85514db-3f29-4f7a-9001-37d70adfca34
> > Bridge "testbr0"
> > Port "gre0"
> > 

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-06 Thread Flavio Leitner
On Tue, Nov 06, 2018 at 11:51:49AM -0500, Siva Teja ARETI wrote:
> Hi Greg,
> 
> Thanks for looking into this.
> 
> I have two VMs in my setup each with two interfaces. Trying to setup the
> VXLAN tunnels across these interfaces which are in different subnets. A
> docker container is attached to ovs bridge using ovs-docker utility on each
> VM and doing a ping from one container to another.

Do you see any interesting related messages in 'dmesg' output or in 
ovs-vswitchd.log?

If I recall correctly, the "ip l" should show the vxlan dev named
vxlan_sys_.
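
For example (a sketch; 4789 is the default VXLAN UDP port, so the shared
datapath device is usually vxlan_sys_4789):

ip -d link show vxlan_sys_4789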

fbl

> 
> *VM1 details:*
> 
> [root@vm1 ~]# ip a
> ...
> 3: eth1:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
> link/ether 52:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
> inet 30.30.0.59/24 brd 30.30.0.255 scope global dynamic eth1
>valid_lft 3002sec preferred_lft 3002sec
> inet6 fe80::5054:ff:feb8:5be/64 scope link
>valid_lft forever preferred_lft forever
> 4: eth2:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
> link/ether 52:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
> inet 20.20.0.183/24 brd 20.20.0.255 scope global dynamic eth2
>valid_lft 3248sec preferred_lft 3248sec
> inet6 fe80::5054:ff:fef0:6437/64 scope link
>valid_lft forever preferred_lft forever
> ...
> [root@vm1 ~]# ovs-vsctl show
> ff70c814-d1b0-4018-aee8-8b635187afee
> Bridge "testbr0"
> Port "gre0"
> Interface "gre0"
> type: gre
> options: {local_ip="20.20.0.183", remote_ip="30.30.0.193"}
> Port "testbr0"
> Interface "testbr0"
> type: internal
> Port "2cfb62a9b0f04_l"
> Interface "2cfb62a9b0f04_l"
> ovs_version: "2.9.2"
> [root@vm1 ~]# ip rule list
> 0:  from all lookup local
> 32765:  from 20.20.0.183 lookup siva
> 32766:  from all lookup main
> 32767:  from all lookup default
> [root@vm1 ~]# ip route show table siva
> default dev eth2 scope link src 20.20.0.183
> [root@vm1 ~]# # A docker container is attached to
> ovs bridge using ovs-docker utility
> [root@vm1 ~]# docker ps
> CONTAINER IDIMAGE   COMMAND CREATED
>  STATUS  PORTS   NAMES
> be4ab434db99busybox "sh"5 days ago
> Up 5 days   admiring_euclid
> [root@vm1 ~]# nsenter -n -t `docker inspect be4 --format={{.State.Pid}}` --
> ip a
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: gre0@NONE:  mtu 1476 qdisc noop state DOWN qlen 1
> link/gre 0.0.0.0 brd 0.0.0.0
> 3: gretap0@NONE:  mtu 1462 qdisc noop state DOWN qlen
> 1000
> link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 9: eth0@if10:  mtu 1500 qdisc noqueue
> state UP qlen 1000
> link/ether 22:98:41:0f:e8:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> inet 70.70.0.10/24 scope global eth0
>valid_lft forever preferred_lft forever
> inet6 fe80::2098:41ff:fe0f:e850/64 scope link
>valid_lft forever preferred_lft forever
> 
> 
> *VM2 details:*
> 
> [root@vm2 ~]# ip a
> 
> 3: eth1:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
> link/ether 52:54:00:79:ef:92 brd ff:ff:ff:ff:ff:ff
> inet 30.30.0.193/24 brd 30.30.0.255 scope global dynamic eth1
>valid_lft 2406sec preferred_lft 2406sec
> inet6 fe80::5054:ff:fe79:ef92/64 scope link
>valid_lft forever preferred_lft forever
> 4: eth2:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
> link/ether 52:54:00:05:93:7c brd ff:ff:ff:ff:ff:ff
> inet 20.20.0.64/24 brd 20.20.0.255 scope global dynamic eth2
>valid_lft 2775sec preferred_lft 2775sec
> inet6 fe80::5054:ff:fe05:937c/64 scope link
>valid_lft forever preferred_lft forever
> ...
> [root@vm2 ~]# ovs-vsctl show
> b85514db-3f29-4f7a-9001-37d70adfca34
> Bridge "testbr0"
> Port "gre0"
> Interface "gre0"
> type: gre
> options: {local_ip="30.30.0.193", remote_ip="20.20.0.183"}
> Port "a0769422cfc04_l"
> Interface "a0769422cfc04_l"
> Port "testbr0"
> Interface "testbr0"
> type: internal
> ovs_version: "2.9.2"
> [root@vm2 ~]# ip rule list
> 0:  from all lookup local
> 32766:  from all lookup main
> 32767:  from all lookup default
> [root@vm2 ~]# # A docker container is attached to
> ovs bridge using ovs-docker utility
> [root@vm2 ~]# docker ps
> CONTAINER IDIMAGE   COMMAND CREATED
>  STATUS  PORTS   NAMES
> 86214f0d99e8busybox:latest  "sh"5 days ago
> Up 5 days   peaceful_snyder
> 

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-06 Thread Gregory Rose

On 11/6/2018 8:51 AM, Siva Teja ARETI wrote:

Hi Greg,

Thanks for looking into this.

I have two VMs in my setup, each with two interfaces. I am trying to set 
up VXLAN tunnels across these interfaces, which are in different subnets. 
A docker container is attached to the OVS bridge using the ovs-docker 
utility on each VM, and I am doing a ping from one container to another.


I don't have docker and haven't used it at all so I'll have to get that 
installed and then try to match your configuration.  I'll keep you 
updated on my progress.

Thanks,

- Greg



*VM1 details:*

[root@vm1 ~]# ip a
...
3: eth1:  mtu 1500 qdisc pfifo_fast 
state UP qlen 1000

    link/ether 52:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
    inet 30.30.0.59/24  brd 30.30.0.255 scope 
global dynamic eth1

       valid_lft 3002sec preferred_lft 3002sec
    inet6 fe80::5054:ff:feb8:5be/64 scope link
       valid_lft forever preferred_lft forever
4: eth2:  mtu 1500 qdisc pfifo_fast 
state UP qlen 1000

    link/ether 52:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
    inet 20.20.0.183/24  brd 20.20.0.255 scope 
global dynamic eth2

       valid_lft 3248sec preferred_lft 3248sec
    inet6 fe80::5054:ff:fef0:6437/64 scope link
       valid_lft forever preferred_lft forever
...
[root@vm1 ~]# ovs-vsctl show
ff70c814-d1b0-4018-aee8-8b635187afee
    Bridge "testbr0"
        Port "gre0"
            Interface "gre0"
                type: gre
                options: {local_ip="20.20.0.183", remote_ip="30.30.0.193"}
        Port "testbr0"
            Interface "testbr0"
                type: internal
        Port "2cfb62a9b0f04_l"
            Interface "2cfb62a9b0f04_l"
    ovs_version: "2.9.2"
[root@vm1 ~]# ip rule list
0:      from all lookup local
32765:  from 20.20.0.183 lookup siva
32766:  from all lookup main
32767:  from all lookup default
[root@vm1 ~]# ip route show table siva
default dev eth2 scope link src 20.20.0.183
[root@vm1 ~]# # A docker container is attached 
to ovs bridge using ovs-docker utility

[root@vm1 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED      STATUS      PORTS   NAMES
be4ab434db99   busybox   "sh"      5 days ago   Up 5 days           admiring_euclid
[root@vm1 ~]# nsenter -n -t `docker inspect be4 
--format={{.State.Pid}}` -- ip a

1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8  scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: gre0@NONE:  mtu 1476 qdisc noop state DOWN qlen 1
    link/gre 0.0.0.0 brd 0.0.0.0
3: gretap0@NONE:  mtu 1462 qdisc noop state DOWN 
qlen 1000

    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
9: eth0@if10:  mtu 1500 qdisc noqueue 
state UP qlen 1000

    link/ether 22:98:41:0f:e8:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 70.70.0.10/24  scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2098:41ff:fe0f:e850/64 scope link
       valid_lft forever preferred_lft forever


*VM2 details:*
[root@vm2 ~]# ip a

3: eth1:  mtu 1500 qdisc pfifo_fast 
state UP qlen 1000

    link/ether 52:54:00:79:ef:92 brd ff:ff:ff:ff:ff:ff
    inet 30.30.0.193/24  brd 30.30.0.255 scope 
global dynamic eth1

       valid_lft 2406sec preferred_lft 2406sec
    inet6 fe80::5054:ff:fe79:ef92/64 scope link
       valid_lft forever preferred_lft forever
4: eth2:  mtu 1500 qdisc pfifo_fast 
state UP qlen 1000

    link/ether 52:54:00:05:93:7c brd ff:ff:ff:ff:ff:ff
    inet 20.20.0.64/24  brd 20.20.0.255 scope 
global dynamic eth2

       valid_lft 2775sec preferred_lft 2775sec
    inet6 fe80::5054:ff:fe05:937c/64 scope link
       valid_lft forever preferred_lft forever
...
[root@vm2 ~]# ovs-vsctl show
b85514db-3f29-4f7a-9001-37d70adfca34
    Bridge "testbr0"
        Port "gre0"
            Interface "gre0"
                type: gre
                options: {local_ip="30.30.0.193", remote_ip="20.20.0.183"}
        Port "a0769422cfc04_l"
            Interface "a0769422cfc04_l"
        Port "testbr0"
            Interface "testbr0"
                type: internal
    ovs_version: "2.9.2"
[root@vm2 ~]# ip rule list
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default
[root@vm2 ~]# # A docker container is attached 
to ovs bridge using ovs-docker utility

[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE            COMMAND   CREATED      STATUS      PORTS   NAMES
86214f0d99e8   busybox:latest   "sh"      5 days ago   Up 5 days           peaceful_snyder
[root@vm2 ~]# nsenter -n -t `docker inspect 862 
--format={{.State.Pid}}` -- ip a

1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-06 Thread Siva Teja ARETI
Hi Greg,

Thanks for looking into this.

I have two VMs in my setup, each with two interfaces. I am trying to set up
VXLAN tunnels across these interfaces, which are in different subnets. A
docker container is attached to the OVS bridge using the ovs-docker utility
on each VM, and I am doing a ping from one container to another.
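
For reference, the attach step with ovs-docker looks roughly like this
sketch (container ID and address as shown below):

ovs-docker add-port testbr0 eth0 be4ab434db99 --ipaddress=70.70.0.10/24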

*VM1 details:*

[root@vm1 ~]# ip a
...
3: eth1:  mtu 1500 qdisc pfifo_fast state
UP qlen 1000
link/ether 52:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
inet 30.30.0.59/24 brd 30.30.0.255 scope global dynamic eth1
   valid_lft 3002sec preferred_lft 3002sec
inet6 fe80::5054:ff:feb8:5be/64 scope link
   valid_lft forever preferred_lft forever
4: eth2:  mtu 1500 qdisc pfifo_fast state
UP qlen 1000
link/ether 52:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
inet 20.20.0.183/24 brd 20.20.0.255 scope global dynamic eth2
   valid_lft 3248sec preferred_lft 3248sec
inet6 fe80::5054:ff:fef0:6437/64 scope link
   valid_lft forever preferred_lft forever
...
[root@vm1 ~]# ovs-vsctl show
ff70c814-d1b0-4018-aee8-8b635187afee
Bridge "testbr0"
Port "gre0"
Interface "gre0"
type: gre
options: {local_ip="20.20.0.183", remote_ip="30.30.0.193"}
Port "testbr0"
Interface "testbr0"
type: internal
Port "2cfb62a9b0f04_l"
Interface "2cfb62a9b0f04_l"
ovs_version: "2.9.2"
[root@vm1 ~]# ip rule list
0:  from all lookup local
32765:  from 20.20.0.183 lookup siva
32766:  from all lookup main
32767:  from all lookup default
[root@vm1 ~]# ip route show table siva
default dev eth2 scope link src 20.20.0.183
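
The policy routing above would have been created with something like the
following sketch (the table name siva is assumed to be registered in
/etc/iproute2/rt_tables):

ip rule add from 20.20.0.183 lookup siva
ip route add default dev eth2 scope link src 20.20.0.183 table siva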
[root@vm1 ~]# # A docker container is attached to
ovs bridge using ovs-docker utility
[root@vm1 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED      STATUS      PORTS   NAMES
be4ab434db99   busybox   "sh"      5 days ago   Up 5 days           admiring_euclid
[root@vm1 ~]# nsenter -n -t `docker inspect be4 --format={{.State.Pid}}` --
ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: gre0@NONE:  mtu 1476 qdisc noop state DOWN qlen 1
link/gre 0.0.0.0 brd 0.0.0.0
3: gretap0@NONE:  mtu 1462 qdisc noop state DOWN qlen
1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
9: eth0@if10:  mtu 1500 qdisc noqueue
state UP qlen 1000
link/ether 22:98:41:0f:e8:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 70.70.0.10/24 scope global eth0
   valid_lft forever preferred_lft forever
inet6 fe80::2098:41ff:fe0f:e850/64 scope link
   valid_lft forever preferred_lft forever


*VM2 details:*

[root@vm2 ~]# ip a

3: eth1:  mtu 1500 qdisc pfifo_fast state
UP qlen 1000
link/ether 52:54:00:79:ef:92 brd ff:ff:ff:ff:ff:ff
inet 30.30.0.193/24 brd 30.30.0.255 scope global dynamic eth1
   valid_lft 2406sec preferred_lft 2406sec
inet6 fe80::5054:ff:fe79:ef92/64 scope link
   valid_lft forever preferred_lft forever
4: eth2:  mtu 1500 qdisc pfifo_fast state
UP qlen 1000
link/ether 52:54:00:05:93:7c brd ff:ff:ff:ff:ff:ff
inet 20.20.0.64/24 brd 20.20.0.255 scope global dynamic eth2
   valid_lft 2775sec preferred_lft 2775sec
inet6 fe80::5054:ff:fe05:937c/64 scope link
   valid_lft forever preferred_lft forever
...
[root@vm2 ~]# ovs-vsctl show
b85514db-3f29-4f7a-9001-37d70adfca34
Bridge "testbr0"
Port "gre0"
Interface "gre0"
type: gre
options: {local_ip="30.30.0.193", remote_ip="20.20.0.183"}
Port "a0769422cfc04_l"
Interface "a0769422cfc04_l"
Port "testbr0"
Interface "testbr0"
type: internal
ovs_version: "2.9.2"
[root@vm2 ~]# ip rule list
0:  from all lookup local
32766:  from all lookup main
32767:  from all lookup default
[root@vm2 ~]# # A docker container is attached to
ovs bridge using ovs-docker utility
[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE            COMMAND   CREATED      STATUS      PORTS   NAMES
86214f0d99e8   busybox:latest   "sh"      5 days ago   Up 5 days           peaceful_snyder
[root@vm2 ~]# nsenter -n -t `docker inspect 862 --format={{.State.Pid}}` --
ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: gre0@NONE:  mtu 1476 qdisc noop state DOWN qlen 1
link/gre 0.0.0.0 brd 0.0.0.0
3: gretap0@NONE:  mtu 1462 qdisc noop state DOWN 

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-06 Thread Gregory Rose


On 11/5/2018 6:10 PM, Siva Teja ARETI wrote:

Hi,

I am trying to use local_ip option for a VXLAN tunnel using ovs but it 
does not seem to work. The same works when I use GRE tunnel. I also 
found a previous discussion from another user who tried the exact same 
approach. Here is the link to the discussion


https://www.mail-archive.com/ovs-discuss@openvswitch.org/msg03643.html

I am unable to find any working resolution at the end of this 
discussion. Could you please help?


I looked into that but was never able to set up a configuration like the 
one in that discussion and could not repro the bug.

Please provide some details on your usage, configuration and steps to 
repro and I can look into it.


Thanks,

- Greg



I am using ovs 2.9.2

[root@localhost ~]# ovs-vsctl --version
ovs-vsctl (Open vSwitch) 2.9.2
DB Schema 7.15.1

Thanks,
Siva Teja.




[ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-05 Thread Siva Teja ARETI
Hi,

I am trying to use the local_ip option for a VXLAN tunnel using ovs but it
does not seem to work. The same works when I use a GRE tunnel. I also found
a previous discussion from another user who tried the exact same approach.
Here is the link to the discussion:

https://www.mail-archive.com/ovs-discuss@openvswitch.org/msg03643.html

I am unable to find any working resolution at the end of this discussion.
Could you please help?

I am using ovs 2.9.2

[root@localhost ~]# ovs-vsctl --version
ovs-vsctl (Open vSwitch) 2.9.2
DB Schema 7.15.1

Thanks,
Siva Teja.