Re: [Openstack-operators] [Neutron][Linuxbridge] Problem with configuring linux bridge agent with vxlan networks

2015-10-03 Thread Sławek Kapłoński
Hello,

I'm configuring it manually. DHCP is not working because the vxlan tunnels
are not working at all :/
The compute nodes and the network node can ping each other:

admin@network:~$ ping 10.1.0.4
PING 10.1.0.4 (10.1.0.4) 56(84) bytes of data.
64 bytes from 10.1.0.4: icmp_seq=1 ttl=64 time=8.83 ms
64 bytes from 10.1.0.4: icmp_seq=2 ttl=64 time=0.282 ms
^C
--- 10.1.0.4 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.282/4.560/8.838/4.278 ms
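
Side note for anyone hitting this later: a plain ping only proves that small
packets cross the underlay. VXLAN adds roughly 50 bytes of encapsulation, so a
don't-fragment ping near the MTU (e.g. `ping -M do -s 1472 10.1.0.4`) is a
stronger test that tunnel-sized frames fit. A sketch of the arithmetic, using
standard header sizes rather than anything measured in this thread:

```python
# Rough VXLAN overhead arithmetic for an IPv4 underlay (standard header
# sizes; none of these numbers come from the thread itself).
outer_ip = 20    # outer IPv4 header
outer_udp = 8    # outer UDP header
vxlan_hdr = 8    # VXLAN header
inner_eth = 14   # encapsulated Ethernet header

# Everything except the outer Ethernet header counts against the underlay MTU.
overhead = outer_ip + outer_udp + vxlan_hdr + inner_eth
print(overhead)  # 50

underlay_mtu = 1500
max_instance_mtu = underlay_mtu - overhead
print(max_instance_mtu)  # 1450
```

If the DF-bit ping fails while a plain ping works, the instances need their MTU
lowered (or the underlay MTU raised) before the tunnels can carry real traffic.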


-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On Sat, 03 Oct 2015, James Denton wrote:

> Are your instances getting their IPs from the DHCP server, or are you
> configuring them manually? Can the network node ping the compute node at
> 10.1.0.4 and vice versa?
> 
> Sent from my iPhone
> 
> > On Oct 3, 2015, at 3:55 AM, Sławek Kapłoński  wrote:
> > 
> > I set this vlan bridge_mapping just to check whether it would help for
> > some reason :) Before that I tested without the mapping configured. And
> > in fact I'm not using vlan networks at all (at least for now) - I only
> > want a local vxlan network between instances :)
> > When I booted one instance on the host, the brqXXX bridge got a
> > vxlan-10052 port and a tapXXX port (10052 is the VNI assigned to the
> > network in neutron). After booting a second vm I got a second tap
> > interface in the same bridge, so it looks like:
> > 
> > root@compute-2:~# brctl show
> > bridge name     bridge id           STP enabled   interfaces
> > brq8fe8a32f-e6  8000.ce544d0c0e5d   no            tap691a138a-6c
> >                                                   tapbc1e5179-53
> >                                                   vxlan-10052
> > virbr0          8000.5254007611ab   yes           virbr0-nic
> > 
> > 
> > So it looks fine to me. I have no idea what this virbr0 bridge is - maybe
> > it should be used somehow?
> > 
> > One more thing: the two vms on this host can ping each other, so the
> > bridge itself seems to be working fine. The problem is with the vxlan
> > tunnels.
> > 
> > About security groups: by default there is a rule allowing traffic
> > between vms using the same SG. All my instances use the same security
> > group, so that should not be the problem IMHO.
> > 
> > -- 
> > Best regards / Pozdrawiam
> > Sławek Kapłoński
> > sla...@kaplonski.pl
> > 
> >> On Fri, 02 Oct 2015, James Denton wrote:
> >> 
> >> If eth1 is used for the vxlan tunnel endpoints, it can't also be used in 
> >> a bridge via provider bridge mappings. You should have a dedicated 
> >> interface, or a vlan interface off eth1 (e.g. eth1.20), that is dedicated 
> >> to the overlay traffic. Move the local_ip address to that interface on 
> >> the respective nodes. Verify that you can ping between nodes at each 
> >> address; if this doesn't work, the Neutron pieces won't work. You 
> >> shouldn't have to restart any neutron services, since the IP isn't 
> >> changing.
> >> 
> >> Once you create a vxlan tenant network and boot some instances, verify 
> >> that the vxlan interface is being setup and placed in the respective 
> >> bridge. You can use 'brctl show' to look at the brq bridge that 
> >> corresponds to the network. You should see a vxlan interface and the tap 
> >> interfaces of your instances. 
> >> 
> >> As always, verify your security groups first when troubleshooting instance 
> >> to instance communication.
> >> 
> >> James
> >> 
> >> Sent from my iPhone
> >> 
> >>> On Oct 2, 2015, at 3:48 PM, Sławek Kapłoński  wrote:
> >>> 
> >>> Hello,
> >>> 
> >>> I'm trying to configure a small openstack infra (one network node, 2
> >>> compute nodes) with linux bridge and vxlan tenant networks. I don't know
> >>> what I'm doing wrong, but my instances have no connectivity to each
> >>> other. On the compute hosts I run neutron-plugin-linuxbridge-agent
> >>> with a config like:
> >>> 
> >>> --
> >>> [ml2_type_vxlan]
> >>> # (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples
> >>> # enumerating
> >>> # ranges of VXLAN VNI IDs that are available for tenant network
> >>> # allocation.
> >>> #
> >>> vni_ranges = 1:2
> >>> 
> >>> # (StrOpt) Multicast group for the VXLAN interface. When configured,
> >>> # will
> >>> # enable sending all broadcast traffic to this multicast group. When
> >>> # left
> >>> # unconfigured, will disable multicast VXLAN mode.
> >>> #
> >>> # vxlan_group =
> >>> # Example: vxlan_group = 239.1.1.1
> >>> 
> >>> [securitygroup]
> >>> # Controls if neutron security group is enabled or not.
> >>> # It should be false when you use nova security group.
> >>> enable_security_group = True
> >>> 
> >>> # Use ipset to speed-up the iptables security groups. Enabling ipset
> >>> # support
> >>> # requires that ipset is installed on L2 agent node.
> >>> enable_ipset = True
> >>> 
> >>> firewall_driver = 
> >>> neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
> >>> 
> >>> [ovs]
> >>> local_ip = 10.1.0.4
> >>> 
> >>> [agent]
> >>> tunnel_types = vxlan
> >>> 
> >>> [linuxbridge]
> >>> 

[Openstack-operators] [Neutron][Linuxbridge] Problem with configuring linux bridge agent with vxlan networks

2015-10-02 Thread Sławek Kapłoński
Hello,

I'm trying to configure a small openstack infra (one network node, 2
compute nodes) with linux bridge and vxlan tenant networks. I don't know
what I'm doing wrong, but my instances have no connectivity to each
other. On the compute hosts I run neutron-plugin-linuxbridge-agent
with a config like:

--
[ml2_type_vxlan]
# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples
# enumerating
# ranges of VXLAN VNI IDs that are available for tenant network
# allocation.
#
vni_ranges = 1:2

# (StrOpt) Multicast group for the VXLAN interface. When configured,
# will
# enable sending all broadcast traffic to this multicast group. When
# left
# unconfigured, will disable multicast VXLAN mode.
#
# vxlan_group =
# Example: vxlan_group = 239.1.1.1

[securitygroup]
# Controls if neutron security group is enabled or not.
# It should be false when you use nova security group.
enable_security_group = True

# Use ipset to speed-up the iptables security groups. Enabling ipset
# support
# requires that ipset is installed on L2 agent node.
enable_ipset = True

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[ovs]
local_ip = 10.1.0.4

[agent]
tunnel_types = vxlan

[linuxbridge]
physical_interface_mappings = physnet1:eth1

[vxlan]
local_ip = 10.1.0.4
l2_population = True
enable_vxlan = True
---
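
For comparison, a minimal sketch of what the agent's tunnel-related sections
could look like with a dedicated VLAN sub-interface for the overlay. The VLAN
id 20, the eth1.20 name, and the 10.1.0.4 address are assumptions for
illustration, not taken from a working setup; note also that the Linuxbridge
agent reads local_ip from [vxlan], so a [ovs] section should have no effect
here.

```ini
# Hypothetical linuxbridge_agent.ini fragment: provider traffic on eth1,
# overlay endpoint on a dedicated sub-interface eth1.20 (VLAN 20).
[linuxbridge]
physical_interface_mappings = physnet1:eth1

[vxlan]
enable_vxlan = True
l2_population = True
# local_ip should be the address configured on eth1.20, not on eth1 itself
local_ip = 10.1.0.4
```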

Eth1 is my "tunnel network" which should be used for the tunnels. When I
spawn vms on compute 1 and 2, and after configuring the network manually on
both vms (dhcp is also not working, probably because of the broken tunnels),
they cannot ping each other.
Even when I started two instances on same host and they are both
connected to one bridge:

---
root@compute-2:/usr/lib/python2.7/dist-packages/neutron# brctl show
bridge name     bridge id           STP enabled   interfaces
brq8fe8a32f-e6  8000.ce544d0c0e5d   no            tap691a138a-6c
                                                  tapbc1e5179-53
                                                  vxlan-10052
virbr0          8000.5254007611ab   yes           virbr0-nic
---
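
Whether traffic actually enters the tunnel can be checked with a few read-only
commands (a sketch: vxlan-10052 is the interface from the output above, and
8472 is the Linux kernel's default VXLAN UDP port, which differs from the
IANA-assigned 4789):

```shell
# Read-only VXLAN diagnostics; each command tolerates a missing
# interface/tool so the sketch is safe to paste on any node.
ip -d link show vxlan-10052 2>/dev/null || true         # shows VNI, local IP, dev binding
bridge fdb show 2>/dev/null | grep vxlan-10052 || true  # learned MAC -> VTEP entries
# While pinging between the vms, watch the underlay for encapsulated packets:
timeout 3 tcpdump -ni eth1 udp port 8472 2>/dev/null || true
```

If tcpdump on eth1 shows nothing on port 8472 while the vms ping each other,
the frames are never being encapsulated; if packets leave one node but never
arrive on the other, the underlay (or a firewall on it) is dropping them.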

Those 2 vms cannot ping each other :/
In fact I don't have any experience with linux bridge (until now I was
always using ovs). Maybe some of you will know what I should check or what
I have configured wrong :/ Generally I installed this openstack according
to the official openstack documentation, but the docs describe ovs+gre
tunnels and that is what I changed. I'm using Ubuntu 14.04 and Openstack
Kilo installed from the cloud archive repo.
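
One way to take Neutron out of the picture entirely is a hand-built VXLAN
tunnel between the two nodes. The sketch below only defines a helper function;
the VNI 4242, the 192.168.99.0/24 addresses, and the node IPs are made up for
illustration. Run it as root on one node, mirror it on the other (local/remote
swapped, .2/24), then ping across:

```shell
# Hypothetical manual VXLAN sanity test, independent of Neutron.
# Defining the function is harmless; calling it requires root.
setup_vxlan_overlay_test() {
    # Node A: underlay local 10.1.0.3, remote 10.1.0.4 (swap on node B).
    ip link add vxlan-test type vxlan id 4242 \
        local 10.1.0.3 remote 10.1.0.4 dstport 8472 dev eth1
    ip addr add 192.168.99.1/24 dev vxlan-test   # use 192.168.99.2/24 on node B
    ip link set vxlan-test up
}
# After running it on both nodes:  ping -c 3 192.168.99.2
# If this ping also fails, the problem is in the underlay, not in Neutron.
```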

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Neutron][Linuxbridge] Problem with configuring linux bridge agent with vxlan networks

2015-10-02 Thread Matt Kassawara
Did you review the scenarios in the networking guide [1]?

[1] http://docs.openstack.org/networking-guide/deploy.html

On Fri, Oct 2, 2015 at 2:41 PM, Sławek Kapłoński 
wrote:

> Hello,
>
> I'm trying to configure a small openstack infra (one network node, 2
> compute nodes) with linux bridge and vxlan tenant networks. I don't know
> what I'm doing wrong, but my instances have no connectivity to each
> other. On the compute hosts I run neutron-plugin-linuxbridge-agent
> with a config like:
>
> --
> [ml2_type_vxlan]
> # (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples
> # enumerating
> # ranges of VXLAN VNI IDs that are available for tenant network
> # allocation.
> #
> vni_ranges = 1:2
>
> # (StrOpt) Multicast group for the VXLAN interface. When configured,
> # will
> # enable sending all broadcast traffic to this multicast group. When
> # left
> # unconfigured, will disable multicast VXLAN mode.
> #
> # vxlan_group =
> # Example: vxlan_group = 239.1.1.1
>
> [securitygroup]
> # Controls if neutron security group is enabled or not.
> # It should be false when you use nova security group.
> enable_security_group = True
>
> # Use ipset to speed-up the iptables security groups. Enabling ipset
> # support
> # requires that ipset is installed on L2 agent node.
> enable_ipset = True
>
> firewall_driver =
> neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
>
> [ovs]
> local_ip = 10.1.0.4
>
> [agent]
> tunnel_types = vxlan
>
> [linuxbridge]
> physical_interface_mappings = physnet1:eth1
>
> [vxlan]
> local_ip = 10.1.0.4
> l2_population = True
> enable_vxlan = True
> ---
>
> Eth1 is my "tunnel network" which should be used for the tunnels. When I
> spawn vms on compute 1 and 2, and after configuring the network manually on
> both vms (dhcp is also not working, probably because of the broken tunnels),
> they cannot ping each other.
> Even when I started two instances on same host and they are both
> connected to one bridge:
>
> ---
> root@compute-2:/usr/lib/python2.7/dist-packages/neutron# brctl show
> bridge name     bridge id           STP enabled   interfaces
> brq8fe8a32f-e6  8000.ce544d0c0e5d   no            tap691a138a-6c
>                                                   tapbc1e5179-53
>                                                   vxlan-10052
> virbr0          8000.5254007611ab   yes           virbr0-nic
> ---
>
> Those 2 vms cannot ping each other :/
> In fact I don't have any experience with linux bridge (until now I was
> always using ovs). Maybe some of you will know what I should check or
> what I have configured wrong :/ Generally I installed this openstack
> according to the official openstack documentation, but the docs describe
> ovs+gre tunnels and that is what I changed. I'm using Ubuntu 14.04 and
> Openstack Kilo installed from the cloud archive repo.
>
> --
> Best regards / Pozdrawiam
> Sławek Kapłoński
> sla...@kaplonski.pl
>
>
>
>


Re: [Openstack-operators] [Neutron][Linuxbridge] Problem with configuring linux bridge agent with vxlan networks

2015-10-02 Thread Sławek Kapłoński
Hello,

Yes. I have mostly been reading
http://docs.openstack.org/networking-guide/scenario_legacy_lb.html
because imho that is what I want now :). I don't want DVR or L3 HA (at
least for now), only traffic over vxlan between two vms connected to the
same tenant network (that is probably called east-west traffic, yes?)

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On Fri, 02 Oct 2015, Matt Kassawara wrote:

> Did you review the scenarios in the networking guide [1]?
> 
> [1] http://docs.openstack.org/networking-guide/deploy.html
> 
> On Fri, Oct 2, 2015 at 2:41 PM, Sławek Kapłoński 
> wrote:
> 
> > Hello,
> >
> > I'm trying to configure a small openstack infra (one network node, 2
> > compute nodes) with linux bridge and vxlan tenant networks. I don't know
> > what I'm doing wrong, but my instances have no connectivity to each
> > other. On the compute hosts I run neutron-plugin-linuxbridge-agent
> > with a config like:
> >
> > --
> > [ml2_type_vxlan]
> > # (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples
> > # enumerating
> > # ranges of VXLAN VNI IDs that are available for tenant network
> > # allocation.
> > #
> > vni_ranges = 1:2
> >
> > # (StrOpt) Multicast group for the VXLAN interface. When configured,
> > # will
> > # enable sending all broadcast traffic to this multicast group. When
> > # left
> > # unconfigured, will disable multicast VXLAN mode.
> > #
> > # vxlan_group =
> > # Example: vxlan_group = 239.1.1.1
> >
> > [securitygroup]
> > # Controls if neutron security group is enabled or not.
> > # It should be false when you use nova security group.
> > enable_security_group = True
> >
> > # Use ipset to speed-up the iptables security groups. Enabling ipset
> > # support
> > # requires that ipset is installed on L2 agent node.
> > enable_ipset = True
> >
> > firewall_driver =
> > neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
> >
> > [ovs]
> > local_ip = 10.1.0.4
> >
> > [agent]
> > tunnel_types = vxlan
> >
> > [linuxbridge]
> > physical_interface_mappings = physnet1:eth1
> >
> > [vxlan]
> > local_ip = 10.1.0.4
> > l2_population = True
> > enable_vxlan = True
> > ---
> >
> > Eth1 is my "tunnel network" which should be used for the tunnels. When I
> > spawn vms on compute 1 and 2, and after configuring the network manually
> > on both vms (dhcp is also not working, probably because of the broken
> > tunnels), they cannot ping each other.
> > Even when I started two instances on same host and they are both
> > connected to one bridge:
> >
> > ---
> > root@compute-2:/usr/lib/python2.7/dist-packages/neutron# brctl show
> > bridge name     bridge id           STP enabled   interfaces
> > brq8fe8a32f-e6  8000.ce544d0c0e5d   no            tap691a138a-6c
> >                                                   tapbc1e5179-53
> >                                                   vxlan-10052
> > virbr0          8000.5254007611ab   yes           virbr0-nic
> > ---
> >
> > Those 2 vms cannot ping each other :/
> > In fact I don't have any experience with linux bridge (until now I was
> > always using ovs). Maybe some of you will know what I should check or
> > what I have configured wrong :/ Generally I installed this openstack
> > according to the official openstack documentation, but the docs describe
> > ovs+gre tunnels and that is what I changed. I'm using Ubuntu 14.04 and
> > Openstack Kilo installed from the cloud archive repo.
> >
> > --
> > Best regards / Pozdrawiam
> > Sławek Kapłoński
> > sla...@kaplonski.pl
> >
> >
> >
> >

