See inline at #PCM

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On Oct 1, 2014, at 8:35 AM, masoom alam <masoom.a...@gmail.com> wrote:

> In line. Thanks for the response.
> 
> >
> >
> > @PCM So is the public IP for the router (172.24.4.226) an IP on the 
> > Internet? In the example, IIRC, the quantum router has an IP on the public 
> > network, and the GW IP is also on the same network (172.24.4.225). I think 
> > the latter is assigned to the external bridge on the host (br-ex). Is that 
> > what you have?
> >
> >
> >
> @MA: Both of these IPs are provider network IPs. Think of them as data 
> center internal IPs. Thus the host system is also acting as a router in some 
> way. br-ex should be connected to our eth0 (which has a public IP). When we 
> give the following command, our system gets corrupted.
> 
> 

#PCM Afraid I don’t understand what you mean by “system gets corrupted”.

If I understand, it sounds like you have:

Neutron router (public IP) 172.24.4.226
       |
       |
br-ex 172.24.4.225
       |
       |
eth0 x.x.x.x (Internet IP)

Is that correct?

I’ve never done that (I’ve always worked in a lab environment), but I don’t 
see how the OVS bridge can “route” packets between two different networks. I 
could see this working with a physical router, or with the Neutron router 
having an IP on the Internet.

In any case, you have a basic connectivity issue, so you need to get that 
squared away first.
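To help narrow down where the break is, a rough triage from the devstack East host might look like the following (the qrouter UUID and the peer address 172.24.4.233 are placeholders; substitute the values from your own setup):

```shell
# List router namespaces to find the qrouter ID on this host
ip netns list | grep qrouter

# From inside the router namespace, ping the far end's public IP
# (replace <uuid> with your router's namespace ID)
sudo ip netns exec qrouter-<uuid> ping -c 3 172.24.4.233

# While pinging, watch br-ex to see whether ICMP actually leaves the host
sudo tcpdump -n -i br-ex icmp
```

If the pings appear on br-ex but get no reply, the problem is upstream of the host; if they never reach br-ex, the break is inside the OVS/router plumbing.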

Sorry I’m not much help… maybe someone can chime in…


Regards,

PCM

> sudo ovs-vsctl add-port br-ex eth0
> Even when I try to set eth0 as the default gateway, the system gets 
> corrupted. We noticed that IP forwarding was not enabled, so we enabled it, 
> but no gain.
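One guess at what “gets corrupted” means here: once eth0 is enslaved to br-ex, it no longer participates in normal host networking, so the host drops off the network unless eth0’s address and default route are moved to the bridge. A sketch of the usual sequence (the addresses are placeholders; run this from a console, not over SSH via eth0):

```shell
# Add the physical NIC to the external bridge
sudo ovs-vsctl add-port br-ex eth0

# Move eth0's address to the bridge so the host keeps connectivity
sudo ip addr del <eth0-ip>/<prefix> dev eth0
sudo ip addr add <eth0-ip>/<prefix> dev br-ex
sudo ip link set br-ex up

# Restore the default route via the bridge
sudo ip route replace default via <gateway-ip> dev br-ex
```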
> 
> 
> 
> 
> 
> 
> >>
> >>       (10.1.0.0/24 - DevStack East)
> >>               |
> >>               |  10.1.0.1
> >>      [Quantum Router]
> >>               |  172.24.4.226
> >>               |
> >>               |  172.24.4.225
> >>      [Internet GW]
> >>               |  
> >>               |
> >>      [Internet GW]
> >>               | 172.24.4.232
> >>               |
> >>               | 172.24.4.233
> >>      [Quantum Router]
> >>               |  10.2.0.1
> >>               |
> >>      (10.2.0.0/24 DevStack West)
> >>
> >>
> >>>
> >>> First thing would be to ensure that you can ping from one host to another 
> >>> over the public IPs involved. You can then go to the namespace of the 
> >>> router and see if you can ping the public I/F of the other end’s router.
> >>
> >>
> >> We can ping anything from the host having the devstack setup, for example 
> >> google.com, but not the GW of the other host.
> >
> >
> > @PCM Are you saying that the host for devstack East can ping hosts on the 
> > Internet, but cannot ping the GW IP of the other devstack setup (also on 
> > the Internet)?
> >
> > I guess I need to understand what the “GW” actually is, in your setup. For 
> > the example given, it is the host’s br-ex interface and is on the same 
> > subnet as the router’s public interface.
> >
> >
> >> However, we cannot ping from within the CirrOS instance. I have run the 
> >> traceroute command, and we reach 172.24.4.225 but not beyond that 
> >> point.
> >
> >
> > @PCM By 172.24.4.225, do you mean the Internet IP for the br-ex interface 
> > on the local host? The CirrOS VM, irrespective of VPN, should be able to 
> > ping the router’s public IP, the gateway IP, and the far-end public IPs. 
> > I’m struggling to understand what you have set up. Is the Internet GW just 
> > the br-ex or some external router?
> >
> 
> 
> > Sounds like you have some connectivity issues outside of VPN. From the 
> > CirrOS VM you should be able to ping everything, except the CirrOS VMs 
> > on the other side.
> >
> >
> >> BTW, we did some other experiments as well. For example, when we tried to 
> >> explicitly link our br-ex (172.24.4.225) with eth0 (Internet GW), the 
> >> machine got corrupted. The same happens if we do a hard reboot; Neutron 
> >> gets corrupted :)
> >
> >
> > @PCM This seems to be the point of confusion. In the example, br-ex would 
> > have an IP on the public network. Sounds like that is not the case here. 
> > The br-ex would have a port that is the interface actually connected to 
> > the public network. For example, I may have eth1 on my system added to 
> > br-ex, with eth1 connected to a switch that connects this host to the 
> > other node (in a simple lab environment).
> >
> > Not sure I understand what you mean by “machine got corrupted” and “Neutron 
> > gets corrupted”. Can you elaborate?
> >
> > When I set this up in a lab, I add the interface to br-ex and then I 
> > stack. In the localrc, the interface is specified, along with br-ex.
> >
> >
> >>  
> >>>
> >>>
> >>> You can look at the screen-q-vpn.log (assuming devstack used) to see if 
> >>> any errors during setup.
> >>>
> >>> Note: When I stack, I turn off neutron security groups and then set nova 
> >>> security groups to allow SSH and ICMP. I imagine the alternative would be 
> >>> to setup neutron security groups to allow these two protocols.
> >
> >
> > @PCM What are you doing for security groups? I disable Neutron security 
> > groups and have set Nova to allow ICMP and SSH. I think you can instead do:
> >
> > LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
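For reference, the stack-time settings described above might look like this in localrc, with the ICMP/SSH rules added after stacking (the variable names and the eth1 choice are assumptions based on DevStack of that era; treat this as a sketch for your own environment):

```shell
# localrc fragment: use the Noop firewall driver so Nova security
# groups (rather than Neutron's) control instance access
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
PUBLIC_BRIDGE=br-ex
PUBLIC_INTERFACE=eth1   # the NIC added to br-ex; adjust to your host

# After stacking, allow ICMP and SSH in the default security group:
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
```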
> >
> > HTHs,
> >
> > PCM
> >
> >
> >>>
> >>> I didn’t quite follow what you meant by "Please note that my two devstack 
> >>> nodes are on different public addresses, so scenario is a little 
> >>> different than the one described here: 
> >>> https://wiki.openstack.org/wiki/Neutron/VPNaaS/HowToInstall”. Can you 
> >>> elaborate (showing the commands and topology will help)?
> >>>
> >>> Germy,
> >>>
> >>> I created this BP during Juno (unfortunately with no progress on it so 
> >>> far) regarding being able to see more status information for 
> >>> troubleshooting: 
> >>> https://blueprints.launchpad.net/neutron/+spec/l3-svcs-vendor-status-report
> >>>
> >>> It was targeted for vendor implementations, but would include reference 
> >>> implementation status too. Right now, if a VPN connection negotiation 
> >>> fails, there’s no indication of what went wrong.
> >>>
> >>> Regards,
> >>>
> >>>
> >>> PCM (Paul Michali)
> >>>
> >>>
> >>>
> >>> On Sep 29, 2014, at 1:38 AM, masoom alam <masoom.a...@gmail.com> wrote:
> >>>
> >>>> Hi Germy
> >>>>
> >>>> We cannot ping the public interface of the 2nd devstack setup (devstack 
> >>>> West). From our CirrOS instance (on the first devstack -- devstack 
> >>>> East), we can ping our own public IP, but cannot ping the other public 
> >>>> IP. I think the problem lies here: if we cannot reach devstack West, 
> >>>> how can we make a VPN connection?
> >>>>
> >>>> Our topology looks like:
> >>>>
> >>>> CirrOS ---> Qrouter ---> Public IP ------- Public IP ---> Qrouter ---> CirrOS
> >>>>        \______________________/                   \______________________/
> >>>>              devstack EAST                               devstack WEST
> >>>>
> >>>>
> >>>> Also, it is important to note that we are not able to SSH to the 
> >>>> instance's private IP without sudo ip netns exec qrouter-<id>, so this 
> >>>> means we cannot even SSH with the floating IP.
> >>>>
> >>>>
> >>>> It seems there is a problem in the firewall or iptables. 
> >>>>
> >>>> Please guide
> >>>>
> >>>>
> >>>>
> >>>> On Sunday, September 28, 2014, Germy Lure <germy.l...@gmail.com> wrote:
> >>>>>
> >>>>> Hi,
> >>>>>
> >>>>> masoom:
> >>>>> I think first you can just check whether you can ping from left to 
> >>>>> right without a VPN connection installed.
> >>>>> If that works, then you should check the system logs to confirm the 
> >>>>> configuration is OK.
> >>>>> You can use ping and tcpdump to find where packets are blocked.
> >>>>>
> >>>>> stackers:
> >>>>> I think we should provide a mechanism to show the cause when a VPN 
> >>>>> connection is down. At least, we could add an attribute to explain 
> >>>>> this. Maybe the VPN incubator project is a chance?
> >>>>>
> >>>>> BR,
> >>>>> Germy
> >>>>>
> >>>>>
> >>>>> On Sat, Sep 27, 2014 at 7:04 PM, masoom alam <masoom.a...@gmail.com> 
> >>>>> wrote:
> >>>>>>
> >>>>>> Hi everyone, 
> >>>>>>
> >>>>>> I am trying to establish a VPN connection with the neutron 
> >>>>>> ipsec-site-connection-create command:
> >>>>>>
> >>>>>> neutron ipsec-site-connection-create --name vpnconnection1 
> >>>>>> --vpnservice-id myvpn --ikepolicy-id ikepolicy1 --ipsecpolicy-id 
> >>>>>> ipsecpolicy1 --peer-address 172.24.4.233 --peer-id 172.24.4.233 
> >>>>>> --peer-cidr 10.2.0.0/24 --psk secret
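#PCM (editorial note in-thread): a site-to-site connection also needs a mirror-image definition on the other node. A sketch for the West side follows; the connection name, service/policy names, and East's public router IP (172.24.4.226) are assumptions based on the command above and the HowToInstall wiki example:

```shell
# On devstack West, point the connection back at East's router public IP
neutron ipsec-site-connection-create --name vpnconnection2 \
    --vpnservice-id myvpn --ikepolicy-id ikepolicy1 \
    --ipsecpolicy-id ipsecpolicy1 --peer-address 172.24.4.226 \
    --peer-id 172.24.4.226 --peer-cidr 10.1.0.0/24 --psk secret
```

Note that --peer-cidr here is East's private subnet (10.1.0.0/24), the mirror of the 10.2.0.0/24 used on the East side.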
> >>>>>>
> >>>>>>
> >>>>>> For --peer-address, I am giving the public interface of the other 
> >>>>>> devstack node. Please note that my two devstack nodes are on 
> >>>>>> different public addresses, so the scenario is a little different 
> >>>>>> from the one described here: 
> >>>>>> https://wiki.openstack.org/wiki/Neutron/VPNaaS/HowToInstall
> >>>>>>
> >>>>>> The --peer-id is the IP address of the Qrouter connected to the 
> >>>>>> public interface. With this configuration, I am not able to bring up 
> >>>>>> the VPN site-to-site connection. Do you think it's a firewall issue? 
> >>>>>> I have disabled both firewalls with sudo ufw disable. Any help in 
> >>>>>> this regard would be appreciated. Am I giving the correct parameters?
> >>>>>>
> >>>>>> Thanks
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> _______________________________________________
> >>>>>> OpenStack-dev mailing list
> >>>>>> OpenStack-dev@lists.openstack.org
> >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>>>>
> >>>>>
> >>>
> >>>
> >>>
> >
> >


