It worked :) Thanks a lot, Jesse, for helping me through the troubleshooting process.
On 18 November 2016 at 15:35, Jesse Pretorius <jesse.pretor...@rackspace.co.uk> wrote:

I *think* you'd have to set the following in your /etc/openstack_deploy/user_variables.yml file:

    openstack_service_publicuri_proto: http
    openstack_external_ssl: false
    haproxy_ssl: false

I might be missing more, but that should get you started.

*From:* Achi Hamza <h16m...@gmail.com>
*Date:* Friday, November 18, 2016 at 1:56 PM
*To:* Jesse Pretorius <jesse.pretor...@rackspace.co.uk>, "OpenStack-operators@lists.openstack.org" <OpenStack-operators@lists.openstack.org>
*Subject:* Re: [Openstack-operators] [openstack-dev] [openstack-ansible] pip issues

Thank you, Jesse.

So should I set the haproxy_ssl parameter to false in the defaults folder of the haproxy_server role, or somewhere else?

On 18 November 2016 at 13:43, Jesse Pretorius <jesse.pretor...@rackspace.co.uk> wrote:

Ah, then that's the cause. You can't have both external and internal addresses be the same unless you disable SSL for public endpoints.

*From:* Achi Hamza <h16m...@gmail.com>
*Date:* Friday, November 18, 2016 at 7:02 AM
*To:* Jesse Pretorius <jesse.pretor...@rackspace.co.uk>, "openstack-operators@lists.openstack.org" <openstack-operators@lists.openstack.org>
*Subject:* Re: [Openstack-operators] [openstack-dev] [openstack-ansible] pip issues

No Jesse, you got me wrong. My external_lb_vip_address and internal_lb_vip_address are the same (172.16.1.2), which is also the IP address of node01, on which haproxy is running.
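For reference, the overrides Jesse suggests would go into /etc/openstack_deploy/user_variables.yml. A minimal sketch, assuming these three variables are the relevant ones for this release (Jesse notes he may be missing more, so treat this as a starting point rather than a complete list):

```yaml
# /etc/openstack_deploy/user_variables.yml -- sketch only; the exact set of
# variables needed to disable SSL can vary between OpenStack-Ansible releases.
openstack_service_publicuri_proto: http   # publish public endpoints as http://
openstack_external_ssl: false             # do not treat external endpoints as SSL
haproxy_ssl: false                        # stop haproxy terminating TLS on the VIP
```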
This is how it looks in my openstack_user_config.yml file:

    global_overrides:
      internal_lb_vip_address: 172.16.1.2
      external_lb_vip_address: 172.16.1.2
      management_bridge: "br-mgmt"

On 17 November 2016 at 20:24, Jesse Pretorius <jesse.pretor...@rackspace.co.uk> wrote:

Hmm, that's odd: if you configured your external_lb_vip_address and internal_lb_vip_address to be different, then that should not be happening, because SSL is implemented on the external VIP. We do not support the use of both SSL and non-SSL on the same IP address, as the two endpoints cannot share the same port while one speaks HTTPS and the other plain HTTP.

Are you sure that both addresses in your configuration are different?

*From:* Achi Hamza <h16m...@gmail.com>
*Date:* Thursday, November 17, 2016 at 5:58 PM
*To:* Jesse Pretorius <jesse.pretor...@rackspace.co.uk>, "klindg...@godaddy.com" <klindg...@godaddy.com>, "OpenStack-operators@lists.openstack.org" <OpenStack-operators@lists.openstack.org>
*Subject:* Re: [Openstack-operators] [openstack-dev] [openstack-ansible] pip issues

Hi Jesse and Lindgren,

Thank you both for the responses. I think I figured out the root cause of the problem, which is SSL. But first, the answers to your questions:

Are you deploying haproxy onto the same host as the repo container, or a different host?
Yes, it is on the same host.

Have you bound the VIP address manually?
No, through the openstack-ansible playbooks.

Is the VIP address shared in some way - i.e. is it used for the host and haproxy?
Yes, it is used for the host and haproxy (keepalived is disabled).
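The alternative Jesse alludes to, keeping SSL on the public endpoints, would mean giving the two VIPs different addresses. A hypothetical sketch (the 192.168.100.10 address is an invented placeholder, not a value from this thread):

```yaml
# openstack_user_config.yml -- hypothetical sketch; the external address below
# is an example only, chosen to show the two VIPs being distinct.
global_overrides:
  internal_lb_vip_address: 172.16.1.2      # plain-HTTP VIP for internal traffic
  external_lb_vip_address: 192.168.100.10  # separate VIP where SSL is terminated
  management_bridge: "br-mgmt"
```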
After tcpdumping (tcpdump -i any port 8181 -n) I found that the packets from the galera container are not forwarded to the repo container through the VIP, and the haproxy log is full of this error message:

    root@node01:~# tail /var/log/haproxy.log
    Nov 17 18:02:16 localhost haproxy[30180]: 172.16.1.94:38050 [17/Nov/2016:18:02:16.786] repo_all-front-1/1: SSL handshake failure
    Nov 17 18:02:16 localhost haproxy[30180]: 172.16.1.94:38052 [17/Nov/2016:18:02:16.791] repo_all-front-1/1: SSL handshake failure
    Nov 17 18:02:16 localhost haproxy[30180]: 172.16.1.94:38054 [17/Nov/2016:18:02:16.795] repo_all-front-1/1: SSL handshake failure
    Nov 17 18:02:16 localhost haproxy[30180]: 172.16.1.94:38056 [17/Nov/2016:18:02:16.800] repo_all-front-1/1: SSL handshake failure
    Nov 17 18:02:16 localhost haproxy[30180]: 172.16.1.94:38058 [17/Nov/2016:18:02:16.805] repo_all-front-1/1: SSL handshake failure
    Nov 17 18:02:16 localhost haproxy[30180]: 172.16.1.94:38060 [17/Nov/2016:18:02:16.809] repo_all-front-1/1: SSL handshake failure
    Nov 17 18:02:16 localhost haproxy[30180]: 172.16.1.94:38062 [17/Nov/2016:18:02:16.814] repo_all-front-1/1: SSL handshake failure
    Nov 17 18:02:16 localhost haproxy[30180]: 172.16.1.94:38064 [17/Nov/2016:18:02:16.819] repo_all-front-1/1: SSL handshake failure
    Nov 17 18:02:16 localhost haproxy[30180]: 172.16.1.94:38066 [17/Nov/2016:18:02:16.823] repo_all-front-1/1: SSL handshake failure
    Nov 17 18:02:16 localhost haproxy[30180]: 172.16.1.94:38068 [17/Nov/2016:18:02:16.828] repo_all-front-1/1: SSL handshake failure

Can you please point out how to disable the SSL check?

Thank you,

Hamza

On 17 November 2016 at 16:58, Jesse Pretorius <jesse.pretor...@rackspace.co.uk> wrote:

With the combination of all those things, it seems clear that there's a problem with the internal_lb_vip_address and that the issue is specifically on the load balancer. You're going to have to dig into that.
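Those "SSL handshake failure" lines are what a protocol mismatch looks like: one peer speaks TLS on a port where the other speaks plain HTTP. A small self-contained sketch of the same class of failure (note the direction is reversed relative to the thread: here a TLS client talks to a plain-HTTP responder, whereas in the thread plain-HTTP clients were hitting haproxy's TLS frontend; the responder below is an invented stand-in, not part of the deployment):

```python
import socket
import ssl
import threading

def start_plain_http_responder():
    """Listen on an ephemeral localhost port and answer any connection with a
    plain-HTTP response, the way a non-TLS endpoint would."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        conn.sendall(b"HTTP/1.0 400 Bad Request\r\n\r\n")  # not a TLS record
        conn.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

def try_tls_handshake(host, port):
    """Attempt a TLS handshake; return None on success, else the error type name."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=5) as raw:
            with ctx.wrap_socket(raw):
                return None
    except (ssl.SSLError, OSError) as exc:
        return type(exc).__name__

port = start_plain_http_responder()
result = try_tls_handshake("127.0.0.1", port)
print("handshake result:", result)  # fails: the peer answered with plain HTTP
```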
From here it's a little difficult for me to try to advise.

Are you deploying haproxy onto the same host as the repo container, or a different host?

Have you bound the VIP address manually?

Is the VIP address shared in some way - i.e. is it used for the host and haproxy?

*From:* Achi Hamza <h16m...@gmail.com>
*Date:* Thursday, November 17, 2016 at 3:07 PM
*To:* Jesse Pretorius <jesse.pretor...@rackspace.co.uk>
*Cc:* "OpenStack-operators@lists.openstack.org" <OpenStack-operators@lists.openstack.org>
*Subject:* Re: [Openstack-operators] [openstack-dev] [openstack-ansible] pip issues

It also works on the public IP of the repo:

    root@maas:/opt/openstack-ansible/playbooks# ansible hosts -m shell -a "curl http://172.16.1.222:8181/os-releases/"
    Variable files: "-e @/etc/openstack_deploy/user_secrets.yml -e @/etc/openstack_deploy/user_variables.yml "
    node01 | SUCCESS | rc=0 >>
    <html>
    <head><title>Index of /os-releases/</title></head>
    <body bgcolor="white">
    <h1>Index of /os-releases/</h1><hr><pre><a href="../">../</a>
    <a href="14.0.1/">14.0.1/</a>    16-Nov-2016 14:47    -
    </pre><hr></body>
    </html>
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100   287    0   287    0     0   381k      0 --:--:-- --:--:-- --:--:--  280k

Do you have an explanation for this, Jesse?
Thank you

On 17 November 2016 at 15:53, Achi Hamza <h16m...@gmail.com> wrote:

It also works on the internal interface of the container; I can fetch from the host to the repo container on the internal IP of the container:

    root@maas:/opt/openstack-ansible/playbooks# ansible hosts -m shell -a "curl http://10.0.3.92:8181/os-releases/"
    Variable files: "-e @/etc/openstack_deploy/user_secrets.yml -e @/etc/openstack_deploy/user_variables.yml "
    node01 | SUCCESS | rc=0 >>
    <html>
    <head><title>Index of /os-releases/</title></head>
    <body bgcolor="white">
    <h1>Index of /os-releases/</h1><hr><pre><a href="../">../</a>
    <a href="14.0.1/">14.0.1/</a>    16-Nov-2016 14:47    -
    </pre><hr></body>
    </html>
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100   287    0   287    0     0   405k      0 --:--:-- --:--:-- --:--:--  280k

On 17 November 2016 at 15:26, Achi Hamza <h16m...@gmail.com> wrote:

It works on the repo itself:

    root@maas:/opt/openstack-ansible/playbooks# ansible repo_all -m shell -a "curl http://localhost:8181/os-releases/"
    Variable files: "-e @/etc/openstack_deploy/user_secrets.yml -e @/etc/openstack_deploy/user_variables.yml "
    node01_repo_container-82b4e1f6 | SUCCESS | rc=0 >>
    <html>
    <head><title>Index of /os-releases/</title></head>
    <body bgcolor="white">
    <h1>Index of /os-releases/</h1><hr><pre><a href="../">../</a>
    <a href="14.0.1/">14.0.1/</a>    16-Nov-2016 14:47    -
    </pre><hr></body>
    </html>
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100   287    0   287    0     0  59878      0 --:--:-- --:--:-- --:--:-- 71750

On 17 November 2016 at 15:22, Jesse Pretorius <jesse.pretor...@rackspace.co.uk> wrote:

*From:* Achi Hamza <h16m...@gmail.com>
*Date:* Thursday, November 17, 2016 at 1:57 PM
*To:* Jesse Pretorius <jesse.pretor...@rackspace.co.uk>, "OpenStack-operators@lists.openstack.org" <OpenStack-operators@lists.openstack.org>
*Subject:* Re: [Openstack-operators] [openstack-dev] [openstack-ansible] pip issues

Thank you Jesse, but these iptables rules are only applied on the deployment node, not on the host nodes. Do I have to omit these rules even on the deployment node?

Thank you

Ah, then that's a red herring. As long as your hosts can reach the internet through it, you're good on that front.

Let's go back to verifying access to the repo - try checking access from the repo server to itself:

    ansible repo_all -m uri -a "url=http://localhost:8181/os-releases/"

or

    ansible repo_all -m shell -a "curl http://localhost:8181/os-releases/"

------------------------------

Rackspace Limited is a company registered in England & Wales (company registered number 03897010) whose registered office is at 5 Millington Road, Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may contain confidential or privileged information intended for the recipient. Any dissemination, distribution or copying of the enclosed material is prohibited. If you receive this transmission in error, please notify us immediately by e-mail at ab...@rackspace.com and delete the original message. Your cooperation is appreciated.
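The elimination approach used through this thread (probe each hop in turn: localhost on the repo container, the container's internal IP, the host's IP, the VIP, and see where responses stop) can be sketched as a small probe helper. This is an illustrative stand-in, not part of the thread's tooling; it uses a local listener and a deliberately unused port in place of the real endpoints:

```python
import http.server
import socket
import threading
import urllib.request

def probe(url, timeout=3):
    """Return 'HTTP <code>' if the endpoint answers, else 'FAILED: <error>'."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return "HTTP %d" % resp.status
    except Exception as exc:
        return "FAILED: %s" % type(exc).__name__

# Stand-in for a healthy endpoint (e.g. the repo container answering directly).
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
good_port = server.server_address[1]

# Stand-in for the broken VIP path: an ephemeral port nothing is listening on.
tmp = socket.socket()
tmp.bind(("127.0.0.1", 0))
dead_port = tmp.getsockname()[1]
tmp.close()

for url in ("http://127.0.0.1:%d/" % good_port,
            "http://127.0.0.1:%d/" % dead_port):
    print(url, "->", probe(url))
```

Against a real deployment one would substitute the actual URLs (localhost:8181, the container IP, the host IP, and the VIP) for the two stand-ins; a hop whose probe fails while the previous hop succeeds is where to dig.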
_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators