Re: [Openstack-operators] Newton LBaaS v2 settings

2017-12-15 Thread Grant Morley

Hi,

That is fantastic, just what I am looking for!

Thanks for the help.

Kind Regards,


On 15/12/17 10:43, Volodymyr Litovka wrote:

Hi Grant,

in case of Octavia, when you create healthmonitor with parameters of 
monitoring:


$ openstack loadbalancer healthmonitor create
usage: openstack loadbalancer healthmonitor create [-h]
   [-f {json,shell,table,value,yaml}]
   [-c COLUMN]
   [--max-width <integer>]
   [--fit-width]
   [--print-empty]
   [--noindent]
   [--prefix PREFIX]
   [--name <name>]
   --delay <delay>
   [--expected-codes <codes>]
   [--http_method {GET,POST,DELETE,PUT,HEAD,OPTIONS,PATCH,CONNECT,TRACE}]
   --timeout <timeout>
   --max-retries <max_retries>
   [--url-path <url_path>]
   --type {PING,HTTP,TCP,HTTPS,TLS-HELLO}
   [--max-retries-down <max_retries_down>]
   [--enable | --disable]


Octavia pushes these parameters into the haproxy config on the amphora agent 
(/var/lib/octavia//haproxy.cfg), like this:


backend f30f2586-a387-40f4-a7b7-9718aebf49d4
    mode tcp
    balance roundrobin
    timeout check 1s
    server 26ae7b5c-4ec4-4bb3-ba21-6c8bccd9cdf8 10.1.4.11:80 weight 1 
check inter 5s fall 3 rise 3
    server 611a645e-9b47-40cd-a26a-b0b2a6348959 10.1.4.14:80 weight 1 
check inter 5s fall 3 rise 3


So, if you suspect the problem is with the backend servers, you can tune the 
HealthMonitor parameters to set appropriate timeouts for the backend servers 
in this pool.
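
For example, something like the following (values are just illustrative and 
the pool ID is a placeholder) would relax the check timing for a pool whose 
members are slow to respond:

$ openstack loadbalancer healthmonitor create \
    --name slow-backend-hm \
    --type TCP \
    --delay 10 \
    --timeout 5 \
    --max-retries 3 \
    <pool-id>

Octavia then renders these into the corresponding "timeout check" and 
"check inter ... fall ... rise ..." values in the haproxy backend above.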


On 12/15/17 12:11 PM, Grant Morley wrote:


Hi All,

I wonder if anyone would be able to help with some settings I might 
be obviously missing for LBaaS. We have a client that uses the 
service but they are coming across issues with their app randomly not 
working.  Basically if their app takes longer than 20 seconds to 
process a request it looks like LBaaS times out the connection.


I have had a look and I can't seem to find any default settings for 
either "server" or "tunnel" and wondered if there was a way I could 
increase or see any default timeout settings through the neutron cli?


I can only see timeout settings for the "Health Monitor"

Any help will be much appreciated.

Regards,

--
Grant Morley
Senior Cloud Engineer
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/> 
gr...@absolutedevops.io <mailto:gr...@absolutedevops.io> 0845 874 0580



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


--
Volodymyr Litovka
   "Vision without Execution is Hallucination." -- Thomas Edison


--
Grant Morley
Senior Cloud Engineer
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/> 
gr...@absolutedevops.io <mailto:gr...@absolutedevops.io> 0845 874 0580
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Newton LBaaS v2 settings

2017-12-15 Thread Grant Morley

Hi All,

I wonder if anyone would be able to help with some settings I might be 
obviously missing for LBaaS. We have a client that uses the service but 
they are coming across issues with their app randomly not working.  
Basically if their app takes longer than 20 seconds to process a request 
it looks like LBaaS times out the connection.


I have had a look and I can't seem to find any default settings for 
either "server" or "tunnel" and wondered if there was a way I could 
increase or see any default timeout settings through the neutron cli?


I can only see timeout settings for the "Health Monitor"

Any help will be much appreciated.

Regards,

--
Grant Morley
Senior Cloud Engineer
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/> 
gr...@absolutedevops.io <mailto:gr...@absolutedevops.io> 0845 874 0580
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] OSA upgrading Ubuntu14.04 Newton to Ubuntu16.04 Ocata

2017-10-04 Thread Grant Morley

Hi Amy,

Many thanks for this, pleased to know that it is doable :) - We will 
test this on our dev environment first to see if there are any issues or 
not.


Will be sure to join the #openstack-ansible channel if we get stuck.

Thanks again,

Grant


On 04/10/17 15:56, Amy Marrich wrote:

Hi Grant,

We actually have the process documented here:

https://etherpad.openstack.org/p/osa-newton-xenial-upgrade

It does make a few assumptions so make sure you meet them before 
starting. Please join us on Freenode in the #openstack-ansible channel 
and we'll give you a hand if we can.
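
At a very high level, the per-node flow in that etherpad looks roughly like 
this (a rough sketch only - host names and limits below are placeholders, 
and the etherpad is the authoritative version):

    # reinstall the node with Ubuntu 16.04, then from the deployment host:
    cd /opt/openstack-ansible/playbooks
    openstack-ansible setup-hosts.yml --limit <reinstalled_node>*
    openstack-ansible setup-infrastructure.yml --limit <reinstalled_node>*
    openstack-ansible setup-openstack.yml --limit <reinstalled_node>*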


Thanks,

Amy (spotz)

On Wed, Oct 4, 2017 at 9:42 AM, Grant Morley <gr...@absolutedevops.io 
<mailto:gr...@absolutedevops.io>> wrote:


Hi All,

Just have a quick question regarding upgrading from Newton to
Ocata using OSA. We have a small installation (4 compute, 3
management, 2 network and some Ceph storage nodes) and we are
looking to upgrade from Ubuntu 14.04 Newton to Ubuntu 16.04 Ocata.

Does anyone have any good tips on how to do this at all? We are
not sure whether it is best to decommission a single node at a
time, upgrade that to Ubuntu 16.04 and then get Ocata installed
onto that. Or whether there is a better method at all?

We are conscious that, if we bootstrap our Ansible deployer machine
for Ocata, we assume it will then no longer be able to manage the
nodes that are still running 14.04 Newton?

Another thing we were thinking was to possibly get some more kit
and just install Ocata from scratch on that and start to migrate
customers over, but again we assume we would need a separate
deployer to do so?

We would prefer to upgrade both the Ubuntu OS and OpenStack to
Ocata using our current setup, and just wondered if this is even
possible with OSA? Unfortunately we do not have the budget to
simply keep getting more and more kit, so we have to be quite
tactical about how we do things.

We are going to test this on our dev environment and see what
breaks, but just wondered if anyone here has come across this
already and suffered the pain :)

Any suggestions would be much appreciated.

Many thanks,

-- 
Grant Morley

Senior Cloud Engineer
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/>
gr...@absolutedevops.io <mailto:gr...@absolutedevops.io> 0845 874
0580

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
<mailto:OpenStack-operators@lists.openstack.org>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
<http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators>




--
Grant Morley
Senior Cloud Engineer
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/> 
gr...@absolutedevops.io <mailto:gr...@absolutedevops.io> 0845 874 0580
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] OSA upgrading Ubuntu14.04 Newton to Ubuntu16.04 Ocata

2017-10-04 Thread Grant Morley

Hi All,

Just have a quick question regarding upgrading from Newton to Ocata 
using OSA. We have a small installation (4 compute, 3 management, 2 
network and some Ceph storage nodes) and we are looking to upgrade from 
Ubuntu 14.04 Newton to Ubuntu 16.04 Ocata.


Does anyone have any good tips on how to do this at all? We are not sure 
whether it is best to decommission a single node at a time, upgrade that 
to Ubuntu 16.04 and then get Ocata installed onto that. Or whether there 
is a better method at all?


We are conscious that, if we bootstrap our Ansible deployer machine for 
Ocata, we assume it will then no longer be able to manage the nodes that 
are still running 14.04 Newton?


Another thing we were thinking was to possibly get some more kit and 
just install Ocata from scratch on that and start to migrate customers 
over, but again we assume we would need a separate deployer to do so?


We would prefer to upgrade both the Ubuntu OS and OpenStack to Ocata 
using our current setup, and just wondered if this is even possible with 
OSA? Unfortunately we do not have the budget to simply keep getting more 
and more kit, so we have to be quite tactical about how we do things.


We are going to test this on our dev environment and see what breaks, 
but just wondered if anyone here has come across this already and 
suffered the pain :)


Any suggestions would be much appreciated.

Many thanks,

--
Grant Morley
Senior Cloud Engineer
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/> 
gr...@absolutedevops.io <mailto:gr...@absolutedevops.io> 0845 874 0580
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Neutron not clearing up deleted routers

2017-06-08 Thread Grant Morley

Ignore that now all,

Managed to fix it by restarting the l3-agent. Looks like it must have 
been cached in memory.
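
For reference, the fix on each affected agent node was roughly this (the 
service name assumes the stock init script; adjust for your deployment):

    service neutron-l3-agent restart
    ip netns list | grep qrouter    # check the stale qrouter namespaces are gone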


Thanks,


On 08/06/17 18:07, Grant Morley wrote:


Hi All,

We have noticed in our neutron-l3-agent logs that there are a number 
of routers that neutron seems to think exist (but they no longer 
exist in the database) and it is constantly trying to delete them. 
Because they don't actually exist, neutron gets stuck in a loop of 
trying to delete routers that are no longer there.


We originally noticed that neutron was trying to delete the 
"qrouter--xxx-" namespace, which didn't exist. So we added the namespace 
manually with "ip netns add qrouter-xxx-xxx-xxx-xxx", which then allows 
the router to be deleted; however, we then get the following error messages:


2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent [-] Error 
while deleting router da7b633a-233b-46d1-ba3d-315b3bee6a61
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent Traceback 
(most recent call last):
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent   File 
"/openstack/venvs/neutron-13.3.8/lib/python2.7/site-packages/neutron/agent/l3/agent.py", 
line 359, in _safe_router_removed
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent 
self._router_removed(router_id)
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent   File 
"/openstack/venvs/neutron-13.3.8/lib/python2.7/site-packages/neutron/agent/l3/agent.py", 
line 377, in _router_removed

2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent ri.delete(self)
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent   File 
"/openstack/venvs/neutron-13.3.8/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", 
line 380, in delete
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent 
super(HaRouter, self).delete(agent)
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent   File 
"/openstack/venvs/neutron-13.3.8/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", 
line 361, in delete
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent 
self.process_delete(agent)
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent   File 
"/openstack/venvs/neutron-13.3.8/lib/python2.7/site-packages/neutron/common/utils.py", 
line 385, in call

2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent self.logger(e)
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent   File 
"/openstack/venvs/neutron-13.3.8/lib/python2.7/site-packages/oslo_utils/excutils.py", 
line 220, in __exit__
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent 
self.force_reraise()
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent   File 
"/openstack/venvs/neutron-13.3.8/lib/python2.7/site-packages/oslo_utils/excutils.py", 
line 196, in force_reraise
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent 
six.reraise(self.type_, self.value, self.tb)
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent   File 
"/openstack/venvs/neutron-13.3.8/lib/python2.7/site-packages/neutron/common/utils.py", 
line 382, in call
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent return 
func(*args, **kwargs)
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent   File 
"/openstack/venvs/neutron-13.3.8/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", 
line 972, in process_delete
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent 
self._process_external_on_delete(agent)
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent   File 
"/openstack/venvs/neutron-13.3.8/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", 
line 794, in _process_external_on_delete
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent 
self._process_external_gateway(ex_gw_port, agent.pd)
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent   File 
"/openstack/venvs/neutron-13.3.8/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", 
line 693, in _process_external_gateway
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent 
self.external_gateway_removed(self.ex_gw_port, interface_name)
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent   File 
"/openstack/venvs/neutron-13.3.8/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", 
line 371, in external_gateway_removed

2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent interface_name)
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent   File 
"/openstack/venvs/neutron-13.3.8/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", 
line 668, in external_gateway_removed
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent 
ip_addr['prefixlen']))
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent   File 
"/openstack/venvs/neutron-13.3.8/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", 
line 286, in remove_external_gateway_ip
2017-06-08 16:50:22.340 677 

[Openstack-operators] Neutron not clearing up deleted routers

2017-06-08 Thread Grant Morley
n-13.3.8/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", 
line 290, in delete_addr_and_conntrack_state
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent 
self.addr.delete(cidr)
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent   File 
"/openstack/venvs/neutron-13.3.8/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", 
line 580, in delete

2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent 'dev', self.name))
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent   File 
"/openstack/venvs/neutron-13.3.8/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", 
line 361, in _as_root
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent 
use_root_namespace=use_root_namespace)
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent   File 
"/openstack/venvs/neutron-13.3.8/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", 
line 94, in _as_root
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent 
log_fail_as_error=self.log_fail_as_error)
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent   File 
"/openstack/venvs/neutron-13.3.8/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", 
line 103, in _execute
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent 
log_fail_as_error=log_fail_as_error)
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent   File 
"/openstack/venvs/neutron-13.3.8/lib/python2.7/site-packages/neutron/agent/linux/utils.py", 
line 140, in execute
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent raise 
RuntimeError(msg)
2017-06-08 16:50:22.340 677 ERROR neutron.agent.l3.agent RuntimeError: 
Exit code: 1; Stdin: ; Stdout: ; Stderr: Cannot find device "qg-c4080b5c-c9"


Has anyone come across this before?

We don't seem to have an entry for them anywhere from what we can see.

Regards,


--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/> 
gr...@absolutedevops.io <mailto:grant@absolutedevops.i> 0845 874 0580
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ceph recovery going unusually slow

2017-06-02 Thread Grant Morley
Hi All,

I wonder if anyone could help at all.

We were doing some routine maintenance on our ceph cluster and after
running a "service ceph-all restart" on one of our nodes we noticed that
something wasn't quite right. The cluster has gone into an error state, we
have multiple stuck PGs, and the object replacement recovery is taking an
unusually long time. At first there was about 46% of objects misplaced and we
now have roughly 16%.

However, it has taken about 36 hours to do the recovery so far, and with a
possible 16% still to go we are looking at a fairly major issue. As a lot of
the cluster is now blocked for reads / writes, customers cannot access their VMs.

I think the main issue at the moment is that we have 210 PGs stuck inactive
and nothing we do seems to get them to peer.

Below is an output of the ceph status. Can anyone help or have any ideas on
how to speed up the recovery process? We have tried turning down logging on
the OSDs, but some are going so slowly they won't allow us to injectargs into
them.

health HEALTH_ERR
210 pgs are stuck inactive for more than 300 seconds
298 pgs backfill_wait
3 pgs backfilling
1 pgs degraded
200 pgs peering
1 pgs recovery_wait
1 pgs stuck degraded
210 pgs stuck inactive
512 pgs stuck unclean
3310 requests are blocked > 32 sec
recovery 2/11094405 objects degraded (0.000%)
recovery 1785063/11094405 objects misplaced (16.090%)
nodown,noout,noscrub,nodeep-scrub flag(s) set

election epoch 16314, quorum 0,1,2,3,4,5,6,7,8
storage-1,storage-2,storage-3,storage-4,storage-5,storage-6,storage-7,storage-8,storage-9
 osdmap e213164: 54 osds: 54 up, 54 in; 329 remapped pgs
flags nodown,noout,noscrub,nodeep-scrub
  pgmap v41030942: 2036 pgs, 14 pools, 14183 GB data, 3309 kobjects
43356 GB used, 47141 GB / 90498 GB avail
2/11094405 objects degraded (0.000%)
1785063/11094405 objects misplaced (16.090%)
1524 active+clean
 298 active+remapped+wait_backfill
 153 peering
  47 remapped+peering
  10 inactive
   3 active+remapped+backfilling
   1 active+recovery_wait+degraded+remapped
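
For reference, these are the sorts of commands we have been poking at it
with (values below are illustrative, not necessarily what we have set):

    ceph health detail | head -n 40      # see which PGs / OSDs are implicated
    ceph pg dump_stuck inactive          # list the PGs that refuse to peer
    ceph tell osd.* injectargs '--debug-osd 0 --debug-ms 0'
    ceph tell osd.* injectargs '--osd-max-backfills 4 --osd-recovery-max-active 8'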

Many thanks,

Grant
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ceph upgrade causing health warnings

2017-05-17 Thread Grant Morley

Hi All,

We have just upgraded our ceph cluster to Jewel-10.2.6 running on Ubuntu 
14.04 and all seems to have gone quite well. However every now and then 
we are getting health warnings for the cluster that state the following:


all OSDs are running jewel or later but the 'require_jewel_osds' osdmap 
flag is not set


That setting doesn't appear to be in the ceph.conf file. I was wondering 
if anyone else has come across this and if they have simply added the 
config and restarted ceph to fix the issue?
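
(From what I can tell it is an osdmap flag rather than a ceph.conf option, 
so presumably it would be set once from a monitor with something like the 
below, rather than added to the config and restarted - happy to be corrected:)

    ceph osd set require_jewel_osds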


I am a little reluctant to do it at the moment as the cluster seems to 
be running fine. Just more of an annoyance that we are getting a lot of 
alerts from our monitoring systems.


Regards,

Grant


--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/> 
gr...@absolutedevops.io <mailto:grant@absolutedevops.i> 0845 874 0580
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Unable to launch instances because of glance errors

2017-03-02 Thread Grant Morley

Hi Saverio,

I have managed to fix it; it turned out to be an HAProxy issue where it 
wasn't terminating the backend connection correctly. The glance error logs 
sent me in the wrong direction.
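
For anyone hitting the same thing, the place worth double-checking is the 
haproxy backend definition for the glance endpoints - something along these 
lines (names and addresses here are placeholders, not our actual config):

    backend glance_api-back
        mode http
        balance leastconn
        server glance_api_container_1 10.6.0.x:9292 check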


Thank you for all of your suggestions to try and debug the issue.

Regards,


On 02/03/17 16:19, Grant Morley wrote:


If I use the image name I get the same result:

openstack image show Ubuntu-16.04
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | 6008d3fdf650658965b132b547416d83                     |
| container_format | bare                                                 |
| created_at       | 2016-12-07T16:00:54Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/d5d43ba0-82e9-43de-8883-5ebaf07bf3e3/file |
| id               | d5d43ba0-82e9-43de-8883-5ebaf07bf3e3                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | Ubuntu-16.04                                         |
| owner            | 4a6213a64312482896130efc3047195c                     |
| properties       | direct_url='rbd://742cd163-32ac-4121-8060-d440aa7345d6/images/d5d43ba0-82e9-43de-8883-5ebaf07bf3e3/snap' |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 2361393152                                           |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2016-12-07T16:01:45Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+

Also there is only one entry in the DB for that image:

select * from images where name="Ubuntu-16.04"\G
*************************** 1. row ***************************
              id: d5d43ba0-82e9-43de-8883-5ebaf07bf3e3
            name: Ubuntu-16.04
            size: 2361393152
          status: active
       is_public: 1
      created_at: 2016-12-07 16:00:54
      updated_at: 2016-12-07 16:01:45
      deleted_at: NULL
         deleted: 0
     disk_format: raw
container_format: bare
        checksum: 6008d3fdf650658965b132b547416d83
           owner: 4a6213a64312482896130efc3047195c
        min_disk: 0
         min_ram: 0
       protected: 0
    virtual_size: NULL

Regards,

On 02/03/17 16:08, Saverio Proto wrote:

select * from images where name=''Ubuntu-16.04";


--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/> 
gr...@absolutedevops.io <mailto:grant@absolutedevops.i> 0845 874 0580


--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/> 
gr...@absolutedevops.io <mailto:grant@absolutedevops.i> 0845 874 0580
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Unable to launch instances because of glance errors

2017-03-02 Thread Grant Morley

If I use the image name I get the same result:

openstack image show Ubuntu-16.04
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | 6008d3fdf650658965b132b547416d83                     |
| container_format | bare                                                 |
| created_at       | 2016-12-07T16:00:54Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/d5d43ba0-82e9-43de-8883-5ebaf07bf3e3/file |
| id               | d5d43ba0-82e9-43de-8883-5ebaf07bf3e3                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | Ubuntu-16.04                                         |
| owner            | 4a6213a64312482896130efc3047195c                     |
| properties       | direct_url='rbd://742cd163-32ac-4121-8060-d440aa7345d6/images/d5d43ba0-82e9-43de-8883-5ebaf07bf3e3/snap' |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 2361393152                                           |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2016-12-07T16:01:45Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+

Also there is only one entry in the DB for that image:

select * from images where name="Ubuntu-16.04"\G
*************************** 1. row ***************************
              id: d5d43ba0-82e9-43de-8883-5ebaf07bf3e3
            name: Ubuntu-16.04
            size: 2361393152
          status: active
       is_public: 1
      created_at: 2016-12-07 16:00:54
      updated_at: 2016-12-07 16:01:45
      deleted_at: NULL
         deleted: 0
     disk_format: raw
container_format: bare
        checksum: 6008d3fdf650658965b132b547416d83
           owner: 4a6213a64312482896130efc3047195c
        min_disk: 0
         min_ram: 0
       protected: 0
    virtual_size: NULL

Regards,

On 02/03/17 16:08, Saverio Proto wrote:

select * from images where name=''Ubuntu-16.04";


--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/> 
gr...@absolutedevops.io <mailto:grant@absolutedevops.i> 0845 874 0580
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Unable to launch instances because of glance errors

2017-03-02 Thread Grant Morley

sure no problem:

openstack image show d5d43ba0-82e9-43de-8883-5ebaf07bf3e3
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | 6008d3fdf650658965b132b547416d83                     |
| container_format | bare                                                 |
| created_at       | 2016-12-07T16:00:54Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/d5d43ba0-82e9-43de-8883-5ebaf07bf3e3/file |
| id               | d5d43ba0-82e9-43de-8883-5ebaf07bf3e3                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | Ubuntu-16.04                                         |
| owner            | 4a6213a64312482896130efc3047195c                     |
| properties       | direct_url='rbd://742cd163-32ac-4121-8060-d440aa7345d6/images/d5d43ba0-82e9-43de-8883-5ebaf07bf3e3/snap' |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 2361393152                                           |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2016-12-07T16:01:45Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+


On 02/03/17 15:56, Saverio Proto wrote:

Can you share with us the output of:

openstack image show 

for that image ?

Saverio


2017-03-02 13:54 GMT+01:00 Grant Morley <gr...@absolutedevops.io 
<mailto:gr...@absolutedevops.io>>:


Unfortunately not, I still get the same error.

Grant


On 02/03/17 12:54, Saverio Proto wrote:

If you pass the uuid of the image does it work ?

Saverio

2017-03-02 13:49 GMT+01:00 Grant Morley <gr...@absolutedevops.io
<mailto:gr...@absolutedevops.io>>:

Hi Saverio,

We are running Mitaka - sorry forgot to mention that.

Grant


On 02/03/17 12:45, Saverio Proto wrote:

What version of Openstack are we talking about ?

Saverio

2017-03-02 12:11 GMT+01:00 Grant Morley
<gr...@absolutedevops.io <mailto:gr...@absolutedevops.io>>:

Hi All,

Not sure if anyone can help, but as of today we are
unable to launch any instances and I have traced back
the error to glance. Whenever I try and launch an
instance I get the below returned:

openstack server create --flavor g1.small --image
"Ubuntu-16.04" --key-name ib-desktop --security-group
default --nic
net-id=e2d925f0-63c1-442f-9158-6a138e5847ce gmtest
Could not find resource Ubuntu-16.04

The image exists:

openstack image list | grep Ubuntu-16.04

| d5d43ba0-82e9-43de-8883-5ebaf07bf3e3 | Ubuntu-16.04  
| active |


The traceback gives a really odd message about Multiple
choices ( All images have unique names )

Below is the stack trace - not sure if anyone has come
across this before? I am stumped at the moment.

2017-03-02 11:10:16.842 2107 INFO eventlet.wsgi.server
[-] 10.6.2.255,10.6.0.39 - - [02/Mar/2017 11:10:16] "GET
/images/detail?is_public=none=20 HTTP/1.1" 300 787
0.000769
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client
[req-3b498a7e-2d6a-4337-a314-12abfbac0117
4c91f07132454a97b21fff35402b7825
4a6213a64312482896130efc3047195c - - -] Registry client
request GET /images/detail raised MultipleChoices
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client Traceback (most recent
call last):
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client File

"/openstack/venvs/glance-13.3.8/lib/python2.7/site-packages/glance/registry/client/v1/client.py",
line 123, in do_request
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client **kwargs)
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client File

"/openstack/venvs/glance-13.3.8/lib/python2.7/site-packages/glance/common/client.py",
line 70, in wrapped
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client return func(self,
*args, **kwargs)
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client File

"/openstack/venvs/glance-13.3.8/lib/python2.7/site-packages/glance/common/client.py",
line 373, in do_request
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client
headers=copy.deepcopy(headers))
  

Re: [Openstack-operators] Unable to launch instances because of glance errors

2017-03-02 Thread Grant Morley

Unfortunately not, I still get the same error.

Grant


On 02/03/17 12:54, Saverio Proto wrote:

If you pass the uuid of the image does it work ?

Saverio

2017-03-02 13:49 GMT+01:00 Grant Morley <gr...@absolutedevops.io 
<mailto:gr...@absolutedevops.io>>:


Hi Saverio,

We are running Mitaka - sorry forgot to mention that.

Grant


On 02/03/17 12:45, Saverio Proto wrote:

What version of Openstack are we talking about ?

Saverio

2017-03-02 12:11 GMT+01:00 Grant Morley <gr...@absolutedevops.io
<mailto:gr...@absolutedevops.io>>:

Hi All,

Not sure if anyone can help, but as of today we are unable to
launch any instances and I have traced back the error to
glance. Whenever I try and launch an instance I get the below
returned:

openstack server create --flavor g1.small --image
"Ubuntu-16.04" --key-name ib-desktop --security-group default
--nic net-id=e2d925f0-63c1-442f-9158-6a138e5847ce gmtest
Could not find resource Ubuntu-16.04

The image exists:

openstack image list | grep Ubuntu-16.04

| d5d43ba0-82e9-43de-8883-5ebaf07bf3e3 | Ubuntu-16.04   |
active |

The traceback gives a really odd message about Multiple
choices ( All images have unique names )

Below is the stack trace - not sure if anyone has come across
this before? I am stumped at the moment.

2017-03-02 11:10:16.842 2107 INFO eventlet.wsgi.server [-]
10.6.2.255,10.6.0.39 - - [02/Mar/2017 11:10:16] "GET
/images/detail?is_public=none=20 HTTP/1.1" 300 787 0.000769
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client
[req-3b498a7e-2d6a-4337-a314-12abfbac0117
4c91f07132454a97b21fff35402b7825
4a6213a64312482896130efc3047195c - - -] Registry client
request GET /images/detail raised MultipleChoices
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client Traceback (most recent call
last):
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client File

"/openstack/venvs/glance-13.3.8/lib/python2.7/site-packages/glance/registry/client/v1/client.py",
line 123, in do_request
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client **kwargs)
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client File

"/openstack/venvs/glance-13.3.8/lib/python2.7/site-packages/glance/common/client.py",
line 70, in wrapped
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client return func(self, *args,
**kwargs)
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client File

"/openstack/venvs/glance-13.3.8/lib/python2.7/site-packages/glance/common/client.py",
line 373, in do_request
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client headers=copy.deepcopy(headers))
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client File

"/openstack/venvs/glance-13.3.8/lib/python2.7/site-packages/glance/common/client.py",
line 87, in wrapped
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client return func(self, method,
url, body, headers)
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client File

"/openstack/venvs/glance-13.3.8/lib/python2.7/site-packages/glance/common/client.py",
line 534, in _do_request
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client raise
exception.MultipleChoices(body=read_body(res))
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client MultipleChoices: The request
returned a 302 Multiple Choices. This generally means that
you have not included a version indicator in a request URI.
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client The body of response returned:
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client {"versions": [{"status":
"CURRENT", "id": "v2.3", "links": [{"href":
"http://10.6.0.3:9191/v2/; <http://10.6.0.3:9191/v2/>, "rel":
"self"}]}, {"status": "SUPPORTED", "id": "v2.2", "links":
[{"href": "http://10.6.0.3:9191/v2/;
<http://10.6.0.3:9191/v2/>, "rel": "self"}]}, {"status":
"SUPPORTED", "id": "v2.1", "

Re: [Openstack-operators] Unable to launch instances because of glance errors

2017-03-02 Thread Grant Morley

Hi Saverio,

We are running Mitaka - sorry forgot to mention that.

Grant


On 02/03/17 12:45, Saverio Proto wrote:

What version of Openstack are we talking about ?

Saverio

2017-03-02 12:11 GMT+01:00 Grant Morley <gr...@absolutedevops.io 
<mailto:gr...@absolutedevops.io>>:


Hi All,

Not sure if anyone can help, but as of today we are unable to
launch any instances and I have traced back the error to glance.
Whenever I try and launch an instance I get the below returned:

openstack server create --flavor g1.small --image "Ubuntu-16.04"
--key-name ib-desktop --security-group default --nic
net-id=e2d925f0-63c1-442f-9158-6a138e5847ce gmtest
Could not find resource Ubuntu-16.04

The image exists:

openstack image list | grep Ubuntu-16.04

| d5d43ba0-82e9-43de-8883-5ebaf07bf3e3 | Ubuntu-16.04   | active |

The traceback gives a really odd message about Multiple choices (
All images have unique names )

Below is the stack trace - not sure if anyone has come across this
before? I am stumped at the moment.

2017-03-02 11:10:16.842 2107 INFO eventlet.wsgi.server [-]
10.6.2.255,10.6.0.39 - - [02/Mar/2017 11:10:16] "GET
/images/detail?is_public=none=20 HTTP/1.1" 300 787 0.000769
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client
[req-3b498a7e-2d6a-4337-a314-12abfbac0117
4c91f07132454a97b21fff35402b7825 4a6213a64312482896130efc3047195c
- - -] Registry client request GET /images/detail raised
MultipleChoices
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client Traceback (most recent call last):
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client   File

"/openstack/venvs/glance-13.3.8/lib/python2.7/site-packages/glance/registry/client/v1/client.py",
line 123, in do_request
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client **kwargs)
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client   File

"/openstack/venvs/glance-13.3.8/lib/python2.7/site-packages/glance/common/client.py",
line 70, in wrapped
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client return func(self, *args,
**kwargs)
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client   File

"/openstack/venvs/glance-13.3.8/lib/python2.7/site-packages/glance/common/client.py",
line 373, in do_request
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client headers=copy.deepcopy(headers))
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client   File

"/openstack/venvs/glance-13.3.8/lib/python2.7/site-packages/glance/common/client.py",
line 87, in wrapped
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client return func(self, method,
url, body, headers)
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client   File

"/openstack/venvs/glance-13.3.8/lib/python2.7/site-packages/glance/common/client.py",
line 534, in _do_request
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client raise
exception.MultipleChoices(body=read_body(res))
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client MultipleChoices: The request
returned a 302 Multiple Choices. This generally means that you
have not included a version indicator in a request URI.
2017-03-02 11:10:16.843 2104 ERROR glance.registry.client.v1.client
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client The body of response returned:
2017-03-02 11:10:16.843 2104 ERROR
glance.registry.client.v1.client {"versions": [{"status":
"CURRENT", "id": "v2.3", "links": [{"href":
"http://10.6.0.3:9191/v2/; <http://10.6.0.3:9191/v2/>, "rel":
"self"}]}, {"status": "SUPPORTED", "id": "v2.2", "links":
[{"href": "http://10.6.0.3:9191/v2/; <http://10.6.0.3:9191/v2/>,
"rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.1", "links":
[{"href": "http://10.6.0.3:9191/v2/; <http://10.6.0.3:9191/v2/>,
"rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.0", "links":
[{"href": "http://10.6.0.3:9191/v2/; <http://10.6.0.3:9191/v2/>,
"rel": "self"}]}, {"status": "SUPPORTED", "id": "v1.1", "links":
[{"href": "http://10.6.0.3:9191/v1/; <http://10.6.0.3:9191/v1/>,
"rel": "self"}]}, {"sta

[Openstack-operators] Unable to launch instances because of glance errors

2017-03-02 Thread Grant Morley

Hi All,

Not sure if anyone can help, but as of today we are unable to launch any 
instances and I have traced back the error to glance. Whenever I try and 
launch an instance I get the below returned:


openstack server create --flavor g1.small --image "Ubuntu-16.04" 
--key-name ib-desktop --security-group default --nic 
net-id=e2d925f0-63c1-442f-9158-6a138e5847ce gmtest

Could not find resource Ubuntu-16.04

The image exists:

openstack image list | grep Ubuntu-16.04

| d5d43ba0-82e9-43de-8883-5ebaf07bf3e3 | Ubuntu-16.04   | active |

The traceback gives a really odd message about Multiple choices ( All 
images have unique names )


Below is the stack trace - not sure if anyone has come across this 
before? I am stumped at the moment.


2017-03-02 11:10:16.842 2107 INFO eventlet.wsgi.server [-] 
10.6.2.255,10.6.0.39 - - [02/Mar/2017 11:10:16] "GET 
/images/detail?is_public=none=20 HTTP/1.1" 300 787 0.000769
2017-03-02 11:10:16.843 2104 ERROR glance.registry.client.v1.client 
[req-3b498a7e-2d6a-4337-a314-12abfbac0117 
4c91f07132454a97b21fff35402b7825 4a6213a64312482896130efc3047195c - - -] 
Registry client request GET /images/detail raised MultipleChoices
2017-03-02 11:10:16.843 2104 ERROR glance.registry.client.v1.client 
Traceback (most recent call last):
2017-03-02 11:10:16.843 2104 ERROR glance.registry.client.v1.client   
File 
"/openstack/venvs/glance-13.3.8/lib/python2.7/site-packages/glance/registry/client/v1/client.py", 
line 123, in do_request
2017-03-02 11:10:16.843 2104 ERROR glance.registry.client.v1.client 
**kwargs)
2017-03-02 11:10:16.843 2104 ERROR glance.registry.client.v1.client   
File 
"/openstack/venvs/glance-13.3.8/lib/python2.7/site-packages/glance/common/client.py", 
line 70, in wrapped
2017-03-02 11:10:16.843 2104 ERROR glance.registry.client.v1.client 
return func(self, *args, **kwargs)
2017-03-02 11:10:16.843 2104 ERROR glance.registry.client.v1.client   
File 
"/openstack/venvs/glance-13.3.8/lib/python2.7/site-packages/glance/common/client.py", 
line 373, in do_request
2017-03-02 11:10:16.843 2104 ERROR glance.registry.client.v1.client 
headers=copy.deepcopy(headers))
2017-03-02 11:10:16.843 2104 ERROR glance.registry.client.v1.client   
File 
"/openstack/venvs/glance-13.3.8/lib/python2.7/site-packages/glance/common/client.py", 
line 87, in wrapped
2017-03-02 11:10:16.843 2104 ERROR glance.registry.client.v1.client 
return func(self, method, url, body, headers)
2017-03-02 11:10:16.843 2104 ERROR glance.registry.client.v1.client   
File 
"/openstack/venvs/glance-13.3.8/lib/python2.7/site-packages/glance/common/client.py", 
line 534, in _do_request
2017-03-02 11:10:16.843 2104 ERROR glance.registry.client.v1.client 
raise exception.MultipleChoices(body=read_body(res))
2017-03-02 11:10:16.843 2104 ERROR glance.registry.client.v1.client 
MultipleChoices: The request returned a 302 Multiple Choices. This 
generally means that you have not included a version indicator in a 
request URI.

2017-03-02 11:10:16.843 2104 ERROR glance.registry.client.v1.client
2017-03-02 11:10:16.843 2104 ERROR glance.registry.client.v1.client The 
body of response returned:
2017-03-02 11:10:16.843 2104 ERROR glance.registry.client.v1.client 
{"versions": [{"status": "CURRENT", "id": "v2.3", "links": [{"href": 
"http://10.6.0.3:9191/v2/;, "rel": "self"}]}, {"status": "SUPPORTED", 
"id": "v2.2", "links": [{"href": "http://10.6.0.3:9191/v2/;, "rel": 
"self"}]}, {"status": "SUPPORTED", "id": "v2.1", "links": [{"href": 
"http://10.6.0.3:9191/v2/;, "rel": "self"}]}, {"status": "SUPPORTED", 
"id": "v2.0", "links": [{"href": "http://10.6.0.3:9191/v2/;, "rel": 
"self"}]}, {"status": "SUPPORTED", "id": "v1.1", "links": [{"href": 
"http://10.6.0.3:9191/v1/;, "rel": "self"}]}, {"status": "SUPPORTED", 
"id": "v1.0", "links": [{"href": "http://10.6.0.3:9191/v1/;, "rel": 
"self"}]}]}

2017-03-02 11:10:16.843 2104 ERROR glance.registry.client.v1.client
2017-03-02 11:10:16.844 2104 ERROR glance.common.wsgi 
[req-3b498a7e-2d6a-4337-a314-12abfbac0117 
4c91f07132454a97b21fff35402b7825 4a6213a64312482896130efc3047195c - - -] 
Caught error: The request returned a 302 Multiple Choices. This 
generally means that you have not included a version indicator in a 
request URI.


Any help would be great.

Regards,

--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/> 
gr...@absolutedevops.io <mailto:grant@absolutedevops.i> 0845 874 0580
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] osa Mitaka api SSL end points

2017-02-28 Thread Grant Morley

Hi Andy,

Thank you for that, I will get straight onto that and make sure all of 
the public endpoints are HTTPS. Those are the ones that I care about for 
obvious reasons.


If I get stuck, I will be sure to chat in #openstack-ansible

Once again thanks for the speedy reply and help.

Grant


On 28/02/17 11:42, Andy McCrae wrote:



On 28 February 2017 at 09:59, Grant Morley <gr...@absolutedevops.io 
<mailto:gr...@absolutedevops.io>> wrote:


Hi All,

We have an OSA Mitaka deployment and for some reason all of the
end points ( keystone, neutron, glance etc.. ) are all reporting
as HTTP rather than HTTPS. The only thing that seems to have
worked with HTTPS is Horizon ( I know that isn't an api endpoint,
just for clarification).

We have placed our SSL certs in the correct directory for the
deployment "/etc/openstack_deploy/ssl/" but for some reason when
the setup has run it is only using HTTP as below:


+----------------------------------+--------+--------------+--------------+---------+-----------+----------------------------------------+
| ID                               | Region | Service Name | Service Type | Enabled | Interface | URL                                    |
+----------------------------------+--------+--------------+--------------+---------+-----------+----------------------------------------+
| 0b7ca91c06334207b3199eeca432d5fe | lon1   | cinder       | volume       | True    | admin     | http://10.6.0.3:8776/v1/%(tenant_id)s  |
| 0f7440688cbc4d1f8f3c62158889729d | lon1   | keystone     | identity     | True    | internal  | http://10.6.0.3:5000/v3                |
+----------------------------------+--------+--------------+--------------+---------+-----------+----------------------------------------+

Is there something else I have missed or do I need to put our SSL
certs in a different directory for OSA to setup the endpoints with
HTTPS on haproxy?

Grateful for any help.

Regards,

Grant

Hi Grant,

I took a look back at the stable/mitaka branch for OSA - we do default 
the value to be http, so if you don't override the setting it will be 
setup as http.
That's changed since, but you can overwrite this by setting 
"openstack_service_publicuri_proto: https" which would then set the 
public endpoints to be https.
Although the paste you have above implies you want all endpoints to be 
https - as it stands I don't believe there is support for that - that 
is to say that
internal traffic (internal/admin endpoints) would be http, and your 
public endpoint (terminating at your LB - haproxy if you are using the 
built in one) would be

https.
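
As a minimal sketch (assuming your overrides live in the usual 
/etc/openstack_deploy/user_variables.yml, and that you re-run the haproxy 
and service playbooks afterwards so the endpoint catalog gets updated):

    # /etc/openstack_deploy/user_variables.yml
    openstack_service_publicuri_proto: https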

There are a few exceptions in keystone, rabbitmq, horizon and HAProxy: 
https://docs.openstack.org/developer/openstack-ansible/mitaka/install-guide/configure-sslcertificates.html


Here are some docs about securing haproxy with ssl-certificates that 
may be helpful: 
https://docs.openstack.org/developer/openstack-ansible/mitaka/install-guide/configure-haproxy.html#securing-haproxy-communication-with-ssl-certificates


If you're stuck or running into issues feel free to jump into the 
#openstack-ansible channel on Freenode IRC, there are usually quite a 
few people around to help and answer questions.


Andy





--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/> 
gr...@absolutedevops.io <mailto:grant@absolutedevops.i> 0845 874 0580
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] osa Mitaka api SSL end points

2017-02-28 Thread Grant Morley

Hi All,

We have an OSA Mitaka deployment and for some reason all of the end 
points ( keystone, neutron, glance etc.. ) are all reporting as HTTP 
rather than HTTPS. The only thing that seems to have worked with HTTPS 
is Horizon ( I know that isn't an api endpoint, just for clarification).


We have placed our SSL certs in the correct directory for the deployment 
"/etc/openstack_deploy/ssl/" but for some reason when the setup has run 
it is only using HTTP as below:


+----------------------------------+--------+--------------+--------------+---------+-----------+----------------------------------------+
| ID                               | Region | Service Name | Service Type | Enabled | Interface | URL                                    |
+----------------------------------+--------+--------------+--------------+---------+-----------+----------------------------------------+
| 0b7ca91c06334207b3199eeca432d5fe | lon1   | cinder       | volume       | True    | admin     | http://10.6.0.3:8776/v1/%(tenant_id)s  |
| 0f7440688cbc4d1f8f3c62158889729d | lon1   | keystone     | identity     | True    | internal  | http://10.6.0.3:5000/v3                |
+----------------------------------+--------+--------------+--------------+---------+-----------+----------------------------------------+


Is there something else I have missed or do I need to put our SSL certs 
in a different directory for OSA to setup the endpoints with HTTPS on 
haproxy?


Grateful for any help.

Regards,

Grant

--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/> 
gr...@absolutedevops.io <mailto:grant@absolutedevops.i> 0845 874 0580
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Glance with ceph backend stated to give errors

2017-01-31 Thread Grant Morley

Hi Nick,

Thanks for the reply, looks like we could be hitting something similar. 
We are running Ceph Jewel packages on the Glance node.
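
(For anyone else checking, the package versions on a Glance node can be 
confirmed with something along these lines:)

    ceph --version
    dpkg -l | grep -E 'librbd|python-rbd'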


Thanks for the links to the bug reports.

Regards,


On 31/01/17 11:35, Nick Jones wrote:

Hi Grant.

Could be unrelated but I'm throwing it out there anyway, as we had 
similar 'weirdness' with Glance recently. If you're running the 
Ceph Jewel or Hammer packages on your Glance node then you might be 
running into this bug, in which image.stat() calls in the librbd Python 
bindings fail intermittently:


http://tracker.ceph.com/issues/17310

There's a corresponding bug report on Launchpad for the Ubuntu Ceph 
package that has some more detail:


https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1625489

Cheers.

--

-Nick

On 31 January 2017 at 10:49, Grant Morley <gr...@absolutedevops.io 
<mailto:gr...@absolutedevops.io>> wrote:


Hi All,

Not sure if anyone has come across this yet. Yesterday we started
to try to upload some images to our stack and they were failing
with the following error when trying via command line:

Error finding address for
http://10.6.0.3:9292/v2/images/e90a4626-4781-4b53-8914-85ff2129f777/file
<http://10.6.0.3:9292/v2/images/e90a4626-4781-4b53-8914-85ff2129f777/file>:
[Errno 32] Broken pipe

In the Glance logs at the same time we see this:

2017-01-31 10:22:45.005 2096 INFO eventlet.wsgi.server [req-1837bce5-a90c-44d5-bf65-c1e3a534c793 4c91f07132454a97b21fff35402b7825 4a6213a64312482896130efc3047195c - - -] 10.6.1.223,10.6.0.39 - - [31/Jan/2017 10:22:45] "HEAD /v1/images/e90a4626-4781-4b53-8914-85ff2129f777 HTTP/1.1" 200 689 0.046623
2017-01-31 10:22:47.595 2099 INFO eventlet.wsgi.server [-] 10.6.0.40 - - [31/Jan/2017 10:22:47] "OPTIONS /versions HTTP/1.0" 200 785 0.001113
2017-01-31 10:22:47.720 2095 INFO eventlet.wsgi.server [req-873dd7df-f8a4-4443-8795-b81dcd54f412 4c91f07132454a97b21fff35402b7825 4a6213a64312482896130efc3047195c - - -] 10.6.1.223,10.6.0.39 - - [31/Jan/2017 10:22:47] "HEAD /v1/images/e90a4626-4781-4b53-8914-85ff2129f777 HTTP/1.1" 200 689 0.032188

Interestingly, when you delete the failed image you get this in
the glance logs:

2017-01-31 10:23:16.188 2099 INFO eventlet.wsgi.server
[req-c2191fe1-64de-4252-b4a2-e84643dbed4c
4c91f07132454a97b21fff35402b7825 4a6213a64312482896130efc3047195c
- - -] 10.6.2.190,10.6.0.39 - - [31/Jan/2017 10:23:16] "DELETE
/v2/images/e90a4626-4781-4b53-8914-85ff2129f777 HTTP/1.1" 204 208 0.08


It seems to be using V1 to try and upload and V2 to remove? -
However the image doesn't actually get deleted because it remains
in ceph:

rbd -p images ls | grep e90a4626-4781-4b53-8914-85ff2129f777
e90a4626-4781-4b53-8914-85ff2129f777

It is almost as if the image uploads and is then forgotten about.

Our compute nodes are also ceph backed and they are working
absolutely fine; it is just Glance with ceph that has all of a
sudden stopped working. Just wondered if anyone had any ideas?

Regards,


-- 
Grant Morley

Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/>
gr...@absolutedevops.io <mailto:grant@absolutedevops.i> 0845 874 0580

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
<mailto:OpenStack-operators@lists.openstack.org>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
<http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators>



DataCentred Limited registered in England and Wales no. 05611763 


--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/> 
gr...@absolutedevops.io <mailto:grant@absolutedevops.i> 0845 874 0580
___
Op

[Openstack-operators] Glance with ceph backend stated to give errors

2017-01-31 Thread Grant Morley

Hi All,

Not sure if anyone has come across this yet. Yesterday we started to try 
to upload some images to our stack and they were failing with the 
following error when trying via command line:


Error finding address for 
http://10.6.0.3:9292/v2/images/e90a4626-4781-4b53-8914-85ff2129f777/file: 
[Errno 32] Broken pipe


In the Glance logs at the same time we see this:

2017-01-31 10:22:45.005 2096 INFO eventlet.wsgi.server [req-1837bce5-a90c-44d5-bf65-c1e3a534c793 4c91f07132454a97b21fff35402b7825 4a6213a64312482896130efc3047195c - - -] 10.6.1.223,10.6.0.39 - - [31/Jan/2017 10:22:45] "HEAD /v1/images/e90a4626-4781-4b53-8914-85ff2129f777 HTTP/1.1" 200 689 0.046623
2017-01-31 10:22:47.595 2099 INFO eventlet.wsgi.server [-] 10.6.0.40 - - [31/Jan/2017 10:22:47] "OPTIONS /versions HTTP/1.0" 200 785 0.001113
2017-01-31 10:22:47.720 2095 INFO eventlet.wsgi.server [req-873dd7df-f8a4-4443-8795-b81dcd54f412 4c91f07132454a97b21fff35402b7825 4a6213a64312482896130efc3047195c - - -] 10.6.1.223,10.6.0.39 - - [31/Jan/2017 10:22:47] "HEAD /v1/images/e90a4626-4781-4b53-8914-85ff2129f777 HTTP/1.1" 200 689 0.032188


Interestingly, when you delete the failed image you get this in the 
glance logs:


2017-01-31 10:23:16.188 2099 INFO eventlet.wsgi.server 
[req-c2191fe1-64de-4252-b4a2-e84643dbed4c 
4c91f07132454a97b21fff35402b7825 4a6213a64312482896130efc3047195c - - -] 
10.6.2.190,10.6.0.39 - - [31/Jan/2017 10:23:16] "DELETE 
/v2/images/e90a4626-4781-4b53-8914-85ff2129f777 HTTP/1.1" 204 208 0.08



It seems to be using V1 to try and upload and V2 to remove? - However 
the image doesn't actually get deleted because it remains in ceph:


rbd -p images ls | grep e90a4626-4781-4b53-8914-85ff2129f777
e90a4626-4781-4b53-8914-85ff2129f777

It is almost as if the image uploads and is then forgotten about.

Our compute nodes are also ceph backed and they are working absolutely 
fine; it is just Glance with ceph that has all of a sudden stopped working. 
Just wondered if anyone had any ideas?


Regards,


--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/> 
gr...@absolutedevops.io <mailto:grant@absolutedevops.i> 0845 874 0580
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Mitaka to Newton networking issues

2016-12-08 Thread Grant Morley

Hi Kevin,

Sorry for the late reply. We have tried doing that and we were still 
seeing the same issues. I don't think the bug was quite the same as 
what we were seeing.


Unfortunately we have had to roll back to Mitaka as we had a tight 
deadline and not being able to create networks / have HA was fairly 
critical. Interestingly, now we are back on Mitaka, everything is 
working fine.


I will try and get a testing environment set up to see if I get the same 
results as we were seeing when we upgraded to Newton from Mitaka. I am 
not sure if it is something to do with our specific set up, but we have 
followed the OSA guidelines and as everything was working on Liberty and 
Mitaka I assume we have it all set up correctly.


I will keep you posted to our findings, as we may be onto another bug.

Regards,


On 06/12/16 14:07, Kevin Benton wrote:
There was a bug, for which fixes recently merged, where removing 
a router on the L3 agent was done in the wrong order, resulting 
in issues cleaning up the interfaces with Linux Bridge + L3HA. 
https://bugs.launchpad.net/neutron/+bug/1629159


It could be the case that there is an orphaned veth pair in a deleted 
namespace from the same router when it was removed from the L3 agent.


For each L3 agent, can you shutdown the L3 agent, run the netns 
cleanup script, ensure all keepalived processes are dead, and then 
start the agent again?
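
Something along these lines on each agent node (paths and service names 
are the usual defaults and may differ in your deployment):

    service neutron-l3-agent stop
    neutron-netns-cleanup --force \
        --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/l3_agent.ini
    ps aux | grep [k]eepalived    # make sure nothing is left running
    service neutron-l3-agent start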


On Tue, Dec 6, 2016 at 4:59 AM, Grant Morley <gr...@absolutedevops.io 
<mailto:gr...@absolutedevops.io>> wrote:


They both appear to be "ACTIVE" which is what I would expect:

root@management-1-utility-container-f1222d05:~# neutron port-show 8cd027f1-9f8c-4077-9c8a-92abc62fadd4

+---+--+
| Field | Value |
+---+--+
| admin_state_up| True |
| allowed_address_pairs | |
| binding:host_id   | network-1-neutron-agents-container-11d47568 |
| binding:profile   | {} |
| binding:vif_details   | {"port_filter": true} |
| binding:vif_type  | bridge |
| binding:vnic_type | normal |
| created_at| 2016-12-05T10:58:01Z |
| description | |
| device_id | a8a10308-d62f-420f-99cf-f3727ef2b784 |
| device_owner  | network:router_ha_interface |
| extra_dhcp_opts | |
| fixed_ips | {"subnet_id": "6495d542-4b78-40df-84af-31500aaa0bf8", "ip_address": "169.254.192.5"} |
| id| 8cd027f1-9f8c-4077-9c8a-92abc62fadd4 |
| mac_address   | fa:16:3e:58:a1:a4 |
| name  | HA port tenant e0ffdeb1e910469d9e625b95f2fa6c54 |
| network_id| 2b04fc3a-5c0d-4f55-996f-d8bd1e1d |
| port_security_enabled | False |
| project_id | |
| revision_number   | 23 |
| security_groups | |
| status| ACTIVE |
| tenant_id | |
| updated_at| 2016-12-06T10:18:00Z |
+---+--+
root@management-1-utility-container-f1222d05:~# neutron port-show bda1f324-3178-46e5-8638-0f454ba09cab

+---+--+
| Field | Value |
+---+--+
| admin_state_up| True |
| allowed_address_pairs | |
| binding:host_id   | network-2-neutron-agents-container-40906bfc |
| binding:profile   | {} |
| binding:vif_details   | {"port_filter": true} |
| binding:vif_type  | bridge |
| binding:vnic_type | normal |
| created_at| 2016-12-05T10:58:01Z |
| description | |
| device_id | a8a10308-d62f-420f-99cf-f3727ef2b784 |
| device_owner  | network:router_ha_interface |
| extra_dhcp_opts | |
| fixed_ips | {"subnet_id": "6495d542-4b78-40df-84af-31500aaa0bf8", "ip_address": "169.254.192.1"} |
| id| bda1f324-3178-46e5-8638-0f454ba09cab |
| mac_address   | fa:16:3e:c3:8a:14 |
| name  | HA port tenant e0ffdeb1e910469d9e625b95f2fa6c54 |
| network_id| 2b04fc3a-5c0d-4f55-996f-d8bd1e1d |
| port_security_enabled | False |
| project_id | |
| revision_number   | 15 |
| security_groups | |
| status| ACTIVE |
| tenant_id | |
| updated_at| 2016-12-05T14:35:16Z |
+---+--+

Re: [Openstack-operators] Mitaka to Newton networking issues

2016-12-06 Thread Grant Morley

They both appear to be "ACTIVE" which is what I would expect:

root@management-1-utility-container-f1222d05:~# neutron port-show 
8cd027f1-9f8c-4077-9c8a-92abc62fadd4

+---+--+
| Field | Value |
+---+--+
| admin_state_up| True |
| allowed_address_pairs | |
| binding:host_id   | network-1-neutron-agents-container-11d47568 |
| binding:profile   | {} |
| binding:vif_details   | {"port_filter": true} |
| binding:vif_type  | bridge |
| binding:vnic_type | normal |
| created_at| 2016-12-05T10:58:01Z |
| description | |
| device_id | a8a10308-d62f-420f-99cf-f3727ef2b784 |
| device_owner  | network:router_ha_interface |
| extra_dhcp_opts | |
| fixed_ips | {"subnet_id": 
"6495d542-4b78-40df-84af-31500aaa0bf8", "ip_address": "169.254.192.5"} |

| id| 8cd027f1-9f8c-4077-9c8a-92abc62fadd4 |
| mac_address   | fa:16:3e:58:a1:a4 |
| name  | HA port tenant e0ffdeb1e910469d9e625b95f2fa6c54 |
| network_id| 2b04fc3a-5c0d-4f55-996f-d8bd1e1d |
| port_security_enabled | False |
| project_id | |
| revision_number   | 23 |
| security_groups | |
| status| ACTIVE |
| tenant_id | |
| updated_at| 2016-12-06T10:18:00Z |
+---+--+
root@management-1-utility-container-f1222d05:~# neutron port-show 
bda1f324-3178-46e5-8638-0f454ba09cab

+---+--+
| Field | Value |
+---+--+
| admin_state_up| True |
| allowed_address_pairs | |
| binding:host_id   | network-2-neutron-agents-container-40906bfc |
| binding:profile   | {} |
| binding:vif_details   | {"port_filter": true} |
| binding:vif_type  | bridge |
| binding:vnic_type | normal |
| created_at| 2016-12-05T10:58:01Z |
| description | |
| device_id | a8a10308-d62f-420f-99cf-f3727ef2b784 |
| device_owner  | network:router_ha_interface |
| extra_dhcp_opts | |
| fixed_ips | {"subnet_id": 
"6495d542-4b78-40df-84af-31500aaa0bf8", "ip_address": "169.254.192.1"} |

| id| bda1f324-3178-46e5-8638-0f454ba09cab |
| mac_address   | fa:16:3e:c3:8a:14 |
| name  | HA port tenant e0ffdeb1e910469d9e625b95f2fa6c54 |
| network_id| 2b04fc3a-5c0d-4f55-996f-d8bd1e1d |
| port_security_enabled | False |
| project_id | |
| revision_number   | 15 |
| security_groups | |
| status| ACTIVE |
| tenant_id | |
| updated_at| 2016-12-05T14:35:16Z |
+---+--+



On 06/12/16 12:53, Kevin Benton wrote:
Can you do a 'neutron port-show' for both of those HA ports to check 
their status field?


On Tue, Dec 6, 2016 at 2:29 AM, Grant Morley <gr...@absolutedevops.io 
<mailto:gr...@absolutedevops.io>> wrote:


Hi Kevin & Neil,

Many thanks for the reply. I have attached a screenshot showing
that we cannot ping between the L3 HA nodes in the router
namespaces. This was previously working fine with Mitaka, and has only
stopped working since the upgrade to Newton.

From the packet captures and TCP dumps, the traffic doesn't even seem
to be leaving the namespace.

In the attachment, the left-hand side shows the state of
keepalived, with both HA agents reporting as master, and the right-hand
side is the ping attempt.

Regards,

On 06/12/16 10:14, Kevin Benton wrote:

Yes, that is a misleading warning. What is happening is that it
tries to load the interface driver as an alias first, which
results in the stevedore warning you see, and then it falls
back to loading it by the class path, which is what you have
configured. We will need to see if there is a way we can suppress
that warning when we make the call to load by an alias
and it fails.

If you switch your interface_driver to just 'linuxbridge', that should
get rid of the warning.
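
For example, in each agent's config (a sketch; the paths are the usual
locations, adjust to your deployment):

# /etc/neutron/dhcp_agent.ini and /etc/neutron/l3_agent.ini
[DEFAULT]
# short stevedore alias instead of the full class path
interface_driver = linuxbridge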


For both L3 HA nodes becoming master, we need a little more info
to figure out the root cause. Can you try switching into the
router namespace on one of the L3 HA nodes and see if you can
ping the other router instance across the L3 HA network for that
router?
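
For example (a sketch; the router UUID and the peer's HA address are
placeholders, take them from 'ip netns' and 'neutron port-show' in your
environment):

ip netns exec qrouter-<router-uuid> ip addr show              # note the ha-xxxxxxxx interface and its 169.254.192.x address
ip netns exec qrouter-<router-uuid> ping -c 3 <peer-ha-address>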

On Mon, Dec 5, 2016 at 7:54 AM, Neil Jerram <n...@tigera.i

[Openstack-operators] Mitaka to Newton networking issues

2016-12-05 Thread Grant Morley

Hi All,

We have just upgraded from Mitaka to Newton. We are running OSA and we 
seem to have come across some weird networking issues since the upgrade. 
Basically network access to instances is very intermittent and seems to 
randomly stop working.


We are running neutron in HA and it appears that both of the neutron 
nodes are now trying to be master and are both trying to bring up the 
gateway IP addresses, which would be causing conflicts.
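
This is roughly how we are seeing it (a sketch; the router UUID is a 
placeholder, and the keepalived state file location is an assumption based 
on our ha_confs_path setting):

# run on each neutron agents container
ip netns exec qrouter-<router-uuid> ip addr show | grep -E 'qg-|qr-'   # gateway IPs are configured on both nodes at once
cat /var/lib/neutron/ha_confs/<router-uuid>/state                      # both report "master"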


We are also seeing a lot of the following in the "neutron-dhcp-agent" 
log files:


2016-12-05 14:42:24.837 2020 WARNING stevedore.named 
[req-1955d0a1-1453-4c65-a93a-54e8ea39b230 
1ac995c0729142289f7237222f335806 3cc95dbe91c84e3e8ebbb9893ee54d20 - - -] 
Could not load neutron.agent.linux.interface.BridgeInterfaceDriver
2016-12-05 14:42:42.803 2020 INFO neutron.agent.dhcp.agent 
[req-fad7d2bb-9d3c-4192-868a-0164b382aecf 
1ac995c0729142289f7237222f335806 3cc95dbe91c84e3e8ebbb9893ee54d20 - - -] 
Trigger reload_allocations for port admin_state_up=True, 
allowed_address_pairs=[], binding:host_id=, binding:profile=, 
binding:vif_details=, binding:vif_type=unbound, 
binding:vnic_type=normal, created_at=2016-12-05T14:42:42Z, description=, 
device_id=8752effa-2ff2-4ce1-be70-e9f2243612cb, 
device_owner=network:floatingip, extra_dhcp_opts=[], 
fixed_ips=[{u'subnet_id': u'4ca7db2d-544a-4a97-b5a4-3cbf2467a4b7', 
u'ip_address': u'XXX.XXX.XXX.XXX'}], 
id=b3cf223d-8e76-484a-a649-d8a7dd435124, mac_address=fa:16:3e:ff:0d:50, 
name=, network_id=af5db886-0178-4f8d-9189-f55f773b37fa, 
port_security_enabled=False, project_id=, revision_number=4, 
security_groups=[], status=N/A, tenant_id=, updated_at=2016-12-05T14:42:42Z


I am a bit concerned about neutron not being able to load the Bridge 
interface driver.


Has anyone else come across this at all, or have any pointers? This was 
working fine in Mitaka; it is only since the upgrade to Newton that we 
have had these issues.


I am able to provide more logs if they are needed.

Regards,

--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/> 
gr...@absolutedevops.io <mailto:grant@absolutedevops.i> 0845 874 0580
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Instances failing to launch when rbd backed (ansible Liberty setup)

2016-10-21 Thread Grant Morley

Thanks Kris,

I have run the commands as suggested and there is no rbd installed. 
However, when we try to manually install rbd with pip, we get the 
following error:


pip install rbd
DEPRECATION: --allow-all-external has been deprecated and will be 
removed in the future. Due to changes in the repository protocol, it no 
longer has any effect.

Ignoring indexes: https://pypi.python.org/simple
Collecting rbd
  Could not find a version that satisfies the requirement rbd (from 
versions: )

No matching distribution found for rbd

I assume the playbooks are coming across the same issue, which is why we 
are having this problem. I will also ask in the #openstack-ansible 
channel for some help.


Pip is using the local repo servers that are created by 
openstack-ansible (I assume), so looking at this, it appears they don't 
have the correct packages.
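
As far as I can tell, the rbd bindings are not on PyPI at all; they 
normally come from the distro python-rbd / python-rados packages that ship 
with Ceph. This is the sort of check I am running against the venv 
directly (a sketch; the venv path comes from the traceback above):

# check whether the venv's interpreter can see the Ceph bindings
/openstack/venvs/nova-12.0.16/bin/python -c "import rbd, rados; print rbd.__file__"
# compare with the system python, which should have them if python-rbd is installed
python -c "import rbd, rados; print rbd.__file__"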


Regards,



On 21/10/16 16:42, Kris G. Lindgren wrote:


From the traceback it looks like nova-compute is running out of a venv.

You need to activate the venv, most likely via: source 
/openstack/venvs/nova-12.0.16/.venv/bin/activate then run: pip 
freeze.  If you don’t see the RBD stuff – then that is your issue.  
You might be able to fix via: pip install rbd.


Venv’s are self-contained python installs, so they do not use the 
system level python packages at all.
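
A quick way to confirm whether a venv was built with access to the system 
packages (a sketch; the marker file is how virtualenv of that era records 
it):

# if this file exists, the venv ignores /usr/lib/python2.7/dist-packages entirely
ls /openstack/venvs/nova-12.0.16/lib/python2.7/no-global-site-packages.txt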


I would also ask for some help in the #openstack-ansible channel on 
irc as well.


___

Kris Lindgren

Senior Linux Systems Engineer

GoDaddy

*From: *Grant Morley <gr...@absolutedevops.io>
*Date: *Friday, October 21, 2016 at 6:14 AM
*To: *OpenStack Operators <openstack-operators@lists.openstack.org>
*Cc: *"ian.ba...@serverchoice.com" <ian.ba...@serverchoice.com>
*Subject: *[Openstack-operators] Instances failing to launch when rbd 
backed (ansible Liberty setup)


Hi all,

We have an openstack-ansible setup and have Ceph installed for the 
backend. However, whenever we try to launch a new instance it fails to 
launch and we get the following error:


2016-10-21 12:08:06.241 70661 INFO nova.virt.libvirt.driver 
[req-79811c40-8394-4e33-b16d-ff5fa7341b6a 
41c60f65ae914681b6a6ca27a42ff780 324844c815084205995aff10b03a85e1 - - 
-] [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] Creating image
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager 
[req-79811c40-8394-4e33-b16d-ff5fa7341b6a 
41c60f65ae914681b6a6ca27a42ff780 324844c815084205995aff10b03a85e1 - - 
-] [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] Instance failed to 
spawn
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 
5633d98e-5f79-4c13-8d45-7544069f0e6f] Traceback (most recent call last):
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 
5633d98e-5f79-4c13-8d45-7544069f0e6f]   File 
"/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/compute/manager.py", 
line 2156, in _build_resources
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 
5633d98e-5f79-4c13-8d45-7544069f0e6f] yield resources
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 
5633d98e-5f79-4c13-8d45-7544069f0e6f]   File 
"/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/compute/manager.py", 
line 2009, in _build_and_run_instance
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 
5633d98e-5f79-4c13-8d45-7544069f0e6f] block_device_info=block_device_info)
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 
5633d98e-5f79-4c13-8d45-7544069f0e6f]   File 
"/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 2527, in spawn
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 
5633d98e-5f79-4c13-8d45-7544069f0e6f] admin_pass=admin_password)
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 
5633d98e-5f79-4c13-8d45-7544069f0e6f]   File 
"/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 2939, in _create_image
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 
5633d98e-5f79-4c13-8d45-7544069f0e6f] backend = image('disk')
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 
5633d98e-5f79-4c13-8d45-7544069f0e6f]   File 
"/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 2884, in image
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 
5633d98e-5f79-4c13-8d45-7544069f0e6f] fname + suffix, image_type)
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 
5633d98e-5f79-4c13-8d45-7544069f0e6f]   File 
"/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", 
line 967, in image
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 
5633d98e-5f79-4c13-8d45-7544069f0e6f] return 
backend(instance=instance, disk_name=

Re: [Openstack-operators] Instances failing to launch when rbd backed (ansible Liberty setup)

2016-10-21 Thread Grant Morley

Hi Chris,

That bug suggests there was a fix in 12.0.8. Hopefully 12.0.16 should 
still have that fix, with a bit of luck, unless it hasn't been carried 
over.


I can see that the bug report says there is also a fix in 12.0.9 and 12.0.11.

Do you know if there is a workaround we can try for this? Ideally I 
don't want to be downgrading the openstack-ansible version.
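
If it is the same issue, I wonder if something like the following would 
tide us over until we can upgrade (just a sketch, assuming the distro 
python-rbd / python-rados packages are installed on the compute hosts and 
the venv path matches the traceback above):

apt-get install -y python-rbd python-rados
# expose the system Ceph bindings inside the nova venv
ln -s /usr/lib/python2.7/dist-packages/rbd*   /openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/
ln -s /usr/lib/python2.7/dist-packages/rados* /openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/
service nova-compute restart
# confirm the venv can now import the bindings
/openstack/venvs/nova-12.0.16/bin/python -c "import rbd, rados"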


Regards,


On 21/10/16 14:09, Chris Sarginson wrote:
It seems like it may be an occurrence of this bug, as you look to be 
using python venvs:


https://bugs.launchpad.net/openstack-ansible/+bug/1509837

2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager [instance: 
5633d98e-5f79-4c13-8d45-7544069f0e6f]   File 
"*/openstack/venvs/*nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py", 
line 117, in __init__


Chris

On Fri, 21 Oct 2016 at 13:19 Grant Morley <gr...@absolutedevops.io 
<mailto:gr...@absolutedevops.io>> wrote:


Hi all,

We have a openstack-ansible setup and have ceph installed for the
backend. However whenever we try and launch a new instance it
fails to launch and we get the following error:

2016-10-21 12:08:06.241 70661 INFO nova.virt.libvirt.driver
[req-79811c40-8394-4e33-b16d-ff5fa7341b6a
41c60f65ae914681b6a6ca27a42ff780 324844c815084205995aff10b03a85e1
- - -] [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] Creating image
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager
[req-79811c40-8394-4e33-b16d-ff5fa7341b6a
41c60f65ae914681b6a6ca27a42ff780 324844c815084205995aff10b03a85e1
- - -] [instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] Instance
failed to spawn
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager
[instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] Traceback (most
recent call last):
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager
[instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f]   File

"/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/compute/manager.py",
line 2156, in _build_resources
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager
[instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] yield resources
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager
[instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f]   File

"/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/compute/manager.py",
line 2009, in _build_and_run_instance
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager
[instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f]
block_device_info=block_device_info)
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager
[instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f]   File

"/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
line 2527, in spawn
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager
[instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f]
admin_pass=admin_password)
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager
[instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f]   File

"/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
line 2939, in _create_image
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager
[instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] backend =
image('disk')
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager
[instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f]   File

"/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
line 2884, in image
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager
[instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] fname +
suffix, image_type)
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager
[instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f]   File

"/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py",
line 967, in image
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager
[instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f] return
backend(instance=instance, disk_name=disk_name)
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager
[instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f]   File

"/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py",
line 748, in __init__
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager
[instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f]
rbd_user=self.rbd_user)
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager
[instance: 5633d98e-5f79-4c13-8d45-7544069f0e6f]   File

"/openstack/venvs/nova-12.0.16/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py",
line 117, in __init__
2016-10-21 12:08:06.242 70661 ERROR nova.compute.manager
 

Re: [Openstack-operators] VXLAN / Tenant Network Issue

2016-09-08 Thread Grant Morley

Hi there,

thanks for replying, configs below:

The following are from the neutron agents container.

# Ansible managed: 
/opt/openstack-ansible/playbooks/roles/os_neutron/templates/plugins/ml2/ml2_conf.ini.j2


# ML2 general

[ml2]

type_drivers = flat,vlan,vxlan,local

tenant_network_types = vxlan,vlan,flat

mechanism_drivers = linuxbridge,l2population

extension_drivers = port_security

path_mtu = 0

segment_mtu = 0

# ML2 flat networks

[ml2_type_flat]

flat_networks = flat

# ML2 VLAN networks

[ml2_type_vlan]

network_vlan_ranges = vlan:101:200,vlan:301:400

# ML2 VXLAN networks

[ml2_type_vxlan]

vxlan_group = 239.1.1.1

vni_ranges = 1:1000

# Security groups

[securitygroup]

enable_security_group = True

enable_ipset = True



# Ansible managed: 
/opt/openstack-ansible/playbooks/roles/os_neutron/templates/dhcp_agent.ini.j2 



# General

[DEFAULT]

verbose = True

debug = False

num_sync_threads = 6

# Drivers

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

# Default domain for DHCP leases

dhcp_domain = openstacklocal

# Dnsmasq options

dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

dnsmasq_dns_servers =

dnsmasq_lease_max = 16777216

# Metadata

enable_isolated_metadata = True

-

# Ansible managed: 
/opt/openstack-ansible/playbooks/roles/os_neutron/templates/l3_agent.ini.j2


# General

[DEFAULT]

verbose = True

debug = False

# While this option is deprecated in Liberty, if we remove it then it takes

# a default value of 'br-ex', which we do not want. We therefore leave it

# in place for now and can remove it in Mitaka.

external_network_bridge =

gateway_external_network_id =

# Drivers

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

# Agent mode (legacy only)

agent_mode = legacy

# Conventional failover

allow_automatic_l3agent_failover = True

# HA failover

ha_confs_path = /var/lib/neutron/ha_confs

ha_vrrp_advert_int = 2

ha_vrrp_auth_password = bee916a2589b14dd7f

ha_vrrp_auth_type = PASS

handle_internal_only_routers = False

send_arp_for_ha = 3

# Metadata

enable_metadata_proxy = True

Regards,

On 08/09/16 13:51, Vahric Muhtaryan wrote:

Hello Grant ,

Possible to share ml2_conf.ini , dhcp_agent.ini and l3_agent.ini files ?

Regards
VM

From: Grant Morley <gr...@absolutedevops.io 
<mailto:gr...@absolutedevops.io>>

Date: Thursday 8 September 2016 at 15:12
To: OpenStack Operators <openstack-operators@lists.openstack.org 
<mailto:openstack-operators@lists.openstack.org>>

Cc: <ian.ba...@serverchoice.com <mailto:ian.ba...@serverchoice.com>>
Subject: [Openstack-operators] VXLAN / Tenant Network Issue

Hi All,

We are working off the OSA deployment for a new cloud system we are 
building, and everything seems to be working apart from the tenant 
VXLAN network. We have tried various troubleshooting, but the initial 
DHCP request is not making it out of the Linux bridge on the compute 
node. We have checked all of the physical networking and switch setup and 
they appear to be fine.


Below is the output of the related networking components that we have 
configured. (Sorry for the long post, but I wanted to get as much info on 
here as possible.) Can anyone see what might be causing the issue, or where 
we have gone wrong?


Neutron subnet and router:

(neutron) net-list

+--------------------------------------+----------------------------------------------------+--------------------------------------+
| id                                   | name                                               | subnets                              |
+--------------------------------------+----------------------------------------------------+--------------------------------------+
| b1da0a4f-2d06-46af-92aa-962c7a7c36f9 | ext-net                                            | 405f439c-51bb-40b6-820a-9048c2ee69fe |
|                                      |                                                    | 185.136.232.0/22                     |
| a256ccb2-273a-4738-97ab-bd8bfbc2a2cc | HA network tenant 7b5aad6af3ee450ea60e06aaaba2da50 | 6d98faac-2e3b-43c8-bcd6-f9a6f5dcc45e |
|                                      |                                                    | 169.254.192.0/18                     |
| f88ceab1-a392-4281-8c60-f57d171a8029 | vxlan-172                                          | 367e88eb-b09f-4ce5-bfff-5d9e0b0e14b0 |
|                                      |                                                    | 172.16.0.0/24                        |
+--------------------------------------+----------------------------------------------------+--------------------------------------+

(neutron) net-show f88ceab1-a392-4281-8c60-f57d171a8029

+---+--+

[Openstack-operators] VXLAN / Tenant Network Issue

2016-09-08 Thread Grant Morley
+--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+
| id                                   | name                                            | mac_address       | fixed_ips                                                                              |
+--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+
| 443d8a0e-833e-4dd2-9320-c2a361e97bf0 | HA port tenant 7b5aad6af3ee450ea60e06aaaba2da50 | fa:16:3e:db:48:be | {"subnet_id": "6d98faac-2e3b-43c8-bcd6-f9a6f5dcc45e", "ip_address": "169.254.192.2"}   |
| 58312691-77d1-408a-adf2-8c74bb87d35d | HA port tenant 7b5aad6af3ee450ea60e06aaaba2da50 | fa:16:3e:26:86:3c | {"subnet_id": "6d98faac-2e3b-43c8-bcd6-f9a6f5dcc45e", "ip_address": "169.254.192.1"}   |
| 8182e8ca-0e3d-444a-ac4f-f424027aa373 |                                                 | fa:16:3e:20:1c:08 | {"subnet_id": "405f439c-51bb-40b6-820a-9048c2ee69fe", "ip_address": "185.136.232.55"}  |
| beaa905d-fc68-46ba-9fd3-9f620584a1f7 |                                                 | fa:16:3e:5a:8e:c0 | {"subnet_id": "367e88eb-b09f-4ce5-bfff-5d9e0b0e14b0", "ip_address": "172.16.0.254"}    |
+--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+

The bridge and interface for the instance:

root@compute-2:~# brctl show

bridge name      bridge id           STP enabled   interfaces
br-mgmt          8000.1418775ed1bc   no            bond0.11
br-storage       8000.1418775ed1bc   no            bond0.31
br-vlan          8000.1418775ed1be   no            bond1
br-vxlan         8000.1418775ed1be   no            bond1.21
brqf88ceab1-a3   8000.0a81d25d36ce   no            tapf9871920-e0
                                                   vxlan-21

Network agent node namespaces:

root@network-1_neutron_agents_container-f3caf6a1:~# ip netns

qrouter-f31ed1fb-1b90-46e3-b869-d9374e3d08b1

qdhcp-f88ceab1-a392-4281-8c60-f57d171a8029

qdhcp-b1da0a4f-2d06-46af-92aa-962c7a7c36f9

The two qdhcp namespaces are able to ping each other.
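
(That check is simply the following, for reference — a sketch, with the 
peer DHCP port address as a placeholder:)

ip netns exec qdhcp-f88ceab1-a392-4281-8c60-f57d171a8029 ip addr show            # note the ns-xxxxxxxx interface address
ip netns exec qdhcp-f88ceab1-a392-4281-8c60-f57d171a8029 ping -c 3 <peer-dhcp-port-ip>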

When booting the instance the DHCP request can be seen:

root@compute-2:~# dhcpdump -i tapf9871920-e0

  TIME: 2016-09-08 11:49:03.646

IP: 0.0.0.0 (fa:16:3e:32:7e:79) > 255.255.255.255 (ff:ff:ff:ff:ff:ff)

OP: 1 (BOOTPREQUEST)

HTYPE: 1 (Ethernet)

  HLEN: 6

  HOPS: 0

   XID: 7840761a

  SECS: 60

FLAGS: 0

CIADDR: 0.0.0.0

YIADDR: 0.0.0.0

SIADDR: 0.0.0.0

GIADDR: 0.0.0.0

CHADDR: fa:16:3e:32:7e:79:00:00:00:00:00:00:00:00:00:00

SNAME: .

FNAME: .

OPTION:  53 (  1) DHCP message type 1 (DHCPDISCOVER)

OPTION:  61 (  7) Client-identifier 01:fa:16:3e:32:7e:79

OPTION:  57 (  2) Maximum DHCP message size 576

OPTION:  55 (  9) Parameter Request List  1 (Subnet mask)

  3 (Routers)

  6 (DNS server)

 12 (Host name)

 15 (Domainname)

 26 (Interface MTU)

 28 (Broadcast address)

 42 (NTP servers)

121 (Classless Static Route)

OPTION:  60 ( 12) Vendor class identifier udhcp 1.20.1

OPTION:  12 (  6) Host name cirros

---

The DHCP packet is seen on the tap interface for the instance and the 
bridge brqf88ceab1-a3, but not on any other interface on the compute 
host. No DHCP packet is observed on the network agent container running 
the DHCP namespace.
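
To narrow down where the discover dies, this is roughly how we are 
capturing at each hop (a sketch; the interface and namespace names are the 
ones shown above, and 4789 is the standard Linux bridge VXLAN UDP port):

# on the compute node: the VXLAN interface and the underlying VLAN sub-interface
tcpdump -ni vxlan-21 'udp port 67 or udp port 68'
tcpdump -ni bond1.21 'udp port 4789'
# on the network agents container: inside the DHCP namespace
ip netns exec qdhcp-f88ceab1-a392-4281-8c60-f57d171a8029 tcpdump -ni any 'udp port 67 or udp port 68'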


output of the instance booting:

Starting network...

udhcpc (v1.20.1) started

Sending discover...

Sending discover...

Sending discover...

Usage: /sbin/cirros-dhcpc <up|down>

No lease, failing

WARN: /etc/rc3.d/S40-network failed

cirros-ds 'net' up at 181.24

Regards,
--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/> 
gr...@absolutedevops.io <mailto:grant@absolutedevops.i> 0845 874 0580
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Issue when trying to snapshot an instance

2016-06-03 Thread Grant Morley
: 70d42d14-66f6-4374-9038-4b6f840193e0] HTTPUnauthorized:

2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager
[instance: 70d42d14-66f6-4374-9038-4b6f840193e0] 
2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager
[instance: 70d42d14-66f6-4374-9038-4b6f840193e0] 401
Unauthorized
2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager
[instance: 70d42d14-66f6-4374-9038-4b6f840193e0] 
2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager
[instance: 70d42d14-66f6-4374-9038-4b6f840193e0] 
2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager
[instance: 70d42d14-66f6-4374-9038-4b6f840193e0] 401
Unauthorized
2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager
[instance: 70d42d14-66f6-4374-9038-4b6f840193e0]   This server
could not verify that you are authorized to access the document
you requested. Either you supplied the wrong credentials
(e.g., bad password), or your browser does not understand how to
supply the credentials required.
2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager
[instance: 70d42d14-66f6-4374-9038-4b6f840193e0]
2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager
[instance: 70d42d14-66f6-4374-9038-4b6f840193e0] 
2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager
[instance: 70d42d14-66f6-4374-9038-4b6f840193e0]  (HTTP 401)
2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager
[instance: 70d42d14-66f6-4374-9038-4b6f840193e0]
2016-06-02 17:14:00.173 52559 ERROR oslo_messaging.rpc.dispatcher
[req-8200a3b0-ad2a-406e-969e-c22762db3455
bb07f987fbae485c9e05f06fb0d422c2 a22e503869c34a92bceb66b0c1da7231
- - -] Exception during message handling: Not authorized for image
f9844dd5-5a92-4cd4-956d-8ad04cfc5e84.

Any help will be appreciated.
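
In case it is useful, this is the sort of check I was planning to run to 
rule out the project credentials themselves (a sketch; the image ID is the 
one from the traceback above, and the openrc file is an assumption):

source openrc                                            # the affected project's credentials
openstack token issue                                    # confirm a token can be issued
glance image-show f9844dd5-5a92-4cd4-956d-8ad04cfc5e84   # see whether glance itself returns 401 for this image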

Regards,

-- 
Grant Morley

Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io>
gr...@absolutedevops.io <mailto:gr...@absolutedevops.io> 0845 874
0580

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
<mailto:OpenStack-operators@lists.openstack.org>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/> 
gr...@absolutedevops.io <mailto:grant@absolutedevops.i> 0845 874 0580
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators