Re: [Openstack-operators] Neutron fails to notify nova on events network-vif-plugged

2016-11-01 Thread Davíð Örn Jóhannsson
I’m trying to use the Puppet modules for Liberty; I’ve had some issues where the modules do not seem to exactly match the configuration intended for Liberty.

From: Matt Kassawara
Date: Tuesday 1 November 2016 at 15:40
To: David Orn Johannsson
Cc: "openstack-operators@lists.openstack.org"
Subject: Re: [Openstack-operators] Neutron fails to notify nova on events 
network-vif-plugged

You are mixing a lot of current and defunct configuration options. I recommend 
removing the defunct options and then comparing your configuration files with 
the installation guide [1].

[1] http://docs.openstack.org/liberty/install-guide-ubuntu/
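For comparison, the nova notification settings in the Liberty install guide's neutron.conf look roughly like the following sketch (recalled from the guide, not verified against it; controller-01 and NOVA_PASS are placeholders matching the poster's naming):

```ini
[DEFAULT]
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller-01:8774/v2

[nova]
auth_url = http://controller-01:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
```

A 401 from keystone on the notification path usually means the credentials in this [nova] section do not match the nova service user.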

On Tue, Nov 1, 2016 at 7:05 AM, Davíð Örn Jóhannsson 
<davi...@siminn.is> wrote:
I’m working on setting up an OpenStack Liberty development environment on Ubuntu 14.04. 
At present I have 3 nodes: Controller, Network and Compute. I am at the point of 
trying to spin up an instance, and neutron-server seems to fail to notify nova 
because of an authentication error against keystone. I’ve been struggling for 
some time to figure out the cause of this and was hoping someone could lend me 
more experienced eyes.

Controller node /etc/neutron/neutron.conf 
http://paste.openstack.org/show/587547/
Openstack endpoint list http://paste.openstack.org/show/587548/


[Openstack-operators] Neutron fails to notify nova on events network-vif-plugged

2016-11-01 Thread Davíð Örn Jóhannsson
I’m working on setting up an OpenStack Liberty development environment on Ubuntu 14.04. 
At present I have 3 nodes: Controller, Network and Compute. I am at the point of 
trying to spin up an instance, and neutron-server seems to fail to notify nova 
because of an authentication error against keystone. I’ve been struggling for 
some time to figure out the cause of this and was hoping someone could lend me 
more experienced eyes.

Controller node /etc/neutron/neutron.conf 
http://paste.openstack.org/show/587547/
Openstack endpoint list http://paste.openstack.org/show/587548/

2016-11-01 12:42:04.067 15888 DEBUG keystoneclient.session [-] RESP: [300] Content-Length: 635 Vary: X-Auth-Token Connection: keep-alive Date: Tue, 01 Nov 2016 12:42:04 GMT Content-Type: application/json X-Distribution: Ubuntu
RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2015-03-30T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.4", "links": [{"href": "http://controller-01:35357/v3/", "rel": "self"}]}, {"status": "stable", "updated": "2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://controller-01:35357/v2.0/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}}
 _http_log_response /usr/lib/python2.7/dist-packages/keystoneclient/session.py:215
2016-11-01 12:42:04.067 15888 DEBUG keystoneclient.auth.identity.v3.base [-] Making authentication request to http://controller-01:35357/v3/auth/tokens get_auth_ref /usr/lib/python2.7/dist-packages/keystoneclient/auth/identity/v3/base.py:188
2016-11-01 12:42:04.091 15888 DEBUG keystoneclient.session [-] Request returned failure status: 401 request /usr/lib/python2.7/dist-packages/keystoneclient/session.py:400
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova [-] Failed to notify nova on events: [{'status': 'completed', 'tag': u'bf092fd0-51ba-4fbf-8d3d-9c3004b3811f', 'name': 'network-vif-plugged', 'server_uuid': u'24616ae2-a6e4-4843-ade6-357a9ce80bc0'}]
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova Traceback (most recent call last):
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova   File "/usr/lib/python2.7/dist-packages/neutron/notifiers/nova.py", line 248, in send_events
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova     batched_events)
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova   File "/usr/lib/python2.7/dist-packages/novaclient/v2/contrib/server_external_events.py", line 39, in create
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova     return_raw=True)
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova   File "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 169, in _create
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova     _resp, body = self.api.client.post(url, body=body)
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova   File "/usr/lib/python2.7/dist-packages/keystoneclient/adapter.py", line 176, in post
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova     return self.request(url, 'POST', **kwargs)
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova   File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 91, in request
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova     **kwargs)
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova   File "/usr/lib/python2.7/dist-packages/keystoneclient/adapter.py", line 206, in request
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova     resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova   File "/usr/lib/python2.7/dist-packages/keystoneclient/adapter.py", line 95, in request
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova     return self.session.request(url, method, **kwargs)
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova   File "/usr/lib/python2.7/dist-packages/keystoneclient/utils.py", line 337, in inner
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova     return func(*args, **kwargs)
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova   File "/usr/lib/python2.7/dist-packages/keystoneclient/session.py", line 304, in request
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova     auth_headers = self.get_auth_headers(auth)
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova   File "/usr/lib/python2.7/dist-packages/keystoneclient/session.py", line 617, in get_auth_headers
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova     return auth.get_headers(self, **kwargs)
2016-11-01 12:42:04.091 15888 ERROR neutron.notifiers.nova   File "/usr/lib/python2.7/dist-packages/keystoneclient/auth/base.py", line 142, in get_headers
2016-11-01 12:42:04.091 15888 
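The 300 response in the log above is keystone's version discovery document; the 401 that follows on /v3/auth/tokens means discovery worked but the credentials were rejected. A minimal sketch of how a client picks the v3 endpoint out of such a document (the body below is a trimmed copy of the logged response, with the mangled URLs reconstructed):

```python
import json

# Trimmed copy of the RESP BODY from the log above.
body = """{"versions": {"values": [
  {"status": "stable", "id": "v3.4",
   "links": [{"href": "http://controller-01:35357/v3/", "rel": "self"}]},
  {"status": "stable", "id": "v2.0",
   "links": [{"href": "http://controller-01:35357/v2.0/", "rel": "self"}]}
]}}"""

def pick_v3_endpoint(discovery_json):
    """Return the self-link of the first stable v3 version, or None."""
    versions = json.loads(discovery_json)["versions"]["values"]
    for v in versions:
        if v["status"] == "stable" and v["id"].startswith("v3"):
            for link in v["links"]:
                if link["rel"] == "self":
                    return link["href"]
    return None

print(pick_v3_endpoint(body))  # http://controller-01:35357/v3/
```

Discovery needs no credentials, which is why it succeeds while the token request fails.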

Re: [Openstack-operators] Quota exceeded for resource router

2016-11-01 Thread Davíð Örn Jóhannsson
I think I found it, the horizon error message couldn’t be clearer :)

2016-11-01 11:09:22.548 5894 INFO neutron.api.v2.resource [req-1f7cff32-defc-4c81-9021-8d58cc151d2b ebc40bf0b0e24114ad1c2d3dfa42e062 90ccab0d2336493bb38d0f86969eb632 - - -] create failed (client error): No more IP addresses available on network 29eea5ab-3891-478d-a76f-04f92d9e1f52.
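The quota error in Horizon masked the real failure: the subnet ran out of allocatable addresses for the new router port. A quick sanity check of subnet capacity (the poster's subnet size isn't shown, so the /28 here is purely illustrative):

```python
import ipaddress

def usable_addresses(cidr, reserved=2):
    """Usable IPs in a subnet: total minus the network and broadcast
    addresses, minus `reserved` infrastructure ports (e.g. the gateway
    and the DHCP agent port that neutron allocates)."""
    net = ipaddress.ip_network(cidr)
    return net.num_addresses - 2 - reserved

# A /28 has 16 addresses; after network, broadcast, gateway and DHCP
# port, only 12 remain for instances and extra router interfaces.
print(usable_addresses("10.0.0.0/28"))  # 12
```

Once instances and service ports consume those, any further port creation (including a router interface) fails with "No more IP addresses available".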

From: Tobias Schön
Date: Tuesday 1 November 2016 at 09:51
To: David Orn Johannsson, 
"openstack-operators@lists.openstack.org"
Subject: RE: Quota exceeded for resource router

Hi,

What does your log say?


Best regards,
Tobias Schön

From: Davíð Örn Jóhannsson [mailto:davi...@siminn.is]
Sent: 1 November 2016 10:33
To: openstack-operators@lists.openstack.org
Subject: [Openstack-operators] Quota exceeded for resource router

Ubuntu 14.04
Liberty
Neutron network

I’m trying to add a router to a new project that has no router configured, but 
somehow I manage to hit a "Quota exceeded for resource router" error in 
Horizon. I even tried setting the quota to 500.

+---------------------+-------+
| Field               | Value |
+---------------------+-------+
| floatingip          | 50    |
| health_monitor      | -1    |
| member              | -1    |
| network             | 10    |
| pool                | 10    |
| port                | 50    |
| rbac_policy         | 10    |
| router              | 500   |
| security_group      | 10    |
| security_group_rule | 100   |
| subnet              | 10    |
| subnetpool          | -1    |
| vip                 | 10    |
+---------------------+-------+

Any ideas to what could be the cause of this?
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] External network connectivity problem

2016-09-29 Thread Davíð Örn Jóhannsson
OpenStack Liberty
Ubuntu 14.04

I have a slightly strange problem. I’m running a Swift cluster, but the proxy 
nodes reside in an OpenStack tenant. The private network of the tenant is 
connected to an HA router on the external storage network.

This used to work like a charm: all 3 proxy nodes within the tenant were able 
to connect to the storage network and the ports on each of the Swift nodes. But 
all of a sudden I lost connectivity from 2 of them, and if I now spin up new 
instances within the project I cannot connect to the Swift nodes, though I can 
still connect from the one remaining proxy.

I can ping the Swift nodes but cannot connect to any open ports [6000/2, 22, 
etc.]. Here is where it gets a little strange: I have a non-Swift node on the 
network that I can connect to without problems, and the Swift nodes are not 
running a firewall.

root@swift-01:~# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source   destination

Chain FORWARD (policy ACCEPT)
target prot opt source   destination

Chain OUTPUT (policy ACCEPT)
target prot opt source   destination

The nodes belong to the default security group, which has the following rules:
Ingress IPv6 Any Any - default
Egress IPv4 Any Any 0.0.0.0/0 -
Egress IPv6 Any Any ::/0 -
Ingress IPv4 Any Any - default
Ingress IPv4 ICMP Any 0.0.0.0/0 -
Ingress IPv4 TCP 22 (SSH) 0.0.0.0/0 -

I created a new project and set up a router against the storage network in the 
same manner as my previous project and instances within that project can 
connect to ports on all servers running on the storage network.

On one of the network nodes I ran "ip netns exec 
qrouter-dfa2bdc2-7482-42c4-b166-515849119428 bash" (the router in the faulty 
project) and tried to ping and telnet to the ports on the Swift hosts, without 
luck.

Any ideas on where to go next for troubleshooting?
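When ICMP works but TCP does not, a useful next step is to distinguish "connection refused" (the packet arrived and the host answered with a RST, so nothing is listening) from a silent timeout (the packet was dropped in transit, which points at the path: security groups, the router namespace, or MTU). A small hypothetical probe helper, written here as an illustration rather than anything from the thread:

```python
import socket

def probe(host, port, timeout=3.0):
    """Classify a TCP port as 'open', 'refused', or 'filtered'.

    'refused' means the target answered (reachable, nothing listening);
    'filtered' (a timeout) means packets were silently dropped in
    transit, pointing at the network path rather than the target."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except ConnectionRefusedError:
        return "refused"
    except socket.timeout:
        return "filtered"
    finally:
        s.close()
```

Run from inside the qrouter namespace (ip netns exec qrouter-… python …), a "filtered" result against the Swift ports while the non-Swift host shows "open" would confirm the drop happens on the path rather than on the Swift nodes themselves.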


Re: [Openstack-operators] L3 HA Router

2016-09-28 Thread Davíð Örn Jóhannsson
Thank you, this was very helpful

From: Tobias Urdin
Date: Wednesday 28 September 2016 at 12:43
To: David Orn Johannsson
Cc: 
"openstack-operators@lists.openstack.org"
Subject: Re: [Openstack-operators] L3 HA Router

Hello,
Please note that this is not recommended, but it's currently the least invasive 
way (i.e. not killing the machine or the keepalived process).

* Get the UUID of the L3 router.
* Go to the node that hosts your L3 routers (running the neutron-l3-agent 
service) and show all interfaces in that namespace.
* Shutdown ha-xxx interface in that routers namespace.

Example:
List all interfaces in this router namespace.
$ ip netns exec qrouter-ebb5939f-b8b2-4351-8995-27e4ccf9ebe2 ifconfig -a

Then just kill the "ha-" interface with:
$ ip netns exec qrouter-ebb5939f-b8b2-4351-8995-27e4ccf9ebe2 ifconfig ha-xxx 
down

You can find the current master router using
$ neutron l3-agent-list-hosting-router ebb5939f-b8b2-4351-8995-27e4ccf9ebe2

Read up more on how these things work; you will need it.
A quick search will turn up something like this: 
https://developer.rackspace.com/blog/neutron-networking-l3-agent/

There are a lot of resources at your disposal out there.
Good luck!

On 09/28/2016 01:58 PM, Davíð Örn Jóhannsson wrote:
I figured this might be the case, but can you tell me how to locate the 
interface for the router namespace? If I do an ifconfig -a on the network node, 
I only see the br-* interfaces and the physical ones.

I assume I’d need to take down one of the interfaces that keepalived is 
responsible for, but I’m not sure how to find them and map each interface to 
its router in order to choose the right one to take down.

From: Tobias Urdin
Date: Wednesday 28 September 2016 at 11:11
To: David Orn Johannsson
Cc: 
"openstack-operators@lists.openstack.org"
Subject: Re: [Openstack-operators] L3 HA Router

Hello,

Some work was done in that area however it was never completed.
https://bugs.launchpad.net/neutron/+bug/1370033

You can issue an ugly failover by taking down the "ha" interface in the router 
namespace of the master with ifconfig down. But it's not pretty.

Best regards

On 09/28/2016 11:40 AM, Davíð Örn Jóhannsson wrote:
OS: Ubuntu 14.04
OpenStack Liberty

Is it possible to perform a manual failover of HA routers between master and 
backup by any sensible means ?




Re: [Openstack-operators] Evacuate host with host ephemeral storage

2016-09-26 Thread Davíð Örn Jóhannsson
Well it seems that I’m out of luck with the nova migrate function:

2016-09-26 12:16:06.249 28226 INFO nova.compute.manager [req-bfccd77e-b351-4c30-a5a4-a0e20fc62add 98b4044d95e34926aa53405f2b7c5a13 1dda2478e30d44dda0ca752c6047725d - - -] [instance: b98be860-9253-43a1-9351-36a7aa125a51] Setting instance back to ACTIVE after: Instance rollback performed due to: Migration pre-check error: Migration is not supported for LVM backed instances

From: Marcin Iwinski
Date: Friday 23 September 2016 at 14:31
To: David Orn Johannsson, 
"openstack-operators@lists.openstack.org"
Subject: Re: [Openstack-operators] Evacuate host with host ephemeral storage

Hi,
double-check if the "nova migrate" option will work for you - it uses a 
different mechanism than live migration, and judging by [1], depending on the 
OpenStack release you use it might work with LVM-backed ephemeral storage. And 
I second Kostiantyn's email - we seem to be mixing up evacuation with migration.


[1] https://bugs.launchpad.net/nova/+bug/1270305

Regards,
Marcin



On 23 Sep 2016 at 16:21:17, Davíð Örn Jóhannsson 
(davi...@siminn.is) wrote:

Thank you for the clarification. My digging around has thus far only revealed 
https://bugs.launchpad.net/nova/+bug/1499405 (live migration is not implemented 
for LVM backed storage)

If any one has any more info on this subject it would be much appreciated.

From: "kostiantyn.volenbovs...@swisscom.com"
Date: Friday 23 September 2016 at 13:59
To: David Orn Johannsson, "marcin.iwin...@gmail.com", 
"openstack-operators@lists.openstack.org"
Subject: RE: [Openstack-operators] Evacuate host with host ephemeral storage

Hi,

migration and evacuation are getting mixed up here.
In the migration case you can access the ephemeral storage of your VM, so you 
copy that disk (a file) - either offline, aka ‘cold’ migration, or via live 
migration.
In the evacuation case your compute host is unavailable (or assumed to be 
unavailable), so you can’t access (or must assume you can’t access) whatever is 
stored on that compute host.
So if your ephemeral disk (the root disk of the VM) actually lives on that 
compute host, you can’t access it, and evacuation will result in a rebuild 
(taking the original image from Glance, so you typically lose whatever happened 
after the initial boot).

But if you have something shared underneath (like NFS), then use 
--on-shared-storage:
nova evacuate EVACUATED_SERVER_NAME HOST_B --on-shared-storage
(I guess it will detect that automatically in any case.)
But LVM on an NFS share - that sounds not very straightforward (not sure it 
works with OpenStack).

See [1] and [2]
BR,
Konstantin
[1] 
http://docs.openstack.org/admin-guide/compute-configuring-migrations.html#section-configuring-compute-migrations
[2] http://docs.openstack.org/admin-guide/cli-nova-evacuate.html
From: Davíð Örn Jóhannsson [mailto:davi...@siminn.is]
Sent: Friday, September 23, 2016 2:12 PM
To: Marcin Iwinski <marcin.iwin...@gmail.com>; 
openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] Evacuate host with host ephemeral storage

No I have not, I guess there is nothing else to do than just give it a go :)

Thanks for the pointer

From: Marcin Iwinski
Date: Friday 23 September 2016 at 11:39
To: David Orn Johannsson, 
"openstack-operators@lists.openstack.org"
Subject: Re: [Openstack-operators] Evacuate host with host ephemeral storage



On 23 Sep 2016 at 13:25:39, Davíð Örn Jóhannsson 
(davi...@siminn.is) wrote:
OpenStack Liberty
Ubuntu 14.04

I know that using block storage like Cinder you can evacuate instances from 
hosts, but in my case we are not yet using Cinder or other block storage 
solutions; we rely on local ephemeral storage, configured using LVM:

Nova.conf
[libvirt]
images_volume_group=vg_ephemeral
images_type=lvm

Is it possible to evacuate (migrate) ephemeral instances from compute hosts, 
and if so, does anyone have experience with that?


Hi Davíð

Have you actually tried the regular "nova migrate UUID" option? It does copy 
the entire disk to a different compute node - but I'm not sure if it works with 
LVM. I've also used [1] ("nova live-migration --block-migrate UUID") in the 
past - but unfortunately that wasn't with LVM-backed ephemeral storage either.

[1] 
http://www.tcpcloud.eu/en/blog/2014/11/20/block-live-migration-openstack-environment/

BR
Marcin



[Openstack-operators] Auto start running Nova Instances after reboot

2016-09-19 Thread Davíð Örn Jóhannsson
Ubuntu 14.04
OpenStack Liberty

I’m looking for a way to ensure running instances are started on a host after 
a reboot. I’ve seen a way to do this for RHEL and CentOS using 
/etc/sysconfig/libvirt-guests, but it is not available on Ubuntu as far as I 
know.
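Nova itself can do this regardless of distribution: the resume_guests_state_on_host_boot option tells nova-compute to restart instances that were running when the host went down. A sketch of the nova.conf fragment:

```ini
[DEFAULT]
# Restart guests that were in the running state before the host rebooted
resume_guests_state_on_host_boot = True
```

This restores nova's own view of instance state, so it is generally preferable to restarting guests behind nova's back via libvirt-guests.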


[Openstack-operators] Migrate ephemeral nodes

2015-12-04 Thread Davíð Örn Jóhannsson
Is it possible to migrate ephemeral volumes from their default location to an 
ephemeral LVM volume?

The reason for this is that we just added additional storage to the servers and 
created a volume group that we want to use as a backend for ephemeral storage 
on the compute hosts, and we are looking for a way to move instances there 
without having to delete and re-create the instances currently using 
/var/lib/nova/instances as the backend for ephemeral storage.


[Openstack-operators] Multiple DHCP_Domain in Neutron

2015-12-02 Thread Davíð Örn Jóhannsson
I've been having some issues where I want to change the domain name for a 
given project. I know I can change dhcp_domain in dhcp_agent.ini, but only on a 
global scale; when I look at the dnsmasq process I see the final runtime param 
--domain=openstacklocal. Is there any sensible way to set this param to 
--domain=example.com for just one project?


 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces
   --interface=tape9e8f364-71 --except-interface=lo
   --pid-file=/var/lib/neutron/dhcp/7d6c5968-6713-427c-be33-fabaf53bef81/pid
   --dhcp-hostsfile=/var/lib/neutron/dhcp/7d6c5968-6713-427c-be33-fabaf53bef81/host
   --addn-hosts=/var/lib/neutron/dhcp/7d6c5968-6713-427c-be33-fabaf53bef81/addn_hosts
   --dhcp-optsfile=/var/lib/neutron/dhcp/7d6c5968-6713-427c-be33-fabaf53bef81/opts
   --dhcp-leasefile=/var/lib/neutron/dhcp/7d6c5968-6713-427c-be33-fabaf53bef81/leases
   --dhcp-range=set:tag0,10.0.10.0,static,86400s
   --dhcp-lease-max=256
   --conf-file=/etc/neutron/dnsmasq-neutron.conf
   --domain=openstacklocal
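
One thing worth trying, since the agent already passes --conf-file=/etc/neutron/dnsmasq-neutron.conf: push the domain to clients as DHCP option 15 from that file. This is a hedged sketch (example.com is a placeholder, and note it is still per-agent, not per-project):

```ini
# /etc/neutron/dnsmasq-neutron.conf
# DHCP option 15 is domain-name; the -force variant overrides
# what dnsmasq would otherwise send from --domain.
dhcp-option-force=15,example.com
```

Restart neutron-dhcp-agent afterwards so dnsmasq is respawned with the new conf.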



[Openstack-operators] Nova Configure VG ephemeral-storage

2015-11-26 Thread Davíð Örn Jóhannsson
Could someone point me in the right direction for documentation on configuring 
ephemeral storage through a volume group for nova compute hosts in Kilo?
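
For reference, the knobs that select an LVM backend for ephemeral disks live in nova.conf's [libvirt] section; a sketch for Kilo (the volume group name nova-ephemeral is a placeholder, and the option names should be checked against the Kilo configuration reference):

```ini
# /etc/nova/nova.conf on the compute host
[libvirt]
# Back new instance disks with LVM logical volumes
images_type = lvm
# Volume group that will hold the ephemeral disks
images_volume_group = nova-ephemeral
# Optionally zero LVs on deletion (none|zero|shred)
volume_clear = zero
```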


Re: [Openstack-operators] How do I install specific versions of openstack/puppet-keystone

2015-11-26 Thread Davíð Örn Jóhannsson
Same here

From: Saverio Proto 
Sent: 26 November 2015 07:58
To: Sam Morrison
Cc: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] How do I install specific versions of 
openstack/puppet-keystone

> Can you get R10k to NOT install dependencies listed in metadata etc.?

In my experience r10k will not try to install any dependencies.

Saverio




[Openstack-operators] nova list-extensions ERROR (KeyError): 'extensions'

2015-11-25 Thread Davíð Örn Jóhannsson
I'm setting up a controller node for a development env using the OpenStack 
puppet modules and I encountered an error while logging into Horizon stating 
ERROR (KeyError): 'extensions'

Since this is a setup using the puppet modules there could be some 
misconfiguration or some component oversight, but the error itself does not 
tell me much about what might be wrong.

Any ideas?

root@controller-01:~# nova --debug list-extensions
DEBUG (session:195) REQ: curl -g -i -X GET http://127.0.0.1:35357/v2.0 -H 
"Accept: application/json" -H "User-Agent: python-keystoneclient"
INFO (connectionpool:188) Starting new HTTP connection (1): 127.0.0.1
DEBUG (connectionpool:362) "GET /v2.0 HTTP/1.1" 200 336
DEBUG (session:223) RESP: [200] content-length: 336 vary: X-Auth-Token 
x-distribution: Ubuntu connection: keep-alive date: Wed, 25 Nov 2015 12:36:26 
GMT content-type: application/json x-openstack-request-id: 
req-64bcc134-3e09-4264-ae79-3dd850b4ec18
RESP BODY: {"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", 
"media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": 
[{"href": "http://127.0.0.1:35357/v2.0/", "rel": "self"}, {"href": 
"http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}}

DEBUG (v2:76) Making authentication request to 
http://127.0.0.1:35357/v2.0/tokens
DEBUG (connectionpool:362) "POST /v2.0/tokens HTTP/1.1" 200 2442
DEBUG (iso8601:184) Parsed 2015-11-25T13:36:26Z into {'tz_sign': None, 
'second_fraction': None, 'hour': u'13', 'daydash': u'25', 'tz_hour': None, 
'month': None, 'timezone': u'Z', 'second': u'26', 'tz_minute': None, 'year': 
u'2015', 'separator': u'T', 'monthdash': u'11', 'day': None, 'minute': u'36'} 
with default timezone 
DEBUG (iso8601:140) Got u'2015' for 'year' with default None
DEBUG (iso8601:140) Got u'11' for 'monthdash' with default 1
DEBUG (iso8601:140) Got 11 for 'month' with default 11
DEBUG (iso8601:140) Got u'25' for 'daydash' with default 1
DEBUG (iso8601:140) Got 25 for 'day' with default 25
DEBUG (iso8601:140) Got u'13' for 'hour' with default None
DEBUG (iso8601:140) Got u'36' for 'minute' with default None
DEBUG (iso8601:140) Got u'26' for 'second' with default None
DEBUG (session:195) REQ: curl -g -i -X GET http://controller-01:8774/extensions 
-H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}603f74199663e969ef12d7b131fa30a9c20cd7a3"
INFO (connectionpool:188) Starting new HTTP connection (1): 
controller-01.dev.opst.siminn.is
DEBUG (connectionpool:362) "GET /extensions HTTP/1.1" 300 257
DEBUG (session:223) RESP: [300] date: Wed, 25 Nov 2015 12:36:26 GMT 
content-length: 257 content-type: application/json connection: keep-alive
RESP BODY: {"choices": [{"status": "SUPPORTED", "media-types": [{"base": 
"application/json", "type": 
"application/vnd.openstack.compute+json;version=2"}], "id": "v2.0", "links": 
[{"href": "http://controller-01:8774/v2/extensions", "rel": "self"}]}]}

DEBUG (shell:914) 'extensions'
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 911, in main
OpenStackComputeShell().main(argv)
  File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 838, in main
args.func(self.cs, args)
  File 
"/usr/lib/python2.7/dist-packages/novaclient/v2/contrib/list_extensions.py", 
line 44, in do_list_extensions
extensions = client.list_extensions.show_all()
  File 
"/usr/lib/python2.7/dist-packages/novaclient/v2/contrib/list_extensions.py", 
line 37, in show_all
return self._list("/extensions", 'extensions')
  File "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 69, in _list
data = body[response_key]
KeyError: 'extensions'
ERROR (KeyError): 'extensions'
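
For what it's worth, the trace above shows that the client's GET to http://controller-01:8774/extensions was answered with a 300 Multiple Choices, which usually means the compute endpoint registered in the Keystone catalog is missing its version/tenant suffix. A sketch of the usual Kilo-era endpoint registration (host and region taken from this setup; verify the exact command form against your install guide):

```
openstack endpoint create \
  --publicurl   'http://controller-01:8774/v2/%(tenant_id)s' \
  --internalurl 'http://controller-01:8774/v2/%(tenant_id)s' \
  --adminurl    'http://controller-01:8774/v2/%(tenant_id)s' \
  --region RegionOne compute
```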



[Openstack-operators] puppet-neutron Neutron::Keystone::Auth/Keystone::Resource::Service_identity Error

2015-11-20 Thread Davíð Örn Jóhannsson
I'm trying to configure neutron keystone auth (with the 6.1.0 branch of the 
puppet-keystone module) on a controller node and I receive the following error 
message:



Error: 
/Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user[neutron]:
 Could not evaluate: Execution of '/usr/bin/openstack token issue --format 
value' returned 1: ERROR: openstack Authentication cannot be scoped to multiple 
targets. Pick one of: project, domain or trust


root@controller-01:~# /usr/bin/openstack token issue
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2015-11-20T16:50:56Z             |
| id         | 61fc5710fe1c4bf3898d0fda075b3aed |
| project_id | 64f40bf2fe234f2cb3c9e211f60a1479 |
| user_id    | eed5f078d075415888d4189e5a6b6cef |
+------------+----------------------------------+

Could there be a bug in the module or is it just a misconfig of neutron?




[Openstack-operators] OpenStack Horizon puppet module error python manage.py compress

2015-11-18 Thread Davíð Örn Jóhannsson
I'm currently trying to automate the Horizon setup with the 
https://github.com/openstack/puppet-horizon/blob/stable/kilo/manifests/init.pp 
module, but I receive an OfflineGenerationError:


You have offline compression enabled but key "695212a2a24c10ba077ec76c387aa922" 
is missing from offline manifest. You may need to run "python manage.py 
compress".


The module tries to run the following command:


/usr/share/openstack-dashboard/manage.py collectstatic --noinput --clear && 
/usr/share/openstack-dashboard/manage.py compress --force


But it results in the following error:


Deleting 'bootstrap/js/bootstrap-datepicker.js'
Deleting 'bootstrap/js/bootstrap.js'
Copying 
'/usr/lib/python2.7/dist-packages/horizon/static/bootstrap/js/bootstrap-datepicker.js'
Copying 
'/usr/lib/python2.7/dist-packages/horizon/static/bootstrap/js/bootstrap.js'

2 static files copied.
Found 'compress' tags in:
/usr/share/openstack-dashboard/openstack_dashboard/templates/_stylesheets.html
/usr/lib/python2.7/dist-packages/horizon/templates/horizon/_conf.html
/usr/lib/python2.7/dist-packages/horizon/templates/horizon/_scripts.html
CommandError: An error occured during rendering 
/usr/share/openstack-dashboard/openstack_dashboard/templates/_stylesheets.html: 
'dashboard/less/horizon.less' could not be found in the COMPRESS_ROOT 
'/usr/share/openstack-dashboard/static' or with staticfiles.

Any ideas how one might fix this?




[Openstack-operators] Problem configuring a second external network

2015-11-12 Thread Davíð Örn Jóhannsson
We have an external network (Internet) working and are trying to configure an 
additional external network (Intranet).


From the network nodes we can ping the gateway of the Intranet network as well 
as a router created on that external network.


However, the instances on the private OpenStack network cannot contact either 
the gateway or the router, and if I trace the IP of the intranet router the 
third hop is the Internet GW (the default GW on the network nodes).


Neutron network node routes

Destination   Gateway       Genmask         Flags Metric Ref Use Iface
0.0.0.0       192.157.9.1   0.0.0.0         UG    0      0   0   br-ex-internet
192.157.9.0   0.0.0.0       255.255.255.0   U     0      0   0   br-ex-internet
192.22.18.0   192.22.18.1   255.255.255.0   UG    0      0   0   br-ex-intranet
192.22.18.0   0.0.0.0       255.255.255.0   U     0      0   0   br-ex-intranet

Instance routes

Destination   Gateway       Genmask         Flags Metric Ref Use Iface
0.0.0.0       192.168.0.1   0.0.0.0         UG    0      0   0   eth0
192.168.0.0   0.0.0.0       255.255.255.0   U     0      0   0   eth0



Both external networks have routers with configured interfaces on the private 
network.


Any ideas what I could be missing?


[Openstack-operators] Permission Denied in swift

2015-11-04 Thread Davíð Örn Jóhannsson
I have a newly set up swift cluster that is generating a lot of errors like the 
one below, where it seems the swift user does not have read permissions:


Nov  4 10:01:27 swift-04 object-auditor: ERROR Trying to audit 
/srv/node/Z302XLHP/objects/11808/7c8/1710674aaa385fc75885b2ac7f8967c8:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/swift/obj/auditor.py", line 173, in failsafe_object_audit
    self.object_audit(location)
  File "/usr/lib/python2.7/dist-packages/swift/obj/auditor.py", line 191, in object_audit
    with df.open():
  File "/usr/lib/python2.7/dist-packages/swift/obj/diskfile.py", line 1292, in open
    data_file, meta_file, ts_file = self._get_ondisk_file()
  File "/usr/lib/python2.7/dist-packages/swift/obj/diskfile.py", line 1381, in _get_ondisk_file
    "Error listing directory %s: %s" % (self._datadir, err))
DiskFileError: Error listing directory /srv/node/Z302XLHP/objects/11808/7c8/1710674aaa385fc75885b2ac7f8967c8: [Errno 13] Permission denied: '/srv/node/Z302XLHP/objects/11808/7c8/1710674aaa385fc75885b2ac7f8967c8'


As Root


root@swift-04:~# ls -la /srv/node/Z302XLHP/objects/11808/7c8/1710674aaa385fc75885b2ac7f8967c8
total 44
drwxr--r-- 2 swift swift    42 Nov  4 09:52 .
drw-r--r-- 3 swift swift    53 Nov  4 09:52 ..
-rw-r--r-- 1 swift swift 40957 Nov  4 09:52 1446566608.56781.data

As swift

swift@swift-04:/$ ls -la /srv/node/Z302XLHP/objects/11808/7c8/1710674aaa385fc75885b2ac7f8967c8
ls: cannot access /srv/node/Z302XLHP/objects/11808/7c8/1710674aaa385fc75885b2ac7f8967c8: Permission denied

swift@swift-04:/$ ls -la /srv/node/Z302XLHP/objects/11808/7c8
ls: cannot access /srv/node/Z302XLHP/objects/11808/7c8/.: Permission denied
ls: cannot access /srv/node/Z302XLHP/objects/11808/7c8/..: Permission denied
ls: cannot access /srv/node/Z302XLHP/objects/11808/7c8/1710674aaa385fc75885b2ac7f8967c8: Permission denied
total 0
d? ? ? ? ?? .
d? ? ? ? ?? ..
?? ? ? ? ?? 1710674aaa385fc75885b2ac7f8967c8

swift@swift-04:/$ ls -la /srv/node/Z302XLHP/objects/11808/
total 8
drwxrwxr-x   3 swift swift   61 Nov  4 10:03 .
drwxr-xr-x 151 swift swift 4096 Nov  4 09:56 ..
drwxr--r--   3 swift swift   53 Nov  4 09:52 7c8
-rw-------   1 swift swift   50 Nov  4 10:03 hashes.pkl
-rwxr-xr-x   1 swift swift    0 Nov  4 09:52 .lock

Any ideas to what could be wrong with the permissions?







Re: [Openstack-operators] Permission Denied in swift

2015-11-04 Thread Davíð Örn Jóhannsson
No, Local disks.


it seems that the /srv/node/Z302XK5V/objects/84990/fbc/ folder is being created 
without execute permissions


From: Heiko Krämer <hkrae...@anynines.com>
Sent: 4 November 2015 10:18
To: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] Permission Denied in swift

Hi,

Are you using shared storage such as NFS?

total 0
d? ? ? ? ?? .
d? ? ? ? ?? ..
?? ? ? ? ?? 1710674aaa385fc75885b2ac7f8967c8

It looks like your file system is broken or your export of your shared storage 
isn't working well.

Cheers
Heiko




--
anynines.com


Re: [Openstack-operators] Permission Denied in swift

2015-11-04 Thread Davíð Örn Jóhannsson
Swift creates the folder with these permissions:

drw-r--r--   3 swift swift   53 Nov  3 15:53 fbc


but all other folders are created with these permissions:

drwxr-xr-x   3 swift swift   61 Nov  4 10:47 84990


Might the process responsible for creating these directories be misconfigured 
to use 644?



Re: [Openstack-operators] Permission Denied in swift

2015-11-04 Thread Davíð Örn Jóhannsson
I seem to have spotted the error. I'm using the Puppet OpenStack Swift module 
https://github.com/openstack/puppet-swift/blob/master/manifests/storage/server.pp
 and I had missed the comments regarding permissions in rsyncd.conf.
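
For anyone hitting the same thing: the rsyncd.conf knob in question is "incoming chmod". A hedged sketch of an object section that keeps the execute bit on replicated directories (the mask values are illustrative; match them to your deployment):

```ini
# /etc/rsyncd.conf -- object storage section
[object]
path = /srv/node
read only = false
# Force sane modes on incoming entries: directories keep rwx for
# the owner and rx for group/other; without this, dirs can land
# as 0644 and become untraversable by the swift user.
incoming chmod = Du=rwx,Dgo=rx,Fu=rw,Fgo=r
```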


Frá: Davíð Örn Jóhannsson
Sent: 04. nóvember 2015 10:55
Til: Heiko Krämer; openstack-operators@lists.openstack.org
Efni: Re: [Openstack-operators] Permission Denied in swift


Swift creates the folder with these permissions

drw-r--r--   3 swift swift   53 Nov  3 15:53 fbc


but all other folders ar creteted with these permissions

drwxr-xr-x   3 swift swift   61 Nov  4 10:47 84990


Might the process responsible for creating these directories be misconfigured  
to use 644 ?


Frá: Davíð Örn Jóhannsson
Sent: 04. nóvember 2015 10:38
Til: Heiko Krämer; openstack-operators@lists.openstack.org
Efni: Re: [Openstack-operators] Permission Denied in swift


No, Local disks.


it seems that the /srv/node/Z302XK5V/objects/84990/fbc/ folder is being created 
without execute permissions


Frá: Heiko Krämer <hkrae...@anynines.com>
Sent: 04. nóvember 2015 10:18
Til: openstack-operators@lists.openstack.org
Efni: Re: [Openstack-operators] Permission Denied in swift

Hi,

you are using an shared storage such as NFS ?

total 0
d? ? ? ? ?? .
d? ? ? ? ?? ..
?? ? ? ? ?? 1710674aaa385fc75885b2ac7f8967c8

It looks like your file system is broken or your export of your shared storage 
isn't working well.

Cheers
Heiko

Am 04.11.2015 um 11:10 schrieb Davíð Örn Jóhannsson:

I have a newly set up swift cluster that is generating a lot of error like this 
where it seems that swift user does not have access to read permissions or 
something


Nov  4 10:01:27 swift-04 object-auditor: ERROR Trying to audit 
/srv/node/Z302XLHP/objects/11808/7c8/1710674aaa385fc75885b2ac7f8967c8: 
#012Traceback (most recent call last):#012  File 
"/usr/lib/python2.7/dist-packages/swift/obj/auditor.py", line 173, in 
failsafe_object_audit#012self.object_audit(location)#012  File 
"/usr/lib/python2.7/dist-packages/swift/obj/auditor.py", line 191, in 
object_audit#012with df.open():#012  File 
"/usr/lib/python2.7/dist-packages/swift/obj/diskfile.py", line 1292, in 
open#012data_file, meta_file, ts_file = self._get_ondisk_file()#012  File 
"/usr/lib/python2.7/dist-packages/swift/obj/diskfile.py", line 1381, in 
_get_ondisk_file#012"Error listing directory %s: %s" % (self._datadir, 
err))#012DiskFileError: Error listing directory 
/srv/node/Z302XLHP/objects/11808/7c8/1710674aaa385fc75885b2ac7f8967c8: [Errno 
13] Permission denied: 
'/srv/node/Z302XLHP/objects/11808/7c8/1710674aaa385fc75885b2ac7f8967c8'


As Root


root@swift-04:~# ls -la 
/srv/node/Z302XLHP/objects/11808/7c8/1710674aaa385fc75885b2ac7f8967c8
total 44
drwxr--r-- 2 swift swift42 Nov  4 09:52 .
drw-r--r-- 3 swift swift53 Nov  4 09:52 ..
-rw-r--r-- 1 swift swift 40957 Nov  4 09:52 1446566608.56781.data

As swift

swift@swift-04:/$ ls -la 
/srv/node/Z302XLHP/objects/11808/7c8/1710674aaa385fc75885b2ac7f8967c8
ls: cannot access 
/srv/node/Z302XLHP/objects/11808/7c8/1710674aaa385fc75885b2ac7f8967c8: 
Permission denied

swift@swift-04:/$ ls -la /srv/node/Z302XLHP/objects/11808/7c8
ls: cannot access /srv/node/Z302XLHP/objects/11808/7c8/.: Permission denied
ls: cannot access /srv/node/Z302XLHP/objects/11808/7c8/..: Permission denied
ls: cannot access 
/srv/node/Z302XLHP/objects/11808/7c8/1710674aaa385fc75885b2ac7f8967c8: 
Permission denied
total 0
d? ? ? ? ?? .
d? ? ? ? ?? ..
?? ? ? ? ?? 1710674aaa385fc75885b2ac7f8967c8

swift@swift-04:/$ ls -la /srv/node/Z302XLHP/objects/11808/
total 8
drwxrwxr-x   3 swift swift   61 Nov  4 10:03 .
drwxr-xr-x 151 swift swift 4096 Nov  4 09:56 ..
drwxr--r--   3 swift swift   53 Nov  4 09:52 7c8
-rw---   1 swift swift   50 Nov  4 10:03 hashes.pkl
-rwxr-xr-x   1 swift swift0 Nov  4 09:52 .lock

Any ideas to what could be wrong with the permissions?








___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org<mailto:OpenStack-operators@lists.openstack.org>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



--
anynines.com
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators