[Openstack] unrescue VM instance

2016-03-15 Thread Balazs Varhegyi

Hi guys,
I have an OpenStack instance that ran out of storage on the core machine, and 
the instance got paused.
I cleared up space on the core machine so it has more disk now, but when I 
start the machine I can't SSH in.
In rescue mode I'm able to SSH in, but I want to unrescue the machine to be 
able to use it again without creating a new instance and migrating 
everything from this failed instance manually.
I enabled libvirt debug logging but nothing interesting came up (or at least 
I didn't notice); the only thing that indicates an error is:
"Domain id=53 name='instance-0281' 
uuid=f2715c2c-6d4e-5cf1-7606-f5552a59cb56 is tainted: host-cpu"
"nova list" shows the machine has the correct IP set, but I'm not able to 
ping it on the private IP nor via the floating IP.
I traced the network configuration from libvirt.xml and it seems to be 
configured correctly to connect to the internal bridge "br-int" on Open 
vSwitch.


Do you have any idea what I should try to unrescue the machine and get it 
back into the ACTIVE state?


Regards,
Balazs Varhegyi

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] swift ringbuilder and disk size/capacity relationship

2016-03-15 Thread Peter Brouwer

Hi Mark,


On 10/03/2016 05:14, Mark Kirkwood wrote:

On 10/03/16 00:03, Peter Brouwer wrote:



Indeed, I should have been a bit clearer with my question.
What is Swift's behaviour in a situation where a disk that a Swift
partition points to runs out of space? There can be a number of Swift
partitions that point to the same disk; does each partition get a
certain share of the disk's capacity allocated?


Hmm, I'm confused by the phrase 'partitions that point to the same disk':
"Partitions" is used here in the Swift context, i.e. the partition scheme 
the ring-builder uses. I'm assuming a whole physical disk is used, i.e. a 
filesystem created on a disk using the whole physical disk.
So the ring structure provides a reference to a Swift partition and a 
disk location, right?
What happens if the disk it is pointing to is full? Does Swift return 
an error to the app/client, or does it try a re-lookup in an attempt to 
find space elsewhere?


- Account, container, and object rings can be set to use the same 
*device* (typically an entire disk, e.g. /dev/sdc). While you could use 
partitions (e.g. if you are sourcing storage from a SAN), this is not 
the usual scenario.
- They all 'compete' for the storage; however, it is usually the 
objects that eat most of it (you can put the accounts and 
containers on their own real disks to avoid this; I think the docs 
suggest this as a good practice).



What happens when you run out of disk is that (eventually) you cannot 
add any more objects or containers. Swift is pretty resilient and can 
cope with *some* devices being full, but eventually nothing can be done 
and you need to add more storage nodes and amend the ring configuration 
(preferably *before* getting into the situation I described)!


regards

Mark



--
Regards,

Peter Brouwer, Principal Software Engineer,
Oracle Application Integration Engineering.
Phone:  +44 1506 672767, Mobile +44 7720 598 226
E-Mail: peter.brou...@oracle.com




Re: [Openstack] swift ringbuilder and disk size/capacity relationship

2016-03-15 Thread Mark Kirkwood

On 15/03/16 22:21, Peter Brouwer wrote:


On 10/03/2016 05:14, Mark Kirkwood wrote:

Hmm, I'm confused by the phrase 'partitions that point to the same disk':



"Partitions" is used in the Swift context, i.e. the partition scheme the
ring-builder uses.


I'm sorry, but that does not make sense. The ring builder lets you add 
*devices*. Now, a device could be a partition (e.g. /dev/sdc1) as opposed 
to a complete disk (/dev/sdc), but I'm at a loss to see where you are 
going with this, as the ring builder is essentially partition-unaware.


I'm assuming a whole physical disk is used, i.e. a filesystem created on 
a disk using the whole physical disk.
So the ring structure provides a reference to a Swift partition and a
disk location, right?


Hmm, this partition word again. You are talking about a whole disk, so 
there is no partition (or the 'partition' is the entire disk; you have 
added a whole disk, after all).



What happens if the disk it is pointing to is full? Does Swift return
an error to the app/client, or does it try a re-lookup in an attempt to
find space elsewhere?


I think I answered that previously. Swift can survive *some* of the 
disks being full, but eventually you'll get a PUT failure for 
objects/containers (a 50x HTTP error), and you will be unable to add any 
more until you:


- add more disks to existing servers (or)
- add more servers and disks

and amend the ring with these additional devices.
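For concreteness, amending the ring might look roughly like this (a sketch, not from this thread; the builder file name, region/zone numbers, IP, port, device name and weight are placeholder values for your own layout):

```shell
# On the host holding the ring builder files: add the new device and rebalance.
# r1z2, the IP/port, device name and weight below are example values only.
swift-ring-builder object.builder add r1z2-192.168.122.66:6000/vdb 100
swift-ring-builder object.builder rebalance
# Then distribute the regenerated object.ring.gz to all proxy and storage nodes.
```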

regards

Mark



Re: [Openstack] swift ringbuilder and disk size/capacity relationship

2016-03-15 Thread Mark Kirkwood

On 15/03/16 22:21, Peter Brouwer wrote:

"Partitions" is used in the Swift context, i.e. the partition scheme the
ring-builder uses. I'm assuming a whole physical disk is used, i.e. a
filesystem created on a disk using the whole physical disk.
So the ring structure provides a reference to a Swift partition and a
disk location, right?
What happens if the disk it is pointing to is full? Does Swift return
an error to the app/client, or does it try a re-lookup in an attempt to
find space elsewhere?


Ah, sorry: I see you are in fact (correctly) talking about Swift 
partitions and *not* disk partitions (this was confusing in your initial 
email).


So you are essentially asking what happens when the particular target 
disk is full. Swift tries to write objects or containers to a sufficient 
number of storage-server devices (the exact number or proportion may 
depend on the schema defined for the specific storage policy, replicated 
or erasure-coded; I haven't checked recent code, John will know, I 
think). However, if enough copies (or segments) are written then all is 
OK (so *some* disks can be full); eventually too many will be full and 
you will get an error [1].


regards

Mark

[1] tested this with 4 servers and 8 devices.



Re: [Openstack] swift ringbuilder and disk size/capacity relationship

2016-03-15 Thread Peter Brouwer



On 15/03/2016 09:51, Mark Kirkwood wrote:

On 15/03/16 22:21, Peter Brouwer wrote:


On 10/03/2016 05:14, Mark Kirkwood wrote:
Hmm, I'm confused by the phrase 'partitions that point to the same 
disk':



"Partitions" is used in the Swift context, i.e. the partition scheme the
ring-builder uses.


I'm sorry, but that does not make sense. The ring builder lets you add 
*devices*. Now, a device could be a partition (e.g. /dev/sdc1) as 
opposed to a complete disk (/dev/sdc), but I'm at a loss to see where 
you are going with this, as the ring builder is essentially 
partition-unaware.

See https://ask.openstack.org/en/question/6766/what-is-a-swift-partition/





--
Regards,

Peter Brouwer, Principal Software Engineer,
Oracle Application Integration Engineering.
Phone:  +44 1506 672767, Mobile +44 7720 598 226
E-Mail: peter.brou...@oracle.com




Re: [Openstack] unrescue VM instance

2016-03-15 Thread Tomas Vondra
Balazs Varhegyi  writes:

> Hi guys,
> I have an OpenStack instance that ran out of storage on the core machine, 
> and the instance got paused.
> I cleared up space on the core machine so it has more disk now, but when I 
> start the machine I can't SSH in.
> In rescue mode I'm able to SSH in, but I want to unrescue the machine to 
> be able to use it again without creating a new instance and migrating 
> everything from this failed instance manually.
> I enabled libvirt debug logging but nothing interesting came up (or at 
> least I didn't notice); the only thing that indicates an error is:
> "Domain id=53 name='instance-0281' 
> uuid=f2715c2c-6d4e-5cf1-7606-f5552a59cb56 is tainted: host-cpu"
> "nova list" shows the machine has the correct IP set, but I'm not able to 
> ping it on the private IP nor via the floating IP.
> I traced the network configuration from libvirt.xml and it seems to be 
> configured correctly to connect to the internal bridge "br-int" on Open 
> vSwitch.
> 
> Do you have any idea what I should try to unrescue the machine and get it 
> back into the ACTIVE state?
> 
> Regards,
> Balazs Varhegyi
> 
> 

Hi!
What does the log and console show when the VM is in its normal state?
Rescue mode is not a magical solution to all problems. It just runs a
clean OS and attaches your broken one as a secondary disk. YOU have to do
the magic to rescue it.
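
For reference, the repair work inside rescue mode usually looks something like this (a rough sketch, not from this thread; the device name /dev/vdb, the partition number and the log path are assumptions, so check with lsblk first):

```shell
# Inside the rescue VM: the broken root disk is attached as a secondary device.
lsblk                              # identify the original disk (often /dev/vdb)
sudo mount /dev/vdb1 /mnt          # mount the broken root filesystem
df -h /mnt                         # confirm there is actually free space now
less /mnt/var/log/syslog           # look for boot or sshd errors
sudo umount /mnt
# Then, from a machine with the right credentials:
nova unrescue <instance-uuid>
```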
Tomas





Re: [Openstack] swift ringbuilder and disk size/capacity relationship

2016-03-15 Thread Peter Brouwer



On 15/03/2016 10:29, Mark Kirkwood wrote:

On 15/03/16 22:21, Peter Brouwer wrote:

"Partitions" is used in the Swift context, i.e. the partition scheme the
ring-builder uses. I'm assuming a whole physical disk is used, i.e. a
filesystem created on a disk using the whole physical disk.
So the ring structure provides a reference to a Swift partition and a
disk location, right?
What happens if the disk it is pointing to is full? Does Swift return
an error to the app/client, or does it try a re-lookup in an attempt to
find space elsewhere?


Ah, sorry: I see you are in fact (correctly) talking about Swift 
partitions and *not* disk partitions (this was confusing in your initial 
email).


So you are essentially asking what happens when the particular target 
disk is full. Swift tries to write objects or containers to a 
sufficient number of storage-server devices (the exact number or 
proportion may depend on the schema defined for the specific storage 
policy, replicated or erasure-coded; I haven't checked recent code, 
John will know, I think). However, if enough copies (or segments) are 
written then all is OK (so *some* disks can be full); eventually too 
many will be full and you will get an error [1].

Ah, good info. Follow-up question: assume the worst case (just to 
emphasise the situation): one copy (replication = 1) and a disk 
approaching its maximum capacity.

How can you monitor this situation, i.e. to avoid the disk-full scenario
and, if the disk is full, what type of error is returned?

BTW, thanks for your patience in sticking with me on this.


regards

Mark

[1] tested this with 4 servers and 8 devices.


--
Regards,

Peter Brouwer, Principal Software Engineer,
Oracle Application Integration Engineering.
Phone:  +44 1506 672767, Mobile +44 7720 598 226
E-Mail: peter.brou...@oracle.com




Re: [Openstack] unrescue VM instance

2016-03-15 Thread Stephen Davies

Hi Balazs,

Have you tried connecting to the image via NBD (network block device)?

1) Mount the image.
2) Repair/free some space.
3) Start the instance.

http://blog.vmsplice.net/2011/02/how-to-access-virtual-machine-image.html
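
Concretely, the NBD route from that link is roughly the following (a sketch run on the compute node; the instance disk path is an assumption, and the instance must be shut off before connecting):

```shell
sudo modprobe nbd max_part=8                 # load the NBD kernel module
sudo qemu-nbd --connect=/dev/nbd0 /var/lib/nova/instances/<uuid>/disk
sudo mount /dev/nbd0p1 /mnt                  # mount the first partition
# ... repair / free some space under /mnt ...
sudo umount /mnt
sudo qemu-nbd --disconnect /dev/nbd0
```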


Steve




On 15/03/16 11:20, Tomas Vondra wrote:


Hi!
What does the log and console show when the VM is in its normal state?
Rescue mode is not a magical solution to all problems. It just runs a
clean OS and attaches your broken one as a secondary disk. YOU have to do
the magic to rescue it.
Tomas





--
Best regards
Stephen Davies
07914 346521




Re: [Openstack] unrescue VM instance

2016-03-15 Thread Balazs Varhegyi


Hi Steve,

Thanks for the suggestion.
I already managed to clear up some space in rescue mode (the VM instance has
a 2.1T disk with 1T free) and the core machine has an extra 270G of free
disk too.

Balazs

On 15/03/16 13:56, Stephen Davies wrote:

Hi Balazs,

Have you tried connecting to the image via NBD (network block device).

1) Mount the image.
2) Repair/free some space.
3) Start the instance.

http://blog.vmsplice.net/2011/02/how-to-access-virtual-machine-image.html


Steve













Re: [Openstack] unrescue VM instance

2016-03-15 Thread Balazs Varhegyi

Thank you, Tomas, for the reply.
Yes, I want to bring it back to the normal state so it would run without 
rescue mode.
When I execute "nova rescue" and the VM gets into RESCUED, then 
/var/lib/nova/instances/{uuid}/console.log is filled,
but when I do "nova unrescue" and the VM is in the ACTIVE state this log is 
empty (I can see the VM's process on the core machine).
I wanted to use "virsh console instance-" as described here: 
http://www.jaredlog.com/?p=1484 to get a console on the machine, but hit 
this error:

https://bugs.launchpad.net/ubuntu/+source/nova/+bug/807091?comments=all

Balazs

On 15/03/16 13:20, Tomas Vondra wrote:


Hi!
What does the log and console show when the VM is in its normal state?
Rescue mode is not a magical solution to all problems. It just runs a
clean OS and attaches your broken one as a secondary disk. YOU have to do
the magic to rescue it.
Tomas







[Openstack] [keystone] mitaka release recap

2016-03-15 Thread Steve Martinelli

Before beginning, I'd like to thank all members of the Keystone community.
The Mitaka release would not be possible without the many dedicated
contributors to the Keystone project. This was a great development cycle
and I’m very happy with the finished product. Here’s the Keystone Mitaka
release at a glance.

*Features*
In total, we completed 16 blueprints/specs in the Mitaka release. We bumped
2 to the Newton release. Some of the more interesting features are:
  - Time-based one-time password (TOTP)
  - Implied roles
  - Domain specific roles
  - Shadow users
  - Bootstrap via keystone-manage
For a comprehensive list of complete features and additional information
please refer to our specs page.
Source: http://specs.openstack.org/openstack/keystone-specs/

*Community*
We added 2 new core members to the team, Dave Chen and Samuel de Medeiros
Queiroz. I’d like to once again thank them for their support during the
development cycle.
Source:
http://lists.openstack.org/pipermail/openstack-dev/2016-January/085170.html

*Bugs*
We started to perform weekly bug squashes and triages, and we _actually_ did
them, consistently too. I'm very happy to say that Keystone's bug count is
the lowest it has been in a long time, with 107 open bugs to date: a far
cry from the 300 when Mitaka started. This is a small victory that we can
be proud of, and it gives us a sense of completion.
Source: http://status.openstack.org/bugday/ and
https://bugs.launchpad.net/keystone

*Consolidation*
A large factor that made the above possible is the removal or deprecation
of certain features. This includes, but is not limited to:
  - LDAP write support for users and groups -- deprecated
  - PKI tokens -- deprecated
  - Using LDAP as a store for projects and roles -- removed
Source:
http://lists.openstack.org/pipermail/openstack-operators/2015-November/009019.html


*Libraries*
We made great strides in the adoption of keystoneauth. Both novaclient and
neutronclient are now using our new keystoneauth library, with many other
projects working on patches to do the same.
Source: https://review.openstack.org/#/c/236325/ and
https://review.openstack.org/#/c/256056/

*Release notes*
A comprehensive list of release notes can be viewed here:
http://docs.openstack.org/releasenotes/keystone/unreleased.html or here
http://docs.openstack.org/releasenotes/keystone/mitaka.html (depending on
when you view them)

To everyone who participated in contributing to Keystone during the Mitaka
release, I cannot thank you enough for your time and effort.

Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead


[Openstack] Migrating images off compute node?

2016-03-15 Thread Ken D'Ambrosio
Hey, all.  We're having some significant network issues in our Icehouse 
cloud, and I was wondering if there's a way to migrate quiescent VM 
images right off the compute node and, if there is, whether there'd be 
a problem migrating them to (say) Liberty.


Thanks kindly,

-Ken



Re: [Openstack] swift ringbuilder and disk size/capacity relationship

2016-03-15 Thread Mark Kirkwood

On 16/03/16 00:09, Peter Brouwer wrote:

See https://ask.openstack.org/en/question/6766/what-is-a-swift-partition/



Yep, I misunderstood you; see my comments in the email after that one!




Re: [Openstack] swift ringbuilder and disk size/capacity relationship

2016-03-15 Thread Mark Kirkwood

On 16/03/16 00:51, Peter Brouwer wrote:


Ah, good info. Follow-up question: assume the worst case (just to
emphasise the situation): one copy (replication = 1) and a disk
approaching its maximum capacity.
How can you monitor this situation, i.e. to avoid the disk-full scenario
and, if the disk is full, what type of error is returned?



Let's do an example: 4 storage nodes (obj1...obj4), each with 1 disk 
(vdb) added to the ring. Replication set to 1.


First, write a 1G object (to see where it is going to go). It lands on 
host obj1, disk vdb, partition 1003:


obj1 $ ls -l /srv/node/vdb/objects/1003/d31/fae796287c852f0833316a3dadfb3d31/

total 1048580
-rw--- 1 swift swift 1073741824 Mar 16 10:15 1458079557.01198.data


Then remove it:

obj1 $ ls -l /srv/node/vdb/objects/1003/d31/fae796287c852f0833316a3dadfb3d31/

total 4
-rw--- 1 swift swift 0 Mar 16 10:47 1458078463.80396.ts


...and use up space on obj1/vdb (dd a 29G file into /srv/node/vdb somewhere)

obj1 $ df -m|grep vdb
/dev/vdb   30705 29729   977  97% /srv/node/vdb


Add the object again (it ends up on obj4 instead: a handoff node):

obj4 $ ls -l /srv/node/vdb/objects/1003/d31/fae796287c852f0833316a3dadfb3d31/

total 1048580
-rw--- 1 swift swift 1073741824 Mar 16 11:06 1458079557.01198.data


So Swift is coping with the obj1/vdb disk being too full. Remove the 
object again and exhaust space on all disks (dd again):


@obj[1-4] $ df -h|grep vdb
/dev/vdb 30G   30G  977M  97% /srv/node/vdb


Now attempt to write the 1G object again:

swiftclient.exceptions.ClientException:
Object PUT failed:
http://192.168.122.61:8080/v1/AUTH_9a428d5a6f134f829b2a5e4420f512e7/con0/obj0 
503 Service Unavailable



So we get an HTTP 503 to show that the PUT has failed.


Now, regarding monitoring: out of the box, swift-recon covers this:

proxy1 $ swift-recon -dv
===
--> Starting reconnaissance on 4 hosts
===
[2016-03-16 13:16:54] Checking disk usage now
-> http://192.168.122.63:6000/recon/diskusage: [{u'device': u'vdc', 
u'avail': 32162807808, u'mounted': True, u'used': 33718272, u'size': 
32196526080}, {u'device': u'vdb', u'avail': 1024225280, u'mounted': 
True, u'used': 31172300800, u'size': 32196526080}]
-> http://192.168.122.64:6000/recon/diskusage: [{u'device': u'vdc', 
u'avail': 32162807808, u'mounted': True, u'used': 33718272, u'size': 
32196526080}, {u'device': u'vdb', u'avail': 1024274432, u'mounted': 
True, u'used': 31172251648, u'size': 32196526080}]
-> http://192.168.122.62:6000/recon/diskusage: [{u'device': u'vdc', 
u'avail': 32162807808, u'mounted': True, u'used': 33718272, u'size': 
32196526080}, {u'device': u'vdb', u'avail': 1024237568, u'mounted': 
True, u'used': 31172288512, u'size': 32196526080}]
-> http://192.168.122.65:6000/recon/diskusage: [{u'device': u'vdc', 
u'avail': 32162807808, u'mounted': True, u'used': 33718272, u'size': 
32196526080}, {u'device': u'vdb', u'avail': 1024221184, u'mounted': 
True, u'used': 31172304896, u'size': 32196526080}]

Distribution Graph:
  0%4 
*
 96%4 
*

Disk usage: space used: 124824018944 of 257572208640
Disk usage: space free: 132748189696 of 257572208640
Disk usage: lowest: 0.1%, highest: 96.82%, avg: 48.4617574245%
===


So integrating swift-recon into regular monitoring/alerting 
(collectd/nagios or whatever) is one approach (mind you, most folk 
already monitor disk usage data... and there is nothing overly special 
about ensuring you don't run out of space)!
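
As a minimal sketch of wiring that into a check (a hypothetical helper, not part of swift-recon): a shell function that scans `df -P` output for device mounts under /srv/node whose usage is at or above a threshold:

```shell
# warn_full_devices: read `df -P` output on stdin and print a WARN line for
# each swift device mount (assumed to live under /srv/node) whose usage is
# at or above the given threshold (default 90%).
warn_full_devices() {
    local threshold="${1:-90}"
    awk -v t="$threshold" '$6 ~ /^\/srv\/node\// {
        use = $5
        sub(/%/, "", use)              # strip the percent sign from Capacity
        if (use + 0 >= t) print "WARN: " $6 " at " use "%"
    }'
}

# Usage on a storage node: df -P | warn_full_devices 90
```

This could feed a nagios check or a cron mail; adjust the /srv/node pattern to your device mount layout.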




BTW, thanks for your patience in sticking with me on this.


No worries - a good question (once I finally understood it).

regards

Mark
