ceph auth settings with this:
http://docs.ceph.com/docs/master/rbd/rbd-openstack/#setup-ceph-client-authentication
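The linked guide walks through creating dedicated Ceph users for the OpenStack services. As a sketch of the kind of commands it describes (the client and pool names here follow that guide's examples and may differ in your deployment):

```shell
# Create a restricted Ceph user for Glance with access to the images pool
# (illustrative; run against your own cluster and pool names).
ceph auth get-or-create client.glance mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

# Ship the resulting keyring to the Glance host.
ceph auth get-or-create client.glance | ssh glance-host \
    sudo tee /etc/ceph/ceph.client.glance.keyring
```

These commands need a running Ceph cluster; the guide above covers the equivalent users for Cinder as well.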
Josh
I’m certainly not in a position to be able to contribute to a code
change. I’m surprised this hasn’t been done already; it seems terribly
inefficient to have to copy the images.
Yes, that should be your password: the same value as you would use for the
OS_PASSWORD environment variable if using the nova CLI.
If you'd like more help with the Zenoss setup, please contact me directly.
--Josh
On Wed, Oct 15, 2014 at 5:49 PM, Guillermo Alvarado <
guillermoalvarad...@g
blueprints.launchpad.net/cinder/+spec/generic-volume-migration
Josh
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
re it's storing the image. That location comes
from Glance's db.
Does your glance-api.conf contain rbd_store_pool=images in the
[DEFAULT] section?
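For reference, a minimal RBD store configuration as it might appear in glance-api.conf on a release of that era (the user and pool names are illustrative, not taken from this thread):

```
[DEFAULT]
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
```

If rbd_store_pool is missing or points at a different pool, Glance will record image locations that don't match where you expect them.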
Josh
The installation is OpenStack Havana on Debian Wheezy, 2013.2.2-2~bpo70+1.
If more info is needed, let me know.
Thanks.
Hi Giuseppe,
The mission statement for trove was recently changed and approved by the
technical committee to include both relational and non-relational databases.
There are several blueprints in place for non-relational data stores with some
implementation work also in flight.
-Josh
On 08/29/2013 11:13 PM, Mark Chaney wrote:
Full disclosure, I have zero experience with openstack so far.
If I am going to use a Ceph RBD cluster to store my guest instances, how
should I be doing backups?
1) I would prefer them to be incremental so that a whole backup doesn't
have to happen eve
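Though not spelled out in this thread, one common approach to incremental backups with RBD is snapshot-based diff export. A sketch (the pool, image, and snapshot names are made up for illustration):

```shell
# Take a new snapshot, then export only the changes since the previous one.
rbd snap create volumes/vm-disk@backup-2
rbd export-diff --from-snap backup-1 volumes/vm-disk@backup-2 vm-disk.diff

# Replay the diff onto a copy in a backup pool or another cluster.
rbd import-diff vm-disk.diff backup-pool/vm-disk
```

This keeps each backup proportional to the data changed since the last snapshot rather than the full image size; it requires a Ceph cluster to run.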
ng containing references to the volume
snapshots.
Depending on why you want the location, using cinder snapshots of
volumes directly may be simpler than inspecting an image like this.
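As a sketch of that simpler alternative, snapshotting the volume through Cinder rather than inspecting the image (the volume and snapshot names here are hypothetical):

```shell
# Snapshot a Cinder volume directly, letting Cinder track the RBD details.
cinder snapshot-create --display-name nightly-snap my-volume
cinder snapshot-list
```

Cinder then owns the bookkeeping for where the snapshot lives, so you never need the backend location at all.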
Josh
Aditional info
# glance index
ID    Name    Disk Format    Container Format
That depends on how big the VM is and what oversubscription ratio you're
running at.
Generally RAM is going to be your limiting factor; it is fairly easy to
oversubscribe CPU, and if your disk really is unlimited, that isn't going to
stop you either.
Assuming you're not oversubscribing RAM
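To make the capacity arithmetic concrete, a small sketch. The host figures are invented, and the allocation ratios assume nova's long-standing defaults (cpu_allocation_ratio 16.0, ram_allocation_ratio 1.5), not values from this thread:

```shell
# Effective schedulable capacity = physical capacity * allocation ratio.
physical_cores=16
cpu_allocation_ratio=16          # nova default cpu_allocation_ratio (16.0)
physical_ram_mb=65536            # 64 GiB of host RAM
ram_ratio_x10=15                 # nova default 1.5, scaled by 10 to stay integer

vcpus=$((physical_cores * cpu_allocation_ratio))
ram_mb=$((physical_ram_mb * ram_ratio_x10 / 10))

echo "schedulable vCPUs: $vcpus"   # 256
echo "schedulable RAM MB: $ram_mb" # 98304
```

With those defaults, RAM runs out long before CPU does, which is the point being made above.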
image_id
parameter, but cinder-volume gets None instead, which looks weird.
Yes, that's the problem. Could you report this as a bug against Grizzly
in Launchpad? It would affect all backends if it's a regression in the
scheduler.
Josh
Any pointer would be much appreciated, thanks for