Dave,

OpenStack uses the "qemu-img snapshot" command to create a snapshot; here's the method:

https://github.com/openstack/nova/blob/stable/folsom/nova/virt/libvirt/utils.py#L335-L347

So the memory is _not_ saved, only the disk is. Note that it's always hard to make a consistent snapshot. I assume that
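As a rough sketch of what that nova helper boils down to: it just shells out to qemu-img with the snapshot subcommand, so only the disk state is captured. The function name below is mine for illustration, not nova's actual API.

```python
# Hypothetical sketch of the disk-only snapshot call nova makes via
# qemu-img. Guest memory is NOT saved; only the disk image is.
def qemu_img_snapshot_cmd(disk_path, snapshot_name, delete=False):
    """Build the qemu-img argument list for a disk-only snapshot.

    -c creates the internal snapshot, -d deletes it; nova runs the
    resulting command via its execute() utility.
    """
    flag = '-d' if delete else '-c'
    return ['qemu-img', 'snapshot', flag, snapshot_name, disk_path]
```

Since the guest isn't involved at all, nothing flushes in-flight writes or application buffers, which is exactly why consistency is hard without freezing or quiescing the guest first.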
Good to know that it also works for the RBD qemu driver. I'm not really surprised though :).

Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood."
PHONE: +33 (0)1 49 70 99 72 – MOBILE: +33 (0)6 52 84 44 70
EMAIL: sebastien@enovance.com – SKYPE: han.sbastien
ADDRESS: 10, rue de la Victoire – 75009 Paris
WEB: www.enovance.com – TWITTER: @enovance

On Mar 28, 2013, at 2:08 PM, Mark Nelson mark.nel...@inktank.com wrote:
On 03/28/2013 04:34 AM, Sebastien Han w
Yes I will :-), thank you for pointing this out to me.

Sébastien Han
Ok,

I just noticed that the documentation seems to be wrong; the correct command to find the location of an object is:

$ ceph osd map pool-name object-name

Then, the error that you raised is pretty strange, because even if the object doesn't exist, the command will still calculate its eventual location. Could
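The reason the command works even for a nonexistent object is that placement is a pure computation: Ceph hashes the object name into a placement group, then CRUSH maps that PG to OSDs, with no lookup of the object itself. A toy illustration of that first step (md5 here is only a stand-in for Ceph's actual rjenkins1 hash):

```python
import hashlib

def pg_for_object(object_name, pg_num):
    """Toy placement: hash the object name into one of pg_num groups.

    The point is that the mapping is deterministic and needs no
    metadata lookup, so it works for objects that don't exist yet.
    """
    h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    return h % pg_num
```

Writing the object later simply lands it in the location the same computation already predicted.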
On Mar 27, 2013, at 11:36 PM, Sebastien Han sebastien@enovance.com wrote:
Hi,

Storing the image as a single object with RADOS or RGW results in one big object stored somewhere in Ceph. With RBD, however, the image is spread across thousands of objects distributed over the whole cluster. In the end, you get far more performance out of RBD, since you intensively use the entire
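To make the striping concrete: an RBD image is chopped into fixed-size objects (4 MB by default), so any byte offset maps to a backing object arithmetically, and I/O fans out across all the OSDs holding those objects. A minimal sketch (the helper name is mine, not librbd's):

```python
def rbd_object_index(offset, object_size=4 * 1024 * 1024):
    """Return which backing object holds the given byte offset.

    With the default 4 MB object size, a 40 GB image is spread over
    10240 objects, each placed independently by CRUSH.
    """
    return offset // object_size
```

A single large RADOS object, by contrast, lives on one set of OSDs, so all reads and writes hit the same few disks.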
Hi,

* Edit `ceph.conf` and add an MDS section like so:

    [mds]
    mds data = /var/lib/ceph/mds/mds.$id
    keyring = /var/lib/ceph/mds/mds.$id/mds.$id.keyring

    [mds.0]
    host = {hostname}

* Create the authentication key (if you use cephx):

    $ sudo ceph auth get-or-create mds.0 mon 'allow rwx' mds 'allow *' osd 'allow *'