On 10/04/2013 04:11 AM, Caitlin Bestler wrote:
> On Oct 3, 2013 1:45 PM, "Chris Friesen" <chris.frie...@windriver.com> wrote:
>> On 10/03/2013 02:02 PM, Caitlin Bestler wrote:
>>> On October 3, 2013 12:44:50 PM Chris Friesen <chris.frie...@windriver.com> wrote:
>>>> I was wondering if there is any interest in adding an
>>>> "on_shared_storage" field to the Instance class. This would be set
>>>> once at instance creation time and we would then be able to avoid
>>>> having the admin manually pass it in for the various API calls
>>>> (evacuate/rebuild_instance/migration/etc.)
>>>
>>> *What* is on shared storage?
>>>
>>> The boot drive?
>>> A snapshot of the running VM?
> Meaning that this is not an attribute of the instance; it is an
> attribute of the Cinder drive, or more precisely of the Volume Driver
> responsible for that drive.
Booting an instance from a cinder volume is only one way of getting shared storage. (And yes, any instance booting from a cinder volume could be considered to be on shared storage--but the existing code doesn't use that knowledge.)
The compute node can mount a shared filesystem and store the instance files on it, and all instances on that compute node would be on shared storage. The "evacuate" code currently requires the admin to specify whether the instance files are shared or not--which means the admin potentially needs to look up the instance, figure out what node it's on, and check whether the files are shared. Interestingly, when a compute node comes back up it actually creates temporary files to see whether instances are shared or not so that it can delete the ones that aren't shared--it'd be way more efficient to just store that information once at instance creation.
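For reference, the startup check boils down to something like the sketch below (simplified, not the actual nova code; the real version does the remote existence check over the compute RPC API rather than through a callback):

    import os
    import uuid

    def instance_on_shared_storage(instance_path, remote_exists):
        """Probe whether instance_path lives on shared storage.

        Drop a uniquely-named temp file in the instance directory and
        ask another compute host (via the remote_exists callback, which
        stands in for the compute RPC call) whether it can see it.  If
        the other host sees the file, the directory is shared.
        """
        probe = os.path.join(instance_path, str(uuid.uuid4()))
        open(probe, 'w').close()
        try:
            return remote_exists(probe)
        finally:
            os.unlink(probe)

Storing the answer on the instance at creation time would let us skip all of that on restart.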
The existing "host-evacuate" command only works if all instances on a given compute node are either shared or not shared. If some of them are local and some boot from cinder volumes then you have to evacuate them one at a time until the remaining ones are all of the same time.
> Further, the question can actually be complex. Is a thin local volume
> backed by a remote volume "local"? If so, at what hit rate for the
> local cache?
For the purposes of the "evacuate" command, this would be local storage because the thin volume (containing all the instance-specific data) would be lost if the compute node goes down.
Maybe "on_shared_storage" is too generic, and "instance_shared_storage" would be more accurate. I'm not hung up on the name, but I think it would be good for the instance itself to track whether or not its rootfs is persistent over compute node failure rather than forcing the admin to remember it.
Chris