On 10/13/2015 10:39 PM, Eric Harney wrote:
On 10/13/2015 02:57 PM, Dmitry Guryanov wrote:
Hello,

RemoteFS drivers combine two logical tasks. The first is how to mount
a filesystem and select the proper share for a new or existing volume. The
second is how to deal with image files in a given directory (mount
point): create, delete, create snapshot, etc.

The first part is different for each volume driver. The second is the
same for all volume drivers, but it depends on the selected volume format:
you can create a qcow2 file on NFS or smbfs with the same code.

Since there are several volume formats (raw, qcow2, vhd and possibly
some others), I propose moving the code that works with images into
separate classes, 'VolumeFormat' handlers.
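
Roughly, I imagine an interface along these lines (a simplified sketch
only; names and signatures in the actual patch may differ):

    import subprocess


    class VolumeFormat(object):
        """Handles image files of one format on an already-mounted share.

        The driver stays responsible for mounting and share selection;
        the handler only deals with files under the mount point.
        """

        def create_volume(self, volume_path, size_gb):
            raise NotImplementedError()

        def create_snapshot(self, active_path, backing_path):
            raise NotImplementedError()


    class Qcow2Format(VolumeFormat):
        def create_volume(self, volume_path, size_gb):
            # The same qemu-img call works regardless of which filesystem
            # (NFS, smbfs, vzstorage, ...) the path lives on.
            subprocess.check_call(['qemu-img', 'create', '-f', 'qcow2',
                                   volume_path, '%dG' % size_gb])

        def create_snapshot(self, active_path, backing_path):
            # Offline snapshot: a new delta file backed by the previous one.
            subprocess.check_call(['qemu-img', 'create', '-f', 'qcow2',
                                   '-b', backing_path, active_path])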

This change has three advantages:

1. Duplicated code will be removed from the remotefs drivers.
2. All drivers will support all volume formats.
3. New volume formats can be added easily, including non-qcow2 snapshots.

Here is a draft version of a patch:
https://review.openstack.org/#/c/234359/

Although there are still problems in it, most volume operations work and
there are only about 10 failures in Tempest.


I'd like to discuss this approach before further work on the patch.

I've only taken a quick look, but a few comments:

IMO it is not a good idea to work on extending support for volume
formats until we get further on having Cinder manage data in different
formats in a robust and secure manner [1]. We should fix that problem
before making it a worse problem.

I store the volume format in metadata in my patch, so I completely agree
that this spec should be implemented before adding new formats.
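
For illustration, something like this at create time (whether user-visible
or admin metadata is the right place is exactly what the spec should settle
first; the 'volume_format' key is made up):

    from cinder import db


    def record_volume_format(context, volume_id, fmt):
        # Remember the on-disk format when the volume is created, so
        # later operations can dispatch on it instead of sniffing the file.
        db.volume_admin_metadata_update(
            context, volume_id, {'volume_format': fmt}, False)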


Points 2 and 3 above aren't really that straightforward.  For example,
calling delete_snapshot_online only works if Nova/libvirt/etc. support
managing the format you are using.  This is fine for the current uses,
because qcow2 is well-supported.  Adding this to a driver using a
different/new file format will likely not work, so combining all of the
code is questionable, even if it seems more nicely organized.

Yes, there is a problem with online snapshots and hypervisors other than
libvirt/qemu: the API between Cinder and Nova is strongly tied to the
qcow2 format and to how snapshots are implemented in libvirt. But can we
make online snapshots optional? I think it's better to support only
offline snapshots than not to support them at all.
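
Something along these lines, purely as a sketch (the attribute and method
names are made up):

    class SnapshotDispatchMixin(object):
        """Sketch: treat online snapshots as an optional capability."""

        def create_snapshot(self, snapshot):
            attached = snapshot['volume']['status'] == 'in-use'
            if attached and not self.format_handler.supports_online_snapshots:
                # No Nova-assisted (qcow2-style) path for this format, so
                # refuse cleanly instead of failing half-way through.
                raise NotImplementedError(
                    'online snapshots are not supported for this '
                    'volume format')
            if attached:
                return self._create_snapshot_online(snapshot)
            return self._create_snapshot_offline(snapshot)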


Point #2 assumes that there's a reason that someone would want to use
currently unsupported combinations such as NFS + VHD or SMB + qcow2.
The specific file format being used is not terribly interesting other
than in the context of what a hypervisor supports, and we don't need
more not-so-well-tested combinations for people to deploy.  So why
enable this?

The main use case I want to implement is using vzstorage with our
hypervisor, Virtuozzo containers (known as OpenVZ in the open source
community). At this point it supports only our image format.

Usually the base image and all snapshots are placed in a separate
directory, which contains a file called DiskDescriptor.xml. This file
describes the snapshot hierarchy and contains paths to all images (base
and deltas). DiskDescriptor.xml can point to images outside this
directory, but we don't test that regularly. Unlike qcow2, the deltas
don't contain the path to the base image; this information is stored only
in DiskDescriptor.xml.

qemu-img supports our image format without snapshots; it is called
'parallels' there. It can only convert the image files; DiskDescriptor.xml
has to be created with the ploop tool.
Here is a description of the image format: https://openvz.org/Ploop/format
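
For example, creating the ploop image from an existing qcow2 image could
look roughly like this (the paths and the 'root.hds' name are illustrative;
it needs a qemu-img new enough to write the 'parallels' format, and the
DiskDescriptor.xml step via ploop is omitted):

    import os
    import subprocess


    def qcow2_to_ploop_image(src_qcow2, dst_dir):
        # qemu-img only produces the image/delta file itself;
        # DiskDescriptor.xml still has to be generated with ploop.
        os.makedirs(dst_dir)
        dst_image = os.path.join(dst_dir, 'root.hds')
        subprocess.check_call(['qemu-img', 'convert', '-O', 'parallels',
                               src_qcow2, dst_image])
        return dst_image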

We will definitely run CI on this combination.


We've already gone somewhat in the other direction with [2], which
removed the ability to configure the GlusterFS driver to use qcow2
volumes, and instead just lets you choose if you want thick or thinly
provisioned volumes, leaving the format choice as an implementation
detail rather than a deployment choice.  (It still uses qcow2 behind the
scenes.)  I think that's the right direction.

What if someone has QEMU and Hyper-V compute nodes in the same
OpenStack cluster? I think the most convenient way would be to create a
volume type for each hypervisor and specify the volume format in the type.
(This doesn't work now because of the metadata filter in the scheduler.)
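
For example, with python-cinderclient the deployment flow could look like
this (the 'volume_format' extra-spec key is made up; the scheduler knows
nothing about it today):

    from cinderclient import client

    # Credentials and endpoint are placeholders.
    cinder = client.Client('2', 'admin', 'secret', 'admin',
                           'http://controller:5000/v2.0')

    # One volume type per hypervisor, carrying the format as an extra spec.
    qemu_type = cinder.volume_types.create('qemu-volumes')
    qemu_type.set_keys({'volume_format': 'qcow2'})

    hyperv_type = cinder.volume_types.create('hyperv-volumes')
    hyperv_type.set_keys({'volume_format': 'vhd'})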


[1] https://review.openstack.org/#/c/165393/
[2] https://review.openstack.org/#/c/164527/

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

