On Tue, Jun 16, 2015 at 04:21:16PM -0500, Matt Riedemann wrote:
> The NFS, GlusterFS, SMBFS, and Quobyte libvirt volume drivers are all very
> similar.
> 
> I want to extract a common base class that abstracts some of the common code
> and then let the sub-classes provide overrides where necessary.
> 
> As part of this, I'm wondering if we could just have a single
> 'mount_point_base' config option rather than one per backend like we have
> today:
> 
> nfs_mount_point_base
> glusterfs_mount_point_base
> smbfs_mount_point_base
> quobyte_mount_point_base
> 
> With libvirt you can only have one of these drivers configured per compute
> host, right?  So it seems to make sense that we could have one option used
> for all four driver implementations and reduce some of the config option
> noise.

Doesn't Cinder support multiple different backends being used at the same
time? I was always under the impression that it did, and thus Nova has to be
capable of using any of its volume drivers concurrently.
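
To illustrate what I mean, here's a rough sketch of a shared base class that
still keeps the per-backend mount_point_base options; the class names, option
defaults and method signatures below are simplified guesses for illustration,
not the actual Nova code:

    import hashlib
    import os

    from oslo_config import cfg

    # Registered here only so the sketch is self-contained; Nova already
    # defines these options with its own defaults and help text.
    fs_opts = [
        cfg.StrOpt('nfs_mount_point_base', default='/mnt/nova/nfs'),
        cfg.StrOpt('glusterfs_mount_point_base', default='/mnt/nova/glusterfs'),
    ]
    CONF = cfg.CONF
    CONF.register_opts(fs_opts, group='libvirt')


    class LibvirtMountedFSVolumeDriver(object):
        """Hypothetical shared base for the mounted-filesystem drivers.

        Each subclass passes in its own mount_point_base value, so several
        backends can be attached on one compute host with separate mount
        roots -- the per-backend options don't have to be collapsed into one.
        """

        def __init__(self, fs_type, mount_point_base):
            self.fs_type = fs_type
            self.mount_point_base = mount_point_base

        def _mount_path(self, export):
            # Hash the export string so each share gets a stable
            # subdirectory under this backend's mount root.
            digest = hashlib.sha1(export.encode('utf-8')).hexdigest()
            return os.path.join(self.mount_point_base, digest)


    class LibvirtNFSVolumeDriver(LibvirtMountedFSVolumeDriver):
        def __init__(self):
            super(LibvirtNFSVolumeDriver, self).__init__(
                'nfs', CONF.libvirt.nfs_mount_point_base)


    class LibvirtGlusterfsVolumeDriver(LibvirtMountedFSVolumeDriver):
        def __init__(self):
            super(LibvirtGlusterfsVolumeDriver, self).__init__(
                'glusterfs', CONF.libvirt.glusterfs_mount_point_base)

That way the common mount-path handling lives in one place, but a deployment
talking to, say, both an NFS and a GlusterFS Cinder backend can still give
each backend its own mount root.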

> Are there any concerns with this?

Not a concern, but since we removed the 'volume_drivers' config parameter,
we're now free to re-arrange the code too. I'd like us to create a subdir
nova/virt/libvirt/volume and create one file in that subdir per driver
that we have.
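
Roughly something like this (the module split and the dotted paths below are
just my guess at how it could look, not a settled layout):

    # nova/virt/libvirt/volume/
    #     __init__.py      (could hold a thin compat shim for old import paths)
    #     fs.py            shared mounted-filesystem base class
    #     nfs.py           LibvirtNFSVolumeDriver
    #     glusterfs.py     LibvirtGlusterfsVolumeDriver
    #     smbfs.py         LibvirtSMBFSVolumeDriver
    #     quobyte.py       LibvirtQuobyteVolumeDriver
    #
    # with the libvirt driver's internal volume driver table updated to the
    # new dotted paths, e.g.:
    libvirt_volume_drivers = [
        'nfs=nova.virt.libvirt.volume.nfs.LibvirtNFSVolumeDriver',
        'glusterfs=nova.virt.libvirt.volume.glusterfs.LibvirtGlusterfsVolumeDriver',
        'smbfs=nova.virt.libvirt.volume.smbfs.LibvirtSMBFSVolumeDriver',
        'quobyte=nova.virt.libvirt.volume.quobyte.LibvirtQuobyteVolumeDriver',
    ]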

> Is a blueprint needed for this refactor?

Not from my POV. We've just done a huge libvirt driver refactor by adding
the Guest.py module without any blueprint.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
