Public bug reported: I've been doing a little work with nova-volume and ceph/RBD. In order to gain some fault tolerance, I plan to run a nova-volume on each compute node. However, a problem arises: a given nova-volume host only wants to deal with requests for volumes that it itself created.
This makes perfect sense in a world where nova-volume hosts create volumes in LVM and export them over iSCSI. It makes less sense in a Ceph world, since the volumes live in the Ceph cluster and their metadata live in the nova database. But if the wrong nova-volume host goes away, some of my volumes become arbitrarily unusable.

I've hit upon a workaround that seems to work so far, although I'm not sure if it's supposed to. I am running each nova-volume on the various hosts with an identical --host flag. When running in this setup, rapid volume creation, deletion and attachment requests are splayed nicely across the nova-volume instances.

(A less brutal hack might be to teach nova-volume to call into the volume driver to check whether it has its own notion of what the host flag ought to be -- the RBD driver, for example, could construct a string such as "ceph:67670443-07ad-4ce3-bdb8-75e9a14562f9:rbd" by probing the Ceph cluster for its fsid, which ought to be unique, and then appending the name of the RADOS pool in which it is creating RBDs.)

** Affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Tags: canonistack

--
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1028718

Title:
  nova volumes are inappropriately clingy for ceph

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1028718/+subscriptions
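As a postscript, a minimal sketch of the driver-derived host string suggested above. This is not actual nova code; the function name is hypothetical, and in a real driver the fsid would be probed from the cluster (e.g. via `ceph fsid` on the command line, or `get_fsid()` on a connected `rados.Rados` handle) rather than passed in as an argument:

```python
# Hypothetical sketch: derive a cluster-scoped "host" identifier for the
# RBD volume driver, in the ceph:<fsid>:<pool> format proposed above.
# The fsid is taken as a parameter here so the sketch stays runnable
# without a live Ceph cluster; a real driver would probe it itself.

def rbd_volume_host(fsid: str, pool: str = "rbd") -> str:
    """Build a host string like 'ceph:<fsid>:<pool>'.

    Every nova-volume instance talking to the same Ceph cluster and
    pool derives the same value, so volume requests need no longer be
    pinned to whichever node happened to create the volume.
    """
    return "ceph:%s:%s" % (fsid, pool)

print(rbd_volume_host("67670443-07ad-4ce3-bdb8-75e9a14562f9"))
# ceph:67670443-07ad-4ce3-bdb8-75e9a14562f9:rbd
```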