This is what I thought of as well. In the rbd driver, if a request to
delete a volume comes in while the volume object on the backend still has
other objects that depend on it, the driver simply renames it:
https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/rbd.py#L657
There is also code to
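A minimal sketch of that rename-on-delete pattern (all names below are illustrative stand-ins, not the real rbd driver API): when a hard delete fails because the object still has dependents, the driver parks the volume under a new name for later cleanup.

```python
# Sketch of "rename instead of delete" when a volume still has dependents.
# BusyVolumeError, FakeBackend and delete_volume are illustrative only.

class BusyVolumeError(Exception):
    """Raised when a backend object still has dependent objects."""

class FakeBackend:
    def __init__(self):
        # volume name -> metadata, including objects cloned from it
        self.volumes = {"vol-1": {"clones": ["clone-1"]}}

    def delete(self, name):
        if self.volumes[name]["clones"]:
            raise BusyVolumeError(name)
        del self.volumes[name]

    def rename(self, old, new):
        self.volumes[new] = self.volumes.pop(old)

def delete_volume(backend, name):
    """Try a real delete; if the object is busy, rename it for later cleanup."""
    try:
        backend.delete(name)
    except BusyVolumeError:
        backend.rename(name, name + ".deleted")

backend = FakeBackend()
delete_volume(backend, "vol-1")
print(sorted(backend.volumes))  # → ['vol-1.deleted']
```

The user-visible delete "succeeds" immediately, while the parked object can be garbage-collected once its dependents are gone.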
On Thu, Jun 19, 2014 at 11:01 AM, Amit Das wrote:
> Hi All,
>
> Thanks for clarifying the Cinder behavior w.r.t a snapshot & its clones
> which seems to be independent/decoupled.
> The current volume & its snapshot based validations in Cinder hold true
> for a snapshot & its clones w.r.t my storage requirements.
So these are all features that various other backends manage to
implement successfully.
Your best point of reference might be the Ceph code - I believe it
deals with very similar issues in various ways.
On 19 June 2014 18:01, Amit Das wrote:
> Hi All,
>
> Thanks for clarifying the Cinder behavior w.r.t a snapshot & its clones
> which seems to be independent/decoupled.
Hi All,
Thanks for clarifying the Cinder behavior w.r.t a snapshot & its clones
which seems to be independent/decoupled.
The current volume & its snapshot based validations in Cinder hold true
for a snapshot & its clones w.r.t my storage requirements.
Our storage is built on top of the ZFS filesystem.
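For context, the backend constraint in play here can be modelled roughly as follows. This is a toy model, not real ZFS bindings: on ZFS, `zfs destroy` refuses to remove a snapshot that still has dependent clones until the clones are destroyed (or promoted).

```python
# Toy model of the ZFS dependency rule (not real ZFS bindings): a snapshot
# with dependent clones cannot be destroyed until the clones are gone.

class SnapshotHasClones(Exception):
    pass

class ZfsModel:
    def __init__(self):
        # dataset@snapshot -> list of clones created from it
        self.clones_of = {"pool/vol@snap1": ["pool/clone1"]}

    def destroy_snapshot(self, snap):
        deps = self.clones_of.get(snap, [])
        if deps:
            raise SnapshotHasClones(
                "%s has dependent clones: %s" % (snap, ", ".join(deps)))
        self.clones_of.pop(snap, None)

    def destroy_clone(self, snap, clone):
        self.clones_of[snap].remove(clone)

zfs = ZfsModel()
try:
    zfs.destroy_snapshot("pool/vol@snap1")   # refused: a clone exists
except SnapshotHasClones as exc:
    print(exc)

zfs.destroy_clone("pool/vol@snap1", "pool/clone1")
zfs.destroy_snapshot("pool/vol@snap1")       # succeeds once clones are gone
```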
On Tue, Jun 17, 2014 at 10:50 PM, Amit Das wrote:
> Hi Stackers,
>
> I have been implementing a Cinder driver for our storage solution & am
> facing issues with the scenario below.
> 
> Scenario - When a user/admin tries to delete a snapshot that has
> associated clone(s), an error message/log should be shown to the user.
It is defined by the expected behaviour of Cinder... and changing that
is hard. Allowing the driver to give feedback is also hard, since the
API returns long before the driver gets called, so there really isn't
an easy route to send feedback; a 'last status' field or similar
in the volume/snapshot object would be one option.
I agree with Amit on this. There needs to be a way for the driver to
indicate that an operation is not currently possible and to include some
descriptive message explaining why. Right now the volume manager assumes
certain behavioral constraints (e.g. that snapshots are completely
decoupled from clones).
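As a sketch of what that feedback path could look like (simplified stand-ins only; Cinder does have a SnapshotIsBusy exception along these lines, but the classes below are not the real Cinder code): the driver raises a typed "busy" exception carrying a reason, and the manager keeps the snapshot usable and records the message instead of dropping it into an error state.

```python
# Sketch of a driver-to-manager feedback channel. SnapshotIsBusy, Driver
# and Manager are simplified stand-ins, not the real Cinder classes.

class SnapshotIsBusy(Exception):
    def __init__(self, reason):
        super().__init__(reason)
        self.reason = reason

class Driver:
    def delete_snapshot(self, snap):
        if snap["clones"]:
            raise SnapshotIsBusy(
                "snapshot %s has %d dependent clone(s)"
                % (snap["id"], len(snap["clones"])))
        snap["status"] = "deleted"

class Manager:
    def delete_snapshot(self, driver, snap):
        try:
            driver.delete_snapshot(snap)
        except SnapshotIsBusy as exc:
            # keep the snapshot usable and record why deletion failed
            snap["status"] = "available"
            snap["last_error"] = exc.reason

snap = {"id": "snap-1", "clones": ["clone-1"], "status": "available"}
Manager().delete_snapshot(Driver(), snap)
print(snap["status"], "-", snap["last_error"])
# → available - snapshot snap-1 has 1 dependent clone(s)
```

The point is that a typed exception plus a recorded reason gives the user something actionable, rather than a bare error_deleting state.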
On 10:20 Wed 18 Jun , Amit Das wrote:
> Implementation issues - If the Cinder driver throws an exception, the snapshot
> will have error_deleting status & will not be usable. If the Cinder driver logs
> the error silently, then OpenStack will probably mark the snapshot as
> deleted.
>
> What is the appropriate approach here?
Hi Stackers,
I have been implementing a Cinder driver for our storage solution & am facing
issues with the scenario below.
Scenario - When a user/admin tries to delete a snapshot that has associated
clone(s), an error message/log should be shown to the user stating that 'There
are clones associated with this snapshot'.
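A driver-side guard for this scenario might look roughly like the following. CloneDependencyError, list_clones and the stub backend are hypothetical names for illustration, not a real driver API.

```python
# Illustrative guard: refuse to delete a snapshot that still has clones,
# and carry a descriptive, user-facing message in the exception.

class CloneDependencyError(Exception):
    pass

class StubBackend:
    """Stand-in for a real storage backend (hypothetical API)."""
    def __init__(self, clones):
        self._clones = dict(clones)

    def list_clones(self, snapshot_id):
        return self._clones.get(snapshot_id, [])

    def destroy(self, snapshot_id):
        self._clones.pop(snapshot_id, None)

def delete_snapshot(backend, snapshot_id):
    clones = backend.list_clones(snapshot_id)
    if clones:
        raise CloneDependencyError(
            "There are clones associated with snapshot %s: %s"
            % (snapshot_id, ", ".join(clones)))
    backend.destroy(snapshot_id)

backend = StubBackend({"snap-1": ["clone-1"]})
try:
    delete_snapshot(backend, "snap-1")
except CloneDependencyError as exc:
    print(exc)   # the message a user/admin should see
```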