GutoVeronezi opened a new pull request, #6630: URL: https://github.com/apache/cloudstack/pull/6630
### Description

ACS + XenServer works with differential snapshots. ACS takes a full snapshot of a volume, and each subsequent snapshot is referenced as a child of the previous one until the chain reaches the limit defined in the global setting `snapshot.delta.max`; then, a new full snapshot is taken.

PR #5297 introduced disk-only snapshots for KVM volumes. Among the changes, the delete process was also refactored. Before the changes, when one removed a snapshot with children, ACS marked it as `Destroyed` and kept the `Image` entry in the table `cloud.snapshot_store_ref` as `Ready`. When ACS rotated the snapshots (the max delta was reached) and all the children were already marked as removed, ACS would remove the whole hierarchy, completing the differential snapshot cycle. After the changes, snapshots with children stopped being marked as removed, so the differential snapshot cycle was never completed. This PR intends to honor the differential snapshot cycle for XenServer again, marking snapshots as removed when they are deleted while still having children, and following the differential snapshot cycle.

Also, when one takes a volume snapshot and ACS backs it up to the secondary storage, ACS inserts 2 entries into the table `cloud.snapshot_store_ref` (`Primary` and `Image`). When one deletes a volume snapshot, ACS first tries to remove the snapshot from the secondary storage and mark the `Image` entry as removed; then, it tries to remove the snapshot from the primary storage and mark the `Primary` entry as removed. If ACS cannot remove the snapshot from the primary storage, it keeps the snapshot as `BackedUp`, even though the snapshot no longer exists in the secondary storage and no `SNAPSHOT.DELETE` entry was inserted into `cloud.usage_event`. In the end, after the garbage collector flow, the snapshot is marked as `BackedUp`, with a value in the field `removed`, while still being rated. This PR also addresses the correction for this situation.
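The differential snapshot cycle described above can be sketched as follows. This is a hypothetical simulation, not CloudStack code: the `Snapshot` class, function names, and the value `2` for `snapshot.delta.max` are assumptions made for illustration only.

```python
# Hypothetical sketch of the XenServer differential snapshot cycle,
# assuming the global setting snapshot.delta.max is set to 2.
SNAPSHOT_DELTA_MAX = 2

class Snapshot:
    def __init__(self, parent=None):
        self.parent = parent
        # A full snapshot has delta 0; each child increments the delta.
        self.delta = 0 if parent is None else parent.delta + 1
        self.state = "BackedUp"
        self.image_ref = "Ready"  # the `Image` entry in cloud.snapshot_store_ref

def take_snapshot(chain):
    """Take a new snapshot as a child of the last one, or start a new
    chain with a full snapshot once the delta limit is reached."""
    last = chain[-1] if chain else None
    if last is None or last.delta + 1 > SNAPSHOT_DELTA_MAX:
        snap = Snapshot()            # new full snapshot, new chain
    else:
        snap = Snapshot(parent=last) # differential child
    chain.append(snap)
    return snap

def delete_snapshot(chain, snap):
    """Deleting a snapshot that still has children only marks it as
    Destroyed; its Image entry stays Ready so the chain is preserved
    until the whole hierarchy can be removed."""
    has_children = any(s.parent is snap for s in chain)
    snap.state = "Destroyed"
    if not has_children:
        snap.image_ref = "Destroyed"

chain = []
full = take_snapshot(chain)     # full snapshot (delta 0)
child = take_snapshot(chain)    # differential child (delta 1)
delete_snapshot(chain, full)    # full still has a child
print(full.state, full.image_ref)  # Destroyed Ready
```

Once every child is also marked as removed and the chain rotates, ACS can safely delete the whole hierarchy, which is the behavior this PR restores.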
### Types of changes

- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] New feature (non-breaking change which adds functionality)
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] Enhancement (improves an existing feature and functionality)
- [ ] Cleanup (Code refactoring and cleanup, that may add test cases)

### Feature/Enhancement Scale or Bug Severity

#### Bug Severity

- [ ] BLOCKER
- [ ] Critical
- [ ] Major
- [x] Minor
- [ ] Trivial

### How Has This Been Tested?

The situation was observed in XenServer environments; however, due to some internal circumstances, I had to reproduce it in a KVM environment (considering 2 as the max delta). I created a VM and scheduled an hourly snapshot of the `ROOT` volume, retaining 2 snapshots. After ACS took the first two snapshots (and before it took the third one), I manually changed the database to set the ID of the first snapshot as the parent of the second, to simulate the XenServer differential snapshot. After the third snapshot was generated, the first one was marked as `Destroyed`, ACS generated the `SNAPSHOT.DELETE` entry in `cloud.usage_event`, and the `Primary` and `Image` entries ended up as `Destroyed` and `Ready`, respectively. After the fourth snapshot was generated, ACS identified that the second one was the last in the hierarchy and started removing the hierarchy. In the end, the entries for the first and second snapshots were marked as removed, and only the last 2 snapshots kept entries in the `Ready` state.

I also forced errors in the deletion of the snapshot in the primary and in the secondary storage. At the end of both tests, ACS inserted the `SNAPSHOT.DELETE` entries into `cloud.usage_event` and the garbage collector removed the entries from `cloud.snapshot_store_ref`.
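The forced-error test above exercises the two-step delete flow for the `cloud.snapshot_store_ref` entries. A minimal sketch of that flow, under the behavior this PR aims for (the function and parameter names are hypothetical, not CloudStack APIs):

```python
# Hypothetical sketch of deleting a volume snapshot's store entries:
# secondary storage (Image) first, then primary storage (Primary).
def delete_volume_snapshot(refs, events,
                           secondary_delete_ok=True,
                           primary_delete_ok=True):
    """refs maps the store role ('Image'/'Primary') to its entry state;
    events collects cloud.usage_event entries."""
    if secondary_delete_ok:
        refs["Image"] = "Destroyed"    # removed from secondary storage
    if primary_delete_ok:
        refs["Primary"] = "Destroyed"  # removed from primary storage
    # Intended behavior: once the snapshot is gone from the secondary
    # storage it must stop being rated, so the SNAPSHOT.DELETE usage
    # event is emitted even when the primary-storage delete fails
    # (previously the snapshot stayed BackedUp with no such event).
    if refs["Image"] == "Destroyed":
        events.append("SNAPSHOT.DELETE")
    return refs, events

# Simulate a failure to delete from the primary storage.
refs, events = delete_volume_snapshot(
    {"Image": "Ready", "Primary": "Ready"}, [],
    primary_delete_ok=False)
print(refs, events)  # Image destroyed, Primary still Ready, event emitted
```

Even with the primary-storage delete failing, the usage event is recorded and the garbage collector can later clean up the remaining `snapshot_store_ref` entries, matching the test results described above.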
