Hello, 

You can check the snapshot status in the snapshots table in the DB. You can also 
verify the status in the snapshot_store_ref table.
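
For a quick look from the management server, something like the sketch below should 
work; the exact column names (status, removed, state, store_role) are assumptions 
based on a typical "cloud" schema, so verify them against your version:

    # Active (not yet removed) snapshots and their status
    # (column names assumed, adjust for your schema version)
    mysql -u cloud -p cloud -e "SELECT id, name, status, removed FROM snapshots WHERE removed IS NULL;"

    # Where each snapshot copy sits (secondary/image store) and its state there
    mysql -u cloud -p cloud -e "SELECT snapshot_id, store_id, store_role, state FROM snapshot_store_ref;"
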
Yes, CloudStack runs a cleanup job; how often it runs depends on what you have 
defined in these global settings:

storage.cleanup.delay - Determines how long (in seconds) to wait before actually 
expunging destroyed volumes. The default value equals the value of 
storage.cleanup.interval. (Advanced; default: 86400)

storage.cleanup.enabled - Enables/disables the storage cleanup thread. 
(Advanced; default: true)

storage.cleanup.interval - The interval (in seconds) to wait before running the 
storage cleanup thread. (Advanced; default: 86400)

With these defaults, the cleanup thread runs once every 24 hours.
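
If you prefer to check or adjust these from the API rather than the UI, here is a 
minimal sketch using CloudMonkey (assuming cmk is already configured against your 
management server):

    # Check the current cleanup-related values
    cmk list configurations name=storage.cleanup.interval
    cmk list configurations name=storage.cleanup.delay
    cmk list configurations name=storage.cleanup.enabled

    # Example only: lower the interval to 6 hours (the value is in seconds)
    cmk update configuration name=storage.cleanup.interval value=21600
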



Vivek Kumar
Sr. Manager - Cloud & DevOps 
IndiQus Technologies
M +91 7503460090 
www.indiqus.com




> On 21-Oct-2021, at 4:38 PM, Yordan Kostov <yord...@nsogroup.com> wrote:
> 
> Here is another thing I noticed:
> - Have a VM with volume snapshots.
> - Expunge the VM so the disk is removed as well.
> - Check the secondary storage - the backup still remains.
> 
> Does anyone know if CloudStack does a cleanup later, or will the orphaned 
> backups remain?
> 
> Regards,
> Jordan 
> 
> -----Original Message-----
> From: benoit lair <kurushi4...@gmail.com> 
> Sent: Tuesday, October 19, 2021 1:22 PM
> To: users@cloudstack.apache.org
> Subject: Re: Size of the snapshots volume
> 
> Hello Yordan,
> 
> I had the same results with XCP-ng 8.2 and ACS 4.15.1.
> 
> The maximum space filled during the life of the disk will be the size of the snapshot.
> 
> That's why I am looking towards an SDS solution that would give me the 
> possibility of doing thin provisioning with XCP-ng. I was thinking about an SDS 
> which could give me block storage, or at least file storage, and act as a proxy 
> between my iSCSI array and my XCP-ng hosts.
> 
> Linstor could be a solution, but for the moment I don't know if the plugin 
> will be compatible with XCP-ng.
> 
> Regards, Benoit
> 
> On Tue, 19 Oct 2021 at 11:46, Yordan Kostov <yord...@nsogroup.com> wrote:
> 
>> Hello Benoit,
>> 
>>        Here are some results - 4.15.2 + XCP-ng. I made 2 VMs from a 
>> template - CentOS 7, 46 GB HDD, 4% full:
>>        - VM1 - root disk is as full as the template.
>>        - VM2 - root disk is filled up to ~90% ( cat /dev/zero > test_file1 ), 
>> then the file was removed so the used space is again 4%.
>>        - A scheduled backup goes through both VMs. The first snapshot sizes are:
>>                - VM1 - 2.3G
>>                - VM2 - 41G
>>        - Then on VM2 this script was run to fill and empty the disk again:
>> cat /dev/zero > /opt/test_file1; sync; rm /opt/test_file1
>>        - A scheduled backup goes through both VMs again. The total size of all 
>> snapshots is:
>>                - VM1 - 2.3G
>>                - VM2 - 88G
>> 
>>        Once the disk has been filled, you will get a snapshot no smaller 
>> than the whole disk.
>>        Maybe there is a way to shrink it, but I could not find one.
>> 
>> Best regards,
>> Jordan
>> 
>> -----Original Message-----
>> From: Yordan Kostov <yord...@nsogroup.com>
>> Sent: Tuesday, October 12, 2021 3:58 PM
>> To: users@cloudstack.apache.org
>> Subject: RE: Size of the snapshots volume
>> 
>> Hello Benoit,
>> 
>>        Unfortunately no.
>>        When I do it I will make sure to drop a line here.
>> 
>> Best regards,
>> Jordan
>> 
>> -----Original Message-----
>> From: benoit lair <kurushi4...@gmail.com>
>> Sent: Tuesday, October 12, 2021 3:40 PM
>> To: users@cloudstack.apache.org
>> Subject: Re: Size of the snapshots volume
>> 
>> Hello Jordan,
>> 
>> Were you able to proceed with your tests? Did you get the same results?
>> 
>> Regards, Benoit Lair
>> 
>> On Mon, 4 Oct 2021 at 17:59, Yordan Kostov <yord...@nsogroup.com> wrote:
>> 
>>> Here are a few considerations:
>>> 
>>> - The first snapshot of a volume is always a full snapshot.
>>> - XenServer/XCP-ng backups are always thin.
>>> - Thin provisioning calculations never go down, even if you delete 
>>> data from the disk.
>>> 
>>> Since you filled the VM's disk to the top, thin provisioning treats it 
>>> as a full VM from that moment on, even if the data is deleted. So the 
>>> full snapshot that gets migrated to NFS will always be of maximum size.
>>> 
>>> I am not 100% certain, as I have yet to start running backup tests.
>>> 
>>> Best regards,
>>> Jordan
>>> 
>>> -----Original Message-----
>>> From: Florian Noel <f.n...@webetsolutions.com>
>>> Sent: Monday, October 4, 2021 6:22 PM
>>> To: 'users@cloudstack.apache.org' <users@cloudstack.apache.org>
>>> Subject: Size of the snapshots volume
>>> 
>>> Hi,
>>> 
>>> I have a question about volume snapshots in CloudStack.
>>> 
>>> When we take a snapshot of a volume, it creates a VHD file on the 
>>> secondary storage.
>>> The snapshot size doesn't match the used volume size.
>>> 
>>> Imagine a 20 GB volume: we fill the volume and empty it right after.
>>> We take a snapshot of the volume from the CloudStack frontend and its 
>>> size is 20 GB on the secondary storage, while the volume is empty.
>>> 
>>> We ran the same test with thin, sparse and fat volume provisioning.
>>> The results are the same.
>>> 
>>> We use CloudStack 4.15.1 with XCP-ng 8.1. The LUNs are connected via 
>>> iSCSI on the XCP hypervisors.
>>> 
>>> Thanks for your help.
>>> 
>>> Best regards.
>>> 
>>> Florian Noel
>>> 
>>> Systems and Network Administrator
>>> 
>>> 02 35 78 11 90
>>> 
>>> 705 Avenue Isaac Newton
>>> 
>>> 76800 Saint-Etienne-Du-Rouvray
>> 
