Hi Thura,

at the beginning of the year we also faced this challenge. Eventually we used the "one stroke" method which Vivek has described. Since we have about 80 TB of secondary storage, it was not an option to copy the data while the management servers were down. Thankfully our storage system has a built-in feature to synchronise data between volumes, so we synced the data over a couple of days and made a clean cut during our maintenance window, where we only had to sync the remaining few gigabytes that were new on the old storage. If your secondary storage is Linux-based, you can probably achieve the same with rsync or similar tools.
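If you go the rsync route, something along these lines should work; the mount points below (/mnt/sec-old and /mnt/sec-new) are only examples for wherever the old and new exports happen to be mounted:

  # run repeatedly while CloudStack is still live, until the remaining delta is small
  rsync -aHv /mnt/sec-old/ /mnt/sec-new/

  # final pass in the maintenance window, after the management servers are stopped
  rsync -aHv --delete /mnt/sec-old/ /mnt/sec-new/

-a keeps permissions, ownership and timestamps, -H preserves hard links, and --delete on the last run removes anything on the new storage that was deleted from the old one in the meantime.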
Regards
Christian

On 5. Dec 2017, at 10:40, Thura, Minn Minn <fj608...@aa.jp.fujitsu.com> wrote:

Hi Vivek,

Thanks for sharing your steps. I understand that your procedure replaces the secondary storage in one stroke. In our environment the copy process would take a very long time (about one month) for tens of TB.

What we would like to achieve is something like the following:
First, make the system create new templates and snapshots only on the new secondary storage, while keeping the old secondary storage for the current snapshots and templates.
Second, gradually copy the current snapshots and templates to the new secondary storage and change the appropriate DB info.

The problem is that I could not find any way to achieve the first step. (Would filling the current secondary storage with dummy data to use up all available space do the trick? Or is there a smarter way?) Maybe I could use your procedure to achieve our second step.

Thanks,
Thura (Fujitsu FIP)

-----Original Message-----
From: Vivek Kumar [mailto:vivek.ku...@indiqus.com]
Sent: Tuesday, December 05, 2017 5:44 PM
To: users@cloudstack.apache.org
Subject: Re: Things to consider in replacing secondary storage

Hello Thura,

I have done this earlier as well. These are the steps I followed:

1- Create the new secondary storage.
2- Mount the new secondary storage on any host (mount -t nfs x.x.x.x:/<New_Sec_Storage_Path> /<any_mount_point>), and make sure the old secondary storage is also mounted.
3- Stop all running management servers and wait 30 minutes. This allows any writes to secondary storage to complete.
4- Copy all data from the old secondary storage to the new one with cp, preserving permissions (e.g. cp -rpv /secondary1/* /secondary2).
5- Check the integrity of the data: make sure everything has been copied, and double-check the size, permissions and all folders on the new secondary storage.
6- Once the above steps are done, take a DB backup. Please don't forget the DB backup. Also, the new secondary storage must be reachable from the hosts and the management server.
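For the backup itself, a plain mysqldump is usually enough; this assumes the default database name cloud (adjust user and host to your installation):

  mysqldump -u root -p cloud > cloud-db-backup-$(date +%F).sql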
Suppose I have two NFS servers where I have created two directories (/secondary1 on the first NFS server and /secondary2 on the second server) and shared them via NFS:

First NFS server path - 192.168.0.100/secondary1
Second NFS server path - 192.168.0.200/secondary2

Please make sure that all data have been copied successfully from secondary1 to secondary2 (double-check the data).

The image_store view originally looked like this:

mysql> select * from image_store\G
*************************** 1. row ***************************
                 id: 1
               name: Secondary1
image_provider_name: NFS
           protocol: nfs
                url: nfs://192.168.0.100/secondary1
     data_center_id: 1
              scope: ZONE
               role: Image
               uuid: 4a18559e-e6e8-4329-aac6-9ea6c74ec5e6
             parent: 5554e6ec-0dea-3881-9943-d06a6ebe17e8
            created: 2016-10-04 15:13:40
            removed: NULL
         total_size: NULL
         used_bytes: NULL
1 row in set (0.00 sec)

Now I want to replace the IP and mount point of the existing secondary storage, so:

# mysql -p
mysql> use cloud;
mysql> select id from image_store where url like '%old ip address%';
mysql> update image_store set url = 'nfs://192.168.0.200/secondary2' where id = 1;

Now the table looks like this:

mysql> select * from image_store\G
*************************** 1. row ***************************
                 id: 1
               name: Secondary1
image_provider_name: NFS
           protocol: nfs
                url: nfs://192.168.0.200/secondary2
     data_center_id: 1
              scope: ZONE
               role: Image
               uuid: 4a18559e-e6e8-4329-aac6-9ea6c74ec5e6
             parent: 5554e6ec-0dea-3881-9943-d06a6ebe17e8
            created: 2016-10-04 15:13:40
            removed: NULL
         total_size: NULL
         used_bytes: NULL
1 row in set (0.00 sec)

7- Now you can start the management servers, then stop and start the SSVM.
8- Check the agent state; it should be Up. If not, check the connectivity from the host and the management server.
9- Log in to the SSVM and check the mount points; you will see that the new secondary storage has been mounted successfully.
10- Try to take a snapshot and also try to provision a VM from an old template.

Please test this in your test environment first before doing it in production.

Vivek Kumar
Virtualization and Cloud Consultant

On 05-Dec-2017, at 1:33 PM, Thura, Minn Minn <fj608...@aa.jp.fujitsu.com> wrote:

Dear All,

Does anyone have experience with replacing secondary storage? We need to replace our current secondary storage with a new one, as it will soon be end of support. We are considering the following process:

1.) Register the new secstore.
2.) Make new templates be created on the new secstore.
3.) Make new snapshots be created on the new secstore.
4.) Copy the current templates and snapshots from the old secstore to the new secstore.
5.) Change the DB info of the copied templates and snapshots to point to the new secstore.

We know (1) very well :) We would like to know if there is any way to achieve (2) and (3).

Regarding (4): there are four directories under secondary storage (snapshots, systemvm, template, volumes). In our understanding, only the snapshots and template directories need to be copied to the new storage. The systemvm and volumes directories do not need to be copied, as they are only temporary storage (the original data is stored on primary storage). Please correct me if I am wrong.

Regarding (5): for templates, we assume it is enough to change template_store_ref.store_id to the new secstore (a rough sketch of such an update follows at the end of this message). Please correct me if I am wrong. For snapshots, we would like to know which DB info should be changed, for XenServer and for VMware. (XenServer implements 16-step chains while VMware uses full clones when taking snapshots.)

Thanks,
Thura (Fujitsu FIP)
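If changing template_store_ref.store_id does turn out to be sufficient, the update described in (5) would look roughly like the following. The store ids are placeholders (1 = old secondary storage, 2 = new one); check them in image_store first, verify the column against your own schema, and take a DB backup before touching anything:

  mysql> use cloud;
  mysql> select id, name, url from image_store where removed is null;
  mysql> update template_store_ref set store_id = 2 where store_id = 1;

This only covers templates; the snapshot side for XenServer and VMware is exactly the open question in this thread.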