On 5/14/14 06:36, Sage Weil wrote:

Hi Dinu,

On Wed, 14 May 2014, Dinu Vlad wrote:
I'm running a ceph cluster with 3 mon and 4 osd nodes (32 disks total) and I've been 
looking at the possibility of "migrating" the data to 2 new nodes. The operation 
would happen by relocating the disks; I'm not getting any new hard drives. The cluster 
is used as a backend for an OpenStack cloud, so downtime should be as short as possible, 
preferably not more than 24 h over a weekend.

I'd like a second opinion on the process, since I don't have the resources to 
test the move scenario. I'm running emperor (0.72.1) at the moment. All pools 
in the cluster have size 2. Each existing OSD node has an SSD for its 
journals; /dev/disk/by-id paths were used for them.
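
For reference, this is how I'd verify those journal links on a node after the 
disks move (a minimal sketch, assuming the default /var/lib/ceph/osd/ceph-<id> 
mount layout):

  # verify every OSD's journal symlink still resolves on the new chassis;
  # the by-id paths should survive the move, so a dead link means the
  # journal SSD didn't come along
  for osd in /var/lib/ceph/osd/ceph-*; do
      ls -l "$osd/journal"                     # expect /dev/disk/by-id/...
      readlink -e "$osd/journal" >/dev/null || echo "BROKEN: $osd"
  done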

Here's what I think would work:
1 - stop ceph on the existing OSD nodes (all of them) and shut down nodes 1 & 
2;
2 - take drives 1-16 / SSDs 1-2 out and put them in new node #1; start it up 
with ceph's upstart script set to manual and check/correct the journal paths;
3 - edit the CRUSH map on the monitors to reflect the new situation
4 - start ceph on new node #1 and old nodes 3 & 4; wait for the recovery to 
finish;
5 - repeat steps 1-4 for the rest of the nodes/drives;
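
In commands, I imagine each window would look roughly like this (a sketch only; 
noout and the upstart job names are the standard ones, but which node runs what 
is illustrative):

  ceph osd set noout        # don't mark OSDs out while nodes are down
  stop ceph-osd-all         # on old nodes 1 & 2, then power off and pull disks
  # ...move drives 1-16 and SSDs 1-2 into new node #1...
  start ceph-osd-all        # on new node #1, once journal paths check out
  ceph osd unset noout      # let recovery/backfill proceed
  ceph -w                   # watch until HEALTH_OK before the next batch
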
If you used ceph-deploy and/or ceph-disk to set up these OSDs (that is, if
they are stored on labeled GPT partitions such that upstart is
automagically starting up the ceph-osd daemons for you without you putting
anything in /etc/fstab to manually mount the volumes) then all of this
should be plug and play for you--including step #3.  By default, the
startup process will 'fix' the CRUSH hierarchy position based on the
hostname and (if present) other positional data configured for 'crush
location' in ceph.conf.  The only real requirement is that both the osd
data and journal volumes get moved so that the daemon has everything it
needs to start up.
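
For example, the ceph.conf bit would look something like this (the rack name is 
a placeholder; 'osd crush update on start' is the option that drives the 
startup behavior described above):

  [osd]
  ; default is true; each OSD fixes its CRUSH position at startup
  osd crush update on start = true
  ; optional extra positional data; host= is filled in from the hostname
  osd crush location = root=default rack=rack1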

sage



If you used disk encryption (ceph-deploy --dmcrypt), you'll also need to copy the keys from /etc/ceph/dmcrypt-keys/.
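
Something along these lines before starting the OSDs on the new box 
("new-node-1" is a placeholder hostname):

  # bring the dm-crypt key material along with the disks
  rsync -av /etc/ceph/dmcrypt-keys/ new-node-1:/etc/ceph/dmcrypt-keys/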


--

*Craig Lewis*
Senior Systems Engineer
Office +1.714.602.1309
Email cle...@centraldesktop.com


