Hi Martin

> On 2013-09-02 19:37, Jens-Christian Fischer wrote:
>> we have a Ceph Cluster with 64 OSD drives in 10 servers. We originally 
>> formatted the OSDs with btrfs but have had numerous problems (server kernel 
>> panics) that we could point back to btrfs. We are therefore in the process 
>> of reformatting our OSDs to XFS. We have a process that works, but I was 
>> wondering, if there is a simpler / faster way.
>> 
>> Currently we 'ceph osd out' all drives of a server and wait for the data to 
>> migrate away, then delete the OSD, recreate it and start the OSD processes 
>> again. This takes at least 1-2 days per server (mostly waiting for the data 
>> to migrate back and forth)
>> 
> 
> The first thing I'd try is doing one osd at a time, rather than the entire 
> server; in theory, this should allow (though not guarantee) data to move 
> from one osd to the other, rather than having to push it across the network 
> from other nodes.

Doesn't that depend on the CRUSH map and its placement rules? If replicas are 
kept on separate hosts, wouldn't the data still have to come over the network 
from the other nodes anyway?
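
One way I'd sanity-check that (just a sketch of how I'd look at it; the output 
file names are arbitrary) would be to dump and decompile the CRUSH map and see 
where the rules actually place replicas:

    ceph osd tree                              # how OSDs are grouped into hosts
    ceph osd getcrushmap -o /tmp/crushmap.bin  # grab the compiled CRUSH map
    crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt
    less /tmp/crushmap.txt                     # inspect the rules / chooseleaf type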

> 
> depending on just how much data you have on an individual osd, you could stop 
> two, blow the first away, copy the data from osd 2 to the disk osd 1 was 
> using, change the mount-points, then bring osd 2 back up again; in theory, 
> osd 2 will only need to resync changes that have occurred while it was 
> offline. This, of course, presumes that there's no change in the on-disk 
> layout between btrfs and xfs...

We were actually thinking of doing that, but I wanted to hear the wisdom of the 
crowd… The thread from a year ago (which I just read) cautioned against that 
procedure, though.
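
Just to make it concrete, the disk-swap idea we had been toying with looks 
roughly like this (osd.1/osd.2, /dev/sdX and the mount points are only 
placeholders; it assumes the default /var/lib/ceph layout, colocated journals 
and sysvinit, and it glosses over the caveats raised in that old thread):

    ceph osd set noout                     # avoid remapping while both OSDs are down
    /etc/init.d/ceph stop osd.1
    /etc/init.d/ceph stop osd.2

    umount /var/lib/ceph/osd/ceph-1
    mkfs.xfs -f /dev/sdX                   # osd.1's old disk, now empty XFS
    mount /dev/sdX /mnt/xfs-osd-2
    cp -a /var/lib/ceph/osd/ceph-2/. /mnt/xfs-osd-2/   # copy osd.2's data, xattrs included

    umount /var/lib/ceph/osd/ceph-2       # frees osd.2's btrfs disk for the next round
    umount /mnt/xfs-osd-2
    mount /dev/sdX /var/lib/ceph/osd/ceph-2

    /etc/init.d/ceph start osd.2          # should only resync what changed while down
    ceph osd unset noout

osd.1 would then presumably be recreated on whichever reformatted disk is free, 
and so on down the line.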

cheers
jc