Hi all,

we have a Ceph cluster with 64 OSD drives in 10 servers. We originally
formatted the OSDs with btrfs but have had numerous problems (server kernel
panics) that we could trace back to btrfs. We are therefore in the process
of reformatting our OSDs to XFS. We have a process that works, but I was
wondering if there is a simpler / faster way.

Currently we 'ceph osd out' all drives of a server and wait for the data
to migrate away, then delete each OSD, recreate it, and start the OSD
processes again. This takes at least 1-2 days per server (mostly spent
waiting for the data to migrate back and forth).
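For reference, the "out and wait" part for one server looks roughly like
this (a minimal sketch; the OSD ids are placeholders you would replace with
the ids of the drives in that host, and polling 'ceph health' for HEALTH_OK
is just one way to wait for recovery to finish):

--- cut ---
#!/bin/bash

# Mark every OSD of one server out so Ceph migrates its data away.
# The ids below are placeholders for the OSDs on that host.
for osd in 10 11 12 13 14 15; do
    ceph osd out $osd
done

# Wait until recovery has finished before reformatting anything.
until ceph health | grep -q HEALTH_OK; do
    sleep 60
done
--- cut ---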

Here's the script we are using:

--- cut ---
#!/bin/bash
set -e  # every step below is destructive, so abort on the first failure

OSD=$1   # OSD id to reformat
PART=$2  # block device holding the OSD (partition 1 is used)
HOST=$3  # CRUSH host name of the server
echo "changing partition ${PART}1 to XFS for OSD: $OSD on host: $HOST"
read -p "continue or CTRL-C"


service ceph -a stop osd.$OSD   # stop the OSD daemon
ceph osd crush remove osd.$OSD  # remove it from the CRUSH map
ceph auth del osd.$OSD          # delete its authentication key
ceph osd rm $OSD                # remove the OSD from the cluster
ceph osd create  # this should give you back the same osd number as the one you just removed

umount ${PART}1
parted $PART rm 1                    # remove the old partition
parted $PART mkpart primary 0% 100%  # and create a new one in its place
mkfs.xfs -f -i size=2048 -L osd.$OSD ${PART}1
mount -o inode64,noatime ${PART}1 /var/lib/ceph/osd/ceph-$OSD
ceph-osd -i $OSD --mkfs --mkkey --mkjournal  # initialize data dir, key and journal
ceph auth add osd.$OSD osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-${OSD}/keyring
ceph osd crush set $OSD 1 root=default host=$HOST  # put it back into CRUSH with weight 1
service ceph -a start osd.$OSD

--- cut ---
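
To make the arguments concrete, a hypothetical invocation (the script name
and all three values here are made-up examples, not from our cluster) for
OSD 12 on /dev/sdb of host ceph-node-3 would be:

--- cut ---
./reformat-osd.sh 12 /dev/sdb ceph-node-3
--- cut ---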

cheers
Jens-Christian

-- 
SWITCH
Jens-Christian Fischer, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 15 71
jens-christian.fisc...@switch.ch
http://www.switch.ch

http://www.switch.ch/socialmedia

