Why wait for the data to migrate away? Normally you have replicas of the
whole osd data, so you can simply stop the osd, reformat the disk and restart
it. It'll join the cluster and automatically get all the data it's missing.
Of course the risk of data loss is a bit higher during that time.
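A minimal sketch of that in-place route, assuming a sysvinit setup and that the disk in question is osd.12 on /dev/sdb1 (the id, device and mount point are placeholders, not taken from the thread):

ceph osd set noout                      # keep the cluster from marking the osd out and rebalancing
service ceph stop osd.12
umount /var/lib/ceph/osd/ceph-12
mkfs.xfs -f /dev/sdb1                   # reformat the data partition as XFS
mount /dev/sdb1 /var/lib/ceph/osd/ceph-12
ceph-osd -i 12 --mkfs --mkkey           # recreate the now-empty data dir and a fresh key
ceph auth del osd.12                    # drop the old key so the new one can be registered
ceph auth add osd.12 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-12/keyring
service ceph start osd.12               # backfill then copies the missing objects back from the replicas
ceph osd unset noout

Since the osd comes back empty, every PG it holds is one copy short until backfill finishes, so you'd normally do this one disk at a time and wait for HEALTH_OK in between.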
On Mon, 2 Sep 2013, Jens-Christian Fischer wrote:
Hi all
we have a Ceph Cluster with 64 OSD drives in 10 servers. We originally
formatted the OSDs with btrfs but have had numerous problems (server kernel
panics) that we could point back to btrfs. We are therefore in the process
of
On 03.09.2013, at 16:27, Sage Weil s...@inktank.com wrote:
ceph osd create # this should give you back the same osd number as the one you just removed
OSD=`ceph osd create` # may or may not be the same osd id
good point - so far it has been good to us!
umount ${PART}1    # unmount the osd's old data partition
parted $PART       # repartition the disk interactively before reformatting as XFS
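Pulling those fragments together, a rough sketch of the remove-and-recreate variant that the `ceph osd create` lines above belong to, again with a placeholder id and ${PART} standing for the whole disk (e.g. /dev/sdb); the crush add syntax varies a bit between releases:

ceph osd out 12                         # let the data migrate away before touching the disk
service ceph stop osd.12
ceph osd crush remove osd.12
ceph auth del osd.12
ceph osd rm 12                          # frees the id 12

umount ${PART}1
parted $PART                            # relabel/repartition for XFS here
mkfs.xfs -f -i size=2048 ${PART}1       # larger inodes are commonly suggested for the xattrs

OSD=`ceph osd create`                   # may or may not be the id just removed
mkdir -p /var/lib/ceph/osd/ceph-$OSD
mount ${PART}1 /var/lib/ceph/osd/ceph-$OSD
ceph-osd -i $OSD --mkfs --mkkey
ceph auth add osd.$OSD osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-$OSD/keyring
ceph osd crush add osd.$OSD 1.0 host=`hostname -s`   # weight and host bucket are placeholders
service ceph start osd.$OSD

Compared with the stop/reformat/restart shortcut above, this moves the data twice (off the osd, then back onto it), which is exactly what the "why wait for the data to migrate away" suggestion avoids.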
Hi all
we have a Ceph Cluster with 64 OSD drives in 10 servers. We originally
formatted the OSDs with btrfs but have had numerous problems (server kernel
panics) that we could point back to btrfs. We are therefore in the process of
reformatting our OSDs to XFS. We have a process that works,