[ceph-users] Re: How to copy an OSD from one failing disk to another one
For Ceph, this is fortunately not a major issue. Drives failing is considered entirely normal, and Ceph will automatically rebuild your data from redundancy onto a new replacement drive. If you're able to predict the imminent failure of a drive, adding a new drive/OSD will automatically start flowing data onto that drive immediately, thus reducing the time period with decreased redundancy. If you're running with very tight levels of redundancy, you're better off creating a new OSD on a replacement drive before destroying the old OSD on the failing drive. But if you're running with anything near the recommended/default levels of redundancy, it doesn't really matter in which order you do it.

Best regards,
Simon Kepp, Kepp Technologies.

On Tue, Dec 8, 2020 at 8:59 PM Konstantin Shalygin wrote:
> Destroy this OSD, replace disk, deploy OSD.
>
> k
>
> Sent from my iPhone
>
> > On 8 Dec 2020, at 15:13, huxia...@horebdata.cn wrote:
> >
> > Hi, dear cephers,
> >
> > On one Ceph node I have a failing disk, whose SMART information signals an impending failure but which is still available for reads and writes. I am setting up a new disk on the same node to replace it.
> > What is the best procedure to migrate data (or copy it) from the failing OSD to the new one?
> >
> > Is there any standard method to copy the OSD from one to another?
> >
> > best regards,
> >
> > samuel
> >
> > huxia...@horebdata.cn
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
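The "create the new OSD before destroying the old one" order Simon recommends can be sketched roughly as below. This is a sketch under assumptions, not a definitive procedure: the device path /dev/sdX and the OSD id 12 are placeholders for your own environment, and the exact deployment command depends on your tooling (plain ceph-volume is assumed here).

```shell
# Create the replacement OSD on the new disk first, so data starts
# backfilling onto it while the failing drive is still readable.
ceph-volume lvm create --data /dev/sdX   # /dev/sdX = new disk (placeholder)

# Drain the failing OSD: mark it "out" so its PGs move elsewhere.
ceph osd out osd.12                      # osd.12 = failing OSD (placeholder)

# Wait until Ceph confirms the OSD no longer holds the last copy of any PG.
while ! ceph osd safe-to-destroy osd.12; do sleep 60; done

# Only then remove the old OSD from the cluster.
ceph osd purge 12 --yes-i-really-mean-it
```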
[ceph-users] Re: How to copy an OSD from one failing disk to another one
Destroy this OSD, replace disk, deploy OSD.

k

Sent from my iPhone

> On 8 Dec 2020, at 15:13, huxia...@horebdata.cn wrote:
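Konstantin's three steps map onto roughly the following commands. Again a sketch, not an authoritative recipe: osd.12 and /dev/sdX are placeholders, and reusing the old OSD id via --osd-id assumes a reasonably recent ceph-volume.

```shell
# 1. Destroy the OSD, keeping its id free for reuse by the replacement.
ceph osd destroy 12 --yes-i-really-mean-it

# 2. Physically replace the disk, then wipe any leftover metadata on it.
ceph-volume lvm zap /dev/sdX --destroy

# 3. Deploy a new OSD on the new disk, reusing the old id.
ceph-volume lvm create --osd-id 12 --data /dev/sdX
```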
[ceph-users] Re: How to copy an OSD from one failing disk to another one
Thanks a lot. I got it.

huxia...@horebdata.cn

From: Janne Johansson
Date: 2020-12-08 13:38
To: huxia...@horebdata.cn
CC: ceph-users
Subject: Re: [ceph-users] How to copy an OSD from one failing disk to another one
[ceph-users] Re: How to copy an OSD from one failing disk to another one
"ceph osd set norebalance"
"ceph osd set nobackfill"

Add the new OSD, set the weight to 0 on the old OSD, then unset the norebalance and nobackfill options, and the cluster will do it all for you.

Den tis 8 dec. 2020 kl 13:13 skrev huxia...@horebdata.cn:

--
May the most significant bit of your life be positive.
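Janne's sequence, spelled out as commands (a sketch only; osd.12 and /dev/sdX are placeholders for the failing OSD and the new disk in your cluster):

```shell
# Pause data movement while the topology is being changed.
ceph osd set norebalance
ceph osd set nobackfill

# Add the replacement OSD on the new disk.
ceph-volume lvm create --data /dev/sdX    # /dev/sdX = new disk (placeholder)

# Weight the failing OSD to 0 so all its PGs get remapped off it.
ceph osd crush reweight osd.12 0          # osd.12 = failing OSD (placeholder)

# Release the flags and let the cluster move everything in one pass.
ceph osd unset norebalance
ceph osd unset nobackfill
```

Doing the reweight while the flags are set means the cluster computes one final placement and migrates data once, instead of shuffling it twice (once for the new OSD, once for the drained one).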