I would consider doing it host by host, as you should always be able to
handle the complete loss of a node anyway. This ends up being much faster,
because you avoid migrating the same data back and forth for every single
OSD. However, it can cause problems if your cluster is not sized to absorb
the recovery load that draining a whole node generates.
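
For example, a minimal sketch of draining one whole host up front (the host
bucket name "node01" is a placeholder for whatever your CRUSH map uses):

    ceph osd ls-tree node01                  # list the OSD ids under that host bucket
    for id in $(ceph osd ls-tree node01); do
      ceph osd out "$id"                     # mark them all out so the node drains in one go
    done
    # wait for all PGs to be active+clean before wiping and redeploying the node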

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


On Fri, Nov 15, 2019 at 20:46 Janne Johansson <icepic...@gmail.com> wrote:

> On Fri, Nov 15, 2019 at 19:40 Mike Cave <mc...@uvic.ca> wrote:
>
>> So would you recommend doing an entire node at the same time, or per-OSD?
>>
>
> You should be able to do it per OSD (or per disk, in case you run more than
> one OSD per disk) to minimize data movement over the network, letting the
> other OSDs on the same host take the load while you re-make the disks one
> by one. You can use "ceph osd reweight <number> 0.0" to make that
> particular OSD release its data while the host still claims the OSD's full
> CRUSH weight, which means the other disks on the same host will take over
> most of its data (a sketch of this follows below the quoted message).
> Moving data between disks in the same host usually goes a lot faster than
> moving it over the network to other hosts.
>
> --
> May the most significant bit of your life be positive.
>
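
A minimal sketch of the per-OSD workflow Janne describes (the OSD id 12 and
the use of ceph-volume to re-create the OSD are assumptions; adapt them to
your deployment):

    ceph osd reweight 12 0.0    # override reweight to 0: the OSD drains, but the host
                                # keeps its CRUSH weight, so the data mostly moves to
                                # the other OSDs on the same host
    ceph -s                     # wait until all PGs are active+clean again
    # then wipe and re-create the OSD on the same disk (e.g. with ceph-volume)
    # and repeat for the next OSD on the host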
