Losing a node is not a big deal for us (each node has a dual-bonded 10G
connection).

I’m thinking:

  1.  Drain node
  2.  Redeploy with Ceph Ansible

It would require much less hands-on time for our group.
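
For the drain step I'm picturing something roughly like the following (the OSD
IDs are only placeholders for whatever lives on the node being redone, and I'd
verify the exact commands against our release before running anything):

  for id in 10 11 12 13; do ceph osd out $id; done   # mark every OSD on the node out
  # wait for backfill to finish and the cluster to return to active+clean
  ceph osd safe-to-destroy 10 11 12 13               # confirm nothing still depends on them
  ceph osd purge 10 --yes-i-really-mean-it           # repeat per OSD, then redeploy the node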

I know the churn on the cluster would be high, which was my only concern.

Mike


Senior Systems Administrator
Research Computing Services Team
University of Victoria

From: Martin Verges <martin.ver...@croit.io>
Date: Friday, November 15, 2019 at 11:52 AM
To: Janne Johansson <icepic...@gmail.com>
Cc: Cave Mike <mc...@uvic.ca>, ceph-users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Migrating from block to lvm

I would consider doing it host by host, since you should always be able to
handle the complete loss of a node anyway. This is much faster in the end,
because you save a lot of time by not migrating data off each OSD and back
again. However, it can lead to problems if your cluster is not tuned to match
the performance of the underlying hardware.
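
If you are worried about the impact of the rebuild, you can throttle recovery
and backfill beforehand, for example (the values are only illustrative, tune
them to your hardware):

  ceph config set osd osd_max_backfills 1
  ceph config set osd osd_recovery_max_active 1
  ceph config set osd osd_recovery_sleep 0.1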

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


On Fri, Nov 15, 2019 at 20:46, Janne Johansson <icepic...@gmail.com> wrote:
On Fri, Nov 15, 2019 at 19:40, Mike Cave <mc...@uvic.ca> wrote:
So would you recommend doing an entire node at the same time, or per-OSD?

You should be able to do it per OSD (or per disk, in case you run more than one 
OSD per disk) to minimize data movement over the network, letting the other OSDs 
on the same host take a bit of extra load while you re-make the disks one by one. 
You can use "ceph osd reweight <number> 0.0" to make that particular OSD release 
its data while it still claims its full crush weight on the host, which means the 
other disks on the same host will have to absorb most of its data.
Moving data between disks in the same host is usually a lot faster than moving it 
over the network to other hosts.
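Roughly, per OSD that could look like this (the OSD id and device name are only
placeholders, and the ceph-volume step assumes you are recreating the OSD as an
LVM-backed BlueStore OSD rather than via your ansible playbooks):

  ceph osd reweight 12 0.0                    # empty this OSD, host keeps its crush weight
  # wait until the OSD holds no PGs, then remove and re-create it
  ceph osd purge 12 --yes-i-really-mean-it
  ceph-volume lvm zap /dev/sdX --destroy      # wipe the old device
  ceph-volume lvm create --data /dev/sdX      # new LVM-based OSD on the same disk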

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
