Re: [ceph-users] Moving OSDs between hosts

2018-03-23 Thread David Turner
Just moving the OSD is indeed the right thing to do and the crush map will update when the OSDs start up on the new host. The only "gotcha" is if you do not have your journals/WAL/DBs on the same device as your data. In that case, you will need to move both devices to the new server for the OSD
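The "gotcha" above can be checked before pulling any drives. A minimal sketch, assuming the OSDs were created with ceph-volume (available since Luminous; OSDs created with the older ceph-disk can be listed with `ceph-disk list` instead). The OSD ID below is illustrative:

```shell
# List every OSD on this host along with the physical devices it uses.
# If an OSD's block.db / block.wal (or FileStore journal) lives on a
# separate device, both devices must move to the new host together.
ceph-volume lvm list

# Alternatively, inspect a single OSD's device symlinks directly
# (osd.12 is a hypothetical ID):
ls -l /var/lib/ceph/osd/ceph-12/block*
```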

Re: [ceph-users] Moving OSDs between hosts

2018-03-16 Thread ceph
Hi Jon, On 16 March 2018 at 17:00:09 CET, Jon Light wrote: > Hi all, I have a very small cluster consisting of 1 overloaded OSD node and a couple MON/MGR/MDS nodes. I will be adding new OSD nodes to the cluster and need to move 36 drives from the existing node to a new one.

[ceph-users] Moving OSDs between hosts

2018-03-16 Thread Jon Light
Hi all, I have a very small cluster consisting of 1 overloaded OSD node and a couple MON/MGR/MDS nodes. I will be adding new OSD nodes to the cluster and need to move 36 drives from the existing node to a new one. I'm running Luminous 12.2.2 on Ubuntu 16.04 and everything was created with
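The move described in this thread can be sketched as follows. This is a hedged outline, not a definitive procedure: it assumes the default `osd crush update on start = true` (so each OSD re-registers under its new host bucket when it starts), and the activation step depends on how the OSDs were originally created (ceph-volume shown here; ceph-disk OSDs are typically activated by udev on boot):

```shell
# 1. On the old host: prevent rebalancing while the OSDs are down,
#    then stop the OSD daemons.
ceph osd set noout
systemctl stop ceph-osd.target

# 2. Physically move the drives (data devices plus any separate
#    journal/DB/WAL devices) to the new host.

# 3. On the new host: activate all OSDs found on the attached devices.
ceph-volume lvm activate --all

# 4. Verify the OSDs now appear under the new host bucket in the
#    CRUSH map, then re-enable rebalancing.
ceph osd tree
ceph osd unset noout
```

With `noout` set, the cluster will not start backfilling data away from the down OSDs during the physical move, which matters when relocating 36 drives at once.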