Re: [ceph-users] Removing OSD - double rebalance?

2015-12-02 Thread Carsten Schmitt
Hi all, On 12/02/2015 12:10 PM, Jan Schermer wrote: 1) if you have the original drive that works and just want to replace it, then you can just "dd" it over to the new drive and then extend the partition if the new one is larger; this avoids double backfilling in this case. 2) if the old drive is dead…

Re: [ceph-users] Removing OSD - double rebalance?

2015-12-02 Thread Andy Allan
On 2 December 2015 at 11:10, Jan Schermer wrote: > 1) if you have the original drive that works and just want to replace it, then you can just "dd" it over to the new drive and then extend the partition if the new one is larger; this avoids double backfilling in this case. 2) if the old drive is dead…

Re: [ceph-users] Removing OSD - double rebalance?

2015-12-02 Thread Dan van der Ster
Here's something that I didn't see mentioned in this thread yet: the set of PGs mapped to an OSD is a function of the ID of that OSD. So, if you replace a drive but don't reuse the same OSD ID for the replacement, you'll have more PG movement than if you kept the ID. -- dan On Wed, Dec 2, 2015 at…
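
Under Hammer-era tooling, keeping the ID falls out of the fact that "ceph osd create" hands back the lowest free ID. A minimal sketch (osd.7 is a made-up example, not an ID from this thread):

    # remove the dead OSD completely, which frees its ID
    ceph osd crush remove osd.7
    ceph auth del osd.7
    ceph osd rm 7
    # "ceph osd create" returns the lowest free ID; if no other OSD was
    # removed in between, the replacement comes back as osd.7 and the
    # set of PGs mapped to it stays the same
    ceph osd create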

Re: [ceph-users] Removing OSD - double rebalance?

2015-12-02 Thread Jan Schermer
1) if you have the original drive that works and just want to replace it, then you can just "dd" it over to the new drive and then extend the partition if the new one is larger; this avoids double backfilling in this case. 2) if the old drive is dead you should "out" it and at the same time add a new…
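
A minimal sketch of that clone-and-grow approach for case 1, assuming an XFS-backed OSD, hypothetical device names /dev/sdX (old) and /dev/sdY (new, larger), and osd.7 as an example ID; details like GPT backup headers and remounting are elided:

    ceph osd set noout            # keep the cluster from backfilling while the OSD is down
    service ceph stop osd.7
    dd if=/dev/sdX of=/dev/sdY bs=4M
    parted /dev/sdY resizepart 1 100%       # grow the data partition on the larger drive
    xfs_growfs /var/lib/ceph/osd/ceph-7     # XFS grows via its mount point, once remounted
    service ceph start osd.7
    ceph osd unset noout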

Re: [ceph-users] Removing OSD - double rebalance?

2015-12-02 Thread Andy Allan
On 30 November 2015 at 09:34, Burkhard Linke wrote: > On 11/30/2015 10:08 AM, Carsten Schmitt wrote: >> But after entering the last command, the cluster starts rebalancing again. And that I don't understand: Shouldn't one rebalancing process be enough, or am I missing something? > Remov…
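
A commonly cited way to collapse the two rebalances into one (a sketch with osd.7 as an example ID) is to drain the OSD via its CRUSH weight first, so the later removal steps move no data:

    ceph osd crush reweight osd.7 0   # the only rebalance; wait for active+clean
    ceph osd out 7                    # moves nothing, the CRUSH weight is already 0
    service ceph stop osd.7
    ceph osd crush remove osd.7       # removing a zero-weight item shifts no PGs
    ceph auth del osd.7
    ceph osd rm 7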

Re: [ceph-users] Removing OSD - double rebalance?

2015-11-30 Thread Steve Anthony
It's probably worth noting that if you're planning on removing multiple OSDs in this manner, you should make sure they are not in the same failure domain, per your CRUSH rules. For example, if you keep one replica per node and three copies (as in the default) and remove OSDs from multiple nodes wit…
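
Before taking out several OSDs at once, it's worth confirming where they sit relative to the CRUSH failure domain. Something along these lines works (standard commands, output elided; "rbd" is just an example pool name):

    ceph osd tree               # which host/rack each OSD belongs to
    ceph osd crush rule dump    # the bucket type each rule spreads replicas across
    ceph osd pool get rbd size  # replica count for a given pool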

Re: [ceph-users] Removing OSD - double rebalance?

2015-11-30 Thread Burkhard Linke
Hi Carsten, On 11/30/2015 10:08 AM, Carsten Schmitt wrote: Hi all, I'm running ceph version 0.94.5 and I need to downsize my servers because of insufficient RAM. So I want to remove OSDs from the cluster and according to the manual, it's a pretty straightforward process: I'm beginning with "ceph osd out {osd-num}"…
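
The two rebalances come from two different weights, which "ceph osd tree" shows side by side (a sketch of what to look at, not output from this cluster):

    ceph osd tree
    # each OSD line carries both a CRUSH WEIGHT and a REWEIGHT column:
    # "ceph osd out" only drops REWEIGHT to 0 (first rebalance), while
    # "ceph osd crush remove" deletes the item's CRUSH weight from the
    # host bucket, changing placement again (second rebalance)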

Re: [ceph-users] Removing OSD - double rebalance?

2015-11-30 Thread Wido den Hollander
On 30-11-15 10:08, Carsten Schmitt wrote: > Hi all, I'm running ceph version 0.94.5 and I need to downsize my servers because of insufficient RAM. So I want to remove OSDs from the cluster and according to the manual, it's a pretty straightforward process: I'm beginning with "ceph osd out {osd-num}"…

[ceph-users] Removing OSD - double rebalance?

2015-11-30 Thread Carsten Schmitt
Hi all, I'm running ceph version 0.94.5 and I need to downsize my servers because of insufficient RAM. So I want to remove OSDs from the cluster and according to the manual, it's a pretty straightforward process: I'm beginning with "ceph osd out {osd-num}" and the cluster starts rebalancing i…
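
For reference, that straightforward process from the manual looks roughly like this (a sketch; osd.7 stands in for {osd-num}, and the stop command depends on your init system):

    ceph osd out 7                # the first rebalance starts here
    # wait until the cluster is back to active+clean, then:
    service ceph stop osd.7       # or: systemctl stop ceph-osd@7
    ceph osd crush remove osd.7   # this is where the second rebalance kicks in
    ceph auth del osd.7
    ceph osd rm 7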