Re: [ceph-users] Temporary degradation when adding OSD's

2014-07-10 Thread Gregory Farnum
On Thursday, July 10, 2014, Erik Logtenberg wrote:
>> Yeah, Ceph will never voluntarily reduce the redundancy. I believe
>> splitting the "degraded" state into separate "wrongly placed" and
>> "degraded" (reduced redundancy) states is currently on the menu for
>> the Giant release, but it's …

Re: [ceph-users] Temporary degradation when adding OSD's

2014-07-10 Thread Erik Logtenberg
> Yeah, Ceph will never voluntarily reduce the redundancy. I believe
> splitting the "degraded" state into separate "wrongly placed" and
> "degraded" (reduced redundancy) states is currently on the menu for
> the Giant release, but it's not been done yet.

That would greatly improve the accuracy o…
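
Once that split is in place, the distinction should be visible straight from the
status output. A rough sketch, assuming a release that exposes separate
degraded_objects and misplaced_objects counters in the pgmap section of
"ceph status --format json" (the field names are an assumption, not something
this thread confirms):

# Rough sketch: once the "degraded" / "wrongly placed" split exists, report
# the two counters separately. The degraded_objects / misplaced_objects field
# names are assumptions about the post-split status output.
import json
import subprocess


def pgmap():
    out = subprocess.check_output(["ceph", "status", "--format", "json"])
    return json.loads(out).get("pgmap", {})


def redundancy_report(pg):
    return {
        # fewer copies than the pool asks for (a real loss of redundancy)
        "degraded_objects": pg.get("degraded_objects", 0),
        # enough copies exist, they just aren't yet on the OSDs CRUSH now maps them to
        "misplaced_objects": pg.get("misplaced_objects", 0),
    }


if __name__ == "__main__":
    print(json.dumps(redundancy_report(pgmap()), indent=2))

With that in place, only the degraded count would indicate reduced redundancy;
the misplaced count would just mean data is still on its way to the new OSD.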

Re: [ceph-users] Temporary degradation when adding OSD's

2014-07-07 Thread Gregory Farnum
On Mon, Jul 7, 2014 at 7:03 AM, Erik Logtenberg wrote:
> Hi,
>
> If you add an OSD to an existing cluster, Ceph will move some existing
> data around so the new OSD gets its respective share of usage right away.
>
> Now I noticed that during this moving around, Ceph reports the relevant
> PGs as …

[ceph-users] Temporary degradation when adding OSD's

2014-07-07 Thread Erik Logtenberg
Hi,

If you add an OSD to an existing cluster, Ceph will move some existing
data around so the new OSD gets its respective share of usage right away.

Now I noticed that during this moving around, Ceph reports the relevant
PGs as degraded. I can more or less understand the logic here: if a piece o…
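
For anyone who wants to watch this happen, here is a minimal sketch (assuming
the ceph CLI is on the path and that the pgmap section of
"ceph status --format json" exposes pgs_by_state plus a degraded_objects
counter, as on typical releases) that polls the cluster while the new OSD
backfills:

#!/usr/bin/env python
# Minimal sketch: poll the cluster while a newly added OSD backfills and
# print how many PGs are in a degraded or backfilling state. The JSON field
# names (pgmap, pgs_by_state, state_name, degraded_objects) are based on
# typical "ceph status --format json" output and may vary between releases.
import json
import subprocess
import time


def cluster_status():
    out = subprocess.check_output(["ceph", "status", "--format", "json"])
    return json.loads(out)


def summarize(pgmap):
    by_state = {e.get("state_name", "?"): e.get("count", 0)
                for e in pgmap.get("pgs_by_state", [])}
    degraded = sum(c for s, c in by_state.items() if "degraded" in s)
    backfilling = sum(c for s, c in by_state.items() if "backfill" in s)
    return degraded, backfilling, pgmap.get("degraded_objects", 0)


if __name__ == "__main__":
    while True:
        degraded_pgs, backfill_pgs, degraded_objs = summarize(
            cluster_status().get("pgmap", {}))
        print("degraded PGs: %d  backfilling PGs: %d  degraded objects: %d"
              % (degraded_pgs, backfill_pgs, degraded_objs))
        if degraded_pgs == 0 and backfill_pgs == 0:
            break
        time.sleep(10)

Once both counts reach zero, the data movement triggered by the new OSD should
be finished (other health issues aside).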