Hi Aaron,
thanks for the very useful hint! With "ceph osd set noout" it works
without trouble. A typical beginner's mistake.
regards
Udo
Am 21.01.2014 20:45, schrieb Aaron Ten Clay:
> Udo,
>
> I think you might have better luck using "ceph osd set noout" before
> doing maintenance, rather than
Udo,
I think you might have better luck using "ceph osd set noout" before doing
maintenance, rather than "ceph osd set nodown", since you want the node to
be marked down (to avoid having I/O directed at it) but not out (to avoid
having recovery backfill begin).
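A minimal sketch of the maintenance workflow this implies (run from any node with admin access to the cluster; the flag is cluster-wide, not per-OSD):

```shell
# Set the noout flag before maintenance: OSDs on the rebooted node will
# still be marked "down", so I/O stops going to them, but they will not
# be marked "out", so no recovery/backfill is triggered.
ceph osd set noout

# ... take the node down, do the maintenance, bring it back up ...

# Once the node's OSDs have rejoined and "ceph health" is clean again,
# clear the flag so normal out-marking behavior resumes.
ceph osd unset noout
```

Note that while noout is set, "ceph health" will report HEALTH_WARN with a "noout flag(s) set" message; that warning is expected and clears when the flag is unset.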
-Aaron
On Tue, Jan 21, 2014 at 10:0
Hi,
I need a little bit help.
We have a 4-node Ceph cluster and the clients run into trouble if one
node is down (due to maintenance).
After the node is switched on again, ceph health shows (for a short time):
HEALTH_WARN 4 pgs incomplete; 14 pgs peering; 370 pgs stale; 12 pgs
stuck unclean; 36 req