Re: [ceph-users] PG_AVAILABILITY with one osd down?

2019-02-16 Thread Maks Kowalik
Clients' experience depends on whether, at that very moment, they need to read from or write to the particular PGs involved in peering. If their objects are placed in other PGs, then I/O operations shouldn't be impacted. If clients were performing I/O ops against the PGs that went into peering, then they will…
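A quick way to check whether a given client object would have been affected is to ask the cluster which PG (and which OSDs) serve it. A minimal Python sketch along those lines, wrapping the ceph osd map command; the pool and object names are made-up examples:

    import json
    import subprocess

    # Ask the cluster which PG (and which OSDs) serve a given object.
    # Pool and object names below are made-up examples.
    def object_mapping(pool, obj):
        out = subprocess.check_output(
            ["ceph", "osd", "map", pool, obj, "--format", "json"])
        m = json.loads(out)
        # "acting" is the OSD set currently serving I/O for the object's PG;
        # if that PG is peering, client I/O to the object stalls briefly.
        return m["pgid"], m["acting"]

    pgid, acting = object_mapping("rbd", "some-object")
    print("object maps to PG %s on OSDs %s" % (pgid, acting))

If osd.29 shows up in the acting set of the PG holding a client's objects, that client's I/O would have stalled during the peering windows.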

Re: [ceph-users] PG_AVAILABILITY with one osd down?

2019-02-16 Thread jesper
> Hello,
> your log extract shows that:
>
> 2019-02-15 21:40:08 OSD.29 DOWN
> 2019-02-15 21:40:09 PG_AVAILABILITY warning start
> 2019-02-15 21:40:15 PG_AVAILABILITY warning cleared
>
> 2019-02-15 21:44:06 OSD.29 UP
> 2019-02-15 21:44:08 PG_AVAILABILITY warning start
> 2019-02-15 21:44:15 PG_AVAILABILITY warning cleared
…

Re: [ceph-users] PG_AVAILABILITY with one osd down?

2019-02-16 Thread Maks Kowalik
Hello,
your log extract shows that:

2019-02-15 21:40:08 OSD.29 DOWN
2019-02-15 21:40:09 PG_AVAILABILITY warning start
2019-02-15 21:40:15 PG_AVAILABILITY warning cleared

2019-02-15 21:44:06 OSD.29 UP
2019-02-15 21:44:08 PG_AVAILABILITY warning start
2019-02-15 21:44:15 PG_AVAILABILITY warning cleared
…
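Making the arithmetic explicit: each warning window opens right after the osdmap change (OSD.29 going DOWN, then UP) and closes a few seconds later, once the affected PGs have re-peered. A small self-contained sketch, with timestamps copied from the extract above:

    from datetime import datetime

    # Warning windows from the extract above: each opens right after the
    # osdmap change (OSD.29 DOWN, then UP) and closes once peering finishes.
    windows = [
        ("2019-02-15 21:40:09", "2019-02-15 21:40:15"),
        ("2019-02-15 21:44:08", "2019-02-15 21:44:15"),
    ]
    fmt = "%Y-%m-%d %H:%M:%S"
    for start, end in windows:
        secs = (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds()
        print("PG_AVAILABILITY window: %d seconds" % secs)

That prints 6 and 7 seconds respectively, i.e. roughly the time the affected PGs needed to finish peering, not a lasting outage.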

[ceph-users] PG_AVAILABILITY with one osd down?

2019-02-15 Thread jesper
Yesterday I saw this one.. it puzzles me:

2019-02-15 21:00:00.000126 mon.torsk1 mon.0 10.194.132.88:6789/0 604164 : cluster [INF] overall HEALTH_OK
2019-02-15 21:39:55.793934 mon.torsk1 mon.0 10.194.132.88:6789/0 604304 : cluster [WRN] Health check failed: 2 slow requests are blocked > 32 sec. Impl…
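For catching such short-lived warnings as they happen, one option is to poll the structured health output rather than grep the cluster log afterwards. A rough sketch, assuming the JSON layout that ceph health detail --format json has used since Luminous (a "checks" map with a per-check summary message):

    import json
    import subprocess
    import time

    # Poll structured health output to catch short-lived warnings like the
    # PG_AVAILABILITY blip above; the 2-second interval is an arbitrary choice.
    while True:
        out = subprocess.check_output(
            ["ceph", "health", "detail", "--format", "json"])
        for name, check in json.loads(out).get("checks", {}).items():
            print(time.strftime("%H:%M:%S"), name, check["summary"]["message"])
        time.sleep(2)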