> Hello,
> your log extract shows that:
>
> 2019-02-15 21:40:08 OSD.29 DOWN
> 2019-02-15 21:40:09 PG_AVAILABILITY warning start
> 2019-02-15 21:40:15 PG_AVAILABILITY warning cleared
>
> 2019-02-15 21:44:06 OSD.29 UP
> 2019-02-15 21:44:08 PG_AVAILABILITY warning start
> 2019-02-15 21:44:15 PG_AVAILABILITY warning cleared
>
> What you saw is the natural consequence of an OSD state change. Those two
> periods of limited PG availability (6 s each) correspond to peering,
> which happens shortly after an OSD goes down or comes back up.
> Basically, the placement groups stored on that OSD need to peer, so
> incoming requests are redirected to the other (live) OSDs. And yes,
> during those few seconds the data is not accessible.

Thanks, and please bear with my questions. I'm pretty new to Ceph.
What will clients (CephFS, Object) experience?
.. will they simply block until peering completes and then get through, or?

Does that mean I'll get 72 x 6 seconds of unavailability when doing
a rolling restart of my OSDs during upgrades and such? Or is a
controlled restart different from a crash?
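For context, this is roughly how I was planning to do the rolling restart
(just a sketch, assuming systemd-managed OSDs and the `noout` flag to keep
the cluster from rebalancing while an OSD is briefly down; OSD id 29 below
is only an example):

```shell
# Tell the cluster not to mark restarting OSDs "out",
# so no data rebalancing is triggered during the restart window.
ceph osd set noout

# Restart one OSD at a time (id 29 is just an example).
systemctl restart ceph-osd@29

# Check cluster state and wait for peering to settle
# (HEALTH_OK apart from the noout flag) before the next OSD.
ceph -s

# Once all OSDs have been restarted, clear the flag again.
ceph osd unset noout
```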

-- 
Jesper.

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
