Hi Paul,

Thanks for the hint.
Restarting the primary OSDs of the inactive PGs resolved the problem.

Before the restart they logged messages like:
2019-06-19 15:55:36.190 7fcd55c4e700 -1 osd.5 33858 get_health_metrics 
reporting 15 slow ops, oldest is osd_op(client.220116.0:967410 21.2e4s0 
21.d4e19ae4 (undecoded) ondisk+write+known_if_redirected e31569)
and
2019-06-19 15:53:31.214 7f9b946d1700 -1 osd.13 33849 get_health_metrics 
reporting 14560 slow ops, oldest is osd_op(mds.0.44294:99584053 23.5 
23.cad28605 (undecoded) ondisk+write+known_if_redirected+full_force e31562)

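For anyone hitting the same symptom, this is roughly the procedure I used. It is only a sketch, assuming a systemd-managed cluster; the OSD ids (5, 13) are taken from the logs above and the commands must be run on the hosts carrying those OSDs:

```shell
# List PGs stuck inactive, together with their up/acting OSD sets
ceph pg dump_stuck inactive

# Inspect the slow ops a suspect OSD is reporting (run on that OSD's host)
ceph daemon osd.5 dump_ops_in_flight

# Restart the primary OSD daemon of each inactive PG
systemctl restart ceph-osd@5
systemctl restart ceph-osd@13

# Check that the PGs become active again
ceph pg stat
```

After the restarts the slow-op warnings disappeared and the PGs activated.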
Is this something to worry about?

Regards,
Lars

Wed, 19 Jun 2019 15:04:06 +0200
Paul Emmerich <paul.emmer...@croit.io> ==> Lars Täuber <taeu...@bbaw.de> :
> That shouldn't trigger the PG limit (yet), but increasing "mon max pg per
> osd" from the default of 200 is a good idea anyways since you are running
> with more than 200 PGs per OSD.
> 
> I'd try to restart all OSDs that are in the UP set for that PG:
> 
>         13,
>         21,
>         23
>         7,
>         29,
>         9,
>         28,
>         11,
>         8
> 
> 
> Maybe that solves it (technically it shouldn't); if that doesn't work,
> you'll have to dig deeper into the log files to see where exactly and
> why it is stuck activating.
> 
> Paul
> 


-- 
                            Informationstechnologie
Berlin-Brandenburgische Akademie der Wissenschaften
Jägerstraße 22-23                      10117 Berlin
Tel.: +49 30 20370-352           http://www.bbaw.de
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com