This is what I see in the osd.54 log file:

2020-01-14 10:35:04.986 7f0c20dca700 -1 log_channel(cluster) log [ERR] : 13.4 soid 13:20fbec66:::%2fhbWPh36KajAKcJUlCjG9XdqLGQMzkwn3NDrrLDi_mTM%2ffile2:head : size 385888256 > 134217728 is too large
2020-01-14 10:35:08.534 7f0c20dca700 -1 log_channel(cluster) log [ERR] : 13.4 soid 13:25e2d1bd:::%2fhbWPh36KajAKcJUlCjG9XdqLGQMzkwn3NDrrLDi_mTM%2ffile8:head : size 385888256 > 134217728 is too large
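
For what it's worth, 134217728 bytes is exactly 128 MiB, and if I remember correctly Nautilus lowered the default for osd_max_object_size to 128 MiB (it was far larger under Luminous). That would explain why objects written before the upgrade, like these ~368 MiB ones, only start failing scrub now. This is just my guess, but if it is the cause, raising the threshold should stop the errors. A rough sketch (osd.54 taken from the log above, 1G picked purely as an example value):

# current threshold as seen by one running OSD
ceph daemon osd.54 config get osd_max_object_size

# raise it cluster-wide via the config database; size-typed options
# should accept suffixes like 1G, but double-check on your version
ceph config set osd osd_max_object_size 1G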

On Tue, Jan 14, 2020 at 11:02 AM Massimo Sgaravatto <massimo.sgarava...@gmail.com> wrote:

> I have just finished updating a Ceph cluster from Luminous to Nautilus.
> Everything seems to be running, but I keep receiving notifications (about
> 10 so far, involving different PGs and different OSDs) of PGs in an
> inconsistent state.
>
> rados list-inconsistent-obj pg-id --format=json-pretty (an example is
> attached) says that the problem is "size_too_large".
>
> "ceph pg repair" is able to "fix" the problem, but I am not able to
> understand what is the problem
>
> Thanks, Massimo
>
>
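
One more thought on the quoted question: since "ceph pg repair" only seems to paper over it, after changing the threshold it may be worth re-scrubbing one of the affected PGs to confirm the errors are really gone. A minimal sketch (13.4 is the PG id from my log above; the pool name is a placeholder):

# list PGs with scrub inconsistencies in a given pool
rados list-inconsistent-pg <pool-name>

# trigger a fresh deep scrub on one affected PG
ceph pg deep-scrub 13.4

# then check whether the inconsistency comes back
ceph health detail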
