So I am following the orphans trail.
Now I have a job that has been running for 3 1/2 days. Can I hit finish on a
job that is in the comparing state? It has been in this state for 2 days and
the messages in the output keep repeating and look like this:
leaked:
Could this also be failed multipart uploads?
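If incomplete multipart uploads are a suspect, one way to check is to list them
per bucket from the S3 side. A rough sketch, assuming s3cmd is configured
against the RGW endpoint and "mybucket" is just a placeholder bucket name:

# list incomplete multipart uploads in the bucket (placeholder name)
s3cmd multipart s3://mybucket
# abort a stale upload by object key and upload id (both placeholders)
s3cmd abortmp s3://mybucket/path/to/key <upload-id>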
On Thu, 15 Apr 2021 at 18:23, Boris Behrens wrote:
Cheers,
[root@s3db1 ~]# ceph daemon osd.23 perf dump | grep numpg
"numpg": 187,
"numpg_primary": 64,
"numpg_replica": 121,
"numpg_stray": 2,
"numpg_removing": 0,
On Thu, 15 Apr 2021 at 18:18, 胡 玮文 wrote:
Hi Boris,
Could you check something like
ceph daemon osd.23 perf dump | grep numpg
to see if there are any stray or removing PGs?
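To check all OSDs on a host in one go, a small loop over the admin sockets
should work (a sketch, assuming the default socket path under /var/run/ceph/):

for sock in /var/run/ceph/ceph-osd.*.asok; do
    echo "== $sock =="
    # same perf counters as above, per OSD
    ceph daemon "$sock" perf dump | grep numpg
done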
Weiwen Hu
On 15 Apr 2021, at 22:53, Boris Behrens wrote:
Ah you are right.
[root@s3db1 ~]# ceph daemon osd.23 config get bluestore_min_alloc_size_hdd
{
"bluestore_min_alloc_size_hdd": "65536"
}
But I also checked how many objects our S3 holds, and the numbers just do not
add up.
There are only 26,509,200 objects, which would result in around 1 TB of "waste".
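(Back-of-the-envelope, assuming roughly one RADOS object per S3 object and, on
average, half of the 64 KiB min_alloc_size lost as padding per object:
26,509,200 objects x 32 KiB is about 809 GiB (~0.87 TB) per copy, so roughly
2.6 TB of raw capacity if the pool uses 3x replication (an assumption). Large
objects are striped into several RADOS objects, each with its own padding, so
the real figure would be somewhat higher.)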
So I need to live with it? Does a value of zero mean the default is used?
[root@s3db1 ~]# ceph daemon osd.23 config get bluestore_min_alloc_size
{
"bluestore_min_alloc_size": "0"
}
I also checked the fragmentation on the BlueStore OSDs and it is around
0.80-0.89 on most OSDs. Yikes.
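(For reference, the fragmentation rating can be read per OSD from the admin
socket; a sketch, assuming the allocator score command is available in this
release:

ceph daemon osd.23 bluestore allocator score block

It should return a fragmentation_rating between 0, no fragmentation, and 1.)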
[root@s3db1