Hi Paul, all,
Thanks! But I can't seem to find how to debug the purge queue. When I
check the purge queue, I get these numbers:
[root@mds02 ~]# ceph daemon mds.mds02 perf dump | grep -E 'purge|pq'
    "purge_queue": {
        "pq_executing_ops": 0,
        "pq_executing": 0,
        "pq_execut
Yeah, an ENOSPC error code on deletion is a little bit unintuitive, but
what it means is: the purge queue is full.
You've already told the MDS to purge faster.
Not sure how to tell it to increase the maximum backlog for
deletes/purges, though; you should be able to find something with a
search.
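Something along these lines should turn up the candidate options (a sketch,
assuming a Mimic-or-later cluster with the centralized config store; the
second form shows the values the running MDS is actually using):

ceph config ls | grep -i purge
ceph daemon mds.mds02 config show | grep -i purge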
Quoting Kenneth Waegeman (kenneth.waege...@ugent.be):
> The cluster is healthy at the moment, and we certainly have enough space
> (see also osd df below)
It's not well balanced, though ... do you use the ceph balancer (with the
balancer in upmap mode)?
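If it's not enabled yet, roughly the following turns it on (a sketch; upmap
requires all clients to be Luminous or newer):

ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on
ceph balancer status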
Gr. Stefan
--
| BIT BV https://www.bit.nl/
Hi all,
We are using CephFS to make a copy of another filesystem via rsync, and we
also use snapshots.
I'm seeing this issue now and then when I try to delete files on CephFS:
[root@osd001 ~]# rm -f /mnt/ceph/backups/osd00*
rm: cannot remove
‘/mnt/ceph/backups/osd001.gigalith.os-3eea7740.1542483’: No space left on
device
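The cluster reports healthy and shows plenty of free space; for reference,
these are the standard checks I ran:

[root@osd001 ~]# ceph health
[root@osd001 ~]# ceph df
[root@osd001 ~]# ceph osd df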