[ceph-users] Re: Determine client/inode/dnode source of massive explosion in CephFS metadata pool usage (Red Hat Nautilus CephFS)

2024-05-14 Thread Kotresh Hiremath Ravishankar
I think you can do the following. NOTE: If you already know which objects were recently created, you can skip to step 5.
1. List the objects in the metadata pool and copy them to a file:
   rados -p <metadata_pool> ls > /tmp/metadata_obj_list
2. Prepare a bulk stat script for each object. Unfortunately xargs didn't
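
A minimal sketch of steps 1 and 2, assuming the metadata pool is named
cephfs_metadata (substitute the pool name from "ceph fs ls" on your
cluster); a plain shell loop stands in for the bulk stat script, since
the preview suggests xargs was problematic:

  # Step 1: dump every object name in the metadata pool to a file.
  rados -p cephfs_metadata ls > /tmp/metadata_obj_list

  # Step 2: stat each object; the mtime printed for each object shows
  # which ones were written recently.
  while read -r obj; do
      rados -p cephfs_metadata stat "$obj"
  done < /tmp/metadata_obj_list > /tmp/metadata_obj_stat

Sorting /tmp/metadata_obj_stat by the mtime field then points you at the
recently modified metadata objects.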

[ceph-users] Re: Determine client/inode/dnode source of massive explosion in CephFS metadata pool usage (Red Hat Nautilus CephFS)

2024-05-13 Thread Eugen Block
I just read your message again; you only mention newly created files, not new clients. So my suggestion probably won't help you in this case, but it might help others. :-) Quoting Eugen Block:

[ceph-users] Re: Determine client/inode/dnode source of massive explosion in CephFS metadata pool usage (Red Hat Nautilus CephFS)

2024-05-13 Thread Eugen Block
Hi Paul, I don't really have a good answer to your question, but maybe this approach can help track down the clients. Each MDS client has an "uptime" metric stored in the MDS:
   storage01:~ # ceph tell mds.cephfs.storage04.uxkclk session ls
   ...
   "id": 409348719,
   ...
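
A short sketch of how that session listing can be put to use, assuming
jq is installed; the MDS name cephfs.storage04.uxkclk is taken from the
example above, so substitute your own. Sorting sessions by uptime in
ascending order surfaces the most recently connected clients first:

  # List session id, uptime (seconds) and client hostname,
  # newest sessions at the top.
  ceph tell mds.cephfs.storage04.uxkclk session ls | \
      jq -r 'sort_by(.uptime) | .[] |
             [.id, .uptime, .client_metadata.hostname] | @tsv'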