[ceph-users] Re: Ceph User + Community Meeting and Survey [May 23]

2024-05-13 Thread Laura Flores
Thanks to everyone who has already completed the survey. There is still time this week to get your voice heard in the upcoming User + Dev meeting if you haven't done so already! Take the survey here:

[ceph-users] Re: Upgrading Ceph Cluster OS

2024-05-13 Thread Michel Jouvin
Nima, Can you also specify the Ceph version you are using and whether your current configuration is cephadm-based? Michel On 13/05/2024 at 15:19, Götz Reinicke wrote: Hi, On 11.05.2024 at 15:54, Nima AbolhassanBeigi wrote: Hi, We want to upgrade the OS version of our production
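
As a rough sketch of how that information could be gathered (standard Ceph CLI; the orchestrator check assumes the orch module is enabled):

    # report the Ceph release running on each daemon
    ceph versions

    # check whether the cluster is managed by the cephadm orchestrator
    ceph orch status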

[ceph-users] Re: Upgrading Ceph Cluster OS

2024-05-13 Thread Götz Reinicke
Hi, > On 11.05.2024 at 15:54, Nima AbolhassanBeigi wrote: > > Hi, > > We want to upgrade the OS version of our production ceph cluster by > reinstalling the OS on the server. From which OS to which OS would you like to upgrade? What's your ceph version? Regards, Goetz

[ceph-users] Re: Problem with take-over-existing-cluster.yml playbook

2024-05-13 Thread vladimir franciz blando
Hi, If I follow the guide, it only says to define the mons in the ansible hosts file under the [mons] section, which I did with this example (not real IPs): [mons] vlad-ceph1 monitor_address=192.168.1.1 ansible_user=ceph vlad-ceph2 monitor_address=192.168.1.2 ansible_user=ceph vlad-ceph3
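
Laid out as a ceph-ansible INI inventory, the fragment quoted above would presumably look like this (the third host's address is cut off in the preview, so the value for vlad-ceph3 is only a placeholder):

    [mons]
    vlad-ceph1 monitor_address=192.168.1.1 ansible_user=ceph
    vlad-ceph2 monitor_address=192.168.1.2 ansible_user=ceph
    # address truncated in the preview; placeholder only
    vlad-ceph3 monitor_address=<mon3-ip> ansible_user=ceph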

[ceph-users] cephfs-data-scan orphan objects while mds active?

2024-05-13 Thread Olli Rajala
Hi, I suspect that I have some orphan objects on a data pool after quite haphazardly evicting and removing a cache pool after deleting 17TB of files from cephfs. I have forward scrubbed the MDS and the filesystem is in a clean state. This is a production system and I'm curious if it would be safe
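
One rough way to enumerate candidate orphans, sketched here only as an illustration (pool and MDS names are placeholders, bulk rados listings on a production pool carry their own I/O cost, and this is not a statement that doing so is safe while the MDS is active):

    # object names in a CephFS data pool are "<inode-hex>.<block-index>";
    # collect the distinct inode prefixes
    rados -p cephfs_data ls | awk -F. '{print $1}' | sort -u > /tmp/pool_inodes.txt

    # for a suspect inode, ask the active MDS whether it still exists in the metadata
    # (dump inode takes a decimal inode number, so convert the hex prefix first)
    printf '%d\n' 0x10000000001
    ceph tell mds.<active-mds> dump inode 1099511627777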

[ceph-users] Re: Multisite: metadata behind on shards

2024-05-13 Thread Christian Rohmann
On 13.05.24 5:26 AM, Szabo, Istvan (Agoda) wrote: I wonder what the mechanism behind the sync is, because I need to restart all the gateways on the remote sites every 2 days to keep them in sync. (Octopus 15.2.7) We've also seen lots of those issues with stuck RGWs with earlier
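
For anyone hitting the same thing, a minimal sketch of the usual commands for inspecting why metadata shards stay behind (run on the affected secondary zone; zone names omitted):

    # overall sync state, including which shards are behind
    radosgw-admin sync status

    # metadata sync detail on the zone pulling from the metadata master
    radosgw-admin metadata sync status

    # per-shard sync errors, often the quickest hint at why shards get stuck
    radosgw-admin sync error list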

[ceph-users] Re: Determine client/inode/dnode source of massive explosion in CephFS metadata pool usage (Red Hat Nautilus CephFS)

2024-05-13 Thread Eugen Block
I just read your message again; you only mention newly created files, not new clients. So my suggestion probably won't help you in this case, but it might help others. :-) Quote from Eugen Block: Hi Paul, I don't really have a good answer to your question, but maybe this approach can

[ceph-users] Re: Determine client/inode/dnode source of massive explosion in CephFS metadata pool usage (Red Hat Nautilus CephFS)

2024-05-13 Thread Eugen Block
Hi Paul, I don't really have a good answer to your question, but maybe this approach can help track down the clients. Each MDS client has an average "uptime" metric stored in the MDS: storage01:~ # ceph tell mds.cephfs.storage04.uxkclk session ls ... "id": 409348719, ...
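
A small sketch of how that metric could be used to spot recently (re)connected clients, assuming the session entries expose an "uptime" field and the usual client_metadata block as in the output quoted above (the hostname path is an assumption):

    # dump all client sessions of one MDS as JSON
    ceph tell mds.cephfs.storage04.uxkclk session ls > sessions.json

    # sort clients by uptime; the smallest values are the most recently connected
    jq 'sort_by(.uptime) | .[] | {id, uptime, hostname: .client_metadata.hostname}' sessions.json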