[ceph-users] Re: MDS stuck in replay

2022-06-04 Thread Ramana Venkatesh Raja
On Tue, May 31, 2022 at 3:42 AM Magnus HAGDORN wrote:
>
> So, we are wondering what it is up to. How long it might take. And is
> there something we can do to speed up the replay phase.
>
I'm not sure what can be done to speed up replay for MDSes in your nautilus cluster since they are already …

[ceph-users] Re: MDS stuck in replay

2022-06-04 Thread Ramana Venkatesh Raja
> …een
> to use this as a learning opportunity to see what we can do to bring
> this filesystem back to life.
>
> On Wed, 2022-06-01 at 20:11 -0400, Ramana Venkatesh Raja wrote:
> > Can you temporarily turn up the MDS debug log level (debug_mds) to
> >
> > check what's happening to this …
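The debug level mentioned above can be changed at runtime without restarting the daemon. A minimal sketch, assuming an MDS daemon named `mds1` (a hypothetical name; substitute the one shown by `ceph fs status`):

```shell
# Temporarily raise MDS verbosity to see what replay is doing.
# debug_mds 20 is very chatty; only keep it up while you capture logs.
ceph tell mds.mds1 injectargs '--debug_mds 20'

# ... inspect the MDS log (e.g. /var/log/ceph/ceph-mds.mds1.log on the
# host running the daemon) while replay progresses ...

# Turn the level back down once you have what you need.
ceph tell mds.mds1 injectargs '--debug_mds 1'
```

High debug levels generate a lot of log traffic and can themselves slow the daemon, so lowering the level again promptly is advisable.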

[ceph-users] Re: Help needed picking the right amount of PGs for (Cephfs) metadata pool

2022-06-02 Thread Ramana Venkatesh Raja
On Thu, Jun 2, 2022 at 11:40 AM Stefan Kooman wrote:
>
> Hi,
>
> We have a CephFS filesystem holding 70 TiB of data in ~ 300 M files and
> ~ 900 M sub directories. We currently have 180 OSDs in this cluster.
>
> POOL  ID  PGS  STORED  (DATA)  (OMAP)  OBJECTS  USED
> (DATA) …
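The usual starting point for sizing a pool's `pg_num` is the rule of thumb of roughly 100 PGs per OSD across all pools, scaled by the fraction of data the pool holds and divided by the replica count, then rounded to a power of two. A sketch of that arithmetic (the helper name and the 4% metadata fraction are illustrative assumptions, not from the thread):

```python
import math

def estimate_pg_count(osds, replicas=3, target_pgs_per_osd=100, data_fraction=1.0):
    """Rule-of-thumb pg_num estimate:
    (osds * target_pgs_per_osd * data_fraction) / replicas,
    rounded to the nearest power of two (Ceph expects power-of-two pg_num)."""
    raw = osds * target_pgs_per_osd * data_fraction / replicas
    power = max(1, round(math.log2(raw)))
    return 2 ** power

# With the 180 OSDs mentioned in the thread, a metadata pool assumed to
# deserve ~4% of the cluster's PGs at 3x replication:
print(estimate_pg_count(180, replicas=3, data_fraction=0.04))  # → 256
```

For a metadata pool the omap-heavy workload matters more than raw capacity, so this is only a first guess to refine with the autoscaler or observed per-OSD PG counts.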

[ceph-users] Re: MDS stuck in replay

2022-06-01 Thread Ramana Venkatesh Raja
On Tue, May 31, 2022 at 3:42 AM Magnus HAGDORN wrote:
>
> Hi all,
> it seems to be the time of stuck MDSs. We also have our ceph filesystem
> degraded. The MDS is stuck in replay for about 20 hours now.
>
> We run a nautilus ceph cluster with about 300TB of data and many
> millions of files. We …

[ceph-users] Re: df shows wrong size of cephfs share when a subdirectory is mounted

2022-04-19 Thread Ramana Venkatesh Raja
On Sat, Apr 16, 2022 at 10:15 PM Ramana Venkatesh Raja wrote:
>
> On Thu, Apr 14, 2022 at 8:07 PM Ryan Taylor wrote:
> >
> > Hello,
> >
> > I am using cephfs via Openstack Manila (Ussuri I think).
> >
> > The cephfs cluster is v14.2.22 and …

[ceph-users] Re: df shows wrong size of cephfs share when a subdirectory is mounted

2022-04-16 Thread Ramana Venkatesh Raja
On Thu, Apr 14, 2022 at 8:07 PM Ryan Taylor wrote:
>
> Hello,
>
> I am using cephfs via Openstack Manila (Ussuri I think).
>
> The cephfs cluster is v14.2.22 and my client has kernel
> 4.18.0-348.20.1.el8_5.x86_64
>
> I have a Manila share …
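Context for the df behaviour discussed in this thread: `df` on a CephFS mount reports the `ceph.quota.max_bytes` quota of the nearest quota-bearing ancestor directory, so a subdirectory mounted on its own may show the parent share's (or the whole cluster's) size rather than its own. A sketch of setting and checking a quota on the subdirectory itself, on a client with the share mounted (the mount path and 10 GiB value are hypothetical):

```shell
# Give the subdirectory its own quota so df on a mount of it
# reports this limit instead of the ancestor's.
setfattr -n ceph.quota.max_bytes -v 10737418240 /mnt/share/subdir  # 10 GiB

# Verify the quota xattr is in place.
getfattr -n ceph.quota.max_bytes /mnt/share/subdir
```

Quota enforcement and df reporting for these xattrs require a client (kernel or FUSE) recent enough to understand CephFS quotas.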

[ceph-users] Re: Path to a cephfs subvolume

2022-03-22 Thread Ramana Venkatesh Raja
On Tue, Mar 22, 2022 at 11:24 AM Robert Vasek wrote:
>
> Hello,
>
> I have a question about cephfs subvolume paths. The path to a subvol seems
> to be in the format of //, e.g.:
>
> /volumes/csi/csi-vol-59c3cb5a-a9ee-11ec-b412-0242ac110004/b2b5a0b3-e02b-4f93-a3f5-fdcef80ebbea
>
> I'm wondering …
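The example path above follows the layout `/volumes/` + subvolume group + subvolume name + a per-subvolume UUID directory (the canonical way to obtain it is `ceph fs subvolume getpath`). A sketch of splitting such a path into its parts, assuming that layout (the helper name is illustrative):

```python
from pathlib import PurePosixPath

def split_subvolume_path(path):
    """Split a /volumes/<group>/<subvolume>/<uuid> path (assumed layout)
    into its (group, subvolume, uuid) components."""
    parts = PurePosixPath(path).parts  # ('/', 'volumes', group, subvol, uuid)
    if len(parts) < 5 or parts[0] != "/" or parts[1] != "volumes":
        raise ValueError(f"not a subvolume path: {path}")
    group, subvol, uuid = parts[2:5]
    return group, subvol, uuid

print(split_subvolume_path(
    "/volumes/csi/csi-vol-59c3cb5a-a9ee-11ec-b412-0242ac110004/"
    "b2b5a0b3-e02b-4f93-a3f5-fdcef80ebbea"))
```

The trailing UUID directory is where the subvolume's data actually lives, which is why the path handed to clients has one more component than just group/subvolume.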

[ceph-users] Re: are you using nfs-ganesha builds from download.ceph.com

2022-01-12 Thread Ramana Venkatesh Raja
On Wed, Jan 12, 2022 at 10:24 AM Dan van der Ster wrote:
>
> Dear Ceph Users,
>
> There was a question at the CLT today about the nfs-ganesha builds at:
> https://download.ceph.com/nfs-ganesha/
>
> Are people actively using those? Is there a reason you don't use the
> builds from …

[ceph-users] Re: NFS Ganesha 2.7 in Xenial not available

2020-07-03 Thread Ramana Venkatesh Raja
Hi Victoria and Goutham,

I triggered a jenkins build and the nfs-ganesha packages are up for now:
https://jenkins.ceph.com/job/nfs-ganesha-stable/480/

The cephfs-nfs driver manila job also passed:
https://review.opendev.org/#/c/733161/

[ceph-users] Re: NFS Ganesha 2.7 in Xenial not available

2020-06-23 Thread Ramana Venkatesh Raja
On Tue, Jun 23, 2020 at 6:59 PM Victoria Martinez de la Cruz wrote:
>
> Hi folks,
>
> I'm hitting issues with the nfs-ganesha-stable packages [0], the repo url
> [1] is broken. Is there a known issue for this?
>
The missing packages in chacra could be due to the recent mishap in the sepia long …