On Tue, May 31, 2022 at 3:42 AM Magnus HAGDORN wrote:
>
> So, we are wondering what it is up to. How long it might take. And is
> there something we can do to speed up the replay phase.
>
I'm not sure what can be done to speed up replay for MDSes in your
nautilus cluster since they are already
> to use this as a learning opportunity to see what we can do to bring
> this filesystem back to life.
>
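One way to at least see whether replay is making progress is to poll the MDS admin socket and the cluster status. A sketch only: `mds.a` is a placeholder for the actual daemon name, and the `replay_status` fields in the admin-socket output may vary by release:

```shell
# On the host running the replaying MDS, query its admin socket.
# "state" should read up:replay; where available, replay_status exposes
# journal_read_pos vs journal_write_pos, so repeated runs show progress.
ceph daemon mds.a status

# Cluster-wide view of the filesystem and MDS states:
ceph fs status
ceph status
```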
> On Wed, 2022-06-01 at 20:11 -0400, Ramana Venkatesh Raja wrote:
> > Can you temporarily turn up the MDS debug log level (debug_mds) to
> >
> > check what's happening to this
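For reference, on a Nautilus-era cluster the MDS debug level can be raised at runtime roughly like this (a sketch; `mds.a` is a placeholder for your daemon's name, and level 20 is extremely verbose, so revert it once you have captured what you need):

```shell
# Bump MDS debug verbosity on the running daemon (injectargs works on Nautilus)
ceph tell mds.a injectargs '--debug_mds 20'

# Then follow the daemon's log on its host:
tail -f /var/log/ceph/ceph-mds.a.log

# Revert to the default once done:
ceph tell mds.a injectargs '--debug_mds 1/5'
```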
On Thu, Jun 2, 2022 at 11:40 AM Stefan Kooman wrote:
>
> Hi,
>
> We have a CephFS filesystem holding 70 TiB of data in ~ 300 M files and
> ~ 900 M sub directories. We currently have 180 OSDs in this cluster.
>
> POOL  ID  PGS  STORED  (DATA)  (OMAP)  OBJECTS  USED (DATA)
On Tue, May 31, 2022 at 3:42 AM Magnus HAGDORN wrote:
>
> Hi all,
> it seems to be the time of stuck MDSs. We also have our ceph filesystem
> degraded. The MDS is stuck in replay for about 20 hours now.
>
> We run a nautilus ceph cluster with about 300TB of data and many
> millions of files. We
On Sat, Apr 16, 2022 at 10:15 PM Ramana Venkatesh Raja wrote:
>
> On Thu, Apr 14, 2022 at 8:07 PM Ryan Taylor wrote:
> >
> > Hello,
> >
> >
> > I am using cephfs via Openstack Manila (Ussuri I think).
> >
> > The cephfs cluster is v14.2.22 and
On Thu, Apr 14, 2022 at 8:07 PM Ryan Taylor wrote:
>
> Hello,
>
>
> I am using cephfs via Openstack Manila (Ussuri I think).
>
> The cephfs cluster is v14.2.22 and my client has kernel
> 4.18.0-348.20.1.el8_5.x86_64
>
>
> I have a Manila share
>
>
On Tue, Mar 22, 2022 at 11:24 AM Robert Vasek wrote:
>
> Hello,
>
> I have a question about cephfs subvolume paths. The path to a subvol seems
> to be in the format of /volumes/<group>/<subvolume>/<uuid>, e.g.:
>
> /volumes/csi/csi-vol-59c3cb5a-a9ee-11ec-b412-0242ac110004/b2b5a0b3-e02b-4f93-a3f5-fdcef80ebbea
>
> I'm wondering
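For context, the canonical way to resolve such a path (rather than assembling it by hand) is the `ceph fs subvolume getpath` command. A sketch, where the filesystem name `cephfs` is a placeholder and the group/subvolume names are taken from the example above:

```shell
# Returns the full in-filesystem path of the subvolume, including the
# trailing UUID directory, e.g. /volumes/csi/csi-vol-.../b2b5a0b3-...
ceph fs subvolume getpath cephfs \
    csi-vol-59c3cb5a-a9ee-11ec-b412-0242ac110004 --group_name csi
```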
On Wed, Jan 12, 2022 at 10:24 AM Dan van der Ster wrote:
>
> Dear Ceph Users,
>
> There was a question at the CLT today about the nfs-ganesha builds at:
> https://download.ceph.com/nfs-ganesha/
>
> Are people actively using those? Is there a reason you don't use the
> builds from
Hi Victoria and Goutham,
I triggered a jenkins build and the nfs-ganesha packages are up for now.
https://jenkins.ceph.com/job/nfs-ganesha-stable/480/
The cephfs-nfs driver manila job also passed,
https://review.opendev.org/#/c/733161/
On Tue, Jun 23, 2020 at 6:59 PM Victoria Martinez de la Cruz wrote:
>
> Hi folks,
>
> I'm hitting issues with the nfs-ganesha-stable packages [0], the repo url
> [1] is broken. Is there a known issue for this?
>
The missing packages in chacra could be due to the recent mishap in
the sepia long