[ceph-users] Re: cephfs-top doesn't work

2022-04-19 Thread Jos Collin
This doesn't break anything, but the current version of cephfs-top cannot accommodate a great number of clients. The workaround is to limit the number of clients (if that's possible) or reduce the terminal zoom/font size to accommodate 100 clients. We have a tracker [1] to implement the limit als
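A quick way to see how many clients cephfs-top would have to render is to check the session counts first; a minimal sketch, where <name> stands for one of your MDS daemons:
    # How many CephFS clients is cephfs-top going to display?
    ceph fs status                     # per-filesystem client count
    ceph tell mds.<name> session ls    # full session list for a single MDS daemon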

[ceph-users] Re: v17.2.0 Quincy released

2022-04-19 Thread Harry G. Coin
Great news!  Any notion when the many pending bug fixes will show up in Pacific?  It's been a while. On 4/19/22 20:36, David Galloway wrote: We're very happy to announce the first stable release of the Quincy series. We encourage you to read the full release notes at https://ceph.io/en/news/

[ceph-users] v17.2.0 Quincy released

2022-04-19 Thread David Galloway
We're very happy to announce the first stable release of the Quincy series. We encourage you to read the full release notes at https://ceph.io/en/news/blog/2022/v17-2-0-quincy-released/ Getting Ceph * Git at git://github.com/ceph/ceph.git * Tarball at https://download.ceph.com/tar
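For anyone who wants to build this release from source, a minimal sketch of checking out exactly the announced version from the Git location above (the tag name v17.2.0 is assumed to match the release):
    # Shallow clone of the v17.2.0 release tag, including submodules
    git clone --branch v17.2.0 --depth 1 --recurse-submodules git://github.com/ceph/ceph.git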

[ceph-users] Re: globally disable radosgw lifecycle processing

2022-04-19 Thread Matt Benjamin
Hi Christopher, Yes, you will need to restart the rgw instance(s). Matt On Tue, Apr 19, 2022 at 3:13 PM Christopher Durham wrote: > > > Hello, > I am using radosgw with lifecycle processing on multiple buckets. I may have > need to globally disable lifecycle processing and do some investigation
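As an illustration of "disable globally, then restart", one way to do this is via the rgw_enable_lc_threads option in the monitor config database; a sketch only, set at the global level for simplicity:
    # Stop lifecycle processing on all rgw instances, then restart them
    ceph config set global rgw_enable_lc_threads false
    ceph orch restart <rgw-service>        # or restart each radosgw manually
    # Re-enable after the investigation and restart again
    ceph config set global rgw_enable_lc_threads true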

[ceph-users] Re: df shows wrong size of cephfs share when a subdirectory is mounted

2022-04-19 Thread Ryan Taylor
Thanks for the pointers! It does look like https://tracker.ceph.com/issues/55090 and I am not surprised Dan and I are hitting the same issue... I am using the latest available AlmaLinux 8, 4.18.0-348.20.1.el8_5.x86_64. Installing kernel-debuginfo-common-x86_64, I see in /usr/src/debug/kernel-4.18
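For reference, the size df reports on a CephFS mount should follow the nearest quota above the mount point; a minimal comparison, with /mnt/share as a hypothetical mount of the Manila share:
    # Read the quota set on the share and compare it with what statfs/df reports
    getfattr -n ceph.quota.max_bytes /mnt/share
    df -h /mnt/share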

[ceph-users] CephFS health warnings after deleting millions of files

2022-04-19 Thread David Turner
A rogue process wrote 38M files into a single CephFS directory that took about a month to delete. We had to increase MDS cache sizes to handle the increased file volume, but we've been able to reduce all of our settings back to default. The Ceph cluster is 15.2.11. CephFS clients are ceph-fuse either
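For context, the cache knob that usually gets raised (and later reverted) in this situation is mds_cache_memory_limit; a sketch, with the 16 GiB value purely illustrative:
    # Temporarily raise the MDS cache memory limit while the deletions drain
    ceph config set mds mds_cache_memory_limit 17179869184   # 16 GiB, example value
    # Drop the override again once trimming has caught up
    ceph config rm mds mds_cache_memory_limit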

[ceph-users] Ceph mon issues

2022-04-19 Thread Ilhaan Rasheed
Hello Ceph users, I have two issues affecting mon nodes in my Ceph cluster. 1) The mon store keeps growing: the store.db directory (/var/lib/ceph/mon/ceph-v60/store.db/) has grown by almost 20G in the last two days. I've been clearing up space in /var and have grown /var a few times. I have compacted the mon store
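Compaction can also be triggered on the running monitor; a sketch, assuming the mon id v60 taken from the path above (note that the store tends to keep growing as long as the cluster is not healthy, regardless of compaction):
    # Ask the monitor to compact its store, then check the on-disk size
    ceph tell mon.v60 compact
    du -sh /var/lib/ceph/mon/ceph-v60/store.db/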

[ceph-users] Re: Is it normal Ceph reports "Degraded data redundancy" in normal use?

2022-04-19 Thread Kai Stian Olstad
On 18.04.2022 21:35, Wesley Dillingham wrote: If you mark an OSD "out" but not down / you don't stop the daemon, do the PGs go remapped or do they go degraded then as well? First I made sure the balancer was active, then I marked one OSD "out" with "ceph osd out 34" and checked the status every 2 seconds
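The test loop described here, roughly, as a sketch (OSD id 34 taken from the message):
    ceph balancer status      # confirm the balancer is active
    ceph osd out 34           # mark the OSD out without stopping the daemon
    watch -n 2 ceph status    # watch the PG states every 2 seconds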

[ceph-users] Re: Ceph RGW Multisite Multi Zonegroup Build Problems

2022-04-19 Thread Ulrich Klein
After a bunch of attempts to get multiple zonegroups with RGW multi-site to work, I have a question: Has anyone successfully created a working setup with multiple zonegroups with RGW multi-site using a cephadm/ceph orch installation of Pacific? Ciao, Uli > On 19. 04 2022, at 14:33, Ulrich K

[ceph-users] OSD doesn't get marked out if other OSDs are already out

2022-04-19 Thread Julian Einwag
Hi, I’m currently playing around with a little Ceph test cluster and I’m trying to understand why a down OSD won’t get marked out under certain conditions. It’s a three-node cluster with three OSDs in each node; mon_osd_down_out_interval is set to 120 seconds. I’m running version 16.2.7. There
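Besides the down/out interval mentioned above, a related setting worth reading back is mon_osd_min_in_ratio, which stops further automatic mark-outs once too large a fraction of OSDs is already out; a sketch of checking both:
    ceph config get mon mon_osd_down_out_interval   # 120 in this test
    ceph config get mon mon_osd_min_in_ratio        # no auto mark-out below this "in" fraction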

[ceph-users] Re: df shows wrong size of cephfs share when a subdirectory is mounted

2022-04-19 Thread Hendrik Peyerl
I hit this issue as well: https://tracker.ceph.com/issues/38482. You will need a kernel >= 5.2 that can handle the quotas on subdirectories. > On 19. Apr 2022, at 14:47, Ramana Venkatesh Raja wrote: > > On Sat, Apr 16, 2022 at 10:15 PM Ramana Venkatesh Raja > wrote: >> >> On Thu, Apr 14,
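A quick client-side check of that requirement, as a sketch:
    # Kernel clients need >= 5.2 (or a distribution kernel with the fix backported)
    # to enforce quotas on subdirectory mounts
    uname -r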

[ceph-users] Re: df shows wrong size of cephfs share when a subdirectory is mounted

2022-04-19 Thread Ramana Venkatesh Raja
On Sat, Apr 16, 2022 at 10:15 PM Ramana Venkatesh Raja wrote: > > On Thu, Apr 14, 2022 at 8:07 PM Ryan Taylor wrote: > > > > Hello, > > > > > > I am using cephfs via Openstack Manila (Ussuri I think). > > > > The cephfs cluster is v14.2.22 and my client has kernel > > 4.18.0-348.20.1.el8_5.x86_

[ceph-users] Re: Ceph RGW Multisite Multi Zonegroup Build Problems

2022-04-19 Thread Ulrich Klein
Hi, I'm trying to do the same as Mark. Basically the same problem. Can’t get it to work. The --master doesn’t make much of a difference for me. Any other idea, maybe? Ciao, Uli On Cluster #1 ("nceph"): radosgw-admin realm create --rgw-realm=acme --default radosgw-admi

[ceph-users] Re: Ceph RGW Multisite Multi Zonegroup Build Problems

2022-04-19 Thread Eugen Block
Hi, unless there are copy/paste mistakes involved, I believe you shouldn't specify '--master' for the secondary zone because you did that already for the first zone, which is supposed to be the master zone. You specified '--rgw-zone=us-west-1' as the master zone within your realm, but then
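To make that concrete, a sketch of adding the secondary zone without repeating --master (all names, URLs and keys below are placeholders, not taken from the original setup):
    # On the secondary cluster: pull the realm from the master endpoint,
    # then create the new zone WITHOUT --master (us-west-1 already has that role)
    radosgw-admin realm pull --url=http://<master-rgw>:80 --access-key=<key> --secret=<secret>
    radosgw-admin zone create --rgw-zonegroup=<zonegroup> --rgw-zone=<secondary-zone> \
        --endpoints=http://<secondary-rgw>:80 --access-key=<key> --secret=<secret> --default
    radosgw-admin period update --commit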

[ceph-users] Re: Ceph Multisite Cloud Sync Module

2022-04-19 Thread Soumya Koduri
Hi, On 4/19/22 09:47, Mark Selby wrote: I am trying to get the Ceph Multisite Cloud Sync module working with Amazon S3. The docs are not clear on how the sync module is actually configured. I just want a POC of the simplest config. Can anyone share the config and radosgw-admin commands tha
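For what it's worth, the minimal shape of a cloud sync tier pushing to Amazon S3 looks roughly like the sketch below; the zone name, keys and target path are placeholders, and the exact tier-config keys should be checked against the docs for your release:
    # Create a tier zone of type "cloud" in the existing zonegroup
    radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=aws-tier --tier-type=cloud
    # Point it at S3 with credentials and a target bucket prefix
    radosgw-admin zone modify --rgw-zone=aws-tier \
        --tier-config=connection.endpoint=https://s3.amazonaws.com,connection.access_key=<access>,connection.secret=<secret>,target_path=<prefix>
    radosgw-admin period update --commit
    # Restart the radosgw serving this zone afterwards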