[ceph-users] How to force backfill on undersized pgs?

2020-06-17 Thread Kári Bertilsson
Hello, I'm running Ceph 14.2.9. During heavy backfilling due to rebalancing, one OSD crashed. I want to recover the data from the lost OSD before continuing the backfilling, so I out'ed the lost OSD and ran "ceph osd set norebalance". But I'm noticing that with the norebalance flag set the system does
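A minimal sketch of the sequence being described (OSD and PG ids are placeholders; on Nautilus, force-backfill can move specific PGs to the front of the queue):

    ceph osd out <osd-id>                        # remap PGs off the lost OSD
    ceph osd set norebalance                     # pause rebalancing of misplaced PGs
    ceph pg ls undersized                        # find the undersized PGs
    ceph pg force-backfill <pgid> [<pgid> ...]   # push those PGs to the front of the backfill queue
    ceph osd unset norebalance                   # resume rebalancing once recovered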

[ceph-users] Re: Calculate recovery time

2020-06-17 Thread Janne Johansson
On Wed, 17 Jun 2020 at 22:48, Seena Fallah wrote: > Yes I know but any point of view for backfill or priority used in Ceph when recovering? Client traffic defaults to highest prio, then scrubs+recovery, then rebalancing of misplaced PGs, if I recall correctly. The exception would be if you have
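The relevant knobs behind that ordering, as a hedged sketch (values shown are the usual Nautilus defaults, for illustration only):

    ceph config set osd osd_client_op_priority 63     # client I/O is weighted highest
    ceph config set osd osd_recovery_op_priority 3    # recovery/backfill ops are weighted lower
    ceph config set osd osd_max_backfills 1           # concurrent backfills per OSD
    ceph config set osd osd_recovery_max_active 3     # concurrent recovery ops per OSD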

[ceph-users] Re: mount cephfs with autofs

2020-06-17 Thread Derrick Lin
Thanks Eugen. Your environment is different from mine, but I will take your example and keep exploring. Thanks! Cheers, D On Wed, Jun 17, 2020 at 4:58 PM Eugen Block wrote: > Hi, > the autofs on our clients is configured to use LDAP, so there's one > more layer. > This is the current setup: >
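For anyone searching the archives, a minimal autofs map for a kernel CephFS mount might look like this (the monitor address, mount point and secret-file path are assumptions, not details from Eugen's LDAP-backed setup):

    # /etc/auto.master
    /mnt/auto  /etc/auto.ceph  --timeout=60

    # /etc/auto.ceph -- more monitors can be listed comma-separated before the ":/"
    cephfs  -fstype=ceph,name=admin,secretfile=/etc/ceph/admin.secret  mon1:/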

[ceph-users] Re: Nautilus latest builds for CentOS 8

2020-06-17 Thread Ken Dreyer
On Wed, Jun 17, 2020 at 9:25 AM David Galloway wrote: > If there will be a 14.2.10 or 14.3.0 (I don't actually know), it will be > built and signed for CentOS 8. > > Is this sufficient? Yes, thanks!

[ceph-users] Re: Calculate recovery time

2020-06-17 Thread Seena Fallah
Yes, I know, but is there any point of view on the backfill or recovery priorities used in Ceph? On Wed, Jun 17, 2020 at 11:00 AM Janne Johansson wrote: > On Wed, 17 Jun 2020 at 02:14, Seena Fallah wrote: >> Hi all. >> Is there any way that I could calculate how much time it takes to add >> an OSD to my
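There is no exact formula; a rough estimate is the amount of misplaced/degraded data divided by the recovery throughput the cluster actually sustains, which ceph status reports while backfill is running:

    ceph -s | grep recovery    # e.g. "recovery: 120 MiB/s, 30 objects/s"
    # rough estimate: time ≈ bytes still to move ÷ observed recovery rate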

[ceph-users] Re: Jewel clients on recent cluster

2020-06-17 Thread Eugen Block
Hi, I believe this command shows the desired output: ceph daemon mon.<id> sessions Quoting Christoph Ackermann: Hi all, we have a cluster starting from jewel to octopus nowadays. We would like to enable Upmap but unfortunately there are some old Jewel clients active. We cannot force Upmap
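A quick way to get the same overview cluster-wide is ceph features, which groups connected clients by release (a sketch; <id> is the local monitor's name):

    ceph features                                    # counts clients per release/feature set
    ceph daemon mon.<id> sessions | grep -i jewel    # per-session detail on one monitor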

[ceph-users] Jewel clients on recent cluster

2020-06-17 Thread Christoph Ackermann
Hi all, we have a cluster that started on Jewel and runs Octopus nowadays. We would like to enable Upmap, but unfortunately there are some old Jewel clients active. We cannot force Upmap by: ceph osd set-require-min-compat-client luminous Because of the production state, we must not lose any client. ;-)
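For reference, once the session listings (see Eugen's reply above) show no live Jewel clients, the switch itself is a single command; the override flag shown here forcibly drops pre-Luminous clients, so this is a sketch only, not a recommendation:

    ceph osd set-require-min-compat-client luminous
    # refuses while pre-Luminous clients are connected; override only if you are certain:
    ceph osd set-require-min-compat-client luminous --yes-i-really-mean-it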

[ceph-users] Bucket link problem with tenants

2020-06-17 Thread Benjamin . Zieglmeier
Hello, I have a Ceph object cluster (12.2.11) and I am unable to figure out how to link a bucket to a new user when tenants are involved. If no tenant is involved (default tenant), I am able to link the bucket to a new user in the default tenant just fine. (e.g.: radosgw-admin bucket link --buc
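For tenanted buckets, the documented pattern is to qualify both the bucket and the uid with the tenant; a sketch with placeholder names (some Luminous builds reportedly also need --bucket-id):

    # link bucket "mybucket" in tenant "acme" to user "bob" in the same tenant
    radosgw-admin bucket link --bucket='acme/mybucket' --uid='acme$bob'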

[ceph-users] Re: Nautilus latest builds for CentOS 8

2020-06-17 Thread David Galloway
On 6/16/20 9:07 AM, kefu chai wrote: > On Mon, Jun 15, 2020 at 11:31 PM kefu chai wrote: >> On Mon, Jun 15, 2020 at 7:27 PM Giulio Fidente wrote: >>> hi David, thanks for helping >>> python3-Cython seems to be already in the centos8 PowerTools repo: >>> http://mirror.centos.org/c
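On CentOS 8 that repo is disabled by default, so picking up the dependency would look something like this (a sketch; requires dnf-plugins-core):

    dnf config-manager --set-enabled PowerTools
    dnf install python3-Cython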

[ceph-users] Re: [NFS-Ganesha-Support] Re: bug in nfs-ganesha? and cephfs?

2020-06-17 Thread Jeff Layton
This: 2020-06-14 14:36:26.133 7fb5edd82700 0 log_channel(cluster) log [WRN] : client.4022217 isn't responding to mclientcaps(revoke), ino 0x11b9177 pending pAsLsXs issued pAsLsXsFs, sent 60.158122 seconds ago. The client is not responding to a revoke of FILE_SHARED (Fs) caps. We've seen
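When a client hangs on a caps revoke like this, the stuck session can be inspected and, as a last resort, evicted; a sketch only, since eviction disrupts the client's mount (<id> is the MDS rank or name, and the client id comes from the warning):

    ceph tell mds.<id> session ls               # find the session for client.4022217
    ceph tell mds.<id> client evict id=4022217  # forcibly drop the unresponsive client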

[ceph-users] Re: advantage separate cluster network on single interface

2020-06-17 Thread Martin Verges
In my opinion, the additional, error-prone work of configuring, maintaining, and monitoring separate networks outweighs the small benefits. In the past, we saw lots of clusters with reduced availability due to misconfigured or broken networks. This went so far that we included a networ
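For context, the split under discussion is just two options in ceph.conf; the subnets below are placeholders:

    [global]
        public_network  = 192.168.1.0/24   # client and monitor traffic
        cluster_network = 192.168.2.0/24   # OSD replication and backfill traffic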

[ceph-users] Re: Combining erasure coding and replication?

2020-06-17 Thread Brett Randall
Hi Darren, The buildings are connected by fibre, currently with equipment running at 40 Gbps, soon to be 100 Gbps, with sub-millisecond latency, so if the data is getting pulled from either building it's not a massive issue, as long as the cluster can stay operational if an entire building goes d
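One common way to express "survive the loss of a building" is to make the building the CRUSH failure domain; a sketch with placeholder names, assuming hosts already sit under datacenter buckets and that there are at least as many buildings as copies/shards:

    # replicated pool: each copy lands in a different building
    ceph osd crush rule create-replicated rep-dc default datacenter
    # EC profile: each shard in a different building (k+m must not exceed the building count)
    ceph osd erasure-code-profile set ec-dc k=2 m=2 crush-failure-domain=datacenter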