[ceph-users] require_min_compat_client vs min_compat_client

2019-09-13 Thread Alfred
Hi ceph users, If I understand correctly, the "min_compat_client" option in the OSD map was replaced in Luminous with "require_min_compat_client". After upgrading a cluster to Luminous and setting set-require-min-compat-client to jewel, the min_compat_client option still shows as hammer. Is
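
For reference, a minimal sketch of how the two fields are usually inspected and how the requirement is raised on a Luminous cluster (standard ceph CLI; the release names here are just examples):

    # Both fields are printed as part of the OSD map dump
    ceph osd dump | grep min_compat_client

    # Raise the requirement; this refuses to proceed if older clients are still connected
    ceph osd set-require-min-compat-client jewel

    # Override that safety check only if you are sure no pre-jewel clients remain
    ceph osd set-require-min-compat-client jewel --yes-i-really-mean-it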

Re: [ceph-users] Ceph dovecot again

2019-09-13 Thread Danny Al-Gaaf
Hi Marc, let me add Peter, he can probably answer your question. Danny On 13.09.19 at 10:13, Marc Roos wrote: > > > How do I actually configure dovecot to use ceph for a mailbox? I have > built the plugins as mentioned here[0] > > - but where do I copy/load what module? > - can I

Re: [ceph-users] mds directory pinning, status display

2019-09-13 Thread Patrick Donnelly
On Fri, Sep 13, 2019 at 7:09 AM thoralf schulze wrote: > > hi there, > > while debugging metadata servers reporting slow requests, we took a stab > at pinning directories of a cephfs like so: > > setfattr -n ceph.dir.pin -v 1 /tubfs/kubernetes/ > setfattr -n ceph.dir.pin -v 0 /tubfs/profiles/ >

Re: [ceph-users] Ceph RBD Mirroring

2019-09-13 Thread Oliver Freyermuth
On 13.09.19 at 18:38, Jason Dillaman wrote: On Fri, Sep 13, 2019 at 11:30 AM Oliver Freyermuth wrote: On 13.09.19 at 17:18, Jason Dillaman wrote: On Fri, Sep 13, 2019 at 10:41 AM Oliver Freyermuth wrote: On 13.09.19 at 16:30, Jason Dillaman wrote: On Fri, Sep 13, 2019 at 10:17 AM

Re: [ceph-users] Ceph RBD Mirroring

2019-09-13 Thread Jason Dillaman
On Fri, Sep 13, 2019 at 11:30 AM Oliver Freyermuth wrote: > > On 13.09.19 at 17:18, Jason Dillaman wrote: > > On Fri, Sep 13, 2019 at 10:41 AM Oliver Freyermuth > > wrote: > >> > >> On 13.09.19 at 16:30, Jason Dillaman wrote: > >>> On Fri, Sep 13, 2019 at 10:17 AM Jason Dillaman > >>> wrote:

Re: [ceph-users] Ceph RBD Mirroring

2019-09-13 Thread Oliver Freyermuth
On 13.09.19 at 17:18, Jason Dillaman wrote: On Fri, Sep 13, 2019 at 10:41 AM Oliver Freyermuth wrote: On 13.09.19 at 16:30, Jason Dillaman wrote: On Fri, Sep 13, 2019 at 10:17 AM Jason Dillaman wrote: On Fri, Sep 13, 2019 at 10:02 AM Oliver Freyermuth wrote: Dear Jason, thanks for

Re: [ceph-users] Ceph RBD Mirroring

2019-09-13 Thread Jason Dillaman
On Fri, Sep 13, 2019 at 10:41 AM Oliver Freyermuth wrote: > > On 13.09.19 at 16:30, Jason Dillaman wrote: > > On Fri, Sep 13, 2019 at 10:17 AM Jason Dillaman wrote: > >> > >> On Fri, Sep 13, 2019 at 10:02 AM Oliver Freyermuth > >> wrote: > >>> > >>> Dear Jason, > >>> > >>> thanks for the very

Re: [ceph-users] Ceph RBD Mirroring

2019-09-13 Thread Oliver Freyermuth
On 13.09.19 at 16:30, Jason Dillaman wrote: On Fri, Sep 13, 2019 at 10:17 AM Jason Dillaman wrote: On Fri, Sep 13, 2019 at 10:02 AM Oliver Freyermuth wrote: Dear Jason, thanks for the very detailed explanation! This was very instructive. Sadly, the watchers look correct - see details

Re: [ceph-users] Ceph RBD Mirroring

2019-09-13 Thread Oliver Freyermuth
On 13.09.19 at 16:17, Jason Dillaman wrote: On Fri, Sep 13, 2019 at 10:02 AM Oliver Freyermuth wrote: Dear Jason, thanks for the very detailed explanation! This was very instructive. Sadly, the watchers look correct - see details inline. On 13.09.19 at 15:02, Jason Dillaman wrote: On

Re: [ceph-users] Ceph RBD Mirroring

2019-09-13 Thread Jason Dillaman
On Fri, Sep 13, 2019 at 10:17 AM Jason Dillaman wrote: > > On Fri, Sep 13, 2019 at 10:02 AM Oliver Freyermuth > wrote: > > > > Dear Jason, > > > > thanks for the very detailed explanation! This was very instructive. > > Sadly, the watchers look correct - see details inline. > > > > On 13.09

Re: [ceph-users] Ceph RBD Mirroring

2019-09-13 Thread Jason Dillaman
On Fri, Sep 13, 2019 at 10:02 AM Oliver Freyermuth wrote: > > Dear Jason, > > thanks for the very detailed explanation! This was very instructive. > Sadly, the watchers look correct - see details inline. > > On 13.09.19 at 15:02, Jason Dillaman wrote: > > On Thu, Sep 12, 2019 at 9:55 PM Oliver

[ceph-users] mds directory pinning, status display

2019-09-13 Thread thoralf schulze
hi there, while debugging metadata servers reporting slow requests, we took a stab at pinning directories of a cephfs like so: setfattr -n ceph.dir.pin -v 1 /tubfs/kubernetes/ setfattr -n ceph.dir.pin -v 0 /tubfs/profiles/ setfattr -n ceph.dir.pin -v 0 /tubfs/homes on the active mds for rank 0,
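
For reference, a hedged sketch of how export pins are typically set and then verified (the /tubfs paths come from the post; the mds daemon name is a placeholder):

    # Pin one subtree to MDS rank 1 and two others explicitly to rank 0
    setfattr -n ceph.dir.pin -v 1 /tubfs/kubernetes
    setfattr -n ceph.dir.pin -v 0 /tubfs/profiles
    setfattr -n ceph.dir.pin -v 0 /tubfs/homes

    # Read the pin back from the directory
    getfattr -n ceph.dir.pin /tubfs/kubernetes

    # On an MDS host, the admin socket lists the subtrees and the rank
    # that is authoritative for each of them
    ceph daemon mds.<name> get subtrees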

Re: [ceph-users] Ceph RBD Mirroring

2019-09-13 Thread Oliver Freyermuth
Dear Jason, thanks for the very detailed explanation! This was very instructive. Sadly, the watchers look correct - see details inline. On 13.09.19 at 15:02, Jason Dillaman wrote: On Thu, Sep 12, 2019 at 9:55 PM Oliver Freyermuth wrote: Dear Jason, thanks for taking care and developing a
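
For context, a hedged sketch of how the watchers on an RBD image are usually inspected (pool and image names are placeholders):

    # High-level: lists the clients watching the image header
    rbd status mypool/myimage

    # Lower-level: derive the header object from the block name prefix,
    # then ask RADOS directly who is watching it
    rbd info mypool/myimage | grep block_name_prefix   # e.g. rbd_data.<id>
    rados -p mypool listwatchers rbd_header.<id>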

Re: [ceph-users] Ceph Balancer Limitations

2019-09-13 Thread Adam Tygart
Thanks, I moved back to crush-compat mapping; the pool that was at "90% full" is now under 76% full. Before doing that, I had the automatic balancer off and ran 'ceph balancer optimize test'. It ran for 12 hours before I killed it. In upmap mode, it was "balanced" or at least as balanced as it
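
For reference, a hedged sketch of the mgr balancer workflow being described ("test" is the plan name used in the post; the rest is the standard balancer CLI):

    ceph balancer off                 # stop the automatic balancer
    ceph balancer mode crush-compat   # or: ceph balancer mode upmap
    ceph balancer eval                # score the current distribution
    ceph balancer optimize test       # build a plan named "test"
    ceph balancer show test           # inspect the proposed changes
    ceph balancer execute test        # apply the plan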

Re: [ceph-users] Ceph RBD Mirroring

2019-09-13 Thread Jason Dillaman
On Thu, Sep 12, 2019 at 9:55 PM Oliver Freyermuth wrote: > > Dear Jason, > > thanks for taking care and developing a patch so quickly! > > I have another strange observation to share. In our test setup, only a single > RBD mirroring daemon is running for 51 images. > It works fine with a

Re: [ceph-users] CephFS client-side load issues for write-/delete-heavy workloads

2019-09-13 Thread Janek Bevendorff
Here's some more information on this issue. I found the MDS host not to have any load issues, but other clients who have the FS mounted cannot execute statfs/fstatfs on the mount, since the call never returns while my rsync job is running. Other syscalls like fstat work without problems.
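
For context, a hedged way to reproduce the described check from another client: df and "stat -f" both end up issuing statfs(2) against the mount, so wrapping them in a timeout shows whether the call returns (the mount point is a placeholder):

    # Returns immediately on a healthy mount; the fallback message fires if
    # statfs blocks while the heavy rsync workload is running elsewhere
    timeout 15 stat -f /mnt/cephfs || echo "statfs did not return within 15s"
    timeout 15 df -h /mnt/cephfs   || echo "statfs did not return within 15s"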

Re: [ceph-users] reproducible rbd-nbd crashes

2019-09-13 Thread Marc Schöchlin
Hello Jason, On 12.09.19 at 16:56, Jason Dillaman wrote: > On Thu, Sep 12, 2019 at 3:31 AM Marc Schöchlin wrote: > > What's that, have we seen that before? ("Numerical argument out of domain") > It's the error that rbd-nbd prints when the kernel prematurely closes > the socket ... and as we

Re: [ceph-users] multiple RESETSESSION messages

2019-09-13 Thread Konstantin Shalygin
We have a 5-node Luminous cluster on which we see multiple RESETSESSION messages for OSDs on the last node alone: 's=STATE_CONNECTING_WAIT_CONNECT_REPLY_AUTH pgs=2613 cs=1 l=0).handle_connect_reply connect got RESETSESSION' We found the below fix for this issue, but are not able to identify the

[ceph-users] multiple RESETSESSION messages

2019-09-13 Thread nokia ceph
Hi, We have a 5-node Luminous cluster on which we see multiple RESETSESSION messages for OSDs on the last node alone: 's=STATE_CONNECTING_WAIT_CONNECT_REPLY_AUTH pgs=2613 cs=1 l=0).handle_connect_reply connect got RESETSESSION' We found the below fix for this issue, but are not able to identify the

[ceph-users] CephFS client-side load issues for write-/delete-heavy workloads

2019-09-13 Thread Janek Bevendorff
Hi, There have been various stability issues with the MDS that I reported a while ago; most of them have been addressed, and fixes will be available in upcoming patch releases. However, there also seem to be problems on the client side, which I have not reported so far. Note: This report

[ceph-users] Ceph dovecot again

2019-09-13 Thread Marc Roos
How do I actually configure dovecot to use ceph for a mailbox? I have built the plugins as mentioned here[0] - but where do I copy/load what module? - can I configure a specific mailbox only, via e.g. userdb: test3:x:8267:231:Account with special settings for
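
Not an authoritative answer, but a minimal sketch of the kind of configuration the dovecot-ceph-plugin documentation describes, assuming its "rbox" mailbox format; the setting names, pool name and paths below are illustrative and should be checked against the plugin's own README:

    # conf.d/10-mail.conf -- store mail in the ceph-backed rbox format
    mail_location = rbox:~/rbox

    # conf.d/90-plugin.conf -- plugin settings (key name assumed, verify it)
    plugin {
      rbox_pool_name = mail_storage
    }

    # Limiting this to one mailbox: a passwd-file userdb entry can override
    # mail_location for a single account via the userdb_mail extra field
    test3:x:8267:231::/home/test3::userdb_mail=rbox:~/rbox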

[ceph-users] CephFS deletion performance

2019-09-13 Thread Hector Martin
We have a cluster running CephFS with metadata on SSDs and data split between SSDs and HDDs (the main pool is on HDDs, some subtrees are on an SSD pool). We're seeing quite poor deletion performance, especially for directories. It seems that previously empty directories are often deleted