Hi ceph users,
If I understand correctly, the "min_compat_client" option in the OSD map
was replaced in Luminous with "require_min_compat_client".
After upgrading a cluster to Luminous and setting
set-require-min-compat-client to jewel, the min_compat_client option
still shows as hammer.
Is
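For reference, a minimal sketch of checking and raising this on Luminous
(assumes an admin keyring on a mon node; the exact fields in the dump
output can differ between releases):

  # shows both the legacy min_compat_client and the new
  # require_min_compat_client as recorded in the OSD map
  ceph osd dump | grep min_compat_client

  # raise the enforced requirement
  ceph osd set-require-min-compat-client jewel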
Hi Marc,
let me add Peter, he probably can answer your question.
Danny
On 13.09.19 at 10:13, Marc Roos wrote:
>
>
> How do I actually configure dovecot to use ceph for a mailbox? I have
> built the plugins as mentioned here[0]
>
> - but where do I copy/load what module?
> - can I
On Fri, Sep 13, 2019 at 7:09 AM thoralf schulze wrote:
>
> hi there,
>
> while debugging metadata servers reporting slow requests, we took a stab
> at pinning directories of a cephfs like so:
>
> setfattr -n ceph.dir.pin -v 1 /tubfs/kubernetes/
> setfattr -n ceph.dir.pin -v 0 /tubfs/profiles/
>
hi there,
while debugging metadata servers reporting slow requests, we took a stab
at pinning directories of a cephfs like so:
setfattr -n ceph.dir.pin -v 1 /tubfs/kubernetes/
setfattr -n ceph.dir.pin -v 0 /tubfs/profiles/
setfattr -n ceph.dir.pin -v 0 /tubfs/homes
on the active mds for rank 0,
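As a hedged aside: whether a pin actually took effect can be checked by
reading the vxattr back and by dumping the subtree map on the MDS
(assuming getfattr can read ceph.dir.pin on your client; <name> is a
placeholder for the MDS daemon name):

  # read the pin back from the directory
  getfattr -n ceph.dir.pin /tubfs/kubernetes

  # on the MDS host: pinned subtrees should carry an "export_pin" entry
  ceph daemon mds.<name> get subtrees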
Dear Jason,
thanks for the very detailed explanation! This was very instructive.
Sadly, the watchers look correct - see details inline.
On 13.09.19 at 15:02, Jason Dillaman wrote:
On Thu, Sep 12, 2019 at 9:55 PM Oliver Freyermuth
wrote:
Dear Jason,
thanks for taking care and developing a
Thanks,
I moved back to crush-compat mapping; the pool that was at "90% full"
is now under 76% full.
Before doing that, I had the automatic balancer off, and ran 'ceph
balancer optimize test'. It ran for 12 hours before I killed it. In
upmap mode, it was "balanced" or at least as balanced as it
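For context, a rough sketch of the balancer commands involved (Luminous
mgr balancer module; "myplan" is just a placeholder plan name):

  ceph balancer off                  # stop automatic balancing first
  ceph balancer mode crush-compat    # or: upmap
  ceph balancer optimize myplan      # build a plan; can take very long on large maps
  ceph balancer eval myplan          # score the plan before applying it
  ceph balancer execute myplan       # apply it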
On Thu, Sep 12, 2019 at 9:55 PM Oliver Freyermuth
wrote:
>
> Dear Jason,
>
> thanks for taking care and developing a patch so quickly!
>
> I have another strange observation to share. In our test setup, only a single
> RBD mirroring daemon is running for 51 images.
> It works fine with a
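As an aside, a hedged sketch of how the daemon and per-image state can be
inspected (pool and image names are placeholders):

  # mirroring health for the whole pool, including the daemons registered for it
  rbd mirror pool status --verbose <pool>

  # state of a single image (up+replaying, up+syncing, ...)
  rbd mirror image status <pool>/<image>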
Here's some more information on this issue.
I found the MDS host not to have any load issues, but other clients who
have the FS mounted cannot execute statfs/fstatfs on the mount, since
the call never returns while my rsync job is running. Other syscalls
like fstat work without problems.
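For what it's worth, a quick way to demonstrate the difference from another
client while the rsync runs (paths are placeholders): stat(1) uses
stat/fstat, while stat -f and df go through statfs:

  stat /mnt/cephfs/some/file    # plain stat: returns immediately
  stat -f /mnt/cephfs           # statfs: blocks while the rsync job runs
  df /mnt/cephfs                # also statfs-based, blocks the same way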
Hello Jason,
On 12.09.19 at 16:56, Jason Dillaman wrote:
> On Thu, Sep 12, 2019 at 3:31 AM Marc Schöchlin wrote:
>
> What's that, have we seen that before? ("Numerical argument out of domain")
> It's the error that rbd-nbd prints when the kernel prematurely closes
> the socket ... and as we
Hi,
We have a 5 node Luminous cluster on which we see multiple RESETSESSION
messages for OSDs on the last node alone.
's=STATE_CONNECTING_WAIT_CONNECT_REPLY_AUTH pgs=2613 cs=1
l=0).handle_connect_reply connect got RESETSESSION'
We found the below fix for this issue, but were not able to identify the
Hi,
There have been various stability issues with the MDS that I reported a
while ago; most of them have been addressed, and fixes will be
available in upcoming patch releases. However, there also seem to be
problems on the client side, which I have not reported so far.
Note: This report
How do I actually configure dovecot to use ceph for a mailbox? I have
built the plugins as mentioned here[0]
- but where do I copy/load what module?
- can I configure a specific mailbox only, via e.g. userdb:
test3:x:8267:231:Account with special settings for
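Not an authoritative answer, but a minimal sketch of what the
dovecot-ceph-plugin README seems to describe: the built modules go into
Dovecot's module directory, and the rbox storage format is selected via
mail_location (the storage_rbox plugin name and the rbox: prefix are
assumptions taken from that README):

  # 10-mail.conf (sketch; names and paths are assumptions)
  mail_plugins = $mail_plugins storage_rbox
  mail_location = rbox:~/rbox

A per-user/per-mailbox setup would then presumably be done by returning a
different mail location from the userdb, which is standard Dovecot
behaviour rather than anything specific to the ceph plugin.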
We have a cluster running CephFS with metadata on SSDs and data split
between SSDs and HDDs (main pool is on HDDs, some subtrees are on an SSD
pool).
We're seeing quite poor deletion performance, especially for
directories. It seems that previously empty directories are often
deleted
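As a hedged aside: on Luminous and later, file deletions are handled
asynchronously by the MDS purge queue, which can be watched via the admin
socket while the deletions run (counter and daemon names may differ by
release; <name> is a placeholder):

  # purge queue counters on the active MDS
  ceph daemon mds.<name> perf dump purge_queue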