Re: [ceph-users] MDS segfaults on client connection -- brand new FS

2019-03-08 Thread Gregory Farnum
I don’t have any idea what’s going on here or why it’s not working, but you are using v0.94.7. That release is: 1) out of date for the Hammer cycle, which reached at least v0.94.10; 2) prior to the release where we declared CephFS stable (Jewel, v10.2.0); and 3) way past its supported expiration date.
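
A quick way to confirm which release each daemon is actually running (a minimal sketch; mds.<name> is a placeholder, and the cluster-wide summary command only exists from Luminous v12.2.x onward):

# ceph --version                  (version of the locally installed binaries)
# ceph daemon mds.<name> version  (version reported by a running MDS over its admin socket)
# ceph versions                   (per-daemon-type summary for the whole cluster; Luminous and later only)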

[ceph-users] MDS segfaults on client connection -- brand new FS

2019-03-08 Thread Kadiyska, Yana
Hi, I’m very much hoping someone can unblock me on this – we recently ran into a very odd issue. I sent an earlier email to the list: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-March/033579.html After unsuccessfully trying to repair, we decided to forsake the Filesystem I marked
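
For anyone landing here from a search: the generic sequence for abandoning a damaged filesystem and starting over looks roughly like the following. This is only a sketch and not necessarily what was done in this thread; <fsname> and the pool names are placeholders, the exact subcommands differ on older releases such as Hammer, and it destroys the old filesystem's metadata:

# ceph fs set <fsname> cluster_down true             (stop new client access; newer releases use "ceph fs fail <fsname>")
# ceph mds fail 0                                    (fail the active MDS rank)
# ceph fs rm <fsname> --yes-i-really-mean-it         (remove the filesystem definition)
# ceph fs new <fsname> <metadata_pool> <data_pool>   (create a fresh filesystem, typically on new pools)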

Re: [ceph-users] Failed to repair pg

2019-03-08 Thread Herbert Alexander Faleiros
Hi, [...]

> Now I have:
>
> HEALTH_ERR 5 scrub errors; Possible data damage: 1 pg inconsistent
> OSD_SCRUB_ERRORS 5 scrub errors
> PG_DAMAGED Possible data damage: 1 pg inconsistent
> pg 2.2bb is active+clean+inconsistent, acting [36,12,80]

Jumped from 3 to 5 scrub errors now. did the
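
For an inconsistent PG like this one, the usual inspection/repair loop is sketched below; pg 2.2bb is the PG from the thread, and whether repair is safe depends on what the inconsistency actually is:

# ceph health detail                                       (names the inconsistent PGs)
# rados list-inconsistent-obj 2.2bb --format=json-pretty   (which objects/shards the scrub flagged, and why)
# ceph pg repair 2.2bb                                     (asks the primary to rewrite the bad copies)
# ceph pg deep-scrub 2.2bb                                 (re-verify once the repair has finished)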

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-08 Thread Casey Bodley
(cc ceph-users) Can you tell whether these sync errors are coming from metadata sync or data sync? Are they blocking sync from making progress according to your 'sync status'? On 3/8/19 10:23 AM, Trey Palmer wrote: Casey, Having done the 'reshard stale-instances delete' earlier on the
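
For readers hitting similar multisite issues, the state Casey is asking about can be pulled with radosgw-admin; a sketch, with <other-zone> as a placeholder:

# radosgw-admin sync status                                  (summary of metadata and data sync for this zone)
# radosgw-admin sync error list                              (errors the sync threads have recorded)
# radosgw-admin metadata sync status
# radosgw-admin data sync status --source-zone=<other-zone>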

Re: [ceph-users] 13.2.4 odd memory leak?

2019-03-08 Thread Mark Nelson
On 3/8/19 8:12 AM, Steffen Winther Sørensen wrote: On 8 Mar 2019, at 14.30, Mark Nelson wrote: On 3/8/19 5:56 AM, Steffen Winther Sørensen wrote: On 5 Mar 2019, at 10.02, Paul Emmerich wrote: Yeah, there's a bug in 13.2.4.

Re: [ceph-users] 13.2.4 odd memory leak?

2019-03-08 Thread Steffen Winther Sørensen
> On 8 Mar 2019, at 14.30, Mark Nelson wrote: > > > On 3/8/19 5:56 AM, Steffen Winther Sørensen wrote: >> >>> On 5 Mar 2019, at 10.02, Paul Emmerich wrote: >>> >>> Yeah, there's a bug in 13.2.4. You need to set it to at least ~1.2GB. >> Yeap thanks,

Re: [ceph-users] rbd cache limiting IOPS

2019-03-08 Thread Alexandre DERUMIER
>> (I think I saw a PR about this on the performance meeting pad some months ago) https://github.com/ceph/ceph/pull/25713 - Original message - From: "aderumier" To: "Engelmann Florian" Cc: "ceph-users" Sent: Friday, 8 March 2019 15:03:23 Subject: Re: [ceph-users] rbd cache limiting IOPS

Re: [ceph-users] rbd cache limiting IOPS

2019-03-08 Thread Alexandre DERUMIER
>> Which options do we have to increase IOPS while writeback cache is used? If I remember correctly, there is some kind of global lock/mutex with the rbd cache, and I think there is some work currently underway to improve it. (I think I saw a PR about this on the performance meeting pad some months ago) - Mail
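
For context, the writeback cache being discussed is configured on the client side, typically in the [client] section of ceph.conf. The values below are the upstream defaults of that era, shown only as a sketch of which knobs exist, not as tuning advice:

[client]
rbd cache = true
rbd cache writethrough until flush = true
rbd cache size = 33554432          # 32 MiB
rbd cache max dirty = 25165824     # 24 MiB
rbd cache target dirty = 16777216  # 16 MiB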

Re: [ceph-users] 13.2.4 odd memory leak?

2019-03-08 Thread Mark Nelson
On 3/8/19 5:56 AM, Steffen Winther Sørensen wrote: On 5 Mar 2019, at 10.02, Paul Emmerich wrote: Yeah, there's a bug in 13.2.4. You need to set it to at least ~1.2GB. Yeap thanks, setting it at 1G+256M worked :) Hope this won’t bloat memory during coming weekend VM backups through CephFS

Re: [ceph-users] garbage in cephfs pool

2019-03-08 Thread Fyodor Ustinov
Hi! And more:

# rados df
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR
fsd 0 B 11527769278 69166614 0 00 137451347 61 TiB 46171363 63 TiB
fsdtier 240 KiB 2048 0
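
When pool usage and filesystem contents disagree like this, it can help to compare what the cluster thinks is in the pool with a sample of the objects actually stored there; a sketch, with <pool> as a placeholder:

# ceph df detail             (per-pool USED/OBJECTS as the cluster accounts them)
# rados df                   (per-pool object and op counters, as above)
# rados -p <pool> ls | head  (sample the object names still present in the pool)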

Re: [ceph-users] Failed to repair pg

2019-03-08 Thread Herbert Alexander Faleiros
Hi, thanks for the answer.

On Thu, Mar 07, 2019 at 07:48:59PM -0800, David Zafman wrote:
> See what results you get from this command.
>
> # rados list-inconsistent-snapset 2.2bb --format=json-pretty
>
> You might see this, so nothing interesting. If you don't get json, then
> re-run a scrub
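
One way to do what David suggests, i.e. refresh the inconsistency records and then re-query them, is roughly:

# ceph pg deep-scrub 2.2bb        (then wait for the scrub to finish, e.g. watch "ceph -w" or "ceph pg 2.2bb query")
# rados list-inconsistent-snapset 2.2bb --format=json-pretty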

Re: [ceph-users] 13.2.4 odd memory leak?

2019-03-08 Thread Steffen Winther Sørensen
> On 5 Mar 2019, at 10.02, Paul Emmerich wrote:
>
> Yeah, there's a bug in 13.2.4. You need to set it to at least ~1.2GB.

Yeap thanks, setting it at 1G+256M worked :) Hope this won’t bloat memory during coming weekend VM backups through CephFS

/Steffen

> On Tue, Mar 5, 2019 at 9:00 AM
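
Assuming the option under discussion is osd_memory_target (the excerpt does not name it explicitly), pinning it to 1G + 256M = 1342177280 bytes can be done either in ceph.conf on the OSD hosts or, on Mimic, through the centralized config database; a sketch:

[osd]
osd memory target = 1342177280    # 1.25 GiB, i.e. 1G + 256M

# or at runtime via the config database:
# ceph config set osd osd_memory_target 1342177280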