From: Francois Legrand
Sent: 09 June 2020 22:20:29
To: Frank Schilder; ceph-users
Subject: Re: [ceph-users] Re: mds behind on trimming - replay until memory exhausted
Hi,
Actually I let the mds
From: Francois Legrand
Sent: 08 June 2020 16:38:18
To: Frank Schilder; ceph-users
Subject: Re: [ceph-users] Re: mds behind on trimming - replay until memory exhausted
I already had some discussion on the list about this problem. But I should ask again.
We really lost some objects and there are not enough shards to reconstruct them.
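A minimal sketch, not from the thread, of how the damage could be confirmed (the PG id 2.1f is only a placeholder):
ceph health detail                      # lists the PGs reporting unfound objects
ceph pg 2.1f list_unfound               # show which objects cannot be recovered in that PG
ceph pg 2.1f mark_unfound_lost delete   # last resort, destructive: give the objects up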
regards,
From: Francois Legrand
Sent: 08 June 2020 16:00:28
To: Frank Schilder; ceph-users
Subject: Re: [ceph-users] Re: mds behind on trimming - replay until memory exhausted
There is no recovery going on.
From: Francois Legrand
Sent: 08 June 2020 15:27:59
To: Frank Schilder; ceph-users
Subject: Re: [ceph-users] Re: mds behind on trimming - replay until memory exhausted
Thanks again for the hint!
Indeed, I did a
ceph daemon mds.lpnceph-mds02.in2p3.fr objecter_requests
and it seems that osd 27 is more or less
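If osd 27 is the one holding up the MDS objecter requests, a rough sketch, not commands from the thread, of how its side could be inspected (the ceph daemon calls have to run on the host of osd.27):
ceph daemon osd.27 dump_ops_in_flight   # ops currently stuck inside the OSD
ceph daemon osd.27 dump_historic_ops    # recently completed slow ops
ceph osd perf                           # latency of all OSDs, to spot an outlier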
From: Francois Legrand
Sent: 08 June 2020 14:45:13
To: Frank Schilder; ceph-users
Subject: Re: [ceph-users] Re: mds behind on trimming - replay until memory exhausted
Hi Frank,
Finally I did:
ceph config set global mds_beacon_grace 60
and created /etc/sysctl.d/sysctl-ceph.conf with
vm.min_free_kbytes=4194303
and then
sysctl --system
After
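As a sanity check (a sketch, not from the thread), the new values can be confirmed with:
ceph config dump | grep mds_beacon_grace   # should show the global 60s override
sysctl vm.min_free_kbytes                  # should report 4194303 once sysctl --system has run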
From: Francois Legrand
Sent: 06 June 2020 11:11
To: Frank Schilder; ceph-users
Subject: Re: [ceph-users] Re: mds behind on trimming - replay until memory exhausted
Thanks for the tip,
I will try that. For now vm.min_free_kbytes = 90112
Indeed, yesterday after your last mail I set mds_beacon_grace to 240.0
but this
From: Frank Schilder
Subject: [ceph-users] Re: mds behind on trimming - replay until memory exhausted
Hi Francois,
yes, the beacon grace needs to be higher due to the latency of swap. Not sure
if 60s will do. For this particular recovery operation, you might want to go
much higher (1h) and watch the cluster health closely
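For reference, a sketch of what "much higher" could look like; the value and the rollback step are illustrative, not from the thread:
ceph config set global mds_beacon_grace 3600   # roughly 1h, only for the duration of the replay
ceph -s                                        # keep an eye on MDS and cluster state while it runs
ceph config rm global mds_beacon_grace         # return to the default once the MDS is active again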
From: Francois Legrand
Sent: 05 June 2020 23:51:04
To: Frank Schilder; ceph-users
Subject: Re: [ceph-users] mds behind on trimming - replay until memory exhausted
Hi,
Unfortunately adding swap did not solve the problem!
I added
To: Frank Schilder; ceph-users
Subject: Re: [ceph-users] mds behind on trimming - replay until memory exhausted
I was also wondering if setting mds dump cache after rejoin could help?
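If it is worth trying, a sketch (the option name should be double-checked against the running version) of how it could be enabled:
ceph config set mds mds_dump_cache_after_rejoin true   # MDS writes its cache to a file after rejoin
Note that this is a debugging aid for post-mortem analysis rather than something that reduces memory use.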
On 05/06/2020 at 12:49, Frank Schilder wrote:
Out of interest, I did the same on a mimic cluster a few months ago, running up to 5 parallel rsync sessions without any problems. I moved about 120TB. Each rsync was running on a separate client with its own cache. I made sure that the sync dirs were all disjoint (no overlap of
. Will take a while, but it will do eventually.
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Francois Legrand
Sent: 05 June 2020 13:46:03
To: Frank Schilder; ceph-users
Subject: Re: [ceph-users] mds behind on trimming - replay until memory exhausted
Hi,
Thanks for your answer.
I have:
osd_op_queue=wpq
osd_op_queue_cut_off=low
I can try to set osd_op_queue_cut_off to high, but it will be useful
only if the mds gets active, true?
For now, the mds_cache_memory_limit is set to 8 589 934 592 (so 8GB
which seems reasonable for a mds server
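If it comes to that, a sketch (not commands from the thread) of how the op-queue change could be applied and the cache limit checked; osd_op_queue_cut_off typically only takes effect after the OSDs are restarted, and the daemon command has to run on the MDS host:
ceph config set osd osd_op_queue_cut_off high
ceph daemon mds.lpnceph-mds02.in2p3.fr config get mds_cache_memory_limit   # currently 8589934592 (8 GiB)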