On Sep 18, 2021, at 22:50, Eric Dold wrote:
Hi Patrick
Thanks a lot!
After setting
ceph fs compat cephfs add_incompat 7 "mds uses inline data"
the filesystem is working again.
So should I leave this setting as it is now, or do I have to remove it
again in a future update?
If I understand it[1] correctly
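A minimal sketch of the follow-up, assuming the filesystem is named cephfs as above: nothing has to be removed right now, and if a future release ever did require dropping the flag again, a matching command exists.

ceph fs compat cephfs rm_incompat 7    # only if a later upgrade explicitly calls for it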
On Sat, Sep 18, 2021 at 2:28 AM Patrick Donnelly wrote:
On Fri, Sep 17, 2021 at 11:30 AM Robert Sander wrote:
>
> On 17.09.21 16:40, Patrick Donnelly wrote:
>
> > Stopping NFS should not have been necessary. But, yes, reducing
> > max_mds to 1 and disabling allow_standby_replay is required. See:
> > https://docs.ceph.com/en/pacific/cephfs/upgrading/#upgrading-the-mds-cluster
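Roughly the pre-upgrade sequence that page describes, as a sketch (the filesystem name cephfs and the final max_mds value of 2 are only examples; use your own):

ceph fs set cephfs max_mds 1
ceph fs set cephfs allow_standby_replay false
ceph fs status cephfs    # wait until only rank 0 remains active
# ... upgrade/restart the MDS daemons ...
ceph fs set cephfs max_mds 2    # restore the previous setting afterwards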
On Fri, Sep 17, 2021 at 6:57 PM Eric Dold wrote:
>
> Hi Patrick
>
> Here's the output of ceph fs dump:
>
> e226256
> enable_multiple, ever_enabled_multiple: 0,1
> default compat: compat={},rocompat={},incompat={1=base v0.20,2=client
> writeable ranges,3=default file layouts on dirs,4=dir inode in
Hi Patrick
Here's the output of ceph fs dump:
e226256
enable_multiple, ever_enabled_multiple: 0,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client
writeable ranges,3=default file layouts on dirs,4=dir inode in separate
object,5=mds uses versioned encoding,6=dirfrag is stored
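If the plain-text dump keeps getting cut off, the same MDSMap, including each filesystem's compat/incompat flags, can be pulled in structured form; a sketch:

ceph fs dump --format json-pretty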
Thanks again. Now my CephFS is back online!
I ended up building ceph-mon from source myself, with the following patch applied, and replacing only the mon leader seems to be sufficient.
Now I'm interested in why such a routine automated minor version upgrade could get the cluster into such a state in the
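A read-only way to see which monitor is currently the leader before swapping it out; a sketch (jq is only used to pick the field out of the JSON):

ceph quorum_status | grep quorum_leader_name
# or, with jq:
ceph quorum_status | jq -r .quorum_leader_name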
On Fri, Sep 17, 2021 at 3:17 PM 胡 玮文 wrote:
>
> > Did you run the command I suggested before or after you executed `rmfailed`
> > below?
>
> I ran “rmfailed” before reading your mail. Then the MON crashed. I fixed
> the crash by setting max_mds=2. Then I tried the command you suggested.
>
On Fri, Sep 17, 2021 at 2:32 PM 胡 玮文 wrote:
>
> Thank you very much. But the mds still don’t go active.
Did you run the command I suggested before or after you executed
`rmfailed` below?
> While trying to resolve this, I run:
>
> ceph mds rmfailed 0 --yes-i-really-mean-it
>
> ceph mds rmfailed 1
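As the earlier message in this thread reports, the ranks dropped with rmfailed came back after restoring max_mds; a sketch of that recovery and a check of the resulting MDSMap (fs name cephfs assumed):

ceph fs set cephfs max_mds 2
ceph fs dump | grep -E '^(max_mds|in|up)'    # both ranks should reappear here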
ceph-users <ceph-users@ceph.io>
Subject: Re: [ceph-users] Re: Cephfs - MDS all up:standby, not becoming up:active
On Fri, Sep 17, 2021 at 11:11 AM 胡 玮文 wrote:
>
> We are experiencing the same when upgrading to 16.2.6 with cephadm.
>
> I tried
>
> ceph fs set cephfs max_mds 1
> ceph fs set cephfs allow_standby_replay false
>
> but still all MDS go to standby. It seems all ranks are marked failed
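A couple of read-only checks that show whether the ranks really sit in the failed set and what the cluster reports about it; a sketch:

ceph health detail
ceph fs dump | grep -E '^(failed|damaged)'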
> > Stopping NFS should not have been necessary. But, yes, reducing
> > max_mds to 1 and disabling allow_standby_replay is required. See:
> > https://docs.ceph.com/en/pacific/cephfs/upgrading/#upgrading-the-mds-cluster
>
> I do not read upgrade notes any more because I just run
>
> ceph orch up
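For reference, the cephadm upgrade commands being alluded to look roughly like this (the version is only an example):

ceph orch upgrade start --ceph-version 16.2.6
ceph orch upgrade status
ceph orch upgrade pause    # if something looks wrong mid-upgrade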
On 17.09.21 16:40, Patrick Donnelly wrote:
Stopping NFS should not have been necessary. But, yes, reducing
max_mds to 1 and disabling allow_standby_replay is required. See:
https://docs.ceph.com/en/pacific/cephfs/upgrading/#upgrading-the-mds-cluster
I do not read upgrade notes any more because
On Fri, Sep 17, 2021 at 8:19 AM Joshua West wrote:
>
> Thanks Patrick,
>
> Similar to Robert, when trying that, I simply receive "Error EINVAL:
> adding a feature requires a feature string" 10 times.
>
> I attempted to downgrade, but wasn't able to successfully get my mons
> to come back up, as t
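The EINVAL means the monitor insists on an explicit feature string for each id; the id-to-string pairs can be read off the "default compat" line of ceph fs dump quoted earlier in this thread. A sketch (fs name cephfs assumed):

ceph fs dump | grep 'default compat'
ceph fs compat cephfs add_incompat 7 "mds uses inline data"    # repeat per missing id/string pair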
On Fri, Sep 17, 2021 at 8:54 AM Eric Dold wrote:
>
> Hi,
>
> I get the same after upgrading to 16.2.6. All mds daemons are standby.
>
> After setting
> ceph fs set cephfs max_mds 1
> ceph fs set cephfs allow_standby_replay false
> the mds still wants to be standby.
>
> 2021-09-17T14:40:59.371+0200
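Two quick, read-only ways to watch whether a standby actually attempts to take a rank while testing this; a sketch:

ceph mds stat
ceph fs status cephfs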
On Fri, Sep 17, 2021 at 5:54 AM Robert Sander wrote:
>
> Hi,
>
> I had to run
>
> ceph fs set cephfs max_mds 1
> ceph fs set cephfs allow_standby_replay false
>
> and stop all MDS and NFS containers and start one after the other again
> to clear this issue.
Stopping NFS should not have been necessary.
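With cephadm, the MDS containers can also be restarted one at a time through the orchestrator; a sketch (the daemon name is a placeholder, take the real names from the ps output):

ceph orch ps --daemon-type mds
ceph orch daemon restart mds.cephfs.host1.xxxxxx    # repeat per daemon, waiting in between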
Hi,
I get the same after upgrading to 16.2.6. All mds daemons are standby.
After setting
ceph fs set cephfs max_mds 1
ceph fs set cephfs allow_standby_replay false
the mds still wants to be standby.
2021-09-17T14:40:59.371+0200 7f810a58f600 0 ceph version 16.2.6
(ee28fb57e47e9f88813e24bbf4c1449
Thanks Patrick,
Similar to Robert, when trying that, I simply receive "Error EINVAL:
adding a feature requires a feature string" 10 times.
I attempted to downgrade, but wasn't able to successfully get my mons
to come back up, as they had Quincy-specific "mon data structure
changes" or something
Hi,
I had to run
ceph fs set cephfs max_mds 1
ceph fs set cephfs allow_standby_replay false
and stop all MDS and NFS containers and start one after the other again
to clear this issue.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-s
Hi,
On 20.08.21 23:58, Patrick Donnelly wrote:
Your MDSMap compat is probably what's preventing promotion of
standbys. That's a new change in master (which is also being
backported to Pacific). Did you downgrade back to Pacific?
Try:
for i in $(seq 1 10); do ceph fs compat add_incompat $i; done
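Whether the loop took effect can be checked against the MDSMap afterwards; the filesystem's own incompat set should end up matching the "default compat" line (and, as later replies note, 16.2.6 additionally wants a feature string for each id). A sketch:

ceph fs dump | grep -i compat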
Hello Joshua,
On Tue, Aug 10, 2021 at 11:44 AM Joshua West wrote:
>
> Related: Where can I find MDS numeric state references for ceph mds
> set_state GID ?
>
> Like a dummy I accidentally upgraded to the ceph dev branch (quincy?),
> and have been having nothing but trouble since. This wasn't actu