[ceph-users] PG_DAMAGED: Possible data damage: 4 pgs recovery_unfound

2022-08-17 Thread Eric Dold
Hi everyone, It seems like I hit Bug #44286 (Cache tiering shows unfound objects after OSD reboots). I stopped some OSDs to compact the RocksDB on them. Noout was set during this time. Soon after that I got: [ERR] PG_DAMAGED: Possible data damage: 4 pgs
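A minimal sketch of the maintenance sequence described above (offline RocksDB compaction with noout set) and of the commands commonly used to inspect unfound objects afterwards; the OSD id 3, its data path, and the PG id 2.1a are placeholders, not values from this report:

    # keep stopped OSDs from being marked out during maintenance
    ceph osd set noout

    # stop one OSD, compact its RocksDB offline, then bring it back
    systemctl stop ceph-osd@3
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-3 compact
    systemctl start ceph-osd@3

    ceph osd unset noout

    # afterwards, list the unfound objects behind a PG_DAMAGED warning
    ceph health detail
    ceph pg 2.1a list_unfound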

[ceph-users] Re: Cephfs - MDS all up:standby, not becoming up:active

2021-09-18 Thread Eric Dold
lly wrote: > On Fri, Sep 17, 2021 at 6:57 PM Eric Dold wrote: > > > > Hi Patrick > > > > Here's the output of ceph fs dump: > > > > e226256 > > enable_multiple, ever_enabled_multiple: 0,1 > > default compat: compat={},rocompat={},incompat={1=base v0.20,2

[ceph-users] Re: Cephfs - MDS all up:standby, not becoming up:active

2021-09-17 Thread Eric Dold
1 addr [v2: 192.168.1.72:6800/2991378711,v1:192.168.1.72:6801/2991378711] compat {c=[1],r=[1],i=[7ff]}] dumped fsmap epoch 226256 On Fri, Sep 17, 2021 at 4:41 PM Patrick Donnelly wrote: > On Fri, Sep 17, 2021 at 8:54 AM Eric Dold wrote: > > > > Hi, > > > > I g

[ceph-users] Re: Cephfs - MDS all up:standby, not becoming up:active

2021-09-17 Thread Eric Dold
Hi, I get the same after upgrading to 16.2.6. All MDS daemons are standby. After setting ceph fs set cephfs max_mds 1 and ceph fs set cephfs allow_standby_replay false, the MDS still wants to be standby. 2021-09-17T14:40:59.371+0200 7f810a58f600 0 ceph version 16.2.6
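A minimal sketch of the commands referred to above plus the usual status checks, assuming the filesystem is named cephfs as in the report:

    # settings mentioned in the message
    ceph fs set cephfs max_mds 1
    ceph fs set cephfs allow_standby_replay false

    # check whether any MDS rank actually becomes active
    ceph fs status
    ceph mds stat
    ceph fs dump | grep -E 'max_mds|standby_count_wanted|up:'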

[ceph-users] diskprediction_local fails with python3-sklearn 0.22.2

2020-06-04 Thread Eric Dold
Hello, the mgr module diskprediction_local fails under Ubuntu 20.04 (focal) with python3-sklearn version 0.22.2. The Ceph version is 15.2.3. When the module is enabled I get the following error: File "/usr/share/ceph/mgr/diskprediction_local/module.py", line 112, in serve
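A rough sketch of how such a mgr module failure is usually inspected and worked around; the journalctl unit name assumes a default systemd package deployment on the active mgr host:

    # see which mgr modules are enabled and whether one reported an error
    ceph mgr module ls
    ceph health detail

    # work around the crash by disabling the module, re-enable after a fix
    ceph mgr module disable diskprediction_local
    ceph mgr module enable diskprediction_local

    # the full Python traceback ends up in the active mgr's log
    journalctl -u ceph-mgr@$(hostname -s) -e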

[ceph-users] Re: verify_upmap number of buckets 5 exceeds desired 4

2019-09-25 Thread Eric Dold
like CRUSH does not stop picking a host after the first four with the first rule and is complaining when it gets the fifth host. Is this a bug or intended behaviour? Regards Eric On Tue, Sep 17, 2019 at 3:55 PM Eric Dold wrote: > With ceph 14.2.4 it's the same. > The upmap ba
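A minimal sketch of how the rule in question can be inspected and tested outside the cluster; the rule id 1 is a placeholder, not taken from this report:

    # dump and decompile the CRUSH map to read the rule's choose steps
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # simulate how many distinct hosts the rule picks for a 4-replica pool
    crushtool -i crushmap.bin --test --rule 1 --num-rep 4 --show-mappings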

[ceph-users] verify_upmap number of buckets 5 exceeds desired 4

2019-09-11 Thread Eric Dold
Hello, I'm running ceph 14.2.3 on six hosts with four OSDs each. I recently upgraded this from four hosts. The cluster is running fine, but I get this in my logs: Sep 11 11:02:41 ceph1 ceph-mon[1333]: 2019-09-11 11:02:41.953 7f26023a6700 -1 verify_upmap number of buckets 5 exceeds desired 4
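A minimal sketch of the commands typically used to look at the pg-upmap entries that verify_upmap is checking; the PG id 3.2f is a placeholder:

    # list pg_upmap_items entries, e.g. ones created before the expansion
    ceph osd dump | grep upmap

    # drop a stale entry so the balancer can recreate a valid one
    ceph osd rm-pg-upmap-items 3.2f

    # the balancer module manages these entries when running in upmap mode
    ceph balancer status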