[ceph-users] Re: mon db growing. over 500Gb

2021-03-11 Thread ricardo.re.azevedo
Hi Andreas, That's good to know. I managed to fix the problem! Here is my journey in case it helps anyone: my system drives are only 512GB, so I added spare 1TB drives to each server and moved the mon db to the new drive. I set noout, nobackfill and norecover and enabled only the ceph mon and osd
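
(For anyone attempting the same move, a rough sketch of the steps described above might look like the following. The hostname "node1", the mount point /mnt/mondb, and the default mon data path are illustrative assumptions, not details from the original post.)

    # pause recovery/backfill while the mon store is being relocated
    ceph osd set noout
    ceph osd set nobackfill
    ceph osd set norecover

    # stop the monitor, move its store to the new drive, and symlink the old path
    systemctl stop ceph-mon@node1
    mv /var/lib/ceph/mon/ceph-node1 /mnt/mondb/
    ln -s /mnt/mondb/ceph-node1 /var/lib/ceph/mon/ceph-node1
    systemctl start ceph-mon@node1

    # clear the flags once the mon is back in quorum and the cluster is healthy
    ceph osd unset noout
    ceph osd unset nobackfill
    ceph osd unset norecover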

[ceph-users] balance OSD usage.

2021-03-05 Thread ricardo.re.azevedo
Hi All, Does anyone know how I can rebalance my cluster to even out the OSD usage? I just added 12 more 14TB HDDs to my cluster (a cluster made up of 12TB and 14TB disks), bringing my total to 48 OSDs. `ceph df` reports my pool as 83% full (see below). I am aware this only reports the
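
(Not from the thread itself, but the usual starting points for evening out OSD utilization are the built-in balancer in upmap mode or a one-shot reweight. A minimal sketch, assuming all clients are Luminous or newer:)

    # inspect per-OSD utilization and weights
    ceph osd df tree

    # upmap mode requires luminous-or-newer clients
    ceph osd set-require-min-compat-client luminous
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status

    # older one-shot alternative
    ceph osd reweight-by-utilization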

[ceph-users] MDS is reporting damaged metadata damage - followup

2021-03-02 Thread ricardo.re.azevedo
Hi all, Following up on a previous issue: my cephfs MDS is reporting damaged metadata following the addition (and remapping) of 12 new OSDs. `ceph tell mds.database-0 damage ls` reports ~85 files damaged, all of type "backtrace". `ceph tell mds.database-0 scrub start /
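
(Side note for readers: `damage ls` returns JSON, so the backtrace entries can be counted and listed with something like the sketch below. The jq filters and field names are assumptions based on typical output, not taken from the post, and may differ by Ceph release.)

    # count backtrace damage entries
    ceph tell mds.database-0 damage ls | jq '[.[] | select(.damage_type == "backtrace")] | length'

    # list the affected paths
    ceph tell mds.database-0 damage ls | jq -r '.[] | select(.damage_type == "backtrace") | .path'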

[ceph-users] Re: MDSs report damaged metadata

2021-02-26 Thread ricardo.re.azevedo
Thanks for the advice and info regarding the error. I tried `ceph tell mds.database-0 scrub start / recursive repair force` and it didn't help. Is there anything else I can try? Or a way to manually fix the links? Best, Ricardo -Original Message- From: Patrick Donnelly Sent: Thursday,
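
(One follow-up worth checking, not something suggested in the quoted reply: whether the scrub actually ran to completion, and whether the damage table simply still holds stale entries for files that were already repaired. A sketch, with a placeholder damage id:)

    # confirm the scrub finished
    ceph tell mds.database-0 scrub status

    # re-list damage; entries for files that now check out can be cleared manually
    ceph tell mds.database-0 damage ls
    ceph tell mds.database-0 damage rm <damage_id>   # placeholder id from damage ls output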

[ceph-users] MDSs report damaged metadata

2021-02-25 Thread ricardo.re.azevedo
Hi all, My cephfs MDS is reporting damaged metadata following the addition (and remapping) of 12 new OSDs. `ceph tell mds.database-0 damage ls` reports ~85 files damaged, all of type "backtrace", which is very concerning. `ceph tell mds.database-0 scrub start / recursive repair` seems to
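
(For context, the commands referenced above as one would run them against this MDS; the `mds.database-0` name comes from the post itself:)

    # list damage entries recorded by the MDS
    ceph tell mds.database-0 damage ls

    # walk the tree from the root, repairing backtraces where possible
    ceph tell mds.database-0 scrub start / recursive repair

    # watch progress
    ceph tell mds.database-0 scrub status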