Hi Andreas,
That's good to know. I managed to fix the problem! Here is my journey in
case it helps anyone:
My system drives are only 512 GB, so I added spare 1 TB drives to each
server and moved the mon DB to the new drive. I set noout, nobackfill,
and norecover and enabled only the ceph mon and osd
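The flag-setting step described above looks roughly like this (standard ceph CLI flags; the unset order at the end is only a suggested sequence, not from the original message):

```shell
# Pause data movement while the mon DB is relocated
ceph osd set noout        # don't mark stopped OSDs "out"
ceph osd set nobackfill   # suspend backfill
ceph osd set norecover    # suspend recovery

# ... move the mon store to the new drive and restart the mon ...

# Then clear the flags again
ceph osd unset norecover
ceph osd unset nobackfill
ceph osd unset noout
```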
Hi All,
Does anyone know how I can rebalance my cluster to balance out the OSD
usage?
I just added 12 more 14 TB HDDs to my cluster (a cluster made up of
12 TB and 14 TB disks), bringing my total to 48 OSDs. `ceph df` reports
my pool as 83% full (see below). I am aware this only reports the
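The usual way to even out per-OSD utilisation is the built-in balancer module (a sketch, assuming a Luminous-or-later cluster; upmap mode additionally requires the min-compat-client setting shown):

```shell
# Enable the automatic balancer in upmap mode
ceph osd set-require-min-compat-client luminous   # required for upmap
ceph balancer mode upmap
ceph balancer on
ceph balancer status

# Older/simpler alternative: one-shot reweight by current utilisation
# ceph osd reweight-by-utilization
```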
Hi all,
Following up on a previous issue.
My cephfs MDS is reporting damaged metadata following the addition (and
remapping) of 12 new OSDs.
`ceph tell mds.database-0 damage ls` reports ~85 files damaged. All of type
"backtrace".
`ceph tell mds.database-0 scrub start /
Thanks for the advice and info regarding the error.
I tried `ceph tell mds.database-0 scrub start / recursive repair force` and it
didn't help. Is there anything else I can try? Or manually fix the links?
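One avenue worth noting (a sketch, not an answer from this thread): on recent releases the scrub options are comma-separated, and entries that have been repaired still have to be cleared from the damage table by hand:

```shell
# Comma-separated scrub options (Nautilus-and-later syntax)
ceph tell mds.database-0 scrub start / recursive,repair,force
ceph tell mds.database-0 scrub status

# After a file verifies clean, drop its stale damage-table entry
# by id (ids come from `damage ls`):
# ceph tell mds.database-0 damage rm <damage_id>
```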
Best,
Ricardo
-----Original Message-----
From: Patrick Donnelly
Sent: Thursday,
Hi all,
My cephfs MDS is reporting damaged metadata following the addition (and
remapping) of 12 new OSDs.
`ceph tell mds.database-0 damage ls` reports ~85 files damaged. All of type
"backtrace", which is very concerning.
`ceph tell mds.database-0 scrub start / recursive repair` seems to