Hello,
even after 24 hours the files are still present:
osd.20]# find . -name "*106dd406b8b4567*" -exec ls -la "{}" \;
-rw-r--r-- 1 ceph ceph 4194304 Aug 5 09:32
./current/3.61a_head/DIR_A/DIR_1/DIR_6/DIR_8/rbd\udata.106dd406b8b4567.2315__9d5e4_9E65861A__3
-rw-r--r-- 1 ceph ceph 41943
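If what is left on disk is a snapshot clone rather than the head, the
cluster side can still be asked about it with listsnaps. Just a sketch;
the full object name below is an assumption, since the on-disk name
above is truncated:
# rados -p rbd listsnaps rbd_data.106dd406b8b4567.0000000000002315
A clone entry without a head entry would point at pending snapshot
cleanup rather than leaked data.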
Hello Greg,
I deleted the image 12 hours ago and it was only 120 GB... Do you think I
should wait longer?
Yes that osd is part of the pg:
ceph pg map 3.61a
osdmap e819444 pg 3.61a (3.61a) -> up [20,57,70] acting [20,57,70]
but:
# ceph pg ls inconsistent
pg_stat objects mip degr misp unf
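Just as a sketch, the per-object detail of the inconsistency can
usually be pulled after a scrub with:
# rados list-inconsistent-obj 3.61a --format=json-pretty
(assuming the last scrub recorded the errors; otherwise it needs a
fresh deep-scrub of 3.61a first)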
Is OSD 20 actually a member of the pg right now? It could be stray data
that is slowly getting cleaned up.
Also, you've got "snapdir" listings there. Those indicate the object is
snapshotted but the "head" got deleted. So it may just be delayed cleanup
of snapshots.
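A quick way to check that is to ask the cluster where such an object is
supposed to live; just a sketch, the object name is assumed from the
prefix:
# ceph osd map rbd rbd_data.106dd406b8b4567.0000000000002315
If osd.20 shows up in the acting set there, the data is not stray and is
most likely just waiting for snap trimming.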
On Sat, Aug 5, 2017 at 12:34 P
Hello,
Today I deleted an RBD image which had the following
prefix:
block_name_prefix: rbd_data.106dd406b8b4567
The rm command went fine.
Also, the rados list command does not show any objects with that string:
# rados -p rbd ls | grep 106dd406b8b4567
But a find on an OSD still shows them?
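To double-check from the cluster side that the head object is really
gone, something like rados stat can be used (the object name is only an
example derived from the prefix):
# rados -p rbd stat rbd_data.106dd406b8b4567.0000000000002315
If that reports the object as missing while the file is still present
on the OSD, the leftover is a snapshot clone or snapdir rather than the
head.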
SOLVED
Short: I followed the procedure to replace an OSD.
Long: I reweighted the flapping OSD to 0 until rebalancing was done, then
marked it out, unmounted it, followed the replacement procedure[1], and
then restored the weight. I attempted to recreate my actions from bash history
from various termi
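For reference, the sequence described above corresponds roughly to
something like this (osd.57 and the paths are only placeholders, not my
exact commands):
# ceph osd crush reweight osd.57 0
(wait until backfill/recovery has finished, e.g. watch ceph -s)
# ceph osd out 57
# systemctl stop ceph-osd@57
# umount /var/lib/ceph/osd/ceph-57
(replace the disk and recreate the OSD as per the procedure in [1])
# ceph osd crush reweight osd.57 <original crush weight>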
I tried to remove the whole image as I didn't need it anymore.
But it seems it doesn't get cleared: 106dd406b8b4567 was the ID of the
old, deleted RBD image.
ceph-57]# find . -name "*106dd406b8b4567*" -exec ls -la "{}" \;
-rw-r--r-- 1 ceph ceph 4194304 Aug 5 09:40
./current/3.61a_head/DIR_A/DIR_1/DIR_6/
Is there a way to remove that object from all OSDs? As it is unexpected
data anyway, removing it should not do any harm.
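In case someone finds this thread later: if such an object really has
to be removed by hand, it can in principle be done from the cluster
side (head objects only) or, with the OSD daemon stopped, directly on
disk with ceph-objectstore-tool. This is only a sketch, the object name
is assumed from the prefix, and it is obviously dangerous on live data:
# rados -p rbd rm rbd_data.106dd406b8b4567.0000000000002315
or, per OSD, with the daemon stopped (on filestore the journal path may
also have to be given):
# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-20 \
    --pgid 3.61a 'rbd_data.106dd406b8b4567.0000000000002315' remove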
Greets,
Stefan
Am 05.08.2017 um 09:03 schrieb Stefan Priebe - Profihost AG:
> Hello,
>
> i'm trying to fix a cluster where one pg is in
> active+clean+inconsistent+snaptrim state.
>
> The log says:
Hello,
I'm trying to fix a cluster where one pg is in the
active+clean+inconsistent+snaptrim state.
The log says:
2017-08-05 08:57:43.240030 osd.20 [ERR] 3.61a repair 0 missing, 1
inconsistent objects
2017-08-05 08:57:43.240044 osd.20 [ERR] 3.61a repair 4 errors, 2 fixed
2017-08-05 08:57:43.242828 o
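For completeness, the usual first steps for a pg in that state are a
fresh deep-scrub and, if errors remain, a repair, then watching the
cluster state; a minimal sketch:
# ceph pg deep-scrub 3.61a
# ceph pg repair 3.61a
# ceph health detail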