[ceph-users] One lost cephfs data object

2020-01-14 Thread Andrew Denton
Hi all,

I'm on 13.2.6. My cephfs has managed to lose one single object from
its data pool. All the cephfs docs I'm finding show me how to recover
from an entire lost PG, but the rest of this PG checks out as far as I
can tell. How can I track down which file that object belongs to?
I'm missing "102e2aa.3721" in pg 16.d7. Pool 16 is an EC cephfs
data pool called cephfs_ecdata (this data pool is assigned to a
directory via the ceph.dir.layout xattr). We store backups in this
pool, so we'll likely be fine just deleting the file.
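CephFS data objects are named "<inode-hex>.<stripe-index>", so one way to find the owning file is to convert the hex prefix of the missing object to a decimal inode number and search the mounted filesystem by inode. A sketch, assuming the mount point (`/mnt/cephfs` below is a placeholder):

```shell
# The prefix of the object name is the file's inode number in hex.
# Convert it to decimal:
printf '%d\n' 0x102e2aa
# -> 16966314

# Then search a mounted cephfs for that inode (placeholder mount point):
# find /mnt/cephfs -inum 16966314
```

The `find` step can be slow on a large tree, but it avoids needing any MDS-side tooling.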

# ceph health detail
HEALTH_ERR 60758/81263036 objects misplaced (0.075%); 1/16673236
objects unfound (0.000%); Possible data damage: 1 pg recovery_unfound;
Degraded data redundancy: 1/81263036 objects degraded (0.000%), 1 pg
degraded
OBJECT_MISPLACED 60758/81263036 objects misplaced (0.075%)
OBJECT_UNFOUND 1/16673236 objects unfound (0.000%)
pg 16.d7 has 1 unfound objects
PG_DAMAGED Possible data damage: 1 pg recovery_unfound
pg 16.d7 is active+recovery_unfound+degraded+remapped, acting
[48,8,30,11,42], 1 unfound
PG_DEGRADED Degraded data redundancy: 1/81263036 objects degraded
(0.000%), 1 pg degraded
pg 16.d7 is active+recovery_unfound+degraded+remapped, acting
[48,8,30,11,42], 1 unfound


# ceph pg 16.d7 list_missing
{
  "offset": {
    "oid": "",
    "key": "",
    "snapid": 0,
    "hash": 0,
    "max": 0,
    "pool": -9223372036854775808,
    "namespace": ""
  },
  "num_missing": 1,
  "num_unfound": 1,
  "objects": [
    {
      "oid": {
        "oid": "102e2aa.3721",
        "key": "",
        "snapid": -2,
        "hash": 2685987031,
        "max": 0,
        "pool": 16,
        "namespace": ""
      },
      "need": "41610'2203339",
      "have": "0'0",
      "flags": "none",
      "locations": [
        "42(4)"
      ]
    }
  ],
  "more": false
}
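As a sanity check, the "hash" field in list_missing is the object's placement seed: rendered in hex it is the long PG id, and masked down to the pool's pg_num it gives the short PG id. A sketch, assuming pool 16 has 256 PGs (a power of two, which makes the mask a simple modulo):

```shell
# "hash" from list_missing, rendered in hex, is the long PG id:
printf '16.%x\n' 2685987031
# -> 16.a018e8d7

# Masked to pg_num (assumed 256 here), it gives the short PG id:
printf '16.%x\n' $(( 2685987031 % 256 ))
# -> 16.d7
```

Both values match the "pg 16.a018e8d7 (16.d7)" shown by `ceph osd map` below, so the object really does hash to this PG.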

At one point this object showed its map as

# ceph osd map cephfs_ecdata "102e2aa.3721"
osdmap e45659 pool 'cephfs_ecdata' (16) object '102e2aa.3721'
-> pg 16.a018e8d7 (16.d7) -> up ([48,52,30,11,44], p48) acting
([48,8,30,11,NONE], p48)

but I restarted osd.44, and now it's showing 

# ceph osd map cephfs_ecdata "102e2aa.3721"
osdmap e45679 pool 'cephfs_ecdata' (16) object '102e2aa.3721'
-> pg 16.a018e8d7 (16.d7) -> up ([48,52,30,11,44], p48) acting
([48,8,30,11,42], p48)
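Once the affected file has been identified and deleted (or written off), the usual way to clear recovery_unfound is to declare the object lost. A sketch for reference only, since these commands act on a live cluster:

```shell
# Re-check that the object is still unfound after the OSD restart:
ceph pg 16.d7 list_missing

# If so, declare it lost. "delete" removes the object outright;
# EC pools support only "delete", not "revert".
ceph pg 16.d7 mark_unfound_lost delete
```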

Thanks,
Andrew
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Openstack VMs with Ceph EC pools

2018-06-07 Thread Andrew Denton
On Wed, 2018-06-06 at 17:02 -0700, Pardhiv Karri wrote:
> Hi,
> 
> Is anyone using Openstack with Ceph Erasure Coding pools, as it now
> supports RBD in Luminous? If so, how's the performance?

I attempted it, but couldn't figure out how to get Cinder to specify
the data pool. You can't just point Cinder at the erasure-coded pool
directly, since EC pools don't support OMAP and RBD image creation
will fail. Cinder would need to learn to create the RBD differently,
or there needs to be some override in ceph.conf.
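For what it's worth, such an override does exist: librbd honors a default data pool, so Cinder can stay pointed at a replicated pool (which holds the OMAP metadata) while the data objects land in the EC pool. A sketch, with `volumes` / `volumes_ec` as hypothetical pool names:

```
[client.cinder]
# Hypothetical pool names: Cinder keeps using the replicated pool
# "volumes"; image data goes to the EC pool "volumes_ec".
# The EC pool must have overwrites enabled:
#   ceph osd pool set volumes_ec allow_ec_overwrites true
rbd default data pool = volumes_ec
```

The same effect can be tested one-off with `rbd create --data-pool volumes_ec ...` before committing the ceph.conf change.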

Thanks,
Andrew