> >> > Recently I've noticed a problem with one of our buckets:
> >> >
> >> > I cannot list or run stats on a bucket:
> >> >
> >> > #v+
> >> > root@ceph-s1:~# radosgw-admin bucket stats --bucket=problematic_bucket
> >> > error getting bucket stats ret=-22
> >> > #v-
> >>
> >> That's EINVAL, not ENOENT. It could mean lots of things, e.g., a
> >> radosgw-admin version mismatch vs. the version the OSDs are running.
> >> Try adding --debug-rgw=20 --debug-ms=1 --log-to-stderr to maybe get a
> >> bit more info about the source of this error.
> >
> > https://gist.github.com/ljagiello/06a4dd1f34a776e38f77
> >
> > Result of the more verbose debug run.
>
> #v+
> 2015-11-13 21:10:19.160420 7fd9f91be7c0  1 -- 10.8.68.78:0/1007616 -->
> 10.8.42.35:6800/26514 -- osd_op(client.44897323.0:30
> .dir.default.5457.9 [call rgw.bucket_list] 16.2f979b1a e172956) v4 --
> ?+0 0x15f3740 con 0x15daa60
> 2015-11-13 21:10:19.161058 7fd9ef8a7700  1 -- 10.8.68.78:0/1007616 <==
> osd.12 10.8.42.35:6800/26514 6 ==== osd_op_reply(30
> .dir.default.5457.9 [call] ondisk = -22 (Invalid argument)) v4 ====
> 118+0+0 (3885840820 0 0) 0x7fd9c8000d50 con 0x15daa60
> error getting bucket stats ret=-22
> #v-
>
> You can try taking a look at the osd.12 logs. Any chance osd.12 and
> radosgw-admin aren't running the same major version? (More likely
> radosgw-admin is running a newer one.)
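To rule out the version mismatch, one quick way to compare the binaries involved (a sketch, assuming the standard ceph and radosgw-admin CLIs are on PATH and the cluster is reachable; osd.12 is the OSD from the debug output above):

```shell
# Version of the radosgw-admin binary that issued the failing request
radosgw-admin --version

# Version osd.12 is actually running, queried over the cluster network
ceph tell osd.12 version

# Same check via the local admin socket; run this on the host carrying osd.12
ceph daemon osd.12 version
```

If `radosgw-admin --version` reports a newer release than `ceph tell osd.12 version`, that matches the mismatch scenario suggested above.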
From the last 12h it's just deep-scrub info:

#v+
2015-11-13 08:23:00.690076 7fc4c62ee700  0 log [INF] : 15.621 deep-scrub ok
#v-

But yesterday there was a big rebalance, and the host with that OSD was rebuilt from scratch. We're running the same version (ceph, rados) across the entire cluster; I just double-checked it.

--
Łukasz Jagiełło
lukasz<at>jagiello<dot>org
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com