hi,
is it possible to remove the require_jewel_osds flag after upgrading to kraken?
$ ceph osd stat
osdmap e29021: 40 osds: 40 up, 40 in
flags sortbitwise,require_jewel_osds,require_kraken_osds
it seems that ceph osd unset does not support require_jewel_osds
$ ceph osd unset
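for context, the flags line can also be inspected with the standard ceph CLI (a sketch; assumes a working admin keyring on the node). the require_* entries are persistent osdmap feature markers rather than runtime flags, which is presumably why 'ceph osd unset' does not list them:

```shell
# show the osdmap flags line; the require_* markers appear here
# alongside runtime flags such as noout or norebalance
ceph osd dump | grep flags

# runtime flags that 'ceph osd unset' can clear include noout,
# nodown, noin, norebalance, noscrub, etc.; require_jewel_osds
# is not one of them
ceph osd unset noout
ceph osd set noout
```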
hi,
it seems it could be the SnapContext problem.
i've tried the stat command; it works fine.
should i post a bug report?
thanks
fous
2016-11-16 21:55 GMT+01:00 Gregory Farnum <gfar...@redhat.com>:
> On Wed, Nov 16, 2016 at 5:13 AM, Jan Krcmar <honza...@gmail.com> wrote:
hi,
i've found a problem (or feature?) with pool snapshots:
when i delete an object from a pool that was previously snapshotted,
i can no longer list that object's name in the snapshot.
steps to reproduce:
# ceph -v
ceph version 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b)
# rados -p test ls
stats
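a minimal sequence that exercises this, using the standard rados CLI (the pool, object, and snapshot names here are just examples):

```shell
# create an object, snapshot the pool, then delete the object
rados -p test put testobj /etc/hosts
rados -p test mksnap snap1
rados -p test rm testobj

# the live pool no longer lists the object, as expected
rados -p test ls

# list with the snapshot context; the reported issue is that the
# deleted object does not show up here either
rados -p test -s snap1 ls
```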
hi,
i have rbd0 mapped to a client, formatted with XFS. i'm writing a lot of data to it.
the following messages appear in the logs and in 'ceph -s' output:
osd.255 [WRN] 1 slow requests, 1 included below; oldest blocked for >
51.726881 secs
osd.255 [WRN] slow request 51.726881 seconds old, received at
2016-03-04
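when chasing slow requests like these, a common first step is to dump the ops on the affected OSD through its admin socket (a sketch; assumes shell access to the node hosting osd.255):

```shell
# operations currently in flight on osd.255, with how long each
# has been blocked and at what stage
ceph daemon osd.255 dump_ops_in_flight

# recently completed ops with their per-stage timing breakdown,
# useful for spotting where the latency accumulated
ceph daemon osd.255 dump_historic_ops
```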