I just ran into this today with a server we rebooted. The server was
upgraded to Nautilus 14.2.2 a few months ago; it was originally installed as
Jewel, then upgraded to Luminous and then to Nautilus. I have a whole server
where all 12 OSDs have empty folders. I recreated the keyring file an
On Fri, Nov 22, 2019 at 9:09 PM J. Eric Ivancich wrote:
> 2^64 (2 to the 64th power) is 18446744073709551616, which is 13 greater
> than your value of 18446744073709551603. So this likely represents the
> value of -13, but displayed in an unsigned format.
I've seen this with values between -2 an
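For reference, a minimal sketch (not part of the original messages) of the
unsigned-wraparound reading described above, assuming the stat is a 64-bit
counter that gets printed as unsigned:

# Minimal sketch: reinterpret a 64-bit counter that was printed as unsigned.
# A small negative value wraps around to just below 2**64 in that case.
def as_signed_64(raw: int) -> int:
    """Map an unsigned 64-bit value back to its signed interpretation."""
    return raw - 2**64 if raw >= 2**63 else raw

print(as_signed_64(18446744073709551603))  # -13, matching the explanation above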
On 11/22/19 11:50 AM, David Monschein wrote:
> Hi all. Running an Object Storage cluster with Ceph Nautilus 14.2.4.
>
> We are running into what appears to be a serious bug that is affecting
> our fairly new object storage cluster. While investigating some
> performance issues -- seeing abnormally
On Fri, Nov 22, 2019 at 11:16 AM Vikas Rana wrote:
>
> Hi All,
>
> We have an XFS filesystem on the Prod side, and when we try to mount the DR
> copy, we get a superblock error
>
> root@:~# rbd-nbd map nfs/dir
> /dev/nbd0
> root@:~# mount /dev/nbd0 /mnt
> mount: /dev/nbd0: can't read superblock
Does
I originally reported the linked issue. I've seen this problem with
negative stats on several S3 setups, but I could never figure out
how to reproduce it.
But I haven't seen the resharder act on these stats; that seems like a
particularly bad case :(
Paul
--
Paul Emmerich
Looking for hel
Hi all. Running an Object Storage cluster with Ceph Nautilus 14.2.4.
We are running into what appears to be a serious bug that is affecting our
fairly new object storage cluster. While investigating some performance
issues -- seeing abnormally high IOPS, extremely slow bucket stat listings
(over 3
Hi All,
We have an XFS filesystem on the Prod side, and when we try to mount the DR copy,
we get a superblock error
root@:~# rbd-nbd map nfs/dir
/dev/nbd0
root@:~# mount /dev/nbd0 /mnt
mount: /dev/nbd0: can't read superblock
Any suggestions to test the DR copy any other way or if I'm doing someth
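Not from the thread, but one common way to sanity-check an XFS replica
without writing to it is to run xfs_repair in no-modify mode and then try a
read-only mount that skips log recovery. A rough sketch in Python, assuming
the image is already mapped to /dev/nbd0 as above:

# Rough sketch: read-only checks of an XFS DR copy mapped at /dev/nbd0.
# xfs_repair -n only reports problems and never writes to the device.
import subprocess

dev = "/dev/nbd0"
subprocess.run(["xfs_repair", "-n", dev], check=True)
# ro,norecovery mounts the filesystem without replaying the XFS log.
subprocess.run(["mount", "-o", "ro,norecovery", dev, "/mnt"], check=True)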
Hi,
On 2019-11-20 15:55, thoralf schulze wrote:
hi,
we were able to track this down to the auto balancer: disabling the auto
balancer and cleaning out old (and probably not very meaningful)
upmap entries via ceph osd rm-pg-upmap-items brought back stable mgr
daemons and a usable dashboard.
I
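For illustration only (not part of the original message), a rough sketch of
what cleaning out old upmap entries could look like, assuming ceph is on the
PATH and that the osd dump JSON lists the mappings under a pg_upmap_items
key:

# Rough sketch: remove all explicit pg-upmap-items entries after disabling
# the balancer (ceph balancer off). The pg_upmap_items field name is an
# assumption about the osd dump JSON layout.
import json
import subprocess

osd_dump = json.loads(subprocess.check_output(
    ["ceph", "osd", "dump", "--format", "json"]))

for entry in osd_dump.get("pg_upmap_items", []):
    # ceph osd rm-pg-upmap-items <pgid> drops the explicit mapping for that PG
    subprocess.run(["ceph", "osd", "rm-pg-upmap-items", entry["pgid"]],
                   check=True)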