Actually, this looks very much like my issue, so I'll add to that:
http://tracker.ceph.com/issues/21040
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Edward R Huyer
Sent: Wednesday, August 23, 2017 11:10 AM
To: Brad Hubbard
Cc: ceph-users@lists.ceph.com
I'll put something in the tracker later today.
Thank you for your help.
-----Original Message-----
From: Brad Hubbard [mailto:bhubb...@redhat.com]
Sent: Wednesday, August 23, 2017 4:44 AM
To: Edward R Huyer
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] PG reported as inconsistent in status, but no inconsistencies visible to rados
-----Original Message-----
From: Brad Hubbard [mailto:bhubb...@redhat.com]
Sent: Monday, August 21, 2017 7:05 PM
To: Edward R Huyer
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] PG reported as inconsistent in status, but no inconsistencies visible to rados

Could you provide the output of 'ceph-bluestore-tool [...]
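(The exact command was cut off above; as a sketch, assuming the default OSD data path and OSD 63 from the log excerpt below, an fsck run would look something like this. The OSD has to be stopped before touching its BlueStore data.)

    # assumes a systemd host and the default data directory layout
    systemctl stop ceph-osd@63
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-63   # read-only consistency check
    systemctl start ceph-osd@63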
This is an odd one. My cluster is reporting an inconsistent pg in ceph status
and ceph health detail. However, rados list-inconsistent-obj and rados
list-inconsistent-snapset both report no inconsistencies. Scrubbing the pg
results in these errors in the osd logs:
OSD 63 (primary):
2017-08-2 [...]
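(For reference, a minimal sketch of the checks described above; the pg ID 6.1f is a placeholder, substitute whatever ceph health detail reports:)

    ceph health detail                                    # names the inconsistent pg
    rados list-inconsistent-obj 6.1f --format=json-pretty
    rados list-inconsistent-snapset 6.1f --format=json-pretty
    ceph pg deep-scrub 6.1f                               # re-scrub, then watch the primary OSD's log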
Does the change in ceph -w output also affect ceph status? If so, that's a
pretty major change in output formatting for just going from one RC to the
next, and is liable to break monitoring systems.
More generally, it seems like there have been an unexpected number of significant feature and o[...]
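(A general mitigation, not something settled in this thread: point monitoring at the JSON output rather than the human-readable text, since the JSON is at least intended to be stable. Note the health structure itself changed in luminous, so field paths still need pinning to a release.)

    ceph status --format json-pretty    # same data as ceph -s, machine-readable
    ceph health --format json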
I'm migrating my Ceph cluster to entirely new hardware. Part of that is
replacing the monitors. My plan is to add new monitors and remove old ones,
updating config files on client machines as I go.
I have clients actively using the cluster. They are all QEMU/libvirt and kernel clients using RBD [...]
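(One common shape for that rolling replacement, as a sketch; hostnames are placeholders, and the new mon's data directory has to be created first, e.g. with ceph-deploy mon add or the manual mkfs procedure:)

    ceph quorum_status --format json-pretty   # confirm the new mon actually joined quorum
    ceph mon remove old-mon1                  # retire an old one only once quorum is healthy
    # then update mon_host / mon_initial_members in ceph.conf on every client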
Are you on 12.1.0 or 12.1.1? I noticed that in 12.1.0 the ceph command was
missing options that were supposed to be there, but 12.1.1 had them. Maybe
you're seeing a similar issue?
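(It's easy to check which build each daemon is actually running; the per-daemon breakdown is a luminous-era addition, so its availability in these exact RCs may vary:)

    ceph --version      # the local CLI/client build
    ceph versions       # versions reported by every mon/osd/mds in the cluster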
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Kenneth Waeg[...]
I just now ran into the same problem you did, though I managed to get it
straightened out.
It looks to me like the "ceph osd set-{full,nearfull,backfillfull}-ratio"
commands *do* work, with two caveats.
Caveat 1: "ceph pg dump" doesn't reflect the change for some reason.
Caveat 2: There doesn't [...]
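(A sketch of the sequence, verifying against the OSDMap, where these ratios now live, rather than against pg dump; the ratio values are examples:)

    ceph osd set-nearfull-ratio 0.85
    ceph osd set-backfillfull-ratio 0.90
    ceph osd set-full-ratio 0.95
    ceph osd dump | grep ratio    # shows full_ratio / backfillfull_ratio / nearfull_ratio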
I need to switch out some old Ceph cluster hardware with new (in a somewhat
unusual scenario), upgrade my Hammer cluster to Jewel, and do the chown that's
part of the upgrade. I have a notion of the process, but would like a sanity
check if people don't mind.
I'm going to be spinning up some n[...]
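(The chown step itself is mechanical; per OSD it looks roughly like this. A sketch assuming systemd, with $ID as a placeholder; setting noout first keeps recovery from kicking off mid-restart:)

    ceph osd set noout
    systemctl stop ceph-osd@$ID
    chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID
    systemctl start ceph-osd@$ID
    ceph osd unset noout          # once everything is back up and in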
That is normal behavior. Ceph has no understanding of the filesystem living on
top of the RBD, so it doesn’t know when space is freed up. If you are running
a sufficiently current kernel, you can use fstrim to cause the kernel to tell
Ceph what blocks are free. More details here:
http://www[...]
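(In practice that looks like the following, run inside the guest or wherever the filesystem is mounted; /mnt/myfs is a placeholder. For QEMU guests the virtual disk also has to expose discard support, e.g. virtio-scsi with discard=unmap, for the trims to reach Ceph:)

    fstrim -v /mnt/myfs    # one-shot; -v reports how much was discarded
    # or mount with -o discard for continuous trim, at some performance cost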