On Wed, Oct 30, 2019 at 9:28 AM Jake Grimmett wrote:
>
> Hi Zheng,
>
> Many thanks for your helpful post, I've done the following:
>
> 1) set the threshold to 1024 * 1024:
>
> # ceph config set osd \
> osd_deep_scrub_large_omap_object_key_threshold 1048576
>
> 2) deep scrubbed all of the pgs on the two OSDs that reported "Large omap
> object found." - these were all in the metadata pool.
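With the threshold raised, step 2 can be driven from any admin node; a
minimal sketch, assuming the two reporting OSDs were osd.10 and osd.11
(hypothetical IDs, substitute your own):

# ask each reporting OSD to deep scrub every PG it currently hosts
ceph osd deep-scrub 10
ceph osd deep-scrub 11

The scrubs are queued, so the warning should only clear once the PG
holding the large object has actually been deep scrubbed again.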
See https://tracker.ceph.com/issues/42515. Just ignore the warning for now.
On Mon, Oct 7, 2019 at 7:50 AM Nigel Williams wrote:
>
> Out of the blue this popped up (on an otherwise healthy cluster):
>
> HEALTH_WARN 1 large omap objects
> LARGE_OMAP_OBJECTS 1 large omap objects
> 1 large objects found in pool 'cephfs_metadata'
> Search the cluster log for 'Large omap object found' for more details.
Hi Paul, Nigel,
I'm also seeing "HEALTH_WARN 6 large omap objects" warnings with CephFS
after upgrading to 14.2.4.
The affected OSDs are used (only) by the metadata pool:
POOL      ID  STORED  OBJECTS  USED    %USED  MAX AVAIL
mds_ssd    1  64 GiB    1.74M  65 GiB   4.47    466 GiB
See below for more log output.
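The table above is the pools section of 'ceph df'. To see which objects
actually tripped the warning, the cluster log on a monitor host can be
searched directly; a sketch, assuming the default log location:

# pool-level usage, as shown in the table above
ceph df

# the cluster log lives on the monitor hosts at this path by default
zgrep 'Large omap object found' /var/log/ceph/ceph.log*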
Hi,
the default for this warning changed recently (see other similar
threads on the mailing list); it was 2 million before 14.2.3.
I don't think the new default of 200k is a good choice, so increasing
it is a reasonable work-around.
Paul
--
Paul Emmerich
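For reference, the work-around Paul describes is a one-liner; this sketch
restores the pre-14.2.3 default of 2 million keys:

# raise the key-count threshold back to the old default of 2 million
ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 2000000

# confirm the stored value
ceph config get osd osd_deep_scrub_large_omap_object_key_threshold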
I've adjusted the threshold:
ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 35
A colleague suggested that this will take effect on the next deep scrub.
Is the default of 200,000 too small? Will this be adjusted in future
releases, or is it meant to be adjusted in some
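To check that a running OSD has picked up the new value, and to clear the
warning without waiting for the scrub schedule, something like this should
work (osd.0 and PG 2.1a are placeholders; use the OSD and PG from your own
log):

# show the value a running OSD is actually using
ceph config show osd.0 osd_deep_scrub_large_omap_object_key_threshold

# deep scrub the reporting PG by hand
ceph pg deep-scrub 2.1a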
I followed some other suggested steps, and have this:
root@cnx-17:/var/log/ceph# zcat ceph-osd.178.log.?.gz|fgrep Large
2019-10-02 13:28:39.412 7f482ab1c700 0 log_channel(cluster) log [WRN] :
Large omap object found. Object: 2:654134d2:::mds0_openfiles.0:head Key
count: 306331 Size (bytes):
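The reported key count can be double-checked against the object itself; a
read-only sketch, assuming the metadata pool is 'cephfs_metadata' as in the
health output below:

# count the omap keys on the object named in the log line above
rados -p cephfs_metadata listomapkeys mds0_openfiles.0 | wc -l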
Out of the blue this popped up (on an otherwise healthy cluster):
HEALTH_WARN 1 large omap objects
LARGE_OMAP_OBJECTS 1 large omap objects
1 large objects found in pool 'cephfs_metadata'
Search the cluster log for 'Large omap object found' for more details.
"Search the cluster log" is