Hi Zheng,

Many thanks for your helpful post. I've done the following:

1) Set the large-omap key threshold to 1024 * 1024 (1048576):

# ceph config set osd \
osd_deep_scrub_large_omap_object_key_threshold 1048576
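
To confirm the new value took effect, "ceph config get" should report it
back:

# ceph config get osd \
osd_deep_scrub_large_omap_object_key_threshold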

2) Deep scrubbed all of the PGs on the two OSDs that reported "Large omap
object found" - these were all in pool 1, which spans just four OSDs.
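
In case it helps anyone repeating this, the gist was just two commands
(osd.1 and pgid 1.2f below are placeholders - substitute your own):

# ceph pg ls-by-osd osd.1     # list the PGs on a given OSD
# ceph pg deep-scrub 1.2f     # start a deep scrub of one PG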


Result: After 30 minutes, all deep-scrubs completed, and all "large omap
objects" warnings disappeared.

...should we be worried about the size of these OMAP objects?
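
If anyone wants to gauge the key counts themselves, listing the omap keys
directly ought to work (mds0_openfiles.0 is just an example object name
from the metadata pool):

# rados -p cephfs_metadata listomapkeys mds0_openfiles.0 | wc -l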

Again, many thanks,

Jake

On 10/30/19 3:15 AM, Yan, Zheng wrote:
> See https://tracker.ceph.com/issues/42515. Just ignore the warning for now.
> 
> On Mon, Oct 7, 2019 at 7:50 AM Nigel Williams
> <nigel.willi...@tpac.org.au> wrote:
>>
>> Out of the blue this popped up (on an otherwise healthy cluster):
>>
>> HEALTH_WARN 1 large omap objects
>> LARGE_OMAP_OBJECTS 1 large omap objects
>>     1 large objects found in pool 'cephfs_metadata'
>>     Search the cluster log for 'Large omap object found' for more details.
>>
>> "Search the cluster log" is somewhat opaque, there are logs for many 
>> daemons, what is a "cluster" log? In the ML history some found it in the OSD 
>> logs?
>>
>> Another post suggested removing lost+found, but using cephfs-shell I don't
>> see one at the top level. Is there another way to disable this "feature"?
>>
>> thanks.


--
Jake Grimmett
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.
