Hi Joshua,

I believe these messages deserve more attention than you might think. You may
be hitting this issue [1] that Mark (comment #4) also hit with 16.2.10 (RHCS 5).
The PR is here: https://github.com/ceph/ceph/pull/51669

Could you try raising osd_max_scrubs to 2 or 3 (it now defaults to 3 in Quincy
and Reef) and see if these messages disappear over the next hours/days?
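
For example, something like this should work (a minimal sketch, assuming a
cephadm-managed cluster using the centralized config database; osd_max_scrubs
can be changed at runtime):

ceph config set osd osd_max_scrubs 3
ceph config get osd osd_max_scrubs    # verify the value took effect

Also note the "-1" right after the thread id in your sample log line: these
scrubber messages are logged at error level, which would explain why lowering
debug_osd didn't make them go away.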

Regards,
Frédéric.

----- On Jun 4, 2024, at 18:39, Joshua Arulsamy jarul...@uwyo.edu wrote:

> Hi,
> 
> I recently upgraded my RHCS cluster from v4 to v5 and moved to containerized
> daemons (podman) along the way. I noticed that there are a huge number of logs
> going to journald on each of my hosts. I am unsure why there are so many.
> 
> I tried changing the logging level at runtime with commands like these (from
> the Ceph docs):
> 
> ceph tell osd.\* config set debug_osd 0/5
> 
> I tried adjusting several different subsystems (also with 0/0), but the logs
> keep coming at the same rate and with the same content. I'm not sure what to
> try next. Is there a way to trace where the logs are coming from?
> 
> Sample log entries on the OSD nodes look like this:
> 
> Jun 04 10:34:02 pf-osd1 ceph-osd-0[182875]: 2024-06-04T10:34:02.470-0600
> 7fc049c03700 -1 osd.0 pg_epoch: 703151 pg[35.39s0( v 703141'789389
> (701266'780746,703141'789389] local-lis/les=702935/702936 n=48162 
> ec=63726/27988
> lis/c=702935/702935 les/c/f=702936/702936/0 sis=702935)
> [0,194,132,3,177,159,83,18,149,14,145]p0(0) r=0 lpr=702935 crt=703141'789389
> lcod 703141'789388 mlcod 703141'789388 active+clean planned 
> DEEP_SCRUB_ON_ERROR]
> scrubber <NotActive/>: handle_scrub_reserve_grant: received unsolicited
> reservation grant from osd 177(4) (0x55fdea6c4000)
> 
> These messages are very verbose and occur roughly every 0.5 seconds per daemon.
> On a cluster with 200 daemons this is becoming unmanageable and is flooding my
> syslog servers.
> 
> Any advice on how to tame all the logs would be greatly appreciated!
> 
> Best,
> 
> Josh
> 
> Joshua Arulsamy
> HPC Systems Architect
> Advanced Research Computing Center
> University of Wyoming
> jarul...@uwyo.edu
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
