Thanks, that is most useful to know!

The Ceph docs are very good except when they propagate obsolete
information. For example, instructions that use "ceph-deploy" on
Octopus (my copy didn't come with ceph-deploy; it used cephadm).

And, alas, nothing has been written to delineate the differences
between containerized resource management and "bare" resource
management.

Hopefully that will get fixed someday. Ideally, it will be both fixed
and applied retroactively, where applicable.

   Tim

On Wed, 2023-12-20 at 12:32 +0000, Eugen Block wrote:
> Just to add a bit more information: the 'ceph daemon' command is still
> valid; it just has to be issued inside the containers:
> 
> quincy-1:~ # cephadm enter --name osd.0
> Inferring fsid 1e6e5cb6-73e8-11ee-b195-fa163ee43e22
> [ceph: root@quincy-1 /]# ceph daemon osd.0 config diff | head
> {
>      "diff": {
>          "bluestore_block_db_size": {
>              "default": "0",
>              "mon": "2000000000",
>              "final": "2000000000"
>          },
> 
> Or with the cephadm shell:
> 
> quincy-1:~ # cephadm shell --name osd.0 -- ceph daemon osd.0 config diff | head
> 
> But 'ceph tell' should work as well; I just wanted to show some more
> context and options.
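> 
> For example, from any node with an admin keyring, something along
> these lines should return the same data without entering the container
> (osd.0 is just the daemon from the example above):
> 
> quincy-1:~ # ceph tell osd.0 config diff | head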
> 
> As for the debug messages, there are a couple of things to tweak, as
> you may have noticed. For example, you could reduce the log level of
> debug_rocksdb (default 4/5). If you want to reduce the frequency of the
> repeating health messages (one per mgr_tick_period, two seconds by
> default), you can increase the period like this:
> 
> quincy-1:~ # ceph config set mgr mgr_tick_period 10
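> 
> To verify that the override took effect, something like this should do
> (just a sketch, not output from a real run):
> 
> quincy-1:~ # ceph config get mgr mgr_tick_period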
> 
> But don't use too large a period; I mentioned that in a recent
> thread. 10 seconds seems to work just fine for me, though.
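> 
> And for the debug_rocksdb idea, a minimal sketch (the daemon types and
> the 1/5 level are only examples, pick what suits your cluster):
> 
> quincy-1:~ # ceph config set mon debug_rocksdb 1/5
> quincy-1:~ # ceph config set osd debug_rocksdb 1/5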
> 
> 
> 
> Quoting Tim Holloway <t...@mousetech.com>:
> 
> > OK. I found some log-level overrides in the monitor and reset them.
> > 
> > Restarted the mgr and monitor just in case.
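> > 
> > (For anyone following along, a rough sketch of how such overrides can
> > be listed and cleared, assuming they were set centrally with "ceph
> > config set"; the option name below is just an example:
> > 
> > ceph config dump | grep -i debug
> > ceph config rm mon debug_rocksdb
> > 
> > followed by restarting the affected daemons.)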
> > 
> > Still getting a lot of stuff that looks like this.
> > 
> > Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:51.314+0000
> > 7f36d7291700  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1443]
> > [default] Manual compact>
> > Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:51.314+0000
> > 7f36d7291700  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1443]
> > [default] Manual compact>
> > Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:51.314+0000
> > 7f36d7291700  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1443]
> > [default] Manual compact>
> > Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:51.314+0000
> > 7f36d7291700  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1443]
> > [default] Manual compact>
> > Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-
> > 19T22:10:49.542670+0000
> > mgr.xyz1 (mgr.6889303) 177 : cluster [DBG] pgmap v160: 649 pgs: 1
> > active+clean+scrubbin>
> > Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:51.542+0000
> > 7fa1d9fc7700  0 log_channel(cluster) log [DBG] : pgmap v161: 649
> > pgs: 1
> > active+clean+scrubbi>
> > Dec 19 17:10:52 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-
> > 19T22:10:51.543403+0000
> > mgr.xyz1 (mgr.6889303) 178 : cluster [DBG] pgmap v161: 649 pgs: 1
> > active+clean+scrubbin>
> > Dec 19 17:10:52 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:52.748+0000
> > 7fa1c74a2700  0 [progress INFO root] Processing OSDMap change
> > 20239..20239
> > Dec 19 17:10:53 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:53.544+0000
> > 7fa1d9fc7700  0 log_channel(cluster) log [DBG] : pgmap v162: 649
> > pgs: 1
> > active+clean+scrubbi>
> > Dec 19 17:10:54 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-
> > 19T22:10:53.545649+0000
> > mgr.xyz1 (mgr.6889303) 179 : cluster [DBG] pgmap v162: 649 pgs: 1
> > active+clean+scrubbin>
> > Dec 19 17:10:55 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:55.545+0000
> > 7fa1d9fc7700  0 log_channel(cluster) log [DBG] : pgmap v163: 649
> > pgs: 1
> > active+clean+scrubbi>
> > Dec 19 17:10:55 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:55.834+0000
> > 7f36de29f700  1 mon.xyz1@1(peon).osd e20239 _set_new_cache_sizes
> > cache_size:1020054731 inc_a>
> > Dec 19 17:10:56 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-
> > 19T22:10:55.546197+0000
> > mgr.xyz1 (mgr.6889303) 180 : cluster [DBG] pgmap v163: 649 pgs: 1
> > active+clean+scrubbin>
> > Dec 19 17:10:57 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:57.546+0000
> > 7fa1d9fc7700  0 log_channel(cluster) log [DBG] : pgmap v164: 649
> > pgs: 1
> > active+clean+scrubbi>
> > Dec 19 17:10:57 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:57.751+0000
> > 7fa1c74a2700  0 [progress INFO root] Processing OSDMap change
> > 20239..20239
> > Dec 19 17:10:58 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-
> > 19T22:10:57.547657+0000
> > mgr.xyz1 (mgr.6889303) 181 : cluster [DBG] pgmap v164: 649 pgs: 1
> > active+clean+scrubbin>
> > Dec 19 17:10:59 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:10:59.548+0000
> > 7fa1d9fc7700  0 log_channel(cluster) log [DBG] : pgmap v165: 649
> > pgs: 1
> > active+clean+scrubbi>
> > Dec 19 17:11:00 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mgr-xyz1[1906755]: ::ffff:10.0.1.2 - -
> > [19/Dec/2023:22:11:00] "GET /metrics HTTP/1.1" 200 215073 ""
> > "Prometheus/2.33.4"
> > Dec 19 17:11:00 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:11:00.762+0000
> > 7fa1b6e42700  0 [prometheus INFO cherrypy.access.140332751105776]
> > ::ffff:10.0.1.2 - - [19/De>
> > Dec 19 17:11:00 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:11:00.835+0000
> > 7f36de29f700  1 mon.xyz1@1(peon).osd e20239 _set_new_cache_sizes
> > cache_size:1020054731 inc_a>
> > Dec 19 17:11:00 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-
> > 19T22:10:59.549152+0000
> > mgr.xyz1 (mgr.6889303) 182 : cluster [DBG] pgmap v165: 649 pgs: 1
> > active+clean+scrubbin>
> > Dec 19 17:11:01 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mgr-xyz1[1906755]: debug 2023-12-19T22:11:01.548+0000
> > 7fa1d9fc7700  0 log_channel(cluster) log [DBG] : pgmap v166: 649
> > pgs: 1
> > active+clean+scrubbi>
> > Dec 19 17:11:02 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
> > 9c5c8e86cf8f-mon-xyz1[1906961]: cluster 2023-12-
> > 19T22:11:01.549680+0000
> > mgr.xyz1 (mgr.6889303) 183 : cluster [DBG] pgmap v166: 649 pgs: 1
> > active+clean+scrubbin>
> > 
> > 
> > 
> > On Tue, 2023-12-19 at 16:21 -0500, Wesley Dillingham wrote:
> > > "ceph daemon" commands need to be run local to the machine where
> > > the
> > > daemon
> > > is running. So in this case if you arent on the node where osd.1
> > > lives it
> > > wouldnt work. "ceph tell" should work anywhere there is a
> > > client.admin key.
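> > > 
> > > For example, from any node holding the admin keyring, something
> > > like this should do what the docs describe (osd.1 taken from your
> > > command above):
> > > 
> > > ceph tell osd.1 config show | less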
> > > 
> > > 
> > > Respectfully,
> > > 
> > > *Wes Dillingham*
> > > w...@wesdillingham.com
> > > LinkedIn <http://www.linkedin.com/in/wesleydillingham>
> > > 
> > > 
> > > On Tue, Dec 19, 2023 at 4:02 PM Tim Holloway <t...@mousetech.com>
> > > wrote:
> > > 
> > > > Ceph version is Pacific (16.2.14), upgraded from a sloppy
> > > > Octopus.
> > > > 
> > > > I ran afoul of all the best bugs in Octopus, and in the process
> > > > switched on a lot of stuff better left alone, including some
> > > > detailed debug logging. Now I can't turn it off.
> > > > 
> > > > I am confidently informed by the documentation that the first step
> > > > would be the command:
> > > > 
> > > > ceph daemon osd.1 config show | less
> > > > 
> > > > But instead of config information I get back:
> > > > 
> > > > Can't get admin socket path: unable to get conf option admin_socket
> > > > for osd: b"error parsing 'osd': expected string of the form TYPE.ID,
> > > > valid types are: auth, mon, osd, mds, mgr, client\n"
> > > > 
> > > > Which seems to be kind of insane.
> > > > 
> > > > Attempting to get daemon config info on a monitor on that machine
> > > > gives:
> > > > 
> > > > admin_socket: exception getting command descriptions: [Errno 2] No
> > > > such file or directory
> > > > 
> > > > Which doesn't help either.
> > > > 
> > > > Anyone got an idea?
> 
> 
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
