[ceph-users] Re: Ceph 16.2.14: pgmap updated every few seconds for no apparent reason

2023-10-25 Thread Zakhar Kirpichenko
Thanks for the warning, Eugen.

/Z

On Wed, 25 Oct 2023 at 13:04, Eugen Block  wrote:

> Hi,
>
> this setting is not as harmless as I assumed. There seem to be more
> ticks/periods/health_checks involved. When I choose a mgr_tick_period
> value > 30 seconds, the two MGRs keep respawning. 30 seconds is the
> highest value that still seemed to work without an MGR respawn, even
> with an increased mon_mgr_beacon_grace (default 30 sec.). So if you
> decide to increase mgr_tick_period, don't go over 30 seconds unless
> you find out what else you need to change.
>
> Regards,
> Eugen

[ceph-users] Re: Ceph 16.2.14: pgmap updated every few seconds for no apparent reason

2023-10-25 Thread Eugen Block

Hi,

this setting is not as harmless as I assumed. There seem to be more
ticks/periods/health_checks involved. When I choose a mgr_tick_period
value > 30 seconds, the two MGRs keep respawning. 30 seconds is the
highest value that still seemed to work without an MGR respawn, even
with an increased mon_mgr_beacon_grace (default 30 sec.). So if you
decide to increase mgr_tick_period, don't go over 30 seconds unless
you find out what else you need to change.
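
A minimal sketch of what such a change could look like (the values below
are examples only, and per the above a tick period over 30 seconds may
still cause respawns even with a higher beacon grace):

$ # raise the MON-side grace for MGR beacons before raising the tick period
$ ceph config set mon mon_mgr_beacon_grace 90
$ ceph config set mgr mgr_tick_period 30
$ # verify both values
$ ceph config get mon mon_mgr_beacon_grace
90
$ ceph config get mgr mgr_tick_period
30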


Regards,
Eugen


Quoting Eugen Block:


Hi,

you can change the report interval with this config option (default  
2 seconds):


$ ceph config get mgr mgr_tick_period
2

$ ceph config set mgr mgr_tick_period 10

Regards,
Eugen


[ceph-users] Re: Ceph 16.2.14: pgmap updated every few seconds for no apparent reason

2023-10-19 Thread Zakhar Kirpichenko
Thanks, Eugen. This is a useful setting.

/Z

On Thu, 19 Oct 2023 at 10:43, Eugen Block  wrote:

> Hi,
>
> you can change the report interval with this config option (default 2
> seconds):
>
> $ ceph config get mgr mgr_tick_period
> 2
>
> $ ceph config set mgr mgr_tick_period 10
>
> Regards,
> Eugen

[ceph-users] Re: Ceph 16.2.14: pgmap updated every few seconds for no apparent reason

2023-10-19 Thread Eugen Block

Hi,

you can change the report interval with this config option (default 2  
seconds):


$ ceph config get mgr mgr_tick_period
2

$ ceph config set mgr mgr_tick_period 10
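
A rough way to confirm the new interval, assuming you have a client with
access to the cluster log (illustrative only, not output from a real run):

$ # pgmap entries should now appear roughly every 10 seconds
$ ceph -w | grep 'pgmap v'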

Regards,
Eugen

Quoting Chris Palmer:

I have just checked 2 quincy 17.2.6 clusters, and I see exactly the  
same. The pgmap version is bumping every two seconds (which ties in  
with the frequency you observed). Both clusters are healthy with  
nothing apart from client IO happening.



[ceph-users] Re: Ceph 16.2.14: pgmap updated every few seconds for no apparent reason

2023-10-13 Thread Chris Palmer
I have just checked two Quincy 17.2.6 clusters, and I see exactly the
same. The pgmap version is bumping every two seconds (which ties in with
the frequency you observed). Both clusters are healthy, with nothing
apart from client IO happening.
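
As a rough sketch for quantifying the rate yourself (the log path below is
an assumption; it will differ on containerized deployments):

$ # count pgmap updates per minute; ~30/min matches the 2-second default tick
$ grep 'pgmap v' /var/log/ceph/ceph.log | cut -c1-16 | sort | uniq -c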


On 13/10/2023 12:09, Zakhar Kirpichenko wrote:

Hi,

I am investigating excessive mon writes in our cluster and wondering
whether excessive pgmap updates could be the culprit. Basically, the pgmap
is updated every few seconds, sometimes more than ten times per minute,
in a healthy cluster with no OSD and/or PG changes:

Oct 13 11:03:03 ceph03 bash[4019]: cluster 2023-10-13T11:03:01.515438+
mgr.ceph01.vankui (mgr.336635131) 838252 : cluster [DBG] pgmap v606575:
2400 pgs: 5 active+clean+scrubbing+deep, 2395 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 60 MiB/s rd, 109 MiB/s wr, 5.65k op/s
Oct 13 11:03:04 ceph03 bash[4019]: cluster 2023-10-13T11:03:03.520953+
mgr.ceph01.vankui (mgr.336635131) 838253 : cluster [DBG] pgmap v606576:
2400 pgs: 5 active+clean+scrubbing+deep, 2395 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 64 MiB/s rd, 128 MiB/s wr, 5.76k op/s
Oct 13 11:03:06 ceph03 bash[4019]: cluster 2023-10-13T11:03:05.524474+
mgr.ceph01.vankui (mgr.336635131) 838255 : cluster [DBG] pgmap v606577:
2400 pgs: 5 active+clean+scrubbing+deep, 2395 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 64 MiB/s rd, 122 MiB/s wr, 5.57k op/s
Oct 13 11:03:08 ceph03 bash[4019]: cluster 2023-10-13T11:03:07.530484+
mgr.ceph01.vankui (mgr.336635131) 838256 : cluster [DBG] pgmap v606578:
2400 pgs: 5 active+clean+scrubbing+deep, 2395 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 79 MiB/s rd, 127 MiB/s wr, 6.62k op/s
Oct 13 11:03:10 ceph03 bash[4019]: cluster 2023-10-13T11:03:09.57+
mgr.ceph01.vankui (mgr.336635131) 838258 : cluster [DBG] pgmap v606579:
2400 pgs: 5 active+clean+scrubbing+deep, 2395 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 66 MiB/s rd, 104 MiB/s wr, 5.38k op/s
Oct 13 11:03:12 ceph03 bash[4019]: cluster 2023-10-13T11:03:11.537908+
mgr.ceph01.vankui (mgr.336635131) 838259 : cluster [DBG] pgmap v606580:
2400 pgs: 5 active+clean+scrubbing+deep, 2395 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 85 MiB/s rd, 121 MiB/s wr, 6.43k op/s
Oct 13 11:03:13 ceph03 bash[4019]: cluster 2023-10-13T11:03:13.543490+
mgr.ceph01.vankui (mgr.336635131) 838260 : cluster [DBG] pgmap v606581:
2400 pgs: 5 active+clean+scrubbing+deep, 2395 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 78 MiB/s rd, 127 MiB/s wr, 6.54k op/s
Oct 13 11:03:16 ceph03 bash[4019]: cluster 2023-10-13T11:03:15.547122+
mgr.ceph01.vankui (mgr.336635131) 838262 : cluster [DBG] pgmap v606582:
2400 pgs: 5 active+clean+scrubbing+deep, 2395 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 71 MiB/s rd, 122 MiB/s wr, 6.08k op/s
Oct 13 11:03:18 ceph03 bash[4019]: cluster 2023-10-13T11:03:17.553180+
mgr.ceph01.vankui (mgr.336635131) 838263 : cluster [DBG] pgmap v606583:
2400 pgs: 1 active+clean+scrubbing, 5 active+clean+scrubbing+deep, 2394
active+clean; 16 TiB data, 61 TiB used, 716 TiB / 777 TiB avail; 75 MiB/s
rd, 176 MiB/s wr, 6.83k op/s
Oct 13 11:03:20 ceph03 bash[4019]: cluster 2023-10-13T11:03:19.555960+
mgr.ceph01.vankui (mgr.336635131) 838264 : cluster [DBG] pgmap v606584:
2400 pgs: 1 active+clean+scrubbing, 5 active+clean+scrubbing+deep, 2394
active+clean; 16 TiB data, 61 TiB used, 716 TiB / 777 TiB avail; 58 MiB/s
rd, 161 MiB/s wr, 5.55k op/s
Oct 13 11:03:22 ceph03 bash[4019]: cluster 2023-10-13T11:03:21.560597+
mgr.ceph01.vankui (mgr.336635131) 838266 : cluster [DBG] pgmap v606585:
2400 pgs: 1 active+clean+scrubbing, 5 active+clean+scrubbing+deep, 2394
active+clean; 16 TiB data, 61 TiB used, 716 TiB / 777 TiB avail; 62 MiB/s
rd, 221 MiB/s wr, 6.19k op/s
Oct 13 11:03:24 ceph03 bash[4019]: cluster 2023-10-13T11:03:23.565974+
mgr.ceph01.vankui (mgr.336635131) 838267 : cluster [DBG] pgmap v606586:
2400 pgs: 1 active+clean+scrubbing, 5 active+clean+scrubbing+deep, 2394
active+clean; 16 TiB data, 61 TiB used, 716 TiB / 777 TiB avail; 50 MiB/s
rd, 246 MiB/s wr, 5.93k op/s
Oct 13 11:03:26 ceph03 bash[4019]: cluster 2023-10-13T11:03:25.569471+
mgr.ceph01.vankui (mgr.336635131) 838269 : cluster [DBG] pgmap v606587:
2400 pgs: 1 active+clean+scrubbing, 5 active+clean+scrubbing+deep, 2394
active+clean; 16 TiB data, 61 TiB used, 716 TiB / 777 TiB avail; 41 MiB/s
rd, 240 MiB/s wr, 4.99k op/s
Oct 13 11:03:28 ceph03 bash[4019]: cluster 2023-10-13T11:03:27.575618+
mgr.ceph01.vankui (mgr.336635131) 838270 : cluster [DBG] pgmap v606588:
2400 pgs: 4 active+clean+scrubbing+deep, 2396 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 44 MiB/s rd, 259 MiB/s wr, 5.38k op/s
Oct 13 11:03:30 ceph03 bash[4019]: cluster 2023-10-13T11:03:29.578262+
mgr.ceph01.vankui (mgr.336635131) 838271 : cluster [DBG] pgmap v606589:
2400 pgs: 4