Hi, I also observe the pgmap version increasing every second or so; see the 
snippets below. I run mimic 13.2.8 without any PG scaling/upmapping, so the 
pg_autoscaler mentioned below cannot be the cause here. Why does the version 
increase so often?

May 14 12:33:50 ceph-03 journal: cluster 2020-05-14 12:33:48.521546 mgr.ceph-02 
mgr.27460080 192.168.32.66:0/63 114833 : cluster [DBG] pgmap v114860: 2545 pgs: 
2 active+clean+scrubbing+deep, 2543 active+clean; 195 TiB data, 249 TiB used, 
1.5 PiB / 1.8 PiB avail; 4.8 MiB/s rd, 11 MiB/s wr, 1.48 kop/s

May 14 12:33:50 ceph-02 journal: 2020-05-14 12:33:50.543 7fdb57c5b700  0 
log_channel(cluster) log [DBG] : pgmap v114861: 2545 pgs: 2 
active+clean+scrubbing+deep, 2543 active+clean; 195 TiB data, 249 TiB used, 1.5 
PiB / 1.8 PiB avail; 5.6 MiB/s rd, 11 MiB/s wr, 1.21 kop/s

May 14 12:33:52 ceph-02 journal: 2020-05-14 12:33:52.565 7fdb57c5b700  0 
log_channel(cluster) log [DBG] : pgmap v114862: 2545 pgs: 2 
active+clean+scrubbing+deep, 2543 active+clean; 195 TiB data, 249 TiB used, 1.5 
PiB / 1.8 PiB avail; 8.9 MiB/s rd, 16 MiB/s wr, 1.59 kop/s

The version increases every couple of seconds, here from pgmap v114860 to 
pgmap v114862 (see the one-liner after the status output below for watching 
this live). Current cluster status:

[root@gnosis]# ceph status
  cluster:
    id:     ---
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-01,ceph-02,ceph-03
    mgr: ceph-02(active), standbys: ceph-01, ceph-03
    mds: con-fs2-1/1/1 up  {0=ceph-08=up:active}, 1 up:standby-replay
    osd: 288 osds: 268 up, 268 in

  data:
    pools:   10 pools, 2545 pgs
    objects: 80.80 M objects, 195 TiB
    usage:   249 TiB used, 1.5 PiB / 1.8 PiB avail
    pgs:     2543 active+clean
             2    active+clean+scrubbing+deep

  io:
    client:   20 MiB/s rd, 21 MiB/s wr, 578 op/s rd, 1.08 kop/s wr
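
For anyone who wants to reproduce this: the pgmap lines above are emitted on
the cluster log channel at DBG level, so something like the following should
show them live (untested sketch; assumes access to an admin keyring):

    # follow the cluster log, including debug-level entries
    ceph -w --watch-debug | grep pgmap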

Thanks for any info!
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Nghia Viet Tran <nghia.viet.t...@mgm-tp.com>
Sent: 14 May 2020 03:49:38
To: Bryan Henderson; Ceph users mailing list
Subject: [ceph-users] Re: What is a pgmap?

If your Ceph cluster is running a recent version of Ceph, then the 
pg_autoscaler is probably the reason. Periodically, Ceph checks the cluster 
status and increases/decreases the number of PGs in the cluster if needed.
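
As a quick check (assuming a Nautilus-or-later cluster; the autoscaler does
not exist in Mimic), something like this should show whether the autoscaler
is enabled and what it recommends per pool:

    # is the pg_autoscaler mgr module enabled?
    ceph mgr module ls | grep -i pg_autoscaler

    # per-pool autoscale mode and PG recommendations
    ceph osd pool autoscale-status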

On 5/14/20, 03:37, "Bryan Henderson" <bry...@giraffe-data.com> wrote:

    I'm surprised I couldn't find this explained anywhere (I did look), but ...

    What is the pgmap and why does it get updated every few seconds on a tiny
    cluster that's mostly idle?

    I do know what a placement group (PG) is, and that when the documentation
    talks about placement group maps, it is talking about something else: the
    mapping of PGs to OSDs by CRUSH and the OSD maps.
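
    (For concreteness, the mapping I mean is what e.g. 'ceph pg map' reports:

        # which OSDs serve a given PG; pgid 1.0 is just an example
        ceph pg map 1.0

    i.e. the osdmap epoch plus the up and acting OSD sets for that PG -- as
    opposed to whatever this frequently-versioned pgmap is.)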

    --
    Bryan Henderson                                   San Jose, California

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
