On 26-3-2019 16:39, Ashley Merrick wrote:
Have you upgraded any OSDs?


No, we haven't gone through the OSDs yet.


On a test cluster I saw the same, and as I upgraded/restarted the OSDs the PGs started to come back online until it was at 100%.

I know the upgrade notes say not to change anything to do with pools during the upgrade, so I am guessing there is a code change that causes this until everything is on the same version.
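
Roughly, the per-host sequence looks like the sketch below (a sketch only; the exact package upgrade command depends on your distro, and it assumes the standard Nautilus upgrade order with noout already set):

    # after the ceph packages on a host have been upgraded to Nautilus
    # (package command depends on the distro), restart its OSDs
    systemctl restart ceph-osd.target

    # watch the PGs come back before moving on to the next host
    ceph -s

    # shows which daemons are still running Luminous vs Nautilus
    ceph versions

    # once every daemon reports Nautilus, clear the flag
    ceph osd unset noout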

Will continue with the OSDs then.


On Tue, Mar 26, 2019 at 11:37 PM Stadsnet <jwil...@stads.net> wrote:


    We did an upgrade from Luminous to Nautilus.

    After upgrading the three monitors, all our PGs showed up as
    inactive:

       cluster:
         id:     5bafad08-31b2-4716-be77-07ad2e2647eb
         health: HEALTH_ERR
                 noout flag(s) set
                 1 scrub errors
                 Reduced data availability: 1429 pgs inactive
                 316 pgs not deep-scrubbed in time
                 520 pgs not scrubbed in time
                 3 monitors have not enabled msgr2

       services:
         mon: 3 daemons, quorum Ceph-Mon1,Ceph-Mon2,Ceph-Mon3 (age 51m)
         mgr: Ceph-Mon1(active, since 23m), standbys: Ceph-Mon3, Ceph-Mon2
         osd: 103 osds: 103 up, 103 in
              flags noout
         rgw: 2 daemons active (S3-Ceph1, S3-Ceph2)

       data:
         pools:   26 pools, 3248 pgs
         objects: 134.92M objects, 202 TiB
         usage:   392 TiB used, 486 TiB / 879 TiB avail
         pgs:     100.000% pgs unknown
                  3248 unknown
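
    (The "monitors have not enabled msgr2" warning above is presumably just the post-mon-upgrade step from the Nautilus release notes that we haven't run yet; if I read the docs right, it should only need:)

        # enable the v2 messenger protocol on the mons once they are all on Nautilus
        ceph mon enable-msgr2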

    The system seems to keep working.

    Did we lose the reference "-1  0   root default"?

    Is there a fix for that?

      ceph osd tree
    ID  CLASS WEIGHT    TYPE NAME               STATUS REWEIGHT PRI-AFF
    -18        16.00000 root ssd
    -10         2.00000     host Ceph-Stor1-SSD
      80  nvme   2.00000         osd.80              up  1.00000 1.00000
    -11         2.00000     host Ceph-Stor2-SSD
      81  nvme   2.00000         osd.81              up  1.00000 1.00000
    -12         2.00000     host Ceph-Stor3-SSD
      82  nvme   2.00000         osd.82              up  1.00000 1.00000
    -13         2.00000     host Ceph-Stor4-SSD
      83  nvme   2.00000         osd.83              up  1.00000 1.00000
    -14         2.00000     host Ceph-Stor5-SSD
      84  nvme   2.00000         osd.84              up  1.00000 1.00000
    -15         2.00000     host Ceph-Stor6-SSD
      85  nvme   2.00000         osd.85              up  1.00000 1.00000
    -16         2.00000     host Ceph-Stor7-SSD
      86  nvme   2.00000         osd.86              up  1.00000 1.00000
    -17         2.00000     host Ceph-Stor8-SSD
      87  nvme   2.00000         osd.87              up  1.00000 1.00000
      -1       865.93420 root default
      -2       110.96700     host Ceph-Stor1
       0   hdd   9.09599         osd.0               up  1.00000 1.00000
       1   hdd   9.09599         osd.1               up  1.00000 1.00000
       2   hdd   9.09599         osd.2               up  1.00000 1.00000
       3   hdd   9.09599         osd.3               up  1.00000 1.00000
       4   hdd   9.09599         osd.4               up  1.00000 1.00000
       5   hdd   9.09599         osd.5               up  1.00000 1.00000
       6   hdd   9.09599         osd.6               up  1.00000 1.00000
       7   hdd   9.09599         osd.7               up  1.00000 1.00000
       8   hdd   9.09599         osd.8               up  1.00000 1.00000
       9   hdd   9.09599         osd.9               up  1.00000 1.00000
      88   hdd   9.09599         osd.88              up  1.00000 1.00000
      89   hdd   9.09599         osd.89              up  1.00000 1.00000
      -3       109.15189     host Ceph-Stor2
      10   hdd   9.09599         osd.10              up  1.00000 1.00000
      11   hdd   9.09599         osd.11              up  1.00000 1.00000
      12   hdd   9.09599         osd.12              up  1.00000 1.00000
      13   hdd   9.09599         osd.13              up  1.00000 1.00000
      14   hdd   9.09599         osd.14              up  1.00000 1.00000
      15   hdd   9.09599         osd.15              up  1.00000 1.00000
      16   hdd   9.09599         osd.16              up  1.00000 1.00000
      17   hdd   9.09599         osd.17              up  1.00000 1.00000
      18   hdd   9.09599         osd.18              up  1.00000 1.00000
      19   hdd   9.09599         osd.19              up  1.00000 1.00000
      90   hdd   9.09598         osd.90              up  1.00000 1.00000
      91   hdd   9.09598         osd.91              up  1.00000 1.00000
      -4       109.15189     host Ceph-Stor3
      20   hdd   9.09599         osd.20              up  1.00000 1.00000
      21   hdd   9.09599         osd.21              up  1.00000 1.00000
      22   hdd   9.09599         osd.22              up  1.00000 1.00000
      23   hdd   9.09599         osd.23              up  1.00000 1.00000
      24   hdd   9.09599         osd.24              up  1.00000 1.00000
      25   hdd   9.09599         osd.25              up  1.00000 1.00000
      26   hdd   9.09599         osd.26              up  1.00000 1.00000
      27   hdd   9.09599         osd.27              up  1.00000 1.00000
      28   hdd   9.09599         osd.28              up  1.00000 1.00000
      29   hdd   9.09599         osd.29              up  1.00000 1.00000
      92   hdd   9.09598         osd.92              up  1.00000 1.00000
      93   hdd   9.09598         osd.93              up  0.80002 1.00000
      -5       109.15189     host Ceph-Stor4
      30   hdd   9.09599         osd.30              up  1.00000 1.00000
      31   hdd   9.09599         osd.31              up  1.00000 1.00000
      32   hdd   9.09599         osd.32              up  1.00000 1.00000
      33   hdd   9.09599         osd.33              up  1.00000 1.00000
      34   hdd   9.09599         osd.34              up  0.90002 1.00000
      35   hdd   9.09599         osd.35              up  1.00000 1.00000
      36   hdd   9.09599         osd.36              up  1.00000 1.00000
      37   hdd   9.09599         osd.37              up  1.00000 1.00000
      38   hdd   9.09599         osd.38              up  1.00000 1.00000
      39   hdd   9.09599         osd.39              up  1.00000 1.00000
      94   hdd   9.09598         osd.94              up  1.00000 1.00000
      95   hdd   9.09598         osd.95              up  1.00000 1.00000
      -6       109.15189     host Ceph-Stor5
      40   hdd   9.09599         osd.40              up  1.00000 1.00000
      41   hdd   9.09599         osd.41              up  1.00000 1.00000
      42   hdd   9.09599         osd.42              up  1.00000 1.00000
      43   hdd   9.09599         osd.43              up  1.00000 1.00000
      44   hdd   9.09599         osd.44              up  1.00000 1.00000
      45   hdd   9.09599         osd.45              up  1.00000 1.00000
      46   hdd   9.09599         osd.46              up  1.00000 1.00000
      47   hdd   9.09599         osd.47              up  1.00000 1.00000
      48   hdd   9.09599         osd.48              up  1.00000 1.00000
      49   hdd   9.09599         osd.49              up  1.00000 1.00000
      96   hdd   9.09598         osd.96              up  1.00000 1.00000
      97   hdd   9.09598         osd.97              up  1.00000 1.00000
      -7       109.15187     host Ceph-Stor6
      50   hdd   9.09599         osd.50              up  1.00000 1.00000
      51   hdd   9.09599         osd.51              up  1.00000 1.00000
      52   hdd   9.09598         osd.52              up  0.80005 1.00000
      53   hdd   9.09599         osd.53              up  1.00000 1.00000
      54   hdd   9.09599         osd.54              up  1.00000 1.00000
      55   hdd   9.09599         osd.55              up  1.00000 1.00000
      56   hdd   9.09599         osd.56              up  1.00000 1.00000
      57   hdd   9.09599         osd.57              up  1.00000 1.00000
      58   hdd   9.09599         osd.58              up  1.00000 1.00000
      59   hdd   9.09599         osd.59              up  1.00000 1.00000
      98   hdd   9.09598         osd.98              up  1.00000 1.00000
      99   hdd   9.09598         osd.99              up  1.00000 1.00000
      -8       109.15189     host Ceph-Stor7
      60   hdd   9.09599         osd.60              up  1.00000 1.00000
      61   hdd   9.09599         osd.61              up  1.00000 1.00000
      62   hdd   9.09599         osd.62              up  1.00000 1.00000
      63   hdd   9.09599         osd.63              up  1.00000 1.00000
      64   hdd   9.09599         osd.64              up  1.00000 1.00000
      65   hdd   9.09599         osd.65              up  1.00000 1.00000
      66   hdd   9.09599         osd.66              up  1.00000 1.00000
      67   hdd   9.09599         osd.67              up  1.00000 1.00000
      68   hdd   9.09599         osd.68              up  1.00000 1.00000
      69   hdd   9.09599         osd.69              up  1.00000 1.00000
    100   hdd   9.09598         osd.100             up  1.00000 1.00000
    101   hdd   9.09598         osd.101             up  1.00000 1.00000
      -9       100.05589     host Ceph-Stor8
      70   hdd   9.09599         osd.70              up  0.90002 1.00000
      71   hdd   9.09599         osd.71              up  1.00000 1.00000
      72   hdd   9.09599         osd.72              up  1.00000 1.00000
      73   hdd   9.09599         osd.73              up  0.90002 1.00000
      74   hdd   9.09599         osd.74              up  1.00000 1.00000
      75   hdd   9.09599         osd.75              up  1.00000 1.00000
      76   hdd   9.09599         osd.76              up  1.00000 1.00000
      77   hdd   9.09599         osd.77              up  0.95000 1.00000
      78   hdd   9.09598         osd.78              up  0.95000 1.00000
      79   hdd   9.09599         osd.79              up  1.00000 1.00000
    102   hdd   9.09598         osd.102             up  1.00000 1.00000

    _______________________________________________
    ceph-users mailing list
    ceph-users@lists.ceph.com
    http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

