In terms of upgrade state: there were no degraded PGs, but there were misplaced
objects / rebalancing due to additional drives being added. Before the new
drives, everything had been deep scrubbed on version 18.2.7. The new pool was
added halfway through the upgrade process. I guess I'm doing too many things at
the same time.
During the upgrade I did add these config settings, since it seemed stuck and
going badly with a bunch of "experiencing slow operations in BlueStore"
warnings:

global  advanced  bdev_async_discard_threads  1
global  advanced  bdev_enable_discard         true
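
(For completeness, I set them roughly like this; the exact invocation is from
memory rather than a copy-paste:)

ceph config set global bdev_async_discard_threads 1
ceph config set global bdev_enable_discard true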
I've attached all the things you asked for; unfortunately they're from after I
rolled my own ceph-osd without the check (OSD.cc_patch.txt). I tried it on one
OSD first and it seemed happy and functional without crashing, so I brought the
others up using the custom binary.
I'm too scared to do anything to pool 57 yet... I'll wait, let a full deep
scrub finish, then delete the pool I guess, then revert one of the OSDs to the
unpatched version and see how it goes.
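
For the record, before deleting anything I'm planning to follow your
ceph-objectstore-tool suggestion. Roughly what I have in mind for one OSD --
untested on my side so far, and osd.21 plus the export file path are just
examples:

ceph orch daemon stop osd.21
cephadm shell --name osd.21      # shell with that OSD's data dir mounted
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-21 \
    --pgid 57.3s7 --op export --file /mnt/pg-57.3s7.export
# only once the export is safely copied off, and only if removal is still needed:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-21 \
    --pgid 57.3s7 --op remove --force

Once everything is exported and the scrubs are done, removing the pool itself
would presumably just be the usual "ceph osd pool rm block_hdd_ec block_hdd_ec
--yes-i-really-really-mean-it" (with mon_allow_pool_delete enabled), but I'm
not touching that yet.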
On Fri, Sep 26, 2025 at 4:37 PM Eugen Block <[email protected]> wrote:
> Hi,
>
> I haven't seen this error yet, did you upgrade while the cluster was
> not healthy? The more history you can provide, the better.
> Can you add the output of these CLI commands?
>
> ceph -s
> ceph health detail
> ceph pg ls-by-pool <pool_with_id_57> (not the entire output, just to
> see if they are listed)
>
> Before deleting a PG, I'd export it with ceph-objectstore-tool, just
> in case. Then you could try to remove it from one OSD (also with
> ceph-objectstore-tool) and see if that single OSD starts again. If it
> works, you could do the same for the remaining PG chunks.
>
> Downgrading is generally not supported, so you might break even more.
>
> Regards,
> Eugen
>
>
> Zitat von Daniel Williams <[email protected]>:
>
> > Some background: pool 57 is a new rbd pool (12MiB used) that I was just
> > experimenting with (performance of striped hdd rbd devices). I don't think
> > I deleted it, but can't say for sure (it appears in ceph df), since it
> > doesn't matter.
> > This pool was created on reef; a full deep scrub has been done several
> > times over since moving to reef (March 2024). Likely no deep scrub has been
> > done since moving to squid, since I've had lots of troubles...
> >
> > This error however has broken a 150TiB machine, and worse, I don't know that
> > a restart won't break others..
> >
> > After a host reboot I've lost half the OSDs on that host, they all refuse
> > to start with:
> >
> > -725> 2025-09-25T18:02:37.157+0000 7f93d0aab8c0 -1 Falling back to public
> > interface
> > -2> 2025-09-25T18:02:40.033+0000 7f93d0aab8c0 -1 osd.21 2098994 init
> > missing pg_pool_t for deleted pool 57 for pg 57.3s7; please downgrade to
> > luminous and allow pg deletion to complete before upgrading
> > -1> 2025-09-25T18:02:40.037+0000 7f93d0aab8c0 -1 ./src/osd/OSD.cc: In
> > function 'int OSD::init()' thread 7f93d0aab8c0 time
> > 2025-09-25T18:02:40.040491+0000
> > ./src/osd/OSD.cc: 3867: ceph_abort_msg("abort() called")
> >
> > ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid
> > (stable)
> > 1: (ceph::__ceph_abort(char const*, int, char const*,
> > std::__cxx11::basic_string<char, std::char_traits<char>,
> > std::allocator<char> > const&)+0xb7) [0x560b78a7056a]
> > 2: /usr/bin/ceph-osd(+0x385bcb) [0x560b789f0bcb]
> > 3: main()
> > 4: /lib/x86_64-linux-gnu/libc.so.6(+0x29d90) [0x7f93d165dd90]
> > 5: __libc_start_main()
> > 6: _start()
> >
> > 0> 2025-09-25T18:02:40.037+0000 7f93d0aab8c0 -1 *** Caught signal
> > (Aborted) **
> > in thread 7f93d0aab8c0 thread_name:ceph-osd
> >
> > ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid
> > (stable)
> > 1: /lib/x86_64-linux-gnu/libc.so.6(+0x42520) [0x7f93d1676520]
> > 2: pthread_kill()
> > 3: raise()
> > 4: abort()
> > 5: (ceph::__ceph_abort(char const*, int, char const*,
> > std::__cxx11::basic_string<char, std::char_traits<char>,
> > std::allocator<char> > const&)+0x16a) [0x560b78a7061d]
> > 6: /usr/bin/ceph-osd(+0x385bcb) [0x560b789f0bcb]
> > 7: main()
> > 8: /lib/x86_64-linux-gnu/libc.so.6(+0x29d90) [0x7f93d165dd90]
> > 9: __libc_start_main()
> > 10: _start()
> > NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed
> > to interpret this.
> >
> > Aborted
> >
> >
> >
> > Will deleting the PG help? Is there any way I can recover these OSDs?
> > Will moving back to reef help?
> >
> > Daniel
>
HEALTH_ERR 21 failed cephadm daemon(s); Module 'diskprediction_local' has
failed: No module named 'sklearn.svm.classes';
nodown,noout,noscrub,nodeep-scrub flag(s) set; 1 backfillfull osd(s); 2
nearfull osd(s); Low space hindering backfill (add storage if this doesn't
resolve itself): 210 pgs backfill_toofull; Degraded data redundancy:
16334/708013986 objects degraded (0.002%), 451 pgs degraded, 1 pg undersized;
27 pgs not deep-scrubbed in time; 382 pgs not scrubbed in time; 11 pool(s)
backfillfull; 238 daemons have recently crashed
[WRN] CEPHADM_FAILED_DAEMON: 21 failed cephadm daemon(s)
daemon osd.10 on ceph-server-05 is in error state
daemon osd.23 on ceph-server-05 is in error state
daemon osd.47 on ceph-server-05 is in error state
daemon osd.21 on ceph-server-05 is in error state
daemon osd.1 on ceph-server-05 is in error state
daemon osd.15 on ceph-server-05 is in error state
daemon osd.19 on ceph-server-05 is in error state
daemon osd.13 on ceph-server-05 is in error state
daemon osd.20 on ceph-server-05 is in error state
daemon osd.3 on ceph-server-05 is in error state
daemon osd.12 on ceph-server-05 is in error state
[ERR] MGR_MODULE_ERROR: Module 'diskprediction_local' has failed: No module
named 'sklearn.svm.classes'
Module 'diskprediction_local' has failed: No module named
'sklearn.svm.classes'
[WRN] OSDMAP_FLAGS: nodown,noout,noscrub,nodeep-scrub flag(s) set
[WRN] PG_DEGRADED: Degraded data redundancy: 16334/708013986 objects degraded
(0.002%), 451 pgs degraded, 1 pg undersized
pg 36.16d is active+recovery_wait+degraded+remapped, acting
[4,37,17,41,44,40,21,14,25]
pg 36.16e is active+recovery_wait+degraded, acting [21,37,45,13,2,39,0,6,42]
pg 36.175 is active+recovery_wait+degraded+remapped, acting
[51,40,31,2,23,30,33,26,21]
pg 36.176 is active+recovery_wait+degraded+remapped, acting
[41,37,28,44,20,1,29,51,16]
pg 36.177 is active+recovery_wait+degraded+remapped, acting
[21,27,25,34,30,2,28,36,15]
pg 36.178 is active+recovery_wait+degraded+remapped, acting
[36,40,24,51,43,20,26,39,46]
pg 36.179 is active+recovery_wait+degraded+remapped, acting
[43,41,9,51,27,12,1,40,25]
pg 36.17d is active+recovery_wait+degraded+remapped, acting
[40,20,41,39,42,6,27,21,33]
pg 36.17f is active+recovery_wait+degraded+remapped, acting
[33,11,6,28,39,38,20,44,25]
pg 36.180 is active+recovery_wait+degraded+remapped, acting
[26,11,35,40,12,19,15,39,31]
pg 36.181 is active+recovery_wait+degraded+remapped, acting
[45,23,41,14,24,31,43,30,3]
pg 36.182 is active+recovery_wait+degraded+remapped, acting
[13,40,31,45,26,34,9,46,21]
pg 36.187 is active+recovery_wait+degraded+remapped, acting
[30,12,19,40,33,32,28,41,27]
pg 36.188 is active+recovery_wait+degraded+remapped, acting
[26,36,28,33,21,6,8,43,13]
pg 36.18e is active+recovery_wait+degraded+remapped, acting
[5,39,30,11,37,38,21,51,6]
pg 36.19b is active+recovery_wait+degraded+remapped, acting
[36,2,40,19,15,39,41,6,37]
pg 36.19d is active+recovery_wait+degraded+remapped, acting
[46,10,28,12,23,44,39,8,42]
pg 36.19e is active+recovery_wait+degraded+remapped, acting
[29,26,36,24,37,31,15,22,21]
pg 36.19f is active+recovery_wait+degraded+remapped, acting
[45,20,9,42,16,10,13,33,27]
pg 36.1a3 is active+recovery_wait+degraded, acting
[29,26,5,27,14,15,11,2,28]
pg 36.1a6 is active+recovery_wait+degraded+remapped, acting
[21,46,10,25,18,30,28,38,43]
pg 36.1ab is active+recovery_wait+degraded+remapped, acting
[22,34,37,51,31,20,16,38,4]
pg 36.1b3 is active+recovery_wait+degraded+remapped, acting
[23,26,12,9,17,8,45,28,25]
pg 36.1b4 is active+recovery_wait+degraded+remapped, acting
[31,30,4,25,20,26,14,35,8]
pg 36.1c1 is active+recovery_wait+degraded+remapped, acting
[33,19,35,31,43,26,13,11,16]
pg 36.1c6 is active+recovery_wait+degraded+remapped, acting
[39,51,29,5,44,19,3,20,41]
pg 36.1c8 is active+recovery_wait+degraded+remapped, acting
[43,15,1,20,42,30,46,17,44]
pg 36.1c9 is active+recovery_wait+degraded+remapped, acting
[9,16,51,38,45,32,15,26,3]
pg 36.1cb is active+recovery_wait+degraded+remapped, acting
[34,38,43,5,44,3,4,18,15]
pg 36.1cf is active+recovering+degraded+remapped, acting
[30,28,11,10,44,25,38,29,4]
pg 36.1d2 is active+recovery_wait+degraded+remapped, acting
[46,12,39,34,37,8,14,35,23]
pg 36.1d3 is active+recovery_wait+degraded+remapped, acting
[42,15,18,38,2,51,35,40,12]
pg 36.1d6 is active+recovery_wait+degraded+remapped, acting
[8,22,24,9,51,4,12,18,2]
pg 36.1d7 is active+recovery_wait+degraded+remapped, acting
[12,13,27,22,25,42,35,31,14]
pg 36.1da is active+recovery_wait+degraded+remapped, acting
[5,15,26,36,51,40,1,13,18]
pg 36.1db is active+recovery_wait+degraded+remapped, acting
[44,19,31,45,10,40,37,34,23]
pg 36.1dc is active+recovery_wait+degraded+remapped, acting
[35,32,4,9,41,12,3,45,14]
pg 36.1df is active+recovery_wait+degraded+remapped, acting
[46,32,8,9,28,41,2,14,10]
pg 36.1e0 is active+recovery_wait+degraded+remapped, acting
[23,45,31,24,36,22,30,51,8]
pg 36.1e1 is active+recovery_wait+degraded+remapped, acting
[36,10,17,31,3,1,51,34,9]
pg 36.1e3 is active+recovery_wait+degraded+remapped, acting
[35,32,36,34,40,30,37,44,15]
pg 36.1e7 is active+recovery_wait+degraded+remapped, acting
[15,33,34,36,27,43,39,20,30]
pg 36.1ee is active+recovery_wait+degraded+remapped, acting
[26,34,30,6,12,32,29,45,38]
pg 36.1f0 is active+recovery_wait+degraded+remapped, acting
[45,46,23,33,41,15,39,28,3]
pg 36.1f1 is active+recovery_wait+degraded+remapped, acting
[2,29,15,10,9,1,17,38,8]
pg 36.1f3 is active+recovery_wait+degraded+remapped, acting
[39,11,20,45,8,3,4,15,40]
pg 36.1f6 is active+recovery_wait+degraded+remapped, acting
[51,16,17,29,14,20,31,26,9]
pg 36.1f7 is active+recovery_wait+degraded+remapped, acting
[22,38,43,37,51,20,14,11,39]
pg 36.1f9 is active+recovery_wait+degraded+remapped, acting
[13,8,39,30,42,5,41,35,23]
pg 36.1fa is active+recovery_wait+degraded+remapped, acting
[27,3,26,8,12,20,44,23,18]
pg 36.1fc is active+recovery_wait+degraded+remapped, acting
[25,27,37,24,29,2,46,23,44]
[WRN] PG_NOT_DEEP_SCRUBBED: 27 pgs not deep-scrubbed in time
pg 36.1ed not deep-scrubbed since 2025-06-02T08:52:01.978630+0000
pg 36.1e2 not deep-scrubbed since 2025-05-27T22:37:26.983391+0000
pg 36.1cd not deep-scrubbed since 2025-05-31T09:36:24.561777+0000
pg 36.1ac not deep-scrubbed since 2025-06-05T02:06:38.641143+0000
pg 36.191 not deep-scrubbed since 2025-05-31T09:16:52.328032+0000
pg 36.154 not deep-scrubbed since 2025-05-27T12:16:19.296119+0000
pg 36.12b not deep-scrubbed since 2025-05-31T03:45:18.665426+0000
pg 36.111 not deep-scrubbed since 2025-06-06T10:06:10.721467+0000
pg 36.ff not deep-scrubbed since 2025-06-06T08:30:55.849473+0000
pg 36.7a not deep-scrubbed since 2025-05-28T09:42:14.407044+0000
pg 36.83 not deep-scrubbed since 2025-06-03T22:28:22.147319+0000
pg 36.84 not deep-scrubbed since 2025-06-02T03:24:28.829138+0000
pg 36.87 not deep-scrubbed since 2025-05-30T03:47:03.497627+0000
pg 36.a1 not deep-scrubbed since 2025-06-04T22:31:41.550582+0000
pg 36.ae not deep-scrubbed since 2025-06-02T09:25:47.474502+0000
pg 36.ba not deep-scrubbed since 2025-05-28T06:36:59.877788+0000
pg 36.c3 not deep-scrubbed since 2025-06-04T12:06:03.383686+0000
pg 36.c6 not deep-scrubbed since 2025-06-07T01:13:52.510294+0000
pg 36.221 not deep-scrubbed since 2025-06-02T17:18:46.025283+0000
pg 36.25f not deep-scrubbed since 2025-06-04T00:37:01.045152+0000
pg 36.26b not deep-scrubbed since 2025-05-31T22:51:17.600427+0000
pg 36.2d6 not deep-scrubbed since 2025-05-29T22:09:39.789538+0000
pg 36.306 not deep-scrubbed since 2025-06-06T10:24:09.300608+0000
pg 36.318 not deep-scrubbed since 2025-05-31T06:22:57.825946+0000
pg 36.327 not deep-scrubbed since 2025-06-05T11:09:51.504437+0000
pg 36.3bd not deep-scrubbed since 2025-05-28T21:04:18.568936+0000
pg 36.3ca not deep-scrubbed since 2025-06-06T16:47:44.838151+0000
[WRN] PG_NOT_SCRUBBED: 382 pgs not scrubbed in time
pg 36.1f7 not scrubbed since 2025-08-26T00:31:31.134072+0000
pg 36.1f5 not scrubbed since 2025-08-25T10:39:34.140894+0000
pg 36.1f4 not scrubbed since 2025-08-26T00:34:54.740307+0000
pg 36.1f2 not scrubbed since 2025-08-26T04:43:27.378148+0000
pg 36.1f0 not scrubbed since 2025-08-25T19:28:26.481222+0000
pg 36.1ee not scrubbed since 2025-08-26T07:56:08.979846+0000
pg 36.1ed not scrubbed since 2025-08-25T19:31:59.370289+0000
pg 36.1e9 not scrubbed since 2025-08-29T17:30:56.458960+0000
pg 36.1e8 not scrubbed since 2025-08-25T03:48:01.036597+0000
pg 36.1e7 not scrubbed since 2025-08-26T00:57:33.574642+0000
pg 36.1e6 not scrubbed since 2025-08-29T03:47:27.093999+0000
pg 36.1e5 not scrubbed since 2025-08-25T18:10:49.880002+0000
pg 36.1e4 not scrubbed since 2025-08-25T20:36:27.752271+0000
pg 36.1e2 not scrubbed since 2025-08-24T13:21:04.156047+0000
pg 36.1e1 not scrubbed since 2025-08-24T21:45:47.445774+0000
pg 36.1de not scrubbed since 2025-08-24T19:37:10.074524+0000
pg 36.1dc not scrubbed since 2025-08-26T06:45:59.549795+0000
pg 36.1d4 not scrubbed since 2025-08-26T02:59:59.095574+0000
pg 36.1d2 not scrubbed since 2025-08-25T19:32:01.356868+0000
pg 36.1d0 not scrubbed since 2025-08-25T10:49:19.566213+0000
pg 36.1cd not scrubbed since 2025-08-25T17:28:23.918647+0000
pg 36.1ca not scrubbed since 2025-08-25T21:53:08.207961+0000
pg 36.1c5 not scrubbed since 2025-08-26T20:59:34.164260+0000
pg 36.1c4 not scrubbed since 2025-08-25T06:30:46.166745+0000
pg 36.1c2 not scrubbed since 2025-08-26T05:38:16.042117+0000
pg 36.1c1 not scrubbed since 2025-08-25T14:16:27.787485+0000
pg 36.1bf not scrubbed since 2025-08-28T09:22:07.429745+0000
pg 36.1be not scrubbed since 2025-08-25T02:01:11.379648+0000
pg 36.1bd not scrubbed since 2025-08-24T22:29:53.823579+0000
pg 36.1bc not scrubbed since 2025-08-24T03:07:51.968777+0000
pg 36.1b8 not scrubbed since 2025-08-25T16:49:21.070283+0000
pg 36.1b7 not scrubbed since 2025-08-25T17:26:35.241353+0000
pg 36.1b6 not scrubbed since 2025-08-26T07:18:19.950997+0000
pg 36.1b5 not scrubbed since 2025-08-25T07:15:04.472422+0000
pg 36.1b2 not scrubbed since 2025-08-25T00:29:59.675779+0000
pg 36.1b0 not scrubbed since 2025-08-25T11:56:03.202697+0000
pg 36.1af not scrubbed since 2025-08-25T17:47:38.372061+0000
pg 36.1ac not scrubbed since 2025-08-25T10:25:14.976910+0000
pg 36.1aa not scrubbed since 2025-08-25T16:19:33.674132+0000
pg 36.1a4 not scrubbed since 2025-08-26T02:10:49.199096+0000
pg 36.1a1 not scrubbed since 2025-08-25T16:22:17.206844+0000
pg 36.1a0 not scrubbed since 2025-08-25T07:14:18.655639+0000
pg 36.19d not scrubbed since 2025-08-25T19:06:31.842530+0000
pg 36.19c not scrubbed since 2025-08-27T10:54:07.380148+0000
pg 36.198 not scrubbed since 2025-08-25T23:03:25.364452+0000
pg 36.196 not scrubbed since 2025-08-25T22:50:06.225867+0000
pg 36.191 not scrubbed since 2025-08-27T03:32:50.022966+0000
pg 36.18d not scrubbed since 2025-08-26T02:24:47.973507+0000
pg 36.187 not scrubbed since 2025-08-24T21:15:34.350017+0000
pg 36.185 not scrubbed since 2025-08-25T03:43:42.698723+0000
332 more pgs...
[WRN] RECENT_CRASH: 238 daemons have recently crashed
osd.36 crashed on host ceph-server-05 at 2025-09-25T10:40:17.201198Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T10:40:01.684588Z
osd.13 crashed on host ceph-server-05 at 2025-09-25T10:40:00.512562Z
osd.10 crashed on host ceph-server-05 at 2025-09-25T10:40:02.563685Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T10:40:00.128406Z
osd.5 crashed on host ceph-server-05 at 2025-09-25T10:40:00.680096Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T14:57:44.500312Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T14:57:29.228440Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T14:57:28.794827Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T14:57:42.335691Z
osd.24 crashed on host ceph-server-05 at 2025-09-25T14:57:28.939476Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T14:57:42.121107Z
osd.20 crashed on host ceph-server-05 at 2025-09-25T14:57:28.693818Z
osd.5 crashed on host ceph-server-05 at 2025-09-25T14:57:29.199987Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T14:57:29.477333Z
osd.36 crashed on host ceph-server-05 at 2025-09-25T14:57:43.739267Z
osd.20 crashed on host ceph-server-05 at 2025-09-25T15:33:28.920013Z
osd.3 crashed on host ceph-server-05 at 2025-09-25T15:33:30.558606Z
osd.19 crashed on host ceph-server-05 at 2025-09-25T15:33:42.415705Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T15:33:42.564910Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T15:33:42.010683Z
osd.36 crashed on host ceph-server-05 at 2025-09-25T15:33:42.879751Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T15:33:29.354120Z
osd.13 crashed on host ceph-server-05 at 2025-09-25T15:33:30.215018Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T15:33:29.472992Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T15:33:29.310175Z
osd.10 crashed on host ceph-server-05 at 2025-09-25T15:33:30.253579Z
osd.10 crashed on host ceph-server-05 at 2025-09-25T15:33:44.625447Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T15:33:42.752128Z
osd.19 crashed on host ceph-server-05 at 2025-09-25T15:33:28.914200Z
osd.24 crashed on host ceph-server-05 at 2025-09-25T15:33:29.487262Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T15:33:28.749397Z
osd.5 crashed on host ceph-server-05 at 2025-09-25T15:33:29.500974Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T15:33:42.330334Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T15:46:20.389541Z
osd.20 crashed on host ceph-server-05 at 2025-09-25T15:46:24.051269Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T15:46:20.936521Z
osd.10 crashed on host ceph-server-05 at 2025-09-25T15:46:05.466179Z
osd.20 crashed on host ceph-server-05 at 2025-09-25T15:53:09.186801Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T15:46:35.322462Z
osd.19 crashed on host ceph-server-05 at 2025-09-25T15:47:15.505884Z
osd.3 crashed on host ceph-server-05 at 2025-09-25T15:46:34.381522Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T15:45:55.502765Z
osd.19 crashed on host ceph-server-05 at 2025-09-25T15:46:26.463406Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T15:53:08.039662Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T15:46:45.181436Z
osd.13 crashed on host ceph-server-05 at 2025-09-25T15:45:59.543636Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T15:46:18.966795Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T15:46:05.230566Z
osd.20 crashed on host ceph-server-05 at 2025-09-25T15:46:37.178783Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T15:46:07.148388Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T15:46:07.931238Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T15:46:22.732026Z
osd.13 crashed on host ceph-server-05 at 2025-09-25T15:46:26.219552Z
osd.3 crashed on host ceph-server-05 at 2025-09-25T15:53:10.339995Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T15:53:07.758179Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T15:53:07.730803Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T15:53:10.176356Z
osd.10 crashed on host ceph-server-05 at 2025-09-25T15:46:45.829197Z
osd.20 crashed on host ceph-server-05 at 2025-09-25T15:45:56.070112Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T15:53:07.991015Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T15:45:51.031475Z
osd.13 crashed on host ceph-server-05 at 2025-09-25T15:46:39.779710Z
osd.19 crashed on host ceph-server-05 at 2025-09-25T15:46:39.456593Z
osd.10 crashed on host ceph-server-05 at 2025-09-25T15:46:32.423681Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T15:46:50.232125Z
osd.10 crashed on host ceph-server-05 at 2025-09-25T15:46:18.861425Z
osd.20 crashed on host ceph-server-05 at 2025-09-25T15:46:09.322810Z
osd.13 crashed on host ceph-server-05 at 2025-09-25T15:53:08.425025Z
osd.19 crashed on host ceph-server-05 at 2025-09-25T15:53:07.606282Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T15:45:54.762237Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T15:46:33.926534Z
osd.10 crashed on host ceph-server-05 at 2025-09-25T15:53:08.473830Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T15:46:21.523056Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T15:46:08.528371Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T15:46:34.533641Z
osd.3 crashed on host ceph-server-05 at 2025-09-25T15:46:19.247419Z
osd.3 crashed on host ceph-server-05 at 2025-09-25T15:46:49.714783Z
osd.3 crashed on host ceph-server-05 at 2025-09-25T15:46:03.597128Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T15:46:53.905311Z
osd.19 crashed on host ceph-server-05 at 2025-09-25T15:46:00.255026Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T15:46:31.997223Z
osd.19 crashed on host ceph-server-05 at 2025-09-25T15:46:13.438324Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T15:45:52.333023Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T15:46:38.457703Z
osd.13 crashed on host ceph-server-05 at 2025-09-25T15:46:12.865329Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T15:46:05.845792Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T15:53:38.345347Z
osd.10 crashed on host ceph-server-05 at 2025-09-25T15:53:24.427465Z
osd.3 crashed on host ceph-server-05 at 2025-09-25T15:53:45.567368Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T15:53:23.025940Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T15:53:22.963412Z
osd.3 crashed on host ceph-server-05 at 2025-09-25T15:53:27.651012Z
osd.20 crashed on host ceph-server-05 at 2025-09-25T15:53:24.611188Z
osd.10 crashed on host ceph-server-05 at 2025-09-25T15:53:40.397807Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T15:53:46.301720Z
osd.13 crashed on host ceph-server-05 at 2025-09-25T15:53:39.549762Z
osd.19 crashed on host ceph-server-05 at 2025-09-25T15:53:22.773252Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T15:53:38.363252Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T15:53:38.499252Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T15:53:42.727830Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T15:53:28.001849Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T15:53:23.144556Z
osd.13 crashed on host ceph-server-05 at 2025-09-25T15:53:23.912807Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T15:53:25.114013Z
osd.20 crashed on host ceph-server-05 at 2025-09-25T15:53:41.953535Z
osd.19 crashed on host ceph-server-05 at 2025-09-25T15:53:38.389879Z
osd.3 crashed on host ceph-server-05 at 2025-09-25T15:57:01.631593Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T15:57:00.618597Z
osd.24 crashed on host ceph-server-05 at 2025-09-25T15:56:58.807742Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T15:57:18.737770Z
osd.13 crashed on host ceph-server-05 at 2025-09-25T15:57:00.406110Z
osd.10 crashed on host ceph-server-05 at 2025-09-25T15:57:00.608651Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T15:56:59.653594Z
osd.19 crashed on host ceph-server-05 at 2025-09-25T15:56:59.043856Z
osd.36 crashed on host ceph-server-05 at 2025-09-25T15:57:12.588631Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T15:56:59.258337Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T15:56:58.736451Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T15:56:59.685097Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T18:01:05.554019Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T18:01:53.978579Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T18:00:03.226965Z
osd.13 crashed on host ceph-server-05 at 2025-09-25T18:02:14.334713Z
osd.20 crashed on host ceph-server-05 at 2025-09-25T18:00:57.961810Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T18:02:22.549564Z
osd.19 crashed on host ceph-server-05 at 2025-09-25T18:01:44.878650Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T18:01:51.105395Z
osd.19 crashed on host ceph-server-05 at 2025-09-25T18:02:00.400694Z
osd.3 crashed on host ceph-server-05 at 2025-09-25T18:02:22.907985Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T18:00:57.794107Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T18:02:45.228168Z
osd.20 crashed on host ceph-server-05 at 2025-09-25T18:02:25.648566Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T18:01:50.731818Z
osd.19 crashed on host ceph-server-05 at 2025-09-25T18:02:15.585149Z
osd.19 crashed on host ceph-server-05 at 2025-09-25T18:00:59.074343Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T18:02:25.366873Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T18:01:52.363690Z
osd.13 crashed on host ceph-server-05 at 2025-09-25T18:02:29.846821Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T18:01:07.875306Z
osd.3 crashed on host ceph-server-05 at 2025-09-25T18:02:05.498961Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T18:00:40.185579Z
osd.19 crashed on host ceph-server-05 at 2025-09-25T18:00:29.165908Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T18:02:06.562430Z
osd.20 crashed on host ceph-server-05 at 2025-09-25T18:01:39.042986Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T18:02:36.883151Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T18:00:20.050240Z
osd.10 crashed on host ceph-server-05 at 2025-09-25T18:02:39.399440Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T18:01:38.805154Z
osd.3 crashed on host ceph-server-05 at 2025-09-25T18:00:34.240360Z
osd.13 crashed on host ceph-server-05 at 2025-09-25T18:00:28.948804Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T18:00:23.085949Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T18:00:08.350465Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T18:02:24.409784Z
osd.20 crashed on host ceph-server-05 at 2025-09-25T18:00:25.352913Z
osd.20 crashed on host ceph-server-05 at 2025-09-25T18:02:10.811775Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T18:02:09.356024Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T18:00:50.522634Z
osd.10 crashed on host ceph-server-05 at 2025-09-25T18:01:06.918315Z
osd.10 crashed on host ceph-server-05 at 2025-09-25T18:02:23.412500Z
osd.13 crashed on host ceph-server-05 at 2025-09-25T18:00:44.560880Z
osd.13 crashed on host ceph-server-05 at 2025-09-25T18:00:13.411715Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T18:00:52.242894Z
osd.13 crashed on host ceph-server-05 at 2025-09-25T18:00:59.956517Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T18:02:41.716984Z
osd.10 crashed on host ceph-server-05 at 2025-09-25T18:00:35.586042Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T18:00:35.792571Z
osd.13 crashed on host ceph-server-05 at 2025-09-25T18:01:44.010186Z
osd.3 crashed on host ceph-server-05 at 2025-09-25T18:01:48.415599Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T18:00:20.806490Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T18:01:35.942032Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T18:02:40.042010Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T18:00:38.245499Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T18:01:34.267955Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T18:02:09.211289Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T18:01:33.377641Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T18:01:51.669438Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T18:00:05.328107Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T18:01:15.577518Z
osd.19 crashed on host ceph-server-05 at 2025-09-25T18:00:43.949508Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T18:02:07.468732Z
osd.10 crashed on host ceph-server-05 at 2025-09-25T18:00:51.164604Z
osd.13 crashed on host ceph-server-05 at 2025-09-25T18:01:59.314110Z
osd.10 crashed on host ceph-server-05 at 2025-09-25T18:00:20.132141Z
osd.3 crashed on host ceph-server-05 at 2025-09-25T18:00:51.465469Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T18:00:53.562753Z
osd.3 crashed on host ceph-server-05 at 2025-09-25T18:02:39.932483Z
osd.10 crashed on host ceph-server-05 at 2025-09-25T18:01:51.163728Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T18:00:35.443902Z
osd.20 crashed on host ceph-server-05 at 2025-09-25T18:00:08.428508Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T18:02:27.144695Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T18:02:21.640688Z
osd.12 crashed on host ceph-server-05 at 2025-09-25T18:00:21.677099Z
osd.19 crashed on host ceph-server-05 at 2025-09-25T18:00:14.182847Z
osd.20 crashed on host ceph-server-05 at 2025-09-25T18:00:41.983846Z
osd.3 crashed on host ceph-server-05 at 2025-09-25T18:00:16.931908Z
osd.10 crashed on host ceph-server-05 at 2025-09-25T18:02:07.267295Z
osd.19 crashed on host ceph-server-05 at 2025-09-25T18:02:30.660809Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T18:02:08.371824Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T18:00:19.732236Z
osd.20 crashed on host ceph-server-05 at 2025-09-25T18:01:54.204589Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T18:00:35.980764Z
osd.15 crashed on host ceph-server-05 at 2025-09-25T18:00:04.378151Z
osd.3 crashed on host ceph-server-05 at 2025-09-25T18:01:09.215553Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T18:00:50.886998Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T21:11:52.054055Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T21:12:24.103123Z
osd.1 crashed on host ceph-server-05 at 2025-09-25T21:12:08.074196Z
osd.36 crashed on host ceph-server-05 at 2025-09-25T21:26:16.315747Z
osd.23 crashed on host ceph-server-05 at 2025-09-25T21:26:03.773560Z
osd.21 crashed on host ceph-server-05 at 2025-09-25T21:26:03.241901Z
osd.1 crashed on host ceph-server-05 at 2025-09-26T07:33:06.334465Z
osd.1 crashed on host ceph-server-05 at 2025-09-26T07:33:21.315236Z
osd.1 crashed on host ceph-server-05 at 2025-09-26T07:33:35.979386Z
osd.12 crashed on host ceph-server-05 at 2025-09-26T07:42:46.745125Z
osd.3 crashed on host ceph-server-05 at 2025-09-26T07:43:00.293528Z
osd.3 crashed on host ceph-server-05 at 2025-09-26T07:43:34.664977Z
osd.19 crashed on host ceph-server-05 at 2025-09-26T07:42:57.283951Z
osd.15 crashed on host ceph-server-05 at 2025-09-26T07:43:15.742952Z
osd.13 crashed on host ceph-server-05 at 2025-09-26T07:42:38.940859Z
osd.20 crashed on host ceph-server-05 at 2025-09-26T07:42:34.523120Z
osd.12 crashed on host ceph-server-05 at 2025-09-26T07:42:28.012152Z
osd.19 crashed on host ceph-server-05 at 2025-09-26T07:43:14.271476Z
osd.20 crashed on host ceph-server-05 at 2025-09-26T07:42:51.156331Z
osd.23 crashed on host ceph-server-05 at 2025-09-26T07:42:45.892342Z
osd.19 crashed on host ceph-server-05 at 2025-09-26T07:42:41.179365Z
osd.21 crashed on host ceph-server-05 at 2025-09-26T07:42:32.897741Z
osd.10 crashed on host ceph-server-05 at 2025-09-26T07:42:45.718963Z
osd.15 crashed on host ceph-server-05 at 2025-09-26T07:42:45.411672Z
osd.21 crashed on host ceph-server-05 at 2025-09-26T07:42:48.476144Z
osd.10 crashed on host ceph-server-05 at 2025-09-26T07:43:00.931068Z
osd.15 crashed on host ceph-server-05 at 2025-09-26T07:42:30.231791Z
osd.23 crashed on host ceph-server-05 at 2025-09-26T07:43:02.842387Z
osd.3 crashed on host ceph-server-05 at 2025-09-26T07:42:42.726561Z
osd.12 crashed on host ceph-server-05 at 2025-09-26T07:43:04.162020Z
osd.3 crashed on host ceph-server-05 at 2025-09-26T07:43:17.411496Z
osd.21 crashed on host ceph-server-05 at 2025-09-26T07:43:03.787847Z
osd.15 crashed on host ceph-server-05 at 2025-09-26T07:43:00.655865Z
osd.13 crashed on host ceph-server-05 at 2025-09-26T07:42:54.285669Z
root@ceph-server-05:~# ceph -s
  cluster:
    id:     8a34f3c4-a8cb-46c0-b97a-18cb31bc50dc
    health: HEALTH_ERR
            21 failed cephadm daemon(s)
            Module 'diskprediction_local' has failed: No module named 'sklearn.svm.classes'
            nodown,noout,noscrub,nodeep-scrub flag(s) set
            Degraded data redundancy: 16334/708013992 objects degraded (0.002%), 451 pgs degraded, 1 pg undersized
            27 pgs not deep-scrubbed in time
            383 pgs not scrubbed in time
            238 daemons have recently crashed

  services:
    mon: 3 daemons, quorum ceph-server-04,ceph-server-05,ceph-server-06 (age 29h)
    mgr: ceph-server-04.vazijs(active, since 39h), standbys: ceph-server-06.xbhemg, ceph-server-05.ncgnoq
    mds: 1/1 daemons up, 2 standby
    osd: 51 osds: 51 up (since 17h), 51 in (since 18h); 877 remapped pgs
         flags nodown,noout,noscrub,nodeep-scrub

  data:
    volumes: 1/1 healthy
    pools:   14 pools, 1357 pgs
    objects: 79.32M objects, 310 TiB
    usage:   485 TiB used, 72 TiB / 557 TiB avail
    pgs:     16334/708013992 objects degraded (0.002%)
             147297048/708013992 objects misplaced (20.804%)
             451 active+clean
             393 active+recovery_wait+degraded+remapped
             350 active+remapped+backfill_wait
             28  active+recovering+degraded+remapped
             26  active+recovery_wait+degraded
             4   active+remapped+backfilling
             3   active+recovering+degraded
             1   active+recovery_wait+undersized+degraded+remapped
             1   active+recovering+remapped

  io:
    client:   24 MiB/s rd, 17 KiB/s wr, 8 op/s rd, 1 op/s wr
    recovery: 90 MiB/s, 23 objects/s
root@ceph-server-05:~# ceph pg ls-by-pool block_hdd_ec
PG OBJECTS DEGRADED MISPLACED UNFOUND BYTES OMAP_BYTES* OMAP_KEYS*
LOG LOG_DUPS STATE SINCE VERSION REPORTED
UP ACTING
SCRUB_STAMP DEEP_SCRUB_STAMP
LAST_SCRUB_DURATION SCRUB_SCHEDULING
57.0 0 0 0 0 0 0 0
528 3000 active+clean 18h 2102463'4028
2106004:24411 [51,30,9,17,39,33,12,23,13]p51 [51,30,9,17,39,33,12,23,13]p51
2025-09-16T07:21:43.941968+0000 2025-09-16T07:21:43.941968+0000
0 queued for scrub
57.1 0 0 0 0 0 0 0
500 2194 active+clean 18h 2102463'2694
2106004:22186 [0,10,2,43,8,42,35,27,38]p0 [0,10,2,43,8,42,35,27,38]p0
2025-09-16T07:09:51.090000+0000 2025-09-16T07:09:51.090000+0000
0 queued for scrub
57.2 0 0 0 0 0 0 0
529 3000 active+clean 18h 2102463'8329
2106004:31507 [29,28,17,51,30,8,5,36,10]p29 [29,28,17,51,30,8,5,36,10]p29
2025-09-16T07:10:45.307499+0000 2025-09-16T07:10:45.307499+0000
0 periodic scrub scheduled @ 2025-09-27T02:35:07.948960+0000
57.3 0 0 0 0 0 0 0
500 2500 active+clean 18h 2102463'8387
2106004:31932 [40,29,22,34,37,31,3,36,23]p40 [40,29,22,34,37,31,3,36,23]p40
2025-09-16T07:13:33.129649+0000 2025-09-16T07:13:33.129649+0000
0 periodic scrub scheduled @ 2025-09-27T02:35:01.704301+0000
57.4 0 0 0 0 0 0 0
600 3000 active+clean 18h 2102463'8602
2106004:32381 [16,44,4,0,45,3,32,38,39]p16 [16,44,4,0,45,3,32,38,39]p16
2025-09-18T18:58:53.176827+0000 2025-09-16T07:17:23.517171+0000
46 queued for scrub
57.5 0 0 0 0 0 0 0
500 2500 active+clean 17h 2102463'8349
2106004:29987 [38,35,22,1,19,45,8,6,37]p38 [38,35,22,1,19,45,8,6,37]p38
2025-09-18T18:58:45.623033+0000 2025-09-16T07:15:22.481361+0000
0 queued for scrub
57.6 0 0 0 0 0 0 0
558 2809 active+clean 29h 2102463'3367
2106004:22073 [38,28,32,5,0,36,37,46,34]p38 [38,28,32,5,0,36,37,46,34]p38
2025-09-16T07:16:23.708918+0000 2025-09-16T07:16:23.708918+0000
1171 queued for scrub
57.7 0 0 0 0 0 0 0
500 2500 active+clean 17h 2102463'10380
2106004:34094 [10,19,1,32,39,22,26,51,34]p10 [10,19,1,32,39,22,26,51,34]p10
2025-09-20T12:49:34.092511+0000 2025-09-16T07:18:12.791321+0000
1275 periodic scrub scheduled @ 2025-09-27T02:34:21.072929+0000
57.8 0 0 0 0 0 0 0
526 2900 active+clean 17h 2102463'3426
2106004:28074 [44,51,6,1,36,12,38,33,32]p44 [44,51,6,1,36,12,38,33,32]p44
2025-09-17T08:28:04.533907+0000 2025-09-17T08:28:04.533907+0000
0 queued for scrub
57.9 0 0 0 0 0 0 0
585 3000 active+clean 18h 2102463'8685
2106004:27813 [19,45,15,11,4,43,35,33,39]p19 [19,45,15,11,4,43,35,33,39]p19
2025-09-16T12:02:38.404224+0000 2025-09-16T12:02:38.404224+0000
779 queued for scrub
57.a 0 0 0 0 0 0 0
500 2500 active+clean 18h 2102463'8524
2106004:28004 [19,12,3,41,8,37,27,33,32]p19 [19,12,3,41,8,37,27,33,32]p19
2025-09-16T11:35:24.347278+0000 2025-09-16T11:35:24.347278+0000
878 periodic scrub scheduled @ 2025-09-27T02:35:52.301821+0000
57.b 0 0 0 0 0 0 0
500 2500 active+clean 29h 2102463'3123
2106004:23925 [27,38,17,34,44,6,5,2,45]p27 [27,38,17,34,44,6,5,2,45]p27
2025-09-16T07:18:55.160379+0000 2025-09-16T07:18:55.160379+0000
1292 periodic scrub scheduled @ 2025-09-27T02:34:37.708207+0000
57.c 0 0 0 0 0 0 0
500 2364 active+clean 18h 2102463'2864
2106004:30870 [23,13,10,18,33,40,6,9,4]p23 [23,13,10,18,33,40,6,9,4]p23
2025-09-16T07:12:48.819902+0000 2025-09-16T07:12:48.819902+0000
0 queued for scrub
57.d 0 0 0 0 0 0 0
531 2600 active+clean 18h 2102463'3131
2106004:25865 [34,31,46,10,12,45,18,35,22]p34 [34,31,46,10,12,45,18,35,22]p34
2025-09-16T07:14:35.923271+0000 2025-09-16T07:14:35.923271+0000
0 periodic scrub scheduled @ 2025-09-27T02:33:55.854823+0000
57.e 0 0 0 0 0 0 0
542 2404 active+clean 17h 2102463'2946
2106004:21663 [20,0,1,34,51,6,39,28,17]p20 [20,0,1,34,51,6,39,28,17]p20
2025-09-16T07:13:54.443929+0000 2025-09-16T07:13:54.443929+0000
0 queued for scrub
57.f 0 0 0 0 0 0 0
587 2300 active+clean 29h 2102463'2887
2106004:25953 [32,29,40,2,36,45,14,9,43]p32 [32,29,40,2,36,45,14,9,43]p32
2025-09-19T17:01:55.882924+0000 2025-09-16T07:15:29.010604+0000
0 periodic scrub scheduled @ 2025-09-27T02:35:26.014687+0000
57.10 0 0 0 0 0 0 0
505 3000 active+clean 18h 2102463'13805
2106004:35820 [29,33,40,26,43,51,5,12,36]p29 [29,33,40,26,43,51,5,12,36]p29
2025-09-16T07:16:26.935798+0000 2025-09-16T07:16:26.935798+0000
0 periodic scrub scheduled @ 2025-09-27T02:34:10.856598+0000
57.11 0 0 0 0 0 0 0
596 3000 active+clean 18h 2102463'8501
2106004:25758 [26,31,14,5,12,36,27,18,32]p26 [26,31,14,5,12,36,27,18,32]p26
2025-09-16T07:16:29.731624+0000 2025-09-16T07:16:29.731624+0000
0 queued for scrub
57.12 0 0 0 0 0 0 0
563 2502 active+clean 18h 2102463'3065
2106004:18574 [19,24,0,27,40,39,41,43,4]p19 [19,24,0,27,40,39,41,43,4]p19
2025-09-20T06:36:28.051285+0000 2025-09-16T07:17:39.775258+0000
17 periodic scrub scheduled @ 2025-09-27T02:35:44.286577+0000
57.13 0 0 0 0 0 0 0
553 2902 active+remapped+backfill_wait 18h 2102463'3455
2106004:22961 [43,30,46,22,29,33,19,9,45]p43 [43,30,46,22,29,33,2,9,14]p43
2025-09-16T07:17:17.082177+0000 2025-09-16T07:17:17.082177+0000
1216 queued for scrub
57.14 0 0 0 0 0 0 0
500 2273 active+remapped+backfill_wait 18h 2102463'2773
2106004:29737 [27,39,28,0,5,10,30,42,41]p27 [27,39,28,9,5,20,30,42,41]p27
2025-09-16T07:10:07.969620+0000 2025-09-16T07:10:07.969620+0000
0 no scrub is scheduled
57.15 0 0 0 0 0 0 0
554 2702 active+remapped+backfill_wait 18h 2102463'3256
2106004:20027 [26,11,42,35,28,40,0,3,36]p26 [26,11,42,35,28,40,9,3,24]p26
2025-09-16T07:13:21.271867+0000 2025-09-16T07:13:21.271867+0000
0 queued for scrub
57.16 0 0 0 0 0 0 0
511 2400 active+clean 17h 2102463'2911
2106004:25070 [16,44,42,45,24,1,17,38,18]p16 [16,44,42,45,24,1,17,38,18]p16
2025-09-16T07:11:53.503797+0000 2025-09-16T07:11:53.503797+0000
895 periodic scrub scheduled @ 2025-09-27T02:33:59.806346+0000
57.17 0 0 0 0 0 0 0
565 2702 active+remapped+backfill_wait 18h 2102463'3267
2106004:31405 [28,40,29,4,37,31,19,36,0]p28 [28,40,29,4,37,31,19,36,14]p28
2025-09-16T07:04:01.899444+0000 2025-09-16T07:04:01.899444+0000
0 no scrub is scheduled
57.18 0 0 0 0 0 0 0
511 2300 active+clean 18h 2102463'2811
2106004:17986 [21,36,20,42,34,41,32,35,33]p21 [21,36,20,42,34,41,32,35,33]p21
2025-09-16T07:13:11.405357+0000 2025-09-16T07:13:11.405357+0000
0 periodic scrub scheduled @ 2025-09-27T02:35:58.071901+0000
57.19 0 0 0 0 0 0 0
594 3000 active+remapped+backfill_wait 17h 2102463'14294
2106004:34380 [44,16,51,1,45,23,38,18,35]p44 [44,16,51,38,18,35,38,18,35]p44
2025-09-16T22:28:49.051205+0000 2025-09-16T22:28:49.051205+0000
0 queued for scrub
57.1a 1 0 0 0 19 0 0
589 3000 active+clean 18h 2102463'8289
2106004:26541 [39,17,35,41,14,43,3,45,23]p39 [39,17,35,41,14,43,3,45,23]p39
2025-09-16T11:13:23.306239+0000 2025-09-16T11:13:23.306239+0000
709 queued for scrub
57.1b 0 0 0 0 0 0 0
590 2900 active+clean 17h 2102463'3490
2106004:21712 [39,40,33,45,1,3,43,34,11]p39 [39,40,33,45,1,3,43,34,11]p39
2025-09-16T10:28:27.152497+0000 2025-09-16T10:28:27.152497+0000
732 queued for scrub
57.1c 0 0 0 0 0 0 0
526 2406 active+clean 18h 2102463'2932
2106004:19888 [37,44,51,23,45,0,22,29,28]p37 [37,44,51,23,45,0,22,29,28]p37
2025-09-20T06:36:09.686639+0000 2025-09-16T07:25:59.754125+0000
3 queued for scrub
57.1d 0 0 0 0 0 0 0
500 2500 active+clean 18h 2102463'4681
2106004:26763 [20,2,5,26,37,16,32,27,35]p20 [20,2,5,26,37,16,32,27,35]p20
2025-09-16T10:27:26.081671+0000 2025-09-16T10:27:26.081671+0000
0 periodic scrub scheduled @ 2025-09-27T02:34:16.574754+0000
57.1e 0 0 0 0 0 0 0
595 3000 active+clean 18h 2102463'8599
2106004:28090 [42,16,41,28,38,32,36,21,24]p42 [42,16,41,28,38,32,36,21,24]p42
2025-09-18T18:58:51.062936+0000 2025-09-16T07:15:06.320475+0000
46 queued for scrub
57.1f 0 0 0 0 0 0 0
554 3000 active+remapped+backfill_wait 18h 2102463'8557
2106004:32378 [35,18,39,21,15,12,26,25,51]p35 [35,18,39,26,25,51,26,25,51]p35
2025-09-16T07:18:43.406016+0000 2025-09-16T07:18:43.406016+0000
0 no scrub is scheduled
* NOTE: Omap statistics are gathered during deep scrub and may be inaccurate
soon afterwards depending on utilization. See
http://docs.ceph.com/en/latest/dev/placement-group/#omap-statistics for further
details.
--- a/src/osd/OSD.cc
+++ b/src/osd/OSD.cc
@@ -3797,22 +3797,21 @@ int OSD::init()
   set_osdmap(osdmap);
   // make sure we don't have legacy pgs deleting
-  {
+  if (false) {
     vector<coll_t> ls;
     int r = store->list_collections(ls);
     ceph_assert(r >= 0);
     for (auto c : ls) {
       spg_t pgid;
-      if (c.is_pg(&pgid) &&
-          !osdmap->have_pg_pool(pgid.pool())) {
-        ghobject_t oid = make_final_pool_info_oid(pgid.pool());
-        if (!store->exists(service.meta_ch, oid)) {
-          derr << __func__ << " missing pg_pool_t for deleted pool "
-               << pgid.pool() << " for pg " << pgid
-               << "; please downgrade to luminous and allow "
-               << "pg deletion to complete before upgrading" << dendl;
-          ceph_abort();
-        }
+      if (c.is_pg(&pgid) && !osdmap->have_pg_pool(pgid.pool())) {
+        ghobject_t oid = make_final_pool_info_oid(pgid.pool());
+        if (!store->exists(service.meta_ch, oid)) {
+          derr << __func__ << " missing pg_pool_t for deleted pool "
+               << pgid.pool() << " for pg " << pgid
+               << "; please downgrade to luminous and allow "
+               << "pg deletion to complete before upgrading" << dendl;
+          ceph_abort();
+        }
       }
     }
   }
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]