[ceph-users] Ceph Pacific bluefs enospc bug with newly created OSDs

2023-06-20 Thread Carsten Grommel
POOL                   ID  PGS   STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1     1  4.7 MiB       48   14 MiB      0    2.1 TiB
cephstor5               2  2048   52 TiB   14.27M  146 TiB  95.89    2.1 TiB
cephfs_cephstor5_data   3    32   95 MiB  118.52k  1.4 GiB   0.02    2.1 TiB
cephfs_cephstor5_metad
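
The figures above come from the pool section of ceph df; the cephstor5 pool is reported at 95.89 %USED with only 2.1 TiB MAX AVAIL. As a minimal sketch (standard Ceph CLI calls; the output on any other cluster will of course differ), pool and per-OSD fullness can be checked with:

    # Pool-level utilisation, including %USED and MAX AVAIL per pool
    ceph df detail

    # Per-OSD utilisation; unevenly filled or >95% full OSDs stand out here
    ceph osd df tree

    # Current warnings, e.g. nearfull/backfillfull/full OSDs
    ceph health detail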

[ceph-users] Re: Ceph Pacific bluefs enospc bug with newly created OSDs

2023-06-21 Thread Carsten Grommel
same NVMe but with different logical volumes, or updating to Quincy. Thank you! Carsten From: Igor Fedotov Date: Tuesday, 20 June 2023 at 12:48 To: Carsten Grommel, ceph-users@ceph.io Subject: Re: [ceph-users] Ceph Pacific bluefs enospc bug with newly created OSDs Hi Carsten, first of all
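
The workaround discussed here is to give each OSD its own DB logical volume on the shared NVMe. A rough sketch of that layout (device names, VG/LV names and sizes are made up for illustration and are not taken from the thread):

    # Carve a dedicated DB logical volume out of the shared NVMe
    vgcreate vg-nvme0 /dev/nvme0n1
    lvcreate -L 60G -n osd0-db vg-nvme0

    # Create the OSD with data on the slow device and block.db on the NVMe LV
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db vg-nvme0/osd0-db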

[ceph-users] Problems with long taking deep-scrubbing processes causing PG_NOT_DEEP_SCRUBBED

2020-07-31 Thread Carsten Grommel - Profihost AG
Hi, we are having problems with very long-running deep-scrub processes causing PG_NOT_DEEP_SCRUBBED and Ceph HEALTH_WARN. One PG has been waiting for its deep scrub since 2020-05-18. Is there any way to speed up deep scrubbing? Ceph version: ceph version 14.2.8-3-gc6b8eedb77 (c6b8eedb7710
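
If the cluster simply cannot keep up with the deep-scrub schedule, the usual knobs are the OSD scrub options. A hedged sketch (the option names are standard OSD settings; the values are arbitrary examples, not recommendations from the thread):

    # Allow more than one concurrent (deep-)scrub per OSD (default is 1)
    ceph config set osd osd_max_scrubs 2

    # Let scrubs start even when the host is under moderate load
    ceph config set osd osd_scrub_load_threshold 5

    # If scrubbing is restricted to a time window, widen it
    ceph config set osd osd_scrub_begin_hour 0
    ceph config set osd osd_scrub_end_hour 24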

[ceph-users] Re: Problems with long taking deep-scrubbing processes causing PG_NOT_DEEP_SCRUBBED

2020-08-03 Thread Carsten Grommel - Profihost AG
ka.de wrote: What happens when you start a scrub manually? IMO: ceph osd deep-scrub xyz. HTH, Mehmet. On 31 July 2020 at 15:35:49 MESZ, Carsten Grommel - Profihost AG wrote: Hi, we are having problems with very long-running deep-scrub processes causing PG_NOT_DEEP_SCRUBBED and Ceph HEALTH_WARN. O
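
As a concrete illustration of the manual scrub suggested above (the PG and OSD IDs are placeholders):

    # Deep-scrub one specific placement group
    ceph pg deep-scrub 2.1f

    # Or ask an OSD to deep-scrub the PGs it is primary for
    ceph osd deep-scrub 12

    # List the PGs currently flagged as overdue, with their last stamps
    ceph health detail | grep 'not deep-scrubbed since'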

[ceph-users] Re: Problems with long taking deep-scrubbing processes causing PG_NOT_DEEP_SCRUBBED

2020-08-03 Thread Carsten Grommel - Profihost AG
Yeah, we tried that already. The HEALTH_WARN remains, so it seems this does not reset the timer. On 31.07.20 at 19:52, c...@elchaka.de wrote: What happens when you start a scrub manually? IMO: ceph osd deep-scrub xyz. HTH, Mehmet. On 31 July 2020 at 15:35:49 MESZ, Carsten
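
One way to check whether the manual deep scrub actually completed, rather than the warning merely persisting, is to compare the PG's last_deep_scrub_stamp before and after; a small sketch, with 2.1f standing in for the affected PG:

    # Timestamp of the last completed deep scrub for this PG
    ceph pg 2.1f query | grep last_deep_scrub_stamp

    # Trigger the deep scrub again and watch the PG state line
    ceph pg deep-scrub 2.1f
    ceph pg dump pgs | grep ^2.1f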