[ceph-users] Re: Ceph Reef v18.2.3 - release date?

2024-05-30 Thread Pierre Riteau
Hi Peter, The upcoming Reef minor release is delayed due to important bugs: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/FMFUZHKNFH4Z5DWS5BAYBPENHTNJCAYS/ On Wed, 29 May 2024 at 21:03, Peter Razumovsky wrote: > Hello! We're waiting for the brand new minor 18.2.3 due to >

[ceph-users] Re: ceph dashboard reef 18.2.2 radosgw

2024-05-12 Thread Pierre Riteau
the metadata is updated. Best wishes, Pierre Riteau On Wed, 8 May 2024 at 19:51, Christopher Durham wrote: > Hello, > I am using 18.2.2 on Rocky 8 Linux. > > I am getting HTTP error 500 when trying to hit the ceph dashboard on reef > 18.2.2 when trying to look at any of the radosgw pag

[ceph-users] Re: Ceph reef and (slow) backfilling - how to speed it up

2024-04-30 Thread Pierre Riteau
for more information. Best regards, Pierre Riteau On Sat, 27 Apr 2024 at 08:32, Götz Reinicke wrote: > Dear ceph community, > > I’ve a ceph cluster which got upgraded from nautilus/pacific/…to reef over > time. Now I added two new nodes to an existing EC pool as I did with the > pr
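[The reply above is truncated in this archive view. For context, a hedged sketch of the settings commonly adjusted to speed up backfill on Reef, where the mClock op scheduler governs recovery rates by default. The values shown are illustrative assumptions, not recommendations from the original thread; verify against your own cluster before applying.]

```shell
# Reef uses the mClock op scheduler by default; switching its profile
# to favour recovery over client I/O is the usual first step:
ceph config set osd osd_mclock_profile high_recovery_ops

# Under mClock, manual recovery limits are ignored unless you opt out:
ceph config set osd osd_mclock_override_recovery_settings true
ceph config set osd osd_max_backfills 3

# Observe the effect on recovery throughput:
ceph status
```

These commands require a running Ceph cluster with admin credentials; they are shown as a configuration fragment only.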

[ceph-users] Re: Reef (18.2): Some PG not scrubbed/deep scrubbed for 1 month

2024-03-22 Thread Pierre Riteau
Hello Michel, It might be worth mentioning that the next releases of Reef and Quincy should increase the default value of osd_max_scrubs from 1 to 3. See the Reef pull request: https://github.com/ceph/ceph/pull/55173 You could try increasing this configuration setting if you haven't already, but
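[A configuration sketch of the change suggested above, assuming admin access to the cluster; the commands are standard Ceph CLI, but the value 3 simply mirrors the new upstream default mentioned in the linked pull request.]

```shell
# Check the current value (default 1 before the change referenced above):
ceph config get osd osd_max_scrubs

# Raise it cluster-wide; 3 matches the new upstream default:
ceph config set osd osd_max_scrubs 3
```

Raising osd_max_scrubs lets each OSD participate in more concurrent scrubs, at the cost of some client I/O; revert with `ceph config rm osd osd_max_scrubs` if latency suffers.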

[ceph-users] Re: PGs increasing number

2024-03-09 Thread Pierre Riteau
> 14f3a~24,14f5f~36,14f96~3,14fd1~3,14fd7~1,14fe1~13,15028~21,1504a~1,1505d~1,1506b~6,1507c~23,150a6~9, > > 150bb~22,150ed~7,150f5~3d,15138~4,15140~1,15142~9,15150~1,1515e~1,1517c~7,151a3~15,151c1~5,151d7~5, > > 151ed~6,15217~1c,15243~2,15253~2,15257~1,15259~c,152af~6,152c6~a,152d1~

[ceph-users] Re: PGs increasing number

2024-03-09 Thread Pierre Riteau
Hi Michel, This is expected behaviour. As described in Nautilus release notes [1], the `target_max_misplaced_ratio` option throttles both balancer activity and automated adjustments to pgp_num (normally as a result of pg_num changes). Its default value is .05 (5%). Use `ceph osd pool ls detail`
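[To make the throttling behaviour concrete, here is a simplified model of how `target_max_misplaced_ratio` caps each pgp_num adjustment. This is an illustrative sketch, not the actual ceph-mgr code: it assumes each step may misplace at most `ratio * pg_num` PGs.]

```python
def pgp_num_steps(current, target, pg_num, max_misplaced=0.05):
    """Yield successive pgp_num values while growing toward `target`,
    never moving more than max_misplaced * pg_num PGs per step.

    Simplified model of the mgr's throttled pgp_num adjustment; the
    real implementation also waits for misplaced objects to recover
    before taking the next step.
    """
    step = max(1, int(pg_num * max_misplaced))
    while current < target:
        current = min(target, current + step)
        yield current

# Growing pgp_num from 256 to 512 on a pool with pg_num=512:
# each step moves at most 5% of 512 = 25 PGs.
steps = list(pgp_num_steps(256, 512, 512))
```

With the default 5% ratio, the increase proceeds in increments of 25 until the target is reached, which is why `ceph osd pool ls detail` shows pgp_num creeping up over time rather than jumping at once.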