Hi Peter,
The upcoming Reef minor release is delayed due to important bugs:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/FMFUZHKNFH4Z5DWS5BAYBPENHTNJCAYS/
On Wed, 29 May 2024 at 21:03, Peter Razumovsky
wrote:
> Hello! We're waiting for the brand new minor 18.2.3 due to
>
the metadata is updated.
Best wishes,
Pierre Riteau
On Wed, 8 May 2024 at 19:51, Christopher Durham wrote:
> Hello,
> I am using 18.2.2 on Rocky 8 Linux.
>
> I am getting an HTTP error 500 when trying to hit the ceph dashboard on reef
> 18.2.2 when trying to look at any of the radosgw pag
for more information.
Best regards,
Pierre Riteau
On Sat, 27 Apr 2024 at 08:32, Götz Reinicke
wrote:
> Dear ceph community,
>
> I’ve a ceph cluster which got upgraded from nautilus/pacific/…to reef over
> time. Now I added two new nodes to an existing EC pool as I did with the
> pr
Hello Michel,
It might be worth mentioning that the next releases of Reef and Quincy
should increase the default value of osd_max_scrubs from 1 to 3. See the
Reef pull request: https://github.com/ceph/ceph/pull/55173
You could try increasing this configuration setting if you haven't already,
but
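For reference, `osd_max_scrubs` can be inspected and changed at runtime with the standard `ceph config` commands. A minimal sketch (the value 3 matches the new default proposed in the pull request above; tune it to your cluster):

```shell
# Show the current value of osd_max_scrubs (1 by default on current Reef/Quincy)
ceph config get osd osd_max_scrubs

# Raise it cluster-wide to allow more concurrent scrubs per OSD
ceph config set osd osd_max_scrubs 3

# Verify a running OSD picked up the change (osd.0 used here as an example)
ceph tell osd.0 config get osd_max_scrubs
```

Changes made with `ceph config set` take effect on running daemons without a restart and persist in the monitor configuration database.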
> 14f3a~24,14f5f~36,14f96~3,14fd1~3,14fd7~1,14fe1~13,15028~21,1504a~1,1505d~1,1506b~6,1507c~23,150a6~9,
> 150bb~22,150ed~7,150f5~3d,15138~4,15140~1,15142~9,15150~1,1515e~1,1517c~7,151a3~15,151c1~5,151d7~5,
> 151ed~6,15217~1c,15243~2,15253~2,15257~1,15259~c,152af~6,152c6~a,152d1~
Hi Michel,
This is expected behaviour. As described in Nautilus release notes [1], the
`target_max_misplaced_ratio` option throttles both balancer activity and
automated adjustments to pgp_num (normally as a result of pg_num changes).
Its default value is 0.05 (5%).
Use `ceph osd pool ls detail`
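The message cuts off here, but a hedged sketch of the related commands may help; the option name is real, while the 0.07 value is purely illustrative:

```shell
# Show per-pool pg_num / pgp_num, including any targets still being converged on
ceph osd pool ls detail

# Current throttle on the fraction of misplaced objects (default 0.05, i.e. 5%)
ceph config get mgr target_max_misplaced_ratio

# Example: allow up to 7% misplaced at once, so pgp_num adjustments proceed faster
ceph config set mgr target_max_misplaced_ratio 0.07
```

Raising the ratio speeds up balancer moves and pgp_num convergence at the cost of more data movement happening concurrently.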