[ceph-users] Re: activating+undersized+degraded+remapped

2024-03-17 Thread Joachim Kraftmayer - ceph ambassador
Also helpful is the output of: ceph pg {poolnum}.{pg-id} query On 16.03.24 at 13:52, Eugen Block wrote: Yeah, the whole story would help to
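For reference, a concrete invocation of the query mentioned above, using a hypothetical PG id 2.1f:

    ceph pg 2.1f query

The JSON output includes the up/acting sets and the per-peer recovery state, which usually explains why a PG is stuck activating or undersized.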

[ceph-users] Re: Emergency, I lost 4 monitors but all osd disk are safe

2023-11-02 Thread Joachim Kraftmayer - ceph ambassador
Hi, another short note regarding the documentation: the paths are designed for a package installation. The paths for a container installation look a bit different, e.g.: /var/lib/ceph//osd.y/ Joachim
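A minimal sketch of the difference, assuming a cephadm/container deployment and a hypothetical osd.7 (the FSID placeholder stands for the cluster fsid):

    # package installation
    /var/lib/ceph/osd/ceph-7/
    # container (cephadm) installation
    /var/lib/ceph/<fsid>/osd.7/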

[ceph-users] Re: Stickyness of writing vs full network storage writing

2023-10-28 Thread Joachim Kraftmayer - ceph ambassador
Hi, I know similar requirements and the motivation and need behind them. We have chosen a clear approach to this, one that also keeps the whole setup from becoming too complicated to operate. 1.) Everything that doesn't require strong consistency we do with other tools, especially when it comes to

[ceph-users] Re: Remove empty orphaned PGs not mapped to a pool

2023-10-05 Thread Joachim Kraftmayer - ceph ambassador
@Eugen We saw the same problems 8 years ago. I can only recommend never using cache tiering in production. At Cephalocon this was part of my talk, and as far as I remember cache tiering will also disappear from Ceph soon. Cache tiering has been deprecated in the Reef release as it has

[ceph-users] Re: Balancer blocked as autoscaler not acting on scaling change

2023-10-04 Thread Joachim Kraftmayer - ceph ambassador
Hi, we have often seen strange behavior and also interesting pg targets from the pg_autoscaler in recent years. That's why we disable it globally. The commands ceph osd reweight-by-utilization and ceph osd test-reweight-by-utilization are from the time before the upmap balancer was introduced and
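A hedged sketch of that policy; pool names and the exact settings are site-specific:

    # stop the autoscaler from adjusting pg_num globally and on an existing pool
    ceph config set global osd_pool_default_pg_autoscale_mode off
    ceph osd pool set <pool> pg_autoscale_mode off
    # rely on the upmap balancer instead of reweight-by-utilization
    ceph balancer mode upmap
    ceph balancer on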

[ceph-users] Re: Separating Mons and OSDs in Ceph Cluster

2023-09-12 Thread Joachim Kraftmayer - ceph ambassador
Another possibility is Ceph mon discovery via DNS: https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/#looking-up-monitors-through-dns Regards, Joachim
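A minimal sketch of the DNS side, assuming a hypothetical domain example.com and three mons; the default SRV service name is ceph-mon, and clients then need no mon list in ceph.conf:

    _ceph-mon._tcp.example.com. 60 IN SRV 10 60 3300 mon1.example.com.
    _ceph-mon._tcp.example.com. 60 IN SRV 10 60 3300 mon2.example.com.
    _ceph-mon._tcp.example.com. 60 IN SRV 10 60 3300 mon3.example.com.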

[ceph-users] Re: replacing all disks in a stretch mode ceph cluster

2023-07-19 Thread Joachim Kraftmayer - ceph ambassador
Hi, a short note: if you replace the disks with larger disks, the weight of the OSD and host will change, and this will force data migration. Perhaps read a bit more about the upmap balancer if you want to avoid data migration during the upgrade phase. Regards, Joachim
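One common way to handle this, shown as a hedged sketch with a hypothetical osd.12 and weight; it is not necessarily the procedure meant in the thread:

    # check the weights before and after the swap
    ceph osd df tree
    # temporarily pin the replaced OSD back to its old CRUSH weight
    ceph osd crush reweight osd.12 1.81940
    # let the upmap balancer do the final, controlled rebalance
    ceph balancer mode upmap
    ceph balancer on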

[ceph-users] Re: CEPH orch made osd without WAL

2023-07-10 Thread Joachim Kraftmayer - ceph ambassador
You can also test directly with ceph bench whether the WAL is on the flash device: https://www.clyso.com/blog/verify-ceph-osd-db-and-wal-setup/ Joachim
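A minimal sketch of such a check, not necessarily the exact steps from the linked post; a small-block bench is dominated by WAL/DB writes, so the reported IOPS should look like flash rather than HDD (osd.0 is a hypothetical id):

    # write 8 MiB in 4 KiB blocks through osd.0
    ceph tell osd.0 bench 8388608 4096
    # confirm which devices back the DB/WAL
    ceph osd metadata 0 | grep -E 'bluefs|devices'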

[ceph-users] Re: Rook on bare-metal?

2023-07-06 Thread Joachim Kraftmayer - ceph ambassador
Hello, we have been following Rook since 2018 and have gained experience with it both on bare-metal and in the hyperscalers. Likewise, we have been following cephadm from the beginning. We have now been using both in production for years, and the decision of which orchestrator to use

[ceph-users] Re: Deleting millions of objects

2023-05-17 Thread Joachim Kraftmayer - ceph ambassador
Hi Rok, try this: rgw_delete_multi_obj_max_num - Max number of objects in a single multi-object delete request   (int, advanced)   Default: 1000   Can update at runtime: true   Services: [rgw] config set WHO: client. or client.rgw KEY: rgw_delete_multi_obj_max_num VALUE: 1
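Spelled out as commands, with the value left as a placeholder because the original message is truncated here:

    # raise the per-request multi-object delete limit for all RGW instances
    ceph config set client.rgw rgw_delete_multi_obj_max_num <value>
    # verify the running value
    ceph config get client.rgw rgw_delete_multi_obj_max_num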

[ceph-users] Re: CEPH Version choice

2023-05-15 Thread Joachim Kraftmayer - ceph ambassador
Jens Galsgaard: https://www.youtube.com/playlist?list=PLrBUGiINAakPd9nuoorqeOuS9P9MTWos3

[ceph-users] Re: cephadm does not honor container_image default value

2023-05-15 Thread Joachim Kraftmayer - ceph ambassador
Don't know if it helps, but we have also experienced something similar with OSD images. We changed the image tag from a version to a sha and it did not happen again.
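A hedged example of what pinning the image by digest instead of by tag can look like (the digest is a placeholder, not a real image reference):

    ceph config set global container_image quay.io/ceph/ceph@sha256:<digest>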

[ceph-users] Re: CEPH Version choice

2023-05-15 Thread Joachim Kraftmayer - ceph ambassador
Hi, I know the problems that Frank has raised. However, it should also be mentioned that many critical bugs have been fixed in the major versions. We are working on the fixes ourselves. We and others have written a lot of tools for ourselves in the last 10 years to improve migration/update

[ceph-users] Re: Veeam backups to radosgw seem to be very slow

2023-04-26 Thread Joachim Kraftmayer - ceph ambassador
"bucket does not exist" or "permission denied". Had received similar error messages with another client program. The default region did not match the region of the cluster. ___ ceph ambassador DACH ceph consultant since 2012 Clyso GmbH - Premier Ceph Foundation

[ceph-users] Re: OSD_TOO_MANY_REPAIRS on random OSDs causing clients to hang

2023-04-26 Thread Joachim Kraftmayer - ceph ambassador
Hello Thomas, I would strongly recommend that you read the messages on the mailing list regarding Ceph versions 16.2.11, 16.2.12 and 16.2.13. Joachim