[ceph-users] Re: How to replace an HDD in a OSD with shared SSD for DB/WAL

2023-04-21 Thread Robert Sander
Hi, On 21.04.23 05:44, Tao LIU wrote: I built a Ceph cluster with cephadm. Every Ceph node has 4 OSDs. These 4 OSDs were built with 4 HDDs (block) and 1 SSD (DB). At present, one HDD is broken, and I am trying to replace the HDD and build the OSD with the new HDD and the free space of the SSD. I
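A common cephadm-based replacement flow looks roughly like the sketch below (assumptions: the OSD id 12 and the spec file name are hypothetical; --replace keeps the OSD id reserved, and --zap, available in recent cephadm releases, also clears the DB LV on the shared SSD so it can be reused):

    # Drain and remove the failed OSD, keeping its id free for the replacement
    ceph orch osd rm 12 --replace --zap
    # After swapping in the new HDD, re-applying the original drive-group spec
    # lets cephadm rebuild the OSD with a DB slice on the shared SSD
    ceph orch apply osd -i hdd-ssd-spec.yaml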

[ceph-users] Re: Troubleshooting cephadm OSDs aborting start

2023-04-21 Thread André Gemünd
Dear Ceph-users, in the meantime I found this ticket, which seems to have the same assertion / stacktrace but was solved: https://tracker.ceph.com/issues/44532 Does anyone have any ideas how it could still happen in 16.2.7? Greetings André - On 17 Apr 2023 at 10:30, Andre Gemuend wrote: andre
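For comparing a running cluster's backtrace against that ticket, the built-in crash module is usually the quickest route (a sketch; the OSD id is hypothetical, and the crash id comes from the first command's output):

    # List recorded daemon crashes, then dump the full backtrace of one
    ceph crash ls
    ceph crash info <crash-id>
    # On the affected host, pull the OSD container's logs via cephadm
    cephadm logs --name osd.3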

[ceph-users] Re: 17.2.6 dashboard: unable to get RGW dashboard working

2023-04-21 Thread Michel Jouvin
Hi, Still interested in some feedback... FYI, today I changed the configuration of the RGW to HTTPS (for reasons unrelated to this problem) and it seems the problem preventing the use of an HTTPS RGW with the dashboard is fixed now. The problem described in my previous email remains the same (
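For reference, rewiring the dashboard after such an RGW change typically comes down to two commands (a sketch; the ssl-verify step is only needed when the RGW uses a self-signed certificate):

    # Let the dashboard (re)discover credentials and endpoints for the RGW
    ceph dashboard set-rgw-credentials
    # Skip TLS verification for a self-signed RGW certificate
    ceph dashboard set-rgw-api-ssl-verify False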

[ceph-users] Re: pg_autoscaler using uncompressed bytes as pool current total_bytes triggering false POOL_TARGET_SIZE_BYTES_OVERCOMMITTED warnings?

2023-04-21 Thread Christian Rohmann
Hey ceph-users, may I ask (nag) again about this issue? I am wondering if anybody can confirm my observations. I raised a bug, https://tracker.ceph.com/issues/54136, but apart from the assignment to a dev a while ago there has been no response yet. Maybe I am just holding it wrong, please someone
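To check whether the autoscaler is indeed working from uncompressed figures, comparing its view against the pool's stored vs. used bytes is a reasonable first step (a sketch using standard commands):

    # SIZE / TARGET SIZE here are what trigger POOL_TARGET_SIZE_BYTES_OVERCOMMITTED
    ceph osd pool autoscale-status
    # STORED vs USED (plus the compression columns) show how far the raw
    # uncompressed figure diverges from what is physically consumed
    ceph df detail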

[ceph-users] Re: Ceph stretch mode / POOL_BACKFILLFULL

2023-04-21 Thread Kilian Ries
Still didn't find out what will happen when the pool is full, but I tried a little bit in our testing environment and was not able to get the pool full before an OSD got full. So in the first place one OSD reached the full ratio (pool not quite full, about 98%) and IO stopped (as expected when a
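The per-OSD view makes this easy to watch (a sketch; a single OSD hitting full_ratio stops client IO even though the pool as a whole is not yet full):

    # Per-OSD utilization and variance across the cluster
    ceph osd df
    # The currently configured thresholds (defaults: 0.95 / 0.90 / 0.85)
    ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'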