[ceph-users] Re: cephadm does not redeploy OSD

2023-07-19 Thread Adam King
> When looking at the very verbose cephadm logs, it seemed that cephadm was just skipping my node, with a message saying that the node was already part of another spec. If you have it, would you mind sharing what this message was? I'm still not totally sure what happened here. On Wed, Jul

[ceph-users] Re: User + Dev Monthly Meeting happening tomorrow

2023-07-19 Thread Laura Flores
Update: The User + Dev Monthly Meeting has been canceled in light of CDS discussions. There are currently no topics on the agenda, so we will resume next month (unless I hear pushback, in which case I will reschedule it for next week). Thanks Laura On Wed, Jul 19, 2023 at 9:05 AM Laura Flores

[ceph-users] Re: Ceph Leadership Team Meeting, 2023-07-19 Minutes

2023-07-19 Thread Patrick Donnelly
Forgot the link: On Wed, Jul 19, 2023 at 2:20 PM Patrick Donnelly wrote: > Hi folks, Today we discussed: - Reef is almost ready! The remaining issues are tracked in [1]. In particular, an epel9 package is holding back the release. [1] https://pad.ceph.com/p/reef_final_blockers --

[ceph-users] Ceph Leadership Team Meeting, 2023-07-19 Minutes

2023-07-19 Thread Patrick Donnelly
Hi folks, Today we discussed: - Reef is almost ready! The remaining issues are tracked in [1]. In particular, an epel9 package is holding back the release. - Vincent Hsu, Storage Group CTO of IBM, presented a proposal outline for a Ceph Foundation Client Council. This council would be composed

[ceph-users] OSD tries (and fails) to scrub the same PGs over and over

2023-07-19 Thread Vladimir Brik
I have a PG that hasn't been scrubbed in over a month and hasn't been deep-scrubbed in over two months. I tried forcing it with `ceph pg (deep-)scrub`, but without success. Looking at the logs of that PG's primary OSD, it looks like every once in a while it attempts (and apparently fails) to scrub that
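
For reference, a minimal sketch of checking and forcing a (deep) scrub on a single PG; the PG ID 2.1f is a placeholder, not taken from the thread:

  # Check the last scrub / deep-scrub timestamps for the PG
  ceph pg dump pgs | grep ^2.1f
  # Request a scrub and a deep scrub
  ceph pg scrub 2.1f
  ceph pg deep-scrub 2.1f
  # Query the PG on its primary OSD, including scrub-related state
  ceph pg 2.1f query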

[ceph-users] Re: RBD image QoS rbd_qos_write_bps_limit and rbd_qos_bps_limit and mkfs performance

2023-07-19 Thread Ilya Dryomov
On Wed, Jul 19, 2023 at 3:58 PM Engelmann Florian wrote: > Hi Ilya, thank you for your fast response! I knew those mkfs parameters, but the possibility to exclude discard from rbd QoS was new to me. It looks like this option is not available in Pacific, only in Quincy. So we have

[ceph-users] Re: cephadm does not redeploy OSD

2023-07-19 Thread Luis Domingues
So, good news: I was not hit by the bug you mention in this thread. What happened (apparently; I have not tried to replicate it yet) is that I had another OSD (let's call it OSD.1) using the db device, but that OSD was part of an old spec (let's call it spec-a). And the OSD (OSD.2) I removed should be
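
To see which spec cephadm thinks owns a given OSD, the drive group specs can be inspected with ceph orch; a short sketch (the spec names are placeholders):

  # List OSD services (one per spec) and how many daemons each manages
  ceph orch ls --service-type osd
  # Dump the full spec definitions, e.g. to compare spec-a and spec-b
  ceph orch ls --service-type osd --export
  # List the OSD daemons themselves and the hosts they run on
  ceph orch ps --daemon-type osd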

[ceph-users] User + Dev Monthly Meeting happening tomorrow

2023-07-19 Thread Laura Flores
Hi everyone, The User + Dev Monthly Meeting is happening tomorrow, July 20th at 2:00 PM UTC at this link: https://meet.jit.si/ceph-user-dev-monthly Please add any topics you'd like to discuss to the agenda: https://pad.ceph.com/p/ceph-user-dev-monthly-minutes Thanks, Laura Flores -- Laura

[ceph-users] Re: RBD image QoS rbd_qos_write_bps_limit and rbd_qos_bps_limit and mkfs performance

2023-07-19 Thread Engelmann Florian
Hi Ilya, thank you for your fast response! I knew those mkfs parameters, but the possibility to exclude discard from rbd QoS was new to me. It looks like this option is not available in Pacific, only in Quincy, so we have to upgrade our clusters first. Is it possible to exclude discard by
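
A rough sketch of what such a per-image setting might look like on Quincy or later, assuming the rbd_qos_exclude_ops option referred to here and a placeholder image volumes/vol1 (the option name and value syntax should be checked against your release):

  # Exclude discard operations from the image's QoS throttling (assumed syntax)
  rbd config image set volumes/vol1 rbd_qos_exclude_ops discard
  # Verify the effective value
  rbd config image get volumes/vol1 rbd_qos_exclude_ops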

[ceph-users] Re: RBD image QoS rbd_qos_write_bps_limit and rbd_qos_bps_limit and mkfs performance

2023-07-19 Thread Ilya Dryomov
On Wed, Jul 19, 2023 at 11:01 AM Engelmann Florian wrote: > Hi, I noticed an incredibly high performance drop with mkfs.ext4 (as well as mkfs.xfs) when setting (almost) "any" value for rbd_qos_write_bps_limit (or rbd_qos_bps_limit). Baseline: 4TB rbd volume

[ceph-users] Re: Another Pacific point release?

2023-07-19 Thread Ilya Dryomov
On Mon, Jul 17, 2023 at 6:26 PM David Orman wrote: > I'm hoping to see at least one more, if not more than that, but I have no crystal ball. I definitely support this idea, and strongly suggest it's given some thought. There have been a lot of delays/missed releases due to all of the

[ceph-users] Re: replacing all disks in a stretch mode ceph cluster

2023-07-19 Thread Joachim Kraftmayer - ceph ambassador
Hi, a short note: if you replace the disks with larger disks, the weight of the OSDs and hosts will change, and this will force data migration. Perhaps read a bit more about the upmap balancer if you want to avoid data migration during the upgrade phase. Regards, Joachim
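
A minimal sketch of enabling the upmap balancer before swapping disks, assuming all clients are Luminous or newer:

  # upmap requires luminous-or-later clients
  ceph osd set-require-min-compat-client luminous
  ceph balancer mode upmap
  ceph balancer on
  # Check what the balancer is currently doing
  ceph balancer status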

[ceph-users] RBD image QoS rbd_qos_write_bps_limit and rbd_qos_bps_limit and mkfs performance

2023-07-19 Thread Engelmann Florian
Hi, I noticed an incredibly high performance drop with mkfs.ext4 (as well as mkfs.xfs) when setting (almost) "any" value for rbd_qos_write_bps_limit (or rbd_qos_bps_limit). Baseline: 4TB rbd volume, rbd_qos_write_bps_limit = 0, mkfs.ext4: real 0m6.688s, user 0m0.000s, sys 0m0.006s
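
For context, a minimal sketch of applying such a limit to a single image; the pool/image name volumes/vol1 and the 200 MiB/s value are placeholders (note that the rbd_qos_* options are enforced by librbd, not by the krbd kernel client):

  # Limit write bandwidth for one image to 200 MiB/s (value in bytes)
  rbd config image set volumes/vol1 rbd_qos_write_bps_limit 209715200
  # Verify the effective setting
  rbd config image get volumes/vol1 rbd_qos_write_bps_limit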

[ceph-users] Re: replacing all disks in a stretch mode ceph cluster

2023-07-19 Thread Eugen Block
Hi, during cluster upgrades from L to N or later, one had to rebuild OSDs that were originally deployed with ceph-disk, switching them to ceph-volume. We've done this on multiple clusters, redeploying one node at a time. We did not drain the nodes beforehand because the EC resiliency configuration
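
As a rough sketch (not necessarily the exact procedure used here), rebuilding one ceph-disk OSD with ceph-volume while keeping its ID might look like this; OSD ID 12 and /dev/sdX are placeholders:

  # Keep CRUSH from rebalancing while the OSD is rebuilt
  ceph osd set noout
  # Destroy the old OSD but keep its ID and CRUSH position
  ceph osd destroy 12 --yes-i-really-mean-it
  # Wipe the device and recreate the OSD with ceph-volume, reusing the ID
  ceph-volume lvm zap /dev/sdX --destroy
  ceph-volume lvm create --osd-id 12 --data /dev/sdX
  # Re-enable rebalancing once the OSD is back up and recovered
  ceph osd unset noout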