[ceph-users] Quincy osd bench in order to define osd_mclock_max_capacity_iops_[hdd|ssd]

2023-06-30 Thread Rafael Diaz Maurin
Hello, I've just upgraded a Pacific cluster to Quincy, and all my OSDs have the low value osd_mclock_max_capacity_iops_hdd: 315.00. The manual does not explain how to benchmark the OSDs with fio or ceph bench with the right options. Can someone share good ceph bench options or fio options
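
For reference, a minimal sketch of the built-in OSD bench and the override knob (osd.0 is a placeholder ID, 450 an illustrative result; the bench arguments are total bytes, block size, object size and object count, as in the upstream mClock docs):

    # current value picked up at OSD startup
    ceph config show osd.0 osd_mclock_max_capacity_iops_hdd
    # re-run the same style of bench the OSD runs on boot
    ceph tell osd.0 bench 12288000 4096 4194304 100
    # optionally override with a value you trust more
    ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 450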

[ceph-users] Re: ceph-fuse crash

2023-06-30 Thread Milind Changire
If the crash is easily reproducible at your end, could you set debug_client to 20 in the client-side conf file and then reattempt the operation. You could then send over the collected logs and we could take a look at them. FYI - there's also a bug tracker that has identified a similar problem: ht
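
A minimal sketch of the client-side change being asked for, assuming a standard ceph.conf layout on the host running ceph-fuse:

    [client]
        debug client = 20

If the client pulls options from the monitors' config store, the runtime equivalent would be "ceph config set client debug_client 20"; either way the mount has to be re-established for the setting to take effect.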

[ceph-users] Re: Help needed to configure erasure coding LRC plugin

2023-06-30 Thread Eugen Block
I created a tracker issue, maybe that will get some attention: https://tracker.ceph.com/issues/61861 Quoting Michel Jouvin: Hi Eugen, Thank you very much for these detailed tests that match what I observed and reported earlier. I'm happy to see that we have the same understanding of ho
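
For context, creating a profile with the LRC plugin follows the pattern of the upstream docs' example (profile, pool name and k/m/l values here are illustrative, not the actual layout discussed in this thread):

    ceph osd erasure-code-profile set lrc-demo \
        plugin=lrc k=4 m=2 l=3 crush-failure-domain=host
    ceph osd erasure-code-profile get lrc-demo
    ceph osd pool create lrc-pool 32 32 erasure lrc-demo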

[ceph-users] Re: Quincy osd bench in order to define osd_mclock_max_capacity_iops_[hdd|ssd]

2023-06-30 Thread Luis Domingues
Hi Rafael. We faced the exact same issue, and we did a bunch of tests and questioning. We started with some fio runs, but the results were quite meh once in production. Ceph bench did not seem very reliable. What we ended up doing, and what seems to hold up quite nicely, is the above. It's probably not the
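
As an illustration of the kind of raw-device fio run mentioned above (destructive: only against a disk that carries no OSD; /dev/sdX is a placeholder):

    fio --name=osd-iops --filename=/dev/sdX --direct=1 --ioengine=libaio \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
        --runtime=60 --time_based --group_reporting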

[ceph-users] db/wal pvmoved ok, but gui show old metadatas

2023-06-30 Thread Christophe BAILLON
Hello, we have a Ceph 17.2.5 cluster with a total of 26 nodes, 15 of which have faulty NVMe drives where the db/wal resides (one NVMe for the first 6 OSDs and another for the remaining 6). We replaced them with new drives and pvmoved the data to avoid losing the OSDs. So far, there are n
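
A hedged sketch of the LVM-level move described here (VG and device names are placeholders); the OSD keeps its db/wal LV, only the physical volume underneath changes:

    vgextend ceph-db-vg /dev/nvme1n1     # add the new NVMe to the db/wal VG
    pvmove /dev/nvme0n1 /dev/nvme1n1     # migrate extents off the faulty NVMe
    vgreduce ceph-db-vg /dev/nvme0n1     # drop the old PV from the VG
    pvremove /dev/nvme0n1

The GUI (dashboard) shows device metadata as reported by the OSD daemons, so it usually only refreshes after the affected OSDs have been restarted.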

[ceph-users] Re: Quincy osd bench in order to define osd_mclock_max_capacity_iops_[hdd|ssd]

2023-06-30 Thread Rafael Diaz Maurin
Hi Luis, Thank you for sharing your tricks :) OK, it's clever. You bypass a destructive fio bench of the disk with a test on a single PG on a single OSD, and then do some rados bench. This way you should get more realistic Ceph IOPS! Rafael On 30/06/2023 at 15:00, Luis Domingues wrote: Hi
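
A rough sketch of that approach (pool name is a placeholder, and with a replicated pool the replica OSDs still take part in the writes):

    ceph osd pool create bench-1pg 1 1          # one PG -> one primary OSD
    ceph pg ls-by-pool bench-1pg                # check which OSDs it maps to
    rados bench -p bench-1pg 60 write -b 4096 -t 16 --no-cleanup
    rados bench -p bench-1pg 60 rand -t 16
    rados -p bench-1pg cleanup
    ceph osd pool delete bench-1pg bench-1pg --yes-i-really-really-mean-it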

[ceph-users] Re: [multisite] The purpose of zonegroup

2023-06-30 Thread Casey Bodley
you're correct that the distinction is between metadata and data; metadata like users and buckets will replicate to all zonegroups, while object data only replicates within a single zonegroup. any given bucket is 'owned' by the zonegroup that creates it (or overridden by the LocationConstraint on c
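
As a hedged illustration of that behaviour, a bucket can be pinned to a zonegroup at creation time via LocationConstraint (endpoint, bucket and zonegroup names are placeholders):

    aws --endpoint-url http://rgw.zone-a.example.com:8080 \
        s3api create-bucket --bucket demo-bucket \
        --create-bucket-configuration LocationConstraint=us-west

The bucket's metadata then replicates everywhere, while its objects stay in the zones of the "us-west" zonegroup.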

[ceph-users] Re: [multisite] The purpose of zonegroup

2023-06-30 Thread Alexander E. Patrakov
Thanks! This is something that should be copy-pasted at the top of https://docs.ceph.com/en/latest/radosgw/multisite/ Actually, I reported a documentation bug for something very similar. On Fri, Jun 30, 2023 at 11:30 PM Casey Bodley wrote: > > you're correct that the distinction is between metad

[ceph-users] Re: [multisite] The purpose of zonegroup

2023-06-30 Thread Casey Bodley
cc Zac, who has been working on multisite docs in https://tracker.ceph.com/issues/58632 On Fri, Jun 30, 2023 at 11:37 AM Alexander E. Patrakov wrote: > > Thanks! This is something that should be copy-pasted at the top of > https://docs.ceph.com/en/latest/radosgw/multisite/ > > Actually, I reporte

[ceph-users] Reef release candidate - v18.1.2

2023-06-30 Thread Yuri Weinstein
Hi everyone, This is the second release candidate for Reef. The Reef release comes with a new RocksDB version (7.9.2) [0], which incorporates several performance improvements and features. Our internal testing doesn't show any side effects from the new version, but we are very eager to hear commun