[ceph-users] Re: bluefs _allocate unable to allocate on bdev 2

2024-09-12 Thread Szabo, Istvan (Agoda)
Hi Stefan and Igor, We are testing a release upgrade from Ubuntu 20.04 to 22.04 together with a Ceph update, so once the cluster is on Quincy, is it safe to set this value in the config db to 4k without rebuilding 100s of OSDs? As I understand Igor here, in his case https://www.spinics.net/lists/ceph-use
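For reference, a minimal sketch of what applying the 4k value through the config db could look like, assuming a Quincy or later cluster; the OSD id is a placeholder and the value itself should be validated against the hardware first:

  # Set the BlueFS allocation unit for all OSDs via the centralized config db
  ceph config set osd bluefs_shared_alloc_size 4096
  ceph config get osd bluefs_shared_alloc_size      # confirm the stored value

  # The new value only takes effect when an OSD (re)starts, e.g. one at a time:
  ceph orch daemon restart osd.12                   # hypothetical OSD id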

[ceph-users] Re: lifecycle policy on non-replicated buckets

2024-09-12 Thread Christopher Durham
I found out a few things about this. I am using 18.2.2 on the new cluster and 18.2.4 on the new unused cluster, both on Rocky Linux. 1. On a new cluster without any load (18.2.4), I can get the expiration on both sides for a non-replicated bucket just by creating the lifecycle policy. 2. On a h
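A rough sketch of how such a lifecycle rule could be created and inspected, assuming the AWS CLI against an RGW endpoint; the bucket name, endpoint, and 30-day rule are illustrative only:

  # lc.json (illustrative): expire all objects after 30 days
  #   {"Rules":[{"ID":"expire-30d","Status":"Enabled",
  #              "Filter":{"Prefix":""},"Expiration":{"Days":30}}]}
  aws --endpoint-url http://rgw.example.com:8080 \
      s3api put-bucket-lifecycle-configuration \
      --bucket mybucket --lifecycle-configuration file://lc.json

  # On the RGW side, list known lifecycle entries and kick processing manually
  radosgw-admin lc list
  radosgw-admin lc process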

[ceph-users] Cephalocon 2024 Agenda Announced – Sponsorship Opportunities Still Available!

2024-09-12 Thread Dan van der Ster
Dear Ceph Community, We are excited to announce that the agenda for Cephalocon 2024 is now live! This year’s event, hosted at CERN, the birthplace of Ceph at scale, promises to be a unique opportunity to learn from experts, network

[ceph-users] Re: ceph-mgr perf throttle-msgr - what is caused fails?

2024-09-12 Thread Eugen Block
You’re right, in my case it was clear where it came from. But if there’s no spike visible, it’s probably going to be difficult to get to the bottom of it. Did you notice any actual issues, or did you just see that value being that high without any connection to an incident? Zitat von K

[ceph-users] Re: [RGW][cephadm] How to configure RGW as code and independantely of daemon names ?

2024-09-12 Thread Anthony D'Atri
If those need improvement, please tag me on a tracker ticket. > On Sep 12, 2024, at 2:37 AM, Robert Sander wrote: > > Hi, > > On 9/11/24 22:00, Gilles Mocellin wrote: >> Is there some documentation I didn't find, or is this the kind of detail only a developer can find? > > It s
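For context, a sketch of the declarative approach being asked about, i.e. a cephadm service spec applied via ceph orch; realm, zone, hosts and port are made-up placeholders, so check the cephadm RGW documentation for the authoritative fields:

  # Contents of rgw.yaml (hypothetical names throughout):
  #   service_type: rgw
  #   service_id: myrealm.myzone
  #   placement:
  #     hosts: [rgw-host-1, rgw-host-2]
  #   spec:
  #     rgw_realm: myrealm
  #     rgw_zone: myzone
  #     rgw_frontend_port: 8080
  ceph orch apply -i rgw.yaml     # daemon names are then managed by cephadm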

[ceph-users] Re: squid 19.2.0 QE validation status

2024-09-12 Thread Laura Flores
Will do, @Rachana Patel. On Thu, Sep 12, 2024 at 3:44 AM Rachana Patel wrote: > Thanks Venky! > We can now focus on next tasks - > > - Release Notes > https://github.com/ceph/ceph/pull/59539 . Requesting all TLs > to review Release notes. > - Gibba upgrade > - LRC upgrade > > @

[ceph-users] Re: Successfully using dm-cache

2024-09-12 Thread Anthony D'Atri
I *think* the rotational flag isn't used at OSD creation time, but rather each time the OSD starts, to select between options that have _hdd and _ssd values. If I'm mistaken, please do enlighten me. One can use a udev rule to override the kernel's deduced rotational value. > On Sep 12, 2024, at 1
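For anyone wanting to try that, a sketch of such a udev override; the device name (sdb) and rule file name are assumptions to adapt to the actual setup:

  # Force sdb to report as non-rotational (0 = SSD-like, 1 = rotational)
  echo 'ACTION=="add|change", KERNEL=="sdb", ATTR{queue/rotational}="0"' \
      > /etc/udev/rules.d/99-ceph-rotational.rules
  udevadm control --reload-rules
  udevadm trigger --subsystem-match=block
  cat /sys/block/sdb/queue/rotational    # should now print 0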

[ceph-users] Re: bluefs _allocate unable to allocate on bdev 2

2024-09-12 Thread Szabo, Istvan (Agoda)
Let me try that on the already dead OSDs. For now we put these values into the config db, but it doesn't seem to help much 🙁
  osd  advanced  bluefs_shared_alloc_size   32768
  osd  advanced  osd_max_backfills          1
  osd  advanced  osd_op_thread_suicide_
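One way to check whether those settings actually reached the running daemons, with osd.12 standing in for any real OSD id (the daemon command has to run on that OSD's host):

  # Value the config db hands out for this OSD
  ceph config get osd.12 bluefs_shared_alloc_size

  # Value the running daemon actually uses (some options only apply at OSD start)
  ceph daemon osd.12 config show | grep -E 'bluefs_shared_alloc_size|osd_max_backfills'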

[ceph-users] Re: Successfully using dm-cache

2024-09-12 Thread Frank Schilder
Hi Michael, yes, at least I'm interested. I also plan to use dm-cache and would like to hear about your continued experience. I have a few specific questions about your set-up: - Sizing: Why do you use only 85G for the cache? Do you also deploy OSDs on the remaining space on the NVMe or is it
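For readers following along, the kind of lvmcache layout under discussion can be sketched roughly as follows; the VG/LV names are placeholders and 85G is simply the figure from the question, not a recommendation:

  # Add the NVMe to the OSD's volume group and carve out a cache LV on it
  vgextend vg_osd0 /dev/nvme0n1
  lvcreate -n osd0_cache -L 85G vg_osd0 /dev/nvme0n1

  # Attach it as a dm-cache in front of the OSD's data LV
  lvconvert --type cache --cachevol osd0_cache vg_osd0/osd0_data

  # Watch cache fill and metadata usage
  lvs -a -o name,size,data_percent,metadata_percent vg_osd0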

[ceph-users] Re: bluefs _allocate unable to allocate on bdev 2

2024-09-12 Thread Stefan Kooman
On 12-09-2024 11:40, Szabo, Istvan (Agoda) wrote: > Thank you, so Quincy should be ok, right? Yes. > The problem was a spillover, which is why we went from separate RocksDB and WAL back to a non-separated setup with 4 OSDs per SSD. > My OSDs are only 53% full; would it be possible to somehow increase the default 4%

[ceph-users] Re: bluefs _allocate unable to allocate on bdev 2

2024-09-12 Thread Szabo, Istvan (Agoda)
Thank you, so Quincy should be OK, right? The spillover was the reason we went from separate RocksDB and WAL back to a non-separated setup with 4 OSDs per SSD. My OSDs are only 53% full; would it be possible to increase the default 4% to 8% on an existing OSD? _

[ceph-users] Re: bluefs _allocate unable to allocate on bdev 2

2024-09-12 Thread Stefan Kooman
On 12-09-2024 06:43, Szabo, Istvan (Agoda) wrote: > Maybe we are running into this bug, Igor? https://github.com/ceph/ceph/pull/48854 That would be a solution for the bug you might be hitting (unable to allocate 64K aligned blocks for RocksDB). I would not be surprised if you hit this issue if
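To see how close an OSD is to that situation, the admin socket exposes BlueFS usage and allocator fragmentation; osd.12 is a placeholder and the exact output fields differ between releases:

  # BlueFS space usage per device (bdev 0/1/2 = WAL / DB / main block device)
  ceph daemon osd.12 bluefs stats

  # Fragmentation score of the main device's allocator (0 = none, 1 = fully fragmented)
  ceph daemon osd.12 bluestore allocator score block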

[ceph-users] Re: squid 19.2.0 QE validation status

2024-09-12 Thread Rachana Patel
Thanks Venky! We can now focus on the next tasks: - Release Notes https://github.com/ceph/ceph/pull/59539. Requesting all TLs to review the release notes. - Gibba upgrade - LRC upgrade @Laura Flores, kindly upgrade the cluster and share results. On Thu, Sep 12, 2024 at 1:03 AM Venky Sha

[ceph-users] Re: squid 19.2.0 QE validation status

2024-09-12 Thread Venky Shankar
On Wed, Sep 11, 2024 at 10:00 PM Laura Flores wrote: > Thanks @Venky Shankar for looking into > https://tracker.ceph.com/issues/68002. > > As for https://tracker.ceph.com/issues/67999, you are right - it is not > cephfs related. Mistake on my part. After looking into this one, it seems > to be a