[ceph-users] Re: Workload that deletes 100M objects daily via lifecycle

2023-07-17 Thread Huy Nguyen
Hi, You may want to check out this doc: https://docs.ceph.com/en/quincy/radosgw/config-ref/#lifecycle-settings As I understand it, in short:
- if there are thousands of buckets, we should increase rgw_lc_max_worker.
- if there are a few buckets that have hundreds of thousands of objects, we
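A minimal sketch of how these settings could be applied (the values are illustrative, not recommendations; rgw_lc_max_wp_worker is the companion option the linked doc describes for the few-but-very-large-buckets case, and the client.rgw target assumes the RGWs are managed through the config database):

    # more parallel lifecycle workers, for deployments with many buckets
    ceph config set client.rgw rgw_lc_max_worker 5
    # more work-pool threads per worker, for few buckets with huge object counts
    ceph config set client.rgw rgw_lc_max_wp_worker 5
    # a restart of the RGW daemons may be needed for these to take effect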

[ceph-users] Re: Another Pacific point release?

2023-07-17 Thread David Orman
I'm hoping to see at least one more, if not more than that, but I have no crystal ball. I definitely support this idea, and strongly suggest it's given some thought. There have been a lot of delays and missed releases due to all of the lab issues, which has significantly impacted the release cadence

[ceph-users] Workload that deletes 100M objects daily via lifecycle

2023-07-17 Thread Ha Nguyen Van
Hi Experts, We plan to set up a Ceph Object Storage cluster to support an S3 workload that will need to delete 100M files daily via lifecycle. We would appreciate your suggestions on settings to handle this kind of scenario. Best Regards, Ha
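For context, the kind of lifecycle rule such a workload implies looks like this (a sketch using the standard AWS CLI against RGW's S3 API; the endpoint and bucket name are hypothetical):

    aws --endpoint-url http://rgw.example.com:8080 \
        s3api put-bucket-lifecycle-configuration \
        --bucket mybucket \
        --lifecycle-configuration '{
            "Rules": [{
                "ID": "expire-after-1-day",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Expiration": {"Days": 1}
            }]
        }'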

[ceph-users] Re: Another Pacific point release?

2023-07-17 Thread Konstantin Shalygin
Hi,
> On 17 Jul 2023, at 12:53, Ponnuvel Palaniyappan wrote:
>
> The typical EOL date (2023-06-01) has already passed for Pacific. Just wondering if there's going to be another Pacific point release (16.2.14) in the pipeline.
Good point! At least for the possibility of upgrading RBD clusters from

[ceph-users] Re: OSD memory usage after cephadm adoption

2023-07-17 Thread Frank Schilder
Hi all, now that host masks seem to work, could somebody please shed some light on the relative priority of these settings:
ceph config set osd memory_target X
ceph config set osd/host:A memory_target Y
ceph config set osd/class:B memory_target Z
Which one wins for an OSD on host A in class B?
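One way to check empirically rather than guess (a sketch; osd.12 is a hypothetical OSD on host A with device class B, and the full option name is osd_memory_target):

    # set distinct values on each mask so the winner is obvious
    ceph config set osd osd_memory_target 4294967296
    ceph config set osd/host:A osd_memory_target 6442450944
    ceph config set osd/class:B osd_memory_target 8589934592
    # ask the monitors which value resolves for this daemon
    ceph config get osd.12 osd_memory_target
    # or ask what the running daemon actually reports
    ceph config show osd.12 osd_memory_target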

[ceph-users] Another Pacific point release?

2023-07-17 Thread Ponnuvel Palaniyappan
Hi, The typical EOL date (2023-06-01) has already passed for Pacific. Just wondering if there's going to be another Pacific point release (16.2.14) in the pipeline. -- Regards, Ponnuvel P

[ceph-users] Re: OSD memory usage after cephadm adoption

2023-07-17 Thread Luis Domingues
It does indeed look like the bug I hit. Thanks. Luis Domingues Proton AG
--- Original Message ---
On Monday, July 17th, 2023 at 07:45, Sridhar Seshasayee wrote:
> Hello Luis,
>
> Please see my response below:
>
> But when I took a look at the memory usage of my OSDs, I was