[ceph-users] Re: RGW requests piling up

2023-12-28 Thread Gauvain Pocentek
on the index pool, and managed to kill the RGWs (14 of them) after a few hours. I hope this can help someone in the future. Gauvain On Fri, Dec 22, 2023 at 3:09 PM Gauvain Pocentek wrote: > I'd like to say that it was something smart but it was a bit of luck. > > I logged in on a hyperviso
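
For anyone hitting something similar: a quick way to spot unusual client load on the RGW index pool is to watch the pool stats. Below is a minimal Python sketch wrapping the plain 'ceph osd pool stats' command; the pool name default.rgw.buckets.index is an assumption and should be adjusted for the zone in question.

    # Sketch: poll client I/O on the RGW index pool via `ceph osd pool stats`.
    # The pool name is an assumption; adjust it to the actual index pool.
    import json
    import subprocess
    import time

    INDEX_POOL = "default.rgw.buckets.index"  # placeholder pool name

    def pool_client_io(pool):
        out = subprocess.run(
            ["ceph", "osd", "pool", "stats", pool, "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)[0].get("client_io_rate", {})

    while True:
        io = pool_client_io(INDEX_POOL)
        print(f"{INDEX_POOL}: {io.get('read_op_per_sec', 0)} rd op/s, "
              f"{io.get('write_op_per_sec', 0)} wr op/s")
        time.sleep(10)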

[ceph-users] Re: RGW requests piling up

2023-12-22 Thread Gauvain Pocentek
going to look into adding CPU/RAM monitoring for all the OSDs next. Gauvain On Fri, Dec 22, 2023 at 2:58 PM Drew Weaver wrote: > Can you say how you determined that this was a problem? > > -Original Message- > From: Gauvain Pocentek > Sent: Friday, December 22, 2023 8:0
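
In case it helps, here is a rough sketch of the kind of per-OSD CPU/RAM sampling meant above, run directly on an OSD host with psutil; the process name and the --id matching are assumptions about the deployment, not something from this thread.

    # Sketch: sample CPU and RSS for every ceph-osd process on the local host.
    # Requires psutil; matching on the process name/--id flag is an assumption.
    import psutil

    for proc in psutil.process_iter(["name", "cmdline", "memory_info"]):
        if proc.info["name"] != "ceph-osd":
            continue
        cmdline = proc.info["cmdline"] or []
        # The OSD id usually follows "--id" on the command line.
        osd_id = cmdline[cmdline.index("--id") + 1] if "--id" in cmdline else "?"
        cpu = proc.cpu_percent(interval=1.0)             # percent over a 1s window
        rss_mib = proc.info["memory_info"].rss / (1024 * 1024)
        print(f"osd.{osd_id}: cpu={cpu:.1f}% rss={rss_mib:.0f}MiB")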

[ceph-users] Re: RGW requests piling up

2023-12-22 Thread Gauvain Pocentek
happening there. Gauvain On Thu, Dec 21, 2023 at 1:40 PM Gauvain Pocentek wrote: > Hello Ceph users, > > We've been having an issue with RGW for a couple days and we would > appreciate some help, ideas, or guidance to figure out the issue. > > We run a multi-site setup which has b

[ceph-users] RGW requests piling up

2023-12-21 Thread Gauvain Pocentek
Hello Ceph users, We've been having an issue with RGW for a couple of days and we would appreciate some help, ideas, or guidance to figure out the issue. We run a multi-site setup which has been working pretty well so far. We don't actually have data replication enabled yet, only metadata
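
Two checks that are often a useful starting point with a multi-site setup like this are the metadata sync state and the queued/active request counters of one RGW instance. A minimal sketch follows; the admin socket path is a placeholder that depends on the deployment.

    # Sketch: check metadata sync state and one RGW's request queue counters.
    # The admin socket path below is a placeholder; adjust for the deployment.
    import json
    import subprocess

    def run(cmd):
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    print(run(["radosgw-admin", "sync", "status"]))

    ASOK = "/var/run/ceph/client.rgw.my-rgw.asok"  # placeholder path
    perf = json.loads(run(["ceph", "daemon", ASOK, "perf", "dump"]))
    rgw = perf.get("rgw", {})
    print("queued:", rgw.get("qlen"), "active:", rgw.get("qactive"))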

[ceph-users] Re: Very slow backfilling/remapping of EC pool PGs

2023-03-21 Thread Gauvain Pocentek
rs. I will change the osd_op_queue value once the cluster is stable. Thanks for the help, it's been really useful, and I know a little bit more about Ceph :) Gauvain > Clyso GmbH - Ceph Foundation Member > > On 21.03.23 at 12:51, Gauvain Pocen
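
For reference, a minimal sketch of that change; the value wpq is only an example, and osd_op_queue is read at OSD start, so a rolling restart of the OSDs is still required.

    # Sketch: set osd_op_queue cluster-wide and spot-check what one OSD will use.
    # The value "wpq" is an example; osd_op_queue only applies after OSD restart.
    import subprocess

    def ceph(*args):
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout.strip()

    ceph("config", "set", "osd", "osd_op_queue", "wpq")    # example value
    print(ceph("config", "get", "osd.0", "osd_op_queue"))  # spot-check one OSD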

[ceph-users] Re: Very slow backfilling/remapping of EC pool PGs

2023-03-21 Thread Gauvain Pocentek
> On 21.03.23 at 11:14, Gauvain Pocentek wrote: > > Hi Joachim, > > > On Tue, Mar 21, 2023 at 10:13 AM Joachim Kraftmayer < > joachim.kraftma...@clyso.com> wrote:

[ceph-users] Very slow backfilling/remapping of EC pool PGs

2023-03-20 Thread Gauvain Pocentek
Hello all, We have an EC (4+2) pool for RGW data, with HDDs plus SSDs for WAL/DB. This pool spans 9 servers, each with 12 disks of 16 TB. About 10 days ago we lost a server and we've removed its OSDs from the cluster. Ceph has started to remap and backfill as expected, but the process has been getting
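
For anyone in a similar situation, a small sketch of the usual first checks and the backfill limit knob; the value shown is an example rather than a recommendation, and with the mclock scheduler additional settings may be needed before osd_max_backfills changes take effect.

    # Sketch: watch which PGs are backfilling and (carefully) raise the limit.
    # The limit value is an example; with mclock it may be capped elsewhere.
    import subprocess

    def ceph(*args):
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    # PGs still waiting for or currently doing backfill
    pgs = ceph("pg", "dump", "pgs_brief")
    print("\n".join(line for line in pgs.splitlines() if "backfill" in line))

    # Current and (example) raised limit for concurrent backfills per OSD
    print(ceph("config", "get", "osd", "osd_max_backfills"))
    ceph("config", "set", "osd", "osd_max_backfills", "2")  # example value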

[ceph-users] Limited set of permissions for an RGW user (S3)

2023-02-13 Thread Gauvain Pocentek
Hi list, A little bit of background: we provide S3 buckets using RGW (running Quincy), but users are not allowed to manage their buckets, only to read and write objects in them. Buckets are created by an admin user, and read/write permissions are given to end users using S3 bucket policies. We set
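
To illustrate that model, here is a minimal sketch of the kind of policy the admin user applies with boto3; the endpoint, credentials, bucket name and user ARN are all placeholders.

    # Sketch: admin applies an objects-only (read/write/list) policy to a bucket.
    # Endpoint, credentials, bucket name and the end user's ARN are placeholders.
    import json
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://rgw.example.com",    # placeholder RGW endpoint
        aws_access_key_id="ADMIN_ACCESS_KEY",      # admin user's credentials
        aws_secret_access_key="ADMIN_SECRET_KEY",
    )

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": ["arn:aws:iam:::user/end-user"]},  # RGW user
            "Action": ["s3:ListBucket", "s3:GetObject",
                       "s3:PutObject", "s3:DeleteObject"],
            "Resource": ["arn:aws:s3:::example-bucket",
                         "arn:aws:s3:::example-bucket/*"],
        }],
    }

    s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))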

[ceph-users] Re: Slow OSD startup and slow ops

2022-10-17 Thread Gauvain Pocentek
Hello, On Fri, Sep 30, 2022 at 8:12 AM Gauvain Pocentek wrote: > Hi Stefan, > > Thanks for your feedback! > > > On Thu, Sep 29, 2022 at 10:28 AM Stefan Kooman wrote: > >> On 9/26/22 18:04, Gauvain Pocentek wrote:

[ceph-users] Re: Slow OSD startup and slow ops

2022-09-30 Thread Gauvain Pocentek
Hi Stefan, Thanks for your feedback! On Thu, Sep 29, 2022 at 10:28 AM Stefan Kooman wrote: > On 9/26/22 18:04, Gauvain Pocentek wrote: > > > > > > > We are running a Ceph Octopus (15.2.16) cluster with similar > > configuration. We have *a lot* of slo

[ceph-users] Re: Slow OSD startup and slow ops

2022-09-26 Thread Gauvain Pocentek
Hello Stefan, Thank you for your answers. On Thu, Sep 22, 2022 at 5:54 PM Stefan Kooman wrote: > Hi, > > On 9/21/22 18:00, Gauvain Pocentek wrote: > > Hello all, > > > > We are running several Ceph clusters and are facing an issue on one of > > the

[ceph-users] Slow OSD startup and slow ops

2022-09-21 Thread Gauvain Pocentek
Hello all, We are running several Ceph clusters and are facing an issue on one of them; we would appreciate some input on the problems we're seeing. We run Ceph in containers on CentOS Stream 8, and we deploy using ceph-ansible. While upgrading Ceph from 16.2.7 to 16.2.10, we noticed that OSDs
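
For the slow-ops part, one way to see what an individual OSD is actually stuck on is its admin socket. A rough sketch below; osd.12 is a placeholder, and the commands have to run wherever 'ceph daemon' can reach the socket (inside the OSD container in this kind of setup).

    # Sketch: dump in-flight and historic slow ops from one OSD's admin socket.
    # osd.12 is a placeholder; run where `ceph daemon` can reach the socket.
    import json
    import subprocess

    def daemon(osd, command):
        out = subprocess.run(["ceph", "daemon", osd, *command.split()],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    in_flight = daemon("osd.12", "dump_ops_in_flight")
    slow = daemon("osd.12", "dump_historic_slow_ops")
    print("ops in flight:", in_flight.get("num_ops"))
    for op in slow.get("ops", [])[:5]:
        print(op.get("duration"), op.get("description"))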