[ceph-users] Re: Ceph recovery

2023-05-01 Thread wodel youchi
Thank you for the clarification. On Mon, May 1, 2023, 20:11 Wesley Dillingham wrote: > Assuming size=3 and min_size=2, it will run degraded (read/write capable) > until a third host becomes available, at which point it will backfill the > third copy on the third host. It will be unable to create

[ceph-users] Re: Ceph recovery

2023-05-01 Thread Wesley Dillingham
Assuming size=3 and min_size=2, it will run degraded (read/write capable) until a third host becomes available, at which point it will backfill the third copy on the third host. It will be unable to create the third copy of data if no third host exists. If an additional host is lost the data will
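
(A minimal sketch of how to inspect the replication settings being discussed; the pool name is a placeholder, not taken from the thread:)

  # Check the replication settings on a pool
  ceph osd pool get <pool-name> size
  ceph osd pool get <pool-name> min_size
  # With size=3 and min_size=2 the pool keeps serving reads and writes
  # while one of the three copies is missing
  ceph osd pool set <pool-name> min_size 2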

[ceph-users] Re: Ceph recovery network speed

2022-07-01 Thread Curt
On Wed, Jun 29, 2022 at 11:22 PM Curt wrote: > > > On Wed, Jun 29, 2022 at 9:55 PM Stefan Kooman wrote: > >> On 6/29/22 19:34, Curt wrote: >> > Hi Stefan, >> > >> > Thank you, that definitely helped. I bumped it to 20% for now and >> that's >> > giving me around 124 PGs backfilling at 187

[ceph-users] Re: Ceph recovery network speed

2022-06-29 Thread Curt
On Wed, Jun 29, 2022 at 9:55 PM Stefan Kooman wrote: > On 6/29/22 19:34, Curt wrote: > > Hi Stefan, > > > > Thank you, that definitely helped. I bumped it to 20% for now and that's > > giving me around 124 PGs backfilling at 187 MiB/s, 47 Objects/s. I'll > > see how that runs and then increase

[ceph-users] Re: Ceph recovery network speed

2022-06-29 Thread Curt
Hi Stefan, Thank you, that definitely helped. I bumped it to 20% for now and that's giving me around 124 PGs backfilling at 187 MiB/s, 47 Objects/s. I'll see how that runs and then increase it a bit more if the cluster handles it ok. Do you think it's worth enabling scrubbing while backfilling?
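
(The snippet does not name the setting that was raised to 20%; assuming it was the mgr's target_max_misplaced_ratio, which defaults to 5%, the change and the scrub toggles discussed here would look roughly like this:)

  # Assumption: the "20%" refers to target_max_misplaced_ratio (default 0.05)
  ceph config set mgr target_max_misplaced_ratio 0.20
  # Optionally pause scrubbing while backfill runs, then re-enable it later
  ceph osd set noscrub
  ceph osd set nodeep-scrub
  ceph osd unset noscrub
  ceph osd unset nodeep-scrub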

[ceph-users] Re: Ceph recovery network speed

2022-06-29 Thread Curt
On Wed, Jun 29, 2022 at 4:42 PM Stefan Kooman wrote: > On 6/29/22 11:21, Curt wrote: > > On Wed, Jun 29, 2022 at 1:06 PM Frank Schilder wrote: > > > >> Hi, > >> > >> did you wait for PG creation and peering to finish after setting pg_num > >> and pgp_num? They should be right on the value you
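
(A quick way to confirm that pg_num/pgp_num have reached their target and that peering has finished; the pool name is a placeholder:)

  # Both values should report the intended PG count once creation is done
  ceph osd pool get <pool-name> pg_num
  ceph osd pool get <pool-name> pgp_num
  # The summary should show no PGs left in creating or peering states
  ceph pg stat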

[ceph-users] Re: Ceph recovery network speed

2022-06-29 Thread Curt
number of objects, > capacity and performance. > > Best regards, > = > Frank Schilder > AIT Risø Campus > Bygning 109, rum S14 > > > From: Curt > Sent: 28 June 2022 16:33:24 > To: Frank Schilder > Cc: Robert Ga

[ceph-users] Re: Ceph recovery network speed

2022-06-28 Thread Curt
l? > > Best regards, > = > Frank Schilder > AIT Risø Campus > Bygning 109, rum S14 > > > From: Curt > Sent: 27 June 2022 21:36:27 > To: Frank Schilder > Cc: Robert Gallop; ceph-users@ceph.io > Subject: Re:

[ceph-users] Re: Ceph recovery network speed

2022-06-27 Thread Curt
sections of disks against > each > > other, but object for object. This might be a feature request: that PG > > space allocation and recovery should follow the model of LVM extents > > (ideally match with LVM extents) to allow recovery/rebalancing larger > > chunks of storage in o

[ceph-users] Re: Ceph recovery network speed

2022-06-27 Thread Curt
> >> > I can tell you that boatloads of tiny objects are a huge pain for >> > recovery, even on SSD. Ceph doesn't raid up sections of disks against >> each >> > other, but object for object. This might be a feature request: that PG >> > space allocation

[ceph-users] Re: Ceph recovery network speed

2022-06-27 Thread Robert Gallop
, but object for object. This might be a feature request: that PG > > space allocation and recovery should follow the model of LVM extents > > (ideally match with LVM extents) to allow recovery/rebalancing larger > > chunks of storage in one go, containing parts of a large or many small > > objects.

[ceph-users] Re: Ceph recovery network speed

2022-06-27 Thread Curt
containing parts of a large or many small > objects. > > Best regards, > = > Frank Schilder > AIT Risø Campus > Bygning 109, rum S14 > > ____________ > From: Curt > Sent: 27 June 2022 17:35:19 > To: Frank Schilder >

[ceph-users] Re: Ceph recovery network speed

2022-06-27 Thread Curt
data > (#objects) the most PGs. > > Best regards, > = > Frank Schilder > AIT Risø Campus > Bygning 109, rum S14 > > ____ > From: Curt > Sent: 24 June 2022 19:04 > To: Anthony D'Atri; ceph-users@ceph.io > S

[ceph-users] Re: Ceph recovery network speed

2022-06-24 Thread Curt
On Sat, Jun 25, 2022 at 3:27 AM Anthony D'Atri wrote: > The pg_autoscaler aims IMHO way too low and I advise turning it off. > > > > > On Jun 24, 2022, at 11:11 AM, Curt wrote: > > > >> You wrote 2TB before, are they 2TB or 18TB? Is that 273 PGs total or > per > > osd? > > Sorry, 18TB of data
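
(If you follow that advice, the autoscaler is disabled per pool, and optionally as the default for new pools; a sketch with the pool name as a placeholder:)

  # Turn the autoscaler off for an existing pool
  ceph osd pool set <pool-name> pg_autoscale_mode off
  # Make "off" the default for pools created later
  ceph config set global osd_pool_default_pg_autoscale_mode off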

[ceph-users] Re: Ceph recovery network speed

2022-06-24 Thread Curt
Nope, majority of read/writes happen at night so it's doing less than 1 MiB/s client io right now, sometimes 0. On Fri, Jun 24, 2022, 22:23 Stefan Kooman wrote: > On 6/24/22 20:09, Curt wrote: > > > > > > On Fri, Jun 24, 2022 at 10:00 PM Stefan Kooman > > wrote: > > > >

[ceph-users] Re: Ceph recovery network speed

2022-06-24 Thread Curt
> You wrote 2TB before, are they 2TB or 18TB? Is that 273 PGs total or per osd? Sorry, 18TB of data and 273 PGs total. > `ceph osd df` will show you toward the right how many PGs are on each OSD. If you have multiple pools, some PGs will have more data than others. > So take an average # of
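
(Rough arithmetic from the numbers in the message: ~18 TB spread over 273 PGs works out to roughly 66 GB of data per PG on average, before the 2x raw overhead of the 2+2 EC profile, so each backfilling PG moves a substantial amount of data. The per-OSD PG counts mentioned above come from:)

  # The PGS column toward the right shows how many PGs each OSD holds
  ceph osd df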

[ceph-users] Re: Ceph recovery network speed

2022-06-24 Thread Curt
On Fri, Jun 24, 2022 at 10:00 PM Stefan Kooman wrote: > On 6/24/22 19:49, Curt wrote: > > Pool 12 is my erasure coding pool, 2+2. How can I tell if it's > > objects or keys recovering? > > ceph -s will tell you what type of recovery is going on. > > Is it a cephfs metadata pool? Or an rgw
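
(In the status output the recovery line reports objects/s for object data and keys/s when omap key recovery dominates; a couple of commands that may help narrow it down, as a sketch:)

  # Watch the recovery line (objects/s vs keys/s) while backfill is running
  ceph -s
  # List the PGs currently backfilling, including which pool they belong to
  ceph pg ls backfilling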

[ceph-users] Re: Ceph recovery network speed

2022-06-24 Thread Curt
Pool 12 is my erasure coding pool, 2+2. How can I tell if it's objects or keys recovering? Thanks, Curt On Fri, Jun 24, 2022 at 9:39 PM Stefan Kooman wrote: > On 6/24/22 19:04, Curt wrote: > > 2 PGs shouldn't take hours to backfill in my opinion. Just 2TB > enterprise > > HDDs. > > > >

[ceph-users] Re: Ceph recovery network speed

2022-06-24 Thread Curt
2 PGs shouldn't take hours to backfill, in my opinion. Just 2TB enterprise HDDs. Take this log entry below: 72 minutes and still backfilling undersized? Should it be that slow? pg 12.15 is stuck undersized for 72m, current state active+undersized+degraded+remapped+backfilling, last acting
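
(To dig into why a specific PG stays undersized and backfilling, the usual starting points are, as a sketch:)

  # Detailed state, up/acting sets and recovery progress for the PG in question
  ceph pg 12.15 query
  # Overview of all PGs that are currently undersized
  ceph pg dump pgs_brief | grep undersized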

[ceph-users] Re: [Ceph] Recovery is very Slow

2021-10-28 Thread Christian Wuerdig
Yes, just expose each disk as an individual OSD and you'll already be better off. Depending on what type of SSD they are - if they can sustain high random write IOPS, you may even want to consider partitioning each disk and creating 2 OSDs per SSD to make better use of the available IO capacity. For
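
(One common way to do that split is ceph-volume's batch mode; a minimal sketch, with the device paths as placeholders:)

  # Create two OSDs on each listed SSD (device paths are examples)
  ceph-volume lvm batch --osds-per-device 2 /dev/sdb /dev/sdc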