[ceph-users] Cluster degraded after adding OSDs to increase capacity

2020-08-27 Thread Dallas Jones
My 3-node Ceph cluster (14.2.4) had been running fine for months, but my data pool came close to full a couple of weeks ago, so I added 12 new OSDs, roughly doubling the raw capacity of the cluster. However, the available pool size has not changed, and the health of the cluster has gotten worse.
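
A quick sketch of the first checks that usually apply here, assuming a replicated data pool; the pool name below is a placeholder, not taken from the thread:

  # Does the pool's MAX AVAIL reflect the new raw capacity?
  ceph df

  # Why is the cluster degraded (undersized/misplaced PGs, backfill, etc.)?
  ceph health detail
  ceph status

  # Replication factor of the data pool ('cephfs_data' is a placeholder name)
  ceph osd pool get cephfs_data size

If the new OSDs landed in a different CRUSH root or device class than the one the pool's rule selects, ceph df will show raw capacity growing while the pool's MAX AVAIL stays flat.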

[ceph-users] Re: Cluster degraded after adding OSDs to increase capacity

2020-08-27 Thread Dallas Jones
3 TiB 42 TiB 41 TiB 217 GiB 466 GiB 81 TiB 33.86 MIN/MAX VAR: 0.00/2.63 STDDEV: 37.27 On Thu, Aug 27, 2020 at 8:43 AM Eugen Block wrote: > Hi, > > are the new OSDs in the same root, and do they have the same device class? Can > you share the output of 'ceph osd df tree'? > >
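
A minimal sketch of how Eugen's question could be answered, assuming the default CRUSH layout; the pool name is a placeholder:

  # Each OSD under its CRUSH bucket, with device class, weight and utilization
  ceph osd df tree

  # Shadow trees per device class; new OSDs should carry the class the pool's rule selects
  ceph osd crush tree --show-shadow

  # Which rule the pool uses, and what that rule actually selects
  ceph osd pool get cephfs_data crush_rule
  ceph osd crush rule dump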

[ceph-users] Re: Cluster degraded after adding OSDs to increase capacity

2020-08-28 Thread Dallas Jones
Thanks for the reply. I dialed up the value for osd_max_backfills yesterday, which increased my recovery throughput from about 1 MB/s to around 5 MB/s. After tweaking osd_recovery_sleep_hdd, I'm seeing 50-60 MB/s - which is fairly epic. No clients are currently using this cluster, so I'm not worried about tanking
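
For reference, a sketch of that kind of tuning on Nautilus, assuming an otherwise idle cluster; the values are examples, not necessarily the ones used here:

  # Allow more concurrent backfills per OSD (default is 1)
  ceph config set osd osd_max_backfills 4

  # Drop the artificial sleep between recovery ops on HDD OSDs (default 0.1 s)
  ceph config set osd osd_recovery_sleep_hdd 0

  # Or push the same values into running OSDs immediately
  ceph tell 'osd.*' injectargs '--osd-max-backfills 4 --osd-recovery-sleep-hdd 0'

Worth reverting to the defaults once clients are back on the cluster, since these settings trade client latency for recovery speed.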

[ceph-users] Re: Cluster degraded after adding OSDs to increase capacity

2020-08-31 Thread Dallas Jones
uld be appreciated. > > Thank you, > > Dominic L. Hilsbos, MBA > Director – Information Technology > Perform Air International, Inc. > dhils...@performair.com > www.PerformAir.com > > > From: Dallas Jones [mailto:djo...@tech4learning.com] > Sent: Friday, August 2

[ceph-users] OSD host count affecting available pool size?

2020-10-19 Thread Dallas Jones
Hi, Ceph brain trust: I'm still trying to wrap my head around some capacity planning for Ceph, and I can't find a definitive answer to this question in the docs (at least one that penetrates my mental haze)... Does the OSD host count affect the total available pool size? My cluster consists of th
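
For context, a hedged sketch of how the host count comes into play, assuming the default failure domain of 'host': CRUSH will not put two copies (or two erasure-code shards) of the same object on one host, so a replicated pool of size 3 needs at least 3 hosts, and with exactly 3 hosts its usable size is roughly bounded by the smallest host rather than by raw OSD capacity alone.

  # How many copies the pool wants ('mypool' is a placeholder)
  ceph osd pool get mypool size

  # How capacity is spread across hosts
  ceph osd tree
  ceph osd df tree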

[ceph-users] Re: OSD host count affecting available pool size?

2020-10-19 Thread Dallas Jones
code shards are separated across hosts and a single host failure will not affect availability. I think this means what I thought it would mean - having the OSDs concentrated onto fewer hosts is limiting the volume size... On Mon, Oct 19, 2020 at 9:08 AM Dallas Jones wrote: > Hi, Ceph bra
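
If the pool is erasure-coded, a short sketch of how one might confirm the host-level separation being described; the profile name is a placeholder:

  # k and m shard counts plus crush-failure-domain for the profile behind the pool
  ceph osd erasure-code-profile get myprofile

  # Confirm the rule chooses one shard per host
  ceph osd crush rule dump

With crush-failure-domain=host, a k+m profile needs at least k+m hosts for all PGs to go active+clean, which is why concentrating OSDs on fewer hosts caps the usable pool size.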

[ceph-users] Clearing contents of OSDs without removing them?

2020-12-18 Thread Dallas Jones
Stumbling closer toward a usable production cluster with Ceph, but I have yet another stupid n00b question I'm hoping you all will tolerate. I have 38 OSDs up and in across 4 hosts. I (maybe prematurely) removed my test filesystem as well as the metadata and data pools used by the deleted filesyst
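
For what it's worth, a hedged sketch of what usually follows pool deletion, assuming nothing beyond those pools lived on the OSDs: deleting the pools already discards their objects, and the OSDs reclaim the space on their own, so they do not need to be zapped or recreated.

  # Confirm the pools are gone and watch raw usage drop as the OSDs clean up
  ceph osd lspools
  ceph df

  # If a fresh filesystem is wanted later (names and PG counts are placeholders)
  ceph osd pool create cephfs_data 128
  ceph osd pool create cephfs_metadata 32
  ceph fs new cephfs cephfs_metadata cephfs_data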