[ceph-users] Re: Ceph OSD imbalance and performance

2023-02-28 Thread Dave Ingram
broken stuff and horrendous performance. Thanks Reed! -Dave

> -Reed
>
> On Feb 28, 2023, at 11:12 AM, Dave Ingram wrote:
> There is a lot of variability in drive sizes - two different sets of admins added disks sized between 6TB and 16TB and I suspect this and i
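Mixed drive sizes like the 6TB-16TB spread described above often show up as wildly uneven per-OSD utilization. A minimal sketch of how to quantify that spread, using hypothetical numbers standing in for the JSON that `ceph osd df --format json` would return on a real cluster:

```python
# Sketch: measure per-OSD fill imbalance on a cluster with mixed drive
# sizes. The sizes and utilizations below are hypothetical stand-ins for
# real `ceph osd df --format json` output.

def utilization_spread(osds):
    """Return (min%, max%, spread in points) of per-OSD utilization."""
    utils = [100.0 * o["kb_used"] / o["kb"] for o in osds]
    return min(utils), max(utils), max(utils) - min(utils)

# Hypothetical cluster: even with CRUSH weights proportional to capacity,
# small OSDs tend to fill first when reweights are left at 1.0.
osds = [
    {"id": 0, "kb": 6 * 10**9,  "kb_used": int(6 * 10**9 * 0.85)},   # 6 TB, 85% full
    {"id": 1, "kb": 16 * 10**9, "kb_used": int(16 * 10**9 * 0.55)},  # 16 TB, 55% full
    {"id": 2, "kb": 10 * 10**9, "kb_used": int(10 * 10**9 * 0.70)},  # 10 TB, 70% full
]

lo, hi, spread = utilization_spread(osds)
print(f"utilization: {lo:.0f}%..{hi:.0f}% (spread {spread:.0f} points)")
```

A spread this wide is typically what the `ceph balancer` module (upmap mode) or `ceph osd reweight-by-utilization` is meant to close.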

[ceph-users] Re: Ceph OSD imbalance and performance

2023-02-28 Thread Dave Ingram
don't appear to have actual catastrophic errors based on smartctl and other tools.

On Tue, Feb 28, 2023 at 12:21 PM Janne Johansson wrote:
> On Tue, Feb 28, 2023 at 18:13, Dave Ingram wrote:
> > There are also several scrub errors. In short, it's a complete wreck.
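Scrub errors surface as PGs in an `inconsistent` state, and each one can be handed to `ceph pg repair`. A small sketch that builds the repair commands from PG state data; the `pg_stats` list here is a simplified, hypothetical stand-in for the JSON returned by `ceph pg dump --format json`, so field names should be verified on a live cluster (and inconsistent PGs are usually worth a deep-scrub and a look at the underlying disks before repairing):

```python
# Sketch: turn a list of inconsistent PGs into `ceph pg repair` commands.
# pg_stats is hypothetical data in a simplified shape, not real output.

pg_stats = [
    {"pgid": "2.1a", "state": "active+clean"},
    {"pgid": "2.3f", "state": "active+clean+inconsistent"},
    {"pgid": "7.0c", "state": "active+clean+inconsistent"},
]

def repair_commands(stats):
    """List repair commands for PGs whose state includes 'inconsistent'."""
    return [f"ceph pg repair {s['pgid']}"
            for s in stats
            if "inconsistent" in s["state"]]

for cmd in repair_commands(pg_stats):
    print(cmd)
```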

[ceph-users] Ceph OSD imbalance and performance

2023-02-28 Thread Dave Ingram
Hello,

Our Ceph cluster's performance has become horrifically slow over the past few months. Nobody here is terribly familiar with Ceph, and we're inheriting this cluster without much direction.

Architecture: 40Gbps QDR IB fabric between all Ceph nodes and our oVirt VM hosts. 11 OSD nodes with a
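Given the mixed drive sizes mentioned elsewhere in this thread, one early sanity check is whether each OSD's CRUSH weight matches its capacity. By convention Ceph sets an OSD's default CRUSH weight to its size in TiB, so equal weights on unequal drives would itself explain severe imbalance. A minimal sketch with hypothetical drive sizes:

```python
# Sketch: expected CRUSH weight for a drive, using Ceph's convention of
# weight = capacity in TiB. Drive names and sizes below are hypothetical.

TIB = 2**40

def expected_crush_weight(size_bytes):
    """Capacity in TiB, the conventional default CRUSH weight."""
    return size_bytes / TIB

drives = {"osd.0": 6 * 10**12, "osd.1": 16 * 10**12}  # bytes (6 TB, 16 TB)
for name, size in drives.items():
    print(f"{name}: expected CRUSH weight ~{expected_crush_weight(size):.2f}")
```

Comparing these figures against the WEIGHT column of `ceph osd tree` would show whether a past admin set weights by hand instead of by capacity.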