[ceph-users] Re: SAS vs SATA for OSD

2021-06-03 Thread Mark Nelson
I suspect the behavior of the controller and the behavior of the drive firmware will end up mattering more than SAS vs SATA.  As always it's best if you can test it first before committing to buying a pile of them.  Historically I have seen SATA drives that have performed well as far as HDDs go

[ceph-users] Re: SAS vs SATA for OSD

2021-06-03 Thread Jamie Fargen
Dave- These are just general observations of how SATA drives operate in storage clusters. It has been a while since I have run a storage cluster with SATA drives, but in the past I did notice that SATA drives would drop off the controllers pretty frequently. Depending on many factors, it may just

[ceph-users] Re: SAS vs SATA for OSD

2021-06-03 Thread Anthony D'Atri
Agreed. I think oh …. maybe 15-20 years ago there was often a wider difference between SAS and SATA drives, but with modern queuing etc. my sense is that there is less of an advantage. Seek and rotational latency I suspect dwarf interface differences wrt performance. The HBA may be a bigger

[ceph-users] Re: SAS vs SATA for OSD - WAL+DB sizing.

2021-06-03 Thread Mark Nelson
FWIW, those guidelines try to be sort of a one-size-fits-all recommendation that may not apply to your situation.  Typically RBD has pretty low metadata overhead so you can get away with smaller DB partitions.  4% should easily be enough.  If you are running heavy RGW write workloads with small
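
For a concrete feel for the 4% figure, a minimal sizing sketch (the helper name and the 4% default are assumptions drawn from the guideline above, not anything from the thread):

# Minimal sketch: size a BlueStore WAL+DB partition as a percentage of the
# backing OSD's capacity. The 4% default follows the guideline above;
# RBD-heavy clusters can typically get away with smaller DB partitions.
def db_partition_size_gb(osd_capacity_tb: float, pct: float = 4.0) -> float:
    """Return a suggested WAL+DB partition size in GB."""
    return osd_capacity_tb * 1000 * pct / 100.0

if __name__ == "__main__":
    for tb in (4, 8, 12, 16):
        print(f"{tb} TB OSD -> {db_partition_size_gb(tb):.0f} GB WAL+DB at 4%")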

[ceph-users] Re: SAS vs SATA for OSD - WAL+DB sizing.

2021-06-03 Thread Anthony D'Atri
In releases before … Pacific I think, there are certain discrete capacities that the DB will actually utilize: the sums of RocksDB levels. Lots of discussion in the archives. AIUI in those releases, with a 500 GB BlueStore WAL+DB device, with default settings you'll only actually use ~300 GB most
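
To make those discrete capacities concrete, a rough back-of-the-envelope sketch, assuming the pre-Pacific RocksDB defaults of a 256 MB level base and a 10x level multiplier (check your own bluestore_rocksdb_options):

# Cumulative RocksDB level sums, i.e. the "discrete capacities" at which a
# pre-Pacific WAL+DB device is actually exploited. Assumes defaults of
# max_bytes_for_level_base = 256 MB and a level multiplier of 10.
BASE_MB = 256
MULTIPLIER = 10

def useful_db_sizes_gb(levels: int = 4):
    """Yield the cumulative size (in GB) of RocksDB levels 1..n."""
    total_mb = 0
    level_mb = BASE_MB
    for _ in range(levels):
        total_mb += level_mb
        yield total_mb / 1024
        level_mb *= MULTIPLIER

if __name__ == "__main__":
    for n, gb in enumerate(useful_db_sizes_gb(), start=1):
        print(f"levels 1..{n}: ~{gb:.1f} GB")
    # Prints ~0.2, ~2.8, ~27.8 and ~277.8 GB: the largest level sum that fits
    # on a 500 GB device is ~278 GB, hence only ~300 GB of it ever gets used.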

[ceph-users] Re: SAS vs SATA for OSD - WAL+DB sizing.

2021-06-03 Thread Dave Hall
Anthony, I had recently found a reference in the Ceph docs that indicated something like 40GB per TB for WAL+DB space. For a 12TB HDD that comes out to 480GB. If this is no longer the guideline I'd be glad to save a couple dollars. -Dave -- Dave Hall Binghamton University kdh...@binghamton.edu
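
Just to connect the two numbers, a quick arithmetic check (not from the thread) that the docs' 40 GB-per-TB figure is the same thing as the ~4% guideline Mark mentions:

hdd_tb = 12                    # the 12 TB HDD from Dave's example
per_tb_gb = 40                 # the guideline Dave found in the docs
db_gb = hdd_tb * per_tb_gb     # 480 GB
pct = per_tb_gb / 1000 * 100   # 4.0 %
print(f"{hdd_tb} TB HDD -> {db_gb} GB WAL+DB, i.e. {pct:.1f}% of capacity")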

[ceph-users] Re: SAS vs SATA for OSD - WAL+DB sizing.

2021-06-03 Thread Dave Hall
Mark, We are running a mix of RGW, RBD, and CephFS. Our CephFS is pretty big, but we're moving a lot of it to RGW. What prompted me to go looking for a guideline was a high frequency of spillover warnings as our cluster filled up past the 50% mark. That was with 14.2.9, I think. I understand t
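
For anyone wanting to spot the same condition, a hedged sketch (mine; it assumes the BLUEFS_SPILLOVER health check reported by Nautilus-era releases and the JSON shape of `ceph health detail` output):

# List OSDs currently reporting BlueFS spillover by parsing the JSON output
# of `ceph health detail`. The field names are assumptions based on the
# Nautilus-era health check format; adjust if your release differs.
import json
import subprocess

def spillover_messages():
    out = subprocess.check_output(
        ["ceph", "health", "detail", "--format", "json"])
    checks = json.loads(out).get("checks", {})
    spill = checks.get("BLUEFS_SPILLOVER", {})
    return [d.get("message", "") for d in spill.get("detail", [])]

if __name__ == "__main__":
    for msg in spillover_messages():
        print(msg)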