[ceph-users] Re: SATA vs SAS

2021-08-23 Thread Kai Börnert

Basically yes, but I would not say supercritical.

If it cannot deliver enough iops for ceph, it will stall even slow 
consumer hdds; if it is fast enough, the hdd/cpu/network will be the 
bottleneck, so there is not much to gain beyond that point.


This is more a warning to check, before buying a large number of ssds, 
that they actually perform well when used by ceph, as its access and 
load patterns are quite different from what normal benchmarks measure.
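
A quick way to check that before buying in bulk is a direct sync-write 
test at queue depth 1, which roughly mimics what the OSD WAL does. A 
sketch using fio, assuming /dev/sdX is a scratch drive whose contents 
may be destroyed:

# WARNING: destroys data on /dev/sdX -- use a blank test drive
fio --name=plp-test --filename=/dev/sdX --direct=1 --sync=1 \
    --rw=write --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based

Drives with working power loss protection typically sustain thousands 
of iops in this test; cheap consumer drives often manage only a few 
hundred or far fewer.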



On 8/23/21 1:03 PM, Roland Giesler wrote:

On Mon, 23 Aug 2021 at 00:59, Kai Börnert  wrote:

As far as I understand, a more important factor (for the ssds) is whether
they have power loss protection (so they can use their on-device write cache)
and how many iops they have when using direct writes at queue depth 1

So what you're saying is that where the WAL is stored is
supercritical, since it could kill performance completely?


I just did a test for a hdd-with-block.db-on-ssd cluster using extra
cheap consumer ssds; adding the ssds reduced(!) the performance by about
1-2 orders of magnitude

While the benchmark is running, the ssds are at 100% utilization according
to iostat and the hdds are below 10%; the performance is an absolute joke

pinksupervisor:~$ sudo rados bench -p scbench 5 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size
4194304 for up to 5 seconds or 0 objects
Total time run: 15.5223
Total writes made:  21
Write size: 4194304
Object size:4194304
Bandwidth (MB/sec): 5.41157
Stddev Bandwidth:   3.19595
Max bandwidth (MB/sec): 12
Min bandwidth (MB/sec): 0
Average IOPS:   1
Stddev IOPS:0.798809
Max IOPS:   3
Min IOPS:   0
Average Latency(s): 11.1352
Stddev Latency(s):  4.79918
Max latency(s): 15.4896
Min latency(s): 1.13759

tl;dr the interface is not that important, a good sata drive can easily
beat a sas drive

On 8/21/21 10:34 PM, Teoman Onay wrote:

You seem to focus only on the controller bandwidth, while you should also
consider disk rpm. Most SATA drives run at 7200rpm while SAS ones go
from 10k to 15k rpm, which increases the number of iops.

SATA 7.2k: ~80 iops
SAS 10k: ~120 iops
SAS 15k: ~180 iops

MTBF of SAS drives is also higher than that of SATA ones.

What is your use case? RGW? Small or large files? RBD?



On Sat, 21 Aug 2021, 19:47 Roland Giesler,  wrote:


Hi all,

(I asked this on the Proxmox forums, but I think it may be more
appropriate here.)

In your practical experience, when I choose new hardware for a
cluster, is there any noticeable difference between using SATA or SAS
drives. I know SAS drives can have a 12Gb/s interface and I think SATA
can only do 6Gb/s, but in my experience the drives themselves can't
write at 12Gb/s anyway, so it makes little if any difference.

I use a combination of SSD's and SAS drives in my current cluster (in
different ceph pools), but I suspect that if I choose SATA enterprise
class drives for this project, it will get the same level of
performance.

I think with ceph the hard error rate of drives becomes less relevant
than if I had used some level of RAID.

Also, if I go with SATA, I can use AMD Epyc processors (and I don't
want to use a different supplier), which gives me a lot of extra cores
per unit at a lesser price, which of course all adds up to a better
deal in the end.

I'd like to specifically hear from you what your experience is in this
regard.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: SATA vs SAS

2021-08-23 Thread Peter Lieven

On 23.08.21 at 00:53, Kai Börnert wrote:

As far as I understand, a more important factor (for the ssds) is whether they
have power loss protection (so they can use their on-device write cache) and how
many iops they have when using direct writes at queue depth 1

I just did a test for a hdd-with-block.db-on-ssd cluster using extra cheap
consumer ssds; adding the ssds reduced(!) the performance by about 1-2 orders of magnitude



You want to use SSDs with power loss protection. Also make sure that the
write cache is disabled on the SSD; leaving the write cache enabled can be
a significant performance penalty.


sdparm --clear WCE /dev/sdX
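
For reference, a few related commands (flags as per the sdparm/hdparm man
pages; /dev/sdX is a placeholder):

sdparm --get WCE /dev/sdX            # check the current state (WCE: 1 = cache on)
sdparm --clear WCE --save /dev/sdX   # clear it and persist across power cycles
hdparm -W 0 /dev/sdX                 # equivalent for SATA drives attached via libata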


Peter



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: SATA vs SAS

2021-08-23 Thread Roland Giesler
On Mon, 23 Aug 2021 at 00:59, Kai Börnert  wrote:
>
> As far as I understand, a more important factor (for the ssds) is whether
> they have power loss protection (so they can use their on-device write cache)
> and how many iops they have when using direct writes at queue depth 1

So what you're saying is that where the WAL is stored is
supercritical, since it could kill performance completely?
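
For reference: with BlueStore the WAL/DB placement is fixed when the OSD is
created, and the WAL lives with the DB unless it is given its own device. A
sketch with placeholder device names:

# data on a hdd, RocksDB (and implicitly the WAL) on a fast ssd partition
ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1
# or split the WAL out onto its own partition
ceph-volume lvm create --data /dev/sdc --block.db /dev/nvme0n1p2 --block.wal /dev/nvme0n1p3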

> I just did a test for a hdd-with-block.db-on-ssd cluster using extra
> cheap consumer ssds; adding the ssds reduced(!) the performance by about
> 1-2 orders of magnitude
>
> While the benchmark is running, the ssds are at 100% utilization according
> to iostat and the hdds are below 10%; the performance is an absolute joke
>
> pinksupervisor:~$ sudo rados bench -p scbench 5 write --no-cleanup
> hints = 1
> Maintaining 16 concurrent writes of 4194304 bytes to objects of size
> 4194304 for up to 5 seconds or 0 objects
> Total time run: 15.5223
> Total writes made:  21
> Write size: 4194304
> Object size:4194304
> Bandwidth (MB/sec): 5.41157
> Stddev Bandwidth:   3.19595
> Max bandwidth (MB/sec): 12
> Min bandwidth (MB/sec): 0
> Average IOPS:   1
> Stddev IOPS:0.798809
> Max IOPS:   3
> Min IOPS:   0
> Average Latency(s): 11.1352
> Stddev Latency(s):  4.79918
> Max latency(s): 15.4896
> Min latency(s): 1.13759
>
> tl;dr the interface is not that important, a good sata drive can easily
> beat a sas drive
>
> On 8/21/21 10:34 PM, Teoman Onay wrote:
> > You seem to focus only on the controller bandwidth, while you should also
> > consider disk rpm. Most SATA drives run at 7200rpm while SAS ones go
> > from 10k to 15k rpm, which increases the number of iops.
> >
> > SATA 7.2k: ~80 iops
> > SAS 10k: ~120 iops
> > SAS 15k: ~180 iops
> >
> > MTBF of SAS drives is also higher than that of SATA ones.
> >
> > What is your use case? RGW? Small or large files? RBD?
> >
> >
> >
> > On Sat, 21 Aug 2021, 19:47 Roland Giesler,  wrote:
> >
> >> Hi all,
> >>
> >> (I asked this on the Proxmox forums, but I think it may be more
> >> appropriate here.)
> >>
> >> In your practical experience, when I choose new hardware for a
> >> cluster, is there any noticeable difference between using SATA or SAS
> >> drives. I know SAS drives can have a 12Gb/s interface and I think SATA
> >> can only do 6Gb/s, but in my experience the drives themselves can't
> >> write at 12Gb/s anyway, so it makes little if any difference.
> >>
> >> I use a combination of SSD's and SAS drives in my current cluster (in
> >> different ceph pools), but I suspect that if I choose SATA enterprise
> >> class drives for this project, it will get the same level of
> >> performance.
> >>
> >> I think with ceph the hard error rate of drives becomes less relevant
> >> than if I had used some level of RAID.
> >>
> >> Also, if I go with SATA, I can use AMD Epyc processors (and I don't
> >> want to use a different supplier), which gives me a lot of extra cores
> >> per unit at a lesser price, which of course all adds up to a better
> >> deal in the end.
> >>
> >> I'd like to specifically hear from you what your experience is in this
> >> regard.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: SATA vs SAS

2021-08-23 Thread Roland Giesler
On Sat, 21 Aug 2021 at 22:34, Teoman Onay  wrote:
>
> You seem to focus only on the controller bandwidth, while you should also
> consider disk rpm. Most SATA drives run at 7200rpm while SAS ones go from
> 10k to 15k rpm, which increases the number of iops.
>
> SATA 7.2k: ~80 iops
> SAS 10k: ~120 iops
> SAS 15k: ~180 iops

We currently have Seagate ST2000NX0433 SAS (2TB) drives, and I am
considering getting Toshiba MG07ACA14TA 14TB 3.5-inch LFF 6Gbps 7.2K
RPM 4Kn MG07ACA Series SATA drives instead.

So spin speed is the same.
The SATA drives have a maximum sustained data transfer speed of 248 MiB/s
vs the 136 MB/s of the SAS drives.
The MTBF of the SATA drives is 2.5 million hours vs the SAS drives' 2 million hours.

So on paper the new SATA drives look better than the SAS drives.

Of course, that is not taking the controller and SCSI features into account.

>
> MTBF of SAS drives is also higher than that of SATA ones.
>
> What is your use case? RGW? Small or large files? RBD?

RBD, general usage for KVM and LXC in a multi-tenant hosting
environment.  The database hosting is done on NVMe SSDs and the WALs
on Intel SSDs.

I guess the question really is how important an intelligent drive
interface is to ceph.  From my limited understanding of this, it seems
that the whole design of ceph is such that this doesn't really matter
that much, unlike in a traditional RAID environment.

>
>
>
> On Sat, 21 Aug 2021, 19:47 Roland Giesler,  wrote:
>>
>> Hi all,
>>
>> (I asked this on the Proxmox forums, but I think it may be more
>> appropriate here.)
>>
>> In your practical experience, when I choose new hardware for a
>> cluster, is there any noticeable difference between using SATA or SAS
>> drives. I know SAS drives can have a 12Gb/s interface and I think SATA
>> can only do 6Gb/s, but in my experience the drives themselves can't
>> write at 12Gb/s anyway, so it makes little if any difference.
>>
>> I use a combination of SSD's and SAS drives in my current cluster (in
>> different ceph pools), but I suspect that if I choose SATA enterprise
>> class drives for this project, it will get the same level of
>> performance.
>>
>> I think with ceph the hard error rate of drives becomes less relevant
>> than if I had used some level of RAID.
>>
>> Also, if I go with SATA, I can use AMD Epyc processors (and I don't
>> want to use a different supplier), which gives me a lot of extra cores
>> per unit at a lesser price, which of course all adds up to a better
>> deal in the end.
>>
>> I'd like to specifically hear from you what your experience is in this 
>> regard.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: SATA vs SAS

2021-08-22 Thread Kai Börnert
As far as I understand, a more important factor (for the ssds) is whether 
they have power loss protection (so they can use their on-device write cache) 
and how many iops they have when using direct writes at queue depth 1


I just did a test for a hdd-with-block.db-on-ssd cluster using extra 
cheap consumer ssds; adding the ssds reduced(!) the performance by about 
1-2 orders of magnitude


While the benchmark is running, the ssds are at 100% utilization according 
to iostat and the hdds are below 10%; the performance is an absolute joke


pinksupervisor:~$ sudo rados bench -p scbench 5 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 
4194304 for up to 5 seconds or 0 objects

Total time run: 15.5223
Total writes made:  21
Write size: 4194304
Object size:    4194304
Bandwidth (MB/sec): 5.41157
Stddev Bandwidth:   3.19595
Max bandwidth (MB/sec): 12
Min bandwidth (MB/sec): 0
Average IOPS:   1
Stddev IOPS:    0.798809
Max IOPS:   3
Min IOPS:   0
Average Latency(s): 11.1352
Stddev Latency(s):  4.79918
Max latency(s): 15.4896
Min latency(s): 1.13759
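
For scale: 21 writes x 4 MiB over 15.5 s works out to the ~5.4 MB/s
reported above, and the run was capped at 5 seconds but took 15.5 because
rados bench waits for the writes still in flight to complete before reporting.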

tl;dr the interface is not that important, a good sata drive can easily 
beat a sas drive


On 8/21/21 10:34 PM, Teoman Onay wrote:

You seem to focus only on the controller bandwidth, while you should also
consider disk rpm. Most SATA drives run at 7200rpm while SAS ones go
from 10k to 15k rpm, which increases the number of iops.

SATA 7.2k: ~80 iops
SAS 10k: ~120 iops
SAS 15k: ~180 iops

MTBF of SAS drives is also higher than that of SATA ones.

What is your use case? RGW? Small or large files? RBD?



On Sat, 21 Aug 2021, 19:47 Roland Giesler,  wrote:


Hi all,

(I asked this on the Proxmox forums, but I think it may be more
appropriate here.)

In your practical experience, when I choose new hardware for a
cluster, is there any noticeable difference between using SATA or SAS
drives. I know SAS drives can have a 12Gb/s interface and I think SATA
can only do 6Gb/s, but in my experience the drives themselves can't
write at 12Gb/s anyway, so it makes little if any difference.

I use a combination of SSD's and SAS drives in my current cluster (in
different ceph pools), but I suspect that if I choose SATA enterprise
class drives for this project, it will get the same level of
performance.

I think with ceph the hard error rate of drives becomes less relevant
than if I had used some level of RAID.

Also, if I go with SATA, I can use AMD Epyc processors (and I don't
want to use a different supplier), which gives me a lot of extra cores
per unit at a lesser price, which of course all adds up to a better
deal in the end.

I'd like to specifically hear from you what your experience is in this
regard.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: SATA vs SAS

2021-08-21 Thread Teoman Onay
You seem to focus only on the controller bandwidth, while you should also
consider disk rpm. Most SATA drives run at 7200rpm while SAS ones go
from 10k to 15k rpm, which increases the number of iops.

SATA 7.2k: ~80 iops
SAS 10k: ~120 iops
SAS 15k: ~180 iops
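
Those figures line up with simple rotational math, assuming typical average
seek times: a 7200rpm drive needs ~4.2ms for half a rotation plus roughly
8.5ms average seek, about 12.7ms per random IO, i.e. ~80 iops; a 15k drive
is ~2.0ms + ~3.8ms, about 5.8ms, i.e. ~170 iops.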

MTBF of SAS drives is also higher than that of SATA ones.

What is your use case? RGW? Small or large files? RBD?



On Sat, 21 Aug 2021, 19:47 Roland Giesler,  wrote:

> Hi all,
>
> (I asked this on the Proxmox forums, but I think it may be more
> appropriate here.)
>
> In your practical experience, when I choose new hardware for a
> cluster, is there any noticeable difference between using SATA or SAS
> drives. I know SAS drives can have a 12Gb/s interface and I think SATA
> can only do 6Gb/s, but in my experience the drives themselves can't
> write at 12Gb/s anyway, so it makes little if any difference.
>
> I use a combination of SSD's and SAS drives in my current cluster (in
> different ceph pools), but I suspect that if I choose SATA enterprise
> class drives for this project, it will get the same level of
> performance.
>
> I think with ceph the hard error rate of drives becomes less relevant
> than if I had used some level of RAID.
>
> Also, if I go with SATA, I can use AMD Epyc processors (and I don't
> want to use a different supplier), which gives me a lot of extra cores
> per unit at a lesser price, which of course all adds up to a better
> deal in the end.
>
> I'd like to specifically hear from you what your experience is in this
> regard.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io