Seems in line with what I'd expect for the hardware.

Your hardware seems to be way overspecced; you'd be fine with half the
RAM, half the CPU, and much cheaper disks.
In fact, a good SATA 4Kn disk can be faster than a SAS 512e disk.

I'd probably run everything over the 25G network instead of using two
separate networks; splitting the traffic usually doesn't help.
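If you go that route, a minimal ceph.conf sketch would be to define only a
public network on the bonded 25G links and leave the cluster network unset
(replication then uses the public network by default). The subnet here is
just a placeholder, not taken from your setup:

    [global]
        # all client and replication traffic over the bonded 25G links
        public network = 192.0.2.0/24
        # no "cluster network" line -> no separate backend network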


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Mon, Apr 8, 2019 at 2:16 PM Lars Täuber <taeu...@bbaw.de> wrote:
>
> Hi there,
>
> I'm new to Ceph and just got my first cluster running.
> Now I'd like to know whether the performance we get is what one should expect.
>
> Is there a website with benchmark results somewhere that I could look at
> to compare against our hardware and our results?
>
> These are the results:
> rados bench single threaded:
> # rados bench 10 write --rbd-cache=false -t 1
>
> Object size:            4194304
> Bandwidth (MB/sec):     53.7186
> Stddev Bandwidth:       3.86437
> Max bandwidth (MB/sec): 60
> Min bandwidth (MB/sec): 48
> Average IOPS:           13
> Stddev IOPS:            0.966092
> Average Latency(s):     0.0744599
> Stddev Latency(s):      0.00911778
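(As a rough sanity check, the single-thread numbers are self-consistent,
assuming the default 4 MiB object size:

    1 / 0.0745 s         ~= 13.4 ops/s   (reported average IOPS: 13)
    13.4 ops/s x 4 MiB   ~= 54 MB/s      (reported: 53.7 MB/s)
)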
>
> Nearly maxing out one (otherwise idle) client with 28 threads:
> # rados bench 10 write --rbd-cache=false -t 28
>
> Bandwidth (MB/sec):     850.451
> Stddev Bandwidth:       40.6699
> Max bandwidth (MB/sec): 904
> Min bandwidth (MB/sec): 748
> Average IOPS:           212
> Stddev IOPS:            10.1675
> Average Latency(s):     0.131309
> Stddev Latency(s):      0.0318489
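(The 28-thread run also matches a simple Little's-law estimate, again
assuming 4 MiB objects:

    28 / 0.131 s          ~= 213 ops/s   (reported average IOPS: 212)
    213 ops/s x 4 MiB     ~= 853 MB/s    (reported: 850 MB/s)
)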
>
> Four concurrent benchmarks on four clients, each with 24 threads:
> Bandwidth (MB/sec):     396     376     381     389
> Stddev Bandwidth:       30      25      22      22
> Max bandwidth (MB/sec): 440     420     416     428
> Min bandwidth (MB/sec): 352     348     344     364
> Average IOPS:           99      94      95      97
> Stddev IOPS:            7.5     6.3     5.6     5.6
> Average Latency(s):     0.24    0.25    0.25    0.24
> Stddev Latency(s):      0.12    0.15    0.15    0.14
>
> Summing up (write mode):
> ~1500 MB/sec Bandwidth
> ~385 IOPS
> ~0.25s Latency
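(These totals are consistent with the overall queue depth of
4 clients x 24 threads = 96 in-flight ops:

    96 / 0.25 s           ~= 384 ops/s   (reported: ~385 IOPS)
    385 ops/s x 4 MiB     ~= 1540 MB/s   (reported: ~1500 MB/s)
)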
>
> rand mode:
> ~3500 MB/sec
> ~920 IOPS
> ~0.154s Latency
>
>
>
> Maybe someone could judge our numbers. I am actually very satisfied with the 
> values.
>
> The (mostly idle) cluster is built from these components:
> * 10 GbE frontend network, two bonded connections to the mon, mds and OSD nodes
> ** no bonding to the clients
> * 25 GbE backend network, two bonded connections to the OSD nodes
>
>
> cluster:
> * 3x mon, 2x Intel(R) Xeon(R) Bronze 3104 CPU @ 1.70GHz, 64GB RAM
> * 3x mds, 1x Intel(R) Xeon(R) Gold 5115 CPU @ 2.40GHz, 128GB RAM
> * 7x OSD-nodes, 2x Intel(R) Xeon(R) Silver 4112 CPU @ 2.60GHz, 96GB RAM
> ** 4x 6TB SAS HDD HGST HUS726T6TAL5204 (5x on two nodes, max. 6x per chassis 
> for later growth)
> ** 2x 800GB SAS SSD WDC WUSTM3280ASS200 => SW-RAID1 => LVM ~116 GiB per OSD 
> for DB and WAL
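(For reference, an OSD laid out like that could be created with something
along these lines; the device and LV names are placeholders, not taken from
the original mail, and with only --block.db given the WAL shares the DB device:

    ceph-volume lvm create --bluestore \
        --data /dev/sdb \
        --block.db ssd_vg/osd0_db
)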
>
> erasure coded pool (made for CephFS):
> * plugin=clay k=5 m=2 d=6 crush-failure-domain=host
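(The corresponding profile and pool creation would look roughly like this;
profile name, pool name and PG count are made up for illustration, and the
overwrite flag is needed to use an EC pool as a CephFS data pool:

    ceph osd erasure-code-profile set clay_5_2 \
        plugin=clay k=5 m=2 d=6 crush-failure-domain=host
    ceph osd pool create cephfs_data 256 256 erasure clay_5_2
    ceph osd pool set cephfs_data allow_ec_overwrites true
)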
>
> Thanks and best regards
> Lars
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
