On 16/09/2020 07:26, Danni Setiawan wrote:
Hi all,

I'm trying to find the performance penalty for HDD OSDs when the WAL/DB is placed on a faster device (SSD/NVMe) versus on the same HDD, across different workloads (RBD, RGW with the bucket index in an SSD pool, and CephFS with metadata in an SSD pool). I want to know whether giving up a disk slot for a WAL/DB device is worth it compared to adding more OSDs.

Unfortunately I cannot find benchmarks for these kinds of workloads. Has anyone run such a benchmark?
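
For context, the two layouts being compared would be created roughly like this (a sketch; the device paths are placeholders, and giving only --block.db places the WAL together with the DB):

    # WAL/DB colocated on the data HDD (the default):
    ceph-volume lvm create --bluestore --data /dev/sdb

    # WAL/DB on a separate NVMe partition:
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1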

For everything except CephFS, fio looks like the best tool for benchmarking. It can exercise Ceph at every level: RADOS, RBD, and HTTP/S3. Moreover, it has excellent configuration options and detailed metrics, and it can generate multi-server workloads (one fio client driving many fio servers). fio itself can sustain about 15M IOPS per fio server (with the null engine), and it scales horizontally.
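
A minimal sketch of a job file for the RBD level, assuming fio was built with rbd support and a test image was created beforehand (the pool, image, and user names below are placeholders):

    [global]
    ioengine=rbd
    clientname=admin       ; CephX user, without the "client." prefix
    pool=rbd               ; placeholder pool name
    rbdname=fio-test       ; placeholder, image must exist beforehand
    rw=randwrite
    bs=4k
    iodepth=32
    runtime=60
    time_based=1
    group_reporting=1

    [rbd-4k-randwrite]

For the multi-server mode, start "fio --server" on each load-generator host and drive them all from one machine, e.g. "fio --client=host1 --client=host2 rbd.fio".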