Hi,
inspired by the performance tests Mark did, I tried to put together my own.
I have four OSD processes on one node; each process has an Intel 710 SSD
for its journal and 4 SAS disks via an LSI 9266-8i in RAID 0.
If I test the SSDs with fio they are quite fast and the w_wait time is
quite low.
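For reference, a quick way to reproduce that kind of journal SSD test is
something like the following (a sketch only; /dev/sdX is a placeholder and
writing to the raw device destroys its contents, so adjust to your setup):

    # sequential 4 KB synchronous writes, roughly the pattern a journal sees
    fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based

    # watch the write wait times on the device while the test runs
    iostat -x 1 /dev/sdX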
Hi Martin,
I haven't tested the 9266-8i specifically, but it may behave similarly
to the 9265-8i. This is just a theory, but I get the impression that
the controller itself introduces some latency getting data to disk, and
that it may get worse as more data is pushed across the
Hi Mark,
I think there is no difference between the 9266-8i and the 9265-8i,
except for the CacheVault and the angle of the SAS connectors.
In the last test which I posted, the SSDs were connected to the
onboard SATA ports. Further tests showed that if I reduce the object size
(the -b
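If the -b here is rados bench's block-size option, a smaller-object run
would look roughly like this (a sketch; the pool name and thread count are
just examples):

    # 60-second write test with 4 KB objects and 16 concurrent ops
    rados bench -p rbd 60 write -b 4096 -t 16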
On Mon, 15 Oct 2012, Travis Rhoden wrote:
Martin,
btw.
Is there a nice way to format the output of ceph --admin-daemon
ceph-osd.0.asok perf_dump?
I use:
ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok perf dump | python -mjson.tool
There is also
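A small loop along the same lines can pretty-print all local OSDs at once
(a sketch, assuming the default /var/run/ceph socket naming):

    for sock in /var/run/ceph/ceph-osd.*.asok; do
        echo "== $sock =="
        ceph --admin-daemon "$sock" perf dump | python -mjson.tool
    done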