You are right. In any case, our sysbench result for random R/W is much
worse; sysbench by default sets file-fsync-freq=100.
Do you have any ideas for debugging and tuning the Ceph cluster for better
random IO performance?
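For example, would it make sense to first take the VM and rbd layers out of
the picture and measure RADOS directly? Something like the lines below
(pool name, duration, block size, and concurrency are placeholders, not
what we actually ran):

# rados bench -p rbd 30 write -b 4096 -t 16   (4KB writes straight to the cluster)
# ceph osd perf                               (per-OSD commit/apply latency during the run)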
Thanks.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com

To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] seqwrite gets good performance but random rw gets worse
Hi again,
When setting file-fsync-freq=1 (fsync after every write) versus
file-fsync-freq=0 (sysbench never calls fsync), the results differ hugely:
one is 382.94Kb/sec, the other is 25.921Mb/sec.
What do you make of this? Thanks.
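As a rough sanity check (assuming sysbench's default 16KB block size, which
the output above doesn't confirm): 382.94Kb/sec divided by 16KB per write
is about 24 fsync'd writes per second, i.e. roughly 42 ms per synced write.
So with file-fsync-freq=1 every write seems to pay a full synchronous round
trip to the OSDs, while with file-fsync-freq=0 writes can be buffered and
coalesced.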
With file-fsync-freq=1:
# sysbench --test=fileio --file-total-size=5G [...]
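(A complete run with these settings would look something like the lines
below; every flag after --file-total-size=5G is an illustrative guess at a
typical invocation, since the rest of the command was cut off.)

# sysbench --test=fileio --file-total-size=5G prepare
# sysbench --test=fileio --file-total-size=5G --file-test-mode=rndrw \
    --file-fsync-freq=1 --max-time=300 --max-requests=0 run
# sysbench --test=fileio --file-total-size=5G cleanup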
> [...]00 IOPS for the same test).
>
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf Of *Ken Peng
> *Sent:* Wednesday, 25 May 2016 5:02 PM
> *To:* ceph-users@lists.ceph.com
> *Subject:* [ceph-users] seqwrite gets good performance but random rw gets worse
>
> Hello,
>
> We have a cluster with 20+ hosts and 200+ OSDs, one 4T SATA disk per OSD,
> no SSD cache.
> OS is Ubuntu 16.04 LTS, ceph version 10.2.0.
> Both the data network and the cluster network are 10Gbps.
> We run ceph as a block storage service only (rbd client within a VM).
> For testing within a VM with sysbench [...]