Sync will always be lower: it forces each write to wait for the previous write to complete before the next one is issued, so it effectively throttles writes to a queue depth of 1.
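To put rough numbers on that (plain arithmetic on the figures quoted below; the 23.93 req/s comes from the --file-fsync-freq=1 run), a queue-depth-1 workload is purely latency-bound, so throughput is just block size divided by per-request round-trip time:

```python
# Latency-bound throughput at queue depth 1: each request must finish
# before the next one starts, so throughput = block_size / round_trip_time.
block_kb = 16.0        # sysbench block size used in the runs below
reqs_per_sec = 23.93   # measured rate with --file-fsync-freq=1

throughput_kb_s = block_kb * reqs_per_sec      # ~383 KB/s, close to sysbench's 382.94Kb/sec
effective_latency_ms = 1000.0 / reqs_per_sec   # ~42 ms per synced request round trip

print(round(throughput_kb_s, 2), round(effective_latency_ms, 2))
```

So the ~383 KB/s figure is exactly what you would expect once every 16KB request has to wait roughly 42 ms for the previous one (write plus fsync acknowledgement over the network) to complete.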
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Ken Peng
Sent: Wednesday, 25 May 2016 6:36 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] seqwrite gets good performance but random rw gets worse

Hi again,

With file-fsync-freq=1 (an fsync after every write) versus file-fsync-freq=0 (sysbench never issues fsync during the run), the results differ enormously: 382.94Kb/sec versus 25.921Mb/sec. What do you make of this? Thanks.

file-fsync-freq=1:

# sysbench --test=fileio --file-total-size=5G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 --file-fsync-freq=1 run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.

Extra file open flags: 0
128 files, 40Mb each
5Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 1 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed:  4309 Read, 2873 Write, 367707 Other = 374889 Total
Read 67.328Mb  Written 44.891Mb  Total transferred 112.22Mb  (382.94Kb/sec)
   23.93 Requests/sec executed

Test execution summary:
    total time:                          300.0782s
    total number of events:              7182
    total time taken by event execution: 2.3207
    per-request statistics:
         min:                                  0.01ms
         avg:                                  0.32ms
         max:                                 80.17ms
         approx.  95 percentile:               1.48ms

Threads fairness:
    events (avg/stddev):           7182.0000/0.00
    execution time (avg/stddev):   2.3207/0.00

file-fsync-freq=0:

# sysbench --test=fileio --file-total-size=5G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 --file-fsync-freq=0 run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.
Extra file open flags: 0
128 files, 40Mb each
5Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed:  298613 Read, 199075 Write, 0 Other = 497688 Total
Read 4.5565Gb  Written 3.0376Gb  Total transferred 7.5941Gb  (25.921Mb/sec)
 1658.93 Requests/sec executed

Test execution summary:
    total time:                          300.0049s
    total number of events:              497688
    total time taken by event execution: 299.7026
    per-request statistics:
         min:                                  0.00ms
         avg:                                  0.60ms
         max:                               2211.13ms
         approx.  95 percentile:               1.21ms

Threads fairness:
    events (avg/stddev):           497688.0000/0.00
    execution time (avg/stddev):   299.7026/0.00

2016-05-25 15:01 GMT+08:00 Ken Peng <k...@dnsbed.com<mailto:k...@dnsbed.com>>:

Hello,

We have a cluster with 20+ hosts and 200+ OSDs; each OSD is a 4T SATA disk, with no SSD cache. The OS is Ubuntu 16.04 LTS, ceph version 10.2.0. Both the data network and the cluster network are 10Gbps. We run ceph as a block storage service only (rbd client within a VM).

Testing inside a VM with the sysbench tool, sequential write gets relatively good performance, reaching 170.37Mb/sec, but random read/write always gets a bad result, as low as 474.63Kb/sec (shown below). Can you help explain why the random IO is so much worse? Thanks.

This is what sysbench outputs:

# sysbench --test=fileio --file-total-size=5G prepare
sysbench 0.4.12:  multi-threaded system evaluation benchmark

128 files, 40960Kb each, 5120Mb total
Creating files for the test...

# sysbench --test=fileio --file-total-size=5G --file-test-mode=seqwr --init-rng=on --max-time=300 --max-requests=0 run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.
Extra file open flags: 0
128 files, 40Mb each
5Gb total file size
Block size 16Kb
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing sequential write (creation) test
Threads started!
Done.

Operations performed:  0 Read, 327680 Write, 128 Other = 327808 Total
Read 0b  Written 5Gb  Total transferred 5Gb  (170.37Mb/sec)
10903.42 Requests/sec executed

Test execution summary:
    total time:                          30.0530s
    total number of events:              327680
    total time taken by event execution: 28.5936
    per-request statistics:
         min:                                  0.01ms
         avg:                                  0.09ms
         max:                                192.84ms
         approx.  95 percentile:               0.03ms

Threads fairness:
    events (avg/stddev):           327680.0000/0.00
    execution time (avg/stddev):   28.5936/0.00

# sysbench --test=fileio --file-total-size=5G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.

Extra file open flags: 0
128 files, 40Mb each
5Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed:  5340 Read, 3560 Write, 11269 Other = 20169 Total
Read 83.438Mb  Written 55.625Mb  Total transferred 139.06Mb  (474.63Kb/sec)
   29.66 Requests/sec executed

Test execution summary:
    total time:                          300.0216s
    total number of events:              8900
    total time taken by event execution: 6.4774
    per-request statistics:
         min:                                  0.01ms
         avg:                                  0.73ms
         max:                                 90.18ms
         approx.  95 percentile:               1.60ms

Threads fairness:
    events (avg/stddev):           8900.0000/0.00
    execution time (avg/stddev):   6.4774/0.00
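One way to separate per-request latency from actual cluster throughput would be to re-run the random test with more concurrent threads, since the effective queue depth roughly equals the thread count. A sketch, assuming the same sysbench 0.4.12 options used above (--num-threads is a standard sysbench general option; 16 is an arbitrary choice):

```shell
# Same rndrw workload as above, but with 16 threads so that more than
# one 16Kb request is in flight against the rbd volume at a time.
sysbench --test=fileio --file-total-size=5G --file-test-mode=rndrw \
         --init-rng=on --max-time=300 --max-requests=0 \
         --num-threads=16 run
```

If throughput scales with the thread count, that would confirm the single-threaded result is latency-bound rather than a cluster bandwidth limit.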
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com