On Thu, Apr 23, 2015 at 3:05 PM, Nick Fisk <n...@fisk.me.uk> wrote:
> I have had a look through the fio runs, could you also try and run a couple
> of jobs with iodepth=64 instead of numjobs=64. I know they should do the
> same thing, but the numbers with the former are easier to understand.

Maybe it's an issue of interpretation, but this doesn't actually seem to work:

testfile: (g=0): rw=randwrite, bs=128K-128K/128K-128K/128K-128K, ioengine=sync, iodepth=64
fio-2.1.3
Starting 1 process

testfile: (groupid=0, jobs=1): err= 0: pid=5967: Thu Apr 23 19:40:14 2015
  write: io=5762.7MB, bw=3278.3KB/s, iops=25, runt=1800048msec
    clat (msec): min=5, max=807, avg=39.03, stdev=59.24
     lat (msec): min=5, max=807, avg=39.04, stdev=59.24
    clat percentiles (msec):
     |  1.00th=[    6],  5.00th=[    7], 10.00th=[    7], 20.00th=[    8],
     | 30.00th=[   10], 40.00th=[   13], 50.00th=[   19], 60.00th=[   27],
     | 70.00th=[   37], 80.00th=[   50], 90.00th=[   92], 95.00th=[  149],
     | 99.00th=[  306], 99.50th=[  416], 99.90th=[  545], 99.95th=[  586],
     | 99.99th=[  725]
    bw (KB  /s): min=  214, max=12142, per=100.00%, avg=3352.50, stdev=1595.13
    lat (msec) : 10=32.27%, 20=19.42%, 50=28.25%, 100=10.94%, 250=7.67%
    lat (msec) : 500=1.25%, 750=0.20%, 1000=0.01%
  cpu          : usr=0.06%, sys=0.18%, ctx=46686, majf=0, minf=29
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=46101/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=5762.7MB, aggrb=3278KB/s, minb=3278KB/s, maxb=3278KB/s, mint=1800048msec, maxt=1800048msec

Disk stats (read/write):
  vdb: ios=0/46809, merge=0/355, ticks=0/1837916, in_queue=1837916, util=99.95%


It's like it didn't honor the setting: the "IO depths : 1=100.0%" line shows everything was issued at a queue depth of 1.  (And whew, 25 iops & 3 MB/s, ouch.)
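If I understand fio correctly, the sync ioengine issues one blocking write at a time, so iodepth is effectively ignored there; an async engine like libaio (with direct=1, since buffered aio on Linux falls back to synchronous submission) would be needed for the depth to take effect.  Something like this, untested, with filename and size as placeholders:

```
; sketch of the same workload with a real queue depth (assumes libaio is available)
[testfile]
rw=randwrite          ; random 128K writes, as in the run above
bs=128k
ioengine=libaio       ; async engine, so iodepth is actually honored
direct=1              ; O_DIRECT; buffered libaio degrades to sync behavior
iodepth=64            ; 64 I/Os in flight from this single job
runtime=1800          ; 30 minutes, matching runt=1800048msec above
time_based
filename=/path/to/testfile   ; placeholder
size=8G                      ; placeholder
```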

Thanks!
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
