Hi,
Quick results for 1, 5, and 10 jobs:
# fio --filename=/dev/nvme0n1 --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting \
    --name=journal-test
journal-test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
fio-2.1.3
Starting 1 process
Jobs: 1 (f=1): [W] [100.0% done] [0KB/373.2MB/0KB /s] [0/95.6K/0 iops] [eta 00m:00s]
journal-test: (groupid=0, jobs=1): err= 0: pid=99634: Fri Jan 8 13:51:53 2016
  write: io=21116MB, bw=360373KB/s, iops=90093, runt= 60000msec
  clat (usec): min=7, max=14738, avg=10.79, stdev=29.04
  lat (usec): min=7, max=14738, avg=10.84, stdev=29.04
  clat percentiles (usec):
   | 1.00th=[ 8], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8],
   | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9],
   | 70.00th=[ 9], 80.00th=[ 12], 90.00th=[ 18], 95.00th=[ 22],
   | 99.00th=[ 34], 99.50th=[ 37], 99.90th=[ 50], 99.95th=[ 54],
   | 99.99th=[ 72]
  bw (KB /s): min=192456, max=394392, per=99.97%, avg=360254.66, stdev=46490.05
  lat (usec) : 10=73.77%, 20=18.79%, 50=7.33%, 100=0.10%, 250=0.01%
  lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%
  cpu : usr=15.92%, sys=13.08%, ctx=5405192, majf=0, minf=27
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  issued : total=r=0/w=5405592/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
  WRITE: io=21116MB, aggrb=360372KB/s, minb=360372KB/s, maxb=360372KB/s, mint=60000msec, maxt=60000msec
Disk stats (read/write):
  nvme0n1: ios=0/5397207, merge=0/0, ticks=0/42596, in_queue=42596, util=71.01%
# fio --filename=/dev/nvme0n1 --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=5 --iodepth=1 --runtime=60 --time_based --group_reporting \
    --name=journal-test
journal-test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
...
journal-test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
fio-2.1.3
Starting 5 processes
Jobs: 5 (f=5): [WWWWW] [100.0% done] [0KB/1023MB/0KB /s] [0/262K/0 iops] [eta 00m:00s]
journal-test: (groupid=0, jobs=5): err= 0: pid=99932: Fri Jan 8 13:57:07 2016
  write: io=57723MB, bw=985120KB/s, iops=246279, runt= 60001msec
  clat (usec): min=7, max=23102, avg=20.00, stdev=78.26
  lat (usec): min=7, max=23102, avg=20.05, stdev=78.26
  clat percentiles (usec):
   | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 12],
   | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 18],
   | 70.00th=[ 21], 80.00th=[ 25], 90.00th=[ 29], 95.00th=[ 36],
   | 99.00th=[ 62], 99.50th=[ 77], 99.90th=[ 193], 99.95th=[ 612],
   | 99.99th=[ 1816]
  bw (KB /s): min=139512, max=225144, per=19.99%, avg=196941.33, stdev=20911.73
  lat (usec) : 10=6.84%, 20=59.99%, 50=31.33%, 100=1.61%, 250=0.14%
  lat (usec) : 500=0.03%, 750=0.02%, 1000=0.01%
  lat (msec) : 2=0.02%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  cpu : usr=8.79%, sys=7.32%, ctx=14776785, majf=0, minf=138
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  issued : total=r=0/w=14777043/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
  WRITE: io=57723MB, aggrb=985119KB/s, minb=985119KB/s, maxb=985119KB/s, mint=60001msec, maxt=60001msec
Disk stats (read/write):
  nvme0n1: ios=0/14754265, merge=0/0, ticks=0/253092, in_queue=254880, util=100.00%
# fio --filename=/dev/nvme0n1 --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=10 --iodepth=1 --runtime=60 --time_based --group_reporting \
    --name=journal-test
journal-test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
...
journal-test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
fio-2.1.3
Starting 10 processes
Jobs: 10 (f=10): [WWWWWWWWWW] [100.0% done] [0KB/1026MB/0KB /s] [0/263K/0 iops] [eta 00m:00s]
journal-test: (groupid=0, jobs=10): err= 0: pid=100004: Fri Jan 8 13:58:24 2016
  write: io=65679MB, bw=1094.7MB/s, iops=280224, runt= 60001msec
  clat (usec): min=7, max=23679, avg=35.33, stdev=118.33
  lat (usec): min=7, max=23679, avg=35.39, stdev=118.34
  clat percentiles (usec):
   | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 12],
   | 30.00th=[ 14], 40.00th=[ 17], 50.00th=[ 22], 60.00th=[ 27],
   | 70.00th=[ 33], 80.00th=[ 45], 90.00th=[ 68], 95.00th=[ 90],
   | 99.00th=[ 167], 99.50th=[ 231], 99.90th=[ 1064], 99.95th=[ 1528],
   | 99.99th=[ 2416]
  bw (KB /s): min=66600, max=141064, per=10.01%, avg=112165.00, stdev=16560.67
  lat (usec) : 10=6.54%, 20=38.42%, 50=37.34%, 100=13.61%, 250=3.64%
  lat (usec) : 500=0.21%, 750=0.07%, 1000=0.05%
  lat (msec) : 2=0.09%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  cpu : usr=4.87%, sys=5.34%, ctx=16813963, majf=0, minf=288
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  issued : total=r=0/w=16813767/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
  WRITE: io=65679MB, aggrb=1094.7MB/s, minb=1094.7MB/s, maxb=1094.7MB/s, mint=60001msec, maxt=60001msec
Disk stats (read/write):
  nvme0n1: ios=0/16791403, merge=0/0, ticks=0/537680, in_queue=542112, util=100.00%
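For convenience, the three runs above can be reproduced with a single sweep over --numjobs. This is just a sketch wrapping the exact fio invocations used here; the function name and the FIO override are my own additions, not part of fio:

```shell
#!/bin/sh
# Sweep sync 4k-write IOPS at 1, 5, and 10 jobs, as in the runs above.
# WARNING: fio writes to the raw device and destroys any data on it.
# Set FIO=echo beforehand for a dry run that only prints the commands.
journal_sweep() {
    dev=${1:-/dev/nvme0n1}
    for jobs in 1 5 10; do
        ${FIO:-fio} --filename="$dev" --direct=1 --sync=1 --rw=write \
            --bs=4k --numjobs="$jobs" --iodepth=1 --runtime=60 \
            --time_based --group_reporting --name=journal-test
    done
}
```

Called as `journal_sweep /dev/nvme0n1`; each iteration runs for the same 60 s time-based window as the hand-run tests.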
# smartctl -d scsi -i /dev/nvme0n1
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.0-45-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Vendor: NVMe
Product: INTEL SSDPEDMD01
Revision: 8DV1
User Capacity: 1,600,321,314,816 bytes [1.60 TB]
Logical block size: 512 bytes
Rotation Rate: Solid State Device
On 01/08/2016 at 02:39 PM, Burkhard Linke wrote:
> Hi,
>
> I want to start another round of SSD discussion since we are about to
> buy some new servers for our ceph cluster. We plan to use hosts with
> 12x 4TB drives and two SSD journal drives. I'm fancying Intel P3700
> PCI-e drives, but Sebastien Han's blog does not contain performance
> data for these drives yet.
>
> Is anyone able to share some benchmark results for Intel P3700 PCI-e
> drives?
>
> Best regards,
> Burkhard
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
PS