I've already explained how you can scale up IOPS numbers, and unless
that is your real workload, you won't see those numbers in practice.

See, that comes from running a high number of parallel jobs spread
evenly across the devices.

I don't find the conversation genuine, so I'm not going to continue it.


-----Original Message-----
From: Richard Elling [mailto:richard.ell...@gmail.com] 
Sent: 2009-10-20 16:39
To: Dupuy, Robert
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Sun Flash Accelerator F20

On Oct 20, 2009, at 1:58 PM, Robert Dupuy wrote:

> "there is no consistent latency measurement in the industry"
>
> You bring up an important point, as did another poster earlier in  
> the thread, and certainly it's an issue that needs to be addressed.
>
> "I'd be surprised if anyone could answer such a question while  
> simultaneously being credible."
>
>
> http://download.intel.com/design/flash/nand/extreme/extreme-sata-ssd-product-brief.pdf
>
> Intel:  X-25E read latency 75 microseconds

... but they don't say where it was measured or how big it was...

> http://www.sun.com/storage/disk_systems/sss/f5100/specs.xml
>
> Sun:  F5100 read latency 410 microseconds

... for 1M transfers... I have no idea what the units are, though...  
bytes?

> http://www.fusionio.com/PDFs/Data_Sheet_ioDrive_2.pdf
>
> Fusion-IO:  read latency less than 50 microseconds
>
> Fusion-IO lists theirs as 0.05 ms

...at the same time they quote 119,790 IOPS @ 4 KB.  By my calculator,
that is 8.3 microseconds per I/O, so clearly the latency by itself
doesn't directly determine the IOPS figure.
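
For what it's worth, the two figures reconcile if several requests are
in flight at once (Little's law: outstanding requests = throughput x
latency).  A back-of-the-envelope sketch, using only the numbers quoted
above:

    # Back-of-the-envelope check: how many outstanding requests would
    # reconcile the quoted 119,790 IOPS with a ~50 microsecond latency.
    iops = 119790            # vendor-quoted 4 KB IOPS
    latency_s = 50e-6        # vendor-quoted read latency, in seconds
    outstanding = iops * latency_s
    print(outstanding)       # ~6 requests in flight, per Little's law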

> I find the latency measures to be useful.

Yes, but since we are seeing benchmarks showing 1.6 MIOPS (mega-IOPS :-)
on a system which claims 410 microseconds of latency, it really isn't
clear to me how to apply the numbers to capacity planning. To wit, there
is some limit to the number of concurrent I/Os that can be processed per
device, so do I need more devices, faster devices, or devices which can
handle more concurrent I/Os?
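
One rough way to turn spec-sheet numbers into a planning estimate,
purely for illustration (the per-device queue-depth limit below is an
assumed value, not a published spec):

    # Rough sizing: how many devices are needed to sustain a target IOPS
    # rate, given per-request latency and a per-device concurrency limit.
    # The queue depth here is an ASSUMED figure for illustration only.
    import math

    target_iops = 1600000        # e.g. the 1.6 MIOPS benchmark figure
    latency_s   = 410e-6         # quoted F5100 read latency, in seconds
    queue_depth = 32             # assumed concurrent I/Os per device

    outstanding = target_iops * latency_s        # Little's law: ~656 in flight
    devices = math.ceil(outstanding / queue_depth)   # ~21 devices
    print(outstanding, devices)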

> I know it isn't perfect, and I agree benchmarks can be deceiving;  
> heck, I criticized one vendor's benchmarks in this thread already :)
>
> But I did find that, for me, a very simple single-threaded  
> read-as-fast-as-you-can approach, measuring the number of random  
> accesses per second, is one kind of measurement that gives you some  
> data on the raw access ability of the drive.

> No doubt in some cases you want to test multithreaded I/O too, but  
> my application is very latency-sensitive, so this initial test was  
> telling.

cool.
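
For reference, a minimal sketch of what such a single-thread
random-read probe might look like (illustrative only; the device path,
block size, and duration are assumptions, and without a cold cache or
direct I/O the page cache will inflate the numbers):

    # Single-threaded random-read probe: issue 4 KB reads at random
    # aligned offsets for a fixed time and report accesses per second.
    import os, random, time

    DEV = "/dev/rdsk/c1t0d0s0"   # hypothetical device path
    BLOCK = 4096                 # read size in bytes
    SECONDS = 10                 # run length

    fd = os.open(DEV, os.O_RDONLY)
    blocks = os.lseek(fd, 0, os.SEEK_END) // BLOCK

    count = 0
    start = time.time()
    while time.time() - start < SECONDS:
        offset = random.randrange(blocks) * BLOCK
        os.pread(fd, BLOCK, offset)      # one synchronous random read
        count += 1
    os.close(fd)

    elapsed = time.time() - start
    print("%d reads, %.0f reads/s, %.0f us average latency"
          % (count, count / elapsed, 1e6 * elapsed / count))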

> As I got into the actual performance of my app, the lower-latency  
> drives performed better than the higher-latency drives...all of  
> this was on SSDs.

Note: the F5100 has SAS expanders which add latency.
  -- richard

> (I did not test the F5100 personally; I'm talking about the SSD  
> drives that I did test.)
>
> So, yes, SSD and HDD are different, but latency is still important.


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
