On Thu, Jun 10, 2010 at 9:39 AM, Andrey Kuzmin
<andrey.v.kuz...@gmail.com> wrote:
> On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski <mi...@task.gda.pl> wrote:
>>
>> On 21/10/2009 03:54, Bob Friesenhahn wrote:
>>>
>>> I would be interested to know how many IOPS an OS like Solaris is able to
>>> push through a single device interface.  The normal driver stack is likely
>>> limited as to how many IOPS it can sustain for a given LUN since the driver
>>> stack is optimized for high latency devices like disk drives.  If you are
>>> creating a driver stack, the design decisions you make when requests will be
>>> satisfied in about 12ms would be much different than if requests are
>>> satisfied in 50us.  Limitations of existing software stacks are likely
>>> reasons why Sun is designing hardware with more device interfaces and more
>>> independent devices.
>>
>>
>> Open Solaris 2009.06, 1KB READ I/O:
>>
>> # dd of=/dev/null bs=1k if=/dev/rdsk/c0t0d0p0&
>
> /dev/null is usually a poor choice for a test like this. Just to be on the
> safe side, I'd rerun it with /dev/random.
> Regards,
> Andrey

(aside from other replies about read vs. write and /dev/random...)

Testing the performance of a disk by reading from /dev/random and writing to
the disk is misguided.  From random(7d):

   Applications retrieve random bytes by reading /dev/random
   or /dev/urandom. The /dev/random interface returns random
   bytes only when sufficient amount of entropy has been collected.

In other words, when the kernel doesn't think it can give high-quality
random numbers, it stops providing them until it has gathered enough
entropy.  It will pause your reads in the meantime.
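
For a quick sanity check (a rough sketch; the sizes here are arbitrary and
the exact behaviour depends on how much entropy the box has collected),
comparing the two devices directly shows the difference:

# /dev/random may stall or return short reads once the entropy pool drains
time dd if=/dev/random of=/dev/null bs=1k count=100
# /dev/urandom never blocks, but burns CPU generating the bytes
time dd if=/dev/urandom of=/dev/null bs=1k count=100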

If instead you use /dev/urandom, the above problem doesn't exist, but
generating the random numbers is CPU-intensive.  There is a reasonable
chance (particularly with slow CPUs and fast disks) that you will be
testing the speed of /dev/urandom rather than the speed of the disk or
other I/O components.
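
One way to gauge whether /dev/urandom itself would be the bottleneck (the
sizes below are arbitrary) is to time it with no disk in the picture and
compare the resulting MB/s against what you expect from the device under
test:

# raw pseudo-random generation rate on this machine, no disk involved
time dd if=/dev/urandom of=/dev/null bs=128k count=8192
# if that rate isn't comfortably above the disk's expected throughput,
# you would mostly be benchmarking /dev/urandom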

If your goal is to provide data that is not all zeros, so that ZFS
compression doesn't turn the file into a sparse file or otherwise make
the actual writes smaller, you could try something like:

# create a file just over 100 MB
dd if=/dev/random of=/tmp/randomdata bs=513 count=204401
# repeatedly feed that file to dd
while true ; do cat /tmp/randomdata ; done | dd of=/my/test/file bs=... count=...

Because the file's size (513 x 204401 bytes) is odd, each repetition of it
lands at a different offset relative to the power-of-two record boundaries,
so it should take a long while before two identical blocks are written; that
confounds deduplication as well.
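
As a concrete illustration (the pool, file name, and sizes below are made up
for the example), one run of that pipeline plus a quick check that neither
compression nor dedup got any traction might look like:

# write 1 GB of non-repeating data in 128 KB chunks
while true ; do cat /tmp/randomdata ; done | dd of=/tank/test/file bs=128k count=8192
# once dd prints its summary, interrupt the leftover loop with Ctrl-C
# then confirm the data didn't shrink (the dedup ratio needs a dedup-capable build)
zfs get compressratio tank/test
zpool list tank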

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
