On Fri, 22 Sep 2006, johansen wrote:
ZFS uses a 128k block size.  If you change dd to use a
bs=128k, do you observe any performance improvement?

   I had tried other sizes with much the same results, but
hadn't gone as large as 128K.  With bs=128K, it gets worse:

| # time dd if=zeros-10g of=/dev/null bs=128k count=102400
| 81920+0 records in
| 81920+0 records out
|
| real    2m19.023s
| user    0m0.105s
| sys     0m8.514s

It's also worth noting that this dd used less system and
user time than the read from the raw device, yet took
longer in elapsed ("real") time.
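
   One way to see where that elapsed time goes might be to watch
the pool from another window while the dd runs; zpool iostat
prints per-interval bandwidth, which should show whether the
disks really are being read for the whole two minutes.  (The
pool name below is just a placeholder for whichever pool holds
zeros-10g.)

  # zpool iostat tank 1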

   I think some of the blocks might be cached, as I have run
this a number of times.  I really don't know how the time is
being accounted for; however, the real time is accurate, since
that is what I see while waiting for the command to complete.
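
   If caching turns out to matter, exporting and re-importing
the pool before timing the read should start it from a cold
cache, since export should drop whatever the ARC holds for that
pool.  (This assumes the pool can be taken offline briefly;
"tank" and the file path below are only placeholders.)

  # zpool export tank
  # zpool import tank
  # time dd if=/tank/zeros-10g of=/dev/null bs=128k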

   Is there any other info I can provide which would help?
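
   For instance, I could send along the recordsize of the dataset
holding zeros-10g; zfs get should show it, in case it isn't
actually set to 128k (the dataset name below is only a
placeholder):

  # zfs get recordsize tank/test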

harley.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
