On Mon, Jul 13, 2009 at 3:16 PM, Joerg
Schilling<joerg.schill...@fokus.fraunhofer.de> wrote:
> Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:
>
>> On Mon, 13 Jul 2009, Mike Gerdts wrote:
>> >
>> > FWIW, I hit another bug if I turn off primarycache.
>> >
>> > http://defect.opensolaris.org/bz/show_bug.cgi?id=10004
>> >
>> > This causes really abysmal performance - but equally so for repeat runs!
>>
>> It is quite fascinating seeing the huge difference in I/O performance
>> from these various reports.  The bug you reported seems likely to be
>> that without at least a little bit of caching, it is necessary to
>> re-request the underlying 128K ZFS block several times as the program
>> does numerous smaller I/Os (cpio uses 10240 bytes?) across it.
>
> cpio reads/writes in 8192 byte chunks from the filesystem.
>
> BTW: star by default creates a shared memory based FIFO of 8 MB size and
> reads in the biggest possible size that would currently fit into the FIFO.
>
> Jörg
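
To make the arithmetic behind Bob's theory explicit: with the default
128K recordsize and primarycache=none, every 8192-byte read has to
fetch the whole 128 KiB record again, so a single record can end up
being read from disk about 128K / 8K = 16 times.  A rough sketch of
how one could watch that amplification (the dataset name and file are
just placeholders, not what I actually tested with):

  # dataset with ARC caching disabled; recordsize defaults to 128K
  zfs create -o primarycache=none tank/nocache
  cp $file /tank/nocache/f

  # watch physical reads while doing cpio-sized 8 KiB reads
  zpool iostat -v tank 1 &
  dd if=/tank/nocache/f of=/dev/null bs=8k

  # the same file read in 128 KiB chunks should move roughly 1/16th
  # as much data from the vdevs
  dd if=/tank/nocache/f of=/dev/null bs=128k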

Using cpio's -C option does not seem to change the behavior for this
bug, but I did see a performance difference in the case where I hadn't
modified the ZFS caching behavior.  That is, the performance of the
tmpfs-backed vdisk more than doubled with
"cpio -o -C $((1024 * 1024)) >/dev/null".  At that point cpio was
spending roughly 13% of its time in usr and 87% in sys.
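
For anyone who wants to repeat that comparison, it boils down to
something like the following (the find invocation and ptime wrapper
here are illustrative, not the exact commands I ran):

  cd /path/to/the/files
  # default blocking
  find . -type f | ptime cpio -o >/dev/null
  # 1 MiB blocking
  find . -type f | ptime cpio -o -C $((1024 * 1024)) >/dev/null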

I haven't tried star, but I did find that I could also reproduce the
problem with "cat $file | cat > /dev/null".  This seems like a
worthless use of cat, but the pipe forces cat to actually copy the
data from input to output.  Without the pipe, cat can mmap its input,
and when the output is /dev/null Solaris is smart enough to avoid any
reads at all.
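
Concretely, the two variants look like this (run them with zpool
iostat going in another window if you want to see the difference in
physical reads):

  # cat mmaps the file and write()s the mapping; /dev/null never
  # touches the data, so no pages are faulted in from disk
  cat $file > /dev/null

  # the pipe forces the data to actually be copied, so every page
  # gets read; that is enough to trigger the slow path from the bug
  cat $file | cat > /dev/null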

-- 
Mike Gerdts
http://mgerdts.blogspot.com/