Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:

> On Mon, 13 Jul 2009, Joerg Schilling wrote:
> >
> > cpio reads/writes in 8192 byte chunks from the filesystem.
>
> Yes, I was just reading the cpio manual page and see that.  I think 
> that re-reading the 128K zfs block 16 times to satisfy each request 
> for 8192 bytes explains the 16X performance loss when caching is 
> disabled.  I don't think that this is strictly a bug since it is what 
> the database folks are looking for.

cpio spends 1.6x more SYStem CPU time than star. This may mainly be a result
of the fact that cpio (when using the cpio archive format) reads/writes the
archive file in 512-byte blocks.
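
As a rough, hypothetical illustration (the paths, sizes and archive names are
just examples, and exact option support depends on your cpio/star versions):
Sun cpio's -C option and star's bs= option control the archive I/O size, and
ptime shows where the CPU time goes:

	# cpio-format archives default to 512-byte records; -C asks Sun cpio
	# to do its archive I/O in larger chunks (here 64 KB).
	find /export/data -print | ptime cpio -oc -C 65536 > /tmp/data.cpio

	# star uses a larger block size by default; bs= sets it explicitly.
	ptime star -c bs=64k f=/tmp/data.star /export/data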

cpio by default spends 19x more USER CPU time than star. This seems to be a
result of the inappropriate header structure of the cpio archive format and of
the reblocking, and it cannot easily be changed (well, you could use "scpio" -
in other words, the "cpio" CLI personality of star - but this reduces the USER
CPU time only by 10%-50% compared to Sun cpio).
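
If you want to reproduce such numbers yourself, a hedged sketch (the path and
archive names are hypothetical, and "scpio" is only present if your star
installation ships the cpio personality under that name) is to time both runs
with ptime and compare the "user" lines:

	find /export/data -print | ptime /usr/bin/cpio -oc > /tmp/sun.cpio   # Sun cpio
	find /export/data -print | ptime scpio -oc > /tmp/star.cpio          # star's cpio CLI personality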

cpio is a program from the past that does not fit well into our current world.
Its internal limits cannot be lifted without creating a new, incompatible
archive format.

In other words: if you use cpio for your work, you have to live with its
problems ;-)

If you would like to play with different parameter values (e.g. read sizes),
cpio is unsuitable for such tests. Star lets you set big filesystem read sizes
by using the FIFO and playing with the FIFO size, and small filesystem read
sizes by switching off the FIFO and playing with the archive block size.
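
For example, a hedged sketch (option names as in current star releases; the
path, FIFO and block sizes are just examples, and f=/dev/null merely discards
the archive so only the filesystem read side is exercised):

	# Big filesystem reads: keep the FIFO enabled and make it large (fs=).
	star -c fs=128m bs=64k f=/dev/null /export/data

	# Small filesystem reads: switch the FIFO off, so star reads the
	# filesystem in archive-block-sized chunks, and vary bs=.
	star -c -no-fifo bs=8k f=/dev/null /export/data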

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
       j...@cs.tu-berlin.de                (uni)  
       joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily