Frank Penczek writes:
 > Hi,
 > 
 > On Dec 17, 2007 10:37 AM, Roch - PAE <[EMAIL PROTECTED]> wrote:
 > >
 > >
 > > dd uses a default block size of 512 B.  Does this map to your
 > > expected usage?  When I quickly tested the CPU cost of small
 > > reads from cache, I did see that ZFS was more costly than UFS
 > > up to a crossover between 8K and 16K.  We might need a more
 > > comprehensive study of that (data in/out of cache, different
 > > recordsize & alignment constraints).  But for small
 > > syscalls, I think we might need some work in ZFS to make it
 > > CPU efficient.
 > >
 > > So first, does small sequential write to a large file
 > > match an interesting use case?
 > 
 > The pool holds home directories, so small sequential writes to one
 > large file are one of a few interesting use cases.

Can you be more specific here?

Do you have a body of applications that would do small
sequential writes, or one in particular?  Another useful piece
of information is whether we expect those to be allocating
writes or overwrites (beware that some applications move the
old file out, then run allocating writes, then unlink the
original file); the sketch below illustrates the two patterns.
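
To make the distinction concrete, here is a minimal Python sketch
of the two patterns; the file name, payload size, and block count
are hypothetical and only meant to illustrate the difference:

    import os

    DATA = b"x" * 512       # hypothetical payload: 512 B, mirroring dd's default
    PATH = "testfile"       # hypothetical file on the pool under test

    def overwrite_in_place(path, nblocks):
        # Small sequential overwrites of an already-allocated file.
        with open(path, "r+b") as f:
            for _ in range(nblocks):
                f.write(DATA)

    def rewrite_via_rename(path, nblocks):
        # The pattern described above: move the old file out, run
        # allocating writes into a fresh file, then unlink the original.
        old = path + ".old"
        os.rename(path, old)            # move the old file out
        with open(path, "wb") as f:     # allocating writes into a new file
            for _ in range(nblocks):
                f.write(DATA)
        os.unlink(old)                  # unlink the original file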
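
On the dd point quoted above: a quick way to see the per-syscall
cost is to write the same amount of data with different block
sizes.  A rough Python sketch follows (total size, file name, and
block sizes are hypothetical; there is no fsync, so this mostly
exercises the syscall/copy path into the cache rather than the
disks):

    import time

    TOTAL = 64 * 1024 * 1024        # hypothetical: 64 MiB per run
    PATH = "testfile"               # hypothetical file on the pool under test

    def timed_write(bs):
        # Write TOTAL bytes sequentially in bs-sized chunks, report the rate.
        buf = b"x" * bs
        t0 = time.time()
        with open(PATH, "wb") as f:
            for _ in range(TOTAL // bs):
                f.write(buf)
        dt = time.time() - t0
        print("bs=%7d: %.1f MB/s" % (bs, TOTAL / dt / 1e6))

    # 512 B (dd's default) up to 128 KiB (ZFS's default recordsize)
    for bs in (512, 4096, 16384, 131072):
        timed_write(bs)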



 > The performance is equally disappointing for workloads over many
 > small files, such as compiling projects checked out from svn
 > repositories.
 > 

Can you be more specific about this case as well?

-r


 > Cheers,
 >   Frank
