Typically you want to do something like this:

  (a) Write 1,000,000 files of varying length.
  (b) Randomly select and remove 500,000 of those files.
  (c) Repeat (a) creating files and (b) randomly removing files, until your
      file system is full enough for your test, or you run out of time.

That's a pretty simplistic method, and there are lots of useful variations 
(e.g. fill the file system entirely the first time before removing roughly half 
of the storage, to ensure there's not a big contiguous chunk of space that's 
never been allocated).
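
To make that concrete, here's a rough Python sketch of the basic create/remove
loop. The directory (/tank/aging), batch size, and fill target are made-up
placeholders for illustration, not recommendations:

    #!/usr/bin/env python3
    # Minimal filesystem-aging sketch: create files of random sizes, remove
    # roughly half at random, and repeat until a target fill level is reached.
    import os
    import random
    import shutil
    import uuid

    AGING_DIR = "/tank/aging"   # hypothetical directory on the fs under test
    TARGET_FILL = 0.80          # stop once the filesystem is ~80% full
    BATCH = 100_000             # files per pass (scaled down from 1,000,000)

    def fill_fraction(path):
        usage = shutil.disk_usage(path)
        return usage.used / usage.total

    def create_files(n):
        for _ in range(n):
            name = os.path.join(AGING_DIR, uuid.uuid4().hex)
            size = random.randint(1, 1 << 20)   # 1 byte .. 1 MiB
            with open(name, "wb") as f:
                f.write(os.urandom(size))

    def remove_random(fraction=0.5):
        files = os.listdir(AGING_DIR)
        for name in random.sample(files, int(len(files) * fraction)):
            os.remove(os.path.join(AGING_DIR, name))

    os.makedirs(AGING_DIR, exist_ok=True)
    while fill_fraction(AGING_DIR) < TARGET_FILL:
        create_files(BATCH)     # (a) create files of varying length
        remove_random(0.5)      # (b) randomly remove about half of them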

For ZFS, you may also want to throw snapshots into the mix, depending on what 
you're trying to test.
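
A hedged sketch of how you might weave snapshots into that loop, using the
standard `zfs snapshot` command via subprocess; the dataset name and the
every-fifth-pass cadence are just assumptions:

    import subprocess

    def snapshot(dataset="tank/aging", tag="aging"):
        # `zfs snapshot dataset@name` creates a snapshot of the dataset
        subprocess.run(["zfs", "snapshot", f"{dataset}@{tag}"], check=True)

    # e.g. inside the aging loop, every fifth pass:
    # if pass_number % 5 == 0:
    #     snapshot(tag=f"aging-{pass_number}")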

If you care about directory performance, or anything which walks the file tree, 
it's also useful to randomly move the files around.
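
Something along these lines would do the random shuffling (again only a
sketch; it assumes the whole tree lives on one filesystem, so os.replace is a
cheap rename rather than a copy):

    import os
    import random

    def shuffle_files(root, n_moves):
        # Walk the tree once, then move a random sample of files into
        # randomly chosen existing directories so the layout itself ages.
        dirs = [d for d, _, _ in os.walk(root)]
        files = [os.path.join(d, f)
                 for d, _, names in os.walk(root) for f in names]
        for src in random.sample(files, min(n_moves, len(files))):
            dst = os.path.join(random.choice(dirs), os.path.basename(src))
            if not os.path.exists(dst):
                os.replace(src, dst)   # rename within the same filesystem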

If you care about the performance of files that are appended to, you'd also 
want to append to a random subset of the files.
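
And a similar sketch for the append case; the 64 KiB cap on each append is
arbitrary:

    import os
    import random

    def append_to_random(root, count, max_bytes=65536):
        # Append a random amount of data to `count` randomly chosen files,
        # so some files grow over time the way logs and databases do.
        files = [os.path.join(d, f)
                 for d, _, names in os.walk(root) for f in names]
        for path in random.sample(files, min(count, len(files))):
            with open(path, "ab") as f:
                f.write(os.urandom(random.randint(1, max_bytes)))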

(If you're really serious about this, you'd want to record a set of operations 
on a filesystem over a period of years and either play it back or generate a 
similar pattern. I don't know of any publicly available traces which are good 
enough to do aging with, though.)