> When you have a striped storage device under a
> file system, then the database or file system's view
> of contiguous data is not contiguous on the media.

Right.  That's a good reason to use fairly large stripes.  (The primary 
factor limiting how large a stripe can be is efficient parallel access: with a 
100 MB stripe size, an average 100 MB file gets less than two disks' worth of 
throughput.)
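To make that "less than two disks' worth" claim concrete, here is a toy simulation (my own sketch, not anything from the thread): a file equal in size to one stripe starts at a random offset within a stripe, so it usually splits unevenly across two disks, and sequential-read throughput is bounded by the disk holding the larger chunk.

```python
import random

def effective_parallelism(file_size, stripe_size, trials=100_000, seed=1):
    """Estimate how many disks' worth of throughput a sequential read
    of `file_size` gets, on average, when the file begins at a
    uniformly random offset within a stripe.  Throughput is limited
    by the disk that holds the largest chunk of the file."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        offset = rng.uniform(0, stripe_size)
        # Split the file into per-stripe chunk sizes.
        chunks = []
        remaining = file_size
        first = min(stripe_size - offset, remaining)
        chunks.append(first)
        remaining -= first
        while remaining > 0:
            c = min(stripe_size, remaining)
            chunks.append(c)
            remaining -= c
        total += file_size / max(chunks)
    return total / trials

# A 100 MB file on 100 MB stripes: well under two disks' worth on average.
print(round(effective_parallelism(100, 100), 2))
```

For this case the average works out to about 1.4 disks' worth, which is why pushing the stripe size up toward typical file sizes costs parallelism.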

ZFS, of course, doesn't have this problem, since it's handling the layout on 
the media; it can store things as contiguously as it wants.

> There are many different ways to place the data on the media and we would 
> typically
> strive for a diverse stochastic spread.

Err ... why?

A random distribution makes reasonable sense if you assume that future read 
requests are independent, or that they are dependent in unpredictable ways. 
With enough concurrent I/O streams you could argue that requests *are* 
effectively independent, but in many other cases they are not, and they're 
usually predictable (particularly after a startup period). Optimizing for the 
predicted access pattern makes sense. (Optimizing for *observed* access may 
make sense in some cases as well.)
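To illustrate the cost of a stochastic spread when access *is* predictable, here is a toy sketch (invented names, my own simplification): count the non-adjacent jumps, roughly head seeks, that a sequential logical read incurs under contiguous versus randomly scattered placement.

```python
import random

def seeks(physical_order):
    """Count non-adjacent jumps (head seeks, roughly) that a
    sequential logical read incurs over these physical addresses."""
    return sum(1 for a, b in zip(physical_order, physical_order[1:])
               if b != a + 1)

n = 1000
# Contiguous placement: logical block i lives at physical block i.
contiguous = list(range(n))

# A "diverse stochastic spread": the same blocks scattered at random.
rng = random.Random(0)
scattered = contiguous[:]
rng.shuffle(scattered)

print(seeks(contiguous))   # 0 seeks: purely sequential on the media
print(seeks(scattered))    # nearly one seek per block
```

For truly independent request streams the scattered layout costs little, but for the common sequential case it turns a streaming read into per-block seeks.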

-- Anton
 
 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss