> From: Haudy Kazemi [mailto:kaze0...@umn.edu]
> 
> With regard to multiuser systems and how that negates the need to
> defragment, I think that is only partially true.  As long as the files
> are defragmented enough so that each particular read request only
> requires one seek before it is time to service the next read request,
> further defragmentation may offer only marginal benefit.  On the other

Here's a great way to quantify how much "fragmentation" is acceptable:

Suppose you want to ensure at least 99% efficiency of the drive -- at most 1%
of its time wasted on seeks.
Suppose you're talking about 7200 RPM SATA drives, which sustain 500 Mbit/s
transfer, and have an average seek time of 8ms.

8ms is 1% of 800ms.
In 800ms, the drive could read 400 Mbit of sequential data.
That's 50 MB (400 Mbit / 8 bits per byte).

So as long as the "fragment" size of your files is approx 50 MB or larger,
fragmentation has a negligible effect on performance.  One seek per
every 50 MB read/written will yield less than 1% performance impact.

For the heck of it, let's see how that would have computed with 15krpm SAS
drives.
Sustained transfer 1 Gbit/s, and average seek 3.5ms.
3.5ms is 1% of 350ms
In 350ms, the drive could read 350 Mbit (43.75 MB, call it 44 MB).

That's certainly in the ballpark of 50 MB.
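The rule of thumb above generalizes: the minimum fragment size is just the
sustained transfer rate times the seek time, divided by the seek-overhead
fraction you're willing to tolerate.  A quick sketch (the function name and
the 1% default are mine, the drive numbers are the ones quoted above):

```python
def min_fragment_mb(transfer_mbit_s, seek_ms, overhead=0.01):
    """Smallest fragment size (in MB) such that one seek per fragment
    wastes at most `overhead` of the drive's time.  The fragment must
    take seek_ms / overhead milliseconds to read."""
    read_ms = seek_ms / overhead            # e.g. 8 ms / 0.01 = 800 ms
    mbits = transfer_mbit_s * read_ms / 1000.0
    return mbits / 8.0                      # Mbit -> MB

# 7200 RPM SATA: 500 Mbit/s, 8 ms average seek
print(min_fragment_mb(500, 8))      # 50.0 MB
# 15k RPM SAS: 1 Gbit/s, 3.5 ms average seek
print(min_fragment_mb(1000, 3.5))   # 43.75 MB
```

Note that faster seeks and faster transfer rates pull in opposite
directions, which is why both drive classes land in the same ballpark.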

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss