On Sun, 23 Jun 2002 14:40:34 -0800
civileme <[EMAIL PROTECTED]> said with temporary authority

> James wrote:
> 
> >On Fri, 21 Jun 2002 03:19:23 -0400
> >Rick Thomas <[EMAIL PROTECTED]> said with temporary authority
> >
> >>It's something you used to have to do on Windows disks. 
> >>"De-Fragment".
> >>
> >>When you write a lot of small files, then delete some of them, the
> >>"allocation bitmap" for the disk gets to look like Swiss cheese --
> >>lots of little holes.  The little holes get used for the next
> >>file(s) you write, and those files become "fragmented".  The net
> >>effect is that reading and writing files on a fragmented disk
> >>takes longer than on an un-fragmented disk, where the files are
> >>mostly contiguous.  Sometimes a _lot_ longer for a really badly
> >>fragged disk.  People used to sell utilities for de-frag'ing
> >>Windows disks, for lots of money.
> >>
> >>Nowadays, it's cheaper not to bother... when a disk becomes fragged,
> >>you just throw it away and get a newer, bigger, cheaper one...
> >>(;->)
> >>
> >>
> >>Rick
> >>
> >
> >Correct me if I'm wrong... (it happens a lot that I am, trust me,
> >I'm married, I know) but the difference between vfat and ext2 is the
> >way they write a file back.  With vfat, say on a 4 gig partition
> >holding 2 gigs of data, it attempts to write the file back to the
> >same space it came from.  If the file won't fit, it points to the
> >remainder, written in the first available free space that will hold
> >it.  A single file can end up with 4, 5 or more fragments as it
> >grows larger and larger (it keeps the existing fragments and creates
> >new ones as needed).  ext2, as I understand it, looks at the
> >original spot, determines whether the file will fit, and if not
> >writes the changed file to a new location with enough contiguous
> >space to hold the entire file.  This minimizes fragmentation but
> >does tend to scatter data "all over the place."  Aesthetically
> >unpleasing, but once a file is found in the map it yields a faster
> >read and fewer fragments, which, despite theories to the contrary,
> >do get lost.  Now if you have 1.8 gigs of data on a 2 gig drive the
> >ability to find free space is severely reduced.  Maybe that is the
> >problem in Alaska: the drive is too full.  I don't know, but it is
> >interesting how it happened and worth looking into for sure.
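> >
> >A toy model of that difference (my own simplification, nothing like
> >the real vfat or ext2 allocators, block counts made up) might look
> >like this:
> >
> >    # Toy "disk": a list of block owners, None = free block.
> >    DISK_BLOCKS = 200
> >    disk = [None] * DISK_BLOCKS
> >
> >    def first_fit(owner, n):
> >        """Grab the first n free blocks anywhere (vfat-style growth)."""
> >        got = []
> >        for i, b in enumerate(disk):
> >            if b is None:
> >                disk[i] = owner
> >                got.append(i)
> >                if len(got) == n:
> >                    return got
> >        raise RuntimeError("disk full")
> >
> >    def contiguous_fit(owner, n):
> >        """Claim the first run of n free blocks in a row."""
> >        run = []
> >        for i, b in enumerate(disk):
> >            if b is None:
> >                run.append(i)
> >                if len(run) == n:
> >                    for j in run:
> >                        disk[j] = owner
> >                    return run
> >            else:
> >                run = []
> >        raise RuntimeError("no contiguous run big enough")
> >
> >    def fragments(blocks):
> >        """How many contiguous runs the block list is split into."""
> >        return 1 + sum(1 for a, b in zip(blocks, blocks[1:]) if b != a + 1)
> >
> >    # Make some Swiss cheese: 20 files of 8 blocks, delete every other one.
> >    files = {n: contiguous_fit(n, 8) for n in range(20)}
> >    for n in range(0, 20, 2):
> >        for i in files.pop(n):
> >            disk[i] = None
> >
> >    victim = files[1]
> >
> >    # vfat-style growth: keep the old blocks, scatter 30 new ones into holes.
> >    fat_style = victim + first_fit("fat", 30)
> >    print("vfat-style fragments:", fragments(sorted(fat_style)))
> >
> >    # Relocate-style growth: give all the old blocks back, then rewrite
> >    # the whole file into one contiguous run that is big enough.
> >    for i, b in enumerate(disk):
> >        if b == "fat":
> >            disk[i] = None
> >    for i in victim:
> >        disk[i] = None
> >    relocated = contiguous_fit("new", len(victim) + 30)
> >    print("relocated fragments:", fragments(relocated))
> >
> >On a run like this the vfat-style file ends up in several pieces
> >while the relocated one stays in a single piece, which is basically
> >the behaviour described above.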
> >
> >James
> >
> >>>>um... what's "defrag"?
> >>>>
> >>>>Mark
> >>>>
> >>>.. ya know.. it's for taking off the frag.
> >>>
> >>>Damian
> >>>
> >>
> >>
> >
> >
> >
> Turns out, this fellow never responded.  I have reproduced the
> fragging by deliberately doing a no-no: removing the reserve and then
> using a modified version of my filesystem exerciser, which creates
> files of random size between 2k and 800K (modified to fill the
> filesystem 90%) and then expands them in place.  Now ext2 will
> automatically "fragment" large files where a single block group
> cannot contain the whole file, so an older ext2 version without
> support for sparse superblocks might show some fragmentation on big
> files the first time through.
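> 
> For anyone who wants to try the same thing: "removing the reserve"
> just means setting the reserved block percentage to zero (tune2fs -m 0
> on the test filesystem).  The exerciser itself boils down to something
> like the sketch below -- the mount point and the amount each file
> grows by are only illustrative, not the exact script:
> 
>     import os, random
> 
>     MOUNT = "/mnt/scratch"   # throwaway test filesystem (made-up path)
>     TARGET = 0.90            # stop filling at roughly 90% of the blocks
> 
>     def used_fraction(path):
>         st = os.statvfs(path)
>         return 1.0 - float(st.f_bfree) / st.f_blocks
> 
>     # pass 1: fill to ~90% with files of random size between 2k and 800K
>     paths = []
>     n = 0
>     while used_fraction(MOUNT) < TARGET:
>         p = os.path.join(MOUNT, "file%06d" % n)
>         with open(p, "wb") as f:
>             f.write(os.urandom(random.randint(2 * 1024, 800 * 1024)))
>         paths.append(p)
>         n += 1
> 
>     # pass 2: expand every file in place by appending to it, so the
>     # allocator has to find room for data that is already laid out
>     for p in paths:
>         with open(p, "ab") as f:
>             f.write(os.urandom(random.randint(2 * 1024, 200 * 1024)))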
> 
> With the reserved blocks at a healthy setting, the fragmentation
> doesn't happen in the same way.  That's odd.  It appears the reserved
> blocks are fair game as a scratch area.
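> 
> (A quick way to see how big the reserve is on a mounted filesystem is
> to compare the two free-block counts statvfs reports, for example:
> 
>     import os
>     st = os.statvfs("/home")              # any mounted ext2 fs will do
>     reserved = st.f_bfree - st.f_bavail   # free-to-root minus free-to-anyone
>     print("reserved: %d of %d blocks (%.1f%%)"
>           % (reserved, st.f_blocks, 100.0 * reserved / st.f_blocks))
> 
> roughly, anyway -- once the filesystem is nearly full the difference
> shrinks along with the free space.)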
> 
> Civileme

Civileme... since I'm just now starting to get into file systems and
how they do and don't work (AFS, xFS, Coda, HFS, etc.), I'm curious
whether you can recommend any reading on the subject.  This does
interest me, especially since you can recreate the problem
deliberately.  They are doing some experiments with RLF (Really Large
Files), as they call them, 1 terabyte or more, and the more I
understand the more intelligently I can listen.

James

Want to buy your Pack or Services from MandrakeSoft? 
Go to http://www.mandrakestore.com
