On Tuesday 2008-01-22 at 12:39 -0500, James Knott wrote:

 Isn't NTFS more resistant?

No, it still gets fragmented.

Too bad.

 I suppose FAT has outgrown its initial design usage for floppies and small
 disks, and it has been a practical success, despite its shortcomings. It
 is not inherently a bad system, just... different. Other systems were
 better designed.

 Isn't the ext2 design newer than FAT? The fragmentation problem of FAT
 was known before Linux was born.

I don't know when ext2 was invented, but other fragmentation-resistant file systems were around before NTFS. For example, HPFS, which MS actually created while doing OS/2 work for IBM, predates NTFS by a few years.

I suppose it was invented in the early '90s, same as Linux.

 There is another detail: IMO, fragmentation of FAT occurs not because of
 the format, but because of the way it is used. It would be the task of the
 operating system to avoid fragmenting files, by writing them
 properly, and even by correcting them later on. The format allows for that,
 but the operating system does not do it.

File systems such as HPFS and ext2 try to resist fragmentation by storing a file in the smallest free space that will hold it, and they fragment a file only when no contiguous free space is big enough. This means fragmentation is unlikely until the drive is almost full. On the other hand, FAT and (IIRC) NTFS simply grab the next available free space, big enough or not, and then additional blocks of free space until there is room for the whole file. This means they may save a file in multiple pieces when a single block large enough was available all along.

But that is not a characteristic of the FAT format, only of how the operating system uses it. It is perfectly possible to seek a large enough free area on the disk and then save the file there. It is the operating system that saves time by writing to the first space it finds, instead of searching harder for the best fit. It is not the definition of the format that is at fault; it is the implementation.
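
Just to illustrate the point (a toy in Python, with an invented list of free runs, nothing FAT-specific): the same free-space map allocated with a first-fit policy, as DOS does, versus a best-fit search:

def first_fit(free_extents, need):
    # grab free runs in disk order until 'need' clusters are covered,
    # fragmenting the file across several runs if the early ones are small
    pieces, remaining = [], need
    for start, length in free_extents:
        take = min(length, remaining)
        pieces.append((start, take))
        remaining -= take
        if remaining == 0:
            return pieces                  # possibly many fragments
    raise OSError("disk full")

def best_fit(free_extents, need):
    # prefer the smallest single run that holds the whole file, as HPFS
    # and ext2 do; fall back to first-fit only when no run is big enough
    candidates = [(length, start) for start, length in free_extents
                  if length >= need]
    if candidates:
        length, start = min(candidates)
        return [(start, need)]             # one contiguous piece
    return first_fit(free_extents, need)

free_extents = [(10, 3), (50, 2), (200, 40)]   # (start, length) runs in disk order
print(first_fit(free_extents, 5))   # [(10, 3), (50, 2)] -> two fragments
print(best_fit(free_extents, 5))    # [(200, 5)]         -> one contiguous piece

Same format, same free space; only the search policy differs.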


Then, if a file later grows and there is no free space after its end, it has to be fragmented, in FAT just as in any other system. I think ext2 tries to leave some space after each file for that chance, which is, I believe, one of the reasons why performance decreases as the disk fills up. I also think that in ext2 the file can be moved elsewhere, but I'm not sure. In MS-DOS this was not possible, because the OS was not the only thing capable of accessing the filesystem.
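
Again as a toy (my guess at the idea, not ext2's actual code): if a few clusters right after the file's tail are kept free, an append can extend the file in place instead of fragmenting it:

def append(file_pieces, free_extents, extra):
    # extend the file's last run if the adjacent clusters are free
    last_start, last_len = file_pieces[-1]
    tail = last_start + last_len
    for i, (start, length) in enumerate(free_extents):
        if start == tail and length >= extra:
            file_pieces[-1] = (last_start, last_len + extra)
            free_extents[i] = (start + extra, length - extra)
            return file_pieces
    # no room after the tail: the file must fragment, in FAT or ext2 alike
    i = next(i for i, (s, l) in enumerate(free_extents) if l >= extra)
    start, length = free_extents[i]
    free_extents[i] = (start + extra, length - extra)
    file_pieces.append((start, extra))
    return file_pieces

pieces = [(100, 8)]                # file occupies clusters 100-107
extents = [(108, 4), (300, 50)]    # 4 spare clusters left right after it
print(append(pieces, extents, 3))  # [(100, 11)] -> grew in place, no new fragment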

I once thought of writing a program that would defragment a FAT drive in the background, without stopping other jobs... just an idea; I never started writing it, though.
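
Had I written it, the core of one pass might have looked something like this toy (an in-memory model with invented names, not real disk access): copy the file's clusters into a contiguous free run first, and only then switch the map entry, so the file stays readable the whole time:

def find_free_run(used, total, size):
    # first run of 'size' consecutive unused clusters, or None
    run_start, run_len = None, 0
    for c in range(total):
        if c in used:
            run_start, run_len = None, 0
        else:
            run_start = c if run_start is None else run_start
            run_len += 1
            if run_len == size:
                return run_start
    return None

def defrag_file(clusters, files, total, name):
    old = files[name]
    if all(b == a + 1 for a, b in zip(old, old[1:])):
        return False                         # already contiguous
    used = {c for chain in files.values() for c in chain}
    start = find_free_run(used, total, len(old))
    if start is None:
        return False                         # no contiguous room free
    new = list(range(start, start + len(old)))
    for src, dst in zip(old, new):
        clusters[dst] = clusters[src]        # copy the data first...
    files[name] = new                        # ...then remap in one step
    return True

clusters = [None] * 16
files = {"a.txt": [1, 7, 9]}                 # fragmented in three pieces
for c in files["a.txt"]:
    clusters[c] = "data@%d" % c
defrag_file(clusters, files, 16, "a.txt")
print(files["a.txt"])                        # [2, 3, 4] -> now contiguous

A real tool would also have to lock the file against concurrent writes during the copy, which is where it gets hard.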


--
Cheers,
       Carlos E. R.