On Fri, 28 Nov 2008 13:46:01 +0200
Alan McKinnon <[EMAIL PROTECTED]> wrote:

> On Friday 28 November 2008 13:14:42 Dale wrote:
> > If this is a little high, what would be the best way to defrag it?
>
> By not defragging it.

I beg to differ. The simplest way to defrag a partition is to make a
backup and restore it. Whether it's worth the effort is another story.
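
For example, something along these lines (a rough sketch only; the
device /dev/sdb1 and the mount points are placeholders of mine, and
you'd obviously verify the backup before recreating the filesystem):

localhost ~ # tar -C /mnt/data -cpf /mnt/backup/data.tar .  # back up everything
localhost ~ # umount /mnt/data
localhost ~ # mkfs.ext3 /dev/sdb1   # recreate the FS -- destroys all data on it!
localhost ~ # mount /dev/sdb1 /mnt/data
localhost ~ # tar -C /mnt/data -xpf /mnt/backup/data.tar  # files come back contiguous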

>
> It's not Windows. Windows boxes need defragging not because
> fragmentation is a huge problem in itself, but because Windows
> filesystems are a steaming mess of [EMAIL PROTECTED] that do little
> right and most things wrong. Defrag treats the symptom, not the
> cause :-)
>

Personally I think NTFS is one of the things MS have done right. It is
fast, stable and has the features of the Linux FSes and then some: it
has journaling, quotas, permissions, mount points and symbolic links.
Do any of ext, reiserfs or xfs have compression and/or encryption
capabilities? I don't think so.
I have some experience with MS Windows and I've never seen data
corruption after a system crash or power loss, something I can't say
about ReiserFS or ext3 (when not mounted with data=journal).
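
(For the record, data=journal is just an ext3 mount option; here is a
hypothetical /etc/fstab line, with the device and mount point made up
by me:

/dev/hda3  /home  ext3  rw,noatime,nodiratime,data=journal,commit=1  0 2

The price is write performance, since all data gets written twice,
once to the journal and once to its final location.)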


> Reiser tends to balance itself out. What is especially
> noteworthy is that none of the general purpose Linux filesystems
> provide a defrag utility. Theodore Ts'o and Hans Reiser are both
> exceptional programmers; if there were a need for such a tool they
> would assuredly have written one. They did not, so there probably
> isn't.


It would be just as easy to make exactly the opposite argument:
since they like to experiment and "to boldly go where no man has gone
before", they wouldn't bother to write a defrag tool, because it would
be too trivial and no fun. (Just an example speculation on my part.)


Now let's put the assumptions aside and do a test.

localhost test # cat /usr/portage/packages/All/* > test1
localhost test # cp test1 test2
localhost test # ls -lah
total 2.3G
drwxr-xr-x  2 root users 4.0K 2008-11-29 01:38 .
drwxr-xr-x 44 root users 4.0K 2008-11-29 01:36 ..
-rw-r--r--  1 root users 1.2G 2008-11-29 01:38 test1
-rw-r--r--  1 root users 1.2G 2008-11-29 01:40 test2
localhost test # filefrag *
test1: 1125 extents found, perfection would be 10 extents
test2: 1923 extents found, perfection would be 10 extents
localhost test # time cat test1 > /dev/null

real    0m26.747s
user    0m2.110s
sys     0m1.450s
localhost test # time cat test2 > /dev/null

real    0m29.825s
user    0m1.780s
sys     0m1.690s


All this with ext3 (rw,noatime,nodiratime,data=journal,commit=1) on
a partition with 84% free space.

It took 29.825-26.747=3.078 seconds more to read the same data when
the file has 1923-1125=798 additional fragments. So the fragmentation
led to a ~10% performance decrease in this case, and it appears the
dogma "Linux FSes are smart and don't need to be defragmented" is not
quite true, right? Unfortunately I have no Windows at hand to make a
similar test for comparison, but I believe the results wouldn't be
much different.
BTW filefrag is written by Mr Ts'o.


> Any Linux defrag tool you encounter will have been written by a third
> party separate from the developers. It will move blocks around and
> update superblocks; the drive will have to be unmounted for that to
> work, and a slight misunderstanding of how to do it will ruin data.
>
> Are you willing to take the very real risk of data corruption?


Who says it has to work that way? :)
I have seen on the Net a defrag tool written by Mr Con Kolivas [1]. It
was just a bash script which basically copies the file to a new one,
compares the number of extents of the two, and keeps whichever has
fewer. :)
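
Something in that spirit would only take a few lines (a rough sketch
of my own, NOT Con's actual script; it must run as root because
filefrag needs it, and it relies on filefrag's output looking like in
my test above):

#!/bin/bash
# defrag-file: copy a file, then keep whichever copy has fewer extents.
f=$1
tmp="$f.defrag.$$"
# "test1: 1125 extents found, ..." -> field 2 is the extent count
extents() { filefrag "$1" | awk '{print $2}'; }
cp -p -- "$f" "$tmp" || exit 1
if [ "$(extents "$tmp")" -lt "$(extents "$f")" ]; then
        mv -- "$tmp" "$f"    # the copy is less fragmented, keep it
else
        rm -- "$tmp"         # the copy is no better, keep the original
fi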

[1] http://en.wikipedia.org/wiki/Con_Kolivas


-- 
Best regards,
Daniel
