I still think that MS's "undelete" feature plays a part in this. When
data is "deleted" in Windows, NTFS marks blocks to be released without
actually erasing them. Rather than reusing released blocks, NTFS
prefers new, unused blocks, which leads to fragmentation.
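As a toy illustration of why that allocation preference matters (this is not NTFS's actual allocator, just a made-up model): a file's fragment count is simply the number of non-contiguous extents it occupies, so a file written into fresh sequential blocks stays in one piece, while the same file forced into scattered freed holes ends up in several pieces.

```python
def fragments(blocks):
    """Count contiguous runs (extents) in a list of block indices."""
    blocks = sorted(blocks)
    return sum(1 for i, b in enumerate(blocks)
               if i == 0 or b != blocks[i - 1] + 1)

# A 10-block file written into fresh, sequential blocks: one extent.
fresh = list(range(100, 110))

# The same file written into holes left behind by earlier deletions
# (hypothetical hole positions): four extents.
holes = [3, 4, 17, 18, 19, 42, 90, 91, 92, 93]

print(fragments(fresh))   # 1
print(fragments(holes))   # 4
```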

 

 

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of WEAVER,
Simon (external)
Sent: Wednesday, February 20, 2008 11:21 AM
To: Tony T.; veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] NBU 5.1: Disk staging causing heavy
fragmentation

 

Today's NTFS handles fragmentation a lot better - in fact, FAT and FAT32
were really the main file systems that would always get fragmented.
That is not to say NTFS is immune to the fragmentation that people
may experience, but there are ways to minimise it even further.

 

Understanding the volume itself, and what it is intended for, is the key
to keeping fragmentation down. When you format a volume you get the
option of a "cluster" (allocation unit) size, so you must be aware of
what the volume will be storing (for example, a few large files, or
millions of small files).

 

By default, when formatting, Windows keeps a "default" setting in
place. Choosing a smaller cluster size will waste less disk space, but
is more likely to cause fragmentation.

 

Likewise, a larger cluster size will cause less fragmentation but waste
more space. Further details can be found in the online help of Win2k3,
XP, 2000, etc.!
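The space side of that trade-off is easy to put numbers on: every file is rounded up to a whole number of clusters, so the slack grows with cluster size and with the number of small files. A quick illustrative Python calculation (the file-size mix below is a made-up assumption, not measured data):

```python
import math

def allocated_bytes(file_sizes, cluster_size):
    """Disk space consumed when each file is rounded up to whole clusters."""
    return sum(math.ceil(size / cluster_size) * cluster_size
               for size in file_sizes)

# Hypothetical mix: many small files plus a few large staging images.
files = [1_500] * 10_000 + [200_000_000] * 5

for cluster in (4_096, 16_384, 65_536):   # 4K, 16K, 64K clusters
    waste = allocated_bytes(files, cluster) - sum(files)
    print(f"{cluster // 1024:>2}K clusters: {waste / 1024**2:8.1f} MiB slack")
```

With mostly large files (as on a disk staging volume) the slack from a 64K cluster is tiny relative to the data, which is why larger clusters are usually the better trade there.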

 

Not to put my foot in it, but I am sure other file systems suffer too -
or maybe it's an NTFS thing ;-)

Simon.


________________________________

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tony T.
Sent: Wednesday, February 20, 2008 4:07 PM
To: veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] NBU 5.1: Disk staging causing heavy
fragmentation

 

        
        It's NTFS and you're creating and deleting a lot of files on
the volume so of course it will fragment.  Either defragment the volume
or set the minimum threshold lower so that more files get deleted when
the cleanup process runs to reduce the fragmentation.
        
           .../Ed

        
        -- 
        Ed Wilts, Mounds View, MN, USA
        mailto:[EMAIL PROTECTED]


Thanks for the info guys.

It sounds like fragmentation is just a given when it comes to backing
up to disk?  I understand that, as seeing it explained does make sense.
I have been looking for some of this "well documented" information and
have come up empty.  Searching for fragmentation on Symantec's site is
like a journey through the looking glass.  I will keep looking, but if
anyone has any links to a white paper or something it would be much
appreciated.

Also, when you say "set the minimum threshold lower so that more files
get deleted..."  This confused me; I mean, isn't the fragmentation
being caused by so many file creations/deletions?  Wouldn't increasing
the number of files being deleted also increase the fragmentation?

Or did I misread that?

Thanks again for the info,

T.

 

This email (including any attachments) may contain confidential and/or
privileged information or information otherwise protected from
disclosure. If you are not the intended recipient, please notify the
sender immediately, do not copy this message or any attachments and do
not use it for any purpose or disclose its content to any person, but
delete this message and any attachments from your system. Astrium
disclaims any and all liability if this email transmission was virus
corrupted, altered or falsified.
---------------------------------------------------------------------
Astrium Limited, Registered in England and Wales No. 2449259
REGISTERED OFFICE:-
Gunnels Wood Road, Stevenage, Hertfordshire, SG1 2AS, England

 

_______________________________________________
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
