Jay Sprenkle wrote:
On 9/13/05, Dennis Jenkins <[EMAIL PROTECTED]> wrote:
Even vacuuming won't defrag the file. Disk space is allocated by the OS
and the OS makes no guarantees.
Won't Dr. Hipp's method of making a backup copy also defrag the file?
i.e.
execute BEGIN EXCLUSIVE to lock it
copy the file
COMMIT
rename the files and use the backup copy as the new current database.
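The steps above can be sketched with Python's sqlite3 module. This is a minimal illustration, not Dr. Hipp's actual code; the function name and paths are hypothetical. The key idea is that BEGIN EXCLUSIVE blocks all other readers and writers, so the file on disk stays in a consistent state for the duration of the copy:

```python
import shutil
import sqlite3

def backup_copy(db_path, backup_path):
    """Copy a live SQLite database while holding an exclusive lock.

    Hypothetical helper illustrating the lock/copy/commit steps
    described above; the rename/swap step is left to the caller.
    """
    # isolation_level=None puts the connection in autocommit mode,
    # so the explicit BEGIN EXCLUSIVE below is the only transaction.
    conn = sqlite3.connect(db_path, isolation_level=None)
    try:
        # Blocks until no other connection holds a lock; from here
        # until COMMIT, no other process can read or write the file.
        conn.execute("BEGIN EXCLUSIVE")
        shutil.copyfile(db_path, backup_path)
        conn.execute("COMMIT")
    finally:
        conn.close()
```

After the copy returns, the caller would rename the files to make the backup the new current database, as described in the last step.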
Assuming your disk free space isn't heavily fragmented. If it is fragmented, I believe this will still tend to reduce the fragmentation over time, depending on what else is happening on the machine at the same time.
It depends on lots of things: the OS, the filesystem, the percentage of free space on the filesystem, and other processes that are causing the OS to allocate disk blocks at the same time. I have noticed that Windows XP totally sucks at keeping files fragment-free when copying them. Even if there is enough free space to hold the destination file contiguously, the OS won't do it. I have rarely bothered to check file fragmentation on Linux and FreeBSD systems, so I don't know how those handle it (but I would assume they are much more intelligent about it than NTFS).
To Ben's point: I neglected to consider table-space fragmentation, and he makes a very good point. I read the source code of the VACUUM function; my understanding is that the resulting file won't have any table-space fragmentation, but I could be wrong.
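For what it's worth, the effect on table-space fragmentation is easy to observe from the file size: deleting rows leaves free pages inside the database file, and VACUUM rebuilds the file without them. A small sketch (the helper name is made up for illustration):

```python
import os
import sqlite3

def vacuumed_size(db_path):
    """Run VACUUM and return the resulting file size.

    VACUUM rebuilds the entire database, dropping free pages,
    which is why the file typically shrinks after large deletes.
    """
    conn = sqlite3.connect(db_path)
    conn.execute("VACUUM")
    conn.close()
    return os.path.getsize(db_path)
```

After populating a table, deleting most of its rows, and calling this helper, the file should come back noticeably smaller than before the VACUUM, since the freed pages are no longer carried around inside the file.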