Picture a sponge: a good part of it is air - holes. Most databases keep
freed disk blocks for themselves within the database file; they don't
hand the space back to the file system. When data comes in and later
gets deleted, it leaves a hole. When more data comes in, if it doesn't
fit within an available hole, the database allocates a little more space
at the end of the file (the file grows) so it can write the new data as
a contiguous block. Later, if a smaller chunk of data comes in, the
database may put it in one of the holes big enough to hold it - but
think about that for a moment: it will probably leave behind a very
small hole that's unlikely ever to be filled, because typical data isn't
small enough to fit it. Repeat several dozen/hundred/thousand times a
day, depending on how often you write to the database.
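
If it helps to see the idea, here's a toy model of that behaviour in
Python - purely illustrative, and nothing to do with how PM actually
lays out its database file:

# Toy model of how a database file fragments: records are placed
# first-fit into free holes; deletes leave holes; writes that fit no
# hole grow the file. Entirely hypothetical.
class ToyDBFile:
    def __init__(self):
        self.size = 0        # logical end of file, in bytes
        self.holes = []      # list of (offset, length) free gaps
        self.records = {}    # record id -> (offset, length)

    def write(self, rec_id, length):
        # First-fit: reuse the first hole big enough for the record.
        for i, (off, free) in enumerate(self.holes):
            if free >= length:
                self.records[rec_id] = (off, length)
                if free > length:
                    # The leftover sliver stays behind as a smaller hole.
                    self.holes[i] = (off + length, free - length)
                else:
                    del self.holes[i]
                return
        # No hole fits: grow the file and append at the end.
        self.records[rec_id] = (self.size, length)
        self.size += length

    def delete(self, rec_id):
        off, length = self.records.pop(rec_id)
        self.holes.append((off, length))

db = ToyDBFile()
for i in range(5):
    db.write(i, 1000)      # five 1 KB messages
db.delete(2)               # leaves a 1000-byte hole
db.write(5, 300)           # fits the hole, leaves a 700-byte sliver
db.write(6, 1500)          # too big for any hole -> the file grows
print(db.size, db.holes)   # 6500 [(2300, 700)]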

Compacting the database removes all the holes, big and small - just as
squeezing out a sponge doesn't decrease the amount of sponge, it only
removes everything inside the sponge that isn't sponge. But as soon as
you start writing to the database again, the cycle starts over.
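
Continuing the toy model above, a compact pass just slides every live
record down so they sit back-to-back and truncates the file - every
hole disappears (again, purely illustrative):

def compact(db):
    offset = 0
    # Walk the live records in file order and pack them together.
    for rec_id, (_, length) in sorted(db.records.items(),
                                      key=lambda kv: kv[1][0]):
        db.records[rec_id] = (offset, length)   # slide the record down
        offset += length
    db.holes = []                               # no gaps left
    db.size = offset                            # file shrinks to live data

compact(db)
print(db.size, db.holes)   # 5800 [] -- until writes resume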

There's probably a curve to the growth over time - the growth right
after a compact is probably much steeper than the growth several weeks
later. I'd guess it levels off to near-linear growth at some point.

Still - nothing wrong with compacting regularly.

Wouldn't it be a dream if PM could do this in the background during a
set time each day - say, you allocate 1 am - 6 am every day to it - and
it could maintain and groom the database? That'd be cool. Note that this
wouldn't necessarily yield a fully compacted database every morning (a
compacted database may not be as efficient as one with some "breathing
room" inside it), but if it could merge neighbouring free blocks in some
sort of database de-frag routine, and squeeze out holes under a set
threshold, that might be useful.

</dreaming>
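
For what it's worth, a "groom" pass on the toy model above might look
something like this - merge neighbouring holes, then squeeze out any
hole below a threshold by sliding later records down. Completely
speculative; nothing like this exists in PM as far as I know:

def groom(db, threshold=256):
    # 1. Merge neighbouring holes into single larger ones.
    merged = []
    for off, length in sorted(db.holes):
        if merged and merged[-1][0] + merged[-1][1] == off:
            prev_off, prev_len = merged[-1]
            merged[-1] = (prev_off, prev_len + length)
        else:
            merged.append((off, length))
    # 2. Squeeze out holes smaller than the threshold by sliding the
    #    records (and holes) that come after them down the file.
    keep, shift = [], 0
    for off, length in merged:
        off -= shift                        # account for earlier squeezes
        if length < threshold:
            for rec_id, (roff, rlen) in db.records.items():
                if roff > off:
                    db.records[rec_id] = (roff - length, rlen)
            db.size -= length
            shift += length
        else:
            keep.append((off, length))
    db.holes = keep                         # only "big enough" holes remain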
 

On 10/27/06 at 8:18 PM, Christian Roth ([EMAIL PROTECTED]) said:

>Wayne Brissette wrote:
>
>>Can I ask a serious question? How often do people run into this 2 GB limit?
>
>About once every two to three weeks. I currently have about 230k
>messages in my database, and get about 350 new emails each day.
>
>The puzzling thing (I do not understand since I'm no DB guru) is that
>after compacting, the database is just slightly over 1 GB in size. Yet,
>it fills to nearly 2 GB in the mentioned three weeks. This is funny
>because I sure do not receive 200k messages in three weeks (which is
>what my db contains in about 1 GB of space after compacting). It looks
>like the DB grows not linear with its amount of mail stored in it, but
>at least quadratically or "near" exponentially when not compacting it.
>
>This is also a technical question to the CTM team - why does my DB grow
>that fast, and why am I able to shrink it to about 55% its former size
>only by compacting the additional 10k emails of those mentioned three weeks?
>
>I'd guess I should be able to get about at least 6-9 months worth of
>email before compacting is required again, yet - I don't :-((
>
>Regards, Christian.
>
>


Steve Abrahamson
Ascending Technologies
FileMaker 7 Certified Developer
        http://www.asctech.com
        [EMAIL PROTECTED]


