On Tue, Feb 28, 2006 at 03:11:50PM -0800, Craig Barratt wrote:

> I should probably change BackupPC to use 2^30 too; I don't know
> why I picked 1000 * 2^20 - maybe that's what disk drives use
> for raw capacity?

I would suggest switching to 2^30, since that is what most tools report.
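To make the difference concrete, here is a small sketch (my own illustration, not BackupPC code) comparing the two "GB" definitions being discussed; the 16 GiB pool size is a made-up example:

```python
GIB = 2**30              # 1073741824 bytes: binary gibibyte, what df/du use
GB_MIXED = 1000 * 2**20  # 1048576000 bytes: the mixed unit BackupPC picked

size_bytes = 16 * GIB    # hypothetical 16 GiB pool

print(size_bytes / GIB)       # 16.0
print(size_bytes / GB_MIXED)  # 16.384: same pool looks larger in mixed units
```

The two units differ by 1024/1000, so every figure reported in the mixed unit runs about 2.4% high relative to what df shows.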

> The pool size is the sum of the number of blocks used for each
> file in the pool.  In general that should be less than reported
> by "df" since the pc directories contain full directory trees
> (just containing hardlinks of course) and Xfer log files that
> occupy some disk space.

There is going to be overhead for the directories, but some filesystems
(notably reiserfs) store the tails of files without requiring a full block
for them.  I've found that, so far, the space used by the directories (which
are also stored fairly efficiently on reiserfs) is quite a bit less than the
space saved by the more efficient storage of tails.
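A rough sketch of the effect, assuming a 4 KiB block size and a hypothetical set of file sizes (neither comes from the post); on a filesystem without tail packing, every file's final partial block still occupies a whole block:

```python
import math

BLOCK = 4096  # assumed filesystem block size in bytes

def allocated(size, block=BLOCK):
    """Bytes consumed when a file is rounded up to whole blocks."""
    return math.ceil(size / block) * block if size else 0

# Hypothetical file sizes in a pool, in bytes.
sizes = [100, 5000, 4096, 12345]

no_tail_packing = sum(allocated(s) for s in sizes)  # block-rounded total
tail_packing = sum(sizes)  # idealized: tails stored without a full block

print(no_tail_packing)                 # 32768
print(no_tail_packing - tail_packing)  # 11227 bytes of tail overhead avoided
```

On average roughly half a block per file is wasted without tail packing, which is why a pool of many small files can show a noticeable du/df gap in reiserfs's favor.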

I'm not going to run du again since it takes a very long time, but the last
time I did, there was about 1.5 GB of difference between du and df (out of
16 GB total).  Each backup tree took about 30-100 MB, so the two effects
would even each other out after 20 or so backups.
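The break-even estimate follows from the figures above; taking the midpoint of the 30-100 MB per-backup range is my own simplification:

```python
space_saved_mb = 1.5 * 1024      # ~1.5 GB du/df gap in tail packing's favor
per_backup_mb = (30 + 100) / 2   # midpoint of the per-backup-tree overhead

# Number of backup trees whose directory overhead consumes the savings.
print(space_saved_mb / per_backup_mb)  # roughly 24, i.e. "20 or so" backups
```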

Dave


_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/
