Output from df -h looks like the following:
df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda6             129G   23G  100G  19% /
/dev/sda1              99M   27M   68M  29% /boot
none                 1006M     0 1006M   0% /dev/shm
/dev/sda3              49G  109M   46G   1% /home
/dev/sda2              97G  2.5G   89G   3% /var
/dev/sdb1             459G  131G  305G  31% /BACKUPS
The FileSet looks like the following:
# List of files to be backed up
FileSet {
  Name = "aname"
  Include {
    Options {
      signature = MD5
    }
    File = /
    File = /boot
    File = /var
    File = /home
  }
  Exclude {
    File = /proc
    File = /tmp
    File = /sys
    File = dev
    File = /.journal
    File = /.fsck
    File = /mnt
    File = /var/spool/squid
    File = /BACKUPS
  }
}
The backup for this single system goes to disk, and it comes to well over
143GB, yet the actual data being backed up is less than 30GB. Why is this
backup so big?
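One quick sanity check (a sketch; the paths are taken from the FileSet above) is to measure what is actually on the included filesystems. Since /boot, /var, and /home are separate partitions per the df output, the -x flag keeps du from double-counting them under /:

```shell
# Estimate the data the backup should see on each included filesystem.
# -x: stay on one filesystem, so / does not recurse into other mounts
# -s: one summary total per argument; -h: human-readable sizes
du -shx / /boot /var /home
```

If du agrees with the ~30GB figure, one common culprit for the extra space is sparse files (e.g. /var/log/lastlog) being stored fully expanded; Bacula's `sparse = yes` FileSet option is meant to address that.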
With compression enabled, this same backup (to locally attached storage)
takes up around 30GB of space but takes over 8 hours to run. Eight hours
is way too long to back up 30GB.
Similarly, a 10GB NT client backing up across the network to the same
storage and director, with compression enabled, takes about 48 minutes and
uses far less space. What is going on?
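On the speed question, it may help to separate Bacula from the compression itself. Bacula's software compression is gzip/zlib-based and runs on the client CPU, and 30GB in 8 hours works out to roughly 1 MB/s, which is plausible for gzip on a slow or busy CPU. A rough local throughput check (a sketch; the 64MB size is just illustrative):

```shell
# Time how fast this machine can gzip 64MB of incompressible data.
# If this also lands near ~1 MB/s, the client CPU, not Bacula or the
# storage, is the likely bottleneck.
time dd if=/dev/urandom bs=1M count=64 2>/dev/null | gzip -6 > /dev/null
```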
Also, I have used the override Pool values in my schedule. If an
Incremental backup kicks off on schedule but is later upgraded to a Full
backup because no existing Full backup is found, the job still uses the
Incremental Pool. If the job was upgraded to Full, why wasn't the Pool
changed to the Full pool as well?
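As I understand it, the Pool named on a Schedule's Run line is fixed when the job is queued, before the director upgrades the level, which is why the Incremental Pool sticks. The usual workaround is the level-specific pool directives in the Job (or JobDefs) resource, which are applied once the final level is settled. A sketch, assuming hypothetical pool names FullPool and IncrPool:

```
Job {
  Name = "aname-job"
  # Selected by the final job level, including an Incremental
  # that the director upgrades to Full:
  Full Backup Pool = FullPool
  Incremental Backup Pool = IncrPool
  # ... other Job directives (Client, FileSet, Storage, Messages, etc.)
}
```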
Thanks.
Scott
--
_______________________________________________
Bacula-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/bacula-users