Hello,

On 12/29/2005 1:15 AM, Trevor Morrison wrote:
Arno,

I believe I have fixed part of the problem by using a working copy of a friend's director conf file's FileSet. But when I restore, it fills the mount points /var, /usr, /home and /boot correctly, yet it still overflows the / partition and consequently the restore fails. When I run 'du -ks' from the root, it correctly reports the total bytes of the restore, but / is completely full. I don't get it. Any help is appreciated.

It would be quite interesting to see which files are so much bigger (or more numerous) than before the backup. In the root directory, 'du -ksx *' is one way to find the directories that grow beyond the available space (the -x option keeps du on one filesystem). Once you spot a directory larger than expected, you can investigate further.
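
For illustration (the paths here are generic examples, not taken from your system), a sketch like this narrows down the offender:

```shell
# Summarize each top-level directory, staying on the root filesystem only
# (-x skips /var, /usr, /home and /boot if they are separate partitions),
# then sort numerically so the largest directories come last.
du -kxs /* 2>/dev/null | sort -n

# Then drill into whichever directory is unexpectedly large, e.g.:
du -kxs /var/* 2>/dev/null | sort -n
```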

The other option would be to observe the restoration process itself, e.g. in bconsole use 'sta client=xxx' or 'sta sd=yyy' during the restore to see which files are being written.

Or, the - in my opinion - best solution would be a simple catalog query using bconsole. After entering the 'query' command, I get the following output:
Choose a query (1-20): 6
Enter Client Name: ork-fd
+-------+--------+-------+---------------------+----------+---------------+--------------+
| JobId | Client | Level | StartTime           | JobFiles | JobBytes      | VolumeName   |
+-------+--------+-------+---------------------+----------+---------------+--------------+
|   656 | ork-fd | I     | 2005-01-03 08:45:58 |      397 |   334,860,412 | DAT-120-0008 |
|   672 | ork-fd | F     | 2005-01-04 12:02:53 |  198,254 | 3,212,502,182 | DLT-IV-0009  |
| 1,667 | ork-fd | F     | 2005-04-05 08:20:01 |  189,040 | 3,274,261,262 | DAT-120-0018 |
| 1,667 | ork-fd | F     | 2005-04-05 08:20:01 |  189,040 | 3,274,261,262 | DAT-120-0019 |
| 1,962 | ork-fd | F     | 2005-05-03 08:20:02 |  189,309 | 3,297,982,194 | DLT-IV-0032  |
| 2,327 | ork-fd | F     | 2005-06-07 08:20:04 |  199,933 | 3,519,981,110 | DLT-IV-0026  |
| 2,455 | ork-fd | D     | 2005-06-21 08:20:03 |    5,568 |   464,885,118 | DAT-120-0016 |
| 2,521 | ork-fd | D     | 2005-06-28 08:20:02 |    8,586 |   511,513,697 | DAT-120-0016 |
| 2,589 | ork-fd | F     | 2005-07-05 08:20:04 |  200,255 | 3,524,625,483 | DLT-IV-0029  |
| 2,589 | ork-fd | F     | 2005-07-05 08:20:04 |  200,255 | 3,524,625,483 | DLT-IV-0033  |
| 2,663 | ork-fd | D     | 2005-07-12 08:20:02 |    3,723 |   427,631,422 | DAT-120-0034 |
| 2,763 | ork-fd | D     | 2005-07-19 11:14:52 |   10,633 |   468,652,425 | DAT-120-0021 |
| 2,830 | ork-fd | D     | 2005-07-26 10:56:32 |   15,716 |   477,986,238 | DAT-120-0041 |
| 2,899 | ork-fd | F     | 2005-08-02 08:20:06 |  199,290 | 3,250,178,144 | DLT-IV-0037  |
| 2,954 | ork-fd | D     | 2005-08-23 11:26:53 |    5,599 |   468,837,763 | DAT-120-0041 |
| 3,052 | ork-fd | D     | 2005-08-30 08:25:04 |   13,184 |   590,454,214 | DAT-120-0041 |
| 3,138 | ork-fd | F     | 2005-09-06 08:20:06 |  189,835 | 3,257,083,731 | DLT-IV-0044  |
| 3,211 | ork-fd | D     | 2005-09-13 08:25:04 |    3,833 |   359,171,475 | DAT-120-0043 |
| 3,330 | ork-fd | D     | 2005-09-20 11:27:05 |    9,309 |   461,918,843 | DAT-120-0039 |
| 3,411 | ork-fd | D     | 2005-09-27 08:25:04 |   12,725 |   529,653,164 | DAT-120-0047 |
| 3,424 | ork-fd | I     | 2005-09-28 08:25:04 |    1,007 |   228,831,699 | DAT-120-0022 |
| 3,428 | ork-fd | I     | 2005-09-29 11:33:41 |      660 |   226,641,143 | DAT-120-0022 |
| 3,443 | ork-fd | I     | 2005-09-30 08:25:04 |      376 |   217,538,129 | DAT-120-0022 |
| 3,491 | ork-fd | F     | 2005-10-04 09:58:49 |  180,908 | 3,276,048,100 | DLT-IV-0049  |
| 3,507 | ork-fd | F     | 2005-10-05 10:34:53 |  181,865 | 3,274,143,152 | DLT-IV-0052  |
| 3,507 | ork-fd | F     | 2005-10-05 10:34:53 |  181,865 | 3,274,143,152 | DLT-IV-0053  |
| 3,523 | ork-fd | I     | 2005-10-06 08:25:03 |      305 |   223,397,635 | DAT-120-0022 |
| 3,599 | ork-fd | D     | 2005-10-11 08:25:04 |    1,653 |   261,578,355 | DAT-120-0050 |
| 3,745 | ork-fd | D     | 2005-10-18 08:25:04 |    6,540 |   428,830,405 | DAT-120-0054 |
| 3,841 | ork-fd | D     | 2005-10-25 08:25:04 |   12,031 |   479,907,321 | DAT-120-0054 |
| 3,932 | ork-fd | F     | 2005-11-01 20:46:20 |  185,417 | 3,302,536,319 | DLT-IV-0056  |
| 4,027 | ork-fd | D     | 2005-11-08 08:25:04 |    3,689 |   400,309,630 | DAT-120-0050 |
| 4,112 | ork-fd | D     | 2005-11-15 08:25:04 |    6,874 |   322,347,897 | DAT-120-0052 |
| 4,216 | ork-fd | D     | 2005-11-22 20:00:55 |   10,494 |   368,506,470 | DAT-120-0015 |
| 4,301 | ork-fd | D     | 2005-11-29 08:25:04 |   11,539 |   418,006,263 | DAT-120-0005 |
| 4,377 | ork-fd | F     | 2005-12-06 08:20:05 |  190,443 | 3,217,870,655 | DLT-IV-0058  |
| 4,466 | ork-fd | D     | 2005-12-14 00:40:30 |    8,190 |   452,753,034 | DAT-120-0006 |
| 4,503 | ork-fd | I     | 2005-12-16 22:04:54 |    5,709 |   321,723,058 | DAT-120-0024 |
| 4,597 | ork-fd | D     | 2005-12-28 01:46:29 |   17,002 |   532,750,818 | DAT-120-0028 |
| 4,613 | ork-fd | I     | 2005-12-28 21:54:38 |      397 |   274,310,877 | DAT-120-0024 |
+-------+--------+-------+---------------------+----------+---------------+--------------+

So, for example, any restore of this client that needed much more than about 3 GB would surprise me a lot.

To get more information, you can query the catalog for details. For example, if you discover during a restore that one particular backup job is responsible for the partition overflow, you can query the catalog to list all files associated with that job.
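
For example (assuming the standard Bacula catalog schema with its File, Filename and Path tables; JobId 2589 is just one of the full jobs from the listing above), a query of this shape lists everything a restore of that job would write back:

```sql
-- List every file recorded for one job; run it via bconsole's
-- 'sqlquery' command or directly in the catalog database.
SELECT Path.Path, Filename.Name
  FROM File, Filename, Path
 WHERE File.JobId = 2589
   AND File.FilenameId = Filename.FilenameId
   AND File.PathId = Path.PathId;
```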

The only explanation I see for the behaviour you describe would be backing up a directory with lots of file renames, removals or replacements. For example, imagine the following:

1. Create file A, 1 GB in size; run a backup.
2. Delete file A; create file B, 1 GB; run a backup.
3. Delete B; create file C, 1 GB; run a backup.
4. Restore everything.

Files A, B and C will all be restored, and they will consume 3 GB of disk space, where before only 1 GB was in use at any one time.
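
The mechanism is easy to reproduce in miniature; this sketch (throwaway temp files standing in for the 1 GB ones) shows how the union of the three backups is three times the live data:

```shell
# Simulate three backup cycles in which a large file is deleted and
# replaced between runs.  Each "backup" is a plain copy here; restoring
# a full backup plus its incrementals behaves like restoring the union.
tmp=$(mktemp -d)
mkdir "$tmp/data" "$tmp/restore"

for f in A B C; do
    rm -f "$tmp/data"/*                # the previous file is deleted...
    echo "payload" > "$tmp/data/$f"    # ...and replaced by a new one
    cp "$tmp/data"/* "$tmp/restore"    # each backup adds to the union
done

ls "$tmp/data"       # only C is live on the client
ls "$tmp/restore"    # A, B and C all come back in the restore
rm -rf "$tmp"
```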

DVD images in the temp directory or something in /var might behave like this...

Arno


TIA,

Trevor

Arno Lehmann wrote:

Hello,

On 12/28/2005 12:37 AM, Trevor Morrison wrote:

Arno,

I can send the job output, which is 40 MB (2.97 MB compressed), if your email will accept something that large. Let me know.



Quite large... 40 MB Job report output?

I could accept such a mail, but I doubt that I will find the time to read through it...

What do the job reports for the backup jobs you want to restore say they saved? And what does the catalog query tell you about the amount of data belonging to that client?

And, of course, you could post the report to a website and only mail the URL.

Arno


Thanks,

Trevor

Arno Lehmann wrote:

Hello,

On 12/27/2005 10:57 PM, Trevor Morrison wrote:

Hi,

I have no problems backing up a particular working server (RH 9) with about 3.2 GB of data. I am testing the restore of this backup on another box; it will restore to the test box, but it tries to write 16 GB of data instead of just 3.2 GB! And it is only a 10 GB drive to begin with. I have attached my bacula-fd and director .conf files in case they help find a solution.





It would be helpful to see some more detailed output, for example some job report data. Perhaps there *are* much more than 3 GB of data for a full restore... sparse files, hard links and a huge amount of changing data can lead to what you observe.

Arno

TIA,


------------------------------------------------------------------------

#
# Default  Bacula File Daemon Configuration file
#
#  For Bacula release 1.36.3 (22 April 2005) -- redhat (Stentz)
#
# There is not much to change here except perhaps the
# File daemon Name to
#

#
# List Directors who are permitted to contact this File daemon
#
Director {
  Name = stud-dir
  Password = "HR1kfYn1dlc2enm1+mwg3yT3cNNfXqt9FitHbxVPmvPB"
}

#
# Restricted Director, used by tray-monitor to get the
#   status of the file daemon
#
Director {
  Name = stud-mon
  Password = "VrP8lmsqatXuth6F7sQMgJFWV87OdQeC1SHZLP0/5nEw"
  Monitor = yes
}

#
# "Global" File daemon configuration specifications
#
FileDaemon {
  Name = stud-fd
  FDport = 9102                  # where we listen for the director
  WorkingDirectory = /usr/bacula/bin/working
  Pid Directory = /usr/bacula/bin
  Maximum Concurrent Jobs = 20
}
FileDaemon {
  Name = porthos-fd
  FDport = 9102                  # where we listen for the director
  WorkingDirectory = /usr/bacula/bin/working
  Pid Directory = /usr/bacula/bin
  Maximum Concurrent Jobs = 20
}
FileDaemon {
  Name = hailee1-fd
  FDport = 9102                  # where we listen for the director
  WorkingDirectory = /usr/bacula/bin/working
  Pid Directory = /usr/bacula/bin
  Maximum Concurrent Jobs = 20
}
# Send all messages except skipped files back to Director
Messages {
  Name = Standard
  director = stud-dir = all, !skipped
}


------------------------------------------------------------------------

#
# Default Bacula Director Configuration file
#
#  The only thing that MUST be changed is to add one or more
#   file or directory names in the Include directive of the
#   FileSet resource.
#
#  For Bacula release 1.36.3 (22 April 2005) -- redhat (Stentz)
#
#  You might also want to change the default email address
#   from root to your address.  See the "mail" and "operator"
#   directives in the Messages resource.
#

Director {                            # define myself
  Name = stud-dir
  DIRport = 9101                # where we listen for UA connections
  QueryFile = "/usr/bacula/bin/query.sql"
  WorkingDirectory = "/usr/bacula/bin/working"
  PidDirectory = "/usr/bacula/bin"
  Maximum Concurrent Jobs = 1
  Password = "QiL5p4zWezlcHnUWGrMzPF+tIU30zZFsi0VSIpejHsdO" # Console password
  Messages = Daemon
}

Schedule {
  Name = "Daily"
  Run = Full Fri at 10:00am
  Run = Differential sat-thu at 9:00am
}

Schedule {
  Name = "Catalog"
  Run = Full mon-sun at 11:00am
}

JobDefs {
  Name = "Defaults"
  Type = Backup
  Schedule = "Daily"
  Storage = 8mmDrive
  Messages = Standard
  Pool = Daily
  Priority = 10
}

#
# Define the main nightly save backup job
#   By default, this job will back up to disk in /tmp
Job {
  Name = "stud"
  JobDefs = "Defaults"
  Client=stud-fd
  FileSet="Full Set Stud"
  Write Bootstrap = "/usr/bacula/bin/working/stud.bsr"
}

Job {
  Name = "Porthos"
  JobDefs = "Defaults"
  Client=porthos-fd
  FileSet="Full Set Porthos"
  Write Bootstrap = "/usr/bacula/bin/working/porthos.bsr"
}

Job {
  Name = "Hailee1"
  JobDefs = "Defaults"
  Client=hailee1-fd
  FileSet="Full Set Hailee1"
  Write Bootstrap = "/usr/bacula/bin/working/hailee1.bsr"
}

# Backup the catalog database (after the nightly save)
Job {
  Name = "BackupCatalog"
  JobDefs = "Defaults"
  Client=stud-fd
  FileSet="Catalog"
  Schedule = "Catalog"
  # This creates an ASCII copy of the catalog
  RunBeforeJob = "/usr/bacula/bin/make_catalog_backup bacula bacula"
  # This deletes the copy of the catalog
  RunAfterJob  = "/usr/bacula/bin/delete_catalog_backup"
  Write Bootstrap = "/usr/bacula/bin/working/BackupCatalog.bsr"
  Priority = 15
}

# Standard Restore template, to be changed by Console program
Job {
  Name = "Restore testbox"
  Type = Restore
  Client = testbox-fd
  FileSet = "Full Set Hailee1"
  Storage = 8mmDrive
  Pool = Daily
  Messages = Standard
  Where = /
}


# List of files to be backed up
FileSet {
  Name = "Full Set Stud"
  Include {
    Options {
      signature = MD5
      onefs = yes
    }
    File = /
    File = /var
    File = /usr
    File = /storage
    File = /home
    File = /boot
  }
  Exclude {
    File = /tmp/*
    File = /.journal
    File = /.fsck
  }
}

# List of files to be backed up
FileSet {
  Name = "Full Set Porthos"
  Include {
    Options {
      signature = MD5
      onefs = yes
    }
    File = /
    File = /var
    File = /usr
    File = /home
    File = /boot
    File = /chroot

  }
  Exclude {
    File = /mnt/porthos/*
    File = /tmp/*
    File = /.journal
    File = /.fsck
  }
}

# List of files to be backed up
FileSet {
  Name = "Full Set Hailee1"
  Include {
    Options {
      signature = MD5
      wilddir = /mnt/hailee
      wilddir = /proc
      wilddir = /tmp
      wilddir = /.journal
      wilddir = /.fsck
      exclude = yes
    }
    File = /
    File = /var
    File = /usr
    File = /home
    File = /boot
   }
  Exclude {
    File = /mnt
    File = /tmp/*
    File = /.journal
    File = /.fsck
  }
}

# This is the backup of the catalog
FileSet {
  Name = "Catalog"
  Include {
    Options {
      signature = MD5
    }
    File = /usr/bacula/bin/working/bacula.sql
  }
}

# Client (File Services) to backup
Client {
  Name = stud-fd
  Address = stud.hailix.com
  FDPort = 9102
  Catalog = MyCatalog
  Password = "HR1kfYn1dlc2enm1+mwg3yT3cNNfXqt9FitHbxVPmvPB" # password for FileDaemon
  File Retention = 8 days
  Job Retention = 9 days
  AutoPrune = yes                     # Prune expired Jobs/Files
}

Client {
  Name = porthos-fd
#  Address = porthos.hailix.com
   Address = 172.16.1.2
  FDPort = 9102
  Catalog = MyCatalog
  Password = "HR1kfYn1dlc2enm1+mwg3yT3cNNfXqt9FitHbxVPmvPB" # password for FileDaemon
  File Retention = 8 days
  Job Retention = 9 days
  AutoPrune = yes                     # Prune expired Jobs/Files
}
Client {
  Name = hailee1-fd
 # Address = hailee1.hailix.com
   Address = 172.16.1.3
  FDPort = 9102
  Catalog = MyCatalog
  Password = "HR1kfYn1dlc2enm1+mwg3yT3cNNfXqt9FitHbxVPmvPB" # password for FileDaemon
  File Retention = 8 days
  Job Retention = 9 days
  AutoPrune = yes                     # Prune expired Jobs/Files
}

Client {
  Name = testbox-fd
  Address = 192.168.2.250
  FDPort = 9102
  Catalog = MyCatalog
  #Password = "HR1kfYn1dlc2enm1+mwg3yT3cNNfXqt9FitHbxVPmvPB" # password for FileDaemon
  Password = ""                       # password for FileDaemon
  File Retention = 8 days
  Job Retention = 9 days
  AutoPrune = yes                     # Prune expired Jobs/Files
}

Storage {
  Name = "8mmDrive"
#  Do not use "localhost" here
#  Address = stud.hailix.com          # N.B. Use a fully qualified name here
  Address = 192.168.2.10              # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = "lAskEHzAgzWVO3xgy0DAPH8rBJ5pocgaXMRiRQmmNK5u"
  Device = "Exabyte 8mm"
  MediaType = "8mm"
}


# Generic catalog service
Catalog {
  Name = MyCatalog
  dbname = bacula; user = bacula; password = ""
}

# Reasonable message delivery -- send most everything to email address
#  and to the console
Messages {
  Name = Standard
#
# NOTE! If you send to two email or more email addresses, you will need
#  to replace the %r in the from field (-f part) with a single valid
#  email address in both the mailcommand and the operatorcommand.
#
  mailcommand = "/usr/bacula/bin/bsmtp -h hailee1.hailix.com -f \"\(Bacula\) %r\" -s \"Bacula: %t %e of %c %l\" %r"
  operatorcommand = "/usr/bacula/bin/bsmtp -h hailee1.hailix.com -f \"\(Bacula\) %r\" -s \"Bacula: Intervention needed for %j\" %r"
  mail = [EMAIL PROTECTED] = all, !skipped
  operator = [EMAIL PROTECTED] = mount
  console = all, !skipped, !saved
#
# WARNING! the following will create a file that you must cycle from
#          time to time as it will grow indefinitely. However, it will
#          also keep all your messages if they scroll off the console.
#
  append = "/usr/bacula/bin/working/log" = all, !skipped
}


#
# Message delivery for daemon messages (no job).
Messages {
  Name = Daemon
  mailcommand = "/usr/bacula/bin/bsmtp -h hailee1.hailix.com -f \"\(Bacula\) %r\" -s \"Bacula daemon message\" %r"
  mail = [EMAIL PROTECTED] = all, !skipped
  console = all, !skipped, !saved
  append = "/usr/bacula/bin/working/log" = all, !skipped
}



# Default pool definition
Pool {
  Name = Daily
  Pool Type = Backup
  Recycle = yes                       # Bacula can automatically recycle Volumes
  Recycle Current Volume = yes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 10 days
  Accept Any Volume = yes             # write on any volume in the pool
}

#
# Restricted console used by tray-monitor to get the status of the director
#
Console {
  Name = stud-mon
  Password = "S1+f60qKA6UdtDLgmupMxConk1XefZMJ3Z2gtzFiiZjt"
  CommandACL = status, .status
}









--
IT-Service Lehmann                    [EMAIL PROTECTED]
Arno Lehmann                  http://www.its-lehmann.de


_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
