Hello,
One "feature" recently added is that Bacula now detects an out-of-space condition on the partition to which it is writing. When this happens, it is pretty much catastrophic, since no matter what Bacula does it cannot add more space. That is what is happening to you. You mention that you "have to change a disk" but give no details. It may be that previous versions allowed you to simply swap out disks; if so, it would help to know exactly what you were doing and what kind of "swappable" disks you were using.
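
To make that concrete: for a File device, the usual way to keep a pool from filling the partition is to cap the volumes so their worst-case total stays below the disk capacity. A sketch, with purely illustrative values (a hypothetical 2 TB partition, not your actual setup):

```
# Illustrative only: pool sized for a hypothetical 2 TB partition.
# Worst-case footprint = Maximum Volumes x Maximum Volume Bytes
#                      = 10 x 190G = 1900G, leaving headroom on the disk.
Pool {
  Name = OffsiteExample
  Pool Type = Copy
  Maximum Volume Bytes = 190G
  Maximum Volumes = 10
}
```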

One small point: strictly speaking, you do not lose data from the backups that fail; rather, the backups fail, and so not all of your data gets saved.

Best regards,
Kern

On 06/14/2018 07:00 AM, Jerry Lowry wrote:
Hi,

Each time I have to change a disk during my offsite backups, I get errors from the running job and it fails. My storage and pool definitions follow below; they have not changed in the last 10 years and worked without errors or problems until I upgraded to Bacula 9.0.3 and migrated the database to MariaDB 10.2.8-1.
If any other configuration files are needed, I can add them. I lose data on each of these backups because of these errors.

Any help with this would be great,

thanks,
jerry

# Definition of file storage device
Storage {
  Name = midswap            # offsite disk
  Address = kilchis               # N.B. Use a fully qualified name here; do not use "localhost"
  SDPort = 9103
  Password = ""
  Device = MidSwap
  Media Type = File
}
# File Pool definition
Pool {
  Name = OffsiteMid
  Pool Type = Copy
  Next Pool = OffsiteMid
  Storage = midswap
  Recycle = yes                       # Bacula can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 30 years         # thirty years
  Maximum Volume Bytes = 1800G       # Limit Volume to disk size
  Maximum Volumes = 10               # Limit number of Volumes in Pool
}
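
As a sanity check on a definition like the one above, the worst-case space the pool can demand is Maximum Volumes times Maximum Volume Bytes. A quick sketch of the arithmetic; the partition size and the exact multiplier for Bacula's "G" suffix are assumptions here, so verify both against your system and the Bacula documentation:

```python
# Sketch: compare a pool's worst-case footprint against the partition size.
G = 2**30  # assumed multiplier for Bacula's "G" suffix; verify in the docs

max_volume_bytes = 1800 * G   # Maximum Volume Bytes from the pool above
max_volumes = 10              # Maximum Volumes from the pool above
partition_bytes = 2000 * G    # hypothetical partition size, not measured

worst_case = max_volume_bytes * max_volumes
print(f"worst-case pool footprint: {worst_case / G:.0f}G")
print(f"fits on partition: {worst_case <= partition_bytes}")
```

With these numbers, even a single full volume nearly fills the assumed partition, and ten of them cannot possibly fit, so the "Limit Volume to disk size" comment is worth double-checking against the real disk.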

---------------

Emails sent when the disk-full condition occurred:

13-Jun 17:52 kilchis JobId 37853: Job BackupUsers.2018-06-12_23.47.07_32 is waiting. Cannot find any appendable volumes.
Please use the "label" command to create a new Volume for:
    Storage:      "MidSwap" (/MidSwap)
    Pool:         OffsiteMid
    Media type:   File

13-Jun 17:52 kilchis JobId 37851: Fatal error: Out of freespace caused End of Volume "homeMS-5" at 981661189531 on device "MidSwap" (/MidSwap). Write of 64512 bytes got 10853.
13-Jun 17:52 kilchis JobId 37851: Elapsed time=02:59:41, Transfer rate=67.40 M Bytes/second
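
The fatal error above shows the partition filling mid-block: Bacula tried to append a 64512-byte block and only 10853 bytes fit before the filesystem ran out of space. The shortfall arithmetic from those two logged numbers:

```python
# Sketch: the short write reported in the "Out of freespace" log line above.
block_size = 64512      # bytes Bacula attempted to write
bytes_written = 10853   # bytes that actually fit before the partition filled
shortfall = block_size - bytes_written
print(f"write fell short by {shortfall} bytes")
```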


12-Jun 23:47 kilchis-dir JobId 37850: Copying using JobId=37780 Job=BackupUsers.2018-06-09_20.05.00_18
13-Jun 14:52 kilchis-dir JobId 37850: Start Copying JobId 37850, Job=CopyHMDiskToDisk.2018-06-12_23.47.07_29
13-Jun 14:52 kilchis-dir JobId 37850: Using Device "Home" to read.
13-Jun 14:52 kilchis JobId 37850: Ready to read from volume "home-6" on File device "Home" (/engineering/Home).
13-Jun 14:52 kilchis JobId 37850: Forward spacing Volume "home-6" to addr=824369125834
13-Jun 17:39 kilchis JobId 37850: End of Volume "home-6" at addr=1503238496266 on device "Home" (/engineering/Home).
13-Jun 17:39 kilchis JobId 37850: Ready to read from volume "home-7" on File device "Home" (/engineering/Home).

13-Jun 17:39 kilchis JobId 37850: Forward spacing Volume "home-7" to addr=215
13-Jun 17:52 kilchis JobId 37850: Error: bsock.c:649 Write error sending 65540 bytes to client:10.20.10.21:9103: ERR=Connection reset by peer
13-Jun 17:52 kilchis JobId 37850: Fatal error: read.c:277 Error sending to File daemon. ERR=Connection reset by peer
13-Jun 17:52 kilchis JobId 37850: Elapsed time=02:59:42, Transfer rate=67.39 M Bytes/second
13-Jun 17:52 kilchis JobId 37850: Error: bsock.c:537 Socket has errors=1 on call to client:10.20.10.21:9103
13-Jun 17:52 kilchis JobId 37850: Error: bsock.c:537 Socket has errors=1 on call to client:10.20.10.21:9103
13-Jun 17:52 kilchis-dir JobId 37850: Error: Bacula kilchis-dir 9.0.6 (20Nov17):

  Build OS:               x86_64-pc-linux-gnu redhat
  Prev Backup JobId:      37780
  Prev Backup Job:        BackupUsers.2018-06-09_20.05.00_18
  New Backup JobId:       37851
  Current JobId:          37850
  Current Job:            CopyHMDiskToDisk.2018-06-12_23.47.07_29
  Backup Level:           Full
  Client:                 kilchis-fd
  FileSet:                "Mid Set" 2011-04-11 13:13:32
  Read Pool:              "HomePool" (From Command input)
  Read Storage:           "home" (From Job resource)
  Write Pool:             "OffsiteMid" (From Command input)
  Write Storage:          "midswap" (From Command input)
  Catalog:                "MyCatalog" (From Client resource)
  Start time:             13-Jun-2018 14:52:20
  End time:               13-Jun-2018 17:52:04
  Elapsed time:           2 hours 59 mins 44 secs
  Priority:               10
  SD Files Written:       1,784,587
  SD Bytes Written:       726,665,971,203 (726.6 GB)
  Rate:                   67383.7 KB/s
  Volume name(s):         homeMS-5
  Volume Session Id:      82
  Volume Session Time:    1528397911
  Last Volume Bytes:      981,661,189,531 (981.6 GB)
  SD Errors:              3
  SD termination status:  Error
  Termination:            *** Copying Error ***




------------------------------------------------------------------------------
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot


_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users