Re: [Bacula-users] Why are copy jobs creating incremental jobs?

2018-06-14 Thread Mariusz Mazur
On Thu, 14 Jun 2018 at 16:56, Martin Simmons wrote:

> I've noticed that too, but I think it might just be a bug in the output of
> status dir (it shows the default level for that job name from the config file
> instead of the actual level).  If you check the catalog with "list jobs",
> you will probably find that they are full jobs.
>

Yup, you're right: in the db they're all marked as copy/full and not
backup/incremental as the emails and director claim. Weird. (Also, this does
not happen with all jobs; the copy of my catalog was correctly marked as full.)
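
For anyone wanting to double-check the same thing, a minimal query against
the catalog (PostgreSQL in this setup, judging by the ::interval cast in the
selection pattern; the columns are from the stock job table) is something like:

    -- type/level as actually recorded, newest jobs first
    select jobid, name, type, level, jobstatus, jobbytes
      from job
     order by jobid desc
     limit 30;

The type and level columns there reflect what the catalog stores, independent
of what status dir or the job report emails display.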

Anyway, seems it's a known issue: http://bugs.bacula.org/view.php?id=2286


Re: [Bacula-users] Why are copy jobs creating incremental jobs?

2018-06-14 Thread Martin Simmons
> On Wed, 13 Jun 2018 17:14:23 +0200, Mariusz Mazur said:
> 
> Hi, I'm running bacula 7.4.7 and have a job to occasionally copy recent
> full jobs from my full pool to my tapes (Full-Pool -> Tape-Pool).
> 
> The gist of the copy job is here:
> 
> Name = CopyFull2Tape
> Type = Copy
> Level = Full
> Pool = Full-Pool
> Selection Type = SQL Query
> Selection Pattern = "
> select max(j.jobid) from job j, pool p where
> p.name='Full-Pool' and j.poolid=p.poolid and
> j.jobstatus='T' and j.type='B' and j.level='F' and j.jobbytes>0 and
> starttime>now()-'3 weeks'::interval
> group by j.name;"
> 
> So I'm explicitly only copying completed full jobs. And yet, the director
> gives me this:
> 
> 27888  Copy Full  0 0  CopyFull2Tape  is waiting on max Storage jobs
> 27902  Copy Full  0 0  CopyFull2Tape  is running
> 27903  Back Incr  0 0  ca2-regular    is running
> 27904  Copy Full  0 0  CopyFull2Tape  is waiting on max Storage jobs
> 27905  Back Incr  0 0  db5-regular    is waiting execution
> 27906  Copy Full  0 0  CopyFull2Tape  is waiting on max Storage jobs
> 27907  Back Incr  0 0  dbc1n1-cfg     is waiting execution
> 27908  Copy Full  0 0  CopyFull2Tape  is waiting on max Storage jobs
> 
> What are those 'Back Incr' jobs? It's confusing.

I've noticed that too, but I think it might just be a bug in the output of
status dir (it shows the default level for that job name from the config file
instead of the actual level).  If you check the catalog with "list jobs",
you will probably find that they are full jobs.

__Martin



Re: [Bacula-users] ongoing problems with v 9.0.3

2018-06-14 Thread Phil Stracchino
On 06/14/18 01:00, Jerry Lowry wrote:
> Hi,
> 
> Each time I have to change a disk during my offsite backups, I get errors
> from the job that is running and it fails. My storage and pool
> definitions follow below; they have not changed for the last 10 years
> and had been working without any errors or problems until I upgraded
> to 9.0.3 of bacula and migrated the database to MariaDB 10.2.8-1.
> If any other configuration files are needed I can add them.  I lose
> data on each of these backups because of these errors.


If this configuration was working for you before, you are likely running
into a volume-change bug (fixed, I think, in 9.0.4) in which Bacula polled
the storage device thousands of times a second during volume changes and
then failed the job.

You should update to 9.0.6 or later and retry.
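
A quick way to confirm what the Director is actually running after the
upgrade is the bconsole "version" command (the "status dir" output also
shows the version at the top):

    *version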


-- 
  Phil Stracchino
  Babylon Communications
  ph...@caerllewys.net
  p...@co.ordinate.org
  Landline: +1.603.293.8485
  Mobile:   +1.603.998.6958



Re: [Bacula-users] ongoing problems with v 9.0.3

2018-06-14 Thread Kern Sibbald

Hello,

One "feature" recently added is that Bacula now detects an out-of-space
condition on the partition it is writing to.  When this happens it is pretty
much catastrophic, since no matter what Bacula does it cannot add additional
space.  This is what is happening to you.  You mention that you "have to
change a disk" but do not give any details.  It may be that previous versions
allowed you to simply swap out disks; if that was the case, it would help to
know exactly what you were doing and what kind of "swappable" disks you were
using.
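
If the offsite disk is removable file storage, one thing that sometimes
matters (this is only a sketch; the "midswap" storage name is taken from the
configuration quoted below) is whether the SD is told about the swap
explicitly from bconsole rather than the disk simply being pulled and
replaced:

    *unmount storage=midswap
    (physically swap the disk and remount it at the same path)
    *mount storage=midswap

Knowing whether something like that was already part of your procedure would
help narrow this down.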
  
One small point: strictly speaking you do not lose data on these backups that
fail, but the backups do fail and thus not all of your data is saved.
  
Best regards,
Kern
  
On 06/14/2018 07:00 AM, Jerry Lowry wrote:

Hi,

Each time I have to change a disk during my offsite backups, I get errors
from the job that is running and it fails. My storage and pool definitions
follow below; they have not changed for the last 10 years and had been
working without any errors or problems until I upgraded to 9.0.3 of bacula
and migrated the database to MariaDB 10.2.8-1.
If any other configuration files are needed I can add them.  I lose data
on each of these backups because of these errors.

Any help with this would be great,

thanks,
jerry

# Definition of file storage device
Storage {
  Name = midswap                 # offsite disk
  # Do not use "localhost" here
  #Address = kilchis             # N.B. Use a fully qualified name here
  Address = kilchis              # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = ""
  Device = MidSwap
  Media Type = File
}

# File Pool definition
Pool {
  Name = OffsiteMid
  Pool Type = Copy
  Next Pool = OffsiteMid
  Storage = midswap
  Recycle = yes                  # Bacula can automatically recycle Volumes
  AutoPrune = yes                # Prune expired volumes
  Volume Retention = 30 years    # thirty years
  Maximum Volume Bytes = 1800G   # Limit Volume to disk size
  Maximum Volumes = 10           # Limit number of Volumes in Pool
}



---

Emails sent at the disk-full message:

13-Jun 17:52 kilchis JobId 37853: Job BackupUsers.2018-06-12_23.47.07_32 is waiting. Cannot find any appendable volumes.
Please use the "label" command to create a new Volume for:
    Storage:    "MidSwap" (/MidSwap)
    Pool:       OffsiteMid
    Media type: File

13-Jun 17:52 kilchis JobId 37851: Fatal error: Out of freespace caused End of Volume "homeMS-5" at 981661189531 on device "MidSwap" (/MidSwap). Write of 64512 bytes got 10853.
13-Jun 17:52 kilchis JobId 37851: Elapsed time=02:59:41, Transfer rate=67.40 M Bytes/second

12-Jun 23:47 kilchis-dir JobId 37850: Copying using JobId=37780 Job=BackupUsers.2018-06-09_20.05.00_18
13-Jun 14:52 kilchis-dir JobId 37850: Start Copying JobId 37850, Job=CopyHMDiskToDisk.2018-06-12_23.47.07_29
13-Jun 14:52 kilchis-dir JobId 37850: Using Device "Home" to read.
13-Jun 14:52 kilchis JobId 37850: Ready to read from volume "home-6" on File device "Home" (/engineering/Home).
13-Jun 14:52 kilchis JobId 37850: Forward spacing Volume "home-6" to addr=824369125834
13-Jun 17:39 kilchis JobId 37850: End of Volume "home-6" at addr=1503238496266 on device "Home" (/engineering/Home).
13-Jun 17:39 kilchis JobId 37850: Ready to read from volume "home-7" on File device "Home" (/engineering/Home).
13-Jun 17:39 kilchis JobId 37850: Forward spacing Volume "home-7" to addr=215
13-Jun 17:52 kilchis JobId 37850: Error: bsock.c:649 Write error sending 65540 bytes to client:10.20.10.21:9103: ERR=Connection reset by peer
13-Jun 17:52 kilchis JobId 37850: Fatal error: