Re: [Bacula-users] Correct way to exclude directories on Windows

2016-10-25 Thread Luc Van der Veken
Martin Simmons said:
> It looks like you didn't try
>
> exclude {
>   file = "C:/System Volume Information"
> }
>

I could have sworn that the documentation said you must append a
trailing slash if you want to indicate a directory. I checked it out,
and it's exactly the opposite: it explicitly tells you NOT to append
one, in at least four places.

Sorry for bothering you with this; feel free to just tell me to RTFM
better :)



What I remembered (correctly), and where I got things mixed up, is that
the documentation also says
>> 5. When using wild-cards or regular expressions, directory names are
>> always terminated with a slash (/) and filenames have no trailing
>> slash.

*When using wild-cards or regular expressions* should be clear enough, but
it says that in a section of text between a bold-faced title "Exclude {
file-list }" and the next section, which is about Options.






Re: [Bacula-users] Correct way to exclude directories on Windows

2016-10-24 Thread Luc Van der Veken
I wrote

> culprit was a file "D:\Documents and Settings\{guid}" of 18 GB that
> changed every day.

Oops, sorry, I meant "D:\System Volume Information\28{guid}".





[Bacula-users] Correct way to exclude directories on Windows

2016-10-24 Thread Luc Van der Veken
Hi all,
What is the correct way to exclude a directory on Windows in an Exclude
section?

I just found out that none of these works; the directory is always
included.
Exclusion seems to work as expected for files, but not for directories
(slash at the end).

exclude {
  file = "System Volume Information/"
  file = System Volume Information/
  file = "System\ Volume\ Information/"
  file = "C:/System\ Volume\ Information/"
}

But this works; is that the only way?

include {
  options {
    exclude = yes
    wilddir = "*:/System Volume Information"
  }
  File = 
}

To my shame I must admit that I've been backing up older Windows versions
(Server 2003, XP, Windows 7, etc.) for 3 years using the first variation
above, without noticing that the directory was included.

It's only after a Windows 10 PC was added a week ago that I started
noticing something off. Incremental backups of 20 GB per day, every day,
even when the system had been powered off all day and was just powered on
by Wake-On-Lan at night to back it up, didn't seem quite normal. The
culprit was a file "D:\Documents and Settings\{guid}" of 18 GB that
changed every day.





Re: [Bacula-users] power failure problem

2015-09-02 Thread Luc Van der Veken
> I don't understand why it wants to write on drive 1 and then tries to load a 
> tape on drive 0!

Notice the hyphens: drive 0 (/dev/nst0) seems to have the tag "Drive-1" stuck 
onto it, and drive 1 the same with "Drive-2".

Your problem is probably that the power failure occurred halfway through a write 
operation, and now the drive can't locate the end of data anymore.

I suggest checking the tape condition with btape (commands like status, 
readlabel, eod, scan).
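
A minimal sketch of such a btape session (the config path and device name are 
examples; stop the SD first so btape has exclusive access to the drive):

btape -c /etc/bacula/bacula-sd.conf /dev/nst0
*status      # report drive and device status
*readlabel   # read and print the volume label
*eod         # try to position to end-of-data
*quit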



-Original Message-
From: Kevin I. Hodges [mailto:k.i.hod...@reading.ac.uk] 
Sent: 02 September 2015 22:06
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] power failure problem

hi

I've had Bacula version 7.0.5 running for some time, happily doing my 
backups to a twin-drive IBM autochanger. However, over the weekend we had a 
power failure and Bacula seems to have really got its knickers in a twist. The 
autochanger itself seems to be fine: I can load and unload tapes with no 
problems and all the volumes seem to be in the correct slot locations. However, 
when I try to run a backup job I get the following errors and no backup:

02-Sep 19:35 swlx1.rdg.ac.uk-dir JobId 396: Start Backup JobId 396, 
Job=BackupClient1.2015-09-02_19.35.04_34
02-Sep 19:35 swlx1.rdg.ac.uk-dir JobId 396: Using Device "Drive-1" to write.
02-Sep 19:35 swlx1.rdg.ac.uk-sd JobId 396: 3304 Issuing autochanger "load slot 
3, drive 0" command.
02-Sep 19:40 swlx1.rdg.ac.uk-sd JobId 396: Fatal error: 3992 Bad autochanger 
"load slot 3, drive 0": ERR=Child died from signal 15: Termination.
Results=Program killed by Bacula (timeout)

02-Sep 19:40 swlx1.rdg.ac.uk-fd JobId 396: Fatal error: job.c:2444 Bad response 
from SD to Append Data command. Wanted 3000 OK data
, got 3903 Error append data:

02-Sep 19:40 swlx1.rdg.ac.uk-dir JobId 396: Error: Bacula swlx1.rdg.ac.uk-dir 
7.0.5 (28Jul14):
  Build OS:   x86_64-unknown-linux-gnu redhat Enterprise release
  JobId:  396
  Job:BackupClient1.2015-09-02_19.35.04_34
  Backup Level:   Incremental, since=2015-08-27 19:05:03
  Client: "swlx1.rdg.ac.uk-fd" 7.0.5 (28Jul14) 
x86_64-unknown-linux-gnu,redhat,Enterprise release
  FileSet:"Athena" 2015-06-04 19:05:00
  Pool:   "Default" (From Job resource)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"DigitalTapeLibrary" (From Job resource)
  Scheduled time: 02-Sep-2015 19:35:00
  Start time: 02-Sep-2015 19:35:06
  End time:   02-Sep-2015 19:40:07
  Elapsed time:   5 mins 1 sec
  Priority:   10
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s):
  Volume Session Id:  1
  Volume Session Time:1441218705
  Last Volume Bytes:  7,912,094,883,840 (7.912 TB)
  Non-fatal FD errors:1
  SD Errors:  1
  FD termination status:  Error
  SD termination status:  Error
  Termination:*** Backup Error ***

after this the tape from slot 3 is loaded in drive 0:

Data Transfer Element 0:Full (Storage Element 3 Loaded):VolumeTag = MD0011L6

the storage status after this is:

Device status:
Autochanger "DigitalTapeLibrary" with devices:
   "Drive-1" (/dev/nst0)
   "Drive-2" (/dev/nst1)

Device "Drive-1" (/dev/nst0) is not open.
Drive 0 is not loaded.
==

Device "Drive-2" (/dev/nst1) is not open.
Drive 1 is not loaded.
==


Used Volume status:
Reserved volume: MD0011L6 on tape device "Drive-1" (/dev/nst0)
Reader=0 writers=0 reserves=0 volinuse=1

I don't understand why it wants to write on drive 1 and then tries to load a 
tape on drive 0!

I've seen several posts that seem to describe similar problems but no solutions. 
Does anyone have any idea how to recover from this situation? Any help 
gratefully received.

Kevin



Re: [Bacula-users] fileset which only compresses files which are not already compressed like gzip, jpeg, mpeg

2015-08-26 Thread Luc Van der Veken
Yes, exclude will cause the matching files to be excluded.

What I'm not certain of in the configuration I posted is whether it won't 
exclude everything; I never tried working with two separate Options sections in 
the same fileset.

It could be that it will (exclude everything), and that you have to use two 
separate 'Include' sections, each with one of the 'Options' sections, to get 
the effect you wanted (or at least what I think you wanted).

Like this:

Fileset {
  Name = "FullSet"
  Include {
    Options {
      RegexFile = regex_for_uncompressed_files
      Exclude = yes
    }
    File = /
  }
  Include {
    Options {
      Compression = gzip
      RegexFile = regex_for_compressed_files
      Exclude = yes
    }
    File = /
  }
}

I think this is the way to avoid that "this configuration does not work" 
comment in the documentation, but I'd have to test it to be certain.
The first Include would handle "everything excluding uncompressed files", i.e. 
all compressed files, and back them up without compression.
The second would include "everything excluding compressed files", i.e. all 
uncompressed files, and back them up with compression.
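
Another way to get the same effect in a single Include, relying on Bacula 
applying Options blocks in order (the first block whose pattern matches a file 
determines its options), would be something like this; the extension regex is 
illustrative, borrowed and extended from Ana's example below, and just as 
untested:

FileSet {
  Name = "Full Set"
  Include {
    Options {
      # Already-compressed files match here first and get no compression.
      RegexFile = "\.([gG]?[zZ][iI][pP]|[jJ][pP][eE]?[gG]|[mM][pP][eE]?[gG])$"
    }
    Options {
      # Everything else falls through to this block and is gzipped.
      Compression = gzip
      Wild = "*"
    }
    File = /
  }
}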


From: Martin Feldbacher [mailto:martin.feldbac...@stegbauer.info]
Sent: 26 August 2015 16:28
To: Luc Van der Veken 
Cc: bacula-users 
Subject: Re: [Bacula-users] fileset which only compresses files which are not 
already compressed like gzip, jpeg, mpeg

Hi Luc,

doesn't mean the Exclude-option, that the files found with the 
"RegexFile"-option will be excluded from backup? or how can I understand this 
exclude option?

________
From: "Luc Van der Veken" <luc...@wimionline.com>
To: "bacula-users" <bacula-users@lists.sourceforge.net>
Sent: Wednesday, 26 August 2015 08:53:46
Subject: Re: [Bacula-users] fileset which only compresses files which are not 
already compressed like gzip, jpeg, mpeg

Hi Ana,

Won’t your solution exclude compressed files, instead of including them without 
a second round of compression?
I think if it can be done, what the OP asked, the right approach would be 
closer to his own, just using ‘RegexFile’ instead of ‘RegexDir’.

I would have tried something like this (untried, untested, probably wrong):

Fileset {
  Name = "FullSet"
  Include {
    Options {
      RegexFile = regex_for_uncompressed_files
      Exclude = yes
    }
    Options {
      Compression = gzip
      RegexFile = regex_for_compressed_files
      Exclude = yes
    }
    File = /
  }
}

A bit of De Morganized Boolean logic should circumvent the problem that all 
files not matched by any Options directive are included by default.
The first Options will exclude all uncompressed files – meaning include all 
compressed, and back them up without compression.
The second will exclude all compressed files – meaning include all uncompressed 
ones, and back them up with compression.


From: Ana Emília M. Arruda [mailto:emiliaarr...@gmail.com]
Sent: 25 August 2015 19:50
To: Martin Feldbacher <martin.feldbac...@stegbauer.info>
Cc: Bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] fileset which only compresses files which are not 
already compressed like gzip, jpeg, mpeg

Hello Martin,

FileSet {
  Name = "Full Set"
  Include {
    Options {
      compression = gzip
    }
    Options {
      RegexFile = "\.[gG]?[zZ][iI][pP]"
      RegexFile = "\.[jJ][pP][eE]?[gG]"
      exclude = yes
    }
    File = /
  }
}

This should work.

Best regards,
Ana

On Tue, Aug 25, 2015 at 11:10 AM, Martin Feldbacher 
<martin.feldbac...@stegbauer.info> wrote:
Hello,

I'm searching help with a fileset which only compresses files which are not 
already compressed (like gzip,jpeg,mpeg and so on) in my whole root directory..
my first idea was the following:


FileSet {
  Name = "Full Set"
  Include {
    Options {
      RegexDir = regex for all files with ending .gzip, .zip, .jpeg, and so on
    }
    Options {
      RegexDir = inverted regex from above, don't know if this works
      compression = gzip
    }
    File = /
  }
}

but then I saw the examples at 
http://www.bacula.org/5.2.x-manuals/en/main/main/Configuring_Director.html 
showing that this doesn't work..

does anybody have an idea how to solve this without creating two filesets for 
one client?

Re: [Bacula-users] fileset which only compresses files which are not already compressed like gzip, jpeg, mpeg

2015-08-25 Thread Luc Van der Veken
Hi Ana,

Won’t your solution exclude compressed files, instead of including them without 
a second round of compression?
I think if it can be done, what the OP asked, the right approach would be 
closer to his own, just using ‘RegexFile’ instead of ‘RegexDir’.

I would have tried something like this (untried, untested, probably wrong):

Fileset {
  Name = "FullSet"
  Include {
    Options {
      RegexFile = regex_for_uncompressed_files
      Exclude = yes
    }
    Options {
      Compression = gzip
      RegexFile = regex_for_compressed_files
      Exclude = yes
    }
    File = /
  }
}

A bit of De Morganized Boolean logic should circumvent the problem that all 
files not matched by any Options directive are included by default.
The first Options will exclude all uncompressed files – meaning include all 
compressed, and back them up without compression.
The second will exclude all compressed files – meaning include all uncompressed 
ones, and back them up with compression.


From: Ana Emília M. Arruda [mailto:emiliaarr...@gmail.com]
Sent: 25 August 2015 19:50
To: Martin Feldbacher 
Cc: Bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] fileset which only compresses files which are not 
already compressed like gzip, jpeg, mpeg

Hello Martin,

FileSet {
  Name = "Full Set"
  Include {
    Options {
      compression = gzip
    }
    Options {
      RegexFile = "\.[gG]?[zZ][iI][pP]"
      RegexFile = "\.[jJ][pP][eE]?[gG]"
      exclude = yes
    }
    File = /
  }
}

This should work.

Best regards,
Ana

On Tue, Aug 25, 2015 at 11:10 AM, Martin Feldbacher 
<martin.feldbac...@stegbauer.info> wrote:
Hello,

I'm searching help with a fileset which only compresses files which are not 
already compressed (like gzip,jpeg,mpeg and so on) in my whole root directory..
my first idea was the following:


FileSet {
  Name = "Full Set"
  Include {
    Options {
      RegexDir = regex for all files with ending .gzip, .zip, .jpeg, and so on
    }
    Options {
      RegexDir = inverted regex from above, don't know if this works
      compression = gzip
    }
    File = /
  }
}

but then I saw the examples at 
http://www.bacula.org/5.2.x-manuals/en/main/main/Configuring_Director.html 
showing that this doesn't work..

does anybody have an idea how to solve this without creating two filesets for 
one client?

thankful for any help,

greets
martin




[Bacula-users] Still don't get auto-pruning

2015-08-12 Thread Luc Van der Veken
I still don't really get auto-pruning, I think.
Am I seeing 'prune' and 'purge' as more closely related than they are?

Snippet from the manual, about what happens when Bacula needs a volume:

AutoPrune = yes|no
If AutoPrune is set to yes (default), Bacula will automatically apply the 
Volume retention period when running a Job and it needs a new Volume but no 
appendable volumes are available. At that point, Bacula will prune all Volumes 
that can be pruned (i.e. AutoPrune set) in an attempt to find a usable volume.

Either my configuration is still wrong after reviewing it several times, or 
that "prune all volumes" doesn't mean that more than one will be purged at that 
time.
The way I see it happen, it just recycles the oldest volume and leaves all 
others alone, no matter how far 'over date' they are.
I've got file volumes with volume, file and job retention all set to 3 months, 
that are 4 months old and still sitting on disk marked "full".
When I try to list jobs or files on those volumes, it doesn't find them in the 
database anymore.

In another pool (used for incremental backups) it's the same with retentions of 
15 days, full files of 3 months old still sitting there.

Is the only way to purge the excess volumes to do it manually (or create a 
bconsole script)?
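
For what it's worth, a minimal sketch of what I mean by a bconsole script 
(the volume name is a placeholder; note that 'purge' ignores retention 
periods, so handle with care):

#!/bin/sh
# Prune what retention allows, then forcibly purge one specific volume.
bconsole <<'END_OF_DATA'
prune volume=Vol-0042 yes
purge volume=Vol-0042
END_OF_DATA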



Re: [Bacula-users] Bacula daemon message- where did it come from?

2015-08-11 Thread Luc Van der Veken
Hi all,

If I didn't misunderstand anything, Michael said he checked the Received 
headers as well.

Those indicate at what time the message arrived at different mail servers along 
its path; I assume they show that bsmtp passed it to the local outgoing mail 
server at 11:00?

BTW, I would never trust the 'Date' header.  Some mail programs fill it in when 
you compose the mail, some when you actually send it (it could sit in your 
outbox for a while), and some don't include it at all and rely on the mail 
server to add it.  And to make things worse, MS Outlook on Exchange doesn't 
even show that header's value in the message list; it shows the time the 
message arrived in your mailbox instead (for example, Ana's mail I'm replying 
to has a timestamp of 4:06 inside, but in Outlook's message list it says 4:09).


From: Ana Emília M. Arruda [mailto:emiliaarr...@gmail.com]
Sent: 12 August 2015 4:06
To: Michael Schwager 
Cc: Bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Bacula daemon message- where did it come from?

Hello Michael,
From your first post, the header of the email shows a 6:00 AM time, not 
11:00 AM. Are you sure they are the same e-mails? Is there any other e-mail in 
the bac...@example.com mailbox that arrived at 11:00 AM?

-- Forwarded message --
From: Bacula 
Date: Sun, Aug 9, 2015 at 6:00 AM
Subject: Bacula daemon message
To: it@example.com

Best regards,
Ana


Looking carefully at your first post, the e-mail sent with the authentication 
error took place at 6 AM and not 11 AM. This log entry does not seem to be 
the log for your error message. Instead it seems to be an e-mail sent by Bacula 
after restarting.
On Tue, Aug 11, 2015 at 15:21, Michael Schwager 
<mschwa...@mochotrading.com> wrote:
On Tue, Aug 11, 2015 at 8:49 AM, Martin Simmons 
<mar...@lispworks.com> wrote:

Looks like a bug to me (I've just created
http://bugs.bacula.org/view.php?id=2159).

Thanks. I have contributed to your bug report.

- Mike Schwager
  (aka, "The Most Greyish of Gnomes")

  Linux Network Engineer, Mocho Trading LLC
  312-646-4783 Phone | 312-637-0011 Cell | 312-957-9804 Fax


This message is for the named person(s) use only. It may contain confidential 
proprietary or legally privileged information. No confidentiality or privilege 
is waived or lost by any mistransmission. If you receive this message in error, 
please immediately delete it and all copies of it from your system, destroy any 
hard copies of it and notify the sender. You must not, directly or indirectly 
use, disclose, distribute, print, or copy any part of this message if you are 
not the intended recipient. Mocho Trading LLC reserves the right to monitor all 
e-mail communications through its networks. Any views expressed in this message 
are those of the individual sender, except where the message states otherwise 
and the sender is authorized to state them to be the views of any such 
entity.


Re: [Bacula-users] Fatal Error on a job but I successfully set up 13 others...

2015-07-15 Thread Luc Van der Veken
Thegame32 said:

> Bacula-dir Warning: bsock.c:127 Could not connect to Client: dc0-fd on 
> dc0.teamworld.com:9102. ERR=Connection > refused

It looks like it's really a connection issue (at TCP level); it doesn't even 
get as far as checking the password.
The machine *is* reachable though, and is actively refusing the connection, 
else you'd get a time-out instead of 'connection refused'.


Could the FD service be blocked by the Windows firewall (if the firewall is on, 
which it probably won't be if the 'dc' name means 'domain controller')?

Is the service actually running, or are you assuming so because it is set to 
auto start?
It could fail to start for some reason.


-Original Message-
From: THEgame32 [mailto:bacula-fo...@backupcentral.com] 
Sent: 02 June 2015 22:22
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Fatal Error on a job but I successfully set up 13 
others...

Let me prelude this by saying that I've been trying to fix this issue for a 
couple of weeks now.  I will include the error then the files in this post... 
any suggestions will be appreciated... just can't figure it out since I have 13 
other re-occurring jobs that worked just fine for me.

Here is the error:



Bacula-dir No prior Full backup Job record found.
 No prior or suitable Full backup found in catalog. Doing FULL backup.
Bacula-dir Using Device "Qnap4"
 Start Backup JobId 14734, Job=dc0.2015-06-01_19.00.00_53
Bacula-dir Warning: bsock.c:127 Could not connect to Client: dc0-fd on 
dc0.teamworld.com:9102. ERR=Connection refused
Retrying ...
Bacula-dir Fatal error: bsock.c:133 Unable to connect to Client: dc0-fd on 
dc0.teamworld.com:9102. ERR=Connection refused
 Fatal error: No Job status returned from FD.
 
Error: Bacula Bacula-dir 5.2.6 (21Feb12):
  Build OS:   i486-pc-linux-gnu debian 6.0.4
  JobId:  14734
  Job:dc0.2015-06-01_19.00.00_53
  Backup Level:   Full (upgraded from Differential)
  Client: "dc0-fd" 5.2.10 (28Jun12) Microsoft Windows Server 
2008 R2 Standard Edition Service Pack 1 (build 7601), 64-bit,Cross-compile,Win64
  FileSet:"dc0" 2015-05-27 21:30:00
  Pool:   "File" (From Run pool override)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"Qnap4" (From Job resource)
  Scheduled time: 01-Jun-2015 19:00:00
  Start time: 01-Jun-2015 19:10:19
  End time:   01-Jun-2015 19:13:19
  Elapsed time:   3 mins 
  Priority:   10
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s): 
  Volume Session Id:  77
  Volume Session Time:1432753587
  Last Volume Bytes:  2,732,131,489 (2.732 GB)
  Non-fatal FD errors:1
  SD Errors:  0
  FD termination status:  Error
  SD termination status:  Waiting on FD
  Termination:*** Backup Error ***
---

Here is the bacula-dir.conf piece:





Client {
  Name = dc0-fd
  Address = dc0.teamworld.com
  FDPort = 9102
  Catalog = MyCatalog
  Password = "ubdwMU3zCtf+ROyLnOpoi5epJP0yWJUujZqX6vGuGEnx"  # password for 
FileDaemon
  File Retention = 30 days# 30 days
  Job Retention = 6 months# six months
  AutoPrune = yes # Prune expired Jobs/Files
}

FileSet {
  Name = dc0
  Include {
File = C:/
   # Plugin = "vss:/@SYSTEMSTATE/"
Options {
}
  }
  Exclude {
File = pagefile.sys
  }
}

Job {
  Name = dc0
  Type = Backup
  Level = Differential
  Client = dc0-fd
  FileSet = dc0
  Schedule = WeeklyCycle
  Storage = Qnap4
  Pool = File
  Messages = Standard
}

---

Here is the -fd.conf piece:



# Client (File Services) to backup
Client {
  Name = dc0-fd
  Address = dc0.teamworld.com
  FDPort = 9102
  Catalog = MyCatalog
  Password = "ubdwMU3zCtf+ROyLnOpoi5epJP0yWJUujZqX6vGuGEnx"  # password for 
FileDaemon
  File Retention = 30 days# 30 days
  Job Retention = 6 months# six months
  AutoPrune = yes # Prune expired Jobs/Files
}

---



Please let me know what I could possibly be doing wrong.  What is so different 
from this particular backup job from my others?!


Re: [Bacula-users] restore on windows to different location not working

2015-07-10 Thread Luc Van der Veken
Yes, I've seen the same, or very similar.

The "different location" the files were restored to was accessible as a Samba 
share on a Linux machine.

The files were there, but Windows users accessing the share over the network 
didn't see them because the file permissions didn't match the share permissions.


-Original Message-
From: Bill Arlofski [mailto:waa-bac...@revpol.com] 
Sent: 09 July 2015 16:25
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] restore on windows to different location not working

On 07/09/2015 07:56 AM, Jaroslav Mechacek wrote:
> I have been trying to restore files to different location ( mod Where ),
> but it does not work. What happens is this: the restore starts and the files
> start to appear in the restore destination, but after the restore is finished,
> all  files disappear. Restore to original  ( Where = / ) location works 
> perfectly.
> Restore to different location worked only on win 2003.
> Any help with this would be greatly appreciated.
> 
> Best regards
> Jaroslav

Hi Jaroslav,

I have seen this before. What appears to be happening is that the files still
exist in the restore location, but the Bacula FD is applying the original
permissions to them once all files have been restored. Those permissions cause
them to become "special system files" (or whatever Microsoft calls them)

I am no Windows expert, but I think I recall that they will be visible if you
set your Windows Explorer to "not hide system files" (or one of those similar
settings)

Hope this helps...

Bill



-- 
Bill Arlofski
http://www.revpol.com/bacula
-- Not responsible for anything below this line --



Re: [Bacula-users] Incrementals not happening

2015-07-06 Thread Luc Van der Veken
In your jobdefs, in default.conf:

  Max Full Interval = 90

90 what?
Maybe make that "90 days" to be certain. I'm not entirely sure myself and it's 
hard to find in the manual, but I think the default unit is seconds.
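
Something like this, with an explicit unit (the jobdefs name is just an 
example):

JobDefs {
  Name = "DefaultJob"
  Max Full Interval = 90 days   # a bare "90" is probably read as seconds
}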



From: James Chamberlain [mailto:jam...@exa.com]
Sent: 07 July 2015 1:11
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Incrementals not happening

Hi all,

I'm trying to figure out what's wrong with my configuration.  If I run a full 
backup on the system Hawking, and then try to run an incremental against 
Hawking, the job gets promoted to full and I don't know why.  If I do a restore 
against Hawking, the most recent full backup is found and Bacula is able to 
build a file system tree.  To me, that says that the previous full backup ought 
to be suitable.  I'm running Bacula 7.0.5.

Does anyone have any suggestions?

Thanks,

James



Director configuration stripped of passwords and attached below.

As requested in the FAQ, here's the relevant section of the output from "list 
jobs":

*list jobs
+-------+---------+---------------------+------+-------+----------+---------------+-----------+
| JobId | Name    | StartTime           | Type | Level | JobFiles | JobBytes      | JobStatus |
+-------+---------+---------------------+------+-------+----------+---------------+-----------+
|    57 | Hawking | 2015-07-06 15:33:11 | B    | F     |   52,145 | 9,849,260,308 | T         |
|    59 | Hawking | 2015-07-06 15:55:25 | B    | F     |   52,145 | 9,852,606,517 | T         |
|    60 | Hawking | 2015-07-06 16:20:07 | B    | F     |   52,145 | 9,855,333,619 | T         |
+-------+---------+---------------------+------+-------+----------+---------------+-----------+

Here's the Job report output from the prior Full save:

06-Jul 15:55 diesel-dir JobId 59: 06-Jul 15:55 
diesel-dir JobId 59: No prior or suitable Full backup 
found in catalog. Doing FULL backup.
06-Jul 15:55 diesel-dir JobId 59: Start Backup JobId 
59, Job=Hawking.2015-07-06_15.55.23_22
06-Jul 15:55 diesel-dir JobId 59: Using Device "LTO-5" 
to write.
06-Jul 15:55 diesel-sd JobId 59: Spooling data ...
06-Jul 16:04 diesel-sd JobId 59: Committing spooled 
data to Volume "AGM703L5". Despooling 9,869,706,491 bytes ...
06-Jul 16:12 diesel-sd JobId 59: Despooling elapsed 
time = 00:06:40, Transfer rate = 24.67 M Bytes/second
06-Jul 16:12 diesel-sd JobId 59: Elapsed time=00:17:17, 
Transfer rate=9.507 M Bytes/second
06-Jul 16:12 diesel-sd JobId 59: Sending spooled attrs 
to the Director. Despooling 11,030,831 bytes ...
06-Jul 16:13 diesel-dir JobId 59: Bacula 
diesel-dir 7.0.5 (28Jul14):
 Build OS:   x86_64-unknown-linux-gnu redhat
 JobId:  59
 Job:Hawking.2015-07-06_15.55.23_22
 Backup Level:   Full (upgraded from Incremental)
 Client: "Hawking" i686-redhat-linux-gnu,redhat,
 FileSet:"Hawking-FileSet" 2015-06-08 19:00:01
 Pool:   "Default" (From Job resource)
 Catalog:"MyCatalog" (From Client resource)
 Storage:"LTO-5" (From Pool resource)
 Scheduled time: 06-Jul-2015 15:55:22
 Start time: 06-Jul-2015 15:55:25
 End time:   06-Jul-2015 16:13:01
 Elapsed time:   17 mins 36 secs
 Priority:   10
 FD Files Written:   52,145
 SD Files Written:   52,145
 FD Bytes Written:   9,852,606,517 (9.852 GB)
 SD Bytes Written:   9,859,413,603 (9.859 GB)
 Rate:   9330.1 KB/s
 Software Compression:   None
 VSS:no
 Encryption: no
 Accurate:   no
 Volume name(s): AGM703L5
 Volume Session Id:  34
 Volume Session Time:1432647685
 Last Volume Bytes:  2,372,386,996,224 (2.372 TB)
 Non-fatal FD errors:0
 SD Errors:  0
 FD termination status:  OK
 SD termination status:  OK
 Termination:Backup OK

06-Jul 16:13 diesel-dir JobId 59: Begin pruning Jobs 
older than 3 months .
06-Jul 16:13 diesel-dir JobId 59: No Jobs found to 
prune.
06-Jul 16:13 diesel-dir JobId 59: Begin pruning Files.
06-Jul 16:13 diesel-dir JobId 59: No Files found to 
prune.
06-Jul 16:13 diesel-dir JobId 59: End auto prune.

Here's the output of "list jobid=59", for the job ID prior to the Full save:

*llist jobid=59
   JobId: 59
 Job: Hawking.2015-07-06_15.55.23_22
Name: Hawking
 PurgedFiles: 0
Type: B
   Level: F
ClientId: 5
Name: Hawking
 

Re: [Bacula-users] Why it takes so much to "Building directory tree"

2015-07-06 Thread Luc Van der Veken
With that result, I would not increase the innodb buffer setting.

Your system is already swapping (660 MB of 2 GB swap space in use).
There's nothing that impairs overall system performance as much as that.



-Original Message-
From: f-otake [mailto:f-ot...@kinryokai.net] 
Sent: 05 July 2015 16:31
To: bacula-users
Subject: Re: [Bacula-users] Why it takes so much to "Building directory tree"

Thanks to Radosław Korzeniewski and Luc Van der Veken.
I measured the time of building the directory tree in my previous case and it 
took only 12 seconds.
I also checked the free command while it was running:
             total       used       free     shared    buffers     cached
Mem:       1020300     932360      87940        220      46276     341924
-/+ buffers/cache:     544160     476140
Swap:      2097148     660168    1436980

Maybe I could assign more to innodb_buffer_pool_size, but I am keeping the 
same setting.



Re: [Bacula-users] Why it takes so much to "Building directory tree"

2015-07-05 Thread Luc Van der Veken
Be careful when copying an innodb_buffer_pool_size value from a web page or 
similar; the best value for that setting depends heavily on local conditions, 
especially available RAM, database size, and what else is running (and needing 
memory) on the same server.

In general, the higher the setting, the better, but you have to take care not 
to grab so much that other components suffer from it.
And of course, setting it higher than your database size is useless, but I 
don't think that will be an issue with Bacula (my Bacula DB is 11 GB).


I think it's a good approach to log on to the server while backups are running 
and examine the output of the 'free' command a number of times, if possible 
under different circumstances (while clients are being backed up, while the 
director is being backed up, while the MySQL database itself is being dumped, 
while a restore is running, etc.).


If you see that 'free' reports a lot of free memory all of the time, allocate 
part of that for database use by adding it to innodb_buffer_pool_size.
If you experience problems or slow behavior later, check it again. If it is 
reporting low memory then (or especially swap space use of more than a few MB), 
decrease innodb_buffer_pool_size to free some up.


Even better is setting up Munin (or similar) to monitor your Bacula machine and 
examine the memory use graphs.
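
As a sketch, if 'free' consistently shows plenty of unused memory, the change 
itself is one line in the MySQL configuration (file location and size are 
examples; size it to what 'free' shows you can spare):

# /etc/mysql/my.cnf
[mysqld]
innodb_buffer_pool_size = 512M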


-Original Message-
From: f-ot...@kinryokai.net [mailto:f-ot...@kinryokai.net] 
Sent: 05 July 2015 1:17
To: Bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Why it takes so much to "Building directory tree"

Many thanks to Heitor Faria and Radosław Korzeniewski.
The MySQL DB was MyISAM, so I changed it to InnoDB,
and also inserted a few lines as suggested on http://bacula.us/tuning/,
as follows:
 sort_buffer_size = 2MB
 innodb_buffer_pool_size = 128MB
 innodb_flush_log_at_trx_commit = 0
 innodb_flush_method = O_DIRECT
Then "Building directory tree" finishes almost immediately.
FYI, the following case takes less than 30 seconds:
+-------+-------+----------+---------------+---------------------+------------+
| JobId | Level | JobFiles | JobBytes      | StartTime           | VolumeName |
+-------+-------+----------+---------------+---------------------+------------+
|     4 | F     |  258,441 | 2,262,481,683 | 2015-07-02 14:32:50 | Vol-0004   |
|    14 | I     |      158 |    79,623,411 | 2015-07-03 04:32:49 | Vol-0014   |
+-------+-------+----------+---------------+---------------------+------------+
Thanks again for the help.
F.Otake





Re: [Bacula-users] Windows client backup error

2015-06-01 Thread Luc Van der Veken
I don't think those backslashes (\) instead of normal slashes (/) are the 
problem here; the ones in the error messages are coming from Windows, not from 
the fileset or Bacula.

But [I think – never actually tried it] if you have a fileset resource that 
says drives C through W are to be included, it is to be expected that you will 
get an error message for each of those drives that can’t be found at backup 
time.

Another likely side effect is that unless you include a ‘drivetype=fixed’ 
option, it will back up everything it finds at those letters, including a 
CD-ROM disc that happens to be left in the drive (and another one at the same 
drive letter tomorrow), a USB drive someone forgot to unplug, etc.
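
As a sketch, that option goes in the fileset's Options block (the signature 
line is only there to show context):

Options {
  Signature = MD5
  DriveType = fixed   # skip CD-ROM, removable and network drives
}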


I prefer to have _at least_ one fileset resource for each individual client, 
and even split them into two or more (plus as many separate jobs) in a few 
special cases.
For general purpose machines, my filesets are often just identical copies of 
the same base set (just the name changed), but this allows for easy tweaking on 
a per-client basis as soon and as often as it becomes necessary.

It also allows me to keep client, job and fileset resources together in one 
configuration file per client. Anything that has to be changed for a client is 
always in that client’s file, no change there will ever affect another client, 
and adding or removing a client is just a minute’s work without having to find 
your way (again) in one long file.


From: Heitor Faria [mailto:hei...@bacula.com.br]
Sent: 01 June 2015 13:53
To: More, Ankush; Ana Emília M. Arruda
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Windows client backup error

Ankush: windows filesets should be with regular dash and usually with capital 
letters. It is on the Bacula Config. Manual.
--
Heitor Medrado de Faria
+55 61 82684220
Precisa de treinamento Bacula presencial, telepresencial ou online? Acesse: 
http://www.bacula.com.br
On 1 June 2015 at 06:19:32 BRT, "More, Ankush" 
<ankush.m...@capgemini.com> wrote:
Hi Ana,


I have created a single fileset covering drives C: to W: for all Windows 
clients. The drives that failed do not exist on this machine.


Please find log:


| bacula-dir JobId 1569: No prior Full backup Job record found.
   |
| bacula-dir JobId 1569: No prior or suitable Full backup found in catalog. 
Doing FULL backup.
|
| bacula-dir JobId 1569: Start Backup JobId 1569, 
Job=CTPTSTWIN04-W-BkpTape.2015-05-28_13.50.02_21


|
| bacula-dir JobId 1569: Using Device "Drive-4" to write.
 |
| ctptstwin04-fd JobId 1569: Generate VSS snapshots. Driver="Win32 VSS", 
Drive(s)="CDEFGHIJKLMNOPQRSTUVW"
|
   | ctptstwin04-fd JobId 1569: Fatal error: Generate VSS snapshot of drive 
"e:\" failed.
|
| ctptstwin04-fd JobId 1569: Fatal error: Generate VSS snapshot of drive "f:\" 
failed.
|
| ctptstwin04-fd JobId 1569: Fatal error: Generate VSS snapshot of drive "g:\" 
failed.
|
| ctptstwin04-fd JobId 1569: Fatal error: Generate VSS snapshot of drive "h:\" 
failed.
|
| ctptstwin04-fd JobId 1569: Fatal error: Generate VSS snapshot of drive "i:\" 
failed.
|
| ctptstwin04-fd JobId 1569: Fatal error: Generate VSS snapshot of drive "j:\" 
failed.
|
| ctptstwin04-fd JobId 1569: Fatal error: Generate VSS snapshot of drive "k:\" 
failed.
|
| ctptstwin04-fd JobId 1569: Fatal error: Generate VSS snapshot of drive "l:\" 
failed.
|
| ctptstwin04-fd JobId 1569: Fatal error: Generate VSS snapshot of drive "m:\" 
failed.
|
| ctptstwin04-fd JobId 1569: Fatal error: Generate VSS snapshot of drive "n:\" 
failed.
|
| ctptstwin04-fd JobId 1569: Fatal error: Generate VSS snapshot of drive "o:\" 
failed.
|
| ctptstwin04-fd JobId 1569: Fatal error: Generate VSS snapshot of drive "p:\" 
failed.
|
| ctptstwin04-fd JobId 1569: Fatal error: Generate VSS snapshot of drive "q:\" 
failed.
|
| ctptstwin04-fd JobId 1569: Fatal error: Generate VSS snapshot of drive "r:\" 
failed.
|
| ctptstwin04-fd JobId 1569: Fatal error: Generate VSS snapshot of drive "s:\" 
failed.
|
| ctptstwin04-fd JobId 1569: Fatal error: Generate VSS snapshot of drive "t:\" 
failed.
|
| ctptstwin04-fd JobId 1569: Fatal error: Generate VSS snapshot of drive "u:\" 
failed.
|
| ctptstwin04-fd JobId 1569: Fatal error: Generate VSS snapshot of drive "v:\" 
failed.
|
| ctptstwin04-fd JobId 1569: Fatal error: Generate VSS snapshot of drive "w:\" 
failed. |
| ctptstwin04-fd JobId 1569: VSS Writer (BackupComplete): "System Writer", 
State: 0x1 (VSS_WS_STABLE)
|
| ctptstwin04-fd JobId 1569: VSS Writer (BackupComplete): "MSDEWriter", State: 
0x1 (VSS_WS_STABLE)
|
| ctptstwin04-fd JobId 1569: VSS Writer (BackupComplete): "Registry 

Re: [Bacula-users] Is Bacula and a LTO drive right for me?

2015-05-20 Thread Luc Van der Veken
You said it yourself: RAID. Not magic, just writing to multiple disks in 
parallel.

The disks I have in my NAS are WD Reds, rated at 112 MB/s and performance 
tested (by Tom's Hardware) to write at speeds from about 70 MB/s (center of the 
disk) to about 150 MB/s (outside).

I've seen much higher speeds in my NAS, where they are configured in a 12 drive 
array with 2 parity disks.
The NAS has two 1 Gbit network interfaces bonded together to 2 Gbit. That 
translates to roughly 250 MB/s, but the network is *still* more of a bottleneck 
than the disks.

The disks only become the deciding factor when there's a lot of non-sequential 
access (many people using it at the same time, plus it's also doubling as the 
iSCSI store for a few less important VMWare virtual machines).



-Original Message-
From: Dimitri Maziuk [mailto:dmaz...@bmrb.wisc.edu] 
Sent: 19 May 2015 19:43
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Is Bacula and a LTO drive right for me?

On 05/19/2015 11:42 AM, Bryn Hughes wrote:
...
> recent LTO versions (5/6) you really need to make sure your disk setup 
> on your backup storage server is capable of keeping up with the tape 
> drive - most consumer hard drives top out at about 120MB/sec for 
> sequential reads and much less than that for random I/O (such as having 
> backup jobs writing to the disk at the same time as the tape drive is 
> reading from it)

Wow these seagate nas drives must be magic then: it's 120MB/s sequential
read yet I get consistent 340MB/s sustained writes

> 11-May 23:00 starfish-sd JobId 13: Despooling elapsed time = 00:02:24, 
> Transfer rate = 345.8 M Bytes/second
...
> 12-May 12:30 starfish-sd JobId 13: Despooling elapsed time = 00:02:25, 
> Transfer rate = 343.4 M Bytes/second
...
> 14-May 03:33 starfish-sd JobId 34: Despooling elapsed time = 00:02:27, 
> Transfer rate = 338.8 M Bytes/second
...
> 18-May 19:49 starfish-sd JobId 57: Despooling elapsed time = 00:02:26, 
> Transfer rate = 341.1 M Bytes/second

and so on. On software raid-5-ish (raidz1) with lz4 compression at
filesystem level.

How about you iostat that LTO 5/6 drive while a restore job is reading
from it and a backup job is writing at the same time and then post speed
comparisons?

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu



Re: [Bacula-users] confused about differentials

2015-05-18 Thread Luc Van der Veken
Hi Ana Emilia, thanks for clearing that up.

But then I think I may have a problem :(

I am using the Dir and FD from standard Ubuntu 12.04 repositories, 5.2.5.

I only have a few Windows clients. For those I got the Windows FD offered by 
Bacula Systems two years ago, which was at that time 6.0.6. That was the only 
one offered, as far as I remember. Today it would be 7.0.5.

Backups made with that combination run without errors or warnings.
I never had (or tried) to restore a full system, only some individual files, 
but that always worked too, so far.

I am also using community FD 5.2.10 on one machine (my own workstation), that 
looks OK too.

This is some logging output, from a job picked at random, from 6.0.6 FD with 
director/SD 5.2.5.

  Build OS:   x86_64-pc-linux-gnu ubuntu 12.04
  JobId:  29291
  Job:qvsrv.2015-05-15_20.05.01_50
  Backup Level:   Differential, since=2015-04-24 21:37:40
  Client: "qvsrv-fd" 6.0.6 (30Sep12) Microsoft Windows Server 
2008 R2 Standard Edition Service Pack 1 (build 7601), 64-bit,Cross-compile,Win64
  FileSet:"qvsrvSet" 2013-07-30 20:05:01
  Pool:   "File" (From Job DiffPool override)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"File" (From Pool resource)
  Scheduled time: 15-May-2015 20:05:01
  Start time: 16-May-2015 00:53:13
  End time:   16-May-2015 01:37:31
  Elapsed time:   44 mins 18 secs
  Priority:   10
  FD Files Written:   4,044
  SD Files Written:   4,044
  FD Bytes Written:   8,663,741,214 (8.663 GB)
  SD Bytes Written:   8,664,580,063 (8.664 GB)
  Rate:   3259.5 KB/s
  Software Compression:   65.5 %
  VSS:yes
  Encryption: no
  Accurate:   no
  Volume name(s): FileStorage0133|FileStorage0134|FileStorage0135
  Volume Session Id:  284
  Volume Session Time:1430987205
  Last Volume Bytes:  4,089,208,805 (4.089 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

I don't see anything abnormal in it. The backup size may appear small even for 
a differential, but it's an intranet reporting server with very little activity.


From: Ana Emília M. Arruda [mailto:emiliaarr...@gmail.com]
Sent: 18 May 2015 15:14
To: Luc Van der Veken
Cc: bacula-users
Subject: Re: [Bacula-users] confused about differentials

Hello Luc,

I think there is a misunderstooding here.

On Mon, May 18, 2015 at 3:28 AM, Luc Van der Veken 
<luc...@wimionline.com> wrote:
Hi,Kern,

Do you mean that exactly as you say it,
"FD that is more recent than the Dir and the SD [...] is a big problem"?
I thought it was the other way around, that an FD _older_ than Dir/SD would 
cause problems.

"FD that is more recent than the Dir and the SD [...] is a big problem"
This is exactly what Kern said. Older FD versions working with newer Dir and 
SD versions are not a problem. Instead, an "FD more recent than Dir and SD" is.


The way you say it here, the FD Bacula Systems provides for community users 
would force everyone who wants to add Windows clients to an existing 
configuration to upgrade his Dir and SD to 7.0.5.

FD versions older than the Dir and SD are not a problem. I have a 5.2.10 Windows 
client working with a 7.0.5 Dir and SD. If you have 5.2.10 Windows clients 
(the most recent community version), you should have Dir and SD running at 
least version 5.2.10.

Regards,
Ana



From: Kern Sibbald [mailto:k...@sibbald.com]
Sent: 13 May 2015 18:13
To: Ian Young; Radosław Korzeniewski
Cc: bacula-users
Subject: Re: [Bacula-users] confused about differentials


We have never had any version of Bacula in which Differential backups were 
incomplete.  Running with an FD that is more recent than the Dir and the SD, or 
a Dir and SD that are not identical, is a big problem.  Until that is fixed it 
doesn't make much sense to speculate about any odd behavior.  Also, if you are 
doing something very unusual in your FileSet or using multiple FileSets in your 
backup, there could be a configuration problem or perhaps even an undiscovered 
bug, but the first thing to do is correct any possible version problems you 
have.

Best regards,
Kern

On 13.05.2015 15:18, Ian Young wrote:

On 13 May 2015, at 07:20, Radosław Korzeniewski 
<rados...@korzeniewski.net> wrote:

I appear to be seeing the same problem with CentOS 6 / CentOS 6 combinations:


OS versions don't matter. What is important: Bacula Dir/SD vs. File Daemon 
versions. The supported configuration requires the Dir and SD to be the same 
version, and the FD not newer.

I should have been clearer. I meant that I was apparently seeing the same 
problem on at least one setup where the FD, DIR and SD versions were all the 
same.

Re: [Bacula-users] confused about differentials

2015-05-17 Thread Luc Van der Veken
Hi,Kern,

Do you mean that exactly as you say it, "FD that is more recent than the Dir 
and the SD [...] is a big problem"?
I thought it was the other way around, that an FD _older_ than Dir/SD would 
cause problems.

The way you say it here, the FD Bacula Systems provides for community users 
would force everyone who wants to add Windows clients to an existing 
configuration to upgrade his Dir and SD to 7.0.5.


From: Kern Sibbald [mailto:k...@sibbald.com]
Sent: 13 May 2015 18:13
To: Ian Young; Radosław Korzeniewski
Cc: bacula-users
Subject: Re: [Bacula-users] confused about differentials


We have never had any version of Bacula in which Differential backups were 
incomplete.  Running with an FD that is more recent than the Dir and the SD, or 
a Dir and SD that are not identical, is a big problem.  Until that is fixed it 
doesn't make much sense to speculate about any odd behavior.  Also, if you are 
doing something very unusual in your FileSet or using multiple FileSets in your 
backup, there could be a configuration problem or perhaps even an undiscovered 
bug, but the first thing to do is correct any possible version problems you 
have.

Best regards,
Kern

On 13.05.2015 15:18, Ian Young wrote:

On 13 May 2015, at 07:20, Radosław Korzeniewski 
<rados...@korzeniewski.net> wrote:

I appear to be seeing the same problem with CentOS 6 / CentOS 6 combinations:


OS versions don't matter. What is important: Bacula Dir/SD vs. File Daemon 
versions. The supported configuration requires the Dir and SD to be the same 
version, and the FD not newer.

I should have been clearer. I meant that I was apparently seeing the same 
problem on at least one setup where the FD, DIR and SD versions were all the 
same.

I take your point, though, that the Director/SD should not be older than the 
clients, so I need to fix that. Fortunately the (virtual) machine running the 
Director and Storage daemons is dedicated to that task, so it should be 
relatively easy to build a new CentOS 7 machine to get 5.2.13.

Recommended version in May 2015 is Bacula 7.0.5, not 5.2.13.

I understand that, but deploying 7.0.5 would be significantly harder in my 
environment than moving to 5.2.13 so it would be something of a last resort.

I don't think you're saying that I need to move to the latest version to get 
reliable backups, are you?

If anyone knew of a bug in 5.2.x (or for that matter in 5.0.x) that caused 
differentials to be incomplete, I'd obviously feel differently (but then, I 
imagine Red Hat would too, as 5.2.13 is what they are shipping in their most 
recent release).

I don't think I actually have a version mismatch problem (as I'm seeing the 
same issue with matched versions), but there are all sorts of reasons this 
might make my problem go away: there may be a bug in the version of 5.0 shipped 
with RHEL/CentOS, or I may have a configuration problem. Either way, starting 
from scratch and transitioning clients over may help.

First of all, did you ever test that it is not working?

As stated, I have seen significant data loss when attempting to restore a 
production system. This is not theoretical, although perhaps the subject line 
led you astray.


A differential backup does not back up files which were deleted in the 
meantime. So in a real system it is very unlikely (specific conditions must be 
met) that you get the same number of files backed up at the Incremental and 
Differential levels.

I don't believe this is the problem I'm seeing. For example, from my original 
mail:

| 9,929 | srv-c701-backup | 2015-05-01 23:05:04 | B | I | 91,030 | 3,450,906,888 | T
| 9,938 | srv-c701-backup | 2015-05-02 23:05:03 | B | D |    112 |     2,184,302 | T

The 91,000 files in job 9929 were NOT all deleted before job 9938 was run, but 
do not appear in that job.

-- Ian








Re: [Bacula-users] RunScript order of multiple Commands

2015-05-11 Thread Luc Van der Veken
At the risk of sounding overly obvious ---  ;)

If you want to be absolutely certain, you can always take full control 
yourself: put the commands in a shell script, and execute that with a single 
Command line.

Otherwise more questions arise, for instance if you have ‘FailJobOnError = 
yes’, will command 2 still be executed if command 1 fails?
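
A sketch of that approach (paths and script contents are just an example):

RunScript {
  RunsWhen = Before
  FailJobOnError = yes
  # One wrapper script instead of several Command lines:
  Command = "/etc/bacula/pre_backup.sh"
}

# /etc/bacula/pre_backup.sh:
#!/bin/sh
set -e   # abort at the first failing command, so order and failure are explicit
/etc/bacula/script_1.sh
/etc/bacula/script_2.sh
/etc/bacula/script_3.sh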


From: Alex Domoradov [mailto:alex@gmail.com]
Sent: 11 May 2015 14:17
To: ZeroUno
Cc: bacula-users
Subject: Re: [Bacula-users] RunScript order of multiple Commands

> Or are they executed in the order in which they are written in the
> configuration, with the second waiting for the first to complete and so on?

It seems so:

RunScript {
   RunsWhen = Before
   FailJobOnError = No
   Command = "/etc/bacula/script_1.sh"
   Command = "/etc/bacula/script_3.sh"
   Command = "/etc/bacula/script_2.sh"
}

# cat /etc/bacula/script_1.sh
#!/bin/bash
echo "command 1, $(date)" >> /tmp/1.log
sleep 60

# cat /etc/bacula/script_2.sh
#!/bin/bash
echo "command 2, $(date)" >> /tmp/1.log
sleep 120

# cat /etc/bacula/script_3.sh
#!/bin/bash
echo "command 3, $(date)" >> /tmp/1.log
sleep 240
# cat /tmp/1.log
command 1, Mon May 11 12:08:49 UTC 2015
command 3, Mon May 11 12:09:49 UTC 2015
command 2, Mon May 11 12:13:49 UTC 2015



On Mon, May 11, 2015 at 1:21 PM, ZeroUno 
<zerozerouno...@gmail.com> wrote:
Hi,
I'm using bacula 5.2.13 on RedHat 6.3 (cannot change version), and I
have a very basic question to which I cannot find an answer online.

In the description of the RunScript directive I read that you can
specify more than one Command option per RunScript.
But when you do it, in which order are the different Commands executed?
Are they executed in random order, maybe even simultaneously?
Or are they executed in the order in which they are written in the
configuration, with the second waiting for the first to complete and so on?

Thank you.

--
01




Re: [Bacula-users] NFS mount back ups?

2015-05-08 Thread Luc Van der Veken
Romer Ventura said:
> I can't omit the fstype = nfs, as bacula will fail since bacula checks for 
> that.

[Based on version 5.x]
I think bacula will check the file system type _only_ if you specify it, and 
default to “everything” when you omit it.
You can’t even use it for Windows clients.

I don’t specify ‘fstype=’ anywhere, and my backups run fine, of local disks as 
well as NFS shares (that’s how I back up a NAS on which I can’t install a file 
daemon).

As far as I understood it, it is optional; you can specify zero, one or more 
filesystem types to include; and the main (or only) reason why it exists is so 
you can avoid getting into an endless loop traversing mount points when you set 
onefs = no.


From: Romer Ventura [mailto:rvent...@h-st.com]
Sent: 08 May 2015 15:47
To: 'Kern Sibbald'
Cc: 'bacula-users'
Subject: Re: [Bacula-users] NFS mount back ups?

Yeah, I actually thought about that. So I did an estimate and it came up with:
2000 OK estimate files=49,851 bytes=23,481,423,809

Didn't do an estimate listing till you mentioned it; here it is:
2000 OK estimate files=49,851 bytes=23,481,423,809


I can't omit the fstype = nfs, as bacula will fail since bacula checks for that.

From: Kern Sibbald [mailto:k...@sibbald.com]
Sent: Friday, May 08, 2015 1:28 AM
To: Romer Ventura
Cc: 'bacula-users'
Subject: Re: [Bacula-users] NFS mount back ups?

The "fstype = nfs" may be restricting the directories to be backed up more than 
you expect.  You can probably see what is going on with an "estimate listing 
..." command.

On 07.05.2015 22:57, Romer Ventura wrote:
Well, my file set is pretty simple:
FileSet {
Name = M2KFileSet
  Include {
Options {
  signature = MD5
  compression = GZIP
  onefs = no
  fstype = nfs
}
File = /mnt/nfs/hsigux
  }
}

I find it hard to believe it’s compressing 33GB of data down to 3GB.. haha

Backup level is always full for this job; if I go into restore, it appears all
files have been copied. It's just that there are so many I can't easily notice
if anything was omitted. I did notice some files have different owner and
group. There are some like:
drwxrwxr-x  14 111  75
drwxrwxr-x   9 111  75
drwxr-xr-x   2 root root
drwxr-xr-x   3 root 75
drwxr-xr-x   9 113  ntp

Maybe the root mapping on NFS is not working right, but if that was the case 
bacula would complain about it like it did before.

Thanks

From: John Lockard [mailto:jlock...@umich.edu]
Sent: Thursday, May 07, 2015 2:48 PM
To: Romer Ventura
Cc: bacula-users
Subject: Re: [Bacula-users] NFS mount back ups?

Compression?
Backup level (files not backed up because they haven't been changed)?
Exclusions?
Have you gone through a full list of files to be backed up and a full list of 
the files which were actually backed up?

On Thu, May 7, 2015 at 1:42 PM, Romer Ventura <rvent...@h-st.com> wrote:
Hello,

I have 2 HP-UX 11.31 and I have ERP data I need to back up on those systems. 
Since there is no client for it, I decided to copy the files to a temp location 
every night, and export that temp folder via NFS. I mount these exports into my 
bacula server and set it up so that it backs up those mount point.

Everything seems to be working, however, the total size of the ERP data is 
about 33GB, but bacula is only copying 3.2GB, I see no errors in the bacula 
app, logs or the bacula server itself. There are no errors on any of the HP-UX 
servers either..

Any ideas on what to do? Or how to detect why bacula is stopping at 3.2GB and
marking the job as OK..?

Thanks




--
---
 John M. Lockard | U of Michigan - School of Information
  Unix Sys Admin | Suite 205 | 309 Maynard Street
  jlock...@umich.edu | Ann Arbor, MI 48104-2211
 www.umich.edu/~jlockard | 734-936-7255 | 734-763-9677 FAX
---
- The University of Michigan will never ask you for your password -




Re: [Bacula-users] NFS mount back ups?

2015-05-07 Thread Luc Van der Veken
Romer Ventura said:
> I find it hard to believe it’s compressing 33GB of data down to 3GB.. haha
It depends on the data.
I regularly see even better compression than that on internal debugging 
logfiles of an ATM: 250 MB per file down to about 5 MB. Although that is with 
RAR, not GZip…
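
If you want to sanity-check how compressible your own data is, a quick pipe
will do it (the path is hypothetical):

# Compare the raw size with what gzip makes of one representative file:
ls -l /var/log/app/debug.log
gzip -9 -c /var/log/app/debug.log | wc -c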



Re: [Bacula-users] Same job started twice

2015-05-04 Thread Luc Van der Veken
Sorry, posted the wrong pool definition, and that's why Maximum Concurrent Jobs 
was set to 5 instead of 1.

The right one:

Pool {
  Name = Tape
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 3 months
  File Retention = 3 months    # override retentions set for individual clients
  Job Retention = 3 months
  Storage = Tape
}


From: Luc Van der Veken
Sent: 04 May 2015 9:10
To: bacula-users@lists.sourceforge.net
Subject: RE: [Bacula-users] Same job started twice

Hi all,

I seem to be suffering from a side effect to "Allow duplicates = no" and the 
"Cancel [Queued | Lower Level] duplicates" settings.

I make full / differential backups to disk in the weekend, and copy those to 
tape for off-site storage on Monday.

There's only one copy job definition with its corresponding schedule. When it 
was executed, it used to create a separate new job for each job it had to copy. 
These were queued up and executed one after the other because Maximum 
Concurrent Jobs = 1 on the tape device.

This morning, all those copy jobs except for the first one failed, saying they 
were duplicates.


Schedule {
  Name = "TransferToTapeSchedule"
  Run = Full mon at 07:00
}

Job {
  Name = Transfer to Tape
## used to migrate instead of copy until 2014-10-15
#  Type = Migrate
  Type = Copy
  Pool = File
#  Selection Type = PoolTime
  Selection Type = PoolUncopiedJobs
  Messages = Standard
  Client = bacula-main  # required and checked for validity, but ignored at runtime
  Level = full  # idem
  FileSet = BaculaSet   # ditto

# DO NOT run at lower priority than backup jobs,
# has adverse effect of holding them up until this job is finished.
  Priority = 10

  ## only for migration jobs
#  Purge Migration Job = yes# purge migrated jobs after successful migration
  Schedule = TransferToTapeSchedule
  Maximum Concurrent Jobs = 5
  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = yes
  Cancel Queued Duplicates = yes
}

Pool {
  Name = File
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 3 months
  Maximum Volume Bytes = 50G
  Maximum Volumes = 100
  LabelFormat = "FileStorage"
  Action On Purge = Truncate
  Storage = File
  Next Pool = Tape
# data used to be left on disk for 1 week and then moved to tape (Migration Time = 1 week).
# changed, now copy to tape, can be done right away
  Migration Time = 1 second
}

That "Maximum Concurrent Jobs = 5" in the job definition was probably copied in 
by accident, it should be 1, but I don't think that is causing the problem.

The result: a whole bunch of errors like the one below, and only one job copied.


04-May 07:00 bacula-dir JobId 28916: Fatal error: JobId 28913 already running. 
Duplicate job not allowed.

04-May 07:00 bacula-dir JobId 28916: Copying using JobId=28884 
Job=A2sMonitor.2015-05-01_20.05.00_21

04-May 07:00 bacula-dir JobId 28916: Bootstrap records written to 
/var/lib/bacula/bacula-dir.restore.245.bsr

04-May 07:00 bacula-dir JobId 28916: Error: Bacula bacula-dir 5.2.5 (26Jan12):

  Build OS:   x86_64-pc-linux-gnu ubuntu 12.04

  Prev Backup JobId:  28884

  Prev Backup Job:A2sMonitor.2015-05-01_20.05.00_21

  New Backup JobId:   28917

  Current JobId:  28916

  Current Job:TransfertoTape.2015-05-04_07.00.01_50

  Backup Level:   Full

  Client: bacula-main

  FileSet:"BaculaSet" 2013-09-27 20:05:00

  Read Pool:  "File" (From Job resource)

  Read Storage:   "File" (From Pool resource)

  Write Pool: "Tape" (From Job Pool's NextPool resource)

  Write Storage:  "Tape" (From Storage from Pool's NextPool resource)

  Catalog:"MyCatalog" (From Client resource)

  Start time: 04-May-2015 07:00:01

  End time:   04-May-2015 07:00:01

  Elapsed time:   0 secs

  Priority:   10

  SD Files Written:   0

  SD Bytes Written:   0 (0 B)

  Rate:   0.0 KB/s

  Volume name(s):

  Volume Session Id:  0

  Volume Session Time:0

  Last Volume Bytes:  0 (0 B)

  SD Errors:  0

  SD termination status:

  Termination:*** Copying Error ***



From: Bryn Hughes [mailto:li...@nashira.ca]
Sent: 30 April 2015 15:07
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Same job started twice

These directives might also be useful to you:

  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = yes
  Cancel Queued Duplicates = yes

Bryn

On 2015-04-30 02:57 AM, Luc Van der Veken wrote:
So simple that I'm a bit embarrassed: a Maximum Concurrent Jobs setting in the 
Job resource itself should prevent it.

I thought that setting was applicable to all kinds of resources except for job
resources themselves, should have checked the documentation sooner.

Re: [Bacula-users] Moving from serial jobs to parallel jobs.

2015-05-04 Thread Luc Van der Veken
This approach can help too: besides doing them in parallel (limited to 5 
concurrent jobs because ultimately it all winds up on the same disks), I also 
divided them into 4 groups.
From the 1st to 4th Friday night each month, a full backup is done of one group 
and differential of the other three. If there's a fifth Friday, it's 
differential for all.

I am using only one pool and device for all, just raised 'Maximum Concurrent 
Jobs' above 1.
That causes interleaving, but on disk that should be no problem.

Retention is the same for all clients and jobs in my case. If that is not so
for you, you may be better off with multiple pools/devices.
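
For what it's worth, the rotation is easy to express as a Schedule; a sketch
for one of the four groups (names and times are hypothetical; the other groups
shift the Full to a different Friday):

Schedule {
  Name = "Group1Cycle"
  Run = Level=Full 1st fri at 20:05            # this group's monthly full
  Run = Level=Differential 2nd-5th fri at 20:05
  Run = Level=Incremental mon-thu at 20:05
}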


-Original Message-
From: Dan Langille [mailto:d...@langille.org] 
Sent: 03 May 2015 15:43
To: bacula-users
Subject: [Bacula-users] Moving from serial jobs to parallel jobs.

I think I'm going to start doing my backups (to disk) in parallel. The monthly 
full backups take *hours*.  They still aren't done, 8 hours later.

  I've heard different approaches, but I think a different device for each 
client sounds best.  All the same pools, but on different devices.  Which also 
means different directories.  Should be fun.

—
Dan Langille
http://langille.org/







Re: [Bacula-users] Same job started twice

2015-05-04 Thread Luc Van der Veken
Hi all,

I seem to be suffering from a side effect to "Allow duplicates = no" and the 
"Cancel [Queued | Lower Level] duplicates" settings.

I make full / differential backups to disk in the weekend, and copy those to 
tape for off-site storage on Monday.

There's only one copy job definition with its corresponding schedule. When it 
was executed, it used to create a separate new job for each job it had to copy. 
These were queued up and executed one after the other because Maximum 
Concurrent Jobs = 1 on the tape device.

This morning, all those copy jobs except for the first one failed, saying they 
were duplicates.


Schedule {
  Name = "TransferToTapeSchedule"
  Run = Full mon at 07:00
}

Job {
  Name = Transfer to Tape
## used to migrate instead of copy until 2014-10-15
#  Type = Migrate
  Type = Copy
  Pool = File
#  Selection Type = PoolTime
  Selection Type = PoolUncopiedJobs
  Messages = Standard
  Client = bacula-main  # required and checked for validity, but ignored at runtime
  Level = full  # idem
  FileSet = BaculaSet   # ditto

# DO NOT run at lower priority than backup jobs,
# has adverse effect of holding them up until this job is finished.
  Priority = 10

  ## only for migration jobs
#  Purge Migration Job = yes# purge migrated jobs after successful migration
  Schedule = TransferToTapeSchedule
  Maximum Concurrent Jobs = 5
  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = yes
  Cancel Queued Duplicates = yes
}

Pool {
  Name = File
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 3 months
  Maximum Volume Bytes = 50G
  Maximum Volumes = 100
  LabelFormat = "FileStorage"
  Action On Purge = Truncate
  Storage = File
  Next Pool = Tape
# data used to be left on disk for 1 week and then moved to tape (Migration Time = 1 week).
# changed, now copy to tape, can be done right away
  Migration Time = 1 second
}

That "Maximum Concurrent Jobs = 5" in the job definition was probably copied in 
by accident, it should be 1, but I don't think that is causing the problem.

The result: a whole bunch of errors like the one below, and only one job copied.


04-May 07:00 bacula-dir JobId 28916: Fatal error: JobId 28913 already running. 
Duplicate job not allowed.

04-May 07:00 bacula-dir JobId 28916: Copying using JobId=28884 
Job=A2sMonitor.2015-05-01_20.05.00_21

04-May 07:00 bacula-dir JobId 28916: Bootstrap records written to 
/var/lib/bacula/bacula-dir.restore.245.bsr

04-May 07:00 bacula-dir JobId 28916: Error: Bacula bacula-dir 5.2.5 (26Jan12):

  Build OS:   x86_64-pc-linux-gnu ubuntu 12.04

  Prev Backup JobId:  28884

  Prev Backup Job:A2sMonitor.2015-05-01_20.05.00_21

  New Backup JobId:   28917

  Current JobId:  28916

  Current Job:TransfertoTape.2015-05-04_07.00.01_50

  Backup Level:   Full

  Client: bacula-main

  FileSet:"BaculaSet" 2013-09-27 20:05:00

  Read Pool:  "File" (From Job resource)

  Read Storage:   "File" (From Pool resource)

  Write Pool: "Tape" (From Job Pool's NextPool resource)

  Write Storage:  "Tape" (From Storage from Pool's NextPool resource)

  Catalog:"MyCatalog" (From Client resource)

  Start time: 04-May-2015 07:00:01

  End time:   04-May-2015 07:00:01

  Elapsed time:   0 secs

  Priority:   10

  SD Files Written:   0

  SD Bytes Written:   0 (0 B)

  Rate:   0.0 KB/s

  Volume name(s):

  Volume Session Id:  0

  Volume Session Time:0

  Last Volume Bytes:  0 (0 B)

  SD Errors:  0

  SD termination status:

  Termination:*** Copying Error ***



From: Bryn Hughes [mailto:li...@nashira.ca]
Sent: 30 April 2015 15:07
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Same job started twice

These directives might also be useful to you:

  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = yes
  Cancel Queued Duplicates = yes

Bryn

On 2015-04-30 02:57 AM, Luc Van der Veken wrote:
So simple that I'm a bit embarrassed: a Maximum Concurrent Jobs setting in the 
Job resource itself should prevent it.

I thought that setting was applicable to all kinds of resources except for job 
resources themselves, should have checked the documentation sooner.


From: Luc Van der Veken [mailto:luc...@wimionline.com]
Sent: 30 April 2015 9:09
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Same job started twice

Hi all,

Is it possible that, in version 5.2.5 (Ubuntu),


1)  An incremental job is started according to schedule, before a previous 
full run of the same job has finished?

2)  A nasty side effect when that happens is that the incremental job is
bounced to full because "Prior failed job found in catalog. Upgrading to
Full.", while there have been no errors?

[Bacula-users] Orphaned buffers

2015-04-30 Thread Luc Van der Veken
Hi,

When I start bacula-dir 5.2.5 with the -t switch, I get a series of messages 
about orphaned buffers.

bacula-dir: smartall.c:404 Orphaned buffer: bacula-dir 7 bytes at 21a8e88 from 
parse_conf.c:416
bacula-dir: smartall.c:404 Orphaned buffer: bacula-dir 10 bytes at 21bff18 from 
inc_conf.c:598
bacula-dir: smartall.c:404 Orphaned buffer: bacula-dir 10 bytes at 21c0908 from 
inc_conf.c:598
bacula-dir: smartall.c:404 Orphaned buffer: bacula-dir 10 bytes at 21c1458 from 
inc_conf.c:598
bacula-dir: smartall.c:404 Orphaned buffer: bacula-dir 10 bytes at 21c1dc8 from 
inc_conf.c:598
bacula-dir: smartall.c:404 Orphaned buffer: bacula-dir 10 bytes at 21c2738 from 
inc_conf.c:598
bacula-dir: smartall.c:404 Orphaned buffer: bacula-dir 10 bytes at 21c2ef8 from 
inc_conf.c:598

Without -t, it starts clean.
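
For reference, the invocation in question (the config path is an assumption):

# Parse and validate the configuration, then exit without starting the daemon:
bacula-dir -t -c /etc/bacula/bacula-dir.conf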
Is (or was) this something that only rears its head when you are verifying the
configuration, or is it something I should pay more attention to?

Google came up with this:

The "Orphaned buffer" message suggests that some field of an item in the
config is being initialized more than once.

I don't have any double (repeated) entries in my configuration that I know of, 
but there might be (not sure) some settings in jobdefs or schedules that are 
being overridden in individual job definitions, or something like that.  Is 
that a bad idea?

Thanks for any reactions.



Re: [Bacula-users] Same job started twice

2015-04-30 Thread Luc Van der Veken
So simple that I'm a bit embarrassed: a Maximum Concurrent Jobs setting in the 
Job resource itself should prevent it.

I thought that setting was applicable to all kinds of resources except for job 
resources themselves, should have checked the documentation sooner.
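
For completeness, a sketch of where the directive goes (resource names are
hypothetical; the rest is an ordinary backup job):

Job {
  Name = "NAS-Elvis"
  Type = Backup
  Level = Incremental
  Client = NAS
  FileSet = "NASSet"
  Schedule = "WeeklyCycle"
  Storage = File
  Pool = File
  Messages = Standard
  Maximum Concurrent Jobs = 1   # a second scheduled run now queues instead of starting
}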


From: Luc Van der Veken [mailto:luc...@wimionline.com]
Sent: 30 April 2015 9:09
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Same job started twice

Hi all,

Is it possible that, in version 5.2.5 (Ubuntu),


1)  An incremental job is started according to schedule, before a previous 
full run of the same job has finished?

2)  A nasty side effect when that happens is that the incremental job is 
bounced to full because "Prior failed job found in catalog. Upgrading to 
Full.", while there have been no errors?

I seem to be in that situation now.

The client has 'maximum concurrent jobs' set to 3, because the same client is 
used for backing up different NFS-mounted shares as separate jobs. Most of 
those are small, except for one, and that's that one that has the problem.

The normal schedule is either full or differential on Friday night, incremental 
on Monday through Thursday, and nothing on Saturday or Sunday.
Because many full jobs are scheduled for Friday and only a limited number run 
concurrently, it usually only really starts on Saturday morning.


The first overrun of Full into the next scheduled run was caused not by the job 
itself taking too long, but by a copy job that was copying that job from disk 
to tape, and that had to wait for a new blank tape for too long.
From there on I think it took longer than 24 hours to complete because it ran
two schedules of the same job concurrently each time.
At least that's what the director and catalog report.

From Webacula:

Information from DB Catalog : List of Running Jobs

Id      Job Name    Status    Level   Errors   Client   Start Time (yy-mm-dd)
28822   NAS-Elvis   Running   F       -        NAS      2015-04-29 11:29:48
28851   NAS-Elvis   Running   F       -        NAS      2015-04-29 20:06:00

Both are incremental jobs upgraded to Full because of a 'previous error' that 
never occurred.
I just canceled the later one to give the other time to finish before it's 
rescheduled again tonight at 20:06:00.


Besides that, there must be something else I have to find. I don't think it's 
normal that a backup of 600 GB from an NFS share to disk on another NFS share 
takes more than 20 hours, as the last 'normal' run last Saturday did (the 
physical machine the job is on is the SD itself, backing up an NFS share to 
another NFS share).



Re: [Bacula-users] Same job started twice

2015-04-30 Thread Luc Van der Veken
At least the last issue, why that job takes so long, was quickly found.

It turns out a few of my colleagues are dumping a few terabytes of Veeam 
backups to the same target store at the same time, at a rate so high that the 
virtual machines they're backing up stop responding because Veeam is using the 
vmware host's network interface to the full.


From: Luc Van der Veken
Sent: 30 April 2015 9:09
To: bacula-users@lists.sourceforge.net
Subject: Same job started twice

Hi all,

Is it possible that, in version 5.2.5 (Ubuntu),


1)  An incremental job is started according to schedule, before a previous 
full run of the same job has finished?

2)  A nasty side effect when that happens is that the incremental job is 
bounced to full because "Prior failed job found in catalog. Upgrading to 
Full.", while there have been no errors?

I seem to be in that situation now.

The client has 'maximum concurrent jobs' set to 3, because the same client is 
used for backing up different NFS-mounted shares as separate jobs. Most of 
those are small, except for one, and that's that one that has the problem.

The normal schedule is either full or differential on Friday night, incremental 
on Monday through Thursday, and nothing on Saturday or Sunday.
Because many full jobs are scheduled for Friday and only a limited number run 
concurrently, it usually only really starts on Saturday morning.


The first overrun of Full into the next scheduled run was caused not by the job 
itself taking too long, but by a copy job that was copying that job from disk 
to tape, and that had to wait for a new blank tape for too long.
From there on I think it took longer than 24 hours to complete because it ran
two schedules of the same job concurrently each time.
At least that's what the director and catalog report.

From Webacula:

Information from DB Catalog : List of Running Jobs

Id      Job Name    Status    Level   Errors   Client   Start Time (yy-mm-dd)
28822   NAS-Elvis   Running   F       -        NAS      2015-04-29 11:29:48
28851   NAS-Elvis   Running   F       -        NAS      2015-04-29 20:06:00

Both are incremental jobs upgraded to Full because of a 'previous error' that 
never occurred.
I just canceled the later one to give the other time to finish before it's 
rescheduled again tonight at 20:06:00.


Besides that, there must be something else I have to find. I don't think it's 
normal that a backup of 600 GB from an NFS share to disk on another NFS share 
takes more than 20 hours, as the last 'normal' run last Saturday did (the 
physical machine the job is on is the SD itself, backing up an NFS share to 
another NFS share).



[Bacula-users] Same job started twice

2015-04-30 Thread Luc Van der Veken
Hi all,

Is it possible that, in version 5.2.5 (Ubuntu),


1)  An incremental job is started according to schedule, before a previous 
full run of the same job has finished?

2)  A nasty side effect when that happens is that the incremental job is 
bounced to full because "Prior failed job found in catalog. Upgrading to 
Full.", while there have been no errors?

I seem to be in that situation now.

The client has 'maximum concurrent jobs' set to 3, because the same client is 
used for backing up different NFS-mounted shares as separate jobs. Most of 
those are small, except for one, and that's that one that has the problem.

The normal schedule is either full or differential on Friday night, incremental 
on Monday through Thursday, and nothing on Saturday or Sunday.
Because many full jobs are scheduled for Friday and only a limited number run 
concurrently, it usually only really starts on Saturday morning.


The first overrun of Full into the next scheduled run was caused not by the job 
itself taking too long, but by a copy job that was copying that job from disk 
to tape, and that had to wait for a new blank tape for too long.
From there on I think it took longer than 24 hours to complete because it ran
two schedules of the same job concurrently each time.
At least that's what the director and catalog report.

From Webacula:

Information from DB Catalog : List of Running Jobs

Id      Job Name    Status    Level   Errors   Client   Start Time (yy-mm-dd)
28822   NAS-Elvis   Running   F       -        NAS      2015-04-29 11:29:48
28851   NAS-Elvis   Running   F       -        NAS      2015-04-29 20:06:00

Both are incremental jobs upgraded to Full because of a 'previous error' that 
never occurred.
I just canceled the later one to give the other time to finish before it's 
rescheduled again tonight at 20:06:00.


Besides that, there must be something else I have to find. I don't think it's 
normal that a backup of 600 GB from an NFS share to disk on another NFS share 
takes more than 20 hours, as the last 'normal' run last Saturday did (the 
physical machine the job is on is the SD itself, backing up an NFS share to 
another NFS share).



Re: [Bacula-users] Purge console messages?

2015-04-28 Thread Luc Van der Veken
Answering myself: I put this in a script in my home directory as a temporary 
solution.
It may still take some time, but I suppose far less than letting it all be 
displayed.

#!/bin/sh
# Discard the backlog of queued console messages, then start an
# interactive session with a clean slate.
echo messages | bconsole > /dev/null
bconsole


From: Luc Van der Veken [mailto:luc...@wimionline.com]
Sent: 28 April 2015 15:02
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Purge console messages?

Is there a way to clear the buffer containing the console messages waiting to 
be displayed in bconsole?
Or to purge anything older than, say, a day?



[Bacula-users] Purge console messages?

2015-04-28 Thread Luc Van der Veken
Is there a way to clear the buffer containing the console messages waiting to 
be displayed in bconsole?
Or to purge anything older than, say, a day?


I already get all messages via e-mail, nicely collected in one mail per job.
Also, I use Webacula for the daily stuff like checking job results, so I don't 
often fire up bconsole.

Just now I launched it, got that "you have messages", and typed in the Dreaded 
Command 'messages'.
And then spent the next few minutes (literally minutes), watching a backlog of 
messages from probably more than a month scroll over my SSH window at blazing 
speed.

Is there a way to tell bconsole to just kill that list, instead of acting like 
computers in bad movies do, where the hero can actually read all that stuff 
while it flies across his screen, and find the abnormality and fix the bug 
that's killing the lunar colony while it happens?  :)



BTW, now that I'm here anyway, the reason why I launched the console is to go 
looking for what caused this.
I wouldn't have asked the list (yet) as I expect to find some reason for it, 
searching and finding for yourself is a better teacher than asking someone else 
;)

One job backs up a network share of about 600 GB, mounted through NFS on the SD 
because there's no FD on the server.

Its last result, which was a FULL run, reported this:


2015-04-25 19:14:31   bacula-dir JobId 28704: Bacula bacula-dir 5.2.5 (26Jan12):

[snip]
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

An INCREMENTAL run of the same job, started last night, reported this:


2015-04-27 20:06:00   bacula-dir JobId 28793: Prior failed job found in 
catalog. Upgrading to Full.

According to Webacula, there were NO failed jobs in the last 7 days.
There were no other runs of the same job scheduled between the full run last 
Saturday and the incremental that got bounced to full yesterday.
It's the first time I see this happening. The configuration for this job hasn't 
changed in months.



Re: [Bacula-users] Shadow Copy issue - Fatal error: VssOject is NULL

2015-04-21 Thread Luc Van der Veken
As I said, it's not necessarily a disk space issue.
But it is harder to find if it isn't :(

It can also be a RAM issue, even if it would appear that there's plenty of free 
memory left.
Some kernel and driver parts of windows allocate a fixed pool of memory at 
boot, and can't grow that later.

The high part of the error code won't be of much help here either, 7 is 
'facility_win32', which means that a win32 API call failed. Could hardly be 
more generic.


I can only suggest that you google for the error code, see if anything comes up 
that resembles the symptoms you get, and then check the conditions more deeply 
and see if they apply in your case...

Maybe a good place to start: there is a thread about this error, related to VSS 
but with EMC instead of Bacula, at backup central.
http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/emc-networker-19/64bit-win-client-fails-with-vss-errors-100397/
A couple of answers near the bottom of the page give solutions that "worked for 
them".



-Original Message-
From: global16 [mailto:bacula-fo...@backupcentral.com] 
Sent: 21 April 2015 19:19
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Shadow Copy issue - Fatal error: VssOject is NULL

(From: global16 [mailto:bacula-forum < at > backupcentral.com] )
On the client, these entries are logged in C:\Program 
Files\Bacula\working\zhfs01.trace
fs01: vss_generic.cpp:366-0 VSSClientGeneric::Initialize: CoInitialize returned 
0x80070008

Is one of the server's disks full or nearing its capacity?
0x80070008 means "Not enough storage is available to process this command."
This message does not always indicate a lack of disk space, but that is the 
first thing to check if you get it. 
------------------

Thanks for the tips Luc Van der Veken, unfortunately I don't see disk space as 
being an issue.  The 'E' volume on the client is only 60% used with over 500GB 
of free space.  The destination storage pool also has plenty of space free.  It 
is strange that if I reboot the client server, backups with VSS will succeed 
for a few days and then start failing again...

+--
|This was sent by bumba...@hotmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





Re: [Bacula-users] Shadow Copy issue - Fatal error: VssOject is NULL

2015-04-21 Thread Luc Van der Veken
(From: global16 [mailto:bacula-fo...@backupcentral.com] )
> On the client, these entries are logged in C:\Program 
> Files\Bacula\working\zhfs01.trace
>   fs01: vss_generic.cpp:366-0 VSSClientGeneric::Initialize: CoInitialize 
> returned 0x80070008

Is one of the server's disks full or nearing its capacity?

0x80070008 means "Not enough storage is available to process this command."

This message does not always indicate a lack of disk space, but that is the 
first thing to check if you get it.



Tip: to quickly translate a Windows error code like 0x80070008 to a readable 
message, take the lower half - the last 4 digits, in this case 0008, convert it 
from hexadecimal to decimal - in this case 8, and then pass that value after 
'net helpmsg ' at a command prompt.
Works for most error codes, up to a decimal value of a few hundreds.

C:\>net helpmsg 8
Not enough storage is available to process this command.


Some other common ones:
C:\>net helpmsg 3
The system cannot find the path specified.

C:\>net helpmsg 5
Access is denied.

The upper part - 0x8007 - gives info about where the error originated, and the 
sort of message (first digit 8 or C = error), but there's no such way to 
quickly decode it at a command prompt.
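
If you'd rather not do the hex-to-decimal step in your head, PowerShell can do
the masking and the lookup in one go (a sketch):

# Mask off the low 16 bits of the HRESULT and hand the decimal value to net helpmsg:
PS C:\> net helpmsg (0x80070008 -band 0xFFFF)

Not enough storage is available to process this command.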


-Original Message-
From: global16 [mailto:bacula-fo...@backupcentral.com] 
Sent: 21 April 2015 2:03
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Shadow Copy issue - Fatal error: VssOject is NULL

I have been running the same version of Bacula on a Windows 2k12 server for 
over a year (5.2.10).  A few weeks ago, we started getting all our jobs failing 
over the weekend due to an apparent issue with VSS.  Has anyone encountered the 
following fatal error:

20-Apr 19:30 bacula-zha JobId 5546: Start Backup JobId 5546, 
Job=Software_Share_Backup_-_Encrypted.2015-04-20_19.30.00_41
20-Apr 19:30 bacula-zha JobId 5546: Using Device "FileStorage-software" to 
write.
20-Apr 19:30 bacula-zha JobId 5546: Sending Accurate information.
20-Apr 19:30 bacula-sd JobId 5546: Volume "zha-software-encrypted-vol0360" 
previously written, moving to end of data.
20-Apr 19:30 bacula-sd JobId 5546: Ready to append to end of Volume 
"zha-software-encrypted-vol0360" size=1775302040
20-Apr 19:30 fs01 JobId 5546: Fatal error: VSS API failure calling 
"CoInitialize". ERR=Unexpected error. The error code is logged in the error log 
file.
20-Apr 19:30 fs01 JobId 5546: Fatal error: VSS was not initialized properly. 
ERR=Cannot create a file when that file already exists.

20-Apr 19:30 fs01 JobId 5546: Fatal error: VssOject is NULL.
20-Apr 19:30 bacula-sd JobId 5546: Elapsed time=00:00:06, Transfer rate=0  
Bytes/second
20-Apr 19:30 bacula-zha JobId 5546: Error: Bacula bacula-zha 5.2.13 (19Jan13):

The only way to "fix" this is to reboot the client.  Items I have tried:

* Restarted the Bacula service on the client - backups still failed
* Tested connection to client using the 'estimate' command, success. Issue 
isolated to Windows server. 
* List current VSS instances via cmd prompt (vssadmin list writers) - All 
states reported as Stable / No writers reporting errors 
* Manually create a shadow copy on the client (vssadmin create shadow /for=e: ) 
and then manually remove all shadow copies (vssadmin delete shadows /all) - 
backup still fails
* Ran System File Checker (sfc /scannow) - No issues found, backups still fail


After rebooting the client, backups using VSS work fine for a few days and then 
start failing with the same FATAL error.  It is not always the same job that 
fails first.

On the client, these entries are logged in C:\Program 
Files\Bacula\working\zhfs01.trace
  fs01: vss_generic.cpp:366-0 VSSClientGeneric::Initialize: CoInitialize 
returned 0x80070008


If anyone has any ideas on how to further troubleshoot this issue, or has
encountered this before, I'd greatly appreciate it.




Re: [Bacula-users] Job batches

2015-03-03 Thread Luc Van der Veken
Thanks.

I'll give it a try, but because of the way my schedules are organized, I won't 
know if it worked until this time next month.


From: Bryn Hughes [mailto:li...@nashira.ca]
Sent: 03 March 2015 16:14
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Job batches

On 2015-03-03 07:01 AM, Luc Van der Veken wrote:
If jobs are added in two batches at different times, does the oldest batch have 
to be completely finished before the newer one is started?

My backups are made fd -> file (disk) store -> copy to tape (an old LTO2 drive).

One client is huge compared to the others, copying it to tape takes some time 
(about 5 tapes).

My problem is that new incremental backups that should start while it is being 
copied, just sit there "waiting for execution" until the copy operation has 
completed - yet they are set to run at a higher priority, neither the storage 
they are to be written to nor any source fd is in use at the time, and the sd 
and director both have sufficiently high 'maximum concurrent jobs' settings.

I've gone over all configuration files several times to see if I haven't 
forgotten a 'maximum concurrent' or so, but I find no reason why those jobs 
shouldn't start.

PS: sorry if this is a repeat question, it sounds rather familiar while I am 
writing it, but I didn't find an older version.  It's also possible that I 
started writing it a few months ago, but then decided not to post it and 
continue searching a bit more ;)



Do all of your jobs have the same priority setting?  Jobs with different 
priorities won't execute at the same time.

Ah actually I see that you say above you're using different priority levels.  
Make everything the same priority and give it a try.

The priority thing is a little weird, it doesn't work quite the way one might 
expect.  It won't kick off a higher priority job until the current running job 
has completed, so:

- Job 'A' is running with priority 10
- Job 'B' is queued with priority 5

Job 'B' won't execute until Job 'A' has completed.

Something like:

- Job 'A' is running with priority 10
- Job 'B' is queued with priority 5
- Job 'C' is queued with priority 10
- Job 'D' is queued with priority 5

Job 'A' will run until it is complete, then Job 'B' and 'D' will kick off at 
the same time assuming there's no concurrent limits exceeded, then Job 'C' will 
kick off once 'B' and 'D' are completed.
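
In the configuration, that behaviour hangs off the per-Job Priority directive;
a sketch with hypothetical resource names (a lower number means a higher
priority):

Job {
  Name = "BackupCatalog"
  Type = Backup
  Level = Full
  Client = bacula-fd
  FileSet = "Catalog"
  Schedule = "WeeklyCycleAfterBackup"
  Storage = File
  Pool = File
  Messages = Standard
  Priority = 11   # runs only after the queued Priority 10 jobs have finished
}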



Re: [Bacula-users] it seems that Automatic Volume Recycling doesn't work

2015-03-03 Thread Luc Van der Veken
Oops, sorry, it looks like I mistook a very old message in a search result for 
a newly arrived one and replied to it...

From: Luc Van der Veken
Sent: 03 March 2015 16:11
To: bacula-users@lists.sourceforge.net
Cc: 'sos...@mail.com'
Subject: RE: [Bacula-users] it seems that Automatic Volume Recycling doesn't 
work

Hi soshogh,
It looks like you edited the volume retention period in the pool resource.
If any volumes were created before that change, did you update those?  In 
bconsole, 'update volume', option 12 (Volume from Pool).
I'm running the same version on the same OS, and recycling works fine here.

From: sos...@mail.com [mailto:sos...@mail.com]
Sent: 09 May 2014 18:51
To: bacula-users
Subject: [Bacula-users] it seems that Automatic Volume Recycling doesn't work

Hi list
I installed bacula via apt-get on ubuntu 12.04 64bit ,
The version is 5.2.5 .
It "seems" that Automatic Volume Recycling doesn't work
Is there anything I miss ?

## ## ## ## ## ## ## ##
## s dir
## ## ## ## ## ## ## ##

backup-svr-dir Version: 5.2.5 (26 January 2012) x86_64-pc-linux-gnu ubuntu 12.04
Daemon started 09-May-14 23:55. Jobs: run=0, running=2 mode=0,0
Heap: heap=405,504 smbytes=131,970 max_bytes=164,829 bufs=585 max_bufs=586

Scheduled Jobs:
Level          Type     Pri  Scheduled          Name            Volume
======================================================================
Incremental    Backup   10   10-May-14 04:15    BackupClient1   *unknown*
Incremental    Backup   10   10-May-14 04:15    b_184           *unknown*
Incremental    Backup   10   10-May-14 04:15    b_192           *unknown*
Incremental    Backup   10   10-May-14 04:15    backup_linux    *unknown*
Incremental    Backup   10   10-May-14 04:15    backup_jie      *unknown*
Full           Backup   11   10-May-14 23:10    BackupCatalog   *unknown*


Running Jobs:
Console connected at 09-May-14 23:55
Console connected at 10-May-14 00:38
JobId Level   Name   Status
==
   836 Full    BackupCatalog.2014-05-09_23.55.41_03 is waiting for an appendable Volume
   837 Full    BackupCatalog.2014-05-10_00.00.39_04 is waiting execution





## ## ## ## ## ## ## ##
## list volume
## ## ## ## ## ## ## ##

Pool: Default
No results to list.
Pool: File
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
| MediaId | VolumeName | VolStatus | Enabled | VolBytes       | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
|       1 | File1      | Full      |       1 | 64,424,490,627 |       14 |    2,592,000 |       1 |    0 |         0 | File      | 2014-04-17 04:16:53 |
|       2 | File2      | Full      |       1 | 64,424,489,826 |       14 |    2,592,000 |       1 |    0 |         0 | File      | 2014-05-08 22:50:18 |
|       3 | File3      | Full      |       1 | 64,424,445,818 |       14 |    2,592,000 |       1 |    0 |         0 | File      | 2014-04-25 04:17:08 |
|       4 | File4      | Full      |       1 | 64,424,467,353 |       14 |    2,592,000 |       1 |    0 |         0 | File      | 2014-05-09 06:58:35 |
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
Pool: Scratch
No results to list.




## ## ## ## ## ## ## ##
## cfg
## ## ## ## ## ## ## ##

Pool {
  Name = File
  Pool Type = Backup
  Recycle = yes   # Bacula can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 3 days # one year
  Maximum Volume Bytes = 60G  # Limit Volume size to something 
reasonable
  Maximum Volumes = 2   # Limit number of Volumes in Pool
}


sos...@mail.com


Re: [Bacula-users] it seems that Automatic Volume Recycling doesn't work

2015-03-03 Thread Luc Van der Veken
Hi soshogh,
It looks like you edited the volume retention period in the pool resource.
If any volumes were created before that change, did you update those?  In 
bconsole, 'update volume', option 12 (Volume from Pool).
I'm running the same version on the same OS, and recycling works fine here.
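
If you prefer to skip the menu, the change can also be given directly on the
command line (a sketch; the exact keyword spelling may differ between versions):

*update volume=File1 volretention="3 days"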

From: sos...@mail.com [mailto:sos...@mail.com]
Sent: 09 May 2014 18:51
To: bacula-users
Subject: [Bacula-users] it seems that Automatic Volume Recycling doesn't work

Hi list
I installed bacula via apt-get on ubuntu 12.04 64bit ,
The version is 5.2.5 .
It "seems" that Automatic Volume Recycling doesn't work
Is there anything I miss ?

## ## ## ## ## ## ## ##
## s dir
## ## ## ## ## ## ## ##

backup-svr-dir Version: 5.2.5 (26 January 2012) x86_64-pc-linux-gnu ubuntu 12.04
Daemon started 09-May-14 23:55. Jobs: run=0, running=2 mode=0,0
Heap: heap=405,504 smbytes=131,970 max_bytes=164,829 bufs=585 max_bufs=586

Scheduled Jobs:
Level          Type     Pri  Scheduled          Name            Volume
======================================================================
Incremental    Backup   10   10-May-14 04:15    BackupClient1   *unknown*
Incremental    Backup   10   10-May-14 04:15    b_184           *unknown*
Incremental    Backup   10   10-May-14 04:15    b_192           *unknown*
Incremental    Backup   10   10-May-14 04:15    backup_linux    *unknown*
Incremental    Backup   10   10-May-14 04:15    backup_jie      *unknown*
Full           Backup   11   10-May-14 23:10    BackupCatalog   *unknown*


Running Jobs:
Console connected at 09-May-14 23:55
Console connected at 10-May-14 00:38
JobId Level   Name   Status
==
   836 Full    BackupCatalog.2014-05-09_23.55.41_03 is waiting for an appendable Volume
   837 Full    BackupCatalog.2014-05-10_00.00.39_04 is waiting execution





## ## ## ## ## ## ## ##
## list volume
## ## ## ## ## ## ## ##

Pool: Default
No results to list.
Pool: File
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
| MediaId | VolumeName | VolStatus | Enabled | VolBytes       | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
|       1 | File1      | Full      |       1 | 64,424,490,627 |       14 |    2,592,000 |       1 |    0 |         0 | File      | 2014-04-17 04:16:53 |
|       2 | File2      | Full      |       1 | 64,424,489,826 |       14 |    2,592,000 |       1 |    0 |         0 | File      | 2014-05-08 22:50:18 |
|       3 | File3      | Full      |       1 | 64,424,445,818 |       14 |    2,592,000 |       1 |    0 |         0 | File      | 2014-04-25 04:17:08 |
|       4 | File4      | Full      |       1 | 64,424,467,353 |       14 |    2,592,000 |       1 |    0 |         0 | File      | 2014-05-09 06:58:35 |
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
Pool: Scratch
No results to list.




## ## ## ## ## ## ## ##
## cfg
## ## ## ## ## ## ## ##

Pool {
  Name = File
  Pool Type = Backup
  Recycle = yes   # Bacula can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 3 days # one year
  Maximum Volume Bytes = 60G  # Limit Volume size to something 
reasonable
  Maximum Volumes = 2   # Limit number of Volumes in Pool
}

sos...@mail.com


[Bacula-users] Job batches

2015-03-03 Thread Luc Van der Veken
If jobs are added in two batches at different times, does the oldest batch have 
to be completely finished before the newer one is started?

My backups are made fd -> file (disk) store -> copy to tape (an old LTO2 drive).

One client is huge compared to the others, copying it to tape takes some time 
(about 5 tapes).

My problem is that new incremental backups that should start while it is being 
copied, just sit there "waiting for execution" until the copy operation has 
completed - yet they are set to run at a higher priority, neither the storage 
they are to be written to nor any source fd is in use at the time, and the sd 
and director both have sufficiently high 'maximum concurrent jobs' settings.

I've gone over all configuration files several times to see if I haven't 
forgotten a 'maximum concurrent' or so, but I find no reason why those jobs 
shouldn't start.

PS: sorry if this is a repeat question, it sounds rather familiar while I am 
writing it, but I didn't find an older version.  It's also possible that I 
started writing it a few months ago, but then decided not to post it and 
continue searching a bit more ;)




[Bacula-users] Runscript - fail job on error

2015-02-27 Thread Luc Van der Veken
While testing something, I ran into this (director version 5.2.5, Ubuntu 12.04, 
client is an Ubuntu 14.04 running fd version 5.2.6).

Shouldn't "Fail job on error = No" have made it continue regardless of the 
error?

  RunScript {
Command = "/etc/bacula/scripts/pre_backup.sh %l"
Runs on Client = Yes
Runs When = Before
Runs On Success = Yes
Runs On Failure = Yes
Fail job on error = No
  }

I had a syntax error in the script, and got this in the log:

2015-02-27 14:48:05   git-luc-fd JobId 26973: Error: Runscript: 
ClientRunBeforeJob returned non-zero status=2. ERR=Child exited with code 2
2015-02-27 14:48:05   bacula-dir JobId 26973: Fatal error: Bad response to 
ClientRunBeforeJob command: wanted 2000 OK RunBefore
, got 2905 Bad RunBeforeJob command.
2015-02-27 14:48:05   bacula-dir JobId 26973: Fatal error: Client "git-luc-fd" 
RunScript failed.
[...]
Start time: 27-Feb-2015 14:48:04
  End time:   27-Feb-2015 14:48:05
  Elapsed time:   1 sec
  Priority:   10
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s



Re: [Bacula-users] Weird recursive issue [SOLVED]

2015-02-12 Thread Luc Van der Veken
From: Bill Arlofski 
>
> If this is true, it seems like the v7 behavior is the correct behavior, so...
> problem solved I guess. :)

I fail to see the logic in that, but then I may be misunderstanding something.

  Include {
Options {
  Exclude = Yes
  Wilddir = */Temp
}
File = C:/
File = D:/
  }

Means to me: back up C and D, excluding what's specified in the options 
section, i.e. excluding Temp directories.

  Include {
Options {
  Exclude = Yes
}
File = C:/
File = D:/
  }

Means to me: back up C and D, excluding what's specified in the options 
section, i.e. excluding absolutely nothing.




Re: [Bacula-users] Backing up a bare git repository in full and incremental mode

2015-02-11 Thread Luc Van der Veken
Have you tried asking in a git forum/list?
There must be some people with knowledge of git that also use bacula, but I 
would think you'd have more luck there.

Stack overflow is also a good place for all things developer:
http://stackoverflow.com/questions/12129148/incremental-backups-with-git-bundle-for-all-branches


They seem to take an approach of 'to make an incremental backup, git needs 
access to the previous backup to see what's already in there'.

Someone also suggests --since, but the danger in that is that you risk skipping 
changes if you don't specify the _exact_ time and date of the previous backup 
(or earlier). Also, if a backup ever fails, the next one (the next day) will 
just proceed as if it succeeded.
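
The approach from that thread, roughly (a sketch with a hypothetical marker
tag; a real job would loop over repositories the way the pre-backup script
does, and this variant only covers a single branch):

# Full backup: bundle all refs, then record the point we backed up to.
git bundle create /tmp/gitbackups/repo-full.bundle --all
git tag -f last-backup master

# Incremental backup: only commits added to master since the marker, then advance it.
git bundle create /tmp/gitbackups/repo-inc.bundle last-backup..master
git tag -f last-backup master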


-Original Message-
From: Thorsten Reichelt [mailto:bacula-us...@thorsten-reichelt.de] 
Sent: 11 February 2015 19:05
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Backing up a bare git repository in full and 
incremental mode

Hi!

I have to backup one or more bare git repositories but I cannot figure
out whats the best way to do this.

I want to use this backup plan:
Every month => full backup (all branches, tags...)
Every night => incremental backup (changes since last incremental/full)

For that I run a pre-backup script that calls

===
cd $repo
git bundle create /tmp/gitbackups/$repo-all --all
===

for every found repository to perform the monthly full backup.
But I am not shure how to perform an incremental or differential backup
with git. For my SVN repositories I can simply call something like
"svnadmin dump -r 1024 --incremental > /tmp/svnbackups/$repo-inc".

But how to handle this with git?
Maybe I can call "git bundle create --since="yesterday"
/tmp/gitbackups/$repo-inc --all" but seems not to be the same.

Are there any working solutions for this? ;)

Thorsten



Re: [Bacula-users] Bacula Permissions Error

2015-02-04 Thread Luc Van der Veken
Your first and last screenshots say the path is /mnt/iscsi, but the error 
message says /mybackup.
Did you change the path? Reload the configuration after changing it?
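
For the record, reloading doesn't need a restart; in bconsole:

# Re-read the Director's configuration after editing it:
*reload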


From: Greenhagen, Quinton (RTA) [mailto:quinton.greenha...@mso.umt.edu]
Sent: 04 February 2015 17:38
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Bacula Permissions Error

Hello Bacula Users,

I am currently having an issue with my Storage Daemon and I can't for the life 
of me figure out what is going on. Every time Bacula goes to run a job I 
receive a notification saying intervention is needed and to mount the Volume, 
or label a new one for the job:

[screenshot: notification that intervention is needed - mount or label a Volume]

I remote into the Bacula Server we have and run bconsole, and then use the 
label command to try and make a new volume I get the following message:

[screenshot: error message from the label command]

I looked around on other posts about this and have checked my directory 
permissions, but I still can't figure out what is going wrong! I changed 
ownership and modify rights quite a few times, but it still is not working for 
me.
[screenshot: directory ownership and permission settings]
Does anyone know what might be the issue? Any help would be greatly appreciated!

Quinton Greenhagen



Re: [Bacula-users] Bacula 7 on FreeBSD, suddenly can't do anything

2015-01-27 Thread Luc Van der Veken
On 27 January 2015 21:18, dweimer wrote:

> it appears to already have determined the IP prior to it hanging.
> 
> Putting it back to IP instantly connects.

Yes, but I see it's connecting to another IP address (...5.4 instead of ...1.4).
Was it supposed to do that because you changed network settings between 
attempts, or is that the problem?

| Connecting to Director bacula.dweimer.local:9101
| bconsole: bsock.c:208-0 Current 192.168.1.4:9101 All 192.168.1.4:9101

versus

| Connecting to Director 192.168.5.4:9101
| bconsole: bsock.c:208-0 Current 192.168.5.4:9101 All 192.168.5.4:9101
| bconsole: bsock.c:137-0 who=Director daemon host=192.168.5.4 port=9101
| bconsole: bsock.c:310-0 OK connected to server  Director daemon 
192.168.5.4:9101.




Re: [Bacula-users] Bacula 7 on FreeBSD, suddenly can't do anything

2015-01-27 Thread Luc Van der Veken
Probably not what's wrong in your case (it's even a different OS here, Linux), 
but the symptoms are so similar that I'll reply anyway:

* Did you recently install avahi or (Apple's) bonjour / zeroconf?
* Are you using a '.local' TLD for a local DNS domain?

If you have both, that combination breaks name resolution.

It had me scratching my head for a while, because it was still half working. My 
nagios and bacula daemons couldn't find any *.local names anymore, but I could 
still ping the same machines by name at the command prompt.
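
If you want to check whether that is what bit you: the "hosts" line in
/etc/nsswitch.conf is the place to look (typical culprit shown below; your
exact contents may differ):

  $ grep '^hosts' /etc/nsswitch.conf
  hosts: files mdns4_minimal [NOTFOUND=return] dns

With mdns4_minimal listed before dns and the [NOTFOUND=return] action, anything
ending in .local never reaches your DNS server; moving dns in front of it (or
removing mdns4_minimal) restores those lookups.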



-Original Message-
From: dweimer [mailto:dwei...@dweimer.net] 
Sent: 27 January 2015 3:38
To: Bacula Users
Subject: Re: [Bacula-users] Bacula 7 on FreeBSD, suddenly can't do anything

On 01/26/2015 8:04 pm, dweimer wrote:
> My bconsole program suddenly can't connect to the director, My webacula
> setup does very slowly (as in 3 minutes to load a page) give me data.
> Everything was running fine when backups ran just after midnight. No
> updates were installed since the backups were ran, I have restarted
> services, rebooted the server and no change. I have verified that its
> listening on the ports, that DNS and network is working correctly. The
> only log I get is an operation timed out in the system messages log.
> 
> Anybody have any ideas on what to check, can't for the life of me 
> figure
> out what it is hanging up on. I have the Debug level set to 100 running
> in foreground, and still not getting an error logged on the bacula-dir.
> 
> --
> Thanks,
> Dean E. Weimer
> http://www.dweimer.net/

I did find a work around, but not sure where the problem is for sure, 
going to post on FreeBSD list as well. the fix, convert all my hostnames 
in configuration files from fully qualified DNS to IP addresses. I just 
did this on a hunch, but apparently something somehow has broken in the 
Bacula name resolution on FreeBSD or at least suddenly on my system.

-- 
Thanks,
Dean E. Weimer
http://www.dweimer.net/



Re: [Bacula-users] how to debug a job

2015-01-22 Thread Luc Van der Veken
Are you sure bacula is at fault?
I can think of circumstances where the way the source data are organized is to 
blame.

1) Average file size: 1 GB stored as a million files of 1 KB will be much 
slower to read than a single 1 GB file.
2) Too many files in one directory can make access very slow.

The effect is multiplied if these two are walking hand in hand...


Look at the way some applications spread their data over many subdirectories to 
get around the second problem (Squid proxy comes to mind, with its cached data 
distributed over 4096 directories organized in 2 levels: 16 at the first level, 
each containing 256 subdirs at the second level, each of those containing up to 
256 files).
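
If you want to check whether that applies here, something like this (untested,
GNU userland assumed) counts files per directory on the client:

  # strip the filename, then count how often each directory occurs
  find /path/to/data -xdev -type f | sed 's|/[^/]*$||' | sort | uniq -c | sort -rn | head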

Another example: some time ago, I and about 200 others had to upload half a 
dozen files a day, each, to a government FTP server.
After several months during which it was never cleaned up, just getting a 
directory listing of the 'incoming' directory took more than 15 minutes. A 
*large* part of those 15 minutes was not transmission time, but a delay before 
anything started coming in.
Problem: all the usual FTP clients for Windows automatically do an 'ls' after 
every 'cd'. It started happening more and more that the server would time out 
the control channel while the client was still waiting for a response on the 
data channel...


-Original Message-
From: Dimitri Maziuk [mailto:dmaz...@bmrb.wisc.edu] 
Sent: 21 January 2015 23:13
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] how to debug a job

(Take 2)

I've a client with ~316GB to back up. Currently the backup's been
running for 5 days and wrote 33GB to the spool file. Previous runs
failed with

> User specified Job spool size reached: JobSpoolSize=49,807,365,050 
> MaxJobSpoolSize=49,807,360,000
> Writing spooled data to Volume. Despooling 49,807,365,050 bytes ...
> Error: Watchdog sending kill after 518401 secs to thread stalled reading File 
> daemon.

Why is it taking 5 days to write 33GB?

Load avg on the client is 0.9%. Iperf clocks the connection at 110MB/s.
Iostat shows zero wait and .25MB/s read on the client's disk. every few
seconds bacula-fd shows up in iotop w/ read speed around 200-300K/s.
This is a healthy standard sata drive capable of 100MB/s, with ext4
filesystem.

It's a linux (centos 6) x64 client v. 5.0 and server v. 5.2.13 from
slaanesh repo.

How do I find out what's taking so long? What's the debug level I should
give to bacula-fd? Where do debug messages go? Anyone knows?

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu



Re: [Bacula-users] Schedule: week days and month days?

2015-01-19 Thread Luc Van der Veken
From: Bill Arlofski [mailto:waa-bac...@revpol.com] 
> If I do:
>
> * status director days=30
>
> It shows me tonight's scheduled jobs, two jobs from tomorrow morning, all of
> Saturday's scheduled jobs followed by all the jobs to be run on February 7th.

That's probably correct. The documentation says it lists the FIRST occurrence 
of each job, not all of them.

"If you have multiple run statements, the first occurrence of each run 
statement for the job will be displayed for the period specified."


When I tried it, it listed the first occurrence for each level (inc, diff, 
full) for each job.




Re: [Bacula-users] Weekly full backups on tape, Daily incrementals on File

2015-01-16 Thread Luc Van der Veken
You can try limiting your configuration to a single job, and tell it when to do 
a full or incremental backup through the schedule, like below.
Any backup type specified in the Schedule overrides the “Level = Full” in the 
job definition (which is still required to be there, but will be ignored if you 
specify it in the schedule).

Which pool to use can be specified per backup type in the job def.
As with Level, the ‘Pool =’ line is required, but it only provides a default. 
‘Full Backup Pool’ and ‘Incremental Backup Pool’ override it.

The storage can be specified in the pool resource, I moved it there because you 
have two pools on different storage devices. This isn’t required in either the 
job or the pool def, but if it isn’t specified in one it must be in the other.

Also remember that if you change anything in the include or exclude list in a 
fileset resource between backups, the next backup will default to full again 
(unless you tell it not to, by including an "Ignore Fileset Changes = yes" line 
in it – which will itself only take effect after the next backup, so you may 
still be saddled with a full the first time even if you'd rather have an 
incremental).


Based upon your configuration, I come to something like this (but without 
testing, absolutely no guarantee that it is working).


Schedule {
  Name = "WeeklyCycle"
  Run = Full sat at 20:10
  Run = Incremental mon-fri at 20:10
}

Job {
  Name = homedir-bioinfo03-weekly
  Client = bioinfo03.ibi.unicamp.br-fd
  JobDefs = DefaultJob
  FileSet = homedir-bioinfo03
  Level = Full
  Schedule = WeeklyCycle
  Spool Data = Yes
  Pool = Tape-Weekly
  Full Backup Pool = Tape-Weekly
  Incremental Backup Pool = File-Daily
  Type = Backup
  Messages = Standard
}

Pool {
  Name = Tape-Weekly
  Pool Type = Backup
  Storage = tape-autochanger
  Recycle = yes
  AutoPrune = yes
  Purge Oldest Volume = Yes
  Volume Retention = 28 days
  LabelFormat="Week-"
}

Pool {
  Name = File-Daily
  Pool Type = Backup
  Storage = File
  Recycle = yes
  AutoPrune = yes
  Purge Oldest Volume = Yes
  Volume Retention = 6 days
  Maximum Volume Bytes = 80G
  Maximum Volumes = 5
  Label Format = "Vol-"
}
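
After a reload you can sanity-check what the scheduler will actually do, e.g.
in bconsole:

  *reload
  *status dir days=7

That should show the Full run going to Tape-Weekly on Saturday and the
Incrementals going to File-Daily on the other days.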

From: Gustavo Lacerda [mailto:glace...@lge.ibi.unicamp.br]
Sent: 16 January 2015 15:23
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Weekly full backups on tape, Daily incrementals on File

Hi,

This is the first time I'm configuring Bacula. I have an autochanger with 8 
slots and 1 drive. I tried to schedule a weekly full backup on tape and daily 
differential backups on file. This is a small test; the whole full has 2 GB of 
data. The tape backup ran and terminated with no errors. However, the daily 
differential backups on File didn't recognize that I had already done a full 
backup on tape, and Bacula asks me to do a Full backup on file. Could you 
please help me?


The relevant parts of my dir.conf:

Job {
  Name = homedir-bioinfo03-weekly
  Client = bioinfo03.ibi.unicamp.br-fd
  JobDefs = DefaultJob
  FileSet = homedir-bioinfo03
  Level = Full
  Schedule = WeeklyCycle
  Storage = tape-autochanger
  Spool Data = Yes
  Pool = Tape-Weekly
  Type = Backup
  Messages = Standard
}

Job {
  Name = homedir-bioinfo03-daily
  Client = bioinfo03.ibi.unicamp.br-fd
  JobDefs = DefaultJob
  FileSet = homedir-bioinfo03
  Level = Differential
  Schedule = WeeklyCycle
  Storage = File
  Spool Data = no
  Pool = File-Daily
  Type = Backup
  Messages = Standard
}

FileSet {
  Name = "homedir-bioinfo03"
  Include {
Options {
  signature = MD5
  onefs = yes
}
  File=/usr/local/data/lge/eduformi
  }
}

Pool {
  Name = Tape-Weekly
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Purge Oldest Volume = Yes
  Volume Retention = 28 days
  LabelFormat="Week-"
}

Pool {
  Name = File-Daily
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Purge Oldest Volume = Yes
  Volume Retention = 6 days
  Maximum Volume Bytes = 80G
  Maximum Volumes = 5
  Label Format = "Vol-"
}

Best regards,
Gustavo


Re: [Bacula-users] Dir inserting Attributes

2015-01-05 Thread Luc Van der Veken
It sounds like the ‘Name’ index on the Filename table doesn’t fit in RAM 
anymore in its entirety.
Have you tried increasing your MySQL buffers?
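
Which variable matters depends on the storage engine of the Bacula tables;
roughly (my.cnf sketch, the sizes are placeholders to tune to your RAM):

  [mysqld]
  innodb_buffer_pool_size = 2G    # if the tables are InnoDB
  key_buffer_size         = 1G    # if they are MyISAM (index cache)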


From: Leandro César [mailto:leandro.cesar.d...@gmail.com]
Sent: 05 January 2015 14:52
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Dir inserting Attributes

Hello everyone!


I'm having trouble resolving a slowdown after the end of a job.

When I check the status of the client, I get the following:

*status client=x.com.br
Connecting to Client xx.com.br at .bkp:9102

x.com.br Version: 5.0.3 (04 August 2010)  
x86_64-slackware-linux-gnu slackware Slackware 13.37.0
Daemon started 21-Dec-14 06:54. Jobs: run=20 running=0.
 Heap: heap=270,336 smbytes=88,939 max_bytes=250,626 bufs=76 max_bufs=774
 Sizeof: boffset_t=8 size_t=8 debug=0 trace=0

Running Jobs:
Director connected at: 05-Jan-15 11:26
No Jobs running.


Terminated Jobs:
 JobId  LevelFiles  Bytes   Status   FinishedName
==
  48948  Full121,6759.750 G  OK   05-Jan-15 10:56 x_backup


But I see the following status on the Director:

Running Jobs:
Console connected at 05-Jan-15 11:23
 JobId Level   Name   Status
==
 48948 Fullxx_backup.2015-01-05_10.27.58_03 Dir inserting Attributes



When checking in the database (MySQL), I see the query below, which has been 
running for a long time (30 minutes):


show processlist;

 Id | User   | Host      | db     | Command | Time | State        | Info
----+--------+-----------+--------+---------+------+--------------+------------------
  3 | bacula | localhost | bacula | Sleep   |  523 |              | NULL
  4 | bacula | localhost | bacula | Query   | 1843 | Sending data | INSERT INTO Filename (Name) SELECT a.Name FROM (SELECT DISTINCT Name FROM batch) AS a WHERE NOT EXIS
  7 | root   | localhost | NULL   | Query   |    0 | NULL         | show processlist

3 rows in set (0.00 sec)


For information, this client is an email server with many files, but for this 
test I limited the backup to a single specific directory (approx. 10 GB).

When backing up the whole server (1.1 TB), the "Dir inserting Attributes" 
message stays on the Director for more than 30 hours.

Thank you all.

--
Att,
Leandro César



Re: [Bacula-users] Schedule question

2014-12-11 Thread Luc Van der Veken
> From: D S 

> Hello,
>
> is there a quick way to set the schedule to be "every other week"
> (to create full backups every 14 days i.e. on even weeks since
> 01.01.1971 for example)


Not exactly AFAIK, but you can get close by specifying 1st + 3rd week of the 
month or 1st + 3rd + 5th +... week of the year.
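
For example, the first-and-third-week approximation would look like this
(untested):

Schedule {
  Name = "FullEveryOtherWeek"
  Run = Full 1st sat at 23:05
  Run = Full 3rd sat at 23:05
}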

This explains how the scheduler works internally, scroll up a bit for the 
syntax:
http://www.bacula.org/5.2.x-manuals/en/main/main/Configuring_Director.html#SECTION00146




Re: [Bacula-users] get files of list incr

2014-12-05 Thread Luc Van der Veken
The easiest way (IMO) is via Webacula, if you have it installed.
Click the job number in the job history, then in the job details page click 
“Listing Files”.
That will show all files backed up in that job, along with their modification 
timestamp and size.

If the job is no longer shown in the recent job list but you know its job ID, 
you can enter the uri manually:
http://your-webacula-server/webacula/file/list/jobid/12345



At the console, ‘list files jobid=’ will list the files, but shows only the 
total size, not the size of each file.


From: Thomas Manninger [mailto:dbgtmas...@gmx.at]
Sent: 05 December 2014 9:56
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] get files of list incr

Hello!

is it possible to get a list of all files saved by my last incremental backup, 
and the size of each file?
I need it because the incremental backup of one server is greater than 10 GB 
every day, and I want to know which files are so big.

Thanks

Regards
Thomas


Re: [Bacula-users] trouble with Filesets

2014-11-12 Thread Luc Van der Veken
1 – Case correct?  Is it really named SomeFolder, or is it Somefolder or 
somefolder?
You can add the “Ignore Case = yes” option, but I’m not sure that will extend 
into an Exclude section.

2 – Have you tried appending a slash?  First the docs say that you must append 
one to indicate a directory if you use wildcards in an exclude section (I 
suppose that means ending in / instead of /*), but then further on there’s a 
“don’t add a trailing /” comment in one of the examples, so I’m not sure how to 
do it either.
I went for the exclude=yes option and specifying my excludes there to be sure.
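
Applied to your fileset, that would look something like this (untested, adapted
from what I use):

FileSet {
  Name = "Server1 Files Set"
  Include {
    Options {
      signature = SHA1
      compression = GZIP
      exclude = yes
      wilddir = "D:/SomeFolder"   # directories matched here are excluded
    }
    File = D:/
  }
}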


From: Florian [mailto:florian.spl...@web.de]
Sent: 13 November 2014 7:51
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] trouble with Filesets

Hello.

I seem to have some problems with my filesets...

They are all for windows machines and are set up like this:

  Name = "Server1 Files Set"
  Include {
Options {
  signature = SHA1
  compression = GZIP
}
  File = D:/
  }
  Exclude {
  File = D:/SomeFolder
  }

"SomeFolder" is still being backed up. Guess I didn't quite understand this 
yet. Do I have to use Wilddir und exclude = yes in the include options instead?

Regards,

Florian S.


Re: [Bacula-users] bacula 7 list volumes

2014-10-31 Thread Luc Van der Veken
And sorry for the double reply, but I only noticed this after hitting send: 
you’re not at a MySQL prompt, try omitting that semicolon an the end ;)


From: Luc Van der Veken
Sent: 31 October 2014 15:31
To: bacula-users@lists.sourceforge.net
Cc: 'Tim Dunphy'
Subject: RE: [Bacula-users] bacula 7 list volumes

Not running bacula 7 here, but have you tried ‘list volume’?

Both work in 5.2, but the online help (“h list”) only mentions ‘volume’.

*h list
  Command   Description
  ===   ===
  list  List objects from catalog

Arguments:
pools | jobs | jobtotals | volume | media  | files 
jobid= | copies jobid=


From: Tim Dunphy [mailto:bluethu...@gmail.com]
Sent: 31 October 2014 15:19
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] bacula 7 list volumes

Hey guys,

 I seem to remember being able to 'list volumes' in older versions of bconsole. 
But now that I'm on bacula 7 on the server it doesn't seem to understand that 
command.

*list volumes;
Unknown list keyword: volumes;

And I've even dug through some bacula documentation that seems to suggest 
that's a valid command. So while I may have lost my mind, I don't think it's to 
the extent that I'm just making up commands that I thought I could use.

Can I get some help here as to why this is happening?

Thanks
Tim

--
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B


Re: [Bacula-users] bacula 7 list volumes

2014-10-31 Thread Luc Van der Veken
Not running bacula 7 here, but have you tried ‘list volume’?

Both work in 5.2, but the online help (“h list”) only mentions ‘volume’.

*h list
  Command   Description
  ===   ===
  list  List objects from catalog

Arguments:
pools | jobs | jobtotals | volume | media  | files 
jobid= | copies jobid=


From: Tim Dunphy [mailto:bluethu...@gmail.com]
Sent: 31 October 2014 15:19
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] bacula 7 list volumes

Hey guys,

 I seem to remember being able to 'list volumes' in older versions of bconsole. 
But now that I'm on bacula 7 on the server it doesn't seem to understand that 
command.

*list volumes;
Unknown list keyword: volumes;

And I've even dug through some bacula documentation that seems to suggest 
that's a valid command. So while I may have lost my mind, I don't think it's to 
the extent that I'm just making up commands that I thought I could use.

Can I get some help here as to why this is happening?

Thanks
Tim

--
GPG me!!

gpg --keyserver pool.sks-keyservers.net 
--recv-keys F186197B


Re: [Bacula-users] Migrate: copy or duplicate?

2014-10-28 Thread Luc Van der Veken
Sorry, as usual I think I found it within minutes after sending out a request 
for help.
I should have re-read the documentation first, instead of afterward.

  Selection Type = PoolUncopiedJobs

I had left it at PoolTime and reduced the time to 1 minute.
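
For the archives, the relevant lines in the job definition below are now:

  Type = Copy
  Selection Type = PoolUncopiedJobs   # copy each job once, not on every run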


-Original Message-
From: Luc Van der Veken [mailto:luc...@wimionline.com] 
Sent: 28 October 2014 8:31
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Migrate: copy or duplicate?

Sorry if this is a stupid question, but with copy instead of migrate, how do 
you keep Bacula from copying the same jobs over and over again each time a copy 
job is being run?

My original backup schema was like this:

* Full backup to disk once a month on Friday night, systems distributed about 
evenly over 1st to 4th Friday, 1 month retention.
* Differential backup the other Fridays, 1 month retention.
* Incremental every Monday to Thursday, with 1 week retention.
** Migrate full and differential backups older than 1 week to tape on Monday.
** Jobs moved to tape get retention of 3 months.  Tapes are stored off-site.

But this meant that
- I got off-site backups only after a full week,
- restores almost always required the tapes to be brought in,
so a few weeks ago, I changed the last two steps into this:

** Changed migrate into a copy job that copies full and differential backups to 
tape on Monday. No minimum age restrictions.
** Retention of jobs on disk 1 month, copies on tape 3 months.


I was used to some fluctuation in the number of tapes per week I needed with 
the first schema, but it started to look a bit ridiculous yesterday, so I 
checked the original job numbers: it was copying the same full backups to tape 
for the third time.


Doesn't bacula keep track of which jobs it has already copied and which not?
Actually this is something that had crossed my mind when I was making the 
changes, but that I dismissed because it has to keep track of those copies to 
promote them to main later, so it must "know" about them.


The job definition for the migrate job that I changed to copy:

# Migration job to move older jobs to tape
Job {
  Name = Transfer to Tape
#  Type = Migrate
  Type = Copy
  Pool = File
  Selection Type = PoolTime
  Messages = Standard
  Client = bacula-main  # required and checked for validity, but ignored at 
runtime
  Level = full  # idem
  FileSet = BaculaSet   # ditto
  Priority = 20
## only for migration jobs
#  Purge Migration Job = yes# purge migrated jobs after successful migration
  Schedule = TransferToTapeSchedule
  Maximum Concurrent Jobs = 5
}

-Original Message-----
From: Luc Van der Veken [mailto:luc...@wimionline.com] 
Sent: 15 September 2014 13:22
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Migrate: copy or duplicate?

From: Radosław Korzeniewski [mailto:rados...@korzeniewski.net] 

> No. All copies goes to database as well, but they are indirectly available
> for restore and are promoted as a main backup only when original job expire.
> I could be wrong about it, but it was working as described last time I've 
> check.

Thanks, it looks like I misunderstood or misread that part of the docs.



Re: [Bacula-users] Migrate: copy or duplicate?

2014-10-28 Thread Luc Van der Veken
Sorry if this is a stupid question, but with copy instead of migrate, how do 
you keep Bacula from copying the same jobs over and over again each time a copy 
job is being run?

My original backup schema was like this:

* Full backup to disk once a month on Friday night, systems distributed about 
evenly over 1st to 4th Friday, 1 month retention.
* Differential backup the other Fridays, 1 month retention.
* Incremental every Monday to Thursday, with 1 week retention.
** Migrate full and differential backups older than 1 week to tape on Monday.
** Jobs moved to tape get retention of 3 months.  Tapes are stored off-site.

But this meant that
- I got off-site backups only after a full week,
- restores almost always required the tapes to be brought in,
so a few weeks ago, I changed the last two steps into this:

** Changed migrate into a copy job that copies full and differential backups to 
tape on Monday. No minimum age restrictions.
** Retention of jobs on disk 1 month, copies on tape 3 months.


I was used to some fluctuation in the number of tapes per week I needed with 
the first schema, but it started to look a bit ridiculous yesterday, so I 
checked the original job numbers: it was copying the same full backups to tape 
for the third time.


Doesn't bacula keep track of which jobs it has already copied and which not?
Actually this is something that had crossed my mind when I was making the 
changes, but that I dismissed because it has to keep track of those copies to 
promote them to main later, so it must "know" about them.


The job definition for the migrate job that I changed to copy:

# Migration job to move older jobs to tape
Job {
  Name = Transfer to Tape
#  Type = Migrate
  Type = Copy
  Pool = File
  Selection Type = PoolTime
  Messages = Standard
  Client = bacula-main  # required and checked for validity, but ignored at 
runtime
  Level = full  # idem
  FileSet = BaculaSet   # ditto
  Priority = 20
## only for migration jobs
#  Purge Migration Job = yes# purge migrated jobs after successful migration
  Schedule = TransferToTapeSchedule
  Maximum Concurrent Jobs = 5
}

-Original Message-----
From: Luc Van der Veken [mailto:luc...@wimionline.com] 
Sent: 15 September 2014 13:22
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Migrate: copy or duplicate?

From: Radosław Korzeniewski [mailto:rados...@korzeniewski.net] 

> No. All copies goes to database as well, but they are indirectly available
> for restore and are promoted as a main backup only when original job expire.
> I could be wrong about it, but it was working as described last time I've 
> check.

Thanks, it looks like I misunderstood or misread that part of the docs.



Re: [Bacula-users] Configuration reload for bacula-sd

2014-10-24 Thread Luc Van der Veken
I've noticed this under Ubuntu (Server 12.04) too.
It looks like reload doesn't have any effect for bacula-sd 5.2.x, so you have 
to wait until no jobs are running and then restart it.

I haven't tried to figure out if it is a limitation in bacula-sd or a 
configuration error in the service commands that control it, because my backups 
run at night and I only make configuration changes during daytime anyhow.
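
What I do amounts to roughly this (untested as a one-liner; the storage name
"File" and the service name are specific to my setup):

  echo "status storage=File" | bconsole | grep -q "No Jobs running" \
    && service bacula-sd restart \
    || echo "SD still busy, not restarting"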


-Original Message-
From: Andrea Carpani [mailto:andrea.carp...@dnshosting.it] 
Sent: 24 October 2014 9:48
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Configuration reload for bacula-sd

Hi all,

I'm still new to the product and I was playing around with Bacula 5.2.6 
(latest packages that come with CentOS 6.5). I added a new storage 
device to the Storage Daemon. I tried to run a job that used this 
device, but the backup failed, apparently because bacula-sd wasn't aware 
of this new device.

I tried to use

service bacula-sd reload

but this didn't work, so I had to restart bacula-sd, but this broke my 
running backups. So my question is: is there a way to reload bacula-sd 
configuration on the fly?

regards,

Andrea
.a.c.





Re: [Bacula-users] backups failed with gethostbyname() error

2014-09-24 Thread Luc Van der Veken
Hi Tim,

It works now, but it looks like it didn't work at the time of the backup.
A network problem at that moment, a (DNS) server that was down or being 
rebooted, or a power failure with the servers running on UPS but one of the 
switches not, perhaps?

Anyhow, I would look at the error as a symptom, with the cause somewhere 
outside bacula.  The backup didn’t exactly fail, it couldn’t be started because 
bacula couldn’t find the server it was supposed to back up.


From: Tim Dunphy [mailto:bluethu...@gmail.com]
Sent: 24 September 2014 18:50
To: Dimitri Maziuk
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] backups failed with gethostbyname() error

Hey Dmitriy,

 Yes, definitely the client host is resolving well in DNS from the bacula 
server.

 Take a look!

[root@ops:~] #host beta-new.mydomain.com
beta-new.mydomain.com has address 162.xx.xx.xx



[root@ops:~] #dig beta-new.mydomain.com

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-20.P1.el5_8.6 <<>> 
beta-new.mydomain.com
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35466
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;beta-new.mydomain.com. IN  A

;; ANSWER SECTION:
beta-new.mydomain.com.  61  IN  A   
162.xx.xx.xx

;; Query time: 1 msec
;; SERVER: 172.16.0.23#53(172.16.0.23)
;; WHEN: Wed Sep 24 12:48:16 2014
;; MSG SIZE  rcvd: 55

But as mentioned I tried putting the hostname and IP for the remote server I 
want to backup into the /etc/hosts file on my bacula server. We'll see where 
that gets us for tonight's run, just in case this was some kind of DNS issue.

Thanks!
Tim

On Wed, Sep 24, 2014 at 12:37 PM, Dimitri Maziuk <dmaz...@bmrb.wisc.edu> wrote:
On 09/24/2014 10:01 AM, Tim Dunphy wrote:

> 24-Sep 03:11 ops.mydomain.com JobId 218: Error: 
> bsock.c:194 gethostbyname()
> for host "beta-new.mydomain.com" failed: 
> ERR=Name or service not known

In case it's not clear from other posts, this looks like ops.mydomain.com can't 
resolve beta-new.mydomain.com, and the check is to run "host" or "dig" 
beta-new.mydomain.com on ops.

--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu





--
GPG me!!

gpg --keyserver pool.sks-keyservers.net 
--recv-keys F186197B


Re: [Bacula-users] Send Messages through external mail service

2014-09-17 Thread Luc Van der Veken
Hi Florian,

When I was talking about an external mail server, I was thinking of one in your 
local network, external to your bacula server.
Sorry if I wasn't thinking far enough ;)

For most (home & small business) accounts, you should not need authentication 
if you send the mail through the outgoing mail server of your own internet 
provider.  At least that's how things work here in Belgium.
In that case it's best to set your own address as the From address, to make 
sure that the mail is accepted. This is true especially if you are sending to 
an address not hosted by your own isp (for example a gmail address).
AFAIK all providers require that the FROM address is at least one in an 
existing domain nowadays (and a domain for which an MX DNS record exists, so 
they know it has a mail server somewhere).



On top of that, I would recommend setting up a local mail server (meaning in 
your LAN, not necessarily on the bacula machine) to relay the mail to the right 
destination(s) for you, if only to avoid your messages getting lost if your 
internet connection is down at the time they are being sent.  A local mail 
server will accept the mail and keep trying to pass it on until it succeeds, 
whereas bsmtp will try just once.  I'm managing a 30-site WAN in which internal 
mail is being handled that way, and my experience with the reliability of even 
"Professional" DSL and cable connections is that such a buffering mail server 
at each location is no luxury.

But setting up a mail server is a discussion for another list ;)



-Original Message-
From: Florian [mailto:florian.spl...@web.de] 
Sent: 17 September 2014 10:11
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Send Messages through external mail service

Hello.

The problem is that I need to authenticate when using GMX or Gmail, for 
instance, and I have found no way of doing this with bsmtp alone.

Regards,

Florian S.

Am 16.09.2014 um 15:52 schrieb Luc Van der Veken:
> I'm using bsmtp and it works fine, just configure the correct host
> in the command line (-h hostname) to send the mails through your
> external smtp server.
>
>
> -Original Message-
> From: Florian [mailto:florian.spl...@web.de]
> Sent: 16 September 2014 15:00
> To: bacula-users@lists.sourceforge.net
> Subject: [Bacula-users] Send Messages through external mail service
>
> Hello.
>
> I am currently trying to get the bacula email notifications to work.
> I would like to use an existing, external mail account to send these
> notifications instead of setting up an smtp server.
>
> Can I use bsmtp to do this or do I require additional packages?
> If so, which packages would you suggest?
>
> Thanks in advance!
>
> Regards,
>
> Florian S.
>




Re: [Bacula-users] Send Messages through external mail service

2014-09-16 Thread Luc Van der Veken
I'm using bsmtp and it works fine, just configure the correct host
in the command line (-h hostname) to send the mails through your
external smtp server.
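
For reference, the Messages resource I use looks roughly like this (the host
and addresses are placeholders, and the path to bsmtp varies per install):

Messages {
  Name = Standard
  mailcommand = "/usr/sbin/bsmtp -h mail.example.com -f \"Bacula <bacula@example.com>\" -s \"Bacula: %t %e of %c %l\" %r"
  mail = admin@example.com = all, !skipped
}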


-Original Message-
From: Florian [mailto:florian.spl...@web.de] 
Sent: 16 September 2014 15:00
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Send Messages through external mail service

Hello.

I am currently trying to get the bacula email notifications to work.
I would like to use an existing, external mail account to send these 
notifications instead of setting up an smtp server.

Can I use bsmtp to do this or do I require additional packages?
If so, which packages would you suggest?

Thanks in advance!

Regards,

Florian S.



Re: [Bacula-users] Wakeonlan in bacula

2014-09-15 Thread Luc Van der Veken
From: Bill Arlofski [mailto:waa-bac...@revpol.com]
> 1. From a shell prompt:   bacula-dir -t -c /path/to/bacula-dir.conf


The documentation seems to agree with you, but I once found another command 
somewhere (don't remember exactly where), with just the -v switch.

So the reload script I created to check & reload without having to enter the 
full details looks like this:

#!/bin/sh
bacula-dir -v /etc/bacula/bacula-dir.conf
[ $? -eq 0 ] && service bacula-director reload || echo "Config error found, NOT reloading."


It seems to work as intended, with good as well as with bad config files.
Is it really trying to start another instance of bacula-dir, or was I using 
some undocumented feature without realizing?


When I add a -t switch, with or without -c, I get 4 or 5 lines of output about 
orphaned buffers.
With only -v, I don't get those, but I *do* get the correct exit code to 
indicate OK or bad config.




Re: [Bacula-users] Migrate: copy or duplicate?

2014-09-15 Thread Luc Van der Veken
From: Radosław Korzeniewski [mailto:rados...@korzeniewski.net] 

> No. All copies goes to database as well, but they are indirectly available
> for restore and are promoted as a main backup only when original job expire.
> I could be wrong about it, but it was working as described last time I've 
> check.

Thanks, it looks like I misunderstood or misread that part of the docs.



[Bacula-users] SD for synology NAS?

2014-09-15 Thread Luc Van der Veken
Has anyone created the storage and/or file daemons for use on a Synology NAS 
running DSM 5.0?

Some time ago I found directions on the web for compiling and installing bacula 
on DSM 4.0, but that required some NAS hacking (I regarded it as a sort of 
jailbreaking) that I'd rather not experiment with on a production system.



[Bacula-users] Migrate: copy or duplicate?

2014-09-15 Thread Luc Van der Veken
Hi all,

My current configuration backs up to a NAS, and later a migration job 
moves completed backups to tape for off-site storage.

That puts up a dilemma: copy, or move?

When I copy the backups to tape, I understand that only the original (NAS) 
version remains in the database, which would complicate things if ever a backup 
has to be restored from tape (find out which tape contains the files you are 
looking for, then scan that tape to get its catalog, etc.).

When I move (migrate) them to tape, the backups on NAS are deleted at that 
time. Tapes are stored off-site, so I have to get in my car and go get them 
before I can restore anything.

Isn't there a way to achieve a combination of both of these, so a copy would 
sort of "duplicate" the data in the database as well, and allow me to restore 
from the NAS or from tape, whichever is handy and available?

It would be even better (saving space on the NAS) if I could use different 
retention periods for both copies.



Re: [Bacula-users] DAT72 USB tape-drive support on Linux 2.6 kernel?

2014-08-12 Thread Luc Van der Veken
http://wiki.bacula.org/doku.php?id=hardware_results

It looks like that drive is listed, or at least one that’s very similar 
(Freecom USB DAT-72e).


From: Huub Van Niekerk [mailto:huubvanniek...@yahoo.com]
Sent: 12 August 2014 10:43
To: Andreas Nastke; Bacula-users
Subject: Re: [Bacula-users] DAT72 USB tape-drive support on Linux 2.6 kernel?

Hi,

It actually is the HP EB625A, internal USB DAT72 drive but packed as an 
external Freecom USB DAT drive. As far as I know, USB isn't SCSI...

On Tuesday, August 12, 2014 10:10 AM, Andreas Nastke <nas...@gdp-group.com> wrote:

start by plugging the hardware in and power it on.

search the logs for messages from whatever driver on your system feels
responsible for this hardware.


Huub Van Niekerk schrieb:
> Thank you for the response.
> You correctly mention that it has to talk SCSI; it's an external USB drive 
> and I don't know the internals of the drive.
> Apart from that, I'm still waiting for an answer on a earlier question about 
> database tables. Bacula hasn't run yet because there are no Bacula MySQL 
> tables, and yes, MySQL runs fine with another database (24/7 actually).
>
>
>
> On Monday, August 11, 2014 8:28 PM, Dan Langille <d...@langille.org> wrote:
>
>
>
> On Aug 8, 2014, at 9:08 AM, Huub Van Niekerk <huubvanniek...@yahoo.com> wrote:
>
>
>
> Hi,
>>
>> After reading the manuals of v.5 and 7, I doubt my USB DAT72/DDS tapedrive 
>> is supported. I used to work with Barracuda, but that license expired. So 
>> can anyone give me advice about this?
>
> Bacula doesn’t care about the make/model of your tape drive.  Bacula only 
> cares that it talks SCSI.  If the OS supports it, Bacula supports it.. in 
> general.
>
> Have you tried it yet?  What’s stopping you?
>
> —
> Dan Langille


--
Mit besten Grüßen / Kind Regards

Andreas Nastke
IT System Management

g/d/p Markt- und Sozialforschung GmbH
Ein Unternehmen der Forschungsgruppe g/d/p
Richardstr. 18
D-22081 Hamburg
Fon: +49 (0)40 / 29876-117
Fax: +49 (0)40 / 29876-127
nas...@gdp-group.com
www.gdp-group.com

Sitz der Gesellschaft ist Hamburg, Handelsregister Hamburg, HRB 40482
Geschäftsführer: Christa Braaß, Volker Rohweder


? It does work using Barracuda backup software


Re: [Bacula-users] bacula.org web site non-functional?

2014-07-25 Thread Luc Van der Veken
Oops, my bad...

The Blog and Recent Topics sections at the top remain the same everywhere and 
take up so much space that I have to page down to see the actual content, which 
*is* there.


-Original Message-
From: Luc Van der Veken 
Sent: 25 July 2014 10:39
To: bacula-users@lists.sourceforge.net
Subject: RE: [Bacula-users] bacula.org web site non-functional?

Hi Kern,

It isn't working for me either (Chrome on Windows 7).
All links just seem to refresh the opening page, can't get into the 
documentation or support pages.


-Original Message-
From: Kern Sibbald [mailto:k...@sibbald.com] 
Sent: 25 July 2014 9:52
To: Wolfgang Denk; bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] bacula.org web site non-functional?

Hello,

www.bacula.org is perfectly functional.

What OS and browser are you using?

Systems known to work, Windows IE, Mac Safari, Linux Firefox,  Windows
Chrome, ...

Kern

On 07/25/2014 09:41 AM, Wolfgang Denk wrote:
> Hi,
>
> is it only for me that the bacula.org web site is non-functional?
> I always get only an entry page, and no matter which link I click, the
> content will not change - exception: the links in the "Downloads" menu
> still work.
>
> Especially, it is impossible to get access to the documentation...
>
> Or am I missing something?
>
> Best regards,
>
> Wolfgang Denk
>




Re: [Bacula-users] bacula.org web site non-functional?

2014-07-25 Thread Luc Van der Veken
Hi Kern,

It isn't working for me either (Chrome on Windows 7).
All links just seem to refresh the opening page, can't get into the 
documentation or support pages.


-Original Message-
From: Kern Sibbald [mailto:k...@sibbald.com] 
Sent: 25 July 2014 9:52
To: Wolfgang Denk; bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] bacula.org web site non-functional?

Hello,

www.bacula.org is perfectly functional.

What OS and browser are you using?

Systems known to work, Windows IE, Mac Safari, Linux Firefox,  Windows
Chrome, ...

Kern

On 07/25/2014 09:41 AM, Wolfgang Denk wrote:
> Hi,
>
> is it only for me that the bacula.org web site is non-functional?
> I always get only an entry page, and no matter which link I click, the
> content will not change - exception: the links in the "Downloads" menu
> still work.
>
> Especially, it is impossible to get access to the documentation...
>
> Or am I missing something?
>
> Best regards,
>
> Wolfgang Denk
>




Re: [Bacula-users] "Socket terminated" message after backup complete

2014-07-07 Thread Luc Van der Veken
Thanks, this ('harmless') is what I expected after having googled for it a bit.

When I started using bacula, I got the enterprise binaries for Windows after 
reading that they were no longer being produced in the community version.
5.2.10 was the latest version of the community windows binaries I found at that 
time, 6.0.6 in enterprise, and that's what I'm using.

According to 
http://www.baculasystems.com/windows-binaries-for-bacula-community-users, 6.0.6 
is still the latest version.
Does this mean the bug was never fixed there, or is it the text on that page 
that needs updating?
Or is it something else entirely, and not this bug that's hitting me?



-Original Message-
From: Thomas Lohman [mailto:thom...@mtl.mit.edu] 
Sent: 07 July 2014 20:12
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] "Socket terminated" message after backup complete


> Because traffic is going through those firewalls, I had already
> configured keepalive packets (heartbeat) at 300 seconds. In my first
> tests, backups *did* fail because that was missing.  Now they don't
> seem to fail anymore, but there's that "socket terminated" message
> every now and then that doesn't belong there.
>

Hi,

This seems like the problem that you're having.

http://bugs.bacula.org/view.php?id=1925

I believe this was fixed in community client version 5.2.12 and I can 
verify that we no longer see these warning/error messages on clients 
that have been upgraded to >= 5.2.12.  We still see it on Windows 
machines that are running 5.2.10.  I don't know which version of the 
Enterprise client has this fix in it.

The messages themselves are mainly harmless so you can ignore them if 
you want to.


--tom



[Bacula-users] "Socket terminated" message after backup complete

2014-07-07 Thread Luc Van der Veken
Sorry for the subject line, forgot to replace it with something more 
descriptive :(


-Original Message-
From: Luc Van der Veken [mailto:luc...@wimionline.com] 
Sent: 07 July 2014 8:54
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] FW: Bacula: Backup OK of sphad01-fd Differential

I'm using Bacula to back up a shop that's about half linux, half windows.

Three of the Windows machines are located in another network.
A few times per week, I find a result like the one below for a back-up of one 
of these three.
As it stands now, I have a total of 4 such messages for 3 machines over the 
last 7 days.

Traffic is routed through two firewalls that regard each other as trusted (each 
network has its own firewall, each had a spare port in it, I just connected a 
cable between those spare ports and configured routing so everything passes, no 
restricting firewall rules for that connection).

Because traffic is going through those firewalls, I had already configured 
keepalive packets (heartbeat) at 300 seconds.
In my first tests, backups *did* fail because that was missing.  Now they don't 
seem to fail anymore, but there's that "socket terminated" message every now 
and then that doesn't belong there.
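
(For reference, that is this directive, set in the matching daemon resources on
both ends; 300 is the value I use:)

  Heartbeat Interval = 300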

Director and SD are 5.2.5 (default version found in Ubuntu 12.04 LTS).
All Windows clients are using the enterprise FD version 6.0.6.
The windows clients that are exposing these symptoms are Server 2008 R2.  A 
server 2003 that was previously located in the same network also exhibited the 
problem, but neither version in the local (to bacula-dir and bacula-fd) network 
ever does.


Does anyone have an idea what might cause this?


[The Pre and post backup jobs you see are just empty files on this machine, I 
have them configured everywhere and fill in the files where necessary].


04-Jul 22:09 bacula-dir JobId 19317: Start Backup JobId 19317, 
Job=sphad01.2014-07-04_20.05.00_38
04-Jul 22:09 bacula-dir JobId 19317: Using Device "FileStorage"
04-Jul 22:09 sphad01-fd JobId 19317: shell command: run ClientBeforeJob 
""C:/Program Files/Eurautomat/BaculaPreBackup.cmd" Differential"
04-Jul 22:09 sphad01-fd JobId 19317: Generate VSS snapshots. Driver="Win64 
VSS", Drive(s)="CE"
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "Task 
Scheduler Writer", State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "VSS Metadata 
Store Writer", State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "Performance 
Counters Writer", State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "System 
Writer", State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "ASR Writer", 
State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "FRS Writer", 
State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "WMI Writer", 
State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "Shadow Copy 
Optimization Writer", State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "Registry 
Writer", State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "COM+ REGDB 
Writer", State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "Dhcp Jet 
Writer", State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "NTDS", 
State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 bacula-sd-sd JobId 19317: Job write elapsed time = 00:15:26, 
Transfer rate = 10.46 M Bytes/second
04-Jul 22:24 sphad01-fd JobId 19317: Error: lib/bsock.c:350 Socket is 
terminated=1 on call to client:10.9.0.89:9102
04-Jul 22:24 sphad01-fd JobId 19317: shell command: run ClientAfterJob 
""C:/Program Files/Eurautomat/BaculaPostBackup.cmd" Differential"
04-Jul 22:24 bacula-dir JobId 19317: Bacula bacula-dir 5.2.5 (26Jan12):
  Build OS:   x86_64-pc-linux-gnu ubuntu 12.04
  JobId:  19317
  Job:sphad01.2014-07-04_20.05.00_38
  Backup Level:   Differential, since=2014-06-14 10:53:31
  Client: "sphad01-fd" 6.0.6 (30Sep12) Microsoft Windows Server 
2008 R2 Standard Edition Service Pack 1 (build 7601), 64-bit,Cross-compile,Win64
  FileSet:"sphad01-set" 2014-04-29 09:34:07
  Pool:   "File" (From Job DiffPool override)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"File" (From Pool resource)
  Scheduled time: 04-Jul-2014 20:05:00
  Start time:  

[Bacula-users] FW: Bacula: Backup OK of sphad01-fd Differential

2014-07-07 Thread Luc Van der Veken
I'm using Bacula to back up a shop that's about half Linux, half Windows.

Three of the Windows machines are located in another network.
A few times per week, I find a result like the one below for a backup of one 
of these three.
As it stands now, I have a total of 4 such messages for 3 machines over the 
last 7 days.

Traffic is routed through two firewalls that regard each other as trusted (each 
network has its own firewall, each had a spare port, and I connected a cable 
between those spare ports and configured routing so everything passes; there are 
no restrictive firewall rules on that link).

Because traffic is going through those firewalls, I had already configured 
keepalive packets (heartbeat) at 300 seconds.
In my first tests, backups *did* fail because that was missing.  Now they don't 
seem to fail anymore, but there's that "socket terminated" message every now 
and then that doesn't belong there.
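
For reference, the keepalive is just the Heartbeat Interval directive on both 
ends of the connection. A minimal sketch of how that looks, assuming the 
standard 5.2.x directive names (resource names and paths below are placeholders, 
not my actual config):

  # bacula-fd.conf on each Windows client
  FileDaemon {
    Name = client-fd
    FDport = 9102
    WorkingDirectory = "C:/Program Files/Bacula/working"
    Pid Directory = "C:/Program Files/Bacula/working"
    Heartbeat Interval = 300   # keepalive every 5 minutes, below the firewalls' idle timeout
  }

  # bacula-sd.conf on the storage daemon
  Storage {
    Name = bacula-sd
    SDPort = 9103
    WorkingDirectory = "/var/lib/bacula"
    Pid Directory = "/var/run/bacula"
    Heartbeat Interval = 300   # same interval on the SD side of the data connection
  }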

Director and SD are 5.2.5 (the default version in Ubuntu 12.04 LTS).
All Windows clients are using the enterprise FD version 6.0.6.
The Windows clients showing these symptoms are Server 2008 R2. A Server 2003 
machine previously located in the same network also exhibited the problem, but 
neither Windows version ever shows it when the client is in the network local 
to bacula-dir and the SD.


Does anyone have an idea what might cause this?


[The pre- and post-backup jobs you see are just empty script files on this 
machine; I have them configured everywhere and fill in the files where necessary.]


04-Jul 22:09 bacula-dir JobId 19317: Start Backup JobId 19317, 
Job=sphad01.2014-07-04_20.05.00_38
04-Jul 22:09 bacula-dir JobId 19317: Using Device "FileStorage"
04-Jul 22:09 sphad01-fd JobId 19317: shell command: run ClientBeforeJob 
""C:/Program Files/Eurautomat/BaculaPreBackup.cmd" Differential"
04-Jul 22:09 sphad01-fd JobId 19317: Generate VSS snapshots. Driver="Win64 
VSS", Drive(s)="CE"
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "Task 
Scheduler Writer", State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "VSS Metadata 
Store Writer", State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "Performance 
Counters Writer", State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "System 
Writer", State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "ASR Writer", 
State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "FRS Writer", 
State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "WMI Writer", 
State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "Shadow Copy 
Optimization Writer", State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "Registry 
Writer", State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "COM+ REGDB 
Writer", State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "Dhcp Jet 
Writer", State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 sphad01-fd JobId 19317: VSS Writer (BackupComplete): "NTDS", 
State: 0x1 (VSS_WS_STABLE)
04-Jul 22:24 bacula-sd-sd JobId 19317: Job write elapsed time = 00:15:26, 
Transfer rate = 10.46 M Bytes/second
04-Jul 22:24 sphad01-fd JobId 19317: Error: lib/bsock.c:350 Socket is 
terminated=1 on call to client:10.9.0.89:9102
04-Jul 22:24 sphad01-fd JobId 19317: shell command: run ClientAfterJob 
""C:/Program Files/Eurautomat/BaculaPostBackup.cmd" Differential"
04-Jul 22:24 bacula-dir JobId 19317: Bacula bacula-dir 5.2.5 (26Jan12):
  Build OS:   x86_64-pc-linux-gnu ubuntu 12.04
  JobId:  19317
  Job:sphad01.2014-07-04_20.05.00_38
  Backup Level:   Differential, since=2014-06-14 10:53:31
  Client: "sphad01-fd" 6.0.6 (30Sep12) Microsoft Windows Server 
2008 R2 Standard Edition Service Pack 1 (build 7601), 64-bit,Cross-compile,Win64
  FileSet:"sphad01-set" 2014-04-29 09:34:07
  Pool:   "File" (From Job DiffPool override)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"File" (From Pool resource)
  Scheduled time: 04-Jul-2014 20:05:00
  Start time: 04-Jul-2014 22:09:07
  End time:   04-Jul-2014 22:24:34
  Elapsed time:   15 mins 27 secs
  Priority:   10
  FD Files Written:   7,470
  SD Files Written:   7,470
  FD Bytes Written:   9,690,701,338 (9.690 GB)
  SD Bytes Written:   9,692,463,456 (9.692 GB)
  Rate:   10453.8 KB/s
  Software Compression:   35.0 %
  VSS:yes
  Encryption: no
  Accurate:   no
  Volume name(s): FileStorage0003
  Volume Session Id:  1001
  Volume Session Time:1401794541
  Last V

Re: [Bacula-users] Volume not recycling?

2014-05-23 Thread Luc Van der Veken
I'm no expert, so I could be wrong, but here are a few things I noticed.
If I'm wrong, completely or in a detail, I hope someone will correct me so I 
learn something from it myself.

- Looks like it's a disk volume, and there's no limit on volume size, job 
count, or usage time, nor a "Use Volume Once". I think Bacula will just keep 
appending to it without ever starting a new volume (see the sketch below).
- File and job retention of 1 year would span 12 full backups.
- Volume retention of 1 month: that starts counting from the last write to the 
volume, not from day one. It may also never kick in (or at least later than you 
expect; I don't know if there's a default maximum size) because there's no 
limit on volume use.
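
A sketch of what I mean for the full pool; untested, and the 50 GB cap is just 
an example value I picked, not a recommendation:

  Pool {
    Name = fileserverFull-Pool
    Pool Type = Backup
    Volume Retention = 1 month
    Recycle = yes
    AutoPrune = yes
    LabelFormat = fileserverFull-
    Maximum Volume Bytes = 50G    # start a new volume after ~50 GB so old ones can age out
    Maximum Volume Jobs = 1       # one job per volume; the retention clock starts when that job ends
  }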



From: Henrique Machado [mailto:henri@gmail.com]
Sent: 23 May 2014 14:52
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Volume not recycling?

Hello!

I'm backing up a Windows 2008 R2 64-bit file server. I just want a full backup 
on the 1st Friday and incrementals on the other days.

Current Windows Used disk: 1.13 TB
Current Full backup Volume: 3.4 TB
Current Incremental backup Volume: 167 GB

Why is my full backup volume 3x larger than the disk I'm backing up?

Thanks.


OS: CentOS release 6.5
Bacula Version: 5.0.0


#fileserver.conf

Client {
  Name = fileserver-fd
  Password = 123456789
  Address = fileserver.domain.local
  FDPort = 9102
  Catalog = MyCatalog
  File Retention = 1 year
  Job Retention = 1 year
}

Job {
  Name = fileserverDefault-Job
  Type = Backup
  Level = Incremental
  Client = fileserver-fd
  FileSet = fileserverDefault-Fileset
  Schedule = fileserverDefault-Schedule
  Storage = Default
  Pool = Default
  Full Backup Pool = "fileserverFull-Pool"
  Incremental Backup Pool = "fileserverIncremental-Pool"
  Messages = Standard
}

Pool {
  Name = fileserverFull-Pool
  Pool Type = Backup
  Volume Retention = 1 month
  Recycle = yes
  AutoPrune = yes
  LabelFormat = fileserverFull-
}

Pool {
  Name = fileserverIncremental-Pool
  Pool Type = Backup
  Volume Retention = 1 month
  Recycle = yes
  AutoPrune = yes
  LabelFormat = fileserverIncremental-
}

FileSet {
  Name = fileserverDefault-Fileset
  Include {
File = D:/Public
File = D:/Users
Options {
  Signature = MD5
  Compression = GZIP5

}
  }
}


Schedule {
  Name = fileserverDefault-Schedule
  Run = Level=Full Pool=fileserverFull-Pool 1st fri at 19:00
  Run = Level=Incremental Pool=fileserverIncremental-Pool daily at 22:00
}



Re: [Bacula-users] limit bandwidth during backup job

2014-05-22 Thread Luc Van der Veken
I think it's in the 5.x enterprise version, and in 7.0 of the open source version.

You could try using trickle, as suggested in http://www.iniy.org/?p=195
Never tried it, don't know if it works, but it looks promising.
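
For what it's worth, trickle is an LD_PRELOAD shim that rate-limits whatever 
process it wraps, with rates given in KB/s. A sketch of how I'd expect it to be 
invoked (untested; the path and the ~190 KB/s figure, roughly 1.5 Mbit/s, are 
my own assumptions):

  # run the file daemon in the foreground, capped in both directions
  trickle -s -u 190 -d 190 /usr/sbin/bacula-fd -f -c /etc/bacula/bacula-fd.conf

And once on 7.0 (per Eric's note below), the native directive should make the 
wrapper unnecessary. If I read the release notes right, it goes in the Client 
resource of bacula-dir.conf:

  Client {
    Name = slow-link-fd                     # placeholder name
    Address = client.example.com
    Password = "secret"
    Catalog = MyCatalog
    Maximum Bandwidth Per Job = 1500kb/s    # 7.0+ only
  }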


-Original Message-
From: Eric Bollengier [mailto:eric.bolleng...@baculasystems.com] 
Sent: 22 May 2014 9:23
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] limit bandwidth during backup job

Hello,

On 05/21/14 15:52, Steven Haigh wrote:
> On 21/05/14 23:27, kamran ayub wrote:
>> Dear team,
>>
>> Please help me out in limiting bandwidth speed of backup jobs of client
>> metioned in director conf.
> 
> The manual is a good start:
> http://www.bacula.org/5.2.x-manuals/en/main/main/Configuring_Director.html#SECTION00143
> 
> You'll see the speed limit section in there.
> 

I think you found a mistake in the 5.2 manual: the speed limitation feature was
introduced in Bacula 7.0. It means you need the Director and the clients running
Bacula 7.0.
Bye


Re: [Bacula-users] latest bacula client (bacula-fd) for Windows

2014-04-17 Thread Luc Van der Veken
http://blog.bacula.org/p710/

Second to last paragraph: “We are still working on new Windows binaries as well 
as releasing a full set of binaries for many platforms. Hopefully that will be 
finished before the end of April.”

To close the gap between 5.2.10 and 7.0, Windows binaries 6.0.6 are available 
commercially for a small fee at 
http://www.baculasystems.com/windows-binaries-for-bacula-community-users

I don't know if any of these can be used with 7.0.



Re: [Bacula-users] When does bacula use multiple tape drives

2014-03-31 Thread Luc Van der Veken
> From: John Drescher [mailto:dresche...@gmail.com] 

> It is possible to have jobs writing at the same time to the same
> volume loaded in the same bacula storage device. In this case bacula
> will interleave the data.

Hi John,

Does that apply to file storage as well, or only tape?

I've been trying to get it to do that on a file store, but jobs kept getting 
put on hold "waiting for storage".
Maybe I overlooked a maximum-connections setting somewhere? In that case (I 
think) it must be one that isn't used in the default configuration files.
Or does the interleave size or a spool buffer have to be set explicitly before 
it will work?
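
For anyone hitting the same wall: the directive I've been looking at is 
Maximum Concurrent Jobs, which (as far as I can tell) defaults to 1 in most 
resources, so it has to be raised in several places at once. A sketch of where 
it lives, not a confirmed fix; names, addresses, and passwords are placeholders, 
and other required directives are omitted for brevity:

  # bacula-dir.conf: the director's own limit
  Director {
    Name = bacula-dir
    QueryFile = "/etc/bacula/query.sql"
    Password = "console-password"
    Maximum Concurrent Jobs = 20
  }

  # bacula-dir.conf: the Storage resource that points at the SD
  Storage {
    Name = File
    Address = bacula-sd.example.com
    Password = "sd-password"
    Device = FileStorage
    Media Type = File
    Maximum Concurrent Jobs = 20
  }

  # bacula-sd.conf: the SD side has its own limit as well
  Storage {
    Name = bacula-sd
    WorkingDirectory = "/var/lib/bacula"
    Pid Directory = "/var/run/bacula"
    Maximum Concurrent Jobs = 20
  }

  # Job and Client resources accept the same directive too (default 1).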




Re: [Bacula-users] New server, strange behavior

2014-03-25 Thread Luc Van der Veken
An additional question: now that the director is running again (after a reboot, 
actually), I notice that the catalog says the failed job 14976 is still 
running, while the director says it isn't (and actually doesn't even include it 
in the 'status' output, neither as running nor as terminated).

Does that faulty catalog state need to be fixed, or will it go away when the 
maximum run time expires?
If it does need fixing, how is that done? Google doesn't seem to find the 
answer right away.
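
The only approach I've come across so far is to correct the Job record directly 
in the catalog. This is an assumption on my part and untested, so dump the 
catalog before trying it:

  -- run against the bacula catalog with mysql or psql
  -- JobStatus codes: 'R' = running, 'f' = fatal error
  UPDATE Job SET JobStatus = 'f' WHERE JobId = 14976 AND JobStatus = 'R';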

I restarted the job to see if the problem comes back, and the director does 
list the new instance as job 14977.


[Bacula-users] New server, strange behavior

2014-03-25 Thread Luc Van der Veken
Hi,

I had a Bacula director daemon die on me today; after simply restarting it, 
everything looks fine again.
Before it stopped, some strange things happened with the backup it was pulling 
in from a client.
in from a client.

The server (running director and storage daemon) is a VM running bacula 5.2.5 
on Ubuntu server 12.04 LTS
As file daemon I am using the bacula systems enterprise windows client 6.0.6.
The server was created by cloning another VM that has been working flawlessly 
for months. After it was cloned, the machine name was changed, database cleaned 
up by purging all jobs, files, volumes & everything I could find, and finally 
the config files were cleaned out so I could start adding a new set of clients 
and jobs.


Currently, there are two clients and two jobs defined.
The clients are two almost identical (and very old) Windows machines, running 
the same application; they differ only in name and address (and database 
content).

One of these jobs has been running successfully for a week, the other was added 
yesterday and was started for the first time around 3AM this morning.

Now the older client was still backed up successfully, but with the new one it 
almost looks as if several things went wrong at the same time: essentially, it 
looks like the connection between FD and SD was lost, while the connection 
between FD and director, which runs on the same machine as the SD, remained up.


But the client is not where I am focusing now, I'm trying to find out what 
happened to the director at or after that moment.

When I came in this morning, I discovered that the director daemon was no 
longer running.
The log file ends like this (names edited):

25-Mar 05:25 bacula-dir-2 JobId 14975: Rescheduled Job 
client2.2014-03-24_09.15.32_13 at 25-Mar-2014 05:25 to re-run in 900 seconds 
(25-Mar-2014 05:40).
25-Mar 05:25 bacula-dir-2 JobId 14976: Job client2.2014-03-25_05.25.16_58 
waiting 900 seconds for scheduled start time.
25-Mar 05:26 bacula-dir-2 JobId 14976: Fatal error: Max run time exceeded. Job 
canceled.
25-Mar 05:26 bacula-dir-2 JobId 14976: Fatal error: Job canceled because max 
start delay time exceeded.
25-Mar 05:25 bacula-dir-2 JobId 14976: Job client2.2014-03-25_05.25.16_58 
waiting 900 seconds for scheduled start time.
25-Mar 05:26 bacula-dir-2 JobId 14976: Fatal error: Max run time exceeded. Job 
canceled.
25-Mar 05:26 bacula-dir-2 JobId 14976: Fatal error: Job canceled because max 
start delay time exceeded.

Which is strange in more than one regard:

* Reschedule in 900 seconds, then time out a minute later.

* Doing that twice in a row; but also, the clock seems to have run backwards 
in between, so I guess I'm just seeing the same messages written to the log twice.

* There is no maximum run time defined in my config, so the default of 6 days 
should apply, and in any case this client was only added to the .conf yesterday.
In fact, the FDs are running speed-capped at 1.5 Mbps on 2 Mbps connections; the 
job was expected to take somewhere between 16 and 20 hours to finish, but it 
failed (hence the reschedule) after 2.5 hours. The other client completed in 
15 hours, and that one's database is slightly smaller.

* No indication as to why the daemon stopped. All I can add is that it still 
mailed this job's result to me, so it must have happened after the job was 
considered finished and before I arrived at about 7:30.
I checked other log files (syslog etc.), but no indication there either.



Re: [Bacula-users] Bacula only showing directory structure on restores

2014-03-20 Thread Luc Van der Veken
This reminds me of something I noticed a couple of weeks ago when I was using 
Webacula.

Some directories that should be there, were not shown in the restore selection 
list.
When I entered their path manually, it found them and everything looked OK from 
there on down.

I don't know if it was caused by webacula itself or the underlying bacula setup.


From: James Lumby [mailto:jlu...@icontrolesi.com]
Sent: 19 March 2014 20:36
To: Greg Woods
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Bacula only showing directory structure on restores

I appreciate that, the reason I brought it up is because someone had previously 
mentioned I should check it.  All of my retention variables are set to 365 days 
and I am not up against that dead line yet.  I just can't find a reason as to 
why I can only restore directory structure (and even then only down to a 
certain level) and all the files below it are gone.

Thank you,
James Lumby
Infrastructure Manager
iControl ESI
972.239.9200 x213
jlu...@icontrolesi.com
www.icontrolesi.com
DALLAS | SAN FRANCISCO

From: Greg Woods <g...@gregandeva.net>
Date: Wednesday, March 19, 2014 at 12:46
To: James Lumby <jlu...@icontrolesi.com>
Cc: "bacula-users@lists.sourceforge.net" <bacula-users@lists.sourceforge.net>
Subject: Re: [Bacula-users] Bacula only showing directory structure on restores

On Wed, 2014-03-19 at 16:32 +, James Lumby wrote:
> Files do not seem to have expired, as the file retention is 365 days.

Since I was confused by this as well, I thought I'd jump in here
briefly. "File retention" really means how long the file records in the
database are kept. It has nothing directly to do with how long the
actual files are kept. The latter is a function of when the volumes are
recycled, which is controlled by volume retention. I don't know if this
is related to your issue, but I thought I'd point it out so that you
don't beat your head against the same wall that I did.

--Greg
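
To make that distinction concrete: the two kinds of retention live in different 
resources of bacula-dir.conf. A sketch with the standard directive names 
(resource names and values are illustrative only):

  # catalog record lifetimes, per client
  Client {
    Name = fileserver-fd
    Address = fileserver.example.com
    Password = "fd-password"
    Catalog = MyCatalog
    File Retention = 365 days    # how long file *records* stay in the catalog
    Job Retention = 365 days     # how long job records stay in the catalog
  }

  # data lifetime, per pool
  Pool {
    Name = Full-Pool
    Pool Type = Backup
    AutoPrune = yes
    Volume Retention = 365 days  # how long before a volume may be recycled,
                                 # i.e. how long the actual backed-up files survive
  }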






Re: [Bacula-users] Internet Tolerant

2014-03-05 Thread Luc Van der Veken
Can you verify whether your external IP address is the same before and after 
the error?
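
One way to check, if you don't have access to the router's status page, is to 
poll an external "what's my IP" service from the client around the failure 
time; ifconfig.me is just one example of such a service:

  # log the external address every 5 minutes during the backup window
  while true; do date; curl -s ifconfig.me; echo; sleep 300; done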

I know of a DSL provider that regularly forces a new external IP address onto 
its clients to prevent them running servers on a standard subscription level 
(as opposed to a more expensive professional level, which is sold with a fixed 
IP).
They don't do it very often - once every 36 hours.
If something like that is happening, all existing connections are lost at that 
point, which could explain your problem.

I have several incoming OpenVPN tunnels at my office, some from accounts at 
this provider. They tend to go down briefly before automatically reconnecting; 
I've always assumed that is when the IP changes.

Disclaimer: I never tried to use bacula over those tunnels...
 

-Original Message-
From: andersonn21 [mailto:bacula-fo...@backupcentral.com] 
Sent: 05 March 2014 19:13
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Internet Tolerant

... Sorry, submitted before I was finished ...

This is stopping at about 3.5 hours, and 500+ MB of data backed up. I am 
attempting to backup about 6GB over a standard DSL line using OpenVPN.

