Re: [Bacula-users] Fw: Aw: Re: Renamed and moved files are backed up again

2017-09-06 Thread Roberts, Ben
Hi Fabian,

Others on the list may correct me if I’m wrong, but fundamentally Bacula treats 
different filenames as different entities to be backed up, regardless of 
content being shared across multiple files. The flags you have been changing 
only relax the checks for a given filename to see whether it should be backed 
up or not based on changes to content or metadata. Renaming or moving a file 
will cause the object to be backed up under the new filename again.

There are three possible solutions to your problem I’m aware of; the latter 
two are enterprise features which I have not used, so I may be wrong about 
their capabilities.

- Accept that moved/renamed files will be backed up multiple times, and have 
sufficient storage on your SD to account for this. If you can, minimise the 
number of moved/renamed files.

- Deduplication volumes 
(http://www.baculasystems.com/wp-content/uploads/bacula-enterprise-v6-deduplication-volumes1.pdf)
I think the data is still sent over the network and committed to the filesystem 
again, but because the data content is identical the blocks are only stored 
once on disk, and so don’t take up additional storage space. Looking at the 
PDF, this requires either ZFS or NetApp. Deduplication on ZFS at least demands 
a lot of memory overhead, so your SD machine has to be appropriately spec’d.

- Global Endpoint Deduplication 
(https://www.baculasystems.com/products/bacula-enterprise-data-backup-software/global-endpoint-deduplication)
Stores identical blocks only once across all clients. I think this will handle 
the case where the filenames are different but the content is shared, as 
happens with file moves and renames.

Regards,
Ben Roberts

This email and any files transmitted with it contain confidential and 
proprietary information and is solely for the use of the intended recipient.
If you are not the intended recipient please return the email to the sender and 
delete it from your computer and you must not use, disclose, distribute, copy, 
print or rely on this email or its contents.
This communication is for informational purposes only.
It is not intended as an offer or solicitation for the purchase or sale of any 
financial instrument or as an official confirmation of any transaction.
Any comments or statements made herein do not necessarily reflect those of GSA 
Capital.
GSA Capital Partners LLP is authorised and regulated by the Financial Conduct 
Authority and is registered in England and Wales at Stratton House, 5 Stratton 
Street, London W1J 8LA, number OC309261.
GSA Capital Services Limited is registered in England and Wales at the same 
address, number 5320529.
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Cancel Job thats already queued or running

2017-07-27 Thread Roberts, Ben
Hi Oliver,

> It’s possible to define that a job will be cancelled if this job is already 
> queued or running?
Take a look at the “Allow Duplicate Jobs” and related directives in the Job 
resource, described here: 
http://www.bacula.org/9.0.x-manuals/en/main/Configuring_Director.html#SECTION00183
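
As a sketch (the job name and other directives here are placeholders, not from 
your config; check the manual for exactly which copy gets cancelled), a Job 
resource that refuses duplicates and cancels the queued copy might look like:

```
Job {
  Name = "example-backup"          # placeholder name
  # ... usual Client/FileSet/Schedule/Storage directives ...
  Allow Duplicate Jobs = no        # treat a second queued/running copy as a duplicate
  Cancel Queued Duplicates = yes   # cancel the duplicate still waiting in the queue
}
```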

Regards,
Ben Roberts



Re: [Bacula-users] Bacula largest installations ever

2017-07-10 Thread Roberts, Ben
I manage a sizeable Bacula installation:
~70 FD
~10,000 volumes
~2,000 jobs currently in the catalog, of 45,000 lifetime jobs for this 
instance of director/catalog
~4.8 PB currently tracked by jobs in catalog
~30.1TB largest single job (which took 3d 9h on last run)

This workload has been served fine by a single open-source director/catalog 
for many years (running on MySQL until the beginning of this year; since 
switched to Postgres, though not because of any performance issues).
Maintaining it currently requires less than one person-day of effort per month 
(after a fair amount of tuning).

Regards,
Ben Roberts

From: Jason Voorhees [mailto:jvoorhe...@gmail.com]
Sent: 10 July 2017 17:18
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Bacula largest installations ever



Hello bats:

I was wondering if you can share any of the biggest Bacula
deployments you have ever made. I mean anything like "I use Bacula to
back up thousands of desktops/servers", or "I have a backup
infrastructure with Bacula for more than 30TB of data being backed
up".

It's just that I want to know how robust and scalable Bacula might be
compared to other competitors such as TSM, HP Dataprotector, Backup
Exec, among others.

This is because one of my customers wants to back up more than 2,000
desktops and backup software is under evaluation.

Thanks in advance for your comments, if any.




Re: [Bacula-users] Special Permissions to Stop and Start Services during backup

2017-04-28 Thread Roberts, Ben
Hi Jim,

Note that sudo requires the command be executed from a TTY by default for 
security, which isn't compatible with how system services run. Do you have a 
defaults entry for bacula that disables the "requiretty" option? Not having 
this would manifest as a permission-denied error, as if the sudo rule hadn't 
taken effect.

> Defaults:bacula !requiretty

Giving bacula full access to systemctl is also not consistent with the 
principles of least privilege, and potentially dangerous. You would be safer 
providing multiple sudo rules to start and stop just the services you need 
bacula to have control over.
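
For example (the unit names and paths below are illustrative only, not from 
your system), a sudoers fragment along these lines grants just the start/stop 
operations:

```
# /etc/sudoers.d/bacula -- illustrative; adjust paths and service names
Defaults:bacula !requiretty
bacula ALL=(root) NOPASSWD: /usr/bin/systemctl start postgresql.service, \
                            /usr/bin/systemctl stop postgresql.service
```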

Regards,
Ben Roberts



Re: [Bacula-users] Dell TL1000 IBM3850-HH7

2017-03-19 Thread Roberts, Ben
Hi Jim,

> I am using bacula-7.0.5-7 which looks like it was packaged for el7 in May of 
> 2015. I will use the SPEC and related files from it with the latest source 
> and see where things go.

If you're looking for newer el6/7 releases to avoid building from source, take 
a look at Slaanesh's COPR repository: 
https://copr.fedorainfracloud.org/coprs/slaanesh/Bacula/. These include the 
latest releases since Jan 2017 and so will contain the btape fixes mentioned 
earlier in this thread.

Regards,
Ben Roberts



Re: [Bacula-users] btape fill failure on HP LTO6/4 drives

2017-01-29 Thread Roberts, Ben
This is very interesting to hear; I was just about to upgrade my backup boxes 
to 11.3 this month. I've been ignoring the EOT error for a long time, since it 
has no impact on my ability to take or restore backups, but I look forward to 
confirming your findings in the near future.

Regards,
Ben Roberts

From: Kern Sibbald [mailto:k...@sibbald.com]
Sent: 28 January 2017 13:03
To: Allan Black ; bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] btape fill failure on HP LTO6/4 drives

Very interesting. It seems we do not run a full fill test on real tape
drives very often nor do we run it on alternative systems like
Solaris. Too bad Solaris did not run the full Bacula regression tests :-)

Kern

On 01/28/2017 01:00 PM, Allan Black wrote:
> On 09/05/14 14:09, Allan Black wrote:
>> On 01/04/14 10:26, Roberts, Ben wrote:
>>>> It appears that the OS tape driver does not properly
>>>> implement back space record after an EOT. This is a defect of the operating
>>>> system driver, but it is not fatal for Bacula.
>>> Indeed I was seeing the same failure to backspace over EOT error in the job 
>>> logs:
>>> End of Volume "GSA784L6" at 3910:8005 on device "drive-1-tapestore1" 
>>> (/dev/rmt/1mbn). Write of 64512 bytes got 0.
>>> Error: Backspace record at EOT failed. ERR=I/O error
>>> End of medium on Volume "GSA784L6" Bytes=3,909,704,343,552 
>>> Blocks=60,604,295 at 28-Mar-2014 21:28
>> I am getting exactly the same symptoms, also under Solaris 11, except this 
>> time
>> with an LTO2 drive. The drive worked perfectly under Solaris 10, though, and 
>> I
>> only started seeing this after upgrading to 11.1.
>>
>> The btape fill/m test gave me this at the end of the first tape:
>>
>> Wrote block=316, file,blk=204,13499 VolBytes=203,857,855,488 rate=20.62 
>> MB/s
>> 08-May 16:59 btape JobId 0: End of Volume "TestVolume1" at 204:15112 on 
>> device
>> "lto" (/dev/rmt/4cbn). Write of 64512 bytes got 0.
>> 08-May 16:59 btape JobId 0: Error: Backspace record at EOT failed. ERR=I/O 
>> error
>> btape: btape.c:2702 Last block at: 204:15111 this_dev_block_num=15112
>> btape: btape.c:2737 End of tape 204:-1. Volume Bytes=203,961,913,344. Write 
>> rate
>> = 20.60 MB/s
>> 08-May 16:59 btape JobId 0: End of medium on Volume "TestVolume1"
>> Bytes=203,961,913,344 Blocks=3,161,612 at 08-May-2014 16:59.
> This is quite an old thread, but I'm reviving it because I have something 
> significant
> to add - I'm pleased to report that BSR over EOT is now working after an 
> upgrade to
> Solaris 11.3.
>
>> I have difficulty believing that the Solaris mtio and/or st modules fail to
>> handle EOT properly,
> I now believe it, though :-)
>
> A while ago, when researching this, I came across a document on 
> support.oracle.com.
> (Doc ID 1919928.1 if you want to go looking for it!) It wasn't really the 
> same problem,
> but it did contain a statement that struck a chord with me:
>
> "Solaris 11 changed the way initial tape position is determined. st now uses 
> Long Form
> Read Position to determine the tapes position."
>
> No  man :-)
> In other words, Sun/Oracle rewrote a large part of the st driver, and broke 
> it.
>
> The btape fill test would write to the tape OK with "Backward Space Record = 
> no", but
> the unfill test would get an I/O error trying to read the end of tape 1. I 
> believe the
> data were all written correctly but btape couldn't position the tape properly 
> for unfill
> due to problems with the Solaris 11 st driver.
>
> I suspect that Bacula under early Solaris 11 wouldn't be able to recover some 
> data from
> a backup, if that data happened to be near the end of a tape. Restoring an 
> entire backup
> would possibly be OK though.
>
> [ufsdump/ufsrestore work fine with multiple tapes because ufsrestore doesn't 
> seek]
>
> I can't be sure exactly where the bug appeared and disappeared, but it's a 
> good bet it's
> been there since Solaris 11 was first released. Based on the Solaris versions 
> I have
> used, I can say this much:
>
> Solaris 10 - Works as expected
> Solaris 11.1.0.24.2 - BSR over EOT causes I/O error
> Solaris 11.1.18.5.0 - BSR over EOT causes I/O error
> Solaris 11.2.8.4.0 - BSR over EOT causes I/O error
> Solaris 11.3.13.4.0 - Works as expected
>
> So it was fixed somewhere between 11.2.8 and 11.3.13 - now I wish I had 
> upgraded to
> Solaris 11.3 months ago!
>
> Allan
>
> ---

Re: [Bacula-users] Git repo down?

2017-01-02 Thread Roberts, Ben
Hi Michael,

> Fatal: repository 'http://git.bacula.org/bacula' not found.
Add ".git" to the end of your clone URL: http://git.bacula.org/bacula.git
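
i.e. (assuming a plain HTTP clone):

```
git clone http://git.bacula.org/bacula.git
```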

Regards,
Ben Roberts



Re: [Bacula-users] Bacula in the cloud

2016-10-19 Thread Roberts, Ben
The documentation is outdated; this limit was removed (or perhaps vastly 
increased?) somewhere around the 7.0 release. I’ve had jobs running a lot 
longer since upgrading.

In branch-5.2: 
http://www.bacula.org/git/cgit.cgi/bacula/tree/bacula/src/lib/bnet.c#n784
bsock->timeout = 60 * 60 * 6 * 24;   /* 6 days timeout */

In branch-7.0 this line is removed.

(Unfortunately I can’t see a way to get a direct link from cgit directly to a 
line at a particular commit.)
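
For reference, that hard-coded value works out to exactly the six days the 
source comment claims:

```python
# The hard-coded socket timeout from branch-5.2 (bnet.c), in seconds
timeout = 60 * 60 * 6 * 24
print(timeout)                    # 518400
print(timeout // (60 * 60 * 24))  # 6 (days)
```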

Regards,
Ben Roberts

From: Clark, Patti [mailto:clar...@ornl.gov]
Sent: 18 October 2016 22:29
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Bacula in the cloud

From Bacula’s main.pdf documentation:

Max Run Time =  The time specifies the maximum allowed time that a job 
may run, counted from when the job starts, (not necessarily the same as when 
the job was scheduled).
By default, the watchdog thread will kill any Job that has run more than 6 
days. The maximum watchdog timeout is independent of MaxRunTime and cannot be 
changed.


Patti Clark
Linux System Administrator
R&D Systems Support Oak Ridge National Laboratory

From: Josip Deanovic <djosip+n...@linuxpages.net>
Date: Tuesday, October 18, 2016 at 4:06 PM
To: "bacula-users@lists.sourceforge.net" <bacula-users@lists.sourceforge.net>
Subject: Re: [Bacula-users] Bacula in the cloud

On Tuesday 2016-10-18 12:34:08 Jason Voorhees wrote:
Thank you all for your responses.
I'll take a look at Bacula systems' whitepaper to see what they're
talking about. Meanwhile I'll explore some of the alternatives
discussed on this thread like copying files with scripts and making a
replica on SpiderOak or anything similar.
I hope we can have an interesting solution for this "problem" in the
near future.


Hi Jason!

You have said that "Bacula can't run jobs for so long without modifying
source code and recompiling".

What did you mean by that and can you give an example of the problem
you have experienced?

I am asking because I am not aware of the bacula's job duration related
limitations.

--
Josip Deanovic




Re: [Bacula-users] Backup of system outside of restrictive firewall?

2016-08-05 Thread Roberts, Ben
Hi Andreas,

Support for this was added in 7.0; you want the "SD Calls Client" option 
enabled. It's covered in the manual here: 
http://www.bacula.org/7.4.x-manuals/en/main/New_Features_in_7_0_0.html#SECTION00512000

An alternative would be setting up a VPN (e.g. OpenVPN) to bypass the 
restrictive firewall.
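
For the "SD Calls Client" approach, the directive lives in the Client resource 
on the Director side; a minimal sketch (names, addresses and password are 
placeholders):

```
# bacula-dir.conf -- illustrative only
Client {
  Name = outside-fd
  Address = fd.outside.example.com
  Password = "secret"
  SD Calls Client = yes   # the SD opens the connection out to the FD
}
```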

Regards,
Ben Roberts

From: Andreas Koch [mailto:k...@esa.informatik.tu-darmstadt.de]
Sent: 05 August 2016 13:50
To: Bacula Users 
Subject: [Bacula-users] Backup of system outside of restrictive firewall?

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hello all,

while we have been extremely happy over the years using Bacula to handle our
internal systems, we are a bit stumped now on how to backup a machine
outside of a rather restrictive firewall.

Said firewall is basically configured to deny all incoming connections (but
allows connections initiated from the inside).

With the default approach Bacula uses

1. Director (inside of firewall) tells File Daemon (outside of firewall) on
remote machine to begin backup -- OK

2. File Daemon (outside of firewall) attempts to connect to Storage Daemon
(inside of firewall) -- FAILS

we are getting nowhere. Is there a possibility to configure the Storage
Daemon to use something like a ``pull'' mode, resulting in

1. Director (inside of firewall) tells File Daemon (outside of firewall) on
remote machine to begin backup

2. Director (inside of firewall) tells Storage Daemon (inside of firewall) to
connect to File Daemon (outside of firewall)

3. File Daemon (outside of firewall) can now stream data to Storage Daemon
(inside of firewall)

I'd also be interested to know how other users have tackled such a setup!

Many thanks,
Andreas
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iEYEARECAAYFAleki5UACgkQk5ta2EV7Doz9MQCfRmU9ImeSRts5MnAxfLkTeWmJ
yt8An0FFKTpRrN8j9w734hntvEN2P5bn
=pBW4
-END PGP SIGNATURE-




Re: [Bacula-users] Storage Devices not displaying in bconsole status

2016-02-10 Thread Roberts, Ben
Hi Shon,

> I am having an issue where when I run a status command in bconsole, select 
> "Storage",
> I am only presented with the option for status on 3 of my defined storage 
> resources.
> I am trying to figure out why this is, but am being left with a blank. 
> Backups do seem
> to be running at present, but I should have a lot more devices to status.

This was a documented change for 7.0 
(http://www.bacula.org/7.2.x-manuals/en/main/New_Features_in_7_0_0.html#SECTION00414000).
When multiple storage definitions point at the same SD instance, Bacula 
de-duplicates the entries that show up in the list: since selecting any one of 
them shows the status of the entire SD, multiple entries in the list would give 
the same output.

I agree it’s confusing when a seemingly arbitrary subset of your defined 
storages shows up: you have to remember which storage resource points at which 
SD and, if the one you wanted is not listed, which of the entries shown lives 
on the same SD instance.

Regards,
Ben Roberts



Re: [Bacula-users] Bconsole Reload Sometimes Doesn't Load New Job Resource Definitions

2015-10-20 Thread Roberts, Ben
> On a particular customer system, we find that the new job resource has not 
> been loaded after the `reload` command has completed.

My workflow relies heavily on the reload command (puppet drops in updated 
config files and executes the reload command afterwards); this normally works 
very well. Occasionally I have problems, and this is always evidenced by “Error: 
Too many open reload requests. Request ignored” in the messages output. The 
normal cause in my experience is too many connected bconsoles (which appear to 
count against the director’s concurrent job limit?). Killing off the old 
bconsole sessions that people have spawned under screen weeks before and 
forgotten usually fixes this. I rarely have to restart the director service 
more than once or twice a year, and that’s normally for other issues.

Regards,
Ben Roberts



Re: [Bacula-users] Purge console messages?

2015-04-28 Thread Roberts, Ben
Hi Luc,

If you don't want the messages to show up in the console at all, why not 
disable them at source rather than strip them out retrospectively?

Messages {
  console = !all
}

(which might be the default, since I have "console = all, !skipped, !saved" in 
my config file; I can't tell from a quick scan of the manual)

Regards,
Ben Roberts

From: Luc Van der Veken [mailto:luc...@wimionline.com]
Sent: 28 April 2015 14:51
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Purge console messages?

Answering myself: I put this in a script in my home directory as a temporary 
solution.
It may still take some time, but I suppose far less than letting it all be 
displayed.

#!/bin/sh
echo messages | bconsole > /dev/null
bconsole


From: Luc Van der Veken [mailto:luc...@wimionline.com]
Sent: 28 April 2015 15:02
To: 
bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Purge console messages?

Is there a way to clear the buffer containing the console messages waiting to 
be displayed in bconsole?
Or to purge anything older than, say, a day?



Re: [Bacula-users] List files in a volume

2015-04-22 Thread Roberts, Ben
Hi,

> It would be far quicker to find the files I needed
> and it means I can do this search without interfering with the daily
> backups which are using the same drive. By the way, the CCTV isn't on 100%
> of the time, so having said that the files might not be present and/or the
> filenames not exact, hence the need to browse the volumes.

Bacula stores file details in two ways.

- In the catalog, indexed by JobId (which in turn is linked to a Volume via the 
JobMedia table). These records are held as long as the File Retention 
period. If you need to restore a file from a fixed date you can use one of the 
options in the "restore" bconsole command which will ultimately give you a list 
of files contained within the backup for you to mark and restore.

You can also use two "query" commands from the example query.sql file shipped 
with Bacula, #14 "List Jobs stored for a given Volume name" and #12 "List Files 
for a selected JobId" as an alternative to using the "restore" command.
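
As a sketch, the flow in bconsole looks something like this (the volume name 
and JobId are examples; the query numbers match the stock query.sql and may 
differ if yours has been customised):

  *query
  Choose a query: 14
  Enter Volume name: Vol-0042
  (lists the Jobs stored on that volume)
  *query
  Choose a query: 12
  Enter JobId: 1234
  (lists the files recorded for that JobId)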

Once the File Retention time is reached, these records are purged from the 
database and you'll have to either restore the entire backup to retrieve the 
files or use "bls" to list the individual volume contents.

- Within the backup volume itself, as a header right before each file's 
contents. "bls" will scan through the entire volume, collecting the file 
records, and then present you with an "ls -l"-style listing. You need physical 
access to the volumes for this operation because every volume has to be read 
end to end.

"bls" and the related "bextract" are lower-level tools intended for data 
recovery outside of typical day-to-day usage of Bacula. You will likely need to 
stop your bacula-sd daemon to run them, which can also be inconvenient.
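
For illustration, a bls invocation looks roughly like this (the config path, 
volume name and device name are examples only, not taken from this thread):

  # stop bacula-sd first so it isn't holding the device
  bls -c /etc/bacula/bacula-sd.conf -V Vol-0042 FileStorage

  # -j prints the job records instead of the per-file listing
  bls -j -c /etc/bacula/bacula-sd.conf -V Vol-0042 FileStorage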

Regards,
Ben Roberts



Re: [Bacula-users] STK SL-3000 experiences

2015-03-05 Thread Roberts, Ben
Hi Tony,

> Has anyone got any experience using bacula with the Storagetek SL-3000?
> Or the SL-150 for that matter (they should behave the same way for what
> I'm told)
> 
> I'm looking for real-life experience here, not something like "if it can
> be managed with mtx, then it _should_ work.."

I used an SL-500 with Bacula under Solaris 10 for a couple of years during 
which time it worked absolutely fine (it's since been replaced with a smaller, 
higher-capacity Fujitsu unit).

Regards,
Ben Roberts



Re: [Bacula-users] how to debug a job

2015-01-25 Thread Roberts, Ben
Hi Kern,

>> Hard-coded, huh? Nobody's tried backing up that big data I keep hearing 
>> about?
> No, there is no change in the hard coded 6 day limit, and at the moment, I 
> personally
> am not planning to implement anything, for two reasons: 1. I would like to 
> limit the
> number of new Directives to what is absolutely necessary because there are 
> already
> more than I can remember.  2. In my opinion, any Job that runs more than 6 
> days is
> virtually destined to have problems during restore -- i.e. you will likely 
> have backup
> dates that span 6 days of time in a single job.  That appears to me to be 
> something very undesirable.

Just to add my views to the 6-day limit conversation: I frequently run into 
this limit. As I write, I am crossing my fingers hoping that a 44TB backup job 
will complete in time. If it maintains a constant speed of 100MB/sec it should 
in theory take 5.6 days; however, my other weekend jobs have slowed the average 
down to just 71MB/sec, so I suspect it's going to fail ☹. If it does, I think 
I'll be out of options other than patching and recompiling Bacula to remove the 
limit myself.

This is a ZFS snapshot being written via bpipe, so regardless of how long it 
takes to write out on the storage daemon there's no risk of internal 
inconsistency. While I understand the reason behind the limit when backups are 
taken at the file level, in this case the six-day limit serves no technical 
purpose and only causes me more work. An additional directive to configure 
this, even if unused by 99% of Bacula users, would be much appreciated, if only 
to save the maintenance overhead of patching each Bacula release.
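
A bpipe FileSet for this kind of job looks broadly like the sketch below 
(dataset, snapshot and restore-side names are invented for illustration; the 
general form of the plugin string is bpipe:<pseudo-path>:<reader>:<writer>):

  FileSet {
    Name = "zfs-snapshot"
    Include {
      Options { signature = MD5 }
      # bpipe:<pseudo-path>:<backup/reader command>:<restore/writer command>
      Plugin = "bpipe:/zfs/tank.zfs:zfs send tank@backup:zfs receive -F tank/restored"
    }
  }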

Regards,

Ben Roberts

From: Kern Sibbald [mailto:k...@sibbald.com]
Sent: 25 January 2015 10:56
To: Radosław Korzeniewski; Dimitri Maziuk
Cc: bacula-users
Subject: Re: [Bacula-users] how to debug a job

On 23.01.2015 19:22, Radosław Korzeniewski wrote:
Hello,

2015-01-22 3:42 GMT+01:00 Dimitri Maziuk <dmaz...@bmrb.wisc.edu>:
On 01/21/2015 06:41 PM, Bill Arlofski wrote:

> Bacula has a hard-coded 6 day limit on a job's run time.   518401 seconds is
> just over 6 days, so it appears that is the cause for the watchdog killing the 
> job.

Hard-coded, huh? Nobody's tried backing up that big data I keep hearing
about?

Yes, but nobody was interested in making it a config parameter. It is possible 
that someone did that in 7.x; I need to check.

No, there is no change in the hard coded 6 day limit, and at the moment, I 
personally am not planning to implement anything, for two reasons: 1. I would 
like to limit the number of new Directives to what is absolutely necessary 
because there are already more than I can remember.  2. In my opinion, any Job 
that runs more than 6 days is virtually destined to have problems during 
restore -- i.e. you will likely have backup dates that span 6 days of time in a 
single job.  That appears to me to be something very undesirable.

Best regards,
Kern




> Does it ask you for a new volume?

No. Good guess, but the storage is a vchanger and it's working just fine.

If you are using a vchanger then I guess it is a disk-only backup, so why do 
you spool data? It is not required, and it slows your backup down by at least 
2x compared to a standard job. You can define attribute spooling only.

best regards
--
Radosław Korzeniewski
rados...@korzeniewski.net






Re: [Bacula-users] Bacula daemon message

2015-01-20 Thread Roberts, Ben
I think you could easily achieve the same result by setting a suitable 
retention period and either letting Bacula automatically prune volumes when it 
needs to, or, if you really want to control it all manually, running an 
explicit admin job to “prune expired volume” ahead of your cycle. You shouldn't 
need to manage this by hand: Bacula is designed to operate within a fixed 
amount of disk space, and provided that space is large enough to hold the 
backup data for the retention periods you set, it will operate completely 
autonomously; it doesn't expect or require you to clean up after it. Depending 
on how this RunScript is triggered, you're effectively telling Bacula to delete 
all your backups, so I hope you're doing something like preserving the data 
elsewhere first.
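
If you do want an explicit cleanup step, a minimal sketch of such an admin job 
would be something like the following (all names are invented, and it assumes a 
Bacula version whose console supports "prune expired volume"):

  Job {
    Name = "PruneExpiredVolumes"
    Type = Admin
    JobDefs = "DefaultJob"            # reuse an existing JobDefs; name invented
    Schedule = "BeforeNightlyCycle"   # run ahead of the backup cycle
    RunScript {
      RunsWhen = Before
      RunsOnClient = No
      Console = "prune expired volume yes"
    }
  }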

Bacula is not complaining that you have no volumes left to purge; it's 
complaining because you've told it to purge volumes attached to a storage 
resource that doesn't exist. Here “File” refers to the name of a 
Storage { Name = "File" } resource in your bacula-dir.conf…

Ben Roberts

From: Polcari, Joe (Contractor) [mailto:joe_polc...@cable.comcast.com]
Sent: 20 January 2015 16:39
To: Roberts, Ben; mike.br...@devnull.net.nz
Cc: bacula-users@lists.sourceforge.net
Subject: RE: [Bacula-users] Bacula daemon message

This is all disk storage. I’m trying to truncate all expired volumes 
immediately after all backups have completed so as to have maximum space 
available when the next backup cycle begins.
I can check disk space between backups and know there are no 5GB files that I 
don’t need.
I do this instead of waiting for each backup to prune the volumes when it runs.
I can see that this is failing, though, because I have no volumes of zero size 
and because of the line in the email that says “Storage resource "File": not found”.

From: Roberts, Ben [mailto:ben.robe...@gsacapital.com]
Sent: Tuesday, January 20, 2015 11:30 AM
To: Polcari, Joe (Contractor); 
mike.br...@devnull.net.nz<mailto:mike.br...@devnull.net.nz>
Cc: 
bacula-users@lists.sourceforge.net<mailto:bacula-users@lists.sourceforge.net>
Subject: RE: [Bacula-users] Bacula daemon message

Isn’t that an incredibly dangerous command to run automatically? What are you 
actually trying to achieve that Bacula’s internal volume management doesn’t do 
for you?

Ben Roberts

From: Polcari, Joe (Contractor) [mailto:joe_polc...@cable.comcast.com]
Sent: 20 January 2015 16:20
To: mike.br...@devnull.net.nz<mailto:mike.br...@devnull.net.nz>
Cc: 
bacula-users@lists.sourceforge.net<mailto:bacula-users@lists.sourceforge.net>
Subject: Re: [Bacula-users] Bacula daemon message

Aha! Found the source, now how do I stop the message?

  RunScript {
RunsWhen=After
RunsOnClient=No
Console = "purge volume action=all storage=File allpools"
  }

I was trying to keep space usage to a minimum, I also have
  Action On Purge = Truncate
in the default Pool definition.


From: mike.br...@devnull.net.nz<mailto:mike.br...@devnull.net.nz> 
[mailto:mike.br...@devnull.net.nz]
Sent: Sunday, January 18, 2015 2:48 PM
To: Polcari, Joe (Contractor)
Cc: 
bacula-users@lists.sourceforge.net<mailto:bacula-users@lists.sourceforge.net>
Subject: RE: [Bacula-users] Bacula daemon message


I doubt that this is a database issue.

Messages from console commands in Job RunScripts are logged against jobid 0.

In Bacula 7 there is a bug, though, and the messages are only sent when the config 
is reloaded or the director is restarted rather than being sent with the 
messages for the job that called the run script.  Might that account for the 
"random" times the messages are received?

On 2015-01-19 05:35, Polcari, Joe (Contractor) wrote:
I was thinking it was more of a corrupted database issue. I've pretty much 
checked everything else.
Is there something significant about "JobID 0"?

From: Brady, Mike [mailto:mike.br...@devnull.net.nz]
Sent: Saturday, January 17, 2015 8:27 PM
To: 
bacula-users@lists.sourceforge.net<mailto:bacula-users@lists.sourceforge.net>
Subject: Re: [Bacula-users] Bacula daemon message


What about other users crontabs?  The bacula user for instance.

Or perhaps a RunScript on one of your Bacula Jobs?

As Bryn said, this is not something that Bacula is doing on its own.

Regards

Mike



On 2015-01-18 09:29, Polcari, Joe (Contractor) wrote:
[root@cdcdbaculadir ~]# crontab -l
no crontab for root
[root@cdcdbaculadir ~]#

From: Bryn Hughes [mailto:li...@nashira.ca]
Sent: Saturday, January 17, 2015 3:13 PM
To: 
bacula-users@lists.sourceforge.net<mailto:bacula-users@lists.sourceforge.net>
Subject: Re: [Bacula-users] Bacula daemon message

Are you SURE there isn't anything like a crontab somewhere that someone set up? 
 This very very very very much has to be something that is being either typed 
in to bconsole or executed via a script somewhere.  It isn't something Bacula 
is doing on its own.

Re: [Bacula-users] Bacula daemon message

2015-01-20 Thread Roberts, Ben
Isn’t that an incredibly dangerous command to run automatically? What are you 
actually trying to achieve that Bacula’s internal volume management doesn’t do 
for you?

Ben Roberts

From: Polcari, Joe (Contractor) [mailto:joe_polc...@cable.comcast.com]
Sent: 20 January 2015 16:20
To: mike.br...@devnull.net.nz
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Bacula daemon message

Aha! Found the source, now how do I stop the message?

  RunScript {
RunsWhen=After
RunsOnClient=No
Console = "purge volume action=all storage=File allpools"
  }

I was trying to keep space usage to a minimum, I also have
  Action On Purge = Truncate
in the default Pool definition.


From: mike.br...@devnull.net.nz<mailto:mike.br...@devnull.net.nz> 
[mailto:mike.br...@devnull.net.nz]
Sent: Sunday, January 18, 2015 2:48 PM
To: Polcari, Joe (Contractor)
Cc: 
bacula-users@lists.sourceforge.net<mailto:bacula-users@lists.sourceforge.net>
Subject: RE: [Bacula-users] Bacula daemon message


I doubt that this is a database issue.

Messages from console commands in Job RunScripts are logged against jobid 0.

In Bacula 7 there is a bug, though, and the messages are only sent when the config 
is reloaded or the director is restarted rather than being sent with the 
messages for the job that called the run script.  Might that account for the 
"random" times the messages are received?

On 2015-01-19 05:35, Polcari, Joe (Contractor) wrote:
I was thinking it was more of a corrupted database issue. I've pretty much 
checked everything else.
Is there something significant about "JobID 0"?

From: Brady, Mike [mailto:mike.br...@devnull.net.nz]
Sent: Saturday, January 17, 2015 8:27 PM
To: 
bacula-users@lists.sourceforge.net<mailto:bacula-users@lists.sourceforge.net>
Subject: Re: [Bacula-users] Bacula daemon message


What about other users crontabs?  The bacula user for instance.

Or perhaps a RunScript on one of your Bacula Jobs?

As Bryn said, this is not something that Bacula is doing on its own.

Regards

Mike



On 2015-01-18 09:29, Polcari, Joe (Contractor) wrote:
[root@cdcdbaculadir ~]# crontab -l
no crontab for root
[root@cdcdbaculadir ~]#

From: Bryn Hughes [mailto:li...@nashira.ca]
Sent: Saturday, January 17, 2015 3:13 PM
To: 
bacula-users@lists.sourceforge.net<mailto:bacula-users@lists.sourceforge.net>
Subject: Re: [Bacula-users] Bacula daemon message

Are you SURE there isn't anything like a crontab somewhere that someone set up? 
 This very very very very much has to be something that is being either typed 
in to bconsole or executed via a script somewhere.  It isn't something Bacula 
is doing on its own.

Bryn

On 2015-01-17 10:54 AM, Polcari, Joe (Contractor) wrote:
Nope, none of that. No admin jobs. All my storage is File. Backups and restores 
work fine.

From: Roberts, Ben [mailto:ben.robe...@gsacapital.com]
Sent: Saturday, January 17, 2015 1:05 PM
To: Polcari, Joe (Contractor); 
bacula-users@lists.sourceforge.net<mailto:bacula-users@lists.sourceforge.net>
Subject: RE: Bacula daemon message

Do you have an admin job in your config that's running console commands? It 
looks like something is executing purge commands automatically which sounds 
really rather dangerous to the health of your backups. The good news is it 
doesn't seem to be working because you no longer have a "File" storage any 
more, but if you were ever to re-create it you might find backups being 
randomly deleted.

Ben Roberts

> -Original Message-
> From: Polcari, Joe (Contractor) [mailto:joe_polc...@cable.comcast.com]
> Sent: 17 January 2015 17:22
> To: 
> bacula-users@lists.sourceforge.net<mailto:bacula-users@lists.sourceforge.net>
> Subject: [Bacula-users] FW: Bacula daemon message
>
> How do I stop these? They seem to come at random times once a day.
>
> -Original Message-
> From: root@xxx<mailto:root@xxx> On Behalf Of Bacula
> Sent: Saturday, January 17, 2015 11:04 AM
> To: bac...@localhost.xxx<mailto:bac...@localhost.xxx>
> Subject: Bacula daemon message
>
> 17-Jan 11:04 xxx-dir JobId 0:
> This command can be DANGEROUS!!!
>
> It purges (deletes) all Files from a Job, JobId, Client or Volume; or it
> purges (deletes) all Jobs from a Client or Volume without regard to
> retention periods. Normally you should use the PRUNE command, which
> respects retention periods.
> 17-Jan 11:04 xxx-dir JobId 0: Automatically selected Catalog: MyCatalog
> 17-Jan 11:04 xxx-dir JobId 0: Using Catalog "MyCatalog"
> 17-Jan 11:04 xxx-dir JobId 0: Storage resource "File": not found 17-Jan
> 11:04 xxx-dir JobId 0: The defined Storage resources are:
> 17-Jan 11:04 xxx-dir JobId 0: 1: Client1Storage
> 17-Jan 11:04 xxx-dir JobId 0: 2: Client2Storage
> 17-Jan 11:04 xxx-dir JobId 0: 3: Client3Storage

Re: [Bacula-users] Bacula daemon message

2015-01-17 Thread Roberts, Ben
Do you have an admin job in your config that's running console commands? It 
looks like something is executing purge commands automatically which sounds 
really rather dangerous to the health of your backups. The good news is it 
doesn't seem to be working because you no longer have a "File" storage any 
more, but if you were ever to re-create it you might find backups being 
randomly deleted.

Ben Roberts

> -Original Message-
> From: Polcari, Joe (Contractor) [mailto:joe_polc...@cable.comcast.com]
> Sent: 17 January 2015 17:22
> To: bacula-users@lists.sourceforge.net
> Subject: [Bacula-users] FW: Bacula daemon message
> 
> How do I stop these? They seem to come at random times once a day.
> 
> -Original Message-
> From: root@xxx On Behalf Of Bacula
> Sent: Saturday, January 17, 2015 11:04 AM
> To: bac...@localhost.xxx
> Subject: Bacula daemon message
> 
> 17-Jan 11:04 xxx-dir JobId 0:
> This command can be DANGEROUS!!!
> 
> It purges (deletes) all Files from a Job, JobId, Client or Volume; or it
> purges (deletes) all Jobs from a Client or Volume without regard to
> retention periods. Normally you should use the PRUNE command, which
> respects retention periods.
> 17-Jan 11:04 xxx-dir JobId 0: Automatically selected Catalog: MyCatalog
> 17-Jan 11:04 xxx-dir JobId 0: Using Catalog "MyCatalog"
> 17-Jan 11:04 xxx-dir JobId 0: Storage resource "File": not found 17-Jan
> 11:04 xxx-dir JobId 0: The defined Storage resources are:
> 17-Jan 11:04 xxx-dir JobId 0:  1: Client1Storage
> 17-Jan 11:04 xxx-dir JobId 0:  2: Client2Storage
> 17-Jan 11:04 xxx-dir JobId 0:  3: Client3Storage
> 17-Jan 11:04 xxx-dir JobId 0:  4: Client4Storage
> 17-Jan 11:04 xxx-dir JobId 0:  5: Client5Storage
> 17-Jan 11:04 xxx-dir JobId 0:  6: Client6Storage
> 17-Jan 11:04 xxx-dir JobId 0: Selection aborted, nothing done.
> 




Re: [Bacula-users] LTO speed optimisation

2014-12-17 Thread Roberts, Ben
Hi Alan,

> It isn't a problem as long as you close off all tapes with smaller block
> sizes _before_ reconfiguring bacula for the larger block size.

That's great, thanks for letting me know; it's not clear from the manual that 
that should work. I won't be able to make this change right now as I'm about to 
hit the end-of-year backup window where the tapes run solidly for a few weeks, 
but it's definitely something I'll try toward the end of January! I will just 
need to pick a good time to make the change so I don't have to waste 
barely-used tapes.
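
For reference, the block size is a per-Device setting in bacula-sd.conf; a 512K 
configuration would look broadly like this (the device details here are 
illustrative, not my actual config):

  Device {
    Name = "LTO5-Drive-0"
    Media Type = LTO-5
    Archive Device = /dev/nst0
    Maximum Block Size = 524288   # 512K blocks
    Minimum Block Size = 524288   # optional: write fixed-size blocks
  }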

Regards,
Ben Roberts


> -Original Message-
> From: Alan Brown [mailto:a.br...@ucl.ac.uk]
> Sent: 17 December 2014 13:50
> To: Roberts, Ben; Cejka Rudolf; Bryn Hughes; Radosław Korzeniewski; Kern
> Sibbald
> Cc: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] LTO speed optimisation
> 
> On 17/12/14 11:32, Roberts, Ben wrote:
> 
> >  > I think (and hope) that the only problem you will get into, is that
> > if you  > write a volume with 512K blocks, currently, you will not be
> > able to read it  > back with a Bacula configured for 64K block sizes.
> >
> > My main concern is reconfiguring Bacula to write larger block sizes,
> > then not be able to read older media with smaller block sizes if I
> > have to restore old data. If you don't think this will be a problem,
> > then I'm happy to try and report back. The warnings in the
> > documentation have scared me off doing this so far.
> 
> 
> It isn't a problem as long as you close off all tapes with smaller block
> sizes _before_ reconfiguring bacula for the larger block size.
> 
> Bacula cannot handle having the block size change midway through a tape.
> Apart from that if it is configured for larger blocks it will read
> smaller-block tapes happily.
> 



Re: [Bacula-users] Mini-Status report

2014-12-06 Thread Roberts, Ben
Hi Kern,

While writing my mail I did begin to wonder whether the v7 upgrade actually had 
the larger impact, but on balance I think both changes were worth making, so 
I'm not going to revert the auto-pruning changes for the time being. I'm 
currently working with on the order of 5,000 volumes, so a little way off the 
numbers you quote, but I suspect we'll hit 10,000 within 3 years. For 
reference, we had been running 5.2.13 for the last 12 months.

I am using MySQL, and I'm aware that's not the preferred database engine for 
Bacula, but a couple of other constraints affected my decision not to switch to 
PostgreSQL yet. One is that we're running the Bacula daemons and catalog on 
Solaris 11, and MySQL is by far the easier system to get running on that 
platform for fairly obvious reasons. The other is that we have much more MySQL 
experience than PostgreSQL experience in-house, so there would be significant 
overhead in installing and maintaining it just for this purpose.

Our bacula setup needs a bit of love every 6-12 months when we run into 
performance limitations due to data growth. So far I’ve managed to avoid 
changing the catalog by implementing other optimisations (block-level backups, 
additional storage capacity, configuration tweaks). At some point I may have to 
bite the bullet and switch, but I’m hoping to hold off for at least the next 
few rounds of improvement work! ☺

Regards,
Ben Roberts

From: Kern Sibbald [mailto:k...@sibbald.com]
Sent: 06 December 2014 10:52
To: Roberts, Ben; bacula-users
Subject: Re: [Bacula-users] Mini-Status report

Hello Ben,

Thanks for your feedback on pruning. While I was reading your email, I kept 
wondering what version of Bacula you were running, because slowing down of the 
console was a typical symptom with version 5.2.x and before, while in 7.0.x 
this should largely be fixed even without delaying the pruning. At the end of 
your email, you pointed out that you also switched to 7.0.x. Previously, Bacula 
used a single thread at a time for accessing the database; with version 7.0.x 
(if I remember right), we separated the console into a second DB connection, so 
it runs in its own thread and generally does not need to wait for pruning or 
attribute spooling that may be going on at the same time.

If you are currently using MySQL as the backend database and you still find 
that either pruning or inserting file attributes takes a long time with Bacula 
version 7.0.x, I would recommend moving to PostgreSQL. Out of the box, 
PostgreSQL is a bit harder to get working with Bacula, for two reasons: 1. 
Getting the login permissions set up correctly for Bacula to access and modify 
the database is more complex than with MySQL, which works out of the box. 2. 
The default installation of PostgreSQL is very inefficient for medium to large 
Bacula installations; it requires a little tuning of about 10 parameters, after 
which PostgreSQL will perform significantly better than MySQL. I have heard 
some good comments about MariaDB, but we have not yet tried it and thus don't 
officially support it.

Final note: the current versions of Bacula are designed to handle about 1,000 
volumes; if you start getting 20,000 or 100,000 volumes, you will very likely 
run into severe performance problems. This is currently on my radar screen, and 
I now think I know how to fix it (not trivial) but haven't yet found the time.

Best regards,
Kern

On 12/06/2014 11:33 AM, Roberts, Ben wrote:
Hi Kern,

Thanks for updating the docs regarding auto prune. I disabled this last week 
and it has immediately fixed a long-running performance problem we were having 
when Bacula was looking for appendable volumes for a running job. I bring this 
up not only to report success but also because our issue was slightly different 
than initially described, so worth having on the record in this thread in case 
it affects other people in future.

We could experience this issue even when only 2-3 jobs were running at once. If 
one job ran out of volumes, the console would become slow to respond to any 
queries that touched the catalog (~30-60 seconds). If two jobs ran out at the 
same time the console might block for several minutes at a time. When three or 
more jobs all ran out the console would become practically unusable, blocking 
for so long that you'd have to go away and do something else in the meantime. I 
think Bacula rescans for appendable volumes every ~5 minutes, so this process 
would compound if there was more than 5 minutes' work to do each cycle. It made 
resolving an out-of-volumes issue an absolute pain and as a result it was 
frequently ignored causing our already long-running jobs to hit the 6 day limit 
and fail. This had gotten increasingly worse over the last few months which is 
likely because the number of volumes we consume grows over time (~+50/month). 
While the end-of-job pruning scans would also have caused this

The reason for this is that while bacula 

Re: [Bacula-users] Mini-Status report

2014-12-06 Thread Roberts, Ben
Hi Kern,

Thanks for updating the docs regarding auto prune. I disabled this last week 
and it has immediately fixed a long-running performance problem we were having 
when Bacula was looking for appendable volumes for a running job. I bring this 
up not only to report success but also because our issue was slightly different 
than initially described, so worth having on the record in this thread in case 
it affects other people in future.
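
For anyone wanting to try the same change: end-of-job pruning is controlled by 
the AutoPrune directive (settable on the Client and, for volume pruning, the 
Pool resource); disabling it and pruning from a scheduled admin job instead 
looks broadly like this sketch (names and retention values invented):

  Client {
    Name = "client1-fd"       # invented
    AutoPrune = No            # stop pruning at the end of each job
    File Retention = 60 days
    Job Retention = 6 months
  }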

We could experience this issue even when only 2-3 jobs were running at once. If 
one job ran out of volumes, the console would become slow to respond to any 
queries that touched the catalog (~30-60 seconds). If two jobs ran out at the 
same time, the console might block for several minutes at a time. When three or 
more jobs all ran out, the console would become practically unusable, blocking 
for so long that you'd have to go away and do something else in the meantime. I 
think Bacula rescans for appendable volumes every ~5 minutes, so this process 
would compound if there was more than 5 minutes' work to do each cycle. It made 
resolving an out-of-volumes issue an absolute pain, and as a result it was 
frequently ignored, causing our already long-running jobs to hit the 6-day limit 
and fail. This had been getting steadily worse over the last few months, likely 
because the number of volumes we consume grows over time (~+50/month), although 
the end-of-job pruning scans would also have contributed.

The reason for this is that while Bacula is scanning for appendable volumes, it 
appears to issue catalog queries about individual volumes in sequence (I guess 
this is to find the list of jobs on each volume one at a time, then check the 
job retention periods to see if any can be deleted?). We have several thousand 
volumes, so executing these queries takes a while, and I get the feeling Bacula 
will only do them in sequence, due to either a lock or other single-threaded 
behaviour. So when several of these operations were queued up because multiple 
jobs were seeking volumes at the same time, it's understandable why the window 
to slip a bconsole query in and get a timely response was very narrow, and why 
most bconsole commands would take a very long time to return.
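
For the record, the change amounted to something like the following (a sketch 
only; the resource names, schedule, and exact bconsole pruning command are 
illustrative, so check the syntax your version supports — older setups used 
manual_prune.pl instead):

```
# Pool resource: stop pruning at the end of every job
Pool {
  Name = Full-Pool             # illustrative name
  Pool Type = Backup
  Volume Retention = 90 days
  AutoPrune = no
}

# Admin job run once a day to do the equivalent pruning in a single pass.
Job {
  Name = DailyPrune
  Type = Admin
  JobDefs = DefaultJob         # Admin jobs still need the usual boilerplate
  Schedule = DailyAtMidnight
  RunScript {
    RunsWhen = Before
    RunsOnClient = no
    # Command shown is an assumption; verify against your bconsole help
    Command = "/bin/sh -c 'echo prune expired volume yes | bconsole'"
  }
}
```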

Full disclosure: I did also upgrade to Bacula 7 at the same time as making the 
pruning changes, and the separate catalog connection may also have helped with 
this; however, both together have certainly restored the performance we used to 
experience.

Ben Roberts


> -Original Message-
> From: Kern Sibbald [mailto:k...@sibbald.com]
> Sent: 25 November 2014 13:42
> To: bacula-users; bacula-devel
> Subject: [Bacula-users] Mini-Status report
> 
> Hello,
> 
> This is a mini-status report just to let you know that someone on one of
> these lists said that the new white papers advise using:
> 
>   Maximum Concurrent Jobs = 1
> 
> In the bacula-sd.conf Device resource.  I have reviewed the three white
> papers listed on the bacula.org site and I find no such mention.  Almost
> all the examples use MaximumConcurrentJobs=5 and several set it to 10.
> 
> If someone can point me to the place (if it exists) where 1 is mentioned,
> I will be happy to fix it.
> 
> In addition, another person pointed out that manual_prune.pl does not seem
> to be available. That was an oversight on my part, and that file is now
> included as a download item under White Papers.  The essential elements of
> what manual_prune.pl does have been integrated directly into the Bacula
> source code, and in the next few days I will update the documentation to
> refer to the new way of doing "manual" pruning (i.e.
> pruning once a day rather than at the end of every Job).
> 
> For those of you who are not aware of the pruning issues, you need to know
> that when one has many jobs (say 1000 per day) and many Volumes, the
> normal Job pruning that Bacula does at the end of each Job can become a
> performance problem (not likely with less than 50 jobs per day).  To
> resolve it, it is best to turn off automatic pruning but then to schedule
> an Admin job once a day that will do the equivalent.  The program (now
> integrated into the code) manual_prune.pl was written by Bacula Systems
> for the Enterprise customers, but is published for your use with the
> community version.
> 
> Finally, I have made a few very minor changes to the Best Practices for
> Disk Based Backup document.
> 
> Best regards,
> Kern
> 
> --
> 
> Download BIRT iHub F-Type - The Free Enterprise-Grade BIRT Server from
> Actuate! Instantly Supercharge Your Business Reports and Dashboards with
> Interactivity, Sharing, Native Excel Exports, App Integration & more Get
> technology previously reserved for billion-dollar corporations, FREE
> http://pubads.g.doubleclick.net/gampad/clk?id=157005751&iu=/4140/ostg.clkt
> rk
> ___
> Bacula-users mailing list
>

Re: [Bacula-users] Problems with Quantum SuperLoader 3 LTO-6

2014-12-03 Thread Roberts, Ben
Hi Rainer,

> First problem:
> Starting Bacula this way doesn't work anymore:
> cd /etc/init.d
> ./bacula-dir start

You're going to have to do some good old-fashioned debugging to work this out. 
Check the logs, and try running the exact commands the init script executes (make 
sure to do it as the same user the director runs under, to eliminate any 
permissions issues). You've proved that the director can start; now you need 
to isolate how it's being started differently by the init script.

> All works fine, shows amount of slots and also the labels of stored tapes.
Mtx-changer parses this output using a regular expression. You may wish to make 
sure the data is being returned in the same format, and tweak the pattern match 
if it's different. Look for "VolumeTag" in the mtx-changer script.
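
For example, you can check what a VolumeTag pattern would extract from a line of 
`mtx status` output before touching the script. The sample line below is an 
assumed, typical format, and the sed expression mirrors the kind of match 
mtx-changer does, not its exact code:

```shell
# Sample line in the style of 'mtx status' output (assumed format)
sample='      Storage Element 3:Full :VolumeTag=ABC123L6'

# Extract the barcode the way a VolumeTag pattern match would (sketch)
echo "$sample" | sed -n 's/.*VolumeTag *= *\([A-Za-z0-9]*\).*/\1/p'
# → ABC123L6
```

If your library returns the tag in a different format, adjust the pattern in 
mtx-changer the same way.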

Regards,
Ben Roberts

This email and any files transmitted with it contain confidential and 
proprietary information and is solely for the use of the intended recipient.
If you are not the intended recipient please return the email to the sender and 
delete it from your computer and you must not use, disclose, distribute, copy, 
print or rely on this email or its contents.
This communication is for informational purposes only.
It is not intended as an offer or solicitation for the purchase or sale of any 
financial instrument or as an official confirmation of any transaction.
Any comments or statements made herein do not necessarily reflect those of GSA 
Capital.
GSA Capital Partners LLP is authorised and regulated by the Financial Conduct 
Authority and is registered in England and Wales at Stratton House, 5 Stratton 
Street, London W1J 8LA, number OC309261.
GSA Capital Services Limited is registered in England and Wales at the same 
address, number 5320529.



Re: [Bacula-users] Compatibility.

2014-11-30 Thread Roberts, Ben
Hi Erik,

> Is it possible to run bacula-dir and bacula-sd on bacula-5.2.13 with
> bacula-fd running 7.0.x on another box?

The only officially supported combination is (director version == storage 
daemon version) >= file daemon version.

Your mileage may vary but don't expect a newer file daemon to work with an 
older director/storage daemon.

Regards,
Ben Roberts



Re: [Bacula-users] (no subject)

2014-11-21 Thread Roberts, Ben
Hi Luca,

I’ve been using an LT60 S2 (SAS) with LTO-6 and LTO-4 drives for 12 months 
(many hundreds of TB written). I’d had no problems until recently and have 
been happy with the unit on the whole (it is a significant improvement over the 
StorageTek unit it replaced). I’m currently working through an issue with tech 
support regarding the barcode labels being incorrectly scanned, though at the 
moment we believe the fault lies with the media labels rather than the 
barcode reader. I have been very impressed with Fujitsu support on the one 
occasion I’ve had to use them in reference to this unit.

I believe the LT60 is identical hardware (bar the front panel and firmware) to 
HP’s 48-slot offering as well, just in case you have a reason to prefer one 
vendor over the other.

Regards,
Ben Roberts

From: Luca Codutti [mailto:lucacodu...@gmail.com]
Sent: 21 November 2014 11:23
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] (no subject)

Dear Bacula users,
I am currently planning a tape backup system for a research group in an 
academic institution for which I thought about bacula. I have in my hands two 
offers for the purpose which involve either a DELL LT2000 (2 drives, connected 
through sas) LTO-6 or a Fujitsu eternus lt-60 LTO-6 (sas). Are there some 
drawbacks in choosing bacula for these two systems I am not aware of?
Best Regards



[Bacula-users] LTO speed optimisation

2014-11-05 Thread Roberts, Ben
Hi all,

I'd like to try to make some speed improvements to my Bacula setup (5.2.13, 
Solaris 11). I have data (and attribute) spooling enabled, using a pool of 46x 
1TB directly-attached SAS disks dedicated to this purpose. Data is being 
despooled to 2x directly-attached SAS LTO-6 drives at around 100MB/s each. I 
think I should be able to get closer to the ~160MB/s maximum uncompressed 
throughput the drives and tape media support (ref: 
http://docs.oracle.com/cd/E38452_01/en/LTO6_Vol4_E1/LTO6_Vol4_E1.pdf).

I've just done a speed test and can read from the spool array at a sustained 
300MB/s even while other jobs are running, so I'm sure there's no bottleneck 
at the disk layer. My suspicion is that the bottleneck is at the application 
layer, probably due to the way I have Bacula configured.

Having read through Bareos' tuning paper 
(http://www.bareos.org/en/Whitepapers/articles/Speed_Tuning_of_Tape_Drives.html),
 I've updated the max file size from 1GB to 50GB, which increased throughput 
from ~75 to ~100MB/s. I believe I need to look at tuning the block size to 
gain the last bit of improvement.

Is it still the case in Bacula that changing the Maximum Block Size renders 
previously used/labelled tapes unreadable? I'm up to almost 1,000 tapes 
already written, so making these unusable for restores without restarting the 
SD to change configs would be less than ideal. I see Bareos is touting a 
feature to set the block size at the pool level rather than the storage level, 
so this problem can be avoided by moving newer backups to a different pool 
while still keeping older backups readable. I haven't seen any reference to 
this in the Bacula manual; is it something that's already supported or in the 
plans for a future version?

For reference, this is one of the relevant drive definitions I'm using, just 
in case there's something else that would help which I might have missed:
Device {
   Name = drive-1-tapestore1
   Archive Device = /dev/rmt/1mbn
   Device Type = Tape
   Media Type = LTO6
   AutoChanger = yes
   Removable media = yes
   Random access = no
   Requires Mount = no
   Drive Index = 1
   Maximum Concurrent Jobs = 3
   Maximum Spool Size = 1024G
   Maximum File Size = 50G
   Autoselect = yes
}
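
For clarity, the block-size change under discussion would be additional 
directives on the same Device resource, along these lines (a sketch only; the 
2MB value is illustrative, and existing tapes written at the default block 
size may become unreadable unless the SD uses matching settings):

```
Device {
   Name = drive-1-tapestore1
   # ... directives as above ...

   # Larger blocks reduce per-block overhead when streaming to LTO.
   Maximum Block Size = 2097152   # 2MB; the default is 64512 bytes
   Minimum Block Size = 0
}
```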

Regards,
Ben Roberts



Re: [Bacula-users] File volumes and scratch pool

2014-10-01 Thread Roberts, Ben
Hi Peter,

Bacula will always prefer to take a new volume (in Append state) rather than 
reuse an existing one (in Purged or Recycled state), so as to preserve your 
backed-up data for as long as possible. It will also prefer volumes already in 
the backup pool, and only resort to taking volumes from the Scratch pool when 
there are none suitable for writing in that pool.

I’m not entirely certain which of those two criteria will win out; I think 
purged volumes in the backup pool will be used before Bacula tries to use an 
unused volume from the Scratch pool but you may wish to check this.

I can understand why you’d want Bacula to reuse old volumes within one of your 
backup pools first (so as to maximise the number of volumes left in the scratch 
pool), but can’t see a reason why you’d prefer to reuse recycled volumes 
already moved back into the scratch pool before unused ones. If you did mean 
the latter, perhaps you can reply with your reasoning.

Ben Roberts

From: Peter Wood [mailto:peterwood...@gmail.com]
Sent: 01 October 2014 22:01
To: Roberts, Ben
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] File volumes and scratch pool

Thank you Ben.

I applied the changes and I'll monitor the behavior to confirm I did it right.

If I have mix of recycled volumes and brand new, never used volumes in Scratch, 
which ones Bacula will grab first?

I'd really prefer to setup Bacula so it will reuse the old ones instead of 
grabbing brand new volumes.

On Wed, Oct 1, 2014 at 1:07 PM, Roberts, Ben <ben.robe...@gsacapital.com> wrote:
Hi Peter,

You need to set “Recycle Pool = Scratch” in your Scratch pool (and make sure 
you haven’t overridden it in any other pool). Note that this setting is applied 
to the volume when it’s created, so after changing the Scratch pool definition 
you will need to update all your volumes to re-apply the Recycle Pool. It might 
be easiest to do this by directly modifying the catalog, or you can use a 
quick bash script to generate a sequence of “update volume…” commands to echo 
into bconsole.

Regards,
Ben Roberts

From: Peter Wood [mailto:peterwood...@gmail.com]
Sent: 01 October 2014 20:56
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] File volumes and scratch pool

In Bacula-5.2.13 how do I setup expired volumes to go in the Scratch pool?

I create new file type volumes like this:
  label storage=File volume=vol001 pool=Scratch

When needed, volumes are taken out of Scratch, assigned to the appropriate 
pool and used. Once in the pool they are never released back to Scratch after 
they expire.

After the retention period ends, volumes are reused, but only within the pool 
they were originally assigned to.

I see volume properties that may be related but I can't find documentation 
about them:
scratchpoolid: 0
recyclepoolid: 0

Any help in making volumes go into Scratch after the retention period expires, 
so they can be reused by any job, would be appreciated.

Thank you,

-- Peter


Re: [Bacula-users] File volumes and scratch pool

2014-10-01 Thread Roberts, Ben
Hi Peter,

You need to set “Recycle Pool = Scratch” in your Scratch pool (and make sure 
you haven’t overridden it in any other pool). Note that this setting is applied 
to the volume when it’s created, so after changing the Scratch pool definition 
you will need to update all your volumes to re-apply the Recycle Pool. It might 
be easiest to do this by directly modifying the catalog, or you can use a 
quick bash script to generate a sequence of “update volume…” commands to echo 
into bconsole.
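
The quick bash script mentioned above could be as simple as the following (a 
sketch: the volume names are illustrative, and you should confirm the exact 
`update volume` keyword for the recycle pool with `help update` in bconsole):

```shell
# Emit one "update volume" command per volume, then pipe into bconsole.
# Five volumes shown for illustration; widen the loop for a real catalog.
for i in 1 2 3 4 5; do
  printf 'update volume=vol%03d RecyclePool=Scratch\n' "$i"
done   # | bconsole
```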

Regards,
Ben Roberts

From: Peter Wood [mailto:peterwood...@gmail.com]
Sent: 01 October 2014 20:56
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] File volumes and scratch pool

In Bacula-5.2.13 how do I setup expired volumes to go in the Scratch pool?

I create new file type volumes like this:
  label storage=File volume=vol001 pool=Scratch

When needed, volumes are taken out of Scratch, assigned to the appropriate 
pool and used. Once in the pool they are never released back to Scratch after 
they expire.

After the retention period ends, volumes are reused, but only within the pool 
they were originally assigned to.

I see volume properties that may be related but I can't find documentation 
about them:
scratchpoolid: 0
recyclepoolid: 0

Any help in making volumes go into Scratch after the retention period expires, 
so they can be reused by any job, would be appreciated.

Thank you,

-- Peter



Re: [Bacula-users] Autochanger: yes or no?

2014-09-18 Thread Roberts, Ben
Fair enough.

I used a one-liner of bash to pre-create all of my volume files and fill the 
barcodes file, then used bconsole's label command to label all of them in one 
go. For example, on my 1700-volume storage array, this was:

bash:
for f in {1..1700}; do
  # create a 100GB sparse file for each volume slot
  dd if=/dev/zero of="$root/storage/$hostname-autochanger/slot$f" bs=1 count=0 seek=107374182400
  echo "$f:$hostname-autochanger-$f" >> "$root/storage/$hostname-autochanger/barcodes"
done

bconsole:
label pool=Scratch storage=foo slots=1-1700 barcodes

Regards,
Ben Roberts


> -Original Message-
> From: Florian [mailto:florian.spl...@web.de]
> Sent: 18 September 2014 11:17
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Autochanger: yes or no?
> 
> I considered a similar approach at first, but after asking other users in
> this network, how long they want to keep backups, we decided on a rather
> long time period of multiple years. Considering that and the fact that we
> want to use one Volume per month, it would be a lot of work to pre-label
> everything.
> 
> So for now I switched back to simple devices without autochanger, so that
> the Volumes just get created when needed. I still have to test if it works
> though.
> 
> Regards,
> Florian S.
> 
> Am 18.09.2014 um 11:42 schrieb Roberts, Ben:
> > > Now I actually had to wonder: Is there no way to have bacula do both
> > > labeling AND switching Volumes automatically?
> > > Or do I just misunderstand?
> >
> > Personally I pre-label all my volumes into the Scratch pool, and let
> > Bacula handle moving volumes from there to a backup pool as needed.
> > They're then recycled back into the Scratch pool when the volume
> > retention period is reached. That way I can label up volumes just once
> > in a batch when purchasing new storage, and Bacula handles the rest.
> > Bacula doesn't need to re-label volumes during day-to-day operation.
> >
> > Regards,
> > Ben Roberts
> 
> 



Re: [Bacula-users] Autochanger: yes or no?

2014-09-18 Thread Roberts, Ben
> Now I actually had to wonder: Is there no way to have bacula do both
> labeling AND switching Volumes automatically?
> Or do I just misunderstand?

Personally I pre-label all my volumes into the Scratch pool, and let Bacula 
handle moving volumes from there to a backup pool as needed. They're then 
recycled back into the Scratch pool when the volume retention period is 
reached. That way I can label up volumes just once in a batch when purchasing 
new storage, and Bacula handles the rest. Bacula doesn't need to re-label 
volumes during day-to-day operation.

Regards,
Ben Roberts



Re: [Bacula-users] How to handle many storage devices?

2014-09-18 Thread Roberts, Ben
Hi Florian,

You would want a minimum of one Storage Device per distinct Media Type for a 
reserved restore device, and there's no requirement to have a reserved restore 
device per client/job/etc., so a single restore device would be OK. You can have 
additional reserved restore devices, but unless you expect to be called upon to 
do more than one concurrent restore there's no benefit in doing so. Creating 
dozens of unnecessary devices would likely only cause confusion as to which 
ones are intended for which purpose (when manually running a job you can either 
specify drive=name, or you'll be prompted to pick a drive by numeric index, and 
there's no obvious mapping of name to index without looking at the config 
files).

As an example, my file-based autochanger has six devices: one dedicated restore 
device, and five others, since I wish to be able to run 5 jobs concurrently and 
set "Maximum Concurrent Jobs = 1" on each of my devices to prevent interleaving. 
We do dozens of backups a day and rarely need to do restores, but when we do 
they'll be reasonably urgent, so this pattern suits my workload. I may not want 
to interrupt a 3-day backup job to run a restore, but multiple restores can 
usually be done sequentially.

You should let your own business requirements dictate how you set your system 
up. These questions may help you choose the right setup:
- How many backups do you expect to run a day?
- How many do you want to run concurrently?
- How long do your average backup jobs take? (minutes, hours, days?)
- Do you want to avoid interleaving of multiple jobs onto the same volume? 
- How often do you think you'll need to do restores?
- Are you likely to need to do multiple restores concurrently?
- Would interrupting a backup job in order to do an urgent restore be 
problematic?
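
To make the six-device pattern above concrete, the SD side looks roughly like 
this (a sketch; the names and path are illustrative, and the restore device is 
only "reserved" by convention, i.e. by pointing restore jobs at it and keeping 
backup jobs off it):

```
Autochanger {
  Name = FileChanger
  Changer Device = /dev/null      # file-based, no real changer hardware
  Changer Command = ""
  Device = file-restore, file-dev1, file-dev2, file-dev3, file-dev4, file-dev5
}

Device {
  Name = file-restore             # kept free for urgent restores
  Media Type = File
  Archive Device = /backup/storage
  AutoChanger = yes
  Maximum Concurrent Jobs = 1     # no interleaving on any device
}
# file-dev1..file-dev5 are identical apart from Name.
```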

Regards,
Ben Roberts

> -Original Message-
> From: Florian [mailto:florian.spl...@web.de]
> Sent: 18 September 2014 07:00
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] How to handle many storage devices?
> 
> 
> Am 17.09.2014 um 21:02 schrieb J. Echter:
> > Am 17.09.2014 14:24, schrieb Florian:
> >> Hello, everyone.
> >>
> >> Now that I am finally done with testing bacula, I am setting it up
> >> for 5 clients.
> >> For each client I have a folder in which I want to store its Volumes.
> >>
> >> During the tests, I used an Autochanger in the storage daemon.
> >> Considering the different storage locations, do I need two devices
> >> for each client? (backup and restore) Also, is there is a reason to
> >> use more than one Autochanger?
> >>
> >> Regards,
> >>
> >> Florian S.
> > Hi,
> >
> > i personally would backup to one volume (depends on the size of all
> > clients)
> >
> > Also no autochanger is needed. If you specify the volumes / sd's you
> > would be fine. imho.
> >
> > cheers.
> 
> Hello.
> 
> I have a few reasons to go with separate Volumes for each client, but
> that's not too important. I use an autochanger for the possibility of
> switching volumes automatically and processing multiple jobs at once.
> Well, I doubt I need more than one Autochanger, but what I am still unsure
> about is the devices for restore jobs. I was told that with a second
> device I can make sure that restore is possible at any time, but does it
> also mean I need one such device for each archive path?
> 
> Regards,
> 
> Florian
> 
> 
> --
> 
> Want excitement?
> Manually upgrade your production database.
> When you want reliability, choose Perforce Perforce version control.
> Predictably reliable.
> http://pubads.g.doubleclick.net/gampad/clk?id=157508191&iu=/4140/ostg.clkt
> rk
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users

This email and any files transmitted with it contain confidential and 
proprietary information and is solely for the use of the intended recipient.
If you are not the intended recipient please return the email to the sender and 
delete it from your computer and you must not use, disclose, distribute, copy, 
print or rely on this email or its contents.
This communication is for informational purposes only.
It is not intended as an offer or solicitation for the purchase or sale of any 
financial instrument or as an official confirmation of any transaction.
Any comments or statements made herein do not necessarily reflect those of GSA 
Capital.
GSA Capital Partners LLP is authorised and regulated by the Financial Conduct 
Authority and is registered in England and Wales at Stratton House, 5 Stratton 
Street, London W1J 8LA, number OC309261.
GSA Capital Services Limited is registered in England and Wales at the same 
address, number 5320529.


Re: [Bacula-users] Autochanger: yes or no?

2014-09-11 Thread Roberts, Ben
Hi Florian,

>> Is using one Device Resource in the Storage daemon enough or is there any
>> use for an autochanger in this case?
> We do the same thing as this and have a single device resource with no
> autochanger.

Both options have their merits and drawbacks, so you should assess which would 
be more appropriate for the workload you want to use.

Using an autochanger means:
- you can have multiple storage devices, and therefore run multiple jobs 
concurrently using different volumes (either in the same pool or in different pools)
- you can start using new pools in future without having to create new storage 
devices or restart the SD
- you need to pre-create and pre-label your storage volumes (I would advocate 
using the scratch pool, and letting Bacula move volumes around as needed)
- if you have at least 2 storage devices, you can reserve one and always be 
able to run a restore job at any time, regardless of whether backups are in 
progress
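
A minimal sketch of this layout for disk-based storage might look like the 
following (all names and paths here are illustrative, not taken from your setup; 
for a disk "changer" the changer device and command are effectively dummies):

Autochanger {
  Name = FileChanger
  Device = FileDrive1, FileDrive2
  Changer Device = /dev/null
  Changer Command = ""
}

Device {
  Name = FileDrive1
  Device Type = File
  Media Type = File
  Archive Device = /backup/volumes
  AutoChanger = yes
}

Device {
  Name = FileDrive2
  Device Type = File
  Media Type = File
  Archive Device = /backup/volumes
  AutoChanger = yes
  Autoselect = no   # reserved, so a restore can always find a free device
}

Both devices point at the same directory, so any volume can be loaded into 
either one.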

Using a single storage device:
- you can only ever read/write to a single volume at a time
- you will have to add a new storage device and restart the SD if you ever want 
to add a new pool (finding a time to do so can be tricky if your schedule is 
fairly full)
- you don't need to pre-create/pre-label volumes and Bacula can do it on demand
- you will need to set Max Pool Volumes or similar to prevent Bacula from 
completely filling the disk
- you might have to interrupt running jobs if you're called upon to do an 
emergency restore
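
For the single-device case, the on-demand labelling and volume-limiting points 
above might be sketched as follows (resource names and limits are illustrative):

Device {
  Name = FileStorage
  Device Type = File
  Media Type = File
  Archive Device = /backup/volumes
  LabelMedia = yes                # create and label volumes on demand
  Maximum Concurrent Jobs = 1
}

Pool {
  Name = Default
  Pool Type = Backup
  Label Format = "Vol-"
  Maximum Volumes = 50            # cap volume count so the disk cannot fill
  Maximum Volume Bytes = 50G
}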

Hope this helps.

Regards,
Ben Roberts




Re: [Bacula-users] scripting bconsole (Re: vchanger and "intervention needed")

2014-08-18 Thread Roberts, Ben
> >   You could use the expect command to script a reply to request as in:
> 
> I could also use python's subprocess.Popen or execvp(3) and dup(2) in c.
> The way every proper unix utility works is
> 
> bconsole -c "delete volume=xyz" -y

Bacula will do this for you without any other tools; you just need to amend the 
delete volume line to include "yes" on the end to suppress the prompt, e.g.
"delete volume=ABC123 yes"

It doesn't appear to be documented in the manual but does work (I just tested 
it).
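
Combined with the fact that bconsole reads commands from standard input, this 
makes it easy to script without expect, e.g. (volume name and config path 
illustrative):

echo "delete volume=ABC123 yes" | bconsole -c /opt/bacula/etc/bconsole.conf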

Regards,
Ben Roberts



Re: [Bacula-users] Bacula Loading Tape at the Wrong Library

2014-08-07 Thread Roberts, Ben
Hi Vinicius,

You need to use a unique Media Type for each autochanger, e.g. “LTO5-Library1” 
and “LTO5-Library2”. These are arbitrary string values, the exact name doesn’t 
matter. Bacula believes a drive in library1 is suitable for loading a tape from 
library2 because the same Media Type is used for each.
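
As a sketch, using the drive names from your config, each library's Device 
resources would carry their own Media Type value (the type names here are 
arbitrary, as noted above):

Device {
  Name = LibHP08-drive_1
  Drive Index = 0
  Media Type = LTO5-Library1   # instead of "LTO-5" on every drive
  Archive Device = /dev/tape/by-id/scsi-35001438016063c05-nst
  AutoChanger = yes
}

Device {
  Name = LibHP09-drive_1
  Drive Index = 0
  Media Type = LTO5-Library2
  Archive Device = /dev/tape/by-id/scsi-3500143801606395d-nst
  AutoChanger = yes
}

The Storage resources in the Director must then be updated to use the matching 
Media Type for each library.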

Regards,
Ben Roberts

From: Vinicius Alexandre Pereira de Souza [mailto:vinicius.apso...@gmail.com]
Sent: 07 August 2014 17:39
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Bacula Loading Tape at the Wrong Library


Hello everybody,

I'm new to Bacula, and I'm having a problem using different storages in the 
same pool.

I have two HP tape libraries installed on the storage (LibHP08 and LibHP09). I 
tried to configure some jobs to access both libraries, but I'm having some 
trouble with Bacula accessing the wrong library.

For example, Bacula tries to access volume "G00022L5" in slot 20 of LibHP08, 
but it reaches volume "H00011L5" in slot 20 of LibHP09. Basically, it tries 
to get a tape from the correct slot, but in the wrong library. It generates the 
following error:



LibHP 3307 Issuing autochanger "unload slot 20, drive 1" command.

 Warning: Director wanted Volume "G00022L5".

Current Volume "H00011L5" not acceptable because:

1998 Volume "H00011L5" catalog status is Append, not in Pool.

Then Bacula unloads the drive, tries to find the correct tape, but loads the 
wrong one, generates the error again, and so on.

The job never completes, since it never finds the correct tape.

Some of my Pools:



Pool {
  Name = machine-Pool-Weekly
  Pool Type = Backup
  Storage = LibHP08, LibHP09
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 34 days
}

Pool {
  Name = machine-Pool-Monthly
  Pool Type = Backup
  Storage = LibHP08, LibHP09
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 1825 days
}



Devices/Autochangers Config:



#
## An autochanger device with four drives
##  Library HP (LibHP08)
#
Autochanger {
  Name = LibHP08_Changer
  Device = LibHP08-drive_1, LibHP08-drive_2, LibHP08-drive_3, LibHP08-drive_4
  Changer Command = "/usr/lib64/bacula/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/tape/by-id/scsi-35001438016063c04
}

#
## An autochanger device with four drives
##  Library HP (LibHP09)
#
Autochanger {
  Name = LibHP09_Changer
  Device = LibHP09-drive_1, LibHP09-drive_2, LibHP09-drive_3, LibHP09-drive_4
  Changer Command = "/usr/lib64/bacula/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/tape/by-id/scsi-3500143801606395c
}



Device {
  Name = LibHP08-drive_1
  Drive Index = 0
  Media Type = LTO-5
  Archive Device = /dev/tape/by-id/scsi-35001438016063c05-nst
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  Alert Command = "sh -c 'smartctl -H -l error %c'"
  Maximum Changer Wait = 600
  Maximum Concurrent Jobs = 1
  LabelMedia = yes
}

Device {
  Name = LibHP08-drive_2
  Drive Index = 1
  Media Type = LTO-5
  Archive Device = /dev/tape/by-id/scsi-35001438016063c08-nst
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  Alert Command = "sh -c 'smartctl -H -l error %c'"
  Maximum Changer Wait = 600
  Maximum Concurrent Jobs = 1
  LabelMedia = yes
}

Device {
  Name = LibHP08-drive_3
  Drive Index = 2
  Media Type = LTO-5
  Archive Device = /dev/tape/by-id/scsi-35001438016063c0b-nst
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  Alert Command = "sh -c 'smartctl -H -l error %c'"
  Maximum Changer Wait = 600
  Maximum Concurrent Jobs = 1
  LabelMedia = yes
}

Device {
  Name = LibHP08-drive_4
  Drive Index = 3
  Media Type = LTO-5
  Archive Device = /dev/tape/by-id/scsi-35001438016063c0e-nst
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  Alert Command = "sh -c 'smartctl -H -l error %c'"
  Maximum Changer Wait = 600
  Maximum Concurrent Jobs = 1
  LabelMedia = yes
}

Device {
  Name = LibHP09-drive_1
  Drive Index = 0
  Media Type = LTO-5
  Archive Device = /dev/tape/by-id/scsi-3500143801606395d-nst
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  Alert Command = "sh -c 'smartctl -H -l error %c'"
  Maximum Changer Wait = 600
  Maximum Concurrent Jobs = 1
  LabelMedia = yes
}

Device {
  Name = LibHP09-drive_2
  Drive Index = 1
  Media Type = LTO-5
  Archive Device = /de

Re: [Bacula-users] Backup on Tape

2014-08-04 Thread Roberts, Ben
> 1) Can we tune the bacula config files to maximize backup speed?
Turn on attribute spooling to save on database round-trips during the backup 
run (these will be inserted at the end). Try to measure where your bottleneck 
is and then see if you can do anything about it: read I/O on the FD machine, 
network throughput, write I/O on the SD machine.
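
Attribute spooling is a one-line directive in the Job (or JobDefs) resource; a 
sketch, with the resource name illustrative:

JobDefs {
  Name = DefaultJob
  Spool Attributes = yes   # batch-insert file records into the catalog at job end
}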

> 2) Can bacula use both tape drives simultaneously to back up the same pool?
You can use multiple drives to write multiple volumes from the same pool 
concurrently, but each drive would need to be writing for a different job. You 
cannot split a single job across multiple devices.

> 3) Any other suggestion/tips?
I had problems with big backups (multi-TB) taking too long (>6 days). I got 
around this by taking a block-level backup using the bpipe plugin instead of a 
file-level backup. I'm using ZFS, so zfs send/recv, but on another system it 
could be done with LVM or Btrfs snapshots, or even tar. These run much faster 
because the underlying disks can provide big sequential reads, and there's no 
need to deal with individual file metadata. The downside is not being able to 
restore individual files, and having to have enough scratch space to restore an 
entire backup if you need data from it. So it's a compromise, and it's up to 
you whether this would fit with your workload.
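
For illustration, a bpipe-based FileSet along these lines might look as 
follows; the dataset, snapshot and restore-target names are made up, and the 
snapshot itself would need to be created beforehand (e.g. in a RunBeforeJob 
script):

FileSet {
  Name = "zfs-stream"
  Include {
    Options { signature = MD5 }
    # bpipe:<pseudo-file path>:<backup command>:<restore command>
    Plugin = "bpipe:/ZFS/tank.zfs:zfs send tank@bacula:zfs receive -F restorepool/tank"
  }
}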

Regards,
Ben Roberts

P.S. This is the second time John Drescher has just beaten me to a posting 
today!





Re: [Bacula-users] How to forcefully recycle tape

2014-08-04 Thread Roberts, Ben
Hi Ankush,

> I would like to forcefully recycle a tape. I tried the below command in
> bconsole, but it is not working.

> +-+--+---+---+-+
> | MediaId | VolumeName | VolStatus | Enabled | VolBytes| VolFiles |
> +-+--+---+---+-+
> |   2 | Tape1  | Full  |   1 | 477,598,464,000 |  501 |  2,592,000 |   1
> Enter *MediaId or Volume name: 2
> sql_get.c:1098 Media record for Volume "2" not found.

As the prompt says, if you wish to refer to a volume by its MediaId you must
use the syntax "*2"; otherwise Bacula looks for a volume with
VolumeName="2", which you probably don't have.
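
That is, at the same prompt:

Enter *MediaId or Volume name: *2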

Regards,
Ben Roberts





Re: [Bacula-users] Bacula with LTO6 on TS3100

2014-07-25 Thread Roberts, Ben
Hi Philip,

Bacula doesn’t have any issues with LTO6 media in general. I’m running with HP 
LTO6 drives (in a Fujitsu LT60 S2, which I understand to be functionally 
identical to an HP MSL4048) perfectly happily.

Regards,
Ben Roberts

From: Stevens, Philip [mailto:philip.stev...@igb.fraunhofer.de]
Sent: 25 July 2014 09:36
To: Ana Emília M. Arruda
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Bacula with LTO6 on TS3100

Hi,

I am also a little concerned about the LTO6 tapes. Can Bacula handle those as 
well?

Thanks
___
Philip Stevens, M. Sc.
Fraunhofer Institute for Interfacial Engineering and Biotechnology IGB
Nobelstraße 12, 70569 Stuttgart, Germany
Tel +49 711 970-4079 | Fax +49 711 970-4200
philip.stev...@igb.fraunhofer.de | www.igb.fraunhofer.de

From: Ana Emília M. Arruda [mailto:emiliaarr...@gmail.com]
Sent: Thursday, 24 July 2014 22:47
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Bacula with LTO6 on TS3100

Hi Phil,

Bacula works perfectly with the TS3100 and native Linux drivers.

Regards,
Ana

On Wed, Jul 23, 2014 at 4:17 AM, Stevens, Philip 
<philip.stev...@igb.fraunhofer.de> wrote:
Hi list,

We just bought a new IBM TS3100 tape library with LTO6 cartridges. Since I don't 
want to spend another fortune on software, I was wondering if it is possible to 
get Bacula working with that tape library. The setup is as follows:

Essentially there is just one server which needs to be backed up (which carries 
a lot of data, since it holds next-generation sequencing data). The server 
itself is connected to the storage via SAS. The tape library and the server 
which needs to be backed up are on the same (Ethernet) network. The server is 
running Red Hat Enterprise Linux Server 6.3. As mentioned, the library is a 
TS3100 (IBM) with LTO6 cartridges.

Any help or hint what Software I can use is highly appreciated.

Thank you,

Phil









Re: [Bacula-users] Baculum probelm with bconsole

2014-06-12 Thread Roberts, Ben
> Has nobody had this problem?
> Or did anyone succeed in compiling baculum on Centos 6.5?
> If so I would like to know what I am missing to get this running.

I had a similar problem with webacula. By default my bconsole binary had 
something like 744 permissions so only root (the owner) could execute it. You 
may need to tweak the permissions to let non-root users run it (something like 
sudo chmod g+x bconsole, sudo chgrp somegroup bconsole). You may also need to 
modify the permissions of bconsole.conf so the same group can read it.
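
For example (the group and paths depend on your distribution and on which user 
the web server runs as, so treat these as illustrative):

chgrp apache /usr/sbin/bconsole /etc/bacula/bconsole.conf
chmod 750 /usr/sbin/bconsole
chmod 640 /etc/bacula/bconsole.conf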

You don't want to make the config file world-readable, as that would let anybody 
manage your Bacula director, including restoring potentially sensitive backup 
data.

Regards,
Ben Roberts







[Bacula-users] Autochanger questions

2014-06-04 Thread Roberts, Ben
Hi all,

I've got a tape library with 3xLTO6 drives and 1xLTO4 drive in it. All four 
drives are managed by the same library, and the same changer device is used to 
load either media type into any drive. I'd like to use them as follows:
1x LTO4 drive free for use by any job (likely to be restores only)
2x LTO6 drives free for use by any job
1x LTO6 drive reserved for administrative tasks, such as restores and bulk 
volume labelling
(All four drives could be used to read from LTO4 media, but I'm happy to ignore 
that for now to keep the config simpler. The reason we bought an LTO4 drive is 
to read from LTO2/3 media)

I currently have the following config, but it doesn't behave quite as I'd 
like:

Autochanger {
  Name = tapestore1-autochanger-lto6
  Device = drive-0-tapestore1, drive-1-tapestore1
  Device = drive-2-tapestore1
  Changer Device = /dev/scsi/changer/foo
  Changer Command = "/opt/bacula/etc/mtx-changer %c %o %S %a %d"
}

Autochanger {
  Name = tapestore1-autochanger-lto4
  Device = drive-3-tapestore1
  Changer Device = /dev/scsi/changer/foo
  Changer Command = "/opt/bacula/etc/mtx-changer %c %o %S %a %d"
}

Device {
  Name = drive-0-tapestore1
  Archive Device = /dev/rmt/0mbn
  Device Type = Tape
  Media Type = LTO6
  AutoChanger = yes
  Removable media = yes;
  Drive Index = 0
  Maximum Concurrent Jobs = 1
  # Drive used for admin tasks, not autoselected for jobs
  Autoselect = no
}

Device {
  Name = drive-1-tapestore1
  Archive Device = /dev/rmt/1mbn
  Device Type = Tape
  Media Type = LTO6
  AutoChanger = yes
  Removable media = yes;
  Drive Index = 1
  Autoselect = yes
}
# Same for drive-2 and drive-3 (LTO4)

All pools are set to reference the Autochanger devices only.

The problems with this setup are:
- I'm not sure how to indicate I want to use a particular drive for a restore
- Selecting drive-0-tapestore1 for "label barcodes" when prompted doesn't work 
(I can't remember the exact message). This is directly related to the 
Autoselect=no setting, as toggling it to yes, or using one of the other drives, 
works as expected.

What is the recommended way to setup a drive specifically for restores? Is it 
to create a new Storage resource in the director that refers to the specific 
Device resource in the SD? And if so, will that work correctly with the 
autochanger to load the right tapes into the drive?

Secondly, is it correct to have two autochanger resources to handle the 
distinct media types, or is Bacula clever enough to manage both types with a 
single resource?

Regards,
Ben Roberts





Re: [Bacula-users] Change a volume from 'Error' to 'Append'

2014-05-21 Thread Roberts, Ben
If you want to be sure that it was the file mismatch rather than a media error 
on the tape, look through the job logs of the jobs that ran after the one that 
hit the network blip. You'll see something like:

29-Jan 08:39 backup1-sd JobId 28425: Volume "XXX675" previously written, moving 
to end of data.
29-Jan 08:41 backup1-sd JobId 28425: Error: Bacula cannot write on tape Volume 
"XXX675" because: The number of files mismatch! Volume=300 Catalog=299
29-Jan 08:41 backup1-sd JobId 28425: Marking Volume "XXX675" in Error in 
Catalog.

Ben Roberts


> -----Original Message-----
> From: Korbinian Grote [mailto:gr...@genomatix.de]
> Sent: 21 May 2014 14:15
> To: John Drescher; Roberts, Ben; bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Change a volume from 'Error' to 'Append'
>
> Thanks Ben & John for the quick answers!
>
> The volume in question was a 1.5/3 TB  LTO 5 tape and though they don't cost
> the world anymore, I'd have hated to let it go to waste. Anyway, querying the
> database showed that there was just the one single job that failed assigned to
> it, so deleting that job and marking it purged should hopefully have done the
> trick. It's now residing in the scratch pool waiting to be used, so I'll know
> as soon as one of the other pools are full...
>
> Thanks again!
>
> Korbinian
>
>
> Am 21.05.2014 um 14:46 schrieb John Drescher :
>
> >> I've observed similar issues in the past. A job fails part way through and
> the catalog is not updated leaving a mismatch in the number of files in the
> volume versus what's recorded in the catalog. This is then detected the next
> time the volume is used, and Bacula marks the volume as Error without writing
> anything to it.
> >>
> >> Given the mismatch, the safest thing to do is mark the volume as Full, and
> wait for the retention period to expire so the volume can be recycled.
> Unfortunately this wastes space (more painful when you have 100GB volumes and
> it failed within the first few gig...), so you'll need to either have
> sufficient spare space on your SD that you can cope with this, or be willing
> to sacrifice the other jobs on the volume and delete jobs+purge or purge it
> immediately to regain the space.
> >>
> >> I'm not sure how Bacula would behave if you change it straight to append;
> whether it would leave a hole in the volume, or whether this mismatch would
> corrupt subsequent jobs.
> >
> > The times I have done this there was no problem. For file based
> > volumes I keep them small 5 to 10 GB and in this case it does not
> > really waste space to mark the volume Full and or recycling the job
> > with the error.
> >
> > John
>
> --
> Dr. Korbinian Grote
> Product Manager
>
> Genomatix Software GmbH   http://www.genomatix.de
> Bayerstr. 85a, 80335 Muenchen, Germany
>
> HRB 117956 - Amtsgericht Muenchen
> Geschaeftsfuehrer: Klaus J.W. May






Re: [Bacula-users] Change a volume from 'Error' to 'Append'

2014-05-21 Thread Roberts, Ben
Hi Korbinian,

I've observed similar issues in the past. A job fails part way through and the 
catalog is not updated leaving a mismatch in the number of files in the volume 
versus what's recorded in the catalog. This is then detected the next time the 
volume is used, and Bacula marks the volume as Error without writing anything 
to it.

Given the mismatch, the safest thing to do is mark the volume as Full, and wait 
for the retention period to expire so the volume can be recycled. Unfortunately 
this wastes space (more painful when you have 100GB volumes and it failed 
within the first few gig...), so you'll need to either have sufficient spare 
space on your SD that you can cope with this, or be willing to sacrifice the 
other jobs on the volume and delete jobs+purge or purge it immediately to 
regain the space.
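
Marking the volume as Full is a one-liner in bconsole (volume name 
illustrative):

update volume=XXX123 volstatus=Full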

I'm not sure how Bacula would behave if you change it straight to append; 
whether it would leave a hole in the volume, or whether this mismatch would 
corrupt subsequent jobs.

Regards,
Ben Roberts

> -----Original Message-----
> From: Korbinian Grote [mailto:gr...@genomatix.de]
> Sent: 21 May 2014 11:59
> To: bacula-users@lists.sourceforge.net
> Subject: [Bacula-users] Change a volume from 'Error' to 'Append'
>
> Hi everyone.
>
> While I do have some basic knowledge of bacula (i.e. viewing logs, restoring
> files/folders, labeling volumes) I couldn't really find conclusive
> documentation on how to resolve the following situation:
>
> We're using bacula with a Tandberg tape library. One of the volumes (tapes)
> has a status of 'Error' when I do a 'list volumes'
> in bconsole. Looking at the logs around the time of the 'Last written to'
> field, I can see that a job - probably still running - got cancelled after
> losing its connection to the server (which coincides with a switch problem
> we had at that point, so that is probably what happened).
> The job had accessed two volumes, but only one shows 'Error'; the other is
> marked Full. Querying the database, I get 0 files and 0 bytes written to
> either volume for the job.
>
> How do I go forward making the volume appendable again? (Assuming that the
> reason for the 'Error' is the interrupted job?)
>
> Thanks in advance,
>
> Korbinian
>
>
>
> --
> "Accelerate Dev Cycles with Automated Cross-Browser Testing - For FREE
> Instantly run your Selenium tests across 300+ browser/OS combos.
> Get unparalleled scalability from the best Selenium testing platform available
> Simple to use. Nothing to install. Get started now for free."
> http://p.sf.net/sfu/SauceLabs
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users

This email and any files transmitted with it contain confidential and 
proprietary information and is solely for the use of the intended recipient.  
If you are not the intended recipient please return the email to the sender and 
delete it from your computer and you must not use, disclose, distribute, copy, 
print or rely on this email or its contents.  This communication is for 
informational purposes only.  It is not intended as an offer or solicitation 
for the purchase or sale of any financial instrument or as an official 
confirmation of any transaction.   Any comments or statements made herein do 
not necessarily reflect those of GSA Capital. GSA Capital Partners LLP is 
authorised and regulated by the Financial Conduct Authority and is registered 
in England and Wales at Stratton House, 5 Stratton Street, London W1J 8LA, 
number OC309261. GSA Capital Services Limited is registered in England and 
Wales at the same address, number 5320529.




Re: [Bacula-users] btape fill failure on HP LTO6/4 drives

2014-04-01 Thread Roberts, Ben
Hi Kern,

> It appears that the OS tape driver does not properly
> implement back space record after an EOT.  This is a defect of the operating
> system driver, but it is not fatal for Bacula.
>
> You will very likely see this defect show up when Bacula fills a tape and
> writes the final EOT mark then tries to verify that the last block was written
> correctly.  Due to the OS driver defect this will fail, but it is only a check
> and your data may still be good.  The results from btape re-reading what was
> written do not look encouraging, and may indicate that the last block was
> not properly written.  I personally would be worried.

Indeed I was seeing the same failure to backspace over EOT error in the job 
logs:
End of Volume "GSA784L6" at 3910:8005 on device "drive-1-tapestore1" 
(/dev/rmt/1mbn). Write of 64512 bytes got 0.
Error: Backspace record at EOT failed. ERR=I/O error
End of medium on Volume "GSA784L6" Bytes=3,909,704,343,552 Blocks=60,604,295 at 
28-Mar-2014 21:28

> This can happen if you are not running the tape drive in the right mode
> -- since I have not looked at the Solaris tape drive naming conventions for
> about 8 years now, I forget.  I suggest that you verify that using
> /dev/rmt/0mbn
> is what is recommended in the manual.

Rechecked the manual for this. I think 0mbn would have been an acceptable mode:
m = medium density (I've since removed this to let the tape run at its highest 
density and am retesting now)
b = BSD-compatible (needed by Bacula)
n = non-rewind (wanted by Bacula)
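Dropping the density letter just means using a different device node; a quick 
way to confirm the drive responds on the chosen node (the path is an example 
from my setup, yours will depend on your drive numbering):

```
# BSD-behaviour (b), no-rewind (n) node, with the density letter omitted
mt -f /dev/rmt/0bn status
```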

> If you can really do multi-volume restores correctly and you have verified
> that every byte is correct then you are probably OK.

I've successfully restored 3 jobs that were written in the same way. All three 
were bpipe backups of zfs streams so I'm fairly confident the restore is 
byte-perfect, or the zfs recv would have bailed out. I've updated the config 
with "Backward Space Record = no" to disable this check.

Regards,
Ben Roberts





[Bacula-users] btape fill failure on HP LTO6/4 drives

2014-03-28 Thread Roberts, Ben
Hi all,

While setting up a new tape library with HP LTO6 drives, btape test runs 
successfully, but btape fill fails with what looks like a problem handling the 
last block on the tape (perhaps an off-by-one error?). The outputs below are 
repeatable with both LTO6 and LTO4 media, but I've been able to do a multi-tape 
backup and restore OK; so I think the problem is isolated to btape rather than 
Bacula-sd and any common code between them. Will do some more testing, but 
obviously filling multiple LTO6 tapes takes a little while!

Has anyone seen this before?

This is on Solaris 5.11 (x86_64) running Bacula 5.2.13, locally compiled.


btape output:
..snip..
Wrote block=1909, file,blk=954,10936 VolBytes=2,499,540,811,776 rate=107.7 
MB/s
19-Mar 15:23 btape JobId 0: End of Volume "TestVolume1" at 954:11250 on device 
"drive-0-tapestore1" (/dev/rmt/0mbn). Write of 131072 bytes got 0.
19-Mar 15:23 btape JobId 0: Error: Backspace record at EOT failed. ERR=I/O error
btape: btape.c:2714-0 Last block at: 954:11249 this_dev_block_num=11250
btape: btape.c:2749-0 End of tape 954:-1. Volume Bytes=2,499,581,968,384. Write 
rate = 107.7 MB/s
btape: btape.c:2320-0 Wrote 1000 blocks on second tape. Done.
Done writing 0 records ...
btape: btape.c:2389-0 Wrote state file last_block_num1=11249 last_block_num2=0
btape: btape.c:2404-0

15:23:52 Done filling tape at 954:-1. Now beginning re-read of tape ...
btape: btape.c:2485-0 Enter do_unfill
19-Mar 15:24 btape JobId 0: Ready to read from volume "TestVolume1" on device 
"drive-0-tapestore1" (/dev/rmt/0mbn).
Rewinding.
Reading the first 1 records from 0:0.
1 records read now at 1:2502
Reposition from 1:2502 to 954:11249
Reposition error. ERR=dev.c:1366 ioctl MTFSR 11249 error on 
"drive-0-tapestore1" (/dev/rmt/0mbn). ERR=I/O error.

btape: btape.c:2412-0 do_unfill failed.


tapeinfo -f /dev/rmt/0mbn
Product Type: Tape Drive
Vendor ID: 'HP  '
Product ID: 'Ultrium 6-SCSI  '
Revision: '338B'
Attached Changer API: No
SerialNumber: ''
MinBlock: 1
MaxBlock: 16777215
Ready: no


Relevant Bacula-sd.conf snippet:

Autochanger {
  Name = tapestore1-autochanger
  Device = drive-0-tapestore1, drive-1-tapestore1
  Device = drive-2-tapestore1, drive-3-tapestore1
  Changer Device = /dev/scsi/changer/c1t5d1
  Changer Command = "/opt/bacula/etc/mtx-changer %c %o %S %a %d"
}

Device {
  Name = drive-0-tapestore1
  Archive Device = /dev/rmt/0mbn
  Device Type = Tape
  Media Type = LT06
  AutoChanger = yes
  Removable media = yes;
  Label Media = no
  Random access = no;
  Requires Mount = no;
  Maximum Changer Wait = 180
  Drive Index = 0
  Maximum Spool Size = 100G
}


Full btape test output:
Tape block granularity is 1024 bytes.
btape: butil.c:290-0 Using device: "/dev/rmt/0mbn" for writing.
btape: btape.c:477-0 open device "drive-0-tapestore1" (/dev/rmt/0mbn): OK
*test

=== Write, rewind, and re-read test ===

I'm going to write 1 records and an EOF
then write 1 records and an EOF, then rewind,
and re-read the data to verify that it is correct.

This is an *essential* feature ...

btape: btape.c:1157-0 Wrote 1 blocks of 130972 bytes.
btape: btape.c:609-0 Wrote 1 EOF to "drive-0-tapestore1" (/dev/rmt/0mbn)
btape: btape.c:1173-0 Wrote 1 blocks of 130972 bytes.
btape: btape.c:609-0 Wrote 1 EOF to "drive-0-tapestore1" (/dev/rmt/0mbn)
btape: btape.c:1215-0 Rewind OK.
1 blocks re-read correctly.
Got EOF on tape.
1 blocks re-read correctly.
=== Test Succeeded. End Write, rewind, and re-read test ===

btape: btape.c:1283-0 Block position test
btape: btape.c:1295-0 Rewind OK.
Reposition to file:block 0:4
Block 5 re-read correctly.
Reposition to file:block 0:200
Block 201 re-read correctly.
Reposition to file:block 0:
Block 1 re-read correctly.
Reposition to file:block 1:0
Block 10001 re-read correctly.
Reposition to file:block 1:600
Block 10601 re-read correctly.
Reposition to file:block 1:
Block 2 re-read correctly.
=== Test Succeeded. End Write, rewind, and re-read test ===



=== Append files test ===

This test is essential to Bacula.

I'm going to write one record  in file 0,
   two records in file 1,
 and three records in file 2

btape: btape.c:579-0 Rewound "drive-0-tapestore1" (/dev/rmt/0mbn)
btape: btape.c:1914-0 Wrote one record of 130972 bytes.
btape: btape.c:1916-0 Wrote block to device.
btape: btape.c:609-0 Wrote 1 EOF to "drive-0-tapestore1" (/dev/rmt/0mbn)
btape: btape.c:1914-0 Wrote one record of 130972 bytes.
btape: btape.c:1916-0 Wrote block to device.
btape: btape.c:1914-0 Wrote one record of 130972 bytes.
btape: btape.c:1916-0 Wrote block to device.
btape: btape.c:609-0 Wrote 1 EOF to "drive-0-tapestore1" (/dev/rmt/0mbn)
btape: btape.c:1914-0 Wrote one record of 130972 bytes.
btape: btape.c:1916-0 Wrote block to device.
btape: btape.c:1914-0 Wrote one record of 130972 bytes.
btape: btape.c:1916-0 Wrote block to device.
btape: btape.c:1914-0 W

Re: [Bacula-users] Backup Full and Incremental...

2014-03-14 Thread Roberts, Ben
You are correct: you cannot have an incremental without having already done a 
full. You will see in the logs that it says something like "no previous full 
found, upgrading to full".

Even if you could make a backup application take an "incremental" backup 
without a previous full, it would have to back up every file anyway, which 
would make it a full in everything but name. And you probably wouldn't want a 
complete backup of everything sitting in your incremental volume pool - it 
would be far bigger than your regular incremental jobs and throw off any 
provisioning calculations you had made. Bacula is being helpful by actually 
upgrading the job to a full.

From: Gilberto Nunes [mailto:gilberto.nune...@gmail.com]
Sent: 14 March 2014 14:40
To: Roberts, Ben
Cc: John Drescher; bacula-users
Subject: Re: [Bacula-users] Backup Full and Incremental...

Well guys
Score zero, again!

It doesn't work as I expected!

Perhaps I misunderstood the configuration and how Bacula works... or any other 
backup tool...
What I want to do is run an incremental backup today, whether a full backup 
exists or not...
So now I think: make an incremental backup from WHERE and from WHAT?

First, I need to do a full backup in order to do a differential or 
incremental backup...

So what I need to do FIRST is a full backup, and based on this full backup 
run an incremental backup, right???

I can't run an incremental backup when a full backup never took place!

Correct me if I'm wrong!

Thanks

2014-03-14 11:24 GMT-03:00 Gilberto Nunes <gilberto.nune...@gmail.com>:
Sorry... I meant "what happened..."  Poor English here... a Brazilian guy 
trying to speak English... \o

2014-03-14 11:21 GMT-03:00 Gilberto Nunes <gilberto.nune...@gmail.com>:

So much thanks...
I will try and see what happened...
Cheers


2014-03-14 11:18 GMT-03:00 Roberts, Ben <ben.robe...@gsacapital.com>:

You can do it two ways:

(Note the order of the arguments here is different from what you suggested - 
again check the docs for more info about what options you can specify here)
Schedule {
 Name = Backup-Samba
  Run = Level=Full Pool=Samba-Full on 2nd fri at 18:30
}

Or you can do as I did in the example and use the Job options:
Job {
  Pool = Samba-Full # Redundant given the options below, but still required
  Full Backup Pool = Samba-Full
  Incremental Backup Pool = Samba-Incremental
}

I prefer the latter form, since all the information about where the data will 
be written is stored with the Job.

Ben Roberts
IT Infrastructure
GSA Capital Partners LLP
Stratton House
5 Stratton Street
London W1J 8LA
D +44 (0)20 7959 7661
T +44 (0)20 7959 8800
www.gsacapital.com



From: Gilberto Nunes [mailto:gilberto.nune...@gmail.com]
Sent: 14 March 2014 14:14
To: Roberts, Ben
Cc: John Drescher; bacula-users

Subject: Re: [Bacula-users] Backup Full and Incremental...

Just one more thing:
I have different volumes, one for full and another for incremental...
In the Schedule, can I define full and incremental like this?

Schedule {
  Name = Backup-Samba
  Run = Level=Full on 2nd fri at 18:30 Pool=Samba-Full # Monthly full
  Run = Level=Incremental at 02:30 Pool=Samba-Incremental # Daily incremental
}

Thanks


2014-03-14 11:00 GMT-03:00 Roberts, Ben <ben.robe...@gsacapital.com>:

Instead of specifying the pools in the schedule, I use the Full Backup Pool and 
Incremental Backup Pool options as below.



Pool {
  Name = Samba-Full
  Pool Type = Backup
  Maximum Volume Jobs = 1
  Volume Retention = 30 days
  Recycle = yes
  AutoPrune = yes
  LabelFormat = FullMensal
}

Pool {
  Name = Samba-Incremental
  Pool Type = Backup
  Maximum Volume Jobs = 1
  Volume Retention = 365 days
  Recycle = yes
  AutoPrune = yes
  LabelFormat = Incremental
}

Job {
  Name = Backup-Samba
  Type = Backup
  Level = Incremental
  Client = server-fd
  FileSet = DADOS
  Schedule = Backup-Samba
  Storage = File
  Pool = Samba-Full # Redundant given the options below, but still required
  Full Backup Pool = Samba-Full
  Incremental Backup Pool = Samba-Incremental
  Messages = Standard
}

Schedule {
  Name = Backup-Samba
  Run = Level=Full on 2nd fri at 18:30 # Monthly full
  Run = Level=Incremental at 02:30 # Daily incremental
}

Regards,
Ben Roberts

From: Gilberto Nunes [mailto:gilberto.nune...@gmail.com]
Sent: 14 March 2014 13:48
To: John Drescher
Cc: bacula-users
Subject: Re: [Bacula-users] Backup Full and Incremental...

Just to complement, I did this:
It will run a full backup today at 18:30, and later, around 02:30, it will 
run an incremental backup...
This is the configuration:

Pool {
  Name = Samba-Full
  Pool Type = Backup
  Maximum Volume Jobs = 1
  Volume Retention = 30 days
  Recycle = yes
  AutoPrune = yes


Re: [Bacula-users] Backup Full and Incremental...

2014-03-14 Thread Roberts, Ben
You can do it two ways:

(Note the order of the arguments here is different from what you suggested - 
again check the docs for more info about what options you can specify here)
Schedule {
 Name = Backup-Samba
  Run = Level=Full Pool=Samba-Full on 2nd fri at 18:30
}

Or you can do as I did in the example and use the Job options:
Job {
  Pool = Samba-Full # Redundant given the options below, but still required
  Full Backup Pool = Samba-Full
  Incremental Backup Pool = Samba-Incremental
}

I prefer the latter form, since all the information about where the data will 
be written is stored with the Job.

Ben Roberts
IT Infrastructure
GSA Capital Partners LLP
Stratton House
5 Stratton Street
London W1J 8LA
D +44 (0)20 7959 7661
T +44 (0)20 7959 8800
www.gsacapital.com



From: Gilberto Nunes [mailto:gilberto.nune...@gmail.com]
Sent: 14 March 2014 14:14
To: Roberts, Ben
Cc: John Drescher; bacula-users
Subject: Re: [Bacula-users] Backup Full and Incremental...

Just one more thing:
I have different volumes, one for full and another for incremental...
In the Schedule, can I define full and incremental like this?

Schedule {
  Name = Backup-Samba
  Run = Level=Full on 2nd fri at 18:30 Pool=Samba-Full # Monthly full
  Run = Level=Incremental at 02:30 Pool=Samba-Incremental # Daily incremental
}

Thanks


2014-03-14 11:00 GMT-03:00 Roberts, Ben <ben.robe...@gsacapital.com>:

Instead of specifying the pools in the schedule, I use the Full Backup Pool and 
Incremental Backup Pool options as below.



Pool {
  Name = Samba-Full
  Pool Type = Backup
  Maximum Volume Jobs = 1
  Volume Retention = 30 days
  Recycle = yes
  AutoPrune = yes
  LabelFormat = FullMensal
}

Pool {
  Name = Samba-Incremental
  Pool Type = Backup
  Maximum Volume Jobs = 1
  Volume Retention = 365 days
  Recycle = yes
  AutoPrune = yes
  LabelFormat = Incremental
}

Job {
  Name = Backup-Samba
  Type = Backup
  Level = Incremental
  Client = server-fd
  FileSet = DADOS
  Schedule = Backup-Samba
  Storage = File
  Pool = Samba-Full # Redundant given the options below, but still required
  Full Backup Pool = Samba-Full
  Incremental Backup Pool = Samba-Incremental
  Messages = Standard
}

Schedule {
  Name = Backup-Samba
  Run = Level=Full on 2nd fri at 18:30 # Monthly full
  Run = Level=Incremental at 02:30 # Daily incremental
}

Regards,
Ben Roberts

From: Gilberto Nunes [mailto:gilberto.nune...@gmail.com]
Sent: 14 March 2014 13:48
To: John Drescher
Cc: bacula-users
Subject: Re: [Bacula-users] Backup Full and Incremental...

Just to complement, I did this:
It will run a full backup today at 18:30, and later, around 02:30, it will 
run an incremental backup...
This is the configuration:

Pool {
  Name = Samba-Full
  Pool Type = Backup
  Maximum Volume Jobs = 1
  Volume Retention = 30 days
  Recycle = yes
  AutoPrune = yes
  LabelFormat = FullMensal
}

Pool {
  Name = Samba-Incremental
  Pool Type = Backup
  Maximum Volume Jobs = 1
  Volume Retention = 365 days
  Recycle = yes
  AutoPrune = yes
  LabelFormat = Incremental
}

Job {
  Name = Backup-Samba-Completo
  Type = Backup
  Level = Full
  Client = server-fd
  FileSet = DADOS
  Schedule = Samba-Full
  Storage = File
  Pool = Samba-Full
  Messages = Standard
}

Job {
  Name = Backup-Samba-Incremental
  Type = Backup
  Level = Incremental
  Client = server-fd
  FileSet = DADOS
  Schedule = Samba-Inc
  Storage = File
  Pool = Samba-Incremental
  Messages = Standard
}

Schedule {
  Name = Samba-Full
  Run = Level=Full Pool=Samba-Full on 14 fri at 18:30
}

Schedule {
  Name = Samba-Inc
  Run = Level=Incremental Pool=Samba-Incremental at 02:30
}



I don't know if it will work as expected, but we will see...

Thanks a lot


2014-03-14 10:25 GMT-03:00 Gilberto Nunes <gilberto.nune...@gmail.com>:
Oh... OK friends... Take your time... Don't worry... I'll continue researching...

2014-03-14 10:24 GMT-03:00 John Drescher <dresche...@gmail.com>:

On Fri, Mar 14, 2014 at 9:22 AM, John Drescher <dresche...@gmail.com> wrote:
> On Fri, Mar 14, 2014 at 9:19 AM, Gilberto Nunes
> <gilberto.nune...@gmail.com> wrote:
>> But, and if I use different Volume for different Level??
>> I create two Volume, one for each Level, Full and Incremental...
>> How can I handle this?
>
> Odd / even pools. And more than 1 Run= for Full. More than 1 Run= for
> Incremental where you specify the pool also in the Run=
That may not work the way you want with the Incremental. Give me a few
minutes. It's time for breakfast...

--
John M. Drescher



--
Gilberto Ferreira



--
Gilberto Ferreira



Re: [Bacula-users] Backup Full and Incremental...

2014-03-14 Thread Roberts, Ben
Correct - see the documentation for the Schedule Run directive: 
http://www.bacula.org/5.2.x-manuals/en/main/main/Configuring_Director.html#SECTION00145

"Run = Job-overrides Date-time-specification
The Job-overrides permit overriding the Level, the Storage, the Messages, and 
the Pool specifications provided in the Job resource."

Regards,
Ben Roberts

From: Gilberto Nunes [mailto:gilberto.nune...@gmail.com]
Sent: 14 March 2014 14:03
To: Roberts, Ben
Cc: John Drescher; bacula-users
Subject: Re: [Bacula-users] Backup Full and Incremental...

But will this override the Level statement in the Job definition?

2014-03-14 11:00 GMT-03:00 Roberts, Ben <ben.robe...@gsacapital.com>:

Instead of specifying the pools in the schedule, I use the Full Backup Pool and 
Incremental Backup Pool options as below.



Pool {
  Name = Samba-Full
  Pool Type = Backup
  Maximum Volume Jobs = 1
  Volume Retention = 30 days
  Recycle = yes
  AutoPrune = yes
  LabelFormat = FullMensal
}

Pool {
  Name = Samba-Incremental
  Pool Type = Backup
  Maximum Volume Jobs = 1
  Volume Retention = 365 days
  Recycle = yes
  AutoPrune = yes
  LabelFormat = Incremental
}

Job {
  Name = Backup-Samba
  Type = Backup
  Level = Incremental
  Client = server-fd
  FileSet = DADOS
  Schedule = Backup-Samba
  Storage = File
  Pool = Samba-Full # Redundant given the options below, but still required
  Full Backup Pool = Samba-Full
  Incremental Backup Pool = Samba-Incremental
  Messages = Standard
}

Schedule {
  Name = Backup-Samba
  Run = Level=Full on 2nd fri at 18:30 # Monthly full
  Run = Level=Incremental at 02:30 # Daily incremental
}

Regards,
Ben Roberts

From: Gilberto Nunes [mailto:gilberto.nune...@gmail.com]
Sent: 14 March 2014 13:48
To: John Drescher
Cc: bacula-users
Subject: Re: [Bacula-users] Backup Full and Incremental...

Just to complement, I did this:
It will run a full backup today at 18:30, and later, around 02:30, it will 
run an incremental backup...
This is the configuration:

Pool {
  Name = Samba-Full
  Pool Type = Backup
  Maximum Volume Jobs = 1
  Volume Retention = 30 days
  Recycle = yes
  AutoPrune = yes
  LabelFormat = FullMensal
}

Pool {
  Name = Samba-Incremental
  Pool Type = Backup
  Maximum Volume Jobs = 1
  Volume Retention = 365 days
  Recycle = yes
  AutoPrune = yes
  LabelFormat = Incremental
}

Job {
  Name = Backup-Samba-Completo
  Type = Backup
  Level = Full
  Client = server-fd
  FileSet = DADOS
  Schedule = Samba-Full
  Storage = File
  Pool = Samba-Full
  Messages = Standard
}

Job {
  Name = Backup-Samba-Incremental
  Type = Backup
  Level = Incremental
  Client = server-fd
  FileSet = DADOS
  Schedule = Samba-Inc
  Storage = File
  Pool = Samba-Incremental
  Messages = Standard
}

Schedule {
  Name = Samba-Full
  Run = Level=Full Pool=Samba-Full on 14 fri at 18:30
}

Schedule {
  Name = Samba-Inc
  Run = Level=Incremental Pool=Samba-Incremental at 02:30
}



I don't know if it will work as expected, but we will see...

Thanks a lot


2014-03-14 10:25 GMT-03:00 Gilberto Nunes <gilberto.nune...@gmail.com>:
Oh... OK friends... Take your time... Don't worry... I'll continue researching...

2014-03-14 10:24 GMT-03:00 John Drescher <dresche...@gmail.com>:

On Fri, Mar 14, 2014 at 9:22 AM, John Drescher <dresche...@gmail.com> wrote:
> On Fri, Mar 14, 2014 at 9:19 AM, Gilberto Nunes
> <gilberto.nune...@gmail.com> wrote:
>> But, and if I use different Volume for different Level??
>> I create two Volume, one for each Level, Full and Incremental...
>> How can I handle this?
>
> Odd / even pools. And more than 1 Run= for Full. More than 1 Run= for
> Incremental where you specify the pool also in the Run=
That may not work the way you want with the Incremental. Give me a few
minutes. It's time for breakfast...

--
John M. Drescher



--
Gilberto Ferreira



--
Gilberto Ferreira



Re: [Bacula-users] Backup Full and Incremental...

2014-03-14 Thread Roberts, Ben
Instead of specifying the pools in the schedule, I use the Full Backup Pool and 
Incremental Backup Pool options as below.



Pool {
  Name = Samba-Full
  Pool Type = Backup
  Maximum Volume Jobs = 1
  Volume Retention = 30 days
  Recycle = yes
  AutoPrune = yes
  LabelFormat = FullMensal
}

Pool {
  Name = Samba-Incremental
  Pool Type = Backup
  Maximum Volume Jobs = 1
  Volume Retention = 365 days
  Recycle = yes
  AutoPrune = yes
  LabelFormat = Incremental
}

Job {
  Name = Backup-Samba
  Type = Backup
  Level = Incremental
  Client = server-fd
  FileSet = DADOS
  Schedule = Backup-Samba
  Storage = File
  Pool = Samba-Full # Redundant given the options below, but still required
  Full Backup Pool = Samba-Full
  Incremental Backup Pool = Samba-Incremental
  Messages = Standard
}

Schedule {
  Name = Backup-Samba
  Run = Level=Full on 2nd fri at 18:30 # Monthly full
  Run = Level=Incremental at 02:30 # Daily incremental
}

Regards,
Ben Roberts

From: Gilberto Nunes [mailto:gilberto.nune...@gmail.com]
Sent: 14 March 2014 13:48
To: John Drescher
Cc: bacula-users
Subject: Re: [Bacula-users] Backup Full and Incremental...

Just to complement, I did this:
It will run a full backup today at 18:30, and later, around 02:30, it will 
run an incremental backup...
This is the configuration:

Pool {
  Name = Samba-Full
  Pool Type = Backup
  Maximum Volume Jobs = 1
  Volume Retention = 30 days
  Recycle = yes
  AutoPrune = yes
  LabelFormat = FullMensal
}

Pool {
  Name = Samba-Incremental
  Pool Type = Backup
  Maximum Volume Jobs = 1
  Volume Retention = 365 days
  Recycle = yes
  AutoPrune = yes
  LabelFormat = Incremental
}

Job {
  Name = Backup-Samba-Completo
  Type = Backup
  Level = Full
  Client = server-fd
  FileSet = DADOS
  Schedule = Samba-Full
  Storage = File
  Pool = Samba-Full
  Messages = Standard
}

Job {
  Name = Backup-Samba-Incremental
  Type = Backup
  Level = Incremental
  Client = server-fd
  FileSet = DADOS
  Schedule = Samba-Inc
  Storage = File
  Pool = Samba-Incremental
  Messages = Standard
}

Schedule {
  Name = Samba-Full
  Run = Level=Full Pool=Samba-Full on 14 fri at 18:30
}

Schedule {
  Name = Samba-Inc
  Run = Level=Incremental Pool=Samba-Incremental at 02:30
}



I don't know if it will work as expected, but we will see...

Thanks a lot


2014-03-14 10:25 GMT-03:00 Gilberto Nunes <gilberto.nune...@gmail.com>:
Oh... OK friends... Take your time... Don't worry... I'll continue researching...

2014-03-14 10:24 GMT-03:00 John Drescher <dresche...@gmail.com>:

On Fri, Mar 14, 2014 at 9:22 AM, John Drescher <dresche...@gmail.com> wrote:
> On Fri, Mar 14, 2014 at 9:19 AM, Gilberto Nunes
> <gilberto.nune...@gmail.com> wrote:
>> But, and if I use different Volume for different Level??
>> I create two Volume, one for each Level, Full and Incremental...
>> How can I handle this?
>
> Odd / even pools. And more than 1 Run= for Full. More than 1 Run= for
> Incremental where you specify the pool also in the Run=
That may not work the way you want with the Incremental. Give me a few
minutes. It's time for breakfast...

--
John M. Drescher



--
Gilberto Ferreira



--
Gilberto Ferreira





Re: [Bacula-users] Progress estimates for jobs

2014-03-14 Thread Roberts, Ben
Hi Uwe,

That would be useful if you could, thanks :)

Ben Roberts
IT Infrastructure
GSA Capital Partners LLP
Stratton House
5 Stratton Street
London W1J 8LA
D +44 (0)20 7959 7661
T +44 (0)20 7959 8800

www.gsacapital.com



> -Original Message-
> From: Uwe Schuerkamp [mailto:uwe.schuerk...@nionex.net]
> Sent: 14 March 2014 09:36
> To: Roberts, Ben
> Cc: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Progress estimates for jobs
>
> Hi folks,
>
> I created a similar console-based tool a while ago which works much like
> "top" (minus all the fancy keyboard shortcuts) and works great in a
> watch -n 60 loop; you can find a screenshot here:
>
> https://dl.dropboxusercontent.com/u/1983539/btop.png
>
> btop calculates time remaining based on the previous backup run for a job and
> its current speed as reported by the client.
>
> Let me know if you're interested and I'll try cleaning it up for an alpha
> release.
>
> Cheers, Uwe
>
> --
> uwe.schuerk...@nionex.net fon: [+49] 5242.91 - 4740, fax:-69 72
> Hauptsitz: Avenwedder Str. 55, D-33311 Gütersloh, Germany Registergericht
> Gütersloh HRB 4196, Geschäftsführer: H. Gosewehr, D. Suda NIONEX --- Ein
> Unternehmen der Bertelsmann SE & Co. KGaA
>




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Progress estimates for jobs

2014-03-12 Thread Roberts, Ben
Hi,

I've built a simple dashboard internally for displaying the current status and 
most recent completion of each job in the director. It also makes an estimate 
of the average job size and run frequency (which we use for alerting when jobs 
run unexpectedly late or are abnormally sized). I pull this data directly from 
the catalog with a rather complex SQL query.

The one additional feature I'd like to add is an estimate of progress based on 
current amount backed up, start time, and average size. I think I should be 
able to calculate this information from the JobMedia table. Indeed this works 
pretty well for tape-based backups but produces crazy numbers (many orders of 
magnitude too big) for disk-based backups.

I'm currently using a simplified query like this to extract the data, and have 
the dashboard deliberately throw away any numbers that look wrong:
SELECT JobId, SUM(EndBlock-StartBlock)*64512 AS 'CurrentBytes' FROM JobMedia 
GROUP BY JobId;

I'm obviously making a few assumptions here:

-  A block is always 63k (63 × 1024 = 64,512 bytes). This seems to hold 
true for LTO4 tapes. Are blocks for disk-based backups always a fixed size, and 
is this also 63k?

-  That the start block and end block lie in the same file. Again this 
holds true for LTO4 backups, but often not for disk backups.

Are files a fixed number of blocks long? Can I make any inference as to how much 
data has been backed up for a job whose JobMedia record spans multiple file 
numbers? I couldn't glean any useful answers to these questions from the 
schema documentation.
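To make the arithmetic concrete, here is a rough Python sketch of the progress 
estimate described above (hypothetical helper names; it assumes the fixed 
64,512-byte block size, which, as noted, may not hold for disk volumes):

```python
# Rough sketch of the dashboard's progress estimate (hypothetical helpers).
# Assumes a fixed 63k (64,512-byte) block size, which holds for LTO4 tapes
# but, as discussed above, may not hold for disk-based volumes.
BLOCK_SIZE = 64512  # 63 * 1024 bytes

def current_bytes(jobmedia_rows):
    """Sum bytes written so far from (StartBlock, EndBlock) pairs,
    mirroring SUM(EndBlock-StartBlock)*64512 from the SQL query."""
    return sum((end - start) * BLOCK_SIZE for start, end in jobmedia_rows)

def progress_pct(jobmedia_rows, avg_job_bytes):
    """Estimate completion as a percentage of the average job size;
    discard numbers that look wrong, as the dashboard does."""
    pct = 100.0 * current_bytes(jobmedia_rows) / avg_job_bytes
    return pct if 0.0 <= pct <= 100.0 else None
```

For example, a job that has written 1,000 blocks against a 129,024,000-byte 
average would report 50% complete; the crazy disk-backup numbers would come 
back as None and be thrown away.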

Is there a better, or indeed any other way to retrieve this data from the 
Catalog? I'd prefer to avoid scripting bconsole access and scraping the results 
of show storage, or show jobid.

For reference, this has been tested against 5.0.2 and 5.2.13, using a MySQL 
catalog.

Regards,
Ben Roberts





Re: [Bacula-users] Help please. Incremental backup works very strange.

2014-03-05 Thread Roberts, Ben
You will need to enable Accurate mode in your job or jobdefs resource. This 
lets Bacula keep track of files that have been deleted. See 
http://www.bacula.org/manuals/en/install/install/Configuring_Director.html#SECTION0063
 for more info.
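For reference, a minimal sketch of the change in the Job resource (the resource 
names here are made up, not taken from your config):

```
Job {
  Name = "example-backup"   # illustrative name
  JobDefs = "DefaultJob"    # Accurate can equally be set in the JobDefs
  Accurate = yes            # consult the catalog so deleted files are tracked
}
```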

Regards,
Ben Roberts

> -Original Message-
> From: Логинов Илья [mailto:joshk...@ya.ru]
> Sent: 05 March 2014 11:40
> To: bacula-users@lists.sourceforge.net
> Subject: [Bacula-users] Help please. Incremental backup works very
> strange.
>
> Hello, I have a problem with Bacula.
> The problem is that the incremental backup works very strange.
> I create a full backup of the empty folder, then I load files into it and
> do an incremental backup. Here everything works properly and restores
> correctly.
> But when I empty the folder, take a second incremental backup (of the
> now-empty folder) and restore the latest incremental backup, the files
> that had been deleted are restored. Does Bacula really not record in an
> incremental backup which files no longer exist? Is there some way to keep
> it from restoring deleted files? The documentation does not describe this.
>
> Thanks))
>





Re: [Bacula-users] SOLVED Errors restoring from disk backup: Volume data error Wanted ID: "BB02", got ""

2014-01-21 Thread Roberts, Ben
Hi Martin, Kern

Confirmed that recompiling on the target machine (if not the simultaneous 
upgrade to 5.2.13) fixed this, and I was able to restore 10TB over the weekend 
from backups previously flagged as faulty. It looks like the root cause was a 
binary incompatibility between the system libraries Bacula was linking against 
at runtime and those it was built against, which manifested only as a read 
error during restores.

Thanks again for your help, very much appreciated!

Ben Roberts
IT Infrastructure
GSA Capital Partners LLP
Stratton House, 5 Stratton Street, London W1J 8LA
D +44 (0)20 7959 7661
T +44 (0)20 7959 8800
www.gsacapital.com



From: Kern Sibbald [mailto:k...@sibbald.com]
Sent: 17 January 2014 19:35
To: Roberts, Ben; bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Errors restoring from disk backup: Volume data 
error Wanted ID: "BB02", got ""

Hello Ben,

Great!  Thanks for the feedback.

Good luck,

Kern


Re: [Bacula-users] Errors restoring from disk backup: Volume data error Wanted ID: "BB02", got ""

2014-01-17 Thread Roberts, Ben
Hi Kern,

I verified that the failures were happening on a 5.0.x FD as well as a 5.2.x 
FD. At the time, I hadn't realised this was unsupported or that it was even 
happening. Following Martin's observation earlier that the corruption was 
happening conveniently closely to the 2^32 overflow boundary, I've re-compiled 
the director/sd (and took the opportunity to upgrade to 5.2.13) and am just 
trying a full restore of one of the failing backups now - so far 460GB restored 
which is a record for this server. It looks like the problem was entirely my 
fault - using a copy of the DIR/SD compiled for one OS on a newer version of OS 
and that the corruption was happening on reading the data stream back in rather 
than while it was being written to the backup volumes.

I'll do a few more TB of restores to confirm the upgraded director is doing the 
correct thing and that this doesn't need any further investigation from the 
Bacula side.

Noted Re the same version of DIR/SD. I have not and will not be attempting to 
cross versions here.

Regards,

Ben Roberts

IT Infrastructure



From: Kern Sibbald [mailto:k...@sibbald.com]
Sent: 17 January 2014 17:23
To: Roberts, Ben; bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Errors restoring from disk backup: Volume data 
error Wanted ID: "BB02", got ""

Hello,

Every case of this particular error message that I have seen has been
due to data corruption outside of Bacula.  Typically this happens when
a disk drive is bad, but since you are running ZFS and its checksums
are good, I can see only several other possibilities:

1. The ZFS code is messed up.  Running a current distribution with the
ZFS kernel module should not have this problem, but if you are running
something a bit older or using a user file system rather than the kernel
module you could have problems.

2. You have bad cables or a bad disk controller.

3. You seem to be using 5.2.x FDs with 5.0.x Director/SD,
which is not supported.  Your FDs should never be a higher
version than the DIR/SD, but may be lower.  In addition your
DIR and SD must always be the same version.

Oops, I just re-read your email and probably point 1 does not apply
since you seem to be running ZFS on Solaris so there is little or no
possibility that the code is bad.

Best regards,
Kern


Re: [Bacula-users] Errors restoring from disk backup: Volume data error Wanted ID: "BB02", got ""

2014-01-17 Thread Roberts, Ben
> Are the failing block numbers always a little below 2^32 (like 4294944994 and
> 4294941825 in your messages)?  If so, that maybe suggests a compiler
> bug if the same source code works when compiled on the older machine (or is it
> the same binary too?).

That is a very good spot, I hadn't picked up on the pattern.

23-Dec 09:18 backup3-sd JobId 147: Error: block.c:275 Volume data error at 
24:4294915733! Wanted ID: "BB02", got "". Buffer discarded.
23-Dec 11:58 backup3-sd JobId 149: Error: block.c:275 Volume data error at 
24:4294915733! Wanted ID: "BB02", got "". Buffer discarded.
23-Dec 12:37 backup3-sd JobId 151: Error: block.c:275 Volume data error at 
24:4294940498! Wanted ID: "BB02", got "". Buffer discarded.
23-Dec 14:23 backup3-sd JobId 153: Error: block.c:275 Volume data error at 
24:4294944995! Wanted ID: "BB02", got "". Buffer discarded.
03-Jan 09:50 backup3-sd JobId 209: Error: block.c:275 Volume data error at 
24:4294945000! Wanted ID: "BB02", got "". Buffer discarded.
03-Jan 12:49 backup3-sd JobId 213: Error: block.c:275 Volume data error at 
24:4294944999! Wanted ID: "BB02", got "". Buffer discarded.
03-Jan 13:18 backup3-sd JobId 214: Error: block.c:275 Volume data error at 
24:4294944994! Wanted ID: "BB02", got "". Buffer discarded.
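For the record, the pattern is easy to check: every failing address above sits 
just below 2^32, within one 64,512-byte block of the boundary, which fits a 
32-bit offset truncation (a quick sanity check, not a diagnosis):

```python
# Failing block addresses from the errors above, and their distance below 2^32.
addresses = [4294915733, 4294940498, 4294944995,
             4294945000, 4294944999, 4294944994, 4294941825]

gaps = [2**32 - a for a in addresses]
print(gaps)  # every gap is positive and smaller than one 64,512-byte block

assert all(0 < g < 64512 for g in gaps)
```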

I took a shortcut and reused the same package/binaries on the 5.11 machine as 
we had been running on 5.10. Since it appeared to work for everything I tried 
(including small restores) I thought that it was working properly.

I think I was heading toward the same conclusion, so I will look at either 
rebuilding for 5.11 or putting a director on a Linux machine and accessing the 
ZFS store via NFS (though I'm not sure what performance impact to expect from 
the latter).

Thanks for the pointer, I'll let you know if using an alternate binary fixes 
this.

Ben Roberts
IT Infrastructure






Re: [Bacula-users] Errors restoring from disk backup: Volume data error Wanted ID: "BB02", got ""

2014-01-16 Thread Roberts, Ben
Hi Josh,

> Could you downgrade the client to 5.0.2? I know SD and DIR are backward 
> compatible with older clients, but I'm not so sure what happens when the 
> client is a newer version.

Since this exact topic came up in IRC last night, I tried a restore of a backup 
made by a 5.0 client (RHEL 5.0.0-12.el6) and saw the same failure 74GB in.


15-Jan 23:51 backup3-sd JobId 263: Error: block.c:275 Volume data error at 
24:4294941825! Wanted ID: "BB02", got "". Buffer discarded.

15-Jan 23:51 backup3-sd JobId 263: Fatal error: fd_cmds.c:169 Command error 
with FD, hanging up.

I've also been doing some test restores from our older bacula infra (running 
exactly the same DIR/SD versions) and have successfully restored >10TB so it's 
looking like the issue is limited to the newer machine only.

Regards,

Ben Roberts

IT Infrastructure








[Bacula-users] Errors restoring from disk backup: Volume data error Wanted ID: "BB02", got ""

2014-01-14 Thread Roberts, Ben
Hi all,

I've recently set up a new Bacula director/storage daemon in preparation for 
moving our existing backups to newer hardware. During testing, I've run into 
problems restoring backups taken to disk, failing with the messages:



Error: block.c:275 Volume data error at 24:4294944994! Wanted ID: "BB02", got 
"". Buffer discarded.

Fatal error: fd_cmds.c:169 Command error with FD, hanging up.

Similar errors are reported for both file-level backups, and block-level 
backups made using bpipe. I've seen the instructions in 
http://www.bacula.org/en/dev-manual/main/main/Restore_Command.html#SECTION002110,
 but these only seem to apply to tape backups rather than disk ones. 
Regardless, I've tried stripping the positional information from the bootstrap 
file with no effect.

Some relevant notes from my testing:

-  The issue does not affect every backup made, but does affect a 
significant proportion tested.

-  A single job can be affected at multiple locations, i.e. skipping 
one affected file might see the job fail again at a subsequent file.

-  Attempting to restore the same job multiple times elicits failures 
at the same block each time. Re-running the backup job may produce a restorable 
backup, or else one that fails at a different location. Other jobs fail at 
different locations.

-  All data is stored on ZFS, which reports completely clean, with no 
checksum errors at the filesystem level.

-  The server is not reporting any hardware issues, e.g. corrected or 
uncorrectable memory reads, disk accesses etc.

-  The backup jobs are multiple TB in size, and restores frequently 
fail within the first couple hundred GB.

-  The storage daemon is configured with a disk-changer backed 
autochanger, writing to 100GB volumes, all residing within the same ZFS 
filesystem (sitting atop a large RAID-Z2 disk array).

The director is running "Version: 5.0.2 (28 April 2010) i386-pc-solaris2.10 
solaris 5.10" (compiled on solaris 5.10, running on 5.11). Storage daemon runs 
on the same machine as the director.  (I'm loosely tied to this version so the 
director can interact with a storage daemon on another machine connected to a 
tape changer).
A sample client is running "Version: 5.2.13 (19 February 2013)  
i386-pc-solaris2.11 solaris 5.11".

From my understanding of how the Bacula components fit together, I suspect the 
corruption must be happening in the Storage daemon (since this is the only 
component that would be interested in the BB02 block header?) before the data 
is written to disk (otherwise ZFS would be reporting read/write errors).

Is this an issue that's been seen before on other disk backups? Can anyone 
provide any assistance in locating and fixing the cause of the corruption? Any 
help would be greatly appreciated.

Regards,

Ben Roberts

IT Infrastructure


--- Relevant config excerpts:

Autochanger {
  Name = backup3-autochanger
  Device = drive-restore-backup3, drive-1-backup3
  Device = drive-2-backup3, drive-3-backup3
  Device = drive-4-backup3, drive-5-backup3
  Changer Device = /data2/bacula/storage/backup3-autochanger.conf
  Changer Command = "/opt/bacula/etc/disk-changer %c %o %S %a %d"
}

Device {
  Name = drive-1-backup3
  Archive Device = /data2/bacula/storage/backup3-autochanger/drive1
  Device Type = File
  Media Type = File-backup3
  AutoChanger = yes
  Removable media = no
  Random access = yes
  Requires Mount = no
  Always Open = no
  Label Media = yes
  Maximum Changer Wait = 180
  Drive Index = 1
  Maximum Spool Size = 100G
}
...

Storage {
  Name = backup3-sd
  Address = backup3.local
  Device = backup3-autochanger
  Media Type = File-backup3
  Autochanger = yes
}

Pool {
  Name = Disk-45Day-backup3
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Job Retention = 45 days
  Volume Retention = 45 days
  Label Format = Disk-45Day-backup3-
  Storage = backup3-sd
  Maximum Volume Bytes = 100G
}


This email and any files transmitted with it contain confidential and 
proprietary information and is solely for the use of the intended recipient. If 
you are not the intended recipient please return the email to the sender and 
delete it from your computer and you must not use, disclose, distribute, copy, 
print or rely on this email or its contents. This communication is for 
informational purposes only. It is not intended as an offer or solicitation for 
the purchase or sale of any financial instrument or as an official confirmation 
of any transaction. Any comments or statements made herein do not necessarily 
reflect those of GSA Capital. GSA Capital Partners LLP is authorised and 
regulated by the Financial Conduct Authority and is registered in England and 
Wales at Stratton House, 5 Stratton Street, London W1J 8LA, number OC309261. 
GSA Capital Services Limited is registered in England and Wales at the same 
address, number 5320529.
