Re: [Bacula-users] opportunistic backup?

2016-11-16 Thread Bryn Hughes
On 2016-11-16 06:12 AM, Paul J R wrote:
> Hi All,
>
> I have a data set that I'd like to back up that's large and not very
> important. Backing it up is a "nice to have", not a must-have. I've been
> trying to find a way to back it up to disk that isn't disruptive to the
> normal flow of backups, but every time I end up in a place where Bacula
> wants to do a full backup of it (which takes too long and ends up
> getting cancelled).
>
> Currently I'm using 7.0.5, and something I noticed in the 7.4 tree is the
> ability to resume stopped jobs, but from my brief testing it won't
> quite do what I'm after either. Ideally what I'm trying to achieve is to give
> Bacula one hour a night to back up as much as it can and then stop. Setting
> a time limit doesn't work because the backup just gets cancelled, forgets
> everything it has already backed up, and tries to start from scratch
> again the following night.
>
> VirtualFull doesn't really do what I'm after either, and I've also tried
> populating the database directly in a way that makes it think it's
> already got a full backup (varying results, and none of them fantastic).
> A full backup of the dataset in one hit isn't realistically achievable.
>
> Before I give up though, im curious if anyone has tried doing similar
> and what results/ideas they had that might work?
>
> Thanks in advance!

While there are many things that Bacula does very well, what you are 
describing is not one of them.

Check out CrashPlan - you can set it up to back up to another machine 
for free. I use it in exactly this sort of manner to back up a few 
laptops to a server. It's particularly good at handling intermittent 
connectivity, changing files, and things like that. You don't get the same 
amount of control over expiry, and it doesn't give you any real way to send 
data out to tape (you'd have to use Bacula or something similar to do a full 
backup of the entire directory it uses, after stopping CrashPlan or using 
filesystem snapshots), but it will handle multiple machines and multiple 
versions of backed-up files, and can be scheduled the way you want.

Another tool that works well for stuff like this (plus doing a whole lot 
of other things) is OwnCloud. I use that on my main personal laptop now. 
The files deposited on the OwnCloud server are a lot easier to back up 
with tools like Bacula as well, though there is still a separate 
database involved that you have to worry about. That reminds me to go 
check and make sure I've included OwnCloud's DB in my Bacula jobs!

Bryn



Re: [Bacula-users] Restore an entire directory from some day

2016-10-20 Thread Bryn Hughes

On 2016-10-20 01:36 PM, Dante F. B. Colò wrote:

Hello Folks

I have been using Bacula for years. In the case of incremental 
backups, maybe I'm doing it wrong, but when I need to, for example, 
restore a folder from some day, I restore it from the last full job 
before the date I need plus all subsequent incremental backups up to 
that day. Is there a better way to do this?




That's what an Incremental is: a backup set containing only what has 
changed since the last backup (of any level).


A Differential, on the other hand, contains everything changed since the last 
Full. A typical strategy would be to run a weekly Differential and then 
daily Incrementals; that way you need only the Full, the latest Differential and 
then just the Incrementals since the last Differential.
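
A sketch of the kind of Schedule resource that implements this rotation 
(names and times here are illustrative, not from your setup):

  Schedule {
    Name = "WeeklyCycle"
    Run = Level=Full 1st sun at 23:05
    Run = Level=Differential 2nd-5th sun at 23:05
    Run = Level=Incremental mon-sat at 23:05
  }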


Bryn


Re: [Bacula-users] Issue with In-Changer Media after replacing Tape Library

2016-03-01 Thread Bryn Hughes
It appears you never did an 'update slots' on the old library after 
emptying the tapes from it...


You can manually change the 'InChanger' flag from a '1' to a '0' for the 
affected tapes; at least it doesn't look like there are too many of 
them.  You can also delete those volumes altogether if you no longer 
need to get any data from them.
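
If you'd rather fix them in one go, something like this against the 
catalog should do it (a sketch only - the columns are from the standard 
Media table; back up your catalog and adjust the WHERE clause first):

  UPDATE Media SET InChanger=0 WHERE MediaType='LTO-5' AND InChanger=1;

Then run 'update slots' against the new TL4000 so Bacula rescans what is 
really in it.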


Bryn

On 2016-03-01 05:52 AM, Mingus Dew wrote:

Dear All,
  I recently replaced a malfunctioning LTO5 TL2000 tape library 
with a new LTO6 TL4000 tape library. I've been having a lot of failing 
tape jobs since then, and I think I've finally rooted out the problem, 
but am not sure where to begin fixing it.


Essentially, when I do a query to show what Volumes Bacula thinks are 
in the changer, it is still showing me Volumes and Slots from the 
previous library mixed with Volumes and Slots from the new, higher 
capacity library.


I'm using Bacula Community Edition 7.0.5 on CentOS 6.7 x86_64. Here is 
the output from bconsole query. The old library entries all have 
Storage "TL2000" and Media Type "LTO5". Any guidance is very appreciated.


Yours,
Shon

+---------+------------+-----------+---------+------+---------------------+-----------+-----------+
| MediaId | VolumeName | GB        | Storage | Slot | Pool                | MediaType | VolStatus |
+---------+------------+-----------+---------+------+---------------------+-----------+-----------+
|   2,065 | 15         |  300.6804 | TL2000  |    1 | Mentora_LTO5_Tapes  | LTO-5     | Full      |
|   2,109 | 05         | 1100.4370 | TL2000  |    2 | Synnefo_LTO5_Tapes  | LTO-5     | Append    |
|   2,113 | 08         |    0.0001 | TL2000  |    3 | KillerIT_LTO5_Tapes | LTO-5     | Append    |
|   2,098 | 51         |  411.7481 | TL2000  |    4 | Mentora_LTO5_Tapes  | LTO-5     | Append    |
|   2,115 | 04         |    0.0001 | TL2000  |    6 | KillerIT_LTO5_Tapes | LTO-5     | Append    |
|   2,111 | 07         |  258.3787 | TL2000  |    7 | KillerIT_LTO5_Tapes | LTO-5     | Append    |
|   2,108 | 12         | 1452.4568 | TL2000  |    8 | Calgon_LTO5_Tapes   | LTO-5     | Full      |
|   2,110 | 02         | 1664.2599 | TL2000  |    9 | Calgon_LTO5_Tapes   | LTO-5     | Full      |
|   2,116 | 03         |    0.0001 | TL2000  |   10 | KillerIT_LTO5_Tapes | LTO-5     | Append    |
|   2,117 | 06         |    0.0001 | TL2000  |   11 | KillerIT_LTO5_Tapes | LTO-5     | Append    |
|   2,112 | 09         | 3882.6981 | TL2000  |   12 | Mentora_LTO5_Tapes  | LTO-5     | Full      |
|   2,704 | 21L6       |    0.     | TL4000  |    3 | Mentora_LTO6_Tapes  | LTO-6     | Recycle   |
|   2,738 | 37L6       |    0.0001 | TL4000  |   37 | Calgon_LTO6_Tapes   | LTO-6     | Append    |
|   2,739 | 40L6       |    0.0001 | TL4000  |   38 | Synnefo_LTO6_Tapes  | LTO-6     | Append    |
|   2,740 | 43L6       |    0.0001 | TL4000  |   39 | KillerIT_LTO6_Tapes | LTO-6     | Append    |
|   2,741 | 46L6       |    0.0001 | TL4000  |   40 | Voith_LTO6_Tapes    | LTO-6     | Append    |
|   2,742 | 38L6       |    0.0001 | TL4000  |   41 | Scratch_LTO6        | LTO-6     | Append    |
|   2,743 | 41L6       |    0.0001 | TL4000  |   42 | Scratch_LTO6        | LTO-6     | Append    |
|   2,744 | 44L6       |    0.0001 | TL4000  |   43 | Scratch_LTO6        | LTO-6     | Append    |
|   2,745 | 47L6       |    0.0001 | TL4000  |   44 | Scratch_LTO6        | LTO-6     | Append    |
|   2,746 | 39L6       |    0.0001 | TL4000  |   45 | Scratch_LTO6        | LTO-6     | Append    |
|   2,747 | 42L6       |    0.0001 | TL4000  |   46 | Scratch_LTO6        | LTO-6     | Append    |
|   2,748 | 45L6       |    0.0001 | TL4000  |   47 | Scratch_LTO6        | LTO-6     | Append    |
+---------+------------+-----------+---------+------+---------------------+-----------+-----------+



Re: [Bacula-users] Question about versions to be installed in an Linux / Windows environment

2015-12-24 Thread Bryn Hughes
On 2015-12-24 07:34 AM, Daniel Bareiro wrote:
> Hi Wanderlei and Greg.
>
> On 23/12/15 15:34, Greg Woods wrote:
>
>>> am thinking of using Debian Jessie which includes Bacula 5.2.6 in its
>>> repository for director and storage daemon. But I remember a few cases
>>> where I have experienced incompatibilities if the client versions are
>>> far from the server version. Correct me if I'm wrong, please.
>> The requirement is FD <= (SD == DIR). The storage daemon and director
>> must be the same version, and must be newer than or the same as  any
>> file daemons of any of the clients. I do use a 7.0.5 director and
>> storage daemon (compiled from source on Debian Jessie in the SD case)
>> with some clients that have 5.2.6 file daemons and that works just fine.
> Thanks for your answers and for the considerations mentioned about the
> versions.
>
> Greg, you mention having compiled the SD on Jessie, but I guess you've
> also compiled the Director, right? Since both must have the same
> version. Though I suppose that the Director may also be on another host
> that includes Bacula 7.0.5 in its repositories.
>
> About using Jessie, is it worth compiling the Director and SD? That is,
> do the improvements over the versions provided by Jessie (5.2.6) make a
> substantial difference? What improvements have you noticed?
>
> My idea is to use Jessie for the Director and storage daemon. Among the
> client hosts I have Squeeze LTS, CentOS 5.10, Ubuntu and Microsoft Windows
> servers (2008R2 SP1 server edition and 2003R2 SP2 standard edition).
>
> From what I could find for Squeeze, the latest version is in Backports
> (5.2.6) because the version in the squeeze-lts repository is even older
> (5.0.2). I think the Ubuntu version of Bacula is the same as on Debian
> Squeeze. Moreover, I think CentOS does not include Bacula in its
> repositories (at least in the official repositories, as far as I could
> see). So maybe in this case compiling is the only alternative.
>
> Have you found any problem using some version of File Daemon for Windows
> (especially in versions of Windows such as those mentioned above)?
>
> Thanks again for your replies.
>
> Best regards,
> Daniel
>

I too am running a 7.0.5 director and storage daemon with mostly 5.2.6 
clients.

Compiling the 7.x binaries is simple enough on a Debian/Ubuntu box to 
make it well worth it.  I haven't seen any particular reason to worry 
about the file daemons (clients), though; they appear to work fine with 
the 5.x binaries as shipped.  On the director/storage side, however, 
more than a few bugs have been squashed between 5.2 and 7.0.5!
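
For reference, a rough sketch of the source build I mean on Debian/Ubuntu 
(the configure options are the common ones, not a tested recipe - adjust 
for your catalog backend; apt-get build-dep needs deb-src entries):

  sudo apt-get build-dep bacula
  ./configure --with-mysql --enable-smartalloc --sysconfdir=/etc/bacula
  make
  sudo make install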

Bryn





Re: [Bacula-users] Bacula and NFS

2015-12-16 Thread Bryn Hughes
The 'ERR=Connection refused' message means that the storage daemon 
itself isn't accepting connections. This could be due to firewall rules 
or due to mismatched passwords between the storage daemon and the other 
components.  I don't believe it is being caused by or is related to your 
NFS mount in the slightest.
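
A couple of quick checks from the Director host usually narrow this down 
(hostname and port taken from your message; the commands are just a sketch):

  netstat -ltn | grep 9103            # is bacula-sd actually listening?
  telnet bacula.itinker.net 9103      # does anything answer, or does a firewall refuse it?

If the telnet is refused, look at firewall rules and at SDAddress/SDPort 
in bacula-sd.conf before suspecting the NFS mount.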


Bryn


On 2015-12-04 06:19 AM, Richard Robbins wrote:
I am new to Bacula and would like to run the Bacula director on a 
CentOS 7 virtual machine with the FQDN of bacula.itinker.net, using a 
NAS device as my storage repository.  For now, my NAS is a somewhat 
dated Netgear ReadyNAS device that I'm going to replace with a new 
Synology box in the not-too-distant future.


I've got version 7.0.5 of the Bacula components running on the CentOS 
machine and can back up and restore to a local directory without 
difficulty.  I'm struggling to get the NAS into the mix.


In my all local configuration I backup to /bacula/backup and restore 
to /bacula/restore.


I had hoped that I could tweak the system so that I mount an NFS v3 
share at /bacula/backup.


The OS mounts the NFS share at that point and I'm able to read and 
write files without difficulty, but when I fire up Bacula the program 
hangs with the accompanying warning message: "Warning: bsock.c:112 Could 
not connect to Storage daemon on bacula.itinker.net:9103. ERR=Connection 
refused."


Since I'm able to read and write the NFS share outside of Bacula, I'm 
stumped as to what's getting in the way when Bacula runs.


In a perfect world I suppose I'd run the director and SD on the NAS 
itself, but I'm not up to attempting to build the current Bacula 
system on my older NAS.  Maybe I should try to compile the storage 
daemon (but not the director) on the NAS and then point the director on 
my VM to the daemon running on the NAS.  But that too is more work than 
just mounting the NFS share as I am doing at the moment.


Another approach would be to create an iSCSI target and pass that to 
my VM as a virtual disk, which would just be embedded in the virtual 
hardware prior to system boot time, but I'd like to avoid that if 
possible.


Your thoughts and guidance will be greatly appreciated.

-- Rich




Re: [Bacula-users] Bacula backup speed

2015-12-15 Thread Bryn Hughes
On 2015-12-15 04:50 PM, Lewis, Dave wrote:
>> -Original Message-
>> From: Uwe Schuerkamp [mailto:uwe.schuerk...@nionex.net]
>> Sent: Tuesday, December 15, 2015 4:47 AM
>> To: Lewis, Dave
>> Cc: bacula-users@lists.sourceforge.net
>> Subject: Re: [Bacula-users] Bacula backup speed
>>
> ...
>> Hi Dave,
>>
>> how large is your catalog database? How many entries do you have in
>> your File table, for instance? Attribute despooling should be much
>> faster than what you're seeing even on SATA disks.
> Hi Uwe,
>
> I don't know much about databases, but I'm learning.
>
> We have 659,172 entries in the File table.
>
>
>> I guess your mysql setup could do with some optimization w/r to buffer
>> pool size (I hope you're using InnoDB as the backend db engine) and
>> lock / write strategies.
> The command
>  SHOW TABLE STATUS;
> shows that we're using InnoDB.
>
>
>> As your DB runs on the director machine, I'd assign at least 50% of the
>> available RAM if your catalog has a similar size.
>>
>> A quick google search came up with the following query to determine
>> your catalog db size:
>>
>> SELECT table_schema "DB Name",
>> Round(Sum(data_length + index_length) / 1024 / 1024, 1) "DB Size in MB"
>> FROM information_schema.tables GROUP BY table_schema;
>>
>> All the best, Uwe
> The above command gave
> +--------------------+---------------+
> | DB Name            | DB Size in MB |
> +--------------------+---------------+
> | bacula             | 216.6         |
> | information_schema | 0.0           |
> +--------------------+---------------+
>
> To assign 50% of RAM (we have 16 GB total) I suppose I should add the line
>  innodb_buffer_pool_size = 8G
> in /etc/mysql/my.cnf, then I assume restart MySQL. But maybe we don't need it 
> that big at this time, since the database is much smaller.
>
> Our my.cnf doesn't currently have a line for innodb_buffer_pool_size; I don't 
> know what it uses by default.
>
> Thanks,
> Dave
>
MySQL is clever enough that it won't allocate memory until it actually 
needs it.  Since your DB is only 216MB it shouldn't end up taking the 
full 8GB no matter what happens.  However, during attribute despooling 
the buffer pool isn't really coming into play that much; you aren't 
doing any SELECT queries during an insert, so cached data isn't going to 
help.  On restore, however, the buffer pool will be handy.

One other thing you may wish to look at is adding this line to your my.cnf:

innodb_flush_log_at_trx_commit=2

By default MySQL tries to write each INSERT to disk one at a time, 
which isn't really necessary with something like Bacula.  It makes more 
sense in the case of financial transactions or something like that.  The 
above setting will have MySQL flush out writes in batches once per 
second rather than trying to hit the disk (and wait) for each one.  This 
can make a significant improvement in attribute despooling.  A single 7200 
RPM disk can't handle more than about 90 transactions per second, so the 
fewer individual disk transactions that are occurring, the better.
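
Put together, the relevant my.cnf section might look like this (values 
are illustrative; size the buffer pool to your catalog and available RAM):

  [mysqld]
  innodb_buffer_pool_size = 2G        # only allocated as it's actually needed
  innodb_flush_log_at_trx_commit = 2  # flush the log once per second, not per INSERT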

Bryn



Re: [Bacula-users] Bacula backup speed

2015-12-14 Thread Bryn Hughes
It sounds pretty clear that you have some performance issues with your database.

What are you running (MySQL/PostgreSQL/SQLite/etc.) for your DB? Is it 
running on the same server as your director?  What kind of disk storage 
do you have under your database?

Bryn

On 2015-12-14 01:12 PM, Lewis, Dave wrote:
> Hi,
>
> Thanks. I ran it again with attribute spooling. That sped up the backup of 
> data to the disk pool - instead of 6 hours it took less than 2 - but writing 
> the file metadata afterwards took nearly 6 hours.
>
> 12-Dec 18:24 jubjub-sd JobId 583: Job write elapsed time = 01:51:55, Transfer 
> rate = 703.0 K Bytes/second
> 12-Dec 18:24 jubjub-sd JobId 583: Sending spooled attrs to the Director.  
> Despooling 120,266,153 bytes ...
> 13-Dec 00:11 jubjub-dir JobId 583: Bacula jubjub-dir 5.2.6 (21Feb12):
>Elapsed time:   7 hours 39 mins 13 secs
>FD Files Written:   391,552
>SD Files Written:   391,552
>FD Bytes Written:   4,486,007,552 (4.486 GB)
>SD Bytes Written:   4,720,742,979 (4.720 GB)
>Rate:   162.8 KB/s
>Software Compression:   None
>Encryption: yes
>Accurate:   no
>
> So the transfer rate increased from about 200 KB/s to about 700 KB/s, but the 
> total elapsed time increased.
>
> Thanks,
> Dave
>
>
> -Original Message-
> From: Christian Manal [mailto:moen...@informatik.uni-bremen.de]
> Sent: Thursday, December 10, 2015 6:14 AM
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Bacula backup speed
>
> On 10.12.2015 01:06, Lewis, Dave wrote:
>> Does anyone know what's causing the OS backups to be so slow and what
>> I can do to speed them up?
> Hi,
>
> the problem might be number of files, as in, writing all the file metadata to 
> the catalog could very well be your bottle neck.
>
> Try enabling attribute spooling, so all the metadata is collected and 
> commited to the DB in one go instead of file by file.
>
>
> Regards,
> Christian Manal
>


Re: [Bacula-users] stat problems

2015-11-20 Thread Bryn Hughes
Is it just me or are all your slashes backwards for Windows?  Shouldn't 
it be C:\ not C:/ ?

Bryn

On 2015-11-19 10:25 AM, Craig Shiroma wrote:
> Hello,
>
> I came across a problem with a Windows 2012R2 server that generated 
> the error messages below in the log.  The full backup captured only a few 
> megabytes when it should have backed up 10G, obviously because of the 
> problems below.  However, I ran another full right after noticing the 
> problem, without doing anything to the server, and it completed with no 
> problem.  Has anyone experienced this?  Do you know the cause?
>
> My concern is that the backup ended with a "Backup OK with warnings" 
> (status = T), so I never got an email indicating a problem.
>
> Regards,
> -craig
>
> 17-Nov 14:44  JobId 208:  Could not stat 
> "C:/Documents and Settings": ERR=The system cannot find the file 
> specified.
> 17-Nov 14:44  JobId 208:  Could not stat 
> "C:/inetpub": ERR=The system cannot find the file specified.
> 17-Nov 14:44  JobId 208:  Could not stat 
> "C:/PerfLogs": ERR=The system cannot find the file specified.
> 17-Nov 14:44  JobId 208:  Could not stat "C:/Perl64": 
> ERR=The system cannot find the file specified.
> 17-Nov 14:44  JobId 208:  Could not stat "C:/Program 
> Files": ERR=The system cannot find the file specified.
> 17-Nov 14:44  JobId 208:  Could not stat "C:/Program 
> Files (x86)": ERR=The system cannot find the file specified.
> 17-Nov 14:44  JobId 208:  Could not stat 
> "C:/ProgramData": ERR=The system cannot find the file specified.
> 17-Nov 14:44  JobId 208:  Could not stat 
> "C:/Python32": ERR=The system cannot find the file specified.
> 17-Nov 14:44  JobId 208:  Could not stat "C:/Users": 
> ERR=The system cannot find the file specified.
> 17-Nov 14:44  JobId 208:  Could not stat 
> "C:/Windows": ERR=The system cannot find the file specified.
> 17-Nov 14:44  JobId 208: Error: findlib/attribs.c:923 
> Error in GetFileAttributesExW: file 
> \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy5\: ERR=The system 
> cannot find the file specified.
> 17-Nov 14:44  JobId 208:  Cannot open "C:/": ERR=The 
> system cannot find the file specified.
> .
> 17-Nov 14:44  JobId 208:  Could not stat "E:/": 
> ERR=The system cannot find the path specified.
> 17-Nov 14:44  JobId 208:  Could not stat "F:/": 
> ERR=The system cannot find the path specified.
> 17-Nov 14:44  JobId 208:  Could not stat "G:/": 
> ERR=The system cannot find the path specified.
> 17-Nov 14:44  JobId 208:  Could not stat "H:/": 
> ERR=The system cannot find the path specified.
>
>




Re: [Bacula-users] lto4 hardware compression doesn't work when encryprion on?

2015-11-09 Thread Bryn Hughes
On 2015-11-09 09:35 AM, John Drescher wrote:
> On Mon, Nov 9, 2015 at 12:26 PM, jstacey wrote:
>> I recently enabled encryption in my bacula-fd.conf with these entries:
>>
>>PKI Signatures = Yes
>>PKI Encryption = Yes
>>PKI Keypair = "/etc/bacula/client.pem"
>>PKI Master Key = "/etc/bacula/master.cert"
>>
>> The encryption works, but now my LTO4 tapes can only store around 812GB 
>> instead of the usual 1.2-1.4 TB.  Is this normal?  I might enable gzip 
>> software compression, but this is going to take a chunk out of my CPU. I read 
>> that it is also possible to enable hardware encryption on the drive. Maybe 
>> this method would allow me to keep using hardware compression? Thanks
>>
>
> Perhaps the software-encrypted data is not very compressible. Remember,
> some encryption methods make your data look more random, and random
> data does not compress at all.
>
> John
>
Encrypted data is generally not compressible.  You need the 
compression to happen before the encryption; this can be done either by 
using software compression along with software encryption in Bacula or 
by getting the LTO4 drive's hardware encryption working for you.
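
If you go the software route, compression is set in the FileSet options; 
a minimal sketch (the name and path are illustrative), relying on the FD 
compressing file data before the PKI encryption step:

  FileSet {
    Name = "CompressedSet"
    Include {
      Options {
        signature = MD5
        compression = GZIP6   # compress in the FD, before encryption
      }
      File = /data
    }
  }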

Bryn



Re: [Bacula-users] Copy job with PoolUncopiedJobs

2015-11-09 Thread Bryn Hughes
On 2015-10-30 02:38 PM, Jerry Lowry wrote:
> Hi,
>
> Centos 5.11 64bit OS
> Bacula 5.2.6 on all directors and clients
>
> I have run across a problem with one of my copy jobs.  The job is 
> setup with the PoolUncopiedJobs parameter.  The jobs are failing with 
> the following:
> 30-Oct 11:05 distress JobId 28325: Error: block.c:291 Volume data error at 
> 3:87310201! Wanted ID: "BB02", got "Í". Buffer discarded.
> (not all of them get the same "got")
> I can understand the error because I had problems with the backup 
> disks during the date it is trying to copy.
>
> Question is,  How can I get around these bad backups to the date where 
> the disks were functioning properly and get them copied to my offsite 
> disk?
>
> thanks for the pointers.
>
> jerry
>
I was running into issues like this somewhat regularly under v5.2.x.  
In my case upgrading to v7.0.x solved the problems completely.

Bryn



Re: [Bacula-users] wiping test backups and starting again.

2015-11-05 Thread Bryn Hughes
It's worth considering a filesystem that supports error 
detection/correction, such as ZFS... though to really make use of that 
you need a machine with ECC RAM. I'm not sure what you have for 
a layout, but bitflips in backup data can make for a bad time.


Generally speaking, I doubt you'll see a massive difference in 
performance from one FS to the other when dealing with Bacula. Where you 
really notice a difference is in things like "deleting a directory with 
10,000 files in it", which is one area where EXT* performs very poorly.  
That however isn't a use case you'd encounter with Bacula (or 
really any other backup system, apart from maybe some sort of rsync-based 
setup).  Raw sequential read/write performance won't be massively 
different from one FS to another; at most you're probably talking a few 
percentage points, unless you've got something like a badly aligned RAID 
array underneath it all.  Either way it's usually network bandwidth 
that's the bottleneck rather than disk, unless you're trying to do a lot 
of simultaneous I/O.
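
If you do try ZFS, a minimal sketch of a pool for Bacula file volumes 
(pool and device names are made up):

  zpool create backup raidz1 sdb sdc sdd
  zfs set compression=lz4 backup
  zpool scrub backup    # run periodically; this is what catches silent corruption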


Bryn

On 2015-11-03 12:23 PM, Thing wrote:
Hmm is there any difference in performance between file system types, 
ext4 and XFS?


hence why I pondered a re-format.



On 2 November 2015 at 14:36, Randy Katz wrote:


Yes, however, formatting the disk is a bit extreme, you can just
go to the designated
directory and remove all the files, if it takes a while you can
background the task:

cd /baculabackupdirectory

nohup rm -f * &

or if you have subdirectories

nohup rm -rf * &



On 11/1/2015 11:01 AM, Thing wrote:

Hi,

Reading the FAQ,

cd /src/cats
./drop_mysql_tables
./make_mysql_tables

Then just format the disk to wipe the volumes?

Anything else needed to do?







Re: [Bacula-users] Deadlock error

2015-08-06 Thread Bryn Hughes
I think what Kern is getting at is that your database is what threw the 
error, not Bacula.  Whatever DB you are using is what is having the issue.
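
To see exactly what the server rolled back, this standard query against 
the catalog's MySQL server shows the most recent deadlock (it's where the 
INNODB Monitor output you pasted comes from):

  SHOW ENGINE INNODB STATUS\G

The "LATEST DETECTED DEADLOCK" section of that output is usually more 
informative than the fragment Bacula logs.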


Bryn

On 2015-08-06 09:11 AM, Craig Shiroma wrote:

Hi Kern,

Thank you very much for the reply!  Would you have any suggestions on 
what may be causing this problem or how I can debug it?  Obviously, 
I'm encountering deadlocks when accurate backup runs on some of our 
hosts and we want to use accurate backup on all of our hosts if possible.


Warmest regards,
-craig

On Thu, Aug 6, 2015 at 12:11 AM, Kern Sibbald k...@sibbald.com wrote:


On 06.08.2015 10:15, Craig Shiroma wrote:

Hello again,

I just thought I'd update this post with more information in
hopes of getting some explanation for the deadlocks.

I ran with Accurate backup on our test VMs (RHEL) for a couple of
days and got the same errors on some VMs that were running
accurate and some that were not.  These hosts were running
concurrently.  I would say 90% of the hosts that were configured
to use Accurate finished successfully.  However, there were a few
that failed with the deadlock error -- some that were configured
to use accurate and some that were not configured to use
accurate.  Also, on all of these, a second job started for each
of the affected hosts right after Bacula detected the deadlock
even though it said a reschedule would happen 3600 seconds later
(the 3600 seconds is correct).

Tonight, I disabled accurate on all hosts and the deadlocks did
not happen.  No errors were detected and all the backups finished
successfully.

Some questions...
1.  Can I back up multiple hosts concurrently with some hosts
configured to use accurate and some configured not to use
accurate?  Or, is it an all or none thing, meaning all hosts that
run concurrently must either be using accurate backup or not
using accurate backup (cannot mix the two)?

2. It seems like the hosts that get out of the starting gate
first are the ones affected.  I am configured to run 50 jobs
concurrently.  Again, no problems with accurate turned off on all
hosts for months now.

3. Why is Bacula spinning off a new job right away after it
detects the deadlock for each affected job instead of waiting
until the rescheduled job runs?  I verified that there were no
duplicate jobs in the queue before the backups started running,
no jobs were running before the start of the backups, and I did
not start any of these backups manually to cause a second job to
appear.


Bacula is not aware of any SQL internal deadlocks.



From the INNODB Monitor output:

TRANSACTION:
TRANSACTION 208788977, ACTIVE 1 sec setting auto-inc lock
mysql tables in use 4, locked 4
9 lock struct(s), heap size 1184, 5 row lock(s)
MySQL thread id 50808, OS thread handle 0x7f8f2c3b4700, query id
29558637 host 192.168.10.99 bacula Sending data
INSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat,
MD5, DeltaSeq) SELECT batch.FileIndex, batch.JobId, Path.PathId,
Filename.FilenameId,batch.LStat, batch.MD5, batch.DeltaSeq FROM
batch JOIN Path ON (batch.Path = Path.Path) JOIN Filename ON
(batch.Name = Filename.Name)
WAITING FOR THIS LOCK TO BE GRANTED:
TABLE LOCK table `bacula`.`File` trx id 208788977 lock mode
AUTO-INC waiting
WE ROLL BACK TRANSACTION (2)

I am running Bacula 7.0.5 on RHEL 6.6 x64 with Director, Storage
and Catalog running on separate RHEL 6.6 hosts.  Our clients are
RHEL 6's, 5's and Windows Servers 2008 and 2012R2.

Any help would be much appreciated.

Warmest regards,
-craig

On Tue, Aug 4, 2015 at 1:56 PM, Craig Shiroma
shiroma.crai...@gmail.com wrote:

BTW, I suppose there could have been two jobs for the host(s)
in the scheduling queue.  If this was the case, is there a way to
find out after the fact?  If it did actually happen, what
could cause duplicate jobs to be scheduled on the same day at
the same time?  I know no one manually ran the jobs in
question.  Again, this was only a problem for a few of the
jobs that ran last night, not all of them, and some were set to do
accurate backup and some not.

Regards,
-craig

On Tue, Aug 4, 2015 at 9:27 AM, Craig Shiroma
shiroma.crai...@gmail.com wrote:

Hello,

I had a few backups fail last night with the following error:

2015-08-03 18:02:46bacula-dir JobId 123984: b INTO File
(FileIndex, JobId, PathId, FilenameId, LStat, MD5,
DeltaSeq) SELECT batch.FileIndex, batch.JobId,
Path.PathId, Filename.FilenameId,batch.LStat, batch.MD5,
batch.DeltaSeq FROM batch JOIN Path ON (batch.Path =

Re: [Bacula-users] Multiple full backups in same month

2015-06-25 Thread Bryn Hughes

On 2015-06-25 10:17 AM, Silver Salonen wrote:
On Thu, Jun 25, 2015 at 7:56 PM, Heitor Faria hei...@bacula.com.br wrote:


Citando Thomas Lohman thom...@mtl.mit.edu:

The question now is: bacula decides if it will
upgrade jobs when it
queues the jobs or when it starts the jobs?
According to the logs
above I think it is when it starts.

To my mind it's upgraded when it's queued... I hope
I'm wrong :)

Hi, it is done when the job is queued to run.  So, if you
see it listed
under Running jobs in bconsole then it's already been
decided.  Queued
to run isn't necessarily the same as when the job actually
starts due to
other factors/settings.


Ok, so the option Allow Duplicate Job=no can at least
prevent multiple full backups of the same server in a row as
stated before?

*Allow Duplicate JobS (note the plural).
I think you must use Cancel Running Duplicates = yes in order to
cancel any duplicate jobs that get submitted.


Please note that (according to my knowledge and experience some years 
back) even with this option the duplicate job will first be upgraded 
to Full, then marked as failed and upgraded again the next time a job 
is scheduled (which is buggy behavior to my mind and I reported it in 
http://bugs.bacula.org/view.php?id=1507).




This is the particular stanza I use in my JobDefs to deal with this 
situation:


  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = yes
  Cancel Queued Duplicates = yes
  Reschedule On Error = yes
  Reschedule Interval = 1 hour
  Reschedule Times = 4

Bacula behaves the way I would want it to with these settings.  In the 
scenario described by the original poster, the initial Full waiting in 
the queue would have caused all subsequent incremental jobs to be 
canceled, preventing them from being upgraded to Fulls as well.  Once 
the Full had completed successfully, the next scheduled Incremental that 
came along after it would be treated as a normal incremental.


The 'Reschedule On Error' is smart enough not to create new jobs, at 
least in my case.  I'm on 7.0.5 now but I was running 5.2.x previously 
with this config.
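
For context, here's roughly how those directives sit in a full JobDefs 
resource (a sketch; the name and surrounding directives are illustrative, 
not my actual config):

  JobDefs {
    Name = "DefaultJob"
    Type = Backup
    Level = Incremental
    Messages = Standard
    Allow Duplicate Jobs = no
    Cancel Lower Level Duplicates = yes
    Cancel Queued Duplicates = yes
    Reschedule On Error = yes
    Reschedule Interval = 1 hour
    Reschedule Times = 4
  }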


Bryn


Re: [Bacula-users] Slow Attribute spooling

2015-06-08 Thread Bryn Hughes
On 2015-06-08 03:25 AM, Denis Witt wrote:

 On Mon, 08 Jun 2015 11:10:11 +0100
 Alan Brown a...@mssl.ucl.ac.uk wrote:

 What are you using for spool disk?
   
 If it's not fast SSD then you can't run more than a couple of
 simultaneous backups (you are seek limited with mechanical drives)
 Hi Alan,

 I'm using normal (7,200 RPM) HDDs. But, as pointed out in
 my mail, there is only one job running (despooling attributes) at the
 moment. All other jobs are still waiting for execution (waiting on
 max storage jobs).

 On the SD there are about no IO-Operations. iotop shows some 100KB
 once in a while. CPU is 100% idle, same on the Director.

 The MySQL-Machine writes more or less constant 450KB/sec. to disk
 (mysql process). top shows about 20% wait and 76% idle.

 So it shouldn't slow down things that much.

 Doesn't it?

 Thanks.


Can you share your MySQL configuration?

MySQL has a few different disk write modes, some of which are more 
aggressive about laying down small chunks of data one by one, while 
others will group things together more, making INSERTs in particular 
quite a bit faster.
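
To see what your server is actually using, these standard queries are 
worth running (nothing Bacula-specific here):

  SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
  SHOW VARIABLES LIKE 'innodb_buffer_pool_size';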

I wouldn't necessarily point the finger at memory; your IOWAIT time 
indicates you're waiting on disk.  I'm not sure how many CPUs you have 
in your MySQL box or anything like that, but it sounds like you've 
probably got at least one core almost totally I/O bound.  What's the 
disk under your MySQL database?

Bryn




Re: [Bacula-users] Use multiple HDDs to backup files to

2015-06-02 Thread Bryn Hughes
Do you have a specific reason to keep these two volumes separate?

You might be best off using LVM to create a single logical drive. That 
would certainly be a more flexible solution long term.
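
A minimal LVM sketch of what I mean (device names are made up - both 
partitions would need to be dedicated to backup storage):

  pvcreate /dev/sdb1 /dev/sdc1
  vgcreate vg_backup /dev/sdb1 /dev/sdc1
  lvcreate -l 100%FREE -n backup vg_backup
  mkfs.ext4 /dev/vg_backup/backup
  mount /dev/vg_backup/backup /bacula/backup

Then a single Device resource pointing at /bacula/backup covers both disks.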

Bryn

On 2015-05-20 09:32 AM, SPQR wrote:
 I would like to set up bacula - well, my first steps have been successful.

 All my backups are written to /bacula/backup. But / has only 900GB. So I want 
 bacula to write to /sdb1/bacula/backup, too. /sdb1 has another 950GB of 
 space. The sum of available backup-space should be 1850GB HDD:

 Well, I got the following config of /etc/bacula/bacula-sd.conf:

 Storage { # definition of myself
Name = backup-sd
SDPort = 9103  # Director's port
WorkingDirectory = /var/lib/bacula
Pid Directory = /var/run/bacula
Maximum Concurrent Jobs = 20
SDAddress = backup.example.com
 }

 Director {
Name = backup-dir
Password = doyoureallywanttoknow
 }
 Device {
Name = FileStorage
Media Type = File
Archive Device = /bacula/backup
LabelMedia = yes;   # lets Bacula label unlabeled media
Random Access = Yes;
AutomaticMount = yes;   # when device opened, read it
RemovableMedia = no;
AlwaysOpen = no;
 }

 Okay - now I would like to add another device. I guess I could just add a 
 second entry like this:

 Device {
Name = FileStorage
Media Type = File
Archive Device = /sdb1/bacula/backup
LabelMedia = yes;   # lets Bacula label unlabeled media
Random Access = Yes;
AutomaticMount = yes;   # when device opened, read it
RemovableMedia = no;
AlwaysOpen = no;
 }

 ?

 Do I have to change something else so that bacula is able to write more than 
 900GB - if the first 900GB are full it should switch to the second hdd.


 Thank you very much :-)



Re: [Bacula-users] bsmtp operation to retry sending e-mails

2015-05-28 Thread Bryn Hughes
The easiest way to resolve that is to run postfix (or the MTA of your 
choice) on the local machine and have it configured to forward mail to 
your mail gateway.  Then bacula (or anything else that needs to send 
mail) just sends it to the local MTA and it handles everything from 
there.  Retries, routing, failover, etc can be implemented in the MTA.

Generally I consider this to be a good practice: let individual 
applications have simple mail code that just sends to a local MTA, and 
leave all the complex stuff - TLS, authenticated sending, MX lookups, 
transport tables, queuing, etc. - to software that is purpose-written for that.
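
In Postfix that's a single line in /etc/postfix/main.cf (the gateway 
hostname is illustrative):

  relayhost = [mailgw.example.com]

With that in place you point bsmtp's -h flag at localhost and Postfix 
takes care of queuing and retries.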

Bryn

On 2015-05-28 02:01 AM, Kern Sibbald wrote:
 Hello,

 No, it does not retry.

 Best regards,
 Kern


 On 28.05.2015 09:11, Silver Salonen wrote:
 Hi.

 Does anyone know what does bsmtp do when the SMTP server is eg. out of
 disk space and sending of e-mail fails? Does it retry after some time?

 --
 Silver



Re: [Bacula-users] Is Bacula and a LTO drive right for me?

2015-05-20 Thread Bryn Hughes
On 2015-05-19 10:43 AM, Dimitri Maziuk wrote:
 On 05/19/2015 11:42 AM, Bryn Hughes wrote:
 ...
 recent LTO versions (5/6) you really need to make sure your disk setup
 on your backup storage server is capable of keeping up with the tape
 drive - most consumer hard drives top out at about 120MB/sec for
 sequential reads and much less than that for random I/O (such as having
 backup jobs writing to the disk at the same time as the tape drive is
 reading from it)
 Wow these seagate nas drives must be magic then: it's 120MB/s sequential
 read yet I get consistent 340MB/s sustained writes

 11-May 23:00 starfish-sd JobId 13: Despooling elapsed time = 00:02:24, 
 Transfer rate = 345.8 M Bytes/second
 ...
 12-May 12:30 starfish-sd JobId 13: Despooling elapsed time = 00:02:25, 
 Transfer rate = 343.4 M Bytes/second
 ...
 14-May 03:33 starfish-sd JobId 34: Despooling elapsed time = 00:02:27, 
 Transfer rate = 338.8 M Bytes/second
 ...
 18-May 19:49 starfish-sd JobId 57: Despooling elapsed time = 00:02:26, 
 Transfer rate = 341.1 M Bytes/second
 and so on. On software raid-5-ish (raidz1) with lz4 compression at
 filesystem level.

You realize that using RAIDZ with LZ4 compression is going to give far 
better sequential performance than a single standalone drive would with 
uncompressed data, right?  ZFS RAIDZ volumes are perfect for sequential 
read/writes, though they are still IOPS limited.

There's no magic.  You have an appropriate configuration for what you're 
doing, which is exactly what I was trying to convey to the individual looking 
to build a brand new backup system.  Try running straight EXT4 on a 
single drive with jobs spooling to the drive while it tries to feed an 
LTO5 drive, and let's see if you can sustain anywhere close to 
100MB/sec.  Hint: you can't.

My Bacula SD box has 5 old WD 'green' 1.5TB drives (5400 RPM) in a 
RAIDZ1 setup.  Works great.


 How about you iostat that LTO 5/6 drive while a restore job is reading
 from it and a backup job is writing at the same time and then post speed
 comparisons?


You can't run both a restore and a backup to a tape device at the same 
time. Tape doesn't work like that.

Bryn




Re: [Bacula-users] Is Bacula and a LTO drive right for me?

2015-05-20 Thread Bryn Hughes
On 2015-05-20 12:16 PM, Dimitri Maziuk wrote:
 On 05/20/2015 01:09 PM, Bryn Hughes wrote:

 What do you call it when multiple backup jobs are writing to disk
 storage while at the same time a copy job is reading from that same disk
 subsystem and writing to tape?
 I call the comparison FUD because compare that to multiple backup jobs
 writing to tape at the same time while a copy job is reading from the
 same tape.

No, I have never, ever, ever said anything about that.

 ... You will be hard pressed to sustain more than 30-40
 MB/sec with even just 2-3 streams running on a single consumer hard
 drive.  More streams (concurrent backup jobs) will reduce throughput
 further.  This isn't FUD, this is REALITY.
 My reality is 340MB/s with 10 clients spooling to consumer ssd and 1
 write job despooling to consumer spinning rust. You may be hard pressed
 to do 1/10th of that in your reality but nobody else has to live there.


What on earth do you not understand about the physical limits of disk 
drives, and how things change when you change the technology or layout 
of the system?  10 sessions spooling to an SSD is completely different 
from writing to a single drive.  You seem to be saying that you're 
despooling to a RAIDZ1 array with compression enabled; that again has 
nothing to do with single-disk performance.

Single disk hard drive performance is all that I've been talking about.  
And single disk hard drives can't be read fast enough to feed an LTO 
drive.  If one is designing a backup system that expects multiple 
sessions to be spooling to a disk device that is also going to be 
despooling to a tape device one needs to use something other than a 
single drive - be it a RAID array, an SSD, etc.

WD Red drives have max throughput ranging from 144MB/sec to 175MB/sec 
depending on capacity:

http://wdc.custhelp.com/app/answers/detail/search/1/a_id/9491#

Seagate NAS drives have a max throughput ranging from 149MB/sec to 
180MB/sec depending on capacity:

http://www.seagate.com/www-content/product-content/nas-fam/nas-hdd/en-us/docs/nas-hdd-ds1789-3-1409gb.pdf

Those are the MAXIMUM throughputs the drives can sustain when doing a 
purely sequential workload.  Actual performance will be less. Start 
moving the heads at all while doing I/O and the rates will plummet 
quickly.  One reader and one writer accessing the drive at the same time 
will be lucky to get a combined total of even 50% of that.

And with that I'm fully sick of this conversation.

Bryn



Re: [Bacula-users] Is Bacula and a LTO drive right for me?

2015-05-20 Thread Bryn Hughes
On 2015-05-20 10:10 AM, Dimitri Maziuk wrote:
 Then why TF are you quoting lto sustained write to hdd random i/o? Disk
 subsystem can't keep up with modern tape in the same way apples can't
 keep up with oranges. Compare optimized sustained writes to optimized
 sustained writes or random seek time to random seek time or stop
 spreading fud.

I really don't know what point you think you are making.

What do you call it when multiple backup jobs are writing to disk 
storage while at the same time a copy job is reading from that same disk 
subsystem and writing to tape?  True, we're talking about multiple 
'sequential' streams at the same time rather than pure random I/O, but 
for all intents and purposes that behaves as random I/O, since you've now 
got seeks involved. You will be hard pressed to sustain more than 30-40 
MB/sec with even just 2-3 streams running on a single consumer hard 
drive.  More streams (concurrent backup jobs) will reduce throughput 
further.  This isn't FUD, this is REALITY.

If you want to keep a modern tape drive writing at its full speed you 
need a disk subsystem that is fast enough to supply that tape drive.  If 
you want to run multiple backup jobs to your spool device while writing 
to tape at the same time you need a decently fast disk subsystem that 
can keep up.  When designing a backup system you need to keep these 
things in mind.

Bryn





Re: [Bacula-users] Is Bacula and a LTO drive right for me?

2015-05-19 Thread Bryn Hughes
On 2015-05-17 02:48 AM, Florian Rist wrote:

 So you will need more than one tape for a full backup and this will
 need
 manual intervention for changing tapes.
 Yes, I know; as long as this is supported (and as far as I can see it
 is) by Bacula, that's fine. I remember it was difficult using Amanda in
 1996 or so.

Autochangers are well supported by Bacula.  Manual tape changes with 
notification emails can be set up as well - we ran that for quite a 
while before I got my changer.

 If this is not a problem for you Bacula and an LTO drive is a
 feasible solution. Otherwise you should probably think about a tape
 library for this work.
 Yes, a library would be good, but I can't afford one now. Maybe I could
 get an old used library and replace the drive with a new one...

I've had good luck with used libraries overall.  Cleanliness is super 
important - if it's filthy on the outside and has lots of dust bunnies 
it's not a good candidate, but I've got a 10+ year old IBM LTO library 
here that's working great, and a 5+ year old Neo 2000 that's likewise 
working great.  While I could certainly do with newer (higher capacity) 
drives all my tape drives are working well too.

 Yes, you can have full/differential/incremental backups on a
 hard disk and then write once a week to tape. This is commonly
 known as disk-to-disk-to-tape (D2D2T) backup.
 OK, in this case how much hard drive storage would I need? Enough to
 hold the full backup, or just a few TB for the weekly incremental
 backups?

It depends what you want to do - you can use spooling to keep the tape 
drive fed at its max rate, though no new data will be flowing from the 
client to the storage daemon while spooled data is being written out - 
your backup speeds will be reduced but your tape drive will work a lot 
better.  If you have enough space on disk to write out the entire thing 
and then copy to tape later, you won't have that problem.
 What do you mean about slow network? Could you be more specific?
 100Mb/s and not the best topology; a few PCs via VPN through a 100 Mbit/s
 LAN

 Are your data go across wan links?​ There are lots of known cases
 with Bacula working across wan links with no problems.
 No WAN, I just wondered if I need to ensure a minimum data throughput
 for the LTO. I remember some problems in this respect with the DDS drive.

Yes, there is definitely a minimum throughput for LTO drives.  If you 
aren't writing from local storage then you almost certainly won't be 
able to meet it for anything at all modern.  LTO-3 was the last 
generation that could be fed by a gigabit ethernet link (full speed is 
80MB/sec), newer technologies have much higher throughput rates.  As 
mentioned above, the way to solve this is to write out to local disk 
storage first and then copy to tape.  If you're using any of the more 
recent LTO versions (5/6) you really need to make sure your disk setup 
on your backup storage server is capable of keeping up with the tape 
drive - most consumer hard drives top out at about 120MB/sec for 
sequential reads and much less than that for random I/O (such as having 
backup jobs writing to the disk at the same time as the tape drive is 
reading from it)

The most flexible approach would be to have enough local storage for at 
least one full backup from everything plus a healthy amount of 
differentials and incrementals; that way most restores don't touch the 
tapes unless something Really Bad Happens.  If that isn't practical then 
Bacula can be configured to spool to tape (I do this with my 
incrementals, as they have a hard time pushing data fast enough to 
maintain my drive's write speed due to files being scattered around the 
source disks).

When using spooling you give Bacula some space; it writes until it hits 
whatever your max spool size is, then it stops spooling and writes out 
that data to tape; when done, it erases what it had on disk previously 
and continues spooling data from where it left off. As mentioned above, 
there is an impact on your overall backup times with this, as no new data 
will be flowing in during the periods when Bacula is writing to tape; 
however, it'll keep your tape throughput in the 'happy' zone.
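
The relevant knobs live in the SD Device resource, plus one directive in 
the Job; a sketch with illustrative names, devices and sizes:

  Device {
    Name = LTO5Drive
    Media Type = LTO-5
    Archive Device = /dev/nst0
    AutomaticMount = yes
    Spool Directory = /var/spool/bacula
    Maximum Spool Size = 200G
  }

and in the Job (or JobDefs): Spool Data = yes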

Bryn



Re: [Bacula-users] Same job started twice

2015-05-13 Thread Bryn Hughes

Hey Luc,

Generally I wouldn't use 'Allow Duplicate Jobs' on the 'Copy' jobs - 
they already have a built-in duplicate handling mechanism.  It's very 
useful for actual backups, but I would not turn it on for Copy jobs.


Bryn

On 2015-05-04 12:10 AM, Luc Van der Veken wrote:


Hi all,

I seem to be suffering from a side effect of “Allow Duplicate Jobs = no” 
and the “Cancel [Queued | Lower Level] Duplicates” settings.

I make full / differential backups to disk on the weekend, and copy 
those to tape for off-site storage on Monday.

There’s only one copy job definition, with its corresponding schedule. 
When it was executed, it used to create a separate new job for each 
job it had to copy. These were queued up and executed one after the 
other because Maximum Concurrent Jobs = 1 on the tape device.


This morning, all those copy jobs except for the first one failed, 
saying they were duplicates.


Schedule {
  Name = TransferToTapeSchedule
  Run = Full mon at 07:00
}

Job {
  Name = Transfer to Tape
  ## used to migrate instead of copy until 2014-10-15
  #Type = Migrate
  Type = Copy
  Pool = File
  #Selection Type = PoolTime
  Selection Type = PoolUncopiedJobs
  Messages = Standard
  Client = bacula-main   # required and checked for validity, but ignored at runtime
  Level = full           # idem
  FileSet = BaculaSet    # ditto
  # DO NOT run at lower priority than backup jobs,
  # has adverse effect of holding them up until this job is finished.
  Priority = 10
  ## only for migration jobs
  #Purge Migration Job = yes   # purge migrated jobs after successful migration
  Schedule = TransferToTapeSchedule
  Maximum Concurrent Jobs = 5
  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = yes
  Cancel Queued Duplicates = yes
}

Pool {
  Name = File
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 3 months
  Maximum Volume Bytes = 50G
  Maximum Volumes = 100
  LabelFormat = FileStorage
  Action On Purge = Truncate
  Storage = File
  Next Pool = Tape
  # data used to be left on disk for 1 week and then moved to tape (Migration Time = 1 week).
  # changed, now copy to tape, can be done right away
  Migration Time = 1 second
}

That “Maximum Concurrent Jobs = 5” in the job definition was probably 
copied in by accident (it should be 1), but I don’t think that is 
causing the problem.


The result: a whole bunch of errors like the one below, and only one 
job copied.


04-May 07:00 bacula-dir JobId 28916: Fatal error: JobId 28913 already running. Duplicate job not allowed.

04-May 07:00 bacula-dir JobId 28916: Copying using JobId=28884 Job=A2sMonitor.2015-05-01_20.05.00_21

04-May 07:00 bacula-dir JobId 28916: Bootstrap records written to /var/lib/bacula/bacula-dir.restore.245.bsr

04-May 07:00 bacula-dir JobId 28916: Error: Bacula bacula-dir 5.2.5 (26Jan12):

  Build OS:               x86_64-pc-linux-gnu ubuntu 12.04
  Prev Backup JobId:      28884
  Prev Backup Job:        A2sMonitor.2015-05-01_20.05.00_21
  New Backup JobId:       28917
  Current JobId:          28916
  Current Job:            TransfertoTape.2015-05-04_07.00.01_50
  Backup Level:           Full
  Client:                 bacula-main
  FileSet:                BaculaSet 2013-09-27 20:05:00
  Read Pool:              File (From Job resource)
  Read Storage:           File (From Pool resource)
  Write Pool:             Tape (From Job Pool's NextPool resource)
  Write Storage:          Tape (From Storage from Pool's NextPool resource)
  Catalog:                MyCatalog (From Client resource)
  Start time:             04-May-2015 07:00:01
  End time:               04-May-2015 07:00:01
  Elapsed time:           0 secs
  Priority:               10
  SD Files Written:       0
  SD Bytes Written:       0 (0 B)
  Rate:                   0.0 KB/s
  Volume name(s):
  Volume Session Id:      0
  Volume Session Time:    0
  Last Volume Bytes:      0 (0 B)
  SD Errors:              0
  SD termination status:
  Termination:            *** Copying Error ***

From: Bryn Hughes [mailto:li...@nashira.ca]
Sent: 30 April 2015 15:07
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Same job started twice

These directives might also be useful to you:

  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = yes
  Cancel Queued Duplicates = yes

Bryn

On 2015-04-30 02:57 AM, Luc Van der Veken wrote:

So simple that I’m a bit embarrassed: a Maximum Concurrent Jobs
setting in the Job resource itself should prevent it.

I thought that setting was applicable to all kinds of resources
except for job resources themselves; I should have checked the
documentation sooner.

*From:* Luc Van der Veken [mailto:luc...@wimionline.com]
*Sent:* 30 April 2015 9:09
*To:* bacula-users@lists.sourceforge.net
*Subject:* [Bacula-users] Same job started twice

Hi all,

Is it possible that, in version 5.2.5 (Ubuntu),

1) An incremental job is started according to schedule, before a
previous full run of the same job has finished?

2) A nasty side effect when that happens is that the incremental
job is bounced to full because “Prior failed job found in catalog.
Upgrading to Full.”, while there have been no errors?

I seem to be in that situation now.

The client has ‘maximum

Re: [Bacula-users] Locking previous backups to prevent pruning

2015-05-11 Thread Bryn Hughes
I guess what I was looking for is a way to do this without having to
sort out which volumes are needed, though now that I think about how
expiry works, that makes sense. What I had in mind was an archive
flag at the job level, but I was forgetting that volume expiry is what
drives the bus, so to speak.


Bryn


On 2015-05-11 05:15 AM, Ana Emília M. Arruda wrote:

Hello Bryn,

Do you mean to prevent pruning of volumes? If that is the case, you
can update the volume status to Archive.
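
From bconsole that is a one-liner (a minimal sketch; the volume name is hypothetical):

  update volume=Full-0042 volstatus=Archive

A volume in Archive status is left alone by pruning and recycling, so the jobs on it stay restorable until you change it back.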


Best regards,
Ana

On Sat, May 9, 2015 at 4:39 PM, Bryn Hughes li...@nashira.ca wrote:


I have a server that recently suffered a double disk failure. Rather
than rebuilding we've reassigned its duties to other machines.  I'd
however like to prevent Bacula from expiring the last
full+incrementals
from it until I'm REALLY sure we've gotten everything we need.

Is there a way to do this through the console?

Bryn








[Bacula-users] Locking previous backups to prevent pruning

2015-05-09 Thread Bryn Hughes
I have a server that recently suffered a double disk failure. Rather 
than rebuilding we've reassigned its duties to other machines.  I'd 
however like to prevent Bacula from expiring the last full+incrementals 
from it until I'm REALLY sure we've gotten everything we need.

Is there a way to do this through the console?

Bryn



Re: [Bacula-users] Same job started twice

2015-04-30 Thread Bryn Hughes

These directives might also be useful to you:

  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = yes
  Cancel Queued Duplicates = yes
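
Together with a per-job concurrency cap, the relevant slice of the Job resource might look like this (a sketch only — the duplicate-control lines are the actual suggestion, the rest stands in for whatever the job already defines):

Job {
  Name = "NAS-Elvis"                    # the job from your log
  Maximum Concurrent Jobs = 1           # never run two instances of this job at once
  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = yes   # e.g. drop a queued Incremental while a Full runs
  Cancel Queued Duplicates = yes
  # ... plus the usual Client, FileSet, Schedule, Storage, Pool and Messages
}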

Bryn

On 2015-04-30 02:57 AM, Luc Van der Veken wrote:


So simple that I’m a bit embarrassed: a Maximum Concurrent Jobs 
setting in the Job resource itself should prevent it.


I thought that setting was applicable to all kinds of resources except
for job resources themselves; I should have checked the documentation
sooner.


*From:* Luc Van der Veken [mailto:luc...@wimionline.com]
*Sent:* 30 April 2015 9:09
*To:* bacula-users@lists.sourceforge.net
*Subject:* [Bacula-users] Same job started twice

Hi all,

Is it possible that, in version 5.2.5 (Ubuntu),

1) An incremental job is started according to schedule, before a
previous full run of the same job has finished?

2) A nasty side effect when that happens is that the incremental job is
bounced to full because “Prior failed job found in catalog. Upgrading
to Full.”, while there have been no errors?


I seem to be in that situation now.

The client has ‘maximum concurrent jobs’ set to 3, because the same
client is used for backing up different NFS-mounted shares as separate
jobs. Most of those are small, except for one, and that’s the one
that has the problem.


The normal schedule is either full or differential on Friday night, 
incremental on Monday through Thursday, and nothing on Saturday or Sunday.


Because many full jobs are scheduled for Friday and only a limited 
number run concurrently, it usually only really starts on Saturday 
morning.


The first overrun of Full into the next scheduled run was caused not 
by the job itself taking too long, but by a copy job that was copying 
that job from disk to tape, and that had to wait for a new blank tape 
for too long.


From there on I think it took longer than 24 hours to complete because 
it ran two schedules of the same job concurrently each time.


At least that’s what the director and catalog report.

From Webacula:

Information from DB Catalog: List of Running Jobs

Id     Job Name   Status   Level  Errors  Client  Start Time (yy-mm-dd)
28822  NAS-Elvis  Running  F      -       NAS     2015-04-29 11:29:48
28851  NAS-Elvis  Running  F      -       NAS     2015-04-29 20:06:00

Both are incremental jobs upgraded to Full because of a ‘previous 
error’ that never occurred.


I just canceled the later one to give the other time to finish before 
it’s rescheduled again tonight at 20:06:00.


Besides that, there must be something else I have to find. I don’t 
think it’s normal that a backup of 600 GB from an NFS share to disk on 
another NFS share takes more than 20 hours, as the last ‘normal’ run 
last Saturday did (the physical machine the job is on is the SD 
itself, backing up an NFS share to another NFS share).








Re: [Bacula-users] HP 1/8 G2 LTO6 Ultrium 6250 SAS autoloader

2015-04-29 Thread Bryn Hughes
On 2015-04-29 07:56 AM, Michael Ivanov wrote:
 Hallo,

 Does anybody use HP 1/8 G2 LTO6 Ultrium 6250 SAS autoloader device with 
 bacula?
 We've been offered to buy this model but regrettably I couldn't find any
 bacula compatibility statement about it.

 Best regards,

Bacula is compatible with anything that the Linux 'mt-st' utilities can 
talk to.  Bacula simply calls the system commands to control the 
autoloader, so provided they haven't implemented anything totally 
proprietary it should be fine.
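
A quick sanity check once the unit is attached is to drive it by hand and then let Bacula's own btape utility exercise it; a sketch, assuming the changer appears as /dev/sg3 and the drive as /dev/nst0:

  mtx -f /dev/sg3 status                           # slots, drives and barcodes should all show up
  mtx -f /dev/sg3 load 1 0                         # slot 1 into drive 0
  btape -c /etc/bacula/bacula-sd.conf /dev/nst0    # then run its interactive "test" command

If btape's test passes, the library will almost certainly behave under real jobs.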

Bryn



Re: [Bacula-users] bacula-users replyTo address

2015-04-08 Thread Bryn Hughes
On 2015-04-08 07:44 AM, Dimitri Maziuk wrote:
 On 2015-04-08 06:24, Robert Oschwald wrote:

 Is it possible for the admins to set the reply-to address to the 
 list-address so replies are posted to the list instead of the author?
 TFRFC (5322) says

 The originator fields also provide the information required when
  replying to a message.  When the Reply-To: field is present, it
  indicates the address(es) to which the author of the message suggests
  that replies be sent.

 Since the list admin is *not* the author of the message, the answer is no.

 HTH
 Dima


Nobody said anything about the 'list admin' here...

If you want to be pedantic, the message author generally intends that 
replies be sent to the mailing list, so to properly honor TFRFC 5322 the 
Reply-To: field should be changed to the list address in order to honor 
the message author's intent.

Bryn



Re: [Bacula-users] Verify disk volumes

2015-04-03 Thread Bryn Hughes
Also worth mentioning... what version of Bacula are you using?  I was 
seeing 'corrupted' disk volumes fairly frequently.  I have ZFS as the 
underlying storage; there's no way the data was getting corrupted on 
disk, it was something Bacula was doing internally before writing it 
out.  I upgraded to 7.0.5 and the issue went away completely.


Bryn

On 2015-04-02 06:39 PM, Ana Emília M. Arruda wrote:

Hi Jeff,

I was wondering if dumping the volume label with bls could help. Maybe 
an admin job running a script like the attached one could help?


./checkVolLabel.sh --storage=FileStorage --pool=File --user=bacula 
--password=bacula

Volume label for file1 OK
Volume label for file2 OK
Volume label for file3 OK
Volume label for File4 OK
Volume label for file5 OK
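
Under the hood a check like that can lean on bls, which can dump a volume's label straight from the storage daemon's device (a sketch; the device name FileStorage and volume file1 match the example output above):

  bls -L -V file1 -c /etc/bacula/bacula-sd.conf FileStorage

A cleanly readable label is a decent smoke test, though it only proves the head of the volume is intact.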

Best regards,
Ana

On Wed, Apr 1, 2015 at 11:34 AM, Jeff MacDonald j...@terida.com wrote:


Hi

I do disk-based backup. I just found out the hard way that one of
my volumes was corrupted.

I am curious if there is a proactive measure that I can take to
verify each of my volumes, such that they contain the correct BB02
headers?



— Jeff MacDonald















Re: [Bacula-users] Reserving devices for pools

2015-04-02 Thread Bryn Hughes
On 2015-04-02 10:50 AM, Heitor Faria wrote:
 I have a tape library with several LTO5 tape drives.  I have 3 tape pools and
 a scratch pool.  Pool1 is quite large and has most of the jobs.  The jobs
 are in a fairly standard, monthly FULL, weekly DIFFERENTIAL, and daily
 INCREMENTAL schedule.  The FULL backups are fairly evenly divided across
 1st-4th Saturdays and are quite large (multiple TBs).  Pools 2 & 3 are
 smaller, as are the jobs.  The issue that I am trying to solve is to have at
 least one tape drive available for Pools 2 & 3 that Pool1 doesn't use.
 Hello Patricia. IMHO there are two Bacula-safe ways to write on multiple tape
 library drives at the same time:

 1. Having Maximum Concurrent Jobs in each device on bacula-sd (probably is 
 what you have). If you want to avoid tape spreading just increase the maximum 
 value for each device to more than all Jobs submited to Pool1.

 2. Don't using Maximum Concurrent Jobs but scheduling Jobs for different 
 pools at a given time.

 It appears that the only way to do this is to use a different media type for
 Pools 2 & 3 and for any of the tape drives that I want reserved.
 I don't like using different media types for the same kind of media, but
 it's my personal view.

 Of course
 this would mean that Pools 2 & 3 would not be able to share the Scratch
 pool?  Is anyone using multiple scratch pools or is that possible?
 Never tested but seems possible. Of course the Scratch pools gotta have 
 unique names:

 Pool Resource (Bacula Manual):

 ScratchPool = pool-resource-name
 This directive permits specifying a dedicated Scratch pool for the current pool.
 This pool will replace the special pool named Scratch for volume selection.
 For more information about Scratch, see the Scratch Pool section of
 this manual. This is useful when using multiple storages sharing the same
 media type, or when you want to dedicate volumes to a particular set of pools.


Multiple scratch pools are no problem.  Just remember to set the correct 
RecyclePool settings in your Pool directive.

For your specific question around drives and keeping them reserved, you 
don't actually need multiple Scratch pools unless you want them.  The 
drive that will be used is not related to the original scratch pool the 
drive came from.

I think you could achieve what you are trying to do by using two Changer 
definitions.  Put some of the drives in one 'Changer' and the other 
drives in the other.  Then have your Pool definitions use the changer 
that contains the drives you want.
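
A sketch of that split on the SD side (resource names are hypothetical; both Autochanger resources point at the same physical changer device):

# bacula-sd.conf
Autochanger {
  Name = "Changer-Big"              # Pool1's large jobs
  Device = Drive-0, Drive-1
  Changer Device = /dev/sg10
  Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
}

Autochanger {
  Name = "Changer-Small"            # reserved for Pools 2 & 3
  Device = Drive-2
  Changer Device = /dev/sg10
  Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
}

Each changer then gets its own Storage resource in the director, and each Pool's Storage directive names the changer it is allowed to use.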

Bryn



Re: [Bacula-users] bacula doesn't write on the whole tape

2015-03-26 Thread Bryn Hughes
On 2015-03-26 08:26 AM, Markus Rosjat wrote:
 Hi there,

 I know this question was asked before and maybe it will be again in the
 future, but I'm kind of frustrated with it.
 We have an HP LTO-3 tapeloader, and suddenly our jobs began to randomly
 mark tapes as full even though the tape was written with half of the amount, or
 sometimes less. I found some old posts about it on the list, but with
 answers like "solved, but I won't tell how because it was sooo simple" that get
 you frustrated again. What we did so far without solving the problem:

1. replacing the scsi controller - no effect
2. replacing the scsi cable - no effect
3. replacing the terminator - no effect
4. replacing the tapeloader - no effect

 so even with all physical parts replaced, bacula insists that the tapes are
 fully written after 150GB or even 80GB. We used brand new, never-written
 tapes too and got told they are full, but checking showed we had once
 again just written less than expected.

 So if somebody out there with the same problem and a solution please share.

 Regards

What Bacula version are you using?

I did have issues like that in the past, quite a number of years ago 
though.  I was using a stand-alone HP LTO-3 drive at the time.  I 
believe the drive was kicking back some sort of error that Bacula was 
interpreting as being the end of the tape.  The problems went away when 
I switched to another tape drive (which happened to be an IBM instead of 
an HP).

Bryn



Re: [Bacula-users] Bacula newbie has some questions.

2015-03-16 Thread Bryn Hughes
7.x works just fine.  There's not really any technical reason that it 
isn't in the distros yet, more just that nobody working with the various 
distros has taken the time to package it and put it in their 
repositories.  I was able to build it quite easily myself, there's 
several features that are improved enough I'd say it's well worth 
putting in the effort.


Windows clients are currently stuck at 5.x, but they work fine with
7.x.  The director needs to be >= the client versions, so old clients
work fine with a new director, but not vice versa.  I have plenty of 5.x
clients working with my 7.x director and storage daemon.


You might want to consider just doing an incremental backup say 
Mon/Tues, a differential Wed and then incrementals again Thurs/Fri/Sat? 
Unless you really really need differentials for some reason?  Right now 
you're backing up the same data again and again every day which you may 
not want.  IE anything new created on Monday gets backed up Monday, 
Tuesday, Wednesday, Thursday, Friday and Saturday.  If all the data is 
going to the same volume then really just using incremental all week and 
not bothering with a differential at all would probably be fine.  There 
wouldn't really be any substantial impact on restore given you're not 
using tapes.
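
That suggestion as a Schedule resource (a sketch; the name is made up, the times copied from your current setup):

Schedule {
  Name = "WeeklyCycle"
  Run = Level=Full sun at 00:00
  Run = Level=Incremental mon-tue at 00:00
  Run = Level=Differential wed at 00:00
  Run = Level=Incremental thu-sat at 00:00
}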


Bryn


On 2015-03-16 10:48 AM, Joseph Wagner wrote:

Greetings all. I just joined the mailing list.

I've been playing with Bacula 5.x for a while now and I have some 
questions.


Is Bacula 7.x ready for production or should I stick with 5.x? 5.x 
seems to be the default on some Linux distros.


As far as Windows clients are concerned. Which version should I be 
using for Bacula 5.x? I downloaded the binaries from the source forge 
page but those haven't been updated since 2012. They seem to work fine 
for the most part but I did experience a fatal networking error once 
or twice. Something else could have been the culprit there though.


As of now I've been backing up a single server to a 1TB SATA drive. A 
full backup is about 400 GB or so. It mostly works fine but I run into 
retention/recycling issues after a while. I've made some tweaks over 
time and it seems to work better but I'd like some input on this from 
someone for experienced with Bacula.


The job schedule is as follows...

Level=Full sun at 00:00 , Level=Differential mon-sat at 00:00

Basically I'd like this job to do this every week and prune old data 
when the media fills up. I originally set the retention period for the 
volume to 7 days, job retention to 30 days. The job is set to auto 
prune as well. However after about two weeks the media fills up and 
the job can no longer write to that volume. Today I just set the 
retention periods to 1 day just to see what would happen over time? Is 
this a bad idea? What would be a sane config to meet my goal?


Any help would be greatly appreciated.

Thanks,
--
Joseph Wagner






Re: [Bacula-users] IBM TS3100 Missing I/O Station

2015-03-13 Thread Bryn Hughes

Hi Wayne,

I have an IBM 24-slot library as well, on which I had a similar issue.
The cause is that the I/O station is listed differently than the rest of
the slots by the mtx command. I had to tweak the mtx-changer script
slightly to recognize my I/O station as a slot.


It's because of the parsing logic in the script, the extra 
'IMPORT/EXPORT' bit causes the script to ignore it.  You can tweak the 
script so it sees that as a slot.
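
The gist of the tweak, as a sketch (not the exact line — the parsing differs a little between Bacula versions; ${MTX} and $1 are the mtx binary and the changer device, as in the attached script):

  ${MTX} -f $1 status | sed 's/ IMPORT\/EXPORT//' | grep " Storage Element [0-9]*:.*Full"

Stripping the IMPORT/EXPORT tag first makes the I/O station line match the same pattern as every other slot.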


I've attached my version that treats the I/O station as a slot.

Bryn

On 2015-03-13 10:26 AM, Wayne Merricks wrote:

Hi all,

Apologies but my google skills have failed me, hopefully I can catch
someone in a happy end of working week mood.

Just wondering if anyone has used an old IBM TS3100 with Bacula and got
it to recognise tapes that are in the IO Station?

I'm running Bacula 5.2.5 on Ubuntu 12.04.  If I run an mtx status
command I get something like the following:

Storage Changer /dev/sg11:1 Drives, 24 Slots ( 1 Import/Export )
Data Transfer Element 0:Full (Storage Element 3 Loaded):VolumeTag = NPC282L2
Storage Element 1:Full :VolumeTag=NPC281L2

snip

Storage Element 23:Full :VolumeTag=NPC305L2
Storage Element 24 IMPORT/EXPORT:Full :VolumeTag=NPC280L2

So I can see all 24 slots + the tape in the drive (from slot 3). Bacula
seems completely blind to Storage Element 24.

If I run update slots in bconsole the 24th slot is missing:

3306 Issuing autochanger slots command.
Device IBM has 24 slots.
Connecting to Storage daemon Autochanger ...
3306 Issuing autochanger list command.
Catalogue record for Volume NPC281L2 updated to reference slot 1.

snip

Catalogue record for Volume NPC305L2 updated to reference slot 23.


It's not a massive problem, but it has been bugging me for a while. If
anyone has any ideas it would be appreciated.




#!/bin/sh
#
# Bacula interface to mtx autoloader
#
#  If you set in your Device resource
#
#  Changer Command = path-to-this-script/mtx-changer %c %o %S %a %d
#you will have the following input to this script:
#
#  So Bacula will always call with all the following arguments, even though
#    in some cases, not all are used.
#
#  mtx-changer changer-device command slot archive-device drive-index
#  $1  $2   $3$4   $5
#
#  for example:
#
#  mtx-changer /dev/sg0 load 1 /dev/nst0 0 (on a Linux system)
# 
#  will request to load the first cartridge into drive 0, where
#   the SCSI control channel is /dev/sg0, and the read/write device
#   is /dev/nst0.
#
#  The commands are:
#  CommandFunction
#  unload unload a given slot
#  load   load a given slot
#  loaded which slot is loaded?
#  list   list Volume names (requires barcode reader)
#  slots  how many slots total?
#  listalllist all info
#  transfer
#
#  Slots are numbered from 1 ...
#  Drives are numbered from 0 ...
#
#
#  If you need to an offline, refer to the drive as $4
#e.g.   mt -f $4 offline
#
#  Many changers need an offline after the unload. Also many
#   changers need a sleep 60 after the mtx load.
#
#  N.B. If you change the script, take care to return either 
#   the mtx exit code or a 0. If the script exits with a non-zero
#   exit code, Bacula will assume the request failed.
#

# source our conf file
if test ! -f /etc/bacula/scripts/mtx-changer.conf ; then
  echo "!!!!"
  echo "ERROR: /etc/bacula/scripts/mtx-changer.conf file not found"
  echo "!!!!"
  exit 1
fi
. /etc/bacula/scripts/mtx-changer.conf

MTX=/usr/sbin/mtx

if test ${debug_log} -ne 0 ; then
  touch /var/lib/bacula/mtx.log
fi
dbgfile=/var/lib/bacula/mtx.log
debug() {
if test -f $dbgfile; then
   echo "`date +\"%Y%m%d-%H:%M:%S\"`" $* >> $dbgfile
fi
}


#
# Create a temporary file
#
make_temp_file() {
  TMPFILE=`mktemp /var/lib/bacula/mtx.XXXXXXXXXX`
  if test x${TMPFILE} = x; then
     TMPFILE=/var/lib/bacula/mtx.$$
     if test -f ${TMPFILE}; then
        echo "ERROR: Temp file security problem on: ${TMPFILE}"
        exit 1
     fi
  fi
}

#
# The purpose of this function to wait a maximum 
#   time for the drive. It will
#   return as soon as the drive is ready, or after
#   waiting a maximum of 300 seconds.
# Note, this is very system dependent, so if you are
#   not running on Linux, you will probably need to
#   re-write it, or at least change the grep target.
#   We've attempted to get the appropriate OS grep targets
#   in the code at the top of this script.
#
wait_for_drive() {
  i=0
  while [ $i -le 300 ]; do  # Wait max 300 seconds
    if mt -f $1 status 2>&1 | grep "${ready}" >/dev/null 2>&1; then
      break
    fi
    debug "Device $1 - not ready, retrying..."
    sleep 1
    i=`expr $i + 1`
  done
}

# check parameter count on commandline
#
check_parm_count() {
pCount=$1
pCountNeed=$2
if test 

Re: [Bacula-users] Job batches

2015-03-03 Thread Bryn Hughes

On 2015-03-03 07:01 AM, Luc Van der Veken wrote:


If jobs are added in two batches at different times, does the oldest 
batch have to be completely finished before the newer one is started?


My backups are made fd - file (disk) store - copy to tape (an old 
LTO2 drive).


One client is huge compared to the others, copying it to tape takes 
some time (about 5 tapes).


My problem is that new incremental backups that should start while it 
is being copied, just sit there “waiting for execution” until the copy 
operation has completed – yet they are set to run at a higher 
priority, neither the storage they are to be written to nor any source 
fd is in use at the time, and the sd and director both have 
sufficiently high ‘maximum concurrent jobs’ settings.


I’ve gone over all configuration files several times to see if I 
haven’t forgotten a ‘maximum concurrent’ or so, but I find no reason 
why those jobs shouldn’t start.


PS: sorry if this is a repeat question, it sounds rather familiar 
while I am writing it, but I didn’t find an older version.It’s also 
possible that I started writing it a few months ago, but then decided 
not to post it and continue searching a bit more ;)





Do all of your jobs have the same priority setting?  Jobs with different 
priorities won't execute at the same time.


Ah actually I see that you say above you're using different priority 
levels.  Make everything the same priority and give it a try.


The priority thing is a little weird, it doesn't work quite the way one 
might expect.  It won't kick off a higher priority job until the current 
running job has completed, so:


- Job 'A' is running with priority 10
- Job 'B' is queued with priority 5

Job 'B' won't execute until Job 'A' has completed.

Something like:

- Job 'A' is running with priority 10
- Job 'B' is queued with priority 5
- Job 'C' is queued with priority 10
- Job 'D' is queued with priority 5

Job 'A' will run until it is complete, then Job 'B' and 'D' will kick 
off at the same time assuming there's no concurrent limits exceeded, 
then Job 'C' will kick off once 'B' and 'D' are completed.





Re: [Bacula-users] Help with monitoring/tunning Bacula v5.2.5

2015-02-19 Thread Bryn Hughes
Guessing those 3.4M files are mostly small files?  They are likely 
scattered across the disk.


Are you backing up to a disk destination, or to tape? From your verbiage 
I'm guessing to disk...


Have you tried looking at iostat on the source server when a backup is 
running? I would not at all be surprised to see that your source disks 
are basically 100% busy if you have a very large number of small files.  
What kind of data are you backing up? Is this something like a mail 
store filled with maildirs or a subversion repository filled with small 
files?


Bryn

On 2015-01-28 03:43 PM, Sergio Nemirovsky wrote:


We are using Bacula 5.2.5 to backup a server with a huge amount of files.

After a full backup was done (It took 3+ days), the incremental took 
10+ hours.


Notice the lines in the log:

27-Jan 23:10 server-xxx-fd JobId 1157: /run is a different filesystem. 
Will not descend from / into it.


28-Jan 09:16 server-xxx-fd JobId 1157: /boot is a different 
filesystem. Will not descend from / into it.


We are not defining the size of the volumes, so all the data from the 
job goes into just one volume.


We have 3.4M files in this incremental.

Could this be the cause of the delay?

What would be the best approach to tune this beast?

Thanks!

Log:



27-Jan 23:10 server-xxx-fd JobId 1157: /dev is a different filesystem. 
Will not descend from / into it.


27-Jan 23:10 server-xxx-fd JobId 1157: /run is a different filesystem. 
Will not descend from / into it.


28-Jan 09:16 server-xxx-fd JobId 1157: /boot is a different 
filesystem. Will not descend from / into it.


28-Jan 09:16 server-xxx-sd JobId 1157: Job write elapsed time = 
10:06:52, Transfer rate = 253.8 K Bytes/second


.
.
.
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK









Re: [Bacula-users] bacula splitting big jobs in to two

2015-02-09 Thread Bryn Hughes
I'm in a somewhat similar boat, I don't have 70TB, but I do have 21.5TB 
and LTO3 equipment (max 80 MB/sec).  If it were possible to keep a tape 
drive streaming constantly without ever having to change tapes or 
unloading it would still take me many days to finish.

The easiest way to start splitting things up is at the root filesystem 
level for your data directory.  I assume you don't have 70TB in files 
just all thrown together in one big directory, there is probably some 
sort of organizational structure?

In my case we generate a new directory at the root level each year and 
then put the year's work in to that.  I create a separate fileset for 
each individual year.  I then have a separate job for each year.  Using 
JobDefs and a common schedule minimizes the configuration work, you end 
up with nothing more really than a Job and a Fileset, each with only a 
few lines in your config file.
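
A sketch of that layout (all names are examples; the shared settings live in the JobDefs):

JobDefs {
  Name = "ArchiveDefaults"
  Type = Backup
  Level = Full
  Client = fileserver-fd
  Schedule = "YearlyArchive"
  Storage = TapeLibrary
  Pool = TapePool
  Messages = Standard
}

FileSet {
  Name = "Archive-2014"
  Include {
    Options { signature = MD5 }
    File = /data/2014
  }
}

Job {
  Name = "Backup-Archive-2014"
  JobDefs = "ArchiveDefaults"
  FileSet = "Archive-2014"
}

Adding next year's work is then a two-resource job: a FileSet for /data/2015 and a Job pointing at it.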

Look for something like this in how your data is laid out.  Just 
remember to create additional jobs as new stuff is added - again in my 
case we're only doing this once a year.  I have the root directory of 
the file shares locked so new folders can't be created by anyone but the 
admins, this allows me to ensure the correct backup config is added at 
the same time.

Bryn


On 2015-02-02 06:23 AM, Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC] wrote:
 Andy,

 I have to backup about 70TB in one job.

 Uthra

 -Original Message-
 From: akent04 [mailto:bacula-fo...@backupcentral.com]
 Sent: Friday, January 30, 2015 9:50 AM
 To: bacula-users@lists.sourceforge.net
 Subject: [Bacula-users] bacula splitting big jobs in to two

 I run bacula 5.2.12 on a RHEL server which is attached to a Tape Library. I 
 have two LTO5 tape drives. Since the data on one of my server has grown big 
 the back-up takes 10-12 days to complete. I would like to split this job in 
 to two jobs. Has anybody done this kind of a set-up? I need some guidance on 
 how to go about it.


 Mr. Uthra: at a glance I think the only way to do that is having two Job
 configurations for the same Client, with two FileSet configurations for them
 respectively.
 This is very manual, since you will have to load-balance the different
 backup paths for each FileSet by yourself.
 Just in time: unless there is a huge amount of information on this server,
 it's not normal for a backup Job to take 12 days to complete; maybe there are
 some bottlenecks in your structure / configuration.


 Regards,
 ===
 Heitor Medrado de Faria - LPIC-III | ITIL-F
 Jan. 26 - Fev. 06 - Novo Treinamento Telepresencial Bacula: http://www.bacula.com.br/?p=2174
 +55 61 8268-4220
 Site: heitorfaria at gmail.com
 ===

 I would have to echo the above post in regards to bottlenecks. How much data 
 are you backing up?
 I'm able to back up almost 500GB (over a million files) in around 4-5 hours
 and that is to an RDX cartridge in a Tandberg Quikstation. Twice that (around 
 1TB) would probably only take about 8-10 hours, less than a day. If you have 
 only around 1-4TB of data you're backing up 10-12 days is abnormal for that 
 when dealing with some type of local hardware(not over Internet or VPN), and 
 I'd look into why it's taking so long instead of trying to work around it.

 -Andy





Re: [Bacula-users] Bacula OCFS2 FileSystem Support

2015-02-04 Thread Bryn Hughes

Support in what regard?

If you can see the files at the OS level then Bacula will back them up.  
I'm not particularly familiar with OCFS2 in terms of how it handles its 
metadata; anything that follows the 'normal' POSIX rules will be 
maintained. If OCFS2 is doing anything like custom ACLs or other 
metadata type stuff you might not retain those specific attributes, 
though the data itself should be fine.


Bryn


On 2015-01-28 11:25 AM, Heitor Faria wrote:

Dear Bacula Users,

Does anyone know if Bacula (community / enterprise) supports
the OCFS2 filesystem?


Regards,

Heitor Medrado de Faria - LPIC-III | ITIL-F
Need Bacula training?
https://www.udemy.com/bacula-backup-software/?couponCode=bacula-users
+55 61 2021-8260 | 8268-4220
Site: www.bacula.com.br | Facebook: heitor.faria | Gtalk: heitorfa...@gmail.com









Re: [Bacula-users] Should I have a server in every location?

2015-01-26 Thread Bryn Hughes
You should have a storage server in each location for sure.  You COULD 
run a single director but it would make more sense to run a director at 
each location, at least in my mind.


You'll need Bacula v7.0 or later to get this all working properly, but 
what you'd do:


- Run your regular backups at each site, using the on-site storage and 
director.
- At some later time, run a 'Copy' job that makes a copy of all the 
backed up jobs at each site to another site.


Each site would do all of its own restores etc locally, but should the 
local copy be destroyed you'd have another copy off-site that could be used.


You will need a separate set of storage at each site for the local stuff 
vs. the off-site stuff - this can be as simple as a different directory 
defined in the storage daemon's configuration if you are using 
disk-based backups.
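
A sketch of the copy leg (every name here is hypothetical; the pattern mirrors any disk-to-disk Copy setup):

Pool {
  Name = LocalDisk
  Pool Type = Backup
  Storage = LocalFile
  Next Pool = OffsiteDisk           # copies land in this pool
}

Pool {
  Name = OffsiteDisk
  Pool Type = Backup
  Storage = RemoteFile              # storage defined at (or reachable from) the other site
}

Job {
  Name = "CopyToOffsite"
  Type = Copy
  Selection Type = PoolUncopiedJobs # copy everything not yet copied
  Pool = LocalDisk
  Client = local-fd                 # required and syntax-checked, but ignored at runtime
  FileSet = "DummySet"              # idem
  Level = Full                      # idem
  Messages = Standard
  Schedule = "NightlyCopy"
}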


Bryn


On 2015-01-26 07:00 PM, Damien Hull wrote:


I’m brand new to Bacula. Getting some good information from the list. 
As I’ve mentioned before, I’ve got three offices in different 
locations. I’m needing offsite storage of data as part of a disaster 
recovery plan. I’ve got a couple of questions


* Should I have a server in every location?

* Can Bacula servers send data between each other?

Here's my thinking:

1. Local backup in case someone deletes a file

2. Use another remote server for offsite storage

Thanks!







Re: [Bacula-users] Bacula daemon message

2015-01-17 Thread Bryn Hughes
Are you SURE there isn't anything like a crontab somewhere that someone 
set up?  This very very very very much has to be something that is being 
either typed in to bconsole or executed via a script somewhere.  It 
isn't something Bacula is doing on its own.


Bryn

On 2015-01-17 10:54 AM, Polcari, Joe (Contractor) wrote:


Nope, none of that. No admin jobs. All my storage is File. Backups and 
restores work fine.


*From:*Roberts, Ben [mailto:ben.robe...@gsacapital.com]
*Sent:* Saturday, January 17, 2015 1:05 PM
*To:* Polcari, Joe (Contractor); bacula-users@lists.sourceforge.net
*Subject:* RE: Bacula daemon message

Do you have an admin job in your config that's running console 
commands? It looks like something is executing purge commands 
automatically which sounds really rather dangerous to the health of 
your backups. The good news is it doesn't seem to be working because 
you no longer have a File storage any more, but if you were ever to 
re-create it you might find backups being randomly deleted.


Ben Roberts

 -Original Message-
 From: Polcari, Joe (Contractor) [mailto:joe_polc...@cable.comcast.com]
 Sent: 17 January 2015 17:22
 To: bacula-users@lists.sourceforge.net

 Subject: [Bacula-users] FW: Bacula daemon message

 How do I stop these? They seem to come at random times once a day.

 -Original Message-
 From: root@xxx On Behalf Of Bacula
 Sent: Saturday, January 17, 2015 11:04 AM
 To: bac...@localhost.xxx
 Subject: Bacula daemon message

 17-Jan 11:04 xxx-dir JobId 0:
 This command can be DANGEROUS!!!

 It purges (deletes) all Files from a Job, JobId, Client or Volume; or it
 purges (deletes) all Jobs from a Client or Volume without regard to
 retention periods. Normally you should use the PRUNE command, which
 respects retention periods.
 17-Jan 11:04 xxx-dir JobId 0: Automatically selected Catalog: MyCatalog
 17-Jan 11:04 xxx-dir JobId 0: Using Catalog MyCatalog
 17-Jan 11:04 xxx-dir JobId 0: Storage resource File: not found
 17-Jan 11:04 xxx-dir JobId 0: The defined Storage resources are:
 17-Jan 11:04 xxx-dir JobId 0: 1: Client1Storage
 17-Jan 11:04 xxx-dir JobId 0: 2: Client2Storage
 17-Jan 11:04 xxx-dir JobId 0: 3: Client3Storage
 17-Jan 11:04 xxx-dir JobId 0: 4: Client4Storage
 17-Jan 11:04 xxx-dir JobId 0: 5: Client5Storage
 17-Jan 11:04 xxx-dir JobId 0: 6: Client6Storage
 17-Jan 11:04 xxx-dir JobId 0: Selection aborted, nothing done.






Re: [Bacula-users] Upgrade question.

2015-01-13 Thread Bryn Hughes
On 2015-01-08 03:22 PM, Erik P. Olsen wrote:
 I am planning to upgrade bacula from 5.2.13 to 7.0.5. I will back-up the 
 current
 catalogue, remove 5.2.13, install 7.0.5, copy the conf files from 5.2.13 to
 7.0.5, restore the catalogue and start bacula again.

 Is that feasible?

 I also have to back-up a Windows 7 system. Which version will work on that
 system? Right now I am using version 5.2.10 on Windows.

 My host and the other clients are all Linux Fedora 21.


Yep, it is feasible, though you are making more work than necessary.
There is no reason to remove the database or remove the conf files; in
fact it could be detrimental.  Upgrading in place will ensure the proper
scripts are executed to upgrade the database; otherwise you'll have to
run them manually.
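
For reference, the script that handles the manual catalog upgrade is update_bacula_tables (a sketch; its location varies by distro, often /etc/bacula/scripts or /usr/libexec/bacula):

  /etc/bacula/scripts/update_bacula_tables

Run it once per upgrade, after installing the new binaries and before starting the director.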

Bacula 7.0.5 will work fine with 5.x clients, so you can leave the 
Windows clients as-is.  It is critical that the storage and director 
daemons are all the same version, but clients can be older.  I have a 
mixed environment - my director/storage are on 7.0.5 and my clients are 
a few different v5.2.x versions.

Bryn



Re: [Bacula-users] Bacula backup obsessed with a specific tape

2015-01-06 Thread Bryn Hughes
On 2015-01-06 10:21 AM, philhu wrote:
 No

 All tapes are physically labelled with barcodes as well as labelled by Bacula 
 and shown as status=append and used on tape 64,512, and in same pool.


It would be helpful if you could show the output of 'list volumes' as 
your initial description of the pools was somewhat confusing.

Bryn



Re: [Bacula-users] Problems with Tape.

2014-12-23 Thread Bryn Hughes
EXTREMELY unlikely that it is cable related - those errors would show up 
in dmesg as SCSI/SAS/FC/whatever errors, not as tape drive hardware 
errors. All of those transports have their own error 
detection/correction; the drive will never get a corrupted signal from 
the HBA or vice versa.

Very much sounds like you've got a problem with your drive itself to 
me.  Probably time for a service call...

Bryn

On 2014-12-23 05:31 AM, Clark, Patricia A. wrote:
 Dirt is not the issue unless you are operating in a dirty environment.
 There is something wrong with either the tape drive itself or the
 connection to the drive.  The connection could be the cable or the HBA on
 the computer.  There are likely error messages in your logs that will help
 with the diagnosis.  Changing cables is the easiest/fastest first thing.
 After that, look at the drive and any documentation about the drive and
 run any available diagnostics.

 Patti Clark
 Linux System Administrator
 R&D Systems Support, Oak Ridge National Laboratory


 On 12/20/14, 3:49 AM, Danixu86 bacula-fo...@backupcentral.com wrote:

 Hi there, first of all I'm sorry for my English.

 My problem is with the tape drive; recently I'm getting errors on a lot of
 tapes, and most of the time at about 6GB of backup... These errors are:
 Dirty Drive, or Hardware Error, and when I eject the tape then Dirty Drive.
 I've tried with 4 new tapes (2 Sony and 2 Quantum), and yesterday with a
 used tape, and all give me an error at about 5-6GB. The cleaner tape is new
 too, and I've got errors with two different cleaner tapes...

 Does someone know what the problem can be? Can I clean the drive manually?
 Because maybe the problem is some dirt that the cleaning tape can't
 eliminate...

 Tape drive is a Quantum LTO-4 HH.

 Greetings and thanks!




Re: [Bacula-users] Dell powerVault 124T tape library

2014-12-04 Thread Bryn Hughes

On 2014-12-04 07:33 AM, Ian Lord wrote:


Hi,

We are looking for a solution to backup freebsd machine with large ZFS 
pools (8TB)


We thought of purchasing a tape library with LTO6 drives which will 
gives us 40TB of Retention


Everyone seems to indicate bacula as a good solution but when I look 
at the documentation, it seems 10 years outdated:


http://blog.bacula.org/general/supported-autochangers/

I can’t complain because it’s open source and I really respect the
concept and know how hard documentation is to maintain, but I would
like to know whether I can safely purchase a $7,000 library and a new
server running FreeBSD, and trust that it will work. This is not the
kind of thing I have sitting in my lab :)


Is anyone running such a setup who can assure me it can be done
without too much customization and trouble? If I know it can be done
by a normal power user (I’m not a programmer who can modify source
code to fix things), I’ll invest the time in learning and reading
documentation, but it will be a lot more reassuring to know people
have done it before.


Thanks


Bacula just talks to the command-line programs for your OS that deal
with the library.  On Linux they are the mt-st programs.  There's an
autochanger script which just calls the OS commands to query the library
and make it 'do stuff', so really what you are looking for is whether the
library is supported on FreeBSD - as long as you can script changing
tapes, querying what tapes are in which slots, and so on, you will be
good to go.  The default scripts work with most changers, though I had
to make a couple of tweaks for my IBM, for instance.
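
It's worth exercising the changer by hand before wiring up Bacula; a sketch, assuming mtx is installed and the changer shows up as a pass-through device (the device node here is only an example):

  mtx -f /dev/pass2 status       # every slot, drive and barcode should be listed
  mtx -f /dev/pass2 load 3 0     # slot 3 into drive 0
  mtx -f /dev/pass2 unload 3 0   # and back again

If those work from a shell, the mtx-changer script has everything it needs.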


Bryn
--
Download BIRT iHub F-Type - The Free Enterprise-Grade BIRT Server
from Actuate! Instantly Supercharge Your Business Reports and Dashboards
with Interactivity, Sharing, Native Excel Exports, App Integration  more
Get technology previously reserved for billion-dollar corporations, FREE
http://pubads.g.doubleclick.net/gampad/clk?id=164703151iu=/4140/ostg.clktrk___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Windows File Daemon maximum speed at 36 MByte/s

2014-12-01 Thread Bryn Hughes
On 2014-11-29 02:43 AM, Patrick wrote:
 Hi Josh,

 Thanks for your reply.

 You can try to isolate the problem. Try running a Windows backup with
 VSS, compression, and encryption all turned off. Compare a Windows VM
 against a Linux VM on the same host if possible.
 I don’t use compression or encryption, but VSS. I disabled VSS and get about 
 40 MByte/s. If I also set acl support = no, I get about 43 MByte/s.
 It’s better, but far away from the 90 MByte/s from a linux box with similar 
 data.

 Last night I took a look at the bacula console during the full backups
 and could see one Windows client at 65 MByte/s. This server hosts a SQL
 Server with big database dump files. The Bacula File Daemon is faster with
 big files.

 How can I tune the Bacula File Daemon to increase the transfer rate for a lot
 of small files? Can I set the maximum memory usage? Is a ramdisk which
 collects the files from the hard drive and sends them as bunches to bacula
 possible?

More than likely you are running in to Windows NTFS limitations, not 
Bacula limitations.  NTFS has never been the fastest filesystem 
especially with lots of small files.

Bryn



Re: [Bacula-users] LTO-6 - tape cleaning frequency

2014-11-10 Thread Bryn Hughes
On 2014-11-10 07:32 AM, John Drescher wrote:
 How often does your LTO-6 tape drive need to be cleaned?

 I ask because a recent post mentioned every 10 tapes.  With my limited tape 
 knowledge, that seems rather high, but I have no LTO experience.

 That seems excessive to me although I am still using a dual drive LTO2
 autoloader. For that I run a cleaning cycle 2 times per year for each
 of the HP LTO2 tape drives.

 John


I have an IBM LTO-3 autoloader here.  I just have a cleaning cartridge 
in the reserved slot in the library and I let it take care of itself 
rather than trying to do it on any sort of schedule.  It loads the 
cleaning cartridge a couple of times a year.  I bought this library used 
- when I first started using it I had to run cleanings every few weeks 
but they tapered off quite significantly after it had been in use for a 
while.

The research I've done in the past on LTO technology has implied that 
you should never need to trigger a cleaning cycle on your own - the 
drive will let you know if it needs it.  Cleaning tapes should only be 
used when the drive requests one.  Our bigger library we had in the 
office ran for years without ever requesting a cleaning.

It's worth mentioning that different manufacturers have different 
cleaning programs in their drives.  For instance my IBM drive runs a 
much shorter cleaning cycle and has about 3-4x more cycles 'allowed' on 
a cleaning tape versus my HP drive I was using before it.  My experience 
with my limited sample size of a half dozen LTO drives would seem to 
indicate that the IBM drive requests cleaning more often too.

Also worth mentioning - the way your drive is used will have a 
significant impact on how many cleaning cycles it needs.  If you aren't 
flowing data fast enough to it to keep it streaming at full speed it 
will build up a lot of cruft on the heads much faster. Bacula can help 
with this greatly by spooling data before writing, in particular with 
incremental backups that have many small files scattered all over the disk.
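
The directives involved, as a sketch (the spool path and size are examples):

  # In the Job (or JobDefs) resource:
  Spool Data = yes            # stage the job to disk first
  Spool Attributes = yes      # batch the catalog updates as well

  # In the SD's Device resource:
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 200G

With spooling on, the drive idles while data trickles in and then writes each despool at full streaming speed, instead of shoe-shining.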

Bryn



Re: [Bacula-users] large jobs that fail

2014-11-07 Thread Bryn Hughes
On 2014-11-07 04:40 AM, Jeff MacDonald wrote:
 Lets say you are doing a 500 gig job and it fails after 450 gigs.. and you 
 are using disk based backups.

 Do you ever bother to try to reclaim that space? I mean I delete the job, so 
 the volume can have that data recycled when the time comes, but do you ever 
 go thru the extra trouble of finding the volumes that only have data from 
 that job on , and deleting them from disk etc?

 jeff.

Are you asking if I PERSONALLY try to reclaim the space, or if Bacula 
itself will try to?

Bacula will keep the backed up data and apply the normal retention 
policies to it even if the backup doesn't complete successfully. That 
450 gigs of data on disk can still be used for a restore despite the 
backup not completing successfully; the complexity of restoring the data 
will depend on the circumstances around how it failed.  If for instance 
the director crashed and the records never got inserted into the 
database restore is going to be a lot more complex than if the client 
went offline for some reason.  However the data is still there and can 
still be read one way or the other.

On a personal level, whether I try to reclaim the space or not will 
depend on whether or not I have enough space to have that partial backup 
sitting around or not.  If it's not causing any issues then I'd be 
inclined to let it sit till it expired on its own.

Bryn





Re: [Bacula-users] LTO speed optimisation

2014-11-05 Thread Bryn Hughes

Hi Ben,

To start with I would watch the server using iostat and top to make sure 
you aren't running out of CPU and that you really aren't maxing out your 
disks during your backup.  In particular pay attention to the '%util' - 
I like to run 'iostat -kx 2' which will show the stats every 2 seconds.  
Of course the usefulness of that will depend on your disk configuration 
- if you have a RAID card presenting a single LUN to the server then the 
numbers may be a bit faulty, but it should still give an indication of 
what is going on.  I see you are allowing 3 concurrent jobs on your tape 
drives - if all 3 of these are spooling from the same set of disks at 
once you may have some impact on performance since you are now requiring 
a lot of seek operations.


There are also some settings around buffers for Bacula that I've had to 
tweak in the past to get ideal performance.  Check Maximum Network 
Buffer Size in your configs.
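
The directive lives at both ends of the connection and the two should 
agree - a minimal sketch, with an illustrative (not tuned) value:

# bacula-fd.conf
FileDaemon {
  Name = myclient-fd
  Maximum Network Buffer Size = 262144   # bytes; example value only
}

# bacula-sd.conf, in the Device resource
Device {
  Name = drive-1-tapestore1
  Maximum Network Buffer Size = 262144
}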


Finally there are some OS-level settings for the 'st' driver (I'm 
assuming you are on Linux).  With my LTO3 drives I need to add this to 
the kernel command line:


st=buffer_kbs:256,max_buffers:32

LTO6 may need similar tweaks.  Without the st driver buffer size and 
number increased I had a hard time keeping an LTO3 drive running at full 
speed (80MB/sec).
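
(If st is built as a module on your system rather than into the kernel, 
the same tuning can go into modprobe config instead of the kernel 
command line - something like:

# /etc/modprobe.d/st.conf
options st buffer_kbs=256 max_buffers=32

followed by reloading the st module.)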


Bryn

On 2014-11-05 03:48 AM, Roberts, Ben wrote:

Hi all,

I'd like to try and make some speed improvements to my Bacula setup 
(5.2.13, Solaris11). I have data (and attribute) spooling enabled 
using a pool of 46x 1TB directly-attached SAS disks dedicated to this 
purpose. Data is being despooled to 2x directly-attached SAS LTO6 
drives at around 100MB/sec each. I think I should be able to get 
closer to the ~160MB/s maximum uncompressed throughput the drives and 
tape media support (ref: 
http://docs.oracle.com/cd/E38452_01/en/LTO6_Vol4_E1/LTO6_Vol4_E1.pdf).


I've just done a speed test and can read from the spool array at a 
sustained 300MB/sec even while other jobs are running, so I'm sure 
there's no bottleneck at the disk layer. My suspicion is that the 
bottleneck is at the application layer, probably due to the way I have 
Bacula configured.


Having read through Bareos' tuning paper 
(http://www.bareos.org/en/Whitepapers/articles/Speed_Tuning_of_Tape_Drives.html), 
I've updated the max file size from 1GB to 50GB, which increased the 
throughput from ~75 to ~100MB/sec. I believe I need to look at tuning 
the block size to gain the last bit of improvement.


Is it still the case in Bacula that changing the Maximum Block Size 
renders previously used/labelled tapes unreadable? I'm up to 
almost 1,000 tape media already written, so making these unusable for 
restores without restarting the SD to change configs would be less 
than ideal. I see Bareos is touting a feature to make changes to block 
size at the pool level rather than the storage level and so this 
problem can be avoided by moving newer backups to a different pool 
while still keeping older backups readable. I haven't seen any 
reference to this in the Bacula manual; is it something that's already 
supported or in the plans for a future version?


For reference, this is one of the relevant drive definitions I'm 
using, just in case there's something else that would help which I 
might have missed:

Device {
Name = drive-1-tapestore1
Archive Device = /dev/rmt/1mbn
Device Type = Tape
Media Type = LTO6
AutoChanger = yes
Removable media = yes
Random access = no
Requires Mount = no
Drive Index = 1
Maximum Concurrent Jobs = 3
Maximum Spool Size = 1024G
Maximum File Size = 50G
Autoselect = yes
}

Regards,
Ben Roberts




--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Job transfer rate

2014-10-30 Thread Bryn Hughes
On 14-10-30 06:27 AM, Jeff MacDonald wrote:
 Hi,

 I have some backups going at 2MB/s which for a 380gig backup is just too 
 slow. I’m trying to find my bottleneck.

 Some questions:

 - Is the rate of the backup only shown in “messages” or is it stored in the 
 db anywhere. Or could I just do jobbytes / endtime-starttime in the jobs 
 table?

 - Does bacula write data to disk via a stream or lots of little latency 
 dependent writes?

 My environment looks like this

 - Bacula (and postgres on the same VM), a MS Small business server and 3 or 4 
 other VMs run on a 6 disk array of 7200rpm SATA disks (I bet this is already 
 my slow point)
 - Bacula stores its backups on a NFS mounted NAS, about .7ms of ping away.

 Tips/Suggestions?

 Jeff.

What is the content of your backups?  Some things (ie thousands of tiny 
files) will cause a lot of seeks on the machine to be backed up.  If you 
aren't using attribute spooling then each backed up file also causes a 
record to be inserted in to the database, which may take time depending 
on your DB environment.

The 'suggestions' for tuning will be different if you are backing up a 
few dozen 10GB files versus backing up a million 10kb files.
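
Attribute spooling itself is a one-line change per job - a minimal 
sketch, with a made-up job name and only the relevant directive shown:

Job {
  Name = windows-server-job
  Spool Attributes = yes   # batch the catalog inserts at the end of the job
}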

Bryn

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Job transfer rate

2014-10-30 Thread Bryn Hughes

On 14-10-30 07:50 AM, Jeff MacDonald wrote:



Tips/Suggestions?

Jeff.


What is the content of your backups?  Some things (ie thousands of tiny
files) will cause a lot of seeks on the machine to be backed up.  If you
aren't using attribute spooling then each backed up file also causes a
record to be inserted in to the database, which may take time depending
on your DB environment.

The 'suggestions' for tuning will be different if you are backing up a
few dozen 10GB files versus backing up a million 10kb files.


It's mostly a Windows OS, with all its sundry smaller files and a few 
larger database dumps etc.


I guess what I have to ascertain is whether the slow part is getting 
the data FROM the servers or putting the data TO the storage.


I’m not sure which value the rate is in the job report, or if rate is 
somehow encompassing both.


jeff

The job report rate will be the final average rate of the job, it 
doesn't know/specify the difference between the 'input' rate and the 
'output' rate.


Yep, you're going to need to do some investigation on the storage side 
of the VM machine you are backing up, the director itself, the storage 
daemon itself (though I'm guessing it is on the same system as the 
director for you) and the final storage.


Also it's not quite clear from your description: is the final storage on 
a different NAS altogether from your VMs? (hoping so!) What 
virtualization platform are you running?


Finally the question about attribute spooling is a big one - if you are 
backing up a lot of small files and you do not have attribute spooling 
turned on, you will have abysmal performance especially if the director 
is running on the same disks that you are backing up.


Database writes are (almost) always synchronous writes, meaning the 
system will stop and wait for the storage layer to say yes the data is 
ACTUALLY committed to disk before proceeding.  If you are seeking all 
over backing up a bunch of small files, then trying to do a whole ton of 
tiny DB writes at the same time to the same spindles your hard drive 
heads are going to be flying around like crazy.  An array of 7200 RPM 
disks in any sort of parity RAID configuration will not be able to 
handle more than 50-90 random IOPs (Operations per Second) at best in 
real life, with a DB write or a file read counting as an IOP.  If you 
are backing up lots of small files randomly distributed around the 
storage you are quite likely hitting an IOP wall - an IOP to read the 
file and an IOP to write the DB record means not more than 25-45 files 
per second.  4kb files = 100-180kb/sec and a completely maxed out 
storage layer.


Even WITH attribute spooling enabled you are still going to be in a 
less-than-ideal position since the spooled attributes still need to be 
written to the same spindles with the hardware configuration you've 
described.


Bryn


--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Job transfer rate

2014-10-30 Thread Bryn Hughes
On 14-10-30 08:27 AM, Jeff MacDonald wrote:
 On Oct 30, 2014, at 12:17 PM, Bryn Hughes li...@nashira.ca wrote:
 The job report rate will be the final average rate of the job, it doesn't 
 know/specify the difference between the 'input' rate and the 'output' rate.

 Yep, you're going to need to do some investigation on the storage side of 
 the VM machine you are backing up, the director itself, the storage daemon 
 itself (though I'm guessing it is on the same system as the director for 
 you) and the final storage.

 Also it's not quite clear from your description, is the final storage on a 
 different NAS all together from your VMs? (hoping so!)  What virtualization 
 platform are you running?

 Finally the question about attribute spooling is a big one - if you are 
 backing up a lot of small files and you do not have attribute spooling 
 turned on, you will have abysmal performance especially if the director is 
 running on the same disks that you are backing up.

 Database writes are (almost) always synchronous writes, meaning the system 
 will stop and wait for the storage layer to say yes the data is ACTUALLY 
 committed to disk before proceeding.  If you are seeking all over backing 
 up a bunch of small files, then trying to do a whole ton of tiny DB writes 
 at the same time to the same spindles your hard drive heads are going to be 
 flying around like crazy.  An array of 7200 RPM disks in any sort of parity 
 RAID configuration will not be able to handle more than 50-90 random IOPs 
 (Operations per Second) at best in real life, with a DB write or a file read 
 counting as an IOP.  If you are backing up lots of small files randomly 
 distributed around the storage you are quite likely hitting an IOP wall - an 
 IOP to read the file and an IOP to write the DB record means not more than 
 25-45 files per second.  4kb files = 100-180kb/sec and a completely maxed 
 out storage layer.

 Even WITH attribute spooling enabled you are still going to be in a 
 less-than-ideal position since the spooled attributes still need to be 
 written to the same spindles with the hardware configuration you've 
 described.

 Bryn

 This was really helpful and basically just answered all of my questions 
 without having to investigate the actual setup very much.

 I’m using VMWare for my virt platform. Bacula and its postgres live on the 
 same disks that they are backing up (which is local storage) and data is sent 
 off to to a remote NAS via gige.

 My guess is that it's an IOP wall like you mentioned. It's running a bunch of 
 VMs that are under heavy usage by the staff.

 Making a stronger and stronger argument for me to recommend a dedicated Bacula 
 appliance. 16 gigs of ram, 4 cores, 1tb of 7200 for postgres and a tape drive 
 :)

 jeff.

Just be aware that you might not see a dramatic increase in speed just 
moving Bacula itself!

If you are using VMWare with VMDK files on a VMFS volume you need to be 
aware that any IO by a guest requires a reservation of the entire VMFS 
volume.  Locking is happening at the SCSI layer - if one guest wants to 
read one byte of data nobody else can do anything until its IO operation 
is complete.  Remembering that you probably are only going to get around 
75 IOPs you can see how a VMFS volume with more than a handful of 
virtual machines on it can very quickly end up performing very poorly, 
especially with spinning rust underneath it.  A good RAID card with a 
LOT of cache memory can help with overall system performance, but 
backups by definition are going to be touching lots of areas of data 
that aren't likely to be in cache.

What I'm getting at is you might actually need to focus your efforts and 
dollars on the storage underneath your VMs before you do too much with 
your backup system.  A great big nice happy dedicated Bacula server 
would be nice, but if the VMs are still IOP constrained ESPECIALLY if 
they are actively in use while being backed up you probably won't see 
that much of an improvement.

An easy way to validate this would be to ensure you have attribute 
spooling turned on and to set up the attribute spooling to write to your 
NAS rather than to local storage.  That will get the VM storage 
infrastructure out of your backup pathway.
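
A rough sketch of what I mean (the path is made up, and this assumes the 
spooled attribute files land in the SD's working directory - verify 
where they actually end up on your build):

# bacula-sd.conf
Storage {
  Name = backup-sd
  Working Directory = /mnt/nas/bacula-work   # put the spool area on the NAS
  Pid Directory = /opt/bacula/working
}

together with Spool Attributes = yes in the Job resources.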

Bryn


--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] [7.0.5] Option to start spooling before volume mount?

2014-10-27 Thread Bryn Hughes
On 14-10-26 09:28 AM, Harald Schmalzbauer wrote:
   Regarding John Lockard's message of 26.10.2014 15:34 (localtime):
 I run into this issue with several of my servers and dealt with it by
 creating migrate jobs.  First job goes to disk.  Second job runs
 some reasonable time later and migrates the D2D job to tape.  I had a
 number of key servers I did this for with the advantage that I could
 offsite the tapes and keep the D2D job on disk till after the next
 backup had run.  This way I had immediate recovery available as well
 as disaster recovery.
 Well, this indeed is a good workaround. I'm already using job migration,
 but for other tasks (regular file backups).
 In that particular case, the data to archive onto LTO is already
 backup data (iSCSI-exported WindowsBackupDrives). But of course I could
 do another intermediate backup - I'll do that, I guess. As long as I'm not
 occupying too much space on my spool device (only 2TB), since my
 backup pool is pretty well filled and I don't like backing up the same
 data twice on the same spindle pool…
If you've already got the data on disk from another backup, you can use 
a 'COPY' job instead of a 'MIGRATE' job.  That will give you a copy on 
tape while still preserving the copy on disk as the 'primary' copy 
should you need to do a restore, without needing to run another backup 
job separately.
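
A minimal sketch of such a Copy job (names are made up, and the usual 
Client/FileSet/Schedule boilerplate the parser insists on is omitted):

Job {
  Name = copy-disk-to-tape
  Type = Copy
  Selection Type = PoolUncopiedJobs   # copy anything not yet copied
  Pool = DiskPool                     # the *source* pool
  Messages = Standard
}

The DiskPool resource then needs a Next Pool = TapePool entry so Bacula 
knows where the copies should land.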

Bryn

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Configuration reload for bacula-sd

2014-10-24 Thread Bryn Hughes

On 14-10-24 05:28 AM, Andrea Carpani wrote:

On 24/10/2014 12:47, Kern Sibbald wrote:

No, the SD and the FD cannot be reloaded.  On the FD in principle it
would be easy, but on the SD, it would be complicated if drives
changed.  What would you do with Jobs that are using a drive that
would be removed from the reload?

I'm not that deep into backup software, so maybe I'm saying something
stupid, but a config reload could do the same thing that other software
do, that is:
   - continue servicing current requests with the previous conf and
   -  use the new conf for new requests.

Something like a graceful restart apache does.

Does this make sense?

.a.c.

Sense to humans yes, sense to program code not so much.

The nature of the SD is that its configuration should almost never 
change - all it has for config is an inventory of hardware devices. 
There's a certain elegance in simplicity that would be lost trying to 
cover what should be a fairly narrow use case.


Imagine all the debugging you'd have to do - is this job failing because 
it is trying to use the config as it appears in the config file, or some 
other config in memory from some time in the past?  What happens if we 
now have different device names referring to the same physical device?  
We'd now need a whole locking mechanism that can cover the case of 
multiple versions of the SD config accessing the same physical device, 
possibly with different parameters and names.  How would you be able to 
tell which version of the config a given job is using?  Remember it is 
easy to start a backup job that lasts a couple of days - a full 
backup of a huge fileserver, for instance.


Things would get really messy really fast, with practically no benefit.  
Your SD config likely changes what, once or twice per year?  If that?  
It is much safer to just restart the SD when you have an idle period 
between backups.
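
Something along these lines during the quiet window (sysvinit-style 
paths shown, adjust for your distribution):

echo "status storage=File1" | bconsole   # confirm nothing is running
/etc/init.d/bacula-sd restart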


Bryn
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Configuration reload for bacula-sd

2014-10-24 Thread Bryn Hughes

On 14-10-24 06:26 AM, Andrea Carpani wrote:

On 24/10/2014 14:55, Bryn Hughes wrote:

On 14-10-24 05:28 AM, Andrea Carpani wrote:

On 24/10/2014 12:47, Kern Sibbald wrote:

No, the SD and the FD cannot be reloaded.  On the FD in principle it
would be easy, but on the SD, it would be complicated if drives
changed.  What would you do with Jobs that are using a drive that
would be removed from the reload?

I'm not that deep into backup software, so maybe I'm saying something
stupid, but a config reload could do the same thing that other software
do, that is:
   - continue servicing current requests with the previous conf and
   -  use the new conf for new requests.

Something like a graceful restart apache does.

Does this make sense?

.a.c.

Sense to humans yes, sense to program code not so much.

The nature of the SD is that its configuration should almost never 
change - all it has for config is an inventory of hardware devices. 
There's a certain elegance in simplicity that would be lost trying to 
cover what should be a fairly narrow use case.


Imagine all the debugging you have to do - is this job failing 
because it is trying to use the config as it appears in the config 
file, or some other config in memory from some time in the past? What 
happens if we now have different device names referring to the same 
physical device, we now need a whole locking mechanism that can cover 
the use case of multiple versions of the SD config accessing the same 
physical device, possibly with different parameters and names.  How 
would you be able to tell which version of the config a given job is 
using?  Remember it is easy to start a backup job that can last a 
couple of days - a full backup of a huge fileserver for instance.


Things would get really messy really fast, with practically no 
benefit.  Your SD config likely changes what, once or twice per 
year?  If that?  It is much safer to just restart the SD when you 
have an idle period between backups.


Bryn



Ok, so I guess my need to reload its conf often might come from the 
fact that I'm using it in the wrong way: I have no tapes whatsoever, but 
a 40TB disk array I want to use to back up ~100 hosts.
My plan was to use a separate directory (i.e. storage device) to keep 
backups for each server. Each storage device would contain backup 
files (volumes) grouped in a volume pool (one for each host). 
Ideally each volume includes data for a single job, but we can skip 
this for now.


I want such a setup because hosts come and go, and I want to be able to 
easily delete backup files and reclaim disk space when a host is retired.


Can someone suggest a better way to manage this?

Regards

Andrea
.a.c.


Aha, the root cause emerges... ;)

A better way would be to use a file pool and let Bacula worry about 
managing things.  Really there is no need to be worrying about where 
your individual files are located - that's why you have a director with 
a database.  It knows which files are associated with which hosts, 
there's no need to be adding another layer of complexity. When you do a 
restore you ask Bacula to restore a given host, it knows where the data 
is, it loads up the correct files and takes care of everything.
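
A minimal sketch of that approach (names and values are illustrative):

Pool {
  Name = FilePool
  Pool Type = Backup
  Label Format = Vol-            # let Bacula create volumes itself
  Maximum Volume Bytes = 50G
  Volume Retention = 60 days
  AutoPrune = yes
  Recycle = yes
}

When a host is retired you simply stop scheduling it; once its jobs pass 
their retention the volumes recycle on their own, or you can prune or 
delete that client's jobs in bconsole if you need the space back sooner.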


Bryn
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Configuration reload for bacula-sd

2014-10-24 Thread Bryn Hughes
On 14-10-24 09:32 AM, Alan Brown wrote:
 (Why would you want to disable a drive? If it's offline because it
 failed its cleaning cycle, as a f'instance)
So what you need is a feature request to be able to disable a drive via 
bconsole, not the ability to dynamically reload the SD configuration.

Bryn

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] sg device node stickyness

2014-10-22 Thread Bryn Hughes
On 14-10-22 01:24 AM, Robert Oschwald wrote:
 Using Bacula Server 7.0.4 on CentOS6_64.

 One thing which annoys me is that the sg device node for TapeAlert changes 
 with every reboot from /dev/sg4 to /dev/sg6 and vice versa.
 Is there any setting I can set in udev to get sticky device nodes for my 
 tape changer device node?

 System has an LSI 3Ware 9650SE controller installed, which is the one fighting 
 for the sg device node with my HP Ultrium 4 SCA tape device.

 Thanks,
 Robert


I use the entries in /dev/tape/by-id/ on my servers - not sure if CentOS 
has that implemented or not, but it's the best way to ensure a consistent 
path to a device, even if it changes SCSI controllers or whatever:

$ ls -l /dev/tape/by-id/
total 0
lrwxrwxrwx 1 root root  9 Oct 17 07:30 
scsi-1IBM_33622LX_002LX23B4384_LL0 -> ../../sg3
lrwxrwxrwx 1 root root  9 Oct 17 07:30 scsi-1IBM_ULT3580-TD3_1210240676 
-> ../../st0
lrwxrwxrwx 1 root root 10 Oct 17 07:30 
scsi-1IBM_ULT3580-TD3_1210240676-nst -> ../../nst0

Even if you move the tape drive to another machine, your config stays 
the same - my changer will always be on 
/dev/tape/by-id/scsi-1IBM_33622LX_002LX23B4384_LL0 and the drive itself 
will always be at /dev/tape/by-id/scsi-1IBM_ULT3580-TD3_1210240676

Now, if you happen to replace a tape drive or something you will need to 
update the config, but no matter what else happens on the server the 
/dev/tape/by-id (and /dev/disk/by-id entries for that matter) will stay 
the same.
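
Using those links in the SD config would look something like this (the 
resource names are made up; note the -nst link so the drive isn't 
rewound between jobs):

Autochanger {
  Name = LTO3-changer
  Device = LTO3-drive
  Changer Device = /dev/tape/by-id/scsi-1IBM_33622LX_002LX23B4384_LL0
  Changer Command = "/opt/bacula/scripts/mtx-changer %c %o %S %a %d"
}

Device {
  Name = LTO3-drive
  Media Type = LTO3
  Archive Device = /dev/tape/by-id/scsi-1IBM_ULT3580-TD3_1210240676-nst
  AutoChanger = yes
}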

Bryn

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] very long Dir inserting Attributes

2014-10-20 Thread Bryn Hughes
You have a very large number of jobs all trying to do inserts into the 
database at once.  What you are probably hitting is the limit of the 
number of transactions/second that your database server can sustain - 
most likely you are I/O bound at the disk level.

All of those 'inserting attribute' jobs are asking for many, many small 
I/O operations (IOPs) - since they are almost certainly not being 
written sequentially this will be causing a lot of seeks on the drive.  
Basically the way you are doing things is probably causing the head on 
the disk to chatter around like crazy.

You will probably find your jobs complete faster if you run fewer at 
once, as right now you are spending most of your time waiting for the 
heads to move on the disks rather than having them actually write anything.
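
Dialing the concurrency back is a one-line change - for example in 
bacula-dir.conf (the value is illustrative, and only the relevant line 
is shown):

Director {
  Maximum Concurrent Jobs = 4   # instead of ~15 insert phases at once
}

Maximum Concurrent Jobs can also be set per Job, Client and Storage 
resource if you want finer-grained control.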

If you run 'iostat -x 2' you will be able to see the %util for your 
physical disks (assuming you are on Linux) - you'll probably see your 
disks in the high 90s if not at 100% when all those 'Dir inserting 
Attributes' jobs are running.  As I mentioned above, what you are seeing 
is almost certainly the IOP limits of your hardware rather than the 
throughput limits.

Bryn

On 14-10-20 02:14 AM, Anton Gorlov wrote:
 The problem has been observed recently.

 After the database backup, the 'Dir inserting Attributes' phase runs for 
 more than 5 hours. Previously it took less time. The database size has 
 changed only slightly; it is currently 85GB.
 I/O load: iotop shows ~5 MB/s read and only a few KB/s written.
 Load average: 1.4-2.2
 atop shows the postgres process with a Bacula INSERT ... statement

 hdparm -tT /dev/mapper/main-pgsql

 /dev/mapper/main-pgsql:
Timing cached reads:   14626 MB in  2.00 seconds = 7321.24 MB/sec
Timing buffered disk reads: 918 MB in  3.03 seconds = 302.72 MB/sec



 Request list contains:
 BEGIN; LOCK TABLE Path IN SHARE ROW EXCLUSIVE MODE


 ===
 Running Jobs:
 Console connected at 20-Oct-2014 11:52
JobId Level   Name   Status
 ==
 8116 Fullfluorine.2014-10-19_00.25.00_40 Dir inserting Attributes
 8117 Fullhydrogen.2014-10-19_00.25.00_41 Dir inserting Attributes
 8118 Fullneon.2014-10-19_00.32.01_42 Dir inserting Attributes
 8122 Fullaluminium_DB0.2014-10-19_02.30.00_46 Dir inserting Attributes
 8124 Fullaluminium_DB5.2014-10-19_02.35.00_48 Dir inserting Attributes
 8127 Fullaluminium_DB2.2014-10-19_02.50.01_51 Dir inserting Attributes
 8129 Differe  hydrogen.2014-10-20_00.50.00_57 is waiting on max Client 
 jobs
 8130 Differe  sulfur_bcp.2014-10-20_01.10.00_01 Dir inserting Attributes
 8131 Differe  boron_bcp.2014-10-20_02.00.00_03 Dir inserting Attributes
 8132 Differe  neon.2014-10-20_02.05.00_04 Dir inserting Attributes
 8133 Fullaluminium_PG5.2014-10-20_02.10.00_05 Dir inserting Attributes
 8134 Differe  fluorine.2014-10-20_02.20.00_06 Dir inserting Attributes
 8135 Fullaluminium_DB0.2014-10-20_02.30.00_07 Dir inserting Attributes
 8136 Fullaluminium_DB4.2014-10-20_02.34.00_08 Dir inserting Attributes
 8137 Fullaluminium_DB5.2014-10-20_02.35.00_09 Dir inserting Attributes
 8138 Fullphosphorus_bcp.2014-10-20_02.35.00_10 Dir inserting 
 Attributes
 8139 Fullaluminium_PG0.2014-10-20_02.35.00_11 Dir inserting Attributes
 8140 Fullaluminium_PG2.2014-10-20_02.35.00_12 Dir inserting Attributes
 8141 Fullaluminium_DB2.2014-10-20_02.50.00_13 Dir inserting Attributes
 8142 Fullaluminium_BILL.2014-10-20_04.35.00_14 Dir inserting 
 Attributes
 


 postgresql.conf:
 max_connections = 100
 shared_buffers = 128MB
 temp_buffers = 512MB
 work_mem = 64MB
 maintenance_work_mem = 32MB
 fsync = off
 checkpoint_segments = 90



 --
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users


--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Recover full backup from the incrementals

2014-10-15 Thread Bryn Hughes

Really it would come down to what files you needed to restore...

If the files you need are all in the incremental then Bacula will 
restore them without touching the full backup at all.


If you need to restore a full filesystem including files that will not 
have changed since the last full, you will possibly lose some data, 
depending on what has changed.


In the scenario you are talking about (w2 being damaged), you could do a 
full restore of w1 plus the incrementals that depended on it which would 
take you to the end of Saturday, and then you could do a restore of all 
the incrementals going forwards from there.  You would miss any data 
changed after the Saturday incremental run after w1 but before the full 
w2 was run - the first incremental after w2 would have caught any 
changes after w2.  So you would lose about a day's worth of data - 
depending on what type of data changes you are dealing with that may or 
may not be acceptable to you.  The restore would be more complex than 
just saying restore everything to the specified date.
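
In bconsole that two-stage restore would look roughly like this (client 
name, timestamps and jobids are made up for the example):

* restore client=myhost before="2014-10-19 21:59:00" all done yes
    (gives you the w1 full plus its incrementals through Saturday)
* restore jobid=201,202,203 all done yes
    (the individual incremental jobs taken after w2, picked by jobid)

with the second restore laid down on top of the first.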


Whether or not this would give you a usable restore point depends 
heavily on your workload - if you are dealing entirely with files that 
are updated in place you would be fine, the end result would be the same 
as if w2 wasn't damaged at all.  If you are dealing with something like 
a busy mail server with a bunch of maildirs - a large number of files 
being created and deleted regularly, you would have a mess. You'd be 
missing a day's worth of mail plus you potentially would have 
'undeleted' quite a bit of data since you would be restoring files that 
may have been later deleted.


So the short answer is 'it depends' - this will be true of nearly any 
backup software when using the approach you describe.


Bryn

On 14-10-15 03:02 AM, Sieu Truc wrote:

Hello,

I have a question about bacula's recovering mechanism.

I do an full backup every week, and the dailly incremental backups, 
specified in the following configuration :


Schedule {
Name = schedule_test
Run = Level=Incremental Pool=Monday Monday at 22:00
Run = Level=Incremental Pool=Tuesday Tuesday at 22:00
Run = Level=Incremental Pool=Wednesday Wednesday at 22:00
Run = Level=Incremental Pool=Thursday Thursday at 22:00
Run = Level=Incremental Pool=Friday Friday at 22:00
Run = Level=Incremental Pool=Saturday Saturday at 22:00
 Run = Level=Full Pool=fullBK Sunday at 22:0
}

After 3 weeks, i have 3 full backup named w1,w2,w3 corresponding to 
each week.


If for some reasons, the backup w2 is damaged, would i lose all the 
incremental backup in the second week from monday to friday ?

And can i restore the backup w2 with w1 ?

Thanks,

PHAM Ngoc Truc


--


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Forcing baccula to purge/recycle a tape instead of appending?!

2014-10-15 Thread Bryn Hughes
There are a couple of things you can do in combination to get the result 
you want...

First off, in your pool set a 'volume use duration' - this will restrict 
how long a time span of data can exist on a tape.  For instance in your 
case you could set the volume use duration to 32 days, this would tell 
Bacula not to append any data to a tape 32 days after the first data was 
written to it.  The tape will get a status of 'USED' rather than 
'APPEND' once that time period passes.  Regardless of what happens with 
the expiry / purge jobs this will prevent the scenario you are 
describing from ever happening - Bacula would instead complain that it 
could not find a tape if one wasn't available, it would not append data 
to an 'old' tape until the jobs on it were purged.

Secondly yes, set the volume retention lower.  Bacula will not purge a 
tape unless it needs to so it won't automatically purge data when it 
hits the volume retention date - even if you did need data off of a tape 
that was 11 months and 28 days old but Bacula hadn't needed the tape for 
writing yet, the data would still be there so you won't be losing anything.
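
Putting the two together, the pool might look like this (values are 
illustrative):

Pool {
  Name = MonthlyTape
  Pool Type = Backup
  Volume Use Duration = 32 days   # stop appending a month after first write
  Volume Retention = 11 months    # purgeable by the time it's needed again
  AutoPrune = yes
  Recycle = yes
}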

Bryn

On 14-10-15 09:45 AM, Thorsten Reichelt wrote:
 Hi!

 I have a problem with my tape pool configuration.
 I want to use exactly 12 LTO tapes (one tape per month).
 After 12 months the oldest tape shall be reused without appending new data.

 But bacula seems to ignore my recycle and purge configuration (see below).
 After one year I reuse the old tape and bacula appends the new data to it.
 Then, after the second year has passed, the data is still appended to
 the tape!

 Now I have the problem that my september tape ran out of free space and
 I was
 asked to insert the august tape... a few MB later that tape was full
 also and
 bacula asked for the july tape :/ :/

 So I manually purged the september tape and restarted all related backup
 jobs.
 How can I configure bacula to purge the oldest tape instead of appending
 to it
 for more than two years?

 My pool configuration for the tape looks like this:

 # ==
 Pool {
Name = MonthlyTape
Use Volume Once = no
Pool Type = Backup
AutoPrune = yes
Recycle Current Volume = yes
Volume Retention = 12 month
Recycle = yes
 }
 # ==

 Would it make sense to set the retention time to 11 months 1 day?

 Thank you very much!!

Thorsten

 --
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users


--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Forcing baccula to purge/recycle a tape instead of appending?!

2014-10-15 Thread Bryn Hughes
One more thing that is worth mentioning...

If you truly want the monthly tapes to only ever be used for that 
particular month, you may wish to consider creating a pool for each 
month instead.  Then in your Schedule you can specify the specific pool 
for that month:

Schedule {
  Name = MonthlyBackups
  Run = Level=Full Pool=January jan 1 at 00:00
  Run = Level=Full Pool=February feb 1 at 00:00
  
  Run = Level=Full Pool=December dec 1 at 00:00
}

If in the future you need multiple tapes for a month, this approach 
would allow you to have multiple tapes in each month's pool as well.

Bryn

On 14-10-15 09:58 AM, Bryn Hughes wrote:
 There are a couple of things you can do in combination to get the result
 you want...

 First off, in your pool set a 'volume use duration' - this will restrict
 how long a time span of data can exist on a tape.  For instance in your
 case you could set the volume use duration to 32 days, this would tell
 Bacula not to append any data to a tape 32 days after the first data was
 written to it.  The tape will get a status of 'USED' rather than
 'APPEND' once that time period passes.  Regardless of what happens with
 the expiry / purge jobs this will prevent the scenario you are
 describing from ever happening - Bacula would instead complain that it
 could not find a tape if one wasn't available, it would not append data
 to an 'old' tape until the jobs on it were purged.

 Secondly yes, set the volume retention lower.  Bacula will not purge a
 tape unless it needs to so it won't automatically purge data when it
 hits the volume retention date - even if you did need data off of a tape
 that was 11 months and 28 days old but Bacula hadn't needed the tape for
 writing yet, the data would still be there so you won't be losing anything.

 Bryn

 On 14-10-15 09:45 AM, Thorsten Reichelt wrote:
 Hi!

 I have a problem with my tape pool configuration.
 I want to use exactly 12 LTO tapes (one tape per month).
 After 12 months the oldest tape shall be reused without appending new data.

 But bacula seems to ignore my recycle and purge configuration (see below).
 After one year I reuse the old tape and bacula appends the new data to it.
 Then, after the second year has passed, the data is still appended to
 the tape!

 Now I have the problem that my september tape ran out of free space and
 I was
 asked to insert the august tape... a few MB later that tape was full
 also and
 bacula asked for the july tape :/ :/

 So I manually purged the september tape and restarted all related backup
 jobs.
 How can I configure bacula to purge the oldest tape instead of appending
 to it
 for more than two years?

 My pool configuration for the tape looks like this:

 # ==
 Pool {
 Name = MonthlyTape
 Use Volume Once = no
 Pool Type = Backup
 AutoPrune = yes
 Recycle Current Volume = yes
 Volume Retention = 12 month
 Recycle = yes
 }
 # ==

 Would it make sense to set the retention time to 11 months 1 day?

 Thank you very much!!

 Thorsten

 --
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users

 --
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users


--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Communication issues with 7.0.5

2014-10-07 Thread Bryn Hughes
I've been having some issues with my old 5.2.6 installation and I 
thought it would be a good idea to upgrade to a more recent Bacula 
version, so I downloaded v7.0.5 and also the 'debian' files to allow me 
to build packages.  I'm using Ubuntu 12.04.

I have not been able to get communication working between the console 
and the director, I am getting an Authorization Error indicating that I 
probably have something wrong with my password, yet I've quadruple 
checked everything in that regard.

Steps I've taken:

- Started from scratch on the config files - I am using the base config 
included in the package with nothing else in it
- Dumped the database and started from scratch
- Restarted everything

I'm fairly certain at this point that I have a bug related to 
authorization as there's not really any other options left.  I had a 
completely working installation with 5.2.6 and I've been using Bacula 
for a long time so I'm confident the config is all ok. Is there a 
known issue around using v7.0.5 on Ubuntu 12.04?  This smells like a 
library compatibility issue or something similar to me...

Here's the relevant parts of the config files:

Director {                    # define myself
   Name = DIRNAME.local-dir
   DIRport = 9101             # where we listen for UA connections
   QueryFile = /opt/bacula/scripts/query.sql
   WorkingDirectory = /opt/bacula/working
   PidDirectory = /opt/bacula/working
   Maximum Concurrent Jobs = 20
   Password = 79gG9MACp9ZkQWoXSVxtOglRRxJL_hCuT # Console password
   Messages = Daemon
}

Director {
   Name = DIRNAME.local-dir
   DIRport = 9101
   address = DIRNAME.local
   Password = 79gG9MACp9ZkQWoXSVxtOglRRxJL_hCuT
}

When I start the director with debug mode, I get:

/opt/bacula/bin/bacula-dir -c /opt/bacula/etc/bacula-dir.conf -d100 -f
bacula-dir: dird.c:194-0 Debug level = 100
bacula-dir: address_conf.c:264-0 Initaddr 0.0.0.0:9101
bacula-dir: jcr.c:128-0 read_last_jobs seek to 192
bacula-dir: jcr.c:135-0 Read num_items=0
bacula-dir: dir_plugins.c:148-0 Load dir plugins
bacula-dir: dir_plugins.c:150-0 No dir plugin dir!
bacula-dir: mysql.c:697-0 db_init_database first time
bacula-dir: mysql.c:165-0 mysql_init done
bacula-dir: mysql.c:190-0 mysql_real_connect done
bacula-dir: mysql.c:192-0 db_user=## db_name=## db_password=##
bacula-dir: mysql.c:215-0 opendb ref=1 connected=1 db=1129b20
bacula-dir: mysql.c:237-0 closedb ref=0 connected=1 db=1129b20
bacula-dir: mysql.c:244-0 close db=1129b20
DIRNAME.local-dir: bnet_server.c:87-0 Addresses 0.0.0.0:9101
DIRNAME.local-dir: job.c:1528-0 wstorage=File1
DIRNAME.local-dir: job.c:1537-0 wstore=File1 where=Job resource
DIRNAME.local-dir: job.c:1211-0 JobId=0 created 
Job=*JobMonitor*.2014-10-07_10.42.26_01
DIRNAME.local-dir: bnet.c:566-0 who=client host=###.###.###.### port=9101

bconsole -d100
Connecting to Director DIRNAME.local:9101
bconsole: bsock.c:208-0 Current ###.###.###.###:9101 All 
###.###.###.###:9101
bconsole: bsock.c:137-0 who=Director daemon host=DIRNAME.local port=9101
bconsole: bsock.c:310-0 OK connected to server  Director daemon 
DIRNAME.local:9101.
Director authorization problem.
Most likely the passwords do not agree.
If you are using TLS, there may have been a certificate validation error 
during the TLS handshake.
Please see 
http://www.bacula.org/en/rel-manual/Bacula_Freque_Asked_Questi.html#SECTION0026 
for help.




--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Communication issues with 7.0.5

2014-10-07 Thread Bryn Hughes
Yup, I've gone as far as to reboot the server just to make absolutely 
sure it wasn't an old process hanging about.  Plus starting everything 
manually in debug mode rules that out.

Bryn

On 14-10-07 12:01 PM, Jonathan Bayer wrote:
 This happened to me the other day.

 Make sure that ALL the bacula processes are stopped.  My problem was
 that one of them was continuing to hang around with the old config.
 Drove me up the wall until I figured it out.


 JBB


 On 10/7/14, 2:00 PM, Bryn Hughes wrote:
 I've been having some issues with my old 5.2.6 installation and I
 thought it would be a good idea to upgrade to a more recent Bacula
 version, so I downloaded v7.0.5 and also the 'debian' files to allow me
 to build packages.  I'm using Ubuntu 12.04.

 I have not been able to get communication working between the console
 and the director, I am getting an Authorization Error indicating that I
 probably have something wrong with my password, yet I've quadruple
 checked everything in that regard.

 Steps I've taken:

 - Started from scratch on the config files - I am using the base config
 included in the package with nothing else in it
 - Dumped the database and started from scratch
 - Restarted everything

 I'm fairly certain at this point that I have a bug related to
 authorization as there's not really any other options left.  I had a
 completely working installation with 5.2.6 and I've been using Bacula
 for a lng time so I'm confident the config is all ok. Is there a
 known issue around using v7.0.5 on Ubuntu 12.04?  This smells like a
 library compatibility issue or something similar to me...

 Here's the relevant parts of the config files:

 Director {# define myself
  Name = DIRNAME.local-dir
  DIRport = 9101# where we listen for UA connections
  QueryFile = /opt/bacula/scripts/query.sql
  WorkingDirectory = /opt/bacula/working
  PidDirectory = /opt/bacula/working
  Maximum Concurrent Jobs = 20
  Password = 79gG9MACp9ZkQWoXSVxtOglRRxJL_hCuT # Console 
 password
  Messages = Daemon
 }

 Director {
  Name = DIRNAME.local-dir
  DIRport = 9101
  address = DIRNAME.local
  Password = 79gG9MACp9ZkQWoXSVxtOglRRxJL_hCuT
 }

 When I start the director with debug mode, I get:

 /opt/bacula/bin/bacula-dir -c /opt/bacula/etc/bacula-dir.conf -d100 -f
 bacula-dir: dird.c:194-0 Debug level = 100
 bacula-dir: address_conf.c:264-0 Initaddr 0.0.0.0:9101
 bacula-dir: jcr.c:128-0 read_last_jobs seek to 192
 bacula-dir: jcr.c:135-0 Read num_items=0
 bacula-dir: dir_plugins.c:148-0 Load dir plugins
 bacula-dir: dir_plugins.c:150-0 No dir plugin dir!
 bacula-dir: mysql.c:697-0 db_init_database first time
 bacula-dir: mysql.c:165-0 mysql_init done
 bacula-dir: mysql.c:190-0 mysql_real_connect done
 bacula-dir: mysql.c:192-0 db_user=## db_name=## db_password=##
 bacula-dir: mysql.c:215-0 opendb ref=1 connected=1 db=1129b20
 bacula-dir: mysql.c:237-0 closedb ref=0 connected=1 db=1129b20
 bacula-dir: mysql.c:244-0 close db=1129b20
 DIRNAME.local-dir: bnet_server.c:87-0 Addresses 0.0.0.0:9101
 DIRNAME.local-dir: job.c:1528-0 wstorage=File1
 DIRNAME.local-dir: job.c:1537-0 wstore=File1 where=Job resource
 DIRNAME.local-dir: job.c:1211-0 JobId=0 created
 Job=*JobMonitor*.2014-10-07_10.42.26_01
 DIRNAME.local-dir: bnet.c:566-0 who=client host=###.###.###.### port=9101

 bconsole -d100
 Connecting to Director DIRNAME.local:9101
 bconsole: bsock.c:208-0 Current ###.###.###.###:9101 All
 ###.###.###.###:9101
 bconsole: bsock.c:137-0 who=Director daemon host=DIRNAME.local port=9101
 bconsole: bsock.c:310-0 OK connected to server  Director daemon
 DIRNAME.local:9101.
 Director authorization problem.
 Most likely the passwords do not agree.
 If you are using TLS, there may have been a certificate validation error
 during the TLS handshake.
 Please see
 http://www.bacula.org/en/rel-manual/Bacula_Freque_Asked_Questi.html#SECTION0026
 for help.



--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Communication issues with 7.0.5

2014-10-07 Thread Bryn Hughes
I tried initially with my 5.x config. I then re-wrote a new config from 
scratch and tested.  And then when that didn't work I also tried using 
the bundled default configuration.

I'm quite confident that this is going to turn out to be an issue around 
a version of a library I've compiled against, or something of the sort.

Bryn

On 14-10-07 12:49 PM, Jonathan Bayer wrote:
 ok.

 Are you using a migrated config, or a new one?

 What system are you running on?

 I'm asking because I'm running a 7.0.5 on a CentOS 6 box without any
 problems.


 JBB

 On 10/7/14, 3:06 PM, Bryn Hughes wrote:
 Yup, I've gone as far as to reboot the server just to make absolutely
 sure it wasn't an old process hanging about.  Plus starting everything
 manually in debug mode rules that out.

 Bryn

 On 14-10-07 12:01 PM, Jonathan Bayer wrote:
 This happened to me the other day.

 Make sure that ALL the bacula processes are stopped.  My problem was
 that one of them was continuing to hang around with the old config.
 Drove me up the wall until I figured it out.


 JBB


 On 10/7/14, 2:00 PM, Bryn Hughes wrote:
 I've been having some issues with my old 5.2.6 installation and I
 thought it would be a good idea to upgrade to a more recent Bacula
 version, so I downloaded v7.0.5 and also the 'debian' files to allow me
 to build packages.  I'm using Ubuntu 12.04.

 I have not been able to get communication working between the console
 and the director, I am getting an Authorization Error indicating that I
 probably have something wrong with my password, yet I've quadruple
 checked everything in that regard.

 Steps I've taken:

 - Started from scratch on the config files - I am using the base config
 included in the package with nothing else in it
 - Dumped the database and started from scratch
 - Restarted everything

 I'm fairly certain at this point that I have a bug related to
 authorization as there's not really any other options left.  I had a
 completely working installation with 5.2.6 and I've been using Bacula
 for a long time so I'm confident the config is all ok. Is there a
 known issue around using v7.0.5 on Ubuntu 12.04?  This smells like a
 library compatibility issue or something similar to me...

 Here's the relevant parts of the config files:

 Director {# define myself
Name = DIRNAME.local-dir
DIRport = 9101# where we listen for UA connections
QueryFile = /opt/bacula/scripts/query.sql
WorkingDirectory = /opt/bacula/working
PidDirectory = /opt/bacula/working
Maximum Concurrent Jobs = 20
Password = 79gG9MACp9ZkQWoXSVxtOglRRxJL_hCuT # Console 
 password
Messages = Daemon
 }

 Director {
Name = DIRNAME.local-dir
DIRport = 9101
address = DIRNAME.local
Password = 79gG9MACp9ZkQWoXSVxtOglRRxJL_hCuT
 }

 When I start the director with debug mode, I get:

 /opt/bacula/bin/bacula-dir -c /opt/bacula/etc/bacula-dir.conf -d100 -f
 bacula-dir: dird.c:194-0 Debug level = 100
 bacula-dir: address_conf.c:264-0 Initaddr 0.0.0.0:9101
 bacula-dir: jcr.c:128-0 read_last_jobs seek to 192
 bacula-dir: jcr.c:135-0 Read num_items=0
 bacula-dir: dir_plugins.c:148-0 Load dir plugins
 bacula-dir: dir_plugins.c:150-0 No dir plugin dir!
 bacula-dir: mysql.c:697-0 db_init_database first time
 bacula-dir: mysql.c:165-0 mysql_init done
 bacula-dir: mysql.c:190-0 mysql_real_connect done
 bacula-dir: mysql.c:192-0 db_user=## db_name=## db_password=##
 bacula-dir: mysql.c:215-0 opendb ref=1 connected=1 db=1129b20
 bacula-dir: mysql.c:237-0 closedb ref=0 connected=1 db=1129b20
 bacula-dir: mysql.c:244-0 close db=1129b20
 DIRNAME.local-dir: bnet_server.c:87-0 Addresses 0.0.0.0:9101
 DIRNAME.local-dir: job.c:1528-0 wstorage=File1
 DIRNAME.local-dir: job.c:1537-0 wstore=File1 where=Job resource
 DIRNAME.local-dir: job.c:1211-0 JobId=0 created
 Job=*JobMonitor*.2014-10-07_10.42.26_01
 DIRNAME.local-dir: bnet.c:566-0 who=client host=###.###.###.### port=9101

 bconsole -d100
 Connecting to Director DIRNAME.local:9101
 bconsole: bsock.c:208-0 Current ###.###.###.###:9101 All
 ###.###.###.###:9101
 bconsole: bsock.c:137-0 who=Director daemon host=DIRNAME.local port=9101
 bconsole: bsock.c:310-0 OK connected to server  Director daemon
 DIRNAME.local:9101.
 Director authorization problem.
 Most likely the passwords do not agree.
 If you are using TLS, there may have been a certificate validation error
 during the TLS handshake.
 Please see
 http://www.bacula.org/en/rel-manual/Bacula_Freque_Asked_Questi.html#SECTION0026
 for help.




--

Re: [Bacula-users] Communication issues with 7.0.5

2014-10-07 Thread Bryn Hughes
I've recompiled the same source package with the same debian control 
files and same config files on a Ubuntu 14.04 system and everything 
appears to work fine, so my suspicions about this being related to a 
specific library version in Ubuntu 12.04 appear to be valid...  I know 
it's an older version now but it IS still supported through 2017.

Bryn

On 14-10-07 01:43 PM, Brady, Mike wrote:
 I upgraded my 5.2 system to 7 in place with no configuration changes
 required using the packages from
 http://repos.fedorapeople.org/repos/slaanesh/bacula7/epel-6/x86_64/.

 The 5.2 install used packages from
 https://repos.fedorapeople.org/repos/slaanesh/bacula/epel-6/x86_64/, so
 your mileage may vary depending on where your 5.2 came from.


 On 2014-10-08 09:19, Bryn Hughes wrote:
 I tried initially with my 5.x config. I then re-wrote a new config from
 scratch and tested.  And then when that didn't work I also tried using
 the bundled default configuration.

 I'm quite confident that this is going to turn out to be an issue
 around
 a version of a library I've compiled against, or something of the sort.

 Bryn

 On 14-10-07 12:49 PM, Jonathan Bayer wrote:
 ok.

 Are you using a migrated config, or a new one?

 What system are you running on?

 I'm asking because I'm running a 7.0.5 on a CentOS 6 box without any
 problems.


 JBB

 On 10/7/14, 3:06 PM, Bryn Hughes wrote:
 Yup, I've gone as far as to reboot the server just to make absolutely
 sure it wasn't an old process hanging about.  Plus starting
 everything
 manually in debug mode rules that out.

 Bryn

 On 14-10-07 12:01 PM, Jonathan Bayer wrote:
 This happened to me the other day.

 Make sure that ALL the bacula processes are stopped.  My problem was
 that one of them was continuing to hang around with the old config.
 Drove me up the wall until I figured it out.


 JBB


 On 10/7/14, 2:00 PM, Bryn Hughes wrote:
 I've been having some issues with my old 5.2.6 installation and I
 thought it would be a good idea to upgrade to a more recent Bacula
 version, so I downloaded v7.0.5 and also the 'debian' files to
 allow me
 to build packages.  I'm using Ubuntu 12.04.

 I have not been able to get communication working between the
 console
 and the director, I am getting an Authorization Error indicating
 that I
 probably have something wrong with my password, yet I've quadruple
 checked everything in that regard.

 Steps I've taken:

 - Started from scratch on the config files - I am using the base
 config
 included in the package with nothing else in it
 - Dumped the database and started from scratch
 - Restarted everything

 I'm fairly certain at this point that I have a bug related to
 authorization as there's not really any other options left.  I had
 a
 completely working installation with 5.2.6 and I've been using
 Bacula
 for a long time so I'm confident the config is all ok. Is there
 a
 known issue around using v7.0.5 on Ubuntu 12.04?  This smells like
 a
 library compatibility issue or something similar to me...

 Here's the relevant parts of the config files:

 Director {# define myself
 Name = DIRNAME.local-dir
 DIRport = 9101# where we listen for UA
 connections
 QueryFile = /opt/bacula/scripts/query.sql
 WorkingDirectory = /opt/bacula/working
 PidDirectory = /opt/bacula/working
 Maximum Concurrent Jobs = 20
 Password = 79gG9MACp9ZkQWoXSVxtOglRRxJL_hCuT #
 Console password
 Messages = Daemon
 }

 Director {
 Name = DIRNAME.local-dir
 DIRport = 9101
 address = DIRNAME.local
 Password = 79gG9MACp9ZkQWoXSVxtOglRRxJL_hCuT
 }

 When I start the director with debug mode, I get:

 /opt/bacula/bin/bacula-dir -c /opt/bacula/etc/bacula-dir.conf -d100
 -f
 bacula-dir: dird.c:194-0 Debug level = 100
 bacula-dir: address_conf.c:264-0 Initaddr 0.0.0.0:9101
 bacula-dir: jcr.c:128-0 read_last_jobs seek to 192
 bacula-dir: jcr.c:135-0 Read num_items=0
 bacula-dir: dir_plugins.c:148-0 Load dir plugins
 bacula-dir: dir_plugins.c:150-0 No dir plugin dir!
 bacula-dir: mysql.c:697-0 db_init_database first time
 bacula-dir: mysql.c:165-0 mysql_init done
 bacula-dir: mysql.c:190-0 mysql_real_connect done
 bacula-dir: mysql.c:192-0 db_user=## db_name=##
 db_password=##
 bacula-dir: mysql.c:215-0 opendb ref=1 connected=1 db=1129b20
 bacula-dir: mysql.c:237-0 closedb ref=0 connected=1 db=1129b20
 bacula-dir: mysql.c:244-0 close db=1129b20
 DIRNAME.local-dir: bnet_server.c:87-0 Addresses 0.0.0.0:9101
 DIRNAME.local-dir: job.c:1528-0 wstorage=File1
 DIRNAME.local-dir: job.c:1537-0 wstore=File1 where=Job resource
 DIRNAME.local-dir: job.c:1211-0 JobId=0 created
 Job=*JobMonitor*.2014-10-07_10.42.26_01
 DIRNAME.local-dir: bnet.c:566-0 who=client host=###.###.###.###
 port=9101

 bconsole -d100
 Connecting to Director DIRNAME.local:9101
 bconsole: bsock.c:208-0 Current ###.###.###.###:9101 All
 ###.###.###.###:9101