Re: [Bacula-users] Write Retry on Network Error without Restarting the Job

2016-02-26 Thread Pablo Marques

Here it is, in 7.2: 

http://www.bacula.org/7.2.x-manuals/en/main/New_Features_in_7_2_0.html#SECTION003213000

Pablo
- Original Message -
From: "Kern Sibbald" 
To: "Duane Webber" , "bacula-users" 

Sent: Friday, February 26, 2016 1:04:31 AM
Subject: Re: [Bacula-users] Write Retry on Network Error without Restarting the 
Job

No, this feature does not exist.  However, in a recent version of Bacula 
(7.2.0 I think, but maybe 7.4.0) you can restart failed jobs and it will 
start roughly from the point the backup failed.  Look in the New 
Features section of the document for Incomplete Jobs.
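
For the archives, restarting a stopped or failed (incomplete) job from 
bconsole looks roughly like this on 7.2+; the exact prompts and options 
depend on the version, so treat this as a sketch and check the New Features 
documentation:

==
*restart
==

The command lists the jobs that are eligible to be restarted and asks which 
one to resume; the restarted job continues roughly from the point where the 
previous run stopped.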

Best regards,
Kern

On 02/26/2016 02:58 AM, Duane Webber wrote:
> Is there an option to have Bacula retry write operations without
> restarting the job?   A search shows the last time this question was
> asked was in 2010 (the answer was no) and I'm hoping there is a change
> since then.  I have remote backups that can take days, if not a week
> plus and it is not uncommon to run into a network error where Bacula
> aborts the backup job altogether.
>
> If Bacula does not have this capability, is there a workaround such that
> the remote backups still end up visible to Bacula directly?
>
> I appreciate any suggestions.
>
> Thanks,
>
> Duane
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>





Re: [Bacula-users] Write Retry on Network Error without Restarting the Job

2016-02-25 Thread Pablo Marques

I believe this feature is now available in version 7.4.


From: "Clark, Patti"  
To: "Duane Webber" , "bacula-users" 
 
Sent: Thursday, February 25, 2016 3:09:40 PM 
Subject: Re: [Bacula-users] Write Retry on Network Error without Restarting the 
Job 

Bacula Enterprise has this feature where a job can be restarted. There is no 
workaround other than breaking up the jobs into smaller chunks. 
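
Breaking the work into smaller chunks can be sketched like this (the fileset 
names and paths below are made up for illustration):

==
# One FileSet (and one Job) per large subtree, instead of a single
# huge job, so a network error only forces a re-run of one chunk.
FileSet {
  Name = "remote-part1"
  Include {
    Options { signature = MD5 }
    File = /data/part1
  }
}
FileSet {
  Name = "remote-part2"
  Include {
    Options { signature = MD5 }
    File = /data/part2
  }
}
==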

Patti Clark 
Linux System Administrator 
R&D Systems Support Oak Ridge National Laboratory 

From: Duane Webber < duane.web...@comcast.net > 
Date: Thursday, February 25, 2016 at 11:58 AM 
To: " bacula-users@lists.sourceforge.net " < bacula-users@lists.sourceforge.net 
> 
Subject: [Bacula-users] Write Retry on Network Error without Restarting the Job 

Is there an option to have Bacula retry write operations without restarting the 
job? A search shows the last time this question was asked was in 2010 (the 
answer was no) and I'm hoping there is a change since then. I have remote 
backups that can take days, if not a week plus and it is not uncommon to run 
into a network error where Bacula aborts the backup job altogether. 

If Bacula does not have this capability, is there a workaround such that the 
remote backups still end up visible to Bacula directly? 

I appreciate any suggestions. 

Thanks, 

Duane 





[Bacula-users] Incomplete Jobs

2012-08-20 Thread Pablo Marques
Hi 

Is the feature "Incomplete Jobs/stop/restart"
from Bacula Enterprise 6.0 going to be available
in the (free) version 5 series?

We could definitely use this feature for some
of our Windows users who reboot their machines
or lose their network connection in the middle of a backup.

Regards

Pablo Marques



Re: [Bacula-users] Backups increased to 500GB after adding to IPA domain

2012-04-05 Thread Pablo Marques
Abdullah:

Make sure you have this in your fileset definition

Sparse = yes

Also you can do this in bconsole:

estimate job=client_job_whatever listing

It will print the list of files to be backed up.
Look for big files in the list.
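
For reference, Sparse goes inside the Options block of the FileSet; a minimal 
sketch (the names here are placeholders):

==
FileSet {
  Name = "client-set"
  Include {
    Options {
      signature = MD5
      Sparse = yes
    }
    File = /
  }
}
==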


Pablo
- Original Message -
From: "Abdullah Sofizada" 
To: bacula-users@lists.sourceforge.net
Sent: Thursday, April 5, 2012 11:21:37 AM
Subject: [Bacula-users] Backups increased to 500GB after adding to IPA domain

Hi guys, this is a very weird one. I've been trying to tackle this for the 
past two weeks or so, to no avail...

My director runs on RHEL 5.5 with bacula 5.0.2. My clients also run 
RHEL 5.5 with bacula 5.0.2.

Each of the bacula clients are less than 15 GB of data. Backups of each 
client were fine. But two weeks ago the backups for each of these 
clients ballooned to 550GB each!!

When I do a df -h... the servers only show 15GB of space used. The one 
difference I noticed in the past two weeks is...I added these servers to 
our new IPA domain. Which in essence is an ldap server using kerberos 
authentication for identity management. This server runs on Rhel 6.2.

I have many other clients which are not part of the IPA domain that are 
backing up just fine, so I'm sure it has something to do with this. I have 
even tried removing my bacula clients from the IPA domain and then ran a 
backup, but it still reports 550GB of data being backed up.

I appreciate the help...



-- 
-Abdullah




Re: [Bacula-users] Question

2012-03-28 Thread Pablo Marques
Juan, try upgrading to 5.2.6 and use these settings:
==
FileSet {
  Name = "Root-set"
  Include {
Options {
  signature = MD5
  compression = GZIP
  Sparse = yes
  aclsupport = yes
  xattrsupport = yes
 }
File = /
  }
}

Job {
  Name = "client-base-fd-data"
  JobDefs = "jobbaculadefs"
  Client = stor-server3-fd
  FileSet = "Root-set"
  Storage = File
  Pool = File
  Level = Base
}

Job {
  Name = "client-fd-data"
  JobDefs = "jobbaculadefs"
  Client = stor-server3-fd
  FileSet = "Root-set"
  Base = client-base-fd-data
  Storage = File
  Pool = File
  Accurate = yes
}
===

You may need to insert some extra settings in the jobs above.

# Reload bacula and run the base job:
run job=client-base-fd-data
# Let it finish and run the FULL
run job=client-fd-data level=FULL
# Let it finish, check to make sure that this FULL used the Base job in the 
report.
# Now run a differential:
run job=client-fd-data level=Differential
# The report should come back with almost no changes; the differential is 
really an incremental from the last full.

Let me know.

Pablo
- Original Message -
From: "Juan Diaz" 
To: "Pablo Marques" 
Cc: "José Frederico" , 
bacula-users@lists.sourceforge.net
Sent: Wednesday, March 28, 2012 9:44:37 AM
Subject: Re: [Bacula-users] Question

Pablo,

I have tried all day with your configuration and it doesn't work. Could you 
please send me your complete configuration so I can search for the differences 
between the two of them.

We work with Bacula 5.0.2 and linux debian 2.6.32-5-amd64.

I also tried deduplication on the same machine: I ran the base first, then the 
full, and later the differential, and the differential was smaller. But when I 
use that base with another machine, the differential is bigger than the full 
backup, and that doesn't make sense.

I CC the group in case someone has had the same trouble.

Thank you very much for your help,


Juan David DIAZ
INSA de Lyon
Universidad ICESI
+33 6 49 59 23 86


- Original Message -
From: "Juan Diaz" 
To: "Pablo Marques" 
Cc: "José Frederico" , 
bacula-users@lists.sourceforge.net
Sent: Wednesday, March 28, 2012 09:47:23
Subject: Re: [Bacula-users] Question

Thank you, Pablo, but it didn't work. The incremental or differential is still 
bigger than the full backup.

Juan David DIAZ
INSA Lyon
Universidad ICESI
+33 6 49 59 23 86


- Original Message -
From: "Pablo Marques" 
To: "Juan Diaz" 
Cc: "José Frederico" , 
bacula-users@lists.sourceforge.net
Sent: Tuesday, March 27, 2012 17:40:41
Subject: Re: [Bacula-users] Question

Juan:

That is not happening to us, and the only obvious difference between our setup 
and yours is in the fileset definition.

Can you remove these lines in your fileset and give it a try?
BaseJob = pmugcs5
Accurate = mcs
Verify = pin5

This is our fileset:

FileSet {
  Name = "Root-set"
  Include {
Options {
  signature = MD5
  compression = LZO
  Sparse = yes
  aclsupport = yes
  xattrsupport = yes
 }
File = /
  }
}

Good luck

Pablo

- Original Message -
From: "Juan Diaz" 
To: bacula-users@lists.sourceforge.net
Cc: "José Frederico" 
Sent: Tuesday, March 27, 2012 9:35:27 AM
Subject: [Bacula-users] Question

Hello,

Right now we're evaluating and deciding which is going to be our backup 
software. Bacula is the leading option so far, but we're having trouble with 
the implementation of file deduplication.

When we make a base job of the virtual machine that is going to be the base, 
its size is 553 MB. After that, when we make a full backup of a machine that 
was created from the "base" machine, its size is 242 MB. That's OK, because it 
only has to back up the files that differ from the files of the base.

The problem comes when we make a differential backup right after the full 
backup. The result should be 0 MB or something really small, but we're getting 
backups of 462 MB.

We used a configuration similar to the one we found in Bacula's manual, 
http://www.bacula.org/5.2.x-manuals/en/main/main.pdf, Chapter 34.

We have tried removing BaseJob, Accurate and Verify from the FileSet, but it 
didn't work. We always get the same sizes, which don't make sense.

We would like to know if you have a solution for our problem, because it 
worked when we didn't use deduplication (just full and differential), but we 
are very interested in using it, as it would save us a lot of space on our 
hard drives.

We attach the job configuration here:

Job {
  Name = "BackupBasse"
  Type = Backup
  Level = Base
  Client = basse-fd
  FileSet = "TOUT"
 Schedule = "programme2"
  Storage = File
  

Re: [Bacula-users] Question

2012-03-27 Thread Pablo Marques
Juan:

That is not happening to us, and the only obvious difference between our setup 
and yours is in the fileset definition.

Can you remove these lines in your fileset and give it a try?
BaseJob = pmugcs5
Accurate = mcs
Verify = pin5

This is our fileset:

FileSet {
  Name = "Root-set"
  Include {
Options {
  signature = MD5
  compression = LZO
  Sparse = yes
  aclsupport = yes
  xattrsupport = yes
 }
File = /
  }
}

Good luck

Pablo

- Original Message -
From: "Juan Diaz" 
To: bacula-users@lists.sourceforge.net
Cc: "José Frederico" 
Sent: Tuesday, March 27, 2012 9:35:27 AM
Subject: [Bacula-users] Question

Hello, 

Right now we're evaluating and deciding which is going to be our backup 
software. Bacula is the leading option so far, but we're having trouble with 
the implementation of file deduplication.

When we make a base job of the virtual machine that is going to be the base, 
its size is 553 MB. After that, when we make a full backup of a machine that 
was created from the "base" machine, its size is 242 MB. That's OK, because it 
only has to back up the files that differ from the files of the base.

The problem comes when we make a differential backup right after the full 
backup. The result should be 0 MB or something really small, but we're getting 
backups of 462 MB.

We used a configuration similar to the one we found in Bacula's manual, 
http://www.bacula.org/5.2.x-manuals/en/main/main.pdf, Chapter 34.

We have tried removing BaseJob, Accurate and Verify from the FileSet, but it 
didn't work. We always get the same sizes, which don't make sense.

We would like to know if you have a solution for our problem, because it 
worked when we didn't use deduplication (just full and differential), but we 
are very interested in using it, as it would save us a lot of space on our 
hard drives.

We attach the job configuration here:

Job {
  Name = "BackupBasse"
  Type = Backup
  Level = Base
  Client = basse-fd
  FileSet = "TOUT"
 Schedule = "programme2"
  Storage = File
  Messages = Standard
  Pool = File
  Priority = 10
  Write Bootstrap = "/var/lib/bacula/%c.bsr"
}
Schedule{
Name = "programme2"
Run = at 14:30
}

Job {
  Name = "Backupftp"
  Type = Backup
  Base = Backupftp, BackupBasse
  Accurate=yes
  Client = basseftp-fd
  FileSet = "TOUT"
  Schedule = "programme"
  Storage = File
  Messages = Standard
  Pool = File
  Priority = 10
  Write Bootstrap = "/var/lib/bacula/%c.bsr"
}

Schedule{
Name = "programme"
Run = Level=Full at 14:31
Run = Level=Incremental at 14:32

}

FileSet {
  Name = "TOUT"
  Include {
Options {
BaseJob = pmugcs5
Accurate = mcs
Verify = pin5
}
File = /
  }
}

Thank you very much for your help,

Juan David DIAZ



Re: [Bacula-users] Problems with Jumbo packets?

2012-01-06 Thread Pablo Marques
Hi,

You can use ping to check the end-to-end maximum working MTU:

ping -M do -s 8000 192.168.X.X

Increase or reduce the size (8000) until you get pings back from the FD client. 
Mine goes up to 8972, which matches a 9000-byte MTU: 9000 minus 20 bytes of IP 
header and 8 bytes of ICMP header.

Pablo
- Original Message -
From: "Frank Sweetser" 
To: bacula-users@lists.sourceforge.net
Sent: Friday, January 6, 2012 11:34:44 AM
Subject: Re: [Bacula-users] Problems with Jumbo packets?

On 01/06/2012 11:30 AM, Wolfgang Denk wrote:
> Hello,
> 
> I know that this is not exactly related to Bacula, but maybe some
> other user has seen similar behaviour.
> 
> I have problems when trying to enable support for jumbo frames on the
> network.  All NICs and switches are supposed to support it; however,
> on some systems the communication from the FD to the SD stops as soon as
> I change the MTU on the FD from the default of 1500 to a higher value
> (9000).  The MTU on the DIR and SD can be set to 9000 without visible
> impact on bacula.

Are all of the machines on the same subnet?  If not, you'll also have to check
the MTU on all of your local router interfaces.



Re: [Bacula-users] amazing backup size

2011-12-18 Thread Pablo Marques
From FAQ Bacula Wiki:

Why is my backup larger than my disk space usage?

The most common culprit of this is having one or more sparse files.

A sparse file is one with large blocks of nothing but zeroes that the operating 
system has optimized. Instead of actually storing disk blocks of nothing but 
zeroes, the filesystem simply contains a note that from point A to point B, the 
file is nothing but zeroes. Only blocks that contain non-zero data are 
allocated physical disk blocks.

The single biggest culprit seems to be the contents of /var/log/lastlog on 64 
bit systems. Since the lastlog file is extended to preallocate space for all 
UIDs, the switch from a 32 bit UID space to a 64 bit UID increases the full 
size to over 1TB.

Luckily the fix is simple: turn on sparse file support in the FileSet, and 
Bacula will detect sparse files and not store the zero-fill blocks.

Another possible cause is that your fileset accidentally includes some folders 
twice. Taken from the manual:

Take special care not to include a directory twice, or Bacula will back up 
the same files two times, wasting a lot of space on your archive device. 
Including a directory twice is very easy to do. For example:

Include {
  File = /
  File = /usr
  Options { compression=GZIP }
}

on a Unix system where /usr is a subdirectory (rather than a mounted 
filesystem) will cause /usr to be backed up twice.
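
A corrected version of that include is simply to list / once; since /usr is a 
subdirectory of /, it is already covered (a sketch, not taken from the manual):

Include {
  File = /
  Options { compression=GZIP }
}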


- Original Message -
From: "John Drescher" 
To: "Tilman Schmidt" 
Cc: "bacula-users" 
Sent: Sunday, December 18, 2011 10:21:58 AM
Subject: Re: [Bacula-users] amazing backup size

On Sun, Dec 18, 2011 at 9:09 AM, Tilman Schmidt
 wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> A newly installed CentOS 6 / Bacula 5 backup server is reporting
> this when backing itself up:
>
>  FD Bytes Written:       53,655,908,904 (53.65 GB)
>  SD Bytes Written:       53,664,006,577 (53.66 GB)
>  Last Volume Bytes:      53,705,852,928 (53.70 GB)
>
> Which is truly amazing because the actual amount of data stored
> on this system is far less than that:
>
> [r2d2@backup ~]$ LANG=C df -h
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/mapper/vg_backup-lv_root
>                       50G  2.2G   45G   5% /
> tmpfs                 1.9G     0  1.9G   0% /dev/shm
> /dev/sda1             485M   47M  413M  11% /boot
> /dev/mapper/vg_backup-lv_home
>                      174G  2.4G  163G   2% /home
>
> Any explanations for that discrepancy?
>

No, I have never seen anything like this. And this is from a bacula
user for 8+ years who has run tens of thousands of backups for a
department with 50+ machines and 30 to 50TB on tape. I suggest you
examine what bacula has saved in the backup. The simplest way is to
use the bat version browser or the new restore viewer if you have
bacula-5.2.X.

John

--
Learn Windows Azure Live!  Tuesday, Dec 13, 2011
Microsoft is holding a special Learn Windows Azure training event for 
developers. It will provide a great way to learn Windows Azure and what it 
provides. You can attend the event by watching it streamed LIVE online.  
Learn more at http://p.sf.net/sfu/ms-windowsazure
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users

--
Learn Windows Azure Live!  Tuesday, Dec 13, 2011
Microsoft is holding a special Learn Windows Azure training event for 
developers. It will provide a great way to learn Windows Azure and what it 
provides. You can attend the event by watching it streamed LIVE online.  
Learn more at http://p.sf.net/sfu/ms-windowsazure
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Noob user impressions and why I chose not to use Bacula

2011-12-05 Thread Pablo Marques

>> Thanks you Jesse for the feedback.
>> 
>> Regarding the disaster recovery, I have a suggestion for the bacula
>> team:
>> 
>> Why not make the director write the bacula config files and any
>> relevant bsr files at the beginning of each tape? The space wasted on
>> the tape to save these file would be very small.

>Well, the first problem here is that the Director would have to know how
>much space it was going to need for BSR files.  Of course, it could
>pre-allocate a fixed-size block of, say, 1MB for BSR files.

Agreed, 1 MB is basically nothing on a tape, and it can easily accommodate a 
huge number of bsr files.
My /etc/bacula is 88k uncompressed.  

>The second problem, it seems to me, is that this would break
>compatibility with all older Bacula volumes and installations.

Not necessarily, if you make this information at the beginning of the tape look 
like a "volume file".
It will be ignored by old directors because it will look the same as a failed 
job that took space on the tape.  


Pablo



Re: [Bacula-users] Noob user impressions and why I chose not to use Bacula

2011-12-05 Thread Pablo Marques
Thank you, Jesse, for the feedback.

Regarding the disaster recovery, I have a suggestion for the bacula team: 

Why not make the director write the bacula config files and any relevant bsr 
files at the beginning of each tape?
The space wasted on the tape to save these files would be very small.  

These files could be easily recovered with btape on a total disaster recovery 
situation when you only have tapes (and hopefully a tape drive) and nothing 
else.

How difficult could it be to modify bacula to automate the above when a tape 
is labeled and/or a recycled tape is written again, for example?

Pablo


- Original Message -
From: "Jesse Molina" 
To: bacula-users@lists.sourceforge.net
Sent: Sunday, December 4, 2011 9:39:06 PM
Subject: [Bacula-users] Noob user impressions and why I chose not to use
Bacula


Hi

Recently I was looking for new backup app for my small network.  Here's 
my story and why I decided that Bacula was not a good choice for me.

I am not a long time user, so my opinions and views may not be shared by 
others, but they are true nonetheless.  You can only be a noob once, 
and I hope this criticism can be constructive and helpful.

I am not looking for any response from any users.  You don't need to 
defend Bacula from some noob with a questionable opinion.

Note that some of my notes below were jotted down in haste during 
testing Bacula and some of my comments might be rather harsh or vulgar. 
  I'm not trying to troll or bash, and I hope these comments can be used 
to improve Bacula, and maybe I will get to use it again some day and it 
will be a better product.

I tried to throw all of these notes into a coherent whole, but I'm sure 
some of it will come off as out of order or not making any sense.

--

I've got two Linux servers, a Mac, Windows, and Linux desktop, and a 
number of remote hosts.  The main host fileset is about 750GB, and all 
of the other various clients roll into about 600GB.  I have a single DLT 
S4 (800GB native) drive direct attached to a primary server host via 
Ultra320 bus.

I am an experienced sysadmin and I've previously been the primary 
maintainer of one TSM system and assisted with multiple others.

In the end, I just wrote my own scripts using ssh, rsync and tar.  It's 
"good enough".

In summary, I could say I was attracted to the idea that Bacula used 
its own data archive format and had a database for its catalog, but 
was really turned off when I figured out that configuration was not also 
stored in a database, and how complicated actual restoration was.

I find the modular nature of Bacula's components very attractive, as it 
allows for scaling across multiple hosts for various functions. 
However, I don't understand the historical need to call the File Daemon 
anything other than what it is and what everyone seems to want call it: 
a Client.  Rename it to the Client Daemon and get over it.

While I appreciate the SQL DB used for historical data (Catalog), I find 
that primary configuration and some temporary data is scattered across 
various files.  It makes things complicated and difficult.  It will 
always be necessary to have some small configuration to point towards 
the other daemons and provide passwords, but using config files just 
makes management difficult.  SQL is not hard and Bacula isn't a simple 
program.  I would refer to Nagios vs Icinga as a good example of 
complicated text config systems gone bad.  When you have so much 
re-usable configuration data and complicated relationships, that's what 
DBs were made for.  Add a separate config DB and then all configuration 
should be done via bconsole, and a webUI.  Configuration could be dumped 
and loaded via bconsole, or maybe import/export commands à la 
Juniper/Cisco.

As for user interfaces, bconsole is good and I never really bothered 
with anything else.  The one huge complaint I have is that eject and 
other basic loader controls are absent and should be added.  I got 
really tired of having to umount, ctrl-z, and then call mt just to eject 
my tape during testing.  I realize that this is more complicated for 
autoloader libs, but allow the user to configure a backend-script for 
the command and there you go.  This can be done.
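
For what it's worth, the manual workaround being described looks roughly like 
this; the storage name and tape device below are hypothetical, and unmount is 
the bconsole command that releases the drive:

==
# In bconsole: release the drive so the OS can access it
unmount storage=DLT-Drive

# Then, from a shell, eject the tape with mt
mt -f /dev/nst0 offline
==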



Documentation sucks.  It's just not a priority for this project and it 
shows.  Tons of typos, the formatting and layout is horrible, and for 
the English language I get the impression there are a lot of 
translation-isms.  It was like reading a paper written by five different 
college students where each one wrote a different portion with a 
different writing style.

In a number of cases, two sentences with the same or very similar 
meaning appear in the same paragraph, effectively saying the same 
thing twice or more.

For example:
"Bacula can automatically label Volumes if instructed to do so, but this 
feature is not yet fully implemented. "
Really, WTF?  If it's not implemented, don't document it.

http://www.b

[Bacula-users] VirtualFull and Base Jobs

2011-10-22 Thread Pablo Marques
question: 

I have a Base job for a server. 
Full backups based on the Base Job run successfully. 
Incrementals run also fine. 

When I run a VirtualFull job the end result is not a Full backup but an 
Incremental one, based (I guess) on the last Incrementals. 
If I try to restore from the VirtualFull I only see a few files. 

The next Incremental backup run after the VirtualFull will back up all files 
except the ones present in the latest Incrementals. 

Is VirtualFull not compatible with Base Jobs? 

Pablo 


Re: [Bacula-users] Base jobs and incremental backups

2011-05-24 Thread Pablo Marques
You are right about the documentation. It should work the way you describe it.

When you do a FULL after the BASE, it should "examine" all the files and only 
back up those that have changed since the BASE (which should be very few or 
none).

Can you show us your configuration files so we can verify you are not missing 
something?
As I understand it, the accurate backup requires the director to send all the 
file definitions of a previous backup to the client, for the client to decide 
what has changed.
That may take a while if the bandwidth is not there.

These are my job definitions config file for one of my servers:
Job {
Name = "server-base-fd-data"
JobDefs = "jobbaculadefs"
Client = server-fd
FileSet = "server-set"
Pool = Yearly
Level = Base
}
Job {
Name = "server-fd-data"
JobDefs = "jobbaculadefs"
Client = server-fd
Schedule = "IncWeeklyCycle"
FileSet = "server-set"
Client Run Before Job = "C:/windows/system32/ntbackup.exe backup systemstate /J 
\"vmbackup Backup\" /F \"C:\\SystemStateBackup.bkf\" /R:yes /L:f /SNAP:on"
Base = server-base-fd-data
Accurate = yes
}

What I do on my setup is I first run the BASE manually, then I run a manual 
FULL, then I just let the regular client schedule run the usual INCREMENTALS 
and DIFFERENTIALS (and FULLS).
I guess I am not using a "generic" BASE backup but a specific one per server.

Pablo

- Original Message -
From: "TipMeAbout" 
To: bacula-users@lists.sourceforge.net
Cc: "Pablo Marques" 
Sent: Tuesday, May 24, 2011 2:41:07 AM
Subject: Re: [Bacula-users] Base jobs and incremental backups



Le mardi 24 mai 2011 j'ai reçu le message suivant:



> Maybe laptop-lan and laptop-wlan need to be the same client (same name). 

>

> Just change the ip address of the client configuration of laptop-lan to be

> the wlan ip and run a FULL backup on it.

>

> Let me know.

>

> Pablo



Hello,



That would be strange if the 2 needs to be the same name or IP address as in 
the doc it is said that it can use backups from other clients as "template" of 
backup:



-

extract of the doc:



A new Job directive Base=Jobx, Joby... permits to specify the list of files 
that will be used during Full backup as base. Job { Name = BackupLinux Level= 
Base ... } Job { Name = BackupZog4 Base = BackupZog4, BackupLinux Accurate = 
yes ... }

In this example, the job BackupZog4 will use the most recent version of all 
files contained in BackupZog4 and BackupLinux jobs. Base jobs should have run 
with level=Base to be used.

-



Changing the IP as you suggest would force me to change the server's IP too 
(WiFi and LAN are not on the same network), so I would first like to 
investigate further without modifying IPs.

I will make some quick tests with small directories.



Could you please post an example of your base job and the related job?



Thanks !



JC



--

http://www.tipmeabout.org

http://www.tipmeabout.com

--
vRanger cuts backup time in half-while increasing security.
With the market-leading solution for virtual backup and recovery, 
you get blazing-fast, flexible, and affordable data protection.
Download your free trial now. 
http://p.sf.net/sfu/quest-d2dcopy1
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Base jobs and incremental backups

2011-05-23 Thread Pablo Marques

Maybe laptop-lan and laptop-wlan need to be the same client (same name).

Just change the ip address of the client configuration of laptop-lan to be the 
wlan ip and run a FULL backup on it.

Let me know.

Pablo
- Original Message -
From: "TipMeAbout" 
To: bacula-users@lists.sourceforge.net
Cc: "Pablo Marques" 
Sent: Monday, May 23, 2011 5:40:17 PM
Subject: Re: [Bacula-users] Base jobs and incremental backups

On Wednesday 18 May 2011 I received the following message:

> > Is there any information in the log file that proves the first full
> > after a base job is effectively using the base job?
>> [...]


Hello,

I'm back with news, but not good news.

I did a base job backup of the laptop system over the LAN connection (job 
name: laptop-lan_system.basejob). Below is the definition of this job:

Job {
  Name = "laptop-lan_system.basejob"
  Enabled = "yes"
  Type = "Backup"
  Level = "Base"              # can be defined in the schedule resource
  Client = "laptop-lan"
  FileSet = "system"
  Messages = "Standard"
  Pool = "full"
  Full Backup Pool = "full"
  Differential Backup Pool = "diff"
  Incremental Backup Pool = "incr"
  Schedule = "full_1st_of_month"
  Storage = "VTD_08"
  Priority = 9
}


Then I ran a full backup of this client over WiFi (job name: 
laptop-wlan_system), specifying that this job should make use of the previous 
base job. Below is the definition of this job:

Job {
  Name = "laptop-wlan_system"
  Enabled = "yes"
  Type = "backup"
  Base = "laptop-lan_system.basejob"
  Level = "incremental"       # can be defined in the schedule resource
  Client = "laptop-wlan"
  FileSet = "system"
  Messages = "Standard"
  Pool = "incr"
  Full Backup Pool = "full"
  Differential Backup Pool = "diff"
  Incremental Backup Pool = "incr"
  Schedule = "incr_everyday_but_1st"
  Storage = "VTD_04"
}

As I launched the full manually, I changed the level and pool of the 
laptop-wlan_system job to Full on the command line.


Unfortunately, the backup took more than 5 hours, and judging from the report 
at the end it did not take the base job into consideration, as no line 
indicating the percentage of files used from the base job appears:

  Build OS:   x86_64-redhat-linux-gnu redhat 
  JobId:  1235
  Job:laptop-wlan_system.2011-05-23_08.46.24_03
  Backup Level:   Full
  Client: "laptop-wlan" 5.0.3 (04Aug10) x86_64-redhat-linux-
gnu,redhat,
  FileSet:"system" 2011-03-18 08:47:54
  Pool:   "full" (From Job FullPool override)
  Catalog:"bacula" (From Client resource)
  Storage:"VTD_04" (From Job resource)
  Scheduled time: 23-mai-2011 08:46:06
  Start time: 23-mai-2011 08:46:26
  End time:   23-mai-2011 14:05:18
  Elapsed time:   5 hours 18 mins 52 secs
  Priority:   10
  FD Files Written:   131,130
  SD Files Written:   131,130
  FD Bytes Written:   7,233,904,662 (7.233 GB)
  SD Bytes Written:   7,252,157,429 (7.252 GB)
  Rate:   378.1 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s): full_vol-027|full_vol-028|full_vol-029
  Volume Session Id:  19
  Volume Session Time:1306014285
  Last Volume Bytes:  574,149,829 (574.1 MB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK



So what's wrong with my configuration?

Thanks for your help !

JC


-- 
http://www.tipmeabout.org
http://www.tipmeabout.com



Re: [Bacula-users] Base jobs and incremental backups

2011-05-18 Thread Pablo Marques
> Is there any information in the log file that proves the first full after a
> base job is effectively using the base job?
The Bacula report after the FULL gives you very useful information, including 
the BASE backup files used. 

This is a sample report after a FULL backup, notice the line Base files/Used 
files: 

JobId: 371 
Job: server-fd-data.2011-05-09_16.58.33_17 
Backup Level: Full (upgraded from Incremental) 
Client: "server-fd" 5.0.3 (04Aug10) Linux,Cross-compile,Win32 
FileSet: "server-set" 2011-05-09 14:29:09 
Pool: "Yearly" (From User input) 
Catalog: "MyCatalog" (From Client resource) 
Storage: "FileStorage" (From Job resource) 
Scheduled time: 09-May-2011 16:58:24 
Start time: 09-May-2011 17:02:10 
End time: 09-May-2011 18:01:16 
Elapsed time: 59 mins 6 secs 
Priority: 10 
FD Files Written: 39,344 
SD Files Written: 39,344 
FD Bytes Written: 8,600,464,982 (8.600 GB) 
SD Bytes Written: 8,606,510,471 (8.606 GB) 
Rate: 2425.4 KB/s 
Software Compression: 39.7 % 
Base files/Used files: 39336/39114 (99.44%) 
VSS: yes 
Encryption: no 
Accurate: yes 
Volume name(s): Tape-Year-0001 
Volume Session Id: 358 
Volume Session Time: 1302812565 
Last Volume Bytes: 252,270,109,076 (252.2 GB) 
Non-fatal FD errors: 0 
SD Errors: 0 
FD termination status: OK 
SD termination status: OK 
Termination: Backup OK 

In your case, depending on how much data changes on the laptop, it may be OK 
to just leave it on the WiFi for all backups (except for the BASE). 

Pablo 
- Original Message -
From: "- -"  
To: "Pablo Marques"  
Cc: bacula-users@lists.sourceforge.net 
Sent: Wednesday, May 18, 2011 9:10:55 AM 
Subject: Re: [Bacula-users] Base jobs and incremental backups 


2011/5/18 Pablo Marques < pmarq...@miamilinux.net > 



When you do a Base backup, you need to do a FULL immediately after, 
because the BASE backup is a "special" backup and you cannot restore from it 
alone. 

From then on you can do Incrementals, Fulls or Differentials. 

All the FULL backups that you run after that first FULL are going to be 
very small, so you should only need to have the laptop on Gigabit for your 
first BASE backup. 

Hope this helps. 

Pablo 








Hello ! 

That's an interesting point! I did not understand from reading the 
documentation that a full is necessary after a base job; I thought a base 
replaced a full. For me, the interesting point of a base job was that you 
could do a base job of a "template" server and then back up n servers 
directly without having to do n full backups. Writing that, I now understand 
the concept of base jobs better: all n fulls would contain pointers to the 
base job. Is there any information in the log file that proves the first full 
after a base job is effectively using the base job? 
Something like "running 'client-bkp job' based on 'base-job'"? 

So for my problem, I should try a floating IP across the two interfaces, GigE 
and WiFi, and do the backup over that interface: if the LAN cable is 
connected, the backup goes through the LAN; if only WiFi is available, it is 
used instead. I think I can do that with a small script: eth0 would be 
"client-lan", wlan0 "client-wlan", and the backup interface would be 
"client-bkp", pointing to "eth0:0" if the cable is connected or to "wlan0:0" 
otherwise. 
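A minimal sketch of such a script, assuming the interface names above (eth0 for LAN, wlan0 for WiFi), reading the LAN carrier state from sysfs; the alias address in the comment is a made-up example:

```shell
# Sketch: choose the interface that should carry the "client-bkp" alias.
pick_iface() {
  # $1 = contents of /sys/class/net/eth0/carrier ("1" when the cable is in)
  if [ "$1" = "1" ]; then
    echo eth0
  else
    echo wlan0
  fi
}

# In real use (root required; the address is an example):
#   carrier=$(cat /sys/class/net/eth0/carrier 2>/dev/null)
#   iface=$(pick_iface "$carrier")
#   ip addr add 192.168.50.10/24 dev "$iface" label "$iface:0"
```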

Thanks for your help ! 

JC 






- Original Message - 
From: "TipMeAbout" < tipmeab...@gmail.com > 
To: bacula-users@lists.sourceforge.net 
Sent: Tuesday, May 17, 2011 5:09:10 PM 
Subject: Re: [Bacula-users] Base jobs and incremental backups 

> On Mon, May 16, 2011 at 11:07 PM, TipMeAbout < tipmeab...@gmail.com > wrote: 
> > Hello all, 
> > 
> > I have used Bacula for some time now, and I am now experimenting with 
> > base job backups. 
> > I have to solve a problem and I need help: I have to backup a laptop. 
> > 
> > This laptop is most of the time connected by wifi. As it contains quite 
> > large 
> > data, I have decided to do a full the 1st of each month and an 
> > incremental the 
> > other days. 
> > To let the full run quickly, I decided that the laptop will be connected 
> > by its LAN 1Gb connection the 1st and by wifi the rest of the month. So 
> > I have 2 
> > client instances configured: "client-lan" and "client-wlan", one for each 
> > type 
> > of backup, each with its own IP address. But when I start an incremental 
> > for 
> > "client-wlan", Bacula tells me it does not find a valid full, so it 
> > starts a 
> > full through the wifi connection. Too long !! 
> > I have decided to do a base job the 1st of the month by LAN for instance 
> > "client-lan" and then each incremental for "client-wlan"

Re: [Bacula-users] Base jobs and incremental backups

2011-05-17 Thread Pablo Marques

When you do a Base backup, you need to do a FULL immediately after, 
because the BASE backup is a "special" backup and you cannot restore only from 
it.

>From then on you can do Incrementals, Fulls or Differentials.

All the FULL backups that you will run after that fist FULL are going to be 
very small,
so you should only need to have the laptop on Gig on your first BASE backup. 

Hope this helps.

Pablo

- Original Message -
From: "TipMeAbout" 
To: bacula-users@lists.sourceforge.net
Sent: Tuesday, May 17, 2011 5:09:10 PM
Subject: Re: [Bacula-users] Base jobs and incremental backups

> On Mon, May 16, 2011 at 11:07 PM, TipMeAbout  wrote:
> > Hello all,
> > 
> > I have used Bacula for some time now, and I am now experimenting with
> > base job backups.
> > I have to solve a problem and I need help: I have to backup a laptop.
> > 
> > This laptop is most of the time connected by wifi. As it contains quite
> > large
> > data, I have decided to do a full the 1st of each month and an
> > incremental the
> > other days.
> > To let the full run quickly, I decided that the laptop will be connected
> > by its LAN 1Gb connection the 1st and by wifi the rest of the month. So
> > I have 2
> > client instances configured: "client-lan" and "client-wlan", one for each
> > type
> > of backup, each with its own IP address. But when I start an incremental
> > for
> > "client-wlan", Bacula tells me it does not find a valid full, so it
> > starts a
> > full through the wifi connection. Too long !!
> > I have decided to do a base job the 1st of the month by LAN for instance
> > "client-lan" and then each incremental for "client-wlan" would be based
> > on this base job. That does not work either, as it still starts a full
> > backup in wifi mode instead of an incremental. I have read that a base
> > job is like a full and lets full backups base themselves on it: so I
> > changed my incremental backup to a full, hoping it would save only some
> > data, for instance "client-wlan". But after a while running, I have the
> > impression Bacula does a complete new full without taking the base job
> > backup into consideration.
> > 
> > So my questions are simple: can base jobs be used with incremental
> > backups to achieve what I would like, and if so, how?
> > 
> > Thanks in advance for your help !
> > 
> > 
> > JC
> > 
> > 
> > --
> > http://www.tipmeabout.org
> 
> Rather than trying to back up the laptop with Bacula, I would use RSYNC on
> the laptop and then back up the rsync "mirror" of the laptop; that way you
> don't rely on the laptop being connected, and once you have done the initial
> rsync the incrementals are much easier to manage over your WiFi bandwidth.
> You could also trigger your rsync job to run when the interface comes up,
> and if you did this over SSH via an Internet-resolvable f.q.d.n then you
> can back up from "anywhere".
> 
> Although Bacula is fantastic for backing up, sometimes other "tools" can
> make the overall process better


Hello,
 
Thanks for your answer !
Of course I could use other tools, but the challenge is to do this with 
Bacula, and to show how to use a base job backup as the base for incremental 
backups.

With RSYNC, I would have to provide free space on the Bacula server to act as 
the mirror, and then more space to back up that mirror with Bacula (I back up 
to 4 GiB disk files).
Moreover, rsync would mirror data I don't care about (iso, mpeg, jpg), which 
is space- and bandwidth-consuming over wifi; with Bacula it is excluded from 
the start. And maintaining two exclusion lists, one per tool, is not the most 
advisable approach, I think.
 
If someone else has a proposal, I'm still open to answers.
I will think about whether a solution based on virtual IPs can be set up. If 
I find something, I will of course update this thread.
 
See you all!

JC


-- 
http://www.tipmeabout.org

--
What Every C/C++ and Fortran developer Should Know!
Read this article and learn how Intel has extended the reach of its 
next-generation tools to help Windows* and Linux* C/C++ and Fortran 
developers boost performance applications - including clusters. 
http://p.sf.net/sfu/intel-dev2devmay
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



[Bacula-users] encryption keys on backups

2011-04-15 Thread Pablo Marques

Does anybody know what happens if the generated encryption key pair expires? 
Will I be able to restore encrypted backups? 

openssl genrsa -out master.key 2048
openssl req -new -key master.key -x509 -out master.cert -days 1

openssl genrsa -out fd-test.key 2048
openssl req -new -key fd-test.key -x509 -out fd-test.cert -days 1

These keys will expire tomorrow.

A client will have this configuration, and I am backing it up today.

FileDaemon {
  Name = test-fd
  FDport = 9102
  WorkingDirectory = /var/bacula/working
  Pid Directory = /var/run
  Maximum Concurrent Jobs = 4

  PKI Signatures = Yes                       # Enable Data Signing
  PKI Encryption = Yes                       # Enable Data Encryption
  PKI Keypair = "/etc/bacula/test-fd.pem"    # Public and Private Keys
  PKI Master Key = "/etc/bacula/master.cert" # ONLY the Public Key
}

Will I be able to restore this backup a few days from today? 

Pablo
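This does not answer how Bacula itself behaves, but a quick way to check whether a given certificate has already passed its notAfter date is `openssl x509 -checkend`. A hedged sketch (the example path in the comment is the FD cert from the config above):

```shell
# Sketch: return success (0) if the PEM certificate at $1 has expired.
# `openssl x509 -checkend 0` exits non-zero once notAfter is in the past.
cert_expired() {
  if openssl x509 -checkend 0 -noout -in "$1" >/dev/null 2>&1; then
    return 1   # still valid
  else
    return 0   # expired (or unreadable)
  fi
}

# Example, using the FD cert path from the config above:
#   cert_expired /etc/bacula/test-fd.cert && echo "cert has expired"
```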

--
Benefiting from Server Virtualization: Beyond Initial Workload 
Consolidation -- Increasing the use of server virtualization is a top
priority.Virtualization can reduce costs, simplify management, and improve 
application availability and disaster protection. Learn more about boosting 
the value of server virtualization. http://p.sf.net/sfu/vmware-sfdev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Pool per client

2011-04-13 Thread Pablo Marques
Gavin:

I tried signalling bacula-sd, but it does not seem to work: it still sees the 
same configuration until it is completely restarted.
I will try creating a bacula-sd instance per client and see...


Pablo
- Original Message -
From: "Gavin McCullagh" 
To: "Pablo Marques" 
Cc: bacula-users@lists.sourceforge.net
Sent: Wednesday, April 13, 2011 4:21:23 PM
Subject: Re: [Bacula-users] Pool per client

Hi,

On Wed, 13 Apr 2011, Pablo Marques wrote:

> I guess I could modify bacula-sd an add/remove a file device per client
> as needed. I am not sure if I can "reload" bacula-sd.conf without
> interrupting running backups.

My understanding is that you need a restart, which is likely to kill any
running backups.

> When I add a client I have a "template" client definition with all per client 
> definitions that I need:
> I replace $CLIENT_NAME $IP_ADDRESS $PORT and generate a new file and then I 
> do a "reload" on bconsole, and the client is ready to go. 
> The clients, in my application, decide the backup schedule, and they run it 
> from their bconsole client.
> Each client can only run or restore its own backups. 

I do something similar actually, but it's not quite as big or dynamic so I
manually modify the bacula-sd and don't generally have to worry about
backups that are running at the time I restart the bacula-sd.

This might seem (or be) crazy, but in principle you could run a separate
bacula-sd instance for each client, each on its own ip:port.  That would
avoid the issue of restarting bacula-sd.

Of course it might be possible to signal bacula-sd to re-read its config,
which would be much simpler :-)

Gavin


-- 
Gavin McCullagh
Senior System Administrator
IT Services
Griffith College 
South Circular Road
Dublin 8
Ireland
Tel: +353 1 4163365
http://www.gcd.ie
http://www.gcd.ie/brochure.pdf
http://www.gcd.ie/opendays
http://www.gcd.ie/ebrochure





Re: [Bacula-users] Pool per client

2011-04-13 Thread Pablo Marques

This setup would work, but as it turns out, when you have Base jobs, you cannot 
do migrations.

Fatal error: sql_create.c:1147 Can't Copy/Migrate job using BaseJob

Pablo
- Original Message -
From: "Pablo Marques" 
To: "Martin Simmons" 
Cc: bacula-users@lists.sourceforge.net
Sent: Wednesday, April 13, 2011 3:03:44 PM
Subject: Re: [Bacula-users] Pool per client

Martin,

This hack looks very promising.
I will test it and let you know.

Pablo

- Original Message -
From: "Martin Simmons" 
To: bacula-users@lists.sourceforge.net
Sent: Wednesday, April 13, 2011 11:24:24 AM
Subject: Re: [Bacula-users] Pool per client

>>>>> On Tue, 12 Apr 2011 19:04:40 -0400 (EDT), Pablo Marques said:
> 
> I enabled spooling, but it seems like Bacula requires a tape from the
> client pool to be mounted on a drive before client spooling can begin.
> Can this be avoided? 

AFAIK, no.


> A possible solution would be to do all backups on a special pool and after
> they are done migrate later each client job to each client pool. 
> But I cannot find a way to modify the "Next Pool" dynamically. It is a fixed
> setting on the Pool definition.
> 
> Does anybody have suggestions on how to accomplish this? 

There is a hack:

http://thread.gmane.org/gmane.comp.sysutils.backup.bacula.devel/14084

__Martin

--
Forrester Wave Report - Recovery time is now measured in hours and minutes
not days. Key insights are discussed in the 2010 Forrester Wave Report as
part of an in-depth evaluation of disaster recovery service providers.
Forrester found the best-in-class provider in terms of services and vision.
Read this report now!  http://p.sf.net/sfu/ibm-webcastpromo
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users




Re: [Bacula-users] Pool per client

2011-04-13 Thread Pablo Marques
Martin,

This hack looks very promising.
I will test it and let you know.

Pablo

- Original Message -
From: "Martin Simmons" 
To: bacula-users@lists.sourceforge.net
Sent: Wednesday, April 13, 2011 11:24:24 AM
Subject: Re: [Bacula-users] Pool per client

>>>>> On Tue, 12 Apr 2011 19:04:40 -0400 (EDT), Pablo Marques said:
> 
> I enabled spooling, but it seems like Bacula requires a tape from the
> client pool to be mounted on a drive before client spooling can begin.
> Can this be avoided? 

AFAIK, no.


> A possible solution would be to do all backups on a special pool and after
> they are done migrate later each client job to each client pool. 
> But I cannot find a way to modify the "Next Pool" dynamically. It is a fixed
> setting on the Pool definition.
> 
> Does anybody have suggestions on how to accomplish this? 

There is a hack:

http://thread.gmane.org/gmane.comp.sysutils.backup.bacula.devel/14084

__Martin




Re: [Bacula-users] Pool per client

2011-04-13 Thread Pablo Marques
Thanks Greg,

The key question in my application is whether I can modify bacula-sd.conf 
dynamically, without a bacula-sd restart every time I add or remove a client.

A bacula-sd restart would kill running jobs.

Pablo

Kurzawa, Greg wrote on Wednesday, April 13, 2011
> Pablo,
>
> If you’re backing up to disk, you can have as many Jobs running
> simultaneously as you want/need.  The trick here is to define not only a
> unique Pool resource for each Client, but unique Storage, Device and Media
> Type resources as well.  Each Client will have its own resources, so there
> will be no contention.  Following are the relevant pieces of my
> configuration, snipped for readability.
>
> I admit that when I was setting this up it seemed a bit bulky, but it
> works.  I back up many slow Clients to disk; their jobs all begin and run
> simultaneously.
>
> Of course, if you can’t back up to disk first, this is useless to you.
>
>  CLIENTS 
>
> Client {
>   Name = DORCAS
>   Address = DORCAS.URTH.COM
> }
>
> Client {
>   Name = SEVERIAN
>   Address = SEVERIAN.URTH.COM
> }
>
>  JOBS 
>
> Job {
>   name = "DAILY:severian"
>   client = "SEVERIAN"
>   pool = "SEVERIAN"
> }
>
> Job {
>   name = "DAILY:dorcas"
>   client = "DORCAS"
>   pool = "DORCAS"
> }
>
>  POOLS 
>
> Pool {
>   Name = "SEVERIAN"
>   Pool Type = Backup
>   Storage = "SAN:severian"
>   next pool = "LTO4"
>   LabelFormat = "severian-"
> }
>
> Pool {
>   Name = "DORCAS"
>   Pool Type = Backup
>   Storage = "SAN:dorcas"
>   next pool = "LTO4"
>   LabelFormat = "dorcas-"
> }
>
>  STORAGE 
>
> Storage {
>   Name = "SAN:severian"
>   Device = "SAN:severian"
>   Media Type = "FILE:severian"
> }
>
> Storage {
>   Name = "SAN:dorcas"
>   Device = "SAN:dorcas"
>   Media Type = "FILE:dorcas"
> }
>
> And in bacula-sd.conf:
>
> Device {
>   Name = "SAN:severian"
>   Media Type = "FILE:severian"
>   Archive Device = "/dp-SAN/severian"
> }
>
> Device {
>   Name = "SAN:dorcas"
>   Media Type = "FILE:dorcas"
>   Archive Device = "/dp-SAN/dorcas"
> }
>
> Greg
>
> From: Pablo Marques [mailto:pmarq...@miamilinux.net]
> Sent: Wednesday, April 13, 2011 11:01 AM
> To: Kurzawa, Greg
> Cc: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Pool per client
>
> Thanks Greg,
>
> But I would still have the problem that I need a device tied up backing up
> each client. The problem I am facing is that I need to backup lots of slow
> clients, and I need to come up with something so I can back them up all at
> the _same_ time on one or maybe a few devices, and still have a Pool per
> client.
>
> Pablo
>
> From: "Greg Kurzawa" 
> To: "Pablo Marques" 
> Cc: bacula-users@lists.sourceforge.net
> Sent: Wednesday, April 13, 2011 9:29:31 AM
> Subject: RE: [Bacula-users] Pool per client
>
> Hi Pablo,
>
> If you have enough disk space handy, you could send each Client’s data to
> its own disk Pool with its own Next Pool specification.  Each Client’s
> data would be in its own Pool on disk, then move to its own Pool on tape.
> This is exactly what I’ve done at my site, except the disk Pools all point
> to the same tape Pool.
>
> Greg
>
> From: Pablo Marques [mailto:pmarq...@miamilinux.net]
> Sent: Wednesday, April 13, 2011 7:28 AM
> To: Randy Katz
> Cc: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Pool per client
>
> Thanks Randy,
>
> Unfortunately Maximum Concurrent Jobs won't cut it because I need a
> different tape per client.  Still I set it to 1000. When spooling is
> enabled, bacula still wants a tape from the client pool on the drive
> before the spooling starts.
>
> I could create a virtual library with MHVTL and use sev

Re: [Bacula-users] Pool per client

2011-04-13 Thread Pablo Marques

>> But I would still have the problem that I need a device tied up backing
>> up each client.  The problem I am facing is that I need to backup lots of
>> slow clients, and I need to come up with something so I can back them up
>> all at the _same_ time on one or maybe a few devices, and still have a
>> Pool per client.

>I'm not clear if you're trying to avoid lots of physical devices or lots of
>bacula storage device definitions.  You could create one Device {} entry
>per client in the bacula-sd.conf.  These each correspond to a different
>directory on some filesystem.   You then run each backup to its own file
>Device -- these can all happen concurrently.

>You should then be able to migrate each one in turn to tape.

>Or maybe I've missed something?

I would try to avoid changing the device definitions in bacula-sd.conf every 
time I add or delete a client, as this could happen very often.

If I back up, say, 500 clients at night, the ideal would be to back them all 
up to the same device at the same time. If one client stalls or loses its 
connection, the others can continue without problems.
If I tie up a device per client and one client has problems, it could render 
the device unusable until that client finishes or the job times out.

I guess I could modify bacula-sd an add/remove a file device per client as 
needed. I am not sure if I can "reload" bacula-sd.conf without interrupting 
running backups.

When I add a client I have a "template" client definition with all per client 
definitions that I need:
I replace $CLIENT_NAME $IP_ADDRESS $PORT and generate a new file and then I do 
a "reload" on bconsole, and the client is ready to go. 
The clients, in my application, decide the backup schedule, and they run it 
from their bconsole client.
Each client can only run or restore its own backups. 

=
 # We need $CLIENT_NAME $IP_ADDRESS $PORT
Client {
  Name = $CLIENT_NAME-fd
  Address = $IP_ADDRESS
  FDPort = $PORT
  Catalog = MyCatalog
  Password = "xccxcc"  # password for FileDaemon
  File Retention = 5 year
  Job Retention = 15 years
  AutoPrune = yes # Prune expired Jobs/Files
}
Console {
  Name = $CLIENT_NAME
  Password = "$CLIENT_NAMEpassword"
  JobACL = "$CLIENT_NAME-fd-data","$CLIENT_NAME-Restore"
  ClientACL = $CLIENT_NAME-fd
  StorageACL = CHANGER
  ScheduleACL = *all*
  PoolACL = $CLIENT_NAME
  FileSetACL = "$CLIENT_NAME-set"
  CatalogACL = MyCatalog
  WhereACL = *all*
  CommandACL = run, restore
}

Job {
  Name = "$CLIENT_NAME-base-fd-data"
  JobDefs = "jobbaculadefs"
  Client = $CLIENT_NAME-fd
  FileSet = "$CLIENT_NAME-set"
  Pool = $CLIENT_NAME
  Level = Base
  SpoolData = yes
  Maximum Concurrent Jobs = 1000
  Max Run Sched Time = 86400
}

Job {
  Name = "$CLIENT_NAME-fd-data"
  JobDefs = "jobbaculadefs"
  Client = $CLIENT_NAME-fd
  FileSet = "$CLIENT_NAME-set"
  Pool = $CLIENT_NAME
  Base = $CLIENT_NAME-base-fd-data
  Accurate = yes
  SpoolData = yes
  Maximum Concurrent Jobs = 1000
  Max Run Sched Time = 86400
}

Job {
  Name = "$CLIENT_NAME-restore"
  Type = Restore
  Client = $CLIENT_NAME-fd
  FileSet="$CLIENT_NAME-set"
  Storage = CHANGER
  Pool = $CLIENT_NAME
  Messages = Standard
  Where = /
}

Pool {
  Name = $CLIENT_NAME
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 15 years
  Recycle Oldest Volume = yes
  Recycle Pool = Scratch
}
# We need $CLIENT_NAME 
FileSet {
  Name = "$CLIENT_NAME-set"
  Include {
    Options {
      signature = MD5
      compression = GZIP
      Sparse = yes
    }
    @/etc/bacula/clients-configs/$CLIENT_NAME-filelist
  }
}
===
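The substitution step described above can be sketched with sed. The placeholder names are the ones used in the template; the file paths and values in the usage comment are hypothetical examples.

```shell
# Sketch: expand a client template by substituting the three placeholders.
# Values must not contain "/" (they would break the sed expressions).
new_client_conf() {
  # $1 = template file, $2 = client name, $3 = ip address, $4 = fd port
  sed -e 's/\$CLIENT_NAME/'"$2"'/g' \
      -e 's/\$IP_ADDRESS/'"$3"'/g' \
      -e 's/\$PORT/'"$4"'/g' "$1"
}

# Usage (paths are examples):
#   new_client_conf client.template web01 10.0.0.5 9102 \
#       > /etc/bacula/clients-configs/web01.conf
#   echo reload | bconsole
```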

Hope this clarifies my setup. 

Pablo


Re: [Bacula-users] Pool per client

2011-04-13 Thread Pablo Marques
Thanks Greg,

But I would still have the problem that I need a device tied up backing up 
each client.
The problem I am facing is that I need to back up lots of slow clients, and I 
need to come up with something so I can back them all up at the _same_ time 
on one or maybe a few devices, and still have a Pool per client.

Pablo
- Original Message -
From: "Greg Kurzawa" 
To: "Pablo Marques" 
Cc: bacula-users@lists.sourceforge.net
Sent: Wednesday, April 13, 2011 9:29:31 AM
Subject: RE: [Bacula-users] Pool per client




Hi Pablo,



If you have enough disk space handy, you could send each Client’s data to its 
own disk Pool with its own Next Pool specification. Each Client’s data would be 
in its own Pool on disk, then move to its own Pool on tape. This is exactly 
what I’ve done at my site, except the disk Pools all point to the same tape 
Pool.
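Greg's scheme can be sketched roughly as below. This is a hypothetical fragment
of the Director configuration, not Greg's actual config: the pool, storage, and
client names are placeholders, and retention values are illustrative.

```
# Per-client disk pool; "Next Pool" names the destination for Migration jobs.
Pool {
  Name = client1-disk
  Pool Type = Backup
  Storage = FileStorage      # disk-backed storage device (placeholder name)
  Next Pool = client1-tape   # where migrated jobs for this client end up
  Volume Retention = 15 years
}

# Matching per-client tape pool on the autochanger.
Pool {
  Name = client1-tape
  Pool Type = Backup
  Storage = CHANGER
  Volume Retention = 15 years
}
```

In Greg's variant, every client's disk pool would instead set the same
`Next Pool` (one shared tape pool), which trades per-client tape isolation for
fewer tapes.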



Greg





From: Pablo Marques [mailto:pmarq...@miamilinux.net]
Sent: Wednesday, April 13, 2011 7:28 AM
To: Randy Katz
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Pool per client




Thanks Randy,

Unfortunately Maximum Concurrent Jobs won't cut it, because I need a different 
tape per client. Still, I set it to 1000.
Even with spooling enabled, Bacula still wants a tape from the client pool on 
the drive before spooling starts.

I could create a virtual library with MHVTL and use several drives, or use a 
disk-changer emulator, but I am not sure how scalable those solutions are.

Pablo
- Original Message -


From: "Randy Katz" 
To: bacula-users@lists.sourceforge.net
Sent: Wednesday, April 13, 2011 6:08:54 AM
Subject: Re: [Bacula-users] Pool per client

On 4/12/2011 4:04 PM, Pablo Marques wrote:


I have a setup to back up lots of clients over slow links.
I want each client (or group of clients) backed up to a dedicated client 
pool, so client1 goes to pool client1, and so on.
That way I have better control of the space used; if a client goes away I can 
simply delete the tapes (or files) and get the space back immediately.
It also gives me better control over retention on a per-client basis.

The problem is that when I try to back up multiple clients at the same time, 
the storage process has to wait for each job to finish before it can move to 
the next, because it needs to change the tape (different client --> different 
pool). Some clients may take many hours to finish, forcing everybody else to 
wait.

I enabled spooling, but it seems Bacula requires a tape from the client pool 
to be mounted on a drive before client spooling can begin.
Can this be avoided?

A possible solution would be to do all backups to a special pool and, after 
they finish, migrate each client's jobs to its own client pool.
But I cannot find a way to modify the "Next Pool" dynamically; it is a fixed 
setting in the Pool definition.
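For context, a Bacula Migration job looks roughly like the sketch below (names
are placeholders, not from this thread). It illustrates why the idea stalls:
the destination pool is taken from the *source* pool's "Next Pool", which is
exactly the static setting that would need to vary per client.

```
# Sketch of a Migration job. Client and FileSet are syntactically required
# but are not used to select what gets migrated.
Job {
  Name = "migrate-staging"
  Type = Migrate
  Client = bacula-dir-fd
  FileSet = "dummy-set"
  Pool = Staging                 # source pool; its "Next Pool" is the target
  Selection Type = Volume
  Selection Pattern = ".*"       # migrate every volume in the source pool
  Messages = Standard
}
```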

Does anybody have suggestions on how to accomplish this?

Look into Maximum Concurrent Jobs in your Storage definition.
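For reference, the directive Randy mentions lives in the Storage resource of
the Director configuration. A minimal sketch, with placeholder names, address,
and password:

```
Storage {
  Name = CHANGER
  Address = sd.example.com       # placeholder Storage daemon address
  SDPort = 9103
  Password = "storage-password"  # placeholder
  Device = "Autochanger"
  Media Type = LTO-4
  Autochanger = yes
  Maximum Concurrent Jobs = 20   # allow this many jobs on the device at once
}
```

Note that concurrent jobs writing to the same device are interleaved on the
same volume, which is why this alone does not give a tape per client.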

Regards,
Randy



Re: [Bacula-users] Pool per client

2011-04-13 Thread Pablo Marques
Thanks Randy, 

Unfortunately Maximum Concurrent Jobs won't cut it, because I need a different 
tape per client. Still, I set it to 1000. 
Even with spooling enabled, Bacula still wants a tape from the client pool on 
the drive before spooling starts. 

I could create a virtual library with MHVTL and use several drives, or use a 
disk-changer emulator, but I am not sure how scalable those solutions are. 

Pablo 
- Original Message -
From: "Randy Katz"  
To: bacula-users@lists.sourceforge.net 
Sent: Wednesday, April 13, 2011 6:08:54 AM 
Subject: Re: [Bacula-users] Pool per client 

On 4/12/2011 4:04 PM, Pablo Marques wrote: 


I have a setup to back up lots of clients over slow links. 
I want each client (or group of clients) backed up to a dedicated client 
pool, so client1 goes to pool client1, and so on. 
That way I have better control of the space used; if a client goes away I can 
simply delete the tapes (or files) and get the space back immediately. 
It also gives me better control over retention on a per-client basis. 

The problem is that when I try to back up multiple clients at the same time, 
the storage process has to wait for each job to finish before it can move to 
the next, because it needs to change the tape (different client --> different 
pool). Some clients may take many hours to finish, forcing everybody else to 
wait. 

I enabled spooling, but it seems Bacula requires a tape from the client pool 
to be mounted on a drive before client spooling can begin. 
Can this be avoided? 

A possible solution would be to do all backups to a special pool and, after 
they finish, migrate each client's jobs to its own client pool. 
But I cannot find a way to modify the "Next Pool" dynamically; it is a fixed 
setting in the Pool definition. 

Does anybody have suggestions on how to accomplish this? 


Look into Maximum Concurrent Jobs in your Storage definition. 

Regards, 
Randy 



[Bacula-users] Pool per client

2011-04-12 Thread Pablo Marques
I have a setup to back up lots of clients over slow links. 
I want each client (or group of clients) backed up to a dedicated client 
pool, so client1 goes to pool client1, and so on. 
That way I have better control of the space used; if a client goes away I can 
simply delete the tapes (or files) and get the space back immediately. 
It also gives me better control over retention on a per-client basis. 

The problem is that when I try to back up multiple clients at the same time, 
the storage process has to wait for each job to finish before it can move to 
the next, because it needs to change the tape (different client --> different 
pool). Some clients may take many hours to finish, forcing everybody else to 
wait. 

I enabled spooling, but it seems Bacula requires a tape from the client pool 
to be mounted on a drive before client spooling can begin. 
Can this be avoided? 

A possible solution would be to do all backups to a special pool and, after 
they finish, migrate each client's jobs to its own client pool. 
But I cannot find a way to modify the "Next Pool" dynamically; it is a fixed 
setting in the Pool definition. 

Does anybody have suggestions on how to accomplish this? 

Thanks 

Pablo 

